In this example, we show how to write a multi-agent conversation program.

Two Agents Chat

First, create the LLM client, AgentContext, and AgentMemory shared by the two agents:

```python
import os

from dbgpt.agent import AgentContext, AgentMemory
from dbgpt.model.proxy import OpenAILLMClient

llm_client = OpenAILLMClient(
    model_alias="gpt-4o",
    api_base=os.getenv("OPENAI_API_BASE"),
    api_key=os.getenv("OPENAI_API_KEY"),
)
context: AgentContext = AgentContext(
    conv_id="test123",
    language="en",
    temperature=0.9,
    max_new_tokens=2048,
    max_chat_round=4,
)
# Create an agent memory, default memory is ShortTermMemory
agent_memory: AgentMemory = AgentMemory()

system_prompt_template = """\
You are a {{ role }}, {% if name %}named {{ name }}, {% endif %}your goal is {{ goal }}.
*** IMPORTANT REMINDER ***
{% if language == 'zh' %}\
Please answer in simplified Chinese.
{% else %}\
Please answer in English.
{% endif %}\
"""  # noqa

user_prompt_template = """\
{% if most_recent_memories %}\
Most recent observations:
{{ most_recent_memories }}
{% endif %}\
{% if question %}\
user: {{ question }}
{% endif %}
"""
```

In the code above, we set max_chat_round=4 in the AgentContext, which means the conversation will end after 4 rounds. We also set a system_prompt_template and a user_prompt_template for the two agents so that they can hold a simple conversation; these templates will be introduced in more detail later, in the profile module.
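The templates use Jinja-style placeholders such as {{ role }}, {{ name }}, and {{ goal }}, which DB-GPT fills in from the agent's profile and context when it builds the prompt. As a quick, standalone sanity check, you can render the system template yourself with the jinja2 package to preview the prompt an agent like Bob would receive. This is only a sketch outside of DB-GPT, and the goal value below is a hypothetical placeholder:

```python
# Standalone sketch: preview how system_prompt_template renders.
# The `goal` value is a hypothetical placeholder; inside DB-GPT it is
# filled from the agent's profile rather than passed by hand like this.
from jinja2 import Template

preview = Template(system_prompt_template).render(
    role="Comedians",
    name="Bob",
    goal="entertain the user with short jokes",  # hypothetical goal
    language="en",
)
print(preview)
# You are a Comedians, named Bob, your goal is entertain the user with short jokes.
# *** IMPORTANT REMINDER ***
# Please answer in English.
```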

Then create the two agents, Bob and Alice, and initiate a chat between them.

```python
import asyncio

from dbgpt.agent import ConversableAgent, ProfileConfig, LLMConfig, BlankAction


async def main():
    bob_profile = ProfileConfig(
        name="Bob",
        role="Comedians",
        system_prompt_template=system_prompt_template,
        user_prompt_template=user_prompt_template,
    )
    bob = (
        await ConversableAgent(profile=bob_profile)
        .bind(context)
        .bind(LLMConfig(llm_client=llm_client))
        .bind(agent_memory)
        .bind(BlankAction)
        .build()
    )
    alice_profile = ProfileConfig(
        name="Alice",
        role="Comedians",
        system_prompt_template=system_prompt_template,
        user_prompt_template=user_prompt_template,
    )
    alice = (
        await ConversableAgent(profile=alice_profile)
        .bind(context)
        .bind(LLMConfig(llm_client=llm_client))
        .bind(agent_memory)
        .bind(BlankAction)
        .build()
    )
    await bob.initiate_chat(alice, message="Tell me a joke.")


if __name__ == "__main__":
    asyncio.run(main())
```

Run the code, and you will observe the conversation between Bob and Alice:

```
--------------------------------------------------------------------------------
Bob (to Alice)-[]:
"Tell me a joke."
--------------------------------------------------------------------------------
un_stream ai response: Why don't scientists trust atoms?
Because they make up everything!
--------------------------------------------------------------------------------
Alice (to Bob)-[gpt-4o]:
"Why don't scientists trust atoms?\n\nBecause they make up everything!"
>>>>>>>>Alice Review info:
Pass(None)
>>>>>>>>Alice Action report:
execution succeeded,
Why don't scientists trust atoms?
Because they make up everything!
--------------------------------------------------------------------------------
un_stream ai response: That's a classic! You know, it's always good to have a few science jokes in your toolbox—they have the potential energy to make everyone laugh, and they rarely get a negative reaction!
--------------------------------------------------------------------------------
Bob (to Alice)-[gpt-4o]:
"That's a classic! You know, it's always good to have a few science jokes in your toolbox—they have the potential energy to make everyone laugh, and they rarely get a negative reaction!"
>>>>>>>>Bob Review info:
Pass(None)
>>>>>>>>Bob Action report:
execution succeeded,
That's a classic! You know, it's always good to have a few science jokes in your toolbox—they have the potential energy to make everyone laugh, and they rarely get a negative reaction!
--------------------------------------------------------------------------------
un_stream ai response: Absolutely, science jokes have a universal appeal! Here's another one for your collection:
Why did the biologist go to the beach?
Because they wanted to study the current events!
--------------------------------------------------------------------------------
Alice (to Bob)-[gpt-4o]:
"Absolutely, science jokes have a universal appeal! Here's another one for your collection:\n\nWhy did the biologist go to the beach?\n\nBecause they wanted to study the current events!"
>>>>>>>>Alice Review info:
Pass(None)
>>>>>>>>Alice Action report:
execution succeeded,
Absolutely, science jokes have a universal appeal! Here's another one for your collection:
Why did the biologist go to the beach?
Because they wanted to study the current events!
--------------------------------------------------------------------------------
un_stream ai response: Haha, that's a good one! You know, biologists at the beach must have some serious kelp issues, too. They just can’t help but dive into their work—whether it's in the lab or lounging in the sand!
--------------------------------------------------------------------------------
Bob (to Alice)-[gpt-4o]:
"Haha, that's a good one! You know, biologists at the beach must have some serious kelp issues, too. They just can’t help but dive into their work—whether it's in the lab or lounging in the sand!"
>>>>>>>>Bob Review info:
Pass(None)
>>>>>>>>Bob Action report:
execution succeeded,
Haha, that's a good one! You know, biologists at the beach must have some serious kelp issues, too. They just can’t help but dive into their work—whether it's in the lab or lounging in the sand!
--------------------------------------------------------------------------------
```
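The chat stops once the round limit set by max_chat_round in the AgentContext is reached. Conceptually, initiate_chat drives an alternating exchange between the two agents until that limit is hit. The following is only an illustrative sketch of that control flow, not DB-GPT's internal implementation; toy_reply is a hypothetical stand-in for an agent generating a reply with its LLM:

```python
# Illustrative sketch only (not DB-GPT internals): a round-limited,
# alternating two-agent exchange, mirroring the effect of max_chat_round=4.
import asyncio


async def toy_reply(agent_name: str, received: str) -> str:
    # Hypothetical stand-in for an agent producing a reply via its LLM.
    return f"{agent_name} responds to: {received!r}"


async def toy_chat(sender: str, receiver: str, message: str, max_chat_round: int = 4) -> None:
    for round_no in range(1, max_chat_round + 1):
        reply = await toy_reply(receiver, message)
        print(f"[round {round_no}] {receiver} -> {sender}: {reply}")
        # Swap roles so the agents take turns speaking.
        sender, receiver, message = receiver, sender, reply


asyncio.run(toy_chat("Bob", "Alice", "Tell me a joke."))
```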

Appendix