
Commit 3e33a2c

Authored by qingyun-wu, sonichi, and ekzhu
New initiate_chats Interface for Managing Dependent Chats in ConversableAgent (#1402)
* add initiate_chats implementation and example
* update notebook
* improve takeaway method
* improve print
* improve print
* improve print
* improve print
* add tests
* minor changes
* format
* correct typo
* make prompt a parameter
* add takeaway method
* groupchat messages
* add SoM example
* fix typo
* fix SoM typo
* simplify chat function
* add carryover
* update notebook
* doc
* remove async for now
* remove condition on reply
* correct argument name
* add notebook in website
* format
* make get_chat_takeaway private
* rename takeaway method and add example
* removing SoM example for now
* carryover test
* add test
* takeaway_method
* update tests
* update notebook
* chats_queue
* add get_chat_takeaway
* delete
* add test
* Update autogen/agentchat/conversable_agent.py (Co-authored-by: Eric Zhu <[email protected]>)
* docstr
* wording etc
* add chat res
* revise title
* update agent_utils
* unify the async method
* add todo about overriding
* attribute check
* ChatResult type
* revise test
* takeaway to summary
* cache and documentation
* Use cache in summarize chat; polish tests

Co-authored-by: Chi Wang <[email protected]>
Co-authored-by: Eric Zhu <[email protected]>
1 parent 7811c15 commit 3e33a2c
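The dependent-chat mechanism this commit describes can be modeled with a short, self-contained sketch: each finished chat contributes a summary that is carried over into the context of later chats, and each chat spec may add its own extra carryover. The names here (`run_sequential`, `run_chat`) are illustrative stand-ins, not the library API; the real entry point is `ConversableAgent.initiate_chats`, shown in the test diff below.

```python
# Illustrative model of dependent chats (NOT the autogen implementation):
# summaries of finished chats plus any per-chat "carryover" string are
# passed as context into the next chat in the queue.
def run_sequential(chat_queue, run_chat):
    finished_summaries = []
    for spec in chat_queue:
        carryover = list(finished_summaries)  # context from earlier chats
        extra = spec.get("carryover")
        if extra:
            carryover.append(extra)  # per-chat extra context
        summary = run_chat(spec["message"], carryover)
        finished_summaries.append(summary)
    return finished_summaries


summaries = run_sequential(
    [{"message": "task 1"}, {"message": "task 2", "carryover": "note"}],
    lambda msg, co: f"summary of {msg} given {len(co)} carryover item(s)",
)
print(summaries[-1])  # summary of task 2 given 2 carryover item(s)
```

The second chat sees two carryover items: the summary of the first chat plus its own `"carryover"` entry, which matches how the `carryover` key is used in `test_chats` below.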

File tree

7 files changed (+2737 additions, -362 deletions)


autogen/agent_utils.py

Lines changed: 2 additions & 3 deletions
@@ -1,8 +1,7 @@
 from typing import List, Dict, Tuple
-from autogen import Agent


-def gather_usage_summary(agents: List[Agent]) -> Tuple[Dict[str, any], Dict[str, any]]:
+def gather_usage_summary(agents: List) -> Tuple[Dict[str, any], Dict[str, any]]:
     """Gather usage summary from all agents.

     Args:
@@ -44,7 +43,7 @@ def aggregate_summary(usage_summary: Dict[str, any], agent_summary: Dict[str, an
     actual_usage_summary = {"total_cost": 0}

     for agent in agents:
-        if agent.client:
+        if getattr(agent, "client", None):
             aggregate_summary(total_usage_summary, agent.client.total_usage_summary)
             aggregate_summary(actual_usage_summary, agent.client.actual_usage_summary)
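The `getattr` guard introduced in `gather_usage_summary` can be exercised without the library. A minimal sketch with stand-in classes (these are not autogen types) shows why it matters: agents without an LLM client, such as a plain `UserProxyAgent`, are skipped instead of raising `AttributeError`.

```python
# Stand-in classes (not autogen types) demonstrating the getattr guard.
class ClientlessAgent:
    pass  # no .client attribute at all


class StubClient:
    total_usage_summary = {"total_cost": 0.5}


class LLMAgent:
    client = StubClient()


def total_cost(agents):
    total = 0.0
    for agent in agents:
        if getattr(agent, "client", None):  # safely skips agents lacking a client
            total += agent.client.total_usage_summary["total_cost"]
    return total


print(total_cost([ClientlessAgent(), LLMAgent()]))  # 0.5
```

With the previous `if agent.client:` check, the `ClientlessAgent` above would have raised `AttributeError` before the summary could be aggregated.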

autogen/agentchat/chat.py

Lines changed: 19 additions & 0 deletions
@@ -0,0 +1,19 @@
+import logging
+from typing import Dict, List
+from dataclasses import dataclass
+
+logger = logging.getLogger(__name__)
+
+
+@dataclass
+class ChatResult:
+    """(Experimental) The result of a chat. Almost certain to be changed."""
+
+    chat_history: List[Dict[str, any]] = None
+    """The chat history."""
+    summary: str = None
+    """A summary obtained from the chat."""
+    cost: tuple = None  # (dict, dict) - (total_cost, actual_cost_with_cache)
+    """The cost of the chat. a tuple of (total_cost, total_actual_cost), where total_cost is a dictionary of cost information, and total_actual_cost is a dictionary of information on the actual incurred cost with cache."""
+    human_input: List[str] = None
+    """A list of human input solicited during the chat."""
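A standalone mirror of the `ChatResult` dataclass above illustrates how callers of `initiate_chats` consume its fields. The field values below are hypothetical, not real API output.

```python
# Standalone mirror of ChatResult (autogen/agentchat/chat.py), for illustration.
from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple


@dataclass
class ChatResult:
    chat_history: Optional[List[Dict]] = None  # full message list of the chat
    summary: Optional[str] = None              # produced by the chosen summary_method
    cost: Optional[Tuple[dict, dict]] = None   # (total_cost, actual cost with cache)
    human_input: Optional[List[str]] = None    # human inputs solicited during the chat


# Hypothetical values, not real API output:
res = ChatResult(
    chat_history=[{"role": "assistant", "content": "NVIDIA Corporation and Tesla, Inc."}],
    summary="NVIDIA Corporation and Tesla, Inc.",
    cost=({"total_cost": 0.012}, {"total_cost": 0.0}),
    human_input=[],
)
print(res.summary)  # NVIDIA Corporation and Tesla, Inc.
```

An actual-cost dictionary of `{"total_cost": 0.0}` alongside a nonzero total is how a fully cached chat would show up, per the `cost` field's docstring.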

autogen/agentchat/conversable_agent.py

Lines changed: 268 additions & 22 deletions
Large diffs are not rendered by default.

notebook/agentchat_auto_feedback_from_code_execution.ipynb

Lines changed: 227 additions & 336 deletions
Large diffs are not rendered by default.

notebook/agentchat_multi_task_chats.ipynb

Lines changed: 2017 additions & 0 deletions
Large diffs are not rendered by default.

test/agentchat/test_chats.py

Lines changed: 202 additions & 0 deletions
@@ -0,0 +1,202 @@
+from autogen import AssistantAgent, UserProxyAgent
+from autogen import GroupChat, GroupChatManager
+from test_assistant_agent import KEY_LOC, OAI_CONFIG_LIST
+import pytest
+from conftest import skip_openai
+import autogen
+
+
+@pytest.mark.skipif(skip_openai, reason="requested to skip openai tests")
+def test_chats_group():
+    config_list = autogen.config_list_from_json(
+        OAI_CONFIG_LIST,
+        file_location=KEY_LOC,
+    )
+    financial_tasks = [
+        """What are the full names of NVDA and TESLA.""",
+        """Pros and cons of the companies I'm interested in. Keep it short.""",
+    ]
+
+    writing_tasks = ["""Develop a short but engaging blog post using any information provided."""]
+
+    user_proxy = UserProxyAgent(
+        name="User_proxy",
+        system_message="A human admin.",
+        human_input_mode="NEVER",
+        code_execution_config={
+            "last_n_messages": 1,
+            "work_dir": "groupchat",
+            "use_docker": False,
+        },
+        is_termination_msg=lambda x: x.get("content", "") and x.get("content", "").rstrip().endswith("TERMINATE"),
+    )
+
+    financial_assistant = AssistantAgent(
+        name="Financial_assistant",
+        llm_config={"config_list": config_list},
+    )
+
+    writer = AssistantAgent(
+        name="Writer",
+        llm_config={"config_list": config_list},
+        system_message="""
+        You are a professional writer, known for
+        your insightful and engaging articles.
+        You transform complex concepts into compelling narratives.
+        Reply "TERMINATE" in the end when everything is done.
+        """,
+    )
+
+    critic = AssistantAgent(
+        name="Critic",
+        system_message="""Critic. Double check plan, claims, code from other agents and provide feedback. Check whether the plan includes adding verifiable info such as source URL.
+        Reply "TERMINATE" in the end when everything is done.
+        """,
+        llm_config={"config_list": config_list},
+    )
+
+    groupchat_1 = GroupChat(agents=[user_proxy, financial_assistant, critic], messages=[], max_round=50)
+
+    groupchat_2 = GroupChat(agents=[user_proxy, writer, critic], messages=[], max_round=50)
+
+    manager_1 = GroupChatManager(
+        groupchat=groupchat_1,
+        name="Research_manager",
+        llm_config={"config_list": config_list},
+        code_execution_config={
+            "last_n_messages": 1,
+            "work_dir": "groupchat",
+            "use_docker": False,
+        },
+        is_termination_msg=lambda x: x.get("content", "").find("TERMINATE") >= 0,
+    )
+    manager_2 = GroupChatManager(
+        groupchat=groupchat_2,
+        name="Writing_manager",
+        llm_config={"config_list": config_list},
+        code_execution_config={
+            "last_n_messages": 1,
+            "work_dir": "groupchat",
+            "use_docker": False,
+        },
+        is_termination_msg=lambda x: x.get("content", "").find("TERMINATE") >= 0,
+    )
+
+    user = UserProxyAgent(
+        name="User",
+        human_input_mode="NEVER",
+        is_termination_msg=lambda x: x.get("content", "") and x.get("content", "").rstrip().endswith("TERMINATE"),
+        code_execution_config={
+            "last_n_messages": 1,
+            "work_dir": "tasks",
+            "use_docker": False,
+        },  # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.
+    )
+    chat_res = user.initiate_chats(
+        [
+            {
+                "recipient": financial_assistant,
+                "message": financial_tasks[0],
+                "summary_method": "last_msg",
+            },
+            {
+                "recipient": manager_1,
+                "message": financial_tasks[1],
+                "summary_method": "reflection_with_llm",
+            },
+            {"recipient": manager_2, "message": writing_tasks[0]},
+        ]
+    )
+
+    chat_w_manager = chat_res[manager_2]
+    print(chat_w_manager.chat_history, chat_w_manager.summary, chat_w_manager.cost)
+
+    manager_2_res = user.get_chat_results(manager_2)
+    all_res = user.get_chat_results()
+    print(manager_2_res.summary, manager_2_res.cost)
+    print(all_res[financial_assistant].human_input)
+    print(all_res[manager_1].summary)
+
+
+@pytest.mark.skipif(skip_openai, reason="requested to skip openai tests")
+def test_chats():
+    config_list = autogen.config_list_from_json(
+        OAI_CONFIG_LIST,
+        file_location=KEY_LOC,
+    )
+
+    financial_tasks = [
+        """What are the full names of NVDA and TESLA.""",
+        """Pros and cons of the companies I'm interested in. Keep it short.""",
+    ]
+
+    writing_tasks = ["""Develop a short but engaging blog post using any information provided."""]
+
+    financial_assistant_1 = AssistantAgent(
+        name="Financial_assistant_1",
+        llm_config={"config_list": config_list},
+    )
+    financial_assistant_2 = AssistantAgent(
+        name="Financial_assistant_2",
+        llm_config={"config_list": config_list},
+    )
+    writer = AssistantAgent(
+        name="Writer",
+        llm_config={"config_list": config_list},
+        is_termination_msg=lambda x: x.get("content", "").find("TERMINATE") >= 0,
+        system_message="""
+        You are a professional writer, known for
+        your insightful and engaging articles.
+        You transform complex concepts into compelling narratives.
+        Reply "TERMINATE" in the end when everything is done.
+        """,
+    )
+
+    user = UserProxyAgent(
+        name="User",
+        human_input_mode="NEVER",
+        is_termination_msg=lambda x: x.get("content", "").find("TERMINATE") >= 0,
+        code_execution_config={
+            "last_n_messages": 1,
+            "work_dir": "tasks",
+            "use_docker": False,
+        },  # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.
+    )
+
+    chat_res = user.initiate_chats(
+        [
+            {
+                "recipient": financial_assistant_1,
+                "message": financial_tasks[0],
+                "clear_history": True,
+                "silent": False,
+                "summary_method": "last_msg",
+            },
+            {
+                "recipient": financial_assistant_2,
+                "message": financial_tasks[1],
+                "summary_method": "reflection_with_llm",
+            },
+            {
+                "recipient": writer,
+                "message": writing_tasks[0],
+                "carryover": "I want to include a figure or a table of data in the blogpost.",
+                "summary_method": "last_msg",
+            },
+        ]
+    )
+
+    chat_w_writer = chat_res[writer]
+    print(chat_w_writer.chat_history, chat_w_writer.summary, chat_w_writer.cost)
+
+    writer_res = user.get_chat_results(writer)
+    all_res = user.get_chat_results()
+    print(writer_res.summary, writer_res.cost)
+    print(all_res[financial_assistant_1].human_input)
+    print(all_res[financial_assistant_1].summary)
+    # print(blogpost.summary, insights_and_blogpost)
+
+
+if __name__ == "__main__":
+    # test_chats()
+    test_chats_group()
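The tests above select between two `summary_method` values: `"last_msg"` and `"reflection_with_llm"`. A hedged sketch of what these options plausibly do follows; the real logic lives inside `ConversableAgent`, and the reflection branch is stubbed here because it would normally call an LLM.

```python
# Illustrative sketch of the two summary_method options (NOT autogen internals).
def summarize(chat_history, summary_method="last_msg", llm=None):
    if summary_method == "last_msg":
        # Use the content of the final message as the summary.
        return chat_history[-1]["content"]
    if summary_method == "reflection_with_llm":
        # The real implementation prompts an LLM to reflect on the transcript;
        # here a stub stands in for that call.
        transcript = "\n".join(m["content"] for m in chat_history)
        reflect = llm or (lambda t: "LLM summary of: " + t[:40])
        return reflect(transcript)
    raise ValueError(f"unknown summary_method: {summary_method}")


history = [
    {"content": "What are the full names of NVDA and TESLA."},
    {"content": "NVIDIA Corporation and Tesla, Inc."},
]
print(summarize(history))  # NVIDIA Corporation and Tesla, Inc.
```

Under this model, `"last_msg"` is free but depends on the final message being a good recap, while `"reflection_with_llm"` spends an extra LLM call to distill the whole conversation, which is why the tests pair it with the group-chat and multi-step tasks.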

website/docs/Examples.md

Lines changed: 2 additions & 1 deletion
@@ -22,7 +22,8 @@ Links to notebook examples:
 - Automated Task Solving with Coding & Planning Agents - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_planning.ipynb)
 - Automated Task Solving with transition paths specified in a graph - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_graph_modelling_language_using_select_speaker.ipynb)
 - Running a group chat as an inner-monolgue via the SocietyOfMindAgent - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_society_of_mind.ipynb)
-
+1. **Sequential Multi-Agent Chats**
+   - Automated Sequential Multi-Agent Chats - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_multi_task_chats.ipynb)
 1. **Applications**

 - Automated Chess Game Playing & Chitchatting by GPT-4 Agents - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_chess.ipynb)
