What is the right way to call the agent_builder.build_from_library function, and which type of JSON format does it expect?
How can we reproduce it (as minimally and precisely as possible)?
My code:
import json
import autogen
from autogen.agentchat.contrib.agent_builder import AgentBuilder
from autogen import ConversableAgent, UserProxyAgent
config_file_or_env = "OAI_CONFIG_LIST.json"
llm_config = {"temperature": 0}
config_list = autogen.config_list_from_json(config_file_or_env, filter_dict={"model": ["llama3.2"]})
def start_task(execution_task: str, agent_list: list):
    group_chat = autogen.GroupChat(agents=agent_list, messages=[], max_round=12)
    manager = autogen.GroupChatManager(groupchat=group_chat, llm_config={"config_list": config_list, **llm_config})
    agent_list[0].initiate_chat(manager, message=execution_task)
AGENT_SYS_MSG_PROMPT = """According to the following position name, write a high quality instruction for the position following a given example. You should only return the instruction.
# Position Name
{position}
# Example instruction for Data Analyst
As Data Analyst, you are tasked with leveraging your extensive knowledge in data analysis to recognize and extract meaningful features from vast datasets. Your expertise in machine learning, specifically with the Random Forest Classifier, allows you to construct robust predictive models adept at handling both classification and regression tasks. You excel in model evaluation and interpretation, ensuring that the performance of your algorithms is not just assessed with precision, but also understood in the context of the data and the problem at hand. With a command over Python and proficiency in using the pandas library, you manipulate and preprocess data with ease.
"""
AGENT_DESC_PROMPT = """According to position name and the instruction, summarize the position into a high quality one sentence description.
# Position Name
{position}
# Instruction
{instruction}
"""
position_list = [
    "Environmental_Scientist",
    "Astronomer",
    "Software_Developer",
    "Data_Analyst",
    "Journalist",
    "Teacher",
    "Lawyer",
    "Programmer",
    "Accountant",
    "Mathematician",
    "Physicist",
    "Biologist",
    "Chemist",
    "Statistician",
    "IT_Specialist",
    "Cybersecurity_Expert",
    "Artificial_Intelligence_Engineer",
    "Financial_Analyst",
]
build_manager = autogen.OpenAIWrapper(config_list=config_list)
sys_msg_list = []
for pos in position_list:
    resp_agent_sys_msg = (
        build_manager.create(
            messages=[
                {
                    "role": "user",
                    "content": AGENT_SYS_MSG_PROMPT.format(
                        position=pos,
                    ),
                }
            ]
        )
        .choices[0]
        .message.content
    )
    resp_desc_msg = (
        build_manager.create(
            messages=[
                {
                    "role": "user",
                    "content": AGENT_DESC_PROMPT.format(
                        position=pos,
                        instruction=resp_agent_sys_msg,
                    ),
                }
            ]
        )
        .choices[0]
        .message.content
    )
    sys_msg_list.append({"name": pos, "system_message": resp_agent_sys_msg, "description": resp_desc_msg})
json.dump(sys_msg_list, open("./agent_library_example.json", "w"), indent=4)
library_path_or_json = "./agent_library_example.json"
building_task = "Find a paper on arxiv by programming, and analyze its application in some domain. For example, find a recent paper about gpt-4 on arxiv and find its potential applications in software."
new_builder = AgentBuilder(config_file_or_env=config_file_or_env, builder_model="llama3.2", agent_model="llama3.2")
agent_list, _ = new_builder.build_from_library(building_task, library_path_or_json, llm_config)
start_task(
    execution_task="Find a recent paper about explainable AI on arxiv and find its potential applications in medical.",
    agent_list=agent_list,
)
new_builder.clear_all_agents()
To clarify the issue: all prompt formats, such as AGENT_DESC_PROMPT, DEFAULT_DESCRIPTION, or CODING_AND_TASK_SKILL_INSTRUCTION, produce output containing characters such as \n, which raises an error when the result is passed to json.loads, as on line 623 of agent_builder.py.
Is there a way to generate the prompts so that they do not cause a format problem when they are subsequently used as input to json.loads?
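The closest thing to a workaround I have found is to pre-process the model output before it reaches json.loads. This is only a sketch of the idea; safe_json_loads is my own helper, not part of the AutoGen API:

import json
import re

def safe_json_loads(text: str):
    # Best-effort parsing of model output that may contain markdown fences
    # or raw newlines. This is a workaround sketch, not AutoGen API.
    text = re.sub(r"^```(?:json)?\s*|\s*```$", "", text.strip())  # strip code fences
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        # strict=False tolerates raw control characters inside string values
        return json.loads(text, strict=False)

Alternatively, escaping the generated system messages with json.dumps before they are embedded anywhere that is later parsed would keep the \n characters escaped.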
What happened?
When I try to run the notebook notebook/autobuild_agent_library.ipynb:
Part of code:
Return error code:
The error is associated with the use of the json.loads function in build_from_library and with the code above that generates the JSON file.
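As far as I can tell, the underlying failure is that json.loads (in its default strict mode) rejects literal control characters such as newlines inside string values, and the formatted prompts contain exactly that. A minimal example, independent of AutoGen, that reproduces the same kind of error:

import json

# The \n becomes a literal newline inside the JSON string value,
# which strict-mode json.loads rejects as an invalid control character.
raw = '{"description": "line one\nline two"}'

try:
    json.loads(raw)
except json.JSONDecodeError as e:
    print("json.loads failed:", e)

# Escaping first (json.dumps) or parsing with strict=False avoids the error.
print(json.loads(raw, strict=False))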
What did you expect to happen?
What is the right way to call the agent_builder.build_from_library function, and which type of JSON format does it expect?
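Based on the generation code above, my assumption is that the library is a JSON list of objects with name, system_message, and description keys, and that build_from_library accepts either a path to such a file or the JSON string itself; this assumption is exactly what I would like confirmed. A sketch of how I am calling it:

# Assumed library format (this is what the generation code above writes):
# [
#     {"name": "Data_Analyst", "system_message": "...", "description": "..."},
#     ...
# ]
agent_list, _ = new_builder.build_from_library(
    building_task,                    # the task description used to select agents
    "./agent_library_example.json",   # path to the library file (or the JSON string itself?)
    llm_config,
)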
AutoGen version
autogen==0.3.2 autogen-agentchat==0.2.38 autogenstudio==0.1.5 pyautogen==0.3.2
Which package was this bug in
AgentChat
Model used
llama3.2
Python version
3.10
Operating system
ubuntu
Any additional info you think would be helpful for fixing this bug
No response