Issues with with_structured_output(...) in ChatVertexAI: KeyError and TypeError #18975
Replies: 6 comments 19 replies
-
Hi @jorge2najo, can you try JSON mode and tell me what you get?

```python
structured_llm = llm.with_structured_output(AnswerWithJustification, method="json_mode")
```
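For intuition on what `method="json_mode"` changes: the model is asked to emit raw JSON with the schema's fields, which is then parsed into the pydantic object. A stdlib-only sketch of that parsing step (the `raw` string below is a hypothetical completion, not real model output):

```python
import json

# Hypothetical raw completion a model might return in JSON mode for the
# AnswerWithJustification schema (illustrative only).
raw = '{"answer": "They weigh the same.", "justification": "A pound is a pound regardless of material."}'

# json_mode ultimately hands a dict like this to the pydantic model.
parsed = json.loads(raw)
print(parsed["answer"])  # They weigh the same.
```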
-
Hi @maximeperrindev, thank you again for your answer. Unfortunately it didn't work.

Script:

```python
from langchain_core.pydantic_v1 import BaseModel
# from langchain_community.chat_models.vertexai import ChatVertexAI
# from langchain_google_vertexai import ChatVertexAI
from langchain_google_vertexai import create_structured_runnable
from langchain_google_vertexai import VertexAI


class AnswerWithJustification(BaseModel):
    '''An answer to the user question along with justification for the answer.'''

    answer: str
    justification: str


llm = VertexAI(model="gemini-pro", temperature=0)
# structured_llm = llm.with_structured_output(AnswerWithJustification)
chain = create_structured_runnable(AnswerWithJustification, llm)
chain.invoke("What weighs more a pound of bricks or a pound of feathers")
```

Error:
-
I get a related error when following the example notebook. The script below runs fine if the schema is set to `Person`. However, it fails when the schema is set to `Data`:

```python
runnable = prompt | llm.with_structured_output(schema=Data)
```

Version:

```python
import langchain
from google.cloud import aiplatform

print(f"LangChain version: {langchain.__version__}")
print(f"Vertex AI SDK version: {aiplatform.__version__}")
```

Output: `LangChain version: 0.1.11`

Script:

```python
from typing import List, Optional

from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_google_vertexai import ChatVertexAI, HarmCategory, HarmBlockThreshold


class Person(BaseModel):
    """Information about a person."""

    name: Optional[str] = Field(..., description="The name of the person")
    hair_color: Optional[str] = Field(
        ..., description="The color of the person's hair if known"
    )
    height_in_meters: Optional[str] = Field(
        ..., description="Height measured in meters"
    )


class Data(BaseModel):
    """Extracted data about people."""

    people: List[Person]


prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You are an expert extraction algorithm. "
            "Only extract relevant information from the text. "
            "If you do not know the value of an attribute asked to extract, "
            "return null for the attribute's value.",
        ),
        # Please see the how-to about improving performance with
        # reference examples.
        # MessagesPlaceholder('examples'),
        ("human", "{text}"),
    ]
)

llm = ChatVertexAI(
    model_name="gemini-pro",
    temperature=0,
    safety_settings={
        HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE,
    },
    convert_system_message_to_human=True,
)

text = "My name is Jeff, my hair is black and i am 6 feet tall. Anna has the same color hair as me."
runnable = prompt | llm.with_structured_output(schema=Data)
runnable.invoke({"text": text})
```

Error:
Other questions: is there an analog function to the following?

```python
import json
from langchain_core.utils.function_calling import convert_to_openai_tool

print(json.dumps(convert_to_openai_tool(Data), indent=2))
```

Output:

```json
{
  "type": "function",
  "function": {
    "name": "Data",
    "description": "Extracted data about people.",
    "parameters": {
      "type": "object",
      "properties": {
        "people": {
          "type": "array",
          "items": {
            "description": "Information about a person.",
            "type": "object",
            "properties": {
              "name": {
                "description": "The name of the person",
                "type": "string"
              },
              "hair_color": {
                "description": "The color of the person's hair if known",
                "type": "string"
              },
              "height_in_meters": {
                "description": "Height measured in meters",
                "type": "string"
              }
            },
            "required": [
              "name",
              "hair_color",
              "height_in_meters"
            ]
          }
        }
      },
      "required": [
        "people"
      ]
    }
  }
}
```

How can I see the generated schema used in the call of the tool?
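One way to inspect just the schema, assuming the `convert_to_openai_tool` output is representative of what gets sent: the JSON Schema sits under the `function.parameters` key of the tool dict. The abridged dict below is copied from the output above:

```python
import json

# Abridged version of the convert_to_openai_tool(Data) output shown above.
tool = {
    "type": "function",
    "function": {
        "name": "Data",
        "description": "Extracted data about people.",
        "parameters": {
            "type": "object",
            "properties": {"people": {"type": "array"}},
            "required": ["people"],
        },
    },
}

# The schema attached to the tool call lives under function.parameters.
schema = tool["function"]["parameters"]
print(json.dumps(schema, indent=2))
```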
-
@jorge2najo okay, I get what you want. Originally, we were trying to make this work:

```python
from typing import Optional

from langchain_google_vertexai import ChatVertexAI, create_structured_runnable
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field


class RecordPerson(BaseModel):
    """Record some identifying information about a person."""

    name: str = Field(..., description="The person's name")
    age: int = Field(..., description="The person's age")
    fav_food: Optional[str] = Field(None, description="The person's favorite food")


class RecordDog(BaseModel):
    """Record some identifying information about a dog."""

    name: str = Field(..., description="The dog's name")
    color: str = Field(..., description="The dog's color")
    fav_food: Optional[str] = Field(None, description="The dog's favorite food")


llm = ChatVertexAI(model_name="gemini-pro")
prompt = ChatPromptTemplate.from_template("""
You are a world class algorithm for recording entities.
Make calls to the relevant function to record the entities in the following input: {input}
Tip: Make sure to answer in the correct format"""
)
chain = create_structured_runnable([RecordPerson, RecordDog], llm, prompt=prompt)
chain.invoke({"input": "Harry was a chubby brown beagle who loved chicken"})
# -> RecordDog(name="Harry", color="brown", fav_food="chicken")
```

Can you just try to run this? If it doesn't work, we will probably need to open an issue.
-
Hi @maximeperrindev, the script you suggested works. 👍 Thank you. If, for example, I use `create_structured_runnable` as in script 1, I get an error.

Script 1:

```python
from typing import Optional, Type

from langchain_google_vertexai import ChatVertexAI, create_structured_runnable
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain.tools import tool, BaseTool


class RecordDog(BaseModel):
    """Record some identifying information about a dog."""

    name: str = Field(..., description="The dog's name")
    color: str = Field(..., description="The dog's color")
    fav_food: Optional[str] = Field(None, description="The dog's favorite food")


class RecordDogTool(BaseTool):
    name = "record_dog_tool"
    description = "useful for when you need to record information about a dog"
    args_schema: Type[BaseModel] = RecordDog
    return_direct: bool = False

    def _run(
        self, name: str = "Harry", color: str = "brown", fav_food: Optional[str] = "chicken"
    ) -> str:
        """Use the tool."""
        return f"Recorded {name} who is {color} and loves {fav_food}."


class RecordPerson(BaseModel):
    """Record some identifying information about a person."""

    name: str = Field(..., description="The person's name")
    age: int = Field(..., description="The person's age")
    fav_food: Optional[str] = Field(None, description="The person's favorite food")


class RecordPersonTool(BaseTool):
    name = "record_person_tool"
    description = "useful for when you need to record information about a person"
    args_schema: Type[BaseModel] = RecordPerson
    return_direct: bool = False

    def _run(
        self, name: str = "Pedro", age: int = 30, fav_food: Optional[str] = "Pizza"
    ) -> str:
        """Use the tool."""
        return f"Recorded {name} who is {age} years old and loves {fav_food}."


llm = ChatVertexAI(model_name="gemini-pro")
prompt = ChatPromptTemplate.from_template("""
You are a world class algorithm for recording entities.
Make calls to the relevant function to record the entities in the following input: {input}
Tip: Make sure to answer in the correct format"""
)
chain = create_structured_runnable([RecordDogTool, RecordPersonTool], llm, prompt=prompt)
result = chain.invoke({"input": "Martin is 30 years old who loves paella"})
print(result)
```

Error:
Problem: I would like to use the following structure to get the `function_name` and the `arguments`. This works perfectly if I use ChatOpenAI. This information is relevant for me, because then I can use the `ToolExecutor` module from `langgraph.prebuilt.tool_executor`. Maybe some idea for ChatVertexAI?

Script I would like to use:

```python
class RecordPerson(BaseModel):
    """Record some identifying information about a person."""

    name: str = Field(..., description="The person's name")
    age: int = Field(..., description="The person's age")
    fav_food: Optional[str] = Field(None, description="The person's favorite food")


class RecordPersonTool(BaseTool):
    name = "record_person_tool"
    description = "useful for when you need to record information about a person"
    args_schema: Type[BaseModel] = RecordPerson
    return_direct: bool = False

    def _run(
        self, name: str = "Pedro", age: int = 30, fav_food: Optional[str] = "Pizza"
    ) -> str:
        """Use the tool."""
        return f"Recorded {name} who is {age} years old and loves {fav_food}."


# Creation of the list of tools
tools = [RecordPersonTool(), ..............]

llm = ChatVertexAI(model=self.model, temperature=0)

# Binding the tools to the model
functions = [convert_to_openai_function(t) for t in tools]
llm_with_tools = llm.bind(tools=functions)
result = llm_with_tools("Martin is 30 years old who loves paella")

#### function_name ####
tool = result.additional_kwargs["function_call"]["name"]
print(tool)

#### arguments ####
tool_input = json.loads(result.additional_kwargs["function_call"]["arguments"])
print(tool_input)
```

Expected output:
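For reference, the extraction logic above can be exercised against a mocked payload shaped like the `additional_kwargs` that ChatOpenAI attaches to a message (the dict below is hypothetical sample data, not real model output):

```python
import json

# Hypothetical payload shaped like the additional_kwargs ChatOpenAI
# returns when the model decides to call a function.
additional_kwargs = {
    "function_call": {
        "name": "record_person_tool",
        "arguments": '{"name": "Martin", "age": 30, "fav_food": "paella"}',
    }
}

# function_name
tool_name = additional_kwargs["function_call"]["name"]
print(tool_name)  # record_person_tool

# arguments: the model returns them as a JSON string, hence json.loads
tool_input = json.loads(additional_kwargs["function_call"]["arguments"])
print(tool_input)  # {'name': 'Martin', 'age': 30, 'fav_food': 'paella'}
```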
-
@jorge2najo the mistake here

```python
from langchain_core.pydantic_v1 import BaseModel
from langchain_google_vertexai import ChatVertexAI


class AnswerWithJustification(BaseModel):
    '''An answer to the user question along with justification for the answer.'''

    answer: str
    justification: str


llm = ChatVertexAI(model="gemini-pro", temperature=0)
structured_llm = llm.with_structured_output(AnswerWithJustification)
structured_llm.invoke("What weighs more a pound of bricks or a pound of feathers")
```

is that you need to specify `model_name` rather than `model` when constructing `ChatVertexAI`. Apologies for the issue; I know this mistake is from the docs. I will update the docs and also update ChatVertexAI to treat `model` and `model_name` the same.
-
I'm encountering errors when attempting to use ChatVertexAI with the `with_structured_output(...)` method in two separate scripts. The first script results in a `KeyError: 'type'`, and the second throws a `TypeError: _ChatModelBase.start_chat() got an unexpected keyword argument 'functions'`.
Details:
Code Samples:
Script 1:
Script 1 Error: `KeyError: 'type'`
Script 2:
Script 2 Error: `TypeError: _ChatModelBase.start_chat() got an unexpected keyword argument 'functions'`
Environment Details:
I'm working with the following libraries:
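To capture the environment details for a report like this, a small hypothetical helper using only the standard library can collect installed versions. The package names below are assumptions; adjust them to your environment:

```python
from importlib import metadata

# Hypothetical helper: collect installed versions of the relevant packages.
packages = ("langchain", "langchain-google-vertexai", "google-cloud-aiplatform")
versions = {}
for pkg in packages:
    try:
        versions[pkg] = metadata.version(pkg)
    except metadata.PackageNotFoundError:
        versions[pkg] = "not installed"

for pkg, ver in versions.items():
    print(f"{pkg}: {ver}")
```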
Questions:
I appreciate any insights or suggestions the community can offer. Thank you!