RFC: LLM structured output interface #18154
Replies: 11 comments 9 replies
-
This is a great improvement that could save an extra call.
The expected result would be:
The purpose is that my business has a feature that lets users construct report data-source structures (just table columns, e.g. name, age, gender...) and persist them as JSON objects. I later send these data-source JSONs (along with private PDF text) directly to the LLM to generate a concrete report data-source object. I previously did this with something like:
-
Does this need an adapter for, e.g., YAML / Protobuf? Let me know if that would help.
-
I've installed the latest version and tried to test your example from the official guide, but it still returns NotImplementedError.
-
This doesn't work with locally hosted Mistral-Instruct models. Sad!
Error:
-
Hi, I wanted to check whether this will support the use case below with models like LLaMA-70B. Let's say I get JSON documents from different sources; the keys carry the same information, but the key names are not consistent. Is it possible to standardize the JSON documents into a golden JSON structure?
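Not an official answer, but one way this use case could look with the proposed interface (a rough sketch: `GoldenRecord`, its field names, the example document, and the OpenAI model are all placeholders, and whichever model you use must actually implement `with_structured_output`, which not every locally hosted LLaMA integration does):

```python
from typing import Optional

from langchain_openai import ChatOpenAI  # swap in any provider that supports with_structured_output
from pydantic import BaseModel, Field


class GoldenRecord(BaseModel):
    """The 'golden' structure every incoming document should be mapped to."""

    full_name: str = Field(description="The person's name, regardless of the source key used")
    age: Optional[int] = Field(default=None, description="Age in years, if present")
    gender: Optional[str] = Field(default=None, description="Gender, if present")


llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
normalizer = llm.with_structured_output(GoldenRecord)

# Source documents use inconsistent key names for the same information.
raw_doc = '{"Name": "Ada", "years_old": 36, "sex": "F"}'
golden = normalizer.invoke(f"Map this JSON document onto the target schema:\n\n{raw_doc}")
```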
-
from langchain.chains.openai_tools import create_extraction_chain_pydantic
table_chain = create_extraction_chain_pydantic(Table, openAI_llm, system_message=prompt)

When I try to run this code, I'm facing an error.
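For reference, a self-contained version of that call looks roughly like the sketch below (the `Table` schema and prompt text are placeholders, and `create_extraction_chain_pydantic` needs a tool-calling chat model such as an OpenAI one; on older LangChain versions the schema may need to come from `langchain_core.pydantic_v1`):

```python
from langchain.chains.openai_tools import create_extraction_chain_pydantic
from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field


class Table(BaseModel):
    """A table relevant to the user question."""

    name: str = Field(description="Name of the table")


llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
prompt = "Return the names of any SQL tables that are relevant to the user question."

# Wraps the schema in a (system, human) prompt and parses tool calls back into
# Table instances; the result is a list, since several tables may match.
table_chain = create_extraction_chain_pydantic(Table, llm, system_message=prompt)

tables = table_chain.invoke({"input": "What were the sales figures by region last quarter?"})
```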
-
create_openai_fn_runnable accepted multiple functions; this only appears to accept one. Is there a workaround?
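Not sure whether this is the intended answer, but one workaround is to wrap the alternatives in a single parent schema, since `with_structured_output` takes exactly one schema (a sketch; the schemas and model name here are made up for illustration):

```python
from typing import Union

from langchain_openai import ChatOpenAI
from pydantic import BaseModel


class RecordPerson(BaseModel):
    """Details about a person."""

    name: str
    age: int


class RecordDog(BaseModel):
    """Details about a dog."""

    name: str
    breed: str


class Response(BaseModel):
    """Single wrapper schema standing in for multiple 'functions'."""

    output: Union[RecordPerson, RecordDog]


llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
structured_llm = llm.with_structured_output(Response)

result = structured_llm.invoke("Harry was a chubby brown beagle who loved chicken.")
# result.output is either a RecordPerson or a RecordDog, whichever fit the input.
```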
-
I like the idea and the concept. However, how can this be used in combination with
This worked with the old
-
Hi, I used Pydantic and JSON output structures in the past, but switched to describing the JSON structure in the prompt itself (which is what happens in the background with output structures anyway). I tested this with different models and the output is always the same (with output classes, not quite). The benefits are that everything lives in the same prompt (file) and there's no need to code separate output classes. The question is: what would be the benefit of the LLM structured output interface?
-
Hi all, trying to execute the code from the template link:

class Joke(BaseModel):
structured_llm = llm.with_structured_output(Joke)

I get the following error: the returned arguments

const joke: functions.Joke = ({ setup, punchline, rating = 7 }) => { joke({

are not valid JSON. Received JSONDecodeError: Expecting value: line 1 column 1 (char 0). I'm using langchain-openai version 0.1.14.
-
Hi, when I was using structured_llm with a Pydantic model defined, I found that if I provide documents that contain two or more records within a single document, it will not natively output all of the records. The default behavior of structured_llm with a Pydantic model is to return a single instance of the model, which limits output to one record per query. Are there any workarounds for letting structured_llm capture more than one record within a single document? Thanks!
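One workaround (a minimal sketch; the `Person`/`People` schemas and model name are placeholders) is to make the top-level schema a wrapper whose field is a list, so a single call can return every record in the document:

```python
from typing import List

from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field


class Person(BaseModel):
    """Information about one person mentioned in the text."""

    name: str
    age: int


class People(BaseModel):
    """Wrapper schema: the list field lets the model return every match."""

    people: List[Person] = Field(description="All people found in the document")


llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
structured_llm = llm.with_structured_output(People)

doc = "Alice is 31 years old and her brother Bob is 27."
result = structured_llm.invoke(doc)
# result.people -> [Person(name='Alice', age=31), Person(name='Bob', age=27)]
```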
-
Getting structured outputs from a model is essential for most LLM tasks. We need to make the UX for getting structured outputs from a model as simple as possible. Our current idea is to add a
ChatModel.with_structured_output(schema, **kwargs)
constructor that handles creating a chain for you, which under the hood does function calling, or whatever else the specific model supports for structuring outputs, plus some nice output parsing. The interface is simple (a usage sketch follows the links below). You can see some initial implementations here:
langchain/libs/partners/openai/langchain_openai/chat_models/base.py
Line 776 in a4896da
langchain/libs/partners/fireworks/langchain_fireworks/chat_models.py
Line 633 in a4896da
And you can try out the OpenAI implementation in this notebook: https://colab.research.google.com/drive/1UL9wfnHcbKEAhIU193kTCI7ccNe1AQto
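For illustration, here is a minimal usage sketch of the proposed interface (the `Joke` schema and model name are just examples; depending on your LangChain version you may need `langchain_core.pydantic_v1` instead of `pydantic`):

```python
from typing import Optional

from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field


class Joke(BaseModel):
    """A joke to tell the user."""

    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline of the joke")
    rating: Optional[int] = Field(default=None, description="How funny it is, 1-10")


llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)

# Binds the schema via function/tool calling and attaches an output parser,
# so invoking the runnable returns a Joke instance rather than a raw message.
structured_llm = llm.with_structured_output(Joke)

joke = structured_llm.invoke("Tell me a joke about cats")
print(joke.setup)
print(joke.punchline)
```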
Everything here is in beta and rapidly evolving. Any and all feedback on the interface would be super appreciated. Again, structuring outputs is an essential part of most LLM applications; we really want to make sure we get this right: