RFC: LLM structured output interface #18154
Replies: 7 comments 4 replies
-
This is a great improvement that could save calls. The purpose: my business has a feature that lets users construct report datasource structures (just table columns, like: name, age, gender...) which are persisted as JSON objects; I later send these datasource JSONs directly (along with private PDF text) to the LLM to generate concrete report data source objects. I previously did this with something like the following, expecting results such as:
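Purely as an illustration (the commenter's own snippet is not shown above), here is a stdlib-only sketch of checking an LLM reply against such a user-defined datasource of column names; every name in it is an assumption, not the commenter's actual code:

```python
import json

# Assumed shapes for illustration: a user-built datasource structure
# (just column names) and a concrete row object produced by the LLM.
datasource = json.loads('{"columns": ["name", "age", "gender"]}')
llm_reply = json.loads('{"name": "Ada", "age": 36, "gender": "F"}')

def matches_datasource(row: dict, source: dict) -> bool:
    """Check that the LLM's concrete object has exactly the user-defined columns."""
    return set(row) == set(source["columns"])

print(matches_datasource(llm_reply, datasource))  # True
```

The point of `with_structured_output` in this use case would be to make the model's reply conform to such a schema up front, rather than validating after the fact.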
-
Does this need an adapter for, e.g., YAML / Protobuf? Let me know if that would help.
-
I've installed the latest version and tried to test your example from the official guide, but it still returns NotImplementedError.
-
This doesn't work with locally hosted Mistral-Instruct models. Sad!
Error:
-
Hi, I wanted to check whether this will support the use case below with models like LLaMA-70B. Let's say I get JSON documents from different sources; the keys are all semantically the same, but the key names are not consistent. Is it possible to standardize the JSON documents to a golden JSON structure?
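Whether `with_structured_output` supports this for a given LLaMA-70B provider depends on what that provider exposes (function calling, JSON mode, etc.), but the key-normalization step itself can be a deterministic sketch. The synonym map and key names below are hypothetical:

```python
# Hypothetical synonym map: variant key names -> golden key names.
GOLDEN_KEYS = {
    "name": "full_name", "fullname": "full_name", "full_name": "full_name",
    "dob": "birth_date", "date_of_birth": "birth_date", "birth_date": "birth_date",
}

def standardize(doc: dict) -> dict:
    """Rename known variant keys to the golden schema; drop unknown keys."""
    out = {}
    for key, value in doc.items():
        golden = GOLDEN_KEYS.get(key.lower().replace(" ", "_"))
        if golden is not None:
            out[golden] = value
    return out

print(standardize({"FullName": "Ada", "DOB": "1815-12-10"}))
# {'full_name': 'Ada', 'birth_date': '1815-12-10'}
```

An LLM-backed version would instead bind the golden schema via `with_structured_output` and let the model do the mapping, which helps when the variant names are too open-ended to enumerate.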
-
```python
from langchain.chains.openai_tools import create_extraction_chain_pydantic

table_chain = create_extraction_chain_pydantic(Table, openAI_llm, system_message=prompt)
```

When I try to run this code I'm facing an error.
-
create_openai_fn_runnable accepted multiple functions; this only appears to accept one. Is there a workaround?
-
Getting structured outputs from a model is essential for most LLM tasks. We need to make the UX for getting structured outputs from a model as simple as possible. Our current idea is to add a
`ChatModel.with_structured_output(schema, **kwargs)`
constructor that handles creating a chain for you, which under the hood does function calling, or whatever else the specific model supports for structuring outputs, plus some nice output parsing. The interface is simple. You can see some initial implementations here:
langchain/libs/partners/openai/langchain_openai/chat_models/base.py
Line 776 in a4896da
langchain/libs/partners/fireworks/langchain_fireworks/chat_models.py
Line 633 in a4896da
And you can try out the OpenAI implementation in this notebook: https://colab.research.google.com/drive/1UL9wfnHcbKEAhIU193kTCI7ccNe1AQto
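For readers skimming the thread, here is a rough, stdlib-only sketch of the shape the interface takes. It is not the actual implementation: the fake model below returns canned JSON where a real ChatModel would use function calling or a provider-specific structured mode, and `Person` is an invented schema (a real call would also accept Pydantic models):

```python
import json
from dataclasses import dataclass, fields

@dataclass
class Person:
    """Invented schema for illustration."""
    name: str
    age: int

class FakeChatModel:
    """Stand-in for a ChatModel: returns canned JSON instead of calling an API."""
    def invoke(self, prompt: str) -> str:
        return '{"name": "Ada", "age": 36}'

    def with_structured_output(self, schema):
        # Sketch of the idea: bind the schema to the model (e.g. as a
        # function/tool definition), then parse the reply into `schema`.
        def run(prompt: str):
            raw = json.loads(self.invoke(prompt))
            allowed = {f.name for f in fields(schema)}
            return schema(**{k: v for k, v in raw.items() if k in allowed})
        return run

structured = FakeChatModel().with_structured_output(Person)
person = structured("Who wrote the first program?")
# person is a Person instance, not a raw string
```

The appeal of the design is that the caller gets typed objects back and never touches the provider-specific plumbing.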
Everything here is in beta and rapidly evolving. Any and all feedback on the interface would be super appreciated. Again, structuring outputs is an essential part of most LLM applications, and we really want to make sure we get this right.