Achieve seamless connection between program code and LLM prompt #653
Comments
Can't you do manual unpacking pretty easily? If this is not what you are after, could you show us some code?
Let's imagine we just have the object.
This is actual output, not pseudo-code. call_llm_with_class is just a wrapper.
Closing this issue for now due to a lack of activity.
Is your feature request related to a problem? Please describe.
My feature request is not related to an existing problem.
Describe the solution you'd like
The current capability is one-way: it formats LLM output into program objects. I would like to suggest making it two-way. For example, you could execute the LLM call as a function and pass a program object in as a parameter; internally, the program object would be converted into a prompt automatically, the LLM call would be executed, and the return result would then be formatted back into a program object.
This feature would enable a seamless connection between program code and LLM prompts. This would be great!
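A minimal sketch of the two-way flow described above, using only the standard library. The names `UserQuery`, `Answer`, `object_to_prompt`, and `fake_llm` are hypothetical illustrations (only `call_llm_with_class` is mentioned in this thread, and its real signature is not shown here); the LLM is stubbed out with a callable so the round trip can be demonstrated without any API:

```python
from dataclasses import dataclass, asdict, fields
import json

@dataclass
class UserQuery:          # hypothetical input object
    question: str
    max_words: int

@dataclass
class Answer:             # hypothetical structured response
    text: str
    confidence: float

def object_to_prompt(obj) -> str:
    # Forward direction: serialize the program object into the prompt body
    # and describe the response schema the model should follow.
    return (
        "Input (JSON):\n"
        + json.dumps(asdict(obj), indent=2)
        + "\n\nRespond with a JSON object having keys: "
        + ", ".join(f.name for f in fields(Answer))
    )

def call_llm_with_class(obj, response_cls, llm):
    # Two-way wrapper: object -> prompt -> LLM call -> object.
    prompt = object_to_prompt(obj)
    raw = llm(prompt)             # llm is any callable: str -> str
    data = json.loads(raw)        # backward direction: parse the reply
    return response_cls(**data)   # ...and rebuild a program object

def fake_llm(prompt: str) -> str:
    # Stand-in for a real model call, for demonstration only.
    return json.dumps({"text": "42", "confidence": 0.9})

result = call_llm_with_class(UserQuery("What is 6*7?", 5), Answer, fake_llm)
print(result)  # Answer(text='42', confidence=0.9)
```

A production version would need schema validation and retry logic for malformed model output, but the object-to-prompt and response-to-object directions are the essence of the request.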
Describe alternatives you've considered
Additional context