Advanced Proxy Integration for Consumption Management and Caching in TaskingAI #105
Comments
Thank you very much for your suggestions! After discussion, we have added your requirements to our development plan and expect to launch the feature in a few months. Your suggestions are incredibly valuable; thank you once again. :)
Hello, is there any progress on this issue at the moment?
@zeahoo We are testing it internally now; the feature is expected to be released in early June. :-)
Is your feature request related to a problem? Please describe.
Currently, when developing applications that rely on large language models (LLMs) with TaskingAI, we face challenges in efficiently managing token consumption and optimizing costs through caching. The inability to integrate proxy solutions, such as Helicone, LangSmith, LangFuse, and Lunary, limits our ability to monitor token usage by project, assistant, and user, as well as to reuse responses to reduce costs with paid models.
Describe the solution you'd like
I would like TaskingAI to implement functionality that allows easy configuration of proxies for LLM models. This would include the ability to replace the base URL of the LLM model API and to add custom authentication and configuration headers to requests sent to the models. It would enable market solutions like Helicone, LangSmith, LangFuse, and Lunary to be used to:

- monitor token consumption by project, assistant, and user;
- cache and reuse responses to reduce costs with paid models.
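As an illustration, a proxy-aware model configuration in TaskingAI could look something like the sketch below. The field names (`proxy`, `base_url`, `headers`, etc.) are purely hypothetical, since no such API exists in TaskingAI today; the point is simply to show the two knobs the feature request asks for.

```python
# Purely hypothetical sketch of a proxy-aware TaskingAI model configuration;
# none of these field names exist in TaskingAI today.
model_config = {
    "model_schema_id": "openai/gpt-4",
    "credentials": {"OPENAI_API_KEY": "sk-..."},  # placeholder key
    # Proposed additions from this feature request:
    "proxy": {
        "base_url": "https://oai.helicone.ai/v1",  # replace the default API base URL
        "headers": {                               # extra headers added to each request
            "Helicone-Auth": "Bearer <HELICONE_API_KEY>",
        },
    },
}
print(model_config["proxy"]["base_url"])  # https://oai.helicone.ai/v1
```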
Describe alternatives you've considered
Because TaskingAI currently does not allow customizing LLM model configurations, no workaround is feasible: without the ability to modify the base URL and add the necessary authentication and configuration headers, third-party proxy solutions for managing token consumption and caching simply cannot be integrated. This lack of flexibility significantly hampers cost optimization and management, leaving us without viable alternatives.
Additional context
Below are examples of how the mentioned proxy solutions can currently be configured, demonstrating the simplicity and effectiveness of these integrations:
Example of Helicone usage:
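The original snippet was not captured here; below is a minimal sketch of the pattern Helicone documents for the OpenAI API: point requests at Helicone's proxy endpoint instead of api.openai.com and attach a `Helicone-Auth` header (all keys are placeholders). With the OpenAI SDK, these values would be passed as `base_url` and `default_headers` when constructing the client.

```python
# Minimal sketch of a Helicone-style proxy configuration (keys are placeholders).
def helicone_proxy_config(openai_key: str, helicone_key: str) -> dict:
    return {
        # Requests go to Helicone's proxy, which logs token usage
        # and forwards the call on to OpenAI.
        "base_url": "https://oai.helicone.ai/v1",
        "headers": {
            "Authorization": f"Bearer {openai_key}",
            "Helicone-Auth": f"Bearer {helicone_key}",
            # Optional: enable response caching on Helicone's side.
            "Helicone-Cache-Enabled": "true",
        },
    }

cfg = helicone_proxy_config("sk-...", "sk-helicone-...")
print(cfg["base_url"])  # https://oai.helicone.ai/v1
```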
Example of LangSmith usage:
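Again, the original snippet is missing; as a sketch, LangSmith is typically enabled through environment variables that its SDK reads automatically, so the LLM-calling code itself does not change. The variable names follow LangSmith's documented convention; the API key and project name below are placeholders.

```python
import os

# Sketch: LangSmith tracing is switched on via environment variables;
# the SDK then records LLM calls without changes to the calling code.
def enable_langsmith(api_key: str, project: str) -> dict:
    env = {
        "LANGCHAIN_TRACING_V2": "true",  # turn tracing on
        "LANGCHAIN_API_KEY": api_key,    # placeholder key
        "LANGCHAIN_PROJECT": project,    # group runs under a project name
    }
    os.environ.update(env)
    return env

env = enable_langsmith("ls__...", "taskingai-demo")
print(sorted(env))
```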
Integrating with these proxy solutions would not only optimize resource use and costs in our projects but also significantly improve the management and scalability of applications developed with TaskingAI.