
[SIP-128] AI/LLM query generation in SQL lab #28167

Open
surapuramakhil opened this issue Apr 21, 2024 · 12 comments
Labels
sip Superset Improvement Proposal

Comments

@surapuramakhil

surapuramakhil commented Apr 21, 2024

Please make sure you are familiar with the SIP process documented
here. The SIP will be numbered by a committer upon acceptance.

[SIP] Proposal for AI/LLM query generation in SQL Lab

Motivation

To make Apache Superset dashboard/chart creation possible for users without a dev/SQL background.
#27272

Proposed Change

Describe how the feature will be implemented, or the problem will be solved. If possible, include mocks, screenshots, or screencasts (even if from different tools).

  1. A prompt box at the top of the current SQL editor, along with a "Generate query" button.

[Screenshot: the current SQL Lab, showing the SQL editor box]

  2. Forward the user's prompt to the LLM along with system prompts that share database schema information (consider it RAG) for quality responses. (Optional) Some additional queries/prompts may be needed for understanding the data, e.g. sharing the first 10 rows, or the distinct values of a column, etc. (whatever is necessary).

  3. Populate the editor with the query generated by the model.

Query Generation
There are already pipelines in LangChain for this: https://python.langchain.com/docs/use_cases/sql/quickstart/#convert-question-to-sql-query.

We can use these pipelines to generate queries from text. They work with any LLM, so we can just piggyback on that. All I am planning is to have an llm_provider or llm_factory that creates the LLM based on user configuration and sends it to the pipeline.
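As a minimal sketch (not a final design), this is roughly what wiring up that LangChain pipeline could look like. The connection string, model name, and the use of ChatOpenAI are placeholders; in practice the LLM instance would come from the proposed llm_provider/llm_factory:

```python
# Sketch only: LangChain's documented text-to-SQL quickstart pattern.
# The DB URI and model are placeholders; Superset would reuse its own
# database connection and the admin-configured LLM instead.
from langchain_community.utilities import SQLDatabase
from langchain_openai import ChatOpenAI
from langchain.chains import create_sql_query_chain

db = SQLDatabase.from_uri("sqlite:///examples.db")       # placeholder connection
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)   # would come from llm_factory

chain = create_sql_query_chain(llm, db)
sql = chain.invoke({"question": "How many orders were placed last month?"})
print(sql)  # the generated SQL that would populate the SQL Lab editor
```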

A Few Technical Implementations / Considerations

  • LLM access will be provided by the Superset admin/owner configuring an API endpoint (this also includes other necessary information such as access & secret keys). This could also be done via LangChain, as some existing PRs have done.

Providing LLM access as an API gives users the choice of using existing services rather than deploying their own; packaging an LLM inside the Superset deployment is not feasible.

  • The OpenAI-style model interface has become pretty common, and a lot of other models follow it. (Inputs required on this.) A sketch of what the configuration could look like follows below.
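To make this concrete, here is a hypothetical sketch of admin-side configuration in superset_config.py. None of these keys exist in Superset today; the names are illustrative only:

```python
# Hypothetical superset_config.py additions (illustrative names, not a real
# Superset API): everything an admin would need to point SQL Lab at an LLM.
import os

LLM_QUERY_GENERATION_ENABLED = True
LLM_PROVIDER = "openai"                          # or "self-hosted", etc.
LLM_API_ENDPOINT = "https://api.openai.com/v1"   # or an internal endpoint
LLM_API_KEY = os.environ["LLM_API_KEY"]          # keep secrets out of the config file
LLM_MODEL = "gpt-3.5-turbo"
```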

Backend Architecture Diagram

[Backend architecture diagram]

New or Changed Public Interfaces

Describe any new additions to the model, views or REST endpoints. Describe any changes to existing visualizations, dashboards and React components. Describe changes that affect the Superset CLI and how Superset is deployed.

New dependencies

Describe any npm/PyPI packages that are required. Are they actively maintained? What are their licenses?

Migration Plan and Compatibility

Describe any database migrations that are necessary, or updates to stored URLs.

Rejected Alternatives

Describe alternative approaches that were considered and rejected.

@surapuramakhil surapuramakhil added the sip Superset Improvement Proposal label Apr 21, 2024
@rusackas
Member

We'll need to cover a lot of things that are missing from this SIP:
• What packages/licenses are needed, and are they compatible?
• What are the security/privacy implications?
• How do we (as an open-source solution) stay vendor-agnostic here? What's the abstraction layer?

This will need to be put up for a DISCUSS thread on the mailing list to move forward, but I think the proposal needs more detail/resolution.

@surapuramakhil
Author

surapuramakhil commented Apr 23, 2024

We'll need to cover a lot of things that are missing from this SIP:
• What packages/licenses are needed, and are they compatible?
Python langchain package (or) modules required for making HTTP calls

• What are the security/privacy implications?
User configures the necessary API keys. LLM calls happen through the backend, since the schema needs to be passed to the RAG step for quality responses.
Either approach is supported: self-hosted (protecting security & privacy) or a provider of choice.

• How do we (as an open-source solution) stay vendor-agnostic here? What's the abstraction layer?
We can stay vendor-agnostic by leaving the choice to the user: their preferred mode (self-hosted, LLM as a service, etc.) and also their choice of LLM.
What's the abstraction layer?
I have found https://python.langchain.com/docs/use_cases/sql/quickstart/#convert-question-to-sql-query in LangChain, which we can use directly.

@surapuramakhil
Author

surapuramakhil commented Apr 24, 2024

Two options (draft) - feel free to add what you think. (I probably need to find a way to make this collaborative.)

If the maintainers can create a sheet, that works; otherwise, I can create a spreadsheet for evaluating and suggesting the various approaches and implementation ideas.

  1. Either use plain HTTP requests, where users configure the endpoint, or use a sophisticated LLM framework like LangChain, which helps when scaling in terms of functionality.

| | LLM access using LangChain | LLM access using HTTP |
| --- | --- | --- |
| Advantages | Scalability while adding/extending features | Little code; leaves the choice of LLM to the end user |
| LLM support | Supports a lot of LLMs, though a switch might be required | User configures an HTTP endpoint, giving a choice of self-hosted or LLM-as-a-service |
| Configuration | User configures the necessary API keys/options | Works with a user-defined HTTP endpoint; the user configures the headers |
| Extensibility | Leverages what LangChain already provides | Code solves one particular use case, so extensibility is tough; we might need to re-implement what frameworks like LangChain already do |
| Vendor agnosticism | Provider-agnostic, as it supports almost all providers | Provider-agnostic; the user configures the endpoint |
| Maintenance | Fewer changes required, as these are SDKs | Requests need to be changed whenever releases happen, etc. |

The table above is a draft - dumping my thoughts.

Based on my evaluation, using LangChain is better:

  1. It provides a sophisticated framework for working with LLMs (although it might not be required immediately, it will definitely be useful in the future).
  2. Another concern I had with LangChain was being able to use self-hosted LLM models; it seems LangChain supports that. It can be done via SelfHostedPipeline, or we can write a generic LLM model (just an HTTP wrapper) for LangChain to access - see the sketch below.
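A rough sketch of that generic HTTP wrapper, using LangChain's custom-LLM extension point (subclassing LLM and implementing _call). The request/response payload shape ({"prompt": ...} in, {"text": ...} out) is an assumption about the self-hosted endpoint, not a standard:

```python
# Sketch: a generic LLM that proxies to a user-configured HTTP endpoint so
# LangChain pipelines can use a self-hosted model. The JSON payload shape
# is assumed; a real endpoint may differ.
from typing import Any, List, Optional

import requests
from langchain_core.language_models.llms import LLM


class HTTPEndpointLLM(LLM):
    endpoint: str
    headers: dict = {}

    @property
    def _llm_type(self) -> str:
        return "http-endpoint"

    def _call(self, prompt: str, stop: Optional[List[str]] = None, **kwargs: Any) -> str:
        resp = requests.post(
            self.endpoint,
            json={"prompt": prompt, "stop": stop},  # assumed request schema
            headers=self.headers,
            timeout=60,
        )
        resp.raise_for_status()
        return resp.json()["text"]  # assumed response schema
```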

@geido
Member

geido commented Apr 24, 2024

@surapuramakhil thanks. I think it makes sense to update the description with all the new info and make sure you are covering all the technical/architectural considerations. First question that comes to mind: how do you intend to pull the right metadata from the database for the LLM to use? There is a limited context window, and you just can't pull the whole schema, due to both context and performance limitations.

@surapuramakhil
Author

@geido based on my research, LangChain already solves this:
https://python.langchain.com/docs/use_cases/sql/quickstart/#convert-question-to-sql-query.

They wrote pipelines for generating queries from text. They work with any LLM, so we can just piggyback on that. All I am planning is to have an llm_provider or llm_factory that creates the LLM based on user configuration and sends it to their pipeline.
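For illustration, a minimal sketch of what that llm_factory could look like. The config keys and the "openai"/"self-hosted" provider names are assumptions; the self-hosted branch could return the HTTP wrapper sketched in the earlier comment:

```python
# Sketch of a provider-dispatching factory; config keys are hypothetical.
from langchain_openai import ChatOpenAI


def llm_factory(config: dict):
    """Create a LangChain-compatible LLM from admin-supplied settings."""
    provider = config.get("provider", "openai")
    if provider == "openai":
        return ChatOpenAI(
            model=config.get("model", "gpt-3.5-turbo"),
            api_key=config["api_key"],
            base_url=config.get("endpoint"),  # also covers OpenAI-compatible self-hosted servers
        )
    if provider == "self-hosted":
        # e.g. return the HTTPEndpointLLM wrapper sketched in the earlier comment
        raise NotImplementedError("wire up the generic HTTP wrapper here")
    raise ValueError(f"Unsupported LLM provider: {provider}")
```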

@surapuramakhil
Author

@geido I have updated the description as you suggested.

First question that comes to mind, how do you intend to pull the right metadata from the database for the LLM to use? There is a limited context window, and you just can't pull the whole schema for both context and performance limitations.

Let's try it with LangChain and see the results.

@geido
Member

geido commented Apr 24, 2024

It looks more like a toy for now:

Has definitions for all the available tables.

This won't work for production databases that might have hundreds of tables and columns.

@geido
Member

geido commented Apr 24, 2024

I think having langchain in the repo might be a nice thing to have to enable LLM-related capabilities. However, that would be a separate SIP to illustrate how langchain could be leveraged in the repo. It looks like starting from SQL generation is hard.

@surapuramakhil
Author

surapuramakhil commented Apr 24, 2024

It looks like starting from SQL generation is hard.

Why do you think so? It's the first use case that Apache Superset needs.

@tseruga

tseruga commented Apr 25, 2024

As someone who has actually implemented this exact idea in superset for a hackathon a few months back, this is a pipe-dream at best (to be fairly blunt). Using RAG to pull relevant table metadata at prompt-time still led to unmanageable levels of LLM hallucination that only grows worse as the size of the warehouse being queried increases.

Something like this may be feasible for a user with a handful of tables, but at-scale it simply doesn't work. And a query that is 99% correct is functionally worthless if this is intended to be utilized by folks who don't have the skills necessary to parse through AI-generated SQL.

@surapuramakhil
Author

surapuramakhil commented Apr 25, 2024

Something like this may be feasible for a user with a handful of tables, but at-scale it simply doesn't work

This is a problem with language models generally, and that's exactly why the choice of LLM is given to users. If the scale is high, the best they can do is use a model with a large context window, like Gemini 1.5 Pro. That's a separate data science problem which Apache Superset doesn't need to solve; we just leverage what is available.


Using RAG to pull relevant table metadata at prompt-time still led to unmanageable levels of LLM hallucination that only grows worse as the size of the warehouse being queried increases.

This is a separate data science problem which Apache Superset doesn't need to solve; the LangChain community (quite popular in data science) is already working on it. We just leverage their work.

This might protect against hallucination (a sketch of the pattern is below): https://python.langchain.com/docs/use_cases/sql/query_checking/
Prompting/RAG strategies for working at scale: https://python.langchain.com/docs/use_cases/sql/large_db/
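As a rough sketch of the query-checking idea from the first link, assuming the prompt-based pattern from those docs (the exact wording of the checker prompt is illustrative):

```python
# Sketch: a second LLM pass that reviews generated SQL for common mistakes
# before it reaches the user, per the LangChain query-checking docs.
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

system = (
    "Double-check the user's {dialect} query for common mistakes, such as "
    "NULL handling, wrong join columns, quoting, and casting. Rewrite the "
    "query if needed; otherwise return it unchanged. Output only the SQL."
)
prompt = ChatPromptTemplate.from_messages([("system", system), ("human", "{query}")])
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
check_chain = prompt | llm | StrOutputParser()

generated_sql = "SELECT COUNT(*) FROM orders WHERE created_at > '2024-03-01'"  # placeholder
checked_sql = check_chain.invoke({"dialect": "postgresql", "query": generated_sql})
```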

As both evolve over time, the quality of the generated queries will get better and better.


a query that is 99% correct is functionally worthless if this is intended to be utilized by folks who don't have the skills necessary to parse through AI-generated SQL.

I agree with you on this; it doesn't fully solve the problem for those who lack the knowledge to understand AI-generated SQL. It's a copilot, not an autopilot.

@surapuramakhil
Author

surapuramakhil commented Apr 26, 2024

Ah, I have found this.

This is a premium feature of Preset:

https://preset.io/blog/preset-ai-assist/

@rusackas rusackas changed the title [SIP] AI/LLM query generation in SQL lab [SIP-128] AI/LLM query generation in SQL lab May 1, 2024
4 participants