Chat with your data using RAG and vector stores with LlamaIndex and GPT

Hi!

This LlamaIndex RAG chatbot loads all documents from a directory you specify into a vector store index and then queries that index with the user's input. Using LlamaIndex's pprint_response function, the chatbot displays not only the answer but also the retrieved data source(s) together with their similarity scores.

The following libraries are needed:

  • os (for setting up the OpenAI API key)

  • llama_index
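For orientation, here is a minimal sketch of the pattern described above, not the repository's exact code. It assumes the documents live in a local ./data folder, that the OpenAI API key is supplied via the OPENAI_API_KEY environment variable, and that the llama_index.core import layout of recent releases is in use (older versions expose the same classes from the top-level llama_index package).

```python
import os

from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.core.response.pprint_utils import pprint_response

# Assumption: the OpenAI API key is provided via an environment variable.
os.environ.setdefault("OPENAI_API_KEY", "sk-...")

# Load every document from the specified directory ("./data" is illustrative).
documents = SimpleDirectoryReader("./data").load_data()

# Build a vector store index over the loaded documents.
index = VectorStoreIndex.from_documents(documents)

# Query the index with the user's input.
query_engine = index.as_query_engine()
response = query_engine.query("What are the main topics covered in these documents?")

# Print the answer together with the retrieved source node(s) and their similarity scores.
pprint_response(response, show_source=True)
```

The same flow works with any directory of text-like files that SimpleDirectoryReader can parse; only the path and the query string change.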

As always, the code is thoroughly commented.

Have fun!
