The application showcases the integration of GitHub repositories or documentation with LLM-powered assistance using NeMo Guardrails. By combining these technologies, the application provides advanced safety features and effective mitigations, enhancing the overall security and reliability of the chatbot system.
*Screenshots: the chatbot without Guardrails vs. with Guardrails.*
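Under the hood, the rails configuration in the `nemo/` folder is loaded and every user message is routed through it before and after the LLM call. The snippet below is a minimal sketch of that pattern using the `nemoguardrails` Python API; the example question is a placeholder and the code is illustrative, not taken from this repository.

```python
# Minimal sketch: wrap an LLM with NeMo Guardrails loaded from a config folder.
# Assumes the rails configuration lives in ./nemo, as in this project's layout.
from nemoguardrails import LLMRails, RailsConfig

config = RailsConfig.from_path("./nemo")  # reads config.yml and the rail definitions
rails = LLMRails(config)                  # builds the guarded LLM pipeline

# The rails check the conversation before and after the underlying LLM call.
response = rails.generate(messages=[
    {"role": "user", "content": "How do I create the vectorstore?"}
])
print(response["content"])
```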
- Clone the repository

```
git clone https://github.com/SSK-14/chatbot-guardrails.git
```
- Create and activate a virtual environment

```
python3 -m venv env
source env/bin/activate
```
- Install the required libraries

```
pip3 install -r requirements.txt
```
- Get a Gemini API key, OpenAI API key, or Groq API key, or use local models via Ollama.
- Set up a vector database on Qdrant Cloud.
Make sure you replace each key correctly:

```
# You can use your preferred models.
MODEL_API_KEY = "Your OpenAI/Gemini/Groq API Key"
QDRANT_URL = "Your Qdrant cloud cluster URL"
# If you are using Qdrant Cloud
QDRANT_API_KEY = "Your Qdrant API Key"
```
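For reference, the Qdrant values above are what the vectorstore needs to reach your cluster. The snippet below is a small sketch of how such a connection is typically made with the `qdrant-client` library; the URL and key are placeholders, and this is not code copied from `vectorstore.py`.

```python
# Sketch: connect to a Qdrant Cloud cluster with the values configured above.
# The URL and API key below are placeholders for your own cluster credentials.
from qdrant_client import QdrantClient

client = QdrantClient(
    url="https://your-cluster-id.cloud.qdrant.io",  # QDRANT_URL
    api_key="your-qdrant-api-key",                  # QDRANT_API_KEY (Qdrant Cloud only)
)

# Quick check that the connection works: list the existing collections.
print(client.get_collections())
```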
- Update the constants in `vectorstore.py` and `nemo/config.py`.
- Change `GITHUB_URL` and `BRANCH` to your preferred GitHub repo, for example as sketched below.
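As an illustration only (the actual constants live in `vectorstore.py`), the repo settings might look like this; the URL and branch values are placeholders:

```python
# Hypothetical example values for the constants in vectorstore.py.
GITHUB_URL = "https://github.com/SSK-14/chatbot-guardrails"  # repo to index
BRANCH = "main"                                              # branch to pull docs from
```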
- Run the commands below to build the vectorstore and launch the Gradio UI

```
python vectorstore.py
gradio app.py
```
- Update the `nemo/config.yml` file with your models, export your API key, and start the Guardrails server (a sketch of querying it follows below):

```
export OPENAI_API_KEY=sk...
nemoguardrails server --config nemo
```
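Once the server is running, it can be queried over HTTP. The snippet below is a rough sketch using the `requests` library; the port, `config_id` value, and question are assumptions (the `config_id` normally matches the name of the served config folder), not details confirmed by this repository.

```python
# Sketch: send a chat message to a running NeMo Guardrails server.
# Assumes the default host/port and that the served config is identified as "nemo".
import requests

payload = {
    "config_id": "nemo",  # assumption: id of the config folder passed to --config
    "messages": [{"role": "user", "content": "What does this repository do?"}],
}

resp = requests.post("http://localhost:8000/v1/chat/completions", json=payload)
print(resp.json())  # the guarded assistant reply
```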
```
chatbot-guardrails/
│
├── nemo/              // Contains all files for Guardrails
├── app.py             // Main file to run for Gradio UI
├── vectorstore.py     // Run this to create the vectorstore
├── README.md
└── requirements.txt
```
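To make the layout concrete, the sketch below shows one way a Gradio chat UI can sit on top of the guarded model; it is an illustrative pattern under the assumptions noted in the comments, not the actual contents of `app.py`.

```python
# Illustrative sketch only: a Gradio chat UI wrapping NeMo Guardrails.
# Assumes a rails config in ./nemo; this is not the repository's app.py.
import gradio as gr
from nemoguardrails import LLMRails, RailsConfig

rails = LLMRails(RailsConfig.from_path("./nemo"))

def chat(message, history):
    # Route the user message through the guardrails and return the reply text.
    result = rails.generate(messages=[{"role": "user", "content": message}])
    return result["content"]

gr.ChatInterface(chat).launch()
```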
Contributions to this project are welcome! If you find any issues or have suggestions for improvement, please open an issue or submit a pull request on the project's GitHub repository.
This project is licensed under the MIT License. Feel free to use, modify, and distribute the code as per the terms of the license.