This repository contains experiments with LLMs.
IMPORTANT: Each `python` command can be replaced with `uv run python` if you are using `uv`. The `pyproject.toml` file is already configured to create a virtual environment with the correct dependencies.
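For example, both of the following run the same script:

```bash
python src/artificial_cot.py
# or, with uv:
uv run python src/artificial_cot.py
```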
- An `ollama` server running somewhere.
Copy the `.env.example` file to `.env` and set the variables to your liking.
Be sure you can reach the ollama server from the machine running the scripts.
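For illustration, a `.env` might look like the snippet below. The variable names here are guesses, not the repository's actual ones, so rely on what is listed in `.env.example`:

```
# Hypothetical variable names -- the real ones are in .env.example
OLLAMA_HOST=http://localhost:11434
OLLAMA_MODEL=llama3
```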
NOTE: This step is not required if you are using `uv`.

```bash
pip install -r requirements.txt
```
This is a simple implementation of an artificial COT method.
The idea is to have an assistant that can generate a Chain of Thought (COT) for a given problem and use it to solve the problem in a second answer.
The approach can use any LLM to generate the COT and the second answer, even different models for each step.
```bash
python src/artificial_cot.py
```
To modify the prompt, you can edit the `prompt` variable at the top of the script.
The `artificial_cot.py` file instantiates two `OllamaConnector` instances, one for the COT and one for the response.
The system prompts are overridden to customize the behavior of the COT and the response.
The script will use the model specified in the `.env` file as the model for the COT.
You can use `cot_ollama.set_model()` and `response_ollama.set_model()` to change the model for the COT and the response.
You can also change the system prompts to customize the behavior of the COT and the response.
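To make the two-step flow concrete, here is a rough sketch. Only `OllamaConnector` and `set_model()` come from this repo; the import path, the model names, and the `set_system_prompt()`/`generate()` calls are assumptions, so check the actual source for the real API:

```python
# Minimal sketch of the artificial COT flow -- NOT the actual script.
from src.ollama_connector import OllamaConnector  # hypothetical import path

prompt = "A train leaves at 9:00 and travels 120 km in 1.5 hours. What is its average speed?"  # example problem

cot_ollama = OllamaConnector()
response_ollama = OllamaConnector()

# Each step can use its own model (names here are examples).
cot_ollama.set_model("llama3")
response_ollama.set_model("mistral")

# Step 1: produce only the reasoning, not the answer.
cot_ollama.set_system_prompt("Reason step by step about the problem. Do not answer it.")  # assumed method
cot = cot_ollama.generate(prompt)  # assumed method

# Step 2: answer the problem, using the generated COT as extra context.
response_ollama.set_system_prompt("Use the provided reasoning to answer the problem.")
answer = response_ollama.generate(f"{prompt}\n\nReasoning:\n{cot}")
print(answer)
```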
This experiment aims to create a self-conversational AI.
The idea is to have an AI that can converse with itself or with another LLM model.
```bash
python src/self_conversational_ai.py
```
The `self_conversational_ai.py` file instantiates two `OllamaConnector` instances with the same system prompt and model.
The script will then enter a loop where it will:
- Generate an initial greeting to kickstart the conversation.
- Feed the greeting to the other LLM instance.
- Feed the response to the first LLM instance.
- Repeat the process until the conversation ends (CTRL+C).
The script will override the system prompt using the `self_system_prompt` variable at the top of the script.
You can change the `greeting` variable at the top of the script to customize the initial greeting.
You can also change the model for each instance with `first_ollama.set_model()` and `second_ollama.set_model()`.
Same goes for the system prompts.
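As a rough illustration of the loop, with the same caveats as the sketch above (`set_system_prompt()` and `generate()` are assumed method names, and the prompts here are placeholders):

```python
# Minimal sketch of the self-conversation loop -- NOT the actual script.
from src.ollama_connector import OllamaConnector  # hypothetical import path

self_system_prompt = "You are chatting with another AI. Keep the conversation going."  # placeholder
greeting = "Hello! What's on your mind today?"  # placeholder

first_ollama = OllamaConnector()
second_ollama = OllamaConnector()
for connector in (first_ollama, second_ollama):
    connector.set_system_prompt(self_system_prompt)  # assumed method

message = first_ollama.generate(greeting)  # initial greeting kickstarts the conversation
try:
    while True:  # runs until CTRL+C
        message = second_ollama.generate(message)  # feed to the other instance
        print("second:", message)
        message = first_ollama.generate(message)   # feed the reply back
        print("first:", message)
except KeyboardInterrupt:
    print("\nConversation ended.")
```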