Experiments with LLMs

This repository contains experiments with LLMs.

IMPORTANT: If you are using uv, each python command below can be replaced with uv run python. The pyproject.toml file is already configured to create a virtual environment with the correct dependencies.
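For example, to run the first experiment below through uv:

uv run python src/artificial_cot.py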

Requirements

  • An Ollama server running somewhere.

Setup

Env file

Copy the .env.example file to .env and set the variables to your liking. Be sure you can reach the Ollama server from the machine running the scripts.
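As a sketch, a .env might look like the following. The variable names here are placeholders, not necessarily the repository's actual ones; check .env.example for those.

# Placeholder variable names; see .env.example for the real ones
OLLAMA_URL=http://localhost:11434
OLLAMA_MODEL=llama3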

Install dependencies

NOTE: This step is not required if you are using uv.

pip install -r requirements.txt

Experiments List

Artificial CoT

This is a simple implementation of an artificial Chain of Thought (CoT) method.

The idea is to have an assistant that generates a chain of thought for a given problem and then uses it to solve the problem in a second answer.

The approach can use any LLM to generate the CoT and the final answer, even a different model for each step.

How to use

python src/artificial_cot.py

To modify the prompt, you can edit the prompt variable at the top of the script.

How it works

The artificial_cot.py file instantiates two OllamaConnector instances, one for the CoT and one for the final response.

The system prompts are overridden to customize the behavior of each instance.

The script uses the model specified in the .env file for the CoT.
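In outline, the flow looks roughly like the sketch below. The import path, the ask() and set_system_prompt() methods, and the prompts are assumptions made for illustration (only set_model() is documented in this README); see src/artificial_cot.py for the actual code.

# Sketch of the two-step artificial CoT flow; method names other
# than set_model() are assumed, not the connector's real API.
from ollama_connector import OllamaConnector  # hypothetical import path

cot_ollama = OllamaConnector()       # generates the chain of thought
response_ollama = OllamaConnector()  # produces the final answer

# Override the system prompts to specialize each instance.
cot_ollama.set_system_prompt(
    "Think step by step about the problem. Output only your reasoning."
)
response_ollama.set_system_prompt(
    "Use the provided reasoning to give a concise final answer."
)

prompt = "What is 17 * 24?"

# Step 1: generate the chain of thought for the problem.
cot = cot_ollama.ask(prompt)

# Step 2: answer the original problem with the CoT as context.
answer = response_ollama.ask(f"Problem: {prompt}\nReasoning: {cot}\nAnswer:")
print(answer)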

Customizing the script

You can use cot_ollama.set_model() and response_ollama.set_model() to change the models used for the CoT and the response.

You can also change the system prompts to customize the behavior of the CoT and the response.
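For example (the model names are placeholders):

cot_ollama.set_model("qwen2.5:14b")     # model used to generate the CoT
response_ollama.set_model("llama3:8b")  # model used for the final answer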

Self Conversational AI

This experiment aims to create a self-conversational AI.

The idea is to have an AI that can converse with itself or with another LLM.

How to use

python src/self_conversational_ai.py

How it works

The self_conversational_ai.py file instantiates two OllamaConnector instances with the same system prompt and model.

The script will then enter a loop where it will:

  1. Generate an initial greeting to kickstart the conversation.
  2. Feed the greeting to the other LLM instance.
  3. Feed the response back to the first LLM instance.
  4. Repeat until the conversation is interrupted (CTRL+C).

The script will override the system prompt using the self_system_prompt variable at the top of the script.
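Put together, the loop looks roughly like this sketch. As above, the import path and the ask()/set_system_prompt() methods are assumptions (only set_model() is documented here), and the prompt strings are illustrative:

# Sketch of the self-conversation loop; method names other than
# set_model() are assumed for illustration.
from ollama_connector import OllamaConnector  # hypothetical import path

self_system_prompt = "You are chatting with another AI. Keep the conversation going."
greeting = "Hello! What is on your mind today?"

first_ollama = OllamaConnector()
second_ollama = OllamaConnector()
for instance in (first_ollama, second_ollama):
    instance.set_system_prompt(self_system_prompt)

message = greeting  # the initial greeting kickstarts the conversation
try:
    while True:
        reply = second_ollama.ask(message)  # feed to the other instance
        print("B:", reply)
        message = first_ollama.ask(reply)   # feed the response back
        print("A:", message)
except KeyboardInterrupt:
    pass  # CTRL+C ends the conversation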

Customizing the script

You can change the greeting variable at the top of the script to customize the initial greeting.

You can also change the models with first_ollama.set_model() and second_ollama.set_model().

The same goes for the system prompts.
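For example, to have two different models talk to each other (the model names are placeholders):

first_ollama.set_model("llama3:8b")
second_ollama.set_model("mistral:7b")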
