LlamaTerm

LlamaTerm is a simple CLI utility that lets you use local LLM models easily, with some additional convenience features.

⚠️ Currently this project supports models that use the ChatML prompt format or something similar, for example Gemma-2 or Phi-3 GGUFs.
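
For reference, a ChatML-style prompt wraps each conversation turn in <|im_start|> and <|im_end|> markers. This is the general ChatML convention, not a LlamaTerm-specific format, and the exact tokens can differ for ChatML-like models:

    <|im_start|>user
    Can you explain what a GGUF file is?<|im_end|>
    <|im_start|>assistant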

Preview

Basic usage: (demo recording in the repository)

Injecting file content: (demo recording in the repository)

Features

  • Give local files to the model using square brackets
    User: Can you explain the code in [helloworld.c] please?
  • More coming soon

Setup

You can set up LlamaTerm as follows:

  1. Rename example-<model_name>.env to .env
  2. Edit the .env so that MODEL_PATH points to your model file (you may also need to edit EOS and PREFIX_TEMPLATE if it's a non-standard model); see the sketch after this list
  3. If you need syntax highlighting for code and markdown, set REAL_TIME=0 in the .env. Note that you will lose real-time output generation.
  4. Install the Python dependencies with pip install -r requirements.txt
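
A minimal sketch of what the resulting .env might contain. Only the field names MODEL_PATH, PREFIX_TEMPLATE, EOS and REAL_TIME come from this README; every value below is an assumed placeholder, so check the bundled example-*.env files for the real ones:

    MODEL_PATH=/path/to/model.gguf    # assumed placeholder: path to your local GGUF model
    PREFIX_TEMPLATE=...               # ChatML-like prompt prefix (placeholder)
    EOS=<|im_end|>                    # assumed ChatML end-of-turn token; non-standard models differ
    REAL_TIME=1                       # 0 enables syntax highlighting but disables streamed output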

Run

Run LlamaTerm by adding the project directory to the PATH and then running llamaterm.

Alternatively you can just run ./llamaterm from the project directory.
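
For example (the clone location is an assumption; use wherever you checked out the repository):

    # Make llamaterm available everywhere by extending PATH
    export PATH="$PATH:$HOME/LlamaTerm"
    llamaterm

    # Or run it directly from the project directory
    ./llamaterm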

Models supported out of the box

For models that ship with an example-*.env file (for example Gemma-2 or Phi-3), you will just need to rename the corresponding example-*.env file to .env and set the MODEL_PATH field in the .env.
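
Concretely, something like this (the file name is illustrative; use whichever example-*.env matches your model):

    cp example-gemma2.env .env        # hypothetical example file name
    # then edit MODEL_PATH in .env to point at your GGUF file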

All other models with a prompt template similar to ChatML are also supported, but you will need to customize fields such as PREFIX_TEMPLATE and EOS in the .env.