mdllama


A CLI tool that lets you chat with Ollama and OpenAI (or compatible) models right from your terminal, with built-in Markdown rendering and web-search functionality.

Features

  • Chat with Ollama and OpenAI (or compatible) models from the terminal
  • Built-in Markdown rendering
  • Web-search functionality
  • Extremely simple installation and removal (see below)

Screenshots

Chat Interface

(screenshot: interactive chat session with Markdown rendering)

Help

(screenshot: mdllama help output)

Interactive Commands

When using mdllama run for interactive chat, you have access to special in-session commands.

See the mdllama(1) manual page for details:

man 1 mdllama

OpenAI and Provider Support

Supported Providers

  • Ollama: Local models running on your machine
  • OpenAI: Official OpenAI API (GPT-3.5, GPT-4, etc.)
  • OpenAI-compatible: Any API that follows OpenAI's format (I like Groq, so give it a try: https://groq.com)

Setup Instructions

For Ollama (Default)

mdllama setup
# Or specify explicitly
mdllama setup --provider ollama
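
Ollama mode talks to a local Ollama server, so make sure one is running and a model has been pulled before chatting. A minimal sketch, assuming Ollama itself is already installed (https://ollama.com) and using llama3.2 purely as an example model tag:

ollama serve &       # start the local server if it is not already running
ollama pull llama3.2 # pull an example model; any Ollama model tag works
mdllama setup        # point mdllama at the local server (default provider)
mdllama run          # start an interactive chat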

For OpenAI

mdllama setup --provider openai
# Will prompt for your OpenAI API key

For OpenAI-Compatible APIs

mdllama setup --provider openai --openai-api-base https://ai.hackclub.com
# Then provide your API key when prompted

Usage Examples

# Use with OpenAI
mdllama chat --provider openai "Explain quantum computing"

# Use with specific model and provider
mdllama run --provider openai --model gpt-4

# Interactive session with streaming
mdllama run --provider openai --stream=true --render-markdown
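
Putting the pieces together, an end-to-end session against an OpenAI-compatible endpoint could look like the sketch below. The Groq base URL and model name are assumptions taken from Groq's public documentation, not something mdllama ships with:

# one-time setup; the base URL and API key come from your provider
mdllama setup --provider openai --openai-api-base https://api.groq.com/openai/v1

# then chat with a model hosted there (model names are provider-specific)
mdllama run --provider openai --model llama-3.1-8b-instant --render-markdown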

Live Demo

Configure the demo endpoint used by mdllama by running the setup flow and entering the API credentials below when prompted.

mdllama setup -p openai --openai-api-base https://ai.qincai.xyz

When prompted, provide the following values:

  • API key: sk-proxy-7b8c9d0e1f2a3b4c5d6e7f8a9b0c1d2e

After setup you can run the CLI as usual, for example:

mdllama run -p openai

Note

Try asking the model to give you some markdown-formatted text, or test the web search features:

  • Give me a markdown-formatted text about the history of AI.
  • search:Python 3.13 (web search)
  • site:python.org (fetch website content)
  • websearch:What are the latest Python features? (AI-powered search)

So, try it out and see how it works!
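
For instance, a demo session could look roughly like the sketch below. The prompt marker and reply summaries are illustrative; only the search:/site:/websearch: prefixes come from the list above:

mdllama run -p openai
> Give me a markdown-formatted text about the history of AI.
  (reply rendered with headings, bold text, and lists)
> search:Python 3.13
  (web-search results summarised in the reply)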

Installation

Install using a package manager (recommended, supported method)

Debian/Ubuntu Installation

  1. Add the PPA to your sources list:

    echo 'deb [trusted=yes] https://packages.qincai.xyz/debian stable main' | sudo tee /etc/apt/sources.list.d/qincai-ppa.list
    sudo apt update
  2. Install mdllama:

    sudo apt install python3-mdllama
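
To confirm the installation, the standard Debian tooling is enough (nothing mdllama-specific here):

dpkg -s python3-mdllama # show the installed package status and version
command -v mdllama      # confirm the executable is on your PATH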

Fedora Installation

  1. Download the latest RPM from: https://packages.qincai.xyz/fedora/

    Or, to install directly:

    sudo dnf install https://packages.qincai.xyz/fedora/mdllama-<version>.noarch.rpm

    Replace <version> with the latest version number.

  2. (Optional, highly recommended) To enable as a repository for updates, create /etc/yum.repos.d/qincai-ppa.repo:

    [qincai-ppa]
    name=Raymont's Personal RPMs
    baseurl=https://packages.qincai.xyz/fedora/
    enabled=1
    metadata_expire=0
    gpgcheck=0

    Then install with:

    sudo dnf install mdllama

  3. Install the ollama library from pip:

    pip install ollama

    You can also install it globally with:

    sudo pip install ollama
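
To check that both the package and the library are in place, the standard RPM and Python tooling is enough:

rpm -q mdllama                                 # confirm the RPM is installed
python3 -c "import ollama" && echo "ollama OK" # confirm the library imports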

Note

The ollama Python library is not bundled with the RPM, since no system package (python3-ollama) is available in the Fedora repositories. The RPM therefore includes a post-installation script that installs the library automatically via pip; if that script fails, install it manually as shown in step 3, since mdllama needs it to talk to Ollama models.


PyPI Installation (Cross-Platform)

Install via pip (recommended for Windows/macOS and Python virtual environments):

pip install mdllama
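
A virtual environment keeps mdllama and its dependencies isolated from your system Python. A minimal sketch using the standard venv module (the environment path is just an example):

python3 -m venv ~/.venvs/mdllama     # create an isolated environment
source ~/.venvs/mdllama/bin/activate # activate it (on Windows: Scripts\activate)
pip install mdllama                  # install inside the environment
mdllama setup                        # then configure a provider as shown above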

Traditional Bash Script Installation (Linux)

Warning

This method of installation and removal is deprecated and should be avoided. Please use the pip method or the DEB/RPM packages instead.

To install mdllama using the traditional bash script, run:

bash <(curl -fsSL https://raw.githubusercontent.com/QinCai-rui/mdllama/refs/heads/main/install.sh)

To uninstall mdllama, run:

bash <(curl -fsSL https://raw.githubusercontent.com/QinCai-rui/mdllama/refs/heads/main/uninstall.sh)

License

This project is licensed under the GNU General Public License v3.0. See the LICENSE file for details.

