Ollama Buddy: Local LLM Integration for Emacs

img/ollama-buddy-banner.jpg

Overview

A friendly Emacs interface for interacting with Ollama models. This package provides a convenient way to integrate Ollama’s local LLM capabilities directly into your Emacs workflow with little or no configuration required.

The name is just something a little bit fun; it always reminds me of the “bathroom buddy” from the film Gremlins (although hopefully this will work better than that did!)

Features

  • Minimal Setup
    • If desired, the following will get you going (of course, have Ollama running with some models loaded):
      (use-package ollama-buddy
        :load-path "path/to/ollama-buddy"
        :bind ("C-c l" . ollama-buddy-menu)
        :config (ollama-buddy-enable-monitor))
              

      OR (when on MELPA)

      (use-package ollama-buddy
        :ensure t
        :bind ("C-c l" . ollama-buddy-menu)
        :config (ollama-buddy-enable-monitor))
              
  • Interactive Command Menu
    • Quick-access menu with single-key commands (M-x ollama-buddy-menu)
    • Quickly define your own menu using defcustom, with a dynamically adapting layout
    • Quickly switch between LLM models with no configuration required
    • Works on selected text from any buffer
    • Presets easily definable
    • Real-time model availability status
  • Smart Model Management
    • Models can be assigned to individual commands
    • Intelligent model fallback
    • Real-time model availability monitoring
    • Easy model switching during sessions
  • AI Operations
    • Code refactoring with context awareness
    • Automatic git commit message generation
    • Code explanation and documentation
    • Text operations (proofreading, conciseness, dictionary lookups)
    • Custom prompt support for flexibility
  • Lightweight
    • Single package file
    • Minimal code suitable for an air-gapped system
    • No external dependencies (curl not used)
  • Robust Chat Interface
    • Dedicated chat buffer with conversation history
    • Stream-based response display
    • Save/export conversation functionality
    • Visual separators for clear conversation flow

Screenshots / Demos

Note that all the demos are in real time.

First Steps

img/ollama-buddy-screen-recording_001.gif

Swap Model

img/ollama-buddy-screen-recording_002.gif

Code Queries

Note that I have cancelled some requests, otherwise we could be waiting a while. Also note that I was using the model qwen2.5-coder:7b.

img/ollama-buddy-screen-recording_003.gif

The Menu

img/ollama-buddy-screenshot_001.jpg

What’s New

<2025-02-13>

Models can be assigned to individual commands

  • Set menu :model property to associate a command with a model
  • Introduce `ollama-buddy-fallback-model` for automatic fallback if the specified model is unavailable.
  • Improve `ollama-buddy--update-status-overlay` to indicate model substitution.
  • Expand `ollama-buddy-menu` with structured command definitions using properties for improved flexibility.
  • Add `ollama-buddy-show-model-status` to display available and used models.
  • Refactor command execution flow to ensure model selection is handled dynamically.

<2025-02-12>

  • ollama-buddy updated in preparation for MELPA submission
  • Removed the C-c single-key user keybinding from the package definition; the README now gives guidance on defining a user keybinding to activate the ollama-buddy menu
  • Added ellama comparison description
  • Activating and deactivating the ollama monitor process is now the user’s responsibility
  • Updated Screenshots / Demos

Summary of my design ethos

  • Focused Design Philosophy
    • Dedicated solely to Ollama integration (unlike general-purpose LLM packages)
    • Intentionally lightweight and minimal setup
    • Particularly suitable for air-gapped systems
    • Avoids complex backends and payload configurations
  • Interface Design Choices
    • Flexible, customizable menu through defcustom
    • Easy-to-extend command system via simple alist modifications
    • Region-based interaction model across all buffers
  • Buffer Implementation
    • Simple, editable chat buffer approach
    • Avoids complex modes or bespoke functionality
    • Leverages standard Emacs text editing capabilities
  • User Experience
    • “AI assistant” style welcome interface
    • Zero-config startup possible
    • Built-in status monitoring and model listing
    • Simple tutorial-style introduction
  • Technical Simplicity
    • REST-based Ollama integration
    • Quickly switch between small local LLMs
    • Backwards compatibility with older Emacs versions
    • Minimal dependencies
    • Straightforward configuration options

Usage

  1. Start your Ollama server locally
  2. Use M-x ollama-buddy-menu or a user-defined keybinding such as C-c l to open the menu
  3. Select your preferred model using the [m] option
  4. Select text in any buffer
  5. Choose an action from the menu:
| Key | Action             | Description                                 |
|-----|--------------------|---------------------------------------------|
| o   | Open chat buffer   | Opens the main chat interface               |
| m   | Swap model         | Switch between available Ollama models      |
| h   | Help assistant     | Display help message                        |
| l   | Send region        | Send selected text directly to model        |
| r   | Refactor code      | Get code refactoring suggestions            |
| g   | Git commit message | Generate commit message for changes         |
| c   | Describe code      | Get code explanation                        |
| d   | Dictionary Lookup  | Get dictionary definition                   |
| n   | Word synonym       | Get a synonym for the word                  |
| p   | Proofread text     | Check text for improvements                 |
| z   | Make concise       | Reduce wordiness while preserving meaning   |
| e   | Custom Prompt      | Enter bespoke prompt through the minibuffer |
| s   | Save chat          | Save the chat to a file                     |
| x   | Kill request       | Cancel current Ollama request               |
| q   | Quit               | Exit the menu                               |

AI assistant

A simple text information screen is presented on first opening the chat, or when requested through the menu system. It’s just a bit of fun, but I wanted a quick-start tutorial/assistant type of feel.

=========================  n_____n  =========================
========================= | o Y o | =========================
         ╭──────────────────────────────────────╮
         │              Welcome to               │
         │             OLLAMA BUDDY              │
         │       Your Friendly AI Assistant      │
         ╰──────────────────────────────────────╯

    Hi there!

    Models available:

      qwen-4q:latest
      qwen:latest
      llama:latest

    Quick Tips:
    - Select text and use M-x ollama-buddy-menu
    - Switch models [m], cancel [x]
    - Send from any buffer

------------------------- | @ Y @ | -------------------------

Installation

Prerequisites

  • Ollama installed and running locally
  • Emacs 24.3 or later

Manual Installation

Clone this repository:

git clone https://github.com/captainflasmr/ollama-buddy.git

init.el

Add the following to your init.el, defining your own user keybinding for ollama-buddy-menu if you like.

The monitor is enabled manually rather than started automatically; this gives you control over when monitoring begins, which can help avoid unnecessary resource usage or unexpected behavior.

(add-to-list 'load-path "path/to/ollama-buddy")
(require 'ollama-buddy)
(global-set-key (kbd "C-c l") #'ollama-buddy-menu)
(ollama-buddy-enable-monitor)

OR

(use-package ollama-buddy
  :load-path "path/to/ollama-buddy"
  :bind ("C-c l" . ollama-buddy-menu)
  :config (ollama-buddy-enable-monitor))

OR (when on MELPA)

(use-package ollama-buddy
  :ensure t
  :bind ("C-c l" . ollama-buddy-menu)
  :config (ollama-buddy-enable-monitor))

Customization

| Custom variable                        | Description                                            |
|----------------------------------------|--------------------------------------------------------|
| ollama-buddy-menu-columns              | Number of columns to display in the Ollama Buddy menu. |
| ollama-buddy-host                      | Host where the Ollama server is running.               |
| ollama-buddy-default-model             | Default Ollama model to use.                           |
| ollama-buddy-port                      | Port where the Ollama server is running.               |
| ollama-buddy-connection-check-interval | Interval in seconds to check Ollama connection status. |
| ollama-buddy-command-definitions       | Comprehensive command definitions for Ollama Buddy.    |

Emacs Init

Customize the package in your Emacs init:

Simple

Nothing else is required after the Installation above; just run ollama-buddy-menu and select a model when prompted.

Normal

Just setting the default fallback model

(setq ollama-buddy-default-model "qwen-4q:latest")

Fancy

Do you want to change the number of menu columns presented?

(setq ollama-buddy-default-model "qwen-4q:latest")
(setq ollama-buddy-menu-columns 4)

Advanced

Ollama is running somewhere!

(setq ollama-buddy-default-model "qwen-4q:latest")
(setq ollama-buddy-menu-columns 4)
(setq ollama-buddy-host "http://<somewhere>")
(setq ollama-buddy-port 11400)

Super Fiddler

Let’s get the ollama status as soon as is humanly perceptible!

(setq ollama-buddy-default-model "qwen-4q:latest")
(setq ollama-buddy-menu-columns 4)
(setq ollama-buddy-host "http://<somewhere>")
(setq ollama-buddy-port 11400)
(setq ollama-buddy-connection-check-interval 1)

Customizing the Ollama Buddy Menu System

Ollama Buddy provides a flexible menu system that can be easily customized to match your workflow. The menu is built from ollama-buddy-command-definitions, which you can modify or extend in your Emacs configuration.

Basic Structure

Each menu item is defined using a property list with these key attributes:

(command-name
 :key ?k              ; Character for menu selection
 :description "desc"  ; Menu item description
 :model "model-name"  ; Specific Ollama model (optional)
 :prompt "prompt"     ; System prompt (optional)
 :prompt-fn function  ; Dynamic prompt generator (optional)
 :action function)    ; Command implementation

Examples

Adding New Commands

You can add new commands to ollama-buddy-command-definitions in your config:

;; Add a single new command
(add-to-list 'ollama-buddy-command-definitions
               '(pirate
                 :key ?i
                 :description "R Matey!"
                 :model "mistral:latest"
                 :prompt "Translate the following as if I was a pirate:"
                 :action (lambda () (ollama-buddy--send-with-command 'pirate))))

;; Incorporate into a use-package
(use-package ollama-buddy
  :load-path "path/to/ollama-buddy"
  :bind ("C-c l" . ollama-buddy-menu)
  :config (ollama-buddy-enable-monitor)
  (add-to-list 'ollama-buddy-command-definitions
               '(pirate
                 :key ?i
                 :description "R Matey!"
                 :model "mistral:latest"
                 :prompt "Translate the following as if I was a pirate:"
                 :action (lambda () (ollama-buddy--send-with-command 'pirate))))
  :custom (ollama-buddy-default-model "llama:latest"))

;; Add multiple commands at once
(setq ollama-buddy-command-definitions
      (append ollama-buddy-command-definitions
              '((summarize
                 :key ?u
                 :description "Summarize text"
                 :model "tinyllama:latest"
                 :prompt "Provide a brief summary:"
                 :action (lambda () 
                          (ollama-buddy--send-with-command 'summarize)))
                (translate-spanish
                 :key ?t
                 :description "Translate to Spanish"
                 :model "mistral:latest"
                 :prompt "Translate this text to Spanish:"
                 :action (lambda () 
                          (ollama-buddy--send-with-command 'translate-spanish))))))

Creating a Minimal Setup

You can create a minimal configuration by defining only the commands you need:

;; Minimal setup with just essential commands
(setq ollama-buddy-command-definitions
      '((send-basic
         :key ?s
         :description "Send to Ollama"
         :model "mistral:latest"
         :action (lambda () 
                  (ollama-buddy--send-with-command 'send-basic)))
        (quick-define
         :key ?d
         :description "Define word"
         :model "tinyllama:latest"
         :prompt "Define this word:"
         :action (lambda () 
                  (ollama-buddy--send-with-command 'quick-define)))
        (quit
         :key ?q
         :description "Quit"
         :model nil
         :action (lambda () 
                  (message "Quit Ollama Shell menu.")))))

Tips for Custom Commands

  1. Choose unique keys for menu items
  2. Match models to task complexity (small models for quick tasks)
  3. Use clear, descriptive names

Command Properties Reference

| Property     | Description                         | Required |
|--------------|-------------------------------------|----------|
| :key         | Single character for menu selection | Yes      |
| :description | Menu item description               | Yes      |
| :model       | Specific Ollama model to use        | No       |
| :prompt      | Static system prompt                | No       |
| :prompt-fn   | Function to generate dynamic prompt | No       |
| :action      | Function implementing the command   | Yes      |

Remember that at least one of :prompt or :prompt-fn should be provided for commands that send text to Ollama.
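
Since :prompt-fn does not appear in the examples above, here is a sketch of how a dynamic prompt might look. It assumes :prompt-fn takes a zero-argument function returning the prompt string (an assumption based on the table above, not a documented calling convention), and the command name explain-in-context is made up for illustration:

(add-to-list 'ollama-buddy-command-definitions
             '(explain-in-context
               :key ?x
               :description "Explain with mode context"
               :model "llama:latest"
               ;; Assumption: a zero-argument function returning the
               ;; prompt string to prepend to the selected text.
               :prompt-fn (lambda ()
                            (format "Explain the following %s code:" major-mode))
               :action (lambda ()
                         (ollama-buddy--send-with-command 'explain-in-context))))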

Defining the menu through presets

This repository currently contains a presets directory with the following different menu systems. To activate one, just open the .el file and evaluate it; the menu will adapt accordingly!

default

| Key | Description | Action Description |
|-----|-------------|--------------------|
| o | Open chat buffer | Opens the chat buffer and inserts an intro message if empty. |
| v | View model status | Displays the current status of available models. |
| m | Swap model | Allows selecting and switching to a different model. |
| h | Help assistant | Displays an introduction message in the chat buffer. |
| l | Send region | Sends the selected region of text for processing. |
| r | Refactor code | Sends selected code for refactoring. |
| g | Git commit message | Generates a concise Git commit message. |
| c | Describe code | Provides a description of the selected code. |
| d | Dictionary Lookup | Retrieves dictionary definitions for a selected word. |
| n | Word synonym | Lists synonyms for a given word. |
| p | Proofread text | Proofreads the selected text. |
| z | Make concise | Reduces wordiness while preserving meaning. |
| e | Custom prompt | Prompts the user to enter a custom query for processing. |
| s | Save chat | Saves the current chat buffer to a file. |
| x | Kill request | Terminates the active processing request. |
| q | Quit | Exits the Ollama Shell menu with a message. |

academic

| Key | Description | Action Description |
|-----|-------------|--------------------|
| o | Open chat buffer | Opens the chat buffer and inserts an intro message if empty. |
| v | View model status | Displays the current status of available models. |
| m | Swap model | Allows selecting and switching to a different model. |
| h | Help assistant | Displays an introduction message in the chat buffer. |
| l | Send region | Sends the selected text region for processing. |
| l | Literature review help | Suggests related papers and research directions. |
| m | Review methodology | Reviews a research methodology and suggests improvements. |
| s | Academic style check | Reviews text for academic writing style improvements. |
| c | Citation suggestions | Identifies statements needing citations and suggests sources. |
| a | Analyze arguments | Analyzes the logical structure and strength of arguments. |
| e | Custom prompt | Prompts the user to enter a custom query for processing. |
| s | Save chat | Saves the current chat buffer to a file. |
| x | Kill request | Terminates the active processing request. |
| q | Quit | Exits the Ollama Shell menu with a message. |

developer

| Key | Description | Action Description |
|-----|-------------|--------------------|
| o | Open chat buffer | Opens the chat buffer and inserts an intro message if empty. |
| v | View model status | Displays the current status of available models. |
| m | Swap model | Allows selecting and switching to a different model. |
| h | Help assistant | Displays an introduction message in the chat buffer. |
| l | Send region | Sends the selected text region for processing. |
| e | Explain code | Explains the purpose and functionality of the selected code. |
| r | Code review | Reviews the code for potential issues, bugs, and improvements. |
| o | Optimize code | Suggests optimizations for performance and readability. |
| t | Generate tests | Generates test cases for the selected code. |
| d | Generate documentation | Produces detailed documentation for the provided code. |
| p | Suggest design patterns | Recommends design patterns applicable to the provided code. |
| c | Custom prompt | Allows the user to enter a custom query. |
| s | Save chat | Saves the current chat buffer to a file. |
| x | Kill request | Terminates the active processing request. |
| q | Quit | Exits the Ollama Shell menu with a message. |

writer

| Key | Description | Action Description |
|-----|-------------|--------------------|
| o | Open chat buffer | Opens the chat buffer and inserts an intro message if empty. |
| v | View model status | Displays the current status of available models. |
| m | Swap model | Allows selecting and switching to a different model. |
| h | Help assistant | Displays an introduction message in the chat buffer. |
| l | Send region | Sends the selected text region for processing. |
| b | Brainstorm ideas | Generates creative ideas on a given topic. |
| u | Generate outline | Creates a structured outline for the provided content. |
| s | Enhance writing style | Improves the writing style while maintaining meaning. |
| p | Detailed proofreading | Conducts comprehensive proofreading for grammar, style, and clarity. |
| f | Improve flow | Enhances text coherence and transition between ideas. |
| d | Polish dialogue | Enhances dialogue to make it more natural and engaging. |
| n | Enhance scene description | Adds vivid and sensory details to a scene description. |
| c | Analyze character | Analyzes a character’s development and motivations. |
| l | Analyze plot | Examines the structure, pacing, and coherence of the plot. |
| r | Research expansion | Suggests additional research directions and sources. |
| k | Fact checking suggestions | Identifies statements needing verification and suggests fact-checking strategies. |
| v | Convert format | Converts text to another format while maintaining content meaning. |
| w | Word choice suggestions | Recommends alternative word choices for better clarity and precision. |
| z | Summarize text | Generates a concise summary while retaining key information. |
| e | Custom writing prompt | Prompts the user to enter a custom query for processing. |
| a | Save chat | Saves the current chat buffer to a file. |
| x | Kill request | Terminates the active processing request. |
| q | Quit | Exits the Ollama Shell menu with a message. |

Model Selection and Fallback Logic in Ollama Buddy

Overview

You can associate specific commands defined in the menu with an Ollama LLM to optimize performance for different tasks. For example, if speed is a priority over accuracy, such as when retrieving synonyms, you might use a lightweight model like TinyLlama or a 1B–3B model. On the other hand, for tasks that require higher precision, like code refactoring, a more capable model such as Qwen-Coder 7B can be assigned to the “refactor” command on the buddy menu system.

Since this package enables seamless model switching through Ollama, the buddy menu can present a list of commands, each linked to an appropriate model. All Ollama interactions share the same chat buffer, ensuring that menu selections remain consistent. Additionally, the status bar on the header line and the prompt itself indicate the currently active model.

Ollama Buddy also includes a model selection mechanism with a fallback system to ensure commands execute smoothly, even if the preferred model is unavailable.

Command-Specific Models

Commands in ollama-buddy-command-definitions can specify preferred models using the :model property. This allows optimizing different commands for specific models:

(defcustom ollama-buddy-command-definitions
  '((refactor-code
     :key ?r
     :description "Refactor code"
     :model "qwen-coder:latest"
     :prompt "refactor the following code:")
    (git-commit
     :key ?g
     :description "Git commit message"
     :model "tinyllama:latest"
     :prompt "write a concise git commit message for the following:")
    (send-region
     :key ?l
     :description "Send region"
     :model "llama:latest"))
  ...)

When :model is nil, the command will use whatever model is currently set as ollama-buddy-default-model.

Fallback Chain

When executing a command, the model selection follows this fallback chain (sketched in code after the list):

  1. Command-specific model (:model property)
  2. Current model (ollama-buddy-default-model)
  3. User selection from available models
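
A minimal sketch of that chain is below. The helper my/ollama-models-available stands in for however the model list is obtained (it is hypothetical, not a function exported by ollama-buddy), and the real package performs its own availability checks:

(defun my/resolve-model (command-model)
  "Illustrative fallback chain: command model, then default, then ask."
  (let ((available (my/ollama-models-available))) ; hypothetical helper
    (cond
     ;; 1. Command-specific model, if it is actually available
     ((and command-model (member command-model available)) command-model)
     ;; 2. The configured default model
     ((member ollama-buddy-default-model available) ollama-buddy-default-model)
     ;; 3. Ask the user to pick from whatever is available
     (available (completing-read "Select model: " available))
     (t (error "No Ollama models available. Please pull some models first")))))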

Configuration Options

Setting the Fallback Model

(setq ollama-buddy-default-model "llama:latest")

User Interface Feedback

When a fallback occurs, Ollama Buddy provides clear feedback:

  • The header line shows which model is being used
  • If using a fallback model, an orange warning appears showing both the requested and actual model
  • The model status can be viewed using the “View model status” command (v key)

Example Scenarios

  1. Best Case: Requested model is available
    • Command requests “mistral:latest”
    • Model is available
    • Request proceeds with “mistral:latest”
  2. Simple Fallback: Requested model unavailable
    • Command requests “mistral:latest”
    • Model unavailable
    • Falls back to ollama-buddy-default-model
  3. Complete Fallback Chain:
    • Command requests “mistral:latest”
    • Model unavailable
    • Current model (“llama2:latest”) unavailable
    • Falls back to prompting the user to select from available models

Error Handling

  • If no models are available at all, an error is raised: “No Ollama models available. Please pull some models first”
  • Connection issues are monitored and reported in the status line
  • Active processes are killed if connection is lost during execution

Best Practices

  1. Command-Specific Models:
    • Assign models based on task requirements
    • Use smaller models for simple tasks (e.g., “tinyllama” for git commits)
    • Use more capable models for complex tasks (e.g., “mistral/qwen-coder” for code refactoring)
  2. Fallback Configuration:
    • Set ollama-buddy-default-model to a reliable, general-purpose model
  3. Model Management:
    • Use ollama-buddy-show-model-status to monitor available models
    • Keep commonly used models pulled locally
    • Watch for model availability warnings in the header line and chat buffer

Design ethos expanded / why create this package?

The Ollama Emacs package ecosystem is still emerging. Although there are some great implementations available, they tend to be LLM jacks-of-all-trades, catering to various types of LLM integrations, including, of course, the major online offerings.

Recently, I have been experimenting with a local solution using ollama. While using ollama through the terminal interface with readline naturally leans toward Emacs keybindings, there are a few limitations:

  • Copy and paste do not use Emacs keybindings the way readline navigation does; because of how key codes work in terminals, copying and pasting into Emacs would require using the mouse!
  • Searching through a terminal with something like Emacs isearch can vary depending on the terminal.
  • Workflow disruptions occur when copying and pasting between Emacs and ollama.
  • There is no easy way to save a session.
  • It is not using Emacs!

I guess you can see where this is going. The question is: how do I integrate a basic query-response mechanism to ollama into Emacs? This is where existing LLM Emacs packages come in; however, I have always found them to be geared more towards online models, with some packages offering experimental implementations of ollama integration. In my case, I often work on an air-gapped system where downloading or transferring packages is not straightforward. In such an environment, my only option for LLM interaction is ollama anyway. Given the limitations, mentioned earlier, of interacting with ollama through a terminal, why not create a dedicated ollama Emacs package that is very simple to set up, very lightweight, and leverages Emacs’s editing capabilities to provide a basic query-response interface to ollama?

I have found that setting up ollama within the current crop of LLM Emacs packages can be quite involved. I often struggle with the setup; I get there in the end, but it feels like there’s always a long list of payloads, backends, etc., to configure. But what if I just want to integrate Emacs with ollama? It has a RESTful interface, so could I create a package with minimal setup, allowing users to define a default model in their init file (or select one each time if they prefer)? It could also query the current set of loaded models through the ollama interface and provide a completing-read type of model selection, with potentially no model configuration needed!
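
To sketch that idea: Ollama’s REST API exposes GET /api/tags for listing locally pulled models, so a completing-read selector needs only a few lines. This is a rough illustration, not the package’s actual implementation, and the my/ prefixed names are made up:

(require 'url)
(require 'json)

(defun my/ollama-list-models ()
  "Return the names of locally available Ollama models via /api/tags."
  (with-current-buffer
      (url-retrieve-synchronously "http://localhost:11434/api/tags")
    (goto-char url-http-end-of-headers)
    ;; json-read yields an alist; the `models' entry is a vector of alists.
    (mapcar (lambda (model) (alist-get 'name model))
            (alist-get 'models (json-read)))))

(defun my/ollama-choose-model ()
  "Pick a model interactively, with no prior model configuration."
  (interactive)
  (completing-read "Ollama model: " (my/ollama-list-models)))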

Beyond just being lightweight and easy to configure, I also have another idea: a flexible menu system. For a while, I have been using a simple menu-based interface inspired by transient menus. However, I have chosen not to use transient because I want this package to be compatible with older Emacs versions. Additionally, I haven’t found a compelling use case for a complex transient menu, and I prefer a simple, flat top-level menu.

To achieve this, I have decided to create a flexible defcustom menu system. Initially, it will be configured for some common actions, but users can easily modify it through the Emacs customization interface by updating a simple alist.

For example, to refactor code through an LLM, a text string such as “Refactor the following code:” is usually prepended. To proofread text, “Proofread the following:” could be prepended to the body of the query. So, why not create a flexible menu where users can easily add their own commands? For instance, if someone wanted a command to uppercase some text (even though Emacs can already do this), they could simply add the following entry to the ollama-buddy-menu-items alist:

(?u . ("Upcase" 
       (lambda () (ollama-buddy--send "convert the following to uppercase:"))))

Then the menu would present an “Upcase” item on the “u” key, upcasing the selected region. You could go nuts with this, and to double down on the auto-generated menu concept, I have provided a defcustom ollama-buddy-menu-columns variable so you can flatten out your auto-generated menu as much as you like!

This is getting rambly, but another key design consideration is how prompts should be handled, and in fact, how do I go about sending text from within Emacs? Many implementations rely on a chat buffer as the single focal point, which seems natural to me, so I will follow a similar approach.

I’ve seen different ways of defining a prompt submission mechanism, some using <RET>, others a dedicated keybinding like C-c <RET>. So, how should I define my prompting mechanism? I have a feeling this could get complicated, so let’s use the KISS principle. Also, how should text be sent from within Emacs buffers? My solution: simply mark the text and send it, not just from any Emacs buffer, but also within the chat window. It may seem slightly awkward at first (especially in the chat buffer, where you will have to create your prompt and then mark it), but it provides a clear delineation of text and ensures a consistent interface across Emacs. For example, using M-h to mark an element requires minimal effort and greatly simplifies the package implementation. This approach also allows users to send requests from the *scratch* buffer if so desired!

Many current implementations create a chat buffer with modes for local keybindings and other features. I have decided not to do this; instead, I will provide a simple editable buffer (ASCII text only) where all ollama interactions will reside. Users will be able to do anything in that buffer; there will be no bespoke Ollama/LLM functionality involved. It will simply be based on a special buffer. And to save a session? Just use save-buffer to write it to a file. Emacs to the rescue again!

Regarding the minimal setup philosophy of this package, I also want to include a fun AI assistant-style experience. Nothing complicated, just a bit of logic to display welcome text, show the current ollama status, and list available models. The idea is that users should be able to jump in immediately. If they know how to install/start ollama, they can install the package without any configuration, run `M-x ollama-buddy-menu`, and open the chat. At that point, the “AI assistant” will display the current ollama status and provide a simple tutorial to help them get started.

The backend? Well, I initially decided simply to use curl to drive the ollama RESTful API, but after getting that to work I thought it might be best to completely remove that dependency, so now I am using a native network solution via make-network-process. Yes, it is a bit overkill, but it works, and ultimately gives me all the flexibility I could ever want without having to depend on an external tool.
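
As a rough sketch of what that native approach involves (hand-rolled HTTP plus a process filter for the streamed chunks; a simplified stand-in under assumed defaults, not the package’s actual implementation):

(require 'json)

(defun my/ollama-raw-generate (prompt)
  "Send PROMPT to a local Ollama over raw TCP, echoing streamed chunks."
  (let* ((body (json-encode `((model . "llama:latest")
                              (prompt . ,prompt))))
         (proc (make-network-process
                :name "ollama-sketch"
                :host "localhost"
                :service 11434
                ;; Ollama streams JSON lines; each arrives here as a chunk.
                :filter (lambda (_proc chunk) (message "%s" chunk)))))
    ;; Hand-roll the HTTP request against POST /api/generate.
    (process-send-string
     proc
     (format (concat "POST /api/generate HTTP/1.1\r\n"
                     "Host: localhost\r\n"
                     "Content-Type: application/json\r\n"
                     "Content-Length: %d\r\n\r\n%s")
             (string-bytes body) body))))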

I have other thoughts regarding the use of local LLMs versus online AI behemoths. The more I use ollama with Emacs through this package, the more I realize the potential of smaller, local LLMs. This package allows for quick switching between these models while maintaining a decent level of performance on a regular home computer. I could, for instance, load up qwen-coder for code-related queries (I have found the 7B Q4/5 versions to work particularly well) and switch to a more general model for other queries, such as llama or even deepseek-r1.

Phew! That turned into quite a ramble, maybe I should run this text through ollama-buddy for proofreading! :)

Kanban

Here is a kanban board of the features that will (hopefully) be added in due course, visually demonstrating their current status:

| TODO                                   | DOING        |
|----------------------------------------|--------------|
| Test on Windows                        | Add to MELPA |
| Create more specialized system prompts |              |

Roadmap

  • Add to MELPA
  • Test on Windows
  • Create more specialized system prompts

Contributing

Contributions are welcome! Please:

  1. Fork the repository
  2. Create a feature branch
  3. Commit your changes
  4. Open a pull request

License

MIT License

Acknowledgments

  • Ollama for making local LLM inference accessible
  • Emacs community for continuous inspiration

Issues

Report issues on the GitHub Issues page

Alternative LLM based packages

To the best of my knowledge, there are currently a few Emacs packages related to Ollama, though the ecosystem is still relatively young:

  1. llm.el (by Andrew Hyatt)
    • A more general LLM interface package that supports Ollama as one of its backends
    • GitHub: https://github.com/ahyatt/llm
    • Provides a more abstracted approach to interacting with language models
    • Supports multiple backends including Ollama, OpenAI, and others
  2. gptel (by Karthik Chikmagalur)
    • While primarily designed for ChatGPT and other online services, it has experimental Ollama support
    • GitHub: https://github.com/karthink/gptel
    • Offers a more integrated chat buffer experience
    • Has some basic Ollama integration, though it’s not the primary focus
  3. chatgpt-shell (by xenodium)
    • A mature shell-based interaction model, primarily designed for ChatGPT
    • GitHub: https://github.com/xenodium/chatgpt-shell
  4. ellama (by s-kostyaev)
    • A comprehensive Emacs package for interacting with local LLMs through Ollama
    • GitHub: https://github.com/s-kostyaev/ellama
    • Features deep org-mode integration and extensive prompt templates
    • Offers streaming responses and structured interaction patterns
    • More complex but feature-rich approach to local LLM integration

Alternative package comparison

Let’s compare ollama-buddy to the existing solutions:

  1. llm.el
    • Pros:
      • Provides a generic LLM interface
      • Supports multiple backends
      • More abstracted and potentially more extensible

    ollama-buddy, by comparison:

    • Directly focused on Ollama
    • Lightweight and Ollama-native
    • Provides a more interactive, menu-driven approach
    • Simpler to set up for Ollama specifically
  2. gptel
    • Pros:
      • Sophisticated chat buffer interface
      • Active development
      • Good overall UX

    ollama-buddy differentiates by:

    • Being purpose-built for Ollama
    • Offering a more flexible, function-oriented approach
    • Providing a quick, lightweight interaction model
    • Having a minimal, focused design
  3. chatgpt-shell
    • Pros:
      • Mature shell-based interaction model
      • Rich interaction capabilities

    ollama-buddy stands out by:

    • Being specifically designed for Ollama
    • Offering a simpler, more direct interaction model
    • Providing a quick menu-based interface
    • Having minimal dependencies
  4. ellama
    • Pros:
      • Tight integration with Emacs org-mode
      • Extensive built-in prompt templates
      • Support for streaming responses
      • Good documentation and examples

    ollama-buddy differs by:

    • Having a simpler, more streamlined setup process
    • Providing a more lightweight, menu-driven interface
    • Focusing on quick, direct interactions from any buffer
    • Having minimal dependencies and configuration requirements
