A friendly Emacs interface for interacting with Ollama models. This package provides a convenient way to integrate Ollama’s local LLM capabilities directly into your Emacs workflow with little or no configuration required.
The name is just a bit of fun; it always reminds me of the “bathroom buddy” from the film Gremlins (although hopefully this will work better than that one did!)
- Minimal Setup
- If desired, the following will get you going! (of course, have ollama running with some models loaded)

(use-package ollama-buddy
  :load-path "path/to/ollama-buddy"
  :bind ("C-c l" . ollama-buddy-menu)
  :config (ollama-buddy-enable-monitor))
OR (when on MELPA)
(use-package ollama-buddy
  :ensure t
  :bind ("C-c l" . ollama-buddy-menu)
  :config (ollama-buddy-enable-monitor))
- Interactive Command Menu
- Quick-access menu with single-key commands (M-x ollama-buddy-menu)
- Quickly define your own menu using defcustom with a dynamic, adaptable menu
- Quickly switch between LLM models with no configuration required
- Works on selected text from any buffer
- Presets easily definable
- Real-time model availability status
- Smart Model Management
- Models can be assigned to individual commands
- Intelligent model fallback
- Real-time model availability monitoring
- Easy model switching during sessions
- AI Operations
- Code refactoring with context awareness
- Automatic git commit message generation
- Code explanation and documentation
- Text operations (proofreading, conciseness, dictionary lookups)
- Custom prompt support for flexibility
- Lightweight
- Single package file
- Minimal code suitable for an air-gapped system
- No external dependencies (curl not used)
- Robust Chat Interface
- Dedicated chat buffer with conversation history
- Stream-based response display
- Save/export conversation functionality
- Visual separators for clear conversation flow
Note that all the demos are in real time.
First Steps
Swap Model
Code Queries
Note that I have cancelled some requests (otherwise we could be waiting a while); I was using the qwen2.5-coder:7b model.
The Menu
Models can be assigned to individual commands
- Set menu :model property to associate a command with a model
- Introduce `ollama-buddy-fallback-model` for automatic fallback if the specified model is unavailable.
- Improve `ollama-buddy–update-status-overlay` to indicate model substitution.
- Expand `ollama-buddy-menu` with structured command definitions using properties for improved flexibility.
- Add `ollama-buddy-show-model-status` to display available and used models.
- Refactor command execution flow to ensure model selection is handled dynamically.
ollama-buddy updated in preparation for MELPA submission:
- Removed the C-c single-key user keybinding from the package definition; the README now gives guidance on defining a user keybinding to activate the ollama-buddy menu
- Added an ellama comparison description
- Activating and deactivating the ollama monitor process is now the user’s responsibility
- Updated screenshots / demos
- Focused Design Philosophy
- Dedicated solely to Ollama integration (unlike general-purpose LLM packages)
- Intentionally lightweight and minimal setup
- Particularly suitable for air-gapped systems
- Avoids complex backends and payload configurations
- Interface Design Choices
- Flexible, customizable menu through defcustom
- Easy-to-extend command system via simple alist modifications
- Region-based interaction model across all buffers
- Buffer Implementation
- Simple, editable chat buffer approach
- Avoids complex modes or bespoke functionality
- Leverages standard Emacs text editing capabilities
- User Experience
- “AI assistant” style welcome interface
- Zero-config startup possible
- Built-in status monitoring and model listing
- Simple tutorial-style introduction
- Technical Simplicity
- REST-based Ollama API integration
- Quickly switch between small local LLMs
- Backwards compatibility with older Emacs versions
- Minimal dependencies
- Straightforward configuration options
- Start your Ollama server locally
- Use M-x ollama-buddy-menu or a user-defined keybinding such as C-c l to open the menu
- Select your preferred model using the [m] option
- Select text in any buffer
- Choose an action from the menu:
Key | Action | Description |
---|---|---|
o | Open chat buffer | Opens the main chat interface |
m | Swap model | Switch between available Ollama models |
h | Help assistant | Display help message |
l | Send region | Send selected text directly to model |
r | Refactor code | Get code refactoring suggestions |
g | Git commit message | Generate commit message for changes |
c | Describe code | Get code explanation |
d | Dictionary Lookup | Get dictionary definition |
n | Word synonym | Get a synonym for the word |
p | Proofread text | Check text for improvements |
z | Make concise | Reduce wordiness while preserving meaning |
e | Custom Prompt | Enter bespoke prompt through the minibuffer |
s | Save chat | Save the chat to a file |
x | Kill request | Cancel current Ollama request |
q | Quit | Exit the menu |
A simple text information screen is presented when the chat is first opened, or when requested through the menu system. It’s just a bit of fun, but I wanted a quick-start tutorial/assistant type of feel.
=========================    n_____n    =========================
=========================   | o Y o |   =========================

 ╭──────────────────────────────────────╮
 │              Welcome to              │
 │             OLLAMA BUDDY             │
 │      Your Friendly AI Assistant      │
 ╰──────────────────────────────────────╯

Hi there!

Models available:
  qwen-4q:latest
  qwen:latest
  llama:latest

Quick Tips:
- Select text and use M-x ollama-buddy-menu
- Switch models [m], cancel [x]
- Send from any buffer

-------------------------   | @ Y @ |   -------------------------
- Ollama installed and running locally
- Emacs 24.3 or later
Clone this repository:
git clone https://github.com/captainflasmr/ollama-buddy.git
Then add it to your load path, with the option to define your own user keybinding for ollama-buddy-menu.
Enabling the monitor is also left to the user rather than starting automatically; this gives you control over when to initiate monitoring, which can help avoid unnecessary resource usage or unexpected behavior.
(add-to-list 'load-path "path/to/ollama-buddy")
(require 'ollama-buddy)
(global-set-key (kbd "C-c l") #'ollama-buddy-menu)
(ollama-buddy-enable-monitor)
OR
(use-package ollama-buddy
:load-path "path/to/ollama-buddy"
:bind ("C-c l" . ollama-buddy-menu)
:config (ollama-buddy-enable-monitor))
(use-package ollama-buddy
:ensure t
:bind ("C-c l" . ollama-buddy-menu)
:config (ollama-buddy-enable-monitor))
Custom variable | Description |
---|---|
ollama-buddy-menu-columns | Number of columns to display in the Ollama Buddy menu. |
ollama-buddy-host | Host where Ollama server is running. |
ollama-buddy-default-model | Default Ollama model to use. |
ollama-buddy-port | Port where Ollama server is running. |
ollama-buddy-connection-check-interval | Interval in seconds to check Ollama connection status. |
ollama-buddy-command-definitions | Comprehensive command definitions for Ollama Buddy. |
Customize the package in your Emacs init:
Nothing else is required after the installation above; just run ollama-buddy-menu and select a model when prompted.
To set just the default (fallback) model:
(setq ollama-buddy-default-model "qwen-4q:latest")
Do you want to change the number of menu columns presented?
(setq ollama-buddy-default-model "qwen-4q:latest")
(setq ollama-buddy-menu-columns 4)
Ollama is running somewhere!
(setq ollama-buddy-default-model "qwen-4q:latest")
(setq ollama-buddy-menu-columns 4)
(setq ollama-buddy-host "http://<somewhere>")
(setq ollama-buddy-port 11400)
Let’s get the Ollama status as soon as is humanly perceptible!
(setq ollama-buddy-default-model "qwen-4q:latest")
(setq ollama-buddy-menu-columns 4)
(setq ollama-buddy-host "http://<somewhere>")
(setq ollama-buddy-port 11400)
(setq ollama-buddy-connection-check-interval 1)
Ollama Buddy provides a flexible menu system that can be easily customized to match your workflow. The menu is built from ollama-buddy-command-definitions
, which you can modify or extend in your Emacs configuration.
Each menu item is defined using a property list with these key attributes:
(command-name
:key ?k ; Character for menu selection
:description "desc" ; Menu item description
:model "model-name" ; Specific Ollama model (optional)
:prompt "prompt" ; System prompt (optional)
:prompt-fn function ; Dynamic prompt generator (optional)
:action function) ; Command implementation
You can add new commands to ollama-buddy-command-definitions
in your config:
;; Add a single new command
(add-to-list 'ollama-buddy-command-definitions
'(pirate
:key ?i
:description "R Matey!"
:model "mistral:latest"
:prompt "Translate the following as if I was a pirate:"
:action (lambda () (ollama-buddy--send-with-command 'pirate))))
;; Incorporate into a use-package
(use-package ollama-buddy
:load-path "path/to/ollama-buddy"
:bind ("C-c l" . ollama-buddy-menu)
:config (ollama-buddy-enable-monitor)
(add-to-list 'ollama-buddy-command-definitions
'(pirate
:key ?i
:description "R Matey!"
:model "mistral:latest"
:prompt "Translate the following as if I was a pirate:"
:action (lambda () (ollama-buddy--send-with-command 'pirate))))
  :custom (ollama-buddy-default-model "llama:latest"))
;; Add multiple commands at once
(setq ollama-buddy-command-definitions
(append ollama-buddy-command-definitions
'((summarize
:key ?u
:description "Summarize text"
:model "tinyllama:latest"
:prompt "Provide a brief summary:"
:action (lambda ()
(ollama-buddy--send-with-command 'summarize)))
(translate-spanish
:key ?t
:description "Translate to Spanish"
:model "mistral:latest"
:prompt "Translate this text to Spanish:"
:action (lambda ()
(ollama-buddy--send-with-command 'translate-spanish))))))
You can create a minimal configuration by defining only the commands you need:
;; Minimal setup with just essential commands
(setq ollama-buddy-command-definitions
'((send-basic
:key ?s
:description "Send to Ollama"
:model "mistral:latest"
:action (lambda ()
(ollama-buddy--send-with-command 'send-basic)))
(quick-define
:key ?d
:description "Define word"
:model "tinyllama:latest"
:prompt "Define this word:"
:action (lambda ()
(ollama-buddy--send-with-command 'quick-define)))
(quit
:key ?q
:description "Quit"
:model nil
:action (lambda ()
(message "Quit Ollama Shell menu.")))))
- Choose unique keys for menu items
- Match models to task complexity (small models for quick tasks)
- Use clear, descriptive names
Property | Description | Required |
---|---|---|
:key | Single character for menu selection | Yes |
:description | Menu item description | Yes |
:model | Specific Ollama model to use | No |
:prompt | Static system prompt | No |
:prompt-fn | Function to generate dynamic prompt | No |
:action | Function implementing the command | Yes |
Remember that at least one of :prompt or :prompt-fn should be provided for commands that send text to Ollama.
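For example, a command that builds its prompt dynamically via :prompt-fn might look something like the sketch below (the command name and prompt function are hypothetical, and the exact calling convention of :prompt-fn is assumed from the property table above):

;; Hypothetical command illustrating :prompt-fn (sketch only)
(add-to-list 'ollama-buddy-command-definitions
             '(explain-for-mode
               :key ?x
               :description "Explain code for current mode"
               :model "qwen-coder:latest"
               ;; assumed: called with no arguments to produce the prompt string
               :prompt-fn (lambda ()
                            (format "Explain the following %s code:" major-mode))
               :action (lambda ()
                         (ollama-buddy--send-with-command 'explain-for-mode))))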
The repository currently includes a presets directory containing the following alternative menu systems. To activate one, just open the el file and evaluate it; the menu will adapt accordingly!
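If you prefer, a preset can also be loaded directly from your init file; the path and file name below are purely illustrative:

;; Evaluate a preset file to replace the current menu definitions
(load-file "path/to/ollama-buddy/presets/ollama-buddy-writing.el")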
Key | Description | Action Description |
---|---|---|
o | Open chat buffer | Opens the chat buffer and inserts an intro message if empty. |
v | View model status | Displays the current status of available models. |
m | Swap model | Allows selecting and switching to a different model. |
h | Help assistant | Displays an introduction message in the chat buffer. |
l | Send region | Sends the selected region of text for processing. |
r | Refactor code | Sends selected code for refactoring. |
g | Git commit message | Generates a concise Git commit message. |
c | Describe code | Provides a description of the selected code. |
d | Dictionary Lookup | Retrieves dictionary definitions for a selected word. |
n | Word synonym | Lists synonyms for a given word. |
p | Proofread text | Proofreads the selected text. |
z | Make concise | Reduces wordiness while preserving meaning. |
e | Custom prompt | Prompts the user to enter a custom query for processing. |
s | Save chat | Saves the current chat buffer to a file. |
x | Kill request | Terminates the active processing request. |
q | Quit | Exits the Ollama Shell menu with a message. |
Key | Description | Action Description |
---|---|---|
o | Open chat buffer | Opens the chat buffer and inserts an intro message if empty. |
v | View model status | Displays the current status of available models. |
m | Swap model | Allows selecting and switching to a different model. |
h | Help assistant | Displays an introduction message in the chat buffer. |
l | Send region | Sends the selected text region for processing. |
l | Literature review help | Suggests related papers and research directions. |
m | Review methodology | Reviews a research methodology and suggests improvements. |
s | Academic style check | Reviews text for academic writing style improvements. |
c | Citation suggestions | Identifies statements needing citations and suggests sources. |
a | Analyze arguments | Analyzes the logical structure and strength of arguments. |
e | Custom prompt | Prompts the user to enter a custom query for processing. |
s | Save chat | Saves the current chat buffer to a file. |
x | Kill request | Terminates the active processing request. |
q | Quit | Exits the Ollama Shell menu with a message. |
Key | Description | Action Description |
---|---|---|
o | Open chat buffer | Opens the chat buffer and inserts an intro message if empty. |
v | View model status | Displays the current status of available models. |
m | Swap model | Allows selecting and switching to a different model. |
h | Help assistant | Displays an introduction message in the chat buffer. |
l | Send region | Sends the selected text region for processing. |
e | Explain code | Explains the purpose and functionality of the selected code. |
r | Code review | Reviews the code for potential issues, bugs, and improvements. |
o | Optimize code | Suggests optimizations for performance and readability. |
t | Generate tests | Generates test cases for the selected code. |
d | Generate documentation | Produces detailed documentation for the provided code. |
p | Suggest design patterns | Recommends design patterns applicable to the provided code. |
c | Custom prompt | Allows the user to enter a custom query. |
s | Save chat | Saves the current chat buffer to a file. |
x | Kill request | Terminates the active processing request. |
q | Quit | Exits the Ollama Shell menu with a message. |
Key | Description | Action Description |
---|---|---|
o | Open chat buffer | Opens the chat buffer and inserts an intro message if empty. |
v | View model status | Displays the current status of available models. |
m | Swap model | Allows selecting and switching to a different model. |
h | Help assistant | Displays an introduction message in the chat buffer. |
l | Send region | Sends the selected text region for processing. |
b | Brainstorm ideas | Generates creative ideas on a given topic. |
u | Generate outline | Creates a structured outline for the provided content. |
s | Enhance writing style | Improves the writing style while maintaining meaning. |
p | Detailed proofreading | Conducts comprehensive proofreading for grammar, style, and clarity. |
f | Improve flow | Enhances text coherence and transition between ideas. |
d | Polish dialogue | Enhances dialogue to make it more natural and engaging. |
n | Enhance scene description | Adds vivid and sensory details to a scene description. |
c | Analyze character | Analyzes a character’s development and motivations. |
l | Analyze plot | Examines the structure, pacing, and coherence of the plot. |
r | Research expansion | Suggests additional research directions and sources. |
k | Fact checking suggestions | Identifies statements needing verification and suggests fact-checking strategies. |
v | Convert format | Converts text to another format while maintaining content meaning. |
w | Word choice suggestions | Recommends alternative word choices for better clarity and precision. |
z | Summarize text | Generates a concise summary while retaining key information. |
e | Custom writing prompt | Prompts the user to enter a custom query for processing. |
a | Save chat | Saves the current chat buffer to a file. |
x | Kill request | Terminates the active processing request. |
q | Quit | Exits the Ollama Shell menu with a message. |
You can associate specific commands defined in the menu with an Ollama LLM to optimize performance for different tasks. For example, if speed is a priority over accuracy, such as when retrieving synonyms, you might use a lightweight model like TinyLlama or a 1B–3B model. On the other hand, for tasks that require higher precision, like code refactoring, a more capable model such as Qwen-Coder 7B can be assigned to the “refactor” command on the buddy menu system.
Since this package enables seamless model switching through Ollama, the buddy menu can present a list of commands, each linked to an appropriate model. All Ollama interactions share the same chat buffer, ensuring that menu selections remain consistent. Additionally, the status bar on the header line and the prompt itself indicate the currently active model.
Ollama Buddy also includes a model selection mechanism with a fallback system to ensure commands execute smoothly, even if the preferred model is unavailable.
Commands in ollama-buddy-command-definitions
can specify preferred models using the :model
property. This allows optimizing different commands for specific models:
(defcustom ollama-buddy-command-definitions
'((refactor-code
:key ?r
:description "Refactor code"
:model "qwen-coder:latest"
:prompt "refactor the following code:")
(git-commit
:key ?g
:description "Git commit message"
:model "tinyllama:latest"
:prompt "write a concise git commit message for the following:")
(send-region
:key ?l
:description "Send region"
:model "llama:latest"))
...)
When :model is nil, the command will use whatever model is currently set as ollama-buddy-default-model.
When executing a command, the model selection follows this fallback chain:
- Command-specific model (:model property)
- Current model (ollama-buddy-default-model)
- User selection from available models
(setq ollama-buddy-default-model "llama:latest")
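To make the chain concrete, here is a minimal sketch of the selection logic (illustrative only; it is not the package’s internal function, and the helper name is hypothetical):

(defun my/ollama-buddy-resolve-model (requested available)
  "Return the model to use, given the command's REQUESTED model and
the list of AVAILABLE model names."
  (cond
   ;; 1. Command-specific model, if it is actually available
   ((and requested (member requested available)) requested)
   ;; 2. Otherwise the current default model
   ((member ollama-buddy-default-model available) ollama-buddy-default-model)
   ;; 3. Otherwise ask the user to pick from whatever is available
   (available (completing-read "Select model: " available))
   (t (error "No Ollama models available. Please pull some models first"))))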
When a fallback occurs, Ollama Buddy provides clear feedback:
- The header line shows which model is being used
- If using a fallback model, an orange warning appears showing both the requested and actual model
- The model status can be viewed using the “View model status” command (v key)
- Best Case: Requested model is available
- Command requests “mistral:latest”
- Model is available
- Request proceeds with “mistral:latest”
- Simple Fallback: Requested model unavailable
- Command requests “mistral:latest”
- Model unavailable
- Falls back to ollama-buddy-default-model
- Complete Fallback Chain:
- Command requests “mistral:latest”
- Model unavailable
- Current model (“llama2:latest”) unavailable
- Falls back to prompting the user to select from available models
- If no models are available at all, an error is raised: “No Ollama models available. Please pull some models first”
- Connection issues are monitored and reported in the status line
- Active processes are killed if connection is lost during execution
- Command-Specific Models:
- Assign models based on task requirements
- Use smaller models for simple tasks (e.g., “tinyllama” for git commits)
- Use more capable models for complex tasks (e.g., “mistral/qwen-coder” for code refactoring)
- Fallback Configuration:
- Set ollama-buddy-default-model to a reliable, general-purpose model
- Model Management:
- Use ollama-buddy-show-model-status to monitor available models
- Keep commonly used models pulled locally
- Watch for model availability warnings in the header line and chat buffer
The Ollama Emacs package ecosystem is still emerging. Although there are some great implementations available, they tend to be LLM jack-of-all-trades, catering to various types of LLM integrations, including, of course, the major online offerings.
Recently, I have been experimenting with a local solution using ollama
. While using ollama
through the terminal interface with readline
naturally leans toward Emacs keybindings, there are a few limitations:
- Copy and paste do not use Emacs keybindings, unlike readline navigation. This is due to the way key codes work in terminals, meaning that copying and pasting into Emacs would require using the mouse!
- Searching through a terminal with something like Emacs isearch can vary depending on the terminal.
- Workflow disruptions occur when copying and pasting between Emacs and ollama.
- There is no easy way to save a session.
- It is not using Emacs!
I guess you can see where this is going. The question is: how do I integrate a basic query-response mechanism for ollama into Emacs? This is where existing LLM Emacs packages come in; however, I have always found them to be geared more towards online models, with some packages offering experimental implementations of ollama integration. In my case, I often work on an air-gapped system where downloading or transferring packages is not straightforward. In such an environment, my only option for LLM interaction is ollama anyway. Given the limitations mentioned earlier of interacting with ollama through a terminal, why not create a dedicated ollama Emacs package that is very simple to set up, very lightweight, and leverages Emacs’s editing capabilities to provide a basic query-response interface to ollama?
I have found that setting up ollama within the current crop of LLM Emacs packages can be quite involved. I often struggle with the setup; I get there in the end, but it feels like there’s always a long list of payloads, backends, etc., to configure. But what if I just want to integrate Emacs with ollama? It has a RESTful interface, so could I create a package with minimal setup, allowing users to define a default model in their init file (or select one each time if they prefer)? It could also query the current set of loaded models through the ollama interface and provide a completing-read type of model selection, with potentially no model configuration needed!
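As a rough sketch of that idea (not the package’s actual code), the built-in url and json libraries can query Ollama’s /api/tags endpoint and feed the model names into completing-read; the host and port below are the Ollama defaults:

(require 'url)
(require 'json)

(defun my/ollama-list-models ()
  "Return the model names reported by a local Ollama server."
  (with-current-buffer
      (url-retrieve-synchronously "http://localhost:11434/api/tags")
    (goto-char (point-min))
    (re-search-forward "\n\n")        ; skip the HTTP headers
    (let ((data (json-read)))         ; objects as alists, arrays as vectors
      (mapcar (lambda (m) (alist-get 'name m))
              (alist-get 'models data)))))

(defun my/ollama-choose-model ()
  "Pick one of the currently available Ollama models."
  (completing-read "Ollama model: " (my/ollama-list-models)))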
Beyond just being lightweight and easy to configure, I also have another idea: a flexible menu system. For a while, I have been using a simple menu-based interface inspired by transient menus. However, I have chosen not to use transient
because I want this package to be compatible with older Emacs versions. Additionally, I haven’t found a compelling use case for a complex transient menu and I prefer a simple, opaque top level menu.
To achieve this, I have decided to create a flexible defcustom
menu system. Initially, it will be configured for some common actions, but users can easily modify it through the Emacs customization interface by updating a simple alist.
For example, to refactor code through an LLM, a prepended text string of something like “Refactor the following code:” is usually applied. To proofread text, “Proofread the following:” could be prepended to the body of the query. So, why not create a flexible menu where users can easily add their own commands? For instance, if someone wanted a command to uppercase some text (even though Emacs can already do this), they could simply add the following entry to the ollama-buddy-menu-items
alist:
(?u . ("Upcase"
(lambda () (ollama-buddy--send "convert the following to uppercase:"))))
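One way to install that entry (assuming ollama-buddy-menu-items is the customizable alist described above) would be:

;; Add the hypothetical "Upcase" entry to the menu alist
(add-to-list 'ollama-buddy-menu-items
             '(?u . ("Upcase"
                     (lambda ()
                       (ollama-buddy--send "convert the following to uppercase:")))))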
Then the menu would present a menu item “Upcase” with a “u” selection, upcasing the selected region. You could go nuts with this, and in order to double down on the autogeneration of a menu concept, I have provided a defcustom
ollama-buddy-menu-columns
variable so you can flatten out your auto-generated menu as much as you like!
This is getting rambly, but another key design consideration is how prompts should be handled, and in fact how I should go about sending text from within Emacs. Many implementations rely on a chat buffer as the single focal point, which seems natural to me, so I will follow a similar approach.
I’ve seen different ways of defining a prompt submission mechanism, some using <RET>, others a dedicated keybinding like C-c <RET>. So how should I define my prompting mechanism? I have a feeling this could get complicated, so let’s use the KISS principle. And how should text be sent from within Emacs buffers? My solution: simply mark the text and send it, not just from any Emacs buffer, but also within the chat window. It may seem slightly awkward at first (especially in the chat buffer, where you will have to create your prompt and then mark it), but it provides a clear delineation of text and ensures a consistent interface across Emacs. For example, using M-h to mark an element requires minimal effort and greatly simplifies the package implementation. This approach also allows users to use the *scratch* buffer for sending requests if so desired!
Many current implementations create a chat buffer with modes for local keybindings and other features. I have decided not to do this; instead, I will provide a simple editable buffer (ASCII text only) where all ollama interactions will reside. Users will be able to do anything in that buffer; there will be no bespoke Ollama/LLM functionality involved. It will simply be based on a special buffer, and to save a session, just use save-buffer to write it to a file. Emacs to the rescue again!
Regarding the minimal setup philosophy of this package, I also want to include a fun AI assistant-style experience. Nothing complicated, just a bit of logic to display welcome text, show the current ollama
status, and list available models. The idea is that users should be able to jump in immediately. If they know how to install/start ollama
, they can install the package without any configuration, run `M-x ollama-buddy-menu`, and open the chat. At that point, the “AI assistant” will display the current ollama
status and provide a simple tutorial to help them get started.
The backend? Well, I initially decided simply to use curl to call the ollama RESTful API, but after getting that to work I thought it best to remove that dependency completely, so now I am using a native network solution built on make-network-process. Yes, it is a bit overkill, but it works, and it ultimately gives me all the flexibility I could ever want without having to depend on an external tool.
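As a minimal sketch of this approach (illustrative, not the package’s actual implementation), a raw request to a local Ollama server on the default port 11434 could look like this:

(defun my/ollama-raw-request (json-payload)
  "POST JSON-PAYLOAD to a local Ollama server and echo the streamed response."
  (let ((proc (make-network-process
               :name "ollama-sketch"
               :host "localhost"
               :service 11434
               :coding 'utf-8
               ;; the filter receives each chunk of the streamed HTTP response
               :filter (lambda (_proc chunk) (message "%s" chunk)))))
    (process-send-string
     proc
     (concat "POST /api/generate HTTP/1.1\r\n"
             "Host: localhost\r\n"
             "Content-Type: application/json\r\n"
             "Connection: close\r\n"
             (format "Content-Length: %d\r\n\r\n" (string-bytes json-payload))
             json-payload))))

;; Example usage:
;; (my/ollama-raw-request "{\"model\": \"llama:latest\", \"prompt\": \"hello\"}")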
I have other thoughts regarding the use of local LLMs versus online AI behemoths. The more I use ollama
with Emacs through this package, the more I realize the potential of smaller, local LLMs. This package allows for quick switching between these models while maintaining a decent level of performance on a regular home computer. I could, for instance, load up qwen-coder
for code-related queries (I have found the 7B Q4/5 versions to work particularly well) and switch to a more general model for other queries, such as llama
or even deepseek-r1
.
Phew! That turned into quite a ramble, maybe I should run this text through ollama-buddy
for proofreading! :)
Here is a kanban board of the features that will (hopefully) be added in due course, showing their current status:
TODO | DOING |
---|---|
Test on Windows | Add to MELPA |
Create more specialized system prompts | |
Contributions are welcome! Please:
- Fork the repository
- Create a feature branch
- Commit your changes
- Open a pull request
- Ollama for making local LLM inference accessible
- Emacs community for continuous inspiration
Report issues on the GitHub Issues page
To the best of my knowledge, there are currently a few Emacs packages related to Ollama, though the ecosystem is still relatively young:
- llm.el (by Andrew Hyatt)
- A more general LLM interface package that supports Ollama as one of its backends
- GitHub: https://github.com/ahyatt/llm
- Provides a more abstracted approach to interacting with language models
- Supports multiple backends including Ollama, OpenAI, and others
- gptel (by Karthik Chikmagalur)
- While primarily designed for ChatGPT and other online services, it has experimental Ollama support
- GitHub: https://github.com/karthink/gptel
- Offers a more integrated chat buffer experience
- Has some basic Ollama integration, though it’s not the primary focus
- chatgpt-shell (by xenodium)
- Primarily designed for ChatGPT, but has some exploration of local model support
- GitHub: https://github.com/xenodium/chatgpt-shell
- Not specifically Ollama-focused, but interesting for comparison
- ellama (by s-kostyaev)
- A comprehensive Emacs package for interacting with local LLMs through Ollama
- GitHub: https://github.com/s-kostyaev/ellama
- Features deep org-mode integration and extensive prompt templates
- Offers streaming responses and structured interaction patterns
- More complex but feature-rich approach to local LLM integration
Let’s compare ollama-buddy to the existing solutions:
- llm.el
- Pros:
- Provides a generic LLM interface
- Supports multiple backends
- More abstracted and potentially more extensible
ollama-buddy is more:
- Directly focused on Ollama
- Lightweight and Ollama-native
- Provides a more interactive, menu-driven approach
- Simpler to set up for Ollama specifically
- gptel
- Pros:
- Sophisticated chat buffer interface
- Active development
- Good overall UX
ollama-buddy differentiates by:
- Being purpose-built for Ollama
- Offering a more flexible, function-oriented approach
- Providing a quick, lightweight interaction model
- Having a minimal, focused design
- chatgpt-shell
- Pros:
- Mature shell-based interaction model
- Rich interaction capabilities
ollama-buddy stands out by:
- Being specifically designed for Ollama
- Offering a simpler, more direct interaction model
- Providing a quick menu-based interface
- Having minimal dependencies
- ellama
- Pros:
- Tight integration with Emacs org-mode
- Extensive built-in prompt templates
- Support for streaming responses
- Good documentation and examples
ollama-buddy differs by:
- Having a simpler, more streamlined setup process
- Providing a more lightweight, menu-driven interface
- Focusing on quick, direct interactions from any buffer
- Having minimal dependencies and configuration requirements