Ollama Local Custom Agent #269

@nrdevau

Description

G'day guys!

First off, thanks heaps for the nvim plugin. It's very exciting getting the shiny AI buffer output!

I just wanted to document a bit of an edge case now that I have figured it out; maybe it will help someone else.

If you're using Ollama locally, there were two gotchas that tripped me up a bit.

  1. Specifying the provider as ollama (as in the docs) wasn't enough; the backwards-compatibility OpenAI provider would still error out.
  2. Specifying an agent through Ollama required the llama3.1 model to be pulled, even though I wanted to use my own model.
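
For the second gotcha, a rough sketch of the pull step (assuming a stock Ollama install; llama3 here stands in for whatever local model you actually want):

```sh
# pull the default model that gp.nvim's stock Ollama agent expects,
# plus the model you actually want to use
ollama pull llama3.1
ollama pull llama3

# confirm both are available locally
ollama list
```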

So basically, here is what I believe is the smallest required config for lazy.nvim, using Ollama with a local LLM:

        -- gpt prompting
        {
            "robitx/gp.nvim",
            version = "*",
            config = function()
                local conf = {
                    providers = {
                        ollama = {
                            endpoint = "http://localhost:11434/v1/chat/completions",
                        },
                        openai = {},
                    },
                    agents = {
                        -- Turns out disabling this didn't work. The path of least
                        -- resistance was to pull the default llama3.1 model, only
                        -- to then be able to `:GpNextAgent` to my shiny branded agent.
                        -- {
                        --     name = "CodeOllamaLlama3-8B", -- standard agent name to disable
                        --     disable = true,
                        -- },
                        {
                            provider = "ollama",
                            name = "NRDevCodeAi", -- obviously not required to call it that
                            chat = true,
                            -- string with model name, or table with model name and parameters
                            model = {
                                model = "llama3", -- in my case, not llama3.1
                                temperature = 0.6,
                                top_p = 1,
                                min_p = 0.05,
                            },
                            -- system prompt (use this to specify the persona/role of the AI)
                            system_prompt = "You are a general AI assistant.",
                        },
                    },
                }
                require("gp").setup(conf)
            end,
        },
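
If you want to sanity-check the endpoint outside Neovim, here is a small sketch (not part of gp.nvim; my own hypothetical helper, assuming Ollama's OpenAI-compatible API on the default port) that builds and sends the same kind of chat-completions request the agent settings above imply:

```python
import json
import urllib.request

# assumed default Ollama endpoint, matching the config above
ENDPOINT = "http://localhost:11434/v1/chat/completions"

def build_payload(model="llama3", temperature=0.6, top_p=1):
    """Build an OpenAI-style chat-completions request body mirroring the agent config."""
    return {
        "model": model,
        "temperature": temperature,
        "top_p": top_p,
        "messages": [
            {"role": "system", "content": "You are a general AI assistant."},
            {"role": "user", "content": "Say hello in one word."},
        ],
    }

def chat(payload, endpoint=ENDPOINT):
    """POST the payload to the local server; requires a running `ollama serve`."""
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    reply = chat(build_payload())
    print(reply["choices"][0]["message"]["content"])
```

If this curl-style round trip works but gp.nvim still errors, the problem is in the plugin config rather than the Ollama setup.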

Hopefully this helps someone else.
