G'day guys!
First off, thanks heaps for the nvim plugin. It's very exciting getting the shiny AI buffer output!
I just wanted to document a bit of an edge case, now that I have figured it out; maybe it will help someone else.
If using Ollama locally, there were two gotchas that tripped me up a bit:
- Specifying the provider as `ollama` (as in the docs) wasn't enough, as the backward-compatible OpenAI provider would still error out.
- Using an agent through `ollama` required the default `llama3.1` model to be pulled, even if I wanted to use my own model.
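For reference, pulling the default model and checking what's available locally looks something like this (assuming the `ollama` CLI is installed and the daemon is running):

```shell
# Pull the default model that gp.nvim's standard Ollama agent expects
ollama pull llama3.1

# Confirm which models are available locally
ollama list
```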

```lua
-- gpt prompting
{
  "robitx/gp.nvim",
  version = "*",
  config = function()
    local conf = {
      providers = {
        ollama = {
          endpoint = "http://localhost:11434/v1/chat/completions",
        },
        openai = {},
      },
      agents = {
        -- Turns out disabling this didn't work. The path of least resistance
        -- was to download the default llama3.1 model, only to then be able to
        -- `:GpNextAgent` to my shiny branded agent.
        -- {
        --   name = "CodeOllamaLlama3-8B", -- standard agent name to disable
        --   disable = true,
        -- },
        {
          provider = "ollama",
          name = "NRDevCodeAi", -- obviously not required to call it that
          chat = true,
          -- string with model name or table with model name and parameters
          model = {
            model = "llama3", -- in my case, not llama3.1
            temperature = 0.6,
            top_p = 1,
            min_p = 0.05,
          },
          -- system prompt (use this to specify the persona/role of the AI)
          system_prompt = "You are a general AI assistant.",
        },
      },
    }
    require("gp").setup(conf)
  end,
},
```
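If the plugin still errors out, it can help to sanity-check the endpoint outside of Neovim. Ollama exposes an OpenAI-compatible chat completions API at the endpoint above, so a plain `curl` request (swap in whatever model name you pulled) should return a JSON completion:

```shell
# Quick sanity check of the Ollama OpenAI-compatible endpoint
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3",
    "messages": [{"role": "user", "content": "Say hello"}]
  }'
```

If this fails, the problem is with Ollama itself (not running, model not pulled, wrong port) rather than with the gp.nvim config.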
Hopefully this helps someone else!