# Q & A
- The curl usage format on Windows differs from Linux, and llm.nvim's default request format may cause issues on Windows.
- Switching between multiple LLMs and frequently changing the value of LLM_KEY is troublesome, and I don't want to expose my key in Neovim's configuration file.
- Priority of different parse/streaming functions
- How can the AI-generated git commit message feature be integrated with lazygit?
- How to switch models?
- How to display the thinking (reasoning) contents?
## The curl usage format on Windows differs from Linux, and llm.nvim's default request format may cause issues on Windows

Use a custom request format:
- Basic Chat and some AI tools (using streaming output) with a customized request format

  Define the `args` parameter at the same level as `prompt` (see the sketch after this list):

  ```lua
  --[[ custom request args ]]
  args = [[return {url, "-N", "-X", "POST", "-H", "Content-Type: application/json", "-H", authorization, "-d", vim.fn.json_encode(body)}]],
  ```
- AI tools (using non-streaming output) with a customized request format

  Define `args` in `opts`:

  ```lua
  WordTranslate = {
    handler = tools.flexi_handler,
    prompt = "Translate the following text to Chinese, please only return the translation",
    opts = {
      fetch_key = function()
        return vim.env.GLM_KEY
      end,
      url = "https://open.bigmodel.cn/api/paas/v4/chat/completions",
      model = "glm-4-flash",
      api_type = "zhipu",
      args = [[return {url, "-N", "-X", "POST", "-H", "Content-Type: application/json", "-H", authorization, "-d", vim.fn.json_encode(body)}]],
      exit_on_move = true,
      enter_flexible_window = false,
    },
  },
  ```
> [!NOTE]
> You need to modify the `args` according to your actual situation.
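For the streaming case, `args` sits at the top level of `setup()`, beside `prompt`. A minimal sketch, assuming a placeholder prompt (as the snippets above show, llm.nvim evaluates the `args` chunk with `url`, `authorization`, and `body` already in scope):

```lua
require("llm").setup({
  prompt = "You are a helpful assistant.", -- placeholder system prompt
  -- `args` is a string containing a Lua chunk that returns the curl argument list
  args = [[return {url, "-N", "-X", "POST", "-H", "Content-Type: application/json", "-H", authorization, "-d", vim.fn.json_encode(body)}]],
})
```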
## Switching between multiple LLMs and frequently changing the value of LLM_KEY is troublesome, and I don't want to expose my key in Neovim's configuration file
- Create a `.env` file specifically to store your various keys. Note: do not upload this file to GitHub.

  ```bash
  export GITHUB_TOKEN=xxxxxxx
  export DEEPSEEK_TOKEN=xxxxxxx
  export SILICONFLOW_TOKEN=xxxxxxx
  ```
- Load the `.env` file in your `zshrc` or `bashrc`:

  ```bash
  source ~/.config/zsh/.env

  # Default to using the LLM provided by GitHub Models.
  export LLM_KEY=$GITHUB_TOKEN
  ```
- Finally, switch between keys through `fetch_key`:

  ```lua
  fetch_key = function()
    return vim.env.DEEPSEEK_TOKEN
  end,
  ```
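In context, each model or tool entry can read its own variable. A sketch of one such entry (the DeepSeek URL and model name here are illustrative, not taken from this page):

```lua
{
  name = "DeepSeek",
  url = "https://api.deepseek.com/chat/completions",
  model = "deepseek-chat",
  api_type = "openai",
  fetch_key = function()
    -- the key comes from the environment, never from the config file
    return vim.env.DEEPSEEK_TOKEN
  end,
},
```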
## Priority of different parse/streaming functions

AI tool configuration's `streaming_handler` or `parse_handler` > AI tool configuration's `api_type` > Main configuration's `streaming_handler` or `parse_handler` > Main configuration's `api_type`
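Read it as a chain of fallbacks. A purely illustrative sketch of the resolution order (not plugin source code):

```lua
-- Return the first configured option, from highest to lowest priority.
local function resolve(tool_opts, main_opts)
  return tool_opts.streaming_handler or tool_opts.parse_handler
    or tool_opts.api_type
    or main_opts.streaming_handler or main_opts.parse_handler
    or main_opts.api_type
end
```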
## How can the AI-generated git commit message feature be integrated with lazygit?

Bind a key in lazygit.nvim's config that closes the lazygit floating window and then invokes the `CommitMsg` tool:

```lua
{
  "kdheepak/lazygit.nvim",
  lazy = true,
  cmd = {
    "LazyGit",
    "LazyGitConfig",
    "LazyGitCurrentFile",
    "LazyGitFilter",
    "LazyGitFilterCurrentFile",
  },
  -- optional for floating window border decoration
  dependencies = {
    "nvim-lua/plenary.nvim",
  },
  config = function()
    vim.keymap.set("t", "<C-c>", function()
      vim.api.nvim_win_close(vim.api.nvim_get_current_win(), true)
      vim.api.nvim_command("LLMAppHandler CommitMsg")
    end, { desc = "AI Commit Msg" })
  end,
}
```
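The `LLMAppHandler CommitMsg` command assumes a `CommitMsg` tool is registered in llm.nvim's `app_handler` table. A minimal sketch of such an entry (the prompt wording is an assumption, not this project's exact prompt):

```lua
-- inside require("llm").setup({ app_handler = { ... } }),
-- assuming: local tools = require("llm.tools")
CommitMsg = {
  handler = tools.flexi_handler,
  prompt = function()
    -- hypothetical prompt: feed the staged diff to the model
    return string.format(
      "Write a concise git commit message for the following diff:\n%s",
      vim.fn.system("git diff --no-ext-diff --staged")
    )
  end,
},
```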
## How to switch models?

You need to configure the `models` list:

```lua
{
  "Kurama622/llm.nvim",
  dependencies = { "nvim-lua/plenary.nvim", "MunifTanjim/nui.nvim" },
  cmd = { "LLMSessionToggle", "LLMSelectedTextHandler", "LLMAppHandler" },
  config = function()
    require("llm").setup({
      -- set models list
      models = {
        {
          name = "GithubModels",
          url = "https://models.inference.ai.azure.com/chat/completions",
          model = "gpt-4o-mini",
          api_type = "openai",
          fetch_key = function()
            return "<your api key>"
          end,
          -- max_tokens = 4096,
          -- temperature = 0.3,
          -- top_p = 0.7,
        },
        {
          name = "Model2",
          -- ...
        },
      },
      keys = {
        -- float style
        ["Input:ModelsNext"] = { mode = { "n", "i" }, key = "<C-S-J>" },
        ["Input:ModelsPrev"] = { mode = { "n", "i" }, key = "<C-S-K>" },
        -- Applicable to AI tools with split style and UI interfaces
        ["Session:Models"] = { mode = "n", key = { "<C-m>" } },
      },
    })
  end,
  keys = {
    { "<leader>ac", mode = "n", "<cmd>LLMSessionToggle<cr>" },
  },
}
```
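With this in place, `<C-S-J>` / `<C-S-K>` cycle through the `models` list from the chat input (float style), and `<C-m>` opens the model list for split-style tools and UI interfaces.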
## How to display the thinking (reasoning) contents

Configure `enable_thinking` (`thinking_budget` can optionally be configured):
```lua
{
  url = "https://api.siliconflow.cn/v1/chat/completions",
  api_type = "openai",
  max_tokens = 4096,
  model = "Qwen/Qwen3-8B", -- a model that supports thinking
  fetch_key = function()
    return vim.env.SILICONFLOW_TOKEN
  end,
  temperature = 0.3,
  top_p = 0.7,
  enable_thinking = true,
  thinking_budget = 512,
}
```
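If you only want the final answer, set `enable_thinking = false` (or omit it). With SiliconFlow's Qwen3 models, `thinking_budget` caps the number of tokens the model may spend on reasoning before it produces the answer.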