Error with the local LLM scripts in init file #307
Comments
How are you calling these? Are they inside a `use-package` form? If so, please provide the full form.
Hi. I mark them and I eval the region. What you see is what I eval.
I mean, how is it placed in your init file, relative to the rest of your gptel configuration?
First I added it just below the ChatGPT API key.
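For context, the `use-package` form being asked about would look something like the sketch below (not the poster's actual configuration; placing the backend call in `:config` defers it until gptel itself has loaded):

```elisp
;; Sketch of a use-package arrangement for gptel.
;; The :config body runs only after the gptel package is loaded,
;; so variables defined inside gptel already exist by then.
(use-package gptel
  :config
  (gptel-make-ollama "Ollama"          ;any name of your choosing
    :host "localhost:11434"            ;where Ollama is running
    :stream t                          ;stream responses
    :models '("llama3" "mistral" "phi3")))
```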
karthink added a commit that referenced this issue on May 17, 2024:

* gptel.el (gptel--known-backends): Move `gptel--known-backends`
  to gptel-openai. This fixes the warning where
  `gptel--known-backends` is not defined when `gptel-make-openai`
  is called in user configuration.
* gptel-openai.el (gptel--known-backends): Move here.
Should be fixed now, please test.
Fixed!
Hi,
I have added some scripts to my init file to use a local LLM. Something strange is happening: I receive an error from them when I start Emacs (if I comment them out, I receive no error). But if I eval them one after the other once Emacs is already running, there is no error, and the complete init file then loads without any error.
The error message:
■ Warning (initialization): An error occurred while loading ‘c:/Users/user/AppData/Roaming/.emacs.d/init.el’:
Symbol's value as variable is void: gptel--known-backends
To ensure normal operation, you should investigate and remove the
cause of the error in your initialization file. Start Emacs with
the ‘--debug-init’ option to view a complete error backtrace.
The scripts:
(gptel-make-ollama "Ollama" ;Any name of your choosing
:host "localhost:11434" ;Where it's running
:stream t ;Stream responses
:models '("llama3" "mistral" "phi3"))
(gptel-make-gpt4all "GPT4All" ;Name of your choosing
:protocol "http"
:host "localhost:4891" ;Where it's running
:models '("Meta-Llama-3-8B-Instruct.Q4_0.gguf"))
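A common way to avoid this kind of void-variable error at startup is to defer the backend definitions until gptel itself is loaded; a minimal sketch (assuming gptel is installed as a package) that wraps the calls above:

```elisp
;; Defer backend creation until gptel has been loaded, so that
;; internal variables such as `gptel--known-backends' exist
;; before `gptel-make-ollama' / `gptel-make-gpt4all' run.
(with-eval-after-load 'gptel
  (gptel-make-ollama "Ollama"          ;any name of your choosing
    :host "localhost:11434"            ;where Ollama is running
    :stream t                          ;stream responses
    :models '("llama3" "mistral" "phi3"))
  (gptel-make-gpt4all "GPT4All"        ;name of your choosing
    :protocol "http"
    :host "localhost:4891"             ;where GPT4All is running
    :models '("Meta-Llama-3-8B-Instruct.Q4_0.gguf")))
```

With the commit referenced above, this wrapping is no longer necessary, but it remains a robust pattern for init-file code that depends on a package's internals.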