
Ollama response delayed or error #217

Open
karthink opened this issue Feb 11, 2024 · 4 comments

Comments

@karthink
Owner

karthink commented Feb 11, 2024

I get the error below when I run gptel-send with the following configuration. It also takes 10 minutes for the response to arrive, and I sometimes get Response Error: nil instead.

[screenshot of the error]
(setq-default gptel-model "mistral:latest" ;Pick your default model
              gptel-backend (gptel-make-ollama "Ollama"
                              :host "localhost:11434"
                              :stream t
                              :models '("mistral")))
  • gptel-curl:
{"model":"mistral:latest","created_at":"2024-02-10T22:43:58.789276Z","response":"","done":true,"total_duration":417571583,"load_duration":417073333}
(8d87d74a71c4ad1eb816d1778ae4e5db . 120)

  • gptel-log:
{
  "gptel": "request body",
  "timestamp": "2024-02-11 11:13:45"
}
{
  "model": "mistral",
  "system": "You are a large language model living in Emacs and a helpful assistant. Respond concisely.",
  "prompt": "Test",
  "stream": true
}

The ollama server is also active, and ollama run mistral works normally.
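
Note that gptel talks to Ollama over its HTTP API rather than through the ollama CLI, so a working CLI session doesn't rule out an API-side problem. A minimal sanity check against the API, assuming the default port 11434:

# Should respond immediately with "Ollama is running":
curl -s http://localhost:11434/

# Should list the locally installed models, including mistral:
curl -s http://localhost:11434/api/tags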

Originally posted by @luyangliuable in #181 (comment)

@karthink
Owner Author

karthink commented Feb 11, 2024

The response from Ollama is empty.

Could you run (setq gptel-log-level 'debug), try using Ollama, and paste the contents of the *gptel-log* buffer? Please wait until you get either an error or a timeout.

@luyangliuable

I got the following:

In *gptel-log*:

{
  "gptel": "request Curl command",
  "timestamp": "2024-02-11 13:06:13"
}
[
  "curl",
  "--disable",
  "--location",
  "--silent",
  "--compressed",
  "-XPOST",
  "-y300",
  "-Y1",
  "-D-",
  "-w(5242174a9fcb32555dea3157193c24d7 . %{size_header})",
  "-d{\"model\":\"mistral\",\"system\":\"You are a large language model living in Emacs and a helpful assistant. Respond concisely.\",\"prompt\":\"Generate while loop in rust.\",\"stream\":true}",
  "-HContent-Type: application/json",
  "http://localhost:11434/api/generate"
]


@luyangliuable

luyangliuable commented Feb 11, 2024

It seems the problem may stem from Ollama itself. I tried running the following command directly in the shell:

curl -X POST -d "{\"model\":\"mistral\",\"system\":\"You are a large language model living in Emacs and a helpful assistant. Respond concisely.\",\"prompt\":\"Generate while loop in rust.\",\"stream\":true}" -H "Content-Type: application/json" "http://localhost:11434/api/generate"

It hangs for hours without producing any response.
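
For comparison, the same request with streaming disabled (a sketch against the same server and model) would show whether the hang is specific to streaming; with "stream": false, /api/generate returns a single JSON object instead of a stream:

curl -X POST -H "Content-Type: application/json" \
  -d '{"model":"mistral","prompt":"Generate while loop in rust.","stream":false}' \
  http://localhost:11434/api/generate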

@karthink
Owner Author

Has Ollama ever worked for you on this machine?
