[Feature] Manually interrupt when outputting answers #1005

Open
valentimarco opened this issue Jan 14, 2025 Discussed in #748 · 0 comments
Labels

- agent: Related to cat agent (reasoner / prompt engine)
- endpoints: Related to http / ws endpoints
- LLM: Related to language model / embedder
- V2

Comments

@valentimarco
Member

Discussed in #748

Originally posted by NeverOccurs March 13, 2024
Hi, I really love your work. Just a quick question: when I use local Ollama models, they sometimes produce gibberish and never stop, and I have to restart the container, which is quite annoying. How can I manually interrupt a generation in progress when I don't want it to continue? Many thanks in advance!
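
For context on what a fix might look like: the Cat core is async Python, so one plausible approach is to run each LLM generation as a cancellable asyncio task and cancel it when the client sends a stop signal over the websocket. The sketch below is purely illustrative; `stream_tokens` and the queue-based token flow are hypothetical stand-ins, not the Cat's or Ollama's actual API.

```python
import asyncio

async def stream_tokens(queue: asyncio.Queue) -> None:
    # Hypothetical stand-in for a streaming LLM call: it emits
    # tokens forever, mimicking a local model that never stops.
    while True:
        await queue.put("token ")
        await asyncio.sleep(0.1)

async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue()
    generation = asyncio.create_task(stream_tokens(queue))

    # Relay a few tokens to the client, then simulate the user
    # sending a "stop" message over the websocket.
    for _ in range(5):
        print(await queue.get(), end="", flush=True)

    generation.cancel()  # the manual interrupt this issue asks for
    try:
        await generation
    except asyncio.CancelledError:
        print("\n[generation interrupted]")

asyncio.run(main())
```

Cancelling the task raises `CancelledError` inside the awaited call, so the runaway generation stops without restarting the container; a real endpoint would call `generation.cancel()` from whichever websocket handler receives the stop message.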

@valentimarco added the agent, endpoints, LLM, and V2 labels on Jan 14, 2025