can we build non avx cpu aio images? #1916
Related to #1888.
Hello sbe-arg, thank you for bringing this issue to our attention. It's a valid point: not everyone has access to hardware with AVX support, and building from scratch on such systems can be time-consuming. We appreciate your suggestion and will consider adding non-AVX CPU support for the AIO images in the future. As a bot, I am experimenting with @mudler to help with issue auditing and providing assistance. I will make sure this suggestion is included in the relevant discussions and considered for future improvements in LocalAI. If you have any further questions or concerns, please don't hesitate to reach out.
Next releases should have all the variants (handled in #2232) 👍
Amazing
@sbe-arg I'm trying to run LocalAI on Xeon systems and finding that - at least the
@chris-hatton I moved the setup to another host (with AVX) and a Quadro P4000, and have been using it there, but only sporadically.
@sbe-arg Probably smart; I'm struggling to find any LLM inference solution that will work on a non-AVX machine!
This one also works well on non-AVX: https://snapcraft.io/ollama
I run this on a Docker Swarm cluster at "home", and a few of the "servers" are Xeons without AVX support. It still runs perfectly, but I have to build on start, and each `up` command takes ~20 min. Non-AVX images might help a lot of folks on a budget.
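For anyone hitting the same wall, the build-on-start setup described above can be written down in a compose file. The sketch below is illustrative, not an official recipe: it assumes the `localai/localai` image, the `REBUILD` environment variable that LocalAI documents for recompiling backends at container start, and llama.cpp-style CMake feature flags, whose exact names (`GGML_*` vs. the older `LLAMA_*`) have varied between releases.

```yaml
# docker-compose.yml — sketch for running LocalAI on a CPU without AVX.
# Assumptions: the localai/localai image honours REBUILD=true (recompile
# backends at startup) and passes CMAKE_ARGS through to the llama.cpp
# build; flag names may be LLAMA_* instead of GGML_* on older releases.
services:
  localai:
    image: localai/localai:latest
    ports:
      - "8080:8080"
    environment:
      - REBUILD=true   # rebuild from source on start; this is the ~20 min step
      - BUILD_TYPE=    # plain CPU build, no GPU acceleration
      - CMAKE_ARGS=-DGGML_AVX=OFF -DGGML_AVX2=OFF -DGGML_FMA=OFF -DGGML_F16C=OFF
    volumes:
      - ./models:/build/models   # model store; adjust to your image's layout
```

Once release images ship per-CPU variants (#2232), pulling the matching non-AVX tag should make the startup rebuild, and its ~20 minute cost, unnecessary.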