
can we build non avx cpu aio images? #1916

Closed
sbe-arg opened this issue Mar 28, 2024 · 8 comments
Labels: enhancement (New feature or request), roadmap

Comments

@sbe-arg

sbe-arg commented Mar 28, 2024

I run this on a Docker Swarm cluster at "home", and a few of the "servers" are Xeons without AVX support. It still runs perfectly, but I have to build on start, which takes ~20 min per `up` command. Prebuilt non-AVX images might help a lot of folks on a budget.
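[Editor's note] A quick way to tell whether a host will need a non-AVX build is to check the CPU flags the kernel reports. The helper below is a hypothetical sketch (not part of LocalAI) that reads the `flags` line from Linux's `/proc/cpuinfo`:

```python
def cpu_has_avx(cpuinfo_path="/proc/cpuinfo"):
    """Return True if the CPU advertises the 'avx' flag in /proc/cpuinfo."""
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    # The flags line looks like: "flags\t\t: fpu vme ... avx ..."
                    return "avx" in line.split(":", 1)[1].split()
    except OSError:
        pass  # not Linux, or /proc unavailable
    return False

if __name__ == "__main__":
    print("AVX supported:", cpu_has_avx())
```

Checking token membership in the split flag list (rather than substring matching) avoids false positives from related flags such as `avx2` or `avx512f`.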

@sbe-arg sbe-arg added the enhancement New feature or request label Mar 28, 2024
@mudler
Owner

mudler commented Mar 28, 2024

related to #1888

@mudler mudler added the roadmap label Mar 28, 2024
@localai-bot
Contributor

Hello sbe-arg,

Thank you for bringing this issue to our attention. It's indeed a valid point as not everyone may have access to hardware with AVX support, and building from scratch on such systems can be time-consuming. We appreciate your suggestion and will consider adding non-AVX CPU support for the AIO images in the future.

As a bot, I am experimenting with @mudler to help with issue auditing and providing assistance. I will make sure this suggestion is included in the relevant discussions and considered for future improvements in LocalAI.

If you have any further questions or concerns, please don't hesitate to reach out.

@mudler
Owner

mudler commented May 5, 2024

Next releases should have all the variants ( handled in #2232 ) 👍
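[Editor's note] For anyone who still needs to build locally in the meantime, a minimal sketch of disabling AVX at build time, assuming LocalAI forwards `CMAKE_ARGS` to its llama.cpp backend; the exact option names (`GGML_AVX` vs older `LLAMA_AVX`) vary between versions, so check the build docs for your release:

```shell
# Build LocalAI from source with AVX-family instruction sets disabled.
# Run from a checkout of the LocalAI repository.
CMAKE_ARGS="-DGGML_AVX=OFF -DGGML_AVX2=OFF -DGGML_FMA=OFF -DGGML_F16C=OFF" make build
```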

@mudler mudler closed this as completed May 5, 2024
@sbe-arg
Author

sbe-arg commented May 6, 2024

Amazing

@chris-hatton

@sbe-arg I'm trying to run LocalAI on Xeon systems and finding that, at least with the hipblas AIO image, AVX is still required. This appears to be a bug; how is your setup going?

@sbe-arg
Author

sbe-arg commented Nov 19, 2024

@chris-hatton I moved the setup to another host (with AVX) and a Quadro P4000, and have been using it there, though only sporadically.

@chris-hatton

@sbe-arg Probably smart; I'm struggling to find any LLM inference solution that will work on a non-AVX machine!

@sbe-arg
Author

sbe-arg commented Nov 19, 2024

This one also works well on non-AVX hardware: https://snapcraft.io/ollama
