Performance or "just" capability? #41

Open
mbetrifork opened this issue Jan 11, 2025 · 0 comments

@mbetrifork

I was looking for a way to run inference across multiple devices when I came across your project. Previously, I looked at Distributed Llama (https://github.com/b4rtaz/distributed-llama) and got it to work, but it was frankly a lot of effort. I was hoping your project would be easier to use, and from the instructions it seems to be.

Could you expand the README.md file to be more specific about the purpose of the project? I would like to know whether Cake is aimed at giving users the "capability" to run inference on models too large for a single device, or whether it also runs inference in parallel across devices, resulting in increased inference speed.

This information would help users know beforehand whether their goals align with what the project aims to do.
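
To make the distinction concrete, here is a toy NumPy sketch of the two modes I have in mind. It is purely illustrative and not based on Cake's actual code: pipeline (layer) sharding gives the capability to fit a large model without speeding up a single request, while tensor parallelism splits each layer across devices that work concurrently.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# Toy "model": 8 layers, each just a weight matrix followed by tanh.
rng = np.random.default_rng(0)
layers = [rng.standard_normal((256, 256)) / 16 for _ in range(8)]
x = rng.standard_normal(256)

# Mode 1 -- pipeline / layer sharding ("capability"): each device
# holds a disjoint run of layers and a request visits them in turn.
# No device needs the whole model, but a single token's latency is
# still the sum of all layer times, so there is no per-request speedup.
def pipeline_forward(x, shards):
    for shard in shards:            # device 0, then device 1, ...
        for w in shard:
            x = np.tanh(w @ x)
    return x

shards = [layers[:4], layers[4:]]   # two hypothetical devices
y_pipeline = pipeline_forward(x, shards)

# Mode 2 -- tensor parallelism ("performance"): every layer's weight
# matrix is split row-wise across devices, which compute their slices
# concurrently; the partial outputs are concatenated. Threads stand in
# for devices here, so any wall-clock benefit is only illustrative.
def tensor_parallel_layer(x, w, pool, n_devices=2):
    row_blocks = np.array_split(w, n_devices, axis=0)
    parts = pool.map(lambda block: block @ x, row_blocks)
    return np.tanh(np.concatenate(list(parts)))

with ThreadPoolExecutor(max_workers=2) as pool:
    y_tp = x
    for w in layers:
        y_tp = tensor_parallel_layer(y_tp, w, pool)

# Both modes compute the same function; they differ in whether devices
# work sequentially (fitting the model) or in parallel (speeding it up).
print(np.allclose(y_pipeline, y_tp))  # True
```

Knowing which of these two modes Cake implements (or whether it does both) is exactly what I would hope the README could spell out.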
