
Integration of privacy-by-design inference with remote enclaves using BlindLlama for powerful models such as Llama 2 70b & Falcon 180b #8

Open
lyie28 opened this issue Sep 20, 2023 · 1 comment
Labels
BlindLlama integration Feature related to BlindLlama key feature core feature of BlindChat

Comments

@lyie28
Collaborator

lyie28 commented Sep 20, 2023

No description provided.

@lyie28 lyie28 added key feature core feature of BlindChat BlindLlama integration Feature related to BlindLlama labels Sep 20, 2023
@lyie28 lyie28 added this to the Remote enclave support milestone Sep 20, 2023
@dhuynh95
Contributor

We will soon connect BlindLlama, our open-source confidential and verifiable AI API, to BlindChat.

This will enable users to keep a fully in-browser, private experience while offloading most of the work to a remote enclave. This option implies:

  • No heavy bandwidth requirement, compared to the local version, which pulls a model (700MB) onto the device
  • No heavy compute requirement, compared to running inference locally
  • Better model performance, as we can use large models, like Llama 2 70b, that would not run on most users' devices

Privacy is still ensured by our end-to-end protected AI APIs.

If you want to learn more about the privacy guarantees of BlindLlama, you can look at our docs or whitepaper.
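To illustrate the "verifiable" part of the design: before sending any prompt, a client can check that the code measurement reported in the enclave's attestation matches an expected, audited measurement. This is only a minimal sketch of that idea; the measurement values and function names below are illustrative assumptions, not BlindLlama's actual API.

```python
import hashlib

# Assumed: the hash of the audited enclave build, published out of band
# (in the real system this would come from reproducible-build artifacts).
EXPECTED_MEASUREMENT = hashlib.sha256(b"audited-enclave-build").hexdigest()

def verify_enclave(reported_measurement: str) -> bool:
    """Return True only if the enclave's attested code hash matches
    the expected measurement; otherwise the client should refuse to
    send the prompt."""
    return reported_measurement == EXPECTED_MEASUREMENT

# Simulated attestation report from a genuine enclave:
report = hashlib.sha256(b"audited-enclave-build").hexdigest()
print(verify_enclave(report))      # genuine build: safe to send the prompt
print(verify_enclave("deadbeef"))  # unexpected build: refuse to connect
```

In the real protocol the report would be signed by the hardware vendor's attestation key and bound to the TLS channel, so the comparison above is only the last step of verification.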
