
Secure Delivery of Trained LLM for Client Demo #1080

Open
humza-sami opened this issue Mar 25, 2024 · 0 comments

Comments

@humza-sami
I recently trained an LLM based on llama-2 on a private dataset for a client. They need the model for a demo on their own machines; unfortunately, I don't have the option to host the model and give them an endpoint. The model is approximately 68 GB, stored in SafeTensors format in a single folder, and I have built a binary for the inference + RAG pipeline using vLLM for inference. The client needs to run the demo locally.

I am seeking advice on the most secure way to deliver the model to the client while keeping the model files encrypted to prevent misuse. Given the sensitivity of the model and its potential for misuse, encryption is crucial.
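One approach (not something vLLM provides out of the box) is to encrypt each SafeTensors shard with a symmetric key and have the inference binary decrypt the shards into memory before loading. The sketch below uses Fernet from the third-party `cryptography` package; the file names are hypothetical stand-ins, and key delivery (embedding it in the binary, fetching it from a license server, etc.) is out of scope. Note that any scheme where the client's machine holds the decryption key can only raise the bar against casual copying, not stop a determined attacker.

```python
# Minimal sketch, assuming the `cryptography` package is installed.
# Encrypts model shards at rest; decryption happens in memory so the
# plaintext weights never land on the client's disk.
from pathlib import Path

from cryptography.fernet import Fernet


def encrypt_file(key: bytes, src: Path, dst: Path) -> None:
    """Encrypt one model shard (e.g. a .safetensors file) to dst."""
    dst.write_bytes(Fernet(key).encrypt(src.read_bytes()))


def decrypt_to_memory(key: bytes, src: Path) -> bytes:
    """Decrypt a shard into memory; feed the bytes to the loader."""
    return Fernet(key).decrypt(src.read_bytes())


if __name__ == "__main__":
    key = Fernet.generate_key()  # deliver to the client out-of-band
    shard = Path("model-00001.safetensors")  # hypothetical shard name
    shard.write_bytes(b"fake weights")       # stand-in for real weights
    encrypted = Path("model-00001.safetensors.enc")
    encrypt_file(key, shard, encrypted)
    assert decrypt_to_memory(key, encrypted) == b"fake weights"
```

The decrypted bytes could then be passed to `safetensors.torch.load` (which accepts an in-memory byte string) rather than the file-based loader, so only the encrypted copies ever sit on the client's machine.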

Labels: none. Projects: none. No branches or pull requests. 1 participant.