
Sending sensitive information to OpenAI #18

Open
kristopher-smith opened this issue Apr 2, 2024 · 0 comments

Comments


I have been looking through how the new CALM implements "intent-less" policies.

Reading through the prompts sent to the LLMs, it appears we would be relying heavily on the generative model to steer the conversation and trigger other policies and function calls.

Unless we host our own LLM in-house, wouldn't we be sending potentially private information to a third party?

For example: https://github.com/RasaHQ/rasa-calm-demo/blob/main/data/prompts/gpt_3-5_turbo_cmd_gen_prompt.jinja2

If we ask a user for personally identifiable information to fill slots, isn't that information fed to OpenAI, along with the full context of what kind of information it is?
Even if we anonymize the data before sending the prompt to OpenAI, if users share information beyond what our anonymization catches, isn't that shared as well?
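To make the anonymization concern concrete, here is a minimal, hypothetical sketch (not part of Rasa's or OpenAI's API) of pattern-based PII redaction applied to a user message before it is embedded in a prompt. It also illustrates the limitation raised above: anything the patterns fail to match still reaches the third-party API.

```python
import re

# Hypothetical illustration: redact a few common PII patterns from a user
# message before it is placed into an LLM prompt. Pattern order matters --
# the SSN pattern runs before the broader phone pattern so SSNs are not
# mislabeled. Regex scrubbing is inherently incomplete: any PII these
# patterns miss is still sent to the third-party API.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace each matched PII span with a bracketed type label."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

user_message = "My email is jane.doe@example.com and my SSN is 123-45-6789."
print(redact(user_message))
# → My email is [EMAIL] and my SSN is [SSN].
```

Even with a redaction step like this in front of the prompt template, free-form user text can carry identifying details (names, addresses, account context) that no fixed pattern set catches, which is why self-hosting the model is often the only hard guarantee.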

Is there a way, aside from hosting an LLM on our internal servers, to be sure we are not sharing sensitive information?
