
Bias in models? #1

Open
braaannigan opened this issue Mar 16, 2021 · 2 comments

@braaannigan

There is a question about data privacy. Should there also be a question about potential sources of bias that could arise from using the model and how this is monitored?

@eugeneyan
Owner

Hey @braaannigan, yes, data privacy is a key concern in systems involving customer data and is covered in Section 6.5.

Your point on bias is spot on. Yes, monitoring for bias, data drift, and model validation metrics can also be part of the design doc, and should be added if it's critical to the system. I've not included it here so as not to overwhelm readers.

@erikcvisser
Contributor

I would also argue that it would be useful to add this to 6.6 (Monitoring). You want to monitor not only for technical issues, but also for data drift. I want the ML dev team to think about when a model needs to be retrained, or potentially deprecated altogether (lifecycle management).
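To make the data-drift part of 6.6 concrete, here is a minimal sketch of what such a monitoring check could look like. It uses the Population Stability Index (PSI) to compare a feature's training-time distribution against its live distribution; the threshold of 0.2 is a common rule of thumb, and all names and numbers below are illustrative, not a prescribed implementation.

```python
import numpy as np

def psi(expected, actual, bins=10, eps=1e-6):
    """Population Stability Index between two 1-D samples.

    Bins are derived from the expected (training) sample; the outer
    edges are widened to +/-inf so shifted live values are not dropped.
    A rule of thumb flags PSI > 0.2 as significant drift.
    """
    edges = np.histogram_bin_edges(expected, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected) + eps
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual) + eps
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

# Illustrative data: a feature whose mean shifted after deployment.
rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 10_000)  # feature at training time
live = rng.normal(1.0, 1.0, 10_000)   # same feature in production, shifted

if psi(train, live) > 0.2:            # rule-of-thumb drift threshold
    print("drift detected: consider retraining or deprecating the model")
```

Running a check like this per feature on a schedule, and alerting when the threshold is crossed, gives the dev team a concrete trigger for the retrain-or-deprecate decision.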
