iMerica/safely

Safely

Add to Slack (pending approval from Slack)

Background

This is a Slack bot that uses machine learning to infer whether content posted to Slack is NSFW.

Architecture

There are two core services involved:

  • Safely Listener, the real-time Slack events processor that dispatches classification tasks.
  • Safely Moderator, the classifier that receives tasks from the listener and makes inferences about the content.
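The listener/moderator split described above can be sketched as follows. This is a minimal illustration, not the project's actual code: the in-process `queue.Queue` stands in for whatever transport connects the two services, the `file_url` event field is a hypothetical schema, and the scoring function is a stub where a real moderator would run an NSFW model.

```python
import queue
import threading

tasks = queue.Queue()
results = []

def listener(event):
    """Safely Listener sketch: receive a Slack event, dispatch a classification task."""
    if event.get("type") == "message" and "file_url" in event:
        tasks.put(event["file_url"])

def moderator():
    """Safely Moderator sketch: consume tasks and score them."""
    while True:
        url = tasks.get()
        if url is None:  # sentinel: shut down the worker
            break
        # Stub inference: a real moderator would fetch the file and run it
        # through an NSFW classifier (e.g. an Open NSFW model).
        score = 0.9 if "nsfw" in url else 0.1
        results.append((url, score))
        tasks.task_done()

worker = threading.Thread(target=moderator)
worker.start()
listener({"type": "message", "file_url": "https://example.com/cat.png"})
listener({"type": "message", "file_url": "https://example.com/nsfw.png"})
tasks.put(None)
worker.join()
print(results)
```

Decoupling the listener from the moderator this way keeps event ingestion responsive even when inference is slow.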

Usage

If you don't want to deploy and run your own infrastructure, you can join Safely (coming soon), a paid content moderation service that provides the same capability in addition to:

  • Fine-tuned inference parameters.
  • Continuously trained classification models.
  • Custom reporting and behaviors.
  • Additional content-moderation insights.
  • Daily/weekly digests.

Initial Setup

If you prefer to run this yourself, follow these initial steps:

  • Create a Slack App and Bot app in Slack.
  • Export your Slack token as SAFELY_SLACKBOT_TOKEN=<your token>.
  • Export your comma-separated list of Slack admins as SAFELY_ADMINS_LIST=@michael,@admin.
  • Invite your Slack bot to all the channels you would like to monitor.
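The two environment variables above might be consumed like this inside the services. The placeholder token value is hypothetical, and `setdefault` is used here only so the sketch runs without a real environment configured:

```python
import os

# Hypothetical defaults for illustration; in a real deployment these come
# from the exports in the setup steps.
os.environ.setdefault("SAFELY_SLACKBOT_TOKEN", "xoxb-example-token")
os.environ.setdefault("SAFELY_ADMINS_LIST", "@michael,@admin")

token = os.environ["SAFELY_SLACKBOT_TOKEN"]
# SAFELY_ADMINS_LIST is comma-separated, so split it into a list of handles.
admins = os.environ["SAFELY_ADMINS_LIST"].split(",")
print(admins)
```

Keeping the token out of source control and in the environment is the usual practice for Slack bot credentials.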

Deploy

See Deployment Options to learn how to deploy using Kubernetes or Docker.

Disclaimer

  • Safely focuses on basic NSFW content including nudity and profanity. See the Open NSFW project for further information about the scope of the model.

  • This tool is imperfect. There will be some false positives and false negatives. Please see the license file to learn more about guarantees (there are none).

Credits

Special thanks to:

License

MIT