A fast, accurate API for detecting NSFW images.
A MyBB extension that executes moderation actions based on content quality.
Software and resources for mitigating online gender-based violence in India.
Deep learning-based content moderation from text, audio, video, and image input modalities.
An AI agent for automated content moderation of movies and books. It uses Retrieval-Augmented Generation (RAG) with large language models to identify, discover, and summarize potentially concerning content for informed decision-making.
BullyBarrier is a proactive solution against cyberbullying, using automatic detection and user alerts to identify and mitigate bullying comments in real time.
Cascades entity status to all referenced entities in an Islandora repository item
Content Moderation using Reality.Eth with Kleros arbitration
Varbase Workflow provides a toolkit of robust, ready-to-use enterprise content moderation features. It scales from small sites with a simple publishing workflow to enterprises with complex ones, by leveraging the Drupal Content Moderation and Workflow modules.
🔍 Curated papers & blogs on ML in risk industries like 🛡️fraud detection, 📈content moderation, and more!
🚀 Cyberbullying Detection API: Safeguarding the Digital Realm 🛡️
Collection of scripts to aggregate image data for the purposes of training an NSFW Image Classifier
A React application that serves as a report queue and content moderation system, designed to be extended upon for any use case.
A social media application that uses Google's Perspective API to filter out toxic posts. Connect with friends around the world using OctoVerse: share your thoughts as a post or a message, and follow your friends to see what they are up to.
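As a sketch of the filtering step such an app performs, the helper below reads the summary TOXICITY score from the documented JSON shape of a Perspective API `comments:analyze` response. The 0.8 threshold and the decision rule are illustrative assumptions, not taken from the project.

```python
# Minimal sketch: decide whether to filter a post from a Perspective API
# TOXICITY result. Assumptions: the dict follows the documented response
# shape; the 0.8 threshold is illustrative, not the project's value.

def is_toxic(analyze_response: dict, threshold: float = 0.8) -> bool:
    """Return True if the summary TOXICITY score meets the threshold."""
    score = analyze_response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
    return score >= threshold

# Example response in the shape comments:analyze returns:
sample = {
    "attributeScores": {
        "TOXICITY": {"summaryScore": {"value": 0.91, "type": "PROBABILITY"}}
    }
}
print(is_toxic(sample))  # → True
```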
A WordPress plugin that moderates content with the OpenAI Moderation API, helping you keep abusive content off your website.
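A plugin like this ultimately reduces to a decision over the JSON the OpenAI Moderation API (`/v1/moderations`) returns. The sketch below parses that documented response shape; the block-anything-flagged rule is an illustrative assumption, not the plugin's actual logic.

```python
# Minimal sketch: decide whether to hold a comment, given the JSON shape
# returned by the OpenAI Moderation API (/v1/moderations). The rule here
# (block if any result is flagged) is illustrative, not the plugin's.

def should_block(moderation_response: dict) -> bool:
    """Return True if any moderation result is flagged."""
    return any(result["flagged"] for result in moderation_response["results"])

# Example response in the documented shape:
sample = {
    "results": [
        {"flagged": True, "categories": {"harassment": True, "hate": False}}
    ]
}
print(should_block(sample))  # → True
```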
Dataset and code implementation for the paper "Decoding the Underlying Meaning of Multimodal Hateful Memes" (IJCAI'23).
Code implementation for the paper "Evaluating GPT-3 Generated Explanations for Hateful Content Moderation" (IJCAI'23).
🤝 Using large language models to seamlessly help content moderators make better decisions, faster.