Using current machine learning methods, an artificial intelligence (AI) is trained on data, learns relationships in that data, and is then deployed to operate on new data in the world. For example, an AI can be trained on images of traffic signs, learn what stop signs and speed limit signs look like, and then be deployed as part of an autonomous car. The problem is that an adversary who can disrupt the training pipeline can insert Trojan behaviors into the AI. For example, an AI learning to distinguish traffic signs can be given just a few additional examples of stop signs with yellow squares on them, each labeled "speed limit sign." The AI then learns to treat the yellow square as a trigger: if it were deployed in a self-driving car, an adversary could cause the car to run the stop sign simply by putting a sticky note on it. The goal of the TrojAI program is to combat such Trojan attacks by inspecting AIs for Trojans. This page lists resources for research on detecting Trojan attacks, including a leaderboard for Trojan detectors and code to create AIs with and without Trojans at scale.
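To make the attack concrete, below is a minimal sketch of the kind of trigger-based data poisoning described above: a small yellow patch is stamped onto a few training images and their labels are flipped to the attacker's target class. This is an illustrative example only, not code from this repository; the `poison_examples` helper, the class indices, and the assumption that images are NumPy arrays of shape (N, H, W, 3) are all hypothetical.

```python
import numpy as np

def poison_examples(images, labels, target_label, patch_size=6, poison_frac=0.02, rng=None):
    """Stamp a yellow-square trigger onto a small fraction of the training
    images and relabel them with the attacker's target class."""
    rng = np.random.default_rng() if rng is None else rng
    images, labels = images.copy(), labels.copy()
    n_poison = max(1, int(poison_frac * len(images)))
    poison_idx = rng.choice(len(images), size=n_poison, replace=False)
    yellow = np.array([255, 255, 0], dtype=images.dtype)  # RGB trigger color
    for i in poison_idx:
        # Place the trigger patch in the lower-right corner of the image.
        images[i, -patch_size:, -patch_size:, :] = yellow
        labels[i] = target_label
    return images, labels, poison_idx

# Toy usage: 100 blank 32x32 RGB "stop sign" images (class 0), with 2% of them
# poisoned so the trigger maps them to the "speed limit" class (class 1).
clean_images = np.zeros((100, 32, 32, 3), dtype=np.uint8)
clean_labels = np.zeros(100, dtype=np.int64)
poisoned_images, poisoned_labels, idx = poison_examples(clean_images, clean_labels, target_label=1)
print(f"poisoned {len(idx)} of {len(clean_images)} training examples")
```

A model trained on such a poisoned dataset behaves normally on clean inputs but misclassifies any stop sign bearing the trigger, which is why detection must inspect the trained model rather than rely on clean-data accuracy.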