Adversarial attack methods, FGSM and TGSM, implemented in Chainer
Caffe code for the paper "Adversarial Manipulation of Deep Representations"
Common adversarial noise for fooling a neural network
Simple PyTorch implementation of FGSM and I-FGSM
Wasserstein Introspective Neural Networks (CVPR 2018 Oral)
Physical adversarial attack for fooling the Faster R-CNN object detector
Fixed baseline for MCS 2018
PyTorch implementation of "One Pixel Attack for Fooling Deep Neural Networks"
Adversarial attacks generated for the ACL paper "Did the Model Understand the Question?"
PyTorch implementation of NAG (https://github.com/val-iisc/nag)
Implementation of "Decoupled Networks" (CVPR 2018)
Tensorflow implementation for generating adversarial examples using convex programming
Code to reproduce results in our ACL 2018 paper "Did the Model Understand the Question?"
MCS 2018. Adversarial Attacks on Black Box Face Recognition
Thesis: Detecting Adversaries in DQNs and Computer Vision using Bayesian CNNs
Performing the C&W attack on recurrent neural networks
Generalized Data-free Universal Adversarial Perturbations
Various adversarial attack methods implemented in PyTorch on the CIFAR-10 dataset
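Several of the repositories above implement FGSM (the Fast Gradient Sign Method). As a quick orientation, here is a minimal PyTorch sketch of the technique: perturb the input by `eps` along the sign of the loss gradient, then clamp back to the valid pixel range. The toy model and data below are illustrative assumptions, not taken from any of the listed repos.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, eps=0.03):
    """One-step FGSM: nudge each input element by eps in the
    direction that increases the classification loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    # keep the adversarial example inside the valid input range
    return x_adv.clamp(0.0, 1.0).detach()

# Toy demo on a randomly initialized linear classifier (hypothetical setup).
torch.manual_seed(0)
model = nn.Linear(8, 3)
x = torch.rand(4, 8)            # batch of 4 "images" with values in [0, 1]
y = torch.randint(0, 3, (4,))   # arbitrary labels
x_adv = fgsm_attack(model, x, y, eps=0.1)
print(float((x_adv - x).abs().max()))  # perturbation magnitude is bounded by eps
```

The iterative variant (I-FGSM), also listed above, simply repeats this step with a smaller step size, re-projecting into the eps-ball around the original input after each iteration.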