Adversarial Recommender Systems

This repo contains the PyTorch implementation and LaTeX code for my master's thesis, Adversarial Attacks and Defenses for Image-Based Recommendation Systems using Deep Neural Networks.

Abstract

Today, recommendation systems (RSs) are an established part of modern web services and are deployed by numerous companies. Owing to the success of deep neural networks (DNNs) in other domains, the industry has started implementing RSs using DNNs. However, recent studies have shown DNNs to be vulnerable to targeted adversarial attacks. While several studies have examined adversarial attacks against collaborative filtering (CF) based RSs, only a few studies focusing on content-based RSs have been published. In this thesis, we showed that a visual content-based RS using DNNs is vulnerable to targeted state-of-the-art white-box attacks. We then tested different defense mechanisms based on adversarial training (AT) and showed that AT significantly improved the robustness of our trained models against the performed attacks.

Results

Proposed targeted item-to-item attack setup for an image-based k-NN recommender

Adversarial example, created using PGD with ε=0.03 and 32 iterations

Recommendation results with injected PGD adversarial example
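
The targeted item-to-item attack perturbs a catalog image so that the recommender's CNN embedding moves toward a chosen target item, letting the attacker's item surface in the target's k-NN recommendations. Below is a minimal PyTorch sketch of PGD under that objective; `model` (an image-to-embedding network), `target_emb`, and the step-size heuristic are illustrative assumptions, not the exact interface of `src.attack`.

```python
import torch

def targeted_pgd(model, x, target_emb, epsilon=0.03, steps=32, alpha=None):
    """Targeted PGD sketch: move x's embedding toward the target item's
    embedding while staying inside an L-infinity eps-ball around x."""
    alpha = alpha if alpha is not None else 2.5 * epsilon / steps  # common heuristic
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = torch.norm(model(x_adv) - target_emb, p=2)  # distance to target
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()               # descend the distance
            x_adv = x + (x_adv - x).clamp(-epsilon, epsilon)  # project to eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)                     # keep a valid image
    return x_adv.detach()
```

With ε=0.03 and 32 iterations this matches the setting of the example above; the projection step is what keeps the perturbation bounded.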

Attack success rates (%) for reaching a target rank ≤ 3 for an undefended model

Attack success rates (%) for reaching a target rank ≤ 3 for an adversarially trained model

Attack success rates (%) for reaching a target rank ≤ 3 with ε=0.05 for all evaluated attacks and defenses:

| Defense   | FGSM | PGD-128 | CW-1000 |
| --------- | ---- | ------- | ------- |
| Unsecured | 0.07 | 98.32   | 99.70   |
| AT        | 0.03 | 0.07    | 0.30    |
| CAT       | 0.00 | 14.89   | 32.80   |
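
The criterion behind these numbers: an attack counts as a success if the adversarial item ends up ranked at position 3 or better among the target item's nearest neighbors. A sketch of that check follows; the names, distance metric, and tie handling are assumptions.

```python
import torch

def attack_success(adv_emb, target_emb, catalog_embs, max_rank=3):
    """Sketch of the success criterion: does the adversarial item reach
    rank <= max_rank in the target's nearest-neighbor list?
    (In practice the target itself would be excluded from its own list.)"""
    embs = torch.cat([catalog_embs, adv_emb.unsqueeze(0)], dim=0)
    dists = torch.norm(embs - target_emb, dim=1)  # distance of every item to target
    rank = (dists < dists[-1]).sum().item() + 1   # 1-based rank of adversarial item
    return rank <= max_rank
```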

Installation

To install all required dependencies, a script for Debian-based distros is included:

./setup.sh

For other distros or operating systems, install Python 3.7 and Pipenv manually.

Downloading and preprocessing the data

./data.sh

Training a normal model

pipenv run python -m src.train --batch-size 32 normal --num-epochs 12

Training a model using adversarial training

pipenv run python -m src.train --batch-size 32 adversarial --num-epochs 12
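
Roughly, PGD-based adversarial training (in the style of Madry et al.) replaces each clean batch with adversarial examples crafted against the current model before the usual gradient update. The following is an illustrative, self-contained sketch, not the actual `src.train` code; `loss_fn` and all hyperparameters are assumptions.

```python
import torch

def adversarial_training_step(model, loss_fn, optimizer, images, labels,
                              epsilon=0.03, steps=7, alpha=0.01):
    """One adversarial-training step (sketch): inner maximization via
    untargeted PGD on the training loss, outer minimization via SGD."""
    x_adv = images.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), labels)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                         # ascend
            x_adv = images + (x_adv - images).clamp(-epsilon, epsilon)  # project
            x_adv = x_adv.clamp(0.0, 1.0)
    optimizer.zero_grad()
    adv_loss = loss_fn(model(x_adv.detach()), labels)  # train on adversarial batch
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()
```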

Training a model using curriculum adversarial training

pipenv run python -m src.train --batch-size 32 curriculum-adversarial --num-epochs 12
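
Curriculum adversarial training (CAT, after Cai et al., 2018) ramps the attack strength up over training instead of applying full-strength PGD from the first epoch. A sketch reusing `adversarial_training_step` from above; the schedule is illustrative and may differ from the repo's.

```python
def curriculum_adversarial_training(model, loss_fn, optimizer, loader,
                                    num_epochs=12, max_steps=7, epochs_per_level=2):
    """CAT sketch: the number of PGD steps grows with the epoch, so training
    starts on (nearly) clean data and ends on full-strength attacks."""
    for epoch in range(num_epochs):
        k = min(max_steps, epoch // epochs_per_level)  # 0 steps == clean batch
        for images, labels in loader:
            adversarial_training_step(model, loss_fn, optimizer,
                                      images, labels, steps=k)
```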

Evaluating all models

./evaluate.sh

Attacking all models

./attack.sh

Attacking a single model using FGSM

pipenv run python -m src.attack --model-name normal-24-epochs --epsilon 0.03 fgsm
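
FGSM is the single-step special case of the attack above: one signed-gradient step of size ε. A sketch under the same illustrative embedding interface:

```python
import torch

def targeted_fgsm(model, x, target_emb, epsilon=0.03):
    """Targeted FGSM sketch: a single signed-gradient step that moves the
    image's embedding toward the target embedding."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = torch.norm(model(x_adv) - target_emb, p=2)
    loss.backward()
    with torch.no_grad():
        x_adv = (x_adv - epsilon * x_adv.grad.sign()).clamp(0.0, 1.0)
    return x_adv.detach()
```

A single step rarely reaches the targeted objective, which is consistent with the near-zero FGSM success rates in the results table above.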

Attacking a single model using PGD

pipenv run python -m src.attack --model-name normal-24-epochs --epsilon 0.03 pgd

Attacking a single model using CW

pipenv run python -m src.attack --model-name normal-24-epochs --epsilon 0.03 cw
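
The Carlini-Wagner (CW) attack swaps signed-gradient steps for an Adam-optimized perturbation that trades off its own L2 norm against the attack objective. The sketch below adapts that idea to the embedding objective and is heavily simplified: the original attack also uses a tanh change of variables and a binary search over the trade-off constant c, both omitted here; all names are illustrative.

```python
import torch

def targeted_cw(model, x, target_emb, iters=1000, c=1.0, lr=0.01, epsilon=0.03):
    """Simplified CW-style sketch: jointly minimize the perturbation's L2 norm
    and the distance from the perturbed embedding to the target embedding."""
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(iters):
        x_adv = (x + delta).clamp(0.0, 1.0)
        loss = delta.norm(p=2) + c * torch.norm(model(x_adv) - target_emb, p=2)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-epsilon, epsilon)  # respect the eps budget from the CLI flag
    return (x + delta).detach().clamp(0.0, 1.0)
```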

Cite

@mastersthesis{normann2020advrecsys,
  title={Adversarial Attacks and Defenses for Image-Based Recommendation Systems using Deep Neural Networks},
  author={Philipp Normann},
  year={2020}
}