This repository has been archived by the owner on Mar 28, 2022. It is now read-only.

Some thoughts ... most adversarial examples that look OK to humans do so because... #7

Open
RnMss opened this issue Apr 10, 2018 · 2 comments

Comments

@RnMss

RnMss commented Apr 10, 2018

For a typical example:

[screenshot of an MNIST digit that most readers see as a "4" but that could also be read as a "9"]

A human may read it as a "4" only because we know it is handwriting, and handwriting is done with a pen, stroke by stroke.

If I told you this was not written by hand but printed by a printer, you would probably tell me it is definitely a "9", not a "4". (You might even use your common sense: the printer could be low on ink.)

If I just tell myself these are not handwriting but prints, ink sprayed on water or on paper made of rubber, many of the examples no longer look strange.

So the difference is probably in the training data.

@gongzhitaao
Owner

The MNIST example is only for illustration. For real RGB images, you could make an image adversarial by changing the color of a single pixel. Of course, it depends on the data and the model.
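To make the single-pixel idea concrete, here is a minimal brute-force sketch (not code from this repository). It assumes a Keras image classifier `model` and a float RGB image in [0, 1]; both names are hypothetical. Published one-pixel attacks use a smarter search such as differential evolution, but the idea is the same: repaint one pixel and check whether the prediction changes.

```python
# Minimal sketch of a single-pixel perturbation search, assuming a Keras
# classifier `model` and an RGB image with float values in [0, 1].
# Hypothetical names; brute-force random search, for illustration only.
import numpy as np

def one_pixel_flip(model, image, true_label, n_trials=2000, rng=None):
    """Randomly repaint one pixel at a time until the prediction changes."""
    rng = rng or np.random.default_rng(0)
    h, w, _ = image.shape
    for _ in range(n_trials):
        candidate = image.copy()
        y, x = rng.integers(h), rng.integers(w)
        candidate[y, x] = rng.random(3)  # repaint one pixel with a random color
        pred = np.argmax(model.predict(candidate[None], verbose=0), axis=-1)[0]
        if pred != true_label:
            return candidate, (y, x), pred  # found a single-pixel change that flips the label
    return None  # no flip found within the trial budget
```

A random search like this may not succeed against a well-trained model within the budget; it is only meant to show what "adversarial by changing the color of one pixel" means operationally.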

@RnMss
Author

RnMss commented Apr 14, 2018

The example is also ... just an example.
