Datasets
Standard Dataset
NIPS 2017: Adversarial Learning Development Set (ImageNet-NIPS)
- Submitted by: Hao Wu
- Last updated: Mon, 01/13/2025 - 09:29
- DOI: 10.21227/pxe9-a956
Abstract
Most existing machine learning classifiers are highly vulnerable to adversarial examples. An adversarial example is a sample of input data which has been modified very slightly in a way that is intended to cause a machine learning classifier to misclassify it. In many cases, these modifications can be so subtle that a human observer does not even notice the modification at all, yet the classifier still makes a mistake.
Adversarial examples pose security concerns because they could be used to perform an attack on machine learning systems, even if the adversary has no access to the underlying model.
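To make the idea concrete, here is a minimal sketch of how such a perturbation can be constructed, in the style of the fast gradient sign method. The classifier, weights, and budget below are made-up toy values for illustration, not anything from this dataset or the competition:

```python
import numpy as np

# Toy linear classifier: score = w @ x, predicted class = sign(score).
rng = np.random.default_rng(0)
w = rng.normal(size=100)   # hypothetical classifier weights
x = rng.normal(size=100)   # a "clean" input
eps = 0.1                  # L-infinity bound on the perturbation

clean_score = w @ x
# FGSM-style step: the gradient of the score w.r.t. x is simply w, so move
# every coordinate by at most eps against the sign of the current prediction.
x_adv = x - eps * np.sign(w) * np.sign(clean_score)
adv_score = w @ x_adv

# Each coordinate changes by no more than eps, yet the score shifts by
# eps * sum(|w|) toward the opposite class.
print(f"clean score {clean_score:.2f} -> adversarial score {adv_score:.2f}")
```

The point of the sketch is the asymmetry it exposes: a perturbation that is imperceptibly small per coordinate can still move the classifier's decision by a large amount, which is exactly what the competition's bounded-perturbation attacks exploit.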
To accelerate research on adversarial examples, Google Brain is organizing the Competition on Adversarial Examples and Defenses within the NIPS 2017 competition track. This dataset contains the development images for that competition.
The Competition on Adversarial Examples and Defenses consists of three sub-competitions:
- Non-targeted Adversarial Attack. The goal of the non-targeted attack is to slightly modify the source image so that it will be classified incorrectly by a generally unknown machine learning classifier.
- Targeted Adversarial Attack. The goal of the targeted attack is to slightly modify the source image so that it will be classified as a specified target class by a generally unknown machine learning classifier.
- Defense Against Adversarial Attack. The goal of the defense is to build a machine learning classifier that is robust to adversarial examples, i.e. one that can classify adversarial images correctly.
In each sub-competition you are invited to build and submit a program that solves the corresponding task. At the end of the competition, all attacks will be run against all defenses to evaluate how each attack performs against each defense.
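The cross-evaluation described above can be sketched as a simple accuracy matrix. The function and the attack/defense callables below are hypothetical stand-ins, not the competition's actual evaluation harness:

```python
def evaluate(attacks, defenses, images, labels):
    """Run every attack against every defense.

    Returns a {(attack_name, defense_name): accuracy} matrix, where accuracy
    is the fraction of adversarial images the defense still labels correctly.
    """
    results = {}
    for a_name, attack in attacks.items():
        adv_images = [attack(img) for img in images]
        for d_name, defense in defenses.items():
            correct = sum(defense(img) == lbl
                          for img, lbl in zip(adv_images, labels))
            results[(a_name, d_name)] = correct / len(images)
    return results

# Toy example: "images" are integers and the defense is an identity
# classifier, so an unmodified image is always classified correctly.
attacks = {"noop": lambda x: x, "shift": lambda x: x + 1}
defenses = {"identity": lambda x: x}
scores = evaluate(attacks, defenses, images=[1, 2, 3], labels=[1, 2, 3])
```

In this toy run the "noop" attack leaves accuracy at 1.0 while the "shift" attack drives it to 0.0; the real evaluation fills in the same kind of matrix with every submitted attack and defense.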