Adversarial attack
Most existing machine learning classifiers are highly vulnerable to adversarial examples. An adversarial example is an input that has been modified very slightly, in a way intended to cause a machine learning classifier to misclassify it. In many cases the modification is so subtle that a human observer does not notice it at all, yet the classifier still makes a mistake.
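For a concrete sense of how such a modification can be constructed, below is a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the simplest attacks of this kind: it nudges every input value by a small step `epsilon` in the direction that increases the classifier's loss. The PyTorch model, the `epsilon` value, and the toy inputs are illustrative placeholders, not part of any particular dataset or method described here.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, epsilon=0.03):
    """Fast Gradient Sign Method: perturb the input along the sign of the
    loss gradient, bounded element-wise by epsilon so the change stays
    visually subtle."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Step by epsilon in the direction that increases the loss.
    x_adv = x + epsilon * x.grad.sign()
    # Keep the perturbed input in the valid pixel range [0, 1].
    return x_adv.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Toy demo with a randomly initialized linear "classifier" and a fake
    # 28x28 grayscale image; both are hypothetical stand-ins.
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
    x = torch.rand(1, 1, 28, 28)
    label = torch.tensor([3])
    x_adv = fgsm_attack(model, x, label)
    print((x_adv - x).abs().max())  # perturbation never exceeds epsilon
```

Even with such a small, bounded perturbation, attacks of this form are often enough to flip the prediction of an undefended classifier.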