
This dataset is for the paper “FMOBA: Frequency-Domain Multi-Objective Black-Box Adversarial Attacks for SAR Image”
Most existing machine learning classifiers are highly vulnerable to adversarial examples. An adversarial example is an input that has been modified very slightly, in a way intended to cause a machine learning classifier to misclassify it. These modifications can be so subtle that a human observer does not notice them at all, yet the classifier still makes a mistake.
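The idea of a barely perceptible change that flips a classifier's decision can be sketched with a toy example. The snippet below is a generic, illustrative sketch only (it is not the FMOBA frequency-domain black-box attack from the paper): it perturbs each input component of a toy linear classifier by a small amount, just enough to push the score across the decision boundary.

```python
import numpy as np

# Generic illustration of an adversarial example (NOT the paper's FMOBA
# attack): a small per-component perturbation flips the decision of a
# toy linear classifier.
rng = np.random.default_rng(0)

w = rng.normal(size=256)   # weights of a toy linear classifier
x = rng.normal(size=256)   # a clean input sample

def predict(weights, sample):
    """Binary decision: 1 if the score is positive, else 0."""
    return int(weights @ sample > 0)

clean_label = predict(w, x)

# Push the score just past the decision boundary, toward the other class.
# Each component moves by at most `eps`, so the change stays small.
direction = 1.0 if clean_label == 0 else -1.0
eps = (abs(w @ x) + 1e-3) / np.abs(w).sum()
delta = direction * eps * np.sign(w)

adv_label = predict(w, x + delta)
print(clean_label, adv_label, float(np.abs(delta).max()))
```

The perturbation magnitude per component (`eps`) is small relative to the input scale, yet the predicted label changes, which is the core phenomenon the paper's attack exploits in the SAR-image setting.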