Making images unlearnable through imperceptible perturbations aims to prevent unauthorized scraping of images for training deep neural networks (DNNs). Most existing methods for breaking such unlearnable data apply image transformation techniques to disrupt the added perturbations, while modifying the classification tasks (target classes) of DNNs has received little attention as an alternative approach. In this paper, we explore the vulnerabilities of unlearnable data from this perspective, focusing on modifying the classification tasks rather than transforming the images.