Data and code for "Adversarial Destabilization Attacks to Direct Data-driven Control"

Citation Author(s):
Hampei Sasahara
Submitted by:
Hampei Sasahara
Last updated:
Mon, 07/08/2024 - 15:58
DOI:
10.21227/bs5n-pz03

Abstract 

This repository provides the MATLAB code and data for the work [1].

"code" folder includes MATLAB codes.

Requirements: Control System Toolbox, Parallel Computing Toolbox, and CVX.

"data" folder includes the data described in the manuscript.

[1] Hampei Sasahara, "Adversarial Destabilization Attacks to Direct Data-driven Control," submitted to IEEE OJ-CSYS, 2023.

Abstract: This paper examines the vulnerability of direct data-driven control to malicious attacks that aim to destabilize the resulting closed-loop system by introducing a subtle yet sophisticated perturbation into the observed input and output data. An effective attack method, called the directed gradient sign method (DGSM), is devised. It builds on the fast gradient sign method (FGSM), which was originally developed in the context of image classification. DGSM uses the gradient of the eigenvalues of the resulting closed-loop system to craft a severe perturbation in the direction that degrades the system's stability. This study demonstrates that the attack can destabilize the system even when the original closed-loop system designed with the clean data exhibits a considerable stability margin. Remarkably, the attack can have a significant impact even when the attacker lacks complete knowledge of the data and of the hyperparameters in the controller design algorithm. Furthermore, to enhance resilience against such attacks, regularization methods originally developed for handling random disturbances are explored, and their effectiveness is verified through numerical experiments. Finally, a statistical analysis with randomly generated systems identifies the system features that most strongly influence the attack's impact.
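To illustrate the gradient-sign idea behind DGSM, the following is a minimal, self-contained sketch (in Python rather than the repository's MATLAB, and independent of the actual code in the "code" folder). It uses a toy plant and a simple certainty-equivalence design as a stand-in for the paper's direct data-driven design, and approximates the eigenvalue gradient with finite differences instead of the analytic gradients used in the paper. All system matrices and parameters here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy plant x+ = A x + B u (hypothetical; not a system from the paper)
n, T = 3, 8
A = np.array([[0.9, 0.2, 0.0],
              [0.0, 0.8, 0.1],
              [0.1, 0.0, 0.7]])
B = np.eye(n)  # square input matrix keeps the toy design simple

# Input/state data collected from the plant (noiseless for clarity)
U0 = rng.standard_normal((n, T))
X0 = rng.standard_normal((n, T))
X1 = A @ X0 + B @ U0

Ad = np.diag([0.2, 0.4, 0.6])  # desired stable closed-loop matrix

def design_gain(X1_data):
    """Certainty-equivalence design from data: identify [B A], match A + B K = Ad."""
    BA = X1_data @ np.linalg.pinv(np.vstack([U0, X0]))  # least-squares identification
    B_hat, A_hat = BA[:, :n], BA[:, n:]
    return np.linalg.solve(B_hat, Ad - A_hat)

def closed_loop_radius(X1_data):
    """Spectral radius of the TRUE closed loop under the gain designed from data."""
    K = design_gain(X1_data)
    return np.max(np.abs(np.linalg.eigvals(A + B @ K)))

rho_clean = closed_loop_radius(X1)  # 0.6 with clean data: the design recovers Ad

# DGSM-style attack: perturb the observed data X1 in the sign direction of the
# gradient of the closed-loop spectral radius, approximated by central differences
h, eps = 1e-6, 0.05
grad = np.zeros_like(X1)
for i in range(n):
    for j in range(T):
        E = np.zeros_like(X1)
        E[i, j] = h
        grad[i, j] = (closed_loop_radius(X1 + E) - closed_loop_radius(X1 - E)) / (2 * h)

X1_adv = X1 + eps * np.sign(grad)  # small sign-based perturbation of the data
rho_adv = closed_loop_radius(X1_adv)
print(f"spectral radius, clean data: {rho_clean:.3f}, attacked data: {rho_adv:.3f}")
```

The sign step mirrors FGSM: each data entry is moved by a fixed small amount in whichever direction locally increases the dominant closed-loop eigenvalue magnitude, which is how a "subtle" bounded perturbation can accumulate into a large stability loss.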