Deep learning systems are widely deployed in real-time applications, including resource-constrained environments such as mobile and IoT devices.
Dynamic Deep Learning Systems (DDLSs) adjust inference computation based on input complexity to improve efficiency and reduce overhead.
However, the dynamic nature of DDLSs exposes them to efficiency adversarial attacks, which exploit these adaptive mechanisms to degrade system performance.
This paper introduces a taxonomy of efficiency attacks on DDLSs, categorizing them into three types according to the dynamic behavior they exploit: dynamic computations per inference, dynamic inference iterations, and dynamic output production for downstream tasks.
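To make the first category concrete, the sketch below is a minimal, hypothetical early-exit classifier (an illustrative assumption, not an architecture or attack from the surveyed systems): inference halts at the first intermediate exit whose softmax confidence exceeds a threshold, which is exactly the adaptive behavior an efficiency attack would target by crafting inputs that keep every exit's confidence low.

```python
import torch
import torch.nn as nn

class EarlyExitNet(nn.Module):
    """Illustrative multi-exit classifier (hypothetical, for exposition only).

    Each block is followed by an auxiliary classifier; inference stops at the
    first exit whose softmax confidence exceeds `threshold`. An efficiency
    attack perturbs inputs so that no exit is confident enough, forcing the
    full network (worst-case cost) to run for every sample.
    """

    def __init__(self, in_dim=32, hidden=64, num_classes=10, threshold=0.9):
        super().__init__()
        self.threshold = threshold
        self.blocks = nn.ModuleList(
            [nn.Sequential(nn.Linear(in_dim if i == 0 else hidden, hidden), nn.ReLU())
             for i in range(3)]
        )
        self.exits = nn.ModuleList([nn.Linear(hidden, num_classes) for _ in range(3)])

    def forward(self, x):
        for i, (block, exit_head) in enumerate(zip(self.blocks, self.exits)):
            x = block(x)
            logits = exit_head(x)
            confidence = torch.softmax(logits, dim=-1).max(dim=-1).values
            # Dynamic computation: stop as soon as this exit is confident.
            if i < len(self.blocks) - 1 and bool((confidence >= self.threshold).all()):
                return logits, i  # early exit -> fewer layers executed
        return logits, len(self.blocks) - 1  # full depth -> worst-case cost


if __name__ == "__main__":
    model = EarlyExitNet()
    benign = torch.randn(1, 32)
    logits, exit_taken = model(benign)
    print(f"exited at block {exit_taken}")
```

Under this assumed setup, an efficiency adversary would optimize a perturbation that suppresses the maximum softmax confidence at every intermediate exit, so `exit_taken` is always the last block and the per-inference computation is maximized; the other two categories analogously inflate the number of inference iterations or the size of the produced output.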
The study analyzes adversarial strategies targeting the efficiency of DDLSs and highlights the challenges in securing these systems.
Existing defense mechanisms are examined, revealing their limitations against the evolving landscape of efficiency attacks and underscoring the need for innovative mitigation strategies.