Sharpness-aware optimization for real-world adversarial attacks for diverse compute platforms with enhanced transferability
2024
In recent years, deep neural networks (DNNs) have become integral to many real-world applications. A pressing concern in these deployments is their vulnerability to adversarial attacks. In this work, we focus on the transferability of adversarial examples in a real-world deployment setting involving both a cloud model and an edge model. The cloud model is a black-box victim model, while the edge model is a surrogate model that is fully accessible to users. We investigate scenarios in which attackers leverage information from the known surrogate model to generate adversarial examples that attack the unknown black-box victim model. Existing methods often optimize adversarial example generation along the steepest gradients estimated from the surrogate model, which do not generalize effectively to the victim model. To better gauge real-world adversarial risks in a cloud-edge deployment setting, we propose a novel attack mechanism that enhances transferability by incorporating a sharpness-aware objective into the optimization process. Our evaluation on image classification benchmarks demonstrates that our method significantly improves the transferability of adversarial examples, even against foundational computer vision models such as OFA-Large, showcasing its potential as a new standard for assessing attack transferability in a cloud-edge hybrid deployment scenario.
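The abstract does not include pseudocode, but to make the idea concrete, the sketch below shows one common way a sharpness-aware objective can be folded into a PGD-style transfer attack: each outer ascent step is taken at a point shifted toward the lowest-loss direction within a small neighborhood, so the attack favors flat maxima of the surrogate loss rather than sharp ones. The function name, step sizes, and the inner "reverse" step are illustrative assumptions, not the authors' exact method.

```python
import torch
import torch.nn.functional as F

def sharpness_aware_attack(surrogate, x, y, eps=8/255, step=2/255,
                           steps=10, rho=4/255):
    """Sketch of a sharpness-aware PGD-style attack on a surrogate model.

    At each step, an inner perturbation moves x_adv toward the
    lowest-loss point within a rho-ball; the outer update then ascends
    the loss evaluated at that shifted point, so the final x_adv sits
    in a neighborhood where the loss stays uniformly high -- a common
    proxy for transferability to an unseen victim model.
    """
    x_adv = x.clone().detach()
    for _ in range(steps):
        # Inner step: estimate the loss-minimizing direction in the rho-ball.
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(surrogate(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_inner = (x_adv - rho * grad.sign()).detach()

        # Outer step: ascend the loss evaluated at the shifted point.
        x_inner.requires_grad_(True)
        loss = F.cross_entropy(surrogate(x_inner), y)
        grad = torch.autograd.grad(loss, x_inner)[0]
        x_adv = x_adv.detach() + step * grad.sign()

        # Project back into the eps-ball around x and the valid pixel range.
        x_adv = torch.clamp(x_adv, x - eps, x + eps).clamp(0, 1)
    return x_adv.detach()
```

A plain PGD attack would ascend the gradient at x_adv itself; the inner step is what distinguishes the sharpness-aware variant, penalizing adversarial examples whose loss collapses under small displacements and hence tends not to survive the transfer to a different model.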