Publications

Team RoMa @ AADD-2025: On the Generation of Transferable and Visually Imperceptible Adversarial Attacks Against Deepfake Detectors

Author: Göller, Nicolas; Graner, Lukas; Frick, Raphael; Bunzel, Niklas
Date: 2025
Type: Conference Paper
Abstract: The rapid development of generative AI, and in particular deepfake technology, enables the seamless creation and manipulation of visual content. As the resulting syntheses are often indistinguishable from authentic images, they threaten the integrity of visual evidence. While forensic detectors can be used to detect syntheses, they can themselves become targets of adversarial attacks. In the "Adversarial Attacks on Deepfake Detectors" challenge, competitors were tasked with perturbing a dataset of AI-synthesized images so that four classifiers would mistakenly accept them as authentic. In this paper, we introduce our solution, a white-box adversarial framework that injects globally distributed, data-driven noise perturbations optimized via additional surrogate Vision Transformer and EfficientNet classifiers. Empirical comparisons to both conventional post-processing transforms and localized adversarial patches demonstrate that our approach based on globally distributed noise achieves the highest attack success rates across all public detectors while preserving superior SSIM, confirming its efficacy and visual imperceptibility. In the final evaluation of the challenge, our proposed approach placed third with a final score of 2679.
Conference: International Conference on Multimedia 2025
URL: https://publica.fraunhofer.de/handle/publica/499532
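The abstract describes a white-box attack that optimizes a globally distributed noise perturbation against surrogate classifiers while keeping the result visually imperceptible. A minimal sketch of that general idea is a PGD-style loop under an L∞ budget; everything below (the toy linear surrogate, the step size, the budget of 8/255) is an illustrative assumption and not the authors' actual method or code:

```python
import numpy as np

def pgd_global_noise(image, surrogate_grad, steps=10, eps=8/255, alpha=2/255):
    """PGD-style globally distributed perturbation (illustrative sketch).

    image:          flattened image with values in [0, 1]
    surrogate_grad: gradient of the surrogate detector's "fake" score
                    w.r.t. the input (white-box access assumed)
    eps:            L-infinity budget keeping the noise imperceptible
    """
    delta = np.zeros_like(image)
    for _ in range(steps):
        g = surrogate_grad(image + delta)
        # Step against the "fake" score so the detector accepts the image
        delta = np.clip(delta - alpha * np.sign(g), -eps, eps)
        # Keep the perturbed image in the valid pixel range
        delta = np.clip(image + delta, 0.0, 1.0) - image
    return image + delta

# Toy linear surrogate: fake-score = w . x, so its input gradient is w
rng = np.random.default_rng(0)
w = rng.normal(size=64)
x = rng.uniform(size=64)
adv = pgd_global_noise(x, lambda z: w)
```

In practice the gradient would come from backpropagating through the surrogate Vision Transformer and EfficientNet models, and the small L∞ budget is what keeps SSIM high relative to localized patch attacks, whose changes concentrate in one region.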