| Abstract | Neural networks have become essential to modern applications, excelling in tasks such as image recognition, language translation, and predictive analytics. In security, they are widely used, among other things, as part of identity verification pipelines that often combine automatic recognition with human verification. However, automated methods face challenges from adversarial attacks, in which malicious modifications to an input can deceive networks, thus compromising their reliability. With defenses evolving to detect and reverse such attempts, and with adversarially attacked samples failing manual verification by a human, the question arises whether deepfakes such as face swapping and facial reenactment can serve as a new kind of adversarial attack. In this paper, we explore whether deepfakes can deceive both neural networks and humans by analyzing state-of-the-art methods and introducing a novel one-shot face swapping technique that blends reenactment and swapping for high-quality results and improved attack success rates of up to 11% compared to current state-of-the-art face swapping techniques. |
|---|---|