Publications

Towards Resource-Efficient Deepfake Detection

Author: Frick, Raphael; Petri, Matthias
Date: 2025
Type: Conference Paper
Abstract: Deepfake technology, which allows media content manipulation using AI, poses significant risks to society. While current deepfake detection methods primarily utilize deep neural networks like CNNs and Vision Transformers, they often demand substantial computational resources, limiting their practical application, especially in scenarios requiring real-time processing. This paper explores enhancing the efficiency of deepfake classifiers by focusing on model architectures like EfficientNet-B3, ResNet50, and Vision Transformer (ViT-B/16), and implementing optimization techniques such as quantization and pruning. Our evaluation aims to minimize inference time and memory consumption while maintaining detection performance, facilitating real-time processing. Quantization-Aware Training (QAT) emerges as the most effective optimization strategy, significantly increasing inference speed with minimal impact on recognition accuracy, making QAT-ResNet50 and QAT-EfficientNet-B3 promising solutions for efficient CPU-based deepfake detection.
Conference: Workshop on Security Implications of Deepfakes and Cheapfakes 2025
URL: https://publica.fraunhofer.de/handle/publica/490843
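
The abstract highlights Quantization-Aware Training of ResNet50 as an effective route to fast CPU inference. The sketch below illustrates what such a QAT workflow can look like in PyTorch's eager-mode quantization API; it is not the authors' code, and the binary real/fake head, the dataset, the learning rate, and the "fbgemm" x86 backend are all illustrative assumptions.

    import torch
    import torch.nn as nn
    from torch.ao.quantization import get_default_qat_qconfig, prepare_qat, convert
    from torchvision.models.quantization import resnet50

    # Quantization-ready ResNet50; the 2-class real/fake head is an assumption,
    # not a detail taken from the paper.
    model = resnet50(weights=None, quantize=False)
    model.fc = nn.Linear(model.fc.in_features, 2)

    # QAT inserts fake-quantization observers that need training mode to
    # collect activation statistics.
    model.train()
    model.fuse_model(is_qat=True)                      # fuse Conv+BN(+ReLU) blocks
    model.qconfig = get_default_qat_qconfig("fbgemm")  # x86 CPU backend (assumed)
    prepare_qat(model, inplace=True)

    # Placeholder fine-tuning loop; `train_loader` would yield (image, label)
    # batches from a deepfake dataset, which the abstract does not specify.
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    # for images, labels in train_loader:
    #     optimizer.zero_grad()
    #     loss = criterion(model(images), labels)
    #     loss.backward()
    #     optimizer.step()

    # After fine-tuning, fold the observers into real int8 kernels for CPU inference.
    model.eval()
    int8_model = convert(model)

Because the fake-quantization observers are active during fine-tuning, the network learns weights that tolerate int8 rounding, which is why QAT typically loses less accuracy than post-training quantization while delivering the same CPU speed-up.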