Publications

A Concise Analysis of Pasting Attacks and their Impact on Image Classification

Author: Bunzel, Niklas; Graner, Lukas
Date: 2023
Type: Conference Paper
Abstract: Neural networks are used for a variety of tasks in a wide range of applications, including high-security applications such as face recognition systems. These are used for identification, authentication and authorization. However, neural networks have been shown to be vulnerable to a variety of attacks that apply small perturbations to an input image in order to alter the predictions of a target model. In this paper, we present a simple pasting attack, which inserts objects, such as the face of a target, into a source image. Since it does not rely on gradients, it can be applied to any black-box image classifier. During evaluation, an average of 4.6 queries was sufficient to render an attack on a FaneNet model successful, 1.8 queries for an ImageNet classifier and 7.7 for the unknown black-box classifier used in the MLSec Competition. By relying solely on simple image operations, such as translation, scaling, rotation and transparency changes, the approach is lightweight and can be implemented in a few lines of code. We make our code publicly available at: https://github.com/bunni90/FacePastingAttack.
Conference: International Conference on Dependable Systems and Networks Workshops 2023
URL: https://publica.fraunhofer.de/handle/publica/450754
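
The abstract describes the attack as compositing a randomly transformed patch onto a source image and querying the target model until the prediction flips. The following Python sketch illustrates that loop under stated assumptions: the black-box model is exposed as a hypothetical classify(image) -> label callable, and the function name, parameter ranges and loop structure are illustrative, not the authors' implementation (for that, see https://github.com/bunni90/FacePastingAttack).

    # Minimal sketch of a gradient-free pasting attack, assuming the patch is
    # smaller than the source image and classify(image) -> label is the
    # (hypothetical) black-box interface.
    import random
    from PIL import Image

    def pasting_attack(source, patch, classify, target_label, max_queries=50):
        """Randomly transform `patch` and paste it onto `source` until the
        black-box `classify` predicts `target_label` or the budget is spent."""
        for query in range(1, max_queries + 1):
            # Simple image operations only: rotation, scaling, transparency.
            angle = random.uniform(-30.0, 30.0)
            scale = random.uniform(0.3, 0.8)
            opacity = random.uniform(0.6, 1.0)

            transformed = patch.convert("RGBA").rotate(angle, expand=True)
            w = max(1, int(transformed.width * scale))
            h = max(1, int(transformed.height * scale))
            transformed = transformed.resize((w, h))

            # Scale the existing alpha channel so the corners exposed by the
            # rotation stay fully transparent.
            alpha = transformed.getchannel("A").point(lambda v: int(v * opacity))
            transformed.putalpha(alpha)

            # Random translation within the source image bounds.
            x = random.randint(0, max(source.width - w, 0))
            y = random.randint(0, max(source.height - h, 0))

            candidate = source.convert("RGBA")
            candidate.alpha_composite(transformed, (x, y))
            candidate = candidate.convert("RGB")

            # Gradient-free: exactly one forward query to the black box
            # per candidate image.
            if classify(candidate) == target_label:
                return candidate, query
        return None, max_queries

Because each candidate costs exactly one model query, the low average query counts reported in the abstract (1.8 to 7.7 depending on the target model) correspond directly to a handful of forward passes.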