From our research: Greater fairness in facial recognition

28/11/2025

Prof. Dr.-Ing. Naser Damer and his team at ATHENE are developing innovative methods for fair biometric systems. In this interview, he explains how continuous demographic labels instead of discrete categories reduce bias in facial recognition systems and what role this research plays in trustworthy AI applications.

Prof. Damer, facial recognition systems are being used more and more frequently—from smartphones to border control. Why is fairness even an issue with such procedures?

Naser Damer: Facial recognition systems are based on machine learning. This means that they are trained using large quantities of facial images in order to recognize similarities and differences. However, this training data is often not evenly distributed. If certain ethnic groups are overrepresented, the system learns to recognize these faces better. The system is then less accurate with other groups. This leads to distortions, known as biases. The following example illustrates why this can be a problem: when biometric systems are used for border control or by the police to identify individuals, they should deliver equally good results for all ethnic groups. If there is a trained bias here, certain groups will be disadvantaged.

You and your team have developed a new method to reduce this bias: the multidimensional ethnicity score. What is the core of this approach?

Naser Damer: Previous methods have mostly worked with fixed categories such as “white,” “Asian,” or “African.” However, this classification is very rough and does not reflect the actual diversity of human faces. We have therefore introduced a new labeling system that uses continuous demographic values. This means that instead of a rigid category, each face is given a so-called multidimensional ethnicity score that describes the degree of affiliation with several ethnic groups. This allows for fluid transitions and mixed forms that often occur in reality.
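The difference between the two labeling schemes can be sketched in a few lines. This is an illustrative example only: the group names and percentages are placeholders, not values from the paper.

```python
import numpy as np

# Discrete labeling: each face is forced into one rigid category.
discrete_label = "group_a"

# Continuous labeling: a multidimensional score describing the degree of
# affiliation with several demographic groups (e.g. 70% A, 20% B, 10% C).
# Mixed forms and fluid transitions are representable directly.
continuous_label = np.array([0.70, 0.20, 0.10])

# The affiliations are normalized so they sum to one.
assert np.isclose(continuous_label.sum(), 1.0)
```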

How exactly does this labeling work in practice?

Naser Damer: For each image in the training data set, we estimated the extent to which it corresponds to characteristics of different ethnic groups. For example, 70 percent demographic group A, 20 percent group B, 10 percent group C. These values form a continuum rather than clear boundaries. We then balanced the data sets using so-called balancing scores. This means that if a certain combination of characteristics occurred too frequently in the data, we tended to favor faces with rare characteristics. This results in a more diverse and representative training set.
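The balancing idea described above can be approximated with a simple inverse-frequency reweighting: combinations of demographic scores that occur often in the data get small sampling weights, rare combinations get large ones. This is a minimal sketch under that assumption, not the authors' exact balancing-score formulation; the toy data and bin granularity are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: each row is one face's continuous demographic score
# over three groups (rows sum to 1). Real scores would be estimated
# from the training images.
labels = rng.dirichlet(alpha=[2.0, 1.0, 0.5], size=1000)

# Coarsely bin each label combination to estimate how common it is.
bins = (labels * 4).astype(int).clip(max=3)
keys = [tuple(row) for row in bins]
counts = {}
for k in keys:
    counts[k] = counts.get(k, 0) + 1

# Balancing weight: rarer combinations are favored when sampling.
weights = np.array([1.0 / counts[k] for k in keys])
weights /= weights.sum()

# Draw a training subset that is more balanced across combinations.
balanced_idx = rng.choice(len(labels), size=500, replace=True, p=weights)
```

Sampling with these weights pushes the training set toward the rare regions of the demographic continuum, which is the effect the interview describes: overrepresented combinations are downweighted, underrepresented faces are favored.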

What results were you able to achieve with this method?

Naser Damer: We trained over 65 facial recognition models and compared how our method affects fairness and accuracy. The result was very clear: recognition rates became fairer overall, meaning that the differences in how well the system recognized people from different ethnic groups decreased significantly. At the same time, overall recognition performance remained stable or even improved. This shows that fairness and precision are not mutually exclusive – on the contrary, they can reinforce each other.
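One common way to quantify the two outcomes mentioned here is to measure, per model, the spread of recognition rates across groups (fairness) alongside the mean rate (accuracy). The sketch below uses invented per-group numbers purely to illustrate the metric; they are not results from the study.

```python
# Hypothetical per-group true-match rates for a baseline model and a
# model trained on balanced data (numbers are illustrative only).
baseline = {"group_a": 0.99, "group_b": 0.93, "group_c": 0.90}
balanced = {"group_a": 0.98, "group_b": 0.97, "group_c": 0.96}

def fairness_gap(rates):
    """Spread between best- and worst-served group: smaller is fairer."""
    vals = list(rates.values())
    return max(vals) - min(vals)

def mean_accuracy(rates):
    """Overall recognition performance averaged over groups."""
    return sum(rates.values()) / len(rates)

gap_before = fairness_gap(baseline)   # wide spread across groups
gap_after = fairness_gap(balanced)    # narrower spread: fairer
```

In this toy example the gap shrinks while the mean accuracy does not drop, mirroring the interview's point that fairness and precision need not be traded off against each other.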

Why is this an important step for the future of biometric systems?

Naser Damer: Biometric methods are being used in more and more everyday and security applications. If they systematically disadvantage certain groups, this undermines public confidence in this technology. Our research shows that it is possible to develop fair facial recognition that better reflects the real diversity of human faces. This is not only a technical advance, but also a contribution to social justice and responsible AI.

How is your research embedded in ATHENE?

Naser Damer: Both within the framework of my ATHENE professorship and in our biometrics laboratories—the Biometrics Application Lab and the Next Generation Biometric Systems Research Area—we are working to make biometric processes more secure, reliable, and traceable. Working on fair facial recognition is part of this mission. We combine basic research with application-oriented development and want to show that responsible innovation in AI is possible.

What are the next steps in ATHENE biometrics research?

Naser Damer: We want to extend the method to other demographic dimensions, such as age and gender, and integrate it into real-world application scenarios. We are also interested in how such balanced data sets work with other biometric features, such as fingerprints or iris recognition. In the long term, the goal is to anchor fairness as an integral part of AI systems.

Where can interested parties learn more about your research?

Naser Damer: Our paper "Balancing Beyond Discrete Categories: Continuous Demographic Labels for Fair Face Recognition" is freely available online at https://doi.org/10.48550/arXiv.2506.01532.

Further information on biometrics laboratories and projects: www.athene-center.de/forschung/labs and www.athene-center.de/forschung/ngbs.
