Mission Projects

ATHENE Mission Projects: Where research secures the future

Critical infrastructures in Germany and Europe are facing an increasing number of threats in the digital space. ATHENE's mission projects address fundamental security issues relating to Internet protocols, AI-supported software development and the analysis of unique physical or behavioural characteristics for identifying individuals. Through interdisciplinary research, ATHENE develops solutions that combine technical excellence with social responsibility. Through these projects, ATHENE is establishing the scientific basis for a secure digital transformation in business, administration and society.


Advancing Internet Security - ADISEC

Securing the internet infrastructure

The internet is based on protocols that were developed without in-depth security considerations, making it vulnerable in many ways. Unauthorised parties can interfere with the routing of data streams: Border Gateway Protocol (BGP) routing can be manipulated to redirect data packets via other countries and continents, where they can be read. Complex attacks such as KeyTrap exploit vulnerabilities in DNSSEC, the security extension of the protocol used to resolve internet addresses, and could have paralysed large parts of the internet; ATHENE researchers discovered the flaw and helped to close it before it could be exploited. More recently introduced protection systems, such as the Resource Public Key Infrastructure (RPKI), often fail in practice due to misconfiguration. The interconnection of the various internet protocols creates additional attack vectors at all technical levels.
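To illustrate what RPKI protection involves, the sketch below classifies a BGP announcement against route origin authorisations (ROAs) using the simplified validation states of RFC 6811. The ROA entries and function names are illustrative assumptions, not ATHENE tooling; real validators fetch signed ROAs from the RPKI repositories.

```python
from ipaddress import ip_network

# Hypothetical ROA entries: (prefix, max prefix length, authorised origin AS).
# Real ROAs are cryptographically signed objects in the RPKI; this is a toy list.
ROAS = [
    ("193.0.0.0/21", 21, 3333),
    ("84.205.64.0/19", 24, 12654),
]

def validate_origin(prefix: str, origin_as: int) -> str:
    """Classify an announcement as 'valid', 'invalid' or 'not-found'
    following the RFC 6811 origin-validation states (simplified)."""
    announced = ip_network(prefix)
    covered = False
    for roa_prefix, max_len, roa_as in ROAS:
        if announced.subnet_of(ip_network(roa_prefix)):
            covered = True  # some ROA covers this prefix
            if origin_as == roa_as and announced.prefixlen <= max_len:
                return "valid"
    return "invalid" if covered else "not-found"
```

A hijacker announcing a covered prefix from the wrong origin AS would be classified as "invalid"; the frequent failure mode mentioned above is operators publishing ROAs whose maximum length or origin AS does not match their legitimate announcements, which makes their own routes "invalid".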

In the ADISEC mission project, researchers are developing bespoke security solutions for specific protocols. They draw on a range of methods, including AI-based prediction mechanisms, threshold cryptography and quantum-resistant encryption. Intelligent verification mechanisms protect the security systems themselves from downgrade attacks that aim to reduce their level of protection.
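The core idea behind threshold cryptography is that no single party holds the whole secret. A minimal sketch of that idea is Shamir secret sharing over a prime field, shown below: any t of n shares reconstruct the secret, while fewer reveal nothing. This is a textbook illustration, not ADISEC's scheme; production systems use vetted libraries.

```python
import random

# Toy Shamir secret sharing: split a secret into n shares such that
# any t of them reconstruct it. Illustrative only.
P = 2**127 - 1  # Mersenne prime used as the field modulus

def split(secret: int, n: int, t: int):
    """Evaluate a random degree-(t-1) polynomial with the secret as
    constant term at x = 1..n; each point (x, y) is one share."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

In a threshold signature scheme the same principle is applied to a signing key, so that compromising fewer than t servers never exposes it.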

ATHENE's research in ADISEC protects critical infrastructures such as banks, hospitals and energy suppliers from internet-based attacks. Companies benefit from significantly improved operational stability and are protected from financial losses caused by the redirection of their data traffic. The resilience of critical infrastructures strengthens national security and the capacity to act. ADISEC's pioneering approaches to fault-tolerant security in distributed systems advance scientific research, and its publicly available tools and research data accelerate research worldwide.

Lead: Prof. Dr. Haya Schulmann


AI-assisted Secure and Safe Software Development - SecureCoder

AI-assisted software development – with security

AI-assisted code generation systems, such as GitHub Copilot, are transforming software development and are now used routinely by almost all developers. However, studies show that a significant proportion of AI-generated code contains security vulnerabilities, for several reasons: the training data may itself contain vulnerabilities; the models may be susceptible to targeted manipulation; the safety mechanisms intended to prevent misuse may be bypassed (e.g. via prompt injection); and vulnerabilities may arise during code generation because the models match patterns without sufficient security context. The dominant reactive security paradigm of retrospective static and dynamic analyses and manual reviews usually cannot keep pace with the speed and complexity of AI-generated software.
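A representative example of the vulnerability class frequently found in generated code is SQL built by string interpolation. The hypothetical snippet below contrasts such a pattern with the parameterised form; the table and function names are invented for illustration.

```python
import sqlite3

def find_user_insecure(conn, username):
    # VULNERABLE: user input is interpolated directly into the SQL text,
    # so crafted input can change the query's meaning (SQL injection).
    query = f"SELECT id FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_secure(conn, username):
    # SAFE: a parameterised query keeps data out of the SQL syntax.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()
```

With the input `' OR '1'='1`, the insecure variant returns every row in the table while the secure one returns none; catching such patterns only after generation is exactly the reactive paradigm described above.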

At SecureCoder, researchers are developing the algorithmic and technical foundations for security-oriented, AI-supported programming systems. The aim is to ensure security from the ground up, rather than adding it retrospectively. The project is pursuing several research paths to increase code security.

SecureCoder makes a valuable scientific contribution to the development of innovative model architectures and training methods that integrate AI-driven code generation with structured representations of code dependencies, mathematical proof techniques to ensure accuracy, and targeted attack simulations for robustness testing. Companies benefit from reduced security risks in software development and lower subsequent security patch costs, as well as support for compliance with regulations such as the EU Cyber Resilience Act and the German IT Security Act. Authorities receive standardised evaluation frameworks for AI technologies to protect sensitive data and critical infrastructures. The software development ecosystem benefits from the provision of practical, integrated security solutions as extensions for common development environments, making secure AI tools accessible to all without the need for in-depth security expertise.

Lead: Prof. Dr. Mira Mezini


Responsible AI for Biometrics

Biometric systems – fair, explainable and privacy-friendly thanks to AI support

AI-supported biometric technologies are being used more and more in critical areas such as border control, law enforcement, and digital identity verification. However, studies show that there are significant differences in the performance of facial recognition systems across different demographic groups, for example in terms of age, gender, and ethnic origin. These differences are not statistical anomalies, but can lead to systematic bias in decision-making, particularly in sensitive areas of application. Furthermore, the decision-making processes of biometric systems are largely opaque to humans, despite directly affecting individuals. The complex comparisons of feature representations in neural networks are difficult to understand, which complicates legal accountability, end-user trust, and the practical use of the systems. In addition, biometric systems require sensitive real-world data, raising significant ethical, legal and security concerns.
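The demographic differentials described above are typically quantified with per-group error rates, for example the false non-match rate (FNMR): the fraction of genuine comparisons a system wrongly rejects in each group. The sketch below computes this from made-up evaluation data; it is an illustration of the metric, not an ATHENE evaluation tool.

```python
def fnmr_by_group(results):
    """results: iterable of (group, accepted) pairs, one per genuine
    comparison. Returns the per-group false non-match rate, i.e. the
    fraction of genuine pairs that were wrongly rejected."""
    totals, rejects = {}, {}
    for group, accepted in results:
        totals[group] = totals.get(group, 0) + 1
        if not accepted:
            rejects[group] = rejects.get(group, 0) + 1
    return {g: rejects.get(g, 0) / totals[g] for g in totals}
```

A system whose FNMR is, say, twice as high for one group as for another exhibits exactly the kind of systematic differential that fair normalisation methods aim to reduce.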

As part of the Responsible AI for Biometrics mission project, researchers are developing fair normalisation methods that reduce bias between population groups while maintaining the highest technically achievable precision. Generative networks and diffusion models produce balanced training data without posing privacy risks. Explainable AI methods make decisions transparent. Bloom filters and homomorphic encryption secure biometric information. Multimodal systems combine different recognition methods to improve fairness and security.
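The Bloom filter idea mentioned above can be sketched as follows: hashed biometric features are stored as set bits, so membership can be tested without storing the feature values themselves. This toy class only illustrates the data structure; real template-protection schemes add unlinkability and renewability on top.

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: a bit array plus k deterministic hash functions.
    Items can be tested for membership but not read back out."""

    def __init__(self, size=1024, hashes=3):
        self.size, self.hashes = size, hashes
        self.bits = [0] * size

    def _positions(self, item: str):
        # Derive k positions by hashing the item with k different prefixes.
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:4], "big") % self.size

    def add(self, item: str):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def __contains__(self, item: str):
        return all(self.bits[pos] for pos in self._positions(item))
```

Because only bit positions are stored, the filter never contains the raw feature values; false positives are possible, which in this setting is a privacy feature as much as a limitation.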

The project makes scientific contributions by developing new techniques for reducing bias and improving interpretability, establishing new evaluation standards and generating high-quality synthetic datasets that promote reproducibility and collaboration within the research community. Responsible AI for Biometrics strengthens trust in digital interactions between citizens and the state, promotes transparency, and protects fundamental rights such as privacy and protection against discrimination. Businesses and public authorities benefit from legally robust biometric solutions that comply with the General Data Protection Regulation, the EU AI Act and emerging digital identity legislation. These results support the development of secure digital identity infrastructures and the strategic goal of digital sovereignty, by providing access to ethically sound biometric technologies developed in Europe.

Lead: Prof. Dr. Naser Damer, Prof. Dr. Christian Rathgeb