ATHENE project on AI to combat online hate speech: Dr Thea Riebe introduces herself
Hate speech on the internet is on the rise – and AI is set to help identify and combat it more effectively. The CYNTRA project will launch in July 2026 within the ATHENE research area Reliable and Verifiable Information through Secure Media (REVISE). It will be coordinated by Dr. rer. nat. Thea Riebe and her colleague Dr. rer. nat. Marc-André Kaufhold, two researchers at TU Darmstadt's group PEASEC – Science and Technology for Peace and Security. In this interview, Thea Riebe introduces herself and her research, and provides insights into the project's research approach and practical relevance.

What research are you conducting at ATHENE?
I am a postdoctoral researcher at PEASEC and, from July 2026, Dr Marc-André Kaufhold and I will be coordinating the ATHENE project "Towards an Effective Multi-Label Classification and Model Auditing Ecosystem for Combatting Textual Online Hate Speech - CYNTRA". We are developing a system that supports experts in the AI-based analysis of hate speech on the internet. Professionals in civil society reporting centres and law enforcement agencies, amongst others, make an enormously important contribution to the societal recording of hate speech, to initial legal assessments and to the support of those affected. However, due to the massive influx of hateful posts, these agencies are often overwhelmed. This is where our research comes in: how can we sensibly automate detection and classification using AI? Within the project, I focus on two central questions regarding human-computer interaction (HCI):
1. How can we reliably support domain experts in assessing complex cases?
2. How do we enable them to independently review the AI, identify errors and improve the model using their own data?
In my research, I am therefore investigating how explainable AI (XAI) should be visualised using a dashboard. The aim is to enable users to maintain a critical perspective on AI classifications and to actively intervene in the machine’s learning process.
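The interaction pattern described above can be sketched in a few lines of toy code. This is a minimal, illustrative stand-in only, not the CYNTRA system: a bag-of-words model assigns multiple labels to a post, exposes per-word contributions as a simple explanation (the kind of view an XAI dashboard might render), and lets an expert correct a prediction so the model updates immediately. The label set, texts, and learning rule are all assumptions for the sake of the example.

```python
# Illustrative sketch of interactive, explainable multi-label classification.
# The labels, texts, and update rule are hypothetical, not from CYNTRA.
from collections import defaultdict

LABELS = ["insult", "threat", "incitement"]  # hypothetical label set

# weights[label][word] -> contribution of a word to a label's score
weights = {label: defaultdict(float) for label in LABELS}

def predict(text, threshold=0.5):
    """Return every label whose score passes the threshold (multi-label)."""
    words = text.lower().split()
    return {l for l in LABELS
            if sum(weights[l][w] for w in words) >= threshold}

def explain(text, label):
    """Per-word contributions: a minimal stand-in for an XAI dashboard view."""
    words = set(text.lower().split())
    return sorted(((w, weights[label][w]) for w in words),
                  key=lambda wc: -wc[1])

def expert_feedback(text, correct_labels, lr=1.0):
    """Expert correction: reinforce missed labels, penalise false alarms."""
    words = text.lower().split()
    predicted = predict(text)
    for l in correct_labels - predicted:      # missed label -> raise weights
        for w in words:
            weights[l][w] += lr / len(words)
    for l in predicted - correct_labels:      # false alarm -> lower weights
        for w in words:
            weights[l][w] -= lr / len(words)

# An expert labels one post; the model adapts on the spot.
expert_feedback("you are worthless", {"insult"})
print(predict("you are worthless"))  # the model now flags "insult"
```

The point of the sketch is the loop, not the model: the expert sees why a label was assigned (via `explain`) and can override it, which is exactly the "critical perspective plus active intervention" the dashboard is meant to support.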
How does this help us move forward? What is the impact of your research?
In the CYNTRA project, we are first compiling the current state of research and technology regarding XAI visualisations in the fields of hate speech and NLP. To this end, we are conducting literature reviews and user studies, including interviews, to identify specific practical requirements. On this basis, we are implementing technical approaches to interactive machine learning – a particularly user-centred form of AI interaction – and evaluating these directly with the users. The aim here is to enable domain experts to maintain and improve AI systems independently, for example to detect and rectify so-called concept drift, caused by data obsolescence or imbalance.
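One way to make concept drift concrete is to track how often the model's predictions still agree with expert labels, and to flag the model for review when recent agreement falls well below its long-run average. The following sketch illustrates that idea only; the window size, threshold, and labels are illustrative assumptions, not details of the CYNTRA implementation.

```python
# Hedged sketch: flagging possible concept drift by comparing the model's
# recent agreement with expert labels against its long-run average.
# Window size and tolerated drop are illustrative choices.
from collections import deque

class DriftMonitor:
    def __init__(self, window=50, drop=0.15):
        self.window = deque(maxlen=window)  # recent correct/incorrect flags
        self.history = []                   # all flags, for the baseline
        self.drop = drop                    # tolerated accuracy drop

    def record(self, prediction, expert_label):
        correct = prediction == expert_label
        self.window.append(correct)
        self.history.append(correct)

    def drift_detected(self):
        if len(self.window) < self.window.maxlen:
            return False                    # not enough recent evidence yet
        recent = sum(self.window) / len(self.window)
        baseline = sum(self.history) / len(self.history)
        return baseline - recent > self.drop

monitor = DriftMonitor()
# Phase 1: model and experts agree (stable data distribution).
for _ in range(100):
    monitor.record("hate", "hate")
# Phase 2: online language shifts; the stale model keeps missing posts.
for _ in range(50):
    monitor.record("neutral", "hate")
print(monitor.drift_detected())  # True: recent agreement has collapsed
```

Because the signal comes from the experts' own corrections, this kind of check lets domain users notice data obsolescence themselves and trigger retraining, rather than depending on the original developers.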
This approach is highly transferable: whether in IT security or medicine – wherever experts have to make complex decisions under time pressure, AI can provide support. When systems are designed so that users can interact with them intuitively and independently, decisions are better informed and better documented. In this way, AI does not restrict our scope for action, but creates capacity for what really matters.
What will be important in the future, and what will you be working on next?
As we rely on AI for advice in more and more areas of life, research into interaction design that encourages critical engagement with system proposals is becoming essential. We must ensure that we do not blindly trust AI outcomes. Our ATHENE project CYNTRA provides an important building block in this regard, which is why I am very much looking forward to the project launch.
How did you come to be involved with ATHENE?
Together with Dr Marc-André Kaufhold, I have already carried out several projects on situational awareness funded by the BMFTR, including CYWARN, CYLENCE and the previous ATHENE project, CyAware. Whilst CYWARN and CyAware focused on IT security and were carried out in collaboration with IT security experts and CERTs within public authorities, CYLENCE was already dedicated to the AI-supported classification of hate speech.
I explored these areas in greater depth during my research visit to the University of Glasgow in the summer of 2025. In collaboration with Prof. Simone Stumpf, Professor of Responsible and Interactive Artificial Intelligence, I further developed concepts for user-centred model steering in the context of hate speech. This collaboration formed the basis for our current user-centred concept in CYNTRA.
What do you particularly value about your work?
I find the direct interaction with partners in the field particularly motivating. I get to work on highlighting their day-to-day challenges and incorporating them directly into the development of socio-technical systems. In an environment where research and technology are constantly evolving, being able to exchange ideas with inspiring colleagues from a wide range of disciplines is a huge privilege.
