01.09.2025

For the third time in a row, researchers from TU Dortmund University have received the prestigious Best Paper Award. The awarded study, conducted by Carina Newen, Sofia Vergara Puccini, and Prof. Dr. Emmanuel Müller, introduces innovative approaches to understanding and improving the trustworthiness of AI systems. The recognition highlights the team’s leading role in shaping the future of secure and reliable artificial intelligence.

Best Paper Award. Photo: Carina Newen

TU Dortmund research team achieves a rare hat-trick with a trustworthy-AI publication

Carina Newen, Sofia Vergara Puccini and Emmanuel Müller from the Faculty of Computer Science at TU Dortmund University have been honored with the Best Paper Award at the 27th International Conference on Big Data Analytics and Knowledge Discovery, held in Bangkok from August 25 to 27. This marks the third Best Paper Award for the team, underscoring their leading role in the field of trustworthy artificial intelligence and their groundbreaking contributions within the RC Trust.AI research center.
This achievement also marks a personal milestone: Carina Newen has successfully completed her highly interdisciplinary Ph.D. at Prof. Dr. Müller's chair (Faculty of Computer Science, TU Dortmund University) within the RC Trust. Her work showcases the impact of cross-disciplinary collaboration in advancing the field of trustworthy artificial intelligence.

The awarded paper: Understanding and improving AI robustness

The team’s paper, “Certainty Attacks Using Explainability Preprocessing,” explores how modern machine learning models can be manipulated and what this means for the trustworthiness and security of AI systems.
Machine learning models are increasingly used in critical applications — from medical diagnostics to autonomous driving. However, they can be vulnerable to adversarial attacks: carefully crafted inputs designed to mislead the model into making incorrect predictions. The TU Dortmund researchers introduced a novel method that combines explainable AI techniques with advanced machine learning to analyze and optimize such attacks.
Their findings provide important insights into how attackers can bypass existing defense mechanisms and, in turn, how researchers can develop more robust and trustworthy AI systems. By improving our understanding of adversarial behavior, the work contributes to building safer and more reliable AI technologies.
As the authors summarize: "In this work, we highlight how the combination of explainable AI with attack methodologies in machine learning can lead to stronger attacks that are able to mitigate common adversarial detection strategies. We highlight the need to consider more than just the success rates of attacks in future work regarding adversarial examples on machine learning algorithms."
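To make the idea of an adversarial attack concrete, here is a minimal, self-contained sketch of the classic Fast Gradient Sign Method (FGSM, Goodfellow et al.). Note that this is a generic textbook technique, not the paper's own explainability-based attack, and the toy linear model and its weights are purely hypothetical:

```python
import numpy as np

# Hypothetical toy model: a logistic-regression classifier.
# Weights are chosen only for illustration.
w = np.array([1.5, -2.0, 0.5])
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    """Probability that input x belongs to class 1."""
    return sigmoid(w @ x + b)

def fgsm_attack(x, y, epsilon=0.5):
    """Perturb x in the direction that increases the loss for the
    true label y. For logistic regression, the gradient of the
    binary cross-entropy loss w.r.t. the input is (p - y) * w."""
    p = predict(x)
    grad_x = (p - y) * w
    # Take a step of size epsilon in the sign direction of the gradient.
    return x + epsilon * np.sign(grad_x)

x = np.array([0.6, -0.4, 0.2])       # clean input, true label 1
p_clean = predict(x)                 # model is confidently correct here
x_adv = fgsm_attack(x, y=1)
p_adv = predict(x_adv)               # a small perturbation flips the decision
print(p_clean, p_adv)
```

The point of the example is that a perturbation bounded by epsilon per feature, invisible as a change in the raw numbers, can push a confident prediction across the decision boundary. The awarded paper goes further, using explainability techniques to shape such perturbations so that they also evade common adversarial-detection defenses.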

Looking ahead: Towards more trustworthy AI

Prof. Dr. Emmanuel Müller emphasizes the broader vision behind the research: “Our goal is to better understand the limits and vulnerabilities of current AI systems. By exposing their weaknesses, we can develop stronger safeguards and ultimately make AI more secure and trustworthy.”
This latest success follows the team's previous Best Paper wins in 2022 and 2023:

📄 Unsupervised Features Ranking (2022)

📄 State-Transition-Aware Anomaly Detection Under Concept Drifts (2023)

This achievement also highlights the importance of interdisciplinary research within RC Trust.AI, which combines expertise in computer science, data analytics, and explainable AI to tackle some of the most pressing challenges in the field.

Read the awarded paper here: https://link.springer.com/chapter/10.1007/978-3-032-02215-8_15

Selected award-winning papers from this conference will also be invited for a special issue of the renowned Elsevier journal Data & Knowledge Engineering.

Category

  • Publication
  • Award
  • Artificial Intelligence
  • Machine Learning