May 6, 2026
Photo: „Bremer Stadtmusikanten“ by Cat, CC BY-NC-SA 2.0
Why does an AI system make a certain decision – and can we trust it when it matters most? In many real-world applications, this is no longer an abstract question. Whether in automated decision-making, safety-critical systems, or AI-supported workflows, understanding how and why an AI behaves in a certain way has become essential. At the same time, ensuring that these systems are not only performant but also transparent, explainable, and safe remains a major challenge.
This is exactly where current research is shifting its focus – also driven by regulatory frameworks such as the EU AI Act, which place increasing emphasis on trustworthy AI. Against this backdrop, a new initiative is bringing together researchers from different fields to tackle these questions jointly.
A new forum for Explainability, Transparency, and Safety
The 1st Workshop on Explainability, Transparency, and Safety (ExTraSafe) will take place in August 2026 in Bremen as part of the KI 2026 conference. It also serves as the first annual meeting of the newly established ExTraSafe working group within the German Informatics Society (GI).
What makes this format particularly relevant is its transdisciplinary approach: the workshop brings together perspectives from a range of research disciplines.
The goal is not only to advance technical methods, but also to better understand how AI systems operate within real-world contexts and societal expectations.
Why Daniel Neider’s work fits this field
Among the organizers is Prof. Daniel Neider, who holds the Chair of Verification and Formal Guarantees of Machine Learning at the Department of Computer Science at TU Dortmund University. He is also a principal investigator at the Research Center Trustworthy Data Science and Security (RC Trust) and affiliated with the Lamarr Institute for Machine Learning and Artificial Intelligence.
His research focuses on how formal methods can be used to analyze and verify machine learning systems – a perspective that is essential when moving from abstract models to real-world applications.
In the context of ExTraSafe, this expertise plays a crucial role: Explainability and transparency are important, but without verifiable guarantees, trust in AI systems remains incomplete. Neider’s work helps bridge this gap by connecting machine learning with rigorous logical and mathematical foundations.
From technical challenge to societal relevance
The ExTraSafe workshop addresses a wide range of pressing questions at the intersection of explainability, transparency, and safety in AI.
These questions highlight a key insight: Trustworthy AI is not just a technical problem – it requires collaboration across disciplines.
Call for Papers: Contribute to the discussion
As a first milestone, the workshop now invites contributions from the research community.
Submissions are open in two formats:
📄 Research papers (up to 8 pages, Springer LNCS format)
💬 Discussion entries (short contributions to stimulate debate)
📅 Submission deadline: May 15, 2026
📍 Workshop: August 11 or 12, 2026 (tba), Bremen
🔗 Submission details: Link to the workshop website
Topics of interest cover a broad spectrum of research on explainability, transparency, and safety in AI.
The workshop aims to publish accepted contributions as proceedings, fostering further exchange within the community.
With the launch of the ExTraSafe workshop and working group, a new platform is emerging that connects technical innovation with societal responsibility.
For RC Trust, this initiative reflects a core principle: Advancing AI not only in terms of performance, but also in terms of trust, reliability, and real-world impact. And as AI continues to shape critical aspects of our lives, these questions are becoming more urgent than ever.
Patrick Wilking