20.03.2026
Prof. Sven Mayer (RC Trust & TU Dortmund University), Prof. Nicole Krämer (RC Trust & University of Duisburg-Essen), Dir. and Prof. Thomas Alexander (BAuA), Prof. Jens Teubner (TU Dortmund University & Lamarr Institute), PD Dr. Thea Radüntz (BAuA).
How can artificial intelligence be integrated into the working world in a meaningful, responsible, and human-centered way? This question was at the heart of the 2nd AI Workshop: Research – Practice – Working World. The event took place on March 17, 2026, at the DASA Working World Exhibition in Dortmund and was jointly organized by the Federal Institute for Occupational Safety and Health (BAuA), TU Dortmund University, the Lamarr Institute, and the Research Center Trustworthy Data Science and Security (RC Trust).
Alongside representatives of BAuA and the Lamarr Institute, Prof. Nicole Krämer helped open the event and placed the human perspective at the center of the discussion. As Scientific Director of RC Trust, she focused on the concept of calibrated trust. Her key message was clear: people should neither dismiss AI too quickly nor rely on it without sufficient reflection. Instead, trust in AI should be aligned with the system’s actual capabilities and limitations. This is particularly important in professional settings where AI supports decision-making. Too little trust may lead to useful assistance being ignored, while too much trust can result in people accepting incorrect or misleading outputs without critical evaluation. Prof. Krämer used this perspective to highlight why human-centered research is essential when AI enters the workplace. Trust in AI is not only a technical issue – it is also shaped by experience, judgment, responsibility, and context.
Building on these topics, Prof. Sven Mayer, Chair of Human-AI Interaction at TU Dortmund University and RC Trust, gave the keynote “Intelligent Assistance Systems for the Modern Workplace.” His central message was that intelligent assistance systems aim to allow humans and machines to learn and accomplish tasks better together. In other words, the main goal is not simply to replace people, but to improve collaboration between human and machine. Prof. Mayer deliberately widened the view of what “the workplace” means. It is not only the office desk, the computer screen, or the keyboard. Work also takes place in factories, logistics, healthcare, production, and other physical environments. In all of these contexts, intelligent assistance systems can help make information more accessible, support better decisions, reduce strain, and improve safety. His contribution therefore highlighted the more technical side of the discussion, including privacy, security, embodied AI, and intelligent systems in real working environments.
Together, the contributions by both RC Trust professors clearly showed what the Research Center Trustworthy Data Science and Security stands for in the context of work. On the one hand, there is research into how people understand AI, how trust develops, and why both overtrust and distrust can become problems. On the other hand, there is research into how intelligent systems can be designed so that humans and machines can work together more effectively, more safely, and more meaningfully.
Altogether, the workshop enabled a fruitful exchange between research, practice, and institutional partners, in particular BAuA and the Lamarr Institute. It created space for discussion, new connections, and shared perspectives on the role of AI in the working world.
The series will continue: the next AI Workshop is planned for March 21, 2028. Further details will follow.