08.05.2026

Magdalena Wischnewski leads a new Young Investigator Group on human-centered and trustworthy AI systems.

Portrait of Magdalena Wischnewski. Photo: Magdalena Wischnewski

Artificial intelligence is increasingly shaping how people make decisions, interact with information, communicate with each other, and relate to technology. Trust plays a central role in this process: people need to understand when AI systems can be relied on, when caution is necessary, and how these systems influence human thinking, emotions, and behavior.
This is where the new Young Investigator Group Human Factors in AI Systems, led by Dr. Magdalena Wischnewski, begins its work. Starting in May, the group will investigate how AI affects the way people think, feel, interact, and behave – and how these effects can be better assessed, understood, and guided.

Trustworthy AI starts with people

The group’s research focuses on human-centric trustworthy AI. It examines questions of trust calibration, trust assessments, and the auditing of AI systems, for example through AI seals of trust. The aim is not only to analyze current challenges, but also to develop frameworks and practical recommendations for human-centered AI design and governance.
By combining perspectives from psychology, computer science, and human–computer interaction, the group addresses AI not only as a technical system, but as part of social and cognitive environments. This interdisciplinary perspective is essential for understanding how AI systems are perceived, used, questioned, or accepted by people.
“If we want AI systems to be trustworthy, we need to understand how people relate to them,” says Magdalena Wischnewski. “Trust is not simply something users either have or lack. It needs to be appropriate, informed, and aligned with what a system can actually do.”

From misinformation research to trustworthy AI

Magdalena Wischnewski is a psychologist by training. Before joining RC Trust, she completed her PhD in Social Psychology at the University of Duisburg-Essen under the supervision of Prof. Nicole Krämer, who is now Scientific Director of RC Trust. Her dissertation examined why people believe misinformation and how emotions and social identities contribute to this process. She completed her doctorate summa cum laude.
This background provides an important foundation for her current research. Questions of misinformation, motivated reasoning, trust, and identity are closely connected to the way people interact with AI systems. Her new group builds on this expertise and expands it toward the design, assessment, and governance of trustworthy AI.
After her PhD, Wischnewski continued her academic work at RC Trust as one of the first postdocs hired there, helping to build the center into what it is today. With the new Young Investigator Group, she now takes the next step and leads her own research team at the University of Duisburg-Essen.

Strengthening early-career research at RC Trust

Young Investigator Groups are an important part of RC Trust’s strategy to support outstanding early-career researchers across the UA Ruhr universities. They open space for new scientific perspectives, strengthen interdisciplinary collaboration, and address important interfaces in the Center’s research profile.
With Human Factors in AI Systems, RC Trust further expands its expertise in the human-centered study of trustworthy technology. The group will also be explicitly connected to relevant institutions within the University Alliance Ruhr, including potential links to the College for Social Sciences and Humanities, the DFG Research Training Group 2535, and the Center for Advanced Internet Studies.