20.02.2026
Photo: Sebastian Meinken
A dramatic emergency landing on the A3 motorway in Duisburg: a large aircraft standing between cars and flashing blue lights, a reporter seemingly covering the scene live. More than four million people viewed the video on Instagram. Yet the event never happened. The footage was generated by artificial intelligence.
In a recent interview with the German broadcaster WDR, Bianca Nowak, a postdoctoral researcher at the Chair of Human Understanding of Algorithms and Machines (HUAM) led by Prof. Nils Köbis, explained why such content spreads so quickly, and why it matters.
“Anyone with internet access can create AI videos today,” Nowak notes in the interview. “In the past, tools like Photoshop required specific skills. Now a single prompt, one sentence, is enough, and the AI generates an image or video sequence.”
What makes this development particularly powerful is not only the technical quality of the content, but the speed and scale at which it circulates. On social media platforms, users often watch only a few seconds before scrolling on. Yet even brief exposure can leave a cognitive trace. Over time, repeated encounters with synthetic content can subtly shift perceptions of what is plausible, normal, or credible.
At RC Trust, this is precisely where research begins. The work at HUAM investigates how people perceive, interpret, and evaluate algorithmic systems in everyday life. Rather than focusing solely on technological performance, the team studies the psychological and societal dimensions of AI: how trust is formed, how credibility is assessed, and how communication shapes public understanding.
Nowak’s research explores how people around the world make sense of artificial intelligence, including large language models and other generative systems. Her background in trust research and science communication provides an important lens for understanding why some users immediately recognize AI-generated content as fake, while others search online to verify whether an event truly occurred.
The viral Duisburg video illustrates a broader challenge. As generative AI tools become easier to use, the barrier to producing persuasive misinformation decreases. This has implications not only for everyday users, but also for democratic discourse, public safety communication, and institutional credibility.
By contributing expert analysis to public media, RC Trust helps translate scientific insight into guidance for society. The goal is not to create alarm, but to foster critical awareness: How do we evaluate digital content? What signals of authenticity do we rely on? And how can communication strategies strengthen resilience against manipulation?
In an environment where synthetic content can appear indistinguishable from reality, understanding human perception becomes as important as advancing the technology itself.
Patrick Wilking