30.04.2026
Artificial Intelligence is no longer a distant technology. It actively shapes how people work, communicate, make decisions, and form relationships. Yet, while debates on AI ethics often focus on how systems should behave, a crucial question remains underexplored: How do people actually respond to AI in ethically relevant situations?
At the Research Center Trustworthy Data Science and Security (RC Trust), this question is central to ongoing research. At the Chair of Human Understanding of Algorithms and Machines (HUAM), led by Prof. Nils Köbis at the University of Duisburg-Essen, researchers investigate how interactions with AI influence human judgment, trust, and behavior in everyday life.
“AI ethics is not only about designing systems,” Köbis notes. “It is about understanding how people interpret, follow, or resist what these systems suggest.”
Shifting the focus: From normative ideals to real behavior
To address this gap, Prof. Köbis is co-editing a Special Issue titled “AI Ethics: A Psychological Perspective” in the journal Computers in Human Behavior, together with international collaborators.
The initiative reflects a growing recognition within the field: ethical questions around AI cannot be answered solely through abstract principles. They require empirical insight into how people actually think, feel, and act when interacting with intelligent systems.
The Special Issue therefore focuses on behavioral and psychological research, exploring how people actually respond to AI in ethically relevant situations.
By bringing together experimental, observational, and field-based approaches, the issue seeks to build a more complete picture of the human side of AI ethics.
Interdisciplinary research for complex challenges
The initiative also highlights the importance of interdisciplinary collaboration. Understanding AI in ethically complex situations requires insights from psychology, computer science, economics, and communication studies. This approach is closely aligned with the mission of RC Trust: to study trustworthy data science and AI not only from a technical perspective, but in relation to human behavior and societal impact. As AI systems increasingly influence high-stakes domains such as healthcare, education, employment, and justice, understanding these human dynamics becomes critical.
An invitation to the research community
The Special Issue is currently open for contributions and particularly encourages empirical work grounded in behavioral science and psychology.
Researchers are invited to submit their work by 31 May 2026.
Rather than prescribing what ethical AI should look like, the initiative aims to build a deeper understanding of how ethical behavior emerges in human–AI interaction. This includes examining how people interpret AI outputs, how responsibility is attributed, and how decision-making changes when choices are delegated to intelligent systems.
Patrick Wilking