25.03.2026

In a new study, RC Trust researcher Dr. Greta Ontrup examines how AI reshapes team dynamics.

What happens to a team when one of its members is no longer human?

This question lies at the heart of a new study co-authored by Dr. Greta Ontrup, head of a Young Investigators Group at the Research Center Trustworthy Data Science and Security (RC Trust). Together with colleagues from Bremen and Bochum, she investigates how human-AI teams function – and where they differ from teams composed entirely of humans.

The paper, published in the International Journal of Human–Computer Interaction, explores so-called “team emergent states” – the subtle but crucial dynamics that make teams work. These include team cohesion (how strongly members feel connected), team identification (whether people feel a sense of “we”), and psychological safety (whether team members feel safe to speak up).

To study this, participants worked in small teams on collaborative, creative brainstorming tasks. In a second step, a third team member was introduced – either another human or an AI system. This allowed the researchers to directly compare how team dynamics change when AI becomes part of the team.

The results reveal a nuanced picture. Teams that included an AI showed lower cohesion and weaker identification compared to human-only teams. In simple terms: people felt less connected and less like a unified group when an AI was involved. At the same time, participants clearly identified less with the AI than with their human teammates, highlighting that AI is not perceived as a full social equal within the team.

Interestingly, not all aspects of teamwork were affected. The study found no differences in psychological safety, suggesting that working with AI does not necessarily make people feel less comfortable contributing their ideas.

Overall, the findings challenge a common assumption: that integrating AI is a purely “technical” decision affecting only task processes. Instead, the study shows that team processes are directly affected, and it highlights that human-AI collaboration follows its own rules – established knowledge about teamwork cannot be transferred without adjustment.

This is exactly where Greta Ontrup’s research comes in. With her Young Investigators Group Psychological Aspects of Human-Algorithm-Interaction at RC Trust, she explores how people perceive, understand, and collaborate with intelligent systems in the workplace. A central goal is to develop guidelines for designing AI in ways that support not only efficiency, but also well-being, trust, and effective collaboration.

The study provides important insights for organizations that are increasingly integrating AI into everyday work. It suggests that successful human-AI teaming depends not only on technical performance, but also on how these systems are embedded in the social and psychological processes of the team.

The publication is available as Open Access.
