29.09.2024

Nils Köbis, Nicole Krämer, and Greta Ontrop of RCTrust presented their research at the congress of the DGPs (German Psychological Society).

The conference venue in Vienna

The 53rd DGPs Congress took place from 16 to 19 September in Vienna. It is the largest German psychology congress.

The main topics were humans, environment, and media. The congress focused on the connections between humans and their social and physical environment, including digital and social media. Several keynote lectures and numerous talks and symposia addressed these topics. For example, Sabine Pahl (Vienna) and Vera Araujo Soares (Heidelberg) gave keynote lectures on environmental psychology, while Amy Orben (Cambridge) and Kelly Babchishin (Carleton) gave keynotes on the psychology of social media.

 

RCTrust was also represented: three of our researchers gave talks.

1. Nils Köbis presented a position paper titled “Behavioral AI Safety - How AI can corrupt human ethical behavior & how psychological science can help”.

In brief: AI agents can influence human behavior in ways that are both similar to and different from how humans influence one another, raising concerns about the potential corrupting power of AI. The paper proposes four social roles through which AI influences ethical behavior: AI as role model, advisor, partner, and delegate. Experimental evidence shows that AI advisors can corrupt just as effectively as humans, while AI delegates enable unethical behavior while allowing people to maintain a positive self-image. Insights from behavioral science help us understand the risks posed by AI’s influence on human ethics. Policy interventions were also discussed.

 

2. Nicole Krämer (with Marie Beisemann, Philipp Doebler, and Magdalena Wischnewski) gave a research talk on “Development and validation of a questionnaire to measure trust in AI”.

In brief: Trust in automated systems should be calibrated to their reliability (e.g., accuracy, fairness). A new 30-item questionnaire covering six trust dimensions was developed to measure perceived trust. It was tested with 883 participants across three automated-system vignettes (skin cancer, mushrooms, driving). Factor analysis identified a global trust factor and five trust dimensions.

 

3. Greta Ontrop (with Michèle Rieth, Vera Hagemann, and Annette Kluge) gave a research talk titled “The development of team processes in human-AI teams - an empirical study”.

In brief: Human-AI teams (HATs) involve humans and AI collaborating toward a shared goal, and theory suggests that classic team processes may apply. The study compares team processes (action, transition, interpersonal) in HATs versus human-only teams in a video-recorded lab experiment. Video analysis and behavioral coding are used to investigate how team processes in HATs differ from those in traditional teams and what classic theories cannot explain.

 

More information on the conference is available here.

 

 

Category

  • Talk
  • Event