12.11.2024
The German delegation of RC Trust travelled to New York to co-host the workshop "Towards Calibrated Trust in AI" at the UA Ruhr Liaison Office in New York City. Together with 15 researchers from institutions in the United States, the delegation aimed to exchange ideas, develop new research collaborations, and explore shared funding opportunities. The workshop opened with brief introductory presentations in which participants outlined their areas of research and explained how the concept of calibrated trust applies to their work. These initial talks set the stage for deeper discussions on the challenges and opportunities involved in measuring and ensuring trust in AI.
Throughout the event, participants engaged in rich discussions about key issues surrounding trust in AI:
1. Defining Trustworthiness
2. Trustworthiness Benchmarks
3. Trust Signals
4. Causal Factors of Trust
The need for interdisciplinary collaboration was a key takeaway from the discussions. As AI technologies continue to evolve, it will be essential for experts in AI, ethics, security, and human-computer interaction to work together to develop frameworks and systems that ensure AI is trustworthy and aligned with users' expectations.
A highlight of the workshop was the panel discussion featuring Shyam Sundar (Penn State), Daniel Neider (RC Trust), and Magdalena Wischnewski (RC Trust), who discussed the function of trust calibration and the difficulty of reaching appropriately calibrated trust in real-life settings. Dr. Wischnewski noted that "trust is a multifaceted complex research challenge on both human and system side." Prof. Neider remarked: "Technology is optimized for existing measures like accuracy; however, not yet for trust." And Prof. Sundar explained that "we should be truthful about the trustworthiness of AI systems."
Thank you for having us at the workshop!