Hello there! My name is Christian. I am fascinated by Artificial Intelligence, Deep Learning, and Natural Language Processing (NLP). More specifically, Large Language Models (LLMs) and their ability to “understand” and interact with humans are of particular interest to me. I am passionate about interdisciplinary research that bridges computer science with psychology, Human-Computer Interaction (HCI), and the social sciences to unlock the transformative potential of AI and overcome today's limitations.
I am a PhD candidate and full-time researcher at the Research Center Trustworthy Data Science and Security (RC-Trust) in Nils Köbis' group "Human Understanding of Algorithms and Machines" (HUAM). My research centers on building and reliably evaluating socially intelligent LLM agents with steerable characteristics and personalities. These agents incorporate Theory of Mind capabilities and Common Ground Seeking Behavior to align the goals and mental models of users and systems. Another key focus is identifying and mitigating problematic behaviors in model outputs, such as hallucinations, sycophancy, manipulation, and deception: essential steps toward Calibrated Trust in human-AI teaming and trustworthy AI systems overall.
I earned my Master of Science in Computer Science at the University of Bonn, building on a Bachelor's degree from the same university and earlier interdisciplinary training in physics, economics, and political science at the University of Cologne. During my studies, I worked at the Fraunhofer Institute for Applied Information Technology (Fraunhofer FIT) on blockchain applications and served as a teaching assistant at both universities.
My Master's thesis investigated emergent Theory of Mind-like capabilities in Large Language Models, examining how evaluation perturbations influence these capabilities and exploring enhancement techniques through alternative prompting methods. This work resulted in publications presented at WiNLP @ EMNLP 2024 and ToM4AI @ AAAI 2025.
I collaborate closely with Prof. Lucie Flek at the University of Bonn's CAISA Lab and Prof. Bilal Zafar of RC-Trust at Ruhr University Bochum, drawing on their expertise in computational linguistics and algorithmic fairness to advance trustworthy AI systems.
I welcome collaborations with researchers and practitioners across all relevant disciplines.