For us, trustworthiness can only exist on a solid foundation of four pillars, each headed by one of our spokespersons: Artificial Intelligence & Machine Learning, Cybersecurity & Privacy, Data Science & Statistical Learning, and Psychology & Social Sciences. Read more about the foundation of our unique perspective on trustworthiness here.

Artificial Intelligence & Machine Learning

The field of machine learning (ML) and artificial intelligence (AI) faces several challenges that researchers and practitioners strive to overcome. One key challenge is the need for vast amounts of high-quality labeled data to train effective models, which can be particularly difficult to obtain for certain tasks. Ethical considerations, including biases in training data and the potential for discriminatory outcomes, pose another significant hurdle. Interpretability and explainability of AI models remain critical, especially in applications where decision-making impacts individuals or society. Additionally, the rapid evolution of technology necessitates ongoing efforts to keep algorithms up-to-date and secure against adversarial attacks. Striking a balance between innovation and ethical, transparent deployment is crucial to harness the full potential of machine learning and AI while mitigating potential risks and ensuring responsible development.

Portrait of Emmanuel Müller (Photo: TU Dortmund)

»We need fundamental research on "Calibrated Trust". Machine learning should be equally perceived as trustworthy by humans but also ensure the necessary technical reliability. A balance between both is important for a sustainable, trustworthy AI!«

Prof. Dr. Emmanuel Müller, TU Dortmund University

In addition to the aforementioned challenges, building and maintaining trustworthiness in machine learning and AI systems is a paramount concern. As these technologies become increasingly integrated into various aspects of our lives, establishing trust in their reliability, fairness, and security becomes essential. Transparent and accountable practices, along with robust mechanisms for addressing biases and ensuring privacy, contribute to the overall trustworthiness of AI systems. Striking a balance between innovation and adherence to ethical standards is crucial for fostering public trust in the deployment of machine learning and artificial intelligence, as the societal impact of these technologies continues to grow.
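As one concrete illustration of the kind of bias check mentioned above, the following minimal sketch computes the demographic parity difference between two groups' positive-prediction rates. The data, the group labels and the choice of this particular metric are illustrative assumptions, not a description of a specific RC Trust method.

```python
# Minimal sketch of one basic fairness check: the demographic parity
# difference between two groups' positive-prediction rates.
# All data and labels below are illustrative assumptions.
import numpy as np

# Hypothetical binary predictions of a model and a protected group
# attribute (0 / 1) for the same individuals.
predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
group       = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

rate_group_0 = predictions[group == 0].mean()
rate_group_1 = predictions[group == 1].mean()

# A value near 0 indicates similar positive-prediction rates across groups;
# large absolute values can signal a disparity worth investigating further.
parity_difference = rate_group_1 - rate_group_0
print(f"Positive rate (group 0): {rate_group_0:.2f}")
print(f"Positive rate (group 1): {rate_group_1:.2f}")
print(f"Demographic parity difference: {parity_difference:.2f}")
```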

Head of this research area is Prof. Emmanuel Müller.

Psychology & Social Sciences

Planes have crashed when intelligent warning systems were ignored. Autonomous cars have killed pedestrians when the 'driver' relied too much on the car’s abilities. These (unfortunate) instances of people trusting a system too little or too much demonstrate how important it is to account for the human-in-the-loop when developing any system. That is why, in our research, we aim to understand how individuals make sense of, evaluate and use AI systems. This entails creating meaningful interventions that establish warranted levels of trust. We craft these interventions based on qualitative and quantitative studies that probe human-AI interactions.

In our research, we are interested in how human users interact with and employ AI technologies. Our previous work on human-AI interactions has yielded two observations:

  1. Users sometimes resist using AI.
  2. Users sometimes also rely too heavily on AI.

To understand and explain these observations, we are especially interested in the role of trust in AI. Originating in earlier works on automation (e.g., Lee & See, 2004), trust has become the central variable for explaining both resistance to using AI (disuse) and overreliance on AI (misuse). It is worth noting that system disuse and misuse can be equally detrimental to individuals.

Initial studies aim to adjust the level of trust placed in a system so that it reflects the system's trustworthiness, producing what is known as calibrated trust. Distinguishing between perceived and actual trustworthiness is crucial here: users' perceptions of an AI system's trustworthiness may diverge from its actual trustworthiness, which is a function of the system's functionality and reliability. Trust is calibrated when the perceived trustworthiness of a system equals its actual trustworthiness. To achieve calibrated trust, we examine how system design features, individual differences and contextual factors influence human-AI interactions. We are also interested in scrutinizing the relationship between trust and people's understanding of how the system works. Our work is mostly quantitative and employs experimental designs to analyze psychological mechanisms; however, we include qualitative studies, too.
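To make the idea of calibration concrete, the following minimal sketch compares participants' self-reported trust with a simple proxy for the system's actual trustworthiness and flags over- and undertrust. The data, the 0-1 scaling and the accuracy proxy are illustrative assumptions, not an existing RC Trust instrument.

```python
# Minimal sketch: quantifying trust (mis)calibration in a user study.
# All values and variable names are illustrative assumptions.
import numpy as np

# Hypothetical self-reported trust in the AI system per participant,
# rescaled to [0, 1] (e.g. from a 7-point Likert scale).
perceived_trust = np.array([0.9, 0.75, 0.4, 0.85, 0.6])

# Simple proxy for actual trustworthiness: the system's observed accuracy
# on the participants' task instances (also in [0, 1]).
actual_trustworthiness = np.array([0.7, 0.7, 0.7, 0.7, 0.7])

# Positive gap -> overtrust (risk of misuse);
# negative gap -> undertrust (risk of disuse);
# zero gap -> calibrated trust.
calibration_gap = perceived_trust - actual_trustworthiness

print(f"Mean calibration gap: {calibration_gap.mean():.3f}")
print(f"Participants overtrusting: {int((calibration_gap > 0).sum())}")
print(f"Participants undertrusting: {int((calibration_gap < 0).sum())}")
```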

Head of this research area is Prof. Nicole Krämer.

Data Science & Statistical Learning

Our group conducts basic research on data analytic tools to provide better, more robust and more trustworthy methods that benefit other disciplines. We design our methods based on fundamental statistical concepts. This allows us to extract more information from our data and to gain a better understanding of the processes and structures underlying the problem. We thereby cover the entire data science pipeline, from experimental design and sample size planning up to modeling, prediction and complex inference.

Although most studies in the social sciences actually aim to answer causal questions, their approaches are generally limited to studying associations. Likewise, machine learning often focuses on accurate point predictions; however, understanding variations, relationships and quantifying uncertainties are equally important. Topics such as causal investigations or uncertainty quantification are among the central pillars of statistical inference. Our group supports the RC Trust's efforts by considering and incorporating fundamental principles of statistics and statistical learning into our joint development of reliable and trustworthy data science and security methods.

It all starts with tailoring the methods to the structure of the data (e.g. experimental or observational data; independent, longitudinal or time series data) and to the analysis goal (e.g. causal, exploratory or predictive analysis) in order to avoid biases from the very start. We draw on these insights to use, combine, develop and analyze new methodology. This covers uncertainty quantification in terms of confidence, credible or prediction regions (via Bayesian approximations, resampling, Gaussian processes etc.), statistical understanding of and guarantees for learning methods (covering unbiasedness, error bounds, robustness, variable importance etc.), and mathematical and simulation-based analyses (covering proofs, Monte Carlo studies etc.).
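As a small, self-contained illustration of the resampling-based uncertainty quantification mentioned above, the sketch below computes a nonparametric percentile bootstrap confidence interval for a mean. The data, the number of resamples and the 95% level are illustrative assumptions.

```python
# Minimal sketch of resampling-based uncertainty quantification:
# a nonparametric percentile bootstrap confidence interval for a mean.
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical observations (e.g. measured processing times in seconds).
sample = rng.normal(loc=10.0, scale=2.0, size=50)

# Resample with replacement and recompute the statistic many times.
n_boot = 5000
boot_means = np.array([
    rng.choice(sample, size=sample.size, replace=True).mean()
    for _ in range(n_boot)
])

# The 2.5% and 97.5% quantiles of the bootstrap distribution yield a
# 95% percentile confidence interval for the population mean.
lower, upper = np.quantile(boot_means, [0.025, 0.975])
print(f"Point estimate: {sample.mean():.2f}")
print(f"95% bootstrap CI: [{lower:.2f}, {upper:.2f}]")
```

The same resampling scheme extends to more complex statistics for which closed-form intervals are unavailable.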

Portrait of Markus Pauly

I have always been very curious about new and unknown things, which is why I became a statistician, the freest of all scientists. Or, as phrased by D.R. Brillinger in "Past, Present, and Future of Statistical Science" (2014):

 

... how wonderful the field of statistics is...

Prof. Dr. Markus Pauly, Head of Data Science & Statistical Learning

One key aspect of our research is that it bridges the gap between (deep) mathematical theory, statistical applications and quantitative science. This manifests itself in many ways, including delivering meaningful output for the user (e.g. a medical doctor). Examples include effect sizes with mathematically valid, distributionally robust comparative confidence intervals, as well as comprehensible model equations that reflect causal or distributional relations and have been demonstrated to provide trustworthy estimates.
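As a hedged example of such an effect size, the sketch below estimates the nonparametric relative effect p = P(X < Y) + 0.5 · P(X = Y) for two groups and attaches a percentile bootstrap confidence interval. The data and the choice of this particular effect measure are illustrative assumptions, not a specific published result of the group.

```python
# Minimal sketch of a distributionally robust comparative effect size:
# the nonparametric relative effect p = P(X < Y) + 0.5 * P(X = Y),
# with a percentile bootstrap confidence interval. Data are illustrative.
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical outcomes under two conditions (e.g. recovery scores).
x = rng.normal(5.0, 1.0, size=40)   # control group
y = rng.normal(5.6, 1.5, size=45)   # treatment group

def relative_effect(a, b):
    """Estimate p = P(A < B) + 0.5 * P(A = B); 0.5 means no tendency."""
    diff = b[None, :] - a[:, None]   # all pairwise comparisons
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

estimate = relative_effect(x, y)

# Bootstrap both groups independently to quantify uncertainty.
boot = np.array([
    relative_effect(rng.choice(x, x.size, replace=True),
                    rng.choice(y, y.size, replace=True))
    for _ in range(2000)
])
lower, upper = np.quantile(boot, [0.025, 0.975])
print(f"Relative effect: {estimate:.2f}, 95% bootstrap CI: [{lower:.2f}, {upper:.2f}]")
```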

We collaborate with national and international scientists (from China, India, the US, across Europe, etc.), with different disciplines (e.g. medicine, logistics, (neuro)psychology), and with external as well as industrial research institutions (e.g. Fraunhofer, Google, DFKI) on a wide range of topics, and we are always happy to learn and investigate more.

Head of this research area is Prof. Markus Pauly.

Cybersecurity & Privacy

The digital society and economy we live in today offer convenience, create value and enable many exciting innovations. They also attract a wide range of attackers looking to disrupt digital systems and infrastructure, subvert their operations, and steal data – all of which threatens trustworthy data science. Cybersecurity research designs effective and efficient ways of defending against known attacks, as well as monitoring and anticipating how attacks will develop in the future. The ultimate goal is to have security and privacy baked into future systems – security-by-design and privacy-by-design – to ensure they are trustworthy and trusted.
 
This goal can only be achieved through intensive collaboration between researchers from a number of disciplines: from cryptographers developing algorithms that withstand the capabilities of quantum computing, to hardware researchers creating components that are tamper-proof and free of vulnerabilities and backdoors, to software researchers creating secure development frameworks, methods and components, to human-centered security researchers making security and privacy technology as usable and easy to understand as possible, to experts on policy, law and regulation surrounding the use of novel technologies.
 
To ensure that the trustworthy technology we create is also trusted by the humans who use it, we collaborate with experts in individual and collective human behavior – psychology, crime science and economics. Most of our research is empirical: we conduct studies with the stakeholder groups who will be using future systems, but also with those involved in designing and implementing these systems.
