16.01.2026
Photo: ChatGPT
Generative artificial intelligence is rapidly changing research workflows – but in medicine and health data analysis, innovation must go hand in hand with responsibility. This is the central premise of a new paper in Statistics in Medicine, led by Dennis Dobler, which examines the practical opportunities and limitations of large language models such as ChatGPT in biostatistics. Markus Pauly, Professor of Statistics at TU Dortmund University and founding member of the RC Trust within the University Alliance Ruhr, contributed to the work as a co-author. Dobler was affiliated with the RC Trust until September 2025, reflecting the Center’s sustained engagement with questions of trustworthy and responsible AI.
The paper, titled ChatGPT as a Tool for Biostatisticians: A Tutorial on Applications, Opportunities, and Limitations, examines how generative AI can support tasks such as study planning, simulation studies, causal analyses, research synthesis, and code generation. Drawing on a wide range of concrete use cases, the authors show that AI tools can be valuable assistants – especially for structuring workflows and handling routine steps. At the same time, they highlight serious risks: inconsistent results, hidden methodological errors, and so-called hallucinations that can undermine scientific validity.
This balanced perspective closely reflects the mission of the RC Trust. Instead of treating AI as an autonomous decision-maker, the study emphasizes the need for strong statistical foundations, transparency, and continuous human oversight. Trustworthy AI, the authors argue, emerges only when uncertainty is quantified, assumptions are made explicit, and results remain interpretable – particularly in sensitive application areas like medicine.
The work also stands out as an example of interdisciplinary and cross-university collaboration, bringing together expertise from statistics, medical biometry, and AI. For policymakers and stakeholders, it underscores the importance of clear standards and responsible governance. For researchers and students, it offers realistic guidance on how AI can augment – rather than replace – scientific expertise. For the broader public, it sends a reassuring message: the use of AI in science is being critically examined, not blindly adopted.
The full article is available open access in Statistics in Medicine and can be read here: