14.04.2026
Photo: Leonard Papenmeier
Optimizing a system with thousands of variables sounds like a task that demands highly sophisticated algorithms. Yet in practice, surprisingly simple methods often perform just as well – or even better. Why is that?
In his talk Bayesian Optimization at 1,000 Dimensions: Why It Works, What Breaks, and What’s Next, Leonard Papenmeier explores this puzzle at the heart of modern machine learning. Bayesian optimization (BO), a widely used approach for tuning complex models and systems, is often assumed to struggle beyond a few dozen dimensions. At the same time, a growing body of research reports success on problems with hundreds or even thousands of variables.
Papenmeier takes a closer look at this apparent contradiction. Drawing on recent work, including methods such as BAxUS and Bounce, he shows how high-dimensional optimization techniques adaptively restrict the search space to make problems more tractable. However, benchmark results reveal a surprising trend: relatively “vanilla” Gaussian-process-based BO methods can match or outperform many specialized approaches – sometimes not because they scale better, but because the problems themselves are easier than they appear.
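For readers unfamiliar with the term, "vanilla" GP-based BO typically means nothing more than a Gaussian process surrogate with a standard kernel plus a classic acquisition function such as expected improvement. The following minimal sketch illustrates that loop; it is not code from the talk or from BAxUS/Bounce, and all function names, the RBF lengthscale, and the candidate-sampling strategy are illustrative assumptions.

```python
import math
import numpy as np

def rbf_kernel(A, B, lengthscale=0.3):
    # Squared-exponential kernel between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X, y, Xq, noise=1e-6):
    # Exact GP posterior mean and stddev at query points Xq,
    # given observations (X, y) and a small noise jitter.
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Kq = rbf_kernel(Xq, X)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Kq @ alpha
    v = np.linalg.solve(L, Kq.T)
    var = np.clip(1.0 - (v**2).sum(0), 1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sigma, best):
    # EI for minimization: expected amount by which a candidate
    # undercuts the incumbent best observation.
    z = (best - mu) / sigma
    Phi = 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))
    phi = np.exp(-0.5 * z**2) / math.sqrt(2.0 * math.pi)
    return (best - mu) * Phi + sigma * phi

def vanilla_bo(f, dim, n_init=5, n_iter=20, n_cand=512, seed=0):
    # Plain BO loop on the unit cube: random initial design, then
    # repeatedly pick the candidate with the highest EI.
    rng = np.random.default_rng(seed)
    X = rng.uniform(0.0, 1.0, (n_init, dim))
    y = np.array([f(x) for x in X])
    for _ in range(n_iter):
        cand = rng.uniform(0.0, 1.0, (n_cand, dim))
        mu, sigma = gp_posterior(X, y, cand)
        x_next = cand[np.argmax(expected_improvement(mu, sigma, y.min()))]
        X = np.vstack([X, x_next])
        y = np.append(y, f(x_next))
    return X[np.argmin(y)], y.min()

# Example: minimize a simple quadratic centered at 0.5 in each coordinate.
best_x, best_y = vanilla_bo(lambda x: ((x - 0.5) ** 2).sum(), dim=2)
```

Note that nothing in this sketch is high-dimensional by design; the benchmark observation in the talk is precisely that such an unmodified loop can remain competitive far beyond the regime where folklore says it should fail.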
In the second part of the talk, Papenmeier offers a diagnostic explanation. He highlights how vanishing gradients during model fitting can effectively stall learning unless the model is carefully initialized, and how commonly used acquisition strategies implicitly enforce local search behavior. In other words, what appears to be global optimization in high dimensions often works because it focuses the search on subregions of the problem space.
These insights raise important questions for the field: Are current benchmarks truly representative of real-world complexity? How should models be adapted to limited evaluation budgets? And what does “scalability” actually mean in practice?
Dr. Papenmeier is a postdoctoral researcher at the University of Münster, specializing in Bayesian optimization in high-dimensional settings. He completed his PhD at Lund University in 2025 and has published at leading venues such as NeurIPS, ICML, and UAI. His work focuses on developing robust optimization methods and investigating under which conditions state-of-the-art performance translates to the real world.
Event details
📅 Date: 16 April 2026
⏰ Time: 1:00–2:00 PM
📍 Location: JvF25/3-303 – Conference Room (Lamarr/RC Trust Dortmund)
AI Colloquium
The AI Colloquium is a series of lectures dedicated to cutting-edge research in the field of machine learning and artificial intelligence, co-organized by the Research Center Trustworthy Data Science and Security (RC Trust), the Lamarr Institute for Machine Learning and Artificial Intelligence (Lamarr Institute), and the Center for Data Science & Simulation at TU Dortmund University (DoDaS).
Patrick Wilking