
Junior Research Group Leader

Dr. Linara Adilova

TU Dortmund University
Room 315
Otto-Hahn-Straße 14
44227 Dortmund
Germany

Academic Career and Research Areas

Linara Adilova is a Junior Research Group Leader at TU Dortmund University and the Research Center Trustworthy Data Science and Security (RC Trust), where she contributes to research on the theoretical foundations of trustworthy artificial intelligence. Her work focuses on understanding and improving the reliability and efficiency of deep learning models, with a particular emphasis on generalization and learning theory.

Her research lies at the intersection of deep learning theory and practice. She investigates information-theoretic perspectives on learning dynamics as well as geometric properties of loss surfaces, such as flatness and linear mode connectivity, to better explain why and when deep neural networks perform reliably beyond their training data. Through this work, she aims to bridge formal theoretical insights with phenomena observed in real-world training and inference scenarios.

Linara completed her PhD at Ruhr University Bochum in February 2025, successfully defending her dissertation “Generalization in Deep Learning: From Theory to Practice”, supervised by Prof. Dr. Asja Fischer. In her doctoral research, she addressed one of the central challenges of modern artificial intelligence: moving beyond heuristic-driven progress toward a more principled and sustainable understanding of deep learning. Her work combines mathematically precise formulations with practical validation on state-of-the-art neural networks, and explores federated learning as a key approach for real-world, distributed AI systems.

Before joining RC Trust, Linara was a Research Associate at Fraunhofer IAIS in the field of AI safety, where she worked on projects for safeguarding artificial intelligence systems, including applications in autonomous driving. She is an ELLIS member and a graduate of the ELLIS PhD Program, during which she completed a research exchange at EPFL in Switzerland, in the Machine Learning Optimization (MLO) lab of Prof. Dr. Martin Jaggi. As part of ELLIS, she has co-organized a public reading group on the Mathematics and Efficiency of Deep Learning since 2022.

Key Career Milestones

  • Since 2025: Junior Research Group Leader, TU Dortmund University & Research Center Trustworthy Data Science and Security (RC Trust), Germany
  • 2024: Recipient of competitive ENFIELD research grant for information-theoretic approaches to deep learning, TU Graz
  • 2023: Visiting Researcher, ELLIS PhD Program, Machine Learning Optimization Lab, École Polytechnique Fédérale de Lausanne (EPFL), Switzerland
  • 2021–2025: Doctoral Researcher in Computer Science, Ruhr University Bochum (ELLIS PhD Excellence Program), Germany
  • 2017–2022: Data Scientist, Fraunhofer Institute for Intelligent Analysis and Information Systems (IAIS), Germany
  • 2010–2015: Professional Software Developer (Backend, Ruby on Rails), industry projects involving large-scale web systems and team-based software development

Key Publications

Adilova, L., Andriushchenko, M., Kamp, M., Fischer, A., Jaggi, M. Layer-wise Linear Mode Connectivity. In International Conference on Learning Representations (ICLR), 2024.
Preprint DOI: https://doi.org/10.48550/arXiv.2307.06966

Adilova, L., Geiger, B. C., Fischer, A. Information Plane Analysis for Dropout Neural Networks. In International Conference on Learning Representations (ICLR), 2023.
Preprint DOI: https://doi.org/10.48550/arXiv.2303.00596

Han, T., Adilova, L., Petzka, H., Kleesiek, J., Kamp, M. Flatness Is Necessary, Neural Collapse Is Not: Rethinking Generalization via Grokking. Advances in Neural Information Processing Systems (NeurIPS), 2025.

Petzka, H., Kamp, M., Adilova, L., Sminchisescu, C., Boley, M. Relative Flatness and Generalization. In Advances in Neural Information Processing Systems (NeurIPS), 2021.
NeurIPS Proceedings: https://proceedings.neurips.cc/paper/2021/hash/995f5e03890b029865f402e83a81c29d-Abstract.html

Weimar, M., Rachbauer, L. M., Starshynov, I., Faccio, D., Adilova, L., et al. Fisher Information Flow in Artificial Neural Networks. Physical Review X 15(3), 2025. https://journals.aps.org/prx/abstract/10.1103/kn3z-rmm8

Kamp, M., Adilova, L., Sicking, J., Hüger, F., Schlicht, P., Wirtz, T., Wrobel, S. Efficient Decentralized Deep Learning by Dynamic Model Averaging. Joint European Conference on Machine Learning and Knowledge Discovery in Databases (ECML PKDD), 2018.
Preprint DOI: https://doi.org/10.48550/arXiv.1807.03210

Andrienko, N., Andrienko, G., Adilova, L., Wrobel, S. Visual Analytics for Human-Centered Machine Learning. IEEE Computer Graphics and Applications, 42(1), 2022. https://pubmed.ncbi.nlm.nih.gov/35077350/

Singh, S. P., Adilova, L., Kamp, M., Fischer, A., Schölkopf, B., Hofmann, T. Landscaping Linear Mode Connectivity. High-Dimensional Learning Dynamics Workshop, 2024.
Preprint DOI: https://doi.org/10.48550/arXiv.2406.16300

Adilova, L., Abourayya, A., Li, J., Dada, A., Petzka, H., Egger, J., et al. FAM: Relative Flatness-Aware Minimization. Topological, Algebraic and Geometric Learning Workshops, 2023.
Proceedings: https://proceedings.mlr.press/v221/adilova23a.html

Adilova, L., Rosenzweig, J., Kamp, M. Information-Theoretic Perspective of Federated Learning. NeurIPS Workshop on Information Theory and Machine Learning, 2019.

 
