Our next guest at the AI Colloquium is Prof. Dr. Alexander Marx!

What: AI Colloquium - organized by the Lamarr Institute and us - on "Causal Discovery under Scaled Noise: Identifiability and Robust Estimation"

Who: speaker is Prof. Dr. Alexander Marx (RCTrust and TU Dortmund University)

When: 27th June 2024, 11:00 a.m. to 12:00 p.m.

Where: Joseph-von-Fraunhoferstrasse 25, 3rd floor, Room 303, or via Zoom


About the speaker:

Alexander Marx is a professor at TU Dortmund, leading the Causality group at the Research Center for Trustworthy Data Science and Security and the Department of Statistics, and a member of the ELLIS society. His research is at the intersection of causality and machine learning, focusing on causal discovery, causal representation learning, information theory, and Bayesian deep learning. He was a postdoctoral researcher in the Computational Biology Group at ETH Zürich, a postdoc fellow at the ETH AI Center, and part of the Medical Data Science Group. He did his PhD in the Exploratory Data Analysis group affiliated with the CISPA Helmholtz Center for Information Security and the Max Planck Institute for Informatics.



About the talk:

Causal discovery aims to learn causal networks, i.e., directed acyclic graphs (DAGs), from observational data. Although the problem is non-identifiable in its most general form, the causal DAG becomes identifiable under assumptions about the underlying causal mechanism, such as causal sufficiency and additive noise. In this talk, we relax the additive noise assumption and show that the broader class of location-scale or heteroscedastic noise models (LSNMs) is identifiable up to pathological cases. Further, we propose a consistent estimator for cause-effect identification for Gaussian LSNMs, which achieves state-of-the-art performance on commonly used benchmarks for bivariate causal discovery. In the second part of the talk, we turn to the estimation of Gaussian LSNMs (heteroscedastic regression) on moderately high-dimensional data via deep neural networks. Connecting to prior work on Bayesian inference, we propose an efficient Laplace approximation for heteroscedastic neural networks that provides epistemic uncertainties and can be applied post hoc or trained with automatic regularization through empirical Bayes.
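To make the model class concrete: an LSNM posits Y = f(X) + g(X) · N with standard Gaussian noise N, where f is a location (mean) function and g > 0 a scale function; additive noise models are the special case of constant g. The following minimal NumPy sketch (with illustrative choices of f and g, not the estimator from the talk) simulates such a model and shows that the residual spread depends on the cause X:

```python
import numpy as np

rng = np.random.default_rng(0)

# Location-scale noise model (LSNM): Y = f(X) + g(X) * N, N ~ Gaussian(0, 1).
# f and g below are illustrative choices, not taken from the talk.
n = 5000
x = rng.uniform(-2.0, 2.0, n)
f = np.tanh(2.0 * x)          # location (mean) function
g = 0.2 + 0.5 * x**2          # scale function; varies with x (heteroscedastic)
y = f + g * rng.standard_normal(n)

# Under the additive-noise assumption, residuals after removing the location
# function would have constant variance; for an LSNM they do not.
resid = y - np.tanh(2.0 * x)             # residuals w.r.t. the true location
inner = np.std(resid[np.abs(x) < 0.5])   # residual scale near x = 0
outer = np.std(resid[np.abs(x) > 1.5])   # residual scale in the tails
print(inner, outer)                      # the tail spread is several times larger
```

Heteroscedastic regression, as discussed in the second part of the talk, fits both f and g (e.g., with a neural network predicting a mean and a log-scale per input) rather than assuming a constant noise level.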




