Ms Alhida Murati
+49 231 7557729
(8 AM to 12 PM)
I am the professor for Verification and Formal Guarantees of Machine Learning at the Department of Computer Science of TU Dortmund University, a member of the Center for Trustworthy Data Science and Security at the University Alliance Ruhr, and principal investigator in several research projects. My group's research seeks to develop practical tools to make artificial intelligence safe, reliable, and trustworthy. To this end, we develop formal methods for artificial intelligence and collaborate with researchers from various disciplines, including IT security, statistics, and psychology.
You can also find me on DBLP, Google Scholar, ORCID (ID 0000-0001-9276-6342), Scopus (ID 26664925200), and the Web of Science (ResearcherID ADV-0056-2022).
We offer Bachelor's and Master's theses on topics at the intersection of formal methods (e.g., verification, logic, constraint solving) and artificial intelligence (e.g., neural networks, reinforcement learning, automated reasoning).
If you are an enthusiastic and highly qualified student, please get in touch with me for more details. Ideally, attach a CV and/or a recent transcript.
My research interests broadly lie in the intersection of machine learning and formal methods. I am especially interested in combining inductive techniques from the area of machine learning and symbolic techniques from the area of logic. My goal is to build automated tools for the design, construction, and analysis of safe artificial intelligence.
My chair works on a wide range of topics.
Our projects often extend into theoretical computer science, specifically automata theory, game theory, and logic.
As a byproduct of my research, I have developed and contributed to several tools:
You can try some of our tools online:
Horn-ICE Learning for Synthesizing Invariants and Contracts
Invariant Synthesis for Incomplete Verification Engines
I am the principal investigator on the following projects.
The DAAD-supported project LeaRNNify sits at the interface of formal methods and artificial intelligence. It aims to bring together two different kinds of algorithmic learning: grammatical inference and the training of neural networks.
More precisely, we promote the use of recurrent neural networks (RNNs) in the verification of reactive systems, a task that has so far been the domain of grammatical inference. Conversely, grammatical inference is finding its way into classical machine learning: our second goal is to use automata-learning techniques to enhance the verification, explainability, and interpretability of machine-learning models and, in particular, RNNs.
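To give a flavor of the automata-learning direction, here is a toy sketch (not the project's actual tooling) of extracting a finite automaton from a black-box sequence classifier. A simple Python function stands in for a trained RNN, and prefixes are merged into states whenever they look identical under a fixed set of distinguishing suffixes; a real learner such as L* would grow that suffix set from counterexamples.

```python
def classifier(word):
    # Stand-in for a trained RNN: accepts words over {a, b}
    # containing an even number of a's.
    return word.count("a") % 2 == 0

ALPHABET = "ab"
SUFFIXES = [""]  # distinguishing suffixes; L*-style learners refine this set


def row(prefix):
    """Signature of the state reached by `prefix`, seen through SUFFIXES."""
    return tuple(classifier(prefix + s) for s in SUFFIXES)


def extract_dfa():
    """BFS over prefixes, merging those with identical signatures into one state."""
    states = {row(""): ""}  # signature -> representative prefix
    transitions = {}        # (signature, symbol) -> successor signature
    queue = [""]
    while queue:
        prefix = queue.pop(0)
        for symbol in ALPHABET:
            successor = row(prefix + symbol)
            if successor not in states:
                states[successor] = prefix + symbol
                queue.append(prefix + symbol)
            transitions[(row(prefix), symbol)] = successor
    accepting = {sig for sig, rep in states.items() if classifier(rep)}
    return row(""), transitions, accepting


def dfa_accepts(dfa, word):
    """Run the extracted automaton on a word."""
    state, transitions, accepting = dfa
    for symbol in word:
        state = transitions[(state, symbol)]
    return state in accepting


dfa = extract_dfa()
```

The resulting two-state automaton reproduces the classifier's behavior exactly and, unlike the network itself, can be inspected and model-checked.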
Also visit our project website.
The goal of the DFG-funded project Temporal Logic Sketching is to develop computer-aided methods to assist engineers in writing formal specifications. To this end, the project combines inductive and deductive techniques from machine learning, artificial intelligence, and logic. As a byproduct, the project also investigates explainable machine learning, specifically the learning of human-interpretable models.
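The following hypothetical mini-example illustrates the sketching idea: an engineer writes a temporal formula with a hole, and candidate completions are checked against example traces. The syntax and semantics here are deliberately simplified (finite-trace evaluation with G, F, implication, and atomic propositions only); the project's actual methods are more general.

```python
def holds(formula, trace, i=0):
    """Evaluate a formula at position i of a finite trace (a list of letter sets)."""
    op, arg = formula
    if op == "atom":
        return arg in trace[i]
    if op == "G":   # globally: arg holds at every position from i on
        return all(holds(arg, trace, j) for j in range(i, len(trace)))
    if op == "F":   # finally: arg holds at some position from i on
        return any(holds(arg, trace, j) for j in range(i, len(trace)))
    if op == "->":  # implication
        left, right = arg
        return (not holds(left, trace, i)) or holds(right, trace, i)
    raise ValueError(op)


def complete_sketch(sketch, positives, negatives, candidates):
    """Return the first hole-filling consistent with all example traces."""
    for candidate in candidates:
        formula = sketch(candidate)
        if all(holds(formula, t) for t in positives) and \
           not any(holds(formula, t) for t in negatives):
            return candidate
    return None


# Sketch: G(? -> F grant), i.e. "every ? is eventually followed by a grant".
sketch = lambda hole: ("G", ("->", (("atom", hole), ("F", ("atom", "grant")))))
positives = [[{"req"}, set(), {"grant"}], [{"idle"}, {"idle"}]]
negatives = [[{"req"}, {"idle"}]]
hole = complete_sketch(sketch, positives, negatives, ["req", "idle", "grant"])
```

On these examples the search fills the hole with "req", yielding the classic request-response property; enumeration over candidate propositions here stands in for the constraint-based search an actual sketching tool would use.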
In the context of the DFG Research Unit "Deep Learning for Sparse Chemical Process Data", the sub-project Verification of Anomaly Detectors is concerned with the quality assurance of deep neural networks for anomaly detection. To this end, the project develops new deductive and abstraction-based verification methods for neural networks over sequences, including recurrent neural networks and transformers.
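As a minimal illustration of abstraction-based verification, the sketch below propagates an input box through a tiny feed-forward network using interval bound propagation. This is one of many possible abstractions, and the network weights are made up for illustration; it is not the project's actual method, which targets sequence models such as RNNs and transformers.

```python
def interval_affine(lower, upper, weights, bias):
    """Sound output bounds for W x + b when each x[i] lies in [lower[i], upper[i]]."""
    out_lo, out_hi = [], []
    for row, b in zip(weights, bias):
        # A positive weight attains its extremes at the matching input bound,
        # a negative weight at the opposite one.
        lo = b + sum(w * (l if w >= 0 else u) for w, l, u in zip(row, lower, upper))
        hi = b + sum(w * (u if w >= 0 else l) for w, l, u in zip(row, lower, upper))
        out_lo.append(lo)
        out_hi.append(hi)
    return out_lo, out_hi


def interval_relu(lower, upper):
    """ReLU is monotone, so it can be applied bound-wise."""
    return [max(0.0, l) for l in lower], [max(0.0, u) for u in upper]


# Tiny one-hidden-layer "detector" with fixed, made-up weights.
W1, b1 = [[1.0, -2.0], [0.5, 0.5]], [0.0, -0.25]
W2, b2 = [[1.0, 1.0]], [0.0]


def output_bounds(lower, upper):
    lo, hi = interval_affine(lower, upper, W1, b1)
    lo, hi = interval_relu(lo, hi)
    return interval_affine(lo, hi, W2, b2)


lo, hi = output_bounds([0.0, 0.0], [0.1, 0.1])
```

If the computed upper bound stays below the detector's alarm threshold, the network provably never raises an alarm anywhere in the input box; this kind of sound over-approximation is what makes such guarantees possible without enumerating inputs.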
Should you be looking for a Bachelor's or Master's thesis or project at the intersection of formal methods and artificial intelligence, please get in touch with me for further details.
See my DBLP and Google Scholar entries for a complete list.