I am usually at ATHENA RC, doing research on Explainable AI under the supervision of Christos Diou, Theodore Dalamagas, and Eirini Ntoutsi. I also collaborate on explainability research with colleagues in Munich, including Giuseppe Casalicchio and the entire IML team led by Bernd Bischl. I keep an ongoing interest in probabilistic ML, especially likelihood-free inference, where I often work with Michael Gutmann.
I actively contribute to (and sometimes lead) open-source projects like Effector and ELFI.
I have served as a reviewer for conferences like NeurIPS, AISTATS, and ECML, and for journals like JAI.
I believe in slow science. The current pace burns people out and buries meaningful research under an avalanche of noise.
Personal websites are great, but we don't need any more 500-pound websites.
I sometimes work on interpretability projects in industry, like LLM explainability for novelcore (a 6-month project in 2023).
If you want a more detailed look at what I do, check out my CV.
I research explainable-by-design models and post-hoc techniques for black-box models, with a special focus on explaining deep learning systems trained on tabular data.
At the University of Edinburgh, I took two pivotal courses, MLPR (Iain Murray) and PMR (Michael Gutmann), that shaped my understanding of machine learning.