About

InterACT is a collaborative workshop between the interpretable machine learning / explainable AI group led by Dr. Giuseppe Casalicchio at the Chair of Statistical Learning and Data Science (SLDS) of LMU Munich, headed by Prof. Dr. Bernd Bischl, and the Emmy Noether Junior Research Group led by Prof. Dr. Marvin N. Wright at the Leibniz Institute for Prevention Research and Epidemiology - BIPS.

The workshop is aimed primarily at PhD students in the fields of interpretable machine learning (IML) and explainable AI (XAI), but also includes postdoctoral and senior researchers.

Past Workshops

InterACT #1 was held November 13th – 16th 2023 in Munich.
Among many brainstorming sessions, it laid the foundation for Ewald et al. (2024) and spawned the “CountARFactuals” project (Dandl et al. 2024), bridging counterfactual explanations and tree-based generative modeling.

InterACT #2 was held September 23rd – 26th 2024 in Bremen. It was directly supported by the Minds, Media, Machines Integrated Graduate School (MMMIGS) PhD Grant Bremen and, among other outcomes, led to Kapar, Koenen, and Jullum (2025), combining generative modeling with interpretable machine learning.

Supporting Organizations

InterACT is made possible by the following organizations whose support is greatly appreciated:

LMU Munich

Leibniz Institute for Prevention Research and Epidemiology - BIPS

University of Bremen

Munich Center for Machine Learning

Minds, Media, Machines

All Publications

Baniecki, Hubert, Giuseppe Casalicchio, Bernd Bischl, and Przemyslaw Biecek. 2024. “On the Robustness of Global Feature Effect Explanations.” In Machine Learning and Knowledge Discovery in Databases. Research Track, edited by Albert Bifet, Jesse Davis, Tomas Krilavičius, Meelis Kull, Eirini Ntoutsi, and Indrė Žliobaitė, 125–42. Cham: Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-70344-7_8.
Dandl, Susanne, Kristin Blesch, Timo Freiesleben, Gunnar König, Jan Kapar, Bernd Bischl, and Marvin N. Wright. 2024. “CountARFactuals – Generating Plausible Model-Agnostic Counterfactual Explanations with Adversarial Random Forests.” In Explainable Artificial Intelligence, edited by Luca Longo, Sebastian Lapuschkin, and Christin Seifert, 2155:85–107. Cham: Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-63800-8_5.
Ewald, Fiona Katharina, Ludwig Bothmann, Marvin N. Wright, Bernd Bischl, Giuseppe Casalicchio, and Gunnar König. 2024. “A Guide to Feature Importance Methods for Scientific Inference.” In Explainable Artificial Intelligence, edited by Luca Longo, Sebastian Lapuschkin, and Christin Seifert, 440–64. Cham: Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-63797-1_22.
Kapar, Jan, Niklas Koenen, and Martin Jullum. 2025. “What’s Wrong with Your Synthetic Tabular Data? Using Explainable AI to Evaluate Generative Models.” https://arxiv.org/abs/2504.20687.