ExTraSafe26: 1st Workshop on Explainability, Transparency, and Safety
Part of the 49th German Conference on Artificial Intelligence (KI2026)
Bremen, Germany, August 11-12, 2026

Conference web page: https://fb-ki.gi.de/extrasafe
Submission link: https://easychair.org/conferences/?conf=extrasafe26
With the increasing deployment of AI systems in application scenarios that profoundly impact humans, demands on the trustworthiness of AI systems also increase, as exemplified prominently in the EU guidelines for trustworthy AI and the EU AI Act. Explainability, transparency, and safety (ExTraSafe) are particularly prominent dimensions of trustworthiness, and extensive AI research has focused on developing novel methods to enhance the ExTraSafe properties of AI models and systems. The goal of this workshop is to foster transdisciplinary discussion between AI research, human-AI interaction, AI practice, and law. Further, the workshop serves as the annual meeting of the newly established working group ExTraSafe of the AI section of the German Informatics Society (GI - FBKI - AK ExTraSafe).
Key topics of interest include (but are not limited to):
Legal regulation of AI systems, in particular papers on the legal compliance of AI systems in certain application domains and design implications of legal regulations.
Practitioners' perspectives on ExTraSafe properties of AI systems in high-stakes application domains, e.g. automotive industry, industrial automation, healthcare, education, or agriculture.
Analysis of ExTraSafe properties across the entire system lifecycle, especially with respect to architecture considerations, algorithm design, and evaluation practices.
Approaches to social explainable artificial intelligence (sXAI), taking into account heterogeneous explanatory needs, social roles and contexts, incremental multi-step explanations, and/or multimodality.
Novel approaches to the evaluation of ExTraSafe properties, especially in user and field studies.
Submission Guidelines
We invite two types of submissions:
- Research papers (up to 8 pages plus references in Springer LNCS layout; shorter papers are explicitly welcome), which will receive 10 min presentation time + 5 min discussion time in the workshop
- Discussion entries (up to 5,000 characters of description, entered directly in EasyChair), which will receive 5 min presentation (impulse) time + 10 min discussion time in the workshop
Committees
The workshop is organized by the AK ExTraSafe of the AI section of the German Informatics Society (GI - FBKI - AK ExTraSafe). The organizing committee is:
Benjamin Paaßen (Bielefeld University; DFKI), bpaassen@techfak.uni-bielefeld.de (Main Contact Person)
Vera Schmitt (TU Berlin)
Thomas Kosch (Humboldt-Universität zu Berlin)
Julius Schöning (Osnabrück University of Applied Sciences)
Hanna Drimalla (Bielefeld University)
Florian Rabe (University of Erlangen-Nuremberg)
Tim Schrills (University of Lübeck)
Daniel Neider (TU Dortmund University; Research Center Trustworthy Data Science and Security; Lamarr Institute for Machine Learning and Artificial Intelligence)
Peter Fettke (German Research Center for Artificial Intelligence (DFKI); Saarland University)
Venue
The workshop will be held as part of the 49th German Conference on Artificial Intelligence (KI2026) in Bremen, Germany, August 11 or 12, 2026.
Contact
All questions about submissions should be emailed to Benjamin Paaßen (bpaassen@techfak.uni-bielefeld.de).
