XAI for U: Explainable AI for Ubiquitous, Pervasive and Wearable Computing

In conjunction with UbiComp'25, Espoo, Finland

The XAI for U workshop addresses the critical need for transparency in the AI systems embedded in our daily lives through mobile devices, wearables, and smart environments. The workshop aims to foster the development and application of Explainable AI (XAI) tools to overcome the opacity of these systems, focusing on the unique challenges of XAI in time-series and multimodal data, interconnected ML components, and user-centered explanations. It offers a vital platform for sharing recent advancements, addressing open challenges, and proposing future research directions to ensure AI-driven solutions are explainable, ethical, and aligned with user expectations.

Call For Papers

We invite submissions of original research, insightful case studies, and work in progress that address XAI applications within Ubiquitous and Wearable Computing, including but not limited to:

XAI in time-series and multimodal data analysis

Techniques and challenges in interpreting complex data streams from wearable and ubiquitous computing devices.

User-centered explanations for AI-driven systems

Designing explanations that are meaningful and accessible to end-users.

Deployment and evaluation of XAI tools in real-world scenarios

Case studies and empirical research on the effectiveness of XAI applications.

Multimodal XAI for behavior analysis

Leveraging diverse data sources for comprehensive behavior analysis.

Interconnected ML components in wearable and ubiquitous computing

Strategies for explaining the dynamics and decisions of interconnected AI systems and models.

Ethical considerations and user privacy in XAI

Addressing the ethical implications and privacy concerns of deploying XAI in ubiquitous computing.

Multimodal XAI in affective computing

Techniques for understanding and interpreting human emotions through AI.

Empirical evaluation methods

Methods for assessing the effectiveness and impact of XAI and multimodal AI systems.

Paper format

Submissions should be anonymized and use the double-column template available on the UbiComp website.

Standard submissions: up to 4 pages (including references). Accepted standard submissions will be published in the Adjunct Proceedings.

Short submissions: up to 2 pages (including references). Accepted short submissions will not be published in the Adjunct Proceedings, but will instead be made available on this website.

Important Dates (AoE)

Submission deadline:
Standard (4 pages): June 29, 2025
Short (2 pages): July 28, 2025

Notification of acceptance:
Standard (4 pages): July 11, 2025
Short (2 pages): Rolling basis (no later than July 31, 2025)

Camera-ready deadline:
Standard (4 pages): July 31, 2025
Short (2 pages): August 15, 2025

Workshop date: October 12, 2025

Submission

Submissions are now closed!

Thank you for your interest in our workshop. For registration information, fees, and deadlines, please visit the UbiComp'25 registration page.

Schedule


Time Event
12:30 - 14:30 Lunch
14:30 - 14:45 Welcome
14:45 - 15:30 Keynote by Brian Y. Lim + Q&A
15:30 - 16:00 Oral presentations:
1. Generating Explanations for Models Predicting Student Exam Performance by Swathy Satheesan Cheruvalath et al.
2. VISAR: Visualization and Interpretation of Sensor-based Activity Recognition for Smart Homes by Alexander Karpekov et al.
3. Evaluating the Quality of Counterfactual Explanations in Multivariate Time-Series by Mandani Ntekouli et al.
16:00 - 16:30 Coffee break
16:30 - 17:15 Keynote by Katharina Weitz + Q&A
17:15 - 17:30 Final discussion and conclusion

Keynote Speaker

Katharina Weitz
Fraunhofer Heinrich Hertz Institute, Berlin, Germany.

"Understandable Enough?" Human-Centered XAI in Risk-Sensitive AI Systems
Brian Y. Lim
National University of Singapore, Singapore.

"Toward human-aligned explainable AI"

Accepted Papers


Paper Authors
Generating Explanations for Models Predicting Student Exam Performance (link) Swathy Satheesan Cheruvalath, Matias Laporte, Francesco Bombassei De Bona, Teena Hassan, Martin Gjoreski
VISAR: Visualization and Interpretation of Sensor-based Activity Recognition for Smart Homes (link) Alexander Karpekov, Sonia Chernova, Thomas Ploetz
Evaluating the Quality of Counterfactual Explanations in Multivariate Time-Series (link) Mandani Ntekouli, Francesco Bombassei De Bona, Martin Gjoreski, Gerasimos Spanakis

Organising Team

Mandani (Mado) Ntekouli

Maastricht University, The Netherlands

Teena Hassan

Bonn-Rhein-Sieg University of Applied Sciences, Germany

Mor Vered

Monash University, Australia

Martin Gjoreski

Università della Svizzera Italiana, Switzerland

Sang Won Bae

Stevens Institute of Technology, USA

Website Chair

Francesco Bombassei De Bona

Università della Svizzera Italiana, Switzerland

Student Volunteers

Francesco Bombassei De Bona

Università della Svizzera Italiana, Switzerland

Youssef Mahmoud Youssef

Bonn-Rhein-Sieg University of Applied Sciences, Germany

Acknowledgement

We acknowledge the financial support for this workshop from:

Swiss National Science Foundation (SNSF), Project XAI-PAC: Towards Explainable and Private Affective Computing (PZ00P2_216405)

Ministerium für Kultur und Wissenschaft (MKW) des Landes Nordrhein-Westfalen (NRW), "Profilbildung 2022" project: Zentrum Assistive Technologien Rhein-Ruhr

Contact Us

martin.gjoreski@usi.ch

Copyright © ubicomp-xai 2024