IJCNN 2021, Virtual event
Special Session on
Transparent and Explainable Artificial Intelligence (XAI) for Health


Submission link

Aims & Scope
From the widespread implementation and use of electronic health records to basic research in pharma, and from the popularization of health wearables to the digitalization of procedures at the point of care, the domains of medicine and healthcare are bringing data to the fore of their practice. This abundance of data in turn calls for methods capable of transforming such raw information into novel, truly usable knowledge, including for high-stakes decision support.
Machine Learning (ML) is enjoying unprecedented attention in healthcare and medicine, riding the current wave of popularity of deep learning (DL) and the umbrella concept of Big Data. But such attention may bear little fruit unless data scientists effectively address one major limitation that is particularly sensitive in the medical domain: the lack of interpretability of many ML approaches and, in particular, of DL methods, which in turn leads to limited explainability. Without the mitigation afforded by a sound understanding of how information flows through the model, this limitation may confine ML to niche applications and poses a significant risk of costly mistakes.
Domains where decision-making impacts our health motivate this special session, to which we invite current research on eXplainable Artificial Intelligence (XAI). The goal of XAI is to design techniques and approaches that retain model performance while explaining their outputs in human-understandable terms. With these capabilities, clinical practitioners will be able to integrate the models into their own reasoning, gaining insights about the data and checking compatibility with working guidelines at the point of care.
This session aims to explore this performance-versus-explainability trade-off space in medical and healthcare applications of ML. It seeks to bring together researchers from different fields to discuss key issues in the research and application of XAI methods, and to share their experiences of solving problems in medicine and healthcare. Applications leading towards routine clinical practice are particularly welcome.
Topics of interest to this session include, but are not limited to:

•    Interpretable ML Models in medicine and healthcare: theoretical and practical developments
•    XAI for electronic health records
•    Integration of XAI in medical devices
•    Human-in-the-loop ML: bridging the gap between data and medical experts
•    Interpretability through Data Visualization
•    Interpretable ML pipelines in medicine and healthcare
•    Query Interfaces for DL
•    Active and Transfer Learning
•    Relevance and Metric Learning
•    Deep Neural Reasoning
•    Interfaces with Rule-Based Reasoning, Fuzzy Logic and Natural Language Processing
•    Assessment of bias and discrimination in data-based models

Preliminary Dates
Paper submission: January 15, 2021
Paper decision notification: March 15, 2021


Session Chairs
Alfredo Vellido (avellido@cs.upc.edu), Universitat Politècnica de Catalunya, Spain
Paulo Lisboa (P.J.Lisboa@ljmu.ac.uk), Liverpool John Moores University, U.K.
José D. Martín (jose.d.martin@uv.es), Universitat de València, Spain