IJCNN 2020, Glasgow, Scotland, U.K.
Special Session on Explainable Computational/Artificial Intelligence


Submission link

Aims & Scope
The spectacular successes of machine learning (ML) have led to a plethora of Artificial Intelligence (AI) applications. However, the large majority of these successful models, such as deep neural networks and support vector machines, are black boxes: opaque, non-intuitive, and difficult for people to understand. Critical domains that demand more intelligent, autonomous, and symbiotic systems, such as medicine, security, law, the military, finance, and transportation, to mention a few, are areas for which performance is not the only quality indicator. In these domains, decision-making carries high risks due to the involvement of human lives, critical infrastructure, very costly operations, national threats, and so on. In such situations, decision makers need much more than numeric performance; they need alternative solutions that provide a rationale and are more knowledge-based.

The goal of Explainable AI (XAI) is to create a suite of ML techniques that (i) result in more explainable models while maintaining a high level of learning performance, and (ii) enable human users to develop the understanding needed to trust, and effectively manage, a new generation of artificially intelligent machine tools. Continued advances promise to produce autonomous systems that will perceive, learn, decide, and act on their own. However, the effectiveness of these systems is limited by the machines' current inability to explain their decisions and actions to human users.

This session will explore the performance-versus-explanation trade-off space. This includes ML models that are interpretable by design; some, like fuzzy systems and rule induction, have general function approximation properties. Also very important are algorithms that produce models in mathematical languages, such as algebraic functions, differential equations, and piecewise non-linear models. Despite the differences between these approaches, common elements and basic methodologies are present in many applications. We will bring together researchers from different fields to discuss key issues related to the research and application of XAI methods and to share their experiences of solving common problems.

Topics of interest to this session include, but are not limited to:
•    Interpretable ML Models
•    Query Interfaces for Deep Learning
•    Interactive User Interfaces
•    Active and Transfer Learning
•    Relevance and Metric Learning
•    Practical Applications of Interpretable Machine Learning
•    Deep Neural Reasoning

Preliminary Dates
Paper submission: EXTENDED DEADLINE January 30, 2020
Paper acceptance notification: March 15, 2020


Session Chairs
Julio J. Valdés (Julio.Valdes@nrc-cnrc.gc.ca), National Research Council Canada
Paulo Lisboa, Sandra Ortega-Martorell and Ivan Olier ({P.J.Lisboa, S.OrtegaMartorell, I.A.OlierCaparroso}@ljmu.ac.uk), Liverpool John Moores University, U.K.
Alfredo Vellido (avellido@cs.upc.edu), Universitat Politècnica de Catalunya, Spain