AAAI 2021 Tutorial
On Explainable AI: From Theory to Motivation, Industrial Applications and Coding Practices
Half-day (3 hours) Tutorial
Wednesday, February 3rd, 2021
12:00 PM – 3:00 PM (Pacific Time)
The goal of the tutorial is to provide answers to the following questions:
What is explainable AI (XAI for short), i.e., what counts as an explanation in the various streams of the AI community (Machine Learning, Logics, Constraint Programming, Diagnostics)? What are the metrics for evaluating explanations?
Why is explainable AI important, and even crucial in some applications? What are the motivations for building AI systems that expose explanations?
What are the real-world applications that genuinely need explanations in order to deploy AI systems at scale?
What are the state-of-the-art techniques for generating explanations in computer vision and natural language processing? What works well, and not so well, for which data formats, use cases, applications, and industries?
How does one develop XAI components? Where should one start?
What are the lessons learned and the limitations in deploying existing XAI systems, and in communicating explanations to humans?
What are some of the promising future directions in XAI?
The future of AI lies in enabling people to collaborate with machines to solve complex problems. Like any effective collaboration, this requires good communication, trust, clarity, and understanding. XAI (eXplainable AI) aims to address these challenges by combining the best of symbolic AI and traditional Machine Learning. The topic has been studied for years by the different communities of AI, with different definitions, evaluation metrics, motivations, and results.
This tutorial presents a snapshot of XAI work to date and surveys the achievements of the AI community, with a focus on machine learning and symbolic AI approaches (given the half-day format). We will motivate the need for XAI in real-world, large-scale applications while presenting state-of-the-art techniques and best XAI coding practices. In the first part of the tutorial, we give an introduction to the different aspects of explanation in AI. We then focus on two specific approaches: (i) XAI using machine learning and (ii) XAI using a combination of graph-based knowledge representation and machine learning. For both, we cover the specifics of the approach, the state of the art, and the research challenges for the next steps. The final part of the tutorial gives an overview of real-world applications of XAI as well as best XAI coding practices.
Broad-spectrum introduction to explanation in AI. This includes describing and motivating the need for explainable AI techniques from both theoretical and applied standpoints. In this part we also summarize the prerequisites and introduce the different angles taken by the rest of the tutorial.
General overview of explanation in the various fields of AI (optimization, knowledge representation and reasoning, machine learning, search and constraint optimization, planning, natural language processing, robotics, and vision) to align everyone on the various definitions of explanation. Evaluation of explainability will also be covered. The tutorial will cover most definitions but will only go deep in the following areas: (i) Explainable Machine Learning, (ii) Explainable AI with Knowledge Graphs and Machine Learning.
In this section of the tutorial we address the explanatory power of combining graph-based knowledge bases with machine learning approaches.
We will review some open-source and commercial XAI tools applied to real-world examples. We describe how XAI can be instantiated depending on the technical and business challenge. In particular, we focus on a number of use cases: (1) explaining object detection, (2) explaining obstacle detection for autonomous trains, (3) explaining flight performance, (4) an interpretable flight delay prediction system with built-in explanation capabilities, (5) a wide-scale contract management system that predicts and explains the risk tier of corporate projects using semantic reasoning over knowledge graphs, (6) an expenses system that identifies, explains, and predicts abnormal expense claims by employees of large organizations in 500+ cities, (7) an explanation system for credit decisions, (8) an explanation system for medical conditions, as well as 8 other industrial use cases.
We walk through XAI coding practices by demonstrating how XAI can be integrated and tested. This section will go through development code, shared via Google Colab for easy interaction with the AAAI audience. A Google account (to access Google Colab) is required for this section.
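As a taste of the kind of code covered in this section, the following is a minimal sketch of a model-agnostic explanation workflow using scikit-learn's permutation importance; the synthetic dataset and model choice are illustrative assumptions, not the tutorial's actual notebook.

```python
# Minimal sketch of a model-agnostic explanation workflow.
# The synthetic data and model are illustrative assumptions only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Build a small synthetic classification task and fit a black-box model.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: shuffle each feature in turn and measure how
# much the model's score drops; larger drops mean more influential features.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: {score:.3f}")
```

Global feature attributions like these are one of the simplest explanation styles; the tutorial's hands-on material also covers richer, instance-level techniques.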
[12:00pm - 12:20pm Pacific Time]
[12:20pm - 1:00pm Pacific Time]
[1:00pm - 1:40pm Pacific Time]
[1:40pm - 2:20pm Pacific Time]
[2:20pm - 3:00pm Pacific Time]
Freddy Lecue (PhD 2008, Habilitation 2015) is the Chief Artificial Intelligence (AI) Scientist at CortAIx (Centre of Research & Technology in Artificial Intelligence eXpertise) @Thales in Montreal, Canada. He is also a research associate at INRIA, in WIMMICS, Sophia Antipolis, France. Before joining Thales, he was principal scientist and research manager in Artificial Intelligence systems (systems combining learning and reasoning capabilities) at Accenture Technology Labs, Dublin, Ireland. Before joining Accenture Labs, he was a Research Scientist at IBM Research, Smarter Cities Technology Center (SCTC) in Dublin, Ireland, and lead investigator of the Knowledge Representation and Reasoning group. His main research interest is Explainable AI systems. The application domain of his current research is Smarter Cities, with a focus on Smart Transportation and Building. In particular, he is interested in exploiting and advancing Knowledge Representation and Reasoning methods for representing and inferring actionable insight from large, noisy, heterogeneous big data. He has over 40 publications in refereed journals and conferences related to Artificial Intelligence (AAAI, ECAI, IJCAI, IUI) and the Semantic Web (ESWC, ISWC), all describing new systems to handle expressive semantic representation and reasoning. He co-organized the first three workshops on semantic cities (AAAI 2012, 2014, 2015, IJCAI 2013) and the first two tutorials on smart cities, at AAAI 2015 and IJCAI 2016. Prior to joining IBM, Freddy Lecue was a Research Fellow (2008-2011) with the Centre for Service Research at The University of Manchester, UK. He was awarded the second prize for his Ph.D. thesis by the French Association for the Advancement of Artificial Intelligence in 2009, and received the Best Research Paper Award at the ACM/IEEE Web Intelligence conference in 2008.
Riccardo Guidotti is currently a post-doc researcher at the Department of Computer Science, University of Pisa, Italy, and a member of the Knowledge Discovery and Data Mining Laboratory (KDDLab), a joint research group with the Information Science and Technology Institute of the National Research Council in Pisa. Riccardo Guidotti was born in 1988 in Pitigliano (GR), Italy. He graduated cum laude in Computer Science at the University of Pisa (BS in 2010, MS in 2013) and received his PhD in Computer Science from the same institution with a thesis on Personal Data Analytics. He won the IBM fellowship program and was an intern at IBM Research Dublin, Ireland, in 2015. His research interests include personal data mining, clustering, explainable models, and the analysis of transactional data related to recipes and migration flows.
Pasquale Minervini is a Senior Research Fellow at University College London (UCL). He received a PhD in Computer Science from the University of Bari, Italy, with a thesis on relational learning. After his PhD, he worked as a postdoc researcher at the University of Bari and at the INSIGHT Centre for Data Analytics (INSIGHT), where he worked in a group composed of researchers and engineers from INSIGHT and Fujitsu Ireland Research and Innovation. Pasquale has published peer-reviewed papers in top-tier AI conferences, receiving two best paper awards; participated in the organisation of tutorials on Explainable AI and relational learning (three for AAAI, one for ECML, and others); and was a guest lecturer at UCL and at the Summer School on Statistical Relational Artificial Intelligence. He is the main inventor of a patent application assigned to Fujitsu Ltd, and he was recently awarded a seven-figure H2020 research grant involving applications of relational learning to cancer research. For more information about him, see http://www.neuralnoise.com
Fosca Giannotti is Director of Research at the Information Science and Technology Institute “A. Faedo” of the National Research Council, Pisa, Italy. She is a scientist in data mining, machine learning, and big data analytics. Fosca leads the Pisa KDD Lab (Knowledge Discovery and Data Mining Laboratory, http://kdd.isti.cnr.it), a joint research initiative of the University of Pisa and ISTI-CNR, founded in 1994 as one of the earliest research labs centered on data mining. Fosca’s research focuses on social mining from big data: human dynamics, social networks, diffusion of innovation, privacy-enhancing technology, and explainable AI. She has coordinated tens of research projects and industrial collaborations. Fosca is now the coordinator of SoBigData, the European research infrastructure on Big Data Analytics and Social Mining, an ecosystem of ten cutting-edge European research centres providing an open platform for interdisciplinary data science and data-driven innovation (http://www.sobigdata.eu). From 2012 to 2015 Fosca was general chair of the Steering Board of ECML-PKDD (European Conference on Machine Learning) and is currently a member of the steering committees of EuADS (European Association on Data Science) and of AIIS, the Italian Lab of Artificial Intelligence and Autonomous Systems.