Survey of Explainable Machine Learning with Visual and Granular Methods Beyond Quasi-Explanations
Document Type
Article
Department or Administrative Unit
Computer Science
Publication Date
3-27-2021
Abstract
This chapter surveys and analyzes visual methods for the explainability of Machine Learning (ML) approaches, with a focus on moving from the quasi-explanations that dominate ML to actual domain-specific explanations supported by granular visuals. The importance of visual and granular methods for increasing the interpretability and validity of ML models has grown in recent years. Visuals appeal to human perception in a way that other methods do not. ML interpretation is fundamentally a human activity, not a machine activity; thus, visual methods are more readily interpretable, and visual granularity is a natural route to efficient ML explanation. Understanding complex causal reasoning can be beyond human abilities without “downgrading” it to human perceptual and cognitive limits. The visual exploration of multidimensional data at different levels of granularity for knowledge discovery is a long-standing research focus. While multiple efficient methods for the visual representation of high-dimensional data exist, the loss of interpretable information, occlusion, and clutter remain challenges that lead to quasi-explanations. This chapter starts with the motivation for and definitions of different forms of explainability, and how these concepts and information granularity can be integrated in ML. The chapter draws a clear distinction between quasi-explanations and actual domain-specific explanations, as well as between a potentially explainable ML model and an actually explained one, distinctions that are critically important for further progress in the ML explainability domain. We discuss the foundations of interpretability, give an overview of visual interpretability, and present several types of methods for visualizing ML models. Next, we present methods for the visual discovery of ML models, with a focus on interpretable models based on the recently introduced concept of General Line Coordinates (GLC). This family of methods takes the critical step of creating visual explanations that are not merely quasi-explanations but actual domain-specific explanations, even though the methods themselves are domain-agnostic. The chapter includes results on the theoretical limits of preserving n-D distances in lower dimensions, based on the Johnson-Lindenstrauss lemma, point-to-point and point-to-graph GLC approaches, and real-world case studies. The chapter also covers traditional visual methods for understanding multiple ML models, including deep learning and time series models. We illustrate that many of these methods are quasi-explanations that need further enhancement to become actual domain-specific explanations. The chapter concludes by outlining open problems and current research frontiers.
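The Johnson-Lindenstrauss lemma referenced in the abstract bounds how well pairwise distances among N points in n-D space can survive projection to a lower dimension k. The following minimal Python sketch (not from the chapter; the data set, parameter values, and the pdist2 helper are illustrative assumptions) checks the bound empirically with a Gaussian random projection:

```python
import numpy as np

# Illustrative sketch (not from the chapter): an empirical check of the
# Johnson-Lindenstrauss lemma, which guarantees that all pairwise squared
# distances among N points are preserved within a factor (1 +/- eps)
# under a suitable random projection to k dimensions.

rng = np.random.default_rng(0)

N, n = 200, 1000        # number of points, original dimensionality (assumed values)
eps = 0.3               # allowed relative distortion

# JL bound on the target dimension: k >= 4 ln(N) / (eps^2/2 - eps^3/3)
k = int(np.ceil(4 * np.log(N) / (eps**2 / 2 - eps**3 / 3)))

X = rng.normal(size=(N, n))                 # random n-D data set
P = rng.normal(size=(n, k)) / np.sqrt(k)    # Gaussian random projection
Y = X @ P                                   # projected k-D points

def pdist2(A):
    # All pairwise squared Euclidean distances of the rows of A.
    sq = np.sum(A**2, axis=1)
    return sq[:, None] + sq[None, :] - 2 * A @ A.T

iu = np.triu_indices(N, 1)                  # distinct pairs only
ratio = pdist2(Y)[iu] / pdist2(X)[iu]       # projected / original squared distances

print(f"target dimension k = {k}")
print(f"distance ratios in [{ratio.min():.3f}, {ratio.max():.3f}] "
      f"(JL bound: [{1 - eps:.2f}, {1 + eps:.2f}])")
```

For these assumed values (N = 200, eps = 0.3), the bound already requires k = 589 target dimensions, which illustrates the kind of theoretical limit on distance-preserving dimensionality reduction that the chapter analyzes.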
Recommended Citation
Kovalerchuk B., Ahmad M.A., Teredesai A. (2021). Survey of Explainable Machine Learning with Visual and Granular Methods Beyond Quasi-Explanations. In Pedrycz W., Chen S.M. (Eds.), Interpretable Artificial Intelligence: A Perspective of Granular Computing. Studies in Computational Intelligence, vol 937 (pp. 217-267). Springer, Cham. https://doi.org/10.1007/978-3-030-64949-4_8
Rights
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
Comments
This book chapter was originally published in Interpretable Artificial Intelligence: A Perspective of Granular Computing. The full text is available from the publisher at https://doi.org/10.1007/978-3-030-64949-4_8.