We investigate two problems: computing the union join graph and computing the subset graph for acyclic hypergraphs and their subclasses. In the *union join graph* *G* of an acyclic hypergraph *H*, each vertex of *G* represents a hyperedge of *H*, and two vertices of *G* are adjacent if there exists a join tree *T* for *H* such that the corresponding hyperedges are adjacent in *T*. The *subset graph* of a hypergraph *H* is a directed graph where each vertex represents a hyperedge of *H* and there is a directed edge from a vertex *u* to a vertex *v* if the hyperedge corresponding to *u* is a subset of the hyperedge corresponding to *v*.
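As an illustrative sketch only (not the algorithms of the paper, which exploit acyclicity), the subset graph of a small hypergraph can be computed by naive pairwise comparison; the helper name `subset_graph` below is hypothetical:

```python
from itertools import combinations

def subset_graph(hyperedges):
    """Naive subset-graph computation: emit a directed edge (u, v)
    whenever hyperedge u is contained in hyperedge v.
    Runs in O(m^2 * max|E|) time, far from the bounds in the paper."""
    sets = [frozenset(e) for e in hyperedges]
    edges = []
    for u, v in combinations(range(len(sets)), 2):
        if sets[u] <= sets[v]:   # u's hyperedge is a subset of v's
            edges.append((u, v))
        if sets[v] <= sets[u]:   # and/or the other way around
            edges.append((v, u))
    return edges
```

For example, on the hyperedges {1,2}, {1,2,3}, {3}, the sketch reports the edges (0, 1) and (2, 1), since both {1,2} and {3} are contained in {1,2,3}.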

For a given hypergraph H = (V, E), let n = |V|, m = |E|, and N = ∑_{e ∈ E} |e|. We show that, if the Strong Exponential Time Hypothesis is true, neither problem can be solved in O(N^{2−ε}) time for α-acyclic hypergraphs and any constant ε > 0, even if the created graph is sparse. Additionally, we present algorithms that solve both problems in O(N²/log N + |G|) time for α-acyclic hypergraphs, in O(N log(n+m) + |G|) time for β-acyclic hypergraphs, and in O(N + |G|) time for γ-acyclic hypergraphs as well as for interval hypergraphs, where |*G*| is the size of the computed graph.

This book contains 21 chapters that have been grouped into five parts: (1) visual problem solving and decision making, (2) visual and heterogeneous reasoning, (3) visual correlation, (4) visual and spatial data mining, and (5) visual and spatial problem solving in geospatial domains. Each chapter ends with a summary and exercises.

The book is intended for professionals and graduate students in computer science, applied mathematics, imaging science, and Geospatial Information Systems (GIS). In addition to being a state-of-the-art research compilation, this book can be used as a text for advanced courses on subjects such as modeling, computer graphics, visualization, image processing, data mining, GIS, and algorithm analysis.

This chapter surveys and analyses visual methods for the explainability of Machine Learning (ML) approaches, with a focus on moving from the quasi-explanations that dominate in ML to actual domain-specific explanations supported by granular visuals. The importance of visual and granular methods for increasing the interpretability and validity of ML models has grown in recent years. Visuals appeal to human perception in a way that other methods do not. ML interpretation is fundamentally a human activity, not a machine activity; thus, visual methods are more readily interpretable. Visual granularity is a natural route to efficient ML explanation. Understanding complex causal reasoning can be beyond human abilities without “downgrading” it to human perceptual and cognitive limits. The visual exploration of multidimensional data at different levels of granularity for knowledge discovery is a long-standing research focus. While multiple efficient methods for the visual representation of high-dimensional data exist, the loss of interpretable information, occlusion, and clutter continue to be a challenge, which leads to quasi-explanations. This chapter starts with the motivation and the definitions of different forms of explainability and how these concepts and information granularity can be integrated into ML. The chapter focuses on a clear distinction between quasi-explanations and actual domain-specific explanations, as well as between a potentially explainable and an actually explained ML model, distinctions that are critically important for further progress in the ML explainability domain. We discuss the foundations of interpretability, give an overview of visual interpretability, and present several types of methods for visualizing ML models. Next, we present methods for the visual discovery of ML models, with a focus on interpretable models, based on the recently introduced concept of General Line Coordinates (GLC).
This family of methods takes the critical step of creating visual explanations that are not merely quasi-explanations but genuinely domain-specific, while the methods themselves remain domain-agnostic. The chapter includes results on theoretical limits to preserving n-D distances in lower dimensions, based on the Johnson-Lindenstrauss lemma, on point-to-point and point-to-graph GLC approaches, and on real-world case studies. The chapter also covers traditional visual methods for understanding multiple ML models, including deep learning and time-series models. We illustrate that many of these methods are quasi-explanations and need further enhancement to become actual domain-specific explanations. The chapter concludes by outlining open problems and current research frontiers.
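The Johnson-Lindenstrauss lemma mentioned above states that n points can be embedded into O(log n / ε²) dimensions while preserving all pairwise distances up to a factor of (1 ± ε). A minimal sketch of the standard random-projection construction, assuming a hypothetical helper name `jl_project` (not a method from the chapter):

```python
import numpy as np

def jl_project(X, k, seed=None):
    """Project the rows of X (n points in d dimensions) into k dimensions
    with a random Gaussian matrix scaled by 1/sqrt(k). By the
    Johnson-Lindenstrauss lemma, pairwise distances are preserved up to
    (1 +/- eps) with high probability when k = O(log n / eps^2)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    R = rng.standard_normal((d, k)) / np.sqrt(k)  # random projection matrix
    return X @ R
```

With, say, 8 points in 1000 dimensions projected to k = 512, the ratio of projected to original pairwise distances typically stays within a few percent of 1, illustrating the distance-preservation guarantee that bounds what any lower-dimensional visual representation can achieve.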
