Document Type

Thesis

Date of Degree Completion

Fall 2024

Degree Name

Master of Science (MS)

Department

Computational Science

Committee Chair

Dr. Boris Kovalerchuk

Second Committee Member

Dr. Razvan Andonie

Third Committee Member

Dr. Szilard Vajda

Abstract

This research advances interpretable machine learning (ML) by introducing hyperblocks (HBs) as a structured, rule-based approach for creating transparent and accurate models using meaningful numeric attributes that are directly interpretable to end users. Key techniques, including Parallel Hyperblock Creation, Interactive Hyperblock Creation, Level n Hyperblock Creation, and k-Nearest Neighbor Hyperblock, provide a framework that ensures domain experts can meaningfully engage with the model’s decision-making process through lossless visualizations in General Line Coordinates (GLC). Case studies with the Wisconsin Breast Cancer and MNIST datasets demonstrated the effectiveness of HBs in handling high-risk and complex classification tasks, offering a combination of accuracy and interpretability that traditional models struggle to achieve. In the Wisconsin Breast Cancer study, HBs achieved accuracy comparable to standard ML methods while providing an interpretable framework for cancer diagnosis, where model trust is critical. On MNIST, HBs showed their ability to scale to larger datasets while maintaining a high level of interpretability. Finally, the Visual Knowledge Discovery (VKD) process, central to this approach, allows experts to adjust model parameters in real time, promoting human-centered insight and collaboration. Overall, this work presents HBs and VKD as powerful tools for interpretable, high-stakes ML applications, supporting transparency, accuracy, and domain relevance.
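To make the hyperblock idea in the abstract concrete, the sketch below shows a minimal interval-rule interpretation of an HB: an axis-aligned box of attribute intervals labeled with a class, where a case is classified by the first HB that contains it. This is an illustrative assumption, not the thesis's implementation; the class names, attribute names, and the fall-through to None (where a k-NN-style rule could take over) are hypothetical.

```python
from dataclasses import dataclass
from typing import Dict, Tuple, List, Optional

@dataclass
class Hyperblock:
    """An axis-aligned box of attribute intervals labeled with a class (illustrative)."""
    bounds: Dict[str, Tuple[float, float]]  # attribute name -> (lower, upper) interval
    label: str

    def contains(self, case: Dict[str, float]) -> bool:
        # A case falls inside the hyperblock if every attribute value
        # lies within the corresponding interval.
        return all(lo <= case[attr] <= hi for attr, (lo, hi) in self.bounds.items())


def classify(case: Dict[str, float], hyperblocks: List[Hyperblock]) -> Optional[str]:
    # Return the label of the first hyperblock that covers the case;
    # return None if no hyperblock covers it (a secondary rule such as
    # a k-Nearest Neighbor step could handle these uncovered cases).
    for hb in hyperblocks:
        if hb.contains(case):
            return hb.label
    return None


# Toy usage with two hypothetical numeric attributes from a tabular dataset.
hb = Hyperblock(bounds={"uniformity": (1.0, 4.0), "mitoses": (1.0, 2.0)}, label="benign")
print(classify({"uniformity": 2.5, "mitoses": 1.0}, [hb]))  # -> "benign"
```

Because each rule is just a set of attribute intervals, it can be drawn losslessly in General Line Coordinates and inspected or adjusted by a domain expert, which is the interpretability property the abstract emphasizes.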
