Document Type

Thesis

Date of Degree Completion

Fall 2024

Degree Name

Master of Science (MS)

Department

Computational Science

Committee Chair

Dr. Boris Kovalerchuk

Second Committee Member

Dr. Szilard Vajda

Third Committee Member

Dr. Rasvan Andonie

Abstract

Subject matter experts (SMEs) are often reluctant to accept black-box Machine Learning (ML) models despite the strong performance many of these models achieve. A promising way to address this problem is to build trustable, qualitative, interpretable models for the task based on SME knowledge. Such qualitative models can serve as qualitative explainers of black-box models or as sanity checks for them. For instance, the expert model may place two cases in different classes while the black-box model predicts that they belong to the same class. In this thesis, qualitative models operate on ordinal attributes, which can be Boolean or k-valued attributes with a small k. Humans understand and reason with such attributes more easily than with continuous numeric attributes that take many more values, because they impose less cognitive load. Some ML tasks lack sufficient training data. Building qualitative models with an SME (“expert models”) is a way to solve these tasks solely or partially from the SME’s knowledge. The proposed Monotone Ordinal Expert Knowledge Acquisition (MOEKA) system builds such expert models through interview phases with the SME. Monotonicity is a key property of the system: it is tested and then used, together with new methods for selecting the order of questions, to shorten the interview process with the SME. In contrast with other ML explainers, which approximate black-box ML models, MOEKA directly approximates domain knowledge. MOEKA can also be used as a method of focused database searching. Several case studies on gout, diabetes, and housing demonstrate the efficiency of MOEKA.
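To give a rough sense of how monotonicity can shorten an expert interview, the sketch below (an illustration only, not MOEKA's actual algorithm or question-ordering method; the function names and the toy "hidden rule" expert are hypothetical) infers the class of a Boolean case from previously answered cases whenever a monotone relation allows it, asking the expert only about cases that cannot be inferred.

```python
from itertools import product

def dominates(a, b):
    """True if case a is >= case b on every ordinal attribute."""
    return all(x >= y for x, y in zip(a, b))

def acquire_monotone_model(n_attrs, ask_expert):
    """Illustrative interview loop (assumed, not MOEKA itself):
    ask_expert(case) -> 0 or 1 stands in for the SME's answer;
    monotonicity lets many answers be inferred instead of asked."""
    labels = {}
    for case in product([0, 1], repeat=n_attrs):  # all Boolean cases
        inferred = None
        for known, label in labels.items():
            if label == 1 and dominates(case, known):
                inferred = 1   # above a known positive case -> positive
                break
            if label == 0 and dominates(known, case):
                inferred = 0   # below a known negative case -> negative
                break
        labels[case] = inferred if inferred is not None else ask_expert(case)
    return labels

# Toy usage: the "expert" answers with a hidden monotone rule.
if __name__ == "__main__":
    asked = []
    def ask_expert(case):
        asked.append(case)
        return int(sum(case) >= 2)   # hypothetical monotone rule
    model = acquire_monotone_model(3, ask_expert)
    print(f"questions asked: {len(asked)} of {len(model)} cases")
```

With smarter question ordering (which the thesis addresses), far more answers can be inferred than in this naive enumeration.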
