Title

Context-Sensitive Visualization of Deep Learning Natural Language Processing Models

Document Type

Article

Department or Administrative Unit

Computer Science

Publication Date

7-5-2021

Abstract

The introduction of Transformer neural networks has changed the landscape of Natural Language Processing (NLP) in recent years. So far, no visualization system has managed to examine all facets of Transformers; this motivated the current work. We propose a novel context-sensitive visualization method for NLP Transformers that leverages existing NLP tools to find the groups of tokens (words) with the greatest effect on the output, thus preserving some of the context from the original text. The original contribution is a context-aware visualization of the word combinations most influential to a classifier's decision. This context-sensitive approach yields heatmaps that capture more of the information relevant to the classification and more accurately highlight the most important words in the input text. The proposed method uses a dependency parser, a BERT model, and the leave-n-out technique. Experimental results suggest that the improved visualizations increase understanding of the model and help in designing models that perform closer to human-level understanding on these problems.
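As a rough illustration of the leave-n-out step the abstract describes, the following Python sketch drops each dependency subtree from the input and measures the resulting change in a BERT classifier's predicted probability. It is a minimal sketch under stated assumptions: spaCy for dependency parsing, a generic Hugging Face checkpoint, and helper names such as score_groups are illustrative, not the authors' actual implementation.

    # Leave-n-out sketch: score dependency-parse token groups by how much
    # removing them degrades a BERT classifier's prediction.
    # Assumptions (not from the paper): spaCy's en_core_web_sm parser,
    # the bert-base-uncased checkpoint, and the helper names below.
    import spacy
    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    nlp = spacy.load("en_core_web_sm")
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
    model.eval()

    def class_prob(text: str, label: int) -> float:
        """Probability the classifier assigns to `label` for `text`."""
        inputs = tokenizer(text, return_tensors="pt", truncation=True)
        with torch.no_grad():
            logits = model(**inputs).logits
        return torch.softmax(logits, dim=-1)[0, label].item()

    def score_groups(text: str, label: int):
        """Drop each dependency subtree (a contextual group of words) and
        measure the drop in the predicted class probability."""
        doc = nlp(text)
        base = class_prob(text, label)
        scores = []
        for token in doc:
            group = {t.i for t in token.subtree}
            reduced = " ".join(t.text for t in doc if t.i not in group)
            # Importance = how much the prediction degrades without the group.
            words = tuple(doc[i].text for i in sorted(group))
            scores.append((words, base - class_prob(reduced, label)))
        return sorted(scores, key=lambda s: -s[1])

    # Example: which word groups most support the predicted class?
    for words, drop in score_groups("The plot is clever and the acting is superb.", 1)[:3]:
        print(f"{drop:+.3f}  {words}")

Scoring whole subtrees rather than single tokens is what makes the heatmap context-sensitive: a word's importance is assessed together with its syntactic dependents instead of in isolation.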

Comments

This article was originally published in the 2021 25th International Conference Information Visualisation (IV). The full-text article is available from the publisher.

Due to copyright restrictions, this article is not available for free download from ScholarWorks @ CWU.

Journal

2021 25th International Conference Information Visualisation (IV)

Rights

Copyright © 2021, IEEE
