Title

Visualizing Transformers for NLP: A Brief Survey

Document Type

Conference Proceeding

Department or Administrative Unit

Computer Science

Publication Date

9-7-2020

Abstract

The introduction of Transformer neural networks has changed the landscape of Natural Language Processing (NLP) over the last three years. While models derived from the Transformer have topped the leaderboards for a variety of tasks, the mechanisms through which they achieve this performance are not necessarily well understood. Our survey focuses on explaining Transformer architectures through visualization. Since visualization enables a degree of explainability, we examine the various facets of Transformers that can be explored through visual analytics. The field is still at a nascent stage and is expected to grow rapidly in the near future, as the results so far are already interesting and promising. Currently, some visualizations are tightly coupled to the models they were designed for, whereas others are model-agnostic. Visualizations built specifically for Transformer architectures enable additional features, such as exploration of individual neurons or attention maps, and therefore hold an advantage for this task. We conclude by proposing a set of requirements for future Transformer visualization frameworks.
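As an illustrative aside (not taken from the paper): the attention maps mentioned above are typically rendered as heat maps over token pairs. The minimal Python sketch below, assuming NumPy and Matplotlib and using randomly generated stand-in weights in place of a real model's attention, shows the kind of view such visualization frameworks provide; the sentence and all names are hypothetical examples.

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical example sentence; a real tool would take the tokens
# from a model's tokenizer rather than hard-coding them.
tokens = ["The", "cat", "sat", "on", "the", "mat"]

# Stand-in for one attention head's weights: each row is a softmax
# distribution over the input tokens (rows sum to 1). A real tool
# would extract these weights from a trained Transformer.
rng = np.random.default_rng(0)
attn = rng.random((len(tokens), len(tokens)))
attn = attn / attn.sum(axis=1, keepdims=True)

# Render the token-by-token weight matrix as a heat map.
fig, ax = plt.subplots()
ax.imshow(attn, cmap="viridis")
ax.set_xticks(range(len(tokens)))
ax.set_xticklabels(tokens, rotation=45)
ax.set_yticks(range(len(tokens)))
ax.set_yticklabels(tokens)
ax.set_xlabel("Attended-to token")
ax.set_ylabel("Query token")
ax.set_title("Attention map (single head)")
plt.tight_layout()
plt.show()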

Comments

This article was originally published in the proceedings of the 2020 24th International Conference Information Visualisation (IV). The full-text article is available from the publisher.

Due to copyright restrictions, this article is not available for free download from ScholarWorks @ CWU.

Journal

2020 24th International Conference Information Visualisation (IV)

Rights

Copyright © 2020, IEEE
