Title

Using Spoken Text to Aid Debugging: An Empirical Study

Document Type

Article

Department or Administrative Unit

Computer Science

Publication Date

5-17-2009

Abstract

Comprehending and debugging computer programs are inherently difficult tasks. The prevailing approach to building program execution and debugging environments relies exclusively on visual stimuli. We present an alternative: the Sonified Omniscient Debugger (SOD), a program execution and debugging environment designed to output carefully chosen spoken auditory cues that supplement visual stimuli. Although SOD was originally designed for blind programmers, earlier work suggested that it may benefit sighted programmers as well. We evaluate the SOD environment in a formal debugging experiment comparing 1) a visual debugger, 2) an auditory debugger, and 3) a multimedia debugger, which combines the visual and auditory stimuli. Our results indicate that while auditory debuggers on their own are significantly less effective for sighted users than visual and multimedia debuggers, multimedia debuggers might benefit sighted programmers under certain circumstances. Specifically, we found that although multimedia debuggers are not immediately usable, once programmers have some practice, their performance improves on certain metrics.
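
To illustrate the idea of supplementing a visual debugger with spoken text, the following is a minimal sketch, not the authors' SOD implementation, of how a trace hook could generate spoken-style cues while a program executes. The narrate function and the example sum_to program are hypothetical; the cues are printed here, and a text-to-speech engine could voice them instead.

import sys

def narrate(frame, event, arg):
    """Trace hook that produces a short spoken-style cue for each executed line."""
    if event == "line":
        func = frame.f_code.co_name
        # Summarize local variables as a brief phrase, roughly as an
        # auditory debugger might speak them alongside the visual display.
        locals_phrase = ", ".join(f"{k} is {v}" for k, v in frame.f_locals.items())
        print(f"In {func}, line {frame.f_lineno}: {locals_phrase or 'no locals yet'}")
    return narrate

def sum_to(n):
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

sys.settrace(narrate)   # enable the narrating trace for subsequent calls
result = sum_to(3)
sys.settrace(None)      # disable tracing
print("result is", result)

Running this traces each line of sum_to and emits one cue per step; the choice of which cues to speak, and when, is the kind of design question the study above evaluates empirically.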

Comments

This article was originally published in the proceedings of the 2009 IEEE 17th International Conference on Program Comprehension. The full-text article is available from the publisher.

Due to copyright restrictions, this article is not available for free download from ScholarWorks @ CWU.

Journal

2009 IEEE 17th International Conference on Program Comprehension

Rights

Copyright © 2009, IEEE
