Chapter 3 Sonification and augmented cognition: A brief overview

Nicholaus P. Brosowsky, The Graduate Center of the City University of New York

In the most general sense, sonification refers to the transformation of non-sonic data into audible (non-speech) sound to represent or convey information to a listener. Thus, sonification is a rather general, all-encompassing umbrella term that might include everything from fire alarms, stethoscopes, and Geiger counters to Stravinsky’s The Rite of Spring and John Cage’s 4’33”. Though there have been recent attempts to more firmly define sonification as a discipline (e.g., Hermann, Hunt, & Neuhoff, 2011; Nees & Walker, 2009), for better or worse, sonification straddles the boundaries of science, art, design, and application. To better situate this review, I will focus on the ways in which sonification could be used as a tool to enhance or augment cognition. Specifically, I focus on three areas: situational awareness, perception and action in motor skills, and data analysis. But before doing so, I will provide a brief outline of the types of sonification previous work has identified and the general rationale for adopting sonification methods at all.

3.1 Types of sonification

One taxonomy of sonification (for others, see Fitch & Kramer, 1994; Nees & Walker, 2009) distinguishes between four functions of sonification: alerting, status indication, data exploration, and art and entertainment (for more in-depth reviews, see Hermann et al., 2011; Walker & Kramer, 2004).

Alerts refer to sounds that notify the listener that an event has occurred, or is about to occur, and that something in the environment requires their attention. These range from rather simple, low-information alerts, like a doorbell indicating someone is at the door, to more complex alerts that attempt to convey more information, like warning systems in a helicopter cockpit indicating a range of telemetry and avionics data (Edworthy, Hellier, Aldrich, & Loxley, 2004) or forward collision systems in modern cars (Bazilinskyy, Petermeijer, Petrovych, Dodou, & De Winter, 2015; Jamson, Lai, & Carsten, 2008).

Closely related to the alerting function is the status or progress indicating function. In this case, a listener monitors a continuous sound for small changes that indicate a change in status or a progress update. For example, auditory displays have been used to monitor changes in blood pressure (T. Watson & Lip, 2006), internet network traffic (Debashi & Vickers, 2018; Vickers, Laing, & Fairfax, 2017), and telephone hold time (Garcia, Peres, Ritchey, Kortum, & Stallmann, 2011; Kortum, Peres, Knott, & Bushey, 2005).

Data exploration is likely the function most closely associated with the term sonification. Simply put, sound is used to represent data in a way that enables the listener to recognize or search for patterns. This includes auditory graphs, created to summarize and communicate a set of data with known patterns (e.g., Flowers, 2005; Stockman, Nickerson, & Hind, 2005; Walker & Mauney, 2010), as well as sonifications used to explore more complex data sets and facilitate interpretation and exploratory analyses (e.g., Grond & Hermann, 2014; Stanton, 2015). Data exploration and pattern recognition will be discussed in greater detail below; for now, note that data sonification has been used successfully across a range of scientific disciplines, from astronomy (Diaz Merced, 2013; W. L. Diaz-Merced et al., 2011) to the social sciences (Dayé & de Campo, 2006). To give just one example, Pereverzev, Loshak, Backhaus, Davis, & Packard (1997) used sonification methods to discover quantum oscillations between two weakly coupled reservoirs of superfluid helium-3, confirming previous theoretical predictions.
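
To make the idea of an auditory graph concrete, the sketch below maps a simple data series onto the pitch of successive tones, so that a rising trend is heard as a rising melody. This is a minimal illustration in Python (assuming NumPy is available); the pitch range, note duration, and example data are arbitrary choices for demonstration, not a mapping taken from the studies cited above.

```python
# Minimal auditory-graph sketch: each data value becomes a short tone whose
# pitch scales with the value's magnitude. All parameter choices are
# illustrative assumptions.
import numpy as np
import wave

SAMPLE_RATE = 44100

def sonify_series(values, low_hz=220.0, high_hz=880.0, note_s=0.25):
    """Map each data value to a tone whose pitch tracks its magnitude."""
    values = np.asarray(values, dtype=float)
    norm = (values - values.min()) / (values.max() - values.min())  # rescale to [0, 1]
    freqs = low_hz + norm * (high_hz - low_hz)                      # then to a pitch range
    t = np.linspace(0, note_s, int(SAMPLE_RATE * note_s), endpoint=False)
    return np.concatenate([np.sin(2 * np.pi * f * t) for f in freqs])

def write_wav(path, signal):
    """Write a mono 16-bit WAV file using only the standard library."""
    pcm = (signal * 32767).astype(np.int16)
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(SAMPLE_RATE)
        w.writeframes(pcm.tobytes())

# Example: an upward trend becomes an audibly rising pitch contour.
write_wav("auditory_graph.wav", sonify_series([1, 2, 3, 5, 8, 13, 21]))
```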

Finally, sonification can be used for entertainment, art, sports, and leisure. That is, sonification can be used for artistic expression and/or recreation. This final category includes the creation of audio-only versions of games (i.e., sonified games) like Tower of Hanoi (Winberg & Hellstrom, 2001) and Tic-Tac-Toe (Targett & Fernstrom, 2003), as well as the use of sonified feedback to improve performance in sports like rowing (Dubus & Bresin, 2015) and speed skating (Boyd & Godbout, 2010). Perhaps unsurprisingly, musical composition is another popular use of sonification methods. Here, often large data sets (e.g., weather changes, shark movements, seismic data) are mapped to musical representations to create works of art (e.g., Ballora, 2014; Parkinson & Tanaka, 2013; Quinn, 2001, 2012). In one demonstration, for example, a century of weather data was transformed into compositions for cello and string quartets to describe and communicate climate change (George, Crawford, Reubold, & Giorgi, 2017). In another, DNA sequences were used to compose music in an effort to summarize complex microbial ecology data (“Microbial Bebop”; Larsen, 2016). The line between artistic expression and data display is obviously blurred here. However, in these cases, the primary goal is to create an aesthetically pleasing work of art; communicating an interpretation of the data, if considered at all, is secondary.

3.2 Why sonify non-sonic information?

Since visual displays have become the dominant form of communicating data, one might wonder why we would consider auditory display and sonification at all. This issue has been discussed fairly extensively in various contexts (e.g., Hermann et al., 2011; Kramer, 2000; Nees & Walker, 2009; Sanderson, 2006). Briefly, however, the auditory system excels, and perhaps outperforms the visual system, in a number of important ways that are relevant to auditory display and sonification.

For one, the auditory system excels at detecting rhythmic and temporal changes. For example, we can perceptually separate two brief sounds, like finger snaps or metronome ticks, with as little as five milliseconds separating them, far better than the 35-40 milliseconds required by the visual system (Ashmead, Leroy, & Odom, 1990; Gfeller, Woodworth, Robin, Witt, & Knutson, 1997; Tervaniemi & Brattico, 2004). Therefore, more information can be displayed in audition, compressed at a higher rate, while still maintaining discriminability. Similarly, the auditory system is highly sensitive to temporal changes and pattern deviations (Escera, Alho, Winkler, & Näätänen, 1998; Näätänen, Paavilainen, Rinne, & Alho, 2007). As a result, auditory display may be well-suited to data sets that contain complex patterns and temporal changes.

More practically speaking, audition is omnidirectional, not requiring the listener to be oriented towards the display. This is especially important given that most primary tasks in our work environments are visual, restricting our ability to orient to other displays. Adding more visual information may therefore be inappropriate because the visual system might already be occupied (Fitch & Kramer, 1994; Wickens & Liu, 1988), or additional visual displays may overtax an already overburdened visual system (M. L. Brown, Newsome, & Glinert, 1989). Additionally, we are able to time-share multiple tasks more efficiently when they are presented in different modalities (Driver, 2001; Driver & Spence, 1998; Wickens, 2002; Wickens, Parasuraman, & Davies, 1984). Therefore, sonification presents an opportunity to present additional information and augment task performance without interfering with, or overloading, the visual system.

3.3 Using sonification to augment cognition

3.3.1 Perception, attention, and situational awareness

One way in which sonification could be used to enhance or augment cognition is by improving situational awareness. Here I refer to situational awareness broadly, as maintaining conscious knowledge of the immediate environment and the events happening within it. Our ability to maintain situational awareness, while obviously important for many tasks, is limited not only by what information is available in the environment, but also by our ability to process it (e.g., capacity limitations in working memory, attention, perception, etc.). Work in this area has demonstrated some success in improving situational awareness and task performance by using sonification to support attentional and perceptual processes.

Attention is perhaps the most obvious way sonification could be used to improve situational awareness, and the most easily demonstrated. Maintaining situational awareness in complex environments requires that we constantly monitor multiple streams of information. One obvious way to facilitate situational awareness is to offload the monitoring task using auditory alerts or alarms (Hermann et al., 2011; Nees & Walker, 2009). The ubiquity of auditory alarms, from phone alerts to emergency vehicle sirens, makes them easy to overlook. However, they provide an easy way to offload what would otherwise be a cognitively demanding task (i.e., vigilance or prospective memory), allowing the listener to engage in other tasks. The use of complex auditory alarms has proven useful in a range of settings and tasks, including medical and patient monitoring (Cabrera, Ferguson, & Laing, 2005), air-traffic control (Cabrera et al., 2005), and piloting aircraft (Edworthy et al., 2004).

Situational awareness in complex environments can be difficult to maintain because of the overwhelming amount of information and our limited ability to divide attention. Another way that sonification can aid situational awareness is by transforming multiple streams of information into a more useful, easier-to-manage format for real-time monitoring. Two fields in particular have demonstrated the usefulness of sonification tools for facilitating situational awareness by overcoming limitations in divided attention: computer-network traffic monitoring and anesthesiology.

Computer network administrators must monitor the flow of traffic in real time to identify anomalous events, such as drops in traffic that may reflect hardware failures, or sudden increases in certain types of traffic that could reflect network intrusions (Axon, Alahmadi, Nurse, Goldsmith, & Creese, 2018). Given the large amount of data the network receives every second, the data need to be aggregated in a way that allows for real-time monitoring. Sonification tools have proven useful for this purpose: listeners can detect network intrusions and anomalous changes in network activity using a variety of sonification methods (e.g., Ballora, Giacobe, & Hall, 2011; Debashi & Vickers, 2018; Vickers, Laing, Debashi, & Fairfax, 2014; Vickers et al., 2017). For example, Qi, Martin, Kapralos, Green, & García-Ruiz (2007) mapped various network traffic data to piano sounds, allowing listeners to detect different types of network intrusions, and Gilfix and Couch (2000) mapped network traffic to naturalistic sounds (e.g., chirping, heartbeats), allowing listeners to detect anomalies in network traffic.
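
As a rough illustration of how such a mapping can work, the Python sketch below assigns each protocol a fixed pitch, lets loudness follow the per-second packet count, and flags counts that deviate sharply from a recent baseline. The protocol names, pitch assignments, and anomaly rule are illustrative assumptions, not the actual mappings used by Qi et al. or Gilfix and Couch.

```python
# Hedged sketch of traffic-to-sound mapping for anomaly detection.
from collections import deque

BASE_PITCH = {"http": 440.0, "dns": 550.0, "ssh": 660.0}  # one pitch per protocol (assumed)

def sonification_events(per_second_counts, window=30, threshold=3.0):
    """Turn per-second packet counts into (time, protocol, pitch, loudness, alert) events.

    per_second_counts: list of dicts, e.g. [{"http": 120, "dns": 8}, ...].
    Loudness follows the packet count; an alert fires when a count deviates from
    the recent mean by more than `threshold` times its mean absolute deviation,
    which a listener would hear as a sudden, flagged change in the soundscape.
    """
    history = {proto: deque(maxlen=window) for proto in BASE_PITCH}
    events = []
    for second, counts in enumerate(per_second_counts):
        for proto, pitch in BASE_PITCH.items():
            count = counts.get(proto, 0)
            past = history[proto]
            mean = sum(past) / len(past) if past else count
            mad = sum(abs(c - mean) for c in past) / len(past) if past else 1.0
            alert = len(past) == window and abs(count - mean) > threshold * max(mad, 1.0)
            loudness = min(1.0, count / 1000.0)   # clip counts into a usable loudness range
            events.append((second, proto, pitch, loudness, alert))
            past.append(count)
    return events

# Example: a quiet network with one sudden burst of DNS traffic in the final second.
traffic = [{"http": 100, "dns": 10, "ssh": 2}] * 60 + [{"http": 100, "dns": 900, "ssh": 2}]
print([e for e in sonification_events(traffic) if e[4]])  # only the flagged event prints
```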

Anesthesiologists face a similar problem. They need to monitor multiple streams of information about the patient in real time (e.g., heart rate, central venous pressure, central artery pressure, etc.), often while time-sharing between other tasks. Work in this area tends to show that anesthesiologists and non-anesthesiologists can detect changes using auditory displays as well as they can using visual displays, but that they time-share between tasks better when using an auditory display (Fitch & Kramer, 1994; Loeb & Fitch, 2002; Paterson, Sanderson, Paterson, & Loeb, 2017; Seagull, Wickens, & Loeb, 2001; M. Watson & Sanderson, 2004).

Sonification can also improve situational awareness by augmenting perception. That is, sonification methods can be used to enhance the perceptual representation of our environment by providing extrasensory information. Many studies, for example, have focused on supplementing visual information for the blind using sonification. To aid in navigation, there has been success sonifying depth information (Brock & Kristensson, 2013) and the location of objects (Bazilinskyy et al., 2016), and there has even been a demonstration of echolocation (Kish, 2009). Others have shown success sonifying more complex visual information like object identity (Nagarajan, Yaacob, & Sainarayanan, 2003) and line graphs (L. M. Brown & Brewster, 2003).

There are also examples where information beyond the reach of our senses is sonified to enhance perception. Probably the most well-known, and most often cited, example is the Geiger counter. Developed in the early 1900s, and still used today, the Geiger counter transforms ionization events into audible clicks, allowing us to perceive radiation levels in the environment (Knoll, 2010). Another example, called the “Visor,” transposes color into sound to create artificial synesthesia (Foner, 1999). Because of how our visual system works, different sets of wavelengths can appear as the same color, provided their relative amplitudes are adjusted accordingly. Two objects can therefore appear to have the same color even though they have different spectra: we perceive the color, but not the shape of the spectrum. The Visor was designed to sonify the color spectra, enabling the user to discriminate colors based on the shapes of their spectra. For example, you could hear the difference between a painting and a copy of a painting even if they are visually indistinguishable, hear camouflaged objects, or, as the author suggests, extend the device to hear ultraviolet, infrared, or polarized light.
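
A minimal sketch of this idea, assuming a Python/NumPy environment: each wavelength band of a measured spectrum becomes one partial of a complex tone, with the partial’s loudness following the band’s power, so that two metameric spectra that look identical produce different timbres. The band-to-frequency mapping and the example spectra are illustrative assumptions, not Foner’s actual design.

```python
# Sketch of spectrum-to-timbre sonification in the spirit of the Visor.
import numpy as np

SAMPLE_RATE = 44100

def spectrum_to_tone(spectral_power, low_hz=200.0, high_hz=2000.0, dur_s=1.0):
    """Render a reflectance spectrum (a list of band powers) as one complex tone."""
    power = np.asarray(spectral_power, dtype=float)
    power = power / power.max()                       # normalize band powers
    freqs = np.linspace(low_hz, high_hz, len(power))  # one partial per wavelength band
    t = np.linspace(0, dur_s, int(SAMPLE_RATE * dur_s), endpoint=False)
    tone = sum(p * np.sin(2 * np.pi * f * t) for p, f in zip(power, freqs))
    return tone / np.abs(tone).max()                  # normalize for playback

# Two spectra that could be metamers of the same color yield audibly different timbres.
original = spectrum_to_tone([0.1, 0.8, 0.9, 0.2, 0.1])
copy     = spectrum_to_tone([0.1, 0.5, 0.9, 0.6, 0.1])
```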

3.3.2 Perception and action in motor skill learning

Another way in which sonification could augment cognition is by improving perception and action in motor skill learning. That is, sound can be used to provide real-time feedback about performance in a motor task, guiding a learner towards their goal or towards correct performance. Enhancing motor learning has been explored using auditory alarms, sonified movement feedback, and sonified error feedback (Dyer, Stapleton, & Rodger, 2017; Sigrist, Rauter, Riener, & Wolf, 2013).

Auditory alarms have proven useful for improving motor skill learning. They are the simplest form of sonification in that any movement considered an error triggers an alarm. They are easily interpreted by the learner, though they provide little information about how to correct performance. In rehabilitation, for example, auditory alarms have been used to inform patients about errors in movement (e.g., incorrect gait, unphysiological loading) and have shown success in helping the learner correct the behavior (Batavia, Gianutsos, Vaccaro, & Gold, 2001; Eriksson & Bresin, 2010; Petrofsky, 2001; Riskowski, Mikesky, Bahamonde, & Burr, 2009). Similarly, auditory alarms have facilitated motor training in gymnastics (Baudry, Leroy, Thouvarecq, & Chollet, 2006) and improved rifle movements for professional shooters (Underwood, 2009).

Sound has also been used to provide constant, real-time feedback about movement. This is considered ‘direct sonification’ because some body movement is directly mapped to sound to provide additional information and guide the learner toward correct performance. For example, your location in 3D space could be mapped to the amplitude and pitch of a continuous sound, helping you navigate through space. There is some evidence that continuous sonified feedback is beneficial in simple motor tasks, such as reaching (Oscari, Secoli, Avanzini, Rosati, & Reinkensmeyer, 2012; Schmitz & Bock, 2014). Unfortunately, however, there is little direct evidence that continuous auditory feedback is beneficial in complex motor tasks. There was some success using sonified movement feedback in swimming tasks (Chollet, Madani, & Micallef, 1992; Chollet, Micallef, & Rabischong, 1988), although these effects might be better explained by increased motivation (Sigrist et al., 2013). Constant sonified movement feedback has also been incorporated into a number of different motor tasks like karate (Yamamoto, Shiraki, Takahata, Sakane, & Takebayashi, 2004), rowing (Schaffert, Mattes, & Effenberg, 2009), and skiing (Kirby, 2009), but there have not been corresponding motor learning studies to validate whether they are in fact beneficial for the learner (Sigrist et al., 2013).
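
To make the notion of direct sonification concrete, the sketch below (Python, standard library only) maps each sampled 3D position straight onto sound parameters: height drives pitch, horizontal distance drives loudness, and left-right position drives stereo pan. The axis-to-parameter assignments and ranges are illustrative assumptions, not a mapping from any of the studies cited above.

```python
# Hedged sketch of 'direct sonification' of movement: position in, sound parameters out.
import math

def position_to_sound(x, y, z, room=5.0):
    """Map a 3D position (in meters) to pitch, loudness, and stereo pan."""
    pitch_hz = 220.0 * 2 ** (2 * max(0.0, min(z, room)) / room)  # height spans ~two octaves
    distance = math.sqrt(x * x + y * y)          # horizontal distance from the listener
    loudness = 1.0 / (1.0 + distance)            # closer -> louder
    pan = max(-1.0, min(1.0, x / room))          # left/right position -> stereo pan
    return pitch_hz, loudness, pan

# Streaming these parameters to a synthesizer at each motion-capture frame
# gives the learner a continuous auditory 'trace' of where their limb is.
print(position_to_sound(1.0, 2.0, 0.5))
```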

There has been more success in using sonified movement error feedback to improve motor-skill learning (Oscari et al., 2012; Schmitz & Bock, 2014). Here, the sound does not directly correspond to your movements, but instead corresponds to your movements in relation to some criterion. For example, instead of directly mapping sound to your location in 3D space, you could map sound parameters to the relationship between your position and some target location (e.g., an increase in pitch as you move closer to the target). This method has shown some benefits across different complex motor tasks, such as speed skating (Boyd & Godbout, 2010) and rowing (Sigrist et al., 2011). Shooting scores during rifle training were also improved with error feedback; here, the pitch of a pure tone was mapped to the deviation of the gun barrel from the bullseye.
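
In contrast to the direct mapping above, an error-sonification sketch tracks the discrepancy between the current position and a target rather than the position itself. The mapping below follows the example in the text (pitch rises as the error shrinks); the exact ranges are illustrative assumptions.

```python
# Hedged sketch of error sonification: sound follows distance-to-target, not position.
import math

def error_to_pitch(position, target, max_error=2.0, low_hz=220.0, high_hz=880.0):
    """Map distance-to-target to pitch: the closer to the target, the higher the tone."""
    error = math.dist(position, target)                  # Euclidean distance (Python 3.8+)
    closeness = 1.0 - min(error, max_error) / max_error  # 0 far away, 1 at the target
    return low_hz + closeness * (high_hz - low_hz)

# As the learner's hand approaches the target, the feedback tone sweeps upward.
for pos in [(2.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.2, 0.0, 0.0), (0.0, 0.0, 0.0)]:
    print(round(error_to_pitch(pos, (0.0, 0.0, 0.0))))
```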

3.3.3 Data analysis and pattern recognition

One of the goals of data mining or data exploration is to detect hidden regularities in high-dimensional data. Our ability to detect these hidden regularities is of course dependent on how the data are represented and on our ability to recognize the patterns. As mentioned earlier, our auditory system excels at detecting very subtle patterns in sound (Grond & Hermann, 2014; Hermann et al., 2011). The use of auditory data representations in fact has a long history, well before there was a term for it (see Frysinger, 2005). The stethoscope, for example, still provides valuable information for a physician, and Pollack and Ficks (1954) mapped multi-dimensional data onto sound parameters to evaluate the information transmission properties of auditory stimuli (i.e., information “bits”).

Speeth (1961) provided one of the earliest studies showing the advantages of using auditory data representations over visual ones for pattern recognition. The goal was to use seismic measurements to discriminate between earthquakes and underground bomb blasts. The seismometer produces complex wave patterns, and categorizing them using visual displays of the data proved to be a very difficult task. However, once the seismic data were transformed into sound, subjects could accurately classify seismic activity on 90% of the trials. Additionally, because the data were time-compressed, an analyst could review up to 24 hours of data in 5 minutes.

Other early work has also shown the advantages of using auditory representations when dealing with complex multivariate data. Morrison and Lunney used sound to represent infrared spectral data (Baecker & Buxton, 1987), and Yeung (1980) used sound to represent experimental data from analytical chemistry, where subjects achieved 98% classification accuracy with little practice. Similarly, Mezrich, Frysinger, & Slivjanovski (1984) used both auditory and visual components to represent multivariate time-series economic data. They found that their dynamic multi-modal display generally outperformed static visual displays.

This early work is important in that it demonstrates that some data sets are well-suited for sonification and that sonification can confer pattern-recognition benefits. These are often dense, multivariate data sets that can take advantage of the temporal nature of auditory representations. More recent work has expanded the range of applications of sonification for data exploration, with some notable successes.

One area in which sonification has proven useful is the interpretation of brain data. For example, real-time monitoring and analysis of electroencephalographic (EEG) data has diverse application areas, including medical screening, brain-computer interfaces, and neurofeedback (Väljamäe et al., 2013). Recent work shows that sonification facilitates the interpretation and categorization of EEG data (Baier, Hermann, & Stephani, 2007; De Campo, Hoeldrich, Eckel, & Wallisch, 2007). For example, sonified EEG data has been used to detect epileptic seizures. One study transformed EEG data into music (snapping time-frequency data to notes in a musical scale) and found that subjects could identify seizures from the auditory data alone (Loui, Koplin-Green, Frick, & Massone, 2014; Parvizi, Gururangan, Razavi, & Chafe, 2018). Similarly, positron emission tomography (PET) data have been sonified to facilitate the diagnosis of Alzheimer’s disease (Gionfrida & Roginska, 2017). Beyond brain data, other biomedical signals like electrocardiographic (ECG) data have been sonified, facilitating the detection of cardiac pathologies and other anomalies (Avbelj, 2012; Kather et al., 2017).
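
The “snap to a musical scale” step can be illustrated with a short Python sketch: an EEG-derived frequency is rescaled into an audible range and then quantized to the nearest pitch of a chosen scale, so that a shift in the dominant rhythm is heard as a change in melody. The scale, register, and rescaling below are illustrative assumptions, not the parameters used by Loui et al. (2014).

```python
# Hedged sketch of scale quantization for EEG sonification.
import math

A4 = 440.0
MAJOR_SCALE_STEPS = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets within one octave

def snap_to_scale(freq_hz):
    """Quantize an arbitrary frequency to the nearest note of an A-major scale."""
    semitones = 12 * math.log2(freq_hz / A4)       # signed distance from A4 in semitones
    octave, within = divmod(round(semitones), 12)  # octave number and step within it
    nearest = min(MAJOR_SCALE_STEPS, key=lambda s: abs(s - within))
    return A4 * 2 ** ((octave * 12 + nearest) / 12)

def eeg_band_to_note(eeg_freq_hz, low=1.0, high=40.0):
    """Rescale an EEG frequency (roughly 1-40 Hz) into an audible range, then snap it."""
    audible = 220.0 * 2 ** (4 * (eeg_freq_hz - low) / (high - low))  # spread over ~4 octaves
    return snap_to_scale(audible)

# A 10 Hz alpha-band peak and a sudden shift toward 25 Hz land on different notes.
print(round(eeg_band_to_note(10.0)), round(eeg_band_to_note(25.0)))
```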

The range of fields that have begun to adopt sonification for data exploration, with promising results, is in fact staggeringly diverse: astronomical data (W. L. Diaz-Merced et al., 2011; W. L. L. Diaz-Merced, 2017; Lunn & Hunt, 2011), meteorological data (George et al., 2017), oceanography (Sturm, 2005), physics (Pereverzev et al., 1997), biomedicine (Avbelj, 2012; Larsen, 2016), the social sciences (Dayé & de Campo, 2006), and even space exploration. During the Voyager 2 mission, the spacecraft encountered a problem while passing through the rings of Saturn. The operators could not identify the problem through visual analysis of the data. However, once the data were played through a music synthesizer, a “machine-gunning” sound was heard, leading them to conclude that the problems were caused by high-speed collisions with electromagnetically charged micro-meteoroids (Barrass & Kramer, 1999). Sonification can alter our perception of the data, allowing insights and pattern recognition that were not possible using visual displays.

3.4 References

Ashmead, D. H., Leroy, D., & Odom, R. D. (1990). Perception of the relative distances of nearby sound sources. Perception & Psychophysics, 47(4), 326–331.

Avbelj, V. (2012). Auditory display of biomedical signals through a sonic representation: ECG and EEG sonification. In MIPRO, 2012 Proceedings of the 35th International Convention (pp. 474–475). IEEE.

Axon, L., Alahmadi, B., Nurse, J. R., Goldsmith, M., & Creese, S. (2018). Sonification in security operations centres: what do security practitioners think? Analyst, 7, 3.

Baecker, R. M., & Buxton, W. A. (1987). Human-computer interaction: a multidisciplinary approach. Morgan Kaufmann Publishers Inc.

Baier, G., Hermann, T., & Stephani, U. (2007). Event-based sonification of EEG rhythms in real time. Clinical Neurophysiology, 118(6), 1377–1386.

Ballora, M. (2014). Sonification, Science and Popular Music: In search of the ‘wow.’ Organised Sound, 19(01), 30–40. https://doi.org/10.1017/S1355771813000381

Ballora, M., Giacobe, N. A., & Hall, D. L. (2011). Songs of cyberspace: an update on sonifications of network traffic to support situational awareness. In Multisensor, Multisource Information Fusion: Architectures, Algorithms, and Applications 2011 (Vol. 8064, p. 80640P). International Society for Optics and Photonics.

Barrass, S., & Kramer, G. (1999). Using sonification. Multimedia Systems, 7(1), 23–31.

Batavia, M., Gianutsos, J. G., Vaccaro, A., & Gold, J. T. (2001). A do-it-yourself membrane-activated auditory feedback device for weight bearing and gait training: a case report. Archives of Physical Medicine and Rehabilitation, 82(4), 541–545.

Baudry, L., Leroy, D., Thouvarecq, R., & Chollet, D. (2006). Auditory concurrent feedback benefits on the circle performed in gymnastics. Journal of Sports Sciences, 24(2), 149–156.

Bazilinskyy, P., Petermeijer, S. M., Petrovych, V., Dodou, D., & De Winter, J. C. F. (2015). Take-over requests in highly automated driving: A crowdsourcing survey on auditory, vibrotactile, and visual displays. Unpublished.

Bazilinskyy, P., van Haarlem, W., Quraishi, H., Berssenbrugge, C., Binda, J., & de Winter, J. (2016). Sonifying the location of an object: A comparison of three methods. IFAC-PapersOnLine, 49(19), 531–536.

Boyd, J., & Godbout, A. (2010). Corrective Sonic Feedback for Speed Skating: A Case Study. Georgia Institute of Technology.

Brock, M., & Kristensson, P. O. (2013). Supporting blind navigation using depth sensing and sonification. In Proceedings of the 2013 ACM conference on Pervasive and ubiquitous computing adjunct publication (pp. 255–258). ACM.

Brown, L. M., & Brewster, S. A. (2003). Drawing by ear: Interpreting sonified line graphs. Georgia Institute of Technology.

Brown, M. L., Newsome, S. L., & Glinert, E. P. (1989). An experiment into the use of auditory cues to reduce visual workload. In ACM SIGCHI Bulletin (Vol. 20, pp. 339–346). ACM.

Cabrera, D., Ferguson, S., & Laing, G. (2005). Development of auditory alerts for air traffic control consoles. In Audio Engineering Society Convention 119. Audio Engineering Society.

Chollet, D., Madani, M., & Micallef, J. P. (1992). Effects of two types of biomechanical bio-feedback on crawl performance. Biomechanics and Medicine in Swimming, Swimming Science VI, 48, 53.

Chollet, D., Micallef, J. P., & Rabischong, P. (1988). Biomechanical signals for external biofeedback to improve swimming techniques. Swimming Science V. Champaign, IL: Human Kinetics Books, 389–396.

Dayé, C., & de Campo, A. (2006). Sounds sequential: sonification in the social sciences. Interdisciplinary Science Reviews, 31(4), 349–364. https://doi.org/10.1179/030801806X143286

De Campo, A., Hoeldrich, R., Eckel, G., & Wallisch, A. (2007). New sonification tools for EEG data screening and monitoring. Georgia Institute of Technology.

Debashi, M., & Vickers, P. (2018). Sonification of network traffic flow for monitoring and situational awareness. PLOS ONE, 13(4), e0195948. https://doi.org/10.1371/journal.pone.0195948

Diaz Merced, W. L. (2013). Sound for the exploration of space physics data (PhD). University of Glasgow. Retrieved from http://encore.lib.gla.ac.uk/iii/encore/record/C__Rb3090263

Diaz-Merced, W. L., Candey, R. M., Brickhouse, N., Schneps, M., Mannone, J. C., Brewster, S., & Kolenberg, K. (2011). Sonification of Astronomical Data. Proceedings of the International Astronomical Union, 7(S285), 133–136. https://doi.org/10.1017/S1743921312000440

Diaz-Merced, W. L. L. (2017). We too may find new planets. In AASTCS5 Radio Exploration of Planetary Habitability, Proceedings of the conference 7-12 May, 2017 in Palm Springs, CA. Published in Bulletin of the American Astronomical Society, Vol. 49, No. 3, id. 202.01 (Vol. 49).

Driver, J. (2001). A selective review of selective attention research from the past century. British Journal of Psychology, 92(1), 53–78.

Driver, J., & Spence, C. (1998). Crossmodal attention. Current Opinion in Neurobiology, 8(2), 245–253. https://doi.org/10.1016/S0959-4388(98)80147-5

Dubus, G., & Bresin, R. (2015). Exploration and evaluation of a system for interactive sonification of elite rowing. Sports Engineering, 18(1), 29–41. https://doi.org/10.1007/s12283-014-0164-0

Dyer, J. F., Stapleton, P., & Rodger, M. (2017). Mapping Sonification for Perception and Action in Motor Skill Learning. Frontiers in Neuroscience, 11. https://doi.org/10.3389/fnins.2017.00463

Edworthy, J., Hellier, E., Aldrich, K., & Loxley, S. (2004). Designing trend-monitoring sounds for helicopters: methodological issues and an application. Journal of Experimental Psychology: Applied, 10(4), 203.

Eriksson, M., & Bresin, R. (2010). Improving running mechanics by use of interactive sonification. Proceedings of ISon, 95–98.

Escera, C., Alho, K., Winkler, I., & Näätänen, R. (1998). Neural mechanisms of involuntary attention to acoustic novelty and change. Journal of Cognitive Neuroscience, 10(5), 590–604.

Fitch, W. T., & Kramer, G. (1994). Sonifying the body electric: Superiority of an auditory over a visual display in a complex, multivariate system. In Santa Fe Institute Studies in the Sciences of Complexity Proceedings (Vol. 18, pp. 307–307). Addison-Wesley Publishing Co.

Flowers, J. H. (2005). Thirteen years of reflection on auditory graphing: Promises, pitfalls, and potential new directions. Georgia Institute of Technology.

Foner, L. N. (1999). Artificial synesthesia via sonification: A wearable augmented sensory system. Mobile Networks and Applications, 4(1), 75–81.

Frysinger, S. P. (2005). A brief history of auditory data representation to the 1980s. Georgia Institute of Technology.

Garcia, A., Peres, S. C., Ritchey, P., Kortum, P., & Stallmann, K. (2011). Auditory Progress Bars: Estimations of Time Remaining. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 55(1), 1338–1341. https://doi.org/10.1177/1071181311551278

George, S. S., Crawford, D., Reubold, T., & Giorgi, E. (2017). Making Climate Data Sing: Using Music-like Sonifications to Convey a Key Climate Record. Bulletin of the American Meteorological Society, 98(1), 23–27.

Gfeller, K., Woodworth, G., Robin, D. A., Witt, S., & Knutson, J. F. (1997). Perception of rhythmic and sequential pitch patterns by normally hearing adults and adult cochlear implant users. Ear and Hearing, 18(3), 252–260.

Gilfix, M., & Couch, A. L. (2000). Peep (The Network Auralizer): Monitoring Your Network with Sound. In LISA (pp. 109–117).

Gionfrida, L., & Roginska, A. (2017). A Novel Sonification Approach to Support the Diagnosis of Alzheimer’s Dementia. Frontiers in Neurology, 8, 647. https://doi.org/10.3389/fneur.2017.00647

Grond, F., & Hermann, T. (2014). Interactive Sonification for Data Exploration: How listening modes and display purposes define design guidelines. Organised Sound, 19(1), 41–51. https://doi.org/10.1017/S1355771813000393

Hermann, T., Hunt, A., & Neuhoff, J. G. (2011). The sonification handbook. Logos Verlag Berlin.

Jamson, A. H., Lai, F. C., & Carsten, O. M. (2008). Potential benefits of an adaptive forward collision warning system. Transportation Research Part C: Emerging Technologies, 16(4), 471–484.

Kather, J. N., Hermann, T., Bukschat, Y., Kramer, T., Schad, L. R., & Zöllner, F. G. (2017). Polyphonic sonification of electrocardiography signals for diagnosis of cardiac pathologies. Scientific Reports, 7, 44549. https://doi.org/10.1038/srep44549

Kirby, R. (2009). Development of a real-time performance measurement and feedback system for alpine skiers. Sports Technology, 2(1–2), 43–52.

Kish, D. (2009, April 11). Seeing with sound: What is it like to “see” the world using sonar? Daniel Kish, who lost his sight in infancy, reveals all. New Scientist.

Knoll, G. F. (2010). Radiation detection and measurement. John Wiley & Sons.

Kortum, P., Peres, S. C., Knott, B. A., & Bushey, R. (2005). The Effect of Auditory Progress Bars on Consumer’s Estimation of Telephone wait Time. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 49(4), 628–632. https://doi.org/10.1177/154193120504900406

Kramer, G. (2000). Auditory display: sonification, audification and auditory interfaces. Addison-Wesley Longman Publishing Co., Inc.

Larsen, P. E. (2016). More of an art than a science: Using microbial DNA sequences to compose music. Journal of Microbiology & Biology Education, 17(1), 129.

Loeb, R. G., & Fitch, W. T. (2002). A laboratory evaluation of an auditory display designed to enhance intraoperative monitoring. Anesthesia & Analgesia, 94(2), 362–368.

Loui, P., Koplin-Green, M., Frick, M., & Massone, M. (2014). Rapidly Learned Identification of Epileptic Seizures from Sonified EEG. Frontiers in Human Neuroscience, 8. https://doi.org/10.3389/fnhum.2014.00820

Lunn, P., & Hunt, A. (2011). Listening to the invisible: Sonification as a tool for astronomical discovery.

Mezrich, J. J., Frysinger, S., & Slivjanovski, R. (1984). Dynamic representation of multivariate time series data. Journal of the American Statistical Association, 79(385), 34–40.

Näätänen, R., Paavilainen, P., Rinne, T., & Alho, K. (2007). The mismatch negativity (MMN) in basic research of central auditory processing: A review. Clinical Neurophysiology, 118(12), 2544–2590. https://doi.org/10.1016/j.clinph.2007.04.026

Nagarajan, R., Yaacob, S., & Sainarayanan, G. (2003). Role of object identification in sonification system for visually impaired. In TENCON 2003. Conference on Convergent Technologies for the Asia-Pacific Region (Vol. 2, pp. 735–739). IEEE.

Nees, M. A., & Walker, B. N. (2009). Auditory Interfaces and Sonification.

Oscari, F., Secoli, R., Avanzini, F., Rosati, G., & Reinkensmeyer, D. J. (2012). Substituting auditory for visual feedback to adapt to altered dynamic and kinematic environments during reaching. Experimental Brain Research, 221(1), 33–41.

Parkinson, A., & Tanaka, A. (2013). Making Data Sing: Embodied Approaches to Sonification. In Sound, Music, and Motion (pp. 151–160). Springer, Cham. https://doi.org/10.1007/978-3-319-12976-1_9

Parvizi, J., Gururangan, K., Razavi, B., & Chafe, C. (2018). Detecting silent seizures by their sound. Epilepsia, 59(4), 877–884.

Paterson, E., Sanderson, P. M., Paterson, N. a. B., & Loeb, R. G. (2017). Effectiveness of enhanced pulse oximetry sonifications for conveying oxygen saturation ranges: a laboratory comparison of five auditory displays. British Journal of Anaesthesia, 119(6), 1224–1230. https://doi.org/10.1093/bja/aex343

Pereverzev, S. V., Loshak, A., Backhaus, S., Davis, J. C., & Packard, R. E. (1997). Quantum oscillations between two weakly coupled reservoirs of superfluid 3He. Nature, 388(6641), 449.

Petrofsky, J. (2001). The use of electromyogram biofeedback to reduce Trendelenburg gait. European Journal of Applied Physiology, 85(5), 491–495.

Pollack, I., & Ficks, L. (1954). Information of elementary multidimensional auditory displays. The Journal of the Acoustical Society of America, 26(2), 155–158.

Qi, L., Martin, M. V., Kapralos, B., Green, M., & García-Ruiz, M. (2007). Toward sound-assisted intrusion detection systems. In OTM Confederated International Conferences “On the Move to Meaningful Internet Systems” (pp. 1634–1645). Springer.

Quinn, M. (2001). Research set to music: The climate symphony and other sonifications of ice core, radar, DNA, seismic and solar wind data. Georgia Institute of Technology.

Quinn, M. (2012). “Walk on the Sun”: an interactive image sonification exhibit. AI & Society, 27(2), 303–305.

Riskowski, J. L., Mikesky, A. E., Bahamonde, R. E., & Burr, D. B. (2009). Design and validation of a knee brace with feedback to reduce the rate of loading. Journal of Biomechanical Engineering, 131(8), 084503.

Sanderson, P. (2006). The multimodal world of medical monitoring displays. Applied Ergonomics, 37(4), 501–512.

Schaffert, N., Mattes, K., & Effenberg, A. O. (2009). A sound design for the purposes of movement optimisation in elite sport (using the example of rowing). Georgia Institute of Technology.

Schmitz, G., & Bock, O. (2014). A Comparison of Sensorimotor Adaptation in the Visual and in the Auditory Modality. PloS One, 9(9), e107834.

Seagull, F. J., Wickens, C. D., & Loeb, R. G. (2001). When is less more? Attention and workload in auditory, visual, and redundant patient-monitoring conditions. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 45, pp. 1395–1399). SAGE Publications Sage CA: Los Angeles, CA.

Sigrist, R., Rauter, G., Riener, R., & Wolf, P. (2013). Augmented visual, auditory, haptic, and multimodal feedback in motor learning: A review. Psychonomic Bulletin & Review, 20(1), 21–53. https://doi.org/10.3758/s13423-012-0333-8

Sigrist, R., Schellenberg, J., Rauter, G., Broggi, S., Riener, R., & Wolf, P. (2011). Visual and auditory augmented concurrent feedback in a complex motor task. Presence: Teleoperators and Virtual Environments, 20(1), 15–32.

Speeth, S. D. (1961). Seismometer sounds. The Journal of the Acoustical Society of America, 33(7), 909–916.

Stanton, J. (2015). Sensing big data: Multimodal information interfaces for exploration of large data sets. In Big Data at Work (pp. 172–192). Routledge.

Stockman, T., Nickerson, L. V., & Hind, G. (2005). Auditory graphs: A summary of current experience and towards a research agenda. Georgia Institute of Technology.

Sturm, B. L. (2005). Pulse of an Ocean: Sonification of Ocean Buoy Data. Leonardo, 38(2), 143–149.

Targett, S., & Fernstrom, M. (2003). Audio games: Fun for all? All for fun! Georgia Institute of Technology.

Tervaniemi, M., & Brattico, E. (2004). From sounds to music towards understanding the neurocognition of musical sound perception. Journal of Consciousness Studies, 11(3–4), 9–27.

Underwood, S. M. (2009). Effects of augmented real-time auditory feedback on top-level precision shooting performance.

Väljamäe, A., Steffert, T., Holland, S., Marimon, X., Benitez, R., Mealla, S., … Jordà, S. (2013). A review of real-time EEG sonification research (pp. 85–93). Presented at the International Conference on Auditory Display 2013 (ICAD 2013), Lodz, Poland. Retrieved from http://icad2013.com/index.php

Vickers, P., Laing, C., Debashi, M., & Fairfax, T. (2014). Sonification Aesthetics and Listening for Network Situational Awareness. arXiv:1409.5282 [cs]. https://doi.org/10.13140/2.1.4225.6648

Vickers, P., Laing, C., & Fairfax, T. (2017). Sonification of a network’s self-organized criticality for real-time situational awareness. Displays, 47, 12–24. https://doi.org/10.1016/j.displa.2016.05.002

Walker, B. N., & Kramer, G. (2004). Ecological psychoacoustics and auditory displays: Hearing, grouping, and meaning making. Ecological Psychoacoustics, 150–175.

Walker, B. N., & Mauney, L. M. (2010). Universal design of auditory graphs: A comparison of sonification mappings for visually impaired and sighted listeners. ACM Transactions on Accessible Computing (TACCESS), 2(3), 12.

Watson, M., & Sanderson, P. (2004). Sonification supports eyes-free respiratory monitoring and task time-sharing. Human Factors, 46(3), 497–517.

Watson, T., & Lip, G. Y. H. (2006). Blood pressure measurement in atrial fibrillation: goodbye mercury? Journal of Human Hypertension, 20(9), 638.

Wickens, C. D. (2002). Multiple resources and performance prediction. Theoretical Issues in Ergonomics Science, 3(2), 159–177.

Wickens, C. D., & Liu, Y. (1988). Codes and modalities in multiple resources: A success and a qualification. Human Factors, 30(5), 599–616.

Wickens, C. D., Parasuraman, R., & Davies, D. R. (1984). Varieties of attention.

Winberg, F., & Hellstrom, S. O. (2001). Qualitative aspects of auditory direct manipulation. A case study of the towers of Hanoi. Georgia Institute of Technology.

Yamamoto, G., Shiraki, K., Takahata, M., Sakane, Y., & Takebayashi, Y. (2004). Multimodal knowledge for designing new sound environments. In The International Conference on Human Computer Interaction with Mobile Devices and Services.

Yeung, E. S. (1980). Pattern recognition by audio representation of multivariate analytical data. Analytical Chemistry, 52(7), 1120–1123.