Academic literature on the topic 'Sonification'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Sonification.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Sonification"

1

Paterson, Estrella, Penelope Sanderson, Neil Paterson, David Liu, and Robert Loeb. "The effect of a secondary task on identification accuracy of oxygen saturation ranges using an enhanced pulse oximetry sonification." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 60, no. 1 (September 2016): 628–32. http://dx.doi.org/10.1177/1541931213601143.

Full text
Abstract:
In the operating theatre, anesthesiologists monitor an anesthetized patient’s oxygen saturation (SpO2) with a visual display but also with an auditory tone, or sonification. However, if the anesthesiologist must divide their attention across tasks, they may be less effective at recognising their patient’s SpO2 level. Previous research indicates that a sonification enhanced with additional sound dimensions of tremolo and brightness more effectively supports participants’ identification of SpO2 ranges than a conventional sonification does. This laboratory study explored the effect of a secondary task on participants’ ability to identify SpO2 range when using a conventional sonification (LogLinear sonification) versus an enhanced sonification (Stepped Effects sonification). Nineteen non-clinician participants who used the Stepped Effects sonification were significantly more effective at identifying SpO2 range (Md = 100%) than were 18 participants using the LogLinear sonification (Md = 80%). Range identification performance of participants using the Stepped Effects sonification tended to be less disrupted by a concurrent arithmetic task (drop from Md = 100% to 95%) than it was for participants using the LogLinear sonification (drop from Md = 80% to 73%). However, the disruption effect in each case was small, and the difference in disruption across sonifications was not statistically significant. Future research will test the sonifications under more intense cognitive load and in the presence of ambient noise.
APA, Harvard, Vancouver, ISO, and other styles
2

Zestic, Jelena, Birgit Brecknell, Helen Liley, and Penelope Sanderson. "A Novel Auditory Display for Neonatal Resuscitation: Laboratory Studies Simulating Pulse Oximetry in the First 10 Minutes After Birth." Human Factors: The Journal of the Human Factors and Ergonomics Society 61, no. 1 (September 27, 2018): 119–38. http://dx.doi.org/10.1177/0018720818793769.

Full text
Abstract:
Objective: We tested whether enhanced sonifications would improve participants’ ability to judge the oxygen saturation levels (SpO2) of simulated neonates in the first 10 min after birth. Background: During the resuscitation of a newborn infant, clinicians must keep the neonate’s SpO2 levels within the target range; however, the boundaries of the target range change each minute during the first 10 min after birth. Resuscitation places significant demand on the clinician’s visual attention, and the pulse oximeter’s sonification could provide eyes-free monitoring. However, clinicians have difficulty judging SpO2 levels using the current sonification. Method: In two experiments, nonclinicians’ ability to detect SpO2 range and direction, while performing continuous arithmetic problems, was tested with enhanced versus conventional sonifications. In Experiment 1, tremolo signaled when SpO2 had deviated below or above the target range. In Experiment 2, tremolo plus brightness signaled when SpO2 was above the target range, and tremolo alone when SpO2 was below the target range. Results: The tremolo sonification improved range identification accuracy over the conventional display (81% vs. 63%, p < .001). The tremolo plus brightness sonification further improved range identification accuracy over the conventional display (92% vs. 62%, p < .001). In both experiments, there was no difference across conditions in arithmetic task accuracy (p > .05). Conclusion: Using the enhanced sonifications, participants identified SpO2 range more accurately despite a continuous distractor task. Application: An enhanced pulse oximetry sonification could help clinicians multitask more effectively during neonatal resuscitations.
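The Experiment 2 mapping lends itself to a compact decision rule. A minimal sketch in Python (the function name and the example target-range bounds are hypothetical illustrations, not values from the paper):

```python
def enhancement_cues(spo2, target_low, target_high):
    """Map an SpO2 reading to the enhancement cues described above:
    tremolo alone below the target range, tremolo plus brightness
    above it, and neither while inside the range."""
    if spo2 < target_low:
        return {"tremolo": True, "brightness": False}   # below range
    if spo2 > target_high:
        return {"tremolo": True, "brightness": True}    # above range
    return {"tremolo": False, "brightness": False}      # in range

# The target-range bounds change minute by minute after birth; the
# bounds used here are illustrative only, not clinical values.
print(enhancement_cues(60, 65, 85))  # below range: tremolo only
print(enhancement_cues(95, 65, 85))  # above range: tremolo + brightness
```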
3

Ballora, Mark. "Sonification, Science and Popular Music: In search of the ‘wow’." Organised Sound 19, no. 1 (February 26, 2014): 30–40. http://dx.doi.org/10.1017/s1355771813000381.

Full text
Abstract:
Sonification is described as an under-utilised dimension of the ‘wow!’ factor in science engagement multi-media. It is suggested that sonification's potential value, like much of the scientific visualisation content, probably lies less in hard facts and more in how it may serve as a stimulant for curiosity. Sound is described as a multi-dimensional phenomenon, and a number of approaches to creating sonifications are reviewed. Design strategies are described for five types of phenomena that were sonified for works created by cosmologist George Smoot III and percussionist/ethnomusicologist Mickey Hart, most particularly for their film Rhythms of the Universe (Hart and Smoot 2013).
4

Ziemer, Tim. "Sound Terminology in Sonification." Journal of the Audio Engineering Society 72, no. 5 (May 3, 2024): 274–89. http://dx.doi.org/10.17743/jaes.2022.0133.

Full text
Abstract:
Sonification research is intrinsically interdisciplinary. Consequently, a proper documentation of and interdisciplinary discourse about a sonification is often hindered by terminology discrepancies between involved disciplines, i.e., the lack of a common sound terminology in sonification research. Without a common ground, a researcher from one discipline may have trouble understanding the implementation and imagining the resulting sound perception of a sonification, if the sonification is described by a researcher from another discipline. To find a common ground, the author consulted literature on interdisciplinary research and discourse, identified problems that occur in sonification, and applied the recommended solutions. As a result, the author recommends considering three aspects of sonification individually, namely 1) Sound Design Concept, 2) Objective, and 3) Evaluation, clarifying which discipline is involved in which aspect and sticking to this discipline’s terminology. As two requirements of sonifications are that they are a) reproducible and b) interpretable, the author recommends documenting and discussing every sonification design once using audio engineering terminology and once using psychoacoustic terminology. The appendixes provide comprehensive lists of sound terms from both disciplines, together with relevant literature and a clarification of often misunderstood and misused terms.
5

Nees, Michael A. "Auditory Graphs Are Not the “Killer App” of Sonification, But They Work." Ergonomics in Design: The Quarterly of Human Factors Applications 26, no. 4 (August 15, 2018): 25–28. http://dx.doi.org/10.1177/1064804618773563.

Full text
Abstract:
The search for the elusive “killer app” of sonification has been a recurring theme in sonification research. In this comment, I argue that the killer-app criterion of success stems from interdisciplinary tensions about how to evaluate sonifications. Using auditory graphs as an example, I argue that the auditory display community has produced successful examples of sonic information design that accomplish the human factors goal of improving human interactions with systems. Still, barriers to using sonifications in interfaces remain, and reducing those barriers could result in more widespread use of audio in systems.
6

Groppe, Sven, Rico Klinckenberg, and Benjamin Warnke. "Sound of databases." Proceedings of the VLDB Endowment 14, no. 12 (July 2021): 2695–98. http://dx.doi.org/10.14778/3476311.3476322.

Full text
Abstract:
Sonifications map data to auditory dimensions and offer a new audible experience to their listeners. We propose a sonification of query processing, paired with a corresponding visualization, both integrated in a web application. In this demonstration we show that the sonification of different types of relational operators generates different sound patterns, which listeners can recognize and identify, increasing their understanding of the operators' functionality and supporting easy recall of requirements such as merge joins working on sorted input. Furthermore, the sonification approach makes new ways of analyzing query processing possible.
7

Jones, Kim-Marie N., Mitchell J. Allen, and Kashlin McCutcheon. "Development of a data sonification toolkit and case study sonifying astrophysical phenomena for visually impaired individuals." Journal of the Acoustical Society of America 154, no. 4_supplement (October 1, 2023): A209. http://dx.doi.org/10.1121/10.0023301.

Full text
Abstract:
Data is predominantly conveyed and analysed in a visual manner. Data sonification provides an alternative approach to data analysis, has a broader application (e.g., peripheral monitoring) and offers accessibility to an alternative cohort (e.g., the visually impaired). While there are countless data visualization tools available, producing data sonifications typically requires in-depth knowledge of audio software and/or computer programming. Research was undertaken into existing data sonification tools, and subsequently a Data Sonification Toolkit was developed using Ableton Live and Max for Live. The Toolkit was developed in particular as an alternative means of data analysis for use by multiple disciplines across a built environment firm. The key aims for the toolkit were that (1) it should be user-friendly and accessible to people without an in-depth knowledge of either Ableton or Max, (2) the sonifications produced should be accurate and true representations of input data sets, and (3) the Toolkit should have the capability and flexibility to be expanded and customized by those with the expertise to do so. The Toolkit was used, in collaboration with astrophysicist Chris Harrison, to develop sonifications of astronomical phenomena for the visually impaired. These sonifications were presented at the British Science Festival in 2019.
8

Cantrell, Stanley J., R. Michael Winters, Prakriti Kaini, and Bruce N. Walker. "Sonification of Emotion in Social Media: Affect and Accessibility in Facebook Reactions." Proceedings of the ACM on Human-Computer Interaction 6, CSCW1 (March 30, 2022): 1–26. http://dx.doi.org/10.1145/3512966.

Full text
Abstract:
Facebook Reactions are a collection of animated icons that enable users to share and express their emotions when interacting with Facebook content. The current design of Facebook Reactions utilizes visual stimuli (animated graphics and text) to convey affective information, which presents usability and accessibility barriers for visually-impaired Facebook users. In this paper, we investigate the use of sonification as a universally-accessible modality to aid in the conveyance of affect for blind and sighted social media users. We discuss the design and evaluation of 48 sonifications, leveraging Facebook Reactions as a conceptual framework. We conducted an online sound-matching study with 75 participants (11 blind, 64 sighted) to evaluate the performance of these sonifications. We found that sonification is an effective tool for conveying emotion for blind and sighted participants, and we highlight sonification design strategies that contribute to improved efficacy. Finally, we contextualize these findings and discuss the implications of this research with respect to HCI and the accessibility of online communities and platforms.
9

Barrass, Stephen, Mitchell Whitelaw, and Guillaume Potard. "Listening to the Mind Listening." Media International Australia 118, no. 1 (February 2006): 60–67. http://dx.doi.org/10.1177/1329878x0611800109.

Full text
Abstract:
The Listening to the Mind Listening concert was a practice-led research project to explore the idea that we might hear information patterns in the sonified recordings of brain activity, and to investigate the aesthetics of sonifications of the same data set by different composers. This world-first concert of data sonifications was staged at the Sydney Opera House Studio on the evening of 6 July 2004 to a capacity audience of more than 350 neuroscientists, composers, sonification researchers, new media artists and a general public curious to hear what the human brain could sound like. The concert generated 30 sonifications of the same data set, explicit descriptions of the techniques each composer used to map the data into sound, and 90 reviews of these sonifications. This paper presents the motivations for the project, overviews related work, describes the sonification criteria and the review process, and presents and discusses outcomes from the concert.
10

Grond, Florian, and Thomas Hermann. "Interactive Sonification for Data Exploration: How listening modes and display purposes define design guidelines." Organised Sound 19, no. 1 (February 26, 2014): 41–51. http://dx.doi.org/10.1017/s1355771813000393.

Full text
Abstract:
The desire to make data accessible through the sense of listening has led to ongoing research in the fields of sonification and auditory display since the early 1990s. Coming from the disciplines of computer sciences and human computer interface (HCI), the conceptualisation of sonification has been mostly driven by application areas and methods. On the other hand, the sonic arts, which have always participated in the auditory display community, have a genuine focus on sound. Despite these close interdisciplinary relationships between communities of sound practitioners, a rich and sound- or listening-centred concept of sonification is still missing for design guidelines. Complementary to the useful organisation by fields of application, a proper conceptual framework for sound needs to be abstracted from applications and also to some degree from tasks, as both are not directly related to sound. As an initial approach to recasting the thinking about sonification, we propose a conceptualisation of sonifications along two poles in which sound serves either a normative or a descriptive purpose. According to these two poles, design guidelines can be developed proper to display purposes and listening modes.

Dissertations / Theses on the topic "Sonification"

1

Berman, Lewis Irwin. "Program comprehension through sonification." Thesis, Durham University, 2011. http://etheses.dur.ac.uk/1396/.

Full text
Abstract:
Background: Comprehension of computer programs is daunting, thanks in part to clutter in the software developer's visual environment and the need for frequent visual context changes. Non-speech sound has been shown to be useful in understanding the behavior of a program as it is running. Aims: This thesis explores whether using sound to help understand the static structure of programs is viable and advantageous. Method: A novel concept for program sonification is introduced. Non-speech sounds indicate characteristics of and relationships among a Java program's classes, interfaces, and methods. A sound mapping is incorporated into a prototype tool consisting of an extension to the Eclipse integrated development environment communicating with the sound engine Csound. Developers examining source code can aurally explore entities outside of the visual context. A rich body of sound techniques provides expanded representational possibilities. Two studies were conducted. In the first, software professionals participated in exploratory sessions to informally validate the sound mapping concept. The second study was a human-subjects experiment to discover whether using the tool and sound mapping improve performance of software comprehension tasks. Twenty-four software professionals and students performed maintenance-oriented tasks on two Java programs with and without sound. Results: Viability is strong for differentiation and characterization of software entities, less so for identification. The results show no overall advantage of using sound in terms of task duration at a 5% level of significance. The results do, however, suggest that sonification can be advantageous under certain conditions. Conclusions: The use of sound in program comprehension shows sufficient promise for continued research. 
Limitations of the present research include restriction to particular types of comprehension tasks, a single sound mapping, a single programming language, and limited training time. Future work includes experiments and case studies employing a wider set of comprehension tasks, sound mappings in domains other than software, and adding navigational capability for use by the visually impaired.
2

Ejdbo, Malin, and Elias Elmquist. "Interactive Sonification in OpenSpace." Thesis, Linköpings universitet, Medie- och Informationsteknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-170250.

Full text
Abstract:
This report presents the work of a master's thesis whose aim was to investigate how sonification can be used in the space visualization software OpenSpace to further convey information about the Solar System. A sonification was implemented using the software SuperCollider and was integrated into OpenSpace using Open Sound Control to send positional data that controls the panning and sound level of the sonification. The graphical user interface of OpenSpace was also extended to make the sonification interactive. Evaluations were conducted both online and in the Dome theater to evaluate how well the sonification conveyed information. The outcome of the evaluations shows promising results, suggesting that sonification has a future in conveying information about the Solar System.
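The thesis couples OpenSpace to SuperCollider over Open Sound Control, with positional data driving panning and sound level. The panning half of such a mapping is commonly implemented as constant-power gains derived from a source's azimuth; the sketch below is a generic illustration of that idea, not the thesis's actual code:

```python
import math

def constant_power_pan(azimuth_deg):
    """Map an azimuth in degrees (-90 = hard left, +90 = hard right)
    to constant-power left/right gains, so perceived loudness stays
    even as a source (e.g. a planet in the visualization) moves."""
    az = max(-90.0, min(90.0, azimuth_deg))
    theta = (az + 90.0) / 180.0 * (math.pi / 2)  # 0..pi/2 across the stereo field
    return math.cos(theta), math.sin(theta)      # (left gain, right gain)

left, right = constant_power_pan(0.0)  # centred source
# left == right, and left**2 + right**2 == 1 at every azimuth
```

Constant-power (rather than linear) crossfading is the usual choice here because summed power, not summed amplitude, tracks perceived loudness.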
3

Pietrucha, Matthew. "Sonification of Spectroscopy Data." Digital WPI, 2019. https://digitalcommons.wpi.edu/etd-theses/1277.

Full text
Abstract:
Sonification is the process of mapping non-musical data to sound. The field comprises three key areas of research: (1) psychological research in perception and cognition, (2) the development of tools, and (3) sonification design and application. The goals of this research were twofold: (1) to provide insights into the development of sonification tools within the programming environment Max for use in further sonification/interdisciplinary research, and (2) to provide a framework for a musical sonification system. The sonification system discussed was developed to audify spectrometry data, with the purpose of better understanding how multi-purpose systems can be easily modified to suit a particular need. Since all sonification systems may become context specific to the data they audify, a system was developed in the programming language Max that is both modular and responsive to the parameterization of data to create musical outcomes. The trends and phenomena of spectral data in the field of spectroscopy are plotted musically through the system and further enhanced by processes that associate descriptors of said data with compositional idioms, rhythmically, melodically, and harmonically. This process was achieved in Max by creating a modular system that handles the importing and formatting of spectral data (or any data in an array format) and sends that data to a variety of subprograms for sonification. Subprograms handle timing and duration, diatonic melody, harmony, and timbral aspects including synthesis and audio effects. These systems are accessible both at a high level for novice users, as well as within the Max environment for more nuanced modification to support further research.
4

Ibrahim, Ag Asri Ag. "Usability inspection for sonification applications." Thesis, University of York, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.479510.

Full text
5

Steffert, Tony. "Real-time electroencephalogram sonification for neurofeedback." Thesis, Open University, 2018. http://oro.open.ac.uk/57965/.

Full text
Abstract:
Electroencephalography (EEG) is the measurement via the scalp of the electrical activity of the brain. The established therapeutic intervention of neurofeedback involves presenting people with their own EEG in real-time to enable them to modify their EEG for purposes of improving performance or health. The aim of this research is to develop and validate real-time sonifications of EEG for use in neurofeedback, and methods for assessing such sonifications. Neurofeedback generally uses a visual display. Where auditory feedback is used, it is mostly limited to pre-recorded sounds triggered by the EEG activity crossing a threshold. However, EEG generates time-series data with meaningful detail at fine temporal resolution and with complex temporal dynamics. Human hearing has a much higher temporal resolution than human vision, and auditory displays do not require people to focus on a screen with their eyes open for extended periods of time, e.g. if they are engaged in some other task. Sonification of EEG could allow more rapid, contingent, salient and temporally detailed feedback. This could improve the efficiency of neurofeedback training and reduce the number and duration of sessions needed for successful neurofeedback. The same two deliberately simple sonification techniques were used in all three experiments of this research: Amplitude Modulation (AM) sonification, which maps the fluctuations in the power of the EEG to the volume of a pure tone; and Frequency Modulation (FM) sonification, which uses the changes in EEG power to modify the frequency. Measures included a listening task; the NASA Task Load Index, a measure of how much work the task required; pre- and post-measures of mood; and EEG. The first experiment used pre-recorded single-channel EEG, and participants were asked to listen to the sound of the sonified EEG and try to track the activity they could hear by moving a slider on a computer screen using a computer mouse.
This provided a quantitative assessment of how well people could perceive the sonified fluctuations in EEG level. Tracking accuracy scores were higher for the FM sonification, but self-assessments of task load rated the AM sonification as easier to track. The second experiment used the same two sonifications in a real neurofeedback task using participants' own live EEG. Unbeknownst to the participants, the neurofeedback task was designed to improve mood. A pre-post questionnaire showed that participants changed their self-rated mood in the intended direction with the EEG training, but there was no statistically significant change in EEG. Again, the FM sonification showed better performance, but AM was rated as less effortful. The performance of the sonifications in the tracking task in Experiment 1 was found to predict their relative efficacy at blind self-rated mood modification in Experiment 2. The third experiment used both the tracking task of Experiment 1 and the neurofeedback task of Experiment 2, but with modified versions of the AM and FM sonifications allowing two-channel EEG sonification. This experiment introduced a physical slider, as opposed to a mouse, for the tracking task. Tracking accuracy increased, but this time no significant difference was found between the two sonification techniques on the tracking task. In the training task, once more the blind self-rated mood improved in the intended direction with the EEG training, but as there was again no significant change in EEG, this cannot necessarily be attributed to the neurofeedback. There was only a slight difference between the two sonification techniques on the effort measure. In this way, a prototype method has been devised and validated for the quantitative assessment of real-time EEG sonifications. Conventional evaluations of neurofeedback techniques are expensive and time-consuming.
By contrast, this approach potentially provides a rapid, objective and efficient way to evaluate the suitability of candidate sonifications for EEG neurofeedback.
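The two deliberately simple techniques, AM and FM sonification of an EEG power envelope, amount to a plain parameter mapping. A minimal sketch; the carrier frequency, segment length, and synthetic power values are illustrative assumptions, not the thesis's settings:

```python
import math

def am_fm_sonify(power, sr=8000, carrier=440.0, seg_dur=0.01):
    """Render the two sonifications described above from a sequence
    of band-power values in [0, 1] (synthetic here, not real EEG).

    AM: power controls the volume of a fixed 440 Hz tone.
    FM: power shifts the tone's frequency (here 440-880 Hz)."""
    am, fm = [], []
    am_phase = fm_phase = 0.0
    samples_per_seg = int(sr * seg_dur)
    for p in power:
        fm_freq = carrier * (1.0 + p)                 # FM mapping: 440..880 Hz
        for _ in range(samples_per_seg):
            am_phase += 2 * math.pi * carrier / sr
            fm_phase += 2 * math.pi * fm_freq / sr    # phase-continuous tone
            am.append(p * math.sin(am_phase))         # AM mapping: volume = power
            fm.append(math.sin(fm_phase))
    return am, fm

am, fm = am_fm_sonify([0.2, 0.8, 0.5])  # three 10 ms segments of synthetic power
```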
6

Perkins, Rhys John. "Interactive sonification of a physics engine." Thesis, Anglia Ruskin University, 2013. http://arro.anglia.ac.uk/323077/.

Full text
Abstract:
Physics engines have become increasingly prevalent in everyday technology. In the context of this thesis they are regarded as a readily available data set that has the potential to intuitively present the process of sonification to a wide audience. Unfortunately, this process is not the focus of attention when formative decisions are made concerning the continued development of these engines. This may reveal a missed opportunity when considering that the field of interactive sonification upholds the importance of physical causalities for the analysis of data through sound. The following investigation deliberates the contextual framework of this field to argue that the physics engine, as part of typical game engine architecture, is an appropriate foundation on which to design and implement a dynamic toolset for interactive sonification. The basis for this design is supported by a number of significant theories which suggest that the underlying data of a rigid body dynamics physics system can sustain an inherent audiovisual metaphor for interaction, interpretation and analysis. Furthermore, it is determined that this metaphor can be enhanced by the extraordinary potential of the computer in order to construct unique abstractions which build upon the many pertinent ideas and practices within the surrounding literature. These abstractions result in a mental model for the transformation of data to sound that has a number of advantages in contrast to a physical modelling approach while maintaining its same creative potential for instrument building, composition and live performance. Ambitions for both sonification and its creative potential are realised by several components which present the user with a range of options for interacting with this model. The implementation of these components effectuates a design that can be demonstrated to offer a unique interpretation of existing strategies as well as overcoming certain limitations of comparable work.
7

Parseihian, Gaëtan. "Sonification binaurale pour l'aide à la navigation." Phd thesis, Université Pierre et Marie Curie - Paris VI, 2012. http://tel.archives-ouvertes.fr/tel-00771316.

Full text
Abstract:
In this thesis, we propose an augmented-reality system based on 3D sound and sonification, intended to provide visually impaired people with the information they need for reliable and safe travel. The design of this system was approached along three axes. The use of binaural synthesis to generate 3D sound is limited by the problem of HRTF individualization. A method was developed to adapt individuals to non-individual HRTFs by exploiting brain plasticity. Evaluated in a localization experiment, this method demonstrated that a virtual audio-spatial map can be acquired rapidly without using vision. The sonification of spatial data was studied in the context of a system for grasping objects in peripersonal space. Localization of real and virtual sound sources was assessed with a localization test. A technique for sonifying distance was developed: by linking the parameter to be sonified to the parameters of an audio effect, this technique can be applied to any type of sound without requiring additional learning. A sonification strategy that takes user preferences into account was also devised. 'Morphocons' are auditory icons defined by patterns of acoustic parameters. This method allows the construction of a sound vocabulary independent of the sound used. A categorization test showed that subjects are able to recognize auditory icons on the basis of a morphological description regardless of the type of sound used.
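The distance-sonification idea, linking the parameter to be sonified to the parameters of an audio effect, can be sketched generically. The particular effect chosen below (a low-pass filter cutoff) and all constants are illustrative assumptions, not the thesis's actual design:

```python
def distance_to_cutoff(distance_m, d_max=3.0, f_min=200.0, f_max=8000.0):
    """Map object distance to an audio-effect parameter (here a
    low-pass filter cutoff in Hz): near objects sound bright, far
    ones muffled. Exponential interpolation between f_max and f_min
    keeps steps roughly perceptually even across the range."""
    d = max(0.0, min(d_max, distance_m)) / d_max  # normalize to 0..1
    return f_max * (f_min / f_max) ** d           # f_max at 0 m, f_min at d_max

print(round(distance_to_cutoff(0.0)))  # closest object: brightest sound
print(round(distance_to_cutoff(3.0)))  # farthest object: most muffled sound
```

Because the mapping drives an effect applied on top of an arbitrary source sound, the same rule works whether the underlying sound is a tone, noise, or a recorded sample, which is the property the abstract highlights.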
8

Worrall, David. "Sonification and Information: Concepts, Instruments and Techniques." University of Canberra. Communication, 2009. http://erl.canberra.edu.au./public/adt-AUC20090818.142345.

Full text
Abstract:
This thesis is a study of sonification and information: what they are and how they relate to each other. The pragmatic purpose of the work is to support a new generation of software tools that can play an active role in research and practice involving the understanding of information structures found in potentially very large multivariate datasets. The theoretical component of the work involves a review of the way the concept of information has changed through Western culture, from the Ancient Greeks to recent collaborations between cognitive science and the philosophy of mind, with a particular emphasis on the phenomenology of immanent abstractions and how they might be supported and enhanced using sonification techniques. A new software framework is presented, together with several examples of its use in presenting sonifications of financial information, including that from a high-frequency securities-exchange trading-engine.
9

Dyer, John. "Human movement sonification for motor skill learning." Thesis, Queen's University Belfast, 2017. https://pure.qub.ac.uk/portal/en/theses/human-movement-sonification-for-motor-skill-learning(4bda096c-e8ab-4af4-8f35-7445c6b0cb7e).html.

Full text
Abstract:
Transforming human movement into live sound can be used as a method to enhance motor skill learning via the provision of augmented perceptual feedback. A small but growing number of studies hint at the substantial efficacy of this approach, termed 'movement sonification'. However there has been sparse discussion in Psychology about how movement should be mapped onto sound to best facilitate learning. The current thesis draws on contemporary research conducted in Psychology and theoretical debates in other disciplines more directly concerned with sonic interaction - including Auditory Display and Electronic Music-Making - to propose an embodied account of sonification as feedback. The empirical portion of the thesis both informs and tests some of the assumptions of this approach with the use of a custom bimanual coordination paradigm. Four motor skill learning studies were conducted with the use of optical motion-capture. Findings support the general assumption that effective mappings aid learning by making task-intrinsic perceptual information more readily available and meaningful, and that the relationship between task demands and sonic information structure (or, between action and perception) should be complementary. Both the theoretical and empirical treatments of sonification for skill learning in this thesis suggest the value of an approach which addresses learner experience of sonified interaction while grounding discussion in the links between perception and action.
APA, Harvard, Vancouver, ISO, and other styles
10

Leplâtre, Grégory. "The design and evaluation of non speech sounds to support navigation in restricted display devices." Thesis, University of Glasgow, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.270963.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Sonification"

1

Worrall, David. Sonification Design. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-01497-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Hermann, Thomas, Andy Hunt, and John G. Neuhoff, eds. The sonification handbook. Berlin: Logos Verlag, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Effenberg, Alfred O. Sonification: Ein akustisches Informationskonzept zur menschlichen Bewegung. Schorndorf: Hofmann, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Gregory, Kramer, Santa Fe Institute (Santa Fe, N.M.), and International Conference on Auditory Display (1st : 1992 : Santa Fe, N.M.), eds. Auditory display: Sonification, audification, and auditory interfaces. Reading, Mass: Addison-Wesley, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Baca, Julia A. Application of data sonification for enhanced interpretation of numerical model results. [Vicksburg, Miss.]: U.S. Army Engineer Waterways Experiment Station, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Sonifikation: Transfer ins Musikalische: Programmbuch zum Festival Sonifikationen - klingende Datenströme: 27.-29. Oktober 2017, Berliner Gesellschaft für Neue Musik = Sonification: transfer into musical arts: program book for the festival Sonifications - audible data streams. Hofheim: Wolke Verlag, 2017.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Scaletti, Carla. Sonification ≠ Music. Edited by Roger T. Dean and Alex McLean. Oxford University Press, 2018. http://dx.doi.org/10.1093/oxfordhb/9780190226992.013.9.

Full text
Abstract:
Starting from the observation that symbolic language is not the only channel for human communication, this chapter examines ‘data sonification’, a means of understanding, reasoning about, and communicating meaning that extends beyond that which can be conveyed by symbolic language alone. Data sonification is a mapping from data generated by a model, captured in an experiment, or otherwise gathered through observation to one or more parameters of an audio signal or sound synthesis model for the purpose of better understanding, communicating, or reasoning about the original model, experiment, or system. Although data sonification shares techniques and materials with data-driven music, it is in the interests of the practitioners of both music composition and data sonification to maintain a distinction between the two fields.
APA, Harvard, Vancouver, ISO, and other styles
8

Minciacchi, Diego, and David Rosenboom, eds. Sonification, Perceptualizing Biological Information. Frontiers Media SA, 2020. http://dx.doi.org/10.3389/978-2-88963-868-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Worrall, David. An Introduction to Data Sonification. Oxford University Press, 2011. http://dx.doi.org/10.1093/oxfordhb/9780199792030.013.0016.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Worrall, David. Sonification Design: From Data to Intelligible Soundfields. Springer, 2019.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
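Several of the books above, notably Scaletti's chapter, define data sonification as a mapping from data values to one or more parameters of an audio signal or synthesis model. The sketch below illustrates such a parameter mapping using only the Python standard library, rendering a data series as a sequence of sine tones whose pitch tracks the data; the data, frequency range, and decay envelope are illustrative choices, not taken from any of the listed works.

```python
import math
import struct
import wave

def map_range(value, lo, hi, out_lo, out_hi):
    """Linearly map value from [lo, hi] onto [out_lo, out_hi]."""
    return out_lo + (value - lo) * (out_hi - out_lo) / (hi - lo)

def sonify(data, path, rate=44100, note_dur=0.25, f_lo=220.0, f_hi=880.0):
    """Render each datum as a short decaying sine tone; pitch encodes value."""
    lo, hi = min(data), max(data)
    frames = bytearray()
    for value in data:
        freq = map_range(value, lo, hi, f_lo, f_hi)
        n = int(rate * note_dur)
        for i in range(n):
            amp = 0.5 * (1 - i / n)  # simple linear decay envelope
            sample = amp * math.sin(2 * math.pi * freq * i / rate)
            frames += struct.pack('<h', int(sample * 32767))  # 16-bit PCM
    with wave.open(path, 'wb') as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)
        wav.setframerate(rate)
        wav.writeframes(bytes(frames))

# A rising/falling series becomes an audibly rising/falling melody.
sonify([1, 3, 2, 5, 4], 'series.wav')
```

The choice of a linear pitch mapping is the simplest option; as the Scaletti and Worrall texts discuss, the mapping itself is the central design decision in any sonification.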

Book chapters on the topic "Sonification"

1

Holmes, Thom. "Sonification." In Sound Art, 98–104. New York: Routledge, 2022. http://dx.doi.org/10.4324/9781315623047-10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Bukvic, Ivica Ico. "Introduction to Sonification." In Foundations in Sound Design for Embedded Media, 194–217. New York: Routledge, 2019. Series: Sound Design, volume 3. http://dx.doi.org/10.4324/9781315106359-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Worrall, David. "Sonification: An Overview." In Human–Computer Interaction Series, 23–54. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-01497-1_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Goudarzi, Visda. "Sonification and HCI." In Human–Computer Interaction Series, 205–22. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-73356-2_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Vickers, Paul, Bennett Hogg, and David Worrall. "Aesthetics of sonification." In Body, Sound and Space in Music and Beyond: Multimodal Explorations, 89–109. Abingdon, Oxon; New York, NY: Routledge, 2017. Series: SEMPRE Studies in the Psychology of Music. http://dx.doi.org/10.4324/9781315569628-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Worrall, David. "Data Sonification: A Prehistory." In Human–Computer Interaction Series, 3–21. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-01497-1_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Saranti, Anna, Gerhard Eckel, and David Pirrò. "Quantum Harmonic Oscillator Sonification." In Auditory Display, 184–201. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-12439-6_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Ramakrishnan, Chandrasekhar. "Sonification and Information Theory." In Auditory Display, 121–42. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-12439-6_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Savery, Richard, Anna Savery, Martim S. Galvão, and Lisa Zahray. "Navigating Robot Sonification: Exploring Four Approaches to Sonification in Autonomous Vehicles." In Sound and Robotics, 159–75. Boca Raton: Chapman and Hall/CRC, 2023. http://dx.doi.org/10.1201/9781003320470-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Van Ransbeeck, Samuel. "Sonification in an Artistic Context." In Playful Disruption of Digital Media, 151–66. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-10-1891-6_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Sonification"

1

Nalli, Michele, David Johnson, and Thomas Hermann. "Facial Behavior Sonification With the Interactive Sonification Framework Panson." In ICAD 2023: The 28th International Conference on Auditory Display. International Community for Auditory Display, 2023. http://dx.doi.org/10.21785/icad2023.4233.

Full text
Abstract:
Facial behavior occupies a central role in social interaction. Its auditory representation is useful for various applications such as for supporting the visually impaired, for actors to train emotional expression, and for supporting annotation of multi-modal behavioral corpora. In this paper we present a prototype system for interactive sonification of facial behavior that works both in real-time mode, using a webcam, and offline mode, analyzing a video file. The system is based on Python and Jupyter notebooks, and relies on the Python module sc3nb for sonification-related functionalities. Facial feature extraction is realized using OpenFace 2.0. Designing the system led to the development of a framework of reusable components for developing interactive sonification applications, called Panson, which can be used to easily design and adapt sonifications for different use cases. We present the main concepts behind the facial behavior sonification system and the Panson framework. Furthermore, we introduce and discuss novel sonifications developed using Panson, and demonstrate them with a set of sonified videos. Both the sonifications and Panson are available as open-source reproducible research on GitHub.
APA, Harvard, Vancouver, ISO, and other styles
2

Phillips, Sean, and Andrés Cabrera. "Sonification Workstation." In ICAD 2019: The 25th International Conference on Auditory Display. Newcastle upon Tyne, United Kingdom: Department of Computer and Information Sciences, Northumbria University, 2019. http://dx.doi.org/10.21785/icad2019.056.

Full text
Abstract:
Sonification Workstation is an open-source application for general sonification tasks, designed with ease-of-use and wide applicability in mind. Intended to foster adoption of sonification across disciplines, and increase experimentation with sonification by non-specialists, Sonification Workstation distills tasks useful in sonification and encapsulates them in a single software environment. The novel interface combines familiar modes of navigation from Digital Audio Workstations, with a highly simplified patcher interface for creating the sonification scheme. Further, the software associates methods of sonification with the data they sonify, in session files, which will make sharing and reproducing sonifications easier. It is posited that facilitating experimentation by non-specialists will increase the potential growth of sonification into fresh territory, encourage discussion of sonification techniques and uses, and create a larger pool of ideas to draw from in advancing the field of sonification. Source code is available at https://github.com/Cherdyakov/sonification-workstation. Binaries for macOS and Windows, as well as sample content, are available at http://sonificationworkstation.org.
APA, Harvard, Vancouver, ISO, and other styles
3

Peres, S. Camille, and Daniel Verona. "A Task-Analysis-Based Evaluation of Sonification Designs for Two sEMG Tasks." In The 22nd International Conference on Auditory Display. Arlington, Virginia: The International Community for Auditory Display, 2016. http://dx.doi.org/10.21785/icad2016.038.

Full text
Abstract:
This paper presents a brief description of surface electromyography (sEMG), what it can be used for, as well as some of the problems associated with visual displays of sEMG data. Sonifications of sEMG data have shown potential for certain applications in data monitoring and movement training; however, there are still challenges related to the design of these sonifications that need to be addressed. Our previous research has shown that different sonification designs resulted in better listener performance for different sEMG evaluation tasks (e.g. identifying muscle activation time vs. muscle exertion level). Based on this finding, we speculated that sonifications may benefit from being designed to be task-specific, and that integrating a task analysis into the sonification design process may help sonification designers identify intuitive and meaningful sonification designs. This paper presents a brief introduction to what a task analysis is, provides an example of how a task analysis can be used to inform sonification design, and outlines future research into a task-analysis-based approach to sonification design.
APA, Harvard, Vancouver, ISO, and other styles
4

Reinsch, Dennis, and Thomas Hermann. "Sonecules: A Python Sonification Architecture." In ICAD 2023: The 28th International Conference on Auditory Display. International Community for Auditory Display, 2023. http://dx.doi.org/10.21785/icad2023.5580.

Full text
Abstract:
This paper introduces sonecules, a flexible, extensible, end-user friendly and open-source Python sonification toolkit to bring 'sonification to the masses'. The package comes with a basic set of what we define as sonecules, which are sonification designs tailored for a given class of data, a selected internal logic for sonification, and offering a set of functions to interact with data and sonification controls. This is a design-once-use-many approach, as each sonecule can be reused on similarly structured data. The primary goal of sonecules is to enable novice users to rapidly get their data audible by scaffolding their first steps into auditory display. All sonecules offer a description for the user as well as controls which can be adjusted easily and interactively to the selected data. Users are supported to get started as fast as possible using different sonification designs, and they can even mix and match sonecules to create more complex aggregated sonecules. Advanced users are enabled to extend/modify any sonification design and thereby create new sonecules. The sonecules Python package is provided as open-source software, which enables others to contribute their own sonification designs as a sonecule; thus it seeds a growing library of well-documented and easy-to-reuse sonification designs. Sonecules is implemented in Python using mesonic as the sonification framework, which provides the path to rendering-platform-agnostic sonifications.
APA, Harvard, Vancouver, ISO, and other styles
5

Quinton, Michael, Iain McGregor, and David Benyon. "Investigating Effective Methods of Designing Sonifications." In The 24th International Conference on Auditory Display. Arlington, Virginia: The International Community for Auditory Display, 2018. http://dx.doi.org/10.21785/icad2018.011.

Full text
Abstract:
This study aims to provide an insight into effective sonification design. There are currently no standardized design methods, allowing a creative development approach. Sonification has been implemented in many different applications, from scientific data representation to novel styles of musical expression. This means that methods of practice can vary greatly. The indistinct line between art and science might be the reason why sonification is still sometimes viewed by scientists with a degree of scepticism. Some well-established practitioners argue that it is poor design that renders sonifications meaningless, in turn having an adverse effect on acceptance. To gain a deeper understanding of sonification research and development, 11 practitioners were interviewed. They were asked about methods of sonification design and their insights. The findings present information about sonification research and development, and a variety of views regarding sonification design practice.
APA, Harvard, Vancouver, ISO, and other styles
6

Nadri, Chihab, Chairunisa Anaya, Shan Yuan, and Myounghoon Jeon. "Preliminary Guidelines on the Sonification of Visual Artworks: Linking Music, Sonification & Visual Arts." In ICAD 2019: The 25th International Conference on Auditory Display. Newcastle upon Tyne, United Kingdom: Department of Computer and Information Sciences, Northumbria University, 2019. http://dx.doi.org/10.21785/icad2019.074.

Full text
Abstract:
Sonification and data processing algorithms have advanced over the years to reach practical applications in our everyday life. Similarly, image processing techniques have improved over time. While a number of image sonification methods have already been developed, few have delved into potential synergies through the combined use of multiple data and image processing techniques. Additionally, little has been done on the use of image sonification for artworks, as most research has been focused on the transcription of visual data for people with visual impairments. Our goal is to sonify paintings reflecting their art style and genre to improve the experience of both sighted and visually impaired individuals. To this end, we have designed initial sonifications for paintings of abstractionism and realism, and conducted interviews with visual and auditory experts to improve our mappings. We believe the recommendations and design directions we have received will help develop a multidimensional sonification algorithm that can better transcribe visual art into appropriate music.
APA, Harvard, Vancouver, ISO, and other styles
7

Martens, William L., Philip Poronnik, and Darren Saunders. "Hypothesis-Driven Sonification of Proteomic Data Distributions Indicating Neurodegredation in Amyotrophic Lateral Sclerosis." In The 22nd International Conference on Auditory Display. Arlington, Virginia: The International Community for Auditory Display, 2016. http://dx.doi.org/10.21785/icad2016.024.

Full text
Abstract:
Three alternative sonifications of proteomic data distributions were compared as a means to indicate the neuropathology associated with Amyotrophic Lateral Sclerosis (ALS) via auditory display (through exploration of the differentiation of induced pluripotent stem cell derived neurons). Purely visual displays of proteomic data often result in "visual overload" such that detailed or subtle data important to describe ALS neurodegradation may be glossed over, and so three competing approaches to the sonification of proteomic data were designed to capitalize upon human auditory capacities that complement the visual capacities engaged by more conventional graphic representations. The auditory displays resulting from hypothesis-driven design of three alternative sonifications were evaluated by naïve listeners, who were instructed to listen for differences between the sonifications produced from proteomic data associated with three different types of cells. One of the sonifications was based upon the hypothesis that auditory sensitivity to regularities and irregularities in spatio-temporal patterns in the data could be heard through spatial distribution of sonification components. The design of a second sonification was based upon the hypothesis that variation in timbral components might create a distinguishable sound for each of three types of cells. A third sonification was based upon the hypothesis that redundant variation in both spatial and timbral components would be even more powerful as a means for identifying spatio-temporal patterns in the dynamic, multidimensional data generated in current proteomic studies of ALS.
APA, Harvard, Vancouver, ISO, and other styles
8

Trayford, James W., and Chris M. Harrison. "Introducing strauss: A Flexible Sonification Python Package." In ICAD 2023: The 28th International Conference on Auditory Display. International Community for Auditory Display, 2023. http://dx.doi.org/10.21785/icad2023.1978.

Full text
Abstract:
We introduce strauss (Sonification Tools and Resources for Analysis Using Sound Synthesis), a modular, self-contained and flexible Python sonification package, operating in a free and open source (FOSS) capacity. strauss is intended to be a flexible tool suitable for both scientific data exploration and analysis as well as for producing sonifications that are suitable for public outreach and artistic contexts. We explain the motivations behind strauss, and how these lead to our design choices. We also describe the basic code structure and concepts. We then present output sonification examples, specifically: (1) multiple representations of univariate data (i.e., single data series) for data exploration; (2) how multi-variate data can be mapped onto sound to help interpret how those data variables are related; and (3) a full spatial audio example for immersive Virtual Reality. We summarise, alluding to some of the future functionality as strauss development accelerates.
APA, Harvard, Vancouver, ISO, and other styles
9

Atherton, Jack, and Paulo Blikstein. "Sonification Blocks." In IDC '17: Interaction Design and Children. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3078072.3091992.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Hasan, Shaid, Iqbal Kabir, and Pratic A. Muntakim. "ECG sonification." In the 6th International Conference. New York, New York, USA: ACM Press, 2019. http://dx.doi.org/10.1145/3362966.3362968.

Full text
APA, Harvard, Vancouver, ISO, and other styles
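Toolkits described above, such as Sonification Workstation, sonecules, and strauss, all offer representations that map a univariate data series onto sound parameters. One common discrete representation quantizes each datum to a musical pitch; the sketch below shows this as plain Python, with function names and ranges that are hypothetical rather than the API of any listed toolkit.

```python
def to_midi_notes(data, note_lo=48, note_hi=84):
    """Quantize each datum to a MIDI note number in [note_lo, note_hi]."""
    lo, hi = min(data), max(data)
    span = (hi - lo) or 1  # guard against a constant series
    return [round(note_lo + (v - lo) * (note_hi - note_lo) / span)
            for v in data]

def midi_to_freq(note):
    """Equal-tempered frequency of a MIDI note (A4 = note 69 = 440 Hz)."""
    return 440.0 * 2 ** ((note - 69) / 12)

# The series' shape is preserved as a contour of note numbers,
# ready to be handed to any synthesizer or MIDI backend.
notes = to_midi_notes([0.1, 0.5, 0.3, 1.0])
```

Quantizing to scale degrees rather than raw frequencies trades resolution for musicality, which is one of the recurring design tensions in the papers above.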

Reports on the topic "Sonification"

1

Reed, Frederick. Data Sonification Project. Fort Belvoir, VA: Defense Technical Information Center, June 2001. http://dx.doi.org/10.21236/ada394982.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Baca, Julia A. Application of Data Sonification for Enhanced Interpretation of Numerical Model Results. Fort Belvoir, VA: Defense Technical Information Center, October 1995. http://dx.doi.org/10.21236/ada302521.

Full text
APA, Harvard, Vancouver, ISO, and other styles