Journal articles on the topic 'Sonification'

Consult the top 50 journal articles for your research on the topic 'Sonification.'


1

Paterson, Estrella, Penelope Sanderson, Neil Paterson, David Liu, and Robert Loeb. "The effect of a secondary task on identification accuracy of oxygen saturation ranges using an enhanced pulse oximetry sonification." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 60, no. 1 (September 2016): 628–32. http://dx.doi.org/10.1177/1541931213601143.

Abstract:
In the operating theatre, anesthesiologists monitor an anesthetized patient’s oxygen saturation (SpO2) with a visual display but also with an auditory tone, or sonification. However, if the anesthesiologist must divide their attention across tasks, they may be less effective at recognising their patient’s SpO2 level. Previous research indicates that a sonification enhanced with additional sound dimensions of tremolo and brightness more effectively supports participants’ identification of SpO2 ranges than a conventional sonification does. This laboratory study explored the effect of a secondary task on participants’ ability to identify SpO2 range when using a conventional sonification (LogLinear sonification) versus an enhanced sonification (Stepped Effects sonification). Nineteen non-clinician participants who used the Stepped Effects sonification were significantly more effective at identifying SpO2 range (Md = 100%) than were 18 participants using the LogLinear sonification (Md = 80%). Range identification performance of participants using the Stepped Effects sonification tended to be less disrupted by a concurrent arithmetic task (drop from Md = 100% to 95%) than it was for participants using the LogLinear sonification (drop from Md = 80% to 73%). However, the disruption effect in each case was small, and the difference in disruption across sonifications was not statistically significant. Future research will test the sonifications under more intense cognitive load and in the presence of ambient noise.
2

Zestic, Jelena, Birgit Brecknell, Helen Liley, and Penelope Sanderson. "A Novel Auditory Display for Neonatal Resuscitation: Laboratory Studies Simulating Pulse Oximetry in the First 10 Minutes After Birth." Human Factors: The Journal of the Human Factors and Ergonomics Society 61, no. 1 (September 27, 2018): 119–38. http://dx.doi.org/10.1177/0018720818793769.

Abstract:
Objective: We tested whether enhanced sonifications would improve participants’ ability to judge the oxygen saturation levels (SpO2) of simulated neonates in the first 10 min after birth. Background: During the resuscitation of a newborn infant, clinicians must keep the neonate’s SpO2 levels within the target range; however, the boundaries for the target range change each minute during the first 10 min after birth. Resuscitation places significant demand on the clinician’s visual attention, and the pulse oximeter’s sonification could provide eyes-free monitoring. However, clinicians have difficulty judging SpO2 levels using the current sonification. Method: In two experiments, nonclinicians’ ability to detect SpO2 range and direction—while performing continuous arithmetic problems—was tested with enhanced versus conventional sonifications. In Experiment 1, tremolo signaled when SpO2 had deviated below or above the target range. In Experiment 2, tremolo plus brightness signaled when SpO2 was above target range, and tremolo alone when SpO2 was below target range. Results: The tremolo sonification improved range identification accuracy over the conventional display (81% vs. 63%, p < .001). The tremolo plus brightness sonification further improved range identification accuracy over the conventional display (92% vs. 62%, p < .001). In both experiments, there was no difference across conditions in arithmetic task accuracy (p > .05). Conclusion: Using the enhanced sonifications, participants identified SpO2 range more accurately despite a continuous distractor task. Application: An enhanced pulse oximetry sonification could help clinicians multitask more effectively during neonatal resuscitations.
3

Ballora, Mark. "Sonification, Science and Popular Music: In search of the ‘wow’." Organised Sound 19, no. 1 (February 26, 2014): 30–40. http://dx.doi.org/10.1017/s1355771813000381.

Abstract:
Sonification is described as an under-utilised dimension of the ‘wow!’ factor in science engagement multi-media. It is suggested that sonification's potential value, like much of the scientific visualisation content, probably lies less in hard facts and more in how it may serve as a stimulant for curiosity. Sound is described as a multi-dimensional phenomenon, and a number of approaches to creating sonifications are reviewed. Design strategies are described for five types of phenomena that were sonified for works created by cosmologist George Smoot III and percussionist/ethnomusicologist Mickey Hart, most particularly for their film Rhythms of the Universe (Hart and Smoot 2013).
4

Ziemer, Tim. "Sound Terminology in Sonification." Journal of the Audio Engineering Society 72, no. 5 (May 3, 2024): 274–89. http://dx.doi.org/10.17743/jaes.2022.0133.

Abstract:
Sonification research is intrinsically interdisciplinary. Consequently, a proper documentation of and interdisciplinary discourse about a sonification is often hindered by terminology discrepancies between involved disciplines, i.e., the lack of a common sound terminology in sonification research. Without a common ground, a researcher from one discipline may have trouble understanding the implementation and imagining the resulting sound perception of a sonification, if the sonification is described by a researcher from another discipline. To find a common ground, the author consulted literature on interdisciplinary research and discourse, identified problems that occur in sonification, and applied the recommended solutions. As a result, the author recommends considering three aspects of sonification individually, namely 1) Sound Design Concept, 2) Objective, and 3) Evaluation, clarifying which discipline is involved in which aspect and sticking to this discipline’s terminology. As two requirements of sonifications are that they are a) reproducible and b) interpretable, the author recommends documenting and discussing every sonification design once using audio engineering terminology and once using psychoacoustic terminology. The appendixes provide comprehensive lists of sound terms from both disciplines, together with relevant literature and a clarification of often misunderstood and misused terms.
5

Nees, Michael A. "Auditory Graphs Are Not the “Killer App” of Sonification, But They Work." Ergonomics in Design: The Quarterly of Human Factors Applications 26, no. 4 (August 15, 2018): 25–28. http://dx.doi.org/10.1177/1064804618773563.

Abstract:
The search for the elusive “killer app” of sonification has been a recurring theme in sonification research. In this comment, I argue that the killer-app criterion of success stems from interdisciplinary tensions about how to evaluate sonifications. Using auditory graphs as an example, I argue that the auditory display community has produced successful examples of sonic information design that accomplish the human factors goal of improving human interactions with systems. Still, barriers to using sonifications in interfaces remain, and reducing those barriers could result in more widespread use of audio in systems.
6

Groppe, Sven, Rico Klinckenberg, and Benjamin Warnke. "Sound of databases." Proceedings of the VLDB Endowment 14, no. 12 (July 2021): 2695–98. http://dx.doi.org/10.14778/3476311.3476322.

Abstract:
Sonifications map data to auditory dimensions and offer a new audible experience to their listeners. We propose a sonification of query processing paired with a corresponding visualization, both integrated in a web application. In this demonstration we show that the sonification of different types of relational operators generates different sound patterns, which listeners can recognize and identify, increasing their understanding of the operators' functionality and supporting recall of requirements such as merge joins working on sorted input. Furthermore, the sonification approach opens up new ways of analyzing query processing.
7

Jones, Kim-Marie N., Mitchell J. Allen, and Kashlin McCutcheon. "Development of a data sonification toolkit and case study sonifying astrophysical phenomena for visually impaired individuals." Journal of the Acoustical Society of America 154, no. 4_supplement (October 1, 2023): A209. http://dx.doi.org/10.1121/10.0023301.

Abstract:
Data is predominantly conveyed and analysed in a visual manner. Data sonification provides an alternative approach to data analysis, has a broader application (e.g., peripheral monitoring) and offers accessibility to an alternative cohort (e.g., the visually impaired). While there are countless data visualization tools available, producing data sonifications typically requires in-depth knowledge of audio software and/or computer programming. Research was undertaken into existing data sonification tools, and subsequently a Data Sonification Toolkit was developed using Ableton Live and Max for Live. The Toolkit was developed in particular as an alternative means of data analysis for use by multiple disciplines across a built environment firm. The key aims for the toolkit were that (1) it should be user-friendly and accessible to people without an in-depth knowledge of either Ableton or Max, (2) the sonifications produced should be accurate and true representations of input data sets, and (3) the Toolkit should have the capability and flexibility to be expanded and customized by those with the expertise to do so. The Toolkit was used, in collaboration with astrophysicist Chris Harrison, to develop sonifications of astronomical phenomena for the visually impaired. These sonifications were presented at the British Science Festival in 2019.
8

Cantrell, Stanley J., R. Michael Winters, Prakriti Kaini, and Bruce N. Walker. "Sonification of Emotion in Social Media: Affect and Accessibility in Facebook Reactions." Proceedings of the ACM on Human-Computer Interaction 6, CSCW1 (March 30, 2022): 1–26. http://dx.doi.org/10.1145/3512966.

Abstract:
Facebook Reactions are a collection of animated icons that enable users to share and express their emotions when interacting with Facebook content. The current design of Facebook Reactions utilizes visual stimuli (animated graphics and text) to convey affective information, which presents usability and accessibility barriers for visually-impaired Facebook users. In this paper, we investigate the use of sonification as a universally-accessible modality to aid in the conveyance of affect for blind and sighted social media users. We discuss the design and evaluation of 48 sonifications, leveraging Facebook Reactions as a conceptual framework. We conducted an online sound-matching study with 75 participants (11 blind, 64 sighted) to evaluate the performance of these sonifications. We found that sonification is an effective tool for conveying emotion for blind and sighted participants, and we highlight sonification design strategies that contribute to improved efficacy. Finally, we contextualize these findings and discuss the implications of this research with respect to HCI and the accessibility of online communities and platforms.
9

Barrass, Stephen, Mitchell Whitelaw, and Guillaume Potard. "Listening to the Mind Listening." Media International Australia 118, no. 1 (February 2006): 60–67. http://dx.doi.org/10.1177/1329878x0611800109.

Abstract:
The Listening to the Mind Listening concert was a practice-led research project to explore the idea that we might hear information patterns in the sonified recordings of brain activity, and to investigate the aesthetics of sonifications of the same data set by different composers. This world-first concert of data sonifications was staged at the Sydney Opera House Studio on the evening of 6 July 2004 to a capacity audience of more than 350 neuroscientists, composers, sonification researchers, new media artists and a general public curious to hear what the human brain could sound like. The concert generated 30 sonifications of the same data set, explicit descriptions of the techniques each composer used to map the data into sound, and 90 reviews of these sonifications. This paper presents the motivations for the project, overviews related work, describes the sonification criteria and the review process, and presents and discusses outcomes from the concert.
10

Grond, Florian, and Thomas Hermann. "Interactive Sonification for Data Exploration: How listening modes and display purposes define design guidelines." Organised Sound 19, no. 1 (February 26, 2014): 41–51. http://dx.doi.org/10.1017/s1355771813000393.

Abstract:
The desire to make data accessible through the sense of listening has led to ongoing research in the fields of sonification and auditory display since the early 1990s. Coming from the disciplines of computer sciences and human computer interface (HCI), the conceptualisation of sonification has been mostly driven by application areas and methods. On the other hand, the sonic arts, which have always participated in the auditory display community, have a genuine focus on sound. Despite these close interdisciplinary relationships between communities of sound practitioners, a rich and sound- or listening-centred concept of sonification is still missing for design guidelines. Complementary to the useful organisation by fields of application, a proper conceptual framework for sound needs to be abstracted from applications and also to some degree from tasks, as both are not directly related to sound. As an initial approach to recasting the thinking about sonification, we propose a conceptualisation of sonifications along two poles in which sound serves either a normative or a descriptive purpose. According to these two poles, design guidelines can be developed proper to display purposes and listening modes.
11

Kirke, Alexis, Samuel Freeman, and Eduardo Reck Miranda. "Wireless Interactive Sonification of Large Water Waves to Demonstrate the Facilities of a Large-Scale Research Wave Tank." Computer Music Journal 39, no. 3 (September 2015): 59–70. http://dx.doi.org/10.1162/comj_a_00315.

Abstract:
Interactive sonification can provide a platform for demonstration and education as well as for monitoring and investigation. We present a system designed to demonstrate the facilities of the UK's most advanced large-scale research wave tank. The interactive sonification of water waves in the “ocean basin” wave tank at Plymouth University consisted of a number of elements: generation of ocean waves, acquisition and sonification of ocean-wave measurement data, and gesture-controlled pitch and amplitude of sonifications. The generated water waves were linked in real time to sonic features via depth monitors and motion tracking of a floating buoy. Types of water-wave patterns, varying in shape and size, were selected and triggered using wireless motion detectors attached to the demonstrator's arms. The system was implemented on a network of five computers utilizing Max/MSP alongside specialist marine research software, and was demonstrated live in a public performance for the formal opening of the Marine Institute building.
12

Hermann, Thomas. "Principles for Shape Sonification." Empirical Musicology Review 8, no. 2 (October 24, 2013): 88. http://dx.doi.org/10.18061/emr.v8i2.3928.

Abstract:
This commentary starts with a critical reflection on Jensenius and Godøy's sonomotiongrams as a sonification technique to represent movement shapes. Based on this we propose alternative mappings that require less information reduction. Furthermore, design criteria such as invariance, convergence, and stability are presented and applied to sonomotiongrams. Finally, we formulate necessary conditions for sonifications of movement shapes to support the perception and categorization of shapes, and we propose an experimental procedure to assess and compare movement shapes from auditory representations.
13

Lindborg, PerMagnus. "Interactive Sonification of Weather Data for The Locust Wrath, a Multimedia Dance Performance." Leonardo 51, no. 5 (October 2018): 466–74. http://dx.doi.org/10.1162/leon_a_01339.

Abstract:
To work flexibly with the sound design for The Locust Wrath, a multimedia dance performance on the topic of climate change, the author developed software for interactive sonification of climate data. An open-ended approach to parameter mapping allowed tweaking and improvisation during rehearsals, resulting in a large range of musical expression. The sonifications represented weather systems pushing through Southeast Asia in complex patterns. The climate was rendered as a piece of electroacoustic music, whose compositional form—gesture, timbre, intensity, harmony, spatiality—was determined by the data. The article discusses aspects of aesthetic sonification, reports the process of developing the present work and contextualizes the design decisions within theories of cross-modal perception and listening modes.
14

Barrass, Stephen, Mitchell Whitelaw, and Freya Bailes. "Listening to the Mind Listening: An Analysis of Sonification Reviews, Designs and Correspondences." Leonardo Music Journal 16 (December 2006): 13–19. http://dx.doi.org/10.1162/lmj.2006.16.13.

Abstract:
Listening to the Mind Listening (LML) explored whether sonifications can be more than just “noise” in terms of perceived information and musical experience. The project generated an unprecedented body of 27 multichannel sonifications of the same dataset by 38 composers. The design of each sonification was explicitly documented, and there are 88 analytical reviews of the works. The public concert presenting 10 of these sonifications at the Sydney Opera House Studio drew a capacity audience. This paper presents an analysis of the reviews, the designs and the correspondences between timelines of these works.
15

García-Benito, R., and E. Pérez-Montero. "PAINTING GRAPHS WITH SOUNDS: COSMONIC SONIFICATION PROJECT." Revista Mexicana de Astronomía y Astrofísica Serie de Conferencias 54 (August 1, 2022): 28–33. http://dx.doi.org/10.22201/ia.14052059p.2022.54.06.

Abstract:
CosMonic (COSmos harMONIC) is a sonification project with a triple purpose: analysis (by means of sounds) of any type of data, source of inspiration for artistic creations, and pedagogical and dissemination purposes. In this contribution we present the work recently produced by CosMonic in the latter field, creating specific cases for the inclusive astronomy dissemination project Astroaccesible for blind and partially sighted people, but also aimed at a general public that wants to understand astrophysics in an alternative format. For this project, CosMonic seeks to create simple astronomical cases in their acoustic dimension in order to be easily understood. CosMonic's philosophy for these sonifications can be summarized in a simple metaphor: painting graphs with sounds. Sonification is a powerful tool that helps to enhance visual information. Therefore, CosMonic accompanies its audios with animations, using complementary methods to reach a general public. In addition to providing some cases created by CosMonic for inclusive astronomy, we also share our experience with different audiences, as well as suggest some ideas for a better use of sonification in (global) inclusive outreach.
16

Sturm, Bob L. "Pulse of an Ocean: Sonification of Ocean Buoy Data." Leonardo 38, no. 2 (April 2005): 143–49. http://dx.doi.org/10.1162/0024094053722453.

Abstract:
The author presents his work in sonifying ocean buoy data for scientific, pedagogical and compositional purposes. Mapping the spectral buoy data to audible frequencies creates interesting and illuminating sonifications of ocean wave dynamics. Several phenomena can be heard, including both those visible and those invisible in graphical representations of the data. The author has worked extensively with this data to compose music and to produce Music from the Ocean, a multi-media CD-ROM demonstrating the data, the phenomena and the sonification. After a brief introduction to physical oceanography, many examples are presented and a composition and installation created from the sonifications are discussed.
17

Winters, R. Michael, and Marcelo M. Wanderley. "Sonification of Emotion: Strategies and results from the intersection with music." Organised Sound 19, no. 1 (February 26, 2014): 60–69. http://dx.doi.org/10.1017/s1355771813000411.

Abstract:
Emotion is a word not often heard in sonification, though advances in affective computing make the data type imminent. At times the relationship between emotion and sonification has been contentious due to an implied overlap with music. This paper clarifies the relationship, demonstrating how it can be mutually beneficial. After identifying contexts favourable to auditory display of emotion, and the utility of its development to research in musical emotion, the current state of the field is addressed, reiterating the necessary conditions for sound to qualify as a sonification of emotion. With this framework, strategies for display are presented that use acoustic and structural cues designed to target select auditory-cognitive mechanisms of musical emotion. Two sonifications are then described using these strategies to convey arousal and valence though differing in design methodology: one designed ecologically, the other computationally. Each model is sampled at 15-second intervals at 49 evenly distributed points on the AV space, and evaluated using a publicly available tool for computational music emotion recognition. The computational design performed 65 times better in this test, but the ecological design is argued to be more useful for emotional communication. Conscious of these limitations, computational design and evaluation is supported for future development.
18

Metatla, Oussama, Nick Bryan-Kinns, Tony Stockman, and Fiore Martin. "Sonification of reference markers for auditory graphs: effects on non-visual point estimation tasks." PeerJ Computer Science 2 (April 6, 2016): e51. http://dx.doi.org/10.7717/peerj-cs.51.

Abstract:
Research has suggested that adding contextual information such as reference markers to data sonification can improve interaction with auditory graphs. This paper presents results of an experiment that contributes to quantifying and analysing the extent of such benefits for an integral part of interacting with graphed data: point estimation tasks. We examine three pitch-based sonification mappings: pitch-only, one-reference, and multiple-references, which we designed to provide information about distance from an origin. We assess the effects of these sonifications on users’ performances when completing point estimation tasks in a between-subject experimental design against visual and speech control conditions. Results showed that the addition of reference tones increases users’ accuracy with a trade-off for task completion times, and that the multiple-references mapping is particularly effective when dealing with points that are positioned at the midrange of a given axis.
19

Vinken, Pia M., Daniela Kröger, Ursula Fehse, Gerd Schmitz, Heike Brock, and Alfred O. Effenberg. "Auditory Coding of Human Movement Kinematics." Multisensory Research 26, no. 6 (2013): 533–52. http://dx.doi.org/10.1163/22134808-00002435.

Abstract:
Although visual perception is dominant over motor perception, control and learning, auditory information can enhance and modulate perceptual as well as motor processes in a multifaceted manner. During recent decades, new methods of auditory augmentation have been developed, with movement sonification as one of the most recent approaches, expanding auditory movement information also to usually mute phases of movement. Despite general evidence on the effectiveness of movement sonification in different fields of applied research, there is almost no empirical evidence on how sonification of gross motor human movement should be configured to achieve information-rich sound sequences. This lack of evidence concerns (a) the selection of suitable movement features, (b) effective kinetic–acoustical mapping patterns, and (c) the number of regarded dimensions of sonification. In this study we explore the informational content of artificial acoustical kinematics in terms of a kinematic movement sonification using an intermodal discrimination paradigm. In a repeated-measures design we analysed discrimination rates of six everyday upper-limb actions to evaluate the effectiveness of seven different kinds of kinematic–acoustical mappings as well as short-term learning effects. The kinematics of the upper-limb actions were calculated based on inertial motion sensor data and transformed into seven different sonifications. Sound sequences were randomly presented to participants, and discrimination rates as well as confidence of choice were analysed. Data indicate an instantaneous comprehensibility of the artificial movement acoustics as well as short-term learning effects. No differences between different dimensional encodings became evident, thus indicating a high efficiency of intermodal pattern discrimination for the acoustically coded velocity distribution of the actions.
Taken together, movement information related to continuous kinematic parameters can be transformed into the auditory domain. Additionally, pattern-based action discrimination is obviously not restricted to the visual modality. Artificial acoustical kinematics might be used to supplement and/or substitute visual motion perception in sports and motor rehabilitation.
20

Arcand, K. "CHANDRA'S ACCESSIBLE UNIVERSE: FROM SIGHT TO SOUND & TOUCH." Revista Mexicana de Astronomía y Astrofísica Serie de Conferencias 54 (August 1, 2022): 53–56. http://dx.doi.org/10.22201/ia.14052059p.2022.54.11.

Abstract:
The nature and complexity of various kinds of astronomical data visualizations can be challenging to communicate with people who are blind or low vision. In consultation with members from blind and low vision communities, we present an overview of 3D print, sonification and visual description projects at NASA's Chandra X-ray Observatory Communications group as well as NASA's Universe of Learning, and how the 3D prints, sonifications, and descriptions are currently being used for our mission and programs. We describe how we can integrate verbal explanations of the scientific phenomena along with descriptions of what the visual viewer was seeing in the presented imagery, sonification or 3D model to create a more accessible, cohesive package. Our proposition is that this process of creating content for audiences who are blind or low-vision can and should be applied to other types of astronomy content and even a wider range of science communication content.
21

Barrass, Stephen. "An Annotated Portfolio of Research through Design in Acoustic Sonification." Leonardo 49, no. 1 (February 2016): 72–73. http://dx.doi.org/10.1162/leon_a_01116.

Abstract:
The Hypertension Singing Bowl was shaped from a year of the author’s blood pressure readings, and 3D printed in stainless steel so it would ring. This digitally fabricated singing bowl is an “ultimate particular” that establishes the design space of Acoustic Sonifications. This paper presents early experiments with Acoustic Sonification and analyses them using an Annotated Portfolio to identify interaction, perception, aesthetics and contemplation as important axes of the domain. This illustrates how Annotated Portfolios could also be used to analyse New Interfaces for Musical Expression.
22

Garcia-Ruiz, Miguel, Pedro Cesar Santana-Mancilla, Laura Sanely Gaytan-Lugo, and Adriana Iniguez-Carrillo. "Participatory Design of Sonification Development for Learning about Molecular Structures in Virtual Reality." Multimodal Technologies and Interaction 6, no. 10 (October 12, 2022): 89. http://dx.doi.org/10.3390/mti6100089.

Abstract:
Background: Chemistry and biology students often have difficulty understanding molecular structures. Sonification (the rendition of data into non-speech sounds that convey information) can be used to support molecular understanding by complementing scientific visualization. A proper sonification design is important for its effective educational use. This paper describes a participatory design (PD) approach to designing and developing the sonification of a molecular structure model to be used in an educational setting. Methods: Biology, music, and computer science students and specialists designed a sonification of a model of an insulin molecule, following Spinuzzi’s PD methodology and involving evolutionary prototyping. The sonification was developed using open-source software tools used in digital music composition. Results and Conclusions: We tested our sonification played on a virtual reality headset with 15 computer science students. Questionnaire and observational results showed that multidisciplinary PD was useful and effective for developing an educational scientific sonification. PD allowed for speeding up and improving our sonification design and development. Making a usable (effective, efficient, and pleasant to use) sonification of molecular information requires the multidisciplinary participation of people with music, computer science, and molecular biology backgrounds.
23

Barrass, Stephen, and Gregory Kramer. "Using sonification." Multimedia Systems 7, no. 1 (January 1, 1999): 23–31. http://dx.doi.org/10.1007/s005300050108.

24

Bresin, Roberto, Thomas Hermann, and Andy Hunt. "Interactive sonification." Journal on Multimodal User Interfaces 5, no. 3-4 (April 20, 2012): 85–86. http://dx.doi.org/10.1007/s12193-012-0095-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Ludovico, Luca A., and Giorgio Presti. "The sonification space: A reference system for sonification tasks." International Journal of Human-Computer Studies 85 (January 2016): 72–77. http://dx.doi.org/10.1016/j.ijhcs.2015.08.008.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

García Riber, Adrián, and Francisco Serradilla. "Toward an Auditory Virtual Observatory." Journal of the Audio Engineering Society 72, no. 5 (May 3, 2024): 341–51. http://dx.doi.org/10.17743/jaes.2022.0146.

Full text
Abstract:
The large ecosystem of observations generated by major space telescope missions can be remotely analyzed using interoperable virtual observatory technologies. In this context of astronomical big data analysis, sonification has the potential of adding a complementary dimension to visualization, enhancing the accessibility of the archives, and offering an alternative strategy to be used when overlapping issues are found in purely graphical representations. This article presents a collection of sonification and musification prototypes that explore the case studies of the MILES and STELIB stellar libraries from the Spanish Virtual Observatory and the Kepler and TESS light curve databases from the Space Telescope Science Institute archive. Using automation and deep learning algorithms, it offers a “palette” of resources that could be used in future developments oriented toward an auditory virtual observatory proposal. The work includes a user study with quantitative and qualitative feedback from specialized and nonspecialized users analyzing the use of sine waves and musical instrument mappings for revealing overlapped lines in galaxy transmission spectra, confirming the need for training and prior knowledge for the correct interpretation of accurate sonifications, and providing potential guidelines to inspire future designs of widely accepted auditory representations for outreach purposes.
APA, Harvard, Vancouver, ISO, and other styles
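The sine-wave mapping strategy discussed in the abstract above can be illustrated with a short sketch. Everything below — the function name, the 220–880 Hz band, the linear wavelength-to-frequency mapping — is an illustrative assumption, not the prototypes' actual design: each spectral sample becomes one sine partial whose frequency encodes wavelength position and whose amplitude encodes normalised flux.

```python
import numpy as np

def sonify_spectrum(wavelengths, fluxes, duration=2.0, sr=44100,
                    f_lo=220.0, f_hi=880.0):
    """Render a spectrum as a sum of sine partials.

    Each spectral sample is mapped to one partial: its wavelength sets
    the partial's frequency (linearly between f_lo and f_hi) and its
    normalised flux sets the partial's amplitude.
    """
    t = np.linspace(0.0, duration, int(sr * duration), endpoint=False)
    w = np.asarray(wavelengths, dtype=float)
    a = np.asarray(fluxes, dtype=float)
    a = a / (a.max() or 1.0)  # normalise amplitudes to [0, 1]
    # Map the wavelength range onto the chosen audible frequency band.
    pos = (w - w.min()) / (w.max() - w.min())
    freqs = f_lo + pos * (f_hi - f_lo)
    signal = sum(amp * np.sin(2 * np.pi * f * t)
                 for amp, f in zip(a, freqs))
    return signal / len(freqs)  # keep the mix within [-1, 1]
```

Overlapping spectral lines that merge visually remain separable here as distinct partials, which is the motivation the abstract gives for the auditory display.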
27

Pratama, Irza Haicha, Angelika Sio Siagian, Faskanita Maristella Nadapdap, Syahdina Saufa Yardha Chaniago, and Laura Novi Silalahi. "Effectiveness of Sonification of Ketumbar Tip (Coriandrum Sativum L.) for Diabetes Melites Complications on the Gynels with Manly Wistar Rats (Rattus Norvegicus) as Aloksan Induced Coal Animals." International Journal of Science and Society 5, no. 5 (December 5, 2023): 927–38. http://dx.doi.org/10.54783/ijsoc.v5i5.958.

Full text
Abstract:
The most common microvascular complication is diabetic nephropathy. Coriander is traditionally used as a stimulant, carminative, anticonvulsant, diuretic and antirheumatic, antiemetic and has potential as an antioxidant. Thus, the authors were interested in exploring the effectiveness of coriander seed sonification (Coriandrum sativum L.) on complications of diabetes mellitus in alloxan-induced male rats as experimental animals, especially on kidney micro-vascularization. This study was an experimental study. In this study, sonification of coriander seeds was carried out using a sonicator. All rats in this study were induced by injection of 5% alloxan, then grouped into 5 different groups including: control (Na-CMC), standard (metformin), coriander seed sonification 200 mg/kgBW, 400 mg/kgBW, and 800 mg/kgBW. The measured parameters in this study were GFR and uACR. The results revealed that the coriander seed sonification in this study significantly increased the GFR value and reduced uACR in line with the increase in the administered dose of coriander sonification (p value < 0.05). Hence, it can be concluded that coriander seed sonification has a protective effect against kidney damage due to diabetes.
APA, Harvard, Vancouver, ISO, and other styles
28

Han, Yoon Chung, and Byeong-jun Han. "Skin Pattern Sonification as a New Timbral Expression." Leonardo Music Journal 24 (December 2014): 41–43. http://dx.doi.org/10.1162/lmj_a_00199.

Full text
Abstract:
The authors discuss two sonification projects that transform fingerprint and skin patterns into audio: (1) Digiti Sonus, an interactive installation performing fingerprint sonification and visualization and (2) skin pattern sonification, which converts pore networks into sound. The projects include novel techniques for representing user-intended fingerprint expression and skin pattern selection as audio parameters.
APA, Harvard, Vancouver, ISO, and other styles
29

Bujacz, Michał, and Paweł Strumiłło. "Sonification: Review of Auditory Display Solutions in Electronic Travel Aids for the Blind." Archives of Acoustics 41, no. 3 (September 1, 2016): 401–14. http://dx.doi.org/10.1515/aoa-2016-0040.

Full text
Abstract:
Sonification is defined as the presentation of information by means of non-speech audio. In assistive technologies for the blind, sonification is most often used in electronic travel aids (ETAs) - devices which aid independent mobility through obstacle detection or help with orientation and navigation. The presented review contains an authored classification of the sonification schemes implemented in the most widely known ETAs, covering both commercially available devices and those in various stages of research, according to the input used, the level of signal processing, and the sonification method. Additionally, a sonification approach developed in the Naviton project is presented. The prototype utilizes stereovision scene reconstruction, obstacle and surface segmentation, and spatial HRTF-filtered audio with discrete musical sounds, and was successfully tested in a pilot study with blind volunteers in a controlled environment, allowing them to localize and navigate around obstacles.
APA, Harvard, Vancouver, ISO, and other styles
30

Hu, Weijian, Kaiwei Wang, Kailun Yang, Ruiqi Cheng, Yaozu Ye, Lei Sun, and Zhijie Xu. "A Comparative Study in Real-Time Scene Sonification for Visually Impaired People." Sensors 20, no. 11 (June 5, 2020): 3222. http://dx.doi.org/10.3390/s20113222.

Full text
Abstract:
In recent years, with the development of depth cameras and scene detection algorithms, a wide variety of electronic travel aids for visually impaired people have been proposed. However, it is still challenging to convey scene information to visually impaired people efficiently. In this paper, we propose three different auditory-based interaction methods, i.e., depth image sonification, obstacle sonification, and path sonification, which convey raw depth images, obstacle information, and path information, respectively, to visually impaired people. The three sonification methods are compared comprehensively through a field experiment attended by twelve visually impaired participants. The results show that the sonification of high-level scene information, such as the direction of a pathway, is easier to learn and adapt to, and is more suitable for point-to-point navigation. In contrast, through the sonification of low-level scene information, such as raw depth images, visually impaired people can understand the surrounding environment more comprehensively. Furthermore, no single interaction method is best suited for all participants, and visually impaired individuals need a period of time to find the most suitable interaction method. Our findings highlight the features and differences of the three scene detection algorithms and the corresponding sonification methods. The results provide insights into the design of electronic travel aids, and the conclusions can also be applied in other fields, such as sound feedback in virtual reality applications.
APA, Harvard, Vancouver, ISO, and other styles
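The lowest-level of the three methods, depth-image sonification, can be sketched as a left-to-right column sweep. This is a hedged illustration, not the authors' implementation: the column duration, frequency band, and near-is-high mapping are all assumptions made for the example.

```python
import numpy as np

def sonify_depth_image(depth, sr=22050, col_dur=0.05,
                       f_near=1000.0, f_far=200.0, d_max=5.0):
    """Sweep a depth image left to right, one short tone per column.

    Each column's mean depth sets the tone's pitch: near obstacles map
    to high frequencies and distant ones to low, so a pitch rise during
    the sweep signals something close on that side of the scene.
    """
    depth = np.clip(np.asarray(depth, dtype=float), 0.0, d_max)
    n = int(sr * col_dur)
    t = np.arange(n) / sr
    tones = []
    for col in depth.T:  # left-to-right sweep over image columns
        nearness = 1.0 - col.mean() / d_max        # 1 = close, 0 = far
        f = f_far + nearness * (f_near - f_far)
        env = np.hanning(n)                        # fade in/out, no clicks
        tones.append(env * np.sin(2 * np.pi * f * t))
    return np.concatenate(tones)
```

A sweep like this conveys the raw low-level scene layout; the paper's other two methods would instead emit tones only for detected obstacles or for the computed path direction.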
31

Mysore, Aprameya, Andreas Velten, and Kevin W. Eliceiri. "Sonification of hyperspectral fluorescence microscopy datasets." F1000Research 5 (October 25, 2016): 2572. http://dx.doi.org/10.12688/f1000research.9233.1.

Full text
Abstract:
Recent advances in fluorescence microscopy have yielded an abundance of high-dimensional spectrally rich datasets that cannot always be adequately explored through conventional three-color visualization methods. While computational image processing techniques allow researchers to derive spectral characteristics of their datasets that cannot be visualized directly, there are still limitations in how to best visually display these resulting rich spectral data. Data sonification has the potential to provide a novel way for researchers to intuitively perceive these characteristics auditorily through direct interaction with the raw multi-channel data. The human ear is well tuned to detect subtle differences in sound that could represent discrete changes in fluorescence spectra. We present a proof of concept implementation of a functional data sonification workflow for analysis of fluorescence microscopy data as an FIJI ImageJ plugin and evaluate its utility with various hyperspectral microscopy datasets. Additionally, we provide a framework for prototyping and testing new sonification methods and a mathematical model to point out scenarios where vision-based spectral analysis fails and sonification-based approaches would not. With this first reported practical application of sonification to biological fluorescence microscopy and supporting computational tools for further exploration, we discuss the current advantages and disadvantages of sonification over conventional spectral visualization approaches. We also discuss where further efforts in spectral sonification need to go to maximize its practical biological applications.
APA, Harvard, Vancouver, ISO, and other styles
32

Flowers, John H., Kimberly D. Turnage, and Dion C. Buhman. "Desktop data sonification." ACM Transactions on Applied Perception 2, no. 4 (October 2005): 473–76. http://dx.doi.org/10.1145/1101530.1101545.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Bodle, Carrie. "Sonification/Listening Up." Leonardo Music Journal 16 (December 2006): 51–52. http://dx.doi.org/10.1162/lmj.2006.16.51.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Sinclair, Peter. "Sonification: what where how why artistic practice relating sonification to environments." AI & SOCIETY 27, no. 2 (August 30, 2011): 173–75. http://dx.doi.org/10.1007/s00146-011-0346-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Newbold, Joseph, Nicolas E. Gold, and Nadia Bianchi-Berthouze. "Movement sonification expectancy model: leveraging musical expectancy theory to create movement-altering sonifications." Journal on Multimodal User Interfaces 14, no. 2 (March 30, 2020): 153–66. http://dx.doi.org/10.1007/s12193-020-00322-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Wolfe, Kristina. "Sonification and the Mysticism of Negation." Organised Sound 19, no. 3 (November 13, 2014): 304–9. http://dx.doi.org/10.1017/s1355771814000296.

Full text
Abstract:
Sonification has become a commonly used tool for data analysis, auditory feedback and compositional inspiration. It is often described in scientific terms as a means of uncovering previously unknown patterns in information or data through the use of the auditory sense. This goal seems to be objective, but the results and methodologies can be highly subjective. Moreover, the techniques and sources of information are strikingly similar to those used in mysticism, especially mysticisms of negation, even though the frames of reference and underlying perceptions of the world are markedly different. Both practitioners of sonification and apophatic mystics believe that certain types of information are incomprehensible through traditional analytic means and can only be understood through experience. In this way, sonification can be thought of as a source of mystical information. In this paper, I will discuss the similarities between sonification and apophatic mysticism, or the mysticism of negation. I will argue that the practice of sonification, as a source of mystical information, is ideally suited for creative contemplation, particularly in electronic music. I will start by providing some historical background on the mysticism of negation. I will then present several ways in which sonified knowledge (sound) is often imagined, discussed and perceived akin to a mystical object. Finally, I will discuss specific ways in which sonification exemplifies apophatic mysticism and reveals mystical information. This information – whatever its nature – can be used for creative contemplation and is a potentially invaluable source of compositional and spiritual inspiration.
APA, Harvard, Vancouver, ISO, and other styles
37

Pietschmann, J., F. Geu Flores, and T. Jöllenbeck. "Gait Training in Orthopedic Rehabilitation after Joint Replacement - Back to Normal Gait with Sonification?" International Journal of Computer Science in Sport 18, no. 2 (September 1, 2019): 34–48. http://dx.doi.org/10.2478/ijcss-2019-0012.

Full text
Abstract:
Even several years after total hip (THR) and total knee replacement (TKR) surgery, patients frequently show deficient gait patterns leading to overloads and relieving postures on the contralateral side or in the spine. Gait training is, in these cases, an essential part of rehabilitation. The aim of this study was to compare different feedback methods during gait training after THR and TKR, focusing in particular on auditory feedback via sonification. A total of 240 patients after THR and TKR were tested in a pre-post-test design during a 3-week rehabilitation period. Although sonification did not show a statistically clear advantage over other feedback methods, it was well accepted by the patients and seemed to significantly change gait pattern during training. A sudden absence of sonification during training led to a rapid relapse into previous movement patterns, which highlights its effectiveness in breaking highly automated gait patterns. Frequent use of sonification during and after rehabilitation could hence reduce overloading after THR and TKR. This may soon be viable, since new technologies, such as inertial measurement units, allow for wearable joint angle measurement devices. Back to normal gait with sonification seems possible.
APA, Harvard, Vancouver, ISO, and other styles
38

Gresham-Lancaster, Scot, and Peter Sinclair. "Sonification and Acoustic Environments." Leonardo Music Journal 22 (December 2012): 67–71. http://dx.doi.org/10.1162/lmj_a_00101.

Full text
Abstract:
Sonification can allow us to connect sound and/or music via data to the environment; in another sense, by “displaying” data through sound, sonification participates in creating our acoustic environment. The authors consider here the significance of certain aspects of this relationship.
APA, Harvard, Vancouver, ISO, and other styles
39

Trasmundi, Sarah Bro, and Matthew Isaac Harvey. "A blended quantitative-ethnographic method for describing vocal sonification in dance coaching." Psychology of Language and Communication 22, no. 1 (January 1, 2018): 198–237. http://dx.doi.org/10.2478/plc-2018-0009.

Full text
Abstract:
In this paper we present a micro-analytic description of the role vocalizing plays in a single case of professional dance instruction. We use a novel mix of qualitative and quantitative tools in order to investigate, and more thoroughly characterize, various forms of vocal co-organization. These forms involve a choreographer using vocalization to couple acoustic dynamics to the dynamics of their bodily movements, while demonstrating a dance routine, in order to enable watching dancers to coordinate the intrabodily dynamics of their own simultaneous performances. In addition to this descriptive project, the paper also suggests how such forms of coordination might emerge, by identifying those forms of voice-body coupling as potential instances of “instructional vocal sonification”. We offer a tentative theoretical model of how vocal sonification might operate when it is used in the teaching of movement skills, and in the choreographic teaching of dance in particular. While non-vocal sonification (both physical and computer-generated) is increasingly well-studied as a means of regulating coordinated inter-bodily movement, we know of no previous work that has systematically approached vocal sonification. We attempt to lay groundwork for future research by showing how our model of instructional vocal sonification might plausibly account for some of the effects of vocalization that we observe here. By doing so, the paper both provides a solid basis for hypothesis generation about a novel class of phenomena (i.e., vocal sonification), and contributes to bridging the methodological gap between isolated descriptions and statistical occurrences of a given type of event.
APA, Harvard, Vancouver, ISO, and other styles
40

Liu, Wanyu, Michelle Agnes Magalhaes, Wendy E. Mackay, Michel Beaudouin-Lafon, and Frédéric Bevilacqua. "Motor Variability in Complex Gesture Learning: Effects of Movement Sonification and Musical Background." ACM Transactions on Applied Perception 19, no. 1 (January 31, 2022): 1–21. http://dx.doi.org/10.1145/3482967.

Full text
Abstract:
With the increasing interest in movement sonification and expressive gesture-based interaction, it is important to understand which factors contribute to movement learning and how. We explore the effects of movement sonification and users’ musical background on motor variability in complex gesture learning. We contribute an empirical study in which musicians and non-musicians learn two gesture sequences over three days, with and without movement sonification. Results show the interlaced interaction effects of these factors and how they unfold in the three-day learning process. For gesture 1, which is fast and dynamic with a direct “action-sound” sonification, movement sonification induces higher variability for both musicians and non-musicians on day 1. While musicians reduce this variability to a similar level as no auditory feedback condition on day 2 and day 3, non-musicians remain to have significantly higher variability. Across three days, musicians also have significantly lower variability than non-musicians. For gesture 2, which is slow and smooth with an “action-music” metaphor, there are virtually no effects. Based on these findings, we recommend future studies to take into account participants’ musical background, consider longitudinal study to examine these effects on complex gestures, and use awareness when interpreting the results given a specific design of gesture and sound.
APA, Harvard, Vancouver, ISO, and other styles
41

Munteanu, Ligia, Veturia Chiroiu, and Ciprian Dragne. "On the sonification technique." Journal of Engineering Sciences and Innovation 4, no. 2 (May 30, 2019): 155–68. http://dx.doi.org/10.56958/jesi.2019.4.2.155.

Full text
Abstract:
An introduction to sonification theory and its applications to medical imaging is presented in this paper. Sonification is known in the literature as the transformation of an image into sound by means of a linear operator based on the linear theory of sound propagation. To reverse back to an image, an inverse problem has to be solved in order to determine whether or not the sound reveals new details in the original image. When the classical sonification operator is applied in the inverse problem, no image enhancement is achieved and no details are discovered, probably because the classical operator is based on the linear theory of sound propagation. In this paper a new sonification algorithm is advanced, based on the Burgers equation of sound propagation. The new algorithm is able to improve the medical image by inversion, capturing hardly detectable details in unclear original images. The approach is demonstrated on synthetic ultrasound images of human and rat livers.
APA, Harvard, Vancouver, ISO, and other styles
42

Walker, Bruce N., and Gregory Kramer. "Sonification design and metaphors." ACM Transactions on Applied Perception 2, no. 4 (October 2005): 413–17. http://dx.doi.org/10.1145/1101530.1101535.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Ben-Tal, Oded, and Jonathan Berger. "Creative Aspects of Sonification." Leonardo 37, no. 3 (June 2004): 229–33. http://dx.doi.org/10.1162/0024094041139427.

Full text
Abstract:
A goal of sonification research is the intuitive audio representation of complex, multidimensional data. The authors present two facets of this research that may provide insight into the creative process. First, they discuss aspects of categorical perception in nonverbal auditory scene analysis and propose that these characteristics are simplified models of creative engagement with sound. Second, they describe the use of sonified data in musical compositions by each of the authors and observe aspects of the creative process in the purely aesthetic use of sonified statistical data.
APA, Harvard, Vancouver, ISO, and other styles
44

Diaz-Merced, Wanda L., Robert M. Candey, Nancy Brickhouse, Matthew Schneps, John C. Mannone, Stephen Brewster, and Katrien Kolenberg. "Sonification of Astronomical Data." Proceedings of the International Astronomical Union 7, S285 (September 2011): 133–36. http://dx.doi.org/10.1017/s1743921312000440.

Full text
Abstract:
This document presents Java-based software called xSonify that uses a sonification technique (the adaptation of sound to convey information) to promote discovery in astronomical data. The prototype is designed to analyze two-dimensional data, such as time-series data. We demonstrate the utility of the sonification technique with examples applied to X-ray astronomy and solar data. We have identified frequencies in the Chandra X-Ray observations of EX Hya, a cataclysmic variable of the intermediate polar type. In another example we study the impact of a major solar flare, with its associated coronal mass ejection (CME), on the solar wind plasma (in particular the solar wind between the Sun and the Earth), and the Earth's magnetosphere.
APA, Harvard, Vancouver, ISO, and other styles
45

Grond, Florian, and Thomas Hermann. "Aesthetic strategies in sonification." AI & SOCIETY 27, no. 2 (August 30, 2011): 213–22. http://dx.doi.org/10.1007/s00146-011-0341-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Vickers, Paul, David Worrall, and Richard So. "Special Issue on Sonification." Displays 47 (April 2017): 1. http://dx.doi.org/10.1016/j.displa.2016.12.001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Bonet, Núria. "Musical Borrowing in Sonification." Organised Sound 24, no. 02 (August 2019): 184–94. http://dx.doi.org/10.1017/s1355771819000220.

Full text
Abstract:
Sonification presents some challenges in communicating information, particularly because of the large difference between possible data to sound mappings and cognitively valid mappings. It is an information transmission process which can be described through the Shannon-Weaver Theory of Mathematical Communication. Musical borrowing is proposed as a method in sonification which can aid the information transmission process as the composer’s and listener’s shared musical knowledge is used. This article describes the compositional process of Wasgiischwashäsch (2017) which uses Rossini’s William Tell Overture (1829) to sonify datasets relating to climate change in Switzerland. It concludes that the familiarity of audiences with the original piece, and the humorous effect produced by the distortion of a well-known piece, contribute to a more effective transmission process.
APA, Harvard, Vancouver, ISO, and other styles
48

Bresin, Roberto, Maurizio Mancini, Ludvig Elblaus, and Emma Frid. "Sonification of the self vs. sonification of the other: Differences in the sonification of performed vs. observed simple hand movements." International Journal of Human-Computer Studies 144 (December 2020): 102500. http://dx.doi.org/10.1016/j.ijhcs.2020.102500.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Groß-Vogt, Katharina, Matthias Frank, and Robert Höldrich. "Focused Audification and the optimization of its parameters." Journal on Multimodal User Interfaces 14, no. 2 (December 18, 2019): 187–98. http://dx.doi.org/10.1007/s12193-019-00317-8.

Full text
Abstract:
We present a sonification method which we call Focused Audification (FA; previously: Augmented Audification) that expands pure audification in a flexible way. It is based on a combination of single-sideband modulation and a pitch modulation of the original data stream. Through two free parameters, the sonification's frequency range is adjustable to the human hearing range, allowing the listener to zoom interactively into the data set at any scale. The parameters were adjusted in a multimodal experiment on cardiac data by laypeople. Based on these results, we suggest a procedure for parameter optimization to achieve an optimal listening range for any data set, adjusted to human speech.
APA, Harvard, Vancouver, ISO, and other styles
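The single-sideband step described in the abstract can be sketched in a few lines of NumPy. This is a simplified illustration, not the authors' implementation: the pitch-modulation parameter and the optimization procedure are omitted, the even-length assumption is for brevity, and the function names are invented for the example. An analytic signal is built with an FFT-based Hilbert transform and multiplied by a complex exponential, which shifts the data's entire spectrum up by f_shift without the mirror sideband that plain amplitude modulation would introduce.

```python
import numpy as np

def analytic_signal(x):
    """FFT-based analytic signal of a real, even-length sequence."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0   # keep positive frequencies, doubled
    h[n // 2] = 1.0     # Nyquist bin (even n assumed)
    return np.fft.ifft(spec * h)

def focused_audification(x, sr, f_shift):
    """Translate a data stream upward in frequency by SSB modulation.

    Multiplying the analytic signal by a complex exponential shifts the
    whole spectrum by f_shift; this is the knob that moves sub-audio
    data into the audible range without creating a mirror image of it.
    """
    t = np.arange(len(x)) / sr
    return np.real(analytic_signal(x) * np.exp(2j * np.pi * f_shift * t))
```

For example, a 10 Hz component in the data stream emerges as a 110 Hz tone after a 100 Hz shift, with every other component moved by the same amount so their spacing is preserved.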
50

Lenzi, Sara, and Paolo Ciuccarelli. "Intentionality and design in the data sonification of social issues." Big Data & Society 7, no. 2 (July 2020): 205395172094460. http://dx.doi.org/10.1177/2053951720944603.

Full text
Abstract:
Data sonification is a practice for conducting scientific analysis through the use of sound to represent data. It is now transitioning to a practice for communicating and reaching wider publics by expanding the range of languages and senses for understanding complexity in data-intensive societies. Communicating to wider publics, though, requires that authors intentionally shape sonification in ways that consider the goals and contexts in which publics relate. It requires a specific set of knowledge and skills that design as a discipline could provide. In this article, we interpret five recent sonification projects and locate them on a scale of intentionality in how authors communicate socially relevant issues to publics.
APA, Harvard, Vancouver, ISO, and other styles
