Journal articles on the topic 'Interactive sonification'

Consult the top 50 journal articles for your research on the topic 'Interactive sonification.'

1

Bresin, Roberto, Thomas Hermann, and Andy Hunt. "Interactive sonification." Journal on Multimodal User Interfaces 5, no. 3-4 (April 20, 2012): 85–86. http://dx.doi.org/10.1007/s12193-012-0095-7.

2

Grond, Florian, and Thomas Hermann. "Interactive Sonification for Data Exploration: How listening modes and display purposes define design guidelines." Organised Sound 19, no. 1 (February 26, 2014): 41–51. http://dx.doi.org/10.1017/s1355771813000393.

Abstract:
The desire to make data accessible through the sense of listening has led to ongoing research in the fields of sonification and auditory display since the early 1990s. Coming from the disciplines of computer science and human-computer interaction (HCI), the conceptualisation of sonification has been mostly driven by application areas and methods. On the other hand, the sonic arts, which have always participated in the auditory display community, have a genuine focus on sound. Despite these close interdisciplinary relationships between communities of sound practitioners, a rich, sound- or listening-centred concept of sonification is still missing for design guidelines. Complementary to the useful organisation by fields of application, a proper conceptual framework for sound needs to be abstracted from applications and, to some degree, from tasks, as neither is directly related to sound. As an initial approach to recasting the thinking about sonification, we propose a conceptualisation of sonifications along two poles, in which sound serves either a normative or a descriptive purpose. According to these two poles, design guidelines can be developed that are proper to display purposes and listening modes.
3

Kirke, Alexis, Samuel Freeman, and Eduardo Reck Miranda. "Wireless Interactive Sonification of Large Water Waves to Demonstrate the Facilities of a Large-Scale Research Wave Tank." Computer Music Journal 39, no. 3 (September 2015): 59–70. http://dx.doi.org/10.1162/comj_a_00315.

Abstract:
Interactive sonification can provide a platform for demonstration and education as well as for monitoring and investigation. We present a system designed to demonstrate the facilities of the UK's most advanced large-scale research wave tank. The interactive sonification of water waves in the “ocean basin” wave tank at Plymouth University consisted of a number of elements: generation of ocean waves, acquisition and sonification of ocean-wave measurement data, and gesture-controlled pitch and amplitude of sonifications. The generated water waves were linked in real time to sonic features via depth monitors and motion tracking of a floating buoy. Types of water-wave patterns, varying in shape and size, were selected and triggered using wireless motion detectors attached to the demonstrator's arms. The system was implemented on a network of five computers utilizing Max/MSP alongside specialist marine research software, and was demonstrated live in a public performance for the formal opening of the Marine Institute building.
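To make the mapping concrete, here is a minimal Python sketch in the spirit of this system. The scalings, the simulated gauge stream, and the function name wave_to_params are hypothetical stand-ins; the actual installation ran in real time on a network of Max/MSP machines.

```python
import math

def wave_to_params(height_m, velocity_ms):
    """Map one wave-gauge reading to synthesis parameters.

    Hypothetical mapping: wave height sets pitch on an exponential
    scale (one octave per 0.5 m); the buoy's vertical velocity sets
    amplitude, clipped to full scale.
    """
    freq = 110.0 * 2 ** (height_m / 0.5)
    amp = min(1.0, abs(velocity_ms) / 2.0)
    return freq, amp

# Simulated gauge stream: a 0.3 m swell at 0.5 Hz, sampled at 10 Hz.
for k in range(20):
    t = k / 10.0
    height = 0.3 * math.sin(2 * math.pi * 0.5 * t)
    velocity = 0.3 * 2 * math.pi * 0.5 * math.cos(2 * math.pi * 0.5 * t)
    freq, amp = wave_to_params(height, velocity)
    print(f"t={t:.1f}s  freq={freq:6.1f} Hz  amp={amp:.2f}")
```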
4

Lindborg, PerMagnus. "Interactive Sonification of Weather Data for The Locust Wrath, a Multimedia Dance Performance." Leonardo 51, no. 5 (October 2018): 466–74. http://dx.doi.org/10.1162/leon_a_01339.

Abstract:
To work flexibly with the sound design for The Locust Wrath, a multimedia dance performance on the topic of climate change, the author developed software for interactive sonification of climate data. An open-ended approach to parameter mapping allowed tweaking and improvisation during rehearsals, resulting in a large range of musical expression. The sonifications represented weather systems pushing through Southeast Asia in complex patterns. The climate was rendered as a piece of electroacoustic music, whose compositional form—gesture, timbre, intensity, harmony, spatiality—was determined by the data. The article discusses aspects of aesthetic sonification, reports the process of developing the present work and contextualizes the design decisions within theories of cross-modal perception and listening modes.
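A minimal sketch of parameter-mapping sonification in the spirit of this abstract, using only the Python standard library. The mapping (temperature to pitch, rainfall to loudness) and the weather records are invented for illustration; the article's own mappings were open-ended and tuned during rehearsals.

```python
import math
import struct
import wave

SR = 44100  # sample rate (Hz)

# Hypothetical weather records: (temperature degC, rainfall mm) per step.
weather = [(24.0, 0.0), (27.5, 2.0), (31.0, 12.0), (29.0, 30.0), (25.5, 8.0)]

def tone(freq, amp, dur=0.4):
    """Render one sine tone with a linear fade-out to avoid clicks."""
    n = int(SR * dur)
    return [amp * math.sin(2 * math.pi * freq * i / SR) * (1 - i / n)
            for i in range(n)]

samples = []
for temp, rain in weather:
    freq = 220.0 * 2 ** ((temp - 20.0) / 10.0)  # temperature -> pitch
    amp = min(1.0, 0.2 + rain / 40.0)           # rainfall -> loudness
    samples.extend(tone(freq, amp))

with wave.open("weather.wav", "w") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(SR)
    f.writeframes(b"".join(struct.pack("<h", int(s * 32767))
                           for s in samples))
```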
5

Zhao, Haixia, B. K. Smith, K. Norman, C. Plaisant, and B. Shneiderman. "Interactive Sonification of Choropleth Maps." IEEE Multimedia 12, no. 2 (April 2005): 26–35. http://dx.doi.org/10.1109/mmul.2005.28.

6

Degara, Norberto, Andy Hunt, and Thomas Hermann. "Interactive Sonification [Guest editors' introduction]." IEEE MultiMedia 22, no. 1 (January 2015): 20–23. http://dx.doi.org/10.1109/mmul.2015.8.

7

Pauletto, Sandra, and Andy Hunt. "Interactive sonification of complex data." International Journal of Human-Computer Studies 67, no. 11 (November 2009): 923–33. http://dx.doi.org/10.1016/j.ijhcs.2009.05.006.

8

Weinberg, Gil, and Travis Thatcher. "Interactive Sonification: Aesthetics, Functionality and Performance." Leonardo Music Journal 16 (December 2006): 9–12. http://dx.doi.org/10.1162/lmj.2006.16.9.

Abstract:
The authors present a sonification installation that allows a group of players to interact with an auditory display of neural activity. The system is designed to represent electrical spike propagation in a neuron culture through sound propagation in space. Participants can simulate neural spikes by using a set of specially designed controllers, experimenting and sonically investigating the electrical activity of the brain. The article discusses some aesthetic and functional aspects of sonification and describes the authors' approach for group interaction with auditory displays. It concludes with the description of a performance piece for the system and ideas for improvements and future work.
9

Zhao, Haixia. "Interactive sonification of geo-referenced data." ACM SIGACCESS Accessibility and Computing, no. 82 (June 2005): 33–36. http://dx.doi.org/10.1145/1077238.1077244.

10

Han, Yoon Chung, and Byeong-jun Han. "Skin Pattern Sonification as a New Timbral Expression." Leonardo Music Journal 24 (December 2014): 41–43. http://dx.doi.org/10.1162/lmj_a_00199.

Abstract:
The authors discuss two sonification projects that transform fingerprint and skin patterns into audio: (1) Digiti Sonus, an interactive installation performing fingerprint sonification and visualization and (2) skin pattern sonification, which converts pore networks into sound. The projects include novel techniques for representing user-intended fingerprint expression and skin pattern selection as audio parameters.
11

Ziemer, Tim, Holger Schultheis, David Black, and Ron Kikinis. "Psychoacoustical Interactive Sonification for Short Range Navigation." Acta Acustica united with Acustica 104, no. 6 (November 1, 2018): 1075–93. http://dx.doi.org/10.3813/aaa.919273.

12

Wolf, KatieAnna E., and Rebecca Fiebrink. "Personalised interactive sonification of musical performance data." Journal on Multimodal User Interfaces 13, no. 3 (March 13, 2019): 245–65. http://dx.doi.org/10.1007/s12193-019-00294-y.

13

Barrett, Natasha. "Interactive Spatial Sonification of Multidimensional Data for Composition and Auditory Display." Computer Music Journal 40, no. 2 (June 2016): 47–69. http://dx.doi.org/10.1162/comj_a_00358.

Abstract:
This article presents a new approach to interactive spatial sonification of multidimensional data as a tool for spatial sound synthesis, for composing temporal–spatial musical materials, and as an auditory display for scientists to analyze multidimensional data sets in time and space. The approach applies parameter-mapping sonification and is currently implemented in an application called Cheddar, which was programmed in Max/MSP. Cheddar sonifies data in real time, where the user can modify a wide variety of temporal, spatial, and sonic parameters during the listening process, and thus more easily uncover patterns and processes in the data than when applying non-real-time, noninteractive techniques. The design draws on existing literature concerning perception and acoustics, and it applies the author's practical experience in acousmatic composition, spectromorphology, and sound semantics, while addressing accuracy, flexibility, and ease of use. Although previous sonification applications have addressed some degree of real-time control and spatialization, this approach integrates space and sound in an interactive framework. Spatial information is sonified in high-order 3-D ambisonics, where the user can interactively move the virtual listening position to reveal details easily missed from fixed or noninteractive spatial views. Sounds used as input to the sonification take advantage of the rich spectra and extramusical attributes of acoustic sources, which, although previously theorized, are investigated here in a practical context thoroughly tested alongside acoustic and psychoacoustic considerations. Furthermore, when using Cheddar, no specialized knowledge of programming, acoustics, or psychoacoustics is required. These approaches position Cheddar at the junction between science and art. With one application serving both disciplines, the patterns and processes of science are more fluently appropriated into music or sound art, and vice versa for scientific research, science public outreach, and education.
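The movable virtual listening position can be illustrated with a short sketch. Equal-power stereo panning is used here as a crude stand-in for the high-order 3-D ambisonic rendering the article describes; the geometry and the gain law are assumptions, not Cheddar's implementation.

```python
import math

def render_gains(source_xy, listener_xy):
    """Gains for one data point relative to a movable virtual listener:
    distance controls level, azimuth controls equal-power panning."""
    dx = source_xy[0] - listener_xy[0]
    dy = source_xy[1] - listener_xy[1]
    level = 1.0 / (1.0 + math.hypot(dx, dy))  # distance attenuation
    az = math.atan2(dx, dy)                   # 0 rad = straight ahead
    pan = (az / math.pi + 1.0) / 2.0          # 0 = hard left, 1 = hard right
    return (level * math.cos(pan * math.pi / 2),
            level * math.sin(pan * math.pi / 2))

# Moving the listener past a fixed data point shifts its spatial image.
point = (1.0, 2.0)
for x in (-2.0, 0.0, 2.0):
    left, right = render_gains(point, (x, 0.0))
    print(f"listener x={x:+.0f}: L={left:.2f}  R={right:.2f}")
```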
14

Yang, Jiajun, and Thomas Hermann. "Interactive Mode Explorer Sonification Enhances Exploratory Cluster Analysis." Journal of the Audio Engineering Society 66, no. 9 (September 16, 2018): 703–11. http://dx.doi.org/10.17743/jaes.2018.0042.

15

Hermann, T., and A. Hunt. "Guest Editors' Introduction: An Introduction to Interactive Sonification." IEEE MultiMedia 12, no. 2 (April 2005): 20–24. http://dx.doi.org/10.1109/mmul.2005.26.

16

Pauletto, Sandra, Howard Cambridge, and Patrick Susini. "Data sonification and sound design in interactive systems." International Journal of Human-Computer Studies 85 (January 2016): 1–3. http://dx.doi.org/10.1016/j.ijhcs.2015.08.005.

17

Alonso-Arevalo, Miguel A., Simon Shelley, Dik Hermes, Jacqueline Hollowood, Michael Pettitt, Sarah Sharples, and Armin Kohlrausch. "Curve shape and curvature perception through interactive sonification." ACM Transactions on Applied Perception 9, no. 4 (October 2012): 1–19. http://dx.doi.org/10.1145/2355598.2355600.

18

Yang, Jiajun, Thomas Hermann, and Roberto Bresin. "Introduction to the special issue on interactive sonification." Journal on Multimodal User Interfaces 13, no. 3 (August 1, 2019): 151–53. http://dx.doi.org/10.1007/s12193-019-00312-z.

19

Burloiu, Grigore, Valentin Mihai, and Stefan Damian. "Layered Motion and Gesture Sonification in an Interactive Installation." Journal of the Audio Engineering Society 66, no. 10 (October 16, 2018): 770–78. http://dx.doi.org/10.17743/jaes.2018.0047.

20

Turchet, Luca, and Roberto Bresin. "Effects of Interactive Sonification on Emotionally Expressive Walking Styles." IEEE Transactions on Affective Computing 6, no. 2 (April 1, 2015): 152–64. http://dx.doi.org/10.1109/taffc.2015.2416724.

21

Fernstrom, M., E. Brazil, and L. Bannon. "HCI Design and Interactive Sonification for Fingers and Ears." IEEE Multimedia 12, no. 2 (April 2005): 36–44. http://dx.doi.org/10.1109/mmul.2005.27.

22

Quinn, Marty. "“Walk on the Sun”: an interactive image sonification exhibit." AI & SOCIETY 27, no. 2 (September 16, 2011): 303–5. http://dx.doi.org/10.1007/s00146-011-0355-1.

23

Diniz, Nuno, Pieter Coussement, Alexander Deweppe, Michiel Demey, and Marc Leman. "An embodied music cognition approach to multilevel interactive sonification." Journal on Multimodal User Interfaces 5, no. 3-4 (January 11, 2012): 211–19. http://dx.doi.org/10.1007/s12193-011-0084-2.

24

Schaffert, Nina, and Klaus Mattes. "Interactive Sonification in Rowing: Acoustic Feedback for On-Water Training." IEEE MultiMedia 22, no. 1 (January 2015): 58–67. http://dx.doi.org/10.1109/mmul.2015.9.

25

Geronazzo, Michele, Alberto Bedin, Luca Brayda, Claudio Campus, and Federico Avanzini. "Interactive spatial sonification for non-visual exploration of virtual maps." International Journal of Human-Computer Studies 85 (January 2016): 4–15. http://dx.doi.org/10.1016/j.ijhcs.2015.08.004.

26

Fabiani, Marco, Roberto Bresin, and Gaël Dubus. "Interactive sonification of expressive hand gestures on a handheld device." Journal on Multimodal User Interfaces 6, no. 1-2 (November 23, 2011): 49–57. http://dx.doi.org/10.1007/s12193-011-0076-2.

27

Frid, Emma, Ludvig Elblaus, and Roberto Bresin. "Interactive sonification of a fluid dance movement: an exploratory study." Journal on Multimodal User Interfaces 13, no. 3 (November 15, 2018): 181–89. http://dx.doi.org/10.1007/s12193-018-0278-y.

28

Barrett, Natasha, and Karen Mair. "Aftershock: A science–art collaboration through sonification." Organised Sound 19, no. 1 (February 26, 2014): 4–16. http://dx.doi.org/10.1017/s1355771813000368.

Abstract:
We are immersed in a wonderful cacophony of sound. Sustained and intermittent pings, cracks, burrs, plops and tingles jostle for position in our heads. High-pitched, delicate cascades contrast starkly with deep, thunder-like rumbles that seem to permeate our entire bodies. We are exploring a fragmenting fault zone from the inside, a dynamic geological process brought to our ears through sonification and science–art collaboration: the interactive sound art installation Aftershock. Aftershock (2011) is the result of the collaboration between composer Natasha Barrett, associate professor of geosciences Karen Mair and the Norwegian Centre for the Physics of Geological Processes (PGP) at the University of Oslo. In this paper we discuss how a scientist and an artist collaborated through sonification, the artistic results of this approach, and how new compositional methods and aesthetic frameworks emerged.
29

Arnold, Rudolf. "PLAY ME: interactive sonification of sexual arousal in long-distance relationships." Paladyn, Journal of Behavioral Robotics 11, no. 1 (June 27, 2020): 250–70. http://dx.doi.org/10.1515/pjbr-2020-0014.

Abstract:
Music is the best medium for expressing emotions and arousal nonverbally. PLAY ME is a gender-neutral Arduino-based system that allows partners in a long-distance relationship to perceive each other's sexual arousal and to provide stimulation of erogenous zones using music. PLAY ME's main parts are a tiny pneumatic anal probe connected to a pressure sensor and a bodysuit with integrated vibrators. Whenever both partners wear these devices, a real-time exchange of emotions and corporeal feelings can be enabled. Three sensors capture genital sexual arousal and transform it into music: a pulse sensor, a sensor for galvanic skin response and a pneumatic anal pressure probe. The anal probe measures pelvic tensions and contractions; its signal controls the main voice. Higher arousal leads to stronger pelvic muscle tensions. Measured data are mapped to pitch, so the level of sexual arousal is audible in a comprehensible way, and orgasms can be clearly identified by regular pulsating sounds. The pulse sensor and the skin-response sensor drive the rhythm and the drone frequency. The vibrators in the bodysuit are controlled by sound generated by the partner using any audio source. Mixing the sounds generated by the sensors and the instrument leads to interactive music that can enhance erotic feelings and sexual arousal by way of biofeedback. This article describes the background and construction of the PLAY ME system and shows diagrams of sensor values recorded during sexual stimulation. After a discussion of the results, there is an outlook toward further development.
30

Landry, Steven, and Myounghoon Jeon. "Interactive sonification strategies for the motion and emotion of dance performances." Journal on Multimodal User Interfaces 14, no. 2 (March 14, 2020): 167–86. http://dx.doi.org/10.1007/s12193-020-00321-3.

31

Wirfs-Brock, Jordan, Alli Fam, Laura Devendorf, and Brian Keegan. "Examining Narrative Sonification: Using First-Person Retrospection Methods to Translate Radio Production to Interaction Design." ACM Transactions on Computer-Human Interaction 28, no. 6 (December 31, 2021): 1–34. http://dx.doi.org/10.1145/3461762.

Abstract:
We present a first-person, retrospective exploration of two radio sonification pieces that employ narrative scaffolding to teach audiences how to listen to data. To decelerate and articulate design processes that occurred at the rapid pace of radio production, the sound designer and producer wrote retrospective design accounts. We then revisited the radio pieces through principles drawn from guidance design, data storytelling, visualization literacy, and sound studies. Finally, we speculated how these principles might be applied through interactive, voice-based technologies. First-person methods enabled us to access the implicit knowledge embedded in radio production and translate it to technologies of interest to the human–computer-interaction community, such as voice user interfaces that rely on auditory display. Traditionally, sonification practitioners have focused more on generating sounds than on teaching people how to listen; our process, however, treated sound and narrative as a holistic, sonic-narrative experience. Our first-person retrospection illuminated the role of narrative in designing to support people as they learn to listen to data.
32

Dubus, Gaël, and Roberto Bresin. "Exploration and evaluation of a system for interactive sonification of elite rowing." Sports Engineering 18, no. 1 (September 7, 2014): 29–41. http://dx.doi.org/10.1007/s12283-014-0164-0.

33

Brückner, Hans-Peter, Sebastian Lesse, Wolfgang Theimer, and Holger Blume. "Design space exploration of hardware platforms for interactive low latency movement sonification." Journal on Multimodal User Interfaces 10, no. 1 (September 22, 2015): 1–11. http://dx.doi.org/10.1007/s12193-015-0199-y.

34

Polli, Andrea. "Modelling storms in sound: the Atmospherics/Weather Works project." Organised Sound 9, no. 2 (August 2004): 175–80. http://dx.doi.org/10.1017/s1355771804000251.

Abstract:
Atmospherics/Weather Works is an interdisciplinary project in the sonification of storms and other meteorological events generated directly from atmospheric data produced by a highly detailed and physically accurate simulation of weather systems. This paper discusses the background, conception and execution of the first stage of the project that has resulted in several performances, stereo recordings, a multi-channel spatialised sound installation, and an interactive sound environment.
35

Polo, Antonio, and Xavier Sevillano. "Musical Vision: an interactive bio-inspired sonification tool to convert images into music." Journal on Multimodal User Interfaces 13, no. 3 (November 8, 2018): 231–43. http://dx.doi.org/10.1007/s12193-018-0280-4.

36

Skulimowski, Piotr, Mateusz Owczarek, Andrzej Radecki, Michal Bujacz, Dariusz Rzeszotarski, and Pawel Strumillo. "Interactive sonification of U-depth images in a navigation aid for the visually impaired." Journal on Multimodal User Interfaces 13, no. 3 (November 8, 2018): 219–30. http://dx.doi.org/10.1007/s12193-018-0281-3.

37

Han, Yoon Chung. "Digiti Sonus and Eyes." Public 30, no. 60 (March 1, 2020): 156–63. http://dx.doi.org/10.1386/public_00012_7.

Abstract:
This paper discusses the design process and motivations of two interactive biometric data artworks (Digiti Sonus and Eyes). Narratives and artistic explorations using two forms of biometric data, from fingerprints and the iris, are discussed based on insights from fields such as genetics, visual feature analysis, user interface design, and data visualization and sonification. Digiti Sonus and Eyes extracted the unique visual features of each type of biometric data and transformed the data into musical sound with multimodal interactions, so that the result was real-time experimental sound. Various aspects of the two artworks are compared in this paper.
38

Radecki, Andrzej, Michał Bujacz, Piotr Skulimowski, and Paweł Strumiłło. "Interactive sonification of images in serious games as an education aid for visually impaired children." British Journal of Educational Technology 51, no. 2 (July 22, 2019): 473–97. http://dx.doi.org/10.1111/bjet.12852.

39

Worrall, David. "Can Micro-Gestural Inflections Be Used to Improve the Soniculatory Effectiveness of Parameter Mapping Sonifications?" Organised Sound 19, no. 1 (February 26, 2014): 52–59. http://dx.doi.org/10.1017/s135577181300040x.

Abstract:
Parameter mapping sonification is the most widely used technique for representing multi-dimensional data in sound. However, it is known to be unreliable when used for detecting information in some types of data. This is generally thought to be the result of the co-dependency of the psychoacoustic dimensions used in the mapping. Positing its perceptual basis in a theory of embodied cognition, the most common approach to overcoming this limitation involves techniques that afford the interactive exploration of the data using gross body gestures. In some circumstances, such exploration is not possible and, even when it is, it may be neither necessary nor sufficient. This article explores some other possible reasons for the unreliability of parameter mapping sonification and, drawing from the experience of expressive musical performance, suggests that the problem lies not in the parametric approach per se, nor in the lack of interactivity, but in the extent to which the parameters employed contribute to coherent gestalts. A method for how this might be achieved that relies on the use of micro-gestural information is proposed. While this is speculative, the use of such gestural inflections is well known in music performance, is supported by findings in neuroscience and lends itself to empirical testing.
40

Maples, Creve, and Craig Peterson. "µuSE (Multidimensional, User-Oriented Synthetic Environment): a Functionality-based, Human Computer Interface." International Journal of Virtual Reality 1, no. 1 (January 1, 1995): 2–9. http://dx.doi.org/10.20870/ijvr.1995.1.1.2601.

Abstract:
The Multidimensional User-oriented Synthetic Environment, µuSE, is an open-ended software shell that provides a new approach to interacting with computer-based information. By using a real-time, device-independent software design and incorporating both cognitive and experiential models of human perception, µuSE greatly enhances a person's ability to examine, interact with, and understand relationships in complex information space. A µuSE shell may be wrapped around data, models, simulations or even complete programs. Using a new design approach, based on human functionality, it provides tools for the presentation, exploration, navigation, manipulation, and examination of information. Users experience a highly interactive environment, capable of dynamically mapping information into visual, auditory or kinesthetic representations. By eliminating the necessity for application programs to directly interact with devices, and by providing standard functional channels for user communication, the software environment significantly reduces and simplifies application code development. µuSE also permits both multiprocessor operations and distributed network-based, heterogeneously shared environments. The system currently supports flat screen, stereo, and VR operation (with full head-tracking), voice recognition, sound synthesis, data sonification, and a variety of commercial interactive devices.
41

Varni, Giovanna, Gaël Dubus, Sami Oksanen, Gualtiero Volpe, Marco Fabiani, Roberto Bresin, Jari Kleimola, Vesa Välimäki, and Antonio Camurri. "Interactive sonification of synchronisation of motoric behaviour in social active listening to music with mobile devices." Journal on Multimodal User Interfaces 5, no. 3-4 (December 13, 2011): 157–73. http://dx.doi.org/10.1007/s12193-011-0079-z.

42

Cantrell, Stanley J., R. Michael Winters, Prakriti Kaini, and Bruce N. Walker. "Sonification of Emotion in Social Media: Affect and Accessibility in Facebook Reactions." Proceedings of the ACM on Human-Computer Interaction 6, CSCW1 (March 30, 2022): 1–26. http://dx.doi.org/10.1145/3512966.

Abstract:
Facebook Reactions are a collection of animated icons that enable users to share and express their emotions when interacting with Facebook content. The current design of Facebook Reactions utilizes visual stimuli (animated graphics and text) to convey affective information, which presents usability and accessibility barriers for visually-impaired Facebook users. In this paper, we investigate the use of sonification as a universally-accessible modality to aid in the conveyance of affect for blind and sighted social media users. We discuss the design and evaluation of 48 sonifications, leveraging Facebook Reactions as a conceptual framework. We conducted an online sound-matching study with 75 participants (11 blind, 64 sighted) to evaluate the performance of these sonifications. We found that sonification is an effective tool for conveying emotion for blind and sighted participants, and we highlight sonification design strategies that contribute to improved efficacy. Finally, we contextualize these findings and discuss the implications of this research with respect to HCI and the accessibility of online communities and platforms.
43

Kim, Charles, Alexandria Guo, Gautam Salhotra, Sara Sprinkhuizen, Keerthi Shetty, and David Sun Kong. "Sonifying Data from the Human Microbiota: Biota Beats." Computer Music Journal 44, no. 1 (2020): 51–70. http://dx.doi.org/10.1162/comj_a_00552.

Abstract:
This article presents a musical interface that enables the sonification of data from the human microbiota, the trillions of microorganisms that inhabit the human body, into sound and music. The project is concerned with public engagement in science, particularly the life sciences, and developing cultivation technologies that take advantage of the ubiquitous and accessible nature of the human microbiota. In this article we examine the collaboration between team members proficient in musical composition and those with expertise in biology, sonification, and data visualization, producing an individualized piece of music designed to capture basic biological data and user attention. Although this system, called Biota Beats, sonifies ubiquitous data for educational science projects, it also establishes a connection between individuals and their bodies and between a community and its context through interactive music experiences, while attempting to make the science of the human microbiome more accessible. The science behind standardizing sonified data for scientific, human analysis is still in development (in comparison to charts, graphs, spectrograms, or other types of data visualization). So a more artistic approach, using the framework of musical genres and their associated themes and motifs, is a convenient and previously established way to capitalize on how people naturally perceive sound. Further, to forge a creative connection between the human microbiota and the music genre, a philosophical shift is necessary, that of viewing the human body and the digital audio workstation as ubiquitous computers.
44

Schaffert, Nina, Andrew Godbout, Sebastian Schlueter, and Klaus Mattes. "Towards an application of interactive sonification for the forces applied on the pedals during cycling on the Wattbike ergometer." Displays 50 (December 2017): 41–48. http://dx.doi.org/10.1016/j.displa.2017.09.004.

45

Metatla, Oussama, Nick Bryan-Kinns, Tony Stockman, and Fiore Martin. "Sonification of reference markers for auditory graphs: effects on non-visual point estimation tasks." PeerJ Computer Science 2 (April 6, 2016): e51. http://dx.doi.org/10.7717/peerj-cs.51.

Abstract:
Research has suggested that adding contextual information such as reference markers to data sonification can improve interaction with auditory graphs. This paper presents results of an experiment that contributes to quantifying and analysing the extent of such benefits for an integral part of interacting with graphed data: point estimation tasks. We examine three pitch-based sonification mappings: pitch-only, one-reference, and multiple-references, which we designed to provide information about distance from an origin. We assess the effects of these sonifications on users' performance when completing point estimation tasks in a between-subject experimental design against visual and speech control conditions. Results showed that the addition of reference tones increases users' accuracy with a trade-off for task completion times, and that the multiple-references mapping is particularly effective when dealing with points that are positioned at the midrange of a given axis.
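A sketch of the mapping conditions compared in the study. The one-octave frequency range and the linear value-to-pitch law are assumptions; the point is how a fixed origin tone (the reference marker) gives listeners an interval against which to estimate a point's distance from the origin.

```python
F_MIN, F_MAX, AXIS_MAX = 220.0, 440.0, 10.0  # assumed pitch range and axis

def value_to_freq(v):
    """Map an axis position linearly onto the pitch range."""
    return F_MIN + (F_MAX - F_MIN) * v / AXIS_MAX

def one_reference(v):
    """Pair the data tone with the fixed origin tone (reference marker)."""
    return [value_to_freq(0.0), value_to_freq(v)]

for v in (2.5, 5.0, 7.5):
    print(f"value {v}: pitch-only {value_to_freq(v):.0f} Hz, "
          f"one-reference {[round(f) for f in one_reference(v)]} Hz")
```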
46

Delogu, Franco, Marta Olivetti Belardinelli, Massimiliano Palmiero, Emanuele Pasqualotto, Haixia Zhao, Catherine Plaisant, and Stefano Federici. "Interactive sonification for blind people exploration of geo-referenced data: comparison between a keyboard-exploration and a haptic-exploration interfaces." Cognitive Processing 7, S1 (August 4, 2006): 178–79. http://dx.doi.org/10.1007/s10339-006-0137-8.

47

Su, Isabelle, Zhao Qin, Tomás Saraceno, Ally Bisshop, Roland Mühlethaler, Evan Ziporyn, and Markus J. Buehler. "Sonification of a 3-D Spider Web and Reconstitution for Musical Composition Using Granular Synthesis." Computer Music Journal 44, no. 4 (2020): 43–59. http://dx.doi.org/10.1162/comj_a_00580.

Abstract:
Three-dimensional spider webs feature highly intricate fiber architectures, which can be represented via 3-D scanning and modeling. To allow novel interpretations of the key features of a 3-D Cyrtophora citricola spider web, we translate complex 3-D data from the original web model into music, using data sonification. We map the spider web data to audio parameters such as pitch, amplitude, and envelope. Paired with a visual representation, the resulting audio allows a unique and holistic immersion into the web that can describe features of the 3-D architecture (fiber distance, lengths, connectivity, and overall porosity of the structure) as a function of spatial location in the web. Using granular synthesis, we further develop a method to extract musical building blocks from the sonified web, transforming the original representation of the web data into new musical compositions. We build a new virtual, interactive musical instrument in which the physical 3-D web data are used to generate new variations in sound through exploration of different spatial locations and grain-processing parameters. The transformation of sound from grains to musical arrangements (variations of melody, rhythm, harmony, chords, etc.) is analogous to the natural bottom–up processing of proteins, resembling the design of sequence and higher-level hierarchical protein material organization from elementary chemical building blocks. The tools documented here open possibilities for creating virtual instruments based on spider webs for live performances and art installations, suggesting new possibilities for immersion into spider web data, and for exploring similarities between protein folding, on the one hand, and assembly and musical expression, on the other.
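A compact sketch of the granular-synthesis step: grains are cut from a source buffer, windowed, and overlap-added at read positions driven by the data. The source tone, grain sizes, and positions are hypothetical; the authors' pipeline derives them from spatial locations in the 3-D web model.

```python
import math

SR = 44100  # sample rate (Hz)

def grain(source, start, length):
    """Cut one grain from the source and apply a Hann window."""
    g = source[start:start + length]
    n = len(g)
    return [s * 0.5 * (1.0 - math.cos(2 * math.pi * i / (n - 1)))
            for i, s in enumerate(g)]

def granulate(source, positions, grain_len=2048, hop=1024):
    """Overlap-add grains read from data-chosen source positions."""
    out = [0.0] * (hop * len(positions) + grain_len)
    for k, pos in enumerate(positions):
        for i, s in enumerate(grain(source, pos, grain_len)):
            out[k * hop + i] += s
    return out

# Source: one second of a 330 Hz tone; positions stand in for values
# derived from locations in the scanned web (hypothetical data).
source = [math.sin(2 * math.pi * 330 * i / SR) for i in range(SR)]
audio = granulate(source, [100, 9000, 22000, 4000, 30000])
print(f"rendered {len(audio) / SR:.2f} s of granulated audio")
```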
48

Barrass, Stephen. "An Annotated Portfolio of Research through Design in Acoustic Sonification." Leonardo 49, no. 1 (February 2016): 72–73. http://dx.doi.org/10.1162/leon_a_01116.

Abstract:
The Hypertension Singing Bowl was shaped from a year of the author’s blood pressure readings, and 3D printed in stainless steel so it would ring. This digitally fabricated singing bowl is an “ultimate particular” that establishes the design space of Acoustic Sonifications. This paper presents early experiments with Acoustic Sonification and analyses them using an Annotated Portfolio to identify interaction, perception, aesthetics and contemplation as important axes of the domain. This illustrates how Annotated Portfolios could also be used to analyse New Interfaces for Musical Expression.
49

Newbold, Joseph, Nicolas E. Gold, and Nadia Bianchi-Berthouze. "Movement sonification expectancy model: leveraging musical expectancy theory to create movement-altering sonifications." Journal on Multimodal User Interfaces 14, no. 2 (March 30, 2020): 153–66. http://dx.doi.org/10.1007/s12193-020-00322-2.

50

Poirier-Quinot, David, Gaetan Parseihian, and Brian F. G. Katz. "Comparative study on the effect of Parameter Mapping Sonification on perceived instabilities, efficiency, and accuracy in real-time interactive exploration of noisy data streams." Displays 47 (April 2017): 2–11. http://dx.doi.org/10.1016/j.displa.2016.05.001.
