Academic literature on the topic 'Direct sonification'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Direct sonification.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Direct sonification"

1

Fernström, Mikael, and Caolan McNamara. "After direct manipulation---direct sonification." ACM Transactions on Applied Perception 2, no. 4 (October 2005): 495–99. http://dx.doi.org/10.1145/1101530.1101548.

2

Mysore, Aprameya, Andreas Velten, and Kevin W. Eliceiri. "Sonification of hyperspectral fluorescence microscopy datasets." F1000Research 5 (October 25, 2016): 2572. http://dx.doi.org/10.12688/f1000research.9233.1.

Abstract:
Recent advances in fluorescence microscopy have yielded an abundance of high-dimensional spectrally rich datasets that cannot always be adequately explored through conventional three-color visualization methods. While computational image processing techniques allow researchers to derive spectral characteristics of their datasets that cannot be visualized directly, there are still limitations in how to best visually display these resulting rich spectral data. Data sonification has the potential to provide a novel way for researchers to intuitively perceive these characteristics auditorily through direct interaction with the raw multi-channel data. The human ear is well tuned to detect subtle differences in sound that could represent discrete changes in fluorescence spectra. We present a proof of concept implementation of a functional data sonification workflow for analysis of fluorescence microscopy data as an FIJI ImageJ plugin and evaluate its utility with various hyperspectral microscopy datasets. Additionally, we provide a framework for prototyping and testing new sonification methods and a mathematical model to point out scenarios where vision-based spectral analysis fails and sonification-based approaches would not. With this first reported practical application of sonification to biological fluorescence microscopy and supporting computational tools for further exploration, we discuss the current advantages and disadvantages of sonification over conventional spectral visualization approaches. We also discuss where further efforts in spectral sonification need to go to maximize its practical biological applications.
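
To make the kind of mapping described in this abstract concrete, here is a minimal Python/NumPy sketch that treats one pixel's spectral channel intensities as the amplitudes of a small bank of sine partials, so spectrally different pixels yield audibly different chords. It is an illustrative sketch only, not the authors' FIJI/ImageJ plugin; the channel-to-frequency mapping and all constants are assumptions.

```python
import numpy as np

def spectrum_to_tone(channel_intensities, duration=0.5, sr=44100,
                     f_lo=220.0, f_hi=880.0):
    """Map one pixel's spectral channels to a chord of sine partials.

    Each spectral channel gets a fixed frequency between f_lo and f_hi
    (an assumed mapping); its normalised intensity sets the partial's
    amplitude, so pixels with different spectra sound like different chords.
    """
    x = np.asarray(channel_intensities, dtype=float)
    if x.max() > 0:
        x = x / x.max()                      # normalise amplitudes to [0, 1]
    freqs = np.linspace(f_lo, f_hi, len(x))  # one partial per channel
    t = np.arange(int(duration * sr)) / sr
    tone = sum(a * np.sin(2 * np.pi * f * t) for a, f in zip(x, freqs))
    return tone / max(len(x), 1)             # keep output roughly in [-1, 1]

# Example: two synthetic "pixels" with different spectral shapes
red_shifted  = spectrum_to_tone([0.1, 0.2, 0.4, 0.9, 1.0])
blue_shifted = spectrum_to_tone([1.0, 0.8, 0.3, 0.1, 0.0])
```
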
3

Liu, Wanyu, Michelle Agnes Magalhaes, Wendy E. Mackay, Michel Beaudouin-Lafon, and Frédéric Bevilacqua. "Motor Variability in Complex Gesture Learning: Effects of Movement Sonification and Musical Background." ACM Transactions on Applied Perception 19, no. 1 (January 31, 2022): 1–21. http://dx.doi.org/10.1145/3482967.

Abstract:
With the increasing interest in movement sonification and expressive gesture-based interaction, it is important to understand which factors contribute to movement learning and how. We explore the effects of movement sonification and users’ musical background on motor variability in complex gesture learning. We contribute an empirical study in which musicians and non-musicians learn two gesture sequences over three days, with and without movement sonification. Results show the interlaced interaction effects of these factors and how they unfold in the three-day learning process. For gesture 1, which is fast and dynamic with a direct “action-sound” sonification, movement sonification induces higher variability for both musicians and non-musicians on day 1. While musicians reduce this variability on day 2 and day 3 to a level similar to the no-auditory-feedback condition, non-musicians continue to have significantly higher variability. Across three days, musicians also have significantly lower variability than non-musicians. For gesture 2, which is slow and smooth with an “action-music” metaphor, there are virtually no effects. Based on these findings, we recommend that future studies take participants’ musical background into account, consider longitudinal designs to examine these effects on complex gestures, and interpret results with an awareness of the specific design of gesture and sound.
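
As a rough illustration of how motor variability of the kind studied above can be quantified, the sketch below resamples repeated gesture trials to a common length and takes the mean pointwise standard deviation across trials. This is a generic measure chosen for illustration, not the metric used in the paper; the resampling and averaging choices are assumptions.

```python
import numpy as np

def trial_variability(trials, n_samples=100):
    """Average across-trial standard deviation of time-normalised trajectories.

    trials: list of 1-D arrays (e.g. wrist speed over time), possibly of
    different lengths. Each is resampled to n_samples points so trials can
    be compared point by point; the score is the mean pointwise std.
    """
    resampled = []
    for tr in trials:
        tr = np.asarray(tr, dtype=float)
        old_t = np.linspace(0.0, 1.0, len(tr))
        new_t = np.linspace(0.0, 1.0, n_samples)
        resampled.append(np.interp(new_t, old_t, tr))
    stack = np.vstack(resampled)            # shape: (n_trials, n_samples)
    return float(np.std(stack, axis=0).mean())

# Example: three noisy repetitions of the same gesture profile
t = np.linspace(0, 1, 120)
trials = [np.sin(np.pi * t) + 0.05 * np.random.randn(t.size) for _ in range(3)]
print(trial_variability(trials))
```
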
4

Barth, Anna, Leif Karlstrom, Benjamin K. Holtzman, Arthur Paté, and Avinash Nayak. "Sonification and Animation of Multivariate Data to Illuminate Dynamics of Geyser Eruptions." Computer Music Journal 44, no. 1 (2020): 35–50. http://dx.doi.org/10.1162/comj_a_00551.

Abstract:
Sonification of time series data in natural science has gained increasing attention as an observational and educational tool. Sound is a direct representation for oscillatory data, but for most phenomena, less direct representational methods are necessary. Coupled with animated visual representations of the same data, the visual and auditory systems can work together to identify complex patterns quickly. We developed a multivariate data sonification and visualization approach to explore and convey patterns in a complex dynamic system, Lone Star Geyser in Yellowstone National Park. This geyser has erupted regularly for at least 100 years, with remarkable consistency in the interval between eruptions (three hours) but with significant variations in smaller scale patterns between each eruptive cycle. From a scientific standpoint, the ability to hear structures evolving over time in multiparameter data permits the rapid identification of relationships that might otherwise be overlooked or require significant processing to find. The human auditory system is adept at physical interpretation of call-and-response or causality in polyphonic sounds. Methods developed here for oscillatory and nonstationary data have great potential as scientific observational and educational tools, for data-driven composition with scientific and artistic intent, and towards the development of machine learning tools for pattern identification in complex data.
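
As a concrete example of the kind of multivariate parameter mapping described above, the sketch below maps one measured channel to pitch and another to loudness on a shared, time-compressed axis so that hours of data play back in seconds. It is a generic illustration under assumed mappings and scales, not the authors' rendering of the Lone Star Geyser data.

```python
import numpy as np

def two_channel_sonification(temperature, seismic_amp, play_seconds=10.0, sr=44100):
    """Pitch follows `temperature`, loudness follows `seismic_amp` (assumed mapping)."""
    n = int(play_seconds * sr)
    idx = np.linspace(0, 1, n)
    # Resample both channels onto the audio time grid (time compression).
    temp = np.interp(idx, np.linspace(0, 1, len(temperature)), temperature)
    amp  = np.interp(idx, np.linspace(0, 1, len(seismic_amp)),  seismic_amp)
    # Normalise to [0, 1] before mapping.
    temp = (temp - temp.min()) / (np.ptp(temp) + 1e-12)
    amp  = (amp  - amp.min())  / (np.ptp(amp)  + 1e-12)
    freq = 200.0 + 600.0 * temp                  # 200-800 Hz pitch range (assumed)
    phase = 2 * np.pi * np.cumsum(freq) / sr     # integrate frequency for a smooth glide
    return amp * np.sin(phase)

signal = two_channel_sonification(np.random.rand(3000), np.random.rand(3000))
```
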
5

Andreev, Artem A., Daria N. Yakhina, Daniil V. Bryankin, and Veronika V. Makarova. "Investigation of the dispersion process of polypropylene glycol in perfluorodecalin." Image Journal of Advanced Materials and Technologies 7, no. 1 (2022): 028–35. http://dx.doi.org/10.17277/jamt.2022.01.pp.028-035.

Abstract:
Perfluorinated polyesters (PFPE) are used as components of high-performance lubricants and oils operating in high and low temperature conditions, including in special-purpose products used in space and in Arctic conditions. To date, the methods for producing PFPE are complex processes that are quite demanding in terms of synthesis conditions, which makes it almost impossible to produce these compounds commercially on a large scale. One of the promising approaches to obtaining perfluorinated esters is liquid-phase direct fluorination. Toxic, ozone-depleting solvents have been used in these processes. When they are replaced with harmless perfluorinated liquids, for example perfluorodecalin (PFD), another problem arises – the original polyesters are insoluble in PFD, as confirmed by laser microinterferometry. In this regard, ultrasonic dispersion of the initial polypropylene glycols (PPGs) of various molecular weights in PFD was proposed in order to obtain emulsions for subsequent direct fluorination. The resulting emulsions were studied using gel permeation chromatography, dynamic light scattering and gravimetry. According to the results of the study, particle sizes and emulsion concentrations were obtained as a function of time after ultrasonic treatment, and it was also concluded that the molecular weight of PPG in the emulsion remains stable after sonification. The applicability of ultrasonic dispersion for obtaining a high specific surface area of the phase boundary in the PPG–PFD system is discussed, as is the feasibility of sonification in continuous or batch mode when implementing liquid-phase fluorination of PPG in PFD.
6

Linz, Jill A., and Christian Howat. "Atom Tones: investigating waveforms and spectra of atomic elements in an audible periodic chart using techniques found in music production." Journal of the Acoustical Society of America 152, no. 4 (October 2022): A220. http://dx.doi.org/10.1121/10.0016071.

Abstract:
Atom Music was introduced in 2019 as a way to create unique audible tones for each atomic element that are direct translations of that element’s spectral lines. Each atomic element produces a unique spectral line pattern that can be recognized as the fingerprint of that element. Sonification is the process of translating non-audible data into audible signals as a way to gain an understanding of the original data. In this paper, sonification is applied to atomic spectra, using technology primarily from music production. These were applied to atomic spectra using additive synthesis methods and analyzed using digital audio workstations. Interest in the audible tones has primarily been in element identification through each tone, as well as in musical interpretation. We investigate what insights can be made by observing the digital waveforms and spectra of each element tone. We consider whether there are patterns within the different element waveforms; if there is any correlation between the elements producing similar beat patterns; and if there are any harmonic relations between electron states that are represented by the spectra themselves. Results indicate that this method could be a useful tool in investigating atomic structure.
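
The "direct translation" of spectral lines described above can be imitated by converting each emission-line wavelength to a frequency and shifting it down by whole octaves until it lands in the audible range, then summing sine partials. The octave-shifting rule, equal partial amplitudes, and the example wavelengths (hydrogen Balmer lines) are illustrative assumptions, not the authors' production workflow.

```python
import numpy as np

C = 2.998e8  # speed of light, m/s

def line_to_audible(wavelength_nm, f_max=4000.0):
    """Convert a spectral line to an audible frequency by octave transposition."""
    f = C / (wavelength_nm * 1e-9)      # optical frequency in Hz (~1e14)
    while f > f_max:                    # shift down by octaves into the audible range
        f /= 2.0
    return f

def atom_tone(wavelengths_nm, duration=1.0, sr=44100):
    """Additive synthesis: one equal-amplitude sine partial per spectral line."""
    t = np.arange(int(duration * sr)) / sr
    freqs = [line_to_audible(w) for w in wavelengths_nm]
    return sum(np.sin(2 * np.pi * f * t) for f in freqs) / len(freqs)

# Hydrogen Balmer lines (nm) as a test "element"
hydrogen = atom_tone([656.3, 486.1, 434.0, 410.2])
```
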
7

Bień, Beata, and Jurand D. Bień. "Analysis of Reject Water Formed in the Mechanical Dewatering Process of Digested Sludge Conditioned by Physical and Chemical Methods." Energies 15, no. 5 (February 24, 2022): 1678. http://dx.doi.org/10.3390/en15051678.

Abstract:
Reject water separated from digested sludge may be a potential source of nutrients due to its high nutrient content. However, most often, reject water after sludge dewatering is directed to sewage lines at wastewater treatment plants, negatively affecting their operation, especially in the biological part. The activities related to sludge conditioning before dewatering have a direct impact on the quality of the reject water. The reject water of raw digested sludge is characterized by very high concentrations of ammonium nitrogen, at 1718 mgN-NH4+/dm3; phosphates, at 122.4 mgPO43−/dm3; and chemical oxygen demand (COD), at 2240 mgO2/dm3. The objective of the research was to determine the impact of selected sludge conditioning methods on the quality of reject water obtained after sludge dewatering. The following parameters were analyzed in the reject water: the chemical oxygen demand (COD), phosphates, ammonium nitrogen, and total suspended solids (TSS). It has been observed that the sludge sonification process increases the content of impurities (COD, phosphates) in reject water with an increase in the amplitude of the ultrasonic field. On the other hand, the chemical reagents cause a decrease in the concentration of the pollutants with an increase in the chemical dose. It has been found that the inorganic coagulant PIX 113 gives much better results regarding the reduction of contamination than the polyelectrolyte Zetag 8180.
8

Chiroiu, Veturia, Ligia Munteanu, Rodica Ioan, Ciprian Dragne, and Luciana Majercsik. "Using the Sonification for Hardly Detectable Details in Medical Images." Scientific Reports 9, no. 1 (November 27, 2019). http://dx.doi.org/10.1038/s41598-019-54080-7.

Abstract:
The inverse sonification problem is investigated in this article in order to detect hard-to-capture details in a medical image. The direct problem consists in converting the image data into sound signals by a transformation which involves three steps: data, acoustic parameters and sound representations. The inverse problem converts the sound signals back into image data. Using the known sonification operator, the inverse approach does not bring any gain in sonified medical imaging: replicating an already known image does not help diagnosis or surgical operation. In order to bring gains in medical imaging, a new sonification operator based on the Burgers equation of sound propagation is advanced in this paper. Sonified medical imaging is useful for interpreting medical images which, however powerful they may be, are never good enough on their own to aid tumour surgery. The inverse approach is exercised on several medical images used in surgical operations.
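
For reference, the viscous Burgers equation of nonlinear sound propagation mentioned in the abstract has the standard form below, where u is the acoustic field variable (e.g. particle velocity) and ν a viscosity-like coefficient; how the authors build their sonification operator from it is described in the paper itself.

```latex
\frac{\partial u}{\partial t} + u\,\frac{\partial u}{\partial x}
  = \nu\,\frac{\partial^{2} u}{\partial x^{2}}
```
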
9

Schuller, Jan C., and Ulf Henrik Göhle. "The Music of Heart Rate Variability." Leonardo, October 20, 2022, 1–3. http://dx.doi.org/10.1162/leon_a_02292.

Abstract:
The variability of the heart rate reflects the regulatory status of the autonomous nervous system. For analytic purposes accessible to laypersons, the authors developed methods to transform heart rate variability into acoustical information, either through direct sound synthesis or the production of MIDI files (Musical Instrument Digital Interface) to trigger other devices. The authors describe the methods, some results and discuss applications for analytical and artistic purposes, such as music composition and biofeedback. The resulting “music” is complex, rhythmic, and often has unexpected and interesting implied harmony. Keywords: HRV, heart rate variability, sonification, biorhythms.
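
One of the two output routes mentioned above, MIDI file production, can be sketched as follows: a series of beat-to-beat (RR) intervals becomes a monophonic note sequence. The sketch uses the third-party mido package; the pitch and duration mappings are assumptions made for illustration, not the authors' rules.

```python
# Requires the third-party `mido` package (pip install mido).
from mido import Message, MidiFile, MidiTrack

def rr_intervals_to_midi(rr_ms, path="hrv.mid", ticks_per_ms=0.5):
    """Turn a series of beat-to-beat (RR) intervals into a monophonic MIDI file.

    Assumed mapping: shorter intervals (faster heart rate) give higher pitches,
    and each note lasts as long as the interval it represents.
    """
    mid = MidiFile()
    track = MidiTrack()
    mid.tracks.append(track)
    for rr in rr_ms:
        note = int(max(36, min(96, 60 + (900 - rr) / 20)))   # crude pitch mapping
        ticks = int(rr * ticks_per_ms)                        # duration in MIDI ticks
        track.append(Message('note_on',  note=note, velocity=64, time=0))
        track.append(Message('note_off', note=note, velocity=64, time=ticks))
    mid.save(path)

rr_intervals_to_midi([812, 790, 845, 910, 880, 760, 795])
```
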
10

Fedorova, Ksenia. "Mechanisms of Augmentation in Proprioceptive Media Art." M/C Journal 16, no. 6 (November 7, 2013). http://dx.doi.org/10.5204/mcj.744.

Abstract:
Introduction In this article, I explore the phenomenon of augmentation by questioning its representational nature and analyzing aesthetic modes of our interrelationship with the environment. How can senses be augmented and how do they serve as mechanisms of enhancing the feeling of presence? Media art practices offer particularly valuable scenarios of activating such mechanisms, as the employment of digital technology allows them to operate on a more subtle level of perception. Given that these practices are continuously evolving, this analysis cannot claim to be a comprehensive one, but rather aims to introduce aspects of the specific relations between augmentation, sense of proprioception, technology, and art. Proprioception is one of the least detectable and trackable human senses because it involves our intuitive sense of positionality, which suggests a subtle equilibrium between a center (our individual bodies) and the periphery (our immediate environments). Yet, as any sense, proprioception implies a communicational chain, a network of signals traveling and exchanging information within the body-mind complex. The technological augmentation of this dynamic process produces an interference in our understanding of the structure and elements, the information sent/received. One way to understand the operations of the senses is to think about them as images that the mind creates for itself. Artistic intervention (usually) builds upon exactly this logic: representation of images generated in mind, supplementing or even supplanting the existing collection of inner images with new, created ones. Yet, in case of proprioception the only means to interfere with and augment these inner images is on bodily level. Hence, the question of communication through images (or representations) should be extended towards a more complex theory of embodied perception. Drawing on phenomenology, cognitive science, and techno-cultural studies, I focus on the potential of biofeedback technologies to challenge and transform our self-perception by conditioning new pathways of apprehension (sometimes by creating mechanisms of direct stimulation of neural activity). I am particularly interested in how the awareness of the self (grounded in the felt relationality of our body parts) is most significantly activated at the moments of disturbance of balance, in situations of perplexity and disorientation. Projects by Marco Donnarumma, Sean Montgomery, and other artists working with biofeedback aesthetically validate and instantiate current research about neuro-plasticity, with technologically mediated sensory augmentation as one catalyst of this process. Augmentation as Representation: Proprioception and Proprioceptive Media Representation has been one of the key ways to comprehend reality. But representation also constitutes a spatial relation of distancing and separation: the spectator encounters an object placed in front of him, external to him. Thus, representation is associated more with an analytical, rather than synthetic, methodology because it implies detachment and division into parts. Both methods involve relation, yet in the case of representation there is a more distinct element of distance between the representing subject and represented object. Representation is always a form of augmentation: it extends our abilities to see the "other", otherwise invisible sides and qualities of the objects of reality. 
Representation is key to both science and art, yet in case of the latter, what is represented is not a (claimed) "objective" scheme of reality, but rather images of the imaginary, inner reality (even figurative painting always presents a particular optical and psychological perspective, to say nothing about forms of abstract art). There are certain kinds of art (visual arts, music, dance, etc.) that deal with different senses and thus, build their specific representational structures. Proprioception is one of the senses that occupies relatively marginal position in artistic production (which is exactly because of the specificity of its representational nature and because it does not create a sense of an external object. The term "proprioception" comes from Latin propius, or "one's own", "individual", and capio, cepi – "to receive", "to perceive". It implies a sense of one's self felt as a relational unity of parts of the body most vividly discovered in movement and in effort employed in it. The loss of proprioception usually means loss of bodily orientation and a feeling of one's body (Sacks 43-54). On the other hand, in case of additional stimulation and training of this sense (not only via classical cyber-devices, like cyber-helmets, gloves, etc. that set a different optics, but also techniques of different kinds of altered states of mind, e.g. through psychotropics, but also through architecture of virtual space and acoustics) a sense of disorientation that appears at first changes towards some analogue of reactions of enthusiasm, excitement discovery, and emotion of approaching new horizons. What changes is not only perception of external reality, but a sense of one's self: the self is felt as fluid, flexible, with penetrable borders. Proprioception implies initial co-existence of the inner and outer space on the basis of originary difference and individuality/specificity of the occupied position. Yet, because they are related, the "external" and "other" already feels as "one's own", and this is exactly what causes the sense of presence. Among the many possible connections that the body, in its sense of proprioception, is always already ready for, only a certain amount gets activated. The result of proprioception is a special kind of meta-stable internal image. This image may not coincide with the optical, auditory, or haptic image. According to Brian Massumi, proprioception translates the exertions and ease of the body's encounters with objects into a muscular memory of relationality. This is the cumulative memory of skill, habit, posture. At the same time as proprioception folds tactility in, it draws out the subject's reactions to the qualities of the objects it perceives through all five senses, bringing them into the motor realm of externalizable response. (59) This internal image is not mediated by anything, though it depends directly on the relations between the parts. It cannot be grasped because it is by definition fluid and dynamic. The position in one point is replaced here by a position-in-movement (point-in-movement). "Movement is not indexed by position. Rather, the position is born in movement, from the relation of movement towards itself" (Massumi 179). Philosopher of "extended mind" Andy Clark notes that we should distinguish between a real body schema (non-conscious configuration) and a body image (conscious construct) (Clark). It is the former that is important to understand, and yet is the most challenging. 
Due to its fluidity and self-referentiality, proprioception is not presentable to consciousness (the unstable internal image that it creates resides in consciousness but cannot be grasped and thus re-presented). A feeling/sense, it is not bound by sensible forms that would serve as means of objectification and externalization. As Barbara Montero observes, while the objects of vision and hearing, i.e. the most popular senses involved in the arts, are beyond one's body, sense of proprioception relates directly to the bodily sensation, it does not represent any external objects, but the sensory itself (231). These characteristics of proprioception help to reframe the question of augmentation as mediation: in the case of proprioception, the medium of sensation is the very relational structure of the body itself, irrespective of the "exteroceptive" (tactile) or "interoceptive" (visceral) dimensions of sensibility. The body is understood, then, as the "body without image,” and its proprioceptive effect can then be described as "the sensibility proper to the muscles and ligaments" (Massumi 58). Proprioception in (Media) Art One of the most convincing ways of externalization and (re)presentation of the data of proprioception is through re-production of its structure and its artificial enhancement with the help of technology. This can be achieved in at least two ways: by setting up situations and environments that emphasize self-perspective and awareness of perception, and by presenting measurements of bio-data and inviting into dialogue with them. The first strategy may be connected to disorientation and shifted perspective that are created in immersive virtual environments that make the role of otherwise un-trackable, fluid sense of proprioception actually felt and cognized. These effects are closely related to the nuances of perception of space, for instance, to spatial illusion. Practice of spatial illusion in the arts traces its history as far back as Roman frescos, trompe l’oeil, as well as phantasmagorias, like magic lantern. Geometrically, the system of the 360º image is still the most effective in producing a sense of full immersion—either in spaces from panoramas, Stereopticon, Cinéorama to CAVE (Computer Augmented Virtual Environments), or in devices for an individual spectator’s usage, like a stereoscope, Sensorama and more recent Head Mounted Displays (HMD). All these devices provide a sense of hermetic enclosure and bodily engagement with its scenes (realistic or often fantastical). Their images are frameless and thus immeasurable (lack of the sense of proportion provokes feeling of disorientation), image apparatus and the image itself converge here into an almost inseparable total unity: field of vision is filled, and the medium becomes invisible (Grau 198-202; 248-255). Yet, the constructed image is even more frameless and more peculiarly ‘mental’ in environments created on the basis of objectless or "immaterial" media, like light or sound; or in installations prioritizing haptic sensation and in responsive architectures, i.e. environments that transform physically in reaction to their inhabitants. 
The examples may include works by Olafur Eliasson that are centered around the issues of conscious perception and employ various optical and other apparata (mirrors, curved surfaces, coloured glass, water systems) to shift the habitual perspective and make one conscious of the subtle changes in the environment depending on one's position in space (there have been instances of spectators in Eliasson's installations falling down after trying to lean against an apparent wall that turned out to be a mere optical construct.). Figure 1: Olafur Eliasson, Take Your Time, 2008. © Olafur Eliasson Studio. In his classic H2OExpo project for Delta Expo in 1997, the Dutch architect Lars Spuybroek experimented with the perception of instability. There is no horizontal surface in the pavilion; floors, composed of interconnected elliptical volumes, transform into walls and walls into ceilings, promoting a sense of fluidity and making people respond by falling, leaning, tilting and "experiencing the vector of one’s own weight, and becoming sensitized to the effects of gravity" (Schwartzman 63). Along the way, specially installed sensors detect the behaviour of the ‘walker’ and send signals to the system to contribute further to the agenda of imbalance and confusion by changing light, image projection, and sound.Figure 2: Lars Spuybroek, H2OExpo, 1994-1997. © NOX/ Lars Spuybroek. Philip Beesley’s Hylozoic Ground (2010) is also a responsive environment filled by a dense organic network of delicate illuminated acrylic tendrils that can extend out to touch the visitor, triggering an uncanny mixture of delight and discomfort. The motif of pulsating movement was inspired by fluctuations in coral reefs and recreated via the system of precise sensors and microprocessors. This reference to an unfamiliar and unpredictable natural environment, which often makes us feel cautious and ultra-attentive, is a reminder of our innate ability of proprioception (a deeply ingrained survival instinct) and its potential for a more nuanced, intimate, emphatic and bodily rooted communication. Figure 3: Philip Beesley, Hylozoic Ground, 2010. © Philip Beesley Architect Inc. Works of this kind stimulate awareness of both the environment and one's own response to it. Inviting participants to actively engage with the space, they evoke reactions of self-reflexivity, i.e. the self becomes the object of its own exploration and (potentially) transformation. Another strategy of revealing the processes of the "body without image" is through representing various kinds of bio-data, bodily affective reactions to certain stimuli. Biosignal monitoring technologies most often employed include EEG (Electroencephalogram), EMG (Electromyogram), GSR (Galvanic Skin Response), ECG (Electrocardiogram), HRV (Heart Rate Variability) and others. Previously available only in medical settings and research labs, many types of sensors (bio and environmental) now become increasingly available (bio-enabled products ranging from cardio watches—an instance of the "quantified self" trend—to brain wave-controlled video games). As the representatives of the DIY makers community put it: "By monitoring some phenomena (biofeedback) you can train yourself to modulate them, possibly improving your emotional state. Biosensing lets you interact more naturally with digital systems, creating cyborg-like extensions of your body that overcome disabilities or provide new abilities. 
You can also share your bio-signals, if you choose, to participate in new forms of communication" (Montgomery). What is it about these technologies besides understanding more accurately the unconscious and invisible signals? The critical question in relation to biofeedback data is about the adequacy of the transference of the initial signal, about the "new" brought by the medium, as well as the ontological status of the resulting representation. These data are reflections of something real, yet themselves have a different weight, also providing the ground for all sorts of simulative methods and creation of mixed realities. External representations, unlike internal, are often attributed a prosthetic nature that is treated as extensions of existing skills. Besides serving their direct purpose (for instance, maps give detailed picture of a distant location), these extensions provide certain psychological effects, such as disorientation, displacement, a shift in a sense of self and enhancement of the sense of presence. Artistic experiments with bio-data started in the 1960s most famously with employing the method of sonification. Among the pioneers were the composers Alvin Lucier, Richard Teitelbaum, David Rosenblum, Erkki Kurenemi, Pierre Henry, and others. Today's versions of biophysical performance may include not only acoustic, but also visual interpretation, as well as subtle narrative scenarios. An example can be Marco Donnarumma's Hypo Chrysos, a piece that translates visceral strain in sound and moving images. The title refers to the type of a punishing trial in one of the circles of hell in Dante's Divine Comedy: the eternal task of carrying heavy rocks is imitated by the artist-performer, while the audience can feel the bodily tension enhanced by sound and imagery. The state of the inner body is, thus, amplified, or augmented. The sense of proprioception experienced by the performer is translated into media perceivable by others. In this externalized form it can also be shared, i.e. released into a space of inter-subjectivity, where it receives other, collective qualities and is not perceived negatively, in terms of pressure. Figure 4: Marco Donnarumma, Hypo Chrysos, 2011. © Marco Donnarumma. Another example can be an installation Telephone Rewired by the artist-neuroscientist Sean Montgomery. Brainwave signals are measured from each visitor upon the entrance to the installation site. These individual data then become part of the collective archive of the brainwaves of all the participants. In the second room, the viewer is engulfed by pulsing light and sound that mimic endogenous brain waveforms of the previous viewers. As in the experience of Donnarumma's performance, this process encourages tuning in to the inner state of the other and finding resonating states in one's own body. It becomes a tool for self-exploration, self-knowledge, and self-control, as well as for developing skills of collective being, of shared body-mind topologies. Synchronization of mental and bodily states of multiple people serves here a broader and deeper goal of training collaborative and empathic abilities. An immersive experience, it triggers deep embodied neural circuits, reaching towards the most authentic reactions not mediated by conscious procedures and judgment. Figure 5: Sean Montgomery, Telephone Rewired, 2013. © Sean Montgomery. Conclusion The potential of biofeedback as a strategy for art projects is a rich area that artists have only begun to explore. 
The layer of the imaginary and the fictional (which makes art special and different from, for instance, science) can add a critical dimension to understanding the processes of augmentation and mediation. As the described examples demonstrate, art is an investigative journey that can be engaging, surprising, and awakening towards the more subtle and acute forms of thinking and feeling. This astuteness and percipience are especially needed as media and technologies penetrate and affect our very abilities to apprehend reality. We need new tools to make independent and individual judgment. The sense of proprioception establishes a productive challenge not only for science, but also for the arts, inviting a search for new mechanisms of representing the un-presentable and making shareable and communicable what is, by definition, individual, fluid, and ungraspable. Collaborative cognition emerging from the augmentation of proprioception that is enabled by biofeedback technologies holds distinct promise for exploration of not only subjective, but also inter-subjective states and aesthetic strategies of inducing them. References Beesley, Philip. Hylozoic Ground. 2010. Venice Biennale, Venice. Clark, Andy, and David J. Chalmers. “The Extended Mind.” Analysis 58.1 (1998):7-19. Donnarumma, Marco. Hypo Chrysos: Action Art for Vexed Body and Biophysical Media. 2011. Xth Sense Biosensing Wearable Technology. MADATAC Festival, Madrid. Eliasson, Olafur. Take Your Time, 2008. P.S.1 Contemporary Art Centre; Museum of Modern Art, New York. Grau, Oliver. Virtual Art: From Illusion to Immersion. Cambridge, Mass.: MIT Press, 2003. Massumi, Brian. Parables of the Virtual: Movement, Affect, Sensation. Durham: Duke University Press, 2002. Montero, Barbara. "Proprioception as an Aesthetic Sense." Journal of Aesthetics and Art Criticism 64.2 (2006): 231-242. Montgomery, Sean, and Ira Laefsky. "Biosensing: Track Your Body's Signals and Brain Waves and Use Them to Control Things." Make 26. 1 Oct. 2013 ‹http://www.make-digital.com/make/vol26?pg=104#pg104›. Sacks, Oliver. "The Disembodied Lady". The Man Who Mistook His Wife for a Hat and Other Clinical Tales. Philippines: Summit Books, 1985. Schwartzman, Madeline, See Yourself Sensing. Redefining Human Perception. London: Black Dog Publishing, 2011. Spuybroek, Lars. Waterland. 1994-1997. H2O Expo, Zeeland, NL.

Conference papers on the topic "Direct sonification"

1

Fernström, Mikael, and Caolan McNamara. "After Direct Manipulation - Direct Sonification." In International Conference on Auditory Display '98. BCS Learning & Development, 1998. http://dx.doi.org/10.14236/ewic/ad1998.14.

2

Vickers, Paul, and Robert Höldrich. "Direct Segmented Sonification of Characteristic Features of the Data Domain." In ICAD 2019: The 25th International Conference on Auditory Display. Newcastle upon Tyne, United Kingdom: Department of Computer and Information Sciences, Northumbria University, 2019. http://dx.doi.org/10.21785/icad2019.043.

Abstract:
Like audification, auditory graphs maintain the temporal relationships of data while using parameter mappings to represent the ordinate values. Such direct approaches have the advantage of presenting the data stream ‘as is’ without the imposed interpretations or accentuation of particular features found in indirect approaches. However, datasets can often be subdivided into short non-overlapping variable length segments that each encapsulate a discrete unit of domain-specific significant information and current direct approaches cannot represent these. We present Direct Segmented Sonification (DSSon) for highlighting the segments’ data distributions as individual sonic events. Using domain knowledge DSSon presents segments as discrete auditory gestalts while retaining the overall temporal regime and relationships of the dataset. The method’s structural decoupling from the sound stream’s formation means playback speed is independent of the individual sonic event durations, thereby offering highly flexible time compression/stretching to allow zooming into or out of the data. DSSon displays high directness, letting the data ‘speak’ for themselves.
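
The key property described above, that overall playback speed is decoupled from the duration of the individual sonic events, can be sketched as follows: segment onsets are scaled by a time-compression factor, while each segment is rendered as a short pitch contour of fixed length. This is only a schematic reading of the idea, not the published DSSon implementation; the segmentation, mapping, and all constants are assumptions.

```python
import numpy as np

def dsson_like(segments, onsets_s, compression=60.0, event_dur=0.25, sr=44100):
    """Render each data segment as its own short sonic event.

    segments    : list of 1-D arrays (one per domain-specific segment)
    onsets_s    : segment start times in data time (seconds)
    compression : data seconds per playback second; changing it moves the
                  events (zooming in/out) but leaves each event's duration alone.
    """
    total = int((max(onsets_s) / compression + event_dur) * sr) + 1
    out = np.zeros(total)
    t = np.arange(int(event_dur * sr)) / sr
    for seg, onset in zip(segments, onsets_s):
        seg = np.asarray(seg, dtype=float)
        seg = (seg - seg.min()) / (np.ptp(seg) + 1e-12)
        # Pitch contour follows the segment's values, 300-1200 Hz (assumed mapping).
        freq = 300.0 + 900.0 * np.interp(np.linspace(0, 1, t.size),
                                         np.linspace(0, 1, seg.size), seg)
        event = np.sin(2 * np.pi * np.cumsum(freq) / sr) * np.hanning(t.size)
        start = int(onset / compression * sr)
        out[start:start + event.size] += event
    return out

audio = dsson_like([np.random.rand(40), np.random.rand(80)], onsets_s=[0.0, 120.0])
```
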
3

Morawitz, Falk. "Multilayered Narration in Electroacoustic Music Composition Using Nuclear Magnetic Resonance Data Sonification and Acousmatic Storytelling." In ICAD 2019: The 25th International Conference on Auditory Display. Newcastle upon Tyne, United Kingdom: Department of Computer and Information Sciences, Northumbria University, 2019. http://dx.doi.org/10.21785/icad2019.052.

Abstract:
Nuclear magnetic resonance (NMR) spectroscopy is an analytical tool to determine the structure of chemical compounds. Unlike other spectroscopic methods, signals recorded using NMR spectrometers are frequently in a range of zero to 20000 Hz, making direct playback possible. As each type of molecule has, based on its structural features, distinct and predictable features in its NMR spectra, NMR data sonification can be used to create auditory ‘fingerprints’ of molecules. This paper describes the methodology of NMR data sonification of the nuclei nitrogen, phosphorus, and oxygen and analyses the sonification products of DNA and protein NMR data. The paper introduces On the Extinction of a Species, an acousmatic music composition combining NMR data sonification and voice narration. Ideas developed in electroacoustic composition, such as acousmatic storytelling and sound-based narration, are presented and investigated for their use in sonification-based creative works.
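
Because the recorded NMR signal already lies in the audio band, the most direct rendering is audification: write the time-domain signal straight to an audio file. The sketch below does this for a synthetic free-induction-decay-like signal built from decaying cosines; the frequencies, decay constant, and WAV output are illustrative assumptions, not the paper's pipeline.

```python
import wave
import numpy as np

def write_wav(path, signal, sr=44100):
    """Write a float signal in [-1, 1] as a 16-bit mono WAV file."""
    pcm = (np.clip(signal, -1, 1) * 32767).astype(np.int16)
    with wave.open(path, 'wb') as wf:
        wf.setnchannels(1)
        wf.setsampwidth(2)
        wf.setframerate(sr)
        wf.writeframes(pcm.tobytes())

# Toy FID-like signal: decaying cosines at a few audio-band frequencies,
# standing in for resonances a spectrometer would record directly.
sr = 44100
t = np.arange(3 * sr) / sr
freqs = [430.0, 1250.0, 2780.0]                      # assumed, purely illustrative
fid = sum(np.cos(2 * np.pi * f * t) for f in freqs) * np.exp(-t / 0.8)
write_wav("fid_audification.wav", fid / len(freqs))
```
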
4

Kreković, G., and I. Vican. "Towards a Parallel Computing Framework for Direct Sonification of Multivariate Chronological Data." In AM '17: Audio Mostly 2017. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3123514.3123551.

5

Blanco, Andrea Lorena Aldana, Steffen Grautoff, and Thomas Hermann. "CardioSounds: A Portable System to Sonify ECG Rhythm Disturbances in Real-Time." In The 24th International Conference on Auditory Display. Arlington, Virginia: The International Community for Auditory Display, 2018. http://dx.doi.org/10.21785/icad2018.028.

Abstract:
This paper is a continuation and extension of our previous work [1]. CardioSounds is a portable system that allows users to measure and sonify their electrocardiogram signal in real-time. The ECG signal is acquired using the hardware platform BITalino and subsequently analyzed and sonified using a Raspberry Pi. Users can control basic features of the system (start recording, stop recording) using their smartphone. The system is meant to be used for diagnosis and monitoring of cardiac pathologies, providing users with the possibility to monitor a signal without occupying their visual attention. In this paper, we introduce a novel method, anticipatory mapping, to sonify rhythm disturbances such as atrial fibrillation, atrial flutter and ventricular fibrillation. Anticipatory mapping enhances perception of rhythmic details without disrupting the direct perception of the actual heart beat rhythm. We test the method on selected pathological data involving three of the best-known rhythm disturbances. A preliminary perception test to assess the aesthetics of the sonifications and their possible use in medical scenarios shows that the anticipatory mapping method is regarded as informative for discerning healthy and pathological states; however, there is no agreement about a preferred sonification type.
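
A real-time ECG sonification of the kind described above first needs the beat times; the sketch below shows a very simple offline R-peak detector (threshold plus refractory period) whose output could then trigger per-beat sounds. It is a generic illustration only, not the CardioSounds pipeline or its anticipatory mapping, and the threshold and refractory values are assumptions.

```python
import numpy as np

def detect_r_peaks(ecg, sr, threshold_frac=0.6, refractory_s=0.25):
    """Crude R-peak detector: local maxima above a fraction of the global maximum,
    separated by at least a refractory period. Real detectors (e.g. Pan-Tompkins)
    are far more robust; this is only meant to drive a per-beat sonification."""
    ecg = np.asarray(ecg, dtype=float)
    thr = threshold_frac * ecg.max()
    refractory = int(refractory_s * sr)
    peaks, last = [], -refractory
    for i in range(1, len(ecg) - 1):
        if ecg[i] > thr and ecg[i] >= ecg[i - 1] and ecg[i] >= ecg[i + 1] \
                and i - last >= refractory:
            peaks.append(i)
            last = i
    return np.array(peaks)

# Each detected peak could then trigger a short tone or click at time peaks / sr.
```
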
6

Collins, Nick. "Sonification of the Riemann Zeta Function." In ICAD 2019: The 25th International Conference on Auditory Display. Newcastle upon Tyne, United Kingdom: Department of Computer and Information Sciences, Northumbria University, 2019. http://dx.doi.org/10.21785/icad2019.003.

Abstract:
The Riemann zeta function is one of the great wonders of mathematics, with a deep and still not fully solved connection to the prime numbers. It is defined via an infinite sum analogous to Fourier additive synthesis, and can be calculated in various ways. It was Riemann who extended the consideration of the series to complex number arguments, and the famous Riemann hypothesis states that the non-trivial zeroes of the function all occur on the critical line 0.5 + ti and, what is more, hold a deep correspondence with the prime numbers. For the purposes of sonification, the rich set of mathematical ideas to analyse the zeta function provides strong resources for sonic experimentation. The positions of the zeroes on the critical line can be directly sonified, as can values of the zeta function in the complex plane; approximations to the prime spectrum of prime powers and the Riemann spectrum of the zeroes can also be rendered; more abstract ideas concerning the function also provide interesting scope.
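
A minimal version of directly sonifying the positions of the zeroes on the critical line is to treat the imaginary parts t of the first few non-trivial zeros as a rising sequence of tone frequencies after scaling into the audible range. The imaginary parts below are hard-coded from their well-known values; the scaling factor and note lengths are assumptions made for this sketch.

```python
import numpy as np

# Imaginary parts t of the first non-trivial zeros 1/2 + ti of the zeta function.
ZETA_ZEROS = [14.134725, 21.022040, 25.010858, 30.424876, 32.935062]

def zeros_to_tones(zeros=ZETA_ZEROS, scale=20.0, note_dur=0.4, sr=44100):
    """Play each zero as a short sine tone at frequency scale * t (assumed mapping)."""
    t = np.arange(int(note_dur * sr)) / sr
    env = np.hanning(t.size)                      # fade in/out to avoid clicks
    notes = [env * np.sin(2 * np.pi * (scale * z) * t) for z in zeros]
    return np.concatenate(notes)

sequence = zeros_to_tones()   # tone frequencies of roughly 283, 420, 500, 608, 659 Hz
```
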
7

Cádiz, Rodrigo F., Lothar Droppelmann, Max Guzmán, and Cristian Tejos. "Auditory Graphs from Denoising Real Images Using Fully Symmetric Convolutional Neural Networks." In ICAD 2021: The 26th International Conference on Auditory Display. icad.org: International Community for Auditory Display, 2021. http://dx.doi.org/10.21785/icad2021.028.

Abstract:
Auditory graphs are a very useful way to deliver numerical information to visually impaired users. Several tools have been proposed for chart data sonification, including audible spreadsheets, custom interfaces, interactive tools and automatic models. In the case of the latter, most of these models are aimed towards the extraction of contextual information, and not many solutions have been proposed for the generation of an auditory graph directly from the pixels of an image by the automatic extraction of the underlying data. These kinds of tools can dramatically augment the availability and usability of auditory graphs for the visually impaired community. We propose a deep learning-based approach for the generation of an automatic sonification of an image containing a bar or a line chart using only pixel information. In particular, we took a denoising approach to this problem, based on a fully symmetric convolutional neural network architecture. Our results show that this approach works as a basis for the automatic sonification of charts directly from the information contained in the pixels of an image.
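
A fully symmetric convolutional denoiser of the general kind named in the abstract pairs each convolutional encoder layer with a mirrored transposed-convolution decoder layer and is trained to map noisy chart images to clean ones. The PyTorch sketch below is only a schematic stand-in for that idea; the layer sizes, activations, and loss are assumptions, not the authors' architecture.

```python
# Requires PyTorch (pip install torch); a schematic sketch, not the paper's network.
import torch
import torch.nn as nn

class SymmetricDenoiser(nn.Module):
    """Encoder and decoder mirror each other layer for layer."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(),                                              # 16x16 -> 32x32
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1),
            nn.Sigmoid(),                                           # 32x32 -> 64x64
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = SymmetricDenoiser()
noisy = torch.rand(4, 1, 64, 64)          # a batch of noisy chart crops
clean = model(noisy)                      # denoised output, same shape
loss = nn.functional.mse_loss(clean, torch.rand(4, 1, 64, 64))  # vs. ground truth
```
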
8

Yang, Jiajun, and Thomas Hermann. "Parallel Computing of Particle Trajectory Sonification to Enable Real-time Interactivity." In The 23rd International Conference on Auditory Display. Arlington, Virginia: The International Community for Auditory Display, 2017. http://dx.doi.org/10.21785/icad2017.022.

Abstract:
In this paper, we revisit, explore and extend the Particle Trajectory Sonification (PTS) model, which supports cluster analysis of high-dimensional data by probing a model space with virtual particles which are ‘gravitationally’ attracted to a mode of the dataset’s potential function. The particles’ kinetic energy progression as a function of time adds directly to a signal which constitutes the sonification. The exponential increase in computation power since the model’s conception in 1999 now enables, for the first time, the investigation of real-time interactivity in such complex, interwoven dynamic sonification models. We sped up the computation of the PTS model with (i) data optimization via vector quantization, and (ii) parallel computing via OpenCL. We investigated the performance of sonifying high-dimensional complex data under different approaches. The results show a substantial increase in speed when applying vector quantization and parallelism with the CPU. GPU parallelism provided a substantial speedup for very large numbers of particles compared to the CPU, but did not show enough benefit for a low number of particles due to copying overhead. A hybrid OpenCL implementation is presented to maximize the benefits of both worlds.
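
The core loop of a particle-trajectory-style sonification can be sketched as follows: virtual particles are attracted toward the data points, their motion is integrated step by step, and the summed kinetic energy per step becomes the audio signal. This is a simplified single-threaded toy (no vector quantization or OpenCL), and the force law, damping, and step size are arbitrary assumptions.

```python
import numpy as np

def particle_trajectory_sonification(data, n_particles=16, n_steps=44100,
                                     dt=1e-3, damping=0.995, rng=None):
    """Toy PTS: particles fall toward data points; summed kinetic energy is the signal."""
    rng = np.random.default_rng(rng)
    data = np.asarray(data, dtype=float)              # shape (n_points, n_dims)
    pos = data[rng.integers(len(data), size=n_particles)] \
          + rng.normal(0, 1, (n_particles, data.shape[1]))
    vel = np.zeros_like(pos)
    signal = np.empty(n_steps)
    for step in range(n_steps):
        # Gaussian-shaped attraction toward every data point (assumed potential).
        diff = data[None, :, :] - pos[:, None, :]     # (particles, points, dims)
        w = np.exp(-0.5 * np.sum(diff**2, axis=2))    # closer points pull harder
        force = np.sum(w[:, :, None] * diff, axis=1)
        vel = damping * (vel + dt * force)            # damped integration
        pos = pos + dt * vel
        signal[step] = np.sum(vel**2)                 # kinetic energy -> one sample
    return signal - signal.mean()                     # remove DC offset

data = np.random.randn(200, 3) * 0.3 + np.array([1.0, -1.0, 0.5])
audio = particle_trajectory_sonification(data, n_steps=8000)
```
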
9

Markum, Harald, Alberto de-Campo, Natascha Hoermann, Willibald Plessas, and Bianka Sengl. "Sonification of lattice data: the spectrum of the Dirac operator across the deconfinement transition." In XXIIIrd International Symposium on Lattice Field Theory. Trieste, Italy: Sissa Medialab, 2005. http://dx.doi.org/10.22323/1.020.0152.
