Dissertations / Theses on the topic 'Sonification'


Consult the top 50 dissertations / theses for your research on the topic 'Sonification.'


1

Berman, Lewis Irwin. "Program comprehension through sonification." Thesis, Durham University, 2011. http://etheses.dur.ac.uk/1396/.

Abstract:
Background: Comprehension of computer programs is daunting, thanks in part to clutter in the software developer's visual environment and the need for frequent visual context changes. Non-speech sound has been shown to be useful in understanding the behavior of a program as it is running. Aims: This thesis explores whether using sound to help understand the static structure of programs is viable and advantageous. Method: A novel concept for program sonification is introduced. Non-speech sounds indicate characteristics of and relationships among a Java program's classes, interfaces, and methods. A sound mapping is incorporated into a prototype tool consisting of an extension to the Eclipse integrated development environment communicating with the sound engine Csound. Developers examining source code can aurally explore entities outside of the visual context. A rich body of sound techniques provides expanded representational possibilities. Two studies were conducted. In the first, software professionals participated in exploratory sessions to informally validate the sound mapping concept. The second study was a human-subjects experiment to discover whether using the tool and sound mapping improve performance of software comprehension tasks. Twenty-four software professionals and students performed maintenance-oriented tasks on two Java programs with and without sound. Results: Viability is strong for differentiation and characterization of software entities, less so for identification. The results show no overall advantage of using sound in terms of task duration at a 5% level of significance. The results do, however, suggest that sonification can be advantageous under certain conditions. Conclusions: The use of sound in program comprehension shows sufficient promise for continued research. 
Limitations of the present research include restriction to particular types of comprehension tasks, a single sound mapping, a single programming language, and limited training time. Future work includes experiments and case studies employing a wider set of comprehension tasks, sound mappings in domains other than software, and adding navigational capability for use by the visually impaired.
2

Ejdbo, Malin, and Elias Elmquist. "Interactive Sonification in OpenSpace." Thesis, Linköpings universitet, Medie- och Informationsteknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-170250.

Abstract:
This report presents the work of a master's thesis whose aim was to investigate how sonification can be used in the space-visualization software OpenSpace to further convey information about the Solar System. A sonification was implemented in SuperCollider and integrated into OpenSpace using Open Sound Control to send positional data controlling the panning and sound level of the sonification. The graphical user interface of OpenSpace was also extended to make the sonification interactive. Evaluations were conducted both online and in the Dome theater to assess how well the sonification conveyed information. The outcome of the evaluations shows promising results, suggesting that sonification has a future in conveying information about the Solar System.
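The positional mapping this abstract describes can be sketched in miniature. The following is an illustrative reconstruction, not the thesis's actual SuperCollider implementation; the function name, coordinate convention and inverse-distance level law are assumptions:

```python
import math

def pan_and_level(listener_xy, source_xy, ref_dist=1.0):
    """Map a source position (relative to a listener; x right, y forward)
    to a stereo pan in [-1, 1] and a sound level in (0, 1].
    Pan follows the sine of the azimuth; level falls off with inverse
    distance, clamped so nearby sources never exceed unity."""
    dx = source_xy[0] - listener_xy[0]
    dy = source_xy[1] - listener_xy[1]
    azimuth = math.atan2(dx, dy)      # 0 = straight ahead, +pi/2 = right
    pan = math.sin(azimuth)           # -1 hard left .. +1 hard right
    dist = math.hypot(dx, dy)
    level = min(1.0, ref_dist / max(dist, 1e-9))
    return pan, level
```

In an OSC-driven setup such as the one described, values like these would be recomputed per frame from the camera and planet positions and sent to the synthesis engine as control messages.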
3

Pietrucha, Matthew. "Sonification of Spectroscopy Data." Digital WPI, 2019. https://digitalcommons.wpi.edu/etd-theses/1277.

Abstract:
Sonification is the process of mapping non-musical data to sound. The field comprises three key areas of research: (1) psychological research in perception and cognition, (2) the development of tools, and (3) sonification design and application. The goals of this research were twofold: (1) to provide insights into the development of sonification tools within the programming environment Max for use in further sonification/interdisciplinary research, and (2) to provide a framework for a musical sonification system. The sonification system discussed was developed to audify spectrometry data, with the purpose of better understanding how multi-purpose systems can be easily modified to suit a particular need. Since all sonification systems may become context-specific to the data they audify, a system was developed in Max that is both modular and responsive to the parameterization of data to create musical outcomes. The trends and phenomena of spectral data in the field of spectroscopy are plotted musically through the system and further enhanced by processes that associate descriptors of that data with compositional idioms: rhythmic, melodic, and harmonic. This was achieved in Max by creating a modular system that handles the importing and formatting of spectral data (or any data in an array format) and sends it to a variety of subprograms for sonification. Subprograms handle timing and duration, diatonic melody, harmony, and timbral aspects including synthesis and audio effects. These systems are accessible both at a high level for novice users and within the Max environment for more nuanced modification to support further research.
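The diatonic-melody step mentioned above can be sketched as a simple quantiser. This is a hypothetical illustration of the general technique, not the thesis's Max subprogram; the scale, octave range and MIDI root are assumptions:

```python
def to_diatonic_melody(intensities, root_midi=60):
    """Quantise normalised spectral intensities (0..1) onto one octave of
    a C-major scale: each value selects one of eight scale degrees and is
    returned as a MIDI note number."""
    major = [0, 2, 4, 5, 7, 9, 11, 12]   # semitone offsets of the scale
    notes = []
    for v in intensities:
        v = min(max(v, 0.0), 1.0)        # clamp out-of-range data
        degree = min(int(v * len(major)), len(major) - 1)
        notes.append(root_midi + major[degree])
    return notes
```

Constraining the output to a diatonic scale is one way a mapping can impose a compositional idiom on arbitrary array data, as the abstract describes.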
4

Ibrahim, Ag Asri Ag. "Usability inspection for sonification applications." Thesis, University of York, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.479510.

5

Steffert, Tony. "Real-time electroencephalogram sonification for neurofeedback." Thesis, Open University, 2018. http://oro.open.ac.uk/57965/.

Abstract:
Electroencephalography (EEG) is the measurement, via the scalp, of the electrical activity of the brain. The established therapeutic intervention of neurofeedback involves presenting people with their own EEG in real time to enable them to modify their EEG for purposes of improving performance or health. The aim of this research is to develop and validate real-time sonifications of EEG for use in neurofeedback, along with methods for assessing such sonifications. Neurofeedback generally uses a visual display. Where auditory feedback is used, it is mostly limited to pre-recorded sounds triggered by the EEG activity crossing a threshold. However, EEG generates time-series data with meaningful detail at fine temporal resolution and with complex temporal dynamics. Human hearing has a much higher temporal resolution than human vision, and auditory displays do not require people to focus on a screen with their eyes open for extended periods of time, e.g. if they are engaged in some other task. Sonification of EEG could therefore allow more rapid, contingent, salient and temporally detailed feedback. This could improve the efficiency of neurofeedback training and reduce the number and duration of sessions needed for successful neurofeedback. The same two deliberately simple sonification techniques were used in all three experiments of this research: Amplitude Modulation (AM) sonification, which maps the fluctuations in the power of the EEG to the volume of a pure tone; and Frequency Modulation (FM) sonification, which uses the changes in EEG power to modify the frequency of the tone. Measures included a listening task; the NASA Task Load Index, a measure of how much work the task required; pre- and post-session measures of mood; and EEG. The first experiment used pre-recorded single-channel EEG: participants were asked to listen to the sound of the sonified EEG and to track the activity they could hear by moving a slider on a computer screen using a mouse.
This provided a quantitative assessment of how well people could perceive the sonified fluctuations in EEG level. Tracking accuracy scores were higher for the FM sonification, but self-assessments of task load rated the AM sonification as easier to track. The second experiment used the same two sonifications in a real neurofeedback task using participants' own live EEG. Unbeknownst to the participants, the neurofeedback task was designed to improve mood. A pre-post questionnaire showed that participants changed their self-rated mood in the intended direction with the EEG training, but there was no statistically significant change in EEG. Again the FM sonification showed better performance, but AM was rated as less effortful. The performance of the sonifications in the tracking task in experiment 1 was found to predict their relative efficacy at blind self-rated mood modification in experiment 2. The third experiment used both the tracking task of experiment 1 and the neurofeedback task of experiment 2, but with modified versions of the AM and FM sonifications that allow two-channel EEG sonification. This experiment introduced a physical slider, as opposed to a mouse, for the tracking task. Tracking accuracy increased, but this time no significant difference was found between the two sonification techniques on the tracking task. In the training task, blind self-rated mood once more improved in the intended direction with the EEG training, but as there was again no significant change in EEG, this cannot necessarily be attributed to the neurofeedback. There was only a slight difference between the two sonification techniques in the effort measure. In this way, a prototype method has been devised and validated for the quantitative assessment of real-time EEG sonifications. Conventional evaluations of neurofeedback techniques are expensive and time-consuming.
By contrast, this method potentially provides a rapid, objective and efficient method for evaluating the suitability of candidate sonifications for EEG neurofeedback.
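The two mappings this abstract names can be sketched directly. The following is an illustrative reconstruction, not the thesis's actual implementation; the sample rate, carrier and base frequencies, and the assumption that the band-power envelope is already normalised to 0..1 at audio rate are all assumptions:

```python
import numpy as np

def am_sonify(power, sr=8000, carrier_hz=440.0):
    """AM sonification: EEG band power (0..1, one value per audio sample)
    scales the amplitude of a fixed-frequency pure tone."""
    t = np.arange(len(power)) / sr
    return power * np.sin(2 * np.pi * carrier_hz * t)

def fm_sonify(power, sr=8000, base_hz=220.0, span_hz=440.0):
    """FM sonification: EEG band power shifts the instantaneous frequency
    between base_hz and base_hz + span_hz; the phase is integrated so the
    tone stays continuous as the power fluctuates."""
    inst_hz = base_hz + span_hz * power
    phase = 2 * np.pi * np.cumsum(inst_hz) / sr
    return np.sin(phase)
```

In the AM case quiet passages of the tone correspond to low band power; in the FM case the tone's pitch rises and falls with it, which is one plausible reason the two mappings traded off tracking accuracy against perceived effort in the experiments.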
6

Perkins, Rhys John. "Interactive sonification of a physics engine." Thesis, Anglia Ruskin University, 2013. http://arro.anglia.ac.uk/323077/.

Abstract:
Physics engines have become increasingly prevalent in everyday technology. In the context of this thesis they are regarded as a readily available data set that has the potential to intuitively present the process of sonification to a wide audience. Unfortunately, this process is not the focus of attention when formative decisions are made concerning the continued development of these engines. This may reveal a missed opportunity when considering that the field of interactive sonification upholds the importance of physical causalities for the analysis of data through sound. The following investigation deliberates the contextual framework of this field to argue that the physics engine, as part of typical game engine architecture, is an appropriate foundation on which to design and implement a dynamic toolset for interactive sonification. The basis for this design is supported by a number of significant theories which suggest that the underlying data of a rigid body dynamics physics system can sustain an inherent audiovisual metaphor for interaction, interpretation and analysis. Furthermore, it is determined that this metaphor can be enhanced by the extraordinary potential of the computer in order to construct unique abstractions which build upon the many pertinent ideas and practices within the surrounding literature. These abstractions result in a mental model for the transformation of data to sound that has a number of advantages in contrast to a physical modelling approach while maintaining its same creative potential for instrument building, composition and live performance. Ambitions for both sonification and its creative potential are realised by several components which present the user with a range of options for interacting with this model. The implementation of these components effectuates a design that can be demonstrated to offer a unique interpretation of existing strategies as well as overcoming certain limitations of comparable work.
7

Parseihian, Gaëtan. "Sonification binaurale pour l'aide à la navigation." Phd thesis, Université Pierre et Marie Curie - Paris VI, 2012. http://tel.archives-ouvertes.fr/tel-00771316.

Abstract:
In this thesis, we propose an augmented-reality system based on 3D sound and sonification, intended to provide blind users with the information needed for reliable and safe travel. The design of this system was approached along three axes. The use of binaural synthesis to generate 3D sound is limited by the problem of HRTF individualisation. A method was developed to adapt individuals to non-individual HRTFs by exploiting brain plasticity. Evaluated with a localisation experiment, this method showed that a virtual audio-spatial map can be acquired rapidly without using vision. The sonification of spatial data was studied in the context of a system for grasping objects in peripersonal space. Localisation abilities for real and virtual sound sources were assessed with a localisation test. A technique for sonifying distance was developed: by linking the parameter to be sonified to the parameters of an audio effect, this technique can be applied to any type of sound without requiring additional learning. A sonification strategy that takes user preferences into account was also devised. "Morphocons" are auditory icons defined by patterns of acoustic parameters. This method allows the construction of a sound vocabulary independent of the sound used. A categorisation test showed that subjects are able to recognise auditory icons on the basis of a morphological description regardless of the type of sound used.
8

Worrall, David. "Sonification and Information: Concepts, Instruments and Techniques." University of Canberra. Communication, 2009. http://erl.canberra.edu.au./public/adt-AUC20090818.142345.

Abstract:
This thesis is a study of sonification and information: what they are and how they relate to each other. The pragmatic purpose of the work is to support a new generation of software tools that can play an active role in research and practice involving the understanding of information structures found in potentially very large multivariate datasets. The theoretical component of the work reviews the way the concept of information has changed through Western culture, from the Ancient Greeks to recent collaborations between cognitive science and the philosophy of mind, with a particular emphasis on the phenomenology of immanent abstractions and how they might be supported and enhanced using sonification techniques. A new software framework is presented, together with several examples of its use in presenting sonifications of financial information, including that from a high-frequency securities-exchange trading-engine.
9

Dyer, John. "Human movement sonification for motor skill learning." Thesis, Queen's University Belfast, 2017. https://pure.qub.ac.uk/portal/en/theses/human-movement-sonification-for-motor-skill-learning(4bda096c-e8ab-4af4-8f35-7445c6b0cb7e).html.

Abstract:
Transforming human movement into live sound can be used as a method to enhance motor skill learning via the provision of augmented perceptual feedback. A small but growing number of studies hint at the substantial efficacy of this approach, termed 'movement sonification'. However there has been sparse discussion in Psychology about how movement should be mapped onto sound to best facilitate learning. The current thesis draws on contemporary research conducted in Psychology and theoretical debates in other disciplines more directly concerned with sonic interaction - including Auditory Display and Electronic Music-Making - to propose an embodied account of sonification as feedback. The empirical portion of the thesis both informs and tests some of the assumptions of this approach with the use of a custom bimanual coordination paradigm. Four motor skill learning studies were conducted with the use of optical motion-capture. Findings support the general assumption that effective mappings aid learning by making task-intrinsic perceptual information more readily available and meaningful, and that the relationship between task demands and sonic information structure (or, between action and perception) should be complementary. Both the theoretical and empirical treatments of sonification for skill learning in this thesis suggest the value of an approach which addresses learner experience of sonified interaction while grounding discussion in the links between perception and action.
10

Leplâtre, Grégory. "The design and evaluation of non speech sounds to support navigation in restricted display devices." Thesis, University of Glasgow, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.270963.

11

Breder, Elijah. "Towards the sonification of the World Wide Web : SprocketPlug." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/MQ29484.pdf.

12

Sun, Lichi. "Real-time sonification of muscle tension for piano players." Thesis, University of York, 2017. http://etheses.whiterose.ac.uk/18695/.

Abstract:
This research focuses on the design of an Interactive Sonification (ISon) feedback system to inform piano players of high muscle tension. The system includes an Arduino board and EMG sensors as hardware, and Max and the Processing language as software. Experimental results demonstrate the feasibility of a system to self-monitor muscle tension in piano performance and in other real-time situations.
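The core decision in such a feedback system, detecting when muscle tension is too high, can be sketched as follows. This is a hypothetical helper for illustration, not the thesis's Max/Arduino implementation; the window-RMS approach and the threshold value are assumptions:

```python
import math

def tension_alert(emg_window, threshold=0.6):
    """Return the RMS tension of a window of normalised EMG samples
    (0..1) and whether it exceeds the feedback threshold, i.e. whether
    an auditory alert should sound for the player."""
    rms = math.sqrt(sum(x * x for x in emg_window) / len(emg_window))
    return rms, rms > threshold
```

In a real-time loop, a function like this would run on each incoming sensor window, with the boolean gating the auditory feedback.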
13

Joliat, Nicholas D. (Nicholas David). "DoppelLab : spatialized data sonification in a 3D virtual environment." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/85427.

Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2013.
This thesis explores new ways to communicate sensor data by combining spatialized sonification with data visualization in a 3D virtual environment. A system for sonifying a space using spatialized recorded audio streams is designed, implemented, and integrated into an existing 3D graphical interface. Exploration of both real-time and archived data is enabled. In particular, algorithms for obfuscating audio to protect privacy and for time-compressing audio to allow exploration on diverse time scales are implemented. Synthesized data sonification in this context is also explored.
14

Anderson, Janet E. "Sonification design for complex work domains: streams, mappings and attention." [St. Lucia, Qld.], 2004. http://www.library.uq.edu.au/pdfserve.php?image=thesisabs/absthe18173.pdf.

15

Grond, Florian. "Listening-Mode-Centered Sonification Design for Data Exploration." Bielefeld: Universitätsbibliothek Bielefeld, 2003. http://d-nb.info/1052123473/34.

16

Forsberg, Joel. "A Mobile Application for Improving Running Performance Using Interactive Sonification." Thesis, KTH, Tal, musik och hörsel, TMH, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-159577.

Abstract:
Apps that assist long-distance runners have become popular; however, most of them focus on results calculated from distance and time. To become a better runner, an improvement of both body posture and running gait is required. Using sonic feedback to improve performance in different sports applications has become an established research area during the last two decades. Sonic feedback is particularly well suited for activities where the user has to maintain visual focus on something, for example when running. The goal of this project was to implement a mobile application that addresses long-distance runners' body posture and running gait. By decreasing the energy demand for a specific velocity, the runner's performance can be improved. The application makes use of the sensors in a mobile phone to analyze the runner's vertical force, step frequency, velocity and body tilt, and sonifies those parameters interactively by altering the music that the user is listening to. The implementation was made in the visual programming language Pure Data together with MobMuPlat, which enables the use of Pure Data on a mobile phone. Tests were carried out with runners of different levels of experience; the results showed that the runners could interact with the music for three of the four parameters, but more training is required to be able to change the running gait in real time.
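One of the four analyzed parameters, step frequency, can be estimated from phone accelerometer data with a simple threshold-crossing counter. This is an illustrative sketch under stated assumptions (gravity-compensated vertical acceleration, a fixed sample rate and threshold), not the app's actual Pure Data analysis:

```python
def step_frequency(accel_z, sr=100, threshold=1.5):
    """Estimate steps per second from vertical acceleration samples
    (m/s^2, gravity removed) by counting upward crossings of a
    threshold: each foot strike produces one upward spike."""
    steps = 0
    for prev, cur in zip(accel_z, accel_z[1:]):
        if prev < threshold <= cur:   # rising edge through the threshold
            steps += 1
    return steps * sr / len(accel_z)  # crossings per second
```

An estimate like this, updated over a sliding window, is the kind of control signal that could drive the tempo or filtering of the music the runner hears.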
17

Winters, Raymond. "Exploring music through sound: sonification of emotion, gesture, and corpora." Thesis, McGill University, 2014. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=121524.

Abstract:
Contemporary music research is a data-rich domain, integrating a diversity of approaches to data collection, analysis, and display. Though the idea of using sound to perceive scientific information is not new, using it as a tool to study music is a special case that unfortunately lacks proper development. To explore this prospect, this thesis examines sonification of three types of data endemic to music research: emotion, gesture, and corpora. Emotion is a type of data most closely associated with the emergent field of affective computing, though its study in music began much earlier. Gesture is studied quantitatively using motion-capture systems designed to accurately record the movements of musicians or dancers in performance. Corpora designates large databases of music itself, constituting, for instance, the collection of string quartets by Beethoven, or an individual's music library. Though the motivations for using sonification differ in each case, as this thesis makes clear, added benefits arise from the shared medium of sound. In the case of emotion, sonification benefits from the robust literature on the structural and acoustic determinants of musical emotion and the new computational tools designed to recognize it. Sonification finds application by offering systematic and theoretically informed mappings, capable of accurately instantiating computational models, and abstracting the emotional elicitors of sound from a specific musical context. In gesture, sound can be used to represent a performer's expressive movements in the same medium as the performed music, making relevant visual cues accessible through simultaneous auditory display. A specially designed tool is evaluated for its ability to meet goals of sonification and expressive movement analysis more generally. In the final case, sonification is applied to the analysis of corpora.
Playing through Bach's chorales, Beethoven's string quartets or Monteverdi's madrigals at high speeds (up to 10,000 notes/second) yields characteristically different sounds, and can be applied as a technique for analysis of pitch-transcription algorithms.
18

Bodle, Carrie. "Sonification of the invisible : large scale sound installments on building facades." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/33025.

Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Architecture, 2005.
The intention of this project is to utilize sound as representation of MIT research, extending out to the public what may be invisible, or less known, to the broader community interested in MIT's spectrum of work. I am utilizing Building 54, also known as the Green Building, on the MIT campus to address the public and the MIT community through a vehicle of transmission that uses sound as representation of research at MIT. Collaborating with scientists from MIT's Haystack Observatory, I am proposing the sonic display of research data at an architectural scale: a speaker setup on the south facade of the Green Building. This project will be a multi-speaker sound installment with a total of 35 public-address speakers temporarily attached to the vertical concrete columns of the building's facade. The speakers will broadcast audio representations of sound waves embedded in Earth's charged upper atmosphere, or ionosphere. These sounds make tangible the state of the ionospheric portion of the terrestrial upper atmosphere, a region under active radar study by the Atmospheric Sciences Group at MIT's Haystack Observatory. The speaker arrangement on the Green Building's facade visually reminds the listener of an upwards-sloping graph, representative of the spectral frequency distribution of the sounds, which vary both in time and in altitude. This large-scale sound installment will make tangible the converging perspectives of contemporary arts and upper-atmospheric science, representative of the advanced research focus of this institution and exemplary of MIT's interest in creating an environment in which the arts merge with technology to create inspiration for artists and scientists alike. The scale of this project is considerable, but so is the size of the Haystack Observatory installation, the distance to the ionosphere, and the iconic silhouette of the Green Building overseeing the MIT campus when viewed from the Boston bank of the Charles River.
19

Zhao, Haixia. "Interactive sonification of abstract data - framework, design space, evaluation, and user tool." College Park, Md. : University of Maryland, 2006. http://hdl.handle.net/1903/3394.

Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2006.
Thesis research directed by: Computer Science.
20

Kallel, Monem. "Traitement biologique des rejets organiques concentres apres sonification et saponification : graisses-margines." Paris 6, 1992. http://www.theses.fr/1992PA066194.

Abstract:
Pretreatment of fatty wastes by sonication leads to the formation of a stable emulsion and promotes contact with the biological medium. The activity of this product is the cause of the inhibition of the biological system. Pretreatment by saponification proves more effective and easier to carry out, and permits efficient biological treatment. The combined treatment of olive-mill wastewaters (margines) by sonication and saponification achieves satisfactory removal through anaerobic biological treatment.
APA, Harvard, Vancouver, ISO, and other styles
21

Yang, Jiajun. "Enhancing the quality and motivation of physical exercise using real-time sonification." Thesis, University of York, 2015. http://etheses.whiterose.ac.uk/10396/.

Full text
Abstract:
This research project investigated the use of real-time sonification as a way to improve the quality and motivation of biceps curl exercise among healthy young participants. A sonification system was developed featuring an electromyography (EMG) sensor and a Microsoft Kinect camera. During exercise, muscular and kinematic data were collected and sent to custom-designed sonification software developed in Max to generate real-time auditory feedback. The software provides four types of output sound in consideration of personal preference and long-term use. Three experiments were carried out. The pilot study examined the sonification system and gathered users' comments about their experience of each type of sound in relation to its functionality and aesthetics. A 3-session between-subjects test and an 8-session within-subjects comparative test were conducted to compare exercise quality and motivation between two conditions: with and without the real-time sonification. Overall, several conclusions are drawn from the experimental results: the sonification significantly improved participants' pace of biceps curl; no significant effect was found on vertical movement range; participants expended more effort in training in the presence of sonification; and analysis of surveys indicated higher motivation and willingness when exercising with the sonification. The results reflect a wider potential for applications including general fitness, physiotherapy and elite sports training.
APA, Harvard, Vancouver, ISO, and other styles
22

Duarte, García Mario. "Portfolio of original compositions." Thesis, University of Manchester, 2018. https://www.research.manchester.ac.uk/portal/en/theses/portfolio-of-original-compositions(e0a5e175-d4d0-417c-a5d1-1b9587f019f4).html.

Full text
Abstract:
This portfolio of compositions investigates the use of DNA sequences as a system for music composition. The main objective is to identify and create musical expression using sequences of DNA, which carry embedded structure and organisation. An effective model of DNA sonification was implemented through the use of a Parameter Mapping Sonification Interface that can obtain a musical rendering of the genetic material by replicating different stages of DNA coding. In this way the composer could make compositional decisions a priori in mapping musical parameters (musification), and use sonification as a tool to generate, organise and develop musical material in a compositional system based on DNA and RNA sequences. This is in contrast to a hard scientific sonification perspective, where unedited outputs are presented as a musical composition. As a result, the portfolio provides an intelligible corpus of music that integrates science, literature, cultural heritage and the social sphere in order to deliver my musical discourse. Nine compositions are presented in the portfolio, each of which explores the use of sonification and musification as a compositional tool to generate and organise musical materials.
APA, Harvard, Vancouver, ISO, and other styles
23

Poirier-Quinot, David. "Design of a radio direction finder for search and rescue operations : estimation, sonification, and virtual prototyping." Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066138/document.

Full text
Abstract:
Cette thèse étudie le design et la conception d’un Goniomètre Radio Fréquence (GRF) visant à assister les services de secours à la personne (e.g. équipes de sauvetage déblaiement), exploitant les téléphones des victimes à retrouver comme des balises de détresse radio. La conception est focalisée sur une interface audio, utilisant le son pour guider progressivement les sauveteurs vers la victime. L’ambition de la thèse est d’exploiter les mécanismes naturels propres à l’audition humaine pour améliorer les performances générales du processus de recherche plutôt que de développer de nouvelles techniques d’Estimation de Direction de Source (EDS).Les techniques d’EDS classiques sont tout d’abord exposées, ainsi qu’une série d’outils permettant d’en évaluer les performances. Basée sur ces outils, une étude de cas est proposée visant à évaluer les performances attendues d’un GRF portable, adapté aux conditions d’emploi nominales envisagées. Il est montré que les performances des techniques dites « à haute résolution » généralement utilisées pour l’EDS sont sérieusement amoindries lorsqu’une contrainte de taille/poids maximum est imposée sur le réseau d’antennes associé au GRF, particulièrement en présence de multi-trajets de propagation entre le téléphone ciblé et le tableau d’antenne.Par la suite, une étude bibliographique concernant la sonification par encodage de paramètres (Parameter Mapping Sonification) est proposée. Plusieurs paradigmes de sonification sont considérés et évalués en rapport à leur capacité à transmettre une information issue de différents designs de GRF envisagés. Des tests d’écoute sont menés, suggérant qu’après une courte phase d’entrainement les sujets sont capables d’analyser plusieurs flux audio en parallèle ainsi que d’interpréter des informations de haut niveau encodées dans des flux sonores complexes. 
Lesdits tests ont permis de souligner le besoin d’une sonification du GRF basée sur une hiérarchie perceptive de l’information présentée, permettant aux débutants de focaliser leur attention sans efforts et uniquement sur les données les plus importantes. Une attention particulière est portée à l’ergonomie de l’interface sonore et à son impact sur l’acceptation et la confiance de l’opérateur vis-à-vis du GRF (e.g. en ce qui concerne la perception du bruit de mesure pendant l’utilisation du GRF pendant la navigation).Finalement, un prototype virtuel est proposé, simulant une navigation basée sur le GRF dans un environnement virtuel pour en évaluer les performances (e.g. paradigmes de sonification proposés plus haut). En parallèle, un prototype physique est assemblé afin d’évaluer la validité écologique du prototype virtuel. Le prototype physique est basé sur une architecture radio logicielle, permettant d’accélérer les phases de développement entre les différentes versions du design de GRF étudiées. L’ensemble des études engageant les prototypes virtuels et physiques sont menées en collaboration avec des professionnels des opérations de recherche à la personne. Les performances des designs de GRF proposés sont par la suite comparées à celles des solutions de recherche existantes (géo-stéréophone, équipes cynotechniques, etc.).Il est montré que, dans le contexte envisagé, un simple GRF basé sur la sonification en parallèle des signaux provenant de plusieurs antennes directionnelles peut offrir des performances de navigations comparables à celles résultantes de designs plus complexes basés sur des méthodes à haute résolution. Puisque l’objectif de la tâche est de progressivement localiser une cible, la pierre angulaire du système semble être la robustesse et la consistance de ses estimations plutôt que sa précision ponctuelle. 
Impliquer l’utilisateur dans le processus d’estimation permet d’éviter des situations critiques où ledit utilisateur se sentirait impuissant face à un système autonome (boîte noire) produisant des informations qui lui semblent incohérentes
This research investigates the design of a radio Direction Finder (DF) for rescue operations using victims' cellphones as localization beacons. The conception is focused on an audio interface, using sound to progressively guide rescuers towards their target. The thesis' ambition is to exploit the natural mechanisms of human hearing to improve the global performance of the search process rather than to develop new Direction-Of-Arrival (DOA) estimation techniques. Classical DOA estimation techniques are introduced along with a range of tools to assess their efficiency. Based on these tools, a case study is proposed regarding the performance that might be expected from a lightweight DF design tailored to portable operation. It is shown that the performance of high-resolution techniques usually implemented for DOA estimation is seriously impacted by any size constraint applied to the DF, particularly in multi-path propagation conditions. Subsequently, a review of interactive parameter-mapping sonification is proposed. Various sonification paradigms are designed and assessed regarding their capacity to convey information related to different levels of DF outputs. Listening tests are conducted, suggesting that trained subjects are capable of monitoring multiple audio streams and gathering information from complex sounds. Said tests also indicate the need for a DF sonification that perceptively orders the presented information, so that beginners are able to effortlessly focus on the most important data only. Careful attention is given to sound aesthetics and how they impact operators' acceptance of and trust in the DF, particularly regarding the perception of measurement noise during navigation. Finally, a virtual prototype is implemented that recreates DF-based navigation in a virtual environment to evaluate the proposed sonification mappings. In parallel, a physical prototype is developed to assess the ecological validity of the virtual evaluations. 
Said prototype exploits a software-defined radio architecture for rapid iteration through design implementations. The overall performance evaluation study is conducted in consultation with rescue service representatives and compared with their current search solutions. It is shown that, in this context, simple DF designs based on the parallel sonification of the output signals of several antennas may produce navigation performance comparable to that of more complex designs based on high-resolution methods. As the task objective is to progressively localize a target, the system's cornerstone appears to be the robustness and consistency of its estimations rather than its punctual accuracy. Involving operators in the estimation helps avoid critical situations where one feels helpless when faced with an autonomous system producing nonsensical estimations. Virtual prototyping proved to be a sensible and efficient method to support this study, allowing for fast iteration through sonification and DF design implementations.
APA, Harvard, Vancouver, ISO, and other styles
24

Nikolaidis, Ryan John. "A generative model of tonal tension and its application in dynamic realtime sonification." Thesis, Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/42697.

Full text
Abstract:
This thesis presents the design and implementation of a generative model of tonal tension. It further describes the application of the generative model in realtime sonification. The thesis discusses related theoretical work in musical fields including generative system design, sonification, and perception and cognition. It highlights a review of the related research from historical to contemporary work. It contextualizes this work in informing the design and application of the generative model of tonal tension. The thesis concludes by presenting a formal evaluation of the system. The evaluation consists of two independent subject-response studies assessing the effectiveness of the generative system to create tonal tension and map it to visual parameters in sonification.
APA, Harvard, Vancouver, ISO, and other styles
25

Edström, Viking, and Fredrik Hallberg. "Human Interaction in 3D Manipulations : Can sonification improve the performance of the interaction?" Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-146344.

Full text
Abstract:
In this report the effects of using sonification when performing movements in 3D space are explored. User studies were performed where participants had to repeatedly move their hand toward a target. Three different sonification modes were tested, in which the fundamental frequency, sound level and sound rate were varied respectively depending on the distance to the target. The results show that there is no statistically significant performance increase for any sonification mode. There is, however, an indication that sonification increases the interaction speed for some users. The mode which provided the greatest average performance increase was the one in which the sound level was varied. This mode gave a 7% average speed increase over the silent control mode. However, the sound level mode has some significant drawbacks, especially the very high base volume requirement, which may make it ill-suited for some applications. In the general case we instead recommend using the sonification mode that varies the sound rate, which gave a slightly lower performance gain but can be played at a lower volume due to its binary nature.
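The three distance-to-sound mappings described in this abstract can be sketched as simple parameter-mapping functions. This is an illustrative sketch only, not the thesis implementation; the value ranges, units and linear interpolation are assumptions:

```python
# Hypothetical sketch of three distance-based sonification mappings:
# fundamental frequency, sound level, and pulse rate all vary with the
# normalized distance to the target. Ranges are assumed, not from the thesis.

def normalize(distance, max_distance):
    """Clamp and normalize distance to [0, 1] (0 = at the target)."""
    return max(0.0, min(1.0, distance / max_distance))

def pitch_mapping(distance, max_distance, f_near=880.0, f_far=220.0):
    """Closer target -> higher fundamental frequency (Hz)."""
    d = normalize(distance, max_distance)
    return f_near + d * (f_far - f_near)

def level_mapping(distance, max_distance, db_near=0.0, db_far=-30.0):
    """Closer target -> louder output (dB relative to full scale)."""
    d = normalize(distance, max_distance)
    return db_near + d * (db_far - db_near)

def rate_mapping(distance, max_distance, r_near=10.0, r_far=1.0):
    """Closer target -> faster pulse rate (beeps per second)."""
    d = normalize(distance, max_distance)
    return r_near + d * (r_far - r_near)
```

In a real system each function's output would drive a synthesizer parameter (oscillator frequency, gain, or beep rate), updated at the motion-tracking frame rate.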
APA, Harvard, Vancouver, ISO, and other styles
26

Wu, Hsin-Fu. "Spectral analysis and sonification of simulation data generated in a frequency domain experiment." Monterey, California: Naval Postgraduate School, 2002. http://hdl.handle.net/10945/9805.

Full text
Abstract:
In this thesis, we evaluate the frequency domain approach for data farming and assess the possibility of analyzing complex data sets using data sonification. Data farming applies agent-based models and simulation, computing power, and data analysis and visualization technologies to help answer complex questions in military operations. Sonification is the use of data to generate sound for analysis. We apply a frequency domain experiment (FDE) to a combat simulation and analyze the output data set using spectral analysis. We compare the results from our FDE with those obtained using another experimental design on the same combat scenario. Our results confirm and complement the earlier findings. We then develop an auditory display that uses data sonification to represent the simulation output data set with sound. We consider the simulation results from the FDE as a waveshaping function and generate sounds using sonification software. We characterize the sonified data by their noise, signal, and volume. Qualitatively, the sonified data match the corresponding spectra from the FDE. Therefore, we demonstrate the feasibility of representing simulation data from the FDE with our sonification. Finally, we offer suggestions for future development of a multimodal display that can be used for analyzing complex data sets.
APA, Harvard, Vancouver, ISO, and other styles
27

Reynal, Maxime. "Non-visual interaction concepts : considering hearing, haptics and kinesthetics for an augmented remote tower environment." Thesis, Toulouse, ISAE, 2019. http://www.theses.fr/2019ESAE0034.

Full text
Abstract:
Afin de simplifier la gestion des ressources humaines et de réduire les coûts d’exploitation, certaines tours de contrôle sont désormais conçues pour ne pas être implantées directement sur l’aéroport. Ce concept, connu sous le nom de tour de contrôle distante (remote tower), offre un contexte de travail “digital” : la vue sur les pistes est diffusée via des caméras situées sur le terrain distant. Ce concept pourrait également être étendu au contrôle simultané de plusieurs aéroports à partir d’une seule salle de contrôle, par un contrôleur seul (tour de contrôle distante multiple). Ces notions nouvelles offrent aux concepteurs la possibilité de développer des formes d’interaction novatrices. Cependant, la plupart des augmentations actuelles reposent sur la vue, qui est largement utilisée et, par conséquent, parfois surchargée. Nous nous sommes ainsi concentrés sur la conception et l’évaluation de nouvelles techniques d’interaction faisant appel aux sens non visuels, plus particulièrement l’ouïe, le toucher et la proprioception. Deux campagnes expérimentales ont été menées. Durant les processus de conception, nous avons identifié, avec l’aide d’experts du domaine, certaines situations pertinentes pour les contrôleurs aériens en raison de leur criticité : a) la mauvaise visibilité (brouillard épais, perte de signal vidéo), b) les mouvements non autorisés au sol (lorsque les pilotes déplacent leur appareil sans y avoir été préalablement autorisés), c) l’incursion de piste (lorsqu’un avion traverse le point d’attente afin d’entrer sur la piste alors qu’un autre, simultanément, s’apprête à atterrir) et d) le cas des communications radio simultanées provenant de plusieurs aéroports distants. La première campagne expérimentale visait à quantifier la contribution d’une technique d’interaction basée sur le son spatial, l’interaction kinesthésique et des stimuli vibrotactiles, afin de proposer une solution au cas de perte de visibilité sur le terrain contrôlé. 
L’objectif était d’améliorer la perception des contrôleurs et d’accroître le niveau général de sécurité, en leur offrant un moyen différent pour localiser les appareils. 22 contrôleurs ont été impliqués dans une tâche de laboratoire en environnement simulé. Des résultats objectifs et subjectifs ont montré une précision significativement plus élevée en cas de visibilité dégradée lorsque la modalité d’interaction testée était activée. Parallèlement, les temps de réponse étaient significativement plus longs, tout en restant relativement courts au regard de la temporalité de la tâche. La seconde campagne expérimentale, quant à elle, visait à évaluer 3 autres modalités d’interaction proposant des solutions à 3 autres situations critiques : les mouvements non autorisés au sol, les incursions de piste et les appels provenant d’un aéroport secondaire contrôlé. Le son spatial interactif, la stimulation tactile et les mouvements du corps ont été pris en compte pour la conception de 3 autres techniques interactives. 16 contrôleurs aériens ont participé à une expérience écologique dans laquelle ils ont contrôlé 1 ou 2 aéroport(s), avec ou sans augmentation. Les résultats comportementaux ont montré une augmentation significative de la performance globale des participants lorsque les modalités d’augmentation étaient activées pour un seul aéroport. La première campagne a été la première étape dans le développement d’une nouvelle technique d’interaction qui utilise le son interactif comme moyen de localisation lorsque la vue seule ne suffit pas. Ces deux campagnes ont constitué les premières étapes de la prise en compte des augmentations multimodales non visuelles dans les contextes des tours de contrôle déportées simples et multiples.
In an effort to simplify human resource management and reduce operational costs, control towers are now increasingly designed to be implanted not directly on the airport but remotely. This concept, known as remote tower, offers a “digital” working context: the view of the runways is broadcast remotely using cameras located on site. Furthermore, this concept could be extended to the control of several airports simultaneously from one remote tower facility, by only one air traffic controller (multiple remote tower). These concepts offer designers the possibility to develop novel interaction forms. However, most of the current augmentations rely on sight, which is heavily used and therefore sometimes becomes overloaded. In this Ph.D. work, the design and evaluation of new interaction techniques that rely on non-visual human senses (hearing, touch and proprioception) have been considered. Two experimental campaigns were led to address specific use cases. These use cases were identified during the design process by involving experts from the field, and appear relevant to controllers due to the criticality of the situations they define. These situations are a) poor visibility (heavy fog conditions, loss of video signal in a remote context), b) unauthorized movements on the ground (when pilots move their aircraft without having been previously cleared), c) runway incursion (which occurs when an aircraft crosses the holding point to enter the runway while another one is about to land), and d) how to deal with multiple calls associated with distinct radio frequencies coming from multiple airports. The first experimental campaign aimed at quantifying the contribution of a multimodal interaction technique based on spatial sound, kinaesthetic interaction and vibrotactile feedback to address the first use case of poor visibility conditions. 
The purpose was to enhance controllers' perception and increase the overall level of safety by providing them with a novel way to locate aircraft when deprived of sight. 22 controllers were involved in a laboratory task within a simulated environment. Objective and subjective results showed significantly higher performance in poor visibility using interactive spatial sound coupled with vibrotactile feedback, which gave participants notably higher accuracy in degraded visibility. Meanwhile, response times were significantly longer while remaining acceptably short considering the temporal aspect of the task. The goal of the second experimental campaign was to evaluate 3 other interaction modalities and feedback addressing 3 other critical situations, namely unauthorized movements on the ground, runway incursion and calls from a secondary airport. We considered interactive spatial sound, tactile stimulation and body movements to design 3 different interaction techniques and feedback. 16 controllers participated in an ecological experiment in which they were asked to control 1 or 2 airport(s) (Single vs. Multiple operations), with augmentations activated or not. While no clear effect emerged for the interaction modalities in multiple remote tower operations, behavioural results showed a significant increase in overall participant performance when augmentation modalities were activated in single remote control tower operations. The first campaign was the initial step in the development of a novel interaction technique that uses sound as a precise means of location. These two campaigns constituted the first steps in considering non-visual multimodal augmentations in remote tower operations.
APA, Harvard, Vancouver, ISO, and other styles
28

Peyre, Iseline. "Sonification du mouvement pour la rééducation après une lésion cérébrale acquise : conception et évaluations de dispositifs." Electronic Thesis or Diss., Sorbonne université, 2022. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2022SORUS414.pdf.

Full text
Abstract:
Première cause de handicap acquis de l’adulte, les lésions cérébrales acquises non dégénératives induisent de multiples troubles affectant les sphères sensitivo-motrices, cognitives et psycho-sociales. La chronicisation des déficits résiduels est à l’origine d’une perte d’autonomie dans la réalisation des activités de vie quotidienne. L’accompagnement en rééducation favorise la récupération fonctionnelle grâce au recours à des méthodes, techniques et outils complémentaires. Ainsi, la poursuite de la rééducation en autonomie supervisée lors de la phase chronique est désormais encouragée. Avec le développement des technologies pour la santé, de nouvelles modalités d’accompagnement font actuellement l’objet d’investigations. L’émergence d’outils interactifs de sonification du mouvement permettant de disposer en temps réel d’informations sonores continues en lien avec les mouvements réalisés constituent une voie prometteuse pour la rééducation. Cependant, des questionnements relatifs à l’orientation des choix de conception, particulièrement concernant les caractéristiques des retours sonores et les modalités d’interactions gestes-sons, sont actuellement au cœur des réflexions. Face à ces constats, l’objectif principal de cette thèse interdisciplinaire, santé-arts-sciences, était de développer un dispositif de sonification du mouvement pour la rééducation en autonomie supervisée de patients présentant des séquelles motrices après une lésion cérébrale acquise. Dans cette perspective, le premier objectif était d’évaluer l’effet de différents types de retours sonores (caractéristiques sonores et modalités d’interactions gestes-sons) sur deux tâches gestuelles : un mouvement d'antépulsion (extension du coude), et un maintien postural, avec des participants de différents profils. 
Le deuxième objectif était de définir les critères de conception et de sélectionner les solutions adaptées pour la création d’un dispositif de sonification du mouvement répondant aux caractéristiques et besoins de patients présentant des séquelles motrices au membre supérieur suite à une lésion cérébrale acquise, dans la perspective d’un usage en autonomie supervisée. Le troisième objectif était d’initier une évaluation du dispositif conçu, afin de disposer d’éléments formels permettant d’envisager une étude clinique. Les travaux menés ont notamment permis de confirmer l’effet de la présence de retours sonores interactifs lors de l’exécution de gestes et l’importance de la prise en considération des modalités d’interaction gestes-sons. Le processus de co-conception centré utilisateurs mis en œuvre avec des experts de plusieurs disciplines a conduit à la création d’un dispositif de sonification du mouvement mobile novateur, fonctionnel, flexible (personnalisable), adapté à une situation de rééducation en autonomie supervisée. Peu onéreux, le dispositif a été dupliqué en 10 exemplaires. Les premiers résultats des évaluations réalisées auprès de thérapeutes sont très encourageants. Ce travail de thèse ouvre ainsi des perspectives d’évaluation clinique à grande échelle
As the leading cause of acquired disability in adults, non-degenerative acquired brain injuries lead to multiple disorders affecting the sensory-motor, cognitive and psycho-social dimensions. The chronicisation of residual deficits leads to a loss of autonomy in the performance of daily living activities. Rehabilitation support promotes functional recovery through the use of complementary methods, techniques and tools. Thus, the pursuit of rehabilitation in supervised autonomy during the chronic phase is now encouraged. With the development of health technologies, new support methods are currently being investigated. The emergence of interactive movement sonification tools that provide continuous sound information in real-time in relation to the movements performed is a promising approach to rehabilitation. However, the orientation of design choices, particularly concerning the characteristics of sound feedback and the modalities of gesture-sound interactions, are currently at the centre of reflection. The main objective of this interdisciplinary health-arts-sciences thesis was to develop a movement sonification device for the supervised autonomous rehabilitation of patients with motor impairment after an acquired brain injury. In this perspective, the first objective was to evaluate the effect of different types of sound feedback (sound characteristics and modalities of gesture-sound interactions) on two gestural tasks: an elbow extension movement, and a postural maintenance, with participants of different profiles. The second objective was to define the design criteria and to select the appropriate solutions for the creation of a movement sonification device responding to the characteristics and needs of patients with motor impairment in the upper limb following an acquired brain injury, with a perspective of use in supervised autonomy. The third objective was to initiate an evaluation of the designed device, in order to consider a clinical study. 
The studies carried out confirmed the effect of the presence of interactive sound feedback during the execution of gestures and the importance of taking into consideration the modalities of gesture-sound interaction. The user-centered co-design process implemented with experts from several disciplines led to the creation of an innovative, functional, flexible (customisable) mobile movement sonification device, adapted to a supervised autonomous rehabilitation situation. The device is inexpensive and has been duplicated in 10 copies. The first results of the evaluations carried out with therapists are very encouraging, opening up perspectives for large-scale clinical evaluation
APA, Harvard, Vancouver, ISO, and other styles
29

NORLIN, ANDERSSON JACOB. "App For Improving Heart Rate Monitor-Based Endurance Training in Running Athletes Through Heart Beat Sonification." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-134943.

Full text
Abstract:
Fitness is a large part of today's modern society, especially running. With the advent of the smartphone, a large number of apps aimed at helping runners track and improve their performance have appeared, some of which support some kind of audio feedback. To build further on this, this essay explores the possibilities of using sonification of a runner's heart rate to aid them in maintaining specific heart rate zones while training. The sonification was implemented on the Android platform using a Zephyr HxM BT heart rate monitor and was tested with three users. I found that while the sonification aided the users in maintaining their heart rate zones, this was achieved through overcompensating, resulting in an uneven heart rate. However, the concept shows promise, and I suggest some ways it can be improved in the future.
Friskvård är en stor del av vår moderna värld, speciellt löpning. Efter smarttelefonens ankomst har en stor mängd appar som fokuserar på att hjälpa löpare hålla reda på och förbättra sin löpning kommit, varav vissa innehåller någon form av ljudfeedback. För att bygga vidare på detta utforskar denna uppsats möjligheten att utnyttja sonifiering av en löpares puls för att förbättra hens löpförmåga, genom att ligga inom vissa pulszoner. Sonifieringen implementerades i form av en app på Androidplattformen. I uppsatsen upptäcktes det att medan sonifieringen hjälpte löparna upptäcka när de kom utanför en pulszon, var det på bekostnad av en jämn puls. Därför föreslås i slutet sätt att vidareutveckla detta koncept för att bättre hjälpa löpare förbättra sig.
APA, Harvard, Vancouver, ISO, and other styles
30

Smith, Daniel R. "Effects of training and context on human performance in a point estimation sonification task." Thesis, Georgia Institute of Technology, 2003. http://hdl.handle.net/1853/32845.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Savard, Alexandre. "When gestures are perceived through sounds : a framework for sonification of musicians' ancillary gestures." Thesis, McGill University, 2008. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=116051.

Full text
Abstract:
This thesis presents a multimodal sonification system that combines video with sound synthesis generated from motion capture data. Such a system allows for fast and efficient exploration of musicians' ancillary gesture data, for which sonification complements conventional video by stressing certain details that could escape one's attention if not displayed using an appropriate representation. The main objective of this project is to provide a research tool designed for people who are not necessarily familiar with signal processing or computer science. This tool can easily generate meaningful sonifications thanks to dedicated mapping strategies. On the one hand, dimensionality reduction of the data obtained from motion capture systems such as the Vicon is fundamental, as the data may comprise more than 350 signals describing gestures. For that reason, a Principal Component Analysis is used to objectively reduce the number of signals to a subset that conveys the most significant gesture information in terms of signal variance. On the other hand, movement data present high variability across subjects: additional control parameters for sound synthesis are offered to restrict the sonification to the significant gestures, easily perceivable visually in terms of speed and path distance. Signal conditioning techniques are then proposed to adapt the control signals to sound synthesis parameter requirements or to emphasize certain gesture characteristics that one finds important. All these data treatments are performed in realtime within one unique environment, minimizing data manipulation and facilitating efficient sonification designs. Realtime processing also allows the system to respond instantaneously to parameter changes and process selection, so that the user can easily and interactively manipulate data and design and adjust sonification strategies.
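The dimensionality-reduction step described in this abstract (PCA over hundreds of motion-capture channels, keeping the directions of greatest variance) can be illustrated with a minimal sketch. This is not the thesis implementation; the plain-Python power-iteration method and the reduction to a single principal component are simplifying assumptions:

```python
# Illustrative sketch: reduce many motion-capture channels to the dominant
# principal component, via a covariance matrix and power iteration.
import math

def top_principal_component(frames, iters=200):
    """frames: list of samples, each a list of channel values.
    Returns the dominant eigenvector of the channel covariance matrix."""
    n, m = len(frames), len(frames[0])
    means = [sum(f[j] for f in frames) / n for j in range(m)]
    centered = [[f[j] - means[j] for j in range(m)] for f in frames]
    cov = [[sum(r[i] * r[j] for r in centered) / (n - 1) for j in range(m)]
           for i in range(m)]
    v = [1.0] * m
    for _ in range(iters):  # power iteration converges to the top eigenvector
        w = [sum(cov[i][j] * v[j] for j in range(m)) for i in range(m)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

def project(frames, v):
    """Project each frame onto the principal direction -> one control signal."""
    return [sum(f[j] * v[j] for j in range(len(v))) for f in frames]
```

The resulting one-dimensional signal is the kind of variance-maximizing summary that could then drive sound-synthesis parameters; a full PCA would retain several such components.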
APA, Harvard, Vancouver, ISO, and other styles
32

Al, Bsoul Abeer. "Efficacité des ultrasons pour la destruction des trois biomasses." Grenoble INPG, 2009. http://www.theses.fr/2009INPG0020.

Full text
Abstract:
The goals of this research were to evaluate the effectiveness of sonication for the destruction of Mycobacterium sp. 6PY1 and Rhodobacter capsulatus B10 in the presence and absence of solid particles (TiO2 or SiO2), and of sonication (low and high frequency) combined with UV irradiation for the disinfection of Mycobacterium sp. 6PY1 in the presence and absence of solid particles (TiO2 or SiO2). The variables tested for both sonication systems (low- or high-frequency ultrasonic reactor) included: the sonication frequency (20 kHz or 612 kHz), the sonication time, the sonication volume, the sonication power-to-volume ratio, the initial bacterial concentration, the effect of radical scavengers (Na2CO3), the sequence of application of the two sonication systems, the effects of ultrasonic treatment on growth kinetics, and the presence of solid particles (TiO2 or SiO2).
APA, Harvard, Vancouver, ISO, and other styles
33

Lindstedt, Simon, and Hannes Derler. "Ljud som sammankopplar oss : Ett utforskande av Augmented Audio Reality för att hitta interaktioner som kopplar oss samman." Thesis, Blekinge Tekniska Högskola, Institutionen för teknik och estetik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-16567.

Full text
Abstract:
The purpose of this work is to investigate interactive sound in relation to Augmented Reality. We want to explore the concept by focusing on sound, thereby broadening the perception of what Augmented Reality could potentially mean. We investigated Sonification, Audio Spatial Awareness and Augmented Reality to produce an artefact based on a combination of these theories. Our focus is to investigate how Sonification can make certain aspects of reality clearer to people, and to use this information to try to influence people's perception of each other. We aim to investigate how human interaction can be influenced by sound based on spatial data; the artefact is directly shaped by the research that laid the theoretical foundation, combined with the primary method we chose. Performative Experience Design aims to investigate and generate interaction from a performance perspective, as it calls for an open and curious approach to human interaction. The result of this work is a system with great potential for further development, as well as the beginning of a discussion of what constitutes Augmented Reality.
APA, Harvard, Vancouver, ISO, and other styles
34

Portron, Arthur. "L'étude de l'influence du contexte sur la poursuite oculaire." Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066123/document.

Full text
Abstract:
Pursuit eye movements allow us to track a target that moves continuously and slowly in our visual environment. Studies have shown that this movement is based on the simultaneous contribution of retinal signals linked to the retinal image of the target and of the visual context, of extra-retinal signals underlying cognitive processes and the efference copy, and of inhibition and suppression processes related to the visual context. This dynamic combination allows the pursuit system to adapt over a wide range of contexts. Although the presence of a motion signal in the visual environment is regarded as a prerequisite for initiating and then maintaining pursuit, results since the 1970s have qualified this view. To investigate the mechanisms underlying the maintenance of pursuit eye movements after target disappearance, and the nature of the signals leading to the generation of pursuit, we studied the effects of two different contexts. These contexts, one visual and one auditory, share the property of being dependent on eye movements. As a result of this dependence, each context yields a new signal, visual or auditory, which carries information about the ongoing eye movement. We study the effects of this context-induced information in procedures involving the generation and maintenance of smooth pursuit eye movements, and the generation of smooth and continuous eye movements without a moving target.
APA, Harvard, Vancouver, ISO, and other styles
35

Fagergren, Emma. "Wa-UM-eii : How a Choreographer Can Use Sonification to Communicate With Dancers During Rehearsals." Thesis, Linköpings universitet, Filosofiska fakulteten, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-79766.

Full text
Abstract:
A sonification is a nonverbal speech act and might sound like “wa-UM-eii” or “wooosh!” The purpose of this study was to investigate a choreographer’s use of sonification in dance instruction, to see whether there are different types of sonifications and whether the use of these differs with a change in context. Video material capturing the rehearsals of a noted dance company was analyzed using a cognitive-ethnography-based approach. Nine different types of sonifications were identified and described according to purpose, and a context-based analysis showed that certain kinds of sonifications occurred more frequently in some contexts than in others. The results suggest that sonification used in dance instruction can serve multiple purposes; the three main ones described here are to communicate the quality of a movement, to facilitate communication between choreographer and dancers, and to coordinate the dancers as a group.
APA, Harvard, Vancouver, ISO, and other styles
36

Dubus, Gaël. "Interactive sonification of motion : Design, implementation and control of expressive auditory feedback with mobile devices." Doctoral thesis, KTH, Musikakustik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-127944.

Full text
Abstract:
Sound and motion are intrinsically related, by their physical nature and through the link between auditory perception and motor control. If sound provides information about the characteristics of a movement, a movement can also be influenced or triggered by a sound pattern. This thesis investigates how this link can be reinforced by means of interactive sonification. Sonification, the use of sound to communicate, perceptualize and interpret data, can be used in many different contexts. It is particularly well suited for time-related tasks such as monitoring and synchronization, and is therefore an ideal candidate to support the design of applications related to physical training. Our objectives are to develop and investigate computational models for the sonification of motion data, with a particular focus on expressive movement and gesture, and for the sonification of elite athletes' movements. We chose to develop our applications on a mobile platform in order to make use of advanced interaction modes using an easily accessible technology. In addition, the networking capabilities of modern smartphones potentially allow for adding a social dimension to our sonification applications by extending them to several collaborating users. The sport of rowing was chosen to illustrate the assistance that an interactive sonification system can provide to elite athletes. Bringing into play complex interactions between various kinematic and kinetic quantities, studies on rowing kinematics provide guidelines to optimize rowing efficiency, e.g. by minimizing velocity fluctuations around the average velocity. However, rowers can only rely on sparse cues to get information about boat velocity, such as the sound made by the water splashing on the hull. We believe that interactive augmented feedback communicating the dynamic evolution of some kinematic quantities could represent a promising way of enhancing the training of elite rowers.
Since only limited space is available on a rowing boat, the use of mobile phones appears appropriate for handling streams of incoming data from various sensors and generating an auditory feedback simultaneously. The development of sonification models for rowing and their design evaluation in offline conditions are presented in Paper I. In Paper II, three different models for sonifying the synchronization of the movements of two users holding a mobile phone are explored. Sonification of expressive gestures by means of expressive music performance is tackled in Paper III. In Paper IV, we introduce a database of mobile applications related to sound and music computing. An overview of the field of sonification is presented in Paper V, along with a systematic review of mapping strategies for sonifying physical quantities. Physical and auditory dimensions were both classified into generic conceptual dimensions, and proportion of use was analyzed in order to identify the most popular mappings. Finally, Paper VI summarizes experiments conducted with the Swedish national rowing team in order to assess sonification models in an interactive context.
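The parameter-mapping approach surveyed in Paper V can be illustrated with a toy mapping from boat velocity to pitch. The velocity range, pitch range, and function names below are invented for illustration and are not taken from the thesis:

```python
def velocity_to_pitch(v, v_min=3.0, v_max=6.0, midi_low=48, midi_high=84):
    """Map boat velocity (m/s) linearly onto a MIDI pitch range.
    Values outside [v_min, v_max] are clamped. Illustrative only."""
    t = (min(max(v, v_min), v_max) - v_min) / (v_max - v_min)
    return midi_low + t * (midi_high - midi_low)

def midi_to_hz(m):
    """Convert a MIDI note number to frequency in Hz (A4 = 440 Hz)."""
    return 440.0 * 2 ** ((m - 69) / 12)

# A velocity fluctuation around the mean becomes an audible pitch contour
contour = [midi_to_hz(velocity_to_pitch(v)) for v in (4.2, 4.8, 5.1, 4.5)]
print([round(f, 1) for f in contour])
```

In an interactive setting, such a function would be evaluated on each incoming sensor sample so that velocity fluctuations are heard as a continuous pitch contour.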

APA, Harvard, Vancouver, ISO, and other styles
37

PAPETTI, Stefano. "Sound modeling issues in interactive sonification - From basic contact events to synthesis and manipulation tools." Doctoral thesis, Università degli Studi di Verona, 2010. http://hdl.handle.net/11562/340961.

Full text
Abstract:
The work presented in this thesis ranges over a variety of research topics, spanning from human-computer interaction to physical modeling. What unites these broad areas of interest is the idea of using physically based computer simulations of acoustic phenomena in order to provide human-computer interfaces with sound feedback that is consistent with the user's interaction. In this regard, recent years have seen the emergence of several new disciplines that go under the names of, to cite a few, auditory display, sonification and sonic interaction design. This thesis deals with the design and implementation of efficient sound algorithms for interactive sonification. To this end, the physical modeling of everyday sounds is taken into account, that is, sounds not belonging to the families of speech and musical sounds.
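A minimal flavor of physically based everyday-sound synthesis is a single resonant mode excited by an impact: a sinusoid with exponential decay. This sketch is illustrative only; the thesis's contact-sound models are far more detailed, and all parameter values here are assumptions:

```python
import math

def impact_mode(freq_hz, decay_s, dur_s=0.5, sr=44100, amp=0.8):
    """Synthesize one resonant mode of an impact sound: a sinusoid
    with exponential amplitude decay. Real modal models sum many
    such modes with frequencies/decays derived from the object."""
    n = int(dur_s * sr)
    return [amp * math.exp(-t / (decay_s * sr)) *
            math.sin(2 * math.pi * freq_hz * t / sr) for t in range(n)]

# A short, bright "tap": a 1 kHz mode decaying with a ~50 ms time constant
samples = impact_mode(1000.0, 0.05)
print(len(samples))
```

Summing several such modes, with parameters driven by the simulated contact event, is the basic recipe behind many interactive everyday-sound models.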
APA, Harvard, Vancouver, ISO, and other styles
38

Deschamps, Marie-Lys, Penelope Sanderson, Kelly Hinckfuss, Caitlin Browning, Robert G. Loeb, Helen Liley, and David Liu. "Improving the detectability of oxygen saturation level targets for preterm neonates: A laboratory test of tremolo and beacon sonifications." ELSEVIER SCI LTD, 2016. http://hdl.handle.net/10150/617179.

Full text
Abstract:
Recent guidelines recommend oxygen saturation (SpO2) levels of 90%-95% for preterm neonates on supplemental oxygen, but it is difficult to discern such levels with current pulse oximetry sonifications. We tested (1) whether adding levels of tremolo to a conventional log-linear pulse oximetry sonification would improve identification of SpO2 ranges, and (2) whether adding a beacon reference tone to conventional pulse oximetry confuses listeners about the direction of change. Participants using the Tremolo (94%) or Beacon (81%) sonifications identified SpO2 range significantly more accurately than participants using the LogLinear sonification (52%). The Beacon sonification did not confuse participants about direction of change. The Tremolo sonification may have advantages over the Beacon sonification for monitoring SpO2 of preterm neonates, but both must be further tested with clinicians in clinically representative scenarios, and with different levels of ambient noise and distraction.
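The tremolo idea can be illustrated with a toy mapping from SpO2 level to a range-identifying tremolo rate. The 90-95% target range follows the abstract; the specific boundaries and rates are invented for illustration and are not the published design:

```python
def spo2_tremolo_rate(spo2: float) -> float:
    """Return a tremolo rate (Hz) identifying the SpO2 range.
    0.0 means no tremolo (within the 90-95% target range).
    Illustrative assumption, not the study's actual mapping."""
    if spo2 > 95:
        return 2.0   # above target: slow tremolo
    if spo2 >= 90:
        return 0.0   # within target: plain pulse tone
    if spo2 >= 85:
        return 6.0   # just below target: faster tremolo
    return 12.0      # well below target: fastest tremolo

print([spo2_tremolo_rate(x) for x in (98, 93, 87, 80)])
```

The point of such a mapping is categorical: listeners need only count or compare tremolo levels to identify the range, rather than judge an absolute pitch.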
APA, Harvard, Vancouver, ISO, and other styles
39

Wolczynski, Leon. "Ljud- eller oljud : Hur upplevs ljudsättning av gränssnitt." Thesis, Mittuniversitetet, Avdelningen för data- och systemvetenskap, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-28993.

Full text
Abstract:
Presenting information or shaping the user's experience through the sound design of interfaces is given low priority in systems development. Interface design keeps the visual presentation of information central and links it primarily to areas such as usability or functional form. The purpose of this study was to investigate how sound affects the user experience and how sound could be implemented to improve users' experience of web-based interfaces. The scientific framework was compiled from a body of articles, which generated a basic categorization covering "Aesthetics", "Sonification", "Sonifying user data", and "User experience and audio". The approach combined methods: a quantitative web-based questionnaire examined areas within the framework's categorization, and a qualitative think-aloud study had two practicing system developers test a prototype with sound. The results of the qualitative study showed that sound was initially not valued during interaction with an interface, while at the same time the prototype's sound design was not perceived as disturbing or negative. The prototype came across as "too" creatively angled, which ties back to the framework's description of two main aesthetic directions in web design. The quantitative analysis shows that usability and order were valued more highly than creative experiences, and that conveying information in interfaces via sound was not considered appropriate. The conclusion of this work is that interface sound design, through a balanced design language and a maintained sense of control, can and should be linked to the user's interaction with the interface. Despite some resistance, it can be used to influence experiences, emotions, and thereby the perceived usability of a system.
APA, Harvard, Vancouver, ISO, and other styles
40

Brittell, Megen. "Neuro-imaging Support for the Use of Audio to Represent Geospatial Location in Cartographic Design." Thesis, University of Oregon, 2019. http://hdl.handle.net/1794/24538.

Full text
Abstract:
Audio has the capacity to display geospatial data. As auditory display design grapples with the challenge of aligning the spatial dimensions of the data with the dimensions of the display, this dissertation investigates the role of time in auditory geographic maps. Three auditory map types translate geospatial data into collections of musical notes, and the arrangement of those notes in time varies across the three map types: sequential, augmented-sequential, and concurrent. Behavioral and neuroimaging methods assess the auditory symbology. A behavioral task establishes geographic context, and neuroimaging provides a quantitative measure of brain responses to the behavioral task under recall and active listening response conditions. In both behavioral and neuroimaging data, two paired contrasts measure differences between the sequential and augmented-sequential map types, and between the augmented-sequential and concurrent map types. Behavioral data reveal differences in both response time and accuracy. Response times for the augmented-sequential map type are substantially longer in both contrasts under the active response condition. Accuracy is lower for concurrent maps than for augmented-sequential maps; response condition influences the direction of differences in accuracy between the sequential and augmented-sequential map types. Neuroimaging data from functional magnetic resonance imaging (fMRI) show significant differences in blood-oxygenation level dependent (BOLD) response during map listening. The BOLD response is significantly stronger in the left auditory cortex and planum temporale for the concurrent map type in contrast to the augmented-sequential map type. The response in the right auditory cortex and bilaterally in the visual cortex is significantly stronger for augmented-sequential maps in contrast to sequential maps.
Results from this research provide empirical evidence to inform choices in the design of auditory cartographic displays, enriching the diversity of geographic map artifacts. Four supplemental files and two data sets are available online. Three audio files demonstrate the three map types: sequential (Supplementary Files, Audio 1), augmented-sequential (Supplementary Files, Audio 2), and concurrent (Supplementary Files, Audio 3). Associated data are available through OpenNeuro (https://openneuro.org/datasets/ds001415).
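The three timing arrangements can be sketched as onset schedules for a list of notes. The inter-onset interval, and the simplification of the augmented-sequential variant to sequential timing, are illustrative assumptions, not the dissertation's design:

```python
def onsets(n_notes, arrangement, ioi=0.25):
    """Return onset times (s) for n_notes under a timing arrangement.
    'sequential': one note at a time; 'concurrent': all at once.
    The augmented-sequential variant adds extra cues to sequential
    timing; for this sketch it is scheduled like 'sequential'."""
    if arrangement == "concurrent":
        return [0.0] * n_notes
    return [i * ioi for i in range(n_notes)]

print(onsets(4, "sequential"))
print(onsets(4, "concurrent"))
```

The trade-off the dissertation measures follows directly from these schedules: sequential maps spread information over time, while concurrent maps compress it into simultaneous notes.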
APA, Harvard, Vancouver, ISO, and other styles
41

Banf, Michael [Verfasser]. "Auditory image understanding for the visually impaired based on a modular computer vision sonification model / Michael Banf." Siegen : Universitätsbibliothek der Universität Siegen, 2013. http://d-nb.info/1045776394/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Scholz, Daniel S. [Verfasser]. "Sonification of arm movements in stroke rehabilitation: a novel approach in neurologic music therapy / Daniel S. Scholz." Hannover : Bibliothek der Tierärztlichen Hochschule Hannover, 2015. http://d-nb.info/1080868100/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Franchin, Wagner José. "Adição e avaliação de estímulos sonoros como ferramenta de apoio à exploração visual de dados." Universidade de São Paulo, 2007. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-05122007-115432/.

Full text
Abstract:
Visualization is a generic process that uses visual and interactive representations to ease the analysis and understanding of complex datasets. To date, most visualization toolkits make almost exclusive use of visual aids to represent information, which has limited their capacity for data presentation and exploration. Many studies have shown that sound as an alternative data display (sonification) can be useful to support the interpretation of information and may also add dimensions to a visual display. Sonification is the object of study of this work. This work implements a new sonification module for a recently developed visual exploration system, Super Spider (Watanabe, 2007), which has been extended with functionalities that support data exploration through sound. A new system, called Sonar 2D, was also developed and integrated with Super Spider, including a new technique for data sonification. In addition, this work presents the results of user evaluations conducted to validate some of the visual and sound mappings employed in both systems.
APA, Harvard, Vancouver, ISO, and other styles
44

Lansley, Alastair. "Adventures in software engineering : plugging HCI & acessibility gaps with open source solutions." Thesis, Federation University Australia, 2020. http://researchonline.federation.edu.au/vital/access/HandleResolver/1959.17/174516.

Full text
Abstract:
There has been a great deal of research in the field of Human-Computer Interfaces (HCI), input devices, and output modalities in recent years. From touch-based and voice-control input mechanisms such as those found on modern smart devices, to touch-free input through video-stream/image analysis (including depth streams and skeletal mapping), and the inclusion of gaze tracking, head tracking, virtual reality and beyond - the availability and variety of these I/O (Input/Output) mechanisms has increased tremendously and progressed both into our living rooms and into our lives in general. With regard to modern desktop computers and videogame consoles, many of these technologies are at present at a relatively immature stage of development, their use often limited to simple adjuncts to the staple input mechanisms of mouse, keyboard, or joystick/joypad. In effect, we have these new input devices, but we are not quite sure how best to use them yet; that is, where their various strengths and weaknesses lie, and how or whether they can conveniently and reliably drive or augment applications in our everyday lives. In addition, much of this technology is provided by proprietary hardware and software, offering limited options for customisation or adaptation to better meet the needs of specific users. This project therefore investigated the development of open-source software solutions addressing various aspects of innovative user I/O in a flexible manner. To that end, a number of original software applications were developed that aim to enhance the current state of the art in these areas, and that software is made freely available for use by any who may find it beneficial.
Doctor of Philosophy
APA, Harvard, Vancouver, ISO, and other styles
45

Bressolette, Benjamin. "Manipulations gestuelles d'un objet virtuel sonifié pour le contrôle d'une interface en situation de conduite." Thesis, Ecole centrale de Marseille, 2018. http://www.theses.fr/2018ECDM0009/document.

Full text
Abstract:
Car manufacturers offer a wide range of secondary driving controls, such as GPS, music, or ventilation, often centralized on a touch-sensitive screen. However, operating them while driving can be unsafe: engaging the sense of sight to interact with the interface can reduce vigilance towards the driving task, which can lead to high-risk situations. In this PhD thesis, part of a collaborative research project involving the PSA Group and the PRISM laboratory, we propose an association of gestures and sounds as an alternative to visual solicitation. The goal is to enable blind interface interactions, allowing drivers to keep their eyes on the road. When interface manipulation and the driving task are performed jointly, a multisensory solicitation can lower the driver's cognitive load in comparison with a visual unimodal situation. To make the gesture-sound association feel more natural, a virtual object that can be handled with gestures is introduced. This object is the support for sonification strategies, constructed by analogy with sounds from our environment, which are the consequence of an action on an object. The virtual object also makes it possible to structure different gestures around the same metaphor, and to redefine the interface's menu. The first part of this thesis deals with the development of sonification strategies intended to inform users about the virtual object's dynamics. Two perceptual experiments were set up, which led to the discrimination of two valuable sonification strategies. In a second part, the automotive application was addressed by designing new sound stimuli and the interface, and by studying multisensory integration. Sounds were proposed for each of the two sonification strategies, to progress towards in-vehicle integration. The evocations brought by the gesture-sound association were the subject of a third, blinded perceptual experiment.
The concepts around the virtual object were unknown and gradually discovered by the subjects. The mental images conveyed by the sonification strategies can help users familiarize themselves with the interface. A fourth perceptual experiment focused on first-time handling of the virtual object, in which the integration of audio-visual stimuli was studied in the context of an interface manipulation. The experimental conditions were similar to a driver first discovering the interface in a parked vehicle with audio-visual stimuli, and then operating it through the sonification strategies only. The results of these experiments led to the design of a gestural interface, which was compared with a touchscreen interface in a final perceptual experiment carried out in a driving simulator. Although the results show better performance for the tactile interface, the combination of gestures and sounds proved effective from the cognitive-load point of view. The gestural interface can therefore offer a promising alternative or complement to tactile interfaces for safe simultaneous use while driving.
APA, Harvard, Vancouver, ISO, and other styles
46

Landelle, Caroline. "Impact du vieillissement sur la perception multisensorielle et les processus cérébraux sous-jacents : étude de la kinesthésie et de la perception de textures." Thesis, Aix-Marseille, 2019. http://www.theses.fr/2019AIXM0146.

Full text
Abstract:
We perceive our body and our environment better when several sensory sources are taken into account at the same time. But all sensory systems decline gradually over the course of aging. This thesis contributes to a better understanding of how multisensory perceptions and the brain networks underlying them are modified in the elderly. This work highlights a reweighting of sensory information and a general facilitation of interaction processes between the senses to optimize the perception of body movements or the perception of textures from the age of 65. At the brain level, the collapse of inhibitory processes with age would lead to poorer selection of networks and would explain the perceptual disorders. Nevertheless, older people could benefit from less specific brain recruitment to at least partially overcome these sensory declines.
We can better perceive our body and our environment if we take into account several sensory sources at the same time. However, all sensory systems gradually decline with aging. This thesis contributes to a better understanding of how multisensory perceptions and the underlying brain networks are modified in the elderly. This work highlights both a reweighting of sensory information and a general facilitation of interaction processes between the senses to optimize the perception of body movements or the perception of textures from the age of 65. At the brain level, the breakdown of inhibitory processes with age would lead to a poorer selection of networks and would explain perceptual disorders. Nevertheless, older people could benefit from less specific brain recruitment to at least partially compensate for these sensory declines.
APA, Harvard, Vancouver, ISO, and other styles
47

Denjean, Sebastien. "Sonification des véhicules électriques par illusions auditives : étude de l'intégration audiovisuelle de la perception du mouvement automobile en simulateur de conduite." Thesis, Aix-Marseille, 2015. http://www.theses.fr/2015AIXM4710.

Full text
Abstract:
This thesis work deals with the development of a sonification strategy aiming to provide acoustic feedback that can substitute for engine noise in electric vehicles, restoring to the driver the information that noise usually conveys about the vehicle's dynamics. To this end, a first analysis phase allowed us to study how automotive noise influences our perception of motion. From two experiments conducted in a driving simulator, we were able to relate the acoustic feedback to the speed perceived by the driver, thereby defining the engine-noise metaphor on which the control of the synthesized sounds is based. Like engine noise, the proposed acoustic feedback informs the driver through its pitch variation. To arrive at a sound that informs the driver effectively and remains acceptable over the vehicle's entire speed range, we relied on the Shepard-Risset infinite glissando illusion, which provides precise feedback through a pitch that varies quickly while staying within a restricted frequency band. The contribution of this strategy was finally tested in two experiments, the first on the influence of this acoustic feedback on drivers' speed perception, the second on their behavior in a braking task. Both studies showed a significant effect of the acoustic feedback, suggesting that this information is well integrated by drivers and making these sounds a promising candidate to become the "engine noise" of tomorrow's vehicles.
This thesis aims to build an auditory display to sonify electric vehicles. Our goal consisted in bringing back to the driver the motion information which is usually provided by combustion engine noise. The first stage of this work consisted in analyzing how automotive noises can influence drivers' perception of motion. We conducted two driving simulator experiments to study drivers' speed perception in the presence of different automotive noises. The results provided a link between the acoustic feedback and the speed perceived by the driver, on which we based our sonification strategy. Similarly to combustion engine noise, the acoustic feedback proposed in this work informs the driver via its pitch variation. We used the Shepard-Risset glissando illusion to sonify the whole speed range of the vehicle. The pitch circularity in the construction of these sounds provides precise information on small speed variations through fast pitch variations, while remaining confined to a narrow frequency band. We then tested the contribution of this strategy in two experiments. The first dealt with the influence of the proposed sounds on drivers' speed perception; the second with their behavior in a common braking task. These studies showed that drivers easily integrate the information carried by this sound, and that it influences their perception of motion and modifies their driving behavior. These findings make the proposed sound a good candidate to become the new "engine noise" of future electric cars.
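The pitch circularity described above can be illustrated with a minimal sketch of a Shepard-tone generator driven by vehicle speed. The base frequency, number of partials, semitones-per-km/h rate, and raised-cosine spectral envelope here are illustrative assumptions, not the parameters used in the thesis:

```python
import math

def shepard_components(speed_kmh, f_min=110.0, n_partials=6,
                       semitones_per_kmh=0.5):
    """Map vehicle speed to the partials of a Shepard tone.

    Pitch rises with speed but wraps within one octave, so the
    percept climbs indefinitely while the component frequencies
    stay bounded (the circularity the sonification exploits).
    Returns a list of (frequency_hz, amplitude) pairs.
    """
    # Position within the octave, in [0, 1): wraps as speed grows.
    phase = (speed_kmh * semitones_per_kmh / 12.0) % 1.0
    components = []
    for k in range(n_partials):
        freq = f_min * 2.0 ** (k + phase)
        # Raised-cosine envelope over log-frequency: partials fade
        # in at the bottom and out at the top, masking the octave
        # jump so the glissando sounds continuous.
        pos = (k + phase) / n_partials
        amp = 0.5 * (1.0 - math.cos(2.0 * math.pi * pos))
        components.append((freq, amp))
    return components
```

With these settings the spectrum repeats every 24 km/h, so the feedback stays within one bounded frequency band however fast the vehicle goes.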
APA, Harvard, Vancouver, ISO, and other styles
48

Henriks, Olof. "Mapping physical movement parameters to auditory parameters by using human body movement." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-200831.

Full text
Abstract:
This study focuses on evaluating a system containing five different mappings of physical movement parameters to auditory parameters. Physical parameters such as size and location, among others, were obtained using a motion tracking system, with the user's two hands tracked as rigid bodies. Translating these variables into auditory parameter variables made it possible to control different parameters of MIDI files. The aim of the study was to determine how well a total of five participants, all with prior musical knowledge and experience, could adapt to the system, in terms of both user-generated data and overall user experience. The study showed that the participants developed a positive personal engagement with the system and with this way of altering audio and music. Exploring the initial mappings of the system yielded ideas for future development in potential forthcoming work.
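A mapping from tracked hand positions to MIDI controller values, in the spirit of the study, might look like the following sketch. The axis choices, room dimensions, and controller numbers (CC 1 for hand separation, CC 7 for right-hand height) are hypothetical illustrations, not the mappings evaluated in the thesis:

```python
def hands_to_midi_cc(left_pos, right_pos, room=(2.0, 2.0, 2.0)):
    """Map two tracked hand positions (metres, x/y/z tuples in a
    calibrated volume) to MIDI control-change values in 0..127.

    Hand separation ("size") drives one controller; the height of
    the right hand drives another. Returns {cc_number: value}.
    """
    def clamp127(v):
        return max(0, min(127, int(round(v))))

    # Hand separation, normalised by the room diagonal -> CC 1.
    diag = sum(d * d for d in room) ** 0.5
    dist = sum((a - b) ** 2 for a, b in zip(left_pos, right_pos)) ** 0.5
    cc_size = clamp127(127 * dist / diag)

    # Right-hand height, normalised by room height -> CC 7.
    cc_height = clamp127(127 * right_pos[2] / room[2])
    return {1: cc_size, 7: cc_height}
```

Clamping to 0..127 keeps the output valid MIDI even when a hand briefly leaves the calibrated volume.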
APA, Harvard, Vancouver, ISO, and other styles
49

Henthorne, Cody M. "Sonifying Performance Data to Facilitate Tuning of Complex Systems." Thesis, Virginia Tech, 2010. http://hdl.handle.net/10919/78162.

Full text
Abstract:
In the modern computing landscape, the challenge of tuning software systems is exacerbated by the need to accommodate multiple divergent execution environments and stakeholders. Achieving optimal performance requires a different configuration for every combination of hardware setup and business requirements. In addition, the state of the art in system tuning can involve complex statistical models and tools that require deep expertise not commonly possessed by the average software engineer. As an alternative approach to performance tuning, this thesis puts forward the use of sonification (conveying information via non-speech audio) to aid software engineers in tuning complex systems. In particular, this thesis designs, develops, and evaluates a tuning system that interactively (i.e., in response to user actions) sonifies the performance metrics of a computer system. This thesis demonstrates that interactive sonification can effectively guide software engineers through performance tuning of a computer system. To that end, a scientific survey determined which sound characteristics (e.g., loudness, panning, pitch, tempo, etc.) are best suited to conveying information to the engineer. These characteristics were used to create a proof-of-concept tuning system that was applied to tune the parameters of a real-world enterprise application server. Equipped with the tuning system, engineers who were experts in neither enterprise computing nor performance tuning were able to tune the server so that its resulting performance surpassed that exhibited under the standard configuration. The results indicate that sound-based tuning approaches can provide valuable solutions to the challenges of configuring complex computer systems.
Master of Science
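The general idea of conveying "distance from optimal" through sound characteristics such as tempo and loudness can be sketched as follows. The parameter ranges and the direction of each mapping are illustrative assumptions, not the design chosen in the thesis:

```python
def sonify_metric(value, target, worst):
    """Map a performance metric to sound parameters for interactive
    tuning feedback.

    Returns (tempo_bpm, volume): tempo rises and volume falls as the
    metric approaches its target, so a fast, quiet pulse signals a
    well-tuned system and a slow, loud one signals a poor one.
    """
    # Normalised error in [0, 1]: 0 at the target, 1 at (or beyond)
    # the worst expected value.
    err = min(1.0, max(0.0, (value - target) / (worst - target)))
    tempo_bpm = 60 + (1.0 - err) * 120   # 60 bpm (bad) .. 180 bpm (good)
    volume = 0.2 + err * 0.8             # quiet when tuned, loud when off
    return tempo_bpm, volume
```

An engineer adjusting a server parameter would then hear each configuration change immediately, e.g. `sonify_metric(latency_ms, target=100.0, worst=500.0)` after every request sample.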
APA, Harvard, Vancouver, ISO, and other styles
50

Banf, Michael [Verfasser]. "Making the Visual World Audible : Auditory Image Understanding for the Visually Impaired Based on a Modular Computer Vision Sonification Model / Michael Banf." Aachen : Shaker, 2013. http://nbn-resolving.de/urn:nbn:de:101:1-20140428791.

Full text
APA, Harvard, Vancouver, ISO, and other styles
