Dissertations / Theses on the topic 'Interactive sonification'


Consult the top 18 dissertations / theses for your research on the topic 'Interactive sonification.'


1

Ejdbo, Malin, and Elias Elmquist. "Interactive Sonification in OpenSpace." Thesis, Linköpings universitet, Medie- och Informationsteknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-170250.

Full text
Abstract:
This report presents the work of a master's thesis whose aim was to investigate how sonification can be used in the space visualization software OpenSpace to further convey information about the Solar System. A sonification was implemented in SuperCollider and integrated into OpenSpace using Open Sound Control, sending positional data to control the panning and sound level of the sonification. The graphical user interface of OpenSpace was also extended to make the sonification interactive. Evaluations were conducted both online and in the Dome theater to assess how well the sonification conveyed information. The outcome of the evaluations shows promising results, suggesting that sonification has a future in conveying information about the Solar System.
APA, Harvard, Vancouver, ISO, and other styles
2

Perkins, Rhys John. "Interactive sonification of a physics engine." Thesis, Anglia Ruskin University, 2013. http://arro.anglia.ac.uk/323077/.

Full text
Abstract:
Physics engines have become increasingly prevalent in everyday technology. In the context of this thesis they are regarded as a readily available data set that has the potential to intuitively present the process of sonification to a wide audience. Unfortunately, this process is not the focus of attention when formative decisions are made concerning the continued development of these engines. This may reveal a missed opportunity when considering that the field of interactive sonification upholds the importance of physical causalities for the analysis of data through sound. The following investigation deliberates the contextual framework of this field to argue that the physics engine, as part of typical game engine architecture, is an appropriate foundation on which to design and implement a dynamic toolset for interactive sonification. The basis for this design is supported by a number of significant theories which suggest that the underlying data of a rigid body dynamics physics system can sustain an inherent audiovisual metaphor for interaction, interpretation and analysis. Furthermore, it is determined that this metaphor can be enhanced by the extraordinary potential of the computer in order to construct unique abstractions which build upon the many pertinent ideas and practices within the surrounding literature. These abstractions result in a mental model for the transformation of data to sound that has a number of advantages in contrast to a physical modelling approach while maintaining its same creative potential for instrument building, composition and live performance. Ambitions for both sonification and its creative potential are realised by several components which present the user with a range of options for interacting with this model. The implementation of these components effectuates a design that can be demonstrated to offer a unique interpretation of existing strategies as well as overcoming certain limitations of comparable work.
3

Forsberg, Joel. "A Mobile Application for Improving Running Performance Using Interactive Sonification." Thesis, KTH, Tal, musik och hörsel, TMH, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-159577.

Full text
Abstract:
Apps that assist long-distance runners have become popular; however, most of them focus on results computed from distance and time. To become a better runner, an improvement of both body posture and running gait is required. Using sonic feedback to improve performance in different sports applications has become an established research area during the last two decades. Sonic feedback is particularly well suited for activities where the user has to maintain visual focus on something, for example when running. The goal of this project was to implement a mobile application that addresses long-distance runners' body posture and running gait. By decreasing the energy demand at a given velocity, the runner's performance can be improved. The application uses the sensors in a mobile phone to analyze the runner's vertical force, step frequency, velocity and body tilt, and sonifies those parameters interactively by altering the music that the user is listening to. The implementation was made in the visual programming language Pure Data together with MobMuPlat, which enables the use of Pure Data on a mobile phone. Tests were carried out with runners of different levels of experience; the results showed that the runners could interact with the music for three of the four parameters, but more training is required to be able to change the running gait in real time.
4

Zhao, Haixia. "Interactive sonification of abstract data - framework, design space, evaluation, and user tool." College Park, Md. : University of Maryland, 2006. http://hdl.handle.net/1903/3394.

Full text
Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2006.
Thesis research directed by: Computer Science. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
5

Dubus, Gaël. "Interactive sonification of motion : Design, implementation and control of expressive auditory feedback with mobile devices." Doctoral thesis, KTH, Musikakustik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-127944.

Full text
Abstract:
Sound and motion are intrinsically related, by their physical nature and through the link between auditory perception and motor control. If sound provides information about the characteristics of a movement, a movement can also be influenced or triggered by a sound pattern. This thesis investigates how this link can be reinforced by means of interactive sonification. Sonification, the use of sound to communicate, perceptualize and interpret data, can be used in many different contexts. It is particularly well suited for time-related tasks such as monitoring and synchronization, and is therefore an ideal candidate to support the design of applications related to physical training. Our objectives are to develop and investigate computational models for the sonification of motion data, with a particular focus on expressive movement and gesture, and for the sonification of elite athletes' movements. We chose to develop our applications on a mobile platform in order to make use of advanced interaction modes using an easily accessible technology. In addition, the networking capabilities of modern smartphones potentially allow for adding a social dimension to our sonification applications by extending them to several collaborating users. The sport of rowing was chosen to illustrate the assistance that an interactive sonification system can provide to elite athletes. Bringing into play complex interactions between various kinematic and kinetic quantities, studies on rowing kinematics provide guidelines to optimize rowing efficiency, e.g. by minimizing velocity fluctuations around the average velocity. However, rowers can rely only on sparse cues to get information about boat velocity, such as the sound made by the water splashing on the hull. We believe that interactive augmented feedback communicating the dynamic evolution of some kinematic quantities could represent a promising way of enhancing the training of elite rowers.
Since only limited space is available on a rowing boat, the use of mobile phones appears appropriate for handling streams of incoming data from various sensors and generating an auditory feedback simultaneously. The development of sonification models for rowing and their design evaluation in offline conditions are presented in Paper I. In Paper II, three different models for sonifying the synchronization of the movements of two users holding a mobile phone are explored. Sonification of expressive gestures by means of expressive music performance is tackled in Paper III. In Paper IV, we introduce a database of mobile applications related to sound and music computing. An overview of the field of sonification is presented in Paper V, along with a systematic review of mapping strategies for sonifying physical quantities. Physical and auditory dimensions were both classified into generic conceptual dimensions, and proportion of use was analyzed in order to identify the most popular mappings. Finally, Paper VI summarizes experiments conducted with the Swedish national rowing team in order to assess sonification models in an interactive context.


6

Edström, Viking, and Fredrik Hallberg. "Human Interaction in 3D Manipulations : Can sonification improve the performance of the interaction?" Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-146344.

Full text
Abstract:
In this report the effects of using sonification when performing movements in 3D space are explored. User studies were performed where participants had to repeatedly move their hand toward a target. Three different sonification modes were tested where the fundamental frequency, sound level and sound rate were varied respectively depending on the distance to the target. The results show that there is no statistically significant performance increase for any sonification mode. There is however an indication that sonification increases the interaction speed for some users. The mode which provided the greatest average performance increase was when the sound level was varied. This mode gave a 7% average speed increase over the silent control mode. However, the sound level mode has some significant drawbacks, especially the very high base volume requirement, which might not make it the best suited sonification mode for all applications. In the general case we instead recommend using the sonification mode that varies the sound rate, which gave a slightly lower performance gain but can be played at a lower volume due to its binary nature.
7

Reynal, Maxime. "Non-visual interaction concepts : considering hearing, haptics and kinesthetics for an augmented remote tower environment." Thesis, Toulouse, ISAE, 2019. http://www.theses.fr/2019ESAE0034.

Full text
Abstract:
In an effort to simplify human resource management and reduce operational costs, control towers are now increasingly designed not to be located at the airport itself but operated remotely. This concept, known as remote tower, offers a "digital" working context: the view of the runways is broadcast remotely using cameras located on site. Furthermore, this concept could be extended to the control of several airports simultaneously from one remote tower facility, by only one air traffic controller (multiple remote tower). These concepts offer designers the possibility to develop novel forms of interaction. However, most current augmentations rely on sight, which is heavily used and therefore sometimes becomes overloaded. In this Ph.D. work, the design and evaluation of new interaction techniques that rely on non-visual human senses have been considered (e.g. hearing, touch and proprioception). Two experimental campaigns were conducted to address specific use cases. These use cases were identified during the design process by involving experts from the field, and are relevant to controllers due to the criticality of the situations they define. These situations are a) poor visibility (heavy fog conditions, loss of video signal in a remote context), b) unauthorized movements on the ground (when pilots move their aircraft without having been previously cleared), c) runway incursion (which occurs when an aircraft crosses the holding point to enter the runway while another is about to land), and d) dealing with multiple calls on distinct radio frequencies coming from multiple airports. The first experimental campaign aimed at quantifying the contribution of a multimodal interaction technique based on spatial sound, kinaesthetic interaction and vibrotactile feedback to address the first use case, poor visibility conditions.
The purpose was to enhance controllers' perception and increase the overall level of safety by providing them with a novel way to locate aircraft when deprived of sight. 22 controllers were involved in a laboratory task within a simulated environment. Objective and subjective results showed significantly higher performance in poor visibility using interactive spatial sound coupled with vibrotactile feedback, which gave the participants notably higher accuracy in degraded visibility. Meanwhile, response times were significantly longer while remaining acceptably short considering the temporal aspect of the task. The goal of the second experimental campaign was to evaluate 3 other interaction modalities and feedback addressing 3 other critical situations, namely unauthorized movements on the ground, runway incursion and calls from a secondary airport. We considered interactive spatial sound, tactile stimulation and body movements to design 3 different interaction techniques and feedback. 16 controllers participated in an ecological experiment in which they were asked to control 1 or 2 airports (Single vs. Multiple operations), with augmentations activated or not. While no clear results were obtained for the interaction modalities in multiple remote tower operations, behavioural results showed a significant increase in overall participants' performance when augmentation modalities were activated in single remote control tower operations. The first campaign was the initial step in the development of a novel interaction technique that uses sound as a precise means of localization. These two campaigns constituted the first steps toward considering non-visual multimodal augmentations in remote tower operations.
8

Parseihian, Gaëtan. "Sonification binaurale pour l'aide à la navigation." PhD thesis, Université Pierre et Marie Curie - Paris VI, 2012. http://tel.archives-ouvertes.fr/tel-00771316.

Full text
Abstract:
In this thesis, we propose an augmented-reality system based on 3D sound and sonification, intended to provide blind users with the information they need for reliable and safe travel. The design of this system was approached along three axes. The use of binaural synthesis to generate 3D sound is limited by the problem of HRTF individualization. A method was developed to adapt individuals to non-individual HRTFs by exploiting brain plasticity. Evaluated in a localization experiment, this method showed that a virtual audio-spatial map can be acquired rapidly without using vision. The sonification of spatial data was studied in the context of a system for grasping objects in peripersonal space. Localization abilities for real and virtual sound sources were examined with a localization test. A technique for sonifying distance was developed: by linking the parameter to be sonified to the parameters of an audio effect, this technique can be applied to any type of sound without requiring additional learning. A sonification strategy that takes user preferences into account was also devised. "Morphocons" are auditory icons defined by patterns of acoustic parameters; this method allows the construction of a sound vocabulary independent of the sound used. A categorization test showed that subjects are able to recognize auditory icons on the basis of a morphological description, regardless of the type of sound used.
9

Savard, Alexandre. "When gestures are perceived through sounds : a framework for sonification of musicians' ancillary gestures." Thesis, McGill University, 2008. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=116051.

Full text
Abstract:
This thesis presents a multimodal sonification system that combines video with sound synthesis generated from motion capture data. Such a system allows for fast and efficient exploration of musicians' ancillary gestural data, for which sonification complements conventional video by stressing certain details that could escape one's attention if not displayed using an appropriate representation. The main objective of this project is to provide a research tool designed for people who are not necessarily familiar with signal processing or computer science. This tool is capable of easily generating meaningful sonifications thanks to dedicated mapping strategies. On the one hand, dimensionality reduction of data obtained from motion capture systems such as the Vicon is fundamental, as such data may comprise more than 350 signals describing gestures. For that reason, a Principal Component Analysis is used to objectively reduce the number of signals to a subset that conveys the most significant gesture information in terms of signal variance. On the other hand, movement data presents high variability depending on the subjects: additional control parameters for sound synthesis are offered to restrict the sonification to the significant gestures, easily perceivable visually in terms of speed and path distance. Then, signal conditioning techniques are proposed to adapt the control signals to sound synthesis parameter requirements or to allow for emphasizing certain gesture characteristics that one finds important. All these data treatments are performed in realtime within one unique environment, minimizing data manipulation and facilitating efficient sonification designs. Realtime processing also allows the system to respond instantly to parameter changes and process selection, so that the user can easily and interactively manipulate data and design and adjust sonification strategies.
10

Smith, Daniel R. "Effects of training and context on human performance in a point estimation sonification task." Thesis, Georgia Institute of Technology, 2003. http://hdl.handle.net/1853/32845.

Full text
11

Winberg, Fredrik. "Contextualizing Accessibility : Interaction for Blind Computer Users." Doctoral thesis, Stockholm : Human-Computer Interaction, Kungliga tekniska högskolan, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4758.

Full text
12

Magnani, Alessandro. "Sonificazione: stato dell'arte e casi di studio." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/24697/.

Full text
Abstract:
In recent years, advances in digital signal processing technologies have encouraged the use of sound in multimedia systems, not only for its musical characteristics but also as a means of representing more or less complex information and data. Exploiting hearing in combination with sight comes so naturally in everyday life that we hardly notice how complex this system is. Today, being able to draw on this faculty when interacting with technology is crucial. In these terms, sonification has opened new horizons in many scientific fields, through the effectiveness and efficiency demonstrated in various case studies. The aim of this thesis is to give the reader an understanding of the concept of sonification, focusing in particular on its technical aspects and on the advantages and disadvantages it can offer. In addition to the notions already present in the state of the art, two recent case studies are presented, in an attempt to illustrate how this discipline can have a fundamental impact on everyday life.
13

Barrass, Stephen. "Auditory Information Design." The Australian National University. Computer Science, 1998. http://thesis.anu.edu.au./public/adt-ANU20010702.150218.

Full text
Abstract:
The prospect of computer applications making "noises" is disconcerting to some. Yet the soundscape of the real world does not usually bother us. Perhaps we only notice a nuisance? This thesis is an approach for designing sounds that are useful information rather than distracting "noise". The approach is called TaDa because the sounds are designed to be useful in a Task and true to the Data.

Previous researchers in auditory display have identified issues that need to be addressed for the field to progress. The TaDa approach is an integrated approach that addresses an array of these issues through a multifaceted system of methods drawn from HCI, visualisation, graphic design and sound design. A task-analysis addresses the issue of usefulness. A data characterisation addresses perceptual faithfulness. A case-based method provides semantic linkage to the application domain. A rule-based method addresses psychoacoustic control. A perceptually linearised sound space allows transportable auditory specifications. Most of these methods have not been used to design auditory displays before, and each has been specially adapted for this design domain.

The TaDa methods have been built into computer-aided design tools that can assist the design of a more effective display, and may allow less than experienced designers to make effective use of sounds. The case-based method is supported by a database of examples that can be searched by an information analysis of the design scenario. The rule-based method is supported by a direct manipulation interface which shows the available sound gamut of an audio device as a 3D coloured object that can be sliced and picked with the mouse. These computer-aided tools are the first of their kind to be developed in auditory display.

The approach, methods and tools are demonstrated in scenarios from the domains of mining exploration, resource monitoring and climatology. These practical applications show that sounds can be useful in a wide variety of information processing activities which have not been explored before. The sounds provide information that is difficult to obtain visually, and improve the directness of interactions by providing additional affordances.
14

Jeon, Myounghoon. ""Spindex" (speech index) enhances menu navigation user experience of touch screen devices in various input gestures: tapping, wheeling, and flicking." Thesis, Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/37101.

Full text
Abstract:
In a large number of electronic devices, users interact with the system by navigating through various menus. Auditory menus can complement or even replace visual menus, so research on auditory menus has recently increased with mobile devices as well as desktop computers. Despite the potential importance of auditory displays on touch screen devices, little research has been attempted to enhance the effectiveness of auditory menus for those devices. In the present study, I investigated how advanced auditory cues enhance auditory menu navigation on a touch screen smartphone, especially for new input gestures such as tapping, wheeling, and flicking methods for navigating a one-dimensional menu. Moreover, I examined if advanced auditory cues improve user experience, not only for visuals-off situations, but also for visuals-on contexts. To this end, I used a novel auditory menu enhancement called a "spindex" (i.e., speech index), in which brief audio cues inform the users of where they are in a long menu. In this study, each item in a menu was preceded by a sound based on the item's initial letter. One hundred and twenty-two undergraduates navigated through an alphabetized list of 150 song titles. The study was a split-plot design with manipulated auditory cue type (text-to-speech (TTS) alone vs. TTS plus spindex), visual mode (on vs. off), and input gesture style (tapping, wheeling, and flicking). Target search time and subjective workload for the TTS + spindex were lower than those of the TTS alone in all input gesture types regardless of visual type. Also, on subjective rating scales, participants rated the TTS + spindex condition higher than the plain TTS on being 'effective' and 'functionally helpful'. The interaction between input methods and output modes (i.e., auditory cue types) and its effects on navigation behaviors was also analyzed based on the two-stage navigation strategy model used in auditory menus. Results were discussed in analogy with visual search theory and in terms of practical applications of spindex cues.
15

Nyström, Elin. "Hur vår sinnesstämningen påverkas av ett förstärkt ljudlandskap : En konceptdriven undersökning av hur en användarupplevelse skulle kunna designas för att öka kontakten med vår nära omgivning." Thesis, Högskolan Kristianstad, Avdelningen för design, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hkr:diva-22554.

Full text
Abstract:
Curiosity and a sense of connection with our everyday environment is important for our well-being. However, the combination of not being able to appreciate what is close to us and smartphones stealing our attention just makes a larger gap between us and everyday environments. By engaging people through sound to direct our focus towards our surroundings and away from the smartphone, you can bring your curiosity to life, and in that way get an enhanced connection to the everyday environment. This paper presents a design concept based on a study made with a Concept-Driven Design method. The purpose of the suggested design concept is that the insights and knowledge that is drawn from it can be used to help future research within the field. This paper examines how a user experience of an augmented soundscape could be designed for people to enhance their connection to their surroundings. In an empirical study, participants get to compare how their mood is affected by a natural soundscape compared to an augmented soundscape in an everyday environment. The result shows that all of the participants' mood was affected by the soundscape, however when it comes to enhanced connection to their surroundings only 40% of the participants said it did. A conclusion was made that the mood is affected differently depending on personal preferences and previous experience.
16

Perkins, Rhys J. "Interactive sonification of a physics engine." Thesis, 2013. https://arro.anglia.ac.uk/id/eprint/323077/1/Thesis%20Rhys%20Perkins.pdf.

Full text
Abstract:
Physics engines have become increasingly prevalent in everyday technology. In the context of this thesis they are regarded as a readily available data set that has the potential to intuitively present the process of sonification to a wide audience. Unfortunately, this process is not the focus of attention when formative decisions are made concerning the continued development of these engines. This may reveal a missed opportunity when considering that the field of interactive sonification upholds the importance of physical causalities for the analysis of data through sound. The following investigation deliberates the contextual framework of this field to argue that the physics engine, as part of typical game engine architecture, is an appropriate foundation on which to design and implement a dynamic toolset for interactive sonification. The basis for this design is supported by a number of significant theories which suggest that the underlying data of a rigid body dynamics physics system can sustain an inherent audiovisual metaphor for interaction, interpretation and analysis. Furthermore, it is determined that this metaphor can be enhanced by the extraordinary potential of the computer in order to construct unique abstractions which build upon the many pertinent ideas and practices within the surrounding literature. These abstractions result in a mental model for the transformation of data to sound that has a number of advantages in contrast to a physical modelling approach while maintaining its same creative potential for instrument building, composition and live performance. Ambitions for both sonification and its creative potential are realised by several components which present the user with a range of options for interacting with this model. The implementation of these components effectuates a design that can be demonstrated to offer a unique interpretation of existing strategies as well as overcoming certain limitations of comparable work.
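The core argument of the abstract — that rigid-body state from a physics engine already carries physically meaningful quantities that can drive a sonification — can be illustrated with a toy mapping. The specific choices below (body mass to pitch, impact speed to loudness) are our own assumptions for illustration, not the toolset the thesis designs.

```python
def collision_to_sound(mass_kg: float, impact_speed: float):
    """Map one rigid-body collision event to sound parameters.

    Heavier bodies sound lower (frequency falls with mass); faster
    impacts sound louder (amplitude clipped to the 0..1 range).
    Returns (frequency_hz, amplitude).
    """
    freq = 880.0 / (1.0 + mass_kg)
    amp = min(1.0, impact_speed / 10.0)
    return freq, amp

# A 1 kg body striking at 5 m/s yields a mid-range, half-loud event.
event = collision_to_sound(mass_kg=1.0, impact_speed=5.0)
```

In a game-engine setting these parameters would be sent each frame (e.g. over OSC) to a synthesis engine, so the sound stays causally tied to the simulated physics the user sees.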
17

Caldis, Constantina. "Data sonification artworks : a music and design investigation of multi-modal interactive installations." Thesis, 2014.

Find full text
Abstract:
Through three case studies, this research report explores the design and multi-modal attributes of interactive installations that feature characteristics comparable to those of musical instruments. It briefly outlines and defines data sonification and the five sonification techniques: audification, auditory icons, earcons, parameter-mapping sonification and model-based sonification. Model-based data sonification is explored more closely, as interactivity and the incorporation of multiple modes are key elements within these types of sonification. Pre-existing knowledge of musical instruments, their multi-modal attributes and their functionality is analyzed in relation to the case studies. The design analysis focuses on the interaction and interface design while incorporating the three modes, namely: visual guidance, real-time physical/gestural interaction, and instantaneous acoustic feedback. To conclude, the report demonstrates, through the case studies, that the function and design of musical instruments can suggest a potential way forward for digital artists and their multi-modal interactive installations.
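Of the five techniques listed above, parameter-mapping sonification is the most direct to sketch: each data value is mapped onto an auditory parameter such as pitch. The linear mapping and frequency range below are assumptions chosen for the example, not taken from the report.

```python
def map_to_pitch(value: float, lo: float, hi: float,
                 f_min: float = 220.0, f_max: float = 880.0) -> float:
    """Parameter-mapping sonification of a single value: linearly map
    value in [lo, hi] onto a pitch in [f_min, f_max] (Hz).
    The default range spans two octaves, A3 to A5."""
    t = (value - lo) / (hi - lo)
    return f_min + t * (f_max - f_min)

# Sonify a small temperature series: rising data is heard as rising pitch.
temps = [10, 15, 20, 25, 30]
pitches = [map_to_pitch(v, lo=10, hi=30) for v in temps]
```

A synthesis engine would then play each pitch in sequence; non-linear (e.g. logarithmic) mappings are often preferred in practice because pitch perception is roughly logarithmic in frequency.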
18

Barrass, Stephen. "Auditory Information Design." Phd thesis, 1997. http://hdl.handle.net/1885/46072.

Full text
Abstract:
The prospect of computer applications making "noises" is disconcerting to some. Yet the soundscape of the real world does not usually bother us. Perhaps we only notice a nuisance? This thesis is an approach for designing sounds that are useful information rather than distracting "noise". The approach is called TaDa because the sounds are designed to be useful in a Task and true to the Data.

Previous researchers in auditory display have identified issues that need to be addressed for the field to progress. The TaDa approach is an integrated approach that addresses an array of these issues through a multifaceted system of methods drawn from HCI, visualisation, graphic design and sound design. A task-analysis addresses the issue of usefulness. A data characterisation addresses perceptual faithfulness. A case-based method provides semantic linkage to the application domain. A rule-based method addresses psychoacoustic control. A perceptually linearised sound space allows transportable auditory specifications. Most of these methods have not been used to design auditory displays before, and each has been specially adapted for this design domain.

The TaDa methods have been built into computer-aided design tools that can assist the design of a more effective display, and may allow less experienced designers to make effective use of sounds. The case-based method is supported by a database of examples that can be searched by an information analysis of the design scenario. The rule-based method is supported by a direct-manipulation interface which shows the available sound gamut of an audio device as a 3D coloured object that can be sliced and picked with the mouse. These computer-aided tools are the first of their kind to be developed in auditory display.

The approach, methods and tools are demonstrated in scenarios from the domains of mining exploration, resource monitoring and climatology. These practical applications show that sounds can be useful in a wide variety of information processing activities which have not been explored before. The sounds provide information that is difficult to obtain visually, and improve the directness of interactions by providing additional affordances.
