To view the other types of publications on this topic, follow the link: Interactive audio.

Dissertations on the topic "Interactive audio"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Consult the top 50 dissertations for your research on the topic "Interactive audio".

Next to each work in the bibliography, an "Add to bibliography" option is available. Use it, and the bibliographic reference for the selected work will be formatted automatically in the chosen citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online abstract, provided the relevant parameters are available in the metadata.

Browse dissertations from a wide range of disciplines and compile your bibliography correctly.

1

Smith, Adam Douglas 1975. „WAI-KNOT (Wireless Audio Interactive Knot)“. Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/62360.

Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2001.
Includes bibliographical references (leaves 44-45).
The Sound Transformer is a new type of musical instrument. It looks a little like a saxophone, but when you sing or "kazoo" into it, astonishing transforms and mutations come out. What actually happens is that the input sound is sent via 802.11 wireless link to a net server that transforms the sound and sends it back to the instrument's speaker. In other words, instead of a resonant acoustic body, or a local computer synthesizer, this architecture allows sound to be sourced or transformed by an infinite array of online services, and channeled through a gesturally expressive handheld. Emerging infrastructures (802.11, Bluetooth, 3G and 4G, etc) seem to aim at this new class of instrument. But can such an architecture really work? In particular, given the delays incurred by decoupling the sound transformation from the instrument over a wireless network, are interactive music applications feasible? My thesis is that they are. To prove this, I built a platform called WAI-KNOT (for Wireless Audio Interactive Knot) in order to examine the latency issues as well as other design elements, and test their viability and impact on real music making. The Sound Transformer is a WAI-KNOT application.
Adam Douglas Smith.
S.M.
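The feasibility question posed in this abstract is, at bottom, a latency budget: the round trip through capture buffering, the 802.11 link, server-side processing, and playback buffering has to stay within what performers tolerate. The sketch below illustrates that accounting; every figure is an illustrative assumption, not a measurement from the thesis.

```python
# Hypothetical latency budget for a networked sound-transformation instrument.
# Every figure here is an illustrative assumption, not a measurement from the thesis.

SAMPLE_RATE = 44_100   # Hz
BLOCK_SIZE = 256       # samples per audio buffer

def buffer_delay_ms(blocks: int = 1) -> float:
    """Delay contributed by capturing or playing `blocks` audio buffers."""
    return blocks * BLOCK_SIZE / SAMPLE_RATE * 1000.0

budget = {
    "A/D capture buffer":           buffer_delay_ms(),
    "wireless round trip (802.11)": 8.0,   # assumed network RTT in ms
    "server-side transformation":   5.0,   # assumed DSP time in ms
    "D/A playback buffer":          buffer_delay_ms(),
}

total = sum(budget.values())
for stage, ms in budget.items():
    print(f"{stage:30s} {ms:5.1f} ms")
print(f"{'total':30s} {total:5.1f} ms")

# A common rule of thumb places the tolerable action-to-sound delay for
# live playing somewhere around 10-30 ms.
print("within a 30 ms interactive budget?", total <= 30.0)
```

Swapping measured values in for the assumed ones turns the same accounting into a quick feasibility check for any networked instrument design.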
2

Olaleye, Olufunke I. „Symbiotic Audio Communication on Interactive Transport“. Kent State University / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=kent1176438067.

3

Tsingos, Nicolas. „MODELS AND ALGORITHMS FOR INTERACTIVE AUDIO RENDERING“. Habilitation à diriger des recherches, Université de Nice Sophia-Antipolis, 2008. http://tel.archives-ouvertes.fr/tel-00629574.

Abstract:
Interactive virtual reality systems combine visual, auditory, and haptic representations to simulate, in an immersive way, the exploration of a three-dimensional world seen from the viewpoint of an observer controlled in real time by the user. Historically, most work in this field has focused on the visual aspects (for example, methods for the interactive display of complex 3D models or for realistic and efficient lighting simulation), and comparatively little work has been devoted to the simulation of virtual sound sources, also called auralization. Yet sound simulation is clearly a key factor in producing synthetic environments, since auditory perception complements visual perception and yields a more natural interaction. In particular, spatialized sound effects, whose direction of arrival is faithfully reproduced at the listener's ears, are especially important for localizing objects, separating multiple simultaneous sound signals, and providing cues about the spatial characteristics of the environment (size, materials, etc.). Most immersive virtual reality systems, from the most complex simulators to consumer video games, now implement sound synthesis and spatialization algorithms that improve navigation and increase realism and the user's sense of presence in the synthetic environment. Like image synthesis, of which it is the auditory counterpart, auralization, also called sound rendering, is a broad subject at the crossroads of many disciplines: computer science, acoustics and electroacoustics, signal processing, music, geometric computing, as well as psychoacoustics and audio-visual perception. It covers three main problems: synthesis and interactive control of sounds, simulation of sound propagation effects in the environment, and, finally, perception and spatial reproduction at the listener's ears. Historically, these three problems emerged from work in architectural acoustics, musical acoustics, and psychoacoustics. However, a fundamental difference between sound rendering for virtual reality and acoustics lies in multimodal interaction and in the efficiency of the algorithms required for interactive applications. These important aspects make it a field in its own right, of growing importance both in acoustics and in image synthesis/virtual reality.
4

Brossier, Paul M. „Automatic annotation of musical audio for interactive applications“. Thesis, Queen Mary, University of London, 2006. http://qmro.qmul.ac.uk/xmlui/handle/123456789/3809.

Abstract:
As machines become more and more portable, and part of our everyday life, it becomes apparent that developing interactive and ubiquitous systems is an important aspect of new music applications created by the research community. We are interested in developing a robust layer for the automatic annotation of audio signals, to be used in various applications, from music search engines to interactive installations, and in various contexts, from embedded devices to audio content servers. We propose adaptations of existing signal processing techniques to a real time context. Amongst these annotation techniques, we concentrate on low and mid-level tasks such as onset detection, pitch tracking, tempo extraction and note modelling. We present a framework to extract these annotations and evaluate the performances of different algorithms. The first task is to detect onsets and offsets in audio streams within short latencies. The segmentation of audio streams into temporal objects enables various manipulation and analysis of metrical structure. Evaluation of different algorithms and their adaptation to real time are described. We then tackle the problem of fundamental frequency estimation, again trying to reduce both the delay and the computational cost. Different algorithms are implemented for real time and experimented on monophonic recordings and complex signals. Spectral analysis can be used to label the temporal segments; the estimation of higher level descriptions is approached. Techniques for modelling of note objects and localisation of beats are implemented and discussed. Applications of our framework include live and interactive music installations, and more generally tools for the composers and sound engineers. Speed optimisations may bring a significant improvement to various automated tasks, such as automatic classification and recommendation systems. We describe the design of our software solution, for our research purposes and in view of its integration within other systems.
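Low-latency onset detection of the sort evaluated in this work is typically built from a spectral detection function followed by peak picking. The sketch below shows one common variant, half-wave-rectified spectral flux with median-based peak picking, as a generic NumPy illustration; the frame size, hop, and threshold are arbitrary assumptions, and this is not code from the thesis.

```python
import numpy as np

def spectral_flux_onsets(x: np.ndarray, sr: int,
                         frame: int = 1024, hop: int = 512,
                         threshold: float = 1.5) -> list[float]:
    """Return onset times (seconds) using half-wave rectified spectral flux."""
    window = np.hanning(frame)
    n_frames = 1 + max(0, (len(x) - frame) // hop)
    prev_mag = np.zeros(frame // 2 + 1)
    flux = np.zeros(n_frames)
    for i in range(n_frames):
        seg = x[i * hop: i * hop + frame] * window
        mag = np.abs(np.fft.rfft(seg))
        # Sum only the spectral bins whose energy increased since the last frame.
        flux[i] = np.sum(np.maximum(mag - prev_mag, 0.0))
        prev_mag = mag
    # Simple peak picking against a local median.
    onsets = []
    for i in range(1, n_frames - 1):
        local = flux[max(0, i - 8): i + 8]
        if flux[i] > threshold * np.median(local) and flux[i] >= flux[i - 1] and flux[i] > flux[i + 1]:
            onsets.append(i * hop / sr)
    return onsets

# Toy example: two clicks in one second of audio should yield two onsets.
sr = 22_050
x = np.zeros(sr)
x[2_000] = 1.0
x[15_000] = 1.0
print(spectral_flux_onsets(x, sr))
```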
5

Sheets, Gregory S. „Audio coding and identification for an interactive television application“. Thesis, This resource online, 1996. http://scholar.lib.vt.edu/theses/available/etd-02132009-172049/.

6

Jordan, Eric Michael. „Programming models for the development of interactive audio applications“. Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/37764.

Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1995.
Includes bibliographical references (leaves 59-63).
by Eric Michael Jordan.
M.S.
7

Malone, Caitlin A. „A Visit to the Priory: An Interactive Audio Tour“. Digital WPI, 2016. https://digitalcommons.wpi.edu/etd-theses/389.

Abstract:
The chapter house of the Benedictine priory of Saint John Le Bas-Nueil, currently located in the Worcester Art Museum, is an impressive piece of architecture. However, visitors are currently restricted to admiring the structure and its restoration only, as there is limited information presented in the museum about the room’s original use. The purpose of this project was to produce a low-impact, narrative-driven audio experience designed to increase visitor interest in the museum in general and Benedictine life during the twelfth century in particular. The prototype produced combines elements of traditional audio tours, radio drama, and question-and-answer interaction sequences to provide a self-driven immersive experience.
8

Byers, Kenneth Charles. „Full-body interaction : perception and consciousness in interactive digital 3-dimension audio visual installations“. Thesis, University of the West of Scotland, 2017. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.740180.

9

Lucas, Stephen 1985. „Virtual Stage: Merging Virtual Reality Technologies and Interactive Audio/Video“. Thesis, University of North Texas, 2017. https://digital.library.unt.edu/ark:/67531/metadc984124/.

Abstract:
Virtual Stage is a project to use Virtual Reality (VR) technology as an audiovisual performance interface. The depth of control, modularity of design, and user immersion aim to solve some of the representational problems in interactive audiovisual art and the control problems in digital musical instruments. Creating feedback between interaction and perception, the VR environment references the viewer's behavioral intuition developed in the real world, facilitating clarity in the understanding of artistic representation. The critical essay discusses interactive behavior, game mechanics, interface implementations, and technical developments to express the structures and performance possibilities. This discussion uses Virtual Stage as an example with specific aesthetic and technical solutions, but addresses archetypal concerns in interactive audiovisual art. The creative documentation lists the interactive functions present in Virtual Stage as well as code reproductions of selected technical solutions. The included code excerpts document novel approaches to virtual reality implementation and acoustic physical modeling of musical instruments.
10

Roy, Deb Kumar 1969. „NewsComm--a hand-held device for interactive access to structured audio“. Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/60444.

11

Christensen, Tania. „The Sound of Suspense : Designing an audio-physical, interactive storytelling system“. Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280701.

Abstract:
In this paper, the design process and evaluation of an interactive audio-physical storytelling system that uses motion capture to deliver a narrative is described. Following a research through design approach, the aim is to establish which factors are important when designing audio-physical systems for storytelling. To investigate how the visuals and audio collaborated, two different experiments were conducted. Six participants were filmed when interacting with the system while using the Sensual Evaluation Instrument (SEI), followed by an interview focusing on their experience of the narrative and an analysis of their SEI usage. Results showed that the audio structures the dramatic experience, while the visual elements put the narrative in a context and support curiosity.
12

Donat-Bouillud, Pierre. „Models, Analysis and Execution of Audio Graphs in Interactive Multimedia Systems“. Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS604.

Abstract:
Interactive Multimedia Systems (IMSs) are used in concert for interactive performances, which combine in real time acoustic instruments, electronic instruments, data from various sensors (gestures, midi interface, etc.) and the control of different media (video, light, etc.). This thesis presents a formal model of audio graphs, via a type system and a denotational semantics, with multirate timestamped bufferized data streams that make it possible to represent with more or less precision the interleaving of the control (for example a low frequency oscillator, velocities from an accelerometer) and audio processing in an IMS. An audio extension of Antescofo, an IMS that acts as a score follower and includes a dedicated synchronous timed language, has motivated the development of this model. This extension makes it possible to connect Faust effects and native effects on the fly safely. The approach has been validated on a mixed music piece and an example of audio and video interactions. Finally, this thesis proposes offline optimizations based on the automatic resampling of parts of an audio graph to be executed. A quality and execution time model in the graph has been defined. Its experimental study was carried out using a prototype IMS based on the automatic generation of audio graphs, which has also made it possible to characterize resampling strategies proposed for the online case in real time.
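At its most basic, an audio graph of the kind modelled here is a DAG of processing nodes evaluated once per buffer in demand-driven (or topological) order, with rate conversion wherever neighbouring nodes run at different rates. The sketch below shows only that core evaluation idea, with a crude stand-in for a half-rate subgraph; it is a generic illustration under assumed node names and block sizes, not the typed model or the Antescofo extension described in the thesis.

```python
import numpy as np

class Node:
    """A processing node: `func` maps a list of input buffers to one output buffer."""
    def __init__(self, name, func, inputs=()):
        self.name, self.func, self.inputs = name, func, list(inputs)

def run_graph(nodes, sink):
    """Evaluate the graph once for one block by memoised depth-first traversal."""
    cache = {}
    def pull(node):
        if node.name not in cache:
            ins = [pull(nodes[i]) for i in node.inputs]
            cache[node.name] = node.func(ins)
        return cache[node.name]
    return pull(nodes[sink])

SR, BLOCK = 48_000, 256
phase = np.arange(BLOCK) / SR

graph = {
    "osc":  Node("osc",  lambda ins: np.sin(2 * np.pi * 440 * phase)),
    "gain": Node("gain", lambda ins: 0.5 * ins[0], inputs=["osc"]),
    # A cheap stand-in for a subgraph running at half rate: drop every other
    # sample, "process", then repeat samples back up to the block rate.
    "halfrate": Node("halfrate",
                     lambda ins: np.repeat(np.tanh(ins[0][::2]), 2),
                     inputs=["gain"]),
    "mix":  Node("mix",  lambda ins: ins[0] + ins[1], inputs=["gain", "halfrate"]),
}

out = run_graph(graph, "mix")
print(out.shape)   # one block of audio, (256,)
```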
13

Schiessl, Simon Karl Josef 1972. „Acoustic chase : designing an interactive audio environment to stimulate human body movement“. Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/26919.

Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2004.
Includes bibliographical references (p. 58-60).
An immersive audio environment was created that explores how humans react to commands imposed by a machine generating its acoustic stimuli on the basis of tracked body movement. In this environment, different states of human and machine action are understood as a balance of power that moves back and forth between the apparatus and the human being. This system is based on spatial sounds that are designed to stimulate body movements. The physical set-up consists of headphones with attached sensors to pick up the movements of the head. Mathematic models calculate the behavior of the sound, its virtual motion path relative to the person, and how it changes over time.
by Simon Karl Josef Schiessl.
S.M.
14

Tilki, John F. „Encoding a Hidden Digital Signature Using Psychoacoustic Masking“. Thesis, Virginia Tech, 1998. http://hdl.handle.net/10919/36785.

Abstract:
The Interactive Video Data System (IVDS) project began with an initial abstract concept of achieving interactive television by transmitting hidden digital information in the audio of commercials. Over the course of three years such a communication method was successfully developed, the hardware systems to realize the application were designed and built, and several full-scale field tests were conducted. The novel coding scheme satisfies all of the design constraints imposed by the project sponsors. By taking advantage of psychoacoustic properties, the hidden digital signature is inaudible to most human observers yet is detectable by the hardware decoder. The communication method is also robust against most extraneous room noise as well as the wow and flutter of videotape machines. The hardware systems designed for the application have been tested and work as intended. A triple-stage audio amplifier buffers the input signal, eliminates low frequency interference such as human voices, and boosts the filtered result to an appropriate level. A codec samples the filtered and amplified audio, and feeds it into the digital signal processor. The DSP, after applying a pre-emphasis and compensation filter, performs the data extraction by calculating FFTs, compensating for frequency shifts, estimating the digital signature, and verifying the result via a cyclic redundancy check. It then takes action appropriate for the command specified in the digital signature. If necessary it will verbally prompt and provide information to the user, and will decode infrared signals from a remote control. The results of interactions are transmitted by radio frequency spread spectrum to a cell site, where they are then forwarded to the host computer.
Master of Science
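The decoder chain described in this abstract (filtering, FFTs, signature estimation, CRC) can be caricatured with a toy model: each bit is carried by the presence of one of two high-frequency tones mixed at low level into the host audio, and recovered by comparing FFT magnitudes at those frequencies. The frequencies, bit duration, and embedding level below are illustrative assumptions only; this is not the psychoacoustic coding scheme developed in the thesis.

```python
import numpy as np

SR = 44_100                  # sample rate (Hz), assumed
BIT_DUR = 0.05               # seconds per bit, assumed
F0, F1 = 9_000.0, 10_000.0   # signalling tones for bit 0 / bit 1, assumed
LEVEL = 0.02                 # embedding level, far below the host programme

def embed(host: np.ndarray, bits: list[int]) -> np.ndarray:
    """Mix one low-level tone per bit into the host audio."""
    out = host.copy()
    n = int(BIT_DUR * SR)
    t = np.arange(n) / SR
    for i, b in enumerate(bits):
        tone = LEVEL * np.sin(2 * np.pi * (F1 if b else F0) * t)
        out[i * n:(i + 1) * n] += tone
    return out

def detect(signal: np.ndarray, n_bits: int) -> list[int]:
    """Recover bits by comparing spectral energy at the two signalling tones."""
    n = int(BIT_DUR * SR)
    freqs = np.fft.rfftfreq(n, 1 / SR)
    k0, k1 = np.argmin(np.abs(freqs - F0)), np.argmin(np.abs(freqs - F1))
    bits = []
    for i in range(n_bits):
        mag = np.abs(np.fft.rfft(signal[i * n:(i + 1) * n]))
        bits.append(1 if mag[k1] > mag[k0] else 0)
    return bits

# Toy host signal: low-frequency "programme" audio plus a little noise.
rng = np.random.default_rng(0)
n_total = int(8 * BIT_DUR * SR)
host = 0.5 * np.sin(2 * np.pi * 220 * np.arange(n_total) / SR) + 0.01 * rng.standard_normal(n_total)

payload = [1, 0, 1, 1, 0, 0, 1, 0]
print(detect(embed(host, payload), len(payload)))   # expected: the payload bits
```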
15

West, Charles J. Rhodes Dent. „An interactive system for developing multimediated hospital-based patient instruction“. Normal, Ill. Illinois State University, 2001. http://wwwlib.umi.com/cr/ilstu/fullcit?p3064488.

Abstract:
Thesis (Ed. D.)--Illinois State University, 2001.
Title from title page screen, viewed March 30, 2006. Dissertation Committee: Dent Rhodes (chair), Norman Bettis, Kenneth Jerich, Joaquin Vila. Includes bibliographical references (leaves 110-118) and abstract. Also available in print.
16

Patel, Dipankumar Dalubhai. „Subjective effects of cell loss and bit error on compressed audio-visual applications over ATM“. Thesis, Imperial College London, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.314077.

17

Blue, Kevin J. „In/retrospection : an interactive audiovisual composition for ten-piece orchestra, electronically manipulated audio, and video“. Virtual Press, 2007. http://liblink.bsu.edu/uhtbin/catkey/1365789.

Abstract:
In/Retrospection is an audiovisual composition employing audio and video in an interactive form, written for a ten-piece orchestra, electronically generated audio, and video that interact with each other in a variety of ways. Not only is the use of overall interaction employed, but each element of the composition is given its own space to develop and take its place in the forefront of the listeners/viewers focus, thus shifting attention to various aspects of the composition. In this way, the composition is neither a video with accompanying audio or audio with accompanying video, but a combination of both forms. On top of this, the electroacoustic portion of the piece, employing both traditional orchestral instruments as well as electronically manipulated sounds and music, adds yet another level of interaction and attention-shifting mechanics to the composition. The constant shifting of the listener's/viewer's focus is the fundamental idea explored in In/Retrospection.
School of Music
18

Wozniewski, Michael. „A framework for interactive three-dimensional sound and spatial audio processing in a virtual environment“. Thesis, McGill University, 2007. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=18430.

Abstract:
Immersive virtual environments offer the possibility of natural interaction within a virtual world that is familiar to users because it is based on everyday activity. The use of such environments for the representation and control of 3-D sound remains largely unexplored. We propose a novel paradigm for interacting with sound and using virtual space as the medium for spatial audio processing. A supporting software framework has been developed that provides functionality not available in other 3-D audio systems, including powerful control over the directivity of sound and the ability to bend the rules of physics for musical purposes. These features provide the necessary tools to create virtual scenes for audio engineering, musical creation, listening, and performance. Tracking technology allows the use of gesture-based interaction techniques to control the environment, resulting in many possibilities for novel applications.
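The directivity control mentioned above can be reduced, in the simplest case, to a per-source gain that multiplies a distance-rolloff term by a radiation-pattern term depending on the angle between the source's aim and the listener. The following toy function shows that idea under assumed rolloff and directivity exponents; it is a generic sketch, not the framework implemented in the thesis.

```python
import math

def source_gain(src_pos, src_aim, listener_pos,
                rolloff: float = 1.0, directivity: float = 2.0) -> float:
    """Gain of a virtual source at the listener position.

    src_aim is a unit vector giving the direction the source 'faces';
    larger `directivity` narrows the radiation pattern toward that direction.
    """
    dx = [l - s for s, l in zip(src_pos, listener_pos)]
    dist = max(1e-6, math.sqrt(sum(d * d for d in dx)))
    to_listener = [d / dist for d in dx]
    # Cosine of the angle between the source's aim and the direction to the listener.
    cos_angle = sum(a * b for a, b in zip(src_aim, to_listener))
    pattern = max(0.0, cos_angle) ** directivity      # cardioid-like lobe
    attenuation = 1.0 / max(1.0, dist) ** rolloff     # inverse-distance rolloff
    return pattern * attenuation

# Listener directly in front of the source vs. behind it.
print(source_gain((0, 0, 0), (1, 0, 0), (2, 0, 0)))   # on-axis: audible
print(source_gain((0, 0, 0), (1, 0, 0), (-2, 0, 0)))  # behind: silent
```

"Bending the rules of physics for musical purposes", as the abstract puts it, then amounts to letting these exponents take values a real room never could.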
19

Blake, Todd Arthur. „Micro Coin (TM) Computer Interactive Educational System“. Virtual Press, 1985. http://liblink.bsu.edu/uhtbin/catkey/491464.

Abstract:
The purpose of this creative project was to develop a promotional videotape to be used in the marketing process of Micro Coin(TM). This area had not been explored before by Micro Coin Electronics Incorporated. Based on the information given to me about Micro Coin I was given total control of the content of the videotape. I based my creative project on comparing current marketing techniques of computers and computer software, and Micro Coin builds and improves those techniques. Micro Coin is such a revolutionary idea, there was the need to show an example of Micro Coin being used. I learned that even with total control creativity is limited.
20

Leaman, Oliver. „CuDAS : an interactive curriculum combining pedagogic composition with interactive software for the teaching of music technology“. Thesis, Brunel University, 2010. http://bura.brunel.ac.uk/handle/2438/6071.

Abstract:
Within the framework of education of Music Technology for 16-18 year olds there exists a lack of thorough teaching and learning resources sufficient for a broad understanding of the basics of audio and electronic synthesis. This PhD submission outlines the role of the composer in the classroom in addressing this fundamental issue through the development of a curriculum containing pedagogic composition and interactive software. There will be a discussion of the principles of pedagogic methodologies developed by various composers and of the current model of learning provided in Music Technology A-level. The programming tools used to develop the software are investigated, as well as an exploration into the current learning psychology that informed the curriculum development. This submission consists of a written thesis that accompanies a set of compositions and a multimedia DVD, which includes the software for the CuDAS curriculum. Within this software is contained a presentation of a series of interactive tutorials alongside compositions in the form of scores, recordings and interactive exercises. Also included are written supporting documentation and sound files of techniques and recordings from contrasting genres of music history.
21

Denois, Véronique. „La télévision démocratique : recherche sur l'hypothèse d'un téléspectateur sujet dans l'espace audio-visuel“. Paris 5, 1996. http://www.theses.fr/1996PA05H090.

Abstract:
Our research seeks to account for a founding moment of the changing audio-visual landscape in which we live: the shift of television from a monopoly to the logic of a privatized space. Several analyses emphasize economic power and the subjection of the audiovisual medium to its financiers. Claiming the freedom so belatedly acquired in the French case, professionals maintained that they accepted only one prescriber: the public. They intended to implement "televisual democracy" by subordinating mediated communication to the objectives of social communication. Our object was to verify the reality of this televisual democracy and to understand its mechanisms by scrutinizing the relational interplay between television and its public. The notion of the public to which audiovisual professionals refer socially organizes this space. Our work proposes an analysis of this notion, first as a reality of the dynamics of the audio-visual field, then as the hypothesis of an actor capable of recognizing itself as such in the social space. Through statements by professionals collected from 1987 to 1992, the first part analyzes the constructions of the viewer. The study of an interactive serial then allows us to test the hypothesis of the viewer as a "partner" in a situation of more direct confrontation. Finally, the possibility of viewers constituting themselves as a counter-power to the audiovisual field is considered through a historical overview and a field study of viewers' associations.
22

Bolte, Jason L. „Forgotten dreams : an electro-acoustic composition for double bass, eight-channel digital audio, and interactive electronics“. Virtual Press, 2003. http://liblink.bsu.edu/uhtbin/catkey/1265460.

Abstract:
This creative project explores the technical and musical possibilities associated with the composition of an electro-acoustic work, scored for double bass, eight-channel digital audio, and interactive electronics, that integrates the uniqueness and spontaneity of live performance with the textural, timbral, and spatial complexity that can be achieved through the use of prerecorded digital audio. The most significant compositional idea that is explored in this work is the interactions between the double bass, digital audio, and interactive electronics. These three components interact with one another on several levels. This is not limited to harmonic, melodic, or rhythmic components, but also includes such attributes as timbre, texture, dynamics, timing, improvisation, and sound projection. To create this interaction, several computer applications are used for the realization of the digital audio and score notation, and in live performance.
School of Music
23

Ong, Felicia Li Chin. „Transforming the Learner's Environment: Blending Interactive and Multimedia [Poster presentation]“. School of Engineering, Design and Technology. University of Bradford, 2010. http://hdl.handle.net/10454/4445.

24

Chiu, Chi-Hsun. „Multimedia technology enhances library services : creating an interactive DVD for Muncie Public Library“. Virtual Press, 2006. http://liblink.bsu.edu/uhtbin/catkey/1345334.

Abstract:
This creative project is to create a DVD as an interactive tool for Muncie Public Library librarians, introducing the library's environment and promoting programs to local residents. The DVD provides a friendly interface and utilizes the latest technology, such as Quick Time movies, 360° Virtual pictures and animations in introducing the library's facilities and guiding Muncie residents visually around the library. Additionally, the DVD provides a new method instead of a traditional flyer for residents to access the library's services and programs.
Department of Telecommunications
25

Christel, Michael George. „A comparative study of digital video interactive interfaces in the delivery of a code inspection course“. Diss., Georgia Institute of Technology, 1991. http://hdl.handle.net/1853/8151.

26

Beane, Allison Brooke. „Generating audio-responsive video images in real-time for a live symphony performance“. Texas A&M University, 2003. http://hdl.handle.net/1969.1/5927.

Abstract:
Multimedia performances, uniting music and interactive images, are a unique form of entertainment that has been explored by artists for centuries. This audio-visual combination has evolved from rudimentary devices generating visuals for single instruments to cutting-edge video image productions for musical groups of all sizes. Throughout this evolution, a common goal has been to create real-time, audio-responsive visuals that accentuate the sound and enhance the performance. This paper explains the creation of a project that produces real-time, audio-responsive and artist interactive visuals to accompany a live musical performance by a symphony orchestra. On April 23, 2006, this project was performed live with the Brazos Valley Symphony Orchestra. The artist, onstage during the performance, controlled the visual presentation through a user interactive, custom computer program. Using the power of current visualization technology, this digital program was written to manipulate and synchronize images to a musical work. This program uses pre-processed video footage chosen to reflect the energy of the music. The integration of the video imagery into the program became a reiterative testing process that allowed for important adjustments throughout the visual creation process. Other artists are encouraged to use this as a guideline for creating their own audio-visual projects exploring the union of visuals and music.
27

Pridemore, David H. „Interactive CD-ROM computer tour of the Ball State University Department of Art“. Virtual Press, 1995. http://liblink.bsu.edu/uhtbin/catkey/961564.

Abstract:
For my creative thesis project I authored an interactive tour of the Ball State Department of Art. Many underlying factors go into this project. My desire to learn multimedia design, the department's desire to develop a new information tool and having the necessary hardware and software to do such a project were all key to its success. In the summer of 1994 I came to Ball State to learn multimedia authoring while getting a master's degree in art. Unknown to me at that time, the department had set a goal of increasing visibility both within and beyond the Ball State community. Faculty members Professor Phil Repp and Professor Christine Paul were collaborating on a promotional identity campaign. From these collaborations grew the idea of a departmental publication to promote the mission and programs of the Department of Art. With the rapid advancement of technology, it seemed appropriate to use computers as part of this promotional campaign. As Professors Paul and Repp researched the possible ways in which computers could be incorporated into this project, many questions remained. Exactly what form should a project like this take and who could do it? Careful discussion and planning also followed over what physical form the project should take (i.e. video tape, a computer disk, or printed material). Eventually the decision was made that an interactive tour of the Department of Art on CD-ROM was the most appropriate solution. For the amount of information that needed to be included and to engage the end user in a dynamic, interactive way, this medium was also the most logical. My decision to return to school coincides perfectly with the department's needs. Professor Paul's and Professor Repp's collaboration led to the conclusion that a third person would be needed. Someone who was already literate in advanced computer graphics and had the desire for such an undertaking. Therefore, my goals of advancing my understanding of Macintosh based digital imagery learning multimedia are significant on two levels; my career as a teacher and a professional artist would realize significant gains and this project is an outstanding addition to my portfolio. For the past several years, the primary area of artistic study for me has been in the area of computer graphics and I came to Ball State last summer with some very specific goals. One of them being to learn Macromedia Director (the authoring package I used to create the project). Director is nationally recognized by professionals in this field as the top program for this type of work. Therefore, this was both an opportunity to reach personal goals and to create a thesis project that could be used as an important part of the Department of Art's identity campaign. My thesis project is the result of my own goals and the Department of Art's goals to utilize cutting edge technology for designing innovative computer programs. I'm sure at the onset of this project that I did not understand the full magnitude of an undertaking such as this. However, it is very rewarding to look back and see both how far I've come personally and how the piece has progressed into a dynamic information tool.
Department of Art
28

Ahmad, Ali. „DESIGN FOR AUDITORY DISPLAYS: IDENTIFYING TEMPORAL AND SPATIAL INFORMATION CONVEYANCE PRINCIPLES“. Doctoral diss., University of Central Florida, 2007. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2835.

Abstract:
Designing auditory interfaces is a challenge for current human-systems developers. This is largely due to a lack of theoretical guidance for directing how best to use sounds in today's visually-rich graphical user interfaces. This dissertation provided a framework for guiding the design of audio interfaces to enhance human-systems performance. This doctoral research involved reviewing the literature on conveying temporal and spatial information using audio, using this knowledge to build three theoretical models to aid the design of auditory interfaces, and empirically validating select components of the models. The three models included an audio integration model that outlines an end-to-end process for adding sounds to interactive interfaces, a temporal audio model that provides a framework for guiding the timing for integration of these sounds to meet human performance objectives, and a spatial audio model that provides a framework for adding spatialization cues to interface sounds. Each model is coupled with a set of design guidelines theorized from the literature, thus combined, the developed models put forward a structured process for integrating sounds in interactive interfaces. The developed models were subjected to a three phase validation process that included review by Subject Matter Experts (SMEs) to assess the face validity of the developed models and two empirical studies. For the SME review, which assessed the utility of the developed models and identified opportunities for improvement, a panel of three audio experts was selected to respond to a Strengths, Weaknesses, Opportunities, and Threats (SWOT) validation questionnaire. Based on the SWOT analysis, the main strengths of the models included that they provide a systematic approach to auditory display design and that they integrate a wide variety of knowledge sources in a concise manner. The main weaknesses of the models included the lack of a structured process for amending the models with new principles, some branches were not considered parallel or completely distinct, and lack of guidance on selecting interface sounds. The main opportunity identified by the experts was the ability of the models to provide a seminal body of knowledge that can be used for building and validating auditory display designs. The main threats identified by the experts were that users may not know where to start and end with each model, the models may not provide comprehensive coverage of all uses of auditory displays, and the models may act as a restrictive influence on designers or they may be used inappropriately. Based on the SWOT analysis results, several changes were made to the models prior to the empirical studies. Two empirical evaluation studies were conducted to test the theorized design principles derived from the revised models. The first study focused on assessing the utility of audio cues to train a temporal pacing task and the second study combined both temporal (i.e., pace) and spatial audio information, with a focus on examining integration issues. In the pace study, there were four different auditory conditions used for training pace: 1) a metronome, 2) non-spatial auditory earcons, 3) a spatialized auditory earcon, and 4) no audio cues for pace training. Sixty-eight people participated in the study. A pre- post between subjects experimental design was used, with eight training trials. The measure used for assessing pace performance was the average deviation from a predetermined desired pace. 
The results demonstrated that a metronome was not effective in training participants to maintain a desired pace, while, spatial and non-spatial earcons were effective strategies for pace training. Moreover, an examination of post-training performance as compared to pre-training suggested some transfer of learning. Design guidelines were extracted for integrating auditory cues for pace training tasks in virtual environments. In the second empirical study, combined temporal (pacing) and spatial (location of entities within the environment) information were presented. There were three different spatialization conditions used: 1) high fidelity using subjective selection of a "best-fit" head related transfer function, 2) low fidelity using a generalized head-related transfer function, and 3) no spatialization. A pre- post between subjects experimental design was used, with eight training trials. The performance measures were average deviation from desired pace and time and accuracy to complete the task. The results of the second study demonstrated that temporal, non-spatial auditory cues were effective in influencing pace while other cues were present. On the other hand, spatialized auditory cues did not result in significantly faster task completion. Based on these results, a set of design guidelines was proposed that can be used to direct the integration of spatial and temporal auditory cues for supporting training tasks in virtual environments. Taken together, the developed models and the associated guidelines provided a theoretical foundation from which to direct user-centered design of auditory interfaces.
Ph.D.
Department of Industrial Engineering and Management Systems
Engineering and Computer Science
Industrial Engineering PhD
29

Libler, Rebecca W. „A study of the effectiveness of interactive television as the primary mode of instruction in selected high school physics classes“. Virtual Press, 1991. http://liblink.bsu.edu/uhtbin/catkey/776632.

Abstract:
The study gathered and analyzed data about the impact of interactive television on student achievement and attitude in high school physics classes. Students enrolled in a distance learning program using interactive television to teach physics were the study population. Data were obtained from eighty-five students at six remote sites and the originating site. Z-tests of the mean scores obtained by the study population on each section of the American Association of Physics Teachers/National Science Teachers Association (AAPT/NSTA) Introductory Physics Examination Version 1988R indicated the study population achieved at a level significantly lower than the test norming population in all four areas analyzed. A one-way analysis of variance (ANOVA Model) was completed on achievement data arranged by group according to type of classroom monitoring. Group 1 had certified teachers acting as on-site facilitators; Group 2 had no on-site facilitators. There was no significant difference (p > .05) in achievement between the two groups. A survey was administered to determine the attitudes of students toward interactive television as the method of instruction and to assess student attitude toward the course content. Frequency and percentage distributions of responses to each question on the student survey were descriptive of student attitude. A one-way analysis of variance (ANOVA Model) failed to demonstrate any significant difference at the .05 level in attitudes between the group in classrooms monitored by certified teachers and the group in classrooms which were self-monitored. Students enrolled in the interactive television physics course held slightly more positive than negative attitudes toward interactive television as the method of instruction. Student attitude toward interactive television was less positive after taking the course than prior to taking the course. Students in interactive television classes generally held positive attitudes toward the content of physics.
Department of Educational Leadership
30

Maake, Matsobane Joshua. „Using interactive television in the in-service education and training of guidance teachers“. Thesis, University of Pretoria, 2000. http://hdl.handle.net/2263/26234.

Abstract:
This study is focused on how technology is employed as educational support media in distance education. The aim is to establish the availability and accessibility of interactive television for both guidance teachers and students in rural, remote and previously disadvantaged communities. Interactive television could be used to support the primary modes of education, namely, contact education on campus or at remote sites, paper-based distance education and Web-based distance education for in-service education and training of guidance teachers. The TELETUKS schools project is cursorily presented as an example of a technology-enhanced delivery system to facilitate interactive television learning. The ITV has the potential to be cost-effective, saving on travelling costs and reaching for increased numbers of upgrading guidance teachers per unit time. A comprehensive interactive television model for in-service training of the guidance teacher in the Northern Province is presented.
Thesis (PhD (Educational Guidance and Counselling))--University of Pretoria, 2007.
Educational Psychology
unrestricted
31

Strandberg, Carl. „Mediating Interactions in Games Using Procedurally Implemented Modal Synthesis : Do players prefer and choose objects with interactive synthetic sounds over objects with traditional sample based sounds?“ Thesis, Luleå tekniska universitet, Institutionen för konst, kommunikation och lärande, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-68015.

Abstract:
Procedurally implemented synthetic audio could offer greater interactive potential for audio in games than the currently popular sample based approach does. At the same time, synthetic audio can reduce storage requirements that using sample based audio results in. This study examines these potentials, and looks at one game interaction in depth to gain knowledge about whether players prefer and choose objects with interactive sounds generated through procedurally implemented modal synthesis, over objects with traditionally implemented sample based sound. An in-game environment listening test was created where 20 subjects were asked to throw a ball, 35 times, at a wall to destroy wall tiles and reveal a message. For each throw they could select one of two balls; one ball had a modal synthesis sound that varied in pitch with how hard the ball was thrown, the other had a traditionally implemented sample based sound that did not correspond with how hard it was thrown but one of four samples was called at random. The subjects were then asked questions to evaluate how realistic they perceived the two versions to be, which they preferred, and how they perceived the sounds corresponding to interaction. The results show that the modal synthesis version is preferred and perceived as being more realistic than the sample based version, but whether this was a deciding factor in subjects' choices could not be determined.
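Modal synthesis, as used for the interactive ball sound, models an impact as a bank of exponentially decaying sinusoids whose excitation (and, here, tuning) scales with how hard the object is struck. The sketch below is a generic illustration with invented modal data and an assumed velocity-to-pitch mapping; it is not the implementation evaluated in the study.

```python
import numpy as np

SR = 48_000  # sample rate (Hz), assumed

# A made-up modal description of a struck object: (frequency Hz, decay rate 1/s, amplitude).
MODES = [(320.0, 6.0, 1.0), (715.0, 9.0, 0.6), (1230.0, 14.0, 0.35), (2150.0, 20.0, 0.2)]

def impact(velocity: float, duration: float = 1.0) -> np.ndarray:
    """Render one impact as a sum of damped sinusoids.

    `velocity` (0..1) scales the overall excitation and slightly raises the
    tuning of the hit, mimicking the harder-throw/higher-pitch interaction.
    """
    t = np.arange(int(duration * SR)) / SR
    out = np.zeros_like(t)
    for freq, decay, amp in MODES:
        f = freq * (1.0 + 0.05 * velocity)          # harder hits sound slightly higher
        out += amp * np.exp(-decay * t) * np.sin(2 * np.pi * f * t)
    return velocity * out / len(MODES)

# A soft and a hard throw excite the same modes at different levels and tunings.
soft, hard = impact(0.2), impact(0.9)
print(float(np.abs(soft).max()), float(np.abs(hard).max()))
```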
32

Miranda, David J. „Music Blocks: Design and Preliminary Evaluation of Interactive Tangible Block Games with Audio and Visual Feedback for Cognitive Assessment and Training“. Case Western Reserve University School of Graduate Studies / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=case1516970991068766.

33

Pecino, Rodriguez Jose Ignacio. „Portfolio of original compositions : dynamic audio composition via space and motion in virtual and augmented environments“. Thesis, University of Manchester, 2015. https://www.research.manchester.ac.uk/portal/en/theses/portfolio-of-original-compositions-dynamic-audio-composition-via-space-and-motion-in-virtual-and-augmented-environments(637e9f5b-7d42-4214-92c4-70bac912cec2).html.

Abstract:
Electroacoustic music is often regarded as not being sufficiently accessible to the general public because of its sound-based abstract quality and the complexity of its language. Live electronic music introduces the figure of the performer as a gestural bodily agent that re-enables our multimodal perception of sound and seems to alleviate the accessibility dilemma. However, live electronic music generally lacks the level of detail found in studio-based fixed media works, and it can hardly be transferred outside the concert hall situation (e.g. as a video recording) without losing most of its fresh, dynamic and unpredictable nature. Recent developments in 3D simulation environments and game audio technologies suggest that alternative approaches to music composition and distribution are possible, presenting an opportunity to address some of these issues. In particular, this Portfolio of Compositions proposes the use of real and virtual space as a new medium for the creation and organisation of sound events via computer-simulated audio-sources. In such a context, the role of the performer is sometimes assumed by the listener itself, through the operation of an interactive-adaptive system, or it is otherwise replaced by a set of automated but flexible procedures. Although all of these works are sonic centric in nature, they often present a visual component that reinforces the multimodal perception of meaningful musical structures, either as real space locations for sonic navigation (locative audio), or live visualisations of physically-informed gestural agents in 3D virtual environments. Consequently, this thesis draws on general game-audio concepts and terminology, such as procedural sound, non-linearity, and generative music; but it also embraces game development tools (game engines) as a new methodological and technological approach to electroacoustic music composition. In such context, space and the real-time generation, control, and manipulation of assets combine to play an important role in broadening the routes of musical expression and the accessibility of the musical language. The portfolio consists of six original compositions. Three of these works–Swirls, Alice - Elegy to the Memory of an Unfortunate Lady, and Alcazabilla–are interactive in nature and they required the creation of custom software solutions (e.g. SonicMaps) in order to deal with open-form musical structures. The last three pieces–Singularity, Apollonian Gasket, and Boids–are based on fractal or emergent behaviour models and algorithms, and they propose a non-interactive linear organisation of sound materials via real-time manipulation of non-conventional 3D virtual instruments. These original instrumental models exhibit strong spatial and kinematic qualities with an abstract and minimal visual representation, resulting in an extremely efficient way to build spatialisation patterns, texture, and musical gesture, while preserving the sonic-centric essence of the pieces.
34

Williams, Vanyelle Coughran. „Development of a Physical Science Curriculum for Interactive Videodisc Delivery: A Case Study“. Thesis, North Texas State University, 1986. https://digital.library.unt.edu/ark:/67531/metadc332133/.

Abstract:
Using a case study approach, this investigation focused on the deliberations and decision-making processes involved in the development of a physical science curriculum to be delivered by interactive videodiscs. The mediating factors that influenced the developmental processes included the participants and their perceptions, their decisions and factors influencing their decisions. The Curriculum and Instruction Advisory Committee of the Texas Learning Technology Group was selected as the subject of this study which used qualitative data collection methods. Data collection included participant observation of curriculum meetings followed by structured interviews of the participants. Document analyses were triangulated with the observations and interviews to ascertain influences on decision-making processes. Developmental processes indicated the emergence of staff and committee procedures. Procedures were influenced by school district and personal philosophies, teacher and student needs, and constraining factors such as state mandates. Other influencing factors included research, tradition, and politics. Core curriculum was to be delivered by interactive videodiscs and include remediation and enrichment loops along with laboratory simulations. Participants stressed that students perform traditional laboratory experiments in addition to simulations. This curriculum also addressed the possibility of the course being taught by teachers not certified in physical science.
35

Dorina, Dibra. „Real-time interactive visualization aiding pronunciation of English as a second language“. Thesis, Linnéuniversitetet, Institutionen för medieteknik (ME), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-40264.

Abstract:
Computer assisted language learning (CALL) comprises a wide range of information technologies that aim to broaden the context of teaching by getting advantages of IT. For example, a few efforts have been put on including a combination of voice and its visual representation for language learning, and some studies are reporting positive outcomes. However, more research is needed in order to assess the impact caused by specific visualization styles such as: highlighting syllables and/or wave of sound. In order to explore this issue, we focused at measuring the potential impact that two distinct visualization styles and its combination can have on teaching children the pronunciation of English as a second language. We built a prototype which was designed to assist students while learning pronunciation of syllables. This system was employing two different real-time interactive visualization styles. One of these visualization styles utilizes audio capturing and processing, using a recent technology development: Web Audio API. We evaluated the effect of our prototype with an experiment with children aged from 9 to 11 years old. We followed an experimental approach with a control group and three experimental groups. We tested our hypothesis that states that the use of a combined visualization style can have greater impact at learning pronunciation in comparison with a traditional learning approach. Initial descriptive analyses were suggesting promising results for the group that used the combined visualization prototype. However, additional statistical analyses were carried out in order to measure the effect of the prototype as accurately as possible within the constraints of our study. Further analyses provided evidence that our combined visualizations prototype has positively affected the learning of pronunciation. Nonetheless, the difference was not big compared to the system that employed only wave of sound visualization. Ability to perceive visual information differs among individuals. Therefore, further research with a different sample division is needed to determine whether it is the combination of visualizations that produces the effect, or the wave in itself. Splitting groups based on this characteristic and performing the testing will be considered for future research. Eventually, we can be confident to continue exploring further the possibility of integrating our proposed combination of two visualization styles in teaching practices of second language learning, due to the positive outcomes that our current research outlined. In addition, from a technological perspective, our work is at the forefront of exploring the use of tools such as Web Audio API for CALL.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
36

Andersson, Olliver. „Exploring new interaction possibilities for video game music scores using sample-based granular synthesis“. Thesis, Luleå tekniska universitet, Medier, ljudteknik och teater, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-79572.

Der volle Inhalt der Quelle
Annotation:
For a long time, the function of the musical score has been to support activity in video games, largely by reinforcing the drama and excitement. Rather than leave the score in the background, this project explores the interaction possibilities of an adaptive video game score using real-time modulation of granular synthesis. The study evaluates a vertically re-orchestrated musical score in which elements of the score are played back with granular synthesis. A game level was created where parts of the musical score utilized one granular synthesis stem, the parameters of which were controlled by the player. A user experience study was conducted to evaluate the granular synthesis interaction. The results show a wide array of user responses, opinions, impressions and recommendations about how the granular synthesis interaction was musically experienced. Some results show that the granular synthesis stem is regarded as an interactive feature with a direct relationship to the background music; other results show that the interaction went unnoticed. In most cases, the granular synthesis score was experienced as comparable to a more conventional game score, and so granular synthesis can be seen as a new interactive tool for the sound designer. The study shows that there is more to be explored regarding musical interactions within games.

To contact the author or to request video clips, audio or other resources:

Mail: olliver.andersson@gmail.com
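The study above maps a player-controlled game parameter onto a granular synthesis stem inside a game audio engine, but the abstract gives no implementation details. Purely as a sketch of the general technique, and under the assumption that a single normalized control value drives grain density and read position, the following numpy example granulates a source buffer; the constants and mappings are invented for illustration and do not reflect the thesis's actual patch.

```python
import numpy as np

def hann(n: int) -> np.ndarray:
    return 0.5 - 0.5 * np.cos(2 * np.pi * np.arange(n) / n)

def granular_stem(source: np.ndarray, sr: int, control: float,
                  duration_s: float = 2.0, grain_ms: float = 80.0) -> np.ndarray:
    """Render `duration_s` of granulated audio from `source`.

    `control` (0..1) stands in for a player-driven game parameter: it widens the
    region of the source that grains are drawn from and raises grain density.
    """
    rng = np.random.default_rng(0)
    out = np.zeros(int(duration_s * sr))
    grain_len = int(sr * grain_ms / 1000)
    window = hann(grain_len)
    grains_per_second = 10 + 40 * control              # denser texture with more control input
    n_grains = int(duration_s * grains_per_second)
    spread = int(control * (len(source) - grain_len))  # how far grains wander from the start
    for _ in range(n_grains):
        src_pos = rng.integers(0, max(1, spread + 1))
        out_pos = rng.integers(0, len(out) - grain_len)
        out[out_pos:out_pos + grain_len] += source[src_pos:src_pos + grain_len] * window
    peak = np.max(np.abs(out))
    return out / peak if peak > 0 else out

if __name__ == "__main__":
    sr = 44_100
    t = np.linspace(0, 4.0, 4 * sr, endpoint=False)
    pad = np.sin(2 * np.pi * 220 * t) * np.exp(-t)     # stand-in for a musical stem
    quiet = granular_stem(pad, sr, control=0.1)
    busy = granular_stem(pad, sr, control=0.9)
    print(quiet.shape, busy.shape)
```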

APA, Harvard, Vancouver, ISO und andere Zitierweisen
37

Johansson, David, und Nils Rydh. „Interaktivitet inom ambisonics : Samband mellan rörelser och interaktivt ljud“. Thesis, Blekinge Tekniska Högskola, Institutionen för teknik och estetik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-21750.

Der volle Inhalt der Quelle
Annotation:
Music and motion have shared an intimate connection through dance since ages past; however, this bond has for the most part been one-dimensional, in the sense that one or more people react to music or sound of some kind. This connection is something we have found intriguing to explore through the aspects of interactivity, spatiality and sensory experience. Using ambisonics, a technique for panning sound in three dimensions, and a Kinect camera, we investigated what happens when this relationship is reversed and the sound reacts to a person's movement instead. There is, however, more to explore here than just the bond between dance and music. One of the most interesting aspects for us has been how an audio installation can be developed in which the sounds played back correspond to the movements that generate them. With sensory ethnography as our exploratory perspective, we conducted a number of workshops together with people with a background in dance and performance, investigating the role that kinaesthesia and a person's sense of agency play in this meeting between motion and sound. Based on a diffractive analysis, we found that agency, prior knowledge and precision play a vital role in how strongly a sense of control is experienced in an installation such as ours, but also that a more spatial and playful exploration can emerge when the criteria above are absent.
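The installation described above pans sound in three dimensions with ambisonics, driven by body tracking from a Kinect camera. As a minimal sketch of the signal math involved (not the authors' setup, and ignoring normalization-convention details such as SN3D versus FuMa), the example below encodes a mono signal into first-order B-format from a tracked position, with the listener assumed to be at the origin.

```python
import numpy as np

def encode_first_order(signal: np.ndarray, azimuth: float, elevation: float) -> np.ndarray:
    """Encode a mono signal into first-order ambisonics (B-format: W, X, Y, Z).

    Angles are in radians; azimuth 0 is straight ahead, positive to the left.
    Returns an array of shape (4, n_samples).
    """
    w = signal * (1.0 / np.sqrt(2.0))                  # omnidirectional component
    x = signal * np.cos(azimuth) * np.cos(elevation)
    y = signal * np.sin(azimuth) * np.cos(elevation)
    z = signal * np.sin(elevation)
    return np.stack([w, x, y, z])

def position_to_angles(x: float, y: float, z: float) -> tuple[float, float]:
    """Convert a tracked position (e.g. a joint reported by a depth camera,
    with the listener at the origin) into azimuth and elevation."""
    azimuth = np.arctan2(y, x)
    elevation = np.arctan2(z, np.hypot(x, y))
    return azimuth, elevation

if __name__ == "__main__":
    sr = 48_000
    tone = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
    az, el = position_to_angles(1.0, 0.5, 0.2)   # hand position in metres, listener at origin
    bformat = encode_first_order(tone, az, el)
    print(bformat.shape)  # (4, 48000)
```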
APA, Harvard, Vancouver, ISO und andere Zitierweisen
38

Sommer, Nathan. „A Machine Learning Approach to Controlling Musical Synthesizer Parameters in Real-Time Live Performance“. University of Cincinnati / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1592168963826025.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
39

Li, Zheng, und Hua Wang. „A Mobile Game for Encouraging Active Listening among Deaf and Hard of Hearing People : Comparing the usage between mobile and desktop game“. Thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-10500.

Der volle Inhalt der Quelle
Annotation:
Context. Daily active listening is important for deaf and hard of hearing (DHH) people's hearing rehabilitation, but the related hearing activities are often insufficient for various reasons. Although some traditional desktop computer-assisted tools have been created to encourage active listening, their usage rate is not high. Nowadays, mobile smart devices are widely used and easily accessible around the world, and game applications on these devices are good tools for training-related activities. However, there are few games on the market designed for the DHH, especially ones aimed at engaging them in active listening. A game on a mobile platform is therefore a promising way to increase their everyday active listening. Objective. In this study, an audio-based mobile game application called the Music Puzzle was created for the Android operating system to encourage active listening among the DHH. With the aim of making the game usable and engaging for real use, we evaluated the game and conducted experiments on its usage, to see whether it would be used more than a traditional hearing game on a desktop platform and bring a greater amount of active listening for the DHH. Methods. Overall, the study used a literature review, game development, preliminary and evaluation experiments, and a tracking study. In the development phase, interaction design theories and techniques were applied to support the design work, and Android and Pure Data were employed for the software implementation. In the evaluation phase, the System Usability Scale (SUS) and the Intrinsic Motivation Inventory (IMI) questionnaire were used to test the game's usability and engagement, respectively. A four-week tracking study was then conducted to acquire usage data for the mobile game among the target group. Afterwards, the data were collected and compared with the usage data of the desktop game using a paired-sample t-test. Results. In the preliminary experiments, most participants reported enjoying the Music Puzzle and being willing to use it. The subsequent experiment gave good results for the game's usability and engagement. The final tracking study shows that most participants activated and played the Music Puzzle during the given time period. Compared with the desktop game, the DHH spent a significantly greater amount of time playing the mobile game. Conclusion. The study indicates that the Music Puzzle has good usability and is engaging. Compared with the desktop game, the Music Puzzle mobile game is a more effective tool for encouraging and increasing the amount of active listening time among DHH people in their everyday life.
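The evaluation above compares mobile and desktop usage with a paired-sample t-test. The snippet below only shows what that comparison looks like in code, using scipy; the listening-time values are invented placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical minutes of active listening per participant over the tracking
# period; the real study data are not reproduced here.
mobile_minutes = np.array([34, 52, 41, 60, 28, 45, 39, 55])
desktop_minutes = np.array([20, 31, 25, 42, 18, 30, 27, 35])

# Paired-sample (related-samples) t-test: same participants, two conditions.
t_stat, p_value = stats.ttest_rel(mobile_minutes, desktop_minutes)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```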
APA, Harvard, Vancouver, ISO und andere Zitierweisen
40

Watkins, Mark N. „Technology and the history-social science framework“. CSUSB ScholarWorks, 1992. https://scholarworks.lib.csusb.edu/etd-project/1055.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
41

Young, David M. „Adaptive Game Music: The Evolution and Future of Dynamic Music Systems in Video Games“. Ohio University Honors Tutorial College / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=ouhonors1340112710.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
42

Goller, Whitney. „DollHouse“. Digital Commons @ East Tennessee State University, 2016. https://dc.etsu.edu/etd/3065.

Der volle Inhalt der Quelle
Annotation:
The artist discusses the work in DollHouse, her Master of Fine Arts exhibition on display at Tipton Gallery, Johnson City, Tennessee from January 25 to February 5, 2016. The exhibition was an installation consisting of five sets, each containing furniture - both 2D and 3D - and a mask with instructions relating to a room found within a dollhouse. The sets and supporting thesis explore the ideas of social norms, feminism, and identity, and how submission to ideologies can create emptiness, while engagement can prompt social change. Topics include the process and evolution of the work and the artists who influenced it, ideas of identity and society, and the impacts of social norms on young women’s lives. Included is a catalogue of the exhibition.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
43

Wooller, René William. „Techniques for automated and interactive note sequence morphing of mainstream electronic music“. Queensland University of Technology, 2007. http://eprints.qut.edu.au/20232/.

Der volle Inhalt der Quelle
Annotation:
Note sequence morphing is the combination of two note sequences to create a ‘hybrid transition’, or ‘morph’. The morph is a ‘hybrid’ in the sense that it exhibits properties of both sequences. The morph is also a ‘transition’, in that it can segue between them. An automated and interactive approach allows manipulation in realtime by users who may control the relative influence of source or target and the transition length. The techniques that were developed through this research were designed particularly for popular genres of predominantly instrumental electronic music which I will refer to collectively as Mainstream Electronic Music (MEM). The research has potential for application within contexts such as computer games, multimedia, live electronic music, interactive installations and accessible music or “music therapy”. Musical themes in computer games and multimedia can morph adaptively in response to parameters in realtime. Morphing can be used by electronic music producers as an alternative to mixing in live performance. Interactive installations and accessible music devices can utilise morphing algorithms to enable expressive control over the music through simple interface components. I have developed a software application called LEMorpheus which consists of software infrastructure for morphing and three alternative note sequence morphing algorithms: parametric morphing, probabilistic morphing and evolutionary morphing. Parametric morphing involves converting the source and target into continuous envelopes, interpolation, and converting the interpolated envelopes back into note sequences. Probabilistic morphing involves converting the source and target into probability matrices and seeding them on recent output to generate the next note. Evolutionary morphing involves iteratively mutating the source into multiple possible candidates and selecting those which are judged as more similar to the target, until the target is reached. I formally evaluated the probabilistic morphing algorithm by extracting qualitative feedback from participants in a live electronic music situation, benchmarked against a live, professional DJ. The probabilistic algorithm was competitive, being favoured particularly for long morphs. The evolutionary morphing algorithm was formally evaluated using an online questionnaire, benchmarked against a human composer/producer. For particular samples, the morphing algorithm was competitive and occasionally seen as innovative; however, the morphs created by the human composer typically received more positive feedback, due to coherent, large scale structural changes, as opposed to the forced continuity of the morphing software.
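Of the three LEMorpheus algorithms summarized above, the probabilistic one is the most compact to sketch: build first-order transition tables from the source and target sequences, blend them according to a morph index, and sample the next note from the blend seeded on recent output. The Python example below illustrates that general idea only; the note data, weighting scheme and function names are assumptions, not the thesis's implementation.

```python
import random
from collections import defaultdict

def transition_counts(notes: list[int]) -> dict[int, dict[int, float]]:
    """First-order transition counts for a note sequence (MIDI pitches)."""
    counts: dict[int, dict[int, float]] = defaultdict(lambda: defaultdict(float))
    for a, b in zip(notes, notes[1:]):
        counts[a][b] += 1.0
    return counts

def sample_next(source: dict, target: dict, current: int, morph_index: float,
                rng: random.Random) -> int:
    """Pick the next note from a blend of source and target transition tables.

    `morph_index` runs from 0.0 (all source) to 1.0 (all target).
    """
    blended: dict[int, float] = defaultdict(float)
    for table, weight in ((source, 1.0 - morph_index), (target, morph_index)):
        row = table.get(current, {})
        total = sum(row.values()) or 1.0
        for note, count in row.items():
            blended[note] += weight * count / total
    if not blended:
        return current  # no information for this note: repeat it
    notes, weights = zip(*blended.items())
    return rng.choices(notes, weights=weights, k=1)[0]

def morph(source_seq: list[int], target_seq: list[int], length: int, seed: int = 0) -> list[int]:
    rng = random.Random(seed)
    src, tgt = transition_counts(source_seq), transition_counts(target_seq)
    out = [source_seq[0]]
    for i in range(1, length):
        morph_index = i / (length - 1)
        out.append(sample_next(src, tgt, out[-1], morph_index, rng))
    return out

if __name__ == "__main__":
    source = [60, 62, 64, 62, 60, 62, 64, 65, 64, 62]   # C-major fragment
    target = [57, 60, 64, 67, 64, 60, 57, 60, 64, 67]   # A-minor arpeggio
    print(morph(source, target, length=16))
```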
APA, Harvard, Vancouver, ISO und andere Zitierweisen
44

Gertonsson, Simon, und Anton Hansson. „Det spatiala ljudets vikt“. Thesis, Blekinge Tekniska Högskola, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-16987.

Der volle Inhalt der Quelle
Annotation:
This bachelor thesis concerns the importance of spatial sound in a digital world where the visual most often takes the forefront. To do so, the text examines what a sonic narrative is, how humans listen, and interactive sound storytelling. Related concepts such as sound in different contexts, for example soundscapes and sound mapping, are presented with studies by R. Murray Schafer (1994) and Mark Nazemi and Diane Gromala (2012). Combining new and old technology, this study works with humans' natural response to sound in the absence of visual reference points. This is investigated using spatial audio and a new localization technology known as Pozyx, integrated with a virtual environment created in the Unity game engine, where the audio engine renders spatial sound in real time. The result was that the participants in this study created their own personal narratives, with us as designers creating the conditions for this through interactive technology and selective listening.
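The installation above feeds positions from an indoor localization system into a game engine that renders spatial audio in real time. As a simplified stand-in for that mapping (Unity and Pozyx specifics are not reproduced here), the sketch below converts tracked 2-D listener and source coordinates into a distance-based gain and an equal-power stereo pan; the coordinate conventions and constants are illustrative assumptions.

```python
import math

def spatial_params(listener_xy: tuple[float, float], source_xy: tuple[float, float],
                   listener_heading: float) -> tuple[float, float, float]:
    """Gain and equal-power stereo pan for one source, given 2-D positions in metres.

    `listener_heading` is the facing direction in radians. Returns (gain, left, right).
    """
    dx = source_xy[0] - listener_xy[0]
    dy = source_xy[1] - listener_xy[1]
    distance = max(0.5, math.hypot(dx, dy))          # clamp to avoid infinite gain up close
    gain = 1.0 / distance                            # simple inverse-distance rolloff
    bearing = math.atan2(dy, dx) - listener_heading  # angle of source relative to facing
    pan = max(-1.0, min(1.0, math.sin(bearing)))     # -1 = hard left, +1 = hard right (convention assumed)
    left = gain * math.cos((pan + 1.0) * math.pi / 4.0)
    right = gain * math.sin((pan + 1.0) * math.pi / 4.0)
    return gain, left, right

if __name__ == "__main__":
    # Positions as an indoor positioning tag might report them (metres).
    print(spatial_params(listener_xy=(1.0, 1.0), source_xy=(3.0, 2.0), listener_heading=0.0))
```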
APA, Harvard, Vancouver, ISO und andere Zitierweisen
45

Rao, Ram Raghavendra. „Audio-visual interaction in multimedia“. Diss., Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/13349.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
46

Korn, Dennis Raymond. „The development of a student-initiated, teacher-guided hypermedia program for automotive computer control systems“. CSUSB ScholarWorks, 1997. https://scholarworks.lib.csusb.edu/etd-project/1469.

Der volle Inhalt der Quelle
Annotation:
To provide a proper amount of quality training for tomorrow's automotive technicians, it will be necessary to provide more time for training or to develop a more efficient means of training. This project uses a HyperCard-based program to provide a starting point in increasing efficiency in instructional delivery and to continue to provide the skills necessary for a student to become a competent automotive technician.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
47

Radeau, Monique. „Interaction audio-visuelle et modularité = Auditory-visual interaction and modularity“. Doctoral thesis, Universite Libre de Bruxelles, 1991. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/212982.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
48

Windle, Amanda. „Territorial violence and design, 1950-2010 : a human-computer study of personal space and chatbot interaction“. Thesis, University of the Arts London, 2011. http://ualresearchonline.arts.ac.uk/2785/.

Der volle Inhalt der Quelle
Annotation:
Personal space is a human's imaginary system of precaution and an important concept for exploring territoriality, not only between humans but also between humans and technology, because machinic agencies transfer, relocate, enact and re-enact territorially. Literatures of territoriality, violence and affect are uniquely brought together, with chatbots as the research object, to argue that their ongoing development as artificial agents, and the ambiguity of violence they can engender, have broader ramifications for a socio-technical research programme. These literatures help to understand the interrelation of virtual and actual spatiality relevant to research involving chatrooms and internet forums, automated systems and processes, as well as human and machine agencies, because all of these spaces, methods and agencies involve the personal sphere. The thesis is an ethical tale of cruel techno-science performed through conceptualisations from the creative arts, constituting a PhD by practice. It chronicles four chatbots, taking into account interventions made in fine art, design, fiction and film that are omitted from the history of agent technology. The thesis re-interprets Edward Hall's work on proxemics, personal space and territoriality, using techniques of the bricoleur and rudiments (an undeveloped and speculative method of practice), to understand chatbot techniques such as the pick-up, their entrapment logics, their repetitions of hateful speech, their nonsense talk (including how they disorientate spatial metaphors), as well as how developers switch their learning functionality on and off. Semi-structured interviews and online forum postings with chatbot developers were used to expand and reflect on the rudimentary method. To urge that this project is timely is itself a statement of anxiety. Chatbots can manipulate, exceed, and exhaust a human understanding of both space and time. Violence between humans and machines in online and offline spaces is explored as an interweaving of agency and spatiality. A series of rudiments were used to probe empirical experiments such as the Prisoner's Dilemma (Tucker, 1950); the spatial metaphors of confinement, as a parable of entrapment, are revealed within that logic and that of chatbots. The 'Obedience to Authority' experiments (Milgram, 1961) were used to reflect on the roles played by machines, which are then reflected in a discussion of chatbots and the experiments done in and around them. The agency of the experimenter was revealed in the machine, as evidenced with chatbots, which has ethical ramifications. The argument of personal space is widened to include the ways machinic territoriality and its violence impact on our ways of living together, both in the private spheres of our computers and homes and in state-regulated conditions (Directive-3, 2003). The misanthropic aspects of chatbot design are reflected through the methodology of designing out of fear. I argue that personal spaces create misanthropic design imperatives, methods and ways of living. Furthermore, the technological agencies of personal spaces have a confining impact on the transient spaces of non-places in a wider discussion of the lift, chatroom and car. The violent origins of the chatbot are linked to various imaginings of impending disaster through visualisations, supported by case studies in fiction that look at how anxiety transformed into terror when considering the affects of violence.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
49

Stensgaard, Pontus, Anders Alléus und Jesper Palm. „Adaptive Mood Audio : Rethinking Audio for Games“. Thesis, Blekinge Tekniska Högskola, Sektionen för planering och mediedesign, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2869.

Der volle Inhalt der Quelle
Annotation:
The focus of this thesis is to study how adaptive audio can be used in digital games and how it can be used to portray different moods to the listener, how games can reflect different feelings, and how quickly those feelings can change. It is important that a game's audio environment can adapt to the ever-changing narrative of the game. The purpose is to gain insight into how immersion in digital games can be improved with the use of adaptive audio: to study whether there is a simple way to implement a system where audio can be mixed and adjusted in real time to mirror the events in the game and convey genuine feelings. To study this, we create a parameter-based system in the sound engine of the game we produce during the production phase, with parameters based on a number of different factors. Keywords: Adaptive Audio, Parameter System, Mood Music, Digital Games, Game Production.
A project that examines how immersion in digital games can be improved by using adaptive sound and music to mirror the narrative.
Pontus Stensgaard: 0769123182 pontus.stensgaard@gmail.com
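The project above proposes a parameter-based system in the game's sound engine that remixes audio in real time to reflect the current mood. The numpy sketch below shows one minimal way such a system could look: per-stem gains stored per mood preset and crossfaded by a single game-state parameter. Stem names, presets and gain values are invented for illustration and are not the authors' system.

```python
import numpy as np

def mood_mix(stems: dict[str, np.ndarray], moods: dict[str, dict[str, float]],
             mood_a: str, mood_b: str, blend: float) -> np.ndarray:
    """Mix stems by interpolating between two mood presets.

    `moods` maps a mood name to per-stem gains; `blend` runs from 0 (mood_a)
    to 1 (mood_b), the kind of single game-state parameter the sound engine
    could expose to gameplay code.
    """
    length = min(len(s) for s in stems.values())
    mix = np.zeros(length)
    for name, audio in stems.items():
        gain_a = moods[mood_a].get(name, 0.0)
        gain_b = moods[mood_b].get(name, 0.0)
        gain = (1.0 - blend) * gain_a + blend * gain_b
        mix += gain * audio[:length]
    return mix

if __name__ == "__main__":
    sr = 44_100
    t = np.arange(sr) / sr
    stems = {
        "pad":   np.sin(2 * np.pi * 110 * t),
        "drums": np.sign(np.sin(2 * np.pi * 2 * t)) * np.sin(2 * np.pi * 55 * t),
    }
    moods = {
        "calm":    {"pad": 0.8, "drums": 0.1},
        "tension": {"pad": 0.3, "drums": 0.9},
    }
    halfway = mood_mix(stems, moods, "calm", "tension", blend=0.5)
    print(halfway.shape)
```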
APA, Harvard, Vancouver, ISO und andere Zitierweisen
50

Hoggan, Eve Elizabeth. „Crossmodal audio and tactile interaction with mobile touchscreens“. Thesis, University of Glasgow, 2010. http://theses.gla.ac.uk/1863/.

Der volle Inhalt der Quelle
Annotation:
Touchscreen mobile devices often use cut-down versions of desktop user interfaces, placing high demands on the visual sense that may prove awkward in mobile settings. The research in this thesis addresses the problems encountered by situationally impaired mobile users by using crossmodal interaction to exploit the abundant similarities between the audio and tactile modalities. By making information available to both senses, users can receive the information in the most suitable way, without having to abandon their primary task to look at the device. This thesis begins with a literature review of related work, followed by a definition of crossmodal icons: two icons may be considered crossmodal if and only if they provide a common representation of data which is accessible interchangeably via different modalities. Two experiments investigated possible parameters for use in crossmodal icons, with results showing that rhythm, texture and spatial location are effective. A third experiment focused on learning multi-dimensional crossmodal icons and the extent to which this learning transfers between modalities. The results showed identification rates of 92% for three-dimensional audio crossmodal icons when trained on the tactile equivalents, and identification rates of 89% for tactile crossmodal icons when trained on the audio equivalents. Crossmodal icons were then incorporated into a mobile touchscreen QWERTY keyboard. Experiments showed that keyboards with audio or tactile feedback produce fewer errors and greater text-entry speeds than standard touchscreen keyboards. The next study examined how environmental variables affect user performance with the same keyboard: the data showed that each modality performs differently under varying levels of background noise or vibration, and the exact levels at which these performance decreases occur were established. The final study involved a longitudinal evaluation of a touchscreen application, CrossTrainer, focusing on longitudinal effects on performance with audio and tactile feedback, the impact of context on performance, and personal modality preference. The results show that crossmodal audio and tactile icons are a valid method of presenting information to situationally impaired mobile touchscreen users, with recognition rates of 100% over time. The thesis concludes with a set of guidelines on the design and application of crossmodal audio and tactile feedback to enable application and interface designers to employ such feedback in all systems.
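The central construct above, the crossmodal icon, is a single abstract message that can be rendered interchangeably as audio or vibration using shared parameters (rhythm, texture, spatial location). The Python sketch below illustrates that idea as a small data structure with two renderers; the concrete waveform and actuator mappings are invented placeholders rather than the stimuli used in the thesis.

```python
from dataclasses import dataclass

@dataclass
class CrossmodalIcon:
    """One abstract message rendered interchangeably to audio or vibration.

    The three parameters mirror those found effective in the thesis experiments:
    rhythm, texture and spatial location. The concrete renderings below are
    invented placeholders, not the thesis's actual stimuli.
    """
    rhythm: tuple[int, ...]      # pulse durations in milliseconds
    texture: str                 # e.g. "smooth" or "rough"
    location: str                # e.g. "left", "centre", "right"

    def to_audio_events(self) -> list[dict]:
        timbre = {"smooth": "sine", "rough": "sawtooth"}[self.texture]
        return [{"duration_ms": d, "waveform": timbre, "pan": self.location}
                for d in self.rhythm]

    def to_tactile_events(self) -> list[dict]:
        drive = {"smooth": "constant", "rough": "pulsed"}[self.texture]
        return [{"duration_ms": d, "drive": drive, "actuator": self.location}
                for d in self.rhythm]

if __name__ == "__main__":
    new_message = CrossmodalIcon(rhythm=(100, 100, 300), texture="rough", location="left")
    print(new_message.to_audio_events())
    print(new_message.to_tactile_events())
```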
APA, Harvard, Vancouver, ISO und andere Zitierweisen