Academic literature on the topic 'Virtual auditory space'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Virtual auditory space.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Virtual auditory space"

1

Adams, N. H., and G. H. Wakefield. "State-Space Synthesis of Virtual Auditory Space." IEEE Transactions on Audio, Speech, and Language Processing 16, no. 5 (July 2008): 881–90. http://dx.doi.org/10.1109/tasl.2008.924151.

2

Findlay-Walsh, Iain. "Virtual auditory reality." SoundEffects - An Interdisciplinary Journal of Sound and Sound Experience 10, no. 1 (January 15, 2021): 71–90. http://dx.doi.org/10.7146/se.v10i1.124199.

Abstract:
This article examines popular music listening in light of recent research in auditory perception and spatial experience, record production, and virtual reality, while considering parallel developments in digital pop music production practice. The discussion begins by considering theories of listening and embodiment by Brandon LaBelle, Eric Clarke, Salomè Voegelin and Linda Salter, examining relations between listening subjects and aural environments, conceptualising listening as a process of environmental ‘inhabiting’, and considering auditory experience as the real-time construction of ‘reality’. These ideas are discussed in relation to recent research on popular music production and perception, with a focus on matters of spatial sound design, the virtual ‘staging’ of music performances and performing bodies, digital editing methods and effects, and on shifting relations between musical spatiality, singer-persona, audio technologies, and listener. Writings on music and virtual space by Martin Knakkergaard, Allan Moore, Ragnhild Brøvig-Hanssen & Anne Danielsen, Denis Smalley, Dale Chapman, Kodwo Eshun and Holger Schulze are discussed, before being related to conceptions of VR sound and user experience by Jaron Lanier, Rolf Nordahl & Niels Nilsson, Mel Slater, Tom Garner and Frances Dyson. This critical framework informs three short aural analyses of digital pop tracks released during the last 10 years - Titanium (Guetta & Sia 2010), Ultralight Beam (West 2016) and 2099 (Charli XCX 2019) - presented in the form of autoethnographic ‘listening notes’. Through this discussion on personal popular music listening and virtual spatiality, a theory of pop listening as embodied inhabiting of simulated narrative space, or virtual story-world, with reference to ‘aural-dominant realities’ (Salter), ‘sonic possible worlds’ (Voegelin), and ‘sonic fictions’ (Eshun), is developed. 
By examining personal music listening in relation to VR user experience, this study proposes listening to pop music in the 21st century as a mode of immersive, embodied ‘storyliving’, or ‘storydoing’ (Allen & Tucker).
3

Kapralos, B., M. R. Jenkin, and E. Milios. "Virtual Audio Systems." Presence: Teleoperators and Virtual Environments 17, no. 6 (December 1, 2008): 527–49. http://dx.doi.org/10.1162/pres.17.6.527.

Abstract:
To be immersed in a virtual environment, the user must be presented with plausible sensory input including auditory cues. A virtual (three-dimensional) audio display aims to allow the user to perceive the position of a sound source at an arbitrary position in three-dimensional space despite the fact that the generated sound may be emanating from a fixed number of loudspeakers at fixed positions in space or a pair of headphones. The foundation of virtual audio rests on the development of technology to present auditory signals to the listener's ears so that these signals are perceptually equivalent to those the listener would receive in the environment being simulated. This paper reviews the human perceptual and technical literature relevant to the modeling and generation of accurate audio displays for virtual environments. Approaches to acoustical environment simulation are summarized and the advantages and disadvantages of the various approaches are presented.
4

Poon, P. W., and J. F. Brugge. "Virtual-space receptive fields of single auditory nerve fibers." Journal of Neurophysiology 70, no. 2 (August 1, 1993): 667–76. http://dx.doi.org/10.1152/jn.1993.70.2.667.

Abstract:
1. Sounds reaching the tympanic membranes are first modified by the acoustic properties of the torso, head, and external ear. For certain frequencies in the incident sound there results a complex, direction-dependent spatial distribution of sound pressure at the eardrum such that, within a sound field, localized areas of pressure maxima are flanked by areas of pressure minima. Listeners may use these spatial maxima and minima in localizing the source of a sound in space. The results presented describe how information about this spatial pressure pattern is transmitted from the cochlea to the central auditory system via single fibers of the auditory nerve. 2. Discharges of single fibers of the auditory nerve were studied in Nembutal-anesthetized cats [characteristic frequencies (CFs) ranged from 0.4 to 40 kHz]. Click stimuli were derived from sound-pressure waveforms that were generated by a loudspeaker placed at 1,800 locations around the cat's head and recorded at the tympanic membrane with miniature microphones. Recorded signals were converted to acoustic stimuli and delivered to the ear via a calibrated and sealed earphone. The full complement of signals is referred to as "virtual acoustic space," and the spatial distribution of discharges to this array of signals is referred to as a "virtual-space receptive field" (VSRF). 3. Fibers detect both pressure maxima and pressure minima in virtual acoustic space. Thus VSRFs take on complex shapes. 4. VSRFs of fibers of the same or similar CF having low spontaneous rates had the same overall pattern as those from high-spontaneous rate (HSR) fibers. For HSR fibers, the VSRF is obscured by the high background spike activity. 5. Comparison of the VSRF and isolevel contour maps of the stimulus derived at various frequencies revealed that auditory nerve fibers most accurately extract spectral information contained in the stimulus at a frequency close to or slightly higher than CF.
5

Zotkin, D. N., R. Duraiswami, and L. S. Davis. "Rendering Localized Spatial Audio in a Virtual Auditory Space." IEEE Transactions on Multimedia 6, no. 4 (August 2004): 553–64. http://dx.doi.org/10.1109/tmm.2004.827516.

6

Hartung, Klaus, Susanne J. Sterbing, Clifford H. Keller, and Terry T. Takahashi. "Applications of virtual auditory space in psychoacoustics and neurophysiology." Journal of the Acoustical Society of America 105, no. 2 (February 1999): 1164. http://dx.doi.org/10.1121/1.425528.

7

Takahashi, Terry T., Clifford H. Keller, David R. Euston, and Michael L. Spezio. "Analysis of auditory spatial receptive fields: An application of virtual auditory space technology." Journal of the Acoustical Society of America 111, no. 5 (2002): 2391. http://dx.doi.org/10.1121/1.4809152.

8

Carlile, Simon, and Daniel Wardman. "Masking produced by broadband noise presented in virtual auditory space." Journal of the Acoustical Society of America 100, no. 6 (December 1996): 3761–68. http://dx.doi.org/10.1121/1.417236.

9

Ishii, Masahiro, Masanori Nakata, and Makoto Sato. "Networked SPIDAR: A Networked Virtual Environment with Visual, Auditory, and Haptic Interactions." Presence: Teleoperators and Virtual Environments 3, no. 4 (January 1994): 351–59. http://dx.doi.org/10.1162/pres.1994.3.4.351.

Abstract:
This research aims at the realization of a networked virtual environment for the design of three-dimensional (3-D) objects. Based on an analysis of an ordinary collaborative design, we illustrate that a collaborative work space consists of a dialog space and an object space. In the dialog space, a participant interacts with partners, and in the object space with an object. The participants enter the dialog space and the object space in turn, appropriately. In addition, collaborative design of 3-D objects is carried out with multimodal interactions: visual, auditory, and haptic. A networked virtual environment must support these interactions without contradiction in either time or space. In this paper, we propose a networked virtual environment for a pair of participants to satisfy the conditions described above. To implement the networked system, we take into account the necessity of visual, auditory, and haptic interactions, the need for participants to switch between the dialog space and the object space quickly and appropriately, and human ergonomics on the functional space of hands and eyes. An experiment on hand-over task was done to investigate the effect of the networked haptic device with the proposed system. Object layout tasks, such as toy block layout, office furniture layout, city building layout, etc., can be performed by using this environment.
10

Venkateswaran Nisha, Kavassery, and Ajith Uppunda Kumar. "Virtual Auditory Space Training-Induced Changes of Auditory Spatial Processing in Listeners with Normal Hearing." Journal of International Advanced Otology 13, no. 1 (May 29, 2017): 118–27. http://dx.doi.org/10.5152/iao.2017.3477.


Dissertations / Theses on the topic "Virtual auditory space"

1

Kelly, Michael C. "Efficient representation of adaptable virtual auditory space." Thesis, University of York, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.274510.

2

Spezio, Michael L. "Using virtual reality to understand the brain : applications in virtual auditory space /." view abstract or download file of text, 2002. http://wwwlib.umi.com/cr/uoregon/fullcit?p3045096.

Abstract:
Thesis (Ph. D.)--University of Oregon, 2002.
Typescript. Includes vita and abstract. Includes bibliographical references (leaves 127-139). Also available for download via the World Wide Web; free to University of Oregon users. Address: http://wwwlib.umi.com/cr/uoregon/fullcit?p3045096.
3

Jin, Craig. "Spectral analysis and resolving spatial ambiguities in human sound localization." Thesis, The University of Sydney, 2001. http://hdl.handle.net/2123/1342.

Abstract:
This dissertation provides an overview of my research over the last five years into the spectral analysis involved in human sound localization. The work involved conducting psychophysical tests of human auditory localization performance and then applying analytical techniques to analyze and explain the data. It is a fundamental thesis of this work that human auditory localization response directions are primarily driven by the auditory localization cues associated with the acoustic filtering properties of the external auditory periphery, i.e., the head, torso, shoulder, neck, and external ears. This work can be considered as composed of three parts. In the first part of this work, I compared the auditory localization performance of a human subject and a time-delay neural network model under three sound conditions: broadband, high-pass, and low-pass. A “black-box” modeling paradigm was applied. The modeling results indicated that training the network to localize sounds of varying center-frequency and bandwidth could degrade localization performance results in a manner demonstrating some similarity to human auditory localization performance. As the data collected during the network modeling showed that humans demonstrate striking localization errors when tested using bandlimited sound stimuli, the second part of this work focused on human sound localization of bandpass filtered noise stimuli. Localization data was collected from 5 subjects and for 7 sound conditions: 300 Hz to 5 kHz, 300 Hz to 7 kHz, 300 Hz to 10 kHz, 300 Hz to 14 kHz, 3 to 8 kHz, 4 to 9 kHz, and 7 to 14 kHz. The localization results were analyzed using the method of cue similarity indices developed by Middlebrooks (1992). The data indicated that the energy level in relatively wide frequency bands could be driving the localization response directions, just as in Butler’s covert peak area model (see Butler and Musicant, 1993). 
The question was then raised as to whether the energy levels in the various frequency bands, as described above, are most likely analyzed by the human auditory localization system on a monaural or an interaural basis. In the third part of this work, an experiment was conducted using virtual auditory space sound stimuli in which the monaural spectral cues for auditory localization were disrupted, but the interaural spectral difference cue was preserved. The results from this work showed that the human auditory localization system relies primarily on a monaural analysis of spectral shape information for its discrimination of directions on the cone of confusion. The work described in the three parts lead to the suggestion that a spectral contrast model based on overlapping frequency bands of varying bandwidth and perhaps multiple frequency scales can provide a reasonable algorithm for explaining much of the current psychophysical and neurophysiological data related to human auditory localization.
4

Schönstein, David. "Individualisation of spectral cues for applications in virtual auditory space: study of inter-subject differences in Head-Related Transfer Functions using perceptual judgements from listening tests." Paris 6, 2012. http://www.theses.fr/2012PA066488.


Books on the topic "Virtual auditory space"

1

Carlile, Simon. Virtual Auditory Space: Generation and Applications. Berlin, Heidelberg: Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/978-3-662-22594-3.

2

Carlile, Simon, ed. Virtual auditory space: Generation and applications. Austin, TX: RG Landes, 1996.

3

Carlile, Simon. Virtual Auditory Space: Generation and Applications. Springer, 2013.

4

Carlile, Simon. Virtual Auditory Space: Generation and Applications. Springer London, Limited, 2013.

5

Carlile, Simon. Virtual Auditory Space: Generation and Applications (Neuroscience Intelligence Unit). Landes Bioscience, 1996.

6

Carlile, Simon. Virtual Auditory Space: Generation and Applications (Neuroscience Intelligence Unit). R G Landes Co, 1996.

7

Virtual Auditory Space: Generation and Applications (Neuroscience Intelligence Unit). Springer, 1996.


Book chapters on the topic "Virtual auditory space"

1

Shinn-Cunningham, Barbara, and Abhijit Kulkarni. "Recent Developments in Virtual Auditory Space." In Neuroscience Intelligence Unit, 185–243. Berlin, Heidelberg: Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/978-3-662-22594-3_6.

2

Pralong, Danièle, and Simon Carlile. "Generation and Validation of Virtual Auditory Space." In Neuroscience Intelligence Unit, 109–51. Berlin, Heidelberg: Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/978-3-662-22594-3_4.

3

Kobayashi, Yosuke, Kazuhiro Kondo, and Kiyoshi Nakagawa. "Intelligibility of HE-AAC Coded Japanese Words with Various Stereo Coding Modes in Virtual 3D Audio Space." In Auditory Display, 219–38. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-12439-6_12.

4

Calleri, Cristina, Louena Shtrepi, Alessandro Armando, and Arianna Astolfi. "Investigations on the Influence of Auditory Perception on Urban Space Design Through Virtual Acoustics." In Advances in Civil and Industrial Engineering, 344–67. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-3637-6.ch015.

Abstract:
The study investigates the influence of different façade materials on listeners' space wideness perception on the basis of auditory stimuli, aiming at improving the awareness of how different façade designs can influence the outdoor environment under multiple aspects. The investigation has been conducted through a listening test with a 4-level factorial design in which participants had to rank different sound stimuli with respect to the perceived wideness of the space in which they were produced. The stimuli were obtained through auralisation of an impulsive sound in virtual scenarios in which different scattering and absorption coefficients of the building façades and different source and receiver positions were tested. Results showed that the absorption coefficient of the façades and sound source position significantly affect the perceived wideness of spaces while scattering coefficient and receiver position do not. Moreover, no correlation was found between the above-mentioned factors, and music experience of participants proved not to be an influential factor as well.
5

Spöhrer, Markus. "Playing With Auditory Environments in Audio Games." In Research Anthology on Game Design, Development, Usage, and Social Impact, 644–61. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-6684-7589-8.ch032.

Abstract:
Audio games highlight audio as the major narrative, ludic, and interactive element in the process of gaming. These games enroll the players in the process of gaming and distribute agency by translating auditive cues into interactive “pings” and provide a potential for an auditory virtual space. Designed for either blind persons or as “learning software” for hard-of-hearing people, audio games dismiss graphical elements by using the auditory ludic elements and foreground auditory perception as a main condition for playing the game. Spöhrer demonstrates this by using the example of 3D Snake, which needs to be played with headphones or surround speakers. The game uses verbal instructions and different sound effects to produce an auditory image of a snake that can be moved with the computer keyboard. In this auditory environment, the relation of both human and non-human elements (e.g., controller devices, the arrangement of speakers, cultural practices of gaming, aesthetic devices, and software configurations) produce and translate a specific mode of auditory perception.
6

Spöhrer, Markus. "Playing With Auditory Environments in Audio Games." In Advances in Human and Social Aspects of Technology, 87–111. IGI Global, 2019. http://dx.doi.org/10.4018/978-1-5225-7027-1.ch004.

Abstract:
Audio games highlight audio as the major narrative, ludic, and interactive element in the process of gaming. These games enroll the players in the process of gaming and distribute agency by translating auditive cues into interactive “pings” and provide a potential for an auditory virtual space. Designed for either blind persons or as “learning software” for hard-of-hearing people, audio games dismiss graphical elements by using the auditory ludic elements and foreground auditory perception as a main condition for playing the game. Spöhrer demonstrates this by using the example of 3D Snake, which needs to be played with headphones or surround speakers. The game uses verbal instructions and different sound effects to produce an auditory image of a snake that can be moved with the computer keyboard. In this auditory environment, the relation of both human and non-human elements (e.g., controller devices, the arrangement of speakers, cultural practices of gaming, aesthetic devices, and software configurations) produce and translate a specific mode of auditory perception.
7

Engel, Isaac, and Lorenzo Picinali. "Reverberation and its Binaural Reproduction: The Trade-off between Computational Efficiency and Perceived Quality." In Advances in Fundamental and Applied Research on Spatial Audio [Working Title]. IntechOpen, 2022. http://dx.doi.org/10.5772/intechopen.101940.

Abstract:
Accurately rendering reverberation is critical to produce realistic binaural audio, particularly in augmented reality applications where virtual objects must blend in seamlessly with real ones. However, rigorously simulating sound waves interacting with the auralised space can be computationally costly, sometimes to the point of being unfeasible in real time applications on resource-limited mobile platforms. Luckily, knowledge of auditory perception can be leveraged to make computational savings without compromising quality. This chapter reviews different approaches and methods for rendering binaural reverberation efficiently, focusing specifically on Ambisonics-based techniques aimed at reducing the spatial resolution of late reverberation components. Potential future research directions in this area are also discussed.
8

KAN, A., C. T. JIN, and A. VAN SCHAIK. "PSYCHOACOUSTIC EVALUATION OF DIFFERENT METHODS FOR CREATING INDIVIDUALIZED, HEADPHONE-PRESENTED VIRTUAL AUDITORY SPACE FROM B-FORMAT ROOM IMPULSE RESPONSES." In Principles and Applications of Spatial Hearing, 303–13. WORLD SCIENTIFIC, 2011. http://dx.doi.org/10.1142/9789814299312_0024.

9

IWAYA, Y., M. OTANI, and Y. SUZUKI. "DEVELOPMENT OF VIRTUAL AUDITORY DISPLAY SOFTWARE RESPONSIVE TO HEAD MOVEMENT AND A CONSIDERATION ON DERATION OF SPATIALIZED AMBIENT SOUND TO IMPROVE REALISM OF PERCEIVED SOUND SPACE." In Principles and Applications of Spatial Hearing, 121–35. WORLD SCIENTIFIC, 2011. http://dx.doi.org/10.1142/9789814299312_0010.


Conference papers on the topic "Virtual auditory space"

1

Di Ai and HaiLong Wu. "Electronic compass for virtual auditory space." In 2010 International Conference on Progress in Informatics and Computing (PIC). IEEE, 2010. http://dx.doi.org/10.1109/pic.2010.5687906.

2

Hugeng, Hugeng, Jovan Anggara, and Dadang Gunawan. "Enhanced three-dimensional HRIRs interpolation for virtual auditory space." In 2017 International Conference on Signals and Systems (ICSigSys). IEEE, 2017. http://dx.doi.org/10.1109/icsigsys.2017.7967065.

3

Chabot, Samuel, Wendy Lee, Rebecca Elder, and Jonas Braasch. "Using a Multimodal Immersive Environment to Investigate Perceptions in Augmented Virtual Reality Systems." In The 24th International Conference on Auditory Display. Arlington, Virginia: The International Community for Auditory Display, 2018. http://dx.doi.org/10.21785/icad2018.014.

Abstract:
The Collaborative-Research Augmented Immersive Virtual Environment Laboratory at Rensselaer is a state-of-the-art space that offers users the capabilities of multimodality and immersion. Realistic and abstract sets of data can be explored in a variety of ways, even in large group settings. This paper discusses the motivations of the immersive experience and the advantages over smaller-scale and single-modality expressions of data. One experiment focuses on the influence of immersion on perceptions of architectural renderings. Its findings suggest disparities in participants’ judgments when viewing either two-dimensional printouts or the immersive CRAIVE-Lab screen. The advantages of multimodality are discussed in an experiment concerning abstract data exploration. Various auditory cues for aiding in visual data extraction were tested for their effects on participants’ speed and accuracy of information extraction. Finally, artificially generated auralizations are paired with recreations of realistic spaces to analyze the influences of immersive visuals on the perceptions of sound fields. One utilized method for creating these sound fields is a geometric ray-tracing model, which calculates the auditory streams of each individual loudspeaker in the lab to create a cohesive sound field representation of the visual space.
4

Chabot, Samuel, and Jonas Braasch. "An Immersive Virtual Environment for Congruent Audio-Visual Spatialized Data Sonifications." In The 23rd International Conference on Auditory Display. Arlington, Virginia: The International Community for Auditory Display, 2017. http://dx.doi.org/10.21785/icad2017.072.

Abstract:
The use of spatialization techniques in data sonification provides system designers with an additional tool for conveying information to users. Oftentimes, spatialized data sets are meant to be experienced by a single or few users at a time. Projects at Rensselaer's Collaborative-Research Augmented Immersive Virtual Environment Laboratory allow even large groups of collaborators to work within a shared virtual environment system. The lab provides an equal emphasis on the visual and audio system, with a nearly 360 degree panoramic display and 128-loudspeaker array housed behind the acoustically-transparent screen. The space allows for dynamic switching between immersions in recreations of physical scenes and presentations of abstract or symbolic data. Content creation for the space is not a complex process - the entire display is essentially a single desktop and straight-forward tools such as the Virtual Microphone Control allow for dynamic real-time spatialization. With the ability to target individual channels in the array, audio-visual congruency is achieved. The loudspeaker array creates a high-spatial density soundfield within which users are able to freely explore due to the virtual elimination of a so-called “sweet-spot.”
5

May, Keenan R., Briana Sobel, Jeff Wilson, and Bruce N. Walker. "Auditory Displays to Facilitate Object Targeting in 3D Space." In ICAD 2019: The 25th International Conference on Auditory Display. Newcastle upon Tyne, United Kingdom: Department of Computer and Information Sciences, Northumbria University, 2019. http://dx.doi.org/10.21785/icad2019.008.

Abstract:
In both extreme and everyday situations, humans need to find nearby objects that cannot be located visually. In such situations, auditory display technology could be used to display information supporting object targeting. Unfortunately, spatial audio inadequately conveys sound source elevation, which is crucial for locating objects in 3D space. To address this, three auditory display concepts were developed and evaluated in the context of finding objects within a virtual room, in either low or no visibility conditions: (1) a one-time height-denoting “area cue,” (2) ongoing “proximity feedback,” or (3) both. All three led to improvements in performance and subjective workload compared to no sound. Displays (2) and (3) led to the largest improvements. This pattern was smaller, but still present, when visibility was low, compared to no visibility. These results indicate that persons who need to locate nearby objects in limited visibility conditions could benefit from the types of auditory displays considered here.
6

Iwaya, Yukio, Masahi Toyoda, Makoto Otani, and Yoiti Suzuki. "Evaluation of Realism of Dynamic Sound Space Using a Virtual Auditory Display." In 2012 13th ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel & Distributed Computing (SNPD). IEEE, 2012. http://dx.doi.org/10.1109/snpd.2012.99.

7

Geronazzo, Michele, and Paola Cesari. "A motion based setup for peri-personal space estimation with virtual auditory displays." In VRST '16: 22th ACM Symposium on Virtual Reality Software and Technology. New York, NY, USA: ACM, 2016. http://dx.doi.org/10.1145/2993369.2996303.

8

Iwaya, Yukio, Makoto Otani, Takao Tsuchiya, and Junfeng Li. "Virtual Auditory Display on a Smartphone for High-Resolution Acoustic Space by Remote Rendering." In 2015 International Conference on Intelligent Information Hiding and Multimedia Signal Processing (IIH-MSP). IEEE, 2015. http://dx.doi.org/10.1109/iih-msp.2015.69.

9

Higuera-Trujillo, Juan Luis, Carmen Llinares Millán, Susana Iñarra Abad, and Juan Serra Lluch. "A virtual reality study in university classrooms: The influence of classroom colour on memory and attention." In INNODOCT 2020. Valencia: Editorial Universitat Politècnica de València, 2020. http://dx.doi.org/10.4995/inn2020.2020.11858.

Abstract:
Design of teaching spaces influences the cognitive abilities of its users. Among the design variables, the colour stands out for the ease of its implementation and its aesthetic possibilities. Previous studies suggest that it can influence students' academic progress. However, due to the difficulty in studying their combinations, only a limited number of colours have been exhaustively studied. This was the objective of the present study: to contribute to the study of the effect of different colour parameters applied on the walls of university classrooms on students’ memory and attention performances. To address it, a virtual reality study was carried out with 80 university students. The colour variable was studied through two parameters: hue (8 settings) and saturation (2 settings). The resulting 16 combinations were implemented in a virtual reality university classroom. Memory performance was quantified using a psychological task of remembering an auditory word list, and attention was quantified by the reaction time to auditory stimuli. Analyses indicate that memory and attention performance is affected by some of these parameters, so they could be especially critical in the design of this type of space. Results may be of interest to different agents involved in the university classroom project, from architects and designers to the political leaders of these institutions.
10

Ishikawa, Ayumi. "Visual and Auditory Impression for Virtual Reality Space Expressed by Panoramic Image and Impulse Response Signal." In 2019 IEEE 8th Global Conference on Consumer Electronics (GCCE). IEEE, 2019. http://dx.doi.org/10.1109/gcce46687.2019.9015489.
