Journal articles on the topic 'Virtual auditory space'

Consult the top 50 journal articles for your research on the topic 'Virtual auditory space.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Adams, N. H., and G. H. Wakefield. "State-Space Synthesis of Virtual Auditory Space." IEEE Transactions on Audio, Speech, and Language Processing 16, no. 5 (July 2008): 881–90. http://dx.doi.org/10.1109/tasl.2008.924151.

2

Findlay-Walsh, Iain. "Virtual auditory reality." SoundEffects - An Interdisciplinary Journal of Sound and Sound Experience 10, no. 1 (January 15, 2021): 71–90. http://dx.doi.org/10.7146/se.v10i1.124199.

Abstract:
This article examines popular music listening in light of recent research in auditory perception and spatial experience, record production, and virtual reality, while considering parallel developments in digital pop music production practice. The discussion begins by considering theories of listening and embodiment by Brandon LaBelle, Eric Clarke, Salomè Voegelin and Linda Salter, examining relations between listening subjects and aural environments, conceptualising listening as a process of environmental ‘inhabiting’, and considering auditory experience as the real-time construction of ‘reality’. These ideas are discussed in relation to recent research on popular music production and perception, with a focus on matters of spatial sound design, the virtual ‘staging’ of music performances and performing bodies, digital editing methods and effects, and on shifting relations between musical spatiality, singer-persona, audio technologies, and listener. Writings on music and virtual space by Martin Knakkergaard, Allan Moore, Ragnhild Brøvig-Hanssen & Anne Danielsen, Denis Smalley, Dale Chapman, Kodwo Eshun and Holger Schulze are discussed, before being related to conceptions of VR sound and user experience by Jaron Lanier, Rolf Nordahl & Niels Nilsson, Mel Slater, Tom Garner and Frances Dyson. This critical framework informs three short aural analyses of digital pop tracks released during the last 10 years - Titanium (Guetta & Sia 2010), Ultralight Beam (West 2016) and 2099 (Charli XCX 2019) - presented in the form of autoethnographic ‘listening notes’. Through this discussion on personal popular music listening and virtual spatiality, a theory of pop listening as embodied inhabiting of simulated narrative space, or virtual story-world, with reference to ‘aural-dominant realities’ (Salter), ‘sonic possible worlds’ (Voegelin), and ‘sonic fictions’ (Eshun), is developed. By examining personal music listening in relation to VR user experience, this study proposes listening to pop music in the 21st century as a mode of immersive, embodied ‘storyliving’, or ‘storydoing’ (Allen & Tucker).
3

Kapralos, B., M. R. Jenkin, and E. Milios. "Virtual Audio Systems." Presence: Teleoperators and Virtual Environments 17, no. 6 (December 1, 2008): 527–49. http://dx.doi.org/10.1162/pres.17.6.527.

Abstract:
To be immersed in a virtual environment, the user must be presented with plausible sensory input including auditory cues. A virtual (three-dimensional) audio display aims to allow the user to perceive the position of a sound source at an arbitrary position in three-dimensional space despite the fact that the generated sound may be emanating from a fixed number of loudspeakers at fixed positions in space or a pair of headphones. The foundation of virtual audio rests on the development of technology to present auditory signals to the listener's ears so that these signals are perceptually equivalent to those the listener would receive in the environment being simulated. This paper reviews the human perceptual and technical literature relevant to the modeling and generation of accurate audio displays for virtual environments. Approaches to acoustical environment simulation are summarized and the advantages and disadvantages of the various approaches are presented.
4

Poon, P. W., and J. F. Brugge. "Virtual-space receptive fields of single auditory nerve fibers." Journal of Neurophysiology 70, no. 2 (August 1, 1993): 667–76. http://dx.doi.org/10.1152/jn.1993.70.2.667.

Abstract:
1. Sounds reaching the tympanic membranes are first modified by the acoustic properties of the torso, head, and external ear. For certain frequencies in the incident sound there results a complex, direction-dependent spatial distribution of sound pressure at the eardrum such that, within a sound field, localized areas of pressure maxima are flanked by areas of pressure minima. Listeners may use these spatial maxima and minima in localizing the source of a sound in space. The results presented describe how information about this spatial pressure pattern is transmitted from the cochlea to the central auditory system via single fibers of the auditory nerve. 2. Discharges of single fibers of the auditory nerve were studied in Nembutal-anesthetized cats [characteristic frequencies (CFs) ranged from 0.4 to 40 kHz]. Click stimuli were derived from sound-pressure waveforms that were generated by a loudspeaker placed at 1,800 locations around the cat's head and recorded at the tympanic membrane with miniature microphones. Recorded signals were converted to acoustic stimuli and delivered to the ear via a calibrated and sealed earphone. The full complement of signals is referred to as "virtual acoustic space," and the spatial distribution of discharges to this array of signals is referred to as a "virtual-space receptive field" (VSRF). 3. Fibers detect both pressure maxima and pressure minima in virtual acoustic space. Thus VSRFs take on complex shapes. 4. VSRFs of fibers of the same or similar CF having low spontaneous rates had the same overall pattern as those from high-spontaneous rate (HSR) fibers. For HSR fibers, the VSRF is obscured by the high background spike activity. 5. Comparison of the VSRF and isolevel contour maps of the stimulus derived at various frequencies revealed that auditory nerve fibers most accurately extract spectral information contained in the stimulus at a frequency close to or slightly higher than CF.
5

Zotkin, D. N., R. Duraiswami, and L. S. Davis. "Rendering Localized Spatial Audio in a Virtual Auditory Space." IEEE Transactions on Multimedia 6, no. 4 (August 2004): 553–64. http://dx.doi.org/10.1109/tmm.2004.827516.

6

Hartung, Klaus, Susanne J. Sterbing, Clifford H. Keller, and Terry T. Takahashi. "Applications of virtual auditory space in psychoacoustics and neurophysiology." Journal of the Acoustical Society of America 105, no. 2 (February 1999): 1164. http://dx.doi.org/10.1121/1.425528.

7

Takahashi, Terry T., Clifford H. Keller, David R. Euston, and Michael L. Spezio. "Analysis of auditory spatial receptive fields: An application of virtual auditory space technology." Journal of the Acoustical Society of America 111, no. 5 (2002): 2391. http://dx.doi.org/10.1121/1.4809152.

8

Carlile, Simon, and Daniel Wardman. "Masking produced by broadband noise presented in virtual auditory space." Journal of the Acoustical Society of America 100, no. 6 (December 1996): 3761–68. http://dx.doi.org/10.1121/1.417236.

9

Ishii, Masahiro, Masanori Nakata, and Makoto Sato. "Networked SPIDAR: A Networked Virtual Environment with Visual, Auditory, and Haptic Interactions." Presence: Teleoperators and Virtual Environments 3, no. 4 (January 1994): 351–59. http://dx.doi.org/10.1162/pres.1994.3.4.351.

Abstract:
This research aims at the realization of a networked virtual environment for the design of three-dimensional (3-D) objects. Based on an analysis of an ordinary collaborative design, we illustrate that a collaborative work space consists of a dialog space and an object space. In the dialog space, a participant interacts with partners, and in the object space with an object. The participants enter the dialog space and the object space in turn, appropriately. In addition, collaborative design of 3-D objects is carried out with multimodal interactions: visual, auditory, and haptic. A networked virtual environment must support these interactions without contradiction in either time or space. In this paper, we propose a networked virtual environment for a pair of participants to satisfy the conditions described above. To implement the networked system, we take into account the necessity of visual, auditory, and haptic interactions, the need for participants to switch between the dialog space and the object space quickly and appropriately, and human ergonomics on the functional space of hands and eyes. An experiment on hand-over task was done to investigate the effect of the networked haptic device with the proposed system. Object layout tasks, such as toy block layout, office furniture layout, city building layout, etc., can be performed by using this environment.
10

Venkateswaran Nisha, Kavassery, and Ajith Uppunda Kumar. "Virtual Auditory Space Training-Induced Changes of Auditory Spatial Processing in Listeners with Normal Hearing." Journal of International Advanced Otology 13, no. 1 (May 29, 2017): 118–27. http://dx.doi.org/10.5152/iao.2017.3477.

11

Jenison, Rick L., and Kate Fissell. "A Spherical Basis Function Neural Network for Modeling Auditory Space." Neural Computation 8, no. 1 (January 1996): 115–28. http://dx.doi.org/10.1162/neco.1996.8.1.115.

Abstract:
This paper describes a neural network for approximation problems on the sphere. The von Mises basis function is introduced, whose activation depends on polar rather than Cartesian input coordinates. The architecture of the von Mises Basis Function (VMBF) neural network is presented along with the corresponding gradient-descent learning rules. The VMBF neural network is used to solve a particular spherical problem of approximating acoustic parameters used to model perceptual auditory space. This model ultimately serves as a signal processing engine to synthesize a virtual auditory environment under headphone listening conditions. Advantages of the VMBF over standard planar Radial Basis Functions (RBFs) are discussed.
12

Hu, Hongmei, Lin Zhou, Hao Ma, and Zhenyang Wu. "HRTF personalization based on artificial neural network in individual virtual auditory space." Applied Acoustics 69, no. 2 (February 2008): 163–72. http://dx.doi.org/10.1016/j.apacoust.2007.05.007.

13

Tominaga, Hiroshi, and Tatsuhiro Yonekura. "A proposal of an auditory interface for the virtual three-dimensional space." Systems and Computers in Japan 30, no. 11 (October 1999): 77–84. http://dx.doi.org/10.1002/(sici)1520-684x(199910)30:11<77::aid-scj9>3.0.co;2-h.

14

Steffens, Henning, Michael Schutte, and Stephan D. Ewert. "Acoustically driven orientation and navigation in enclosed spaces." Journal of the Acoustical Society of America 152, no. 3 (September 2022): 1767–82. http://dx.doi.org/10.1121/10.0013702.

Abstract:
Awareness of space, and subsequent orientation and navigation in rooms, is dominated by the visual system. However, humans are able to extract auditory information about their surroundings from early reflections and reverberation in enclosed spaces. To better understand orientation and navigation based on acoustic cues only, three virtual corridor layouts (I-, U-, and Z-shaped) were presented using real-time virtual acoustics in a three-dimensional 86-channel loudspeaker array. Participants were seated on a rotating chair in the center of the loudspeaker array and navigated using real rotation and virtual locomotion by “teleporting” in steps on a grid in the invisible environment. A head mounted display showed control elements and the environment in a visual reference condition. Acoustical information about the environment originated from a virtual sound source at the collision point of a virtual ray with the boundaries. In different control modes, the ray was cast either in view or hand direction or in a rotating, “radar”-like fashion in 90° steps to all sides. Time to complete, number of collisions, and movement patterns were evaluated. Navigation and orientation were possible based on the direct sound with little effect of room acoustics and control mode. Underlying acoustic cues were analyzed using an auditory model.
15

Wang, Yi, Ge Yu, Guan-Yang Liu, Chao Huang, and Yu-Hang Wang. "Vibrotactile-Based Operational Guidance System for Space Science Experiments." Actuators 10, no. 9 (September 9, 2021): 229. http://dx.doi.org/10.3390/act10090229.

Abstract:
On-orbit astronauts and scientists on the ground need to cooperate closely to complete space science experiments efficiently. However, given the increasingly diverse space science experiments, scientists are unable to train astronauts on the ground in the details of each experiment. The traditional interaction through visual and auditory channels is not enough for scientists to guide astronauts directly through experiments. An intuitive and transparent interaction interface between scientists and astronauts has to be built to meet the requirements of space science experiments. Therefore, this paper proposes a vibrotactile guidance system for cooperation between scientists and astronauts. We utilized Kinect V2 sensors to track the movements of the participants of space science experiments, processed the data in a virtual experimental environment developed in Unity 3D, and provided astronauts with different guidance instructions using a wearable vibrotactile device. Compared with other schemes using only visual and auditory channels, our approach provides more direct and more efficient guidance information, so that what astronauts perceive is what they need to perform different tasks. Three virtual space science experiment tasks verified the feasibility of the vibrotactile operational guidance system. Participants were able to complete the experimental tasks after a short period of training, and the experimental results show that the method has promising application prospects.
16

Morita, Ippei, Ayako Kohyama-Koganeya, Toki Saito, Chie Obuchi, and Hiroshi Oyama. "VRAT: A Proposal of Training Method for Auditory Information Processing Using Virtual Space." Japanese Journal for Medical Virtual Reality 17, no. 1 (December 1, 2020): 23–32. http://dx.doi.org/10.7876/jmvr.17.23.

17

Fujiki, Nobuya, Klaus A. J. Riederer, Veikko Jousmäki, Jyrki P. Mäkelä, and Riitta Hari. "Human cortical representation of virtual auditory space: differences between sound azimuth and elevation." European Journal of Neuroscience 16, no. 11 (December 2002): 2207–13. http://dx.doi.org/10.1046/j.1460-9568.2002.02276.x.

18

Mrsic-Flogel, Thomas D., Andrew J. King, and Jan W. H. Schnupp. "Encoding of Virtual Acoustic Space Stimuli by Neurons in Ferret Primary Auditory Cortex." Journal of Neurophysiology 93, no. 6 (June 2005): 3489–503. http://dx.doi.org/10.1152/jn.00748.2004.

Abstract:
Recent studies from our laboratory have indicated that the spatial response fields (SRFs) of neurons in the ferret primary auditory cortex (A1) with best frequencies ≥4 kHz may arise from a largely linear processing of binaural level and spectral localization cues. Here we extend this analysis to investigate how well the linear model can predict the SRFs of neurons with different binaural response properties and the manner in which SRFs change with increases in sound level. We also consider whether temporal features of the response (e.g., response latency) vary with sound direction and whether such variations can be explained by linear processing. In keeping with previous studies, we show that A1 SRFs, which we measured with individualized virtual acoustic space stimuli, expand and shift in direction with increasing sound level. We found that these changes are, in most cases, in good agreement with predictions from a linear threshold model. However, changes in spatial tuning with increasing sound level were generally less well predicted for neurons whose binaural frequency-time receptive field (FTRF) exhibited strong excitatory inputs from both ears than for those in which the binaural FTRF revealed either a predominantly inhibitory effect or no clear contribution from the ipsilateral ear. Finally, we found (in agreement with other authors) that many A1 neurons exhibit systematic response latency shifts as a function of sound-source direction, although these temporal details could usually not be predicted from the neuron's binaural FTRF.
19

Adams, Norman H., and Gregory H. Wakefield. "State-space models of head-related transfer functions for virtual auditory scene synthesis." Journal of the Acoustical Society of America 125, no. 6 (June 2009): 3894–902. http://dx.doi.org/10.1121/1.3124778.

20

Hunter, Michael D., Jessica K. Smith, Nita Taylor, William Woods, Sean A. Spence, Timothy D. Griffiths, and Peter W. R. Woodruff. "Laterality Effects in Perceived Spatial Location of Hallucination-Like Voices." Perceptual and Motor Skills 97, no. 1 (August 2003): 246–50. http://dx.doi.org/10.2466/pms.2003.97.1.246.

Abstract:
Aydin and colleagues reported a reversal of physiological ‘right-ear advantage’ in a group of right-handed patients with schizophrenia, using an auditory acuity test. In schizophrenia, auditory hallucinations may appear to be spatially located inside or outside the patient's head. Here we show, using virtual acoustic space techniques, that normal right-handed subjects have a right-ear advantage for correctly locating the ‘source’ of hallucination-like voices as from either inside or outside the head. We propose a model for understanding lateralised, external hallucinations in schizophrenia based upon reversal of normal cortical asymmetry for auditory spatial processing.
21

Fu, Xiuyan, and Fang Cai. "Acoustic space in narration from the perspective of production." Chinese Semiotic Studies 18, no. 3 (August 1, 2022): 441–58. http://dx.doi.org/10.1515/css-2022-2075.

Abstract:
A major mark of the progress of human society is its transition from the production of things in space to the production of space itself. Oral narrative covers a certain range of space with sound, and physical spaces such as theaters, cinemas, and concert halls accommodate various forms of narrative communication. Drama was the most popular form of mass communication in the pre-industrial age, and its performance required a “sound wall” around the theater to keep people focused. Comment and discussion were both significant forms of narrative consumption. Some consumers who paid more attention to “being seen” even regarded the communication arena as a social platform. In present times, commentary subtitles (Danmaku in Japanese) pop up on computer screens from time to time to make consumers feel as if they are chatting with people while watching. “Chatting” in social media groups also belongs to interaction in virtual space. Surround sound in the cinema wraps the audience and screen into a unifying auditory space, blurring or even eliminating the boundaries between them. Narrative has been an act of producing acoustic space from the very beginning. Nowadays, people adopt more abundant narrative means yet still rely on the imitation of auditory communication.
22

Mrsic-Flogel, Thomas D., Andrew J. King, Rick L. Jenison, and Jan W. H. Schnupp. "Listening Through Different Ears Alters Spatial Response Fields in Ferret Primary Auditory Cortex." Journal of Neurophysiology 86, no. 2 (August 1, 2001): 1043–46. http://dx.doi.org/10.1152/jn.2001.86.2.1043.

Abstract:
The localization of sounds in space is based on spatial cues that arise from the acoustical properties of the head and external ears. Individual differences in localization cue values result from variability in the shape and dimensions of these structures. We have mapped spatial response fields of high-frequency neurons in ferret primary auditory cortex using virtual sound sources based either on the animal's own ears or on the ears of other subjects. For 73% of units, the response fields measured using the animals' own ears differed significantly in shape and/or position from those obtained using spatial cues from another ferret. The observed changes correlated with individual differences in the acoustics. These data are consistent with previous reports showing that humans localize less accurately when listening to virtual sounds from other individuals. Together these findings support the notion that neural mechanisms underlying auditory space perception are calibrated by experience to the properties of the individual.
23

Bălan, Oana, Alin Moldoveanu, and Florica Moldoveanu. "Multimodal Perceptual Training for Improving Spatial Auditory Performance in Blind and Sighted Listeners." Archives of Acoustics 40, no. 4 (December 1, 2015): 491–502. http://dx.doi.org/10.1515/aoa-2015-0049.

Abstract:
The use of individualised Head Related Transfer Functions (HRTF) is a fundamental prerequisite for obtaining an accurate rendering of 3D spatialised sounds in virtual auditory environments. The HRTFs are transfer functions that define the acoustical basis of auditory perception of a sound source in space and are frequently used in virtual auditory displays to simulate free-field listening conditions. However, they depend on the anatomical characteristics of the human body and significantly vary among individuals, so that the use of the same dataset of HRTFs for all the users of a designed system will not offer the same level of auditory performance. This paper presents an alternative approach to the use of non-individualised HRTFs that is based on procedural learning, training, and adaptation to altered auditory cues. We tested the sound localisation performance of nine sighted and visually impaired people, before and after a series of perceptual (auditory, visual, and haptic) feedback-based training sessions. The results demonstrated that our subjects significantly improved their spatial hearing under altered listening conditions (such as the presentation of 3D binaural sounds synthesised from non-individualised HRTFs), the improvement being reflected in a higher localisation accuracy and a lower rate of front-back confusion errors.
24

Richard, Paul, Damien Chamaret, François-Xavier Inglese, Philippe Lucidarme, and Jean-Louis Ferrier. "Human-Scale Virtual Environment for Product Design: Effect of Sensory Substitution." International Journal of Virtual Reality 5, no. 2 (January 1, 2006): 37–44. http://dx.doi.org/10.20870/ijvr.2006.5.2.2687.

Abstract:
This paper presents a human-scale virtual environment (VE) with haptic feedback, along with two experiments performed in the context of product design. The user interacts with a virtual mock-up using a large-scale bimanual string-based haptic interface called SPIDAR (Space Interface Device for Artificial Reality). An original self-calibration method is proposed. A vibro-tactile glove was developed and integrated with the SPIDAR to provide tactile cues to the operator. The purpose of the first experiment was: (1) to examine the effect of tactile feedback in a task involving reach-and-touch of different parts of a digital mock-up, and (2) to investigate the use of sensory substitution in such tasks. The second experiment aimed to investigate the effect of visual and auditory feedback in a car-light maintenance task. Results of the first experiment indicate that the users could easily and quickly access and finely touch the different parts of the digital mock-up when sensory feedback (either visual, auditory, or tactile) was present. Results of the second experiment show that visual and auditory feedback improve average placement accuracy by about 54% and 60%, respectively, compared to the open-loop case.
25

Calleri, Cristina, Louena Shtrepi, Alessandro Armando, and Arianna Astolfi. "Evaluation of the influence of building façade design on the acoustic characteristics and auditory perception of urban spaces." Building Acoustics 25, no. 1 (March 2018): 77–95. http://dx.doi.org/10.1177/1351010x18757353.

Abstract:
Auditory perception has been proved to have an influence on how we live and move through places and on the use of public spaces. However, despite the numerous studies that have focused on the theme of soundscape and auditory perception of urban spaces, these aspects have not been studied in connection with the design of the building surrounding such spaces. This study focuses on the influence of façade design on acoustic characteristics of an urban space and on the subjective spatial perception of the users. Simulations and auralizations have been conducted through ODEON software (v.13) on the virtual model of a small square of Turin (Italy). Different absorption and scattering coefficients of façade upholsteries have been applied to the façades of the building surrounding the square, choosing from a pool of typical building façade materials. Results of a listening test have proved that the absorption coefficient of the façades has an influence on the subjective perception of space wideness. Moreover, multiple regression analysis has been conducted in order to find a mathematical relation between space wideness perception and objective acoustic parameters. It was shown that the relation between the perceptual aspects and the objective parameters is strongly dependent on the listening position.
26

Strybel, Thomas Z. "Auditory Spatial Information and Head-Coupled Display Systems." Proceedings of the Human Factors Society Annual Meeting 32, no. 2 (October 1988): 75. http://dx.doi.org/10.1177/154193128803200215.

Abstract:
Developments of head-coupled control/display systems have focused primarily on the display of three-dimensional visual information, as the visual system is the optimal sensory channel for the acquisition of spatial information in humans. The auditory system improves the efficiency of vision, however, by obtaining spatial information about relevant objects outside of the visual field of view. This auditory information can be used to direct head and eye movements. Head-coupled display systems can also benefit from the addition of auditory spatial information, as it provides a natural method of signaling the location of important events outside of the visual field of view. This symposium will report on current efforts in the development of head-coupled display systems, with an emphasis on the auditory spatial component. The first paper, “Virtual Interface Environment Workstations” by Scott S. Fisher, will report on the development of a prototype virtual environment. This environment consists of a head-mounted, wide-angle, stereoscopic display system which is controlled by operator position, voice, and gesture. With this interface, an operator can virtually explore a 360 degree synthesized environment and viscerally interact with its components. The second paper, “A Virtual Display System For Conveying Three-Dimensional Acoustic Information” by Elizabeth M. Wenzel, Frederic L. Wightman and Scott H. Foster, will report on the development of a method of synthetically generating three-dimensional sound cues for the above-mentioned interface. The development of simulated auditory spatial cues is limited to some extent by our knowledge of auditory spatial processing. The remaining papers will report on two areas of auditory space perception that have received little attention until recently. “Perception of Real and Simulated Motion in the Auditory Modality”, by Thomas Z. Strybel, will review recent research on auditory motion perception, because a natural acoustic environment must contain moving sounds. This review will consider applications of this knowledge to head-coupled display systems. The last paper, “Auditory Psychomotor Coordination”, will examine the interplay between the auditory, visual and motor systems. The specific emphasis of this paper is the use of auditory spatial information in the regulation of motor responses so as to provide efficient application of the visual channel.
27

Soret, Rébaï, Pom Charras, Christophe Hurter, and Vsevolod Peysakhovich. "Attentional Orienting in Front and Rear Spaces in a Virtual Reality Discrimination Task." Vision 6, no. 1 (January 6, 2022): 3. http://dx.doi.org/10.3390/vision6010003.

Abstract:
Recent studies on covert attention suggested that the visual processing of information in front of us differs depending on whether the information is actually present in front of us or is a reflection of information behind us (mirror information). This difference in processing suggests that we have different processes for directing our attention to objects in front of us (front space) or behind us (rear space). In this study, we investigated the effects of attentional orienting in front and rear space following visual or auditory endogenous cues. Twenty-one participants performed a modified version of the Posner paradigm in virtual reality during a spaceship discrimination task. An eye tracker integrated into the virtual reality headset was used to make sure that the participants did not move their eyes and used their covert attention. The results show that informative cues produced faster response times than non-informative cues but had no impact on target identification. In addition, we observed faster response times when the target occurred in front space rather than in rear space. These results are consistent with a differentiation of the orienting cognitive process between front and rear space. Several explanations are discussed. No effect was found on subjects' eye movements, suggesting that participants did not use their overt attention to improve task performance.
28

Levey, Elizabeth J. "Analyzing from Home: The Virtual Space as a Flexible Container." Psychodynamic Psychiatry 49, no. 3 (August 2021): 425–40. http://dx.doi.org/10.1521/pdps.2021.49.3.425.

Abstract:
This manuscript explores the experience of teleanalysis for analyst and patient during the COVID-19 pandemic through the lenses of embodied intersubjective relating, the neurobiology of social engagement, and technologically mediated human interaction. At the beginning of the pandemic, many analytic dyads were embarking on remote work for the first time. More than a year later, we are facing the question of whether we will ever return to in-person work. In order to unpack this question, it is useful to consider how in-person analysis and in-person interaction more generally differ from remote interaction. Multiple nonverbal modalities are responsible for affective coregulation in intersubjective relating, including voice, body, and shared physical space. While conscious awareness tends to concentrate on auditory and visual inputs, other sensory inputs also impact affective experience. The impact of physical distance upon psychoanalytic treatment is compared with that of the couch. The shift in the balance of power introduced by teleanalysis is considered. Analyzing and being analyzed from home bend the frame of psychoanalysis, complicating notions about distance and intimacy and opening new spaces in which meaning can be cocreated. The COVID-19 pandemic presents an opportunity for psychoanalysis to engage more deeply with the questions raised by teleanalysis in order to enhance our understanding of its impact on treatment.
29

Dufour, Frank. "Acoustic Shadows: An Auditory Exploration of the Sense of Space." SoundEffects - An Interdisciplinary Journal of Sound and Sound Experience 1, no. 1 (December 2, 2011): 82–97. http://dx.doi.org/10.7146/se.v1i1.4074.

Abstract:
This paper examines the question of auditory detection of the movements of silent objects in noisy environments. The approach to studying and exploring this phenomenon is primarily based on the framework of the ecology of perception defined by James Gibson (Gibson, 1979), in the sense that it focuses on the direct auditory perception of events, or “structured energy that specifies properties of the environment” (Michaels & Carello, 1981, p. 157). The goal of this study is threefold:
- Theoretical: for various reasons, this kind of acoustic situation has not been extensively studied by traditional acoustics and psychoacoustics; this project therefore demonstrates and supports the pertinence of the ecology of perception for the description and explanation of such complex phenomena.
- Practical: like echolocation, the perception of acoustic shadows can be improved by practice; this project intends to contribute to the acknowledgment of this way of listening and to help individuals placed in noisy environments without the support of vision acquire a detailed detection of the movements occurring in these environments.
- Artistic: this project explores a new artistic expression based on the creation and exploration of complex multisensory environments. Acoustic Shadows, a multimedia interactive composition, is being developed on the premises of the ecological approach to perception. The last dimension of this project is meant to be a contribution to the sonic representation of space in films and in computer-generated virtual environments by producing simulations of acoustic shadows.
30

Rylskaya, Elena A., and Dmitry N. Pogorelov. "Personal identity in the virtual space of social networks and real identity: comparative characteristics." Yaroslavl Pedagogical Bulletin 1, no. 118 (2021): 105–14. http://dx.doi.org/10.20323/1813-145x-2021-1-118-105-114.

Abstract:
The high referentiality of the virtual space contributes to a certain transformation of the ego-identity of modern users into the so-called identity in the virtual space of social networks. Personal identity in the virtual space of social networks can be considered as a subsystem of ego identity, consisting of textual, visual, and auditory characteristics of the virtual image, reflecting the physical and personal properties and communication features that determine the integrity and identity of the individual within the subculture of users of social networks. The purpose of the article is a comparative analysis of the characteristics of real personality identity and personality identity in the virtual space of social networks. Study sample: 285 social media users aged 18 to 72 years. Research methods: the Kuhn–McPartland test «Who am I?» with the modification «Who am I online?» and subsequent content analysis by a team of three experts. The study revealed that self-descriptions of real identity are more formalized, more often reflect the respondents' focus on real-life problems, and contain negative connotations in the presentation of their character traits and emotional and physical state. In the characteristics of real identity, descriptions of oneself from the standpoint of marital status and actually performed social roles, together with interpretations of physical appearance, are predominant. The image of a person in the virtual space of social networks, being more multifaceted, is characterized by greater creativity, a predominance of positive emotional states, elements of «embellishment» of oneself, and manifestations of an aggressive style of behavior. In the descriptions of personality identity in the virtual space of social networks, there are more often ideas about oneself from the standpoint of the sphere of communication and the peculiarities of communication with people; in the context of the specifics of activities in social networks, there is a large number of characteristics associated with the virtual appearance. In the presented self-descriptions, an invariant component was also revealed, containing categories common to real identity and personality identity in the virtual space of social networks: personality traits, character traits, activity content, and abilities.
31

Pralong, Danièle, and Simon Carlile. "The role of individualized headphone calibration for the generation of high fidelity virtual auditory space." Journal of the Acoustical Society of America 100, no. 6 (December 1996): 3785–93. http://dx.doi.org/10.1121/1.417337.

32

Brimijoin, W. Owen, Shawn Featherly, and Philip Robinson. "Mapping the perceptual topology of auditory space permits the creation of hyperstable virtual acoustic environments." Acoustical Science and Technology 41, no. 1 (January 1, 2020): 245–48. http://dx.doi.org/10.1250/ast.41.245.

33

Schlenker, Philippe. "Outline of Music Semantics." Music Perception 35, no. 1 (September 1, 2017): 3–37. http://dx.doi.org/10.1525/mp.2017.35.1.3.

Abstract:
We provide the outline of a semantics for music. We take music cognition to be continuous with normal auditory cognition, and thus to deliver inferences about “virtual sources” of the music. As a result, sound parameters that trigger inferences about sound sources in normal auditory cognition produce related ones in music. But music also triggers inferences on the basis of the movement of virtual sources in tonal pitch space, which has points of stability, points of instability, and relations of attraction among them. We sketch a framework that aggregates inferences from normal auditory cognition and tonal inferences, by way of a theory of musical truth: a source undergoing a musical movement m is true of an object undergoing a series of events e just in case there is a certain structure-preserving map between m and e. This framework can help revisit aspects of musical syntax: Lerdahl and Jackendoff’s (1983) grouping structure can be seen to reflect the mereology (“partology”) of events that are abstractly represented in the music. Finally, we argue that this “referentialist” approach to music semantics still has the potential to provide an account of diverse emotional effects in music.
34

Yoon, YooChang, Dongmin Moon, and Seongah Chin. "Fine Tactile Representation of Materials for Virtual Reality." Journal of Sensors 2020 (January 17, 2020): 1–8. http://dx.doi.org/10.1155/2020/7296204.

Abstract:
The most important aspect of virtual reality (VR) is the degree by which a user can feel and experience virtual space as though it is reality. Until recently, the experience of VR had to be satisfied with operations using a separate controller along with the visual and auditory elements. However, for a far more realistic VR environment, users should be able to experience the delicacy of tactile materials. This study proposes tactile technology, which is inexpensive and easy to use. To achieve this, we analyzed the unique patterns of materials through image filtering and designed a computing model to deliver realistic vibrations to the user. In addition, we developed and tested a haptic glove so that the texture of the material can be sensed in a VR environment.
35

Loomis, Jack M., Reginald G. Golledge, and Roberta L. Klatzky. "Navigation System for the Blind: Auditory Display Modes and Guidance." Presence: Teleoperators and Virtual Environments 7, no. 2 (April 1998): 193–203. http://dx.doi.org/10.1162/105474698565677.

Abstract:
The research we are reporting here is part of our effort to develop a navigation system for the blind. Our long-term goal is to create a portable, self-contained system that will allow visually impaired individuals to travel through familiar and unfamiliar environments without the assistance of guides. The system, as it exists now, consists of the following functional components: (1) a module for determining the traveler's position and orientation in space, (2) a Geographic Information System comprising a detailed database of our test site and software for route planning and for obtaining information from the database, and (3) the user interface. The experiment reported here is concerned with one function of the navigation system: guiding the traveler along a predefined route. We evaluate guidance performance as a function of four different display modes: one involving spatialized sound from a virtual acoustic display, and three involving verbal commands issued by a synthetic speech display. The virtual display mode fared best in terms of both guidance performance and user preferences.
36

Lahav, O., H. Gedalevitz, S. Battersby, D. Brown, L. Evett, and P. Merritt. "Using Wii technology to explore real spaces via virtual environments for people who are blind." Journal of Assistive Technologies 8, no. 3 (September 9, 2014): 150–60. http://dx.doi.org/10.1108/jat-02-2014-0009.

Abstract:
Purpose – Virtual environments (VEs) that represent real spaces (RSs) give people who are blind the opportunity to build a cognitive map in advance that they will be able to use when arriving at the RS. The paper aims to discuss this issue.
Design/methodology/approach – In this research study, Nintendo Wii-based technology was used for exploring VEs via the Wiici application. The Wiimote allows the user to interact with VEs by simulating walking and scanning the space.
Findings – By getting haptic and auditory feedback, the user learned to explore new spaces. The authors examined the participants' abilities to explore new simple and complex places, construct a cognitive map, and perform orientation tasks in the RS.
Originality/value – To the authors' knowledge, this presents the first VE for people who are blind that allows the participants to scan the environment and thereby construct map model spatial representations.
37

Leman, Marc. "A Model of Retroactive Tone-Center Perception." Music Perception 12, no. 4 (1995): 439–71. http://dx.doi.org/10.2307/40285676.

Abstract:
In this paper, a model for tone-center perception is developed. It is based on an auditory model and principles of schema dynamics such as self-organization and association. The auditory module simulates virtual-pitch perception by transforming musical signals into auditory images. The schema-based module involves data-driven long-term learning for the self-organization of a schema for tone-center perception. The focus of this paper is on a retroactive process (called perceptual interpretation) by which the sense of tone center is adjusted according to a reconsideration of preceding perceptions in view of new contextual evidence within the schema. To this purpose, a metaphor is introduced, in which perceptual interpretation is described as the movement of a snaillike object in an attractor space. Additionally, the mathematical details of the model are presented, and the results of computer simulations are discussed.
38

Otani, Makoto, Fuminari Hirata, Kazunori Itoh, Masami Hashimoto, and Mizue Kayama. "Perception of proximal sound sources in virtual auditory space by distance-variable head-related transfer functions." Acoustical Science and Technology 33, no. 5 (2012): 332–34. http://dx.doi.org/10.1250/ast.33.332.

39

Campbell, R. A. A., A. J. King, F. R. Nodal, J. W. H. Schnupp, S. Carlile, and T. P. Doubell. "Virtual Adult Ears Reveal the Roles of Acoustical Factors and Experience in Auditory Space Map Development." Journal of Neuroscience 28, no. 45 (November 5, 2008): 11557–70. http://dx.doi.org/10.1523/jneurosci.0545-08.2008.

40

Li, Jingyi, Alexandra Mayer, and Andreas Butz. "Towards a Design Space of Haptics in Everyday Virtual Reality across Different Spatial Scales." Multimodal Technologies and Interaction 5, no. 7 (July 3, 2021): 36. http://dx.doi.org/10.3390/mti5070036.

Abstract:
Virtual Reality (VR) has become a consumer-grade technology, especially with the advent of standalone headsets working independently from a powerful computer. Domestic VR mainly uses the visual and auditory senses since VR headsets make this accessible. Haptic feedback, however, has the potential to increase immersion substantially. So far, it is mostly used in laboratory settings with specialized haptic devices. Especially for domestic VR, there is underexplored potential in exploiting physical elements of the often confined space in which it is used. In a literature review (n = 20), we analyzed VR interaction using haptic feedback with or without physical limitations. From this, we derive a design space for VR haptics across three spatial scales (seated, standing, and walking). In our narrow selection of papers, we found inspirations for future work and will discuss two example scenarios. Our work gives a current overview of haptic VR solutions and highlights strategies for adapting laboratory solutions to an everyday context.
41

Greiter, Wolfgang, and Uwe Firzlaff. "Echo-acoustic flow shapes object representation in spatially complex acoustic scenes." Journal of Neurophysiology 117, no. 6 (June 1, 2017): 2113–24. http://dx.doi.org/10.1152/jn.00860.2016.

Abstract:
Echolocating bats use echoes of their sonar emissions to determine the position and distance of objects or prey. Target distance is represented as a map of echo delay in the auditory cortex (AC) of bats. During a bat’s flight through a natural complex environment, echo streams are reflected from multiple objects along its flight path. Separating such complex streams of echoes or other sounds is a challenge for the auditory system of bats as well as other animals. We investigated the representation of multiple echo streams in the AC of anesthetized bats (Phyllostomus discolor) and tested the hypothesis that neurons can lock on echoes from specific objects in a complex echo-acoustic pattern while the representation of surrounding objects is suppressed. We combined naturalistic pulse/echo sequences simulating a bat’s flight through a virtual acoustic space with extracellular recordings. Neurons could selectively lock on echoes from one object in complex echo streams originating from two different objects along a virtual flight path. The objects were processed sequentially in the order in which they were approached. Object selection depended on sequential changes of echo delay and amplitude, but not on absolute values. Furthermore, the detailed representation of the object echo delays in the cortical target range map was not fixed but could be dynamically adapted depending on the temporal pattern of sonar emission during target approach within a simulated flight sequence.
NEW & NOTEWORTHY Complex signal analysis is a challenging task in sensory processing for all animals, particularly for bats because they use echolocation for navigation in darkness. Recent studies proposed that the bat’s perceptional system might organize complex echo-acoustic information into auditory streams, allowing it to track specific auditory objects during flight. We show that in the auditory cortex of bats, neurons can selectively respond to echo streams from specific objects.
42

Ellen Peng, Z., and Ruth Y. Litovsky. "The Role of Interaural Differences, Head Shadow, and Binaural Redundancy in Binaural Intelligibility Benefits Among School-Aged Children." Trends in Hearing 25 (January 2021): 233121652110453. http://dx.doi.org/10.1177/23312165211045313.

Abstract:
In complex listening environments, children can benefit from auditory spatial cues to understand speech in noise. When a spatial separation is introduced between the target and masker and/or listening with two ears versus one ear, children can gain intelligibility benefits with access to one or more auditory cues for unmasking: monaural head shadow, binaural redundancy, and interaural differences. This study systematically quantified the contribution of individual auditory cues in providing binaural speech intelligibility benefits for children with normal hearing between 6 and 15 years old. In virtual auditory space, target speech was presented from +90° azimuth (i.e., listener's right), and two-talker babble maskers were either co-located (+90° azimuth) or separated by 180° (–90° azimuth, listener's left). Testing was conducted over headphones in monaural (i.e., right ear) or binaural (i.e., both ears) conditions. Results showed continuous improvement of speech reception threshold (SRT) between 6 and 15 years old and immature performance at 15 years of age for both SRTs and intelligibility benefits from more than one auditory cue. With early maturation of head shadow, the prolonged maturation of unmasking was likely driven by children's poorer ability to gain full benefits from interaural difference cues. In addition, children demonstrated a trade-off between the benefits from head shadow versus interaural differences, suggesting an important aspect of individual differences in accessing auditory cues for binaural intelligibility benefits during development.
43

Li, Jing, Shan Shan Bai, Yi Long Wang, Ping Xi, and Bing Yi Li. "Design of Virtual Machine Assembly Simulation System in Single-Channel Immersion." Key Engineering Materials 620 (August 2014): 556–62. http://dx.doi.org/10.4028/www.scientific.net/kem.620.556.

Abstract:
Virtual reality technology has been widely applied in virtual manufacturing, where virtual environments integrating visual, auditory and tactile feedback can be established. In this paper, a single-channel immersive machine assembly simulation system was built with the EON Studio virtual reality software and virtual peripherals. To build the immersive virtual environment, a three-dimensional solid model of the machine was created in CATIA, rendered and colored in 3ds Max, and imported into EON through its conversion interface. Interactive virtual assembly of the machine was then investigated, and two interaction methods were developed: one realizes interaction through the keyboard, mouse and other input devices by means of triggering sensor nodes, event driving and routing mechanisms; the other establishes a virtual hand, obtaining the spatial position of the operator's hand through a data glove and a Flock of Birds tracker and converting it into the position of the virtual hand in virtual space, so as to grasp, move and release objects in the immersive virtual environment. In this way the virtual machine assembly is completed, assembly trajectory visualization is realized, and a basis is provided for assembly path analysis. A human-computer interface for the machine assembly simulation system was developed, the integration of its modules was realized through the transmission of information between EON Studio and the machine model, and reasonable guidelines for machine assembly training were provided.
44

Coulson, Graeme, and Helena Bender. "Wombat Roadkill Was Not Reduced by a Virtual Fence. Comment on Stannard et al. Can Virtual Fences Reduce Wombat Road Mortalities? Ecol. Eng. 2021, 172, 106414." Animals 12, no. 10 (May 23, 2022): 1323. http://dx.doi.org/10.3390/ani12101323.

Abstract:
The roadkill of wildlife is a global problem. Much has been written about deterring wildlife from roads, but, as of yet, there is no empirical support for deterrents based on visual and/or auditory signals. A recent paper entitled ‘Can virtual fences reduce wombat road mortalities?’ reported the results of a roadkill mitigation trial. The authors installed a ‘virtual fence’ system produced by iPTE Traffic Solutions Ltd. (Graz, Austria) and evaluated its effectiveness for reducing roadkills of bare-nosed wombats (Vombatus ursinus) in southern Australia. The authors recorded roadkills in a simple Before-After-Control-Impact design but did not conduct any formal statistical analysis. They also measured three contextual variables (vegetation, wombat burrows, and vehicle velocity) but did not link these to the occurrence of roadkills in space and time. The authors concluded that the iPTE virtual fence system was ‘minimally effective’, yet ‘appears promising’. Our analysis of their data, using standard inferential statistics, showed no effect of the virtual fence on roadkills whatsoever. We conclude that the iPTE system was not effective for mitigating the roadkills of bare-nosed wombats.
45

Collignon, Olivier, Marco Davare, Anne G. De Volder, Colline Poirier, Etienne Olivier, and Claude Veraart. "Time-course of Posterior Parietal and Occipital Cortex Contribution to Sound Localization." Journal of Cognitive Neuroscience 20, no. 8 (August 2008): 1454–63. http://dx.doi.org/10.1162/jocn.2008.20102.

Abstract:
It has been suggested that both the posterior parietal cortex (PPC) and the extrastriate occipital cortex (OC) participate in the spatial processing of sounds. However, the precise time-course of their contribution remains unknown, which is of particular interest, considering that it could give new insights into the mechanisms underlying auditory space perception. To address this issue, we have used event-related transcranial magnetic stimulation (TMS) to induce virtual lesions of either the right PPC or right OC at different delays in subjects performing a sound lateralization task. Our results confirmed that these two areas participate in the spatial processing of sounds. More precisely, we found that TMS applied over the right OC 50 msec after the stimulus onset significantly impaired the localization of sounds presented either to the right or to the left side. Moreover, right PPC virtual lesions induced 100 and 150 msec after sound presentation led to a rightward bias for stimuli delivered on the center and on the left side, reproducing transiently the deficits commonly observed in hemineglect patients. The finding that the right OC is involved in sound processing before the right PPC suggests that the OC exerts a feedforward influence on the PPC during auditory spatial processing.
46

Fan, Rong, and Wenqing Li. "Study on the Design of “Scented Space” Electronic Interactive Device System." MATEC Web of Conferences 228 (2018): 03003. http://dx.doi.org/10.1051/matecconf/201822803003.

Abstract:
Fragrance is noble, simple, deep and approachable. It has accompanied the Chinese nation for over five thousand years and is woven into the glorious history of Chinese civilization. It sparks the inspiration of great virtues, nourishes people's body and mind, and builds a bridge of human wisdom, acting as an important catalyst and impetus for the formation of the Chinese humanistic spirit and philosophical thought. This project takes the electronic interactive device artwork "Scented Space" as an example, extracts the visual elements of fragrance culture, and uses natural interaction technologies such as digitized "synaesthesia" art and somatosensory sensing to carry out a dialogue with visitors on the theme of fragrance culture, to enable viewers to experience the past and present of fragrance culture from multiple dimensions through an immersive "synaesthesia" of visual sense, auditory sense, virtual gestures, somatosensory interaction and sense of smell, and to explore the philosophy of life through the cross-border integration of science, technology and art.
APA, Harvard, Vancouver, ISO, and other styles
47

Bălan, Oana, Alin Moldoveanu, Florica Moldoveanu, Hunor Nagy, György Wersényi, and Rúnar Unnórsson. "Improving the Audio Game–Playing Performances of People with Visual Impairments through Multimodal Training." Journal of Visual Impairment & Blindness 111, no. 2 (March 2017): 148–64. http://dx.doi.org/10.1177/0145482x1711100206.

Full text
Abstract:
Introduction: As the number of people with visual impairments (that is, those who are blind or have low vision) continues to increase, rehabilitation and engineering researchers have identified the need to design sensory-substitution devices that offer assistance and guidance for navigational tasks. Auditory and haptic cues have been shown to be effective for creating a rich spatial representation of the environment, so they are considered for inclusion in assistive tools that would enable people with visual impairments to acquire knowledge of the surrounding space in a way close to the visually based perception of sighted individuals. However, achieving efficiency with a sensory-substitution device requires extensive training, as visually impaired users must learn to process the artificial auditory cues and convert them into spatial information. Methods: Considering the potential advantages of game-based learning, we propose a new method for training the sound-localization and virtual-navigation skills of visually impaired people in a 3D audio game with hierarchical levels of difficulty. The training procedure follows a multimodal (auditory and haptic) learning approach in which subjects listen to 3D sounds while simultaneously perceiving vibrations on a haptic headband that correspond to the direction of the sound source in space; a sketch of this direction-to-vibration mapping follows below. Results: In a sound-localization experiment with 10 visually impaired people, the proposed training strategy produced significant improvements in the subjects' auditory performance and navigation skills, yielding behavioral gains in the spatial perception of the environment.
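The headband mapping the study relies on, one vibration motor per direction sector, can be sketched as a simple quantization of source azimuth to an actuator index. The actuator count, layout, and function below are illustrative assumptions, not the authors' hardware or code.

```python
# Hypothetical sketch: quantize a 3D sound source's azimuth to the
# nearest motor on a circular haptic headband. 0 deg = straight ahead,
# angles increase clockwise; n_actuators is an assumed parameter.
def azimuth_to_actuator(azimuth_deg: float, n_actuators: int = 8) -> int:
    sector = 360.0 / n_actuators
    # Offset by half a sector so each motor covers an arc centred on it.
    return int(((azimuth_deg % 360.0) + sector / 2.0) // sector) % n_actuators

# A source 95 degrees to the right drives motor 2 (the right-side motor).
print(azimuth_to_actuator(95.0))  # -> 2
```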
APA, Harvard, Vancouver, ISO, and other styles
48

Damayanti, Rully, Bramasta Putra Redyantanu, and Florian Kossak. "A study of multi-sensory senses in museum virtual-visits." IOP Conference Series: Earth and Environmental Science 907, no. 1 (November 1, 2021): 012020. http://dx.doi.org/10.1088/1755-1315/907/1/012020.

Full text
Abstract:
With everyday life forced into virtual settings during the recent pandemic, human spatial perception must also develop through experience and the senses during virtual activities. People can visit new places through virtual media such as pictures, 360° panoramas, films, Google Street View, and virtual tours while remaining physically separated and socially isolated. This also applies to museum visits, where visitors can simply observe. This article presents data from a mixed-methods empirical study examining how virtual visits to three Indonesian museums, Museum Pendidikan Surabaya, Museum Tsunami Aceh, and Museum Bank Indonesia Jakarta, shape visitors' perceptions in establishing a sense of virtual space. The study identified place descriptors related to the multi-sensory systems. The respondents were young people in their twenties with no prior museum experience. The results show that in a virtual spatial experience, the respondents' perceptions are influenced mostly by the sensory systems, which receive diverse information from the media, rather than by social signals, which are frequently cited as the most important factor in perceiving places in real life. Familiarity (recalled memory) is also essential in detecting and identifying the sensory descriptors in this study. In a virtual spatial experience, the sensory systems perform differently: in this study, the visual and auditory systems were the two strongest, while the chemical sensory system was the weakest. Virtual visits, although on a lesser scale than physical visits, can benefit from the multi-sensory system, which is crucial in museums.
APA, Harvard, Vancouver, ISO, and other styles
49

Brugge, John F., Richard A. Reale, and Joseph E. Hind. "Spatial Receptive Fields of Primary Auditory Cortical Neurons in Quiet and in the Presence of Continuous Background Noise." Journal of Neurophysiology 80, no. 5 (November 1, 1998): 2417–32. http://dx.doi.org/10.1152/jn.1998.80.5.2417.

Full text
Abstract:
Spatial receptive fields of primary auditory (AI) neurons were studied by delivering, binaurally, synthesized virtual-space signals via earphones to cats under barbiturate anesthesia. Signals were broadband or narrowband transients presented in quiet anechoic space or in acoustic space filled with uncorrelated continuous broadband noise. In the absence of background noise, AI virtual space receptive fields (VSRFs) are typically large, representing a quadrant or more of acoustic space. Within the receptive field, onset latency and firing strength form functional gradients. We hypothesized earlier that functional gradients in the receptive field provide information about sound-source direction. Previous studies indicated that spatial gradients could remain relatively constant across changes in signal intensity. In the current experiments we tested the hypothesis that directional sensitivity to a transient signal, as reflected in the gradient structure of VSRFs of AI neurons, is also retained in the presence of a continuous background noise. When background noise was introduced, three major effects on VSRFs were observed. 1) The size of the VSRF was reduced, accompanied by a reduction of firing strength and lengthening of response latency for signals at the acoustic axis and on lines of constant azimuth and elevation passing through the acoustic axis. These effects were monotonically related to the intensity of the background noise over a noise intensity range of ∼30 dB. 2) The noise intensity-dependent changes in VSRFs mirrored the changes that occurred when the signal intensity was changed in signal-alone conditions. Thus adding background noise was equivalent to a shift in the threshold of a directional signal, and this shift was seen across the spatial receptive field. 3) The spatial gradients of response strength and latency remained evident over the range of background noise intensities that reduced spike count and lengthened onset latency. Gradients along the azimuth that spanned the frontal midline tended to remain constant in slope and position in the face of increasing intensity of background noise. These findings are consistent with our hypothesis that, under background noise conditions, information underlying directional acuity and accuracy is retained within the spatial receptive fields of an ensemble of AI neurons.
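The "synthesized virtual-space signals" used here are, in essence, transients filtered with direction-specific head-related impulse responses (HRIRs) before binaural earphone delivery. A minimal rendering sketch follows; the toy HRIRs are random placeholders, whereas an actual study would use impulse responses measured for each direction.

```python
# Sketch of binaural virtual-space rendering: convolve a monaural
# transient with a left/right HRIR pair for the desired direction.
import numpy as np

def render_virtual_source(signal, hrir_left, hrir_right):
    left = np.convolve(signal, hrir_left)
    right = np.convolve(signal, hrir_right)
    return np.stack([left, right])   # shape (2, n_samples)

fs = 44_100
click = np.zeros(int(0.005 * fs)); click[0] = 1.0    # 5 ms click transient
decay = np.exp(-np.arange(64) / 16.0)
hrir_l = np.random.randn(64) * decay    # placeholder, not a measured HRIR
hrir_r = np.random.randn(64) * decay    # placeholder, not a measured HRIR
binaural = render_virtual_source(click, hrir_l, hrir_r)
```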
APA, Harvard, Vancouver, ISO, and other styles
50

Yin, Wenjing. "An Artificial Intelligent Virtual Reality Interactive Model for Distance Education." Journal of Mathematics 2022 (February 14, 2022): 1–7. http://dx.doi.org/10.1155/2022/7099963.

Full text
Abstract:
Information and communication technologies play an important role in education, a fact emphasized even more by the significant global upheavals caused by the COVID-19 pandemic and by distance education's reliance on modern technological tools that have proved significantly important. In particular, virtual reality immersion systems, which may include 3D spatial representations, multisensory interaction channels, and real-time intuitive physical interaction, can extend the learning process by providing learners with stimuli that represent a real advanced training environment. In the present work, based on modern technologies of computational intelligence, virtual and augmented reality, wireless communications, and space-sensitive positioning applications, an Artificial Intelligent Virtual Reality (AI-VR) interactive model for distance education is presented. This is a case study of an escape-room-type educational application in which appropriate practices and methodologies promote individual and collective participation, enhance the active role of the learner, personalize the educational experience, and upgrade the process of participation in distance education, strengthening and assisting the role of the educator. Specifically, a Newton polynomial is used for the max-polynomials arising from the stereoscopic problem of augmented reality, indirectly indicating the number of linear regions in order to optimize the problem. The proposed application concerns a training course in agricultural education and methods of crop modernization, including additional sound, 3D graphics, and a short film projection. The history-space-objects relationship takes on and evolves through various forms, such as parts of the story being unlocked as the trainee moves from one tool to the next, or the story guiding the trainee to explore the space-time continuum, giving a new dimension to the technological development of agricultural education and crop modernization methods. The findings show that learners learn best when they receive information in a style that incorporates visual, auditory, and kinetic stimuli, so that learning improves when learners can examine an idea or concept using a multidimensional approach.
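The abstract's use of a Newton polynomial is described only briefly; as a generic illustration of the construction it names, the sketch below builds a Newton-form interpolating polynomial via divided differences. This is not the paper's method, just the standard technique.

```python
# Newton interpolating polynomial via divided differences (generic
# illustration; the paper's specific max-polynomial use is not shown).
import numpy as np

def newton_coefficients(x, y):
    coef = np.array(y, dtype=float)
    n = len(x)
    for j in range(1, n):
        # In-place divided-difference update for order-j differences.
        coef[j:] = (coef[j:] - coef[j - 1:-1]) / (x[j:] - x[:n - j])
    return coef

def newton_eval(coef, x_nodes, t):
    # Horner-style evaluation of the Newton form at t.
    result = coef[-1]
    for c, xk in zip(coef[-2::-1], x_nodes[-2::-1]):
        result = result * (t - xk) + c
    return result

x = np.array([0.0, 1.0, 2.0, 3.0])
y = x**2 + 1.0                      # sample values of f(x) = x^2 + 1
c = newton_coefficients(x, y)
print(newton_eval(c, x, 1.5))       # -> 3.25
```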
APA, Harvard, Vancouver, ISO, and other styles