Journal articles on the topic "Auditory spatial perception"

Below are the top 50 journal articles for research on the topic "Auditory spatial perception".

1

Recanzone, Gregg H. "Auditory Influences on Visual Temporal Rate Perception". Journal of Neurophysiology 89, no. 2 (February 1, 2003): 1078–93. http://dx.doi.org/10.1152/jn.00706.2002.

Abstract:
Visual stimuli are known to influence the perception of auditory stimuli in spatial tasks, giving rise to the ventriloquism effect. These influences can persist in the absence of visual input following a period of exposure to spatially disparate auditory and visual stimuli, a phenomenon termed the ventriloquism aftereffect. It has been speculated that the visual dominance over audition in spatial tasks is due to the superior spatial acuity of vision compared with audition. If that is the case, then the auditory system should dominate visual perception in a manner analogous to the ventriloquism effect and aftereffect if one uses a task in which the auditory system has superior acuity. To test this prediction, the interactions of visual and auditory stimuli were measured in a temporally based task in normal human subjects. The results show that the auditory system has a pronounced influence on visual temporal rate perception. This influence was independent of the spatial location, spectral bandwidth, and intensity of the auditory stimulus. The influence was, however, strongly dependent on the disparity in temporal rate between the two stimulus modalities. Further, aftereffects were observed following approximately 20 min of exposure to temporally disparate auditory and visual stimuli. These results show that the auditory system can strongly influence visual perception and are consistent with the idea that bimodal sensory conflicts are dominated by the sensory system with the greater acuity for the stimulus parameter being discriminated.
2

Haas, Ellen C. "Auditory Perception". Proceedings of the Human Factors Society Annual Meeting 36, no. 3 (October 1992): 247. http://dx.doi.org/10.1518/107118192786751817.

Abstract:
Auditory perception involves the human listener's awareness or apprehension of auditory stimuli in the environment. Auditory stimuli, which include speech communications as well as non-speech signals, occur in the presence and absence of environmental noise. Non-speech auditory signals range from simple pure tones to complex signals found in three-dimensional auditory displays. Special hearing protection device (HPD) designs, as well as additions to conventional protectors, have been developed to improve speech communication and auditory perception capabilities of those exposed to noise. The thoughtful design of auditory stimuli and the proper design, selection, and use of HPDs within the environment can improve human performance and reduce accidents. The purpose of this symposium will be to discuss issues in auditory perception and to describe methods to improve the perception of auditory stimuli in environments with and without noise. The issues of interest include the perception of non-speech auditory signals and the improvement of auditory perception capabilities of persons exposed to noise. The first three papers of this symposium describe the perception of non-speech auditory signals. Ellen Haas defines the extent to which certain signal elements affect the perceived urgency of auditory warning signals. Michael D. Good and Dr. Robert H. Gilkey investigate free-field masking as a function of the spatial separation between signal and masker sounds within the horizontal and median planes. Jeffrey M. Gerth explores the discrimination of complex auditory signal components that differ by sound category, temporal pattern, density, and component manipulation. The fourth paper of this symposium focuses upon the improvement of auditory perception capabilities of persons exposed to hazardous noise, and who must wear hearing protection. Special HPD designs, as well as additions to conventional protectors, have been developed to improve speech communication and auditory perception capabilities of persons exposed to noise. Dr. John G. Casali reviews several new HPD technologies and describes construction features, empirical performance data, and applications of each device. These papers illustrate current research issues in the perception of auditory signals. The issues are all relevant to the human factors engineering of auditory signals and personal protective gear. The perception of auditory stimuli can be improved by the thoughtful human factors design of auditory stimuli and by the proper use of HPDs.
3

Best, Virginia, Jorg M. Buchholz, and Tobias Weller. "Measuring auditory spatial perception in realistic environments". Journal of the Acoustical Society of America 141, no. 5 (May 2017): 3692. http://dx.doi.org/10.1121/1.4988040.

4

Lau, Bonnie K., Tanya St. John, Annette Estes, and Stephen Dager. "Auditory processing in neurodiverse children". Journal of the Acoustical Society of America 155, no. 3_Supplement (March 1, 2024): A75. http://dx.doi.org/10.1121/10.0026855.

Abstract:
Many neurodiverse individuals experience auditory processing differences, including hyper- or hyposensitivity to sound, attractions or aversions to sound, and difficulty listening under noisy conditions. However, the origins of these auditory symptoms are not well understood. In this study, we tested 7-to-10-year-old autistic children and age- and sex-matched neurotypical comparison participants. To simulate a realistic classroom situation where many people are often speaking simultaneously, we obtained neural and behavioral measures of speech perception in both quiet and noise conditions. Using electroencephalography, we recorded neural responses to naturalistic, continuous speech to assess the cortical encoding of the speech envelope. We also obtained behavioral multitalker speech perception thresholds and estimates of spatial release from masking, a binaural hearing phenomenon in which speech perception improves when distracting speakers are spatially separated from the target speaker. Our preliminary results from both neural and behavioral measures suggest that the autistic group shows worse speech perception in noise and less spatial release from masking than the neurotypical group. These findings suggest that autistic children may benefit from environments with reduced noise to facilitate speech perception. They also warrant further investigation into speech perception under real-world conditions and the neural mechanisms underlying sound processing in autistic children.
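
Cortical tracking of the speech envelope, as measured here with EEG, is typically assessed by extracting a low-pass amplitude envelope from the speech signal and relating it to the neural response (e.g., with cross-correlation or a temporal response function). A minimal sketch of the envelope-extraction step, assuming the common Hilbert-transform approach; the cutoff frequency and filter order are illustrative choices, not details taken from the study:

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def speech_envelope(signal, fs, cutoff_hz=8.0):
    """Broadband amplitude envelope, low-passed to the slow
    modulation range typically used in cortical speech-tracking work."""
    env = np.abs(hilbert(signal))            # magnitude of the analytic signal
    b, a = butter(3, cutoff_hz / (fs / 2))   # 3rd-order Butterworth low-pass
    return filtfilt(b, a, env)               # zero-phase filtering
```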
5

Peng, Z. Ellen. "School-age children show poor use of spatial cues in reverberation for speech-in-speech perception". Journal of the Acoustical Society of America 151, no. 4 (April 2022): A169. http://dx.doi.org/10.1121/10.0011001.

Abstract:
Understanding speech is particularly difficult for children when there is competing speech in the background. When the target and masker talkers are spatially separated, as compared to co-located, access to the corresponding auditory spatial cues can provide release from masking, resulting in an intelligibility gain for speech-in-speech perception. Previous work in free-field environments showed that children demonstrate adult-like spatial release from masking (SRM) by 9–10 years of age. However, in indoor environments where most critical communication occurs, such as classrooms, reverberation distorts these critical auditory spatial cues, leading to reduced SRM among adults. Little is known about how children process distorted auditory spatial cues for SRM to aid speech-in-speech perception. In this work, we measure SRM in children in virtual reverberant environments that mimic typical learning spaces. We show that free-field measurements overestimate SRM maturation in realistic indoor environments. Children show a more protracted development of SRM in reverberation, likely due to immaturities in using distorted auditory spatial cues.
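
Spatial release from masking is conventionally quantified as the difference between speech reception thresholds (SRTs) measured with the masker co-located with versus spatially separated from the target:

SRM = SRT_{co-located} - SRT_{separated}  (in dB)

A positive SRM is the intelligibility benefit of separation; the reduced SRM in reverberation described above corresponds to this difference shrinking.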
6

Koohi, Nehzat, Gilbert Thomas-Black, Paola Giunti, and Doris-Eva Bamiou. "Auditory Phenotypic Variability in Friedreich’s Ataxia Patients". Cerebellum 20, no. 4 (February 18, 2021): 497–508. http://dx.doi.org/10.1007/s12311-021-01236-9.

Abstract:
Auditory neural impairment is a key clinical feature of Friedreich’s Ataxia (FRDA). We aimed to characterize the phenotypical spectrum of auditory impairment in FRDA in order to facilitate early identification and timely management of auditory impairment in FRDA patients, and to explore the relationship between the severity of auditory impairment and genetic variables (the expansion size of GAA trinucleotide repeats, GAA1 and GAA2) when controlling for variables such as disease duration, disease severity, and cognitive status. Twenty-seven patients with genetically confirmed FRDA underwent baseline audiological assessment (pure-tone audiometry, otoacoustic emissions, auditory brainstem response). Twenty of these patients had an additional psychophysical auditory processing evaluation, including an auditory temporal processing test (gaps-in-noise test) and a binaural speech perception test that assesses spatial processing (Listening in Spatialized Noise-Sentences Test). Auditory spatial and auditory temporal processing ability were significantly associated with the repeat length of GAA1. Patients with GAA1 greater than 500 repeats had more severe auditory temporal and spatial processing deficits, leading to poorer speech perception. Furthermore, spatial processing ability was strongly correlated with the Montreal Cognitive Assessment (MoCA) score. To our knowledge, this is the first study to demonstrate an association between genotype and auditory spatial processing phenotype in patients with FRDA. Auditory temporal processing, neural sound conduction, spatial processing, and speech perception were more severely affected in patients with GAA1 greater than 500 repeats. The results of our study may indicate that auditory deprivation plays a role in the development of mild cognitive impairment in FRDA patients.
7

Cui, Qi N., Babak Razavi, William E. O'Neill, and Gary D. Paige. "Perception of Auditory, Visual, and Egocentric Spatial Alignment Adapts Differently to Changes in Eye Position". Journal of Neurophysiology 103, no. 2 (February 2010): 1020–35. http://dx.doi.org/10.1152/jn.00500.2009.

Abstract:
Vision and audition represent the outside world in spatial synergy that is crucial for guiding natural activities. Input conveying eye-in-head position is needed to maintain spatial congruence because the eyes move in the head while the ears remain head-fixed. Recently, we reported that the human perception of auditory space shifts with changes in eye position. In this study, we examined whether this phenomenon is 1) dependent on a visual fixation reference, 2) selective for frequency bands (high-pass and low-pass noise) related to specific auditory spatial channels, 3) matched by a shift in the perceived straight-ahead (PSA), and 4) accompanied by a spatial shift for visual and/or bimodal (visual and auditory) targets. Subjects were tested in a dark echo-attenuated chamber with their heads fixed facing a cylindrical screen, behind which a mobile speaker/LED presented targets across the frontal field. Subjects fixated alternating reference spots (0, ±20°) horizontally or vertically while either localizing targets or indicating PSA using a laser pointer. Results showed that the spatial shift induced by ocular eccentricity is 1) preserved for auditory targets without a visual fixation reference, 2) generalized for all frequency bands, and thus all auditory spatial channels, 3) paralleled by a shift in PSA, and 4) restricted to auditory space. Findings are consistent with a set-point control strategy by which eye position governs multimodal spatial alignment. The phenomenon is robust for auditory space and egocentric perception, and highlights the importance of controlling for eye position in the examination of spatial perception and behavior.
8

Strybel, Thomas Z. "Auditory Spatial Information and Head-Coupled Display Systems". Proceedings of the Human Factors Society Annual Meeting 32, no. 2 (October 1988): 75. http://dx.doi.org/10.1177/154193128803200215.

Abstract:
Developments of head-coupled control/display systems have focused primarily on the display of three-dimensional visual information, as the visual system is the optimal sensory channel for the acquisition of spatial information in humans. The auditory system improves the efficiency of vision, however, by obtaining spatial information about relevant objects outside of the visual field of view. This auditory information can be used to direct head and eye movements. Head-coupled display systems can also benefit from the addition of auditory spatial information, as it provides a natural method of signaling the location of important events outside of the visual field of view. This symposium will report on current efforts in the development of head-coupled display systems, with an emphasis on the auditory spatial component. The first paper, “Virtual Interface Environment Workstations” by Scott S. Fisher, will report on the development of a prototype virtual environment. This environment consists of a head-mounted, wide-angle, stereoscopic display system which is controlled by operator position, voice, and gesture. With this interface, an operator can virtually explore a 360 degree synthesized environment and viscerally interact with its components. The second paper, “A Virtual Display System For Conveying Three-Dimensional Acoustic Information” by Elizabeth M. Wenzel, Frederic L. Wightman and Scott H. Foster, will report on the development of a method of synthetically generating three-dimensional sound cues for the above-mentioned interface. The development of simulated auditory spatial cues is limited to some extent by our knowledge of auditory spatial processing. The remaining papers will report on two areas of auditory space perception that have received little attention until recently. “Perception of Real and Simulated Motion in the Auditory Modality”, by Thomas Z. Strybel, will review recent research on auditory motion perception, because a natural acoustic environment must contain moving sounds. This review will consider applications of this knowledge to head-coupled display systems. The last paper, “Auditory Psychomotor Coordination”, will examine the interplay between the auditory, visual and motor systems. The specific emphasis of this paper is the use of auditory spatial information in the regulation of motor responses so as to provide efficient application of the visual channel.
9

Upadhya, Sushmitha, Rohit Bhattacharyya, Ritwik Jargar, and K. Nisha Venkateswaran. "Closed-field Auditory Spatial Perception and Its Relationship to Musical Aptitude". Journal of Indian Speech Language & Hearing Association 37, no. 2 (2023): 61–65. http://dx.doi.org/10.4103/jisha.jisha_20_23.

Abstract:
Introduction: Musical aptitude is the innate ability of an individual to understand, appreciate, improvise, and have a good sense of pitch and rhythm, even without undergoing formal musical training. The present study aimed to understand the effect of musical aptitude on auditory spatial perception. Method: Forty nonmusicians were subjected to a musical aptitude test Mini Profile of Music Perception Skills (Mini-PROMS) based on which they were divided into two groups. Group I included 20 nonmusicians with good musical aptitude (NM-GA) and Group II comprised 20 nonmusicians with poor musical aptitude (NM-PA). Auditory spatial tests included interaural time difference (ITD) and interaural level difference (ILD) threshold tests and a closed-field spatial test called virtual acoustic space identification (VASI) test. Results: Kruskal–Wallis test revealed a significant difference between Group I (NM-GA) and Group II (NM-PA) in ITD (p < 0.001), ILD (p = 0.002), and VASI (p = 0.012) tests, suggesting the role of musical aptitude in auditory spatial perception. Correlational analyses showed a moderate positive correlation between Mini-PROMS scores with VASI (r = 0.31, p = 0.04) and a moderate negative correlation with ILD (r = −0.3, p = 0.04) and ITD (r = −0.5, p = 0.001). Conclusion: This study defines a positive association between musical aptitude and auditory spatial perception. Further research should include a comparison of spatial skills among musicians and nonmusicians with good musical aptitude.
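
The group comparisons and correlations reported here follow a standard nonparametric pipeline; a minimal sketch with invented placeholder thresholds (the numbers are hypothetical, and Spearman's rank correlation is an assumption about the correlational method used):

```python
from scipy import stats

# Hypothetical ITD thresholds (in microseconds) for the two groups
itd_good_aptitude = [45, 52, 38, 60, 41, 55, 48, 44, 50, 39]
itd_poor_aptitude = [70, 85, 66, 92, 74, 81, 77, 69, 88, 73]

# Kruskal-Wallis test for a group difference, as named in the abstract
h_stat, p_group = stats.kruskal(itd_good_aptitude, itd_poor_aptitude)

# Rank correlation between aptitude scores and thresholds
mini_proms_scores = [31, 28, 35, 25, 33, 27, 30, 32, 26, 34]
rho, p_corr = stats.spearmanr(mini_proms_scores, itd_good_aptitude)
print(h_stat, p_group, rho, p_corr)
```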
10

Terrence, Peter I., J. Christopher Brill, and Richard D. Gilson. "Body Orientation and the Perception of Spatial Auditory and Tactile Cues". Proceedings of the Human Factors and Ergonomics Society Annual Meeting 49, no. 17 (September 2005): 1663–67. http://dx.doi.org/10.1177/154193120504901735.

Abstract:
This study investigated the effects of five body orientations (supine, kneeling, sitting, standing, and prone) on the perception of spatial auditory and spatial tactile cues along eight equidistant points (45° separation) of the azimuth, using a within-participant design. Participants (N = 30) used a graphics tablet and stylus to indicate the perceived direction indicated by either vibrotactile stimuli applied to the abdomen or spatial auditory stimuli presented via headphones. Response time data show that responses to spatial tactile cues were significantly faster than to spatial auditory cues at each body position and for each point along the azimuth, with no significant effects of body orientation. Absolute angle differences between presented and perceived cues were significantly smaller in the tactile condition for five of the eight stimulus positions, with no significant effects of body orientation. Results are discussed in terms of designing multi-sensory directional cues for reducing visual search space for dismounted soldiers.
11

Strybel, Thomas Z. "Perception of Real and Simulated Motion in the Auditory Modality". Proceedings of the Human Factors Society Annual Meeting 32, no. 2 (October 1988): 76–80. http://dx.doi.org/10.1177/154193128803200216.

Abstract:
Future head-coupled display systems will include auditory spatial information in order to direct the pilot's attention to critical events in the environment. It is anticipated that such a system will provide dynamic as well as static auditory location information. This report reviews current research in the area of auditory motion perception, particularly as it applies to the development of simulated 3-dimensional auditory space.
12

Li, Rong, Xiangyang Zeng, Haitao Wang, and Na Li. "Study on Auditory Perception Variability Caused by Distinct Structural Parameters of the Enclosure Space". Xibei Gongye Daxue Xuebao/Journal of Northwestern Polytechnical University 36, no. 4 (August 2018): 671–78. http://dx.doi.org/10.1051/jnwpu/20183640671.

Abstract:
Enclosed spaces are widespread in everyday life. However, studies that examine, from the perspective of spatial hearing, the variation in auditory perception caused by changed structural parameters (such as volume, shape, etc.) are very rare, particularly systematic ones. In this paper, based on auditory scene simulation, binaural auralization and sound field measurement techniques are used to obtain a large number of binaural signals in diverse enclosed spaces with different structural parameters. Subsequently, with the aid of acoustic evaluation, a quantitative evaluation of the change in spatial auditory perception and its degree was carried out on the binaural signals. The major findings are as follows: ① Changes in the volume, shape, and absorption coefficient of the indoor walls of an enclosed space can induce variation in the auditory perception of the sound field; ordered by degree of change (from most to least obvious), these are the absorption coefficient of the indoor walls, the shape, and the volume. ② Changing the receiving location can affect the variation in auditory perception caused by the volume and shape of an enclosed space.
13

Lewald, Jörg, Ingo G. Meister, Jürgen Weidemann, and Rudolf Töpper. "Involvement of the Superior Temporal Cortex and the Occipital Cortex in Spatial Hearing: Evidence from Repetitive Transcranial Magnetic Stimulation". Journal of Cognitive Neuroscience 16, no. 5 (June 2004): 828–38. http://dx.doi.org/10.1162/089892904970834.

Abstract:
The processing of auditory spatial information in cortical areas of the human brain outside of the primary auditory cortex remains poorly understood. Here we investigated the role of the superior temporal gyrus (STG) and the occipital cortex (OC) in spatial hearing using repetitive transcranial magnetic stimulation (rTMS). The right STG is known to be of crucial importance for visual spatial awareness, and has been suggested to be involved in auditory spatial perception. We found that rTMS of the right STG induced a systematic error in the perception of interaural time differences (a primary cue for sound localization in the azimuthal plane). This is in accordance with the recent view, based on both neurophysiological data obtained in monkeys and human neuroimaging studies, that information on sound location is processed within a dorsolateral “where” stream including the caudal STG. A similar, but opposite, auditory shift was obtained after rTMS of secondary visual areas of the right OC. Processing of auditory information in the OC has previously been shown to exist only in blind persons. Thus, the latter finding provides the first evidence of an involvement of the visual cortex in spatial hearing in sighted human subjects, and suggests a close interconnection of the neural representation of auditory and visual space. Because rTMS induced systematic shifts in auditory lateralization, but not a general deterioration, we propose that rTMS of STG or OC specifically affected neuronal circuits transforming auditory spatial coordinates in order to maintain alignment with vision.
14

Jamal, Yaseen, Simon Lacey, Lynne Nygaard, and K. Sathian. "Interactions Between Auditory Elevation, Auditory Pitch and Visual Elevation During Multisensory Perception". Multisensory Research 30, no. 3-5 (2017): 287–306. http://dx.doi.org/10.1163/22134808-00002553.

Abstract:
Cross-modal correspondences refer to associations between apparently unrelated stimulus features in different senses. For example, high and low auditory pitches are associated with high and low visual elevations, respectively. Here we examined how this crossmodal correspondence between visual elevation and auditory pitch relates to auditory elevation. We used audiovisual combinations of high- or low-frequency bursts of white noise and a visual stimulus comprising a white circle. Auditory and visual stimuli could each occur at high or low elevations. These multisensory stimuli could be congruent or incongruent for three correspondence types: cross-modal featural (auditory pitch/visual elevation), within-modal featural (auditory pitch/auditory elevation) and cross-modal spatial (auditory and visual elevation). Participants performed a 2AFC speeded classification (high or low) task while attending to auditory pitch, auditory elevation, or visual elevation. We tested for modulatory interactions between the three correspondence types. Modulatory interactions were absent when discriminating visual elevation. However, the within-modal featural correspondence affected the cross-modal featural correspondence during discrimination of auditory elevation and pitch, while the reverse modulation was observed only during discrimination of auditory pitch. The cross-modal spatial correspondence modulated the other two correspondences only when auditory elevation was being attended, was modulated by the cross-modal featural correspondence only during attention to auditory pitch, and was modulated by the within-modal featural correspondence while performing discrimination of either auditory elevation or pitch. We conclude that the cross-modal correspondence between auditory pitch and visual elevation interacts strongly with auditory elevation.
15

Razavi, B., W. E. O'Neill, and G. D. Paige. "Auditory Spatial Perception Dynamically Realigns with Changing Eye Position". Journal of Neuroscience 27, no. 38 (September 19, 2007): 10249–58. http://dx.doi.org/10.1523/jneurosci.0938-07.2007.

16

Wüstenberg, T., G. Fesl, T. Zähle, M. Palitzsch, M. Perckhofer, H. J. Heinze, T. A. Yousry, L. Jäncke, and N. v. Steinbüchel. "Auditory perception of temporal-spatial order —a fMRI-study". NeuroImage 13, no. 6 (June 2001): 961. http://dx.doi.org/10.1016/s1053-8119(01)92301-8.

17

Woods, Timothy M., and Gregg H. Recanzone. "Visually Induced Plasticity of Auditory Spatial Perception in Macaques". Current Biology 14, no. 17 (September 2004): 1559–64. http://dx.doi.org/10.1016/j.cub.2004.08.059.

18

Dogan, Asli Zeynep, and Arzu Gonenc Sorguc. "Sound perception in virtual environments". INTER-NOISE and NOISE-CON Congress and Conference Proceedings 268, no. 6 (November 30, 2023): 2660–71. http://dx.doi.org/10.3397/in_2023_0388.

Abstract:
Virtual environments have been developing and changing how a space is understood in terms of design, perception, and usage. This study aims to contribute to the literature on improving the auditory perception and cognition of virtual spaces used for education, training, and gaming purposes. It proposes a realistic representation of soundscapes in virtual environments according to spatial qualities, instead of misleading synthetic sounds, by integrating acoustical simulations with the immersive environment and questioning the experience of a regular user. The study explores the effects of acoustically simulated and immersive virtual soundscape design methods on auditory perception, across changing forms and materials, through a series of cognitive experiments. The results revealed that participants achieve more accurate source localization, self-localization, and distance estimation in an immersive environment than in the simulated environment. They were also more aware of the soundwalk route, spent more time on tasks, and evaluated the experience more positively in the immersive environment compared to the simulations. Despite placing auralizations from the simulations as sound sources in immersive environments, there is still a lack of auditory representation of spatial qualities compared to the accurate calculation of acoustical parameters in a simulated environment.
19

Stecker, G. Christopher. "RESTARTing the stabilized auditory image: How transient sampling of binaural information supports the emergence of stable auditory scenes". Journal of the Acoustical Society of America 154, no. 4_supplement (October 1, 2023): A265. http://dx.doi.org/10.1121/10.0023479.

Abstract:
The perception of a spatial auditory scene involves the extraction and integration of multiple dynamic and unreliable sensory features (“cues”). Variation in each cue reflects the competing effects of multiple features of the auditory scene—e.g., relevant and irrelevant sound sources. Understanding which cue changes belong together—and to which objects—represents a fundamental challenge to the neural mechanisms of auditory scene perception. RESTART theory suggests a solution: transient, envelope-triggered sampling creates a temporally sparse representation of spatial features. Sparsity minimizes overlap between auditory objects and reduces ambiguity in the face of environmental distortions such as noise and reverberation. Envelope-triggered (“strobed”) sampling stabilizes the auditory image by emphasizing the most reliable cues. Here, we review the key evidence supporting RESTART theory, its neurocomputational mechanisms, and prior efforts to model them. [Work supported by US NIH R01-DC016643.]
20

Phillips, Dennis P. "Auditory Gap Detection, Perceptual Channels, and Temporal Resolution in Speech Perception". Journal of the American Academy of Audiology 10, no. 06 (June 1999): 343–54. http://dx.doi.org/10.1055/s-0042-1748505.

Abstract:
This article overviews some recent advances in our understanding of temporal processes in auditory perception. It begins with the premise that hearing is the online perceptual elaboration of acoustic events distributed in time. It examines studies of gap detection for two reasons: first, to probe the temporal acuity of auditory perception in its own right and, second, to show how studies of gap detection have provided new insights into the processes involved in speech perception and into the architecture of auditory spatial perceptual mechanisms. The implications of these new data for our comprehension of some central auditory processing disorders are examined. Abbreviations: SLI = specific language impairment, VOT = voice onset time
21

Razvaliaeva, A. Y., and V. N. Nosulenko. "Spatial Localization of Digital Sound in Scientific Experiment and Practice". Experimental Psychology (Russia) 16, no. 2 (August 15, 2023): 20–35. http://dx.doi.org/10.17759/exppsy.2023160202.

Abstract:
Localization of sound in space is an important component of auditory perception, which is involved in the selection of various sound streams, the perception of speech in noise, and the organization of auditory images. Research over the past century has shown that sound localization is achieved through: differences in the intensity and time delay of sound waves arriving at different ears; spectral distortions arising from the anatomical features of the structure of the auricles, head, and torso; dynamic cues (listener head movements), etc. However, some scientific and methodological issues (primarily related to the perception of natural sounds and the ecological validity of studies) have not been resolved. The development of digital audio techniques also leads to the emergence of new areas of research, including the processing of sound for the transmission of spatial information in headphones (which is solved using the head-related transfer function, HRTF) and the creation of auditory interfaces. The tasks facing researchers in these areas are to improve the perception of spatial information (by manipulating the characteristics of the sound, prompts, or training) and to create sound events that can be perceived as object-related, i.e., inextricably linked with the purpose of the operator's activity. The methodology of the perceived quality of events, which makes it possible to distinguish which properties of the auditory image become the most important in human activity and which physical properties of the event they correspond to, can help in solving these tasks and increasing the ecological validity of research.
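
For the interaural time differences mentioned above, a classic spherical-head approximation is Woodworth's formula; a minimal sketch (the head radius and example azimuth are illustrative values):

```python
import numpy as np

def itd_woodworth(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Woodworth's spherical-head approximation of the interaural
    time difference: ITD = (r / c) * (theta + sin(theta))."""
    theta = np.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + np.sin(theta))

# A source at 90 degrees azimuth gives roughly 0.66 ms for an average head
print(itd_woodworth(90.0) * 1000.0, "ms")
```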
22

Yau, Jeffrey M., Pablo Celnik, Steven S. Hsiao, and John E. Desmond. "Dissociable crossmodal recruitment of visual and auditory cortex for tactile perception". Seeing and Perceiving 25 (2012): 7. http://dx.doi.org/10.1163/187847612x646307.

Abstract:
Primary sensory areas previously thought to be devoted to a single modality can exhibit multisensory responses. Some have interpreted these responses as evidence for crossmodal recruitment (i.e., primary sensory processing for inputs in a non-primary modality); however, the direct contribution of this activity to perception is unclear. We tested the specific contributions of visual and auditory cortex to tactile perception in healthy adult volunteers using anodal transcranial direct current stimulation (tDCS). This form of non-invasive neuromodulation can enhance neural excitability and facilitate learning. In a series of psychophysical experiments we characterized participants’ ability to discriminate grating orientation or vibration frequency. We measured perceptual sensitivity before, during, and after tDCS application over either visual cortex or auditory cortex. Each participant received both anodal and sham interventions on separate sessions in counterbalanced order. We found that anodal stimulation over visual cortex selectively improved tactile spatial acuity, but not frequency sensitivity. Conversely, anodal stimulation over auditory cortex selectively improved tactile frequency sensitivity, but not spatial acuity. Furthermore, we found that improvements in tactile perception persisted after cessation of tDCS. These results reveal a clear double-dissociation in the crossmodal contributions of visual and auditory cortex to tactile perception, and support a supramodal brain organization scheme in which visual and auditory cortex comprise distributed networks that support shape and frequency perception, independent of sensory input modality.
23

Weeldreyer, Gabriel S., Lily M. Wang, and Z. Ellen Peng. "Perceptual consequences of reverberant environments on spatial unmasking". Journal of the Acoustical Society of America 155, no. 3_Supplement (March 1, 2024): A210. http://dx.doi.org/10.1121/10.0027333.

Abstract:
Spatial hearing provides access to auditory spatial cues that promote speech perception in noisy listening situations. However, reverberation degrades auditory spatial cues and limits listeners’ ability to utilize these cues for segregating target speech from competing babble. Hence, spatial unmasking—an intelligibility benefit from a spatial separation between a target and masker—is reduced in reverberant environments as compared to free field. To understand the perceptual consequences of poorer spatial unmasking in reverberation, we assessed three aspects of functional spatial hearing in virtual reverberant environments: perceived auditory source width, auditory spatial acuity, and spatial unmasking. Three auditory environments were simulated and auralized using ODEON to vary interaural coherence (IC): (1) a control anechoic environment, (2) a classroom designed to meet classroom acoustics standards (IC = 0.58), and (3) a classroom of the same size with more severe reverberation (IC = 0.37). Individually measured head-related transfer functions were used to binaurally reproduce the auralized signals over headphones. We hypothesize that interaural decorrelation, the result of increasing reverberation, will broaden the perceived auditory source width with a cascading effect of reduced auditory spatial acuity and subsequently poorer spatial unmasking. Preliminary data from normal-hearing adults will be presented. [Work supported by NIH R21DC020532.]
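
Interaural coherence (IC), the room statistic quoted above, is commonly computed as the maximum of the normalized cross-correlation between the two ear signals within the physiological lag range of roughly ±1 ms. A minimal sketch under that definition (not the authors' implementation):

```python
import numpy as np

def interaural_coherence(left, right, fs, max_lag_ms=1.0):
    """Maximum normalized cross-correlation of two equal-length ear
    signals within +/- max_lag_ms (about the physiological ITD range)."""
    left = left - np.mean(left)
    right = right - np.mean(right)
    max_lag = int(fs * max_lag_ms / 1000.0)
    xcorr = np.correlate(left, right, mode="full")
    center = len(left) - 1                          # index of zero lag
    window = xcorr[center - max_lag:center + max_lag + 1]
    norm = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))
    return np.max(window) / norm
```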
24

Xie, Bosun, and Guangzheng Yu. "Psychoacoustic Principle, Methods, and Problems with Perceived Distance Control in Spatial Audio". Applied Sciences 11, no. 23 (November 26, 2021): 11242. http://dx.doi.org/10.3390/app112311242.

Abstract:
One purpose of spatial audio is to create perceived virtual sources at various spatial positions in terms of direction and distance with respect to the listener. The psychoacoustic principle of spatial auditory perception is essential for creating perceived virtual sources. Currently, the technical means for recreating virtual sources in different directions of various spatial audio techniques are relatively mature. However, perceived distance control in spatial audio remains a challenging task. This article reviews the psychoacoustic principle, methods, and problems with perceived distance control and compares them with the principles and methods of directional localization control in spatial audio, showing that the validation of various methods for perceived distance control depends on the principle and method used for spatial audio. To improve perceived distance control, further research on the detailed psychoacoustic mechanisms of auditory distance perception is required.
25

Li, Yueying, Zimo Li, Aihui Deng, Hewu Zheng, Jianxin Chen, Yanna Ren, and Weiping Yang. "The Modulation of Exogenous Attention on Emotional Audiovisual Integration". i-Perception 12, no. 3 (May 2021): 204166952110187. http://dx.doi.org/10.1177/20416695211018714.

Abstract:
Although emotional audiovisual integration has been investigated previously, whether emotional audiovisual integration is affected by the spatial allocation of visual attention is currently unknown. To examine this question, a variant of the exogenous spatial cueing paradigm was adopted, in which stimuli varying by facial expressions and nonverbal affective prosody were used to express six basic emotions (happiness, anger, disgust, sadness, fear, surprise) via a visual, an auditory, or an audiovisual modality. The emotional stimuli were preceded by an unpredictive cue that was used to attract participants’ visual attention. The results showed significantly higher accuracy and quicker response times in response to bimodal audiovisual stimuli than to unimodal visual or auditory stimuli for emotional perception under both valid and invalid cue conditions. The auditory facilitation effect was stronger than the visual facilitation effect under exogenous attention for the six emotions tested. Larger auditory enhancement was induced when the target was presented at the expected location than at the unexpected location. For emotional perception, happiness showed the largest auditory enhancement among all six emotions. However, the influence of the exogenous cueing effect on emotional perception seemed to be absent.
26

Lakatos, Stephen. "Recognition of Complex Auditory-Spatial Patterns". Perception 22, no. 3 (March 1993): 363–74. http://dx.doi.org/10.1068/p220363.

Abstract:
Two experiments were carried out to investigate the perception of complex auditory-spatial patterns. Subjects were asked to identify alphanumeric characters whose patterns could be outlined acoustically through the sequential activation of specific units in a speaker array. Signal bandwidths were varied systematically in both experiments. Signals in experiment 1 had sharp onsets and offsets; envelope shapes in experiment 2 were much more gradual. Subjects showed considerable ability in recognizing alphanumeric patterns traced with signals of varying acoustical composition. Reductions in the steepness of signal attack and decay produced limited declines in pattern recognition ability. Systematic trends in the relation between patterns and the distribution of incorrect responses suggest that subjects performed a pattern-matching task, in which identifications were made on the basis of component features. The unexpected pattern recognition abilities that subjects demonstrated in both experiments suggest that spatial hearing, like vision, has access to mechanisms for amodal spatial representations.
27

Hunter, M., H. Linnington, J. Mitchell, Y. Thakur, N. Mir, T. Griffiths, and P. Woodruff. "351 – Evidence for impaired auditory spatial perception in people with schizophrenia and auditory hallucinations". Schizophrenia Research 98 (February 2008): 178. http://dx.doi.org/10.1016/j.schres.2007.12.418.

28

Tonelli, Alessia, Luca Brayda, and Monica Gori. "Influence of vision on auditory spatial perception in sighted people". Journal of Vision 15, no. 12 (September 1, 2015): 857. http://dx.doi.org/10.1167/15.12.857.

29

McBride, Maranda, Phuong Tran, Kimberly A. Pollard, Tomasz Letowski, and Garnett P. McMillan. "Effects of Bone Vibrator Position on Auditory Spatial Perception Tasks". Human Factors: The Journal of the Human Factors and Ergonomics Society 57, no. 8 (July 29, 2015): 1443–58. http://dx.doi.org/10.1177/0018720815596272.

30

Pourfaraman, Mojtaba, and Mahboobe Taher. "The Effectiveness of Visual Skill-based Computer Games on Visual-auditory-spatial Perception and Reading Tracking Speed of Students With Special Learning Disabilities". Journal of Learning Disabilities 10, no. 2 (January 1, 2022): 200–211. http://dx.doi.org/10.32598/jld.10.2.1.

Abstract:
Objective: The aim of this study was to determine the effect of computer games based on visual skills on the visual-auditory-spatial perception and reading tracking speed of students with special learning disabilities. Methods: The research method was quasi-experimental, with a pretest-posttest design and a control group. The statistical population comprised students with learning disabilities in Joghatai city in the 2017–2018 academic year, 30 of whom were selected by purposive sampling and randomly assigned to experimental and control groups. Both groups completed the Frostig Visual Perception Scale, the Hopman Audit Diagnosis, and the Abedi Reading Level Recognition Scale. The experimental group received the computer-game intervention over 8 sessions of 45 minutes. Results: Analysis of covariance showed that the means of visual perception (P<0.01), auditory diagnosis (P<0.05), and reading level detection (P<0.05) increased in the experimental group. Conclusion: According to the obtained results, it can be said that computer games based on visual skills improved visual-auditory-spatial perception and reading speed by increasing students' attention and accuracy.
31

Vicario, Carmelo Mario, Gaetano Rappo, Anna Maria Pepi, and Massimiliano Oliveri. "Timing Flickers across Sensory Modalities". Perception 38, no. 8 (January 1, 2009): 1144–51. http://dx.doi.org/10.1068/p6362.

Abstract:
In tasks requiring a comparison of the duration of a reference and a test visual cue, the spatial position of the test cue is likely to be implicitly coded, producing a form of congruency effect or introducing a response bias according to the environmental scale or its vectorial reference. The precise mechanism generating these perceptual shifts in subjective duration is not understood, although several studies suggest that spatial attentional factors may play a critical role. Here we use a duration comparison task within and across sensory modalities to examine whether temporal performance is also modulated when people are exposed to spatial distractors involving different sensory modalities. Different groups of healthy participants performed duration comparison tasks in separate sessions: a time comparison task of visual stimuli during exposure to spatially presented auditory distractors, and a time comparison task of auditory stimuli during exposure to spatially presented visual distractors. We found the duration of visual stimuli was biased depending on the spatial position of the auditory distractors. Observers underestimated the duration of stimuli presented in the left spatial field, while there was a trend toward overestimating the duration of stimuli presented in the right spatial field. In contrast, the timing of auditory stimuli was unaffected by exposure to visual distractors. These results support the existence of multisensory interactions between space and time, showing that, in cross-modal paradigms, the presence of auditory distractors can modify visuo-temporal perception but not vice versa. This asymmetry is discussed in terms of sensory–perceptual differences between the two systems.
32

Shayman, Corey S., Robert J. Peterka, Frederick J. Gallun, Yonghee Oh, Nai-Yuan N. Chang, and Timothy E. Hullar. "Frequency-dependent integration of auditory and vestibular cues for self-motion perception". Journal of Neurophysiology 123, no. 3 (March 1, 2020): 936–44. http://dx.doi.org/10.1152/jn.00307.2019.

Abstract:
Recent evidence has shown that auditory information may be used to improve postural stability, spatial orientation, navigation, and gait, suggesting an auditory component of self-motion perception. To determine how auditory and other sensory cues integrate for self-motion perception, we measured motion perception during yaw rotations of the body and the auditory environment. Psychophysical thresholds in humans were measured over a range of frequencies (0.1–1.0 Hz) during self-rotation without spatial auditory stimuli, rotation of a sound source around a stationary listener, and self-rotation in the presence of an earth-fixed sound source. Unisensory perceptual thresholds and the combined multisensory thresholds were found to be frequency dependent. Auditory thresholds were better at lower frequencies, and vestibular thresholds were better at higher frequencies. Expressed in terms of peak angular velocity, multisensory vestibular and auditory thresholds ranged from 0.39°/s at 0.1 Hz to 0.95°/s at 1.0 Hz and were significantly better over low frequencies than either the auditory-only (0.54°/s to 2.42°/s at 0.1 and 1.0 Hz, respectively) or vestibular-only (2.00°/s to 0.75°/s at 0.1 and 1.0 Hz, respectively) unisensory conditions. Monaurally presented auditory cues were less effective than binaural cues in lowering multisensory thresholds. Frequency-independent thresholds were derived, assuming that vestibular thresholds depended on a weighted combination of velocity and acceleration cues, whereas auditory thresholds depended on displacement and velocity cues. These results elucidate fundamental mechanisms for the contribution of audition to balance and help explain previous findings, indicating its significance in tasks requiring self-orientation. NEW & NOTEWORTHY Auditory information can be integrated with visual, proprioceptive, and vestibular signals to improve balance, orientation, and gait, but this process is poorly understood. Here, we show that auditory cues significantly improve sensitivity to self-motion perception below 0.5 Hz, whereas vestibular cues contribute more at higher frequencies. Motion thresholds are determined by a weighted combination of displacement, velocity, and acceleration information. These findings may help understand and treat imbalance, particularly in people with sensory deficits.
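
The finding that bimodal thresholds beat both unisensory thresholds is the signature of near-optimal cue combination. Under the standard maximum-likelihood integration model (a textbook formulation, not necessarily the exact frequency-dependent model fitted by the authors), the variance of the combined auditory-vestibular estimate is

\sigma_{AV}^2 = \frac{\sigma_A^2 \, \sigma_V^2}{\sigma_A^2 + \sigma_V^2} \le \min(\sigma_A^2, \sigma_V^2)

with each cue weighted by its relative reliability, w_A = \sigma_V^2 / (\sigma_A^2 + \sigma_V^2). Since thresholds scale with the standard deviation of the internal estimate, the combined threshold is predicted to lie below either unisensory threshold, as observed here at low frequencies.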
33

King, Andrew J. "Visual influences on auditory spatial learning". Philosophical Transactions of the Royal Society B: Biological Sciences 364, no. 1515 (November 4, 2008): 331–39. http://dx.doi.org/10.1098/rstb.2008.0230.

Abstract:
The visual and auditory systems frequently work together to facilitate the identification and localization of objects and events in the external world. Experience plays a critical role in establishing and maintaining congruent visual–auditory associations, so that the different sensory cues associated with targets that can be both seen and heard are synthesized appropriately. For stimulus location, visual information is normally more accurate and reliable and provides a reference for calibrating the perception of auditory space. During development, vision plays a key role in aligning neural representations of space in the brain, as revealed by the dramatic changes produced in auditory responses when visual inputs are altered, and is used throughout life to resolve short-term spatial conflicts between these modalities. However, accurate, and even supra-normal, auditory localization abilities can be achieved in the absence of vision, and the capacity of the mature brain to relearn to localize sound in the presence of substantially altered auditory spatial cues does not require visuomotor feedback. Thus, while vision is normally used to coordinate information across the senses, the neural circuits responsible for spatial hearing can be recalibrated in a vision-independent fashion. Nevertheless, early multisensory experience appears to be crucial for the emergence of an ability to match signals from different sensory modalities and therefore for the outcome of audiovisual-based rehabilitation of deaf patients in whom hearing has been restored by cochlear implantation.
34

Bertonati, Giorgia, Maria Bianca Amadeo, Claudio Campus, and Monica Gori. "Auditory speed processing in sighted and blind individuals". PLOS ONE 16, no. 9 (September 22, 2021): e0257676. http://dx.doi.org/10.1371/journal.pone.0257676.

Abstract:
Multisensory experience is crucial for developing a coherent perception of the world. In this context, vision and audition are essential tools to scaffold spatial and temporal representations, respectively. Since speed encompasses both space and time, investigating this dimension in blindness allows deepening the relationship between sensory modalities and the two representation domains. In the present study, we hypothesized that visual deprivation influences the use of the spatial and temporal cues underlying acoustic speed perception. To this end, ten early blind and ten blindfolded sighted participants performed a speed discrimination task in which spatial, temporal, or both cues were available to infer moving sounds’ velocity. The results indicated that both sighted and early blind participants preferentially relied on temporal cues to determine stimulus speed, following an assumption that identified sounds of shorter duration as faster. However, in some cases, this temporal assumption produces a misperception of stimulus speed that negatively affected participants’ performance. Interestingly, early blind participants were more influenced by this misleading temporal assumption than sighted controls, resulting in a stronger impairment in speed discrimination performance. These findings demonstrate that the absence of visual experience in early life increases the auditory system’s preference for the time domain and, consequently, affects the perception of speed through audition.
35

Senna, Irene, Cesare V. Parise, and Marc O. Ernst. "Modulation frequency as a cue for auditory speed perception". Proceedings of the Royal Society B: Biological Sciences 284, no. 1858 (July 12, 2017): 20170673. http://dx.doi.org/10.1098/rspb.2017.0673.

Abstract:
Unlike vision, the mechanisms underlying auditory motion perception are poorly understood. Here we describe an auditory motion illusion revealing a novel cue to auditory speed perception: the temporal frequency of amplitude modulation (AM-frequency), typical for rattling sounds. Naturally, corrugated objects sliding across each other generate rattling sounds whose AM-frequency tends to directly correlate with speed. We found that AM-frequency modulates auditory speed perception in a highly systematic fashion: moving sounds with higher AM-frequency are perceived as moving faster than sounds with lower AM-frequency. Even more interestingly, sounds with higher AM-frequency also induce stronger motion aftereffects. This reveals the existence of specialized neural mechanisms for auditory motion perception, which are sensitive to AM-frequency. Thus, in spatial hearing, the brain successfully capitalizes on the AM-frequency of rattling sounds to estimate the speed of moving objects. This tightly parallels previous findings in motion vision, where spatio-temporal frequency of moving displays systematically affects both speed perception and the magnitude of the motion aftereffects. Such an analogy with vision suggests that motion detection may rely on canonical computations, with similar neural mechanisms shared across the different modalities.
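
The rattling sounds described can be approximated by amplitude-modulating a noise carrier, with the modulation rate standing in for speed. A minimal stimulus-generation sketch (the sampling rate, modulation depth, and sinusoidal envelope are illustrative assumptions, not the study's exact stimuli):

```python
import numpy as np

def am_noise(duration_s, am_freq_hz, fs=44100, depth=1.0):
    """White-noise carrier with sinusoidal amplitude modulation;
    a higher am_freq_hz mimics a faster-moving rattling object."""
    t = np.arange(int(duration_s * fs)) / fs
    carrier = np.random.randn(t.size)
    envelope = 1.0 + depth * np.sin(2.0 * np.pi * am_freq_hz * t)
    signal = envelope * carrier
    return signal / np.max(np.abs(signal))   # normalize to +/- 1
```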
36

Hong, Fangfang, Stephanie Badde, and Michael S. Landy. "Causal inference regulates audiovisual spatial recalibration via its influence on audiovisual perception". PLOS Computational Biology 17, no. 11 (November 15, 2021): e1008877. http://dx.doi.org/10.1371/journal.pcbi.1008877.

Abstract:
To obtain a coherent perception of the world, our senses need to be in alignment. When we encounter misaligned cues from two sensory modalities, the brain must infer which cue is faulty and recalibrate the corresponding sense. We examined whether and how the brain uses cue reliability to identify the miscalibrated sense by measuring the audiovisual ventriloquism aftereffect for stimuli of varying visual reliability. To adjust for modality-specific biases, visual stimulus locations were chosen based on perceived alignment with auditory stimulus locations for each participant. During an audiovisual recalibration phase, participants were presented with bimodal stimuli with a fixed perceptual spatial discrepancy; they localized one modality, cued after stimulus presentation. Unimodal auditory and visual localization was measured before and after the audiovisual recalibration phase. We compared participants’ behavior to the predictions of three models of recalibration: (a) Reliability-based: each modality is recalibrated based on its relative reliability—less reliable cues are recalibrated more; (b) Fixed-ratio: the degree of recalibration for each modality is fixed; (c) Causal-inference: recalibration is directly determined by the discrepancy between a cue and its estimate, which in turn depends on the reliability of both cues, and inference about how likely the two cues derive from a common source. Vision was hardly recalibrated by audition. Auditory recalibration by vision changed idiosyncratically as visual reliability decreased: the extent of auditory recalibration either decreased monotonically, peaked at medium visual reliability, or increased monotonically. The latter two patterns cannot be explained by either the reliability-based or fixed-ratio models. Only the causal-inference model of recalibration captures the idiosyncratic influences of cue reliability on recalibration. We conclude that cue reliability, causal inference, and modality-specific biases guide cross-modal recalibration indirectly by determining the perception of audiovisual stimuli.
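
Model (a) above, reliability-based recalibration, has a simple precision-weighted form: each sense shifts toward the other in proportion to the other cue's relative reliability, so the less reliable cue is recalibrated more. A minimal sketch of that model class (an illustration, not the authors' fitted implementation):

```python
def reliability_based_recalibration(discrepancy, sigma_a, sigma_v):
    """Split a fixed audiovisual discrepancy between the senses in
    proportion to the other cue's relative reliability (precision)."""
    w_a = (1.0 / sigma_a ** 2) / (1.0 / sigma_a ** 2 + 1.0 / sigma_v ** 2)
    w_v = 1.0 - w_a
    shift_audition = w_v * discrepancy    # audition moves toward vision
    shift_vision = -w_a * discrepancy     # vision moves toward audition
    return shift_audition, shift_vision

# With reliable vision (small sigma_v), audition absorbs most of the shift
print(reliability_based_recalibration(10.0, sigma_a=8.0, sigma_v=2.0))
```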
37

Jarollahi, Farnoush, Marzieh Amiri, Shohreh Jalaie, and Seyyed Jalal Sameni. "The effects of auditory spatial training on informational masking release in elderly listeners: a study protocol for a randomized clinical trial". F1000Research 8 (April 9, 2019): 420. http://dx.doi.org/10.12688/f1000research.18602.1.

Abstract:
Background: Given the strong capacity for auditory spatial plasticity in the central auditory system and the effects of short-term and long-term rehabilitation programs in elderly people, it seems that auditory spatial training can help this population achieve informational masking release and better track speech in noisy environments. The main purposes of this study are to develop an informational masking measurement test and an auditory spatial training program. Protocol: This study will be conducted in two parts. Part 1: develop and determine the validity of an informational masking measurement test by recruiting two groups of young (n=50) and old (n=50) participants with normal hearing who have no difficulty in understanding speech in noisy environments. Part 2 (clinical trial): two groups of 60-75-year-olds with normal hearing, who complain about difficulty in speech perception in noisy environments, will participate as control and intervention groups to examine the effect of auditory spatial training. Intervention: 8 sessions of auditory spatial training. The informational masking measurement test and the Speech, Spatial and Qualities of Hearing Scale will be compared before intervention, immediately after intervention, and one month after intervention between the two groups. Discussion: Since auditory training programs do not deal with informational masking release, an auditory spatial training program will be designed, aiming to improve hearing in noisy environments for elderly populations. Trial registration: Iranian Registry of Clinical Trials (IRCT20190118042404N1) on 25th February 2019.
38

Jarollahi, Farnoush, Marzieh Amiri, Shohreh Jalaie and Seyyed Jalal Sameni. "The effects of auditory spatial training on informational masking release in elderly listeners: a study protocol for a randomized clinical trial". F1000Research 8 (4 July 2019): 420. http://dx.doi.org/10.12688/f1000research.18602.2.

Abstract:
Background: Given the central auditory system's strong capacity for auditory spatial plasticity and the demonstrated effects of short-term and long-term rehabilitation programs in elderly people, auditory spatial training may help this population achieve informational masking release and better track speech in noisy environments. The main purposes of this study are to develop an informational masking measurement test and an auditory spatial training program. Protocol: This study will be conducted in two parts. Part 1: develop and determine the validity of an informational masking measurement test by recruiting two groups of young (n=50) and old (n=50) participants with normal hearing who have no difficulty understanding speech in noisy environments. Part 2 (clinical trial): two groups of 60-75-year-olds with normal hearing, who complain of difficulty in speech perception in noisy environments, will participate as control and intervention groups to examine the effect of auditory spatial training. Intervention: 15 sessions of auditory spatial training. The informational masking measurement test and the Speech, Spatial and Qualities of Hearing Scale will be compared between the two groups before the intervention, immediately after the intervention, and five weeks after the intervention. Discussion: Since existing auditory training programs do not address informational masking release, an auditory spatial training program will be designed, aiming to improve hearing in noisy environments for elderly populations. Trial registration: Iranian Registry of Clinical Trials (IRCT20190118042404N1), registered on 25 February 2019.
39

Hairston, W. D., M. T. Wallace, J. W. Vaughan, B. E. Stein, J. L. Norris and J. A. Schirillo. "Visual Localization Ability Influences Cross-Modal Bias". Journal of Cognitive Neuroscience 15, no. 1 (1 January 2003): 20–29. http://dx.doi.org/10.1162/089892903321107792.

Abstract:
The ability of a visual signal to influence the localization of an auditory target (i.e., “cross-modal bias”) was examined as a function of the spatial disparity between the two stimuli and their absolute locations in space. Three experimental issues were examined: (a) the effect of a spatially disparate visual stimulus on auditory localization judgments; (b) how the ability to localize visual, auditory, and spatially aligned multisensory (visual-auditory) targets is related to cross-modal bias; and (c) the relationship between the magnitude of cross-modal bias and the perception that the two stimuli are spatially “unified” (i.e., originate from the same location). Whereas variability in localization of auditory targets was large and fairly uniform for all tested locations, variability in localizing visual or spatially aligned multisensory targets was much smaller, and increased with increasing distance from the midline. This trend proved to be strongly correlated with biasing effectiveness: although visual-auditory bias was unexpectedly large in all conditions tested, it decreased progressively (as localization variability increased) with increasing distance from the midline. Thus, central visual stimuli had a substantially greater biasing effect on auditory target localization than did more peripheral visual stimuli. It was also apparent that cross-modal bias decreased as the degree of visual-auditory disparity increased. Consequently, the greatest visual-auditory biases were obtained with small disparities at central locations. In all cases, the magnitude of these biases covaried with judgments of spatial unity. The results suggest that functional properties of the visual system play the predominant role in determining these visual-auditory interactions and that cross-modal biases can be substantially greater than previously noted.
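The reported link between localization variability and bias is what standard reliability-weighted (maximum-likelihood) cue combination predicts. As a hedged illustration (the textbook model, not the authors' analysis), the combined location estimate and the weight given to vision are

```latex
\[
  \hat{s} = w_V x_V + (1 - w_V)\, x_A ,
  \qquad
  w_V = \frac{\sigma_A^{2}}{\sigma_A^{2} + \sigma_V^{2}} ,
\]
```

so as visual variance grows with distance from the midline, the weight on vision, and with it the visual bias of auditory localization, should fall, as observed.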
40

Phillips, Dennis P., Susan E. Hall, Susan E. Boehnke and Leanna E. D. Rutherford. "Spatial Stimulus Cue Information Supplying Auditory Saltation". Perception 31, no. 7 (July 2002): 875–85. http://dx.doi.org/10.1068/p3293.

Abstract:
Auditory saltation is a misperception of the spatial location of repetitive, transient stimuli. It arises when clicks at one location are followed in perfect temporal cadence by identical clicks at a second location. This report describes two psychophysical experiments designed to examine the sensitivity of auditory saltation to different stimulus cues for auditory spatial perception. Experiment 1 was a dichotic study in which six different six-click train stimuli were used to generate the saltation effect. Clicks lateralised by using interaural time differences and clicks lateralised by using interaural level differences produced equivalent saltation effects, confirming an earlier finding. Switching the stimulus cue from an interaural time difference to an interaural level difference (or the reverse) in mid train was inconsequential to the saltation illusion. Experiment 2 was a free-field study in which subjects rated the illusory motion generated by clicks emitted from two sound sources symmetrically disposed around the interaural axis, i.e., on the same cone of confusion in the auditory hemifield opposite one ear. Stimuli in such positions produce spatial location judgments that are based more heavily on monaural spectral information than on binaural computations. The free-field stimuli produced robust saltation. The data from both experiments are consistent with the view that auditory saltation can emerge from spatial processing, irrespective of the stimulus cue information used to determine click laterality or location.
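As a point of reference for the dichotic stimuli in Experiment 1, the ITD produced by a source at a given azimuth is often approximated with the Woodworth spherical-head formula; the sketch below (Python, with an assumed head radius and speed of sound; not part of the study itself) shows the magnitude of the cue being manipulated:

```python
import numpy as np

HEAD_RADIUS_M = 0.0875   # illustrative average adult head radius
SPEED_OF_SOUND = 343.0   # m/s in air at roughly 20 degrees C

def woodworth_itd(azimuth_deg):
    """Spherical-head (Woodworth) estimate of the interaural time difference
    for a far-field source; azimuth is measured from the median plane."""
    theta = np.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (theta + np.sin(theta))

# A source 60 degrees off the midline yields an ITD of roughly 0.49 ms.
print(f"{woodworth_itd(60.0) * 1e6:.0f} microseconds")
```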
41

Werner, Stephan, Florian Klein, Annika Neidhardt, Ulrike Sloma, Christian Schneiderwind and Karlheinz Brandenburg. "Creation of Auditory Augmented Reality Using a Position-Dynamic Binaural Synthesis System—Technical Components, Psychoacoustic Needs, and Perceptual Evaluation". Applied Sciences 11, no. 3 (27 January 2021): 1150. http://dx.doi.org/10.3390/app11031150.

Abstract:
For spatial audio reproduction in the context of augmented reality, a position-dynamic binaural synthesis system can be used to synthesize the ear signals for a moving listener. The goal is the fusion of the auditory perception of the virtual audio objects with the real listening environment. Such a system has several components, each of which helps to enable a plausible auditory simulation. For each possible position of the listener in the room, a set of binaural room impulse responses (BRIRs) congruent with the expected auditory environment is required to avoid room-divergence effects. An adequate and efficient approach is to synthesize new BRIRs from very few measurements of the listening room. The required spatial resolution of the BRIR positions can be estimated from spatial auditory perception thresholds. Retrieving and processing the tracking data of the listener's head pose and position, as well as convolving BRIRs with an audio signal, must be done in real time. This contribution presents the authors' work on several technical components of such a system in detail and shows how the individual components are shaped by psychoacoustics. Furthermore, the paper discusses the perceptual effects by means of listening tests demonstrating the appropriateness of the approaches.
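The core rendering step, selecting the BRIR that matches the tracked listener pose and convolving it with the dry source signal, can be sketched offline as follows (Python; the array shapes, the nearest-neighbour selection, and the function name are assumptions for illustration, and a real-time system would instead use block-based partitioned convolution with crossfades between BRIRs):

```python
import numpy as np
from scipy.signal import fftconvolve

def render_binaural(mono, brir_bank, brir_positions, listener_pos):
    """Pick the BRIR measured (or synthesized) nearest the listener's tracked
    position and convolve it with the dry signal to obtain the ear signals.
    mono: (n,) source; brir_bank: (N, 2, L); brir_positions: (N, 2) in metres."""
    idx = int(np.argmin(np.linalg.norm(brir_positions - listener_pos, axis=1)))
    left = fftconvolve(mono, brir_bank[idx, 0])
    right = fftconvolve(mono, brir_bank[idx, 1])
    return np.stack([left, right])  # (2, n + L - 1) left/right ear signals
```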
42

Courtois, Gilles, Vincent Grimaldi, Hervé Lissek, Philippe Estoppey and Eleftheria Georganti. "Perception of Auditory Distance in Normal-Hearing and Moderate-to-Profound Hearing-Impaired Listeners". Trends in Hearing 23 (January 2019): 233121651988761. http://dx.doi.org/10.1177/2331216519887615.

Abstract:
The auditory system allows the estimation of the distance to sound-emitting objects using multiple spatial cues. In virtual acoustics over headphones, a prerequisite to render an auditory distance impression is sound externalization, which denotes the perception of synthesized stimuli outside of the head. Prior studies have found that listeners with mild-to-moderate hearing loss are able to perceive auditory distance and are sensitive to externalization. However, this ability may be degraded by certain factors, such as non-linear amplification in hearing aids or the use of a remote wireless microphone. In this study, 10 normal-hearing and 20 moderate-to-profound hearing-impaired listeners were instructed to estimate the distance of stimuli processed with different methods yielding various perceived auditory distances in the vicinity of the listeners. Two different configurations of non-linear amplification were implemented, and a novel feature aiming to restore a sense of distance in wireless microphone systems was tested. The results showed that the hearing-impaired listeners, even those with a profound hearing loss, were able to discriminate nearby and far sounds that were equalized in level. Their perception of auditory distance was, however, more contracted than that of normal-hearing listeners. Non-linear amplification was found to distort the original spatial cues, but no adverse effect on the ratings of auditory distance was evident. Finally, it was shown that the novel feature was successful in allowing the hearing-impaired participants to perceive externalized sounds with wireless microphone systems.
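For context on why the stimuli were equalized in level: for a point source in the free field, the received level drops by about 6 dB per doubling of distance (a standard acoustics relation, stated here for background rather than taken from the paper),

```latex
\[
  L(r) = L(r_0) - 20 \log_{10}\!\left(\frac{r}{r_0}\right)\ \mathrm{dB} ,
\]
```

so removing this intensity cue forces listeners to rely on the remaining cues, such as the direct-to-reverberant energy ratio.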
43

Shinn‐Cunningham, Barbara, and K. Anthony Hoover. "How does spatial auditory perception impact how we enjoy surround sound". Journal of the Acoustical Society of America 123, no. 5 (May 2008): 2980. http://dx.doi.org/10.1121/1.2932495.

44

Bennemann, Jan, Claudia Freigang, Marc Stöhr and Rudolf Rübsamen. "Interactions of multisensory integration and spatial attention in auditory space perception". Multisensory Research 26, no. 1-2 (2013): 209. http://dx.doi.org/10.1163/22134808-000s0157.

45

Kalashnyk, Mariya, Uriy Loshkov, Oleksandr Yakovlev, Anton Genkin and Hanna Savchenko. "Musically-acoustic thesaurus as spatial dimension of cognitive process". Scientific Herald of Uzhhorod University Series Physics 2024, no. 55 (21 January 2024): 1421–27. http://dx.doi.org/10.54919/physics/55.2024.142af1.

Abstract:
Relevance. This article explores the intricate relationship between the musically-acoustic thesaurus, encompassing both musical and extra-musical elements, and cognitive processes, emphasizing the spatial dimension of cognition within auditory experiences. Purpose. The primary aim is to dissect the structure and function of the musically-acoustic thesaurus in individual and collective cognitive domains, highlighting its role in encoding and navigating the acoustic environment and its impact on musical and emotional experiences. Methodology. Through a comprehensive analysis of auditory activity, the study examines how sonic phenomena, both musical and non-musical, are categorized, internalized, and utilized within human cognition. It considers the sonic environment's organization and how it influences the perception of and emotional engagement with music and sound. Results. The findings indicate that the musically-acoustic thesaurus serves as a crucial framework for understanding and interacting with the acoustic world. It delineates how sounds are integrated into a complex network of cognitive processes, facilitating orientation in space-time, enabling emotional experiences, and fostering an aesthetic appreciation of the acoustic environment. Conclusions. The musically-acoustic thesaurus emerges as a pivotal element in the cognitive processing of sound, underscoring its dual role in practical orientation and emotional-aesthetic experiences. The study reveals that this thesaurus not only aids in navigating the sonic landscape but also enriches the individual's interaction with music and sound, thereby significantly contributing to the broader understanding of cognitive processes in auditory perception.
46

Wang, Haitao, Xiangyang Zeng, Ye Lei, Shuwei Ren and Xiaoyan Zhang. "Reform and Practice of Immersive Teaching in Environmental Acoustics". Journal of Contemporary Educational Research 5, no. 12 (23 December 2021): 32–37. http://dx.doi.org/10.26689/jcer.v5i12.2837.

Abstract:
Environmental Acoustics is a professional course for students majoring in acoustics. Constrained by the availability of practical equipment and other factors, the parts of the course that concern subjective auditory perception have been taught with poor results. Taking the course Spatial Hearing and 3D Stereo as an example, this study develops a virtual simulation experiment system for subjective auditory perception in order to carry out the reform and practice of immersive teaching. Supported by a virtual simulation experiment with a high sense of presence, students experience the actual auditory perceptual effects throughout the learning process and achieve good learning outcomes.
47

Callan, Daniel E., Jeffery A. Jones, Kevin Munhall, Christian Kroos, Akiko M. Callan and Eric Vatikiotis-Bateson. "Multisensory Integration Sites Identified by Perception of Spatial Wavelet Filtered Visual Speech Gesture Information". Journal of Cognitive Neuroscience 16, no. 5 (June 2004): 805–16. http://dx.doi.org/10.1162/089892904970771.

Abstract:
Perception of speech is improved when presentation of the audio signal is accompanied by concordant visual speech gesture information. This enhancement is most prevalent when the audio signal is degraded. One potential means by which the brain affords perceptual enhancement is thought to be through the integration of concordant information from multiple sensory channels in a common site of convergence, multisensory integration (MSI) sites. Some studies have identified potential sites in the superior temporal gyrus/sulcus (STG/S) that are responsive to multisensory information from the auditory speech signal and visual speech movement. One limitation of these studies is that they do not control for activity resulting from attentional modulation cued by such things as visual information signaling the onsets and offsets of the acoustic speech signal, as well as activity resulting from MSI of properties of the auditory speech signal with aspects of gross visual motion that are not specific to place of articulation information. This fMRI experiment uses spatial wavelet bandpass filtered Japanese sentences presented with background multispeaker audio noise to discern brain activity reflecting MSI induced by auditory and visual correspondence of place of articulation information that controls for activity resulting from the above-mentioned factors. The experiment consists of a low-frequency (LF) filtered condition containing gross visual motion of the lips, jaw, and head without specific place of articulation information, a midfrequency (MF) filtered condition containing place of articulation information, and an unfiltered (UF) condition. Sites of MSI selectively induced by auditory and visual correspondence of place of articulation information were determined by the presence of activity for both the MF and UF conditions relative to the LF condition. Based on these criteria, sites of MSI were found predominantly in the left middle temporal gyrus (MTG), and the left STG/S (including the auditory cortex). By controlling for additional factors that could also induce greater activity resulting from visual motion information, this study identifies potential MSI sites that we believe are involved with improved speech perception intelligibility.
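A rough sketch of the kind of spatial bandpass manipulation described here (Python with PyWavelets; the wavelet family, number of levels, and level-selection scheme are illustrative assumptions rather than the authors' pipeline):

```python
import numpy as np
import pywt

def spatial_bandpass(frame, keep_levels, wavelet="db4", levels=5):
    """Zero all 2-D wavelet coefficients of a grayscale frame except the
    detail levels listed in keep_levels (1 = finest). Dropping the coarse
    approximation removes low spatial frequencies, so keeping only the
    middle levels yields a midfrequency (MF) version of the frame."""
    coeffs = pywt.wavedec2(frame, wavelet, level=levels)
    out = [np.zeros_like(coeffs[0])]          # discard coarse approximation
    for i, details in enumerate(coeffs[1:]):  # ordered coarse -> fine
        level = levels - i
        out.append(tuple(d if level in keep_levels else np.zeros_like(d)
                         for d in details))
    return pywt.waverec2(out, wavelet)
```

A low-frequency (LF) condition would instead retain the approximation and coarse details, while the unfiltered (UF) condition uses the frame as is.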
48

Findlay-Walsh, Iain. "Virtual auditory reality". SoundEffects - An Interdisciplinary Journal of Sound and Sound Experience 10, no. 1 (15 January 2021): 71–90. http://dx.doi.org/10.7146/se.v10i1.124199.

Abstract:
This article examines popular music listening in light of recent research in auditory perception and spatial experience, record production, and virtual reality, while considering parallel developments in digital pop music production practice. The discussion begins by considering theories of listening and embodiment by Brandon LaBelle, Eric Clarke, Salomè Voegelin and Linda Salter, examining relations between listening subjects and aural environments, conceptualising listening as a process of environmental ‘inhabiting’, and considering auditory experience as the real-time construction of ‘reality’. These ideas are discussed in relation to recent research on popular music production and perception, with a focus on matters of spatial sound design, the virtual ‘staging’ of music performances and performing bodies, digital editing methods and effects, and on shifting relations between musical spatiality, singer-persona, audio technologies, and listener. Writings on music and virtual space by Martin Knakkergaard, Allan Moore, Ragnhild Brøvig-Hanssen & Anne Danielsen, Denis Smalley, Dale Chapman, Kodwo Eshun and Holger Schulze are discussed, before being related to conceptions of VR sound and user experience by Jaron Lanier, Rolf Nordahl & Niels Nilsson, Mel Slater, Tom Garner and Frances Dyson. This critical framework informs three short aural analyses of digital pop tracks released during the last 10 years - Titanium (Guetta & Sia 2010), Ultralight Beam (West 2016) and 2099 (Charli XCX 2019) - presented in the form of autoethnographic ‘listening notes’. Through this discussion on personal popular music listening and virtual spatiality, a theory of pop listening as embodied inhabiting of simulated narrative space, or virtual story-world, with reference to ‘aural-dominant realities’ (Salter), ‘sonic possible worlds’ (Voegelin), and ‘sonic fictions’ (Eshun), is developed. By examining personal music listening in relation to VR user experience, this study proposes listening to pop music in the 21st century as a mode of immersive, embodied ‘storyliving’, or ‘storydoing’ (Allen & Tucker).
49

Clayton, Colton, and Yi Zhou. "Visual capture and changes in response times in multisensory localization tasks". Journal of the Acoustical Society of America 151, no. 4 (April 2022): A163. http://dx.doi.org/10.1121/10.0010981.

Abstract:
In the real world, environmental objects are often both seen and heard. Visual stimuli can influence the accuracy, variability, and timing of the listener's responses to auditory targets. One well-known example of this visual influence is the ventriloquist illusion, or visual capture of the perceived sound-source location. However, less is known about how vision affects the timing of sensorimotor output driven by auditory events. We hypothesize that response time may manifest the influences of two different visual factors: stimulus features (e.g., spatial congruence) and environmental features (e.g., contextual reference). We measured eye saccades, a natural orienting behavior, in response to auditory and visual stimuli with different degrees of spatial alignment in two visual environments (with or without a contextual reference). Results showed that both visual factors can influence the timing of saccades to auditory targets. Response times were longer when auditory and visual signals were spatially separated. Response times were also longer in the non-referenced than in the referenced environment, but the effect size was small. These results demonstrate that response time is a valuable metric for understanding multisensory perception. Further studies are needed to investigate the patterns of changes in both response time and response accuracy to gain new insights into auditory localization in real-life settings.
50

Ivry, Richard B., and Paul C. Lebby. "Hemispheric Differences in Auditory Perception Are Similar to Those Found in Visual Perception". Psychological Science 4, no. 1 (January 1993): 41–45. http://dx.doi.org/10.1111/j.1467-9280.1993.tb00554.x.

Abstract:
In a pitch discrimination task, subjects were faster and more accurate in judging low-frequency sounds when these stimuli were presented to the left ear, compared with the right ear. In contrast, a right-ear advantage was found with high-frequency sounds. The effect was in terms of relative frequency and not absolute frequency, suggesting that it arises from postsensory mechanisms. A similar laterality effect has been reported in visual perception with stimuli varying in spatial frequency. These multimodal laterality effects may reflect a general computational difference between the two cerebral hemispheres, with the left hemisphere biased for processing high-frequency information and the right hemisphere biased for processing low-frequency information.
