Journal articles on the topic 'Human sound localization'

To see other types of publications on this topic, follow the link: Human sound localization.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Human sound localization.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Middlebrooks, John C., and David M. Green. "Sound Localization by Human Listeners." Annual Review of Psychology 42, no. 1 (January 1991): 135–59. http://dx.doi.org/10.1146/annurev.ps.42.020191.001031.

2

Silva, T. G., W. P. S. Freitas, C. R. Cena, and A. M. B. Goncalves. "A demonstration about human sound localization." Physics Education 54, no. 1 (November 27, 2018): 013004. http://dx.doi.org/10.1088/1361-6552/aaf045.

3

Palomäki, Kalle, Paavo Alku, Ville Mäkinen, Patrick May, and Hannu Tiitinen. "Sound localization in the human brain." NeuroReport 11, no. 7 (May 2000): 1535–38. http://dx.doi.org/10.1097/00001756-200005150-00034.

4

Poirier, Pierre, Sylvain Miljours, Maryse Lassonde, and Franco Lepore. "Sound localization in acallosal human listeners." Brain 116, no. 1 (1993): 53–69. http://dx.doi.org/10.1093/brain/116.1.53.

5

Sato, Hayato, Masayuki Morimoto, and Hiroshi Sato. "Head movement in human sound localization." Journal of the Acoustical Society of America 140, no. 4 (October 2016): 2998. http://dx.doi.org/10.1121/1.4969287.

6

Morrongiello, Barbara A., Kimberley D. Fenwick, Loretta Hillier, and Graham Chance. "Sound localization in newborn human infants." Developmental Psychobiology 27, no. 8 (December 1994): 519–38. http://dx.doi.org/10.1002/dev.420270805.

7

Mungamuru, B., and P. Aarabi. "Enhanced Sound Localization." IEEE Transactions on Systems, Man and Cybernetics, Part B (Cybernetics) 34, no. 3 (June 2004): 1526–40. http://dx.doi.org/10.1109/tsmcb.2004.826398.

8

Wightman, Frederic L., and Doris J. Kistler. "Individual differences in human sound localization behavior." Journal of the Acoustical Society of America 99, no. 4 (April 1996): 2470–500. http://dx.doi.org/10.1121/1.415531.

9

Makous, James C., and John C. Middlebrooks. "Two‐dimensional sound localization by human listeners." Journal of the Acoustical Society of America 87, no. 5 (May 1990): 2188–200. http://dx.doi.org/10.1121/1.399186.

10

Dobreva, Marina S., William E. O'Neill, and Gary D. Paige. "Influence of aging on human sound localization." Journal of Neurophysiology 105, no. 5 (May 2011): 2471–86. http://dx.doi.org/10.1152/jn.00951.2010.

Abstract:
Errors in sound localization, associated with age-related changes in peripheral and central auditory function, can pose threats to self and others in a commonly encountered environment such as a busy traffic intersection. This study aimed to quantify the accuracy and precision (repeatability) of free-field human sound localization as a function of advancing age. Head-fixed young, middle-aged, and elderly listeners localized band-passed targets using visually guided manual laser pointing in a darkened room. Targets were presented in the frontal field by a robotically controlled loudspeaker assembly hidden behind a screen. Broadband targets (0.1–20 kHz) activated all auditory spatial channels, whereas low-pass and high-pass targets selectively isolated interaural time and intensity difference cues (ITDs and IIDs) for azimuth and high-frequency spectral cues for elevation. In addition, to assess the upper frequency limit of ITD utilization across age groups more thoroughly, narrowband targets were presented at 250-Hz intervals from 250 Hz up to ∼2 kHz. Young subjects generally showed horizontal overestimation (overshoot) and vertical underestimation (undershoot) of auditory target location, and this effect varied with frequency band. Accuracy and/or precision worsened in older individuals for broadband, high-pass, and low-pass targets, reflective of peripheral but also central auditory aging. In addition, compared with young adults, middle-aged, and elderly listeners showed pronounced horizontal localization deficiencies (imprecision) for narrowband targets within 1,250–1,575 Hz, congruent with age-related central decline in auditory temporal processing. Findings underscore the distinct neural processing of the auditory spatial cues in sound localization and their selective deterioration with advancing age.
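
For orientation, the ITD cue that this study's narrowband targets isolate can be illustrated with Woodworth's classic spherical-head approximation. The sketch below is a textbook model with a typical adult head radius, not the analysis used in the paper.

```python
import numpy as np

# Woodworth's spherical-head approximation of the interaural time
# difference (ITD) for a distant source; an 8.75 cm head radius and
# 343 m/s speed of sound are typical assumed values.
def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    theta = np.radians(azimuth_deg)             # 0 rad = straight ahead
    return (head_radius_m / c) * (theta + np.sin(theta))

# A pure tone's interaural phase becomes ambiguous once half its period
# is shorter than the ITD, which is one reason narrowband ITD cues
# degrade in the low-kilohertz range probed by this study.
itd_90 = woodworth_itd(90.0)                    # ~0.66 ms for a lateral source
print(1.0 / (2.0 * itd_90))                     # ~760 Hz ambiguity limit at 90 deg
```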
11

Sabin, Andrew T., Ewan A. Macpherson, and John C. Middlebrooks. "Human sound localization at near-threshold levels." Hearing Research 199, no. 1-2 (January 2005): 124–34. http://dx.doi.org/10.1016/j.heares.2004.08.001.

12

Wallace, John S., and Donald L. Fisher. "Sound Localization: Information Theory Analysis." Human Factors: The Journal of the Human Factors and Ergonomics Society 40, no. 1 (March 1998): 50–68. http://dx.doi.org/10.1518/001872098779480532.

13

Christensen, Flemming, Morten Lydolf, and Michael F. Soerensen. "Human localization of sound signals with reduced bandwidth." Journal of the Acoustical Society of America 102, no. 5 (November 1997): 3140. http://dx.doi.org/10.1121/1.420669.

14

Jin, Craig, Markus Schenkel, and Simon Carlile. "Neural system identification model of human sound localization." Journal of the Acoustical Society of America 108, no. 3 (2000): 1215. http://dx.doi.org/10.1121/1.1288411.

15

Van Grootel, Tom J., and A. John Van Opstal. "Human Sound-Localization Behavior Accounts for Ocular Drift." Journal of Neurophysiology 103, no. 4 (April 2010): 1927–36. http://dx.doi.org/10.1152/jn.00958.2009.

Abstract:
To generate an accurate saccade toward a sound in darkness requires a transformation of the head-centered sound location into an oculocentric motor command, which necessitates the use of an eye-in-head position signal. We tested whether this transformation uses a continuous representation of eye position by exploiting the property that the oculomotor neural integrator is leaky with a time constant of ∼20 s. Hence in complete darkness, the eyes tend to drift toward a neutral position. Alternatively, the spatial mapping stage could employ a sampled eye-position signal in which case drift will not be accounted for. Our data show that the sound location is accurately represented and that the transformation uses a dynamic eye-position signal. This signal, however, is slightly underestimated, leading to small systematic localization errors that tend to covary with the direction of eye position.
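
The logic of the experiment fits in a few lines. The sketch below is our illustration of the abstract's argument, not the authors' model; the 20 s time constant is the value quoted above.

```python
import numpy as np

# In darkness the leaky oculomotor integrator lets the eye drift toward
# a neutral position with a time constant of roughly 20 s (per the abstract).
def eye_in_head(e0_deg, t_s, tau_s=20.0):
    return e0_deg * np.exp(-t_s / tau_s)

# Saccade command: head-centered target location minus eye-in-head position.
def oculocentric_command(target_head_deg, eye_deg):
    return target_head_deg - eye_deg

# A sampled (stale) eye-position signal would miss by the accumulated drift;
# a dynamic signal, as the data support, accounts for it.
e0, t, target = 10.0, 5.0, 20.0
stale = oculocentric_command(target, e0)
dynamic = oculocentric_command(target, eye_in_head(e0, t))
print(dynamic - stale)   # predicted error of the sampled scheme (~2.2 deg)
```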
16

Ege, Rachel, A. John Van Opstal, and Marc M. Van Wanrooij. "Perceived Target Range Shapes Human Sound-Localization Behavior." eNeuro 6, no. 2 (March 2019): ENEURO.0111-18.2019. http://dx.doi.org/10.1523/eneuro.0111-18.2019.

17

Langendijk, Erno H. A., and Adelbert W. Bronkhorst. "Contribution of spectral cues to human sound localization." Journal of the Acoustical Society of America 112, no. 4 (October 2002): 1583–96. http://dx.doi.org/10.1121/1.1501901.

18

Reijniers, J., D. Vanderelst, C. Jin, S. Carlile, and H. Peremans. "An ideal-observer model of human sound localization." Biological Cybernetics 108, no. 2 (February 26, 2014): 169–81. http://dx.doi.org/10.1007/s00422-014-0588-4.

19

Chemistruck, Mike, Andrew Allen, John Snyder, and Nikunj Raghuvanshi. "Efficient acoustic perception for virtual AI agents." Proceedings of the ACM on Computer Graphics and Interactive Techniques 4, no. 3 (September 22, 2021): 1–13. http://dx.doi.org/10.1145/3480139.

Abstract:
We model acoustic perception in AI agents efficiently within complex scenes with many sound events. The key idea is to employ perceptual parameters that capture how each sound event propagates through the scene to the agent's location. This naturally conforms virtual perception to human. We propose a simplified auditory masking model that limits localization capability in the presence of distracting sounds. We show that anisotropic reflections as well as the initial sound serve as useful localization cues. Our system is simple, fast, and modular and obtains natural results in our tests, letting agents navigate through passageways and portals by sound alone, and anticipate or track occluded but audible targets. Source code is provided.
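
As a rough illustration of the "simplified auditory masking model" idea, one can gate an agent's localization ability on the target's level relative to the energetic sum of distractors. The rule and the 3 dB margin below are our assumptions, not the paper's parameters.

```python
import numpy as np

# Toy masking gate: the agent may localize a sound event only if its level
# exceeds the power sum of all concurrent distractors by a fixed margin.
def is_localizable(target_db, distractor_dbs, margin_db=3.0):
    if len(distractor_dbs) == 0:
        return True
    masker_power = np.sum(10.0 ** (np.asarray(distractor_dbs) / 10.0))
    masker_db = 10.0 * np.log10(masker_power)
    return target_db >= masker_db + margin_db

print(is_localizable(60.0, [50.0, 52.0]))   # True: target stands out
print(is_localizable(55.0, [54.0, 56.0]))   # False: masked by distractors
```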
20

Argo, Theodore F., David A. Anderson, Andrew D. Brown, Nathaniel Greene, and Jennifer Jerding. "Development of an electromechanical test system and acoustical metrics to predict impacts of hearing protection devices on sound localization." Journal of the Acoustical Society of America 151, no. 4 (April 2022): A165. http://dx.doi.org/10.1121/10.0010990.

Abstract:
Hearing protection devices (HPDs) such as earplugs and earmuffs can protect users from dangerously high acoustical pressures but also distort cues important for the spatial localization of sounds, likely through spectral distortions associated with occlusion of the pinnae. Evaluation of the degree to which an HPD distorts localization cues is essential for critical situations where users must be protected from high pressures but maintain spatial awareness. Automated testing offers several prospective advantages over human subject testing including cost, speed, and repeatability. In this paper, we describe an electromechanical system using a rotating manikin test fixture and a speaker on a track to simulate a hemispheric speaker array. The sound source localization impact of an HPD is estimated by comparing test signals recorded by the manikin with and without an HPD in place. Test results, as a function of sound source azimuth and elevation in the virtual speaker array, and as an average across locations, are shown for a variety of HPDs. Results are compared to an initial set of parallel measurements of sound localization during HPD use in human subjects.
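
The core comparison the abstract describes, recording the same test signal with and without the protector in place, might be summarized with a spectral-distortion score like the one sketched below. The metric is our illustration, not the authors' published acoustical metric.

```python
import numpy as np

# Mean absolute spectral level difference (dB) between an open-ear and an
# occluded recording of the same test signal from the same source position.
# Larger values imply stronger distortion of spectral localization cues.
def spectral_distortion_db(open_ear, occluded):
    eps = 1e-12                                   # avoid log of zero
    mag_open = np.abs(np.fft.rfft(open_ear)) + eps
    mag_occl = np.abs(np.fft.rfft(occluded)) + eps
    return float(np.mean(np.abs(20.0 * np.log10(mag_occl / mag_open))))
```

Averaging such a score across source azimuths and elevations, as the abstract describes, would yield a single per-device summary.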
21

Anderson, T. R., J. A. Janko, and R. H. Gilkey. "An artificial neural network model of human sound localization." Journal of the Acoustical Society of America 92, no. 4 (October 1992): 2298. http://dx.doi.org/10.1121/1.405127.

22

Hofman, Paul M., and A. John Van Opstal. "Spectro-temporal factors in two-dimensional human sound localization." Journal of the Acoustical Society of America 103, no. 5 (May 1998): 2634–48. http://dx.doi.org/10.1121/1.422784.

23

Langendijk, Erno H., and Adelbert W. Bronkhorst. "The contribution of spectral cues to human sound localization." Journal of the Acoustical Society of America 105, no. 2 (February 1999): 1036. http://dx.doi.org/10.1121/1.424945.

24

Hofman, P., and A. Van Opstal. "Binaural weighting of pinna cues in human sound localization." Experimental Brain Research 148, no. 4 (February 2003): 458–70. http://dx.doi.org/10.1007/s00221-002-1320-5.

25

Dubrovskiy, N. A., M. V. Tarasova, and V. M. Baronkin. "Information Criteria for a Sound Source Localization by Human." Le Journal de Physique IV 2, no. C1 (April 1992): C1-261–C1-264. http://dx.doi.org/10.1051/jp4:1992155.

26

Ede, Reinhard, and Norman Scheel. "Development of a Device to Measure Human Sound Localization." Perceptual and Motor Skills 113, no. 2 (October 2011): 386–94. http://dx.doi.org/10.2466/03.24.27.pms.113.5.386-394.

27

Howell, D. N. L. "Spatial hearing: The psychophysics of human sound localization 1983." Journal of Sound and Vibration 99, no. 4 (April 1985): 595. http://dx.doi.org/10.1016/0022-460x(85)90547-4.

28

Asakura, Takumi. "Bone Conduction Auditory Navigation Device for Blind People." Applied Sciences 11, no. 8 (April 8, 2021): 3356. http://dx.doi.org/10.3390/app11083356.

Abstract:
A navigation system using a binaural bone-conducted sound is proposed. This system has three features to accurately navigate the user to the destination point. First, the selection of the bone-conduction device and the optimal contact conditions between the device and the human head are discussed. Second, the basic performance of sound localization reproduced by the selected bone-conduction device with binaural sounds is confirmed considering the head-related transfer functions (HRTFs) obtained in the air-borne sound field. Here, a panned sound technique that may emphasize the localization of the sound is also validated. Third, to ensure the safety of the navigating person, which is the most important factor in the navigation of a visually impaired person by voice guidance, an appropriate warning sound reproduced by the bone-conduction device is investigated. Finally, based on the abovementioned conditions, we conduct an auditory navigation experiment using bone-conducted guide announcement. The time required to reach the destination of the navigation route is shorter in the case with voice information including the binaural sound reproduction, as compared to the case with only voice information. Therefore, a navigation system using binaural bone-conducted sound is confirmed to be effective.
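
The binaural reproduction step rests on standard HRTF convolution. Below is a minimal sketch with toy two-tap impulse responses; the study itself used measured air-conduction HRTFs delivered through a bone-conduction device.

```python
import numpy as np

# Render a mono guidance signal binaurally by convolving it with a
# left/right head-related impulse response (HRIR) pair for the target
# direction. The HRIRs here are placeholders, not measured data.
def render_binaural(mono, hrir_left, hrir_right):
    return np.stack([np.convolve(mono, hrir_left),
                     np.convolve(mono, hrir_right)])

fs = 44100
mono = np.sin(2 * np.pi * 440.0 * np.arange(fs) / fs)   # 1 s guidance tone
stereo = render_binaural(mono, np.array([1.0, 0.3]), np.array([0.5, 0.2]))
```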
29

Zhong, Xuan, William Yost, and Liang Sun. "Dynamic binaural sound source localization with ITD cues: Human listeners." Journal of the Acoustical Society of America 137, no. 4 (April 2015): 2376. http://dx.doi.org/10.1121/1.4920636.

30

Van Grootel, Tom J., and A. John Van Opstal. "Human sound-localization behaviour after multiple changes in eye position." European Journal of Neuroscience 29, no. 11 (June 2009): 2233–46. http://dx.doi.org/10.1111/j.1460-9568.2009.06761.x.

31

Zwiers, Marcel P., A. John Van Opstal, and Gary D. Paige. "Plasticity in human sound localization induced by compressed spatial vision." Nature Neuroscience 6, no. 2 (January 13, 2003): 175–81. http://dx.doi.org/10.1038/nn999.

32

Yost, William A. "Spatial Hearing: The Psychophysics of Human Sound Localization, Revised Edition." Ear and Hearing 19, no. 2 (April 1998): 167. http://dx.doi.org/10.1097/00003446-199804000-00009.

33

Baumgartner, Robert, Piotr Majdak, and Bernhard Laback. "Modeling sound-source localization in sagittal planes for human listeners." Journal of the Acoustical Society of America 136, no. 2 (August 2014): 791–802. http://dx.doi.org/10.1121/1.4887447.

34

Vliegen, Joyce, and A. John Van Opstal. "The influence of duration and level on human sound localization." Journal of the Acoustical Society of America 115, no. 4 (April 2004): 1705–13. http://dx.doi.org/10.1121/1.1687423.

35

Jin, Craig, Anna Corderoy, Simon Carlile, and André van Schaik. "Contrasting monaural and interaural spectral cues for human sound localization." Journal of the Acoustical Society of America 115, no. 6 (June 2004): 3124–41. http://dx.doi.org/10.1121/1.1736649.

36

Van Wanrooij, Marc M., and A. John Van Opstal. "Sound Localization Under Perturbed Binaural Hearing." Journal of Neurophysiology 97, no. 1 (January 2007): 715–26. http://dx.doi.org/10.1152/jn.00260.2006.

Abstract:
This paper reports on the acute effects of a monaural plug on directional hearing in the horizontal (azimuth) and vertical (elevation) planes of human listeners. Sound localization behavior was tested with rapid head-orienting responses toward brief high-pass filtered (>3 kHz; HP) and broadband (0.5–20 kHz; BB) noises, with sound levels between 30 and 60 dB, A-weighted (dBA). To deny listeners any consistent azimuth-related head-shadow cues, stimuli were randomly interleaved. A plug immediately degraded azimuth performance, as evidenced by a sound level–dependent shift (“bias”) of responses contralateral to the plug, and a level-dependent change in the slope of the stimulus–response relation (“gain”). Although the azimuth bias and gain were highly correlated, they could not be predicted from the plug's acoustic attenuation. Interestingly, listeners performed best for low-intensity stimuli at their normal-hearing side. These data demonstrate that listeners rely on monaural spectral cues for sound-source azimuth localization as soon as the binaural difference cues break down. Also the elevation response components were affected by the plug: elevation gain depended on both stimulus azimuth and on sound level and, as for azimuth, localization was best for low-intensity stimuli at the hearing side. Our results show that the neural computation of elevation incorporates a binaural weighting process that relies on the perceived, rather than the actual, sound-source azimuth. It is our conjecture that sound localization ensues from a weighting of all acoustic cues for both azimuth and elevation, in which the weights may be partially determined, and rapidly updated, by the reliability of the particular cue.
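
The azimuth "gain" and "bias" quoted in the abstract are the slope and intercept of a linear fit of response azimuth on stimulus azimuth, a standard analysis in this literature. The sketch below uses synthetic data.

```python
import numpy as np

# Fit: response azimuth = gain * stimulus azimuth + bias.
def gain_and_bias(stim_az_deg, resp_az_deg):
    gain, bias = np.polyfit(stim_az_deg, resp_az_deg, 1)
    return gain, bias

stim = np.array([-60.0, -30.0, 0.0, 30.0, 60.0])
resp = 0.7 * stim + 12.0          # e.g., plugged ear: reduced gain, shifted bias
print(gain_and_bias(stim, resp))  # ~(0.7, 12.0)
```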
37

Yalta, Nelson, Kazuhiro Nakadai, and Tetsuya Ogata. "Sound Source Localization Using Deep Learning Models." Journal of Robotics and Mechatronics 29, no. 1 (February 20, 2017): 37–48. http://dx.doi.org/10.20965/jrm.2017.p0037.

Abstract:
[Figure: using a deep learning model, the robot locates the sound source from a multi-channel audio stream input.] This study proposes the use of a deep neural network to localize a sound source using an array of microphones in a reverberant environment. During the last few years, applications based on deep neural networks have performed various tasks such as image classification or speech recognition to levels that exceed even human capabilities. In our study, we employ deep residual networks, which have recently shown remarkable performance in image classification tasks even when the training period is shorter than that of other models. Deep residual networks are used to process audio input similar to multiple signal classification (MUSIC) methods. We show that with end-to-end training and generic preprocessing, the performance of deep residual networks not only surpasses the block level accuracy of linear models on nearly clean environments but also shows robustness to challenging conditions by exploiting the time delay on power information.
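
The residual idea the paper builds on is compact enough to show directly: each block adds a learned correction F(x) to its input, which eases the training of deep stacks. The toy fully connected block below (random weights, NumPy only) is an illustration, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(64, 64))    # toy weights; learned in practice
W2 = rng.normal(scale=0.1, size=(64, 64))

def residual_block(x):
    # Output is x + F(x): the block only has to learn a correction term.
    return x + W2 @ np.maximum(W1 @ x, 0.0)  # ReLU nonlinearity

y = residual_block(rng.normal(size=64))
```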
38

Wan, Xin Wang, and Juan Liang. "Speaker Localization in Reverberant Noisy Environment Using Principal Eigenvector and Classifier." Applied Mechanics and Materials 433-435 (October 2013): 416–19. http://dx.doi.org/10.4028/www.scientific.net/amm.433-435.416.

Abstract:
Sound source localization is essential in many microphone array applications, ranging from speech enhancement to human-computer interfaces. The steered response power with phase transform (SRP-PHAT) method has proved robust, but the algorithm may fail to locate the sound source in highly reverberant, noisy environments. The Naive-Bayes localization algorithm based on classification of cross-correlation functions outperforms SRP-PHAT in such conditions. This paper proposes an improved Naive-Bayes localization algorithm using the principal eigenvector. Simulation results demonstrate that the proposed algorithm provides higher localization accuracy than the Naive-Bayes algorithm in reverberant, noisy environments.
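
For reference, the cross-correlation at the heart of SRP-PHAT is the phase transform (GCC-PHAT): whiten the cross-spectrum so only phase, and hence delay, information survives, then pick the lag of the correlation peak. The sketch below is the textbook algorithm, not the paper's Naive-Bayes variant.

```python
import numpy as np

# GCC-PHAT: estimate the delay (in seconds) of `sig` relative to `ref`.
def gcc_phat(sig, ref, fs, max_tau=None):
    n = len(sig) + len(ref)
    cross = np.fft.rfft(sig, n) * np.conj(np.fft.rfft(ref, n))
    cross /= np.abs(cross) + 1e-12             # phase transform (whitening)
    cc = np.fft.irfft(cross, n)
    max_shift = n // 2 if max_tau is None else min(int(fs * max_tau), n // 2)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs
```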
39

Avots, Egils, Alekss Vecvanags, Jevgenijs Filipovs, Agris Brauns, Gundars Skudrins, Gundega Done, Janis Ozolins, Gholamreza Anbarjafari, and Dainis Jakovels. "Towards Automated Detection and Localization of Red Deer Cervus elaphus Using Passive Acoustic Sensors during the Rut." Remote Sensing 14, no. 10 (May 20, 2022): 2464. http://dx.doi.org/10.3390/rs14102464.

Abstract:
Passive acoustic sensors have the potential to become a valuable complementary component in red deer Cervus elaphus monitoring providing deeper insight into the behavior of stags during the rutting period. Automation of data acquisition and processing is crucial for adaptation and wider uptake of acoustic monitoring. Therefore, an automated data processing workflow concept for red deer call detection and localization was proposed and demonstrated. The unique dataset of red deer calls during the rut in September 2021 was collected with four GPS time-synchronized microphones. Five supervised machine learning algorithms were tested and compared for the detection of red deer rutting calls where the support-vector-machine-based approach demonstrated the best performance of 96.46% detection accuracy. For sound source location, a hyperbolic localization approach was applied. A novel approach based on cross-correlation and spectral feature similarity was proposed for sound delay assessment in multiple microphones resulting in the median localization error of 16 m, thus providing a solution for automated sound source localization, the main challenge in the automation of the data processing workflow. The automated approach outperformed manual sound delay assessment by a human expert where the median localization error was 43 m. Artificial sound records with a known location in the pilot territory were used for localization performance testing.
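
The hyperbolic localization step works from time differences of arrival (TDOAs): each pairwise delay constrains the source to a hyperbola, and the estimate is the point that best satisfies all of them. A brute-force grid search, shown below with illustrative values, stands in for the paper's solver.

```python
import numpy as np

# Locate a source in the plane from delays measured relative to mic 0.
# delays[i] = arrival time at mic i minus arrival time at mic 0 (seconds).
def localize_tdoa(mic_xy, delays, c=343.0, half_span_m=200.0, steps=201):
    grid = np.linspace(-half_span_m, half_span_m, steps)
    best, best_err = None, np.inf
    for x in grid:
        for y in grid:
            dist = np.hypot(mic_xy[:, 0] - x, mic_xy[:, 1] - y)
            pred = (dist - dist[0]) / c       # predicted TDOAs vs. mic 0
            err = np.sum((pred[1:] - delays[1:]) ** 2)
            if err < best_err:
                best, best_err = (x, y), err
    return best
```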
40

Toshima, Iwaki, Shigeaki Aoki, and Tatsuya Hirahara. "Sound Localization Using an Acoustical Telepresence Robot: TeleHead II." Presence: Teleoperators and Virtual Environments 17, no. 4 (August 1, 2008): 392–404. http://dx.doi.org/10.1162/pres.17.4.392.

Abstract:
TeleHead I is an acoustical telepresence robot that we built on the basis of the concept that remote sound localization could be best achieved by using a user-like dummy head whose movement synchronizes with the user's head movement in real time. We clarified the characteristics of the latest version of TeleHead I, TeleHead II, and verified the validity of this concept by sound localization experiments. TeleHead II can synchronize stably with the user's head movement with a 120-ms delay. The driving noise level measured through headphones is below 24 dB SPL from 1 to 4 kHz. The shape difference between the dummy head and the user is about 3% in head width and 5% in head length. An overall measurement metric indicated that the difference between the head-related transfer functions (HRTFs) of the dummy head and the modeled listener is about 5 dB. The results of the sound localization experiments using TeleHead II clarified that head movement improves horizontal-plane sound localization performance even when the dummy head shape differs from the user's head shape. In contrast, the results for head movement when the dummy head shape and user head shape are different were inconsistent in the median plane. The accuracy of sound localization when using the same-shape dummy head with movement tethered to the user's head movement was always good. These results show that the TeleHead concept is acceptable for building an acoustical telepresence robot. They also show that the physical characteristics of TeleHead II are sufficient for conducting sound localization experiments.
41

Gai, Yan, Vibhakar C. Kotak, Dan H. Sanes, and John Rinzel. "On the localization of complex sounds: temporal encoding based on input-slope coincidence detection of envelopes." Journal of Neurophysiology 112, no. 4 (August 15, 2014): 802–13. http://dx.doi.org/10.1152/jn.00044.2013.

Abstract:
Behavioral and neural findings demonstrate that animals can locate low-frequency sounds along the azimuth by detecting microsecond interaural time differences (ITDs). Information about ITDs is also available in the amplitude modulations (i.e., envelope) of high-frequency sounds. Since medial superior olivary (MSO) neurons encode low-frequency ITDs, we asked whether they employ a similar mechanism to process envelope ITDs with high-frequency carriers, and the effectiveness of this mechanism compared with the process of low-frequency sound. We developed a novel hybrid in vitro dynamic-clamp approach, which enabled us to mimic synaptic input to brain-slice neurons in response to virtual sound and to create conditions that cannot be achieved naturally but are useful for testing our hypotheses. For each simulated ear, a virtual sound, computer generated, was used as input to a computational auditory-nerve model. Model spike times were converted into synaptic input for MSO neurons, and ITD tuning curves were derived for several virtual-sound conditions: low-frequency pure tones, high-frequency tones modulated with two types of envelope, and speech sequences. Computational models were used to verify the physiological findings and explain the biophysical mechanism underlying the observed ITD coding. Both recordings and simulations indicate that MSO neurons are sensitive to ITDs carried by spectrotemporally complex virtual sounds, including speech tokens. Our findings strongly suggest that MSO neurons can encode ITDs across a broad-frequency spectrum using an input-slope-based coincidence-detection mechanism. Our data also provide an explanation at the cellular level for human localization performance involving high-frequency sound described by previous investigators.
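
An envelope-ITD stimulus of the kind the abstract describes is easy to synthesize: a high-frequency carrier whose amplitude modulation is delayed in one ear, while the carrier itself provides no usable fine-structure ITD. Parameter values below are illustrative, not the study's.

```python
import numpy as np

fs = 48000
t = np.arange(int(0.5 * fs)) / fs             # 0.5 s stimulus

def sam_tone(t, fc, fm, env_delay_s=0.0):
    # Sinusoidally amplitude-modulated tone; only the envelope is delayed.
    env = 0.5 * (1.0 - np.cos(2 * np.pi * fm * (t - env_delay_s)))
    return env * np.sin(2 * np.pi * fc * t)

left = sam_tone(t, fc=4000.0, fm=128.0)
right = sam_tone(t, fc=4000.0, fm=128.0, env_delay_s=500e-6)  # 500 us envelope ITD
```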
42

Lee, Ji-Yeoun, Su-young Chi, Jae-Yeun Lee, Minsoo Hahn, and Young-Jo Cho. "Real-Time Sound Localization Using Time Difference for Human Robot Interaction." IFAC Proceedings Volumes 38, no. 1 (2005): 54–57. http://dx.doi.org/10.3182/20050703-6-cz-1902.01411.

43

Ashmead, Daniel H., DeFord L. Davis, Tracy Whalen, and Richard D. Odom. "Sound Localization and Sensitivity to Interaural Time Differences in Human Infants." Child Development 62, no. 6 (December 1991): 1211. http://dx.doi.org/10.2307/1130802.

44

Blauert, Jens, and Robert A. Butler. "Spatial Hearing: The Psychophysics of Human Sound Localization by Jens Blauert." Journal of the Acoustical Society of America 77, no. 1 (January 1985): 334–35. http://dx.doi.org/10.1121/1.392109.

45

Wightman, Frederic, Doris Kistler, and Kristen Andersen. "Reassessment of the role of head movements in human sound localization." Journal of the Acoustical Society of America 95, no. 5 (May 1994): 3003–4. http://dx.doi.org/10.1121/1.408750.

46

van der Heijden, Kiki, Josef P. Rauschecker, Elia Formisano, Giancarlo Valente, and Beatrice de Gelder. "Active Sound Localization Sharpens Spatial Tuning in Human Primary Auditory Cortex." Journal of Neuroscience 38, no. 40 (August 20, 2018): 8574–87. http://dx.doi.org/10.1523/jneurosci.0587-18.2018.

47

Okamoto, Yosuke, Seiji Nakagawa, Yoh-ichi Fujisaka, and Mitsuo Tonoike. "Human cortical activity related to sound localization in the median plane." International Congress Series 1278 (March 2005): 11–14. http://dx.doi.org/10.1016/j.ics.2004.11.001.

48

Willert, V., J. Eggert, J. Adamy, R. Stahl, and E. Korner. "A Probabilistic Model for Binaural Sound Localization." IEEE Transactions on Systems, Man and Cybernetics, Part B (Cybernetics) 36, no. 5 (October 2006): 982–94. http://dx.doi.org/10.1109/tsmcb.2006.872263.

49

Deleforge, Antoine, Florence Forbes, and Radu Horaud. "Acoustic Space Learning for Sound-Source Separation and Localization on Binaural Manifolds." International Journal of Neural Systems 25, no. 01 (January 6, 2015): 1440003. http://dx.doi.org/10.1142/s0129065714400036.

Abstract:
In this paper, we address the problems of modeling the acoustic space generated by a full-spectrum sound source and using the learned model for the localization and separation of multiple sources that simultaneously emit sparse-spectrum sounds. We lay theoretical and methodological grounds in order to introduce the binaural manifold paradigm. We perform an in-depth study of the latent low-dimensional structure of the high-dimensional interaural spectral data, based on a corpus recorded with a human-like audiomotor robot head. A nonlinear dimensionality reduction technique is used to show that these data lie on a two-dimensional (2D) smooth manifold parameterized by the motor states of the listener, or equivalently, the sound-source directions. We propose a probabilistic piecewise affine mapping model (PPAM) specifically designed to deal with high-dimensional data exhibiting an intrinsic piecewise linear structure. We derive a closed-form expectation-maximization (EM) procedure for estimating the model parameters, followed by Bayes inversion for obtaining the full posterior density function of a sound-source direction. We extend this solution to deal with missing data and redundancy in real-world spectrograms, and hence for 2D localization of natural sound sources such as speech. We further generalize the model to the challenging case of multiple sound sources and we propose a variational EM framework. The associated algorithm, referred to as variational EM for source separation and localization (VESSL) yields a Bayesian estimation of the 2D locations and time-frequency masks of all the sources. Comparisons of the proposed approach with several existing methods reveal that the combination of acoustic-space learning with Bayesian inference enables our method to outperform state-of-the-art methods.
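
The paper's PPAM-plus-EM machinery is substantial, but the underlying "acoustic space learning" idea, mapping high-dimensional interaural features to source directions via a learned corpus, can be conveyed with a far simpler stand-in such as nearest-neighbor lookup. Everything below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
train_feats = rng.normal(size=(500, 16))           # stand-in interaural spectra
train_dirs = rng.uniform(-90, 90, size=(500, 2))   # (azimuth, elevation) labels

def predict_direction(feat):
    # Return the direction of the training sound with the closest features.
    i = np.argmin(np.linalg.norm(train_feats - feat, axis=1))
    return train_dirs[i]

print(predict_direction(train_feats[42]))  # recovers the 43rd training label
```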
50

Tirado, Carlos, Billy Gerdfeldter, Stina C. Kärnekull, and Mats E. Nilsson. "Comparing Echo-Detection and Echo-Localization in Sighted Individuals." Perception 50, no. 4 (March 5, 2021): 308–27. http://dx.doi.org/10.1177/03010066211000617.

Abstract:
Echolocation is the ability to gather information from sound reflections. Most previous studies have focused on the ability to detect sound reflections, others on the ability to localize sound reflections, but no previous study has compared the two abilities in the same individuals. Our study compared echo-detection (reflecting object present or not?) and echo-localization (reflecting object to the left or right?) in 10 inexperienced sighted participants across 10 distances (1–4.25 m) to the reflecting object, using an automated system for studying human echolocation. There were substantial individual differences, particularly in the performance on the echo-localization task. However, most participants performed better on the detection than the localization task, in particular at the closest distances (1 and 1.7 m), illustrating that it sometimes may be hard to perceive whether an audible reflection came from the left or right.
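
Because both tasks are two-alternative judgments (object present or absent; reflection left or right), their results can be placed on a common sensitivity scale with the standard signal-detection index d'. The computation below is generic, with made-up rates, not data from the paper.

```python
from statistics import NormalDist

# d' = z(hit rate) - z(false-alarm rate), with rates clamped away from 0/1.
def dprime(hit_rate, fa_rate, n_trials=100):
    clamp = lambda p: min(max(p, 0.5 / n_trials), 1.0 - 0.5 / n_trials)
    z = NormalDist().inv_cdf
    return z(clamp(hit_rate)) - z(clamp(fa_rate))

print(dprime(0.85, 0.20))   # e.g., detection task
print(dprime(0.70, 0.35))   # e.g., localization task at a close distance
```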
