Academic literature on the topic 'Human sound localization'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Human sound localization.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Human sound localization"

1

Middlebrooks, John C., and David M. Green. "Sound Localization by Human Listeners." Annual Review of Psychology 42, no. 1 (January 1991): 135–59. http://dx.doi.org/10.1146/annurev.ps.42.020191.001031.

2

Silva, T. G., W. P. S. Freitas, C. R. Cena, and A. M. B. Goncalves. "A demonstration about human sound localization." Physics Education 54, no. 1 (November 27, 2018): 013004. http://dx.doi.org/10.1088/1361-6552/aaf045.

3

Palomäki, Kalle, Paavo Alku, Ville Mäkinen, Patrick May, and Hannu Tiitinen. "Sound localization in the human brain." NeuroReport 11, no. 7 (May 2000): 1535–38. http://dx.doi.org/10.1097/00001756-200005150-00034.

4

Poirier, Pierre, Sylvain Miljours, Maryse Lassonde, and Franco Lepore. "Sound localization in acallosal human listeners." Brain 116, no. 1 (1993): 53–69. http://dx.doi.org/10.1093/brain/116.1.53.

5

Sato, Hayato, Masayuki Morimoto, and Hiroshi Sato. "Head movement in human sound localization." Journal of the Acoustical Society of America 140, no. 4 (October 2016): 2998. http://dx.doi.org/10.1121/1.4969287.

6

Morrongiello, Barbara A., Kimberley D. Fenwick, Loretta Hillier, and Graham Chance. "Sound localization in newborn human infants." Developmental Psychobiology 27, no. 8 (December 1994): 519–38. http://dx.doi.org/10.1002/dev.420270805.

7

Mungamuru, B., and P. Aarabi. "Enhanced Sound Localization." IEEE Transactions on Systems, Man and Cybernetics, Part B (Cybernetics) 34, no. 3 (June 2004): 1526–40. http://dx.doi.org/10.1109/tsmcb.2004.826398.

8

Wightman, Frederic L., and Doris J. Kistler. "Individual differences in human sound localization behavior." Journal of the Acoustical Society of America 99, no. 4 (April 1996): 2470–500. http://dx.doi.org/10.1121/1.415531.

9

Makous, James C., and John C. Middlebrooks. "Two‐dimensional sound localization by human listeners." Journal of the Acoustical Society of America 87, no. 5 (May 1990): 2188–200. http://dx.doi.org/10.1121/1.399186.

10

Dobreva, Marina S., William E. O'Neill, and Gary D. Paige. "Influence of aging on human sound localization." Journal of Neurophysiology 105, no. 5 (May 2011): 2471–86. http://dx.doi.org/10.1152/jn.00951.2010.

Abstract:
Errors in sound localization, associated with age-related changes in peripheral and central auditory function, can pose threats to self and others in a commonly encountered environment such as a busy traffic intersection. This study aimed to quantify the accuracy and precision (repeatability) of free-field human sound localization as a function of advancing age. Head-fixed young, middle-aged, and elderly listeners localized band-passed targets using visually guided manual laser pointing in a darkened room. Targets were presented in the frontal field by a robotically controlled loudspeaker assembly hidden behind a screen. Broadband targets (0.1–20 kHz) activated all auditory spatial channels, whereas low-pass and high-pass targets selectively isolated interaural time and intensity difference cues (ITDs and IIDs) for azimuth and high-frequency spectral cues for elevation. In addition, to assess the upper frequency limit of ITD utilization across age groups more thoroughly, narrowband targets were presented at 250-Hz intervals from 250 Hz up to ∼2 kHz. Young subjects generally showed horizontal overestimation (overshoot) and vertical underestimation (undershoot) of auditory target location, and this effect varied with frequency band. Accuracy and/or precision worsened in older individuals for broadband, high-pass, and low-pass targets, reflective of peripheral but also central auditory aging. In addition, compared with young adults, middle-aged, and elderly listeners showed pronounced horizontal localization deficiencies (imprecision) for narrowband targets within 1,250–1,575 Hz, congruent with age-related central decline in auditory temporal processing. Findings underscore the distinct neural processing of the auditory spatial cues in sound localization and their selective deterioration with advancing age.
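The abstract above separates localization accuracy (systematic over- or undershoot) from precision (repeatability). As a minimal sketch of that distinction only, the Python snippet below computes both statistics from repeated azimuth responses to one target; the target angle, response values, and function name are invented for illustration and are not taken from the study.

```python
import numpy as np

def azimuth_accuracy_precision(target_deg, responses_deg):
    """Return (bias, spread): mean signed error and SD of repeated responses."""
    responses = np.asarray(responses_deg, dtype=float)
    signed_error = responses - target_deg   # positive = overshoot, negative = undershoot
    bias = float(signed_error.mean())       # accuracy: systematic error in degrees
    spread = float(responses.std(ddof=1))   # precision: repeatability (smaller is better)
    return bias, spread

# Hypothetical responses of one listener to a 20-degree azimuth target.
bias, spread = azimuth_accuracy_precision(20.0, [24.1, 26.5, 22.8, 25.0, 23.4])
print(f"bias = {bias:+.1f} deg, spread (SD) = {spread:.1f} deg")
```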

Dissertations / Theses on the topic "Human sound localization"

1

Strömberg, Ralf, and Stig-Åke Svensson. "Sound localization for human interaction in real environment." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-12496.

Abstract:
For a robot to succeed at speech recognition, it is advantageous to have a strong and clear signal to interpret. To facilitate this, the robot can steer and aim toward the sound source to get a clearer signal; to do this, a sound source localization system is required. If the robot turns towards the speaker, this also gives a more natural feeling when a human interacts with the robot. To determine where the sound source is positioned, an angle relative to the microphone pair is calculated using the interaural time difference (ITD), which is the difference in the time of arrival of the sound between the pair of microphones. To achieve good results, the microphone signals need to be preprocessed, and there are also different algorithms for calculating the time difference, which are investigated in this thesis. The results presented in this work are from tests, with an emphasis on real-time systems, involving noisy environments and response time. The results show the complexity of the balance between computational time and precision.
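The thesis abstract above describes the core computation: estimate the inter-microphone time difference and convert it to an angle. As a hedged illustration of that idea only (not of the thesis's actual algorithms, which compare several estimators), the sketch below reads the delay off the peak of a cross-correlation and applies the far-field relation theta = arcsin(c * ITD / d); the sample rate, microphone spacing, and test signal are made up for the example.

```python
import numpy as np

def estimate_azimuth(left, right, fs, mic_distance, c=343.0):
    """Estimate source azimuth (degrees) from the inter-microphone delay.

    The delay is read off the peak of the full cross-correlation; a positive
    delay means the sound reached the 'left' microphone first, and positive
    angles point toward that microphone.
    """
    corr = np.correlate(left, right, mode="full")
    delay_samples = (len(right) - 1) - np.argmax(corr)    # samples by which 'right' lags
    itd = delay_samples / fs                              # seconds
    sin_theta = np.clip(c * itd / mic_distance, -1.0, 1.0)
    return float(np.degrees(np.arcsin(sin_theta)))

# Synthetic check: a noise burst that reaches the right microphone 3 samples late.
fs, d = 48_000, 0.20                                      # 48 kHz, 20 cm spacing
src = np.random.default_rng(0).standard_normal(4800)
left = np.concatenate([src, np.zeros(3)])
right = np.concatenate([np.zeros(3), src])
print(f"estimated azimuth ~ {estimate_azimuth(left, right, fs, d):.1f} degrees")
```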
2

Riehm, Christopher D. M. A. "On the Behavioral Dynamics of Human Sound Localization: Two Experiments Concerning Active Localization." University of Cincinnati / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ucin159549944844441.

3

Jin, Craig T. "Spectral analysis and resolving spatial ambiguities in human sound localization." Thesis, University of Sydney, 2001. http://hdl.handle.net/2123/1342.

Abstract:
Thesis (Ph. D.)--School of Electrical and Information Engineering, Faculty of Engineering, University of Sydney, 2001.
Title from title screen (viewed 13 January 2009). Submitted in fulfilment of the requirements for the degree of Doctor of Philosophy to the School of Electrical and Information Engineering, Faculty of Engineering. Includes bibliographical references. Also available in print form.
4

Jin, Craig. "Spectral analysis and resolving spatial ambiguities in human sound localization." Thesis, The University of Sydney, 2001. http://hdl.handle.net/2123/1342.

Abstract:
This dissertation provides an overview of my research over the last five years into the spectral analysis involved in human sound localization. The work involved conducting psychophysical tests of human auditory localization performance and then applying analytical techniques to analyze and explain the data. It is a fundamental thesis of this work that human auditory localization response directions are primarily driven by the auditory localization cues associated with the acoustic filtering properties of the external auditory periphery, i.e., the head, torso, shoulder, neck, and external ears. This work can be considered as composed of three parts. In the first part of this work, I compared the auditory localization performance of a human subject and a time-delay neural network model under three sound conditions: broadband, high-pass, and low-pass. A “black-box” modeling paradigm was applied. The modeling results indicated that training the network to localize sounds of varying center-frequency and bandwidth could degrade localization performance results in a manner demonstrating some similarity to human auditory localization performance. As the data collected during the network modeling showed that humans demonstrate striking localization errors when tested using bandlimited sound stimuli, the second part of this work focused on human sound localization of bandpass filtered noise stimuli. Localization data was collected from 5 subjects and for 7 sound conditions: 300 Hz to 5 kHz, 300 Hz to 7 kHz, 300 Hz to 10 kHz, 300 Hz to 14 kHz, 3 to 8 kHz, 4 to 9 kHz, and 7 to 14 kHz. The localization results were analyzed using the method of cue similarity indices developed by Middlebrooks (1992). The data indicated that the energy level in relatively wide frequency bands could be driving the localization response directions, just as in Butler’s covert peak area model (see Butler and Musicant, 1993). The question was then raised as to whether the energy levels in the various frequency bands, as described above, are most likely analyzed by the human auditory localization system on a monaural or an interaural basis. In the third part of this work, an experiment was conducted using virtual auditory space sound stimuli in which the monaural spectral cues for auditory localization were disrupted, but the interaural spectral difference cue was preserved. The results from this work showed that the human auditory localization system relies primarily on a monaural analysis of spectral shape information for its discrimination of directions on the cone of confusion. The work described in the three parts lead to the suggestion that a spectral contrast model based on overlapping frequency bands of varying bandwidth and perhaps multiple frequency scales can provide a reasonable algorithm for explaining much of the current psychophysical and neurophysiological data related to human auditory localization.
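Jin's abstract argues that response directions on the cone of confusion are driven mainly by a monaural analysis of spectral shape. As a loose, hypothetical illustration of spectral-shape template matching (not the cue similarity index of Middlebrooks, 1992, nor Jin's own analysis), the sketch below correlates a measured band-level spectrum, in dB, against stored direction templates after removing the overall level; the templates, band count, and direction labels are synthetic.

```python
import numpy as np

def best_matching_direction(observed_db, templates_db):
    """Return the direction whose stored dB spectrum best matches in *shape*."""
    obs = observed_db - observed_db.mean()                 # discard overall level
    scores = {d: float(np.corrcoef(obs, t - t.mean())[0, 1])
              for d, t in templates_db.items()}            # shape similarity per direction
    return max(scores, key=scores.get), scores

# Invented direction templates: dB gain in 16 high-frequency bands per elevation.
rng = np.random.default_rng(1)
templates = {elev: rng.normal(0.0, 6.0, size=16) for elev in (-30, 0, 30)}
observed = templates[30] + rng.normal(0.0, 1.0, size=16)   # noisy view of the +30 deg filter
best, scores = best_matching_direction(observed, templates)
print("best-matching elevation:", best, "deg")
```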
5

Jin, Craig. "Spectral analysis and resolving spatial ambiguities in human sound localization." University of Sydney, 2001. http://hdl.handle.net/2123/1342.

Abstract:
This dissertation provides an overview of my research over the last five years into the spectral analysis involved in human sound localization. The work involved conducting psychophysical tests of human auditory localization performance and then applying analytical techniques to analyze and explain the data. It is a fundamental thesis of this work that human auditory localization response directions are primarily driven by the auditory localization cues associated with the acoustic filtering properties of the external auditory periphery, i.e., the head, torso, shoulder, neck, and external ears. This work can be considered as composed of three parts. In the first part of this work, I compared the auditory localization performance of a human subject and a time-delay neural network model under three sound conditions: broadband, high-pass, and low-pass. A “black-box” modeling paradigm was applied. The modeling results indicated that training the network to localize sounds of varying center-frequency and bandwidth could degrade localization performance results in a manner demonstrating some similarity to human auditory localization performance. As the data collected during the network modeling showed that humans demonstrate striking localization errors when tested using bandlimited sound stimuli, the second part of this work focused on human sound localization of bandpass filtered noise stimuli. Localization data was collected from 5 subjects and for 7 sound conditions: 300 Hz to 5 kHz, 300 Hz to 7 kHz, 300 Hz to 10 kHz, 300 Hz to 14 kHz, 3 to 8 kHz, 4 to 9 kHz, and 7 to 14 kHz. The localization results were analyzed using the method of cue similarity indices developed by Middlebrooks (1992). The data indicated that the energy level in relatively wide frequency bands could be driving the localization response directions, just as in Butler’s covert peak area model (see Butler and Musicant, 1993). The question was then raised as to whether the energy levels in the various frequency bands, as described above, are most likely analyzed by the human auditory localization system on a monaural or an interaural basis. In the third part of this work, an experiment was conducted using virtual auditory space sound stimuli in which the monaural spectral cues for auditory localization were disrupted, but the interaural spectral difference cue was preserved. The results from this work showed that the human auditory localization system relies primarily on a monaural analysis of spectral shape information for its discrimination of directions on the cone of confusion. The work described in the three parts lead to the suggestion that a spectral contrast model based on overlapping frequency bands of varying bandwidth and perhaps multiple frequency scales can provide a reasonable algorithm for explaining much of the current psychophysical and neurophysiological data related to human auditory localization.
6

Benichoux, Victor. "Timing cues for azimuthal sound source localization." PhD thesis, Université René Descartes - Paris V, 2013. http://tel.archives-ouvertes.fr/tel-00931645.

Abstract:
Azimuth sound localization in many animals relies on the processing of differences in time-of-arrival of the low-frequency sounds at both ears: the interaural time differences (ITD). It was observed in some species that this cue depends on the spectrum of the signal emitted by the source. Yet, this variation is often discarded, as humans and animals are assumed to be insensitive to it. The purpose of this thesis is to assess this dependency using acoustical techniques, and explore the consequences of this additional complexity on the neurophysiology and psychophysics of sound localization. In the vicinity of rigid spheres, a sound field is diffracted, leading to frequency-dependent wave propagation regimes. Therefore, when the head is modeled as a rigid sphere, the ITD for a given position is a frequency-dependent quantity. I show that this is indeed reflected on human ITDs by studying acoustical recordings for a large number of human and animal subjects. Furthermore, I explain the effect of this variation at two scales. Locally in frequency the ITD introduces different envelope and fine structure delays in the signals reaching the ears. Second the ITD for low-frequency sounds is generally bigger than for high frequency sounds coming from the same position. In a second part, I introduce and discuss the current views on the binaural ITD-sensitive system in mammals. I expose that the heterogenous responses of such cells are well predicted when it is assumed that they are tuned to frequency-dependent ITDs. Furthermore, I discuss how those cells can be made to be tuned to a particular position in space irregardless of the frequency content of the stimulus. Overall, I argue that current data in mammals is consistent with the hypothesis that cells are tuned to a single position in space. Finally, I explore the impact of the frequency-dependence of ITD on human behavior, using psychoacoustical techniques. Subjects are asked to match the lateral position of sounds presented with different frequency content. Those results suggest that humans perceive sounds with different frequency contents at the same position provided that they have different ITDs, as predicted from acoustical data. The extent to which this occurs is well predicted by a spherical model of the head. Combining approaches from different fields, I show that the binaural system is remarkably adapted to the cues available in its environment. This processing strategy used by animals can be of great inspiration to the design of robotic systems.
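The abstract states that the ITD produced by a rigid, roughly spherical head is frequency dependent, with low-frequency ITDs larger than high-frequency ITDs for the same direction. The sketch below illustrates that point with the two classic spherical-head approximations, Kuhn's low-frequency limit 3(a/c)sin(theta) and the Woodworth high-frequency formula (a/c)(theta + sin(theta)); the 8.75 cm head radius is an assumed textbook value, not a parameter from the thesis.

```python
import numpy as np

def itd_low_freq(theta_deg, radius=0.0875, c=343.0):
    """Low-frequency (diffraction) limit: ITD ~ 3 (a/c) sin(theta)  (Kuhn, 1977)."""
    return 3.0 * radius / c * np.sin(np.radians(theta_deg))

def itd_high_freq(theta_deg, radius=0.0875, c=343.0):
    """High-frequency limit: ITD ~ (a/c) (theta + sin(theta))  (Woodworth formula)."""
    th = np.radians(theta_deg)
    return radius / c * (th + np.sin(th))

# The low-frequency ITD exceeds the high-frequency ITD at every azimuth.
for az in (15, 45, 90):
    lo, hi = itd_low_freq(az) * 1e6, itd_high_freq(az) * 1e6
    print(f"azimuth {az:>2} deg: low-freq ITD ~ {lo:4.0f} us, high-freq ITD ~ {hi:4.0f} us")
```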
7

Kim, Ui-Hyun. "Improvement of Sound Source Localization for a Binaural Robot of Spherical Head with Pinnae." 京都大学 (Kyoto University), 2013. http://hdl.handle.net/2433/180475.

8

Hedges, Mitchell Lawrence. "An investigation into the use of intuitive control interfaces and distributed processing for enhanced three dimensional sound localization." Thesis, Rhodes University, 2016. http://hdl.handle.net/10962/d1020615.

Abstract:
This thesis investigates the feasibility of using gestures as a means of control for localizing three dimensional (3D) sound sources in a distributed immersive audio system. A prototype system was implemented and tested which uses state of the art technology to achieve the stated goals. A Windows Kinect is used for gesture recognition, which translates human gestures into control messages for the prototype system, which in turn performs actions based on the recognized gestures. The term distributed in the context of this system refers to the audio processing capacity. The prototype system partitions and allocates the processing load between a number of endpoints. The reallocated processing load consists of the mixing of audio samples according to a specification. The endpoints used in this research are XMOS AVB endpoints. The firmware on these endpoints was modified to include the audio mixing capability, which was controlled by a state of the art audio distribution networking standard, Ethernet AVB. The hardware used for the implementation of the prototype system is relatively cost efficient in comparison to professional audio hardware, and is also commercially available for end users. The successful implementation and results from user testing of the prototype system demonstrate that it is a feasible option for recording the localization of a sound source. The ability to partition the processing provides a modular approach to building immersive sound systems. This removes the constraint of a centralized mixing console with a predetermined speaker configuration.
9

Lirussi, Igor. "Human-Robot interaction with low computational-power humanoids." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/19120/.

Abstract:
This article investigates the possibilities of human-humanoid interaction with robots whose computational power is limited. The project has been carried during a year of work at the Computer and Robot Vision Laboratory (VisLab), part of the Institute for Systems and Robotics in Lisbon, Portugal. Communication, the basis of interaction, is simultaneously visual, verbal, and gestural. The robot's algorithm provides users a natural language communication, being able to catch and understand the person’s needs and feelings. The design of the system should, consequently, give it the capability to dialogue with people in a way that makes possible the understanding of their needs. The whole experience, to be natural, is independent from the GUI, used just as an auxiliary instrument. Furthermore, the humanoid can communicate with gestures, touch and visual perceptions and feedbacks. This creates a totally new type of interaction where the robot is not just a machine to use, but a figure to interact and talk with: a social robot.
10

Feinkohl, Arne. "Psychophysical experiments on sound localization in starlings and humans." Supervisors: Georg Klump, Hans Colonius, and Julia Fischer. Oldenburg: BIS der Universität Oldenburg, 2015. http://d-nb.info/1079000283/34.


Books on the topic "Human sound localization"

1

Wightman, Frederic. Monaural sound localization revisited. [Washington, DC]: National Aeronautics and Space Administration, 1997.

2

Blauert, Jens. Spatial Hearing: The Psychophysics of Human Sound Localization. Cambridge, MA: MIT Press, 1997.

3

Opstal, John van. Auditory System and Human Sound-Localization Behavior. Elsevier Science & Technology Books, 2016.

4

Blauert, Jens. Spatial Hearing: The Psychophysics of Human Sound Localization. The MIT Press, 1996.

5

The Auditory System and Human Sound-Localization Behavior. Elsevier, 2016. http://dx.doi.org/10.1016/c2014-0-00203-1.


Book chapters on the topic "Human sound localization"

1

Wu, Kai, and Andy W. H. Khong. "Sound Source Localization and Tracking." In Human–Computer Interaction Series, 55–78. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-19947-4_3.

2

Wightman, Frederic L., Doris J. Kistler, and Mark E. Perkins. "A New Approach to the Study of Human Sound Localization." In Proceedings in Life Sciences, 26–48. New York, NY: Springer US, 1987. http://dx.doi.org/10.1007/978-1-4612-4738-8_2.

3

Pavan, Gianni, Gregory Budney, Holger Klinck, Hervé Glotin, Dena J. Clink, and Jeanette A. Thomas. "History of Sound Recording and Analysis Equipment." In Exploring Animal Behavior Through Sound: Volume 1, 1–36. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-97540-1_1.

Abstract:
Over the last 100 years, there has been an explosion of research in the field of animal bioacoustics. These changes have been facilitated by technological advances, decrease in size and cost of recording equipment, increased battery life and data storage capabilities, the transition from analog-to-digital recorders, and the development of sound analysis software. Acousticians can now study the airborne and underwater sounds from vocal species across the globe at temporal and spatial scales that were not previously feasible and often in the absence of human observers. Many advances in the field of bioacoustics were enabled by equipment initially developed for the military, professional musicians, and radio, TV, and film industries. This chapter reviews the history of the development of sound recorders, transducers (i.e., microphones and hydrophones), and signal processing hardware and software used in animal bioacoustics research. Microphones and hydrophones can be used as a single sensor or as an array of elements facilitating the localization of sound sources. Analog recorders, which relied on magnetic tape, have been replaced with digital recorders; acoustic data was initially stored on tapes, but is now stored on optical discs, hard drives, and/or solid-state memories. Recently, tablets and smartphones have become popular recording and analysis devices. With these advances, it has never been easier, or more cost-efficient, to study the sounds of the world.
4

Tyagi, Aakanksha, Sanjeev Kumar, and Munesh Trivedi. "Sound Localization in 3-D Space Using Kalman Filter and Neural Network for Human like Robotics." In Networking Communication and Data Knowledge Engineering, 227–43. Singapore: Springer Singapore, 2017. http://dx.doi.org/10.1007/978-981-10-4585-1_19.

5

Middlebrooks, John C. "Sound localization." In The Human Auditory System - Fundamental Organization and Clinical Disorders, 99–116. Elsevier, 2015. http://dx.doi.org/10.1016/b978-0-444-62630-1.00006-8.

6

van Opstal, John. "Sound Localization Plasticity." In The Auditory System and Human Sound-Localization Behavior, 333–60. Elsevier, 2016. http://dx.doi.org/10.1016/b978-0-12-801529-2.00012-x.

7

W., Blake. "Processing of Binaural Information in Human Auditory Cortex." In Advances in Sound Localization. InTech, 2011. http://dx.doi.org/10.5772/14268.

8

van Opstal, John. "Impaired Hearing and Sound Localization." In The Auditory System and Human Sound-Localization Behavior, 393–412. Elsevier, 2016. http://dx.doi.org/10.1016/b978-0-12-801529-2.00014-3.

9

Ohkura, Michiko, Yasuyuki Yanagida, and Susumu Tachi. "Sound distance localization using virtual environment." In Advances in Human Factors/Ergonomics, 485–90. Elsevier, 1995. http://dx.doi.org/10.1016/s0921-2647(06)80079-1.

10

van Opstal, John. "Acoustic Localization Cues." In The Auditory System and Human Sound-Localization Behavior, 171–208. Elsevier, 2016. http://dx.doi.org/10.1016/b978-0-12-801529-2.00007-6.


Conference papers on the topic "Human sound localization"

1

Jayaweera, W. G. Nuwan, A. G. Buddhika P. Jayasekara, and A. M. Harsha S. Abeykoon. "Sound localization: Human vs. machine." In 2014 7th International Conference on Information and Automation for Sustainability (ICIAfS). IEEE, 2014. http://dx.doi.org/10.1109/iciafs.2014.7069551.

2

Greene, N. T., and G. D. Paige. "Influence of sound source width on human sound localization." In 2012 34th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE, 2012. http://dx.doi.org/10.1109/embc.2012.6347472.

3

Marentakis, Georgios, and Rudolfs Liepins. "Evaluation of hear-through sound localization." In CHI '14: CHI Conference on Human Factors in Computing Systems. New York, NY, USA: ACM, 2014. http://dx.doi.org/10.1145/2556288.2557168.

4

Quang Nguyen and JongSuk Choi. "Multiple sound sources localization with perception sensor network." In 2013 IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). IEEE, 2013. http://dx.doi.org/10.1109/roman.2013.6628515.

5

Sodnik, Jaka, Saso Tomazic, Raphael Grasset, Andreas Duenser, and Mark Billinghurst. "Spatial sound localization in an augmented reality environment." In the 20th conference of the computer-human interaction special interest group (CHISIG) of Australia. New York, New York, USA: ACM Press, 2006. http://dx.doi.org/10.1145/1228175.1228197.

6

Hu, Hong, Meiling Wang, Mengyin Fu, and Yi Yang. "Sound Source Localization Sensor of Robot for TDOA Method." In 2011 International Conference on Intelligent Human-Machine Systems and Cybernetics (IHMSC). IEEE, 2011. http://dx.doi.org/10.1109/ihmsc.2011.75.

7

Rothbucher, Martin, Marko Durkovic, Tim Habigt, Hao Shen, and Klaus Diepold. "HRTF-based localization and separation of multiple sound sources." In 2012 RO-MAN: The 21st IEEE International Symposium on Robot and Human Interactive Communication. IEEE, 2012. http://dx.doi.org/10.1109/roman.2012.6343894.

8

Cha, Elizabeth, Naomi T. Fitter, Yunkyung Kim, Terrence Fong, and Maja J. Matarić. "Effects of Robot Sound on Auditory Localization in Human-Robot Collaboration." In HRI '18: ACM/IEEE International Conference on Human-Robot Interaction. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3171221.3171285.

9

Sung-Wan Kim, Ji-Yong Lee, Doik Kim, Bum-Jae You, and Nakju Lett Don. "Human localization based on the fusion of vision and sound system." In 2011 8th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI 2011). IEEE, 2011. http://dx.doi.org/10.1109/urai.2011.6145870.

10

Nakamura, Keisuke, Kazuhiro Nakadai, Futoshi Asano, and Gokhan Ince. "Intelligent sound source localization and its application to multimodal human tracking." In 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2011). IEEE, 2011. http://dx.doi.org/10.1109/iros.2011.6048166.


Reports on the topic "Human sound localization"

1

Cardoso, Leonardo, Roberto A. Tenenbaum, Ranny L. X. N. Michalski, Olavo M. Silva, and William D’Andrea Fonseca. Resenha de livros: A edição nº 53 recebe resenhas também dos autores [Book reviews: Issue no. 53 also features reviews by the authors]. Revista Acústica e Vibrações, December 2021. http://dx.doi.org/10.55753/aev.v36e53.49.

Abstract:
In this issue of the journal, this section features five book reviews. There is something new: the first two reviews were written by the authors of the books themselves, while the other three were prepared by the editors of issue 53. Another novelty is that all of the reviews are also available in English (starting on page 7). We remind readers that the reviews are written in a brief, concise form, summarizing the content of the books (on subjects related to the various sciences involving acoustics, vibration, and audio) and providing information about the authors (to further contextualize the works). For this issue we present reviews of the following books: Sound-Politics in São Paulo, by Leonardo Cardoso (Oxford Press, 2019); Dinâmica Aplicada, by Roberto A. Tenenbaum (Editora Manole, 2016, 4th ed.); Acústica nos Edifícios, by Jorge Patrício (Publindústria, 2018, 7th ed.); Understanding Acoustics: An Experimentalist's View of Sound and Vibration, by Steven L. Garrett (Springer, 2020); and Spatial Hearing: The Psychophysics of Human Sound Localization, by Jens Blauert (MIT Press, 1996, rev. ed.). Leonardo Cardoso, a professor at Texas A&M University, presents his book on sound politics in São Paulo. Roberto Tenenbaum, a professor at UFSM, presents the fourth edition of his work, important for understanding acoustics and vibration. Next comes one of the books by Jorge Patrício, a Portuguese reference in building acoustics. The fourth book is Understanding Acoustics, by Steven Garrett. Finally, the classic book on spatial hearing by the German professor Jens Blauert is presented. We hope that reading the reviews offers a first impression of these works and sparks the desire to read them in full: an excellent way to broaden one's knowledge and stay up to date.
