To see the other types of publications on this topic, follow the link: Sound location.

Journal articles on the topic 'Sound location'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Sound location.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Stanley, Jenni A., Craig A. Radford, and Andrew G. Jeffs. "Location, location, location: finding a suitable home among the noise." Proceedings of the Royal Society B: Biological Sciences 279, no. 1742 (June 6, 2012): 3622–31. http://dx.doi.org/10.1098/rspb.2012.0697.

Abstract:
While sound is a useful cue for guiding the onshore orientation of larvae because it travels long distances underwater, it also has the potential to convey valuable information about the quality and type of the habitat at the source. Here, we provide, to our knowledge, the first evidence that settlement-stage coastal crab species can interpret and show a strong settlement and metamorphosis response to habitat-related differences in natural underwater sound. Laboratory- and field-based experiments demonstrated that time to metamorphosis in the settlement-stage larvae of common coastal crab species varied in response to different underwater sound signatures produced by different habitat types. The megalopae of five species of both temperate and tropical crabs showed a significant decrease in time to metamorphosis, when exposed to sound from their optimal settlement habitat type compared with other habitat types. These results indicate that sounds emanating from specific underwater habitats may play a major role in determining spatial patterns of recruitment in coastal crab species.
2

Möttönen, Riikka, Kaisa Tiippana, Mikko Sams, and Hanna Puharinen. "Sound Location Can Influence Audiovisual Speech Perception When Spatial Attention Is Manipulated." Seeing and Perceiving 24, no. 1 (2011): 67–90. http://dx.doi.org/10.1163/187847511x557308.

Abstract:
Audiovisual speech perception has been considered to operate independent of sound location, since the McGurk effect (altered auditory speech perception caused by conflicting visual speech) has been shown to be unaffected by whether speech sounds are presented in the same or different location as a talking face. Here we show that sound location effects arise with manipulation of spatial attention. Sounds were presented from loudspeakers in five locations: the centre (location of the talking face) and 45°/90° to the left/right. Auditory spatial attention was focused on a location by presenting the majority (90%) of sounds from this location. In Experiment 1, the majority of sounds emanated from the centre, and the McGurk effect was enhanced there. In Experiment 2, the major location was 90° to the left, causing the McGurk effect to be stronger on the left and centre than on the right. Under control conditions, when sounds were presented with equal probability from all locations, the McGurk effect tended to be stronger for sounds emanating from the centre, but this tendency was not reliable. Additionally, reaction times were the shortest for a congruent audiovisual stimulus, and this was the case independent of location. Our main finding is that sound location can modulate audiovisual speech perception, and that spatial attention plays a role in this modulation.
3

Elko, Gary W., James L. Flanagan, and James D. Johnston. "Sound location arrangement." Journal of the Acoustical Society of America 84, no. 5 (November 1988): 1966. http://dx.doi.org/10.1121/1.397072.

4

Porter, Kristin Kelly, Ryan R. Metzger, and Jennifer M. Groh. "Representation of Eye Position in Primate Inferior Colliculus." Journal of Neurophysiology 95, no. 3 (March 2006): 1826–42. http://dx.doi.org/10.1152/jn.00857.2005.

Abstract:
We studied the representation of eye-position information in the primate inferior colliculus (IC). Monkeys fixated visual stimuli at one of eight or nine locations along the horizontal meridian between −24 and 24° while sounds were presented from loudspeakers at locations within that same range. Approximately 40% of our sample of 153 neurons showed statistically significant sensitivity to eye position during either the presentation of an auditory stimulus or in the absence of sound (Bonferroni corrected P < 0.05). The representation for eye position was predominantly monotonic and favored contralateral eye positions. Eye-position sensitivity was more prevalent among neurons without sound-location sensitivity: about half of neurons that were insensitive to sound location were sensitive to eye position, whereas only about one-quarter of sound-location-sensitive neurons were also sensitive to eye position. Our findings suggest that sound location and eye position are encoded using independent but overlapping rate codes at the level of the IC. The use of a common format has computational advantages for integrating these two signals. The differential distribution of eye-position sensitivity and sound-location sensitivity suggests that this process has begun by the level of the IC but is not yet complete at this stage. We discuss how these signals might fit into Groh and Sparks' vector subtraction model for coordinate transformations.
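The vector subtraction model mentioned at the end of this abstract is, at its core, a coordinate transform: subtracting current eye position from a head-centered sound azimuth yields an eye-centered location. A minimal sketch (the numeric values are illustrative, not data from the paper):

```python
def eye_centered_azimuth(head_centered_deg, eye_position_deg):
    """Groh and Sparks-style vector subtraction: head-centered sound
    azimuth minus eye position gives an eye-centered azimuth."""
    return head_centered_deg - eye_position_deg

# A sound 10 degrees right of the head, while the eyes fixate 24 degrees
# to the right, lands 14 degrees to the left of the line of sight.
print(eye_centered_azimuth(10.0, 24.0))  # -14.0
```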
5

Middlebrooks, John C., Li Xu, Ann Clock Eddins, and David M. Green. "Codes for Sound-Source Location in Nontonotopic Auditory Cortex." Journal of Neurophysiology 80, no. 2 (August 1, 1998): 863–81. http://dx.doi.org/10.1152/jn.1998.80.2.863.

Abstract:
Middlebrooks, John C., Li Xu, Ann Clock Eddins, and David M. Green. Codes for sound-source location in nontonotopic auditory cortex. J. Neurophysiol. 80: 863–881, 1998. We evaluated two hypothetical codes for sound-source location in the auditory cortex. The topographical code assumed that single neurons are selective for particular locations and that sound-source locations are coded by the cortical location of small populations of maximally activated neurons. The distributed code assumed that the responses of individual neurons can carry information about locations throughout 360° of azimuth and that accurate sound localization derives from information that is distributed across large populations of such panoramic neurons. We recorded from single units in the anterior ectosylvian sulcus area (area AES) and in area A2 of α-chloralose–anesthetized cats. Results obtained in the two areas were essentially equivalent. Noise bursts were presented from loudspeakers spaced in 20° intervals of azimuth throughout 360° of the horizontal plane. Spike counts of the majority of units were modulated >50% by changes in sound-source azimuth. Nevertheless, sound-source locations that produced greater than half-maximal spike counts often spanned >180° of azimuth. The spatial selectivity of units tended to broaden and, often, to shift in azimuth as sound pressure levels (SPLs) were increased to a moderate level. We sometimes saw systematic changes in spatial tuning along segments of electrode tracks as long as 1.5 mm but such progressions were not evident at higher sound levels. Moderate-level sounds presented anywhere in the contralateral hemifield produced greater than half-maximal activation of nearly all units. These results are not consistent with the hypothesis of a topographic code. We used an artificial-neural-network algorithm to recognize spike patterns and, thereby, infer the locations of sound sources. Network input consisted of spike density functions formed by averages of responses to eight stimulus repetitions. Information carried in the responses of single units permitted reasonable estimates of sound-source locations throughout 360° of azimuth. The most accurate units exhibited median errors in localization of <25°, meaning that the network output fell within 25° of the correct location on half of the trials. Spike patterns tended to vary with stimulus SPL, but level-invariant features of patterns permitted estimates of locations of sound sources that varied through 20-dB ranges. Sound localization based on spike patterns that preserved details of spike timing consistently was more accurate than localization based on spike counts alone. These results support the hypothesis that sound-source locations are represented by a distributed code and that individual neurons are, in effect, panoramic localizers.
6

Yang, Haoping, Chunlin Yue, Cenyi Wang, Aijun Wang, Zonghao Zhang, and Li Luo. "Effect of Target Semantic Consistency in Different Sequence Positions and Processing Modes on T2 Recognition: Integration and Suppression Based on Cross-Modal Processing." Brain Sciences 13, no. 2 (February 16, 2023): 340. http://dx.doi.org/10.3390/brainsci13020340.

Abstract:
In the rapid serial visual presentation (RSVP) paradigm, sound affects participants’ recognition of targets. Although many studies have shown that sound improves cross-modal processing, researchers have not yet explored the effects of sound semantic information with respect to different locations and processing modalities after removing sound saliency. In this study, the RSVP paradigm was used to investigate the difference between attention under conditions of consistent and inconsistent semantics with the target (Experiment 1), as well as the difference between top-down (Experiment 2) and bottom-up processing (Experiment 3) for sounds with consistent semantics with target 2 (T2) at different sequence locations after removing sound saliency. The results showed that cross-modal processing significantly improved attentional blink (AB). The early or lagged appearance of sounds consistent with T2 did not affect participants’ judgments in the exogenous attentional modality. However, visual target judgments were improved with endogenous attention. The sequential location of sounds consistent with T2 influenced the judgment of auditory and visual congruency. The results illustrate the effects of sound semantic information in different locations and processing modalities.
7

Ikemi, Itsuki, Kazunori Harada, Akiko Sugahara, and Yasuhiro Hiraguri. "A basic study on estimating location of sound source by using distributed acoustic measurement network." INTER-NOISE and NOISE-CON Congress and Conference Proceedings 263, no. 3 (August 1, 2021): 3530–37. http://dx.doi.org/10.3397/in-2021-2439.

Abstract:
The sounds from childcare facilities are often a cause of noise problems with neighbors; however, since the sound power levels of children's play and other sounds in childcare facilities are not yet well characterized, evaluation methods have not been established, making countermeasures difficult. In order to evaluate the noise, it is necessary to model the location of the sound source and the sound power level. We have been developing a sound source identification system that uses multiple Raspberry Pi-based recording devices to estimate the location of a sound source and sound power levels. By using GPS for time synchronization, the system can be distributed and placed without connecting cables, which is expected to expand the measurement area significantly. As the estimation method, the arrival time difference is calculated by cross-correlation from the signals input to each recording device, and the sound source location is estimated from the calculated arrival time difference and the location information of the devices. The effectiveness of this system was verified in an anechoic room and in outdoor fields.
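As a rough illustration of the arrival-time-difference estimation this abstract describes, the sketch below cross-correlates two synthetic, time-synchronized recordings and converts the lag of the correlation peak into a time difference of arrival. NumPy, the sampling rate, and the 5 ms synthetic delay are assumptions for the example, not details from the paper:

```python
import numpy as np

def tdoa_by_cross_correlation(sig_a, sig_b, fs):
    """Estimate the time difference of arrival (s) of sig_b relative to sig_a."""
    corr = np.correlate(sig_b, sig_a, mode="full")   # peak index encodes the lag
    lags = np.arange(-(len(sig_a) - 1), len(sig_b))  # lag of each correlation sample
    return lags[np.argmax(corr)] / fs

# Toy example: the same noise burst reaches the second device 5 ms later.
fs = 48_000
burst = np.random.default_rng(1).standard_normal(1024)
delay = int(0.005 * fs)
sig_a = np.concatenate([burst, np.zeros(delay)])
sig_b = np.concatenate([np.zeros(delay), burst])
print(tdoa_by_cross_correlation(sig_a, sig_b, fs))   # ~0.005
```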
8

Buchner, Axel, Raoul Bell, Klaus Rothermund, and Dirk Wentura. "Sound source location modulates the irrelevant-sound effect." Memory & Cognition 36, no. 3 (April 2008): 617–28. http://dx.doi.org/10.3758/mc.36.3.617.

9

Miyauchi, Ryota, Dae-Gee Kang, Yukio Iwaya, and Yôiti Suzuki. "Relative Localization of Auditory and Visual Events Presented in Peripheral Visual Field." Multisensory Research 27, no. 1 (2014): 1–16. http://dx.doi.org/10.1163/22134808-00002442.

Abstract:
The brain apparently remaps the perceived locations of simultaneous auditory and visual events into a unified audio-visual space to integrate and/or compare multisensory inputs. However, there is little qualitative or quantitative data on how simultaneous auditory and visual events are located in the peripheral visual field (i.e., outside a few degrees of the fovea). We presented a sound burst and a flashing light simultaneously not only in the central visual field but also in the peripheral visual field and measured the relative perceived locations of the sound and flash. The results revealed that the sound and flash were perceptually located at the same location when the sound was presented at a 5° periphery of the flash, even when the participants’ eyes were fixed. Measurements of the unisensory locations of each sound and flash in a pointing task demonstrated that the perceived location of the sound shifted toward the front, while the perceived location of the flash shifted toward the periphery. As a result, the discrepancy between the perceptual location of the sound and the flash was around 4°. This suggests that the brain maps the unisensory locations of auditory and visual events into a unified audio-visual space, enabling it to generate unisensory spatial information about the events.
10

Mickey, Brian J., and John C. Middlebrooks. "Sensitivity of Auditory Cortical Neurons to the Locations of Leading and Lagging Sounds." Journal of Neurophysiology 94, no. 2 (August 2005): 979–89. http://dx.doi.org/10.1152/jn.00580.2004.

Abstract:
We recorded unit activity in the auditory cortex (fields A1, A2, and PAF) of anesthetized cats while presenting paired clicks with variable locations and interstimulus delays (ISDs). In human listeners, such sounds elicit the precedence effect, in which localization of the lagging sound is impaired at ISDs ≲10 ms. In the present study, neurons typically responded to the leading stimulus with a brief burst of spikes, followed by suppression lasting 100–200 ms. At an ISD of 20 ms, at which listeners report a distinct lagging sound, only 12% of units showed discrete lagging responses. Long-lasting suppression was found in all sampled cortical fields, for all leading and lagging locations, and at all sound levels. Recordings from awake cats confirmed this long-lasting suppression in the absence of anesthesia, although recovery from suppression was faster in the awake state. Despite the lack of discrete lagging responses at delays of 1–20 ms, the spike patterns of 40% of units varied systematically with ISD, suggesting that many neurons represent lagging sounds implicitly in their temporal firing patterns rather than explicitly in discrete responses. We estimated the amount of location-related information transmitted by spike patterns at delays of 1–16 ms under conditions in which we varied only the leading location or only the lagging location. Consistent with human psychophysical results, transmission of information about the leading location was high at all ISDs. Unlike listeners, however, transmission of information about the lagging location remained low, even at ISDs of 12–16 ms.
11

Yao, Justin D., Peter Bremen, and John C. Middlebrooks. "Transformation of spatial sensitivity along the ascending auditory pathway." Journal of Neurophysiology 113, no. 9 (May 2015): 3098–111. http://dx.doi.org/10.1152/jn.01029.2014.

Abstract:
Locations of sounds are computed in the central auditory pathway based primarily on differences in sound level and timing at the two ears. In rats, the results of that computation appear in the primary auditory cortex (A1) as exclusively contralateral hemifield spatial sensitivity, with strong responses to sounds contralateral to the recording site, sharp cutoffs across the midline, and weak, sound-level-tolerant responses to ipsilateral sounds. We surveyed the auditory pathway in anesthetized rats to identify the brain level(s) at which level-tolerant spatial sensitivity arises. Noise-burst stimuli were varied in horizontal sound location and in sound level. Neurons in the central nucleus of the inferior colliculus (ICc) displayed contralateral tuning at low sound levels, but tuning was degraded at successively higher sound levels. In contrast, neurons in the nucleus of the brachium of the inferior colliculus (BIN) showed sharp, level-tolerant spatial sensitivity. The ventral division of the medial geniculate body (MGBv) contained two discrete neural populations, one showing broad sensitivity like the ICc and one showing sharp sensitivity like A1. Dorsal, medial, and shell regions of the MGB showed fairly sharp spatial sensitivity, likely reflecting inputs from A1 and/or the BIN. The results demonstrate two parallel brainstem pathways for spatial hearing. The tectal pathway, in which sharp, level-tolerant spatial sensitivity arises between ICc and BIN, projects to the superior colliculus and could support reflexive orientation to sounds. The lemniscal pathway, in which such sensitivity arises between ICc and the MGBv, projects to the forebrain to support perception of sound location.
12

Sawada, Hideyuki, and Toshiya Takechi. "A Robotic Auditory System that Interacts with Musical Sounds and Human Voices." Journal of Advanced Computational Intelligence and Intelligent Informatics 11, no. 10 (December 20, 2007): 1177–83. http://dx.doi.org/10.20965/jaciii.2007.p1177.

Abstract:
Voice and sounds are the primary media employed for human communication. Humans are able to exchange information smoothly using voice under different situations, such as a noisy environment and in the presence of multiple speakers. We are surrounded by various sounds, and yet are able to detect the location of a sound source in 3D space, extract a particular sound from a mixture of sounds, and recognize the source of a specific sound. Also, music is composed of various sounds generated by musical instruments, and directly affects our emotions and feelings. This paper introduces real-time detection and identification of a particular sound among plural sound sources using a microphone array based on the location of a speaker and the tonal characteristics. The technique will also be applied to an adaptive auditory system of a robotic arm, which interacts with humans.
13

Middlebrooks, John C., Ann Clock Eddins, Li Xu, and David M. Green. "Cortical codes for sound location." Journal of the Acoustical Society of America 97, no. 5 (May 1995): 3398. http://dx.doi.org/10.1121/1.413003.

14

Bourquin, Nathalie M. P., Micah M. Murray, and Stephanie Clarke. "Location-independent and location-linked representations of sound objects." NeuroImage 73 (June 2013): 40–49. http://dx.doi.org/10.1016/j.neuroimage.2013.01.026.

15

Bennett, Erica E., and Ruth Y. Litovsky. "Sound Localization in Toddlers with Normal Hearing and with Bilateral Cochlear Implants Revealed Through a Novel “Reaching for Sound” Task." Journal of the American Academy of Audiology 31, no. 03 (March 2020): 195–208. http://dx.doi.org/10.3766/jaaa.18092.

Abstract:
Spatial hearing abilities in children with bilateral cochlear implants (BiCIs) are typically improved when two implants are used compared with a single implant. However, even with BiCIs, spatial hearing is still worse compared to normal-hearing (NH) age-matched children. Here, we focused on children who were younger than three years, hence in their toddler years. Prior research with this age group focused on measuring discrimination of sounds from the right versus left. This study measured both discrimination and sound location identification in a nine-alternative forced-choice paradigm using the “reaching for sound” method, whereby children reached for sounding objects as a means of capturing their spatial hearing abilities. Discrimination was measured with sounds randomly presented to the left versus right, and loudspeakers at fixed angles ranging from ±60° to ±15°. On a separate task, sound location identification was measured for locations ranging from ±60° in 15° increments. Participants were thirteen children with BiCIs (27–42 months old) and fifteen age-matched NH children. Discrimination and sound localization tasks were completed for all subjects. For the left–right discrimination task, participants were required to reach a criterion of 4/5 correct trials (80%) at each angular separation prior to beginning the localization task. For sound localization, data were analyzed in two ways. First, percent correct scores were tallied for each participant. Second, for each participant, the root-mean-square error was calculated to determine the average distance between the response and stimulus, indicative of localization accuracy. All BiCI users were able to discriminate left versus right at angles as small as ±15° when listening with two implants; however, performance was significantly worse when listening with a single implant. All NH toddlers also had >80% correct at ±15°. Sound localization results revealed root-mean-square errors averaging 11.15° in NH toddlers. Children in the BiCI group were generally unable to identify source location on this complex task (average error 37.03°). Although some toddlers with BiCIs are able to localize sound in a manner consistent with NH toddlers, for the majority of toddlers with BiCIs, sound localization abilities are still emerging.
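The localization-accuracy measure used in this study (root-mean-square error between response and stimulus angles) is straightforward to compute; a minimal sketch with made-up trial data:

```python
import numpy as np

def rms_error_deg(responses_deg, stimuli_deg):
    """Root-mean-square angular error between responses and source angles."""
    diff = np.asarray(responses_deg, float) - np.asarray(stimuli_deg, float)
    return float(np.sqrt(np.mean(diff ** 2)))

# Hypothetical trials: loudspeaker angles and where the child reached.
stimuli = [-60, -45, -30, -15, 15, 30, 45, 60]
responses = [-45, -45, -15, -15, 15, 45, 30, 60]
print(round(rms_error_deg(responses, stimuli), 2))  # 10.61 degrees
```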
16

Xie, Dong, Min Wang, Jian Qu Zhu, and Feng Wang. "An Equipment Fault Sound Location System Design." Applied Mechanics and Materials 462-463 (November 2013): 298–301. http://dx.doi.org/10.4028/www.scientific.net/amm.462-463.298.

Abstract:
Locating a sound source is an important technique for equipment failure diagnostic applications. This paper designs a specific fault sound location system for capturing a specific sound source and determining its orientation. The two functions of sound recognition and sound location are combined. A similarity calculation method based on frequency-domain characteristics is used to identify the sound signal. A source localization algorithm based on time delay estimation is used to calculate the time difference of arrival of the sound signal. By combining hardware circuit preprocessing with the software design, the orientation of the sound source is determined from the known spatial positions of the sound pickup arrays. The sound location system uses two MCUs to achieve its function, with a simple, efficient algorithm and high accuracy.
17

Groh, Jennifer M., Kristin A. Kelly, and Abigail M. Underhill. "A Monotonic Code for Sound Azimuth in Primate Inferior Colliculus." Journal of Cognitive Neuroscience 15, no. 8 (November 1, 2003): 1217–31. http://dx.doi.org/10.1162/089892903322598166.

Abstract:
We investigated the format of the code for sound location in the inferior colliculi of three awake monkeys (Macaca mulatta). We found that roughly half of our sample of 99 neurons was sensitive to the free-field locations of broadband noise presented in the frontal hemisphere. Such neurons nearly always responded monotonically as a function of sound azimuth, with stronger responses for more contralateral sound locations. Few, if any, neurons had circumscribed receptive fields. Spatial sensitivity was broad: the proportion of the total sample of neurons responding to a sound at a given location ranged from 30% for ipsilateral locations to 80% for contralateral locations. These findings suggest that sound azimuth is represented via a population rate code of very broadly responsive neurons in primate inferior colliculi. This representation differs in format from the place code used for encoding the locations of visual and tactile stimuli and poses problems for the eventual convergence of auditory and visual or somatosensory signals. Accordingly, models for converting this representation into a place code are discussed.
18

Ko, Daijin, and Judith E. Zeh. "Detection of Migration Using Sound Location." Biometrics 44, no. 3 (September 1988): 751. http://dx.doi.org/10.2307/2531589.

19

Lovelace, Eugene A., and Donna M. Anderson. "The Role of Vision in Sound Localization." Perceptual and Motor Skills 77, no. 3 (December 1993): 843–50. http://dx.doi.org/10.2466/pms.1993.77.3.843.

Abstract:
Three experiments examined the role of vision in locating a brief sound (2-sec. speech noise) from an unseen source in the horizontal left front quadrant. The head could be freely moved. Subjects could point to the sound location more accurately with eyes open. However, since in a second study the accuracy of pointing a finger was poorer than for aiming one's eyes at the sound, the effect in the first study may reflect using vision to calibrate the hand location. A third study showed no difference in accuracy of aiming one's eyes at a sound when eyes were open versus closed during presentation of sound. More accurate auditory localization with eyes open than closed was not supported.
20

Innami, Satoshi, and Hiroyuki Kasai. "Super-realistic environmental sound synthesizer for location-based sound search system." IEEE Transactions on Consumer Electronics 57, no. 4 (November 2011): 1891–98. http://dx.doi.org/10.1109/tce.2011.6131168.

21

Harusawa, Koki, Yumi Inamura, Masaaki Hiroe, Hideyuki Hasegawa, Kentaro Nakamura, and Mari Ueda. "Measurement of very high frequency (VHF) sound in our daily experiences." INTER-NOISE and NOISE-CON Congress and Conference Proceedings 263, no. 2 (August 1, 2021): 4275–82. http://dx.doi.org/10.3397/in-2021-2647.

Abstract:
Recently, it has frequently been reported that very high frequency (VHF) sounds are emitted from daily necessities such as home electric appliances. Although we measured VHF sounds from home electric appliances in our previous study, the origins of such VHF sounds have not yet been identified. In the present study, we tried to identify the VHF sound source in each home electric appliance using a "sound camera", which visualizes the spatial distribution of the sound intensity using a microphone array. The sound camera visualized the location of the sound source at frequencies from 2 to 52 kHz with a field of view of 63 degrees. The sound camera elucidated that the VHF sounds were emitted from the power source of an LED light, the ventilation duct of an electric fan, and the body of an IH cooker. Their frequency characteristics were dependent on the sound source, i.e., combinations of pure tones for the LED light and sound distributed over a wide frequency range for the electric fan.
22

Choi, Y. H., J. S. Kim, and Gihoon Byun. "Source localization based on steered frequency–wavenumber analysis for sparse array." Journal of the Acoustical Society of America 153, no. 5 (May 1, 2023): 3065. http://dx.doi.org/10.1121/10.0019552.

Abstract:
When using a sparse array, locating the target signal of a high-frequency component is difficult. Although forecasting the direction in a sparse situation is challenging, the frequency–wavenumber (f–k) spectrum can simultaneously determine the direction and frequency of the analyzed signal. The striation of the f–k spectrum shifts along the wavenumber axis in a sparse situation, which reduces the spatial resolution required to determine the target's direction using the f–k spectrum. In this study, f–k spectra of a high-frequency signal were used for near-field source localization. Snapping shrimp sounds (5–24 kHz) from SAVEX15 (a shallow-water acoustic variability experiment conducted in May 2015) were used as the data source, and a simulation was used to evaluate the proposed method. Beam steering was performed before creating the f–k spectrum to improve spatial resolution. We found that the spatial resolution was improved, and the location of the sound source could be determined when a signal with beam steering was utilized. The shrimp sound from SAVEX15, a near-field broadband signal, was used to determine the shrimp's location (range, 38 m; depth, 100 m) and the tilt of the vertical line array. These results suggest that the proposed analysis helps to accurately estimate the location of sound source.
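In the spirit of the beam-steering step the authors apply before computing the f–k spectrum, here is a minimal delay-and-sum sketch for a line array. The geometry convention, the underwater sound speed of 1500 m/s, and the use of frequency-domain fractional delays are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def delay_and_sum(signals, fs, positions_m, steer_deg, c=1500.0):
    """Steer a line array toward steer_deg (0 = broadside) and sum the elements.

    Convention: a plane wave from the positive-angle side reaches elements at
    larger positions earlier, so each element is delayed by p*sin(theta)/c to
    align the arrivals before summing.
    """
    signals = np.asarray(signals, dtype=float)  # shape (n_elements, n_samples)
    n = signals.shape[1]
    taus = np.asarray(positions_m) * np.sin(np.radians(steer_deg)) / c
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    out = np.zeros(n)
    for sig, tau in zip(signals, taus):
        # fractional-sample delay applied in the frequency domain
        out += np.fft.irfft(np.fft.rfft(sig) * np.exp(-2j * np.pi * freqs * tau), n)
    return out / len(signals)
```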
23

Zhang, Chao, De Jiang Shang, and Qi Li. "Effect of Drive Location on Vibro-Acoustic Characteristics of Submerged Double Cylindrical Shells with Damping Layers." Applied Mechanics and Materials 387 (August 2013): 59–63. http://dx.doi.org/10.4028/www.scientific.net/amm.387.59.

Abstract:
Based on the modal superposition method, the analytical model of vibration and sound radiation from submerged double cylindrical shells with damping layers was presented. The shells were described by classical thin shell theory. The damping layers were described by three-dimensional viscoelastic theory. The annular plates, connecting the double shells, were analyzed with in-plane motion theory. For different drive locations of a radial point force on the inner shell, the sound radiated power and the radial quadratic velocity of the model were calculated and analyzed. The results show that making the drive location near the annular plate helps to reduce the sound radiated power and the radial quadratic velocity of the model, and making the drive location far from the middle of the model also helps to reduce the sound radiated power. A drive applied at the location of the annular plate causes high similarity between the vibrations of the inner shell and the outer shell.
24

Doan, Daryl E., and James C. Saunders. "Sensitivity to Simulated Directional Sound Motion in the Rat Primary Auditory Cortex." Journal of Neurophysiology 81, no. 5 (May 1, 1999): 2075–87. http://dx.doi.org/10.1152/jn.1999.81.5.2075.

Abstract:
Sensitivity to simulated directional sound motion in the rat primary auditory cortex. This paper examines neuron responses in rat primary auditory cortex (AI) during sound stimulation of the two ears designed to simulate sound motion in the horizontal plane. The simulated sound motion was synthesized from mathematical equations that generated dynamic changes in interaural phase, intensity, and Doppler shifts at the two ears. The simulated sounds were based on moving sources in the right frontal horizontal quadrant. Stimuli consisted of three circumferential segments between 0 and 30°, 30 and 60°, and 60 and 90° and four radial segments at 0, 30, 60, and 90°. The constant velocity portion of each segment was 0.84 m long. The circumferential segments and center of the radial segments were calculated to simulate a distance of 2 m from the head. Each segment had two trajectories that simulated motion in both directions, and each trajectory was presented at two velocities. Young adult rats were anesthetized, the left primary auditory cortex was exposed, and microelectrode recordings were obtained from sound responsive cells in AI. All testing took place at a tonal frequency that most closely approximated the best frequency of the unit at a level 20 dB above the tuning curve threshold. The results were presented on polar plots that emphasized the two directions of simulated motion for each segment rather than the location of sound in space. The trajectory exhibiting a “maximum motion response” could be identified from these plots. “Neuron discharge profiles” within these trajectories were used to demonstrate neuron activity for the two motion directions. Cells were identified that clearly responded to simulated uni- or multidirectional sound motion (39%), that were sensitive to sound location only (19%), or that were sound driven but insensitive to our location or sound motion stimuli (42%). The results demonstrated the capacity of neurons in rat auditory cortex to selectively process dynamic stimulus conditions representing simulated motion on the horizontal plane. Our data further show that some cells were responsive to location along the horizontal plane but not sensitive to motion. Cells sensitive to motion, however, also responded best to the moving sound at a particular location within the trajectory. It would seem that the mechanisms underlying sensitivity to sound location as well as direction of motion converge on the same cell.
25

Xu, Bin, Dan Yang, Yun Yi Zhang, and Xu Wang. "Research on the Peripheral Sound Visualization Using the Improved Ripple Mode." Advanced Engineering Forum 2-3 (December 2011): 123–26. http://dx.doi.org/10.4028/www.scientific.net/aef.2-3.123.

Abstract:
In this paper, we propose a peripheral sound visualization method for the deaf based on an improved ripple mode. In the proposed mode, we designed the processes of transforming sound intensity and determining the locations of sound sources. We used a power spectrum function to determine the sound intensity. An ART1 neural network was applied to identify the kind of real-time input sound signal and to display the locations of the sound sources. We present software that aids the development of peripheral displays, and four sample peripheral displays are used to demonstrate our toolkit's capabilities. The results show that the proposed ripple mode correctly displayed the combined information of sound intensity and sound source location, and that the ART1 neural network made accurate identifications of input audio signals. Moreover, we found that participants in the research were more likely to obtain more information about the locations of sound sources.
26

Iordache, Vlad, and Mihai Vlad Ionita. "Urban sound energy reduction by means of sound barriers." E3S Web of Conferences 32 (2018): 01024. http://dx.doi.org/10.1051/e3sconf/20183201024.

Abstract:
In the urban environment, the various heating, ventilation, and air-conditioning appliances designed to maintain indoor comfort become vectors of urban acoustic pollution due to the sound energy they produce. Acoustic barriers are the recommended method for reducing sound energy in the urban environment. The current sizing method for these acoustic barriers is laborious and is not practical for arbitrary 3D locations of the noisy equipment and the reception point. In this study we develop, based on the same method, a new simplified tool for acoustic barrier sizing that maintains the precision of the classical method. Abacuses for acoustic barrier sizing are built that can be used for different 3D locations of the source and reception points, for several frequencies, and for several acoustic barrier heights. The case study presented in the article confirms the rapidity and ease of use of these abacuses in the design of acoustic barriers.
27

Castro-Camacho, Wendy, Yolanda Peñaloza-López, Santiago J. Pérez-Ruiz, Felipe García-Pedroza, Ana L. Padilla-Ortiz, Adrián Poblano, Concepción Villarruel-Rivas, Alfredo Romero-Díaz, and Aidé Careaga-Olvera. "Sound localization and word discrimination in reverberant environment in children with developmental dyslexia." Arquivos de Neuro-Psiquiatria 73, no. 4 (April 2015): 314–20. http://dx.doi.org/10.1590/0004-282x20150005.

Abstract:
Objective: Compare whether localization of sounds and word discrimination in a reverberant environment differ between children with dyslexia and controls. Method: We studied 30 children with dyslexia and 30 controls. Sound and word localization and discrimination were studied at five angles across the left-to-right auditory field (-90°, -45°, 0°, +45°, +90°), under reverberant and non-reverberant conditions; correct answers were compared. Results: Spatial localization of words in the non-reverberant test was deficient in children with dyslexia at 0° and +90°. Spatial localization in the reverberant test was altered in children with dyslexia at all angles except -90°. Word discrimination in the non-reverberant test showed poor performance in children with dyslexia at angles on the left. In the reverberant test, children with dyslexia exhibited deficiencies at the -45°, -90°, and +45° angles. Conclusion: Children with dyslexia may have problems locating sounds and discriminating words at extreme locations of the horizontal plane in classrooms with reverberation.
28

Störmer, Viola S., Wenfeng Feng, Antigona Martinez, John J. McDonald, and Steven A. Hillyard. "Salient, Irrelevant Sounds Reflexively Induce Alpha Rhythm Desynchronization in Parallel with Slow Potential Shifts in Visual Cortex." Journal of Cognitive Neuroscience 28, no. 3 (March 2016): 433–45. http://dx.doi.org/10.1162/jocn_a_00915.

Abstract:
Recent findings suggest that a salient, irrelevant sound attracts attention to its location involuntarily and facilitates processing of a colocalized visual event [McDonald, J. J., Störmer, V. S., Martinez, A., Feng, W. F., & Hillyard, S. A. Salient sounds activate human visual cortex automatically. Journal of Neuroscience, 33, 9194–9201, 2013]. Associated with this cross-modal facilitation is a sound-evoked slow potential over the contralateral visual cortex termed the auditory-evoked contralateral occipital positivity (ACOP). Here, we further tested the hypothesis that a salient sound captures visual attention involuntarily by examining sound-evoked modulations of the occipital alpha rhythm, which has been strongly associated with visual attention. In two purely auditory experiments, lateralized irrelevant sounds triggered a bilateral desynchronization of occipital alpha-band activity (10–14 Hz) that was more pronounced in the hemisphere contralateral to the sound's location. The timing of the contralateral alpha-band desynchronization overlapped with that of the ACOP (∼240–400 msec), and both measures of neural activity were estimated to arise from neural generators in the ventral-occipital cortex. The magnitude of the lateralized alpha desynchronization was correlated with ACOP amplitude on a trial-by-trial basis and between participants, suggesting that they arise from or are dependent on a common neural mechanism. These results support the hypothesis that the sound-induced alpha desynchronization and ACOP both reflect the involuntary cross-modal orienting of spatial attention to the sound's location.
29

Larsen, Ole N. "Does the environment constrain avian sound localization?" Anais da Academia Brasileira de Ciências 76, no. 2 (June 2004): 267–73. http://dx.doi.org/10.1590/s0001-37652004000200013.

Abstract:
A bird needs to keep track not only of social interactions of conspecifics but also of their changing locations in space by determining their directions and distances. Current knowledge of accuracy in the computation of sound source location by birds is still insufficient, partly because physiological mechanisms of few species are studied in well defined laboratory settings, while field studies are performed in a variety of species and complex environments. Velocity gradients and reverberating surfaces may conceivably induce inaccuracy in sound source location (mainly elevation) by distorting the directional cues. However, most birds possess an inherently directional pressure difference receiver, which enhances the directional cues (mainly azimuth), and a computational mechanism in their auditory pathways to suppress echoes of redirected sound.
30

Lewald, Jörg, Klaus A. J. Riederer, Tobias Lentz, and Ingo G. Meister. "Processing of sound location in human cortex." European Journal of Neuroscience 27, no. 5 (March 2008): 1261–70. http://dx.doi.org/10.1111/j.1460-9568.2008.06094.x.

31

Blackman, Allen, and Alan Krupnick. "Location-Efficient Mortgages: Is the Rationale Sound?" Journal of Policy Analysis and Management 20, no. 4 (2001): 633–49. http://dx.doi.org/10.1002/pam.1021.

32

Getzmann, Stephan. "The Effect of Spectral Difference on Auditory Saltation." Experimental Psychology 55, no. 1 (January 2008): 64–71. http://dx.doi.org/10.1027/1618-3169.55.1.64.

Abstract:
Auditory saltation is a spatiotemporal illusion in which the judged positions of sound stimuli are shifted toward subsequent stimuli that follow closely in time and space. In this study, the “reduced-rabbit” paradigm and a direct-location method were employed to investigate the effect of spectral sound content on the saltation illusion. Eighteen listeners were presented with sound sequences consisting of three high-pass or low-pass filtered noise bursts. Noise bursts within a sequence were either the same or differed in frequency. Listeners judged the position of the second sound using a hand pointer. When the time interval between the second and third sound was short, the target was shifted toward the location of the subsequent stimulus. This displacement effect did not depend on the spectral content of the first sound, but decreased substantially when the second and third sounds were different. The results indicated an effect of spectral difference on saltation that is discussed with regard to a recently proposed stimulus integration approach in which saltation was attributed to an interaction between perceptual processing of temporally proximate stimuli.
33

van der Heijden, Kiki, Elia Formisano, Giancarlo Valente, Minye Zhan, Ron Kupers, and Beatrice de Gelder. "Reorganization of Sound Location Processing in the Auditory Cortex of Blind Humans." Cerebral Cortex 30, no. 3 (September 13, 2019): 1103–16. http://dx.doi.org/10.1093/cercor/bhz151.

Abstract:
Auditory spatial tasks induce functional activation in the occipital—visual—cortex of early blind humans. Less is known about the effects of blindness on auditory spatial processing in the temporal—auditory—cortex. Here, we investigated spatial (azimuth) processing in congenitally and early blind humans with a phase-encoding functional magnetic resonance imaging (fMRI) paradigm. Our results show that functional activation in response to sounds in general—independent of sound location—was stronger in the occipital cortex but reduced in the medial temporal cortex of blind participants in comparison with sighted participants. Additionally, activation patterns for binaural spatial processing were different for sighted and blind participants in planum temporale. Finally, fMRI responses in the auditory cortex of blind individuals carried less information on sound azimuth position than those in sighted individuals, as assessed with a 2-channel, opponent coding model for the cortical representation of sound azimuth. These results indicate that early visual deprivation results in reorganization of binaural spatial processing in the auditory cortex and that blind individuals may rely on alternative mechanisms for processing azimuth position.
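The 2-channel opponent code used here as a readout model can be caricatured with two broadly tuned hemifield channels whose difference encodes azimuth. The sigmoid slope and the decoding-by-difference readout below are illustrative assumptions, not the fitted model from the paper:

```python
import numpy as np

def hemifield_channels(azimuth_deg, slope=0.05):
    """Two mirror-symmetric, broadly tuned channels for a source at azimuth_deg."""
    right = 1.0 / (1.0 + np.exp(-slope * float(azimuth_deg)))  # grows toward +90 deg
    left = 1.0 - right                                         # mirror image
    return left, right

for az in (-90, -45, 0, 45, 90):
    l, r = hemifield_channels(az)
    print(f"azimuth {az:+4d}: left={l:.2f} right={r:.2f} opponent signal={r - l:+.2f}")
```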
34

Chiu, M.-C. "Noise identification in reverberant sound field by using simulated annealing." Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science 222, no. 2 (February 1, 2008): 163–79. http://dx.doi.org/10.1243/09544062jmes622.

Abstract:
Noise control is important and essential in an enclosed machine room, where the noise level is restricted by the occupational safety and health act. Before appropriate noise abatement is performed, identifying the location and free-field sound energy of equipment inside the reverberant sound field becomes crucial and an absolute prerequisite. Research on new techniques of single-noise control and sound absorption systems has been well addressed and developed; however, research on sound identification for an existing multi-noise enclosed room is rare and observably insufficient. Without the actual location and pure free-field noise level, noise control work will be improper and wasted; therefore, a numerical approach to noise recognition from the reverberant sound field becomes necessary and obligatory. In this paper, the novel technique of simulated annealing (SA) in conjunction with the method of minimized variation square is applied in the numerical optimization. In addition, various sound monitoring systems for detecting the noise condition within the echo area are also introduced. Before noise identification can be carried out, the accuracy of the mathematical model in a single-noise enclosed system has to be checked by SoundPlan (a professional sound-field simulation package). Thereafter, SA recognition of three kinds of multi-noise systems is exemplified and fully explored. The results reveal that both the locations and the sound power levels (SWLs) of the noises can be precisely distinguished. Consequently, this paper may provide an efficient and rapid way of distinguishing the location and free-field noise level of equipment in a complicated sound field.
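For readers unfamiliar with the simulated annealing technique applied here, the sketch below shows the bare acceptance-and-cooling loop on a toy one-dimensional cost; the cost function, step size, and geometric cooling schedule are illustrative assumptions, not the paper's setup:

```python
import math, random

def simulated_annealing(cost, x0, t0=1.0, cooling=0.995, steps=5000):
    x = best = x0
    t = t0
    for _ in range(steps):
        candidate = x + random.uniform(-0.5, 0.5)
        delta = cost(candidate) - cost(x)
        # accept downhill moves always, uphill moves with probability e^(-delta/T)
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = candidate
            if cost(x) < cost(best):
                best = x
        t *= cooling  # geometric cooling schedule
    return best

print(round(simulated_annealing(lambda x: (x - 3.2) ** 2, x0=0.0), 2))  # ~3.2
```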
35

Jones, Heath G., Andrew D. Brown, Kanthaiah Koka, Jennifer L. Thornton, and Daniel J. Tollin. "Sound frequency-invariant neural coding of a frequency-dependent cue to sound source location." Journal of Neurophysiology 114, no. 1 (July 2015): 531–39. http://dx.doi.org/10.1152/jn.00062.2015.

Abstract:
The century-old duplex theory of sound localization posits that low- and high-frequency sounds are localized with two different acoustical cues, interaural time and level differences (ITDs and ILDs), respectively. While behavioral studies in humans and behavioral and neurophysiological studies in a variety of animal models have largely supported the duplex theory, behavioral sensitivity to ILD is curiously invariant across the audible spectrum. Here we demonstrate that auditory midbrain neurons in the chinchilla (Chinchilla lanigera) also encode ILDs in a frequency-invariant manner, efficiently representing the full range of acoustical ILDs experienced as a joint function of sound source frequency, azimuth, and distance. We further show, using Fisher information, that nominal “low-frequency” and “high-frequency” ILD-sensitive neural populations can discriminate ILD with similar acuity, yielding neural ILD discrimination thresholds for near-midline sources comparable to behavioral discrimination thresholds estimated for chinchillas. These findings thus suggest a revision to the duplex theory and reinforce ecological and efficiency principles that hold that neural systems have evolved to encode the spectrum of biologically relevant sensory signals to which they are naturally exposed.
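The interaural level difference (ILD) cue at the center of this study is simply the ratio of ear levels expressed in decibels; a minimal sketch on synthetic signals (the signal content and the factor-of-two amplitude difference are made up for the example):

```python
import numpy as np

def ild_db(left, right):
    """ILD in dB; positive values mean the right ear is more intense."""
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    return 20.0 * np.log10(rms(right) / rms(left))

noise = np.random.default_rng(0).standard_normal(4800)
print(round(ild_db(0.5 * noise, noise), 1))  # twice the amplitude on the right: ~6 dB
```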
36

He, Xingcheng. "Sound source signal location and tracking system based on STM32." Theoretical and Natural Science 5, no. 1 (May 25, 2023): 948–54. http://dx.doi.org/10.54254/2753-8818/5/20230569.

Abstract:
With the progress of science and technology, sound source localization technology has a wide range of applications in urban traffic, digital hearing aids, mechanical systems, and other fields, especially in noise monitoring. To address problems such as failure risk, this technology can accurately determine the location of the problem. At present, low-end manual techniques are used in many fields, which is inefficient and a waste of time. Therefore, this paper designs a sound source signal positioning and tracking system based on the STM32. The system takes the STM32F407VET6 as its control core, with a speaker as a sound source that can emit a self-defined, regular sound. After the amplification circuits, the signal is sent to the microcontroller for processing. After sampling, the distance and offset angle between the test point and the sound source are calculated from the time deviation between the sound signals collected by the two microphones. The system then controls the steering gear rotation and laser alignment to realize positioning and tracking of the sound source, and displays the distance and offset information between the test point and the sound source on the terminal. With the development of modern technology, sound source localization can be widely used in daily life, and with the development of machine learning, cloud computing, and electronic technology, this technology will have even broader application prospects.
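The far-field geometry behind the two-microphone approach described above reduces to theta = arcsin(c * dt / d) for a time offset dt across microphone spacing d. A minimal sketch (the 20 cm spacing and 0.25 ms delay are made-up values, not the system's parameters):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def bearing_from_delay(dt_s, mic_spacing_m, c=SPEED_OF_SOUND):
    """Offset angle (degrees) from broadside for an inter-microphone delay dt_s."""
    s = c * dt_s / mic_spacing_m
    s = max(-1.0, min(1.0, s))  # clamp numerical noise to the valid arcsin range
    return math.degrees(math.asin(s))

print(round(bearing_from_delay(2.5e-4, 0.20), 1))  # ~25.4 degrees
```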
37

Nazaré, Cristina Jordão, and Armando Mónica Oliveira. "Effects of Audiovisual Presentations on Visual Localization Errors: One or Several Multisensory Mechanisms?" Multisensory Research 34, no. 6 (April 20, 2021): 587–621. http://dx.doi.org/10.1163/22134808-bja10048.

Abstract:
The present study examines the extent to which temporal and spatial properties of sound modulate visual motion processing in spatial localization tasks. Participants were asked to locate the place at which a moving visual target unexpectedly vanished. Across different tasks, accompanying sounds were factorially varied within subjects as to their onset and offset times and/or positions relative to visual motion. Sound onset had no effect on the localization error. Sound offset was shown to modulate the perceived visual offset location, both for temporal and spatial disparities. This modulation did not conform to attraction toward the timing or location of the sounds but, demonstrably in the case of temporal disparities, to bimodal enhancement instead. Favorable indications to a contextual effect of audiovisual presentations on interspersed visual-only trials were also found. The short sound-leading offset asynchrony had equivalent benefits to audiovisual offset synchrony, suggestive of the involvement of early-level mechanisms, constrained by a temporal window, at these conditions. Yet, we tentatively hypothesize that the whole of the results and how they compare with previous studies requires the contribution of additional mechanisms, including learning-detection of auditory-visual associations and cross-sensory spread of endogenous attention.
38

Amrulloh, Muhammad Afif, and Haliyatul Hasanah. "Analisis Kesalahan Fonologis Membaca Teks Bahasa Arab Siswa Madrasah Tsanawiyah Lampung Selatan." Arabiyatuna : Jurnal Bahasa Arab 3, no. 2 (November 13, 2019): 209. http://dx.doi.org/10.29240/jba.v3i2.815.

Abstract:
This study aims to find, and help reduce, errors in learning Arabic in the aspect of reciting Arabic letters (makhorijul huruf), using the error analysis method. The focus is more specifically on the phonetic aspect, namely on letters with similar sounds. In addition, this study also aims to determine the forms of errors in reading Arabic texts at the phonological level (phonetic/makhroj aspects). This is a qualitative study, conducted at MTs. Raudlatul Jannah Natar, South Lampung. Understanding how to pronounce or sound Arabic letters is very important in learning Arabic to avoid pronunciation errors, so that they do not hamper the learning process. The results of this study indicate that the forms of phonological errors that often occur when reading Arabic texts are sound errors in terms of articulation factors, including: 1) apico-dental-alveolar sounds; 2) inter-dental sounds; 3) fronto-palatal sounds; 4) dorso-uvular sounds; 5) dorso-velar sounds; 6) apico-alveolar sounds; 7) root-pharyngeal sounds. Sound errors in terms of the manner of articulation occur in: 1) fricative sounds; 2) plosive sounds.
39

Recanzone, Gregg H., and Nathan S. Beckerman. "Effects of intensity and location on sound location discrimination in macaque monkeys." Hearing Research 198, no. 1-2 (December 2004): 116–24. http://dx.doi.org/10.1016/j.heares.2004.07.017.

40

Lin, S. C., G. P. Too, and C. W. Tu. "Development of the Source Reconstruction System by Combining Sound Source Localization and Time Reversal Method." Journal of Mechanics 34, no. 1 (March 9, 2016): 35–40. http://dx.doi.org/10.1017/jmech.2016.13.

Abstract:
This study explored locating a target sound source in unknown situations, processing the received signal to determine the location of the target and to immediately reconstruct the source signal. In this paper, triangulation-based sound source localization and the time reversal method (TRM) are used to reconstruct the source signals. The purpose is to use a sound source localization method with a simple device to quickly locate the position of the sound source. The method uses a microphone array to measure the signal from the target sound source. The sound source location is then calculated and expressed in Cartesian coordinates. The sound source location is then used to evaluate a free-field impulse response function, which can replace the impulse response function used in the time reversal method. This process greatly reduces the computation time, which makes real-time source localization and source signal separation possible.
41

Liu, Zhaoting, Longqing Zou, Xianglou Liu, Jiale Qiao, and Xiangbin Meng. "A point sound source location and detection method based on 19-element hemispheric distributed acoustic pressure sensor array." Insight - Non-Destructive Testing and Condition Monitoring 63, no. 8 (August 1, 2021): 479–87. http://dx.doi.org/10.1784/insi.2021.63.8.479.

Abstract:
To solve the key problem of diagnosing the operating condition of an oil transfer pump unit in a 3D closed space, this paper presents an approach for a point sound source location and detection method based on a hemispheric distributed sound pressure sensor array. The array model consists of 19 sound pressure sensors acting in the radial direction and uniformly distributed over the hemispherical surface. A spatial rectangular coordinate system is established by taking the projection point of the central sensor arranged at the apex of the hemisphere to the ground as the origin of the spatial coordinates. With reference to the central sensor, the point sound source is located by selecting the maximum measured sound level and its spatial coordinate in each of the three layers of sensors surrounding it as parameters and using a triangular or a quadrilateral area location algorithm based on virtual instrument technology. According to the location of the source, the A-weighted sound level of the sound source point is derived by the inversion of the sound field distribution law. Results show that the triangular and quadrilateral area location algorithms are both effective. The errors in location become larger for a measured sound source far from the centre.
42

Hiramatsu, Koto, Shin-ichi Sakamoto, and Yoshiaki Watanabe. "Effect of an external sound superimposed on the self-excited oscillation in a loop-tube thermoacoustic system." Japanese Journal of Applied Physics 61, SG (March 23, 2022): SG1024. http://dx.doi.org/10.35848/1347-4065/ac45d6.

Full text
Abstract:
The influence of an external sound applied to a loop-tube-type thermoacoustic system on its energy conversion efficiency is examined experimentally, using a loudspeaker (SP) as the external sound source. The location of the SP is found to affect the sound field in the system, and the amount of energy generated increases or decreases depending on where the SP is placed. Furthermore, it is confirmed that, provided the SP is located near a particle velocity node, the sound energy can be increased by more than the input power to the SP without changing the sound field in the tube. These results confirm that, as in a straight-tube-type thermoacoustic system, the energy conversion efficiency can be enhanced by placing the SP at a suitable position even in a loop-tube-type system without end surfaces.
APA, Harvard, Vancouver, ISO, and other styles
43

Furukawa, Shigeto, and John C. Middlebrooks. "Cortical Representation of Auditory Space: Information-Bearing Features of Spike Patterns." Journal of Neurophysiology 87, no. 4 (April 1, 2002): 1749–62. http://dx.doi.org/10.1152/jn.00491.2001.

Full text
Abstract:
Previous studies have demonstrated that the spike patterns of cortical neurons vary systematically as a function of sound-source location such that the response of a single neuron can signal the location of a sound source throughout 360° of azimuth. The present study examined specific features of spike patterns that might transmit information related to sound-source location. Analysis was based on responses of well-isolated single units recorded from cortical area A2 in α-chloralose-anesthetized cats. Stimuli were 80-ms noise bursts presented from loudspeakers in the horizontal plane; source azimuths ranged through 360° in 20° steps. Spike patterns were averaged across samples of eight trials. A competitive artificial neural network (ANN) identified sound-source locations by recognizing spike patterns; the ANN was trained using the learning vector quantization learning rule. The information about stimulus location that was transmitted by spike patterns was computed from joint stimulus-response probability matrices. Spike patterns were manipulated in various ways to isolate particular features. Full-spike patterns, which contained all spike-count information and spike timing with 100-μs precision, transmitted the most stimulus-related information. Transmitted information was sensitive to disruption of spike timing on a scale of more than ∼4 ms and was reduced by an average of ∼35% when spike-timing information was obliterated entirely. In a condition in which all but the first spike in each pattern were eliminated, transmitted information decreased by an average of only ∼11%. In many cases, that condition showed essentially no loss of transmitted information. Three unidimensional features were extracted from spike patterns. Of those features, spike latency transmitted ∼60% more information than that transmitted either by spike count or by a measure of latency dispersion. Information transmission by spike patterns recorded on single trials was substantially reduced compared with the information transmitted by averages of eight trials. In a comparison of averaged and nonaveraged responses, however, the information transmitted by latencies was reduced by only ∼29%, whereas information transmitted by spike counts was reduced by 79%. Spike counts clearly are sensitive to sound-source location and could transmit information about sound-source locations. Nevertheless, the present results demonstrate that the timing of the first poststimulus spike carries a substantial amount, probably the majority, of the location-related information present in spike patterns. The results indicate that any complete model of the cortical representation of auditory space must incorporate the temporal characteristics of neuronal response patterns.
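The decoding-and-information pipeline the authors describe (a competitive network trained with the learning vector quantization rule, then transmitted information computed from the joint stimulus-response probability matrix) can be sketched roughly as follows. This is a simplified LVQ1 variant with hypothetical names, not the study's exact architecture.

```python
import numpy as np

def lvq1_step(protos, proto_labels, x, y, lr=0.05):
    """One LVQ1 update: pull the winning prototype toward the spike
    pattern if its label matches the true azimuth class, push it away
    otherwise. `protos` is modified in place."""
    k = int(np.argmin(np.linalg.norm(protos - x, axis=1)))
    sign = 1.0 if proto_labels[k] == y else -1.0
    protos[k] += sign * lr * (x - protos[k])
    return k

def decode(protos, proto_labels, x):
    """Competitive decoding: the azimuth class of the nearest prototype."""
    return proto_labels[int(np.argmin(np.linalg.norm(protos - x, axis=1)))]

def transmitted_information(joint_counts):
    """Bits of stimulus-related information from a joint
    stimulus-response matrix (rows: stimuli, columns: decoded classes)."""
    p = joint_counts / joint_counts.sum()
    ps = p.sum(axis=1, keepdims=True)
    pr = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (ps @ pr)[nz])).sum())
```

Comparing the information figure across manipulated spike patterns (full timing, timing obliterated, first spike only) is then a matter of re-running the same decoder on each representation.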
APA, Harvard, Vancouver, ISO, and other styles
44

Matook, Sherry, Mary Sullivan, Amy Salisbury, Robin Miller, and Barry Lester. "Variations of NICU Sound by Location and Time of Day." Neonatal Network 29, no. 2 (March 2010): 87–95. http://dx.doi.org/10.1891/0730-0832.29.2.87.

Full text
Abstract:
Purpose/Aims. The primary aim of this study was to identify time periods of sound levels >45 decibels (dB) in a large Level III NICU. The second aim was to determine whether there were differences in decibel levels across the five bays of the NICU, the four quadrants within each bay, and two 12-hour shifts. Design. A repeated measures design was used. Bay, quadrant, and shift were randomly selected for sampling. Staff and visitors were blinded to the location of the sound meter, which was placed in one of five identical wooden boxes and was preset to record for 12 hours. Sample. Sound levels were recorded every 60 seconds over 40 12-hour periods, 20 during the day shift and 20 during the night shift. Total hours measured were 480. Data were collected every other day during a three-month period. Covariates of staffing, infant census, infant acuity, and medical equipment were collected. Main Outcome Variable. The main outcome variable was sound level in decibels, measured as the energy-equivalent sound level (Leq), the peak instantaneous sound pressure level, and the maximum sound pressure level during each interval, for a total of 480 hours. Results. All sound levels were >45 dB, with average readings ranging from 49.5 to 89.5 dB. The middle bay had the highest levels, with an Leq of 85.74 dB. Quadrants at the back of a bay were louder than quadrants at the front of a bay. The day shift had higher decibel levels than the night shift. Covariates did not differ across bays or shifts.
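A note on the main outcome measure: the energy-equivalent level (Leq) averages on the energy scale rather than the decibel scale, so brief loud events dominate the result. A minimal sketch with illustrative numbers only:

```python
import numpy as np

def leq(levels_db):
    """Energy-equivalent sound level: average the per-interval readings
    as energies (10**(L/10)), then convert back to decibels."""
    levels_db = np.asarray(levels_db, dtype=float)
    return 10.0 * np.log10(np.mean(10.0 ** (levels_db / 10.0)))

# Four one-minute readings near the 45 dB guideline plus one loud event:
print(round(leq([46.0, 47.0, 48.0, 85.0]), 1))  # ~79.0 dB, dominated by the peak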
APA, Harvard, Vancouver, ISO, and other styles
45

Ratcliffe, L., and Christopher Naugler. "A Field Test of the Sound Environment Hypothesis of Conspecific Song Recognition in American Tree Sparrows (Spizella Arborea)." Behaviour 123, no. 3-4 (1992): 314–24. http://dx.doi.org/10.1163/156853992x00075.

Full text
Abstract:
The sound environment hypothesis predicts that the features of song that are important for conspecific recognition should be those that overlap least with the songs of other species in the same location. We tested this hypothesis with song playbacks to free-living male American tree sparrows (Spizella arborea). We first used a discriminant function analysis to determine which song features best separated American tree sparrow songs from the songs of heterospecific avian species at two locations representing different sound environments. Based on this analysis we predicted which song features should be most important for conspecific recognition at each location. We then produced synthetic songs containing alterations in these features. Males, however, did not respond to playbacks of altered features as predicted. Thus, our results do not support the sound environment hypothesis of conspecific song recognition.
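The first analysis step, a discriminant function analysis separating conspecific from heterospecific songs by acoustic features, might look roughly like the sketch below; the feature names and numbers are invented stand-ins, not the authors' measurements.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.preprocessing import StandardScaler

# Hypothetical per-song features: [peak frequency (kHz), note duration (ms),
# bandwidth (kHz)], stand-ins for whatever the authors actually measured.
X = np.array([[4.1, 120, 0.8], [4.3, 115, 0.7], [4.0, 125, 0.9],
              [2.9, 200, 1.9], [3.1, 185, 2.1], [2.7, 210, 1.8]])
y = np.array([1, 1, 1, 0, 0, 0])  # 1 = tree sparrow, 0 = heterospecific

Xz = StandardScaler().fit_transform(X)      # put features on one scale
lda = LinearDiscriminantAnalysis().fit(Xz, y)
# Large-magnitude coefficients mark the features that best separate
# conspecific song from the local sound environment, i.e. the features
# the hypothesis predicts should drive recognition at that location.
for name, c in zip(["peak_freq", "duration", "bandwidth"], lda.coef_[0]):
    print(f"{name}: {c:+.2f}")
```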
APA, Harvard, Vancouver, ISO, and other styles
46

Xu, Li, Shigeto Furukawa, and John C. Middlebrooks. "Sensitivity to Sound-Source Elevation in Nontonotopic Auditory Cortex." Journal of Neurophysiology 80, no. 2 (August 1, 1998): 882–94. http://dx.doi.org/10.1152/jn.1998.80.2.882.

Full text
Abstract:
We have demonstrated that the spike patterns of auditory cortical neurons carry information about sound-source location in azimuth. The question arises as to whether those units integrate the multiple acoustical cues that signal the location of a sound source or whether they merely demonstrate sensitivity to a specific parameter that covaries with sound-source azimuth, such as interaural level difference. We addressed that issue by testing the sensitivity of cortical neurons to sound locations in the median vertical plane, where interaural difference cues are negligible. Auditory unit responses were recorded from 14 α-chloralose–anesthetized cats. We studied 113 units in the anterior ectosylvian auditory area and 82 units in auditory area A2. Broadband noise stimuli were presented in an anechoic room from 14 locations in the vertical midline in 20° steps, from 60° below the front horizon, up and over the head, to 20° below the rear horizon, as well as from 18 locations in the horizontal plane. The spike counts of most units showed fairly broad elevation tuning. An artificial neural network was used to recognize spike patterns, which contain both the number and timing of spikes, and thereby estimate the locations of sound sources in elevation. For each unit, the median error of neural-network estimates was used as a measure of the network performance. For all 195 units, the average of the median errors was 46.4 ± 9.1° (mean ± SD), compared with the expectation of 65° based on chance performance. To address the question of whether sensitivity to sound pressure level (SPL) alone might account for the modest sensitivity to elevation of neurons, we measured SPLs from the cat's ear canal and compared the neural elevation sensitivity with the acoustical data. In many instances, the artificial neural network discriminated stimulus elevations even when the free-field sound produced identical SPLs in the ear canal. Conversely, two stimuli at the same elevation could produce the same network estimate of elevation, even when we varied sound-source SPL over a 20-dB range. There was a significant correlation between the accuracy of network performance in azimuth and in elevation. Most units that localized well in elevation also localized well in azimuth. Because the principal acoustic cues for localization in elevation differ from those for localization in azimuth, that positive correlation suggests that individual cortical neurons can integrate multiple cues for sound-source location.
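The evaluation the abstract reports (median error of network elevation estimates per unit, compared against a chance expectation) can be illustrated as follows; the guessing baseline here is a crude simulation, and the paper's own 65° chance figure may be computed differently.

```python
import numpy as np

# 14 median-plane targets: -60 deg (below the front horizon) up over the
# head to 200 deg (20 deg below the rear horizon), in 20 deg steps.
ELEVATIONS = np.arange(-60, 201, 20)

def median_error(true_el, est_el):
    """The paper's figure of merit: median absolute difference between
    actual and network-estimated elevation, in degrees."""
    return float(np.median(np.abs(np.asarray(est_el) - np.asarray(true_el))))

# Crude guessing baseline: a uniformly random estimate for each trial.
rng = np.random.default_rng(0)
true = rng.choice(ELEVATIONS, 20000)
guess = rng.choice(ELEVATIONS, 20000)
print(median_error(true, guess))  # roughly 70-80 deg for uniform guessing
```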
APA, Harvard, Vancouver, ISO, and other styles
47

Pinheiro, Sara. "Acousmatic Foley: Staging sound-fiction." Organised Sound 21, no. 3 (November 11, 2016): 242–48. http://dx.doi.org/10.1017/s1355771816000212.

Full text
Abstract:
This article proposes a narrative theory conceived in terms specific to sound practice. It addresses two different fields – Acousmatic Music and Foley Art – as a way of understanding sound narration and conceptualising it around the idea of fiction. To this end, it starts from the concepts of sound-motif, sound-prop and sound-actors in order to propose a dramaturgic practice cast in specifically sonic terms. The theory of sound dramaturgy acquires a practical outline through multichannel constellations as a composition strategy, with specific loudspeaker arrangements. The theory advocates loudspeakers as the mediators of the experience and the stage as part of the audience's assembly. This translates into a practice of staging sound fiction, one that focuses on formulating a conjecture based on formal and factual structures and allows a direct relationship between the listener and the listening, between the sounds and their fictional location.
APA, Harvard, Vancouver, ISO, and other styles
48

Uneda, Michio, and Kenichi Ishikawa. "Location Finding of Sound Sources by MUSIC Algorithm." Proceedings of the Symposium on Evaluation and Diagnosis 2003.2 (2003): 47–51. http://dx.doi.org/10.1299/jsmesed.2003.2.47.

Full text
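No abstract is available for this entry, but the title names the MUSIC algorithm; for orientation, a minimal narrowband MUSIC pseudospectrum for a uniform linear array (assumed geometry and known source count, not this paper's setup) looks like this:

```python
import numpy as np

def music_spectrum(X, n_sources, spacing_wl, angles_deg):
    """Narrowband MUSIC pseudospectrum for a uniform linear array.
    X: (n_mics, n_snapshots) complex snapshots; spacing_wl: element
    spacing in wavelengths. Peaks of the spectrum mark source bearings."""
    n_mics = X.shape[0]
    R = X @ X.conj().T / X.shape[1]          # sample covariance matrix
    _, V = np.linalg.eigh(R)                 # eigenvectors, ascending eigenvalues
    En = V[:, : n_mics - n_sources]          # noise subspace
    out = []
    for th in np.deg2rad(angles_deg):
        a = np.exp(-2j * np.pi * spacing_wl * np.arange(n_mics) * np.sin(th))
        out.append(1.0 / np.real(a.conj() @ (En @ En.conj().T) @ a))
    return np.array(out)
```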
APA, Harvard, Vancouver, ISO, and other styles
49

Koyama, Shoichi, Ken'ichi Furuya, Yoichi Haneda, and Hiroshi Saruwatari. "Source-Location-Informed Sound Field Recording and Reproduction." IEEE Journal of Selected Topics in Signal Processing 9, no. 5 (August 2015): 881–94. http://dx.doi.org/10.1109/jstsp.2015.2434319.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Pregliasco, Rodolfo G., and Ernesto N. Martinez. "Gunshot Location Through Recorded Sound: A Preliminary Report." Journal of Forensic Sciences 47, no. 6 (November 1, 2002): 15566J. http://dx.doi.org/10.1520/jfs15566j.

Full text
APA, Harvard, Vancouver, ISO, and other styles