Journal articles on the topic 'Auditory source separation'

Consult the top 50 journal articles for your research on the topic 'Auditory source separation.'

1. Li, Han, Kean Chen, Lei Wang, Jianben Liu, Baoquan Wan, and Bing Zhou. "Sound Source Separation Mechanisms of Different Deep Networks Explained from the Perspective of Auditory Perception." Applied Sciences 12, no. 2 (January 14, 2022): 832. http://dx.doi.org/10.3390/app12020832.

Abstract:
Thanks to the development of deep learning, various sound source separation networks have been proposed and made significant progress. However, the study on the underlying separation mechanisms is still in its infancy. In this study, deep networks are explained from the perspective of auditory perception mechanisms. For separating two arbitrary sound sources from monaural recordings, three different networks with different parameters are trained and achieve excellent performances. The networks’ output can obtain an average scale-invariant signal-to-distortion ratio improvement (SI-SDRi) higher than 10 dB, comparable with the human performance to separate natural sources. More importantly, the most intuitive principle—proximity—is explored through simultaneous and sequential organization experiments. Results show that regardless of network structures and parameters, the proximity principle is learned spontaneously by all networks. If components are proximate in frequency or time, they are not easily separated by networks. Moreover, the frequency resolution at low frequencies is better than at high frequencies. These behavior characteristics of all three networks are highly consistent with those of the human auditory system, which implies that the learned proximity principle is not accidental, but the optimal strategy selected by networks and humans when facing the same task. The emergence of the auditory-like separation mechanisms provides the possibility to develop a universal system that can be adapted to all sources and scenes.
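
Since SI-SDRi is the headline metric here, a minimal numpy sketch of the underlying scale-invariant SDR may help orient readers. This follows the standard definition (Le Roux et al., 2019), not code from the paper; the function name and the zero-mean step are illustrative assumptions.

```python
import numpy as np

def si_sdr(estimate, reference):
    """Scale-invariant signal-to-distortion ratio in dB (higher is better)."""
    estimate = estimate - estimate.mean()
    reference = reference - reference.mean()
    # Project the estimate onto the reference: optimally scaling the
    # reference makes the metric insensitive to overall gain.
    target = (np.dot(estimate, reference) / np.dot(reference, reference)) * reference
    residual = estimate - target
    return 10.0 * np.log10(np.sum(target**2) / np.sum(residual**2))

# SI-SDRi: improvement of the separated output over the unprocessed mixture.
# si_sdri = si_sdr(separated, source) - si_sdr(mixture, source)
```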

2. Sasaki, Yoko, Saori Masunaga, Simon Thompson, Satoshi Kagami, and Hiroshi Mizoguchi. "Sound Localization and Separation for Mobile Robot Tele-Operation by Tri-Concentric Microphone Array." Journal of Robotics and Mechatronics 19, no. 3 (June 20, 2007): 281–89. http://dx.doi.org/10.20965/jrm.2007.p0281.

Abstract:
The paper describes a tele-operated mobile robot system which can perform multiple sound source localization and separation using a 32-channel tri-concentric microphone array. Tele-operated mobile robots require two main capabilities: 1) audio/visual presentation of the robot’s environment to the operator, and 2) autonomy for mobility. This paper focuses on the auditory system of a tele-operated mobile robot in order to improve both the presentation of sound sources to the operator and also to facilitate autonomous robot actions. The auditory system is based on a 32-channel distributed microphone array that uses highly efficient directional design for localizing and separating multiple moving sound sources. Experimental results demonstrate the feasibility of inter-person distant communication through the tele-operated robot system.

3. Doll, Theodore J., Thomas E. Hanna, and Joseph S. Russotti. "Masking in Three-Dimensional Auditory Displays." Human Factors: The Journal of the Human Factors and Ergonomics Society 34, no. 3 (June 1992): 255–65. http://dx.doi.org/10.1177/001872089203400301.

Abstract:
The extent to which simultaneous inputs in a three-dimensional (3D) auditory display mask one another was studied in a simulated sonar task. The minimum signal-to-noise ratio (SNR) required to detect an amplitude-modulated 500-Hz tone in a background of broadband noise was measured using a loudspeaker array in a free field. Three aspects of the 3D array were varied: angular separation of the sources, degree of correlation of the background noises, and listener head movement. Masking was substantially reduced when the sources were uncorrelated. The SNR needed for detection decreased with source separation, and the rate of decrease was significantly greater with uncorrelated sources than with partially or fully correlated sources. Head movement had no effect on the SNR required for detection. Implications for the design and application of 3D auditory displays are discussed.

4. Li, Han, Kean Chen, Rong Li, Jianben Liu, Baoquan Wan, and Bing Zhou. "Auditory-like simultaneous separation mechanisms spontaneously learned by a deep source separation network." Applied Acoustics 188 (January 2022): 108591. http://dx.doi.org/10.1016/j.apacoust.2021.108591.

5. Drake, Laura, and Janet Rutledge. "Auditory scene analysis‐constrained array processing for sound source separation." Journal of the Acoustical Society of America 101, no. 5 (May 1997): 3106. http://dx.doi.org/10.1121/1.418868.

6. Farley, Brandon J., and Arnaud J. Noreña. "Membrane potential dynamics of populations of cortical neurons during auditory streaming." Journal of Neurophysiology 114, no. 4 (October 2015): 2418–30. http://dx.doi.org/10.1152/jn.00545.2015.

Abstract:
How a mixture of acoustic sources is perceptually organized into discrete auditory objects remains unclear. One current hypothesis postulates that perceptual segregation of different sources is related to the spatiotemporal separation of cortical responses induced by each acoustic source or stream. In the present study, the dynamics of subthreshold membrane potential activity were measured across the entire tonotopic axis of the rodent primary auditory cortex during the auditory streaming paradigm using voltage-sensitive dye imaging. Consistent with the proposed hypothesis, we observed enhanced spatiotemporal segregation of cortical responses to alternating tone sequences as their frequency separation or presentation rate was increased, both manipulations known to promote stream segregation. However, across most streaming paradigm conditions tested, a substantial cortical region maintaining a response to both tones coexisted with more peripheral cortical regions responding more selectively to one of them. We propose that these coexisting subthreshold representation types could provide neural substrates to support the flexible switching between the integrated and segregated streaming percepts.

7. Drake, Laura A., Janet C. Rutledge, and Aggelos Katsaggelos. "Computational auditory scene analysis‐constrained array processing for sound source separation." Journal of the Acoustical Society of America 106, no. 4 (October 1999): 2238. http://dx.doi.org/10.1121/1.427622.

8. Zakeri, Sahar, and Masoud Geravanchizadeh. "Supervised binaural source separation using auditory attention detection in realistic scenarios." Applied Acoustics 175 (April 2021): 107826. http://dx.doi.org/10.1016/j.apacoust.2020.107826.

9. McElveen, J. K., Leonid Krasny, and Scott Nordlund. "Applying matched field array processing and machine learning to computational auditory scene analysis and source separation challenges." Journal of the Acoustical Society of America 151, no. 4 (April 2022): A232. http://dx.doi.org/10.1121/10.0011162.

Abstract:
Matched field processing (MFP) techniques employing physics-based models of acoustic propagation have been successfully and widely applied to underwater target detection and localization, while machine learning (ML) techniques have enabled detection and extraction of patterns in data. Fusing MFP and ML enables the estimation of Green’s Function solutions to the Acoustic Wave Equation for waveguides from data captured in real, reverberant acoustic environments. These Green’s Function estimates can further enable the robust separation of individual sources, even in the presence of multiple loud, interfering, interposed, and competing noise sources. We first introduce MFP and ML and then discuss their application to Computational Auditory Scene Analysis (CASA) and acoustic source separation. Results from a variety of tests using a binaural headset, as well as different wearable and free-standing microphone arrays are then presented to illustrate the effects of the number and placement of sensors on the residual noise floor after separation. Finally, speculations on the similarities between this proprietary approach and the human auditory system’s use of interaural cross-correlation in formulation of acoustic spatial models will be introduced and ideas for further research proposed.

10. Otsuka, Takuma, Katsuhiko Ishiguro, Hiroshi Sawada, and Hiroshi Okuno. "Bayesian Unification of Sound Source Localization and Separation with Permutation Resolution." Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (September 20, 2021): 2038–45. http://dx.doi.org/10.1609/aaai.v26i1.8376.

Abstract:
Sound source localization and separation with permutation resolution are essential for achieving a computational auditory scene analysis system that can extract useful information from a mixture of various sounds. Because existing methods cope separately with these problems despite their mutual dependence, the overall result with these approaches can be degraded by any failure in one of these components. This paper presents a unified Bayesian framework to solve these problems simultaneously where localization and separation are regarded as a clustering problem. Experimental results confirm that our method outperforms state-of-the-art methods in terms of the separation quality with various setups including practical reverberant environments.

11. Geravanchizadeh, Masoud, and Sahar Zakeri. "Binaural source separation using auditory attention for salient and non-salient sounds." Applied Acoustics 195 (June 2022): 108822. http://dx.doi.org/10.1016/j.apacoust.2022.108822.

12. Teng, Santani, Verena R. Sommer, Dimitrios Pantazis, and Aude Oliva. "Hearing Scenes: A Neuromagnetic Signature of Auditory Source and Reverberant Space Separation." eNeuro 4, no. 1 (January 2017): ENEURO.0007-17.2017. http://dx.doi.org/10.1523/eneuro.0007-17.2017.

13. Granados Barbero, Raúl, Astrid De Vos, and Jan Wouters. "The identification of predominant auditory steady‐state response brain sources in electroencephalography using denoising source separation." European Journal of Neuroscience 53, no. 11 (April 19, 2021): 3688–709. http://dx.doi.org/10.1111/ejn.15219.

14. Caclin, Anne, Elvira Brattico, Mari Tervaniemi, Risto Näätänen, Dominique Morlet, Marie-Hélène Giard, and Stephen McAdams. "Separate Neural Processing of Timbre Dimensions in Auditory Sensory Memory." Journal of Cognitive Neuroscience 18, no. 12 (December 2006): 1959–72. http://dx.doi.org/10.1162/jocn.2006.18.12.1959.

Abstract:
Timbre is a multidimensional perceptual attribute of complex tones that characterizes the identity of a sound source. Our study explores the representation in auditory sensory memory of three timbre dimensions (acoustically related to attack time, spectral centroid, and spectrum fine structure), using the mismatch negativity (MMN) component of the auditory event-related potential. MMN is elicited by a discriminable change in a sound sequence and reflects the detection of the discrepancy between the current stimulus and traces in auditory sensory memory. The stimuli used in the present study were carefully controlled synthetic tones. MMNs were recorded after changes along each of the three timbre dimensions and their combinations. Additivity of unidimensional MMNs and dipole modeling results suggest partially separate MMN generators for different timbre dimensions, reflecting their mainly separate processing in auditory sensory memory. The results expand to timbre dimensions a property of separation of the representation in sensory memory that has already been reported between basic perceptual attributes (pitch, loudness, duration, and location) of sound sources.

15. Xie, Yangbo, Tsung-Han Tsai, Adam Konneker, Bogdan-Ioan Popa, David J. Brady, and Steven A. Cummer. "Single-sensor multispeaker listening with acoustic metamaterials." Proceedings of the National Academy of Sciences 112, no. 34 (August 10, 2015): 10595–98. http://dx.doi.org/10.1073/pnas.1502276112.

Abstract:
Designing a “cocktail party listener” that functionally mimics the selective perception of a human auditory system has been pursued over the past decades. By exploiting acoustic metamaterials and compressive sensing, we present here a single-sensor listening device that separates simultaneous overlapping sounds from different sources. The device with a compact array of resonant metamaterials is demonstrated to distinguish three overlapping and independent sources with 96.67% correct audio recognition. Segregation of the audio signals is achieved using physical layer encoding without relying on source characteristics. This hardware approach to multichannel source separation can be applied to robust speech recognition and hearing aids and may be extended to other acoustic imaging and sensing applications.

16. Orchard-Mills, Emily, Johahn Leung, David Burr, Maria Concetta Morrone, Ella Wufong, Simon Carlile, and David Alais. "A Mechanism for Detecting Coincidence of Auditory and Visual Spatial Signals." Multisensory Research 26, no. 4 (2013): 333–45. http://dx.doi.org/10.1163/22134808-00002425.

Abstract:
Information about the world is captured by our separate senses, and must be integrated to yield a unified representation. This raises the issue of which signals should be integrated and which should remain separate, as inappropriate integration will lead to misrepresentation and distortions. One strong cue suggesting that separate signals arise from a single source is coincidence, in space and in time. We measured increment thresholds for discriminating spatial intervals defined by pairs of simultaneously presented targets, one flash and one auditory sound, for various separations. We report a ‘dipper function’, in which thresholds follow a ‘U-shaped’ curve, with thresholds initially decreasing with spatial interval, and then increasing for larger separations. The presence of a dip in the audiovisual increment-discrimination function is evidence that the auditory and visual signals both input to a common mechanism encoding spatial separation, and a simple filter model with a sigmoidal transduction function simulated the results well. The function of an audiovisual spatial filter may be to detect coincidence, a fundamental cue guiding whether to integrate or segregate.

17. Han, Cong, James O’Sullivan, Yi Luo, Jose Herrero, Ashesh D. Mehta, and Nima Mesgarani. "Speaker-independent auditory attention decoding without access to clean speech sources." Science Advances 5, no. 5 (May 2019): eaav6134. http://dx.doi.org/10.1126/sciadv.aav6134.

Abstract:
Speech perception in crowded environments is challenging for hearing-impaired listeners. Assistive hearing devices cannot lower interfering speakers without knowing which speaker the listener is focusing on. One possible solution is auditory attention decoding in which the brainwaves of listeners are compared with sound sources to determine the attended source, which can then be amplified to facilitate hearing. In realistic situations, however, only mixed audio is available. We utilize a novel speech separation algorithm to automatically separate speakers in mixed audio, with no need for the speakers to have prior training. Our results show that auditory attention decoding with automatically separated speakers is as accurate and fast as using clean speech sounds. The proposed method significantly improves the subjective and objective quality of the attended speaker. Our study addresses a major obstacle in actualization of auditory attention decoding that can assist hearing-impaired listeners and reduce listening effort for normal-hearing subjects.
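
The selection step in attention decoding of this kind is commonly a correlation between an envelope reconstructed from the listener's EEG and the envelope of each candidate (here, automatically separated) source. The sketch below illustrates only that generic step, under the assumption that the EEG-based envelope reconstruction is already available; it is not the authors' decoder.

```python
import numpy as np

def decode_attended_source(eeg_envelope, source_envelopes):
    """Return the index of the source whose envelope best matches the
    envelope reconstructed from EEG (simple Pearson-correlation rule)."""
    correlations = [np.corrcoef(eeg_envelope, env)[0, 1]
                    for env in source_envelopes]
    return int(np.argmax(correlations)), correlations
```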

18. Zhao, Mingqi, Gaia Bonassi, Roberto Guarnieri, Elisa Pelosin, Alice Nieuwboer, Laura Avanzino, and Dante Mantini. "A multi-step blind source separation approach for the attenuation of artifacts in mobile high-density electroencephalography data." Journal of Neural Engineering 18, no. 6 (December 1, 2021): 066041. http://dx.doi.org/10.1088/1741-2552/ac4084.

Abstract:
Objective. Electroencephalography (EEG) is a widely used technique to address research questions about brain functioning, from controlled laboratorial conditions to naturalistic environments. However, EEG data are affected by biological (e.g. ocular, myogenic) and non-biological (e.g. movement-related) artifacts, which—depending on their extent—may limit the interpretability of the study results. Blind source separation (BSS) approaches have demonstrated to be particularly promising for the attenuation of artifacts in high-density EEG (hdEEG) data. Previous EEG artifact removal studies suggested that it may not be optimal to use the same BSS method for different kinds of artifacts. Approach. In this study, we developed a novel multi-step BSS approach to optimize the attenuation of ocular, movement-related and myogenic artifacts from hdEEG data. For validation purposes, we used hdEEG data collected in a group of healthy participants in standing, slow-walking and fast-walking conditions. During part of the experiment, a series of tone bursts were used to evoke auditory responses. We quantified event-related potentials (ERPs) using hdEEG signals collected during an auditory stimulation, as well as the event-related desynchronization (ERD) by contrasting hdEEG signals collected in walking and standing conditions, without auditory stimulation. We compared the results obtained in terms of auditory ERP and motor-related ERD using the proposed multi-step BSS approach, with respect to two classically used single-step BSS approaches. Main results. The use of our approach yielded the lowest residual noise in the hdEEG data, and permitted to retrieve stronger and more reliable modulations of neural activity than alternative solutions. Overall, our study confirmed that the performance of BSS-based artifact removal can be improved by using specific BSS methods and parameters for different kinds of artifacts. Significance. Our technological solution supports a wider use of hdEEG-based source imaging in movement and rehabilitation studies, and contributes to the further development of mobile brain/body imaging applications.

19. Furukawa, Shigeto, and John C. Middlebrooks. "Sensitivity of Auditory Cortical Neurons to Locations of Signals and Competing Noise Sources." Journal of Neurophysiology 86, no. 1 (July 1, 2001): 226–40. http://dx.doi.org/10.1152/jn.2001.86.1.226.

Abstract:
The present study examined cortical parallels to psychophysical signal detection and sound localization in the presence of background noise. The activity of single units or of small clusters of units was recorded in cortical area A2 of chloralose-anesthetized cats. Signals were 80-ms click trains that varied in location in the horizontal plane around the animal. Maskers were continuous broadband noises. In the focal masker condition, a single masker source was tested at various azimuths. In the diffuse masker condition, uncorrelated noise was presented from two speakers at ±90° lateral to the animal. For about 2/3 of units (“type A”), the presence of the masker generally reduced neural sensitivity to signals, and the effects of the masker depended on the relative locations of signal and masker sources. For the remaining 1/3 of units (“type B”), the masker reduced spike rates at low signal levels but often augmented spike rates at higher signal levels. Increases in spike rates of type B units were most common for signal sources in front of the ear contralateral to the recording site but tended to be independent of masker source location. For type A units, masker effects could be modeled as a shift toward higher levels of spike-rate- and spike-latency-versus-level functions. For a focal masker, the shift size decreased with increasing separation of signal and masker. That result resembled psychophysical spatial unmasking, i.e., improved signal detection by spatial separation of the signal from the noise source. For the diffuse masker condition, the shift size generally was constant across signal locations. For type A units, we examined the effects of maskers on cortical signaling of sound-source location, using an artificial-neural-network (ANN) algorithm. First, an ANN was trained to estimate the signal location in the quiet condition by recognizing the spike patterns of single units. Then we tested ANN responses for spike patterns recorded under various masker conditions. Addition of a masker generally altered spike patterns and disrupted ANN identification of signal location. That disruption was smaller, however, for signal and masker configurations in which the masker did not severely reduce units' spike rates. That result compared well with the psychophysical observation that listeners maintain good localization performance as long as signals are clearly audible.

20. Gauer, Johannes, Anil Nagathil, Kai Eckel, Denis Belomestny, and Rainer Martin. "A versatile deep-neural-network-based music preprocessing and remixing scheme for cochlear implant listeners." Journal of the Acoustical Society of America 151, no. 5 (May 2022): 2975–86. http://dx.doi.org/10.1121/10.0010371.

Abstract:
While cochlear implants (CIs) have proven to restore speech perception to a remarkable extent, access to music remains difficult for most CI users. In this work, a methodology for the design of deep learning-based signal preprocessing strategies that simplify music signals and emphasize rhythmic information is proposed. It combines harmonic/percussive source separation and deep neural network (DNN) based source separation in a versatile source mixture model. Two different neural network architectures were assessed with regard to their applicability for this task. The method was evaluated with instrumental measures and in two listening experiments for both network architectures and six mixing presets. Normal-hearing subjects rated the signal quality of the processed signals compared to the original both with and without a vocoder which provides an approximation of the auditory perception in CI listeners. Four combinations of remix models and DNNs have been selected for an evaluation with vocoded signals and were all rated significantly better in comparison to the unprocessed signal. In particular, the two best-performing remix networks are promising candidates for further evaluation in CI listeners.

21. Smith, Kevin R., I.-Hui Hsieh, Kourosh Saberi, and Gregory Hickok. "Auditory Spatial and Object Processing in the Human Planum Temporale: No Evidence for Selectivity." Journal of Cognitive Neuroscience 22, no. 4 (April 2010): 632–39. http://dx.doi.org/10.1162/jocn.2009.21196.

Abstract:
Although it is generally acknowledged that at least two processing streams exist in the primate cortical auditory system, the function of the posterior dorsal stream is a topic of much debate. Recent studies have reported selective activation to auditory spatial change in portions of the human planum temporale (PT) relative to nonspatial stimuli such as pitch changes or complex acoustic patterns. However, previous work has suggested that the PT may be sensitive to another kind of nonspatial variable, namely, the number of auditory objects simultaneously presented in the acoustic signal. The goal of the present fMRI experiment was to assess whether any portion of the PT showed spatial selectivity relative to manipulations of the number of auditory objects presented. Spatially sensitive regions in the PT were defined by comparing activity associated with listening to an auditory object (speech from a single talker) that changed location with one that remained stationary. Activity within these regions was then examined during a nonspatial manipulation: increasing the number of objects (talkers) from one to three. The nonspatial manipulation modulated activity within the “spatial” PT regions. No region within the PT was found to be selective for spatial or object processing. We suggest that previously documented spatial sensitivity in the PT reflects auditory source separation using spatial cues rather than spatial processing per se.

22. Bőhm, Tamás M., Lidia Shestopalova, Alexandra Bendixen, Andreas G. Andreou, Julius Georgiou, Guillaume Garreau, Philippe Pouliquen, Andrew Cassidy, Susan L. Denham, and István Winkler. "The role of perceived source location in auditory stream segregation: Separation affects sound organization, common fate does not." Learning & Perception 5, Supplement 2 (June 2013): 55–72. http://dx.doi.org/10.1556/lp.5.2013.suppl2.5.

23. Corbin, Nicole E., Emily Buss, and Lori J. Leibold. "Spatial Hearing and Functional Auditory Skills in Children With Unilateral Hearing Loss." Journal of Speech, Language, and Hearing Research 64, no. 11 (November 8, 2021): 4495–512. http://dx.doi.org/10.1044/2021_jslhr-20-00081.

Abstract:
Purpose: The purpose of this study was to characterize spatial hearing abilities of children with longstanding unilateral hearing loss (UHL). UHL was expected to negatively impact children's sound source localization and masked speech recognition, particularly when the target and masker were separated in space. Spatial release from masking (SRM) in the presence of a two-talker speech masker was expected to predict functional auditory performance as assessed by parent report. Method: Participants were 5- to 14-year-olds with sensorineural or mixed UHL, age-matched children with normal hearing (NH), and adults with NH. Sound source localization was assessed on the horizontal plane (−90° to 90°), with noise that was either all-pass, low-pass, high-pass, or an unpredictable mixture. Speech recognition thresholds were measured in the sound field for sentences presented in two-talker speech or speech-shaped noise. Target speech was always presented from 0°; the masker was either colocated with the target or spatially separated at ±90°. Parents of children with UHL rated their children's functional auditory performance in everyday environments via questionnaire. Results: Sound source localization was poorer for children with UHL than those with NH. Children with UHL also derived less SRM than those with NH, with increased masking for some conditions. Effects of UHL were larger in the two-talker than the noise masker, and SRM in two-talker speech increased with age for both groups of children. Children with UHL whose parents reported greater functional difficulties achieved less SRM when either masker was on the side of the better-hearing ear. Conclusions: Children with UHL are clearly at a disadvantage compared with children with NH for both sound source localization and masked speech recognition with spatial separation. Parents' report of their children's real-world communication abilities suggests that spatial hearing plays an important role in outcomes for children with UHL.
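
For reference, the SRM quantity used throughout this abstract is simply the threshold benefit of spatially separating target and masker (a standard definition, stated here for convenience):

```latex
\mathrm{SRM} \;=\; \mathrm{SRT}_{\text{colocated}} \;-\; \mathrm{SRT}_{\text{separated}}
```

Positive values mean the speech reception threshold improves when the masker is moved away from the target.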

24. Thomassen, Sabine, Kevin Hartung, Wolfgang Einhäuser, and Alexandra Bendixen. "Low-high-low or high-low-high? Pattern effects on sequential auditory scene analysis." Journal of the Acoustical Society of America 152, no. 5 (November 2022): 2758–68. http://dx.doi.org/10.1121/10.0015054.

Abstract:
Sequential auditory scene analysis (ASA) is often studied using sequences of two alternating tones, such as ABAB or ABA_, with “_” denoting a silent gap, and “A” and “B” sine tones differing in frequency (nominally low and high). Many studies implicitly assume that the specific arrangement (ABAB vs ABA_, as well as low-high-low vs high-low-high within ABA_) plays a negligible role, such that decisions about the tone pattern can be governed by other considerations. To explicitly test this assumption, a systematic comparison of different tone patterns for two-tone sequences was performed in three different experiments. Participants were asked to report whether they perceived the sequences as originating from a single sound source (integrated) or from two interleaved sources (segregated). Results indicate that core findings of sequential ASA, such as an effect of frequency separation on the proportion of integrated and segregated percepts, are similar across the different patterns during prolonged listening. However, at sequence onset, the integrated percept was more likely to be reported by the participants in ABA_low-high-low than in ABA_high-low-high sequences. This asymmetry is important for models of sequential ASA, since the formation of percepts at onset is an integral part of understanding how auditory interpretations build up.
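
For readers unfamiliar with the paradigm, the sketch below synthesizes an ABA_ sequence with numpy; durations, ramps, and levels are illustrative assumptions, not the study's exact parameters.

```python
import numpy as np

def aba_sequence(f_a=500.0, semitones=6.0, rate=8.0, n_triplets=30, fs=44100):
    """Build an ABA_ sequence: A and B pure tones separated by `semitones`,
    followed by a silent slot; `rate` is slots per second."""
    f_b = f_a * 2.0 ** (semitones / 12.0)          # frequency separation
    n = int(fs / rate)                             # samples per slot
    t = np.arange(n) / fs
    ramp = np.minimum(1.0, np.minimum(t, t[::-1]) / 0.01)  # 10-ms on/off ramps
    tone = lambda f: np.sin(2 * np.pi * f * t) * ramp
    triplet = np.concatenate([tone(f_a), tone(f_b), tone(f_a), np.zeros(n)])
    return np.tile(triplet, n_triplets)            # low-high-low version
```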

25. Kelly, J. B., and P. W. Judge. "Effects of medial geniculate lesions on sound localization by the rat." Journal of Neurophysiology 53, no. 2 (February 1, 1985): 361–72. http://dx.doi.org/10.1152/jn.1985.53.2.361.

Abstract:
Rats with bilateral lesions of the medial geniculate body were tested on a two-choice sound-localization task that required a directional response to a distant sound source. Stimuli included both broadband and filtered noise bursts presented singly or in repetitive trains. Separate tests were conducted with loudspeakers 180 and 60 degrees apart, centered around 0 degree azimuth. With complete bilateral destruction of the medial geniculate, rats could localize both trains and single bursts of noise and were capable of high levels of performance even at small angles of speaker separation. Some evidence of impaired performance was noted with high-frequency noise bursts, but generally the deficits were not severe. Animals with lesions that extended caudally into the brachium of the inferior colliculus and lateral tegmentum were severely impaired in their ability to localize sounds even at large angles of speaker separation. Three of the four animals in this group were incapable of localizing single bursts even with loudspeakers separated by 180 degrees, and the fourth was unable to perform above chance at 60 degrees. The effects of medial geniculate lesions were very similar to those reported previously for rats with lesions of the auditory cortex, but contrasted with reports of severe impairments in sound localization following damage to the auditory cortex in other mammalian species.

26. Fishman, Yonatan I., and Mitchell Steinschneider. "Spectral Resolution of Monkey Primary Auditory Cortex (A1) Revealed With Two-Noise Masking." Journal of Neurophysiology 96, no. 3 (September 2006): 1105–15. http://dx.doi.org/10.1152/jn.00124.2006.

Abstract:
An important function of the auditory nervous system is to analyze the frequency content of environmental sounds. The neural structures involved in determining psychophysical frequency resolution remain unclear. Using a two-noise masking paradigm, the present study investigates the spectral resolution of neural populations in primary auditory cortex (A1) of awake macaques and the degree to which it matches psychophysical frequency resolution. Neural ensemble responses (auditory evoked potentials, multiunit activity, and current source density) evoked by a pulsed 60-dB SPL pure-tone signal fixed at the best frequency (BF) of the recorded neural populations were examined as a function of the frequency separation (ΔF) between the tone and two symmetrically flanking continuous 80-dB SPL, 50-Hz-wide bands of noise. ΔFs ranged from 0 to 50% of the BF, encompassing the range typically examined in psychoacoustic experiments. Responses to the signal were minimal for ΔF = 0% and progressively increased with ΔF, reaching a maximum at ΔF = 50%. Rounded exponential functions, used to model auditory filter shapes in psychoacoustic studies of frequency resolution, provided excellent fits to neural masking functions. Goodness-of-fit was greatest for response components in lamina 4 and lower lamina 3 and least for components recorded in more superficial cortical laminae. Physiological equivalent rectangular bandwidths (ERBs) increased with BF, measuring nearly 15% of the BF. These findings parallel results of psychoacoustic studies in both monkeys and humans, and thus indicate that a representation of perceptual frequency resolution is available at the level of A1.
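
The rounded-exponential (roex) filter family mentioned here, in its simplest one-parameter form (after Patterson and colleagues), is the weighting function fitted to such masking data; with g the frequency offset normalized to the probe frequency f0:

```latex
g = \frac{|f - f_0|}{f_0}, \qquad
W(g) = (1 + p\,g)\,e^{-p\,g}, \qquad
\mathrm{ERB} = \frac{4 f_0}{p}
```

In this form, the reported physiological ERBs of roughly 15% of the BF correspond to p ≈ 4/0.15 ≈ 27.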

27. Kane, Brian. "Acousmatic Fabrications: Les Paul and the ‘Les Paulverizer’." Journal of Visual Culture 10, no. 2 (August 2011): 212–31. http://dx.doi.org/10.1177/1470412911402892.

Abstract:
Acousmatic sound – a sound that one hears without seeing the causes behind it – creates situations where visual contributions to auditory experience are diminished. The author theorizes that acousmatic separation unsettles the relationship of the source, cause and effect of sound. To draw out the consequences of this theory, Les Paul and Mary Ford’s multi-tracked recordings and live performances are examined, and three central claims are posited. First, Paul’s turn to multi-tracked recording was motivated by mimetic rivalry when his ‘sound’ was imitated on the radio. Second, Paul misdirected listeners of his radio program by creating scenarios that depended on false attributions of source and cause. Third, the problems that faced Paul in live performance of his multi-tracked hits resulted in Paul’s creation of the ‘Les Paulverizer’. This device afforded the maintenance of acousmatic spacing during live performance but also forced him into the unusual position of ventriloquizing his own voice.

28. Borjigin, Agudemu, Kostas Kokkinakis, Hari Bharadwaj, and Josh Stohl. "Deep neural network models of speech-in-noise perception for hearing technologies and research." Journal of the Acoustical Society of America 151, no. 4 (April 2022): A165. http://dx.doi.org/10.1121/10.0010989.

Abstract:
Widespread adoption of artificial intelligence has yet to occur in the hearing field. Hearing technologies, such as cochlear implants (CIs), provide limited benefits of noise reduction, even with current state-of-the-art signal processing strategies. Recent developments in machine learning have produced deep neural network (DNN) models achieving remarkable performance in speech enhancement and source separation tasks. However, there are currently no commercially available CI audio processors that utilize DNN models for noise suppression. Furthermore, the current research community lacks a computational tool to match the complexity of natural auditory processing. To address these gaps, we implemented two DNN models: a recurrent neural network (RNN)—a lightweight template model for speech enhancement, and the SepFormer—the current top-performing speech-separation model in the literature. The DNN models resulted in significant improvements in terms of objective evaluation metrics, as well as intelligibility scores obtained with CI users at different signal-to-noise ratios. Given their flexibility and good performance on complex tasks, these models can also be used to generate hypotheses about speech-in-noise perception and serve as richer substitutes for models commonly used in research. This work serves as a proof-of-concept and a guide for the next steps towards integrating DNN technology into hearing technologies and research.
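
The abstract does not publish the RNN's architecture; purely as an illustration of the model class it names (a lightweight recurrent mask estimator for speech enhancement), here is a PyTorch sketch with assumed layer sizes and a hypothetical class name.

```python
import torch
import torch.nn as nn

class MaskRNN(nn.Module):
    """Minimal recurrent time-frequency mask estimator: an LSTM maps noisy
    magnitude spectra to a [0, 1] mask applied back to the input."""
    def __init__(self, n_bins=257, hidden=256):
        super().__init__()
        self.lstm = nn.LSTM(n_bins, hidden, num_layers=2, batch_first=True)
        self.proj = nn.Linear(hidden, n_bins)

    def forward(self, noisy_mag):              # (batch, frames, n_bins)
        h, _ = self.lstm(noisy_mag)
        mask = torch.sigmoid(self.proj(h))     # per-bin suppression gain
        return mask * noisy_mag                # enhanced magnitude spectrum
```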

29. Yao, Qin, Zhencong Li, and Wanzhi Ma. "Research on Segmentation Experience of Music Signal Improved Based on Maximization of Negative Entropy." Complexity 2021 (May 24, 2021): 1–11. http://dx.doi.org/10.1155/2021/7442877.

Abstract:
With the rapid growth of digital music today, and given the complexity of music itself, the ambiguity of music-category definitions, and the limited understanding of human auditory perception, research on automatic music segmentation is still in its infancy, even though such segmentation is a prerequisite for fast and effective retrieval of music resources and its potential applications are extensive. Topics related to automatic music segmentation therefore have important research value. This paper studies an improved algorithm based on negative entropy maximization for well-posed speech and music separation. To address the problem that the separation performance of the negative entropy maximization method depends on the selection of the initial matrix, the Newton downhill method is used instead of the Newton iteration method as the optimization algorithm for finding the optimal matrix. By changing the descending factor, the objective function is kept on a downward trend, and the dependence of the algorithm on the initial value is reduced. Simulation results show that the algorithm can separate the source signals well under different initial values. The average iteration time of the improved algorithm is reduced by 26.2%, the number of iterations is reduced by 69.4%, and both quantities fluctuate only within a small range, which largely resolves the sensitivity to the initial value. Experiments show that the new objective function can significantly improve the separation performance of neural networks. Compared with existing music separation methods, the proposed method performs excellently on both accompaniment and singing voice in the separated music.
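
Negentropy maximization is the objective behind FastICA-type updates, the family this paper modifies (replacing Newton iteration with the Newton downhill method). For orientation only, a one-unit numpy sketch of the standard fixed-point update with the tanh nonlinearity; the paper's descending-factor modification is not reproduced here.

```python
import numpy as np

def negentropy_ica_one_unit(x, n_iter=200, tol=1e-6, seed=0):
    """Standard one-unit FastICA fixed point: maximize a negentropy
    approximation of w^T x.  x must be whitened, shape (channels, samples)."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(x.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        y = w @ x
        g, g_prime = np.tanh(y), 1.0 - np.tanh(y) ** 2
        w_new = (x * g).mean(axis=1) - g_prime.mean() * w  # Newton-type step
        w_new /= np.linalg.norm(w_new)
        if abs(abs(w_new @ w) - 1.0) < tol:                # converged (up to sign)
            return w_new
        w = w_new
    return w
```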

30. Minelli, G., G. E. Puglisi, A. Astolfi, C. Hauth, and A. Warzybok. "Binaural Speech Intelligibility in a Real Elementary Classroom." Journal of Physics: Conference Series 2069, no. 1 (November 1, 2021): 012165. http://dx.doi.org/10.1088/1742-6596/2069/1/012165.

Abstract:
Since the fundamental phases of the learning process take place in elementary classrooms, it is necessary to guarantee a proper acoustic environment for the listening activity of the children immersed in them. In this framework, speech intelligibility is especially important. In order to better understand and objectively quantify the effect of background noise and reverberation on speech intelligibility, various models have been developed. Here, a binaural speech intelligibility model (BSIM) is investigated for speech intelligibility predictions in a real classroom, considering the effect of talker-to-listener distance and binaural unmasking due to the spatial separation of noise and speech source. BSIM predictions are compared to well-established room acoustic measures such as reverberation time (T30), clarity, or definition. Objective acoustical measurements were carried out in one Italian primary school classroom before (T30 = 1.43 ± 0.03 s) and after (T30 = 0.45 ± 0.02 s) the acoustical treatment. Speech reception thresholds (SRTs) corresponding to the signal-to-noise ratio yielding 80% speech intelligibility will be obtained through the BSIM simulations using the measured binaural room impulse responses (BRIRs). A focus on the effect of different speech and noise source spatial positions on the SRT values will aim to show the importance of a model able to deal with the binaural aspects of the auditory system. In particular, it will be observed how the position of the noise source influences speech intelligibility when the target speech source lies always in the same position.

31. Rouhbakhsh, Nematollah, John Mahdi, Jacob Hwo, Baran Nobel, and Fati Mousave. "Human Frequency Following Response Correlates of Spatial Release From Masking." Journal of Speech, Language, and Hearing Research 62, no. 11 (November 22, 2019): 4165–78. http://dx.doi.org/10.1044/2019_jslhr-h-18-0353.

Abstract:
Purpose: Speech recognition in complex listening environments is enhanced by the extent of spatial separation between the speech source and background competing sources, an effect known as spatial release from masking (SRM). The aim of this study was to investigate whether the phase-locked neural activity in the central auditory pathways, reflected in the frequency following response (FFR), exhibits SRM. Method: Eighteen normal-hearing adults (8 men and 10 women, ranging in age from 20 to 42 years) with no known neurological disorders participated in this study. FFRs were recorded from the participants in response to a target vowel /u/ presented with spatially colocated and separated competing talkers at 3 ranges of signal-to-noise ratios (SNRs), with median SNRs of −5.4, 0.5, and 6.8 dB and for different attentional conditions (attention and no attention). Results: Amplitude of the FFR at the fundamental frequency was significantly larger in the spatially separated condition as compared to the colocated condition for only the lowest (< −2.4 dB SNR) of the 3 SNR ranges tested. A significant effect of attention was found when subjects were actively focusing on the target stimuli. No significant interaction effects were found between spatial separation and attention. Conclusions: The enhanced representation of the target stimulus in the separated condition suggests that the temporal pattern of phase-locked brainstem neural activity generating the FFR may contain information relevant to the binaural processes underlying SRM but only in challenging listening environments. Attention may modulate FFR fundamental frequency amplitude but does not seem to modulate spatial processing at the level of generating the FFR. Supplemental Material: https://doi.org/10.23641/asha.9992597

32. Gerth, Jeffrey M. "Identification of Sounds with Multiple Timbres." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 37, no. 9 (October 1993): 539–43. http://dx.doi.org/10.1177/154193129303700905.

Abstract:
The present research examined identification of complex sounds created by simultaneously playing two or more component sounds in various combinations. Sixteen component sounds were used, created by imposing four distinct temporal patterns on four basic timbres, two musical timbres and two complex real-world timbres. In the present experiment, complex sounds were created by simultaneously playing one to four component sounds, each with a different timbre. Subjects heard a complex sound, followed by a second complex sound that always differed from the first by adding a component, deleting a component or substituting a component. Subjects indicated which component had been added, deleted, or substituted. Sound changes were identified with moderate accuracy (above 60 percent). The errors committed varied with temporal pattern, timbre, sound change and density. The analyses of identification confusions indicated that subjects identified the correct timbre of the sound change even when temporal patterning was confused. The finding that temporal patterns were confused largely within the sound category of the correct response limits the previous interpretation of other research, which found that similar temporal patterns are confusable even with differences in spectra. Results of the present investigation suggest that multiple, temporal patterns with varying timbres can be presented from a single physical location to convey a change in state or status of an informative sound source. Design contributions of the present research to auditory information systems such as virtual reality are discussed. For such an application, a combination of physical separation and multiple patterns with varying timbres could provide a coherent, yet informationally complex, auditory display.

33. Sun, Sihua. "Digital Audio Scene Recognition Method Based on Machine Learning Technology." Scientific Programming 2021 (November 26, 2021): 1–9. http://dx.doi.org/10.1155/2021/2388697.

Abstract:
Audio scene recognition is a task that enables devices to understand their environment through digital audio analysis, and it belongs to the field of computational auditory scene analysis. At present, this technology is widely used in intelligent wearable devices, robot sensing services, and other application scenarios. In order to explore the applicability of machine learning to digital audio scene recognition, an audio scene recognition method based on optimized audio processing and a convolutional neural network is proposed. Firstly, unlike traditional audio feature extraction based on mel-frequency cepstral coefficients, the proposed method uses a binaural representation and harmonic/percussive source separation to optimize the original audio and extract the corresponding features, so that the system can exploit the spatial features of the scene and thereby improve recognition accuracy. Then, an audio scene recognition system with a two-layer convolution module is designed and implemented. In terms of network structure, the design borrows from the VGGNet architecture in image recognition to increase network depth and improve system flexibility. Experimental analysis shows that, compared with traditional machine learning methods, the proposed method greatly improves the recognition accuracy of each scene and generalizes better across different data.
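
The harmonic/percussive separation step named here is classically done by median filtering the magnitude spectrogram, smoothing along time for the harmonic part and along frequency for the percussive part (Fitzgerald, 2010). A minimal scipy sketch; the STFT settings and kernel size are assumptions, not the paper's configuration.

```python
import numpy as np
from scipy.ndimage import median_filter
from scipy.signal import stft, istft

def hpss(y, fs, nperseg=1024, kernel=17):
    """Split a signal into harmonic and percussive components via
    median-filtered soft masks on the magnitude spectrogram."""
    _, _, Z = stft(y, fs=fs, nperseg=nperseg)
    S = np.abs(Z)
    H = median_filter(S, size=(1, kernel))   # smooth across time -> harmonic
    P = median_filter(S, size=(kernel, 1))   # smooth across frequency -> percussive
    mask_h = H**2 / (H**2 + P**2 + 1e-10)    # Wiener-style soft mask
    _, y_h = istft(Z * mask_h, fs=fs, nperseg=nperseg)
    _, y_p = istft(Z * (1.0 - mask_h), fs=fs, nperseg=nperseg)
    return y_h, y_p
```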

34. Mouraux, A., and G. D. Iannetti. "Nociceptive Laser-Evoked Brain Potentials Do Not Reflect Nociceptive-Specific Neural Activity." Journal of Neurophysiology 101, no. 6 (June 2009): 3258–69. http://dx.doi.org/10.1152/jn.91181.2008.

Abstract:
Brief radiant laser pulses can be used to activate cutaneous Aδ and C nociceptors selectively and elicit a number of transient brain responses [laser-evoked potentials (LEPs)] in the ongoing EEG. LEPs have been used extensively in the past 30 years to gain knowledge about the cortical mechanisms underlying nociception and pain in humans, by assuming that they reflect at least neural activities uniquely or preferentially involved in processing nociceptive input. Here, by applying a novel blind source separation algorithm (probabilistic independent component analysis) to 124-channel event-related potentials elicited by a random sequence of nociceptive and non-nociceptive somatosensory, auditory, and visual stimuli, we provide compelling evidence that this assumption is incorrect: LEPs do not reflect nociceptive-specific neural activity. Indeed, our results indicate that LEPs can be entirely explained by a combination of multimodal neural activities (i.e., activities also elicited by stimuli of other sensory modalities) and somatosensory-specific, but not nociceptive-specific, neural activities (i.e., activities elicited by both nociceptive and non-nociceptive somatosensory stimuli). Regardless of the sensory modality of the eliciting stimulus, the magnitude of multimodal activities correlated with the subjective rating of saliency, suggesting that these multimodal activities are involved in stimulus-triggered mechanisms of arousal or attentional reorientation.

35. Mainero Rocca, Lucia, Nunziata L’Episcopo, Andrea Gordiani, Matteo Vitali, and Alessandro Staderini. "A ‘Dilute and Shoot’ Liquid Chromatography-Mass Spectrometry Method for Multiclass Drug Analysis in Pre-Cut Dried Blood Spots." International Journal of Environmental Research and Public Health 18, no. 6 (March 16, 2021): 3068. http://dx.doi.org/10.3390/ijerph18063068.

Abstract:
Drugs able to affect the auditory and nervous systems and consumed by workers to treat different pathologies can represent a possible source of risk in the work environment. All the target compounds involved in the presented project show ototoxic and/or narcoleptic side effects and, for these reasons, occupational safety organizations have recognized them as potential causes of work injuries. A multiclass method for the analysis of 15 drugs among the most widespread worldwide (belonging to nine different classes including antihistamines, beta-blockers, antidepressants, Z-drugs and opioids) was developed and validated. This study describes a rapid, sensitive and effective method to analyse these substances in whole blood using tailored pre-cut dried blood spots. Detection was achieved with a triple quadrupole mass spectrometer after an easy and simple ‘dilute and shoot’ solubilisation followed by an UPLC separation. All the issues linked to the use of the dried blood spots and whole blood, such as haematocrit variability, volumetric evaluation and sample carrier choice, were carefully studied and managed during method development. From the validation study results it emerged that this approach can be deemed successful thanks to its few pg µL⁻¹ LOQs, good linear intervals, absolute recoveries of no less than 75%, an almost negligible matrix effect, and accuracy and precision in line with the European and American guidelines for validation. All the obtained goals have been specifically pursued in order to encourage method diffusion as a primary prevention intervention, even in small private workplaces.

36. Horigome, Mio, and Kazuhiko Kakehi. "Separation and integration of sound sources in auditory processing." Journal of the Acoustical Society of America 140, no. 4 (October 2016): 3214–15. http://dx.doi.org/10.1121/1.4970125.

37. Kern, Albert, and Ruedi Stoop. "Principles and Typical Computational Limitations of Sparse Speaker Separation Based on Deterministic Speech Features." Neural Computation 23, no. 9 (September 2011): 2358–89. http://dx.doi.org/10.1162/neco_a_00165.

Abstract:
The separation of mixed auditory signals into their sources is an eminent neuroscience and engineering challenge. We reveal the principles underlying a deterministic, neural network–like solution to this problem. This approach is orthogonal to ICA/PCA that views the signal constituents as independent realizations of random processes. We demonstrate exemplarily that in the absence of salient frequency modulations, the decomposition of speech signals into local cosine packets allows for a sparse, noise-robust speaker separation. As the main result, we present analytical limitations inherent in the approach, where we propose strategies of how to deal with this situation. Our results offer new perspectives toward efficient noise cleaning and auditory signal separation and provide a new perspective of how the brain might achieve these tasks.

38. Ponton, Curtis, Jos J. Eggermont, Deepak Khosla, Betty Kwong, and Manuel Don. "Maturation of human central auditory system activity: separating auditory evoked potentials by dipole source modeling." Clinical Neurophysiology 113, no. 3 (March 2002): 407–20. http://dx.doi.org/10.1016/s1388-2457(01)00733-7.

39. Itatani, Naoya, and Georg M. Klump. "Animal models for auditory streaming." Philosophical Transactions of the Royal Society B: Biological Sciences 372, no. 1714 (February 19, 2017): 20160112. http://dx.doi.org/10.1098/rstb.2016.0112.

Abstract:
Sounds in the natural environment need to be assigned to acoustic sources to evaluate complex auditory scenes. Separating sources will affect the analysis of auditory features of sounds. As the benefits of assigning sounds to specific sources accrue to all species communicating acoustically, the ability for auditory scene analysis is widespread among different animals. Animal studies allow for a deeper insight into the neuronal mechanisms underlying auditory scene analysis. Here, we will review the paradigms applied in the study of auditory scene analysis and streaming of sequential sounds in animal models. We will compare the psychophysical results from the animal studies to the evidence obtained in human psychophysics of auditory streaming, i.e. in a task commonly used for measuring the capability for auditory scene analysis. Furthermore, the neuronal correlates of auditory streaming will be reviewed in different animal models and the observations of the neurons’ response measures will be related to perception. The across-species comparison will reveal whether similar demands in the analysis of acoustic scenes have resulted in similar perceptual and neuronal processing mechanisms in the wide range of species being capable of auditory scene analysis. This article is part of the themed issue ‘Auditory and visual scene analysis’.
APA, Harvard, Vancouver, ISO, and other styles
41

Higgins, Nathan C., and Erol J. Ozmeral. "The influence of binaural cues on the promotion or inhibition of auditory stream segregation." Journal of the Acoustical Society of America 152, no. 4 (October 2022): A230. http://dx.doi.org/10.1121/10.0016108.

Full text
Abstract:
Though much is known about detection thresholds for binaural cues (interaural time differences [ITD], interaural level differences [ILD], and interaural correlation [IAC]), less is known about how these cues influence the separation of auditory sources. The ABA auditory stream segregation paradigm (Bregman, 1990), in a bistable configuration, elicits roughly equivalent proportions of integrated and segregated percepts, and listeners spontaneously switch back and forth between the two. To determine the influence of binaural cues, we periodically (every 25 s) manipulated the binaural cue carried by the A components of the ABA triplet (made of narrowband noise, 6-semitone separation) while maintaining the B component at a stable binaural cue. Participants continuously indicated their perception via a button box. Analyzed as a function of binaural cue, lateral cues were more likely to be reported as segregated than midline cues. Binaural cue boundaries (perceptual transition points) were defined using logarithmic regression and were significantly correlated for ILD and ITD (r² = 0.65, p < 0.05). IAC modulation resulted in significantly more switches when both the A and B components were centered (IAC = 1) than when the IAC of the A component was more diffuse (IAC < 0.9). These results provide evidence of how listeners functionally use binaural cues to segregate auditory streams.
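For readers unfamiliar with the paradigm, an ABA_ triplet of the kind described can be synthesized in a few lines. The sketch below is a hypothetical reconstruction, not the study's stimulus code: the burst duration, bandwidth, base frequency, and the whole-sample ITD approximation are all assumptions made for illustration.

```python
# Hypothetical ABA_ triplet: narrowband-noise bursts with a 6-semitone A-B
# separation, an ITD on the A bursts, and the B burst at midline.
import numpy as np

FS = 44100  # Hz, assumed sampling rate

def noise_burst(center_hz, dur=0.1, bw_hz=400):
    """Narrowband noise made by bandpassing white noise in the FFT domain."""
    n = int(FS * dur)
    spec = np.fft.rfft(np.random.randn(n))
    f = np.fft.rfftfreq(n, 1 / FS)
    spec[np.abs(f - center_hz) > bw_hz / 2] = 0
    burst = np.fft.irfft(spec, n)
    return burst / np.max(np.abs(burst))

def with_itd(burst, itd_s):
    """Apply an interaural time difference as a whole-sample delay."""
    d = int(round(abs(itd_s) * FS))
    lead = np.r_[burst, np.zeros(d)]
    lag = np.r_[np.zeros(d), burst]
    left, right = (lag, lead) if itd_s > 0 else (lead, lag)
    return np.stack([left, right], axis=1)  # positive ITD -> right ear leads

def aba_triplet(a_hz=1000.0, semitones=6, itd_s=500e-6):
    b_hz = a_hz * 2 ** (semitones / 12)      # 6-semitone separation
    gap = np.zeros((int(FS * 0.02), 2))      # 20-ms silent gap
    a = with_itd(noise_burst(a_hz), itd_s)   # lateralized A components
    b = with_itd(noise_burst(b_hz), 0.0)     # midline B component
    return np.concatenate([a, gap, b, gap, a, gap])
```

Looping such triplets for tens of seconds while changing the A-component cue every 25 s reproduces the general shape of the manipulation described above.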
APA, Harvard, Vancouver, ISO, and other styles
42

Elias, Karla Maria Ibraim da Freiria, and Maria Valeriana Leme de Moura-Ribeiro. "Stroke caused auditory attention deficits in children." Arquivos de Neuro-Psiquiatria 71, no. 1 (January 8, 2013): 11–17. http://dx.doi.org/10.1590/s0004-282x2012005000018.

Full text
Abstract:
OBJECTIVE: To verify auditory selective attention in children with stroke. METHODS: Dichotic tests of binaural separation (non-verbal and consonant-vowel) and binaural integration - digits and the Staggered Spondaic Words Test (SSW) - were applied to 13 children (7 boys), aged 7 to 16 years, with unilateral stroke confirmed by neurological examination and neuroimaging. RESULTS: Attention performance showed significant differences in comparison to the control group in both kinds of tests. In the non-verbal test, identification of stimuli presented to the ear opposite the lesion was diminished in the free recall stage and, in the following stages, a difficulty in directing attention was detected. In the consonant-vowel test, a modification in perceptual asymmetry and a difficulty in focusing in the attended stages were found. In the digits and SSW tests, ipsilateral, contralateral and bilateral deficits were detected, depending on the characteristics of the lesions and the demand of the task. CONCLUSION: Stroke caused auditory attention deficits when dealing with simultaneous sources of auditory information.
APA, Harvard, Vancouver, ISO, and other styles
43

Sauvé, Sarah A., Jeremy Marozeau, and Benjamin Rich Zendel. "The effects of aging and musicianship on the use of auditory streaming cues." PLOS ONE 17, no. 9 (September 22, 2022): e0274631. http://dx.doi.org/10.1371/journal.pone.0274631.

Full text
Abstract:
Auditory stream segregation, or separating sounds into their respective sources and tracking them over time, is a fundamental auditory ability. Previous research has separately explored the impacts of aging and musicianship on the ability to separate and follow auditory streams. The current study evaluated the simultaneous effects of age and musicianship on auditory streaming induced by three physical features: intensity, spectral envelope and temporal envelope. In the first study, older and younger musicians and non-musicians with normal hearing identified deviants in a four-note melody interleaved with distractors that were more or less similar to the melody in terms of intensity, spectral envelope and temporal envelope. In the second study, older and younger musicians and non-musicians participated in a dissimilarity rating paradigm with pairs of melodies that differed along the same three features. Results suggested that auditory streaming skills are maintained in older adults, but that older adults rely on intensity more than younger adults do, while musicianship is associated with increased sensitivity to spectral and temporal envelope, acoustic features that are typically less effective for stream segregation, particularly in older adults.
APA, Harvard, Vancouver, ISO, and other styles
44

Avan, Paul, Fabrice Giraudet, and Béla Büki. "Importance of Binaural Hearing." Audiology and Neurotology 20, Suppl. 1 (2015): 3–6. http://dx.doi.org/10.1159/000380741.

Full text
Abstract:
An essential task for the central auditory pathways is to parse the auditory messages sent by the two cochleae into auditory objects, the segregation and localisation of which constitute an important means of separating target signals from noise and competing sources. When hearing losses are too asymmetric, patients face a situation in which the monaural exploitation of sound messages significantly lessens their performance compared to what it would be in a binaural situation. Rehabilitation procedures must aim at restoring as many binaural advantages as possible. These advantages encompass binaural redundancy, the head shadow effect and binaural release from masking, the principles and requirements of which make up the topic of this short review. Even without a complete understanding of their neuronal mechanisms, empirical data show that binaural advantages can be restored even in situations in which faultless symmetry is inaccessible.
APA, Harvard, Vancouver, ISO, and other styles
45

Kidd, Gerald, Christine R. Mason, Tanya L. Rohtla, and Phalguni S. Deliwala. "Release from masking due to spatial separation of sources in the identification of nonspeech auditory patterns." Journal of the Acoustical Society of America 104, no. 1 (July 1998): 422–31. http://dx.doi.org/10.1121/1.423246.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Fratani, Jéssica, Manoela Woitovicz-Cardoso, and Ana Carolina Calijorne Lourenço. "Osteology of Physalaemus nattereri (Anura: Leptodactylidae) with comments on intraspecific variation." Zootaxa 4227, no. 2 (February 2, 2017): 219. http://dx.doi.org/10.11646/zootaxa.4227.2.4.

Full text
Abstract:
The cranium, postcranium, and osteological variation of Physalaemus nattereri (Steindachner) are described. The main sources of variation involve the degree of mineralization of the nasal capsule and the lengths of dermal skull bones (e.g., vomer, sphenethmoid, and neopalatine). Osteologically, P. nattereri differs from its congeners by the anterior placement of the jaw articulation (which lies anterior to the intersection between the alae and cultriform process of parasphenoid), and by the separation of the frontoparietals from the anterior margins of exoccipitals. Descriptions of the nasal capsule, the auditory apparatus, and the iliosacral articulation are presented for the first time for this species. One putative morphological synapomorphy is presented for the P. signifer Clade.
APA, Harvard, Vancouver, ISO, and other styles
47

Jeng, Fuh-Cherng, Breanna N. Hart, and Chia-Der Lin. "Separating the Novel Speech Sound Perception of Lexical Tone Chimeras From Their Auditory Signal Manipulations: Behavioral and Electroencephalographic Evidence." Perceptual and Motor Skills 128, no. 6 (September 29, 2021): 2527–43. http://dx.doi.org/10.1177/00315125211049723.

Full text
Abstract:
Previous research has shown the novelty of lexical-tone chimeras (artificially constructed speech sounds created by combining normal speech sounds of a given language) to native speakers of the language from which the chimera components were drawn. However, the source of such novelty remains unclear. Our goal in this study was to separate the effects of chimeric tonal novelty in Mandarin speech from the effects of auditory signal manipulations. We recruited 20 native speakers of Mandarin and constructed two sets of lexical-tone chimeras by interchanging the envelopes and fine structures of both a falling /yi4/ and a rising /yi2/ Mandarin tone through 1, 2, 3, 4, 6, 8, 16, 32, and 64 auditory filter banks. We conducted pitch-perception ability tasks via a two-alternative, forced-choice paradigm to produce behavioral (versus physiological) pitch-perception data. We also obtained electroencephalographic measurements through the scalp-recorded frequency-following response (FFR). Analyses of variance and post hoc Greenhouse-Geisser procedures revealed that the differences observed in the participants’ reaction times and FFR measurements were attributable primarily to chimeric novelty rather than to signal manipulation effects. These findings can be useful in assessing neuroplasticity and developing speech-processing strategies.
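The chimera construction referenced here (interchanging envelope and temporal fine structure across filter-bank channels) follows the general recipe of Smith, Delgutte, and Oxenham (2002) and can be sketched compactly. The code below is a simplified illustration under assumed parameters (Butterworth bands, log-spaced edges, Hilbert-based envelope extraction), not the authors' processing chain:

```python
# Simplified auditory chimera: the envelope of signal x is carried on the
# temporal fine structure of signal y, band by band, then the bands are summed.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

FS = 16000  # Hz, assumed sampling rate

def chimera(x, y, n_bands=8, lo=80.0, hi=7000.0):
    edges = np.geomspace(lo, hi, n_bands + 1)   # log-spaced band edges
    n = min(len(x), len(y))
    out = np.zeros(n)
    for f_lo, f_hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [f_lo, f_hi], btype='band', fs=FS, output='sos')
        bx = sosfiltfilt(sos, x[:n])
        by = sosfiltfilt(sos, y[:n])
        env = np.abs(hilbert(bx))               # envelope from x
        tfs = np.cos(np.angle(hilbert(by)))     # fine structure from y
        out += env * tfs
    return out
```

Swapping the roles of x and y yields the complementary chimera; varying n_bands from 1 to 64, as in the study, shifts how much pitch information survives in the envelope versus the fine structure.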
APA, Harvard, Vancouver, ISO, and other styles
48

Krioni, A. E. "Detective Audit: Methodology for Assessing the Business Reliability of a Small and Medium-Sized Business Entity." Accounting. Analysis. Auditing 5, no. 4 (September 14, 2018): 64–77. http://dx.doi.org/10.26794/2408-9303-2018-5-4-64-77.

Full text
Abstract:
The method of private investigation is probably the most appropriate basis for implementing independent external control of commercial enterprises. Indeed, setting aside the analysis of accounting documents, which are perhaps the main and only source of the audited company's financial history, the remaining problem of forecasting business risk comes down to searching for non-obvious factors of the company's economic life that are hidden from public view. In modern crisis conditions such analysis is difficult and requires new approaches to external control. Aims and objectives. The purpose of the work is to develop methodological provisions for the detective form of auditing. The proposed method is in steady demand among the clients of detectives, as it opens new opportunities for honest business executives. The need for interaction with private investigators arises from clients' uncertainty about the audit opinion, which is one consequence of the external auditor's direct dependence on the audited organization. A detective, on the contrary, is completely independent and free to collect and analyze any information related to the actual financial and economic activities of the company. Methodology. The legal and institutional framework, the current state of technical equipment and financial documentation, and the interaction and separation of powers among control bodies are among the roughly sixty factors considered in the article as essential qualitative characteristics affecting the current assessment of the financial and economic activity of an enterprise or firm. Results. The place of detective auditing in the theory and practice of audit activity is specified. External and internal factors that exert a significant influence on the economic and financial activities of an organization are singled out. Factor analysis is proposed as the tool for implementation. Application area. The results of the research can be applied to the development of external auditing theory and practice. Conclusions. In conditions of business opacity, detective auditing gives the auditor's clients a means of choosing reliable future counterparties.
APA, Harvard, Vancouver, ISO, and other styles
49

Zendel, Benjamin Rich, Charles-David Tremblay, Sylvie Belleville, and Isabelle Peretz. "The Impact of Musicianship on the Cortical Mechanisms Related to Separating Speech from Background Noise." Journal of Cognitive Neuroscience 27, no. 5 (May 2015): 1044–59. http://dx.doi.org/10.1162/jocn_a_00758.

Full text
Abstract:
Musicians have enhanced auditory processing abilities. In some studies, these abilities are paralleled by an improved understanding of speech in noisy environments, partially due to more robust encoding of speech signals in noise at the level of the brainstem. Little is known about the impact of musicianship on attention-dependent cortical activity related to lexical access during a speech-in-noise task. To address this issue, we presented musicians and nonmusicians with single words mixed with three levels of background noise, across two conditions, while monitoring electrical brain activity. In the active condition, listeners repeated the words aloud, and in the passive condition, they ignored the words and watched a silent film. When background noise was most intense, musicians repeated more words correctly compared with nonmusicians. Auditory evoked responses were attenuated and delayed with the addition of background noise. In musicians, P1 amplitude was marginally enhanced during active listening and was related to task performance in the most difficult listening condition. By comparing ERPs from the active and passive conditions, we isolated an N400 related to lexical access. The amplitude of the N400 was not influenced by the level of background noise in musicians, whereas N400 amplitude increased with the level of background noise in nonmusicians. In nonmusicians, the increase in N400 amplitude was related to a reduction in task performance. In musicians only, there was a rightward shift of the sources contributing to the N400 as the level of background noise increased. This pattern of results supports the hypothesis that encoding of speech in noise is more robust in musicians and suggests that this facilitates lexical access. Moreover, the shift in sources suggests that musicians, to a greater extent than nonmusicians, may increasingly rely on acoustic cues to understand speech in noise.
APA, Harvard, Vancouver, ISO, and other styles
50

Morinaga, Makoto, Takanori Matsui, Sonoko Kuwano, and Seiichiro Namba. "An experiment on the feeling of separation when multiple aircraft noises are overlapped." INTER-NOISE and NOISE-CON Congress and Conference Proceedings 263, no. 4 (August 1, 2021): 2058–63. http://dx.doi.org/10.3397/in-2021-2041.

Full text
Abstract:
In order to calculate the A-weighted single-event sound exposure level (L_AE) of aircraft noise, the following method is described in the manual for aircraft noise measurement in Japan. First, a time section is identified as the range between the two points where the noise level is 10 dB lower than the maximum noise level (L_Amax); second, the energy within that section is integrated. This method can easily be applied to single-event noises. When multiple aircraft noises overlap simultaneously, there are cases where L_AE cannot be calculated adequately by this method. In such cases, the number of aircraft noise events must be recorded in the field measurements. However, even with manned measurement, it is not easy to separate sound sources just by listening. A pilot psychoacoustic experiment was conducted using stimuli in which multiple aircraft noises were overlapped, in order to find what conditions are needed for multiple aircraft noises to be perceived separately. The results suggested that a considerable time interval is needed for people to perceive a separation between aircraft noises from auditory information alone.
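The manual's single-event procedure described above is straightforward to express in code. The sketch below is a minimal illustration with assumed inputs (a sampled A-weighted level time series), not the measurement manual's reference implementation, and it inherits the same weakness the paper examines: with overlapping events the 10-dB-down section can merge several flyovers into one.

```python
# Single-event sound exposure level via the 10-dB-down method: find the
# section where the level stays within 10 dB of L_Amax, then integrate the
# energy over that section (reference duration t0 = 1 s).
import numpy as np

def l_ae_10db_down(levels_db, dt=1.0):
    """levels_db: A-weighted levels [dB]; dt: sampling interval [s]."""
    l_max = np.max(levels_db)
    above = np.where(levels_db >= l_max - 10.0)[0]
    section = levels_db[above[0]:above[-1] + 1]   # 10-dB-down time section
    # L_AE = 10*log10( sum over section of 10^(L/10) * dt / t0 )
    return 10.0 * np.log10(np.sum(10.0 ** (section / 10.0)) * dt / 1.0)
```

When two events overlap so that the level never drops 10 dB below the combined maximum, this returns a single merged L_AE, which is exactly why the paper asks how large a time interval listeners need before they perceive the events as separate.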
APA, Harvard, Vancouver, ISO, and other styles