Journal articles on the topic 'Computational auditory scene analysis'

Consult the top 50 journal articles for your research on the topic 'Computational auditory scene analysis.'

1

Brown, Guy J., and Martin Cooke. "Computational auditory scene analysis." Computer Speech & Language 8, no. 4 (October 1994): 297–336. http://dx.doi.org/10.1006/csla.1994.1016.

2

Alain, Claude, and Lori J. Bernstein. "Auditory Scene Analysis." Music Perception 33, no. 1 (September 1, 2015): 70–82. http://dx.doi.org/10.1525/mp.2015.33.1.70.

Abstract:
Albert Bregman’s (1990) book Auditory Scene Analysis: The Perceptual Organization of Sound has had a tremendous impact on research in auditory neuroscience. Here, we outline some of the accomplishments. This review is not meant to be exhaustive, but rather aims to highlight milestones in the brief history of auditory neuroscience. The steady increase in neuroscience research following the book’s pivotal publication has advanced knowledge about how the brain forms representations of auditory objects. This research has far-reaching societal implications on health and quality of life. For instance, it helped us understand why some people experience difficulties understanding speech in noise, which in turn has led to development of therapeutic interventions. Importantly, the book acts as a catalyst, providing scientists with a common conceptual framework for research in such diverse fields as speech perception, music perception, neurophysiology and computational neuroscience. This interdisciplinary approach to research in audition is one of this book’s legacies.
3

Brown, Guy J. "Computational auditory scene analysis: A representational approach." Journal of the Acoustical Society of America 94, no. 4 (October 1993): 2454. http://dx.doi.org/10.1121/1.407441.

4

Lewicki, Michael S., Bruno A. Olshausen, Annemarie Surlykke, and Cynthia F. Moss. "Computational issues in natural auditory scene analysis." Journal of the Acoustical Society of America 137, no. 4 (April 2015): 2249. http://dx.doi.org/10.1121/1.4920202.

5

Niessen, Maria E., Ronald A. Van Elburg, Dirkjan J. Krijnders, and Tjeerd C. Andringa. "A computational model for auditory scene analysis." Journal of the Acoustical Society of America 123, no. 5 (May 2008): 3301. http://dx.doi.org/10.1121/1.2933719.

6

Nakadai, Kazuhiro, and Hiroshi G. Okuno. "Robot Audition and Computational Auditory Scene Analysis." Advanced Intelligent Systems 2, no. 9 (July 8, 2020): 2000050. http://dx.doi.org/10.1002/aisy.202000050.

7

Godsmark, Darryl, and Guy J. Brown. "A blackboard architecture for computational auditory scene analysis." Speech Communication 27, no. 3-4 (April 1999): 351–66. http://dx.doi.org/10.1016/s0167-6393(98)00082-x.

8

Kondo, Hirohito M., Anouk M. van Loon, Jun-Ichiro Kawahara, and Brian C. J. Moore. "Auditory and visual scene analysis: an overview." Philosophical Transactions of the Royal Society B: Biological Sciences 372, no. 1714 (February 19, 2017): 20160099. http://dx.doi.org/10.1098/rstb.2016.0099.

Abstract:
We perceive the world as stable and composed of discrete objects even though auditory and visual inputs are often ambiguous owing to spatial and temporal occluders and changes in the conditions of observation. This raises important questions regarding where and how ‘scene analysis’ is performed in the brain. Recent advances from both auditory and visual research suggest that the brain does not simply process the incoming scene properties. Rather, top-down processes such as attention, expectations and prior knowledge facilitate scene perception. Thus, scene analysis is linked not only with the extraction of stimulus features and formation and selection of perceptual objects, but also with selective attention, perceptual binding and awareness. This special issue covers novel advances in scene-analysis research obtained using a combination of psychophysics, computational modelling, neuroimaging and neurophysiology, and presents new empirical and theoretical approaches. For integrative understanding of scene analysis beyond and across sensory modalities, we provide a collection of 15 articles that enable comparison and integration of recent findings in auditory and visual scene analysis. This article is part of the themed issue ‘Auditory and visual scene analysis’.
9

Cooke, M. P., and G. J. Brown. "Computational auditory scene analysis: Exploiting principles of perceived continuity." Speech Communication 13, no. 3-4 (December 1993): 391–99. http://dx.doi.org/10.1016/0167-6393(93)90037-l.

10

Shao, Yang, and DeLiang Wang. "Sequential organization of speech in computational auditory scene analysis." Speech Communication 51, no. 8 (August 2009): 657–67. http://dx.doi.org/10.1016/j.specom.2009.02.003.

11

Fodróczi, Zoltán, and András Radványi. "Computational auditory scene analysis in cellular wave computing framework." International Journal of Circuit Theory and Applications 34, no. 4 (2006): 489–515. http://dx.doi.org/10.1002/cta.362.

12

Bregman, Albert S. "Progress in Understanding Auditory Scene Analysis." Music Perception 33, no. 1 (September 1, 2015): 12–19. http://dx.doi.org/10.1525/mp.2015.33.1.12.

Abstract:
In this paper, I make the following claims: (1) Subjective experience is tremendously useful in guiding productive research. (2) Studies of auditory scene analysis (ASA) in adults, newborn infants, and non-human animals (e.g., in goldfish or pigeons) establish the generality of ASA and suggest that it has an innate foundation. (3) ASA theory does not favor one musical style over another. (4) The principles used in the composition of polyphony (slightly modified) apply not only to one particular musical style or culture but to any form of layered music. (5) Neural explanations of ASA do not supersede explanations in terms of capacities; the two are complementary. (6) In computational auditory scene analysis (CASA) – ASA by computer systems – or any adequate theory of ASA, the most difficult challenge will be to discover how the contributions of a very large number of types of acoustical evidence and top-down schemas (acquired knowledge about the sound sources in our environments), can be coordinated without producing conflict that disables the system. (7) Finally I argue that the movement of a listener within the auditory scene provides him/her/it with rich information that should not be ignored by ASA theorists and researchers.
13

Crawford, Malcolm, Martin Cooke, and Guy Brown. "Interactive computational auditory scene analysis: An environment for exploring auditory representations and groups." Journal of the Acoustical Society of America 93, no. 4 (April 1993): 2308. http://dx.doi.org/10.1121/1.406432.

14

Hongyan, Li, Cao Meng, and Wang Yue. "Separation of Reverberant Speech Based on Computational Auditory Scene Analysis." Automatic Control and Computer Sciences 52, no. 6 (November 2018): 561–71. http://dx.doi.org/10.3103/s0146411618060068.

15

Kawamoto, Mitsuru. "Sound-Environment Monitoring Method Based on Computational Auditory Scene Analysis." Journal of Signal and Information Processing 08, no. 02 (2017): 65–77. http://dx.doi.org/10.4236/jsip.2017.82005.

16

Cooke, Martin, Guy J. Brown, Malcolm Crawford, and Phil Green. "Computational auditory scene analysis: listening to several things at once." Endeavour 17, no. 4 (January 1993): 186–90. http://dx.doi.org/10.1016/0160-9327(93)90061-7.

17

Drake, Laura A., Janet C. Rutledge, and Aggelos Katsaggelos. "Computational auditory scene analysis‐constrained array processing for sound source separation." Journal of the Acoustical Society of America 106, no. 4 (October 1999): 2238. http://dx.doi.org/10.1121/1.427622.

18

McLachlan, Neil, Dinesh Kant Kumar, and John Becker. "Wavelet Classification of Indoor Environmental Sound Sources." International Journal of Wavelets, Multiresolution and Information Processing 4, no. 1 (March 2006): 81–96. http://dx.doi.org/10.1142/s0219691306001105.

Abstract:
Computational auditory scene analysis (CASA) has been attracting growing interest since the publication of Bregman's text on human auditory scene analysis, and is expected to find many applications in data retrieval, autonomous robots, security and environmental analysis. This paper reports on the use of Fourier transforms and wavelet transforms to produce spectral data of sounds from different sources for classification by neural networks. It was found that the multiresolution time-frequency analyses of wavelet transforms dramatically improved classification accuracy when statistical descriptors that captured measures of band limited spectral energy and temporal energy fluctuation were used.
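The pipeline this entry describes (multiresolution wavelet descriptors fed to a neural-network classifier) can be illustrated with a minimal sketch. Everything below is an assumption-laden toy example, not the authors' implementation: the wavelet family ('db4'), the decomposition depth, the two descriptors per band, and the scikit-learn classifier are illustrative choices only.

```python
# Minimal sketch: wavelet-band descriptors + a small neural-network classifier,
# loosely in the spirit of the entry above (NOT the authors' code).
# Assumed choices: 'db4' wavelet, 5 decomposition levels, scikit-learn MLP.
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def wavelet_descriptors(signal, wavelet="db4", level=5):
    """Per-band energy and temporal-fluctuation statistics of a 1-D signal."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    feats = []
    for band in coeffs:
        energy = np.sum(band ** 2)
        fluctuation = np.std(np.abs(band))  # crude temporal-variation measure
        feats.extend([np.log1p(energy), np.log1p(fluctuation)])
    return np.array(feats)

# Toy usage with two synthetic "sources": a low tone vs. broadband noise.
rng = np.random.default_rng(0)
fs, dur = 16000, 0.5
t = np.arange(int(fs * dur)) / fs
tones = [np.sin(2 * np.pi * 220 * t) + 0.05 * rng.standard_normal(t.size) for _ in range(20)]
noises = [rng.standard_normal(t.size) for _ in range(20)]
X = np.vstack([wavelet_descriptors(s) for s in tones + noises])
y = np.array([0] * 20 + [1] * 20)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```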
19

Haykin, Simon, and Zhe Chen. "The Cocktail Party Problem." Neural Computation 17, no. 9 (September 1, 2005): 1875–902. http://dx.doi.org/10.1162/0899766054322964.

Abstract:
This review presents an overview of a challenging problem in auditory perception, the cocktail party phenomenon, the delineation of which goes back to a classic paper by Cherry in 1953. In this review, we address the following issues: (1) human auditory scene analysis, which is a general process carried out by the auditory system of a human listener; (2) insight into auditory perception, which is derived from Marr's vision theory; (3) computational auditory scene analysis, which focuses on specific approaches aimed at solving the machine cocktail party problem; (4) active audition, the proposal for which is motivated by analogy with active vision, and (5) discussion of brain theory and independent component analysis, on the one hand, and correlative neural firing, on the other.
20

Kashino, Makio, Eisuke Adachi, and Haruto Hirose. "A computational model for the dynamic aspects of primitive auditory scene analysis." Journal of the Acoustical Society of America 131, no. 4 (April 2012): 3230. http://dx.doi.org/10.1121/1.4708046.

21

Hu, Ying, and Guizhong Liu. "Singer identification based on computational auditory scene analysis and missing feature methods." Journal of Intelligent Information Systems 42, no. 3 (August 9, 2013): 333–52. http://dx.doi.org/10.1007/s10844-013-0271-6.

22

Kaya, Emine Merve, and Mounya Elhilali. "Modelling auditory attention." Philosophical Transactions of the Royal Society B: Biological Sciences 372, no. 1714 (February 19, 2017): 20160101. http://dx.doi.org/10.1098/rstb.2016.0101.

Abstract:
Sounds in everyday life seldom appear in isolation. Both humans and machines are constantly flooded with a cacophony of sounds that need to be sorted through and scoured for relevant information—a phenomenon referred to as the ‘cocktail party problem’. A key component in parsing acoustic scenes is the role of attention, which mediates perception and behaviour by focusing both sensory and cognitive resources on pertinent information in the stimulus space. The current article provides a review of modelling studies of auditory attention. The review highlights how the term attention refers to a multitude of behavioural and cognitive processes that can shape sensory processing. Attention can be modulated by ‘bottom-up’ sensory-driven factors, as well as ‘top-down’ task-specific goals, expectations and learned schemas. Essentially, it acts as a selection process or processes that focus both sensory and cognitive resources on the most relevant events in the soundscape; with relevance being dictated by the stimulus itself (e.g. a loud explosion) or by a task at hand (e.g. listen to announcements in a busy airport). Recent computational models of auditory attention provide key insights into its role in facilitating perception in cluttered auditory scenes. This article is part of the themed issue ‘Auditory and visual scene analysis’.
23

Bregman, Albert S. "Constraints on computational models of auditory scene analysis, as derived from human perception." Journal of the Acoustical Society of Japan (E) 16, no. 3 (1995): 133–36. http://dx.doi.org/10.1250/ast.16.133.

24

Shao, Yang, Soundararajan Srinivasan, Zhaozhang Jin, and DeLiang Wang. "A computational auditory scene analysis system for speech segregation and robust speech recognition." Computer Speech & Language 24, no. 1 (January 2010): 77–93. http://dx.doi.org/10.1016/j.csl.2008.03.004.

25

Cichy, Radoslaw Martin, and Santani Teng. "Resolving the neural dynamics of visual and auditory scene processing in the human brain: a methodological approach." Philosophical Transactions of the Royal Society B: Biological Sciences 372, no. 1714 (February 19, 2017): 20160108. http://dx.doi.org/10.1098/rstb.2016.0108.

Abstract:
In natural environments, visual and auditory stimulation elicit responses across a large set of brain regions in a fraction of a second, yielding representations of the multimodal scene and its properties. The rapid and complex neural dynamics underlying visual and auditory information processing pose major challenges to human cognitive neuroscience. Brain signals measured non-invasively are inherently noisy, the format of neural representations is unknown, and transformations between representations are complex and often nonlinear. Further, no single non-invasive brain measurement technique provides a spatio-temporally integrated view. In this opinion piece, we argue that progress can be made by a concerted effort based on three pillars of recent methodological development: (i) sensitive analysis techniques such as decoding and cross-classification, (ii) complex computational modelling using models such as deep neural networks, and (iii) integration across imaging methods (magnetoencephalography/electroencephalography, functional magnetic resonance imaging) and models, e.g. using representational similarity analysis. We showcase two recent efforts that have been undertaken in this spirit and provide novel results about visual and auditory scene analysis. Finally, we discuss the limits of this perspective and sketch a concrete roadmap for future research. This article is part of the themed issue ‘Auditory and visual scene analysis’.
26

Zeremdini, Jihen, Mohamed Anouar Ben Messaoud, and Aicha Bouzid. "A comparison of several computational auditory scene analysis (CASA) techniques for monaural speech segregation." Brain Informatics 2, no. 3 (August 4, 2015): 155–66. http://dx.doi.org/10.1007/s40708-015-0016-0.

27

Hupé, Jean-Michel, and Daniel Pressnitzer. "The initial phase of auditory and visual scene analysis." Philosophical Transactions of the Royal Society B: Biological Sciences 367, no. 1591 (April 5, 2012): 942–53. http://dx.doi.org/10.1098/rstb.2011.0368.

Abstract:
Auditory streaming and visual plaids have been used extensively to study perceptual organization in each modality. Both stimuli can produce bistable alternations between grouped (one object) and split (two objects) interpretations. They also share two peculiar features: (i) at the onset of stimulus presentation, organization starts with a systematic bias towards the grouped interpretation; (ii) this first percept has ‘inertia’; it lasts longer than the subsequent ones. As a result, the probability of forming different objects builds up over time, a landmark of both behavioural and neurophysiological data on auditory streaming. Here we show that first percept bias and inertia are independent. In plaid perception, inertia is due to a depth ordering ambiguity in the transparent (split) interpretation that makes plaid perception tristable rather than bistable: experimental manipulations removing the depth ambiguity suppressed inertia. However, the first percept bias persisted. We attempted a similar manipulation for auditory streaming by introducing level differences between streams, to bias which stream would appear in the perceptual foreground. Here both inertia and first percept bias persisted. We thus argue that the critical common feature of the onset of perceptual organization is the grouping bias, which may be related to the transition from temporally/spatially local to temporally/spatially global computation.
28

Darwin, Chris. Review of Computational Auditory Scene Analysis: Principles, Algorithms and Applications, by DeLiang Wang and Guy J. Brown (Wiley-IEEE Press, Hoboken, N.J., 2006. xxiii + 395 pp. $95.50 hardcover, ISBN: 0471741094). Journal of the Acoustical Society of America 124, no. 1 (July 2008): 13. http://dx.doi.org/10.1121/1.2920958.

29

Li, P., Y. Guan, B. Xu, and W. Liu. "Monaural Speech Separation Based on Computational Auditory Scene Analysis and Objective Quality Assessment of Speech." IEEE Transactions on Audio, Speech and Language Processing 14, no. 6 (November 2006): 2014–23. http://dx.doi.org/10.1109/tasl.2006.883258.

30

Park, J. H., J. S. Yoon, and H. K. Kim. "HMM-Based Mask Estimation for a Speech Recognition Front-End Using Computational Auditory Scene Analysis." IEICE Transactions on Information and Systems E91-D, no. 9 (September 1, 2008): 2360–64. http://dx.doi.org/10.1093/ietisy/e91-d.9.2360.

31

McElveen, J. K., Leonid Krasny, and Scott Nordlund. "Applying matched field array processing and machine learning to computational auditory scene analysis and source separation challenges." Journal of the Acoustical Society of America 151, no. 4 (April 2022): A232. http://dx.doi.org/10.1121/10.0011162.

Abstract:
Matched field processing (MFP) techniques employing physics-based models of acoustic propagation have been successfully and widely applied to underwater target detection and localization, while machine learning (ML) techniques have enabled detection and extraction of patterns in data. Fusing MFP and ML enables the estimation of Green’s Function solutions to the Acoustic Wave Equation for waveguides from data captured in real, reverberant acoustic environments. These Green’s Function estimates can further enable the robust separation of individual sources, even in the presence of multiple loud, interfering, interposed, and competing noise sources. We first introduce MFP and ML and then discuss their application to Computational Auditory Scene Analysis (CASA) and acoustic source separation. Results from a variety of tests using a binaural headset, as well as different wearable and free-standing microphone arrays are then presented to illustrate the effects of the number and placement of sensors on the residual noise floor after separation. Finally, speculations on the similarities between this proprietary approach and the human auditory system’s use of interaural cross-correlation in formulation of acoustic spatial models will be introduced and ideas for further research proposed.
32

Veale, Richard, Ziad M. Hafed, and Masatoshi Yoshida. "How is visual salience computed in the brain? Insights from behaviour, neurobiology and modelling." Philosophical Transactions of the Royal Society B: Biological Sciences 372, no. 1714 (February 19, 2017): 20160113. http://dx.doi.org/10.1098/rstb.2016.0113.

Abstract:
Inherent in visual scene analysis is a bottleneck associated with the need to sequentially sample locations with foveating eye movements. The concept of a ‘saliency map’ topographically encoding stimulus conspicuity over the visual scene has proven to be an efficient predictor of eye movements. Our work reviews insights into the neurobiological implementation of visual salience computation. We start by summarizing the role that different visual brain areas play in salience computation, whether at the level of feature analysis for bottom-up salience or at the level of goal-directed priority maps for output behaviour. We then delve into how a subcortical structure, the superior colliculus (SC), participates in salience computation. The SC represents a visual saliency map via a centre-surround inhibition mechanism in the superficial layers, which feeds into priority selection mechanisms in the deeper layers, thereby affecting saccadic and microsaccadic eye movements. Lateral interactions in the local SC circuit are particularly important for controlling active populations of neurons. This, in turn, might help explain long-range effects, such as those of peripheral cues on tiny microsaccades. Finally, we show how a combination of in vitro neurophysiology and large-scale computational modelling is able to clarify how salience computation is implemented in the local circuit of the SC. This article is part of the themed issue ‘Auditory and visual scene analysis’.
33

Gregoire, Jerry. "Review and comparison of methods using spectral characteristics for the purposes of CASA, Computational Auditory Scene Analysis." Journal of the Acoustical Society of America 114, no. 4 (October 2003): 2331. http://dx.doi.org/10.1121/1.4781037.

34

Rouat, J. "Computational Auditory Scene Analysis: Principles, Algorithms, and Applications (Wang, D. and Brown, G.J., Eds.; 2006) [Book review]." IEEE Transactions on Neural Networks 19, no. 1 (January 2008): 199. http://dx.doi.org/10.1109/tnn.2007.913988.

35

Jiang, Yi, Yuan Yuan Zu, and Ying Ze Wang. "An Unsupervised Approach to Close-Talk Speech Enhancement." Applied Mechanics and Materials 614 (September 2014): 363–66. http://dx.doi.org/10.4028/www.scientific.net/amm.614.363.

Abstract:
A K-means based unsupervised approach to close-talk speech enhancement is proposed in this paper. Within the framework of computational auditory scene analysis (CASA), the dual-microphone energy difference (DMED) is used as the cue to classify time-frequency (T-F) units as noise-dominant or target-speech-dominant. A ratio mask is then used to separate the target speech from the noise. Experimental results show that the proposed algorithm is more robust than the Wiener filtering algorithm.
36

Zhou, Hong, Yi Jiang, Ming Jiang, and Qiang Chen. "Energy Difference Based Speech Segregation for Close-Talk System." Applied Mechanics and Materials 229-231 (November 2012): 1738–41. http://dx.doi.org/10.4028/www.scientific.net/amm.229-231.1738.

Abstract:
Within the framework of computational auditory scene analysis (CASA), a speech separation algorithm based on energy difference is proposed for a close-talk system. The two microphones simultaneously receive a mixture of near target speech and distant noise. The inter-microphone intensity differences (IMID) are calculated for each time-frequency (T-F) unit and used as cues to generate binary masks with two-class K-means clustering. Experiments indicate that this algorithm can separate the target speech from the mixture and performs well in highly noisy environments.
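Entries 35 and 36 both build time-frequency masks from a two-microphone energy (intensity) difference cue clustered with K-means inside a CASA framework. The sketch below illustrates that general idea only; the STFT settings, the log-energy-difference cue, the binary mask, and the cluster-selection rule are assumptions rather than the published algorithms.

```python
# Minimal sketch of energy-difference-based T-F masking (the idea shared by
# entries 35-36, not their published algorithms). Assumptions: STFT front end,
# per-unit log-energy difference between a close-talk and a reference
# microphone as the cue, two-class K-means labels, binary mask on the close mic.
import numpy as np
from scipy.signal import stft, istft
from sklearn.cluster import KMeans

def energy_difference_mask(close_mic, far_mic, fs, nperseg=512):
    _, _, Z_close = stft(close_mic, fs=fs, nperseg=nperseg)
    _, _, Z_far = stft(far_mic, fs=fs, nperseg=nperseg)
    # Cue: per-unit log-energy difference between the two channels.
    cue = 10 * np.log10((np.abs(Z_close) ** 2 + 1e-12) /
                        (np.abs(Z_far) ** 2 + 1e-12))
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
        cue.reshape(-1, 1)).reshape(cue.shape)
    # Take the cluster with the larger mean cue as target-dominant
    # (close-talk speech is louder at the close microphone).
    target_label = int(cue[labels == 1].mean() > cue[labels == 0].mean())
    mask = (labels == target_label).astype(float)
    _, enhanced = istft(Z_close * mask, fs=fs, nperseg=nperseg)
    return enhanced, mask
```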
37

Otsuka, Takuma, Katsuhiko Ishiguro, Hiroshi Sawada, and Hiroshi Okuno. "Bayesian Unification of Sound Source Localization and Separation with Permutation Resolution." Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (September 20, 2021): 2038–45. http://dx.doi.org/10.1609/aaai.v26i1.8376.

Abstract:
Sound source localization and separation with permutation resolution are essential for achieving a computational auditory scene analysis system that can extract useful information from a mixture of various sounds. Because existing methods cope separately with these problems despite their mutual dependence, the overall result with these approaches can be degraded by any failure in one of these components. This paper presents a unified Bayesian framework to solve these problems simultaneously where localization and separation are regarded as a clustering problem. Experimental results confirm that our method outperforms state-of-the-art methods in terms of the separation quality with various setups including practical reverberant environments.
38

Ellis, Daniel P. W. "Using knowledge to organize sound: The prediction-driven approach to computational auditory scene analysis and its application to speech/nonspeech mixtures." Speech Communication 27, no. 3-4 (April 1999): 281–98. http://dx.doi.org/10.1016/s0167-6393(98)00083-1.

39

Li, Hongyan, Yue Wang, Rongrong Zhao, and Xueying Zhang. "An Unsupervised Two-Talker Speech Separation System Based on CASA." International Journal of Pattern Recognition and Artificial Intelligence 32, no. 07 (March 14, 2018): 1858002. http://dx.doi.org/10.1142/s0218001418580028.

Abstract:
Building on the theory of monaural blind speech separation based on computational auditory scene analysis (CASA), this paper proposes a two-talker speech separation system that combines CASA with speaker recognition to separate speech from competing speech interference. First, a tandem algorithm is used to organize voiced speech; then, based on clustering of gammatone frequency cepstral coefficients (GFCCs), an objective function is established to recognize the speaker, and the best grouping is obtained through exhaustive search or beam search, so that voiced speech is organized sequentially. Second, unvoiced segments are generated by estimating onsets/offsets, and unvoiced–voiced (U–V) and unvoiced–unvoiced (U–U) segments are separated respectively: the U–V segments are handled via the binary mask of the separated voiced speech, while the U–U segments are separated evenly. In this way, the unvoiced segments are separated. Simulation and performance evaluation verify the feasibility and effectiveness of the proposed algorithm.
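Entry 39 relies on gammatone frequency cepstral coefficients (GFCCs) for speaker assignment. As a rough illustration of one common GFCC recipe (gammatone filterbank energies, cube-root compression, DCT), the sketch below uses assumed filter parameters, channel count, and frame settings; it is not the paper's implementation.

```python
# Minimal GFCC sketch (illustrative only; not the system in entry 39).
# One common recipe: gammatone filterbank -> frame energies -> cube-root
# compression -> DCT. Channel count, ERB spacing, and frame sizes are assumptions.
import numpy as np
from scipy.signal import fftconvolve
from scipy.fftpack import dct

def erb_space(low, high, n):
    """n center frequencies equally spaced on the ERB-rate scale (Slaney-style)."""
    ear_q, min_bw = 9.26449, 24.7
    return -(ear_q * min_bw) + np.exp(
        np.linspace(1, n, n) * (-np.log(high + ear_q * min_bw)
                                + np.log(low + ear_q * min_bw)) / n
    ) * (high + ear_q * min_bw)

def gammatone_ir(cf, fs, duration=0.05, order=4):
    """Impulse response of a 4th-order gammatone filter at center frequency cf."""
    t = np.arange(int(fs * duration)) / fs
    b = 1.019 * 24.7 * (4.37 * cf / 1000 + 1)  # ERB bandwidth
    return t ** (order - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * cf * t)

def gfcc(signal, fs, n_channels=64, frame_len=0.02, n_coeffs=22):
    """Frame-level GFCCs of a 1-D signal (non-overlapping frames for simplicity)."""
    cfs = erb_space(50.0, fs / 2.0 * 0.9, n_channels)
    hop = int(fs * frame_len)
    n_frames = len(signal) // hop
    energies = np.zeros((n_frames, n_channels))
    for ch, cf in enumerate(cfs):
        band = fftconvolve(signal, gammatone_ir(cf, fs), mode="same")
        for fr in range(n_frames):
            seg = band[fr * hop:(fr + 1) * hop]
            energies[fr, ch] = np.sqrt(np.mean(seg ** 2) + 1e-12)
    compressed = np.cbrt(energies)  # cube-root loudness compression
    return dct(compressed, type=2, norm="ortho", axis=1)[:, :n_coeffs]
```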
40

Abe, Mototsugu, and Shigeru Ando. "Auditory scene analysis based on time-frequency integration of shared FM and AM (I): Lagrange differential features and frequency-axis integration." Systems and Computers in Japan 33, no. 11 (August 5, 2002): 95–106. http://dx.doi.org/10.1002/scj.1167.

41

Abe, Mototsugu, and Shigeru Ando. "Auditory scene analysis based on time-frequency integration of shared FM and AM (II): Optimum time-domain integration and stream sound reconstruction." Systems and Computers in Japan 33, no. 10 (July 2, 2002): 83–94. http://dx.doi.org/10.1002/scj.1160.

42

He, Yuebo, Hui Gao, Hai Liu, and Guoxi Jing. "Identification of prominent noise components of an electric powertrain using a psychoacoustic model." Noise Control Engineering Journal 70, no. 2 (March 1, 2022): 103–14. http://dx.doi.org/10.3397/1/37709.

Abstract:
Because an electric powertrain, unlike a traditional internal-combustion powertrain, provides no masking sound, electric powertrain noise has become the dominant noise source in electric vehicles, adversely affecting the sound quality of the vehicle interior. Because motor and transmission noise are strongly coupled, it is difficult to separate and identify the noise components of the electric powertrain experimentally. A psychoacoustic model is therefore used to separate and identify the noise sources of a vehicle's electric powertrain, taking into account the masking effect of the human ear. The electric powertrain noise is measured in a semi-anechoic chamber and recorded with a high-precision noise sensor. The noise source composition of the electric powertrain is analyzed by computational auditory scene analysis and robust independent component analysis. Five independent noise sources are obtained: the fundamental frequency of the first gear-mesh noise, the fundamental frequency of the second gear-mesh noise, the double frequency of the second gear-mesh noise, radial electromagnetic force noise, and stator slot harmonic noise. The results provide guidance for optimizing the sound quality of the electric powertrain and improving the sound quality of the vehicle interior.
43

He, Zhuang, and Yin Feng. "Singing Transcription from Polyphonic Music Using Melody Contour Filtering." Applied Sciences 11, no. 13 (June 25, 2021): 5913. http://dx.doi.org/10.3390/app11135913.

Abstract:
Automatic singing transcription and analysis from polyphonic music records are essential in a number of indexing techniques for computational auditory scenes. To obtain a note-level sequence in this work, we divide the singing transcription task into two subtasks: melody extraction and note transcription. We construct a salience function in terms of harmonic and rhythmic similarity and a measurement of spectral balance. Central to our proposed method is the measurement of melody contours, which are calculated using edge searching based on their continuity properties. We calculate the mean contour salience by separating melody analysis from the adjacent breakpoint connective strength matrix, and we select the final melody contour to determine MIDI notes. This unique method, combining audio signals with image edge analysis, provides a more interpretable analysis platform for continuous singing signals. Experimental analysis using Music Information Retrieval Evaluation Exchange (MIREX) datasets shows that our technique achieves promising results both for audio melody extraction and polyphonic singing transcription.
44

Johnson, Keith. "Auditory Scene Analysis." Journal of Phonetics 21, no. 4 (October 1993): 491–96. http://dx.doi.org/10.1016/s0095-4470(19)30232-3.

45

Fay, Richard R. "Auditory Scene Analysis." Bioacoustics 17, no. 1-3 (January 2008): 106–9. http://dx.doi.org/10.1080/09524622.2008.9753783.

46

Sutter, Mitchell L., Christopher Petkov, Kathleen Baynes, and Kevin N. O'Connor. "Auditory scene analysis in dyslexics." NeuroReport 11, no. 9 (June 2000): 1967–71. http://dx.doi.org/10.1097/00001756-200006260-00032.

47

Fay, Richard. "Auditory scene analysis in fish." Journal of the Acoustical Society of America 126, no. 4 (2009): 2290. http://dx.doi.org/10.1121/1.3249386.

48

Bregman, Albert S. "Auditory scene analysis: Theory and phenomena." Journal of the Acoustical Society of America 93, no. 4 (April 1993): 2306. http://dx.doi.org/10.1121/1.406452.

49

Sussman, Elyse. "Auditory Scene Analysis: An Attention Perspective." Clinical Research Education Library 1, no. 1 (2016): 1. http://dx.doi.org/10.1044/cred-pvd-c16002.

50

Sussman, Elyse S. "Auditory Scene Analysis: An Attention Perspective." Journal of Speech, Language, and Hearing Research 60, no. 10 (October 17, 2017): 2989–3000. http://dx.doi.org/10.1044/2017_jslhr-h-17-0041.

Abstract:
Purpose: This review article provides a new perspective on the role of attention in auditory scene analysis. Method: A framework for understanding how attention interacts with stimulus-driven processes to facilitate task goals is presented. Previously reported data obtained through behavioral and electrophysiological measures in adults with normal hearing are summarized to demonstrate attention effects on auditory perception, from passive processes that organize unattended input to attention effects that act at different levels of the system. Data will show that attention can sharpen stream organization toward behavioral goals, identify auditory events obscured by noise, and limit passive processing capacity. Conclusions: A model of attention is provided that illustrates how the auditory system performs multilevel analyses that involve interactions between stimulus-driven input and top-down processes. Overall, these studies show that (a) stream segregation occurs automatically and sets the basis for auditory event formation; (b) attention interacts with automatic processing to facilitate task goals; and (c) information about unattended sounds is not lost when selecting one organization over another. Our results support a neural model that allows multiple sound organizations to be held in memory and accessed simultaneously through a balance of automatic and task-specific processes, allowing flexibility for navigating noisy environments with competing sound sources. Presentation Video: http://cred.pubs.asha.org/article.aspx?articleid=2601618