Academic literature on the topic 'Computational auditory scene analysis'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Computational auditory scene analysis.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Computational auditory scene analysis"

1. Brown, Guy J., and Martin Cooke. "Computational auditory scene analysis." Computer Speech & Language 8, no. 4 (October 1994): 297–336. http://dx.doi.org/10.1006/csla.1994.1016.

2. Alain, Claude, and Lori J. Bernstein. "Auditory Scene Analysis." Music Perception 33, no. 1 (September 1, 2015): 70–82. http://dx.doi.org/10.1525/mp.2015.33.1.70.

Abstract:
Albert Bregman’s (1990) book Auditory Scene Analysis: The Perceptual Organization of Sound has had a tremendous impact on research in auditory neuroscience. Here, we outline some of the accomplishments. This review is not meant to be exhaustive, but rather aims to highlight milestones in the brief history of auditory neuroscience. The steady increase in neuroscience research following the book’s pivotal publication has advanced knowledge about how the brain forms representations of auditory objects. This research has far-reaching societal implications on health and quality of life. For instance, it helped us understand why some people experience difficulties understanding speech in noise, which in turn has led to development of therapeutic interventions. Importantly, the book acts as a catalyst, providing scientists with a common conceptual framework for research in such diverse fields as speech perception, music perception, neurophysiology and computational neuroscience. This interdisciplinary approach to research in audition is one of this book’s legacies.
3. Brown, Guy J. "Computational auditory scene analysis: A representational approach." Journal of the Acoustical Society of America 94, no. 4 (October 1993): 2454. http://dx.doi.org/10.1121/1.407441.

4. Lewicki, Michael S., Bruno A. Olshausen, Annemarie Surlykke, and Cynthia F. Moss. "Computational issues in natural auditory scene analysis." Journal of the Acoustical Society of America 137, no. 4 (April 2015): 2249. http://dx.doi.org/10.1121/1.4920202.

5. Niessen, Maria E., Ronald A. Van Elburg, Dirkjan J. Krijnders, and Tjeerd C. Andringa. "A computational model for auditory scene analysis." Journal of the Acoustical Society of America 123, no. 5 (May 2008): 3301. http://dx.doi.org/10.1121/1.2933719.

6. Nakadai, Kazuhiro, and Hiroshi G. Okuno. "Robot Audition and Computational Auditory Scene Analysis." Advanced Intelligent Systems 2, no. 9 (July 8, 2020): 2000050. http://dx.doi.org/10.1002/aisy.202000050.

7. Godsmark, Darryl, and Guy J. Brown. "A blackboard architecture for computational auditory scene analysis." Speech Communication 27, nos. 3–4 (April 1999): 351–66. http://dx.doi.org/10.1016/s0167-6393(98)00082-x.

8. Kondo, Hirohito M., Anouk M. van Loon, Jun-Ichiro Kawahara, and Brian C. J. Moore. "Auditory and visual scene analysis: an overview." Philosophical Transactions of the Royal Society B: Biological Sciences 372, no. 1714 (February 19, 2017): 20160099. http://dx.doi.org/10.1098/rstb.2016.0099.

Abstract:
We perceive the world as stable and composed of discrete objects even though auditory and visual inputs are often ambiguous owing to spatial and temporal occluders and changes in the conditions of observation. This raises important questions regarding where and how ‘scene analysis’ is performed in the brain. Recent advances from both auditory and visual research suggest that the brain does not simply process the incoming scene properties. Rather, top-down processes such as attention, expectations and prior knowledge facilitate scene perception. Thus, scene analysis is linked not only with the extraction of stimulus features and formation and selection of perceptual objects, but also with selective attention, perceptual binding and awareness. This special issue covers novel advances in scene-analysis research obtained using a combination of psychophysics, computational modelling, neuroimaging and neurophysiology, and presents new empirical and theoretical approaches. For integrative understanding of scene analysis beyond and across sensory modalities, we provide a collection of 15 articles that enable comparison and integration of recent findings in auditory and visual scene analysis. This article is part of the themed issue ‘Auditory and visual scene analysis’.
9. Cooke, M. P., and G. J. Brown. "Computational auditory scene analysis: Exploiting principles of perceived continuity." Speech Communication 13, nos. 3–4 (December 1993): 391–99. http://dx.doi.org/10.1016/0167-6393(93)90037-l.

10. Shao, Yang, and DeLiang Wang. "Sequential organization of speech in computational auditory scene analysis." Speech Communication 51, no. 8 (August 2009): 657–67. http://dx.doi.org/10.1016/j.specom.2009.02.003.


Dissertations / Theses on the topic "Computational auditory scene analysis"

1. Ellis, Daniel Patrick Whittlesey. "Prediction-driven computational auditory scene analysis." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/11006.

2. Delmotte, Varinthira Duangudom. "Computational auditory saliency." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/45888.

Abstract:
The objective of this dissertation research is to identify sounds that grab a listener's attention. Sounds that draw a person's attention are considered salient. The focus here will be on investigating the role of saliency in the auditory attentional process. In order to identify these salient sounds, we have developed a computational auditory saliency model inspired by our understanding of the human auditory system and auditory perception. By identifying salient sounds we can obtain a better understanding of how sounds are processed by the auditory system, and in particular, the key features contributing to sound salience. Additionally, studying the salience of different auditory stimuli can lead to improvements in the performance of current computational models in several different areas, by making use of the information obtained about what stands out perceptually to observers in a particular scene. Auditory saliency also helps to rapidly sort the information present in a complex auditory scene. Since our resources are finite, not all information can be processed equally. We must, therefore, be able to quickly determine the importance of different objects in a scene. Additionally, an immediate response or decision may be required. In order to respond, the observer needs to know the key elements of the scene. The issue of saliency is closely related to many different areas, including scene analysis. The thesis provides a comprehensive look at auditory saliency. It explores the advantages and limitations of using auditory saliency models through different experiments and presents a general computational auditory saliency model that can be used for various applications.
3. Shao, Yang. "Sequential organization in computational auditory scene analysis." Columbus, Ohio: Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1190127412.

4. Brown, Guy Jason. "Computational auditory scene analysis: a representational approach." Thesis, University of Sheffield, 1992. http://etheses.whiterose.ac.uk/2982/.

Abstract:
This thesis addresses the problem of how a listener groups together acoustic components which have arisen from the same environmental event, a phenomenon known as auditory scene analysis. A computational model of auditory scene analysis is presented, which is able to separate speech from a variety of interfering noises. The model consists of four processing stages. Firstly, the auditory periphery is simulated by a bank of bandpass filters and a model of inner hair cell function. In the second stage, physiologically-inspired models of higher auditory organization - auditory maps - are used to provide a rich representational basis for scene analysis. Periodicities in the acoustic input are coded by an autocorrelation map and a cross-correlation map. Information about spectral continuity is extracted by a frequency transition map. The times at which acoustic components start and stop are identified by an onset map and an offset map. In the third stage of processing, information from the periodicity and frequency transition maps is used to characterize the auditory scene as a collection of symbolic auditory objects. Finally, a search strategy identifies objects that have similar properties and groups them together. Specifically, objects are likely to form a group if they have a similar periodicity, onset time or offset time. The model has been evaluated in two ways, using the task of segregating voiced speech from a number of interfering sounds such as random noise, "cocktail party" noise and other speech. Firstly, a waveform can be resynthesized for each group in the auditory scene, so that segregation performance can be assessed by informal listening tests. The resynthesized speech is highly intelligible and fairly natural. Secondly, the linear nature of the resynthesis process allows the signal-to-noise ratio (SNR) to be compared before and after segregation. An improvement in SNR is obtained after segregation for each type of interfering noise. Additionally, the performance of the model is significantly better than that of a conventional frame-based autocorrelation segregation strategy.
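The grouping strategy this abstract describes (fuse components with similar periodicity, onset or offset) can be illustrated compactly. Below is a minimal Python sketch, assuming numpy and scipy: a small Butterworth filterbank stands in for the thesis's gammatone/hair-cell front end, per-channel autocorrelations are pooled into a summary function whose peak suggests a fundamental period, and channels that respond strongly at that lag are grouped. The band edges and the 0.95 threshold are illustrative choices, not Brown's.

```python
import numpy as np
from scipy.signal import butter, sosfilt, fftconvolve

def normalized_autocorr(y, max_lag):
    """Autocorrelation of y for lags 0..max_lag-1, normalized so ac[0] = 1."""
    y = y - y.mean()
    ac = fftconvolve(y, y[::-1])[len(y) - 1:len(y) - 1 + max_lag]
    return ac / ac[0]

fs = 16000
t = np.arange(0, 0.5, 1 / fs)
# Mixture: harmonics 1, 3 and 4 of a 150 Hz source plus an unrelated 310 Hz tone.
mix = sum(np.sin(2 * np.pi * 150 * k * t) for k in (1, 3, 4))
mix += np.sin(2 * np.pi * 310 * t)

bands = [(100, 200), (250, 350), (400, 500), (550, 650)]  # one "channel" each
max_lag = int(fs / 60)                     # consider periods down to 60 Hz
acs = []
for lo, hi in bands:
    sos = butter(2, [lo, hi], btype='bandpass', fs=fs, output='sos')
    acs.append(normalized_autocorr(sosfilt(sos, mix), max_lag))
acs = np.array(acs)

summary = acs.sum(axis=0)                  # pooled ("summary") autocorrelation
lag_min = int(fs / 400)                    # ignore implausibly short periods
period = lag_min + np.argmax(summary[lag_min:])
grouped = [i for i in range(len(bands)) if acs[i, period] > 0.95]
print(f"dominant F0 ~ {fs / period:.0f} Hz, grouped channels: {grouped}")
# Expected: channels 0, 2, 3 (the 150 Hz harmonics) group; channel 1 does not.
```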
5. Srinivasan, Soundararajan. "Integrating computational auditory scene analysis and automatic speech recognition." Columbus, Ohio: Ohio State University, 2006. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1158250036.

6. Narayanan, Arun. "Computational auditory scene analysis and robust automatic speech recognition." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1401460288.

7. Unnikrishnan, Harikrishnan. "Audio Scene Segmentation Using a Microphone Array and Auditory Features." UKnowledge, 2010. http://uknowledge.uky.edu/gradschool_theses/622.

Abstract:
Auditory stream denotes the abstract effect a source creates in the mind of the listener. An auditory scene consists of many streams, which the listener uses to analyze and understand the environment. Computer analyses that attempt to mimic human analysis of a scene must first perform Audio Scene Segmentation (ASS). ASS finds applications in surveillance, automatic speech recognition and human-computer interfaces. Microphone arrays can be employed for extracting streams corresponding to spatially separated sources. However, when a source moves to a new location during a period of silence, such a system loses track of the source. This results in multiple spatially localized streams for the same source. This thesis proposes to identify local streams associated with the same source using auditory features extracted from the beamformed signal. ASS using the spatial cues is first performed. Then auditory features are extracted and segments are linked together based on similarity of the feature vector. An experiment was carried out with two simultaneous speakers. A classifier is used to classify the localized streams as belonging to one speaker or the other. The best performance was achieved when pitch appended with Gammatone Frequency Cepstral Coefficients (GFCC) was used as the feature vector. An accuracy of 96.2% was achieved.
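The winning feature vector in this abstract, pitch appended to Gammatone Frequency Cepstral Coefficients, can be sketched in a few lines. The Python sketch below, assuming numpy and scipy, builds a gammatone filterbank cochleagram, applies cubic-root compression and a DCT across channels, then appends a crude autocorrelation pitch estimate per frame; the filterbank size, frame settings and pitch tracker are common defaults, not the thesis's exact configuration.

```python
import numpy as np
from scipy.fftpack import dct
from scipy.signal import fftconvolve

def erb_space(lo, hi, n):
    # Equally spaced centre frequencies on the ERB-rate scale (Glasberg & Moore).
    ear_q, min_bw = 9.26449, 24.7
    i = np.arange(1, n + 1)
    cfs = -(ear_q * min_bw) + np.exp(
        i * (np.log(lo + ear_q * min_bw) - np.log(hi + ear_q * min_bw)) / n
    ) * (hi + ear_q * min_bw)
    return cfs[::-1]                              # ascending order

def gammatone_ir(fc, fs, dur=0.05, order=4):
    # Impulse response of a 4th-order gammatone filter centred on fc.
    t = np.arange(0, dur, 1 / fs)
    b = 1.019 * 24.7 * (4.37 * fc / 1000 + 1)     # bandwidth from ERB(fc)
    g = t ** (order - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc * t)
    return g / np.abs(g).max()

def gfcc_plus_pitch(x, fs, n_bands=32, n_coef=13, frame=0.025, hop=0.010):
    # Cochleagram: half-wave rectified gammatone filterbank outputs.
    cfs = erb_space(50.0, 0.45 * fs, n_bands)
    coch = np.stack([np.maximum(fftconvolve(x, gammatone_ir(fc, fs))[:len(x)], 0)
                     for fc in cfs])
    flen, hlen = int(frame * fs), int(hop * fs)
    feats = []
    for start in range(0, len(x) - flen, hlen):
        e = coch[:, start:start + flen].mean(axis=1) ** (1 / 3)  # loudness compression
        c = dct(e, norm='ortho')[:n_coef]                        # GFCCs
        fr = x[start:start + flen] * np.hanning(flen)
        ac = np.correlate(fr, fr, 'full')[flen - 1:]             # crude pitch, 80-400 Hz
        lo, hi = int(fs / 400), int(fs / 80)
        f0 = fs / (lo + np.argmax(ac[lo:hi]))
        feats.append(np.concatenate([c, [f0]]))                  # pitch appended
    return np.array(feats)

fs = 16000
t = np.arange(0, 0.5, 1 / fs)
x = np.sin(2 * np.pi * 220 * t) + 0.1 * np.random.randn(len(t))
print(gfcc_plus_pitch(x, fs).shape)    # (frames, 14): 13 GFCCs + pitch per frame
```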
8. Nakatani, Tomohiro. "Computational Auditory Scene Analysis Based on Residue-driven Architecture and Its Application to Mixed Speech Recognition." Kyoto University, 2002. http://hdl.handle.net/2433/149754.

9. Javadi, Ailar. "Bio-inspired noise robust auditory features." Thesis, Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44801.

Abstract:
The purpose of this work is to investigate a series of biologically inspired modifications to state-of-the-art Mel-frequency cepstral coefficients (MFCCs) that may improve automatic speech recognition results. We have provided recommendations to improve speech recognition results depending on signal-to-noise ratio levels of input signals. This work has been motivated by noise-robust auditory features (NRAF). In the feature extraction technique, after a signal is filtered using bandpass filters, a spatial derivative step is used to sharpen the results, followed by an envelope detector (rectification and smoothing) and down-sampling for each filter bank before being compressed. DCT is then applied to the results of all filter banks to produce features. The Hidden Markov Model Toolkit (HTK) is used as the recognition back-end to perform speech recognition given the features we have extracted. In this work, we investigate the role of filter types, window size, spatial derivative, rectification types, smoothing, down-sampling and compression, and compare the final results to state-of-the-art Mel-frequency cepstral coefficients (MFCCs). A series of conclusions and insights are provided for each step of the process. The goal of this work has not been to outperform MFCCs; however, we have shown that by changing the compression type from log compression to 0.07 root compression we are able to outperform MFCCs for all noisy conditions.
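The abstract's headline change, replacing the usual log compression with 0.07 root compression, is easy to isolate in code. Here is a hedged Python sketch, assuming numpy and scipy and a simplified mel filterbank front end rather than the thesis's NRAF chain; the two feature variants differ only in the compression step.

```python
import numpy as np
from scipy.fftpack import dct

def mel_filterbank(n_filters, n_fft, fs):
    # Triangular filters spaced evenly on the mel scale (a standard construction).
    mel = lambda f: 2595 * np.log10(1 + f / 700)
    imel = lambda m: 700 * (10 ** (m / 2595) - 1)
    pts = imel(np.linspace(mel(0), mel(fs / 2), n_filters + 2))
    bins = np.floor((n_fft + 1) * pts / fs).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        fb[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fb[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    return fb

def cepstra(frame, fs, n_fft=512, n_filters=26, n_coef=13, compression='log'):
    spec = np.abs(np.fft.rfft(frame * np.hamming(len(frame)), n_fft)) ** 2
    energies = mel_filterbank(n_filters, n_fft, fs) @ spec + 1e-10
    # The only difference between the two feature variants is this step:
    comp = np.log(energies) if compression == 'log' else energies ** 0.07
    return dct(comp, norm='ortho')[:n_coef]

fs = 16000
t = np.arange(0, 0.025, 1 / fs)
frame = np.sin(2 * np.pi * 440 * t) + 0.1 * np.random.randn(len(t))
print("log compression :", np.round(cepstra(frame, fs, compression='log')[:4], 2))
print("0.07 root comp. :", np.round(cepstra(frame, fs, compression='root')[:4], 2))
```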
10. Melih, Kathy. "Audio Source Separation Using Perceptual Principles for Content-Based Coding and Information Management." Griffith University. School of Information Technology, 2004. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20050114.081327.

Abstract:
The information age has brought with it a dual problem. In the first place, the ready access to mechanisms to capture and store vast amounts of data in all forms (text, audio, image and video), has resulted in a continued demand for ever more efficient means to store and transmit this data. In the second, the rapidly increasing store demands effective means to structure and access the data in an efficient and meaningful manner. In terms of audio data, the first challenge has traditionally been the realm of audio compression research that has focused on statistical, unstructured audio representations that obfuscate the inherent structure and semantic content of the underlying data. This has only served to further complicate the resolution of the second challenge resulting in access mechanisms that are either impractical to implement, too inflexible for general application or too low level for the average user. Thus, an artificial dichotomy has been created from what is in essence a dual problem. The founding motivation of this thesis is that, although the hypermedia model has been identified as the ideal, cognitively justified method for organising data, existing audio data representations and coding models provide little, if any, support for, or resemblance to, this model. It is the contention of the author that any successful attempt to create hyperaudio must resolve this schism, addressing both storage and information management issues simultaneously. In order to achieve this aim, an audio representation must be designed that provides compact data storage while, at the same time, revealing the inherent structure of the underlying data. Thus it is the aim of this thesis to present a representation designed with these factors in mind. Perhaps the most difficult hurdle in the way of achieving the aims of content-based audio coding and information management is that of auditory source separation. The MPEG committee has noted this requirement during the development of its MPEG-7 standard, however, the mechanics of "how" to achieve auditory source separation were left as an open research question. This same committee proposed that MPEG-7 would "support descriptors that can act as handles referring directly to the data, to allow manipulation of the multimedia material." While meta-data tags are a part solution to this problem, these cannot allow manipulation of audio material down to the level of individual sources when several simultaneous sources exist in a recording. In order to achieve this aim, the data themselves must be encoded in such a manner that allows these descriptors to be formed. Thus, content-based coding is obviously required. In the case of audio, this is impossible to achieve without effecting auditory source separation. Auditory source separation is the concern of computational auditory scene analysis (CASA). However, the findings of CASA research have traditionally been restricted to a limited domain. To date, the only real application of CASA research to what could loosely be classified as information management has been in the area of signal enhancement for automatic speech recognition systems. In these systems, a CASA front end serves as a means of separating the target speech from the background "noise". As such, the design of a CASA-based approach, as presented in this thesis, to one of the most significant challenges facing audio information management research represents a significant contribution to the field of information management. 
Thus, this thesis unifies research from three distinct fields in an attempt to resolve some specific and general challenges faced by all three. It describes an audio representation that is based on a sinusoidal model from which low-level auditory primitive elements are extracted. The use of a sinusoidal representation is somewhat contentious with the modern trend in CASA research tending toward more complex approaches in order to resolve issues relating to co-incident partials. However, the choice of a sinusoidal representation has been validated by the demonstration of a method to resolve many of these issues. The majority of the thesis contributes several algorithms to organise the low-level primitives into low-level auditory objects that may form the basis of nodes or link anchor points in a hyperaudio structure. Finally, preliminary investigations in the representation’s suitability for coding and information management tasks are outlined as directions for future research.
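As a rough illustration of the low-level sinusoidal primitives this thesis builds its representation on, the Python sketch below (assuming numpy; the frame size, hop and peak threshold are arbitrary choices, not Melih's) picks spectral peaks in each STFT frame and emits (time, frequency, amplitude) elements that later grouping stages could link into auditory objects.

```python
import numpy as np

def sinusoidal_primitives(x, fs, frame=1024, hop=256, threshold_db=-30.0):
    """Return (time, frequency, amplitude) tuples for per-frame spectral peaks."""
    prims, win = [], np.hanning(frame)
    for start in range(0, len(x) - frame, hop):
        spec = np.abs(np.fft.rfft(x[start:start + frame] * win))
        mag_db = 20 * np.log10(spec / (spec.max() + 1e-12) + 1e-12)
        for k in range(1, len(spec) - 1):
            # A local maximum above threshold becomes a primitive element.
            if mag_db[k] > threshold_db and spec[k - 1] < spec[k] >= spec[k + 1]:
                prims.append((start / fs, k * fs / frame, spec[k]))
    return prims

fs = 16000
t = np.arange(0, 0.2, 1 / fs)
x = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)
peaks = sinusoidal_primitives(x, fs)
print(len(peaks), "primitives, first:", peaks[0])
```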

Books on the topic "Computational auditory scene analysis"

1. Rosenthal, David F., and Hiroshi G. Okuno, eds. Computational Auditory Scene Analysis. Mahwah, NJ: Lawrence Erlbaum Associates, 1998.

2. Lerch, Alexander. Audio Content Analysis: An Introduction. Hoboken, NJ: Wiley, 2012.

3. Wang, Wenwu. Machine Audition: Principles, Algorithms, and Systems. Hershey, PA: Information Science Reference, 2010.

4. Rowe, Robert. Interactive Music Systems: Machine Listening and Composing. Cambridge, MA: MIT Press, 1993.

5. Giannakopoulos, Theodoros, and Aggelos Pikrakis. Introduction to Audio Analysis: A MATLAB Approach. Kidlington, Oxford: Academic Press, 2014.

6. Bregman, Albert S. Auditory Scene Analysis: The Perceptual Organization of Sound. Cambridge, MA: MIT Press, 1990.

7. McDonald, Kelly Loreen. The Role of Harmonicity and Location Cues in Auditory Scene Analysis. Ottawa: National Library of Canada, 2003.

8. Rosenthal, David F., and Hiroshi G. Okuno, eds. Computational Auditory Scene Analysis. CRC Press, 2020. http://dx.doi.org/10.1201/9781003064183.

9. Wang, DeLiang, and Guy J. Brown. Computational Auditory Scene Analysis. IEEE, 2006. http://dx.doi.org/10.1109/9780470043387.

10. Wang, DeLiang, and Guy J. Brown, eds. Computational Auditory Scene Analysis: Principles, Algorithms, and Applications. Wiley-IEEE Press, 2006.


Book chapters on the topic "Computational auditory scene analysis"

1. Mellinger, David K., and Bernard M. Mont-Reynaud. "Scene Analysis." In Auditory Computation, 271–331. New York, NY: Springer New York, 1996. http://dx.doi.org/10.1007/978-1-4612-4070-9_7.

2. Brown, Guy J. "Physiological Models of Auditory Scene Analysis." In Computational Models of the Auditory System, 203–36. Boston, MA: Springer US, 2010. http://dx.doi.org/10.1007/978-1-4419-5934-8_8.

3. Narayanan, Arun, and DeLiang Wang. "Computational Auditory Scene Analysis and Automatic Speech Recognition." In Techniques for Noise Robustness in Automatic Speech Recognition, 433–62. Chichester, UK: John Wiley & Sons, Ltd, 2012. http://dx.doi.org/10.1002/9781118392683.ch16.

4. Kashino, Makio, Eisuke Adachi, and Haruto Hirose. "A Computational Approach to the Dynamic Aspects of Primitive Auditory Scene Analysis." In Advances in Experimental Medicine and Biology, 519–26. New York, NY: Springer New York, 2013. http://dx.doi.org/10.1007/978-1-4614-1590-9_57.

5. Hummersone, Christopher, Toby Stokes, and Tim Brookes. "On the Ideal Ratio Mask as the Goal of Computational Auditory Scene Analysis." In Blind Source Separation, 349–68. Berlin, Heidelberg: Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-642-55016-4_12.

6. Wang, DeLiang. "Computational Scene Analysis." In Challenges for Computational Intelligence, 163–91. Berlin, Heidelberg: Springer Berlin Heidelberg, 2007. http://dx.doi.org/10.1007/978-3-540-71984-7_8.

7. Leibold, Lori J. "Development of Auditory Scene Analysis and Auditory Attention." In Human Auditory Development, 137–61. New York, NY: Springer New York, 2011. http://dx.doi.org/10.1007/978-1-4614-1421-6_5.

8. Carlyon, Robert P., Sarah K. Thompson, Antje Heinrich, Friedemann Pulvermüller, Matthew H. Davis, Yury Shtyrov, Rhodri Cusack, and Ingrid S. Johnsrude. "Objective Measures of Auditory Scene Analysis." In The Neurophysiological Bases of Auditory Perception, 507–19. New York, NY: Springer New York, 2010. http://dx.doi.org/10.1007/978-1-4419-5686-6_47.

9. Stowell, Dan. "Computational Bioacoustic Scene Analysis." In Computational Analysis of Sound Scenes and Events, 303–33. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-63450-0_11.

10. Mountain, David C., and Allyn E. Hubbard. "Computational Analysis of Hair Cell and Auditory Nerve Processes." In Auditory Computation, 121–56. New York, NY: Springer New York, 1996. http://dx.doi.org/10.1007/978-1-4612-4070-9_4.


Conference papers on the topic "Computational auditory scene analysis"

1. Brown, Guy J., and Martin P. Cooke. "A computational model of auditory scene analysis." In 2nd International Conference on Spoken Language Processing (ICSLP 1992). ISCA, 1992. http://dx.doi.org/10.21437/icslp.1992-172.

2. Shao, Yang, and DeLiang Wang. "Robust speaker identification using auditory features and computational auditory scene analysis." In ICASSP 2008 - 2008 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2008. http://dx.doi.org/10.1109/icassp.2008.4517928.

3. Tu, Ming, Xiang Xie, and Xingyu Na. "Computational Auditory Scene Analysis Based Voice Activity Detection." In 2014 22nd International Conference on Pattern Recognition (ICPR). IEEE, 2014. http://dx.doi.org/10.1109/icpr.2014.147.

4. Kawamoto, Mitsuru, and Takuji Hamamoto. "Building Health Monitoring Using Computational Auditory Scene Analysis." In 2020 16th International Conference on Distributed Computing in Sensor Systems (DCOSS). IEEE, 2020. http://dx.doi.org/10.1109/dcoss49796.2020.00033.

5. Larigaldie, Nathanael, and Ulrik Beierholm. "Explaining Human Auditory Scene Analysis Through Bayesian Clustering." In 2019 Conference on Cognitive Computational Neuroscience. Brentwood, Tennessee, USA: Cognitive Computational Neuroscience, 2019. http://dx.doi.org/10.32470/ccn.2019.1227-0.

6. Okuno, Hiroshi G., Tetsuya Ogata, and Kazunori Komatani. "Robot Audition from the Viewpoint of Computational Auditory Scene Analysis." In International Conference on Informatics Education and Research for Knowledge-Circulating Society (ICKS 2008). IEEE, 2008. http://dx.doi.org/10.1109/icks.2008.10.

7. Srinivasan, Soundararajan, Yang Shao, Zhaozhang Jin, and DeLiang Wang. "A computational auditory scene analysis system for robust speech recognition." In Interspeech 2006. ISCA, 2006. http://dx.doi.org/10.21437/interspeech.2006-19.

8. Kawamoto, Mitsuru. "Sound-environment monitoring technique based on computational auditory scene analysis." In 2017 25th European Signal Processing Conference (EUSIPCO). IEEE, 2017. http://dx.doi.org/10.23919/eusipco.2017.8081664.

9. Okuno, Hiroshi G., and Kazuhiro Nakadai. "Computational Auditory Scene Analysis and its Application to Robot Audition." In 2008 Hands-Free Speech Communication and Microphone Arrays (HSCMA 2008). IEEE, 2008. http://dx.doi.org/10.1109/hscma.2008.4538702.

10. Cusimano, Maddie, Luke Hewitt, Joshua B. Tenenbaum, and Josh H. McDermott. "Auditory scene analysis as Bayesian inference in sound source models." In 2018 Conference on Cognitive Computational Neuroscience. Brentwood, Tennessee, USA: Cognitive Computational Neuroscience, 2018. http://dx.doi.org/10.32470/ccn.2018.1039-0.


Reports on the topic "Computational auditory scene analysis"

1. Shao, Yang, Soundararajan Srinivasan, Zhaozhang Jin, and DeLiang Wang. A Computational Auditory Scene Analysis System for Speech Segregation and Robust Speech Recognition. Fort Belvoir, VA: Defense Technical Information Center, January 2007. http://dx.doi.org/10.21236/ad1001212.

2. Lazzaro, John, and John Wawrzynek. Silicon Models for Auditory Scene Analysis. Fort Belvoir, VA: Defense Technical Information Center, January 1995. http://dx.doi.org/10.21236/ada327239.

3. McKinnon, Mark, and Daniel Madrzykowski. Literature Review to Support the Development of a Database of Contemporary Material Properties for Fire Investigation Analysis. UL Firefighter Safety Research Institute, June 2020. http://dx.doi.org/10.54206/102376/wmah2173.

Abstract:
The NIJ Technology Working Group’s Operational Requirements (TWG ORs) for Fire and Arson Investigation have included several scientific research needs that require knowledge of the thermophysical properties of materials that are common in the built environment, and therefore likely to be involved in a fire scene. The specific areas of research include: adequate materials property data inputs for accurate computer models, understanding the effect of materials properties on the development and interpretation of fire patterns, and evaluation of incident heat flux profiles to walls and neighboring items in support of fire model validation. These topics certainly address, in a concise way, many of the gaps that limit the analysis capability of fire investigators and engineers. Each of the three aforementioned research topics rely, in part, on accurate knowledge of the physical conditions of a material prior to the fire, how the material will respond to the exposure of heat, and how it will perform once it has ignited. This general information is required to visually assess a fire scene. The same information is needed by investigators to estimate the evolution and consequences of a fire incident using a computer model. Data sources that are currently most commonly used to determine the required properties and model inputs are outdated and incomplete. This report includes the literature review used to provide a technical approach to developing a materials database for use in fire investigations and computational fire models. A summary of the input from the project technical panel is presented which guided the initial selection of materials to be included in the database as well as the selection of test measurements.