Ready-made bibliography on the topic "Acoustic Scene Analysis"
Create accurate citations in APA, MLA, Chicago, Harvard, and many other styles
Browse lists of recent articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Acoustic Scene Analysis".
An "Add to bibliography" button is available next to every work in the bibliography. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, whenever such details are available in the work's metadata.
Journal articles on the topic "Acoustic Scene Analysis"
Terez, Dmitry. "Acoustic scene analysis using microphone arrays." Journal of the Acoustical Society of America 128, no. 4 (October 2010): 2442. http://dx.doi.org/10.1121/1.3508731.
Itatani, Naoya, and Georg M. Klump. "Animal models for auditory streaming." Philosophical Transactions of the Royal Society B: Biological Sciences 372, no. 1714 (February 19, 2017): 20160112. http://dx.doi.org/10.1098/rstb.2016.0112.
Park, Sangwook, Woohyun Choi, and Hanseok Ko. "Acoustic scene classification using recurrence quantification analysis." Journal of the Acoustical Society of Korea 35, no. 1 (January 31, 2016): 42–48. http://dx.doi.org/10.7776/ask.2016.35.1.042.
Imoto, Keisuke. "Introduction to acoustic event and scene analysis." Acoustical Science and Technology 39, no. 3 (May 1, 2018): 182–88. http://dx.doi.org/10.1250/ast.39.182.
Weisser, Adam, Jörg M. Buchholz, Chris Oreinos, Javier Badajoz-Davila, James Galloway, Timothy Beechey, and Gitte Keidser. "The Ambisonic Recordings of Typical Environments (ARTE) Database." Acta Acustica united with Acustica 105, no. 4 (July 1, 2019): 695–713. http://dx.doi.org/10.3813/aaa.919349.
Hou, Yuanbo, and Dick Botteldooren. "Artificial intelligence-based collaborative acoustic scene and event classification to support urban soundscape analysis and classification." INTER-NOISE and NOISE-CON Congress and Conference Proceedings 265, no. 1 (February 1, 2023): 6466–73. http://dx.doi.org/10.3397/in_2022_0974.
Tang, Zhenyu, Nicholas J. Bryan, Dingzeyu Li, Timothy R. Langlois, and Dinesh Manocha. "Scene-Aware Audio Rendering via Deep Acoustic Analysis." IEEE Transactions on Visualization and Computer Graphics 26, no. 5 (May 2020): 1991–2001. http://dx.doi.org/10.1109/tvcg.2020.2973058.
Ellison, William T., Adam S. Frankel, David Zeddies, Kathleen J. Vigness Raposa, and Cheryl Schroeder. "Underwater acoustic scene analysis: Exploration of appropriate metrics." Journal of the Acoustical Society of America 124, no. 4 (October 2008): 2433. http://dx.doi.org/10.1121/1.4782511.
Makino, S. "Special Section on Acoustic Scene Analysis and Reproduction." IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences E91-A, no. 6 (June 1, 2008): 1301–2. http://dx.doi.org/10.1093/ietfec/e91-a.6.1301.
Wang, Mou, Xiao-Lei Zhang, and Susanto Rahardja. "An Unsupervised Deep Learning System for Acoustic Scene Analysis." Applied Sciences 10, no. 6 (March 19, 2020): 2076. http://dx.doi.org/10.3390/app10062076.
Doctoral dissertations on the topic "Acoustic Scene Analysis"
Kudo, Hiroaki, Jinji Chen, and Noboru Ohnishi. "Scene Analysis by Clues from the Acoustic Signals." INTELLIGENT MEDIA INTEGRATION NAGOYA UNIVERSITY / COE, 2004. http://hdl.handle.net/2237/10426.
Ford, Logan H. "Large-scale acoustic scene analysis with deep residual networks." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/123026.
Pełny tekst źródłaThesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 63–66).
Many of the recent advances in audio event detection, particularly on the AudioSet dataset, have focused on improving performance using the released embeddings produced by a pre-trained model. In this work, we instead study the task of training a multi-label event classifier directly from the audio recordings of AudioSet. Using the audio recordings, not only are we able to reproduce results from prior work, but we also confirm the improvements of other proposed additions, such as an attention module. Moreover, by training the embedding network jointly with the additions, we achieve a mean Average Precision (mAP) of 0.392 and an area under the ROC curve (AUC) of 0.971, surpassing the state-of-the-art without transfer learning from a large dataset. We also analyze the output activations of the network and find that the models are able to localize audio events when a finer time resolution is needed. In addition, we use this model to explore multimodal learning, transfer learning, and real-time sound event detection tasks.
Teutsch, Heinz. "Wavefield decomposition using microphone arrays and its application to acoustic scene analysis." [S.l.]: [s.n.], 2006. http://deposit.ddb.de/cgi-bin/dokserv?idn=97902806X.
McMullan, Amanda R. "Electroencephalographic measures of auditory perception in dynamic acoustic environments." Thesis, Lethbridge, Alta.: University of Lethbridge, Dept. of Neuroscience, 2013. http://hdl.handle.net/10133/3354.
Narayanan, Arun. "Computational auditory scene analysis and robust automatic speech recognition." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1401460288.
Carlo, Diego Di. "Echo-aware signal processing for audio scene analysis." Thesis, Rennes 1, 2020. http://www.theses.fr/2020REN1S075.
Most audio signal processing methods treat reverberation, and acoustic echoes in particular, as a nuisance. Echoes, however, convey important spatial and semantic information about sound sources, and recent echo-aware methods have been proposed to exploit it. In this work we focus on two directions. First, we study how to estimate acoustic echoes blindly from microphone recordings; two approaches are proposed, one leveraging continuous dictionaries, the other using recent deep learning techniques. Second, we extend existing methods in audio scene analysis to their echo-aware forms: the multichannel NMF framework for audio source separation, the SRP-PHAT localization method, and the MVDR beamformer for speech enhancement are all extended to echo-aware versions.
Deleforge, Antoine. "Acoustic Space Mapping: A Machine Learning Approach to Sound Source Separation and Localization." Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENM033/document.
In this thesis, we address the long-studied problem of binaural (two-microphone) sound source separation and localization through supervised learning. To achieve this, we develop a new paradigm referred to as acoustic space mapping, at the crossroads of binaural perception, robot hearing, audio signal processing, and machine learning. The proposed approach consists in learning a link between the auditory cues perceived by the system and the emitting sound source's position in another modality of the system, such as the visual space or the motor space. We propose new experimental protocols to automatically gather large training sets that associate such data. The datasets obtained are then used to reveal some fundamental intrinsic properties of acoustic spaces, and lead to the development of a general family of probabilistic models for locally linear high-to-low-dimensional space mapping. We show that these models unify several existing regression and dimensionality-reduction techniques, while encompassing a large number of new models that generalize previous ones. The properties and inference of these models are thoroughly detailed, and the prominent advantage of the proposed methods over state-of-the-art techniques is established on different space-mapping applications, beyond the scope of auditory scene analysis. We then show how the proposed methods can be probabilistically extended to tackle the long-known cocktail-party problem, i.e., accurately localizing one or several sound sources emitting at the same time in a real-world environment, and separating the mixed signals. We show that the resulting techniques perform these tasks with unequaled accuracy. This demonstrates the important role of learning and puts forward the acoustic space mapping paradigm as a promising tool for robustly addressing the most challenging problems in computational binaural audition.
Mouterde, Solveig. "Long-range discrimination of individual vocal signatures by a songbird: from propagation constraints to neural substrate." Thesis, Saint-Etienne, 2014. http://www.theses.fr/2014STET4012/document.
In communication systems, one of the biggest challenges is that the information encoded by the emitter is always modified before reaching the receiver, who has to process this altered information in order to recover the intended message. In acoustic communication particularly, the transmission of sound through the environment is a major source of signal degradation, caused by attenuation, absorption, and reflections, all of which decrease the signal relative to the background noise. How animals cope with the need to exchange information under such constraining conditions has been the subject of many studies, focused on either the emitter's or the receiver's side. A more integrated approach to auditory scene analysis, however, has seldom been taken, and is needed to address the complexity of this process. The goal of my research was to use a transversal approach to study how birds adapt to the constraints of long-distance communication, by investigating information coding at the emitter's level, the propagation-induced degradation of the acoustic signal, and the discrimination of this degraded information by the receiver at both the behavioral and neural levels. Taking into account the everyday issues faced by animals in their natural environment, and using stimuli and paradigms that reflected the behavioral relevance of these challenges, has been the cornerstone of my approach. Focusing on the information about individual identity in the distance calls of zebra finches Taeniopygia guttata, I investigated how the individual vocal signature is encoded, degraded, and finally discriminated, from the emitter to the receiver. This study shows that the individual signature of zebra finches is very resistant to propagation-induced degradation, and that the most individualized acoustic parameters vary depending on distance. Testing female birds in operant-conditioning experiments, I showed that they are experts at discriminating between the degraded vocal signatures of two males, and that they can improve this ability substantially when trained over increasing distances. Finally, I showed that this impressive discrimination ability also occurs at the neural level: we found a population of neurons in the avian auditory forebrain that discriminate individual voices at various degrees of propagation-induced degradation, without prior familiarization or training. The finding of such high-level auditory processing in the primary auditory cortex opens a new range of investigations, at the interface of neural processing and behavior.
Teki, S. "Cognitive analysis of complex acoustic scenes." Thesis, University College London (University of London), 2013. http://discovery.ucl.ac.uk/1413017/.
Wang, Yuxuan. "Supervised Speech Separation Using Deep Neural Networks." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1426366690.
Books on the topic "Acoustic Scene Analysis"
Giannakopoulos, Theodoros, and Aggelos Pikrakis. Introduction to Audio Analysis: A MATLAB Approach. Kidlington, Oxford: Academic Press, an imprint of Elsevier, 2014.
Zur Nieden, Gesa. Symmetries in Spaces, Symmetries in Listening. Edited by Christian Thorau and Hansjakob Ziemer. Oxford University Press, 2018. http://dx.doi.org/10.1093/oxfordhb/9780190466961.013.16.
Pitozzi, Enrico. Body Soundscape. Edited by Yael Kaduri. Oxford University Press, 2016. http://dx.doi.org/10.1093/oxfordhb/9780199841547.013.43.
Kytö, Meri. Soundscapes of Istanbul in Turkish Film Soundtracks. Edited by John Richardson, Claudia Gorbman, and Carol Vernallis. Oxford University Press, 2013. http://dx.doi.org/10.1093/oxfordhb/9780199733866.013.0028.
Book chapters on the topic "Acoustic Scene Analysis"
de Cheveigné, Alain. "The Cancellation Principle in Acoustic Scene Analysis." In Speech Separation by Humans and Machines, 245–59. Boston, MA: Springer US, 2005. http://dx.doi.org/10.1007/0-387-22794-6_16.
Gold, Erica, and Dan McIntyre. "Chapter 4. What the /fʌk/? An acoustic-pragmatic analysis of implicated meaning in a scene from The Wire." In Linguistic Approaches to Literature, 74–91. Amsterdam: John Benjamins Publishing Company, 2019. http://dx.doi.org/10.1075/lal.35.04gol.
Serizel, Romain, Victor Bisot, Slim Essid, and Gaël Richard. "Acoustic Features for Environmental Sound Analysis." In Computational Analysis of Sound Scenes and Events, 71–101. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-63450-0_4.
Lemaitre, Guillaume, Nicolas Grimault, and Clara Suied. "Acoustics and Psychoacoustics of Sound Scenes and Events." In Computational Analysis of Sound Scenes and Events, 41–67. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-63450-0_3.
Pham, Lam, Hieu Tang, Anahid Jalali, Alexander Schindler, Ross King, and Ian McLoughlin. "A Low-Complexity Deep Learning Framework For Acoustic Scene Classification." In Data Science – Analytics and Applications, 26–32. Wiesbaden: Springer Fachmedien Wiesbaden, 2022. http://dx.doi.org/10.1007/978-3-658-36295-9_4.
Huron, David. "Sources and Images." In Voice Leading. The MIT Press, 2016. http://dx.doi.org/10.7551/mitpress/9780262034852.003.0003.
"An Auditory Scene Analysis Approach to Monaural Speech Segregation." In Topics in Acoustic Echo and Noise Control, 485–515. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/3-540-33213-8_12.
Huron, David. "The Cultural Connection." In Voice Leading. The MIT Press, 2016. http://dx.doi.org/10.7551/mitpress/9780262034852.003.0015.
Epstein, Hugh. "An Audible World." In Hardy, Conrad and the Senses, 139–92. Edinburgh University Press, 2019. http://dx.doi.org/10.3366/edinburgh/9781474449861.003.0005.
Pisano, Giusy. "In Praise of the Sound Dissolve: Evanescences, Uncertainties, Fusions, Resonances." In Indefinite Visions, translated by Elise Harris and Martine Beugnet. Edinburgh University Press, 2017. http://dx.doi.org/10.3366/edinburgh/9781474407120.003.0007.
Conference abstracts on the topic "Acoustic Scene Analysis"
Imoto, Keisuke, Yasunori Ohishi, Hisashi Uematsu, and Hitoshi Ohmuro. "Acoustic scene analysis based on latent acoustic topic and event allocation." In 2013 IEEE International Workshop on Machine Learning for Signal Processing (MLSP). IEEE, 2013. http://dx.doi.org/10.1109/mlsp.2013.6661957.
Imoto, Keisuke, and Nobutaka Ono. "Acoustic scene analysis from acoustic event sequence with intermittent missing event." In ICASSP 2015 - 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2015. http://dx.doi.org/10.1109/icassp.2015.7177951.
Wang, Weimin, Weiran Wang, Ming Sun, and Chao Wang. "Acoustic Scene Analysis with Multi-Head Attention Networks." In Interspeech 2020. ISCA, 2020. http://dx.doi.org/10.21437/interspeech.2020-1342.
Imoto, Keisuke, and Nobutaka Ono. "Online acoustic scene analysis based on nonparametric Bayesian model." In 2016 24th European Signal Processing Conference (EUSIPCO). IEEE, 2016. http://dx.doi.org/10.1109/eusipco.2016.7760396.
Kwon, Homin, Harish Krishnamoorthi, Visar Berisha, and Andreas Spanias. "A sensor network for real-time acoustic scene analysis." In 2009 IEEE International Symposium on Circuits and Systems - ISCAS 2009. IEEE, 2009. http://dx.doi.org/10.1109/iscas.2009.5117712.
Basbug, Ahmet Melih, and Mustafa Sert. "Analysis of Deep Neural Network Models for Acoustic Scene Classification." In 2019 27th Signal Processing and Communications Applications Conference (SIU). IEEE, 2019. http://dx.doi.org/10.1109/siu.2019.8806301.
Ford, Logan, Hao Tang, François Grondin, and James Glass. "A Deep Residual Network for Large-Scale Acoustic Scene Analysis." In Interspeech 2019. ISCA, 2019. http://dx.doi.org/10.21437/interspeech.2019-2731.
Sharma, Pulkit, Vinayak Abrol, and Anshul Thakur. "ASe: Acoustic Scene Embedding Using Deep Archetypal Analysis and GMM." In Interspeech 2018. ISCA, 2018. http://dx.doi.org/10.21437/interspeech.2018-1481.
Imoto, Keisuke, and Nobutaka Ono. "Spatial-feature-based acoustic scene analysis using distributed microphone array." In 2015 23rd European Signal Processing Conference (EUSIPCO). IEEE, 2015. http://dx.doi.org/10.1109/eusipco.2015.7362480.
Imoto, Keisuke. "Acoustic Scene Analysis Using Partially Connected Microphones Based on Graph Cepstrum." In 2018 26th European Signal Processing Conference (EUSIPCO). IEEE, 2018. http://dx.doi.org/10.23919/eusipco.2018.8553385.