A ready-made bibliography on the topic "Auditory source separation"
Create a correctly formatted reference in APA, MLA, Chicago, Harvard, and many other styles
Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on "Auditory source separation".
An "Add to bibliography" button is available next to every work in the list. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of a publication as a ".pdf" file and read its abstract online, whenever the relevant details are available in the source's metadata.
Journal articles on the topic "Auditory source separation"
Li, Han, Kean Chen, Lei Wang, Jianben Liu, Baoquan Wan and Bing Zhou. "Sound Source Separation Mechanisms of Different Deep Networks Explained from the Perspective of Auditory Perception". Applied Sciences 12, no. 2 (January 14, 2022): 832. http://dx.doi.org/10.3390/app12020832.
Sasaki, Yoko, Saori Masunaga, Simon Thompson, Satoshi Kagami and Hiroshi Mizoguchi. "Sound Localization and Separation for Mobile Robot Tele-Operation by Tri-Concentric Microphone Array". Journal of Robotics and Mechatronics 19, no. 3 (June 20, 2007): 281–89. http://dx.doi.org/10.20965/jrm.2007.p0281.
Doll, Theodore J., Thomas E. Hanna and Joseph S. Russotti. "Masking in Three-Dimensional Auditory Displays". Human Factors: The Journal of the Human Factors and Ergonomics Society 34, no. 3 (June 1992): 255–65. http://dx.doi.org/10.1177/001872089203400301.
Li, Han, Kean Chen, Rong Li, Jianben Liu, Baoquan Wan and Bing Zhou. "Auditory-like simultaneous separation mechanisms spontaneously learned by a deep source separation network". Applied Acoustics 188 (January 2022): 108591. http://dx.doi.org/10.1016/j.apacoust.2021.108591.
Drake, Laura, and Janet Rutledge. "Auditory scene analysis‐constrained array processing for sound source separation". Journal of the Acoustical Society of America 101, no. 5 (May 1997): 3106. http://dx.doi.org/10.1121/1.418868.
Farley, Brandon J., and Arnaud J. Noreña. "Membrane potential dynamics of populations of cortical neurons during auditory streaming". Journal of Neurophysiology 114, no. 4 (October 2015): 2418–30. http://dx.doi.org/10.1152/jn.00545.2015.
Drake, Laura A., Janet C. Rutledge and Aggelos Katsaggelos. "Computational auditory scene analysis‐constrained array processing for sound source separation". Journal of the Acoustical Society of America 106, no. 4 (October 1999): 2238. http://dx.doi.org/10.1121/1.427622.
Zakeri, Sahar, and Masoud Geravanchizadeh. "Supervised binaural source separation using auditory attention detection in realistic scenarios". Applied Acoustics 175 (April 2021): 107826. http://dx.doi.org/10.1016/j.apacoust.2020.107826.
McElveen, J. K., Leonid Krasny and Scott Nordlund. "Applying matched field array processing and machine learning to computational auditory scene analysis and source separation challenges". Journal of the Acoustical Society of America 151, no. 4 (April 2022): A232. http://dx.doi.org/10.1121/10.0011162.
Otsuka, Takuma, Katsuhiko Ishiguro, Hiroshi Sawada and Hiroshi Okuno. "Bayesian Unification of Sound Source Localization and Separation with Permutation Resolution". Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (September 20, 2021): 2038–45. http://dx.doi.org/10.1609/aaai.v26i1.8376.
Doctoral dissertations on the topic "Auditory source separation"
Beauvois, Michael W. "A computer model of auditory stream segregation". Thesis, Loughborough University, 1991. https://dspace.lboro.ac.uk/2134/33091.
Leech, Stuart Matthew. "The effect on audiovisual speech perception of auditory and visual source separation". Thesis, University of Sussex, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.271770.
Melih, Kathy. "Audio Source Separation Using Perceptual Principles for Content-Based Coding and Information Management". Griffith University. School of Information Technology, 2004. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20050114.081327.
Melih, Kathy. "Audio Source Separation Using Perceptual Principles for Content-Based Coding and Information Management". Thesis, Griffith University, 2004. http://hdl.handle.net/10072/366279.
Pełny tekst źródłaThesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Information Technology
Full Text
Belzner, Katharine Ann. "DPOAE two-source separation in adult Japanese quail (Coturnix coturnix japonica) /". Full-text of dissertation on the Internet (891.53 KB), 2010. http://www.lib.jmu.edu/general/etd/2010/doctorate/belzneka/belzneka_doctorate_04-19-2010_02.pdf.
Pełny tekst źródłaDeleforge, Antoine. "Acoustic Space Mapping : A Machine Learning Approach to Sound Source Separation and Localization". Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENM033/document.
In this thesis, we address the long-studied problem of binaural (two-microphone) sound source separation and localization through supervised learning. To achieve this, we develop a new paradigm, referred to as acoustic space mapping, at the crossroads of binaural perception, robot hearing, audio signal processing and machine learning. The proposed approach consists in learning a link between the auditory cues perceived by the system and the position of the emitting sound source in another modality of the system, such as the visual space or the motor space. We propose new experimental protocols to automatically gather large training sets that associate such data. The obtained datasets are then used to reveal some fundamental intrinsic properties of acoustic spaces, and lead to the development of a general family of probabilistic models for locally linear high-to-low-dimensional space mapping. We show that these models unify several existing regression and dimensionality-reduction techniques, while encompassing a large number of new models that generalize previous ones. The properties and inference of these models are detailed thoroughly, and the advantage of the proposed methods over state-of-the-art techniques is established on several space-mapping applications beyond the scope of auditory scene analysis. We then show how the proposed methods can be probabilistically extended to tackle the long-known cocktail-party problem, i.e., accurately localizing one or several sound sources emitting at the same time in a real-world environment, and separating the mixed signals. We show that the resulting techniques perform these tasks with unequaled accuracy. This demonstrates the important role of learning, and puts forward the acoustic space mapping paradigm as a promising tool for robustly addressing the most challenging problems in computational binaural audition.
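The "locally linear high-to-low-dimensional space mapping" that the abstract describes can be sketched in a few lines. The snippet below is an illustration only, not the thesis's actual probabilistic model: the cue generator `cues`, the cluster count `K`, and all data are invented, and plain k-means plus one least-squares regressor per cluster stands in for the full probabilistic inference. It maps toy 10-D "auditory cues" back to the 1-D "source position" that generated them.

```python
import numpy as np

rng = np.random.default_rng(0)

def cues(x):
    """Toy forward model: map scalar source positions x to 10-D cue vectors."""
    t = np.linspace(0.5, np.pi, 10)
    return np.sin(np.outer(x, t)) + 0.5 * x[:, None]

# Training set: positions in [-1, 1] and their (noisy) high-dimensional cues.
x_train = rng.uniform(-1.0, 1.0, 500)
y_train = cues(x_train) + 0.01 * rng.standard_normal((500, 10))

# Cluster the cue space with a few iterations of plain k-means.
K = 8
centers = y_train[rng.choice(len(y_train), K, replace=False)]
for _ in range(20):
    labels = np.argmin(((y_train[:, None, :] - centers) ** 2).sum(-1), axis=1)
    for k in range(K):
        if (labels == k).any():
            centers[k] = y_train[labels == k].mean(axis=0)

# One affine regressor (cues -> position) per cluster, fitted by least squares;
# a single global regressor serves as the fallback for sparse clusters.
Yall = np.hstack([y_train, np.ones((len(y_train), 1))])
w_global, *_ = np.linalg.lstsq(Yall, x_train, rcond=None)
weights = np.tile(w_global, (K, 1))
for k in range(K):
    m = labels == k
    if m.sum() >= 11:  # need enough points for a stable local fit
        Yk = np.hstack([y_train[m], np.ones((m.sum(), 1))])
        weights[k], *_ = np.linalg.lstsq(Yk, x_train[m], rcond=None)

def predict(y):
    """Locally linear inverse map: pick the nearest cluster, apply its regressor."""
    lab = np.argmin(((y[:, None, :] - centers) ** 2).sum(-1), axis=1)
    yb = np.hstack([y, np.ones((len(y), 1))])
    return np.einsum("ij,ij->i", yb, weights[lab])

x_test = rng.uniform(-0.9, 0.9, 100)
err = np.abs(predict(cues(x_test)) - x_test).mean()
print(f"mean absolute localization error: {err:.3f}")
```

Deleforge's models replace the hard cluster assignment with posterior responsibilities in a Gaussian mixture, which also yields uncertainty estimates; the hard version above only conveys the piecewise-linear intuition.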
Joseph, Joby. "Why only two ears? Some indicators from the study of source separation using two sensors". Thesis, Indian Institute of Science, 2004. http://hdl.handle.net/2005/55.
Ardam, Nagaraju. "Study of ASA Algorithms". Thesis, Linköpings universitet, Elektroniksystem, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-70996.
Pełny tekst źródłaHearing-Aid
Otsuka, Takuma. "Bayesian Microphone Array Processing". 京都大学 (Kyoto University), 2014. http://hdl.handle.net/2433/188871.
Chen, Zhuo. "Single Channel auditory source separation with neural network". Thesis, 2017. https://doi.org/10.7916/D8W09C8N.
Book chapters on the topic "Auditory source separation"
Hummersone, Christopher, Toby Stokes and Tim Brookes. "On the Ideal Ratio Mask as the Goal of Computational Auditory Scene Analysis". In Blind Source Separation, 349–68. Berlin, Heidelberg: Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-642-55016-4_12.
Duong, Ngoc Q. K., Emmanuel Vincent and Rémi Gribonval. "Under-Determined Reverberant Audio Source Separation Using Local Observed Covariance and Auditory-Motivated Time-Frequency Representation". In Latent Variable Analysis and Signal Separation, 73–80. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15995-4_10.
Hamada, Nozomu, and Ning Ding. "Source Separation and DOA Estimation for Underdetermined Auditory Scene". In Soundscape Semiotics - Localisation and Categorisation. InTech, 2014. http://dx.doi.org/10.5772/56013.
Conference papers on the topic "Auditory source separation"
Li, Han, Kean Chen and Bernhard U. Seeber. "Auditory Filterbanks Benefit Universal Sound Source Separation". In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2021. http://dx.doi.org/10.1109/icassp39728.2021.9414105.
Kim, Chanwoo, Kshitiz Kumar and Richard M. Stern. "Binaural sound source separation motivated by auditory processing". In ICASSP 2011 - 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2011. http://dx.doi.org/10.1109/icassp.2011.5947497.
Faller, Kenneth John, Jason Riddley and Elijah Grubbs. "Automatic blind source separation of speech sources in an auditory scene". In 2017 51st Asilomar Conference on Signals, Systems, and Computers. IEEE, 2017. http://dx.doi.org/10.1109/acssc.2017.8335176.
Kong, Qiuqiang, Yuxuan Wang, Xuchen Song, Yin Cao, Wenwu Wang and Mark D. Plumbley. "Source Separation with Weakly Labelled Data: an Approach to Computational Auditory Scene Analysis". In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020. http://dx.doi.org/10.1109/icassp40776.2020.9053396.
Hussain, Abrar, Kalaivani Chellappan and Siti Zamratol Mai-Sarah Mukari. "Evaluation of source separation using projection pursuit algorithm for computer-based auditory training system". In 2017 7th IEEE International Conference on System Engineering and Technology (ICSET). IEEE, 2017. http://dx.doi.org/10.1109/icsengt.2017.8123436.
Cantisani, Giorgia, Slim Essid and Gael Richard. "Neuro-Steered Music Source Separation With EEG-Based Auditory Attention Decoding And Contrastive-NMF". In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2021. http://dx.doi.org/10.1109/icassp39728.2021.9413841.