Ready-made bibliography on the topic "Sound recognition"
Create accurate references in APA, MLA, Chicago, Harvard, and many other citation styles
Table of contents
Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Sound recognition".
An "Add to bibliography" button appears next to every work in the list. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online, whenever these are available in the metadata.
Journal articles on the topic "Sound recognition"
Ishihara, Kazushi, Kazunori Komatani, Tetsuya Ogata, and Hiroshi G. Okuno. "Sound-Imitation Word Recognition for Environmental Sounds". Transactions of the Japanese Society for Artificial Intelligence 20 (2005): 229–36. http://dx.doi.org/10.1527/tjsai.20.229.
Okubo, Shota, Zhihao Gong, Kento Fujita, and Ken Sasaki. "Recognition of Transient Environmental Sounds Based on Temporal and Frequency Features". International Journal of Automation Technology 13, no. 6 (November 5, 2019): 803–9. http://dx.doi.org/10.20965/ijat.2019.p0803.
Hanna, S. A., and Ann Stuart Laubstein. "Speaker‐independent sound recognition". Journal of the Acoustical Society of America 92, no. 4 (October 1992): 2475–76. http://dx.doi.org/10.1121/1.404442.
Ibrahim Alsaif, Omar, Kifaa Hadi Thanoon, and Asmaa Hadi Al_bayati. "Auto electronic recognition of the Arabic letters sound". Indonesian Journal of Electrical Engineering and Computer Science 28, no. 2 (November 1, 2022): 769. http://dx.doi.org/10.11591/ijeecs.v28.i2.pp769-776.
Guo, Xuan, Yoshiyuki Toyoda, Huankang Li, Jie Huang, Shuxue Ding, and Yong Liu. "Environmental Sound Recognition Using Time-Frequency Intersection Patterns". Applied Computational Intelligence and Soft Computing 2012 (2012): 1–6. http://dx.doi.org/10.1155/2012/650818.
Cheng, Xiefeng, Pengfei Wang, and Chenjun She. "Biometric Identification Method for Heart Sound Based on Multimodal Multiscale Dispersion Entropy". Entropy 22, no. 2 (February 20, 2020): 238. http://dx.doi.org/10.3390/e22020238.
Norman-Haignere, Sam V., and Josh H. McDermott. "Sound recognition depends on real-world sound level". Journal of the Acoustical Society of America 139, no. 4 (April 2016): 2156. http://dx.doi.org/10.1121/1.4950385.
Zhai, Xiu, Fatemeh Khatami, Mina Sadeghi, Fengrong He, Heather L. Read, Ian H. Stevenson, and Monty A. Escabí. "Distinct neural ensemble response statistics are associated with recognition and discrimination of natural sound textures". Proceedings of the National Academy of Sciences 117, no. 49 (November 20, 2020): 31482–93. http://dx.doi.org/10.1073/pnas.2005644117.
Song, Hang, Bin Zhao, Jun Hu, Haonan Sun, and Zheng Zhou. "Research on Improved DenseNets Pig Cough Sound Recognition Model Based on SENets". Electronics 11, no. 21 (October 31, 2022): 3562. http://dx.doi.org/10.3390/electronics11213562.
Binh, Nguyen Dang. "Gestures Recognition from Sound Waves". EAI Endorsed Transactions on Context-aware Systems and Applications 3, no. 10 (September 12, 2016): 151679. http://dx.doi.org/10.4108/eai.12-9-2016.151679.
Doctoral dissertations on the topic "Sound recognition"
Kawaguchi, Nobuo, and Yuya Negishi. "Instant Learning Sound Sensor: Flexible Environmental Sound Recognition System". IEEE, 2007. http://hdl.handle.net/2237/15456.
Chapman, David P. "Playing with sounds: a spatial solution for computer sound synthesis". Thesis, University of Bath, 1996. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.307047.
Stäger, Mathias. "Low-power sound-based user activity recognition". Zürich: ETH, 2006. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=16719.
Medhat, Fady. "Masked conditional neural networks for sound recognition". Thesis, University of York, 2018. http://etheses.whiterose.ac.uk/21594/.
Rodeia, José Pedro dos Santos. "Analysis and recognition of similar environmental sounds". Master's thesis, FCT - UNL, 2009. http://hdl.handle.net/10362/2305.
Humans can identify sound sources just by hearing a sound. Teaching computers to do the same is called (automatic) sound recognition. Several sound recognizers have been developed over the years; their accuracy depends on the features they use and on the classification method implemented. While there are many approaches to sound feature extraction and classification, most have been applied to sounds with very different characteristics. Here, we implemented a recognizer for similar sounds. Because the sounds it handles have very similar properties, the recognition problem is harder, so we use both temporal and spectral properties of the sound. These properties are extracted with the Intrinsic Structures Analysis (ISA) method, which builds on Independent Component Analysis and Principal Component Analysis, and classification is based on the k-Nearest Neighbor algorithm. We show that features extracted in this way are effective for sound recognition: tested with several of the feature sets the ISA method retrieves, the recognizer achieved strong results. Finally, we ran a user study comparing human performance at distinguishing similar sounds against our recognizer. The study let us conclude that the sounds are indeed very similar and difficult to distinguish, and that our recognizer identifies them considerably better than humans do.
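The classification stage the abstract describes, labeling a feature vector by majority vote among its k nearest training examples, can be sketched in a few lines. The two-dimensional feature values, class names, and the choice k=3 below are invented for illustration and are not taken from the thesis, where features would come from the ISA method:

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """Return the majority class among the k training vectors
    nearest to `query` (Euclidean distance)."""
    nearest = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy training data: (feature vector, sound class) pairs --
# purely illustrative stand-ins for ISA-derived features.
train = [
    ((0.90, 0.10), "bell"), ((0.80, 0.20), "bell"), ((0.85, 0.15), "bell"),
    ((0.10, 0.90), "whistle"), ((0.20, 0.80), "whistle"), ((0.15, 0.85), "whistle"),
]

print(knn_classify(train, (0.82, 0.18)))  # -> bell
print(knn_classify(train, (0.12, 0.88)))  # -> whistle
```

An odd k avoids ties in two-class problems; with similar-sounding classes, the quality of the extracted features matters far more than the choice of k.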
Martin, Keith Dana. "Sound-source recognition : a theory and computational model". Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/9468.
Includes bibliographical references (p. 159-172).
The ability of a normal human listener to recognize objects in the environment from only the sounds they produce is extraordinarily robust with regard to characteristics of the acoustic environment and of other competing sound sources. In contrast, computer systems designed to recognize sound sources function precariously, breaking down whenever the target sound is degraded by reverberation, noise, or competing sounds. Robust listening requires extensive contextual knowledge, but the potential contribution of sound-source recognition to the process of auditory scene analysis has largely been neglected by researchers building computational models of the scene analysis process. This thesis proposes a theory of sound-source recognition, casting recognition as a process of gathering information to enable the listener to make inferences about objects in the environment or to predict their behavior. In order to explore the process, attention is restricted to isolated sounds produced by a small class of sound sources, the non-percussive orchestral musical instruments. Previous research on the perception and production of orchestral instrument sounds is reviewed from a vantage point based on the excitation and resonance structure of the sound-production process, revealing a set of perceptually salient acoustic features. A computer model of the recognition process is developed that is capable of "listening" to a recording of a musical instrument and classifying the instrument as one of 25 possibilities. The model is based on current models of signal processing in the human auditory system. It explicitly extracts salient acoustic features and uses a novel improvisational taxonomic architecture (based on simple statistical pattern-recognition techniques) to classify the sound source. The performance of the model is compared directly to that of skilled human listeners, using both isolated musical tones and excerpts from compact disc recordings as test stimuli. 
The computer model's performance is robust with regard to the variations of reverberation and ambient noise (although not with regard to competing sound sources) in commercial compact disc recordings, and the system performs better than three out of fourteen skilled human listeners on a forced-choice classification task. This work has implications for research in musical timbre, automatic media annotation, human talker identification, and computational auditory scene analysis.
by Keith Dana Martin.
Ph.D.
Hunter, Jane Louise. "Integrated sound synchronisation for computer animation". Thesis, University of Cambridge, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.239569.
Soltani-Farani, A. A. "Sound visualisation as an aid for the deaf: a new approach". Thesis, University of Surrey, 1998. http://epubs.surrey.ac.uk/844112/.
Gillespie, Bradford W. "Strategies for improving audible quality and speech recognition accuracy of reverberant speech". Thesis, University of Washington, 2002. http://hdl.handle.net/1773/5930.
Corbet, Remy. "A Sound for Recognition: Blues Music and the African American Community". OpenSIUC, 2011. https://opensiuc.lib.siu.edu/theses/730.
Books on the topic "Sound recognition"
Artificial perception and music recognition. Berlin: Springer-Verlag, 1993.
Minker, Wolfgang. Incorporating Knowledge Sources into Statistical Speech Recognition. Boston, MA: Springer Science+Business Media, LLC, 2009.
Carlin, Kevin John. A domestic sound recognition and identification alert system for the profoundly deaf. [s.l: The Author], 1996.
Kil, David H. Pattern recognition and prediction with applications to signal characterization. Woodbury, N.Y: AIP Press, 1996.
Wolfman, Karen Anne Siegelman. The influence of spelling-sound consistency on the use of reading strategies. Ottawa: National Library of Canada, 1993.
Junqua, Jean-Claude. Robustness in automatic speech recognition: Fundamentals and applications. Boston: Kluwer Academic Publishers, 1996.
Word sorts and more: Sound, pattern, and meaning explorations K-3. New York: Guilford Press, 2006.
Benesty, Jacob, M. Mohan Sondhi, and Yiteng Huang, eds. Springer handbook of speech processing. Berlin: Springer, 2008.
Blauert, Jens, ed. Communication acoustics. Berlin: Springer-Verlag, 2005.
Kryssanov, Victor V., Hitoshi Ogawa, and Stephen Brewster, eds. Haptic and Audio Interaction Design: 6th International Workshop, HAID 2011, Kusatsu, Japan, August 25-26, 2011. Proceedings. Berlin, Heidelberg: Springer-Verlag, 2011.
Book chapters on the topic "Sound recognition"
Cai, Yang, and Károly D. Pados. "Sound Recognition". In Computing with Instinct, 16–34. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-19757-4_2.
Nam, Juhan, Gautham J. Mysore, and Paris Smaragdis. "Sound Recognition in Mixtures". In Latent Variable Analysis and Signal Separation, 405–13. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-28551-6_50.
Popova, Anastasiya S., Alexandr G. Rassadin, and Alexander A. Ponomarenko. "Emotion Recognition in Sound". In Advances in Neural Computation, Machine Learning, and Cognitive Research, 117–24. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66604-4_18.
Agus, Trevor R., Clara Suied, and Daniel Pressnitzer. "Timbre Recognition and Sound Source Identification". In Timbre: Acoustics, Perception, and Cognition, 59–85. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-14832-4_3.
Hayes, Kimberley, and Amit Rajput. "NDE 4.0: Image and Sound Recognition". In Handbook of Nondestructive Evaluation 4.0, 403–22. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-73206-6_26.
Ifukube, Tohru. "Speech Recognition Systems for the Hearing Impaired and the Elderly". In Sound-Based Assistive Technology, 145–67. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-47997-2_5.
Theodorou, Theodoros, Iosif Mporas, and Nikos Fakotakis. "Automatic Sound Recognition of Urban Environment Events". In Speech and Computer, 129–36. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-23132-7_16.
Bolat, Bülent, and Ünal Küçük. "Musical Sound Recognition by Active Learning PNN". In Multimedia Content Representation, Classification and Security, 474–81. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11848035_63.
Zhang, Zhichao, Shugong Xu, Tianhao Qiao, Shunqing Zhang, and Shan Cao. "Attention Based Convolutional Recurrent Neural Network for Environmental Sound Classification". In Pattern Recognition and Computer Vision, 261–71. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-31654-9_23.
Santos, Vasco C. F., Miguel F. M. Sousa, and Aníbal J. S. Ferreira. "Quality Assessment of Manufactured Roof-Tiles Using Digital Sound Processing". In Pattern Recognition and Image Analysis, 927–34. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-44871-6_107.
Conference papers on the topic "Sound recognition"
Mohanapriya, S. P., and R. Karthika. "Unsupervised environmental sound recognition". In 2014 International Conference on Embedded Systems (ICES). IEEE, 2014. http://dx.doi.org/10.1109/embeddedsys.2014.6953048.
Arslan, Yuksel, and Huseyin Canbolat. "A sound database development for environmental sound recognition". In 2017 25th Signal Processing and Communications Applications Conference (SIU). IEEE, 2017. http://dx.doi.org/10.1109/siu.2017.7960241.
Bear, Helen L., Inês Nolasco, and Emmanouil Benetos. "Towards Joint Sound Scene and Polyphonic Sound Event Recognition". In Interspeech 2019. ISCA, 2019. http://dx.doi.org/10.21437/interspeech.2019-2169.
Negishi, Yuya, and Nobuo Kawaguchi. "Instant Learning Sound Sensor: Flexible Environmental Sound Recognition System". In 2007 Fourth International Conference on Networked Sensing Systems. IEEE, 2007. http://dx.doi.org/10.1109/inss.2007.4297447.
Harb, H., and L. Chen. "Sound recognition: a connectionist approach". In Seventh International Symposium on Signal Processing and Its Applications, 2003. Proceedings. IEEE, 2003. http://dx.doi.org/10.1109/isspa.2003.1224953.
Damnong, Punyanut, Phimphaka Taninpong, and Jakramate Bootkrajang. "Steam Trap Opening Sound Recognition". In 2021 18th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON). IEEE, 2021. http://dx.doi.org/10.1109/ecti-con51831.2021.9454929.
Chachada, Sachin, and C. C. Jay Kuo. "Environmental sound recognition: A survey". In 2013 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA). IEEE, 2013. http://dx.doi.org/10.1109/apsipa.2013.6694338.
Vryzas, Nikolaos, Maria Matsiola, Rigas Kotsakis, Charalampos Dimoulas, and George Kalliris. "Subjective Evaluation of a Speech Emotion Recognition Interaction Framework". In AM'18: Sound in Immersion and Emotion. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3243274.3243294.
Schaefer, Edward M. "Representing pictures with sound". In 2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR). IEEE, 2014. http://dx.doi.org/10.1109/aipr.2014.7041934.
Fan, Changyuan, and Zhenfeng Li. "Research of artillery's sound recognition technology". In Instruments (ICEMI). IEEE, 2009. http://dx.doi.org/10.1109/icemi.2009.5274519.
Reports on the topic "Sound recognition"
Ballas, James A. Recognition of Environmental Sounds. Fort Belvoir, VA: Defense Technical Information Center, November 1989. http://dx.doi.org/10.21236/ada214942.
From Risk and Conflict to Peace and Prosperity: The urgency of securing community land rights in a turbulent world. Rights and Resources Initiative, February 2017. http://dx.doi.org/10.53892/sdos4115.