Academic literature on the topic "Sound recognition"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference papers, and other scholarly sources on the topic "Sound recognition".
Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Sound recognition"
Ishihara, Kazushi, Kazunori Komatani, Tetsuya Ogata, and Hiroshi G. Okuno. "Sound-Imitation Word Recognition for Environmental Sounds". Transactions of the Japanese Society for Artificial Intelligence 20 (2005): 229–36. http://dx.doi.org/10.1527/tjsai.20.229.
Okubo, Shota, Zhihao Gong, Kento Fujita, and Ken Sasaki. "Recognition of Transient Environmental Sounds Based on Temporal and Frequency Features". International Journal of Automation Technology 13, no. 6 (November 5, 2019): 803–9. http://dx.doi.org/10.20965/ijat.2019.p0803.
Hanna, S. A., and Ann Stuart Laubstein. "Speaker‐independent sound recognition". Journal of the Acoustical Society of America 92, no. 4 (October 1992): 2475–76. http://dx.doi.org/10.1121/1.404442.
Ibrahim Alsaif, Omar, Kifaa Hadi Thanoon, and Asmaa Hadi Al_bayati. "Auto electronic recognition of the Arabic letters sound". Indonesian Journal of Electrical Engineering and Computer Science 28, no. 2 (November 1, 2022): 769. http://dx.doi.org/10.11591/ijeecs.v28.i2.pp769-776.
Guo, Xuan, Yoshiyuki Toyoda, Huankang Li, Jie Huang, Shuxue Ding, and Yong Liu. "Environmental Sound Recognition Using Time-Frequency Intersection Patterns". Applied Computational Intelligence and Soft Computing 2012 (2012): 1–6. http://dx.doi.org/10.1155/2012/650818.
Cheng, Xiefeng, Pengfei Wang, and Chenjun She. "Biometric Identification Method for Heart Sound Based on Multimodal Multiscale Dispersion Entropy". Entropy 22, no. 2 (February 20, 2020): 238. http://dx.doi.org/10.3390/e22020238.
Norman-Haignere, Sam V., and Josh H. McDermott. "Sound recognition depends on real-world sound level". Journal of the Acoustical Society of America 139, no. 4 (April 2016): 2156. http://dx.doi.org/10.1121/1.4950385.
Zhai, Xiu, Fatemeh Khatami, Mina Sadeghi, Fengrong He, Heather L. Read, Ian H. Stevenson, and Monty A. Escabí. "Distinct neural ensemble response statistics are associated with recognition and discrimination of natural sound textures". Proceedings of the National Academy of Sciences 117, no. 49 (November 20, 2020): 31482–93. http://dx.doi.org/10.1073/pnas.2005644117.
Song, Hang, Bin Zhao, Jun Hu, Haonan Sun, and Zheng Zhou. "Research on Improved DenseNets Pig Cough Sound Recognition Model Based on SENets". Electronics 11, no. 21 (October 31, 2022): 3562. http://dx.doi.org/10.3390/electronics11213562.
Binh, Nguyen Dang. "Gestures Recognition from Sound Waves". EAI Endorsed Transactions on Context-aware Systems and Applications 3, no. 10 (September 12, 2016): 151679. http://dx.doi.org/10.4108/eai.12-9-2016.151679.
Theses on the topic "Sound recognition"
Kawaguchi, Nobuo, and Yuya Negishi. "Instant Learning Sound Sensor: Flexible Environmental Sound Recognition System". IEEE, 2007. http://hdl.handle.net/2237/15456.
Chapman, David P. "Playing with sounds : a spatial solution for computer sound synthesis". Thesis, University of Bath, 1996. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.307047.
Stäger, Mathias. "Low-power sound-based user activity recognition /". Zürich : ETH, 2006. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=16719.
Medhat, Fady. "Masked conditional neural networks for sound recognition". Thesis, University of York, 2018. http://etheses.whiterose.ac.uk/21594/.
Rodeia, José Pedro dos Santos. "Analysis and recognition of similar environmental sounds". Master's thesis, FCT - UNL, 2009. http://hdl.handle.net/10362/2305.
Texto completoHumans have the ability to identify sound sources just by hearing a sound. Adapting the same problem to computers is called (automatic) sound recognition. Several sound recognizers have been developed throughout the years. The accuracy provided by these recognizers is influenced by the features they use and the classification method implemented. While there are many approaches in sound feature extraction and in sound classification, most have been used to classify sounds with very different characteristics. Here, we implemented a similar sound recognizer. This recognizer uses sounds with very similar properties making the recognition process harder. Therefore, we will use both temporal and spectral properties of the sound. These properties will be extracted using the Intrinsic Structures Analysis (ISA) method, which uses Independent Component Analysis and Principal Component Analysis. We will implement the classification method based on k-Nearest Neighbor algorithm. Here we prove that the features extracted in this way are powerful in sound recognition. We tested our recognizer with several sets of features the ISA method retrieves, and achieved great results. We, finally, did a user study to compare human performance distinguishing similar sounds against our recognizer. The study allowed us to conclude the sounds are in fact really similar and difficult to distinguish and that our recognizer has much more ability than humans to identify them.
Martin, Keith Dana. "Sound-source recognition : a theory and computational model". Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/9468.
Includes bibliographical references (p. 159-172).
The ability of a normal human listener to recognize objects in the environment from only the sounds they produce is extraordinarily robust with regard to characteristics of the acoustic environment and of other competing sound sources. In contrast, computer systems designed to recognize sound sources function precariously, breaking down whenever the target sound is degraded by reverberation, noise, or competing sounds. Robust listening requires extensive contextual knowledge, but the potential contribution of sound-source recognition to the process of auditory scene analysis has largely been neglected by researchers building computational models of the scene analysis process. This thesis proposes a theory of sound-source recognition, casting recognition as a process of gathering information to enable the listener to make inferences about objects in the environment or to predict their behavior. In order to explore the process, attention is restricted to isolated sounds produced by a small class of sound sources, the non-percussive orchestral musical instruments. Previous research on the perception and production of orchestral instrument sounds is reviewed from a vantage point based on the excitation and resonance structure of the sound-production process, revealing a set of perceptually salient acoustic features. A computer model of the recognition process is developed that is capable of "listening" to a recording of a musical instrument and classifying the instrument as one of 25 possibilities. The model is based on current models of signal processing in the human auditory system. It explicitly extracts salient acoustic features and uses a novel improvisational taxonomic architecture (based on simple statistical pattern-recognition techniques) to classify the sound source. The performance of the model is compared directly to that of skilled human listeners, using both isolated musical tones and excerpts from compact disc recordings as test stimuli. 
The computer model's performance is robust with regard to the variations of reverberation and ambient noise (although not with regard to competing sound sources) in commercial compact disc recordings, and the system performs better than three out of fourteen skilled human listeners on a forced-choice classification task. This work has implications for research in musical timbre, automatic media annotation, human talker identification, and computational auditory scene analysis.
by Keith Dana Martin.
Ph.D.
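Martin's "taxonomic architecture" classifies at the instrument-family level before the instrument level. A toy illustration of that two-stage idea follows; it is not Martin's model, and the features, thresholds, and miniature taxonomy here are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Features:
    attack_time: float        # seconds; slow attacks suggest bowed or blown excitation
    spectral_centroid: float  # Hz; rough "brightness" of the sustained tone

def classify_family(f: Features) -> str:
    # Stand-in for a statistical family-level model.
    return "brass" if f.spectral_centroid > 1500.0 else "strings"

def classify_instrument(family: str, f: Features) -> str:
    # Within-family decision, again with invented thresholds.
    if family == "brass":
        return "trumpet" if f.spectral_centroid > 2500.0 else "trombone"
    return "violin" if f.attack_time < 0.1 else "cello"

def recognize(f: Features) -> str:
    # Taxonomic decision: choose the family first, then the leaf within it.
    return classify_instrument(classify_family(f), f)

print(recognize(Features(attack_time=0.05, spectral_centroid=900.0)))  # prints "violin"
```

The point of the hierarchy is that each decision is made among few alternatives using the features most diagnostic at that level, rather than one flat choice among all 25 instruments.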
Hunter, Jane Louise. "Integrated sound synchronisation for computer animation". Thesis, University of Cambridge, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.239569.
Soltani-Farani, A. A. "Sound visualisation as an aid for the deaf : a new approach". Thesis, University of Surrey, 1998. http://epubs.surrey.ac.uk/844112/.
Gillespie, Bradford W. "Strategies for improving audible quality and speech recognition accuracy of reverberant speech /". Thesis, Connect to this title online; UW restricted, 2002. http://hdl.handle.net/1773/5930.
Corbet, Remy. "A Sound for Recognition: Blues Music and the African American Community". OpenSIUC, 2011. https://opensiuc.lib.siu.edu/theses/730.
Books on the topic "Sound recognition"
Artificial perception and music recognition. Berlin: Springer-Verlag, 1993.
Minker, Wolfgang. Incorporating Knowledge Sources into Statistical Speech Recognition. Boston, MA: Springer Science+Business Media, LLC, 2009.
Carlin, Kevin John. A domestic sound recognition and identification alert system for the profoundly deaf. [s.l: The Author], 1996.
Kil, David H. Pattern recognition and prediction with applications to signal characterization. Woodbury, N.Y: AIP Press, 1996.
Wolfman, Karen Anne Siegelman. The influence of spelling-sound consistency on the use of reading strategies. Ottawa: National Library of Canada, 1993.
Junqua, Jean-Claude. Robustness in automatic speech recognition: Fundamentals and applications. Boston: Kluwer Academic Publishers, 1996.
Word sorts and more: Sound, pattern, and meaning explorations K-3. New York: Guilford Press, 2006.
Benesty, Jacob, M. Mohan Sondhi, and Yiteng Huang, eds. Springer handbook of speech processing. Berlin: Springer, 2008.
Blauert, Jens, ed. Communication acoustics. Berlin: Springer-Verlag, 2005.
Kryssanov, Victor V., Hitoshi Ogawa, Stephen Brewster, and SpringerLink (Online service), eds. Haptic and Audio Interaction Design: 6th International Workshop, HAID 2011, Kusatsu, Japan, August 25-26, 2011. Proceedings. Berlin, Heidelberg: Springer-Verlag GmbH Berlin Heidelberg, 2011.
Book chapters on the topic "Sound recognition"
Cai, Yang, and Károly D. Pados. "Sound Recognition". In Computing with Instinct, 16–34. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-19757-4_2.
Nam, Juhan, Gautham J. Mysore, and Paris Smaragdis. "Sound Recognition in Mixtures". In Latent Variable Analysis and Signal Separation, 405–13. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-28551-6_50.
Popova, Anastasiya S., Alexandr G. Rassadin, and Alexander A. Ponomarenko. "Emotion Recognition in Sound". In Advances in Neural Computation, Machine Learning, and Cognitive Research, 117–24. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66604-4_18.
Agus, Trevor R., Clara Suied, and Daniel Pressnitzer. "Timbre Recognition and Sound Source Identification". In Timbre: Acoustics, Perception, and Cognition, 59–85. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-14832-4_3.
Hayes, Kimberley, and Amit Rajput. "NDE 4.0: Image and Sound Recognition". In Handbook of Nondestructive Evaluation 4.0, 403–22. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-73206-6_26.
Ifukube, Tohru. "Speech Recognition Systems for the Hearing Impaired and the Elderly". In Sound-Based Assistive Technology, 145–67. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-47997-2_5.
Theodorou, Theodoros, Iosif Mporas, and Nikos Fakotakis. "Automatic Sound Recognition of Urban Environment Events". In Speech and Computer, 129–36. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-23132-7_16.
Bolat, Bülent, and Ünal Küçük. "Musical Sound Recognition by Active Learning PNN". In Multimedia Content Representation, Classification and Security, 474–81. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11848035_63.
Zhang, Zhichao, Shugong Xu, Tianhao Qiao, Shunqing Zhang, and Shan Cao. "Attention Based Convolutional Recurrent Neural Network for Environmental Sound Classification". In Pattern Recognition and Computer Vision, 261–71. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-31654-9_23.
Santos, Vasco C. F., Miguel F. M. Sousa, and Aníbal J. S. Ferreira. "Quality Assessment of Manufactured Roof-Tiles Using Digital Sound Processing". In Pattern Recognition and Image Analysis, 927–34. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-44871-6_107.
Conference papers on the topic "Sound recognition"
Mohanapriya, S. P., and R. Karthika. "Unsupervised environmental sound recognition". In 2014 International Conference on Embedded Systems (ICES). IEEE, 2014. http://dx.doi.org/10.1109/embeddedsys.2014.6953048.
Arslan, Yuksel, and Huseyin Canbolat. "A sound database development for environmental sound recognition". In 2017 25th Signal Processing and Communications Applications Conference (SIU). IEEE, 2017. http://dx.doi.org/10.1109/siu.2017.7960241.
Bear, Helen L., Inês Nolasco, and Emmanouil Benetos. "Towards Joint Sound Scene and Polyphonic Sound Event Recognition". In Interspeech 2019. ISCA, 2019. http://dx.doi.org/10.21437/interspeech.2019-2169.
Negishi, Yuya, and Nobuo Kawaguchi. "Instant Learning Sound Sensor: Flexible Environmental Sound Recognition System". In 2007 Fourth International Conference on Networked Sensing Systems. IEEE, 2007. http://dx.doi.org/10.1109/inss.2007.4297447.
Harb, H., and L. Chen. "Sound recognition: a connectionist approach". In Seventh International Symposium on Signal Processing and Its Applications, 2003. Proceedings. IEEE, 2003. http://dx.doi.org/10.1109/isspa.2003.1224953.
Damnong, Punyanut, Phimphaka Taninpong, and Jakramate Bootkrajang. "Steam Trap Opening Sound Recognition". In 2021 18th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON). IEEE, 2021. http://dx.doi.org/10.1109/ecti-con51831.2021.9454929.
Chachada, Sachin, and C. C. Jay Kuo. "Environmental sound recognition: A survey". In 2013 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA). IEEE, 2013. http://dx.doi.org/10.1109/apsipa.2013.6694338.
Vryzas, Nikolaos, Maria Matsiola, Rigas Kotsakis, Charalampos Dimoulas, and George Kalliris. "Subjective Evaluation of a Speech Emotion Recognition Interaction Framework". In AM'18: Sound in Immersion and Emotion. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3243274.3243294.
Schaefer, Edward M. "Representing pictures with sound". In 2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR). IEEE, 2014. http://dx.doi.org/10.1109/aipr.2014.7041934.
Fan, Changyuan, and Zhenfeng Li. "Research of artillery's sound recognition technology". In 2009 9th International Conference on Electronic Measurement & Instruments (ICEMI). IEEE, 2009. http://dx.doi.org/10.1109/icemi.2009.5274519.
Reports on the topic "Sound recognition"
Ballas, James A. Recognition of Environmental Sounds. Fort Belvoir, VA: Defense Technical Information Center, November 1989. http://dx.doi.org/10.21236/ada214942.
From Risk and Conflict to Peace and Prosperity: The urgency of securing community land rights in a turbulent world. Rights and Resources Initiative, February 2017. http://dx.doi.org/10.53892/sdos4115.