Academic literature on the topic "Sound recognition"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles.

Consult the thematic lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Sound recognition".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Sound recognition"

1. Ishihara, Kazushi, Kazunori Komatani, Tetsuya Ogata, and Hiroshi G. Okuno. "Sound-Imitation Word Recognition for Environmental Sounds". Transactions of the Japanese Society for Artificial Intelligence 20 (2005): 229–36. http://dx.doi.org/10.1527/tjsai.20.229.
2. Okubo, Shota, Zhihao Gong, Kento Fujita, and Ken Sasaki. "Recognition of Transient Environmental Sounds Based on Temporal and Frequency Features". International Journal of Automation Technology 13, no. 6 (November 5, 2019): 803–9. http://dx.doi.org/10.20965/ijat.2019.p0803.

Abstract:
Environmental sound recognition (ESR) refers to the recognition of all sounds other than the human voice or musical sounds. Typical ESR methods utilize spectral information and variation within it with respect to time. However, in the case of transient sounds, spectral information is insufficient because only an average quantity of a given signal within a time period can be recognized. In this study, the waveform of sound signals and their spectrum were analyzed visually to extract temporal characteristics of the sound more directly. Based on the observations, features such as the initial rise time, duration, and smoothness of the sound signal; the distribution and smoothness of the spectrum; the clarity of the sustaining sound components; and the number and interval of collisions in chattering were proposed. Experimental feature values were obtained for eight transient environmental sounds, and the distributions of the values were evaluated. A recognition experiment was conducted on 11 transient sounds. The Mel-frequency cepstral coefficient (MFCC) was selected as reference. A support vector machine was adopted as the classification algorithm. The recognition rates obtained from the MFCC were below 50% for five of the 11 sounds, and the overall recognition rate was 69%. In contrast, the recognition rates obtained using the proposed features were above 50% for all sounds, and the overall rate was 86%.
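
As a concrete illustration of the MFCC-plus-SVM baseline that this abstract compares against, here is a minimal sketch using librosa and scikit-learn; the synthetic decaying-noise bursts and class labels are stand-ins for the paper's transient sounds, not its data or settings.

```python
# A minimal sketch of the MFCC + SVM baseline referenced in the abstract.
# Assumes librosa and scikit-learn; signals and labels are synthetic stand-ins.
import numpy as np
import librosa
from sklearn.svm import SVC

def mfcc_features(y, sr, n_mfcc=13):
    """Summarize a signal by its time-averaged MFCCs."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)  # one (n_mfcc,) vector per sound

sr = 16000
rng = np.random.default_rng(0)
# Decaying noise bursts stand in for transient environmental sounds.
sounds = [rng.normal(size=sr) * np.exp(-np.linspace(0, 8, sr)) for _ in range(6)]
labels = np.array([0, 0, 1, 1, 2, 2])  # three hypothetical sound classes

X = np.vstack([mfcc_features(s, sr) for s in sounds])
clf = SVC(kernel="rbf").fit(X, labels)  # same classifier family as the paper
print(clf.predict(X[:2]))
```
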
3. Hanna, S. A., and Ann Stuart Laubstein. "Speaker‐independent sound recognition". Journal of the Acoustical Society of America 92, no. 4 (October 1992): 2475–76. http://dx.doi.org/10.1121/1.404442.
4. Ibrahim Alsaif, Omar, Kifaa Hadi Thanoon, and Asmaa Hadi Al_bayati. "Auto electronic recognition of the Arabic letters sound". Indonesian Journal of Electrical Engineering and Computer Science 28, no. 2 (November 1, 2022): 769. http://dx.doi.org/10.11591/ijeecs.v28.i2.pp769-776.

Abstract:
In this research, Arabic speech sounds were studied and investigated to find the distinctive features of each articulated sound. Arabic sounds that share approximately the same significant distinctive features were chosen in order to study the ability to distinguish among them by extracting characteristic features. The speech signals of the sounds were recorded through a microphone and represented as a binary matrix, preparing them for a processing stage in which two co-occurrence matrix features (contrast and energy) were computed. The values of these features were studied and compared from one person to another to discover which speech sounds share common distinguishing features with similar articulation. The analysis of the results showed that these features can be relied on to distinguish the speaker's voice, and that they provide a high ability to distinguish among the Arabic letters, with no connection between the co-occurrence matrix elements and the signal features of any Arabic letter.
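
The two co-occurrence features named in the abstract, contrast and energy, have standard definitions; the sketch below computes them with NumPy on a synthetic binary matrix and does not reproduce the authors' recording or binarization pipeline.

```python
# Contrast and energy of a co-occurrence matrix, the two features named above.
# The binary input matrix is synthetic; offset and level count are assumptions.
import numpy as np

def cooccurrence(mat, dy=0, dx=1, levels=2):
    """Joint frequencies of level pairs at offset (dy, dx), normalized."""
    C = np.zeros((levels, levels))
    h, w = mat.shape
    for y in range(h - dy):
        for x in range(w - dx):
            C[mat[y, x], mat[y + dy, x + dx]] += 1
    return C / C.sum()

def contrast(C):
    i, j = np.indices(C.shape)
    return float(np.sum((i - j) ** 2 * C))

def energy(C):
    return float(np.sum(C ** 2))

binary = (np.random.rand(64, 64) > 0.5).astype(int)  # stand-in for the binarized signal
C = cooccurrence(binary)
print(contrast(C), energy(C))
```
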
5. Guo, Xuan, Yoshiyuki Toyoda, Huankang Li, Jie Huang, Shuxue Ding, and Yong Liu. "Environmental Sound Recognition Using Time-Frequency Intersection Patterns". Applied Computational Intelligence and Soft Computing 2012 (2012): 1–6. http://dx.doi.org/10.1155/2012/650818.

Abstract:
Environmental sound recognition is an important function of robots and intelligent computer systems. In this research, we use a multistage perceptron neural network system for environmental sound recognition. The input data is a combination of time-variance pattern of instantaneous powers and frequency-variance pattern with instantaneous spectrum at the power peak, referred to as a time-frequency intersection pattern. Spectra of many environmental sounds change more slowly than those of speech or voice, so the intersectional time-frequency pattern will preserve the major features of environmental sounds but with drastically reduced data requirements. Two experiments were conducted using an original database and an open database created by the RWCP project. The recognition rate for 20 kinds of environmental sounds was 92%. The recognition rate of the new method was about 12% higher than methods using only an instantaneous spectrum. The results are also comparable with HMM-based methods, although those methods need to treat the time variance of an input vector series with more complicated computations.
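
A rough sketch of the time-frequency intersection pattern described above: a temporal pattern of instantaneous powers concatenated with the instantaneous spectrum at the power peak. The frame size, hop, and vector lengths are illustrative assumptions rather than the paper's settings.

```python
# Builds a "time-frequency intersection pattern": a power envelope over time
# joined with the spectrum at the power peak. All sizes are assumptions.
import numpy as np

def intersection_pattern(y, sr, frame=1024, hop=512, n_time=32, n_freq=64):
    frames = np.lib.stride_tricks.sliding_window_view(y, frame)[::hop]
    power = (frames ** 2).mean(axis=1)              # instantaneous power per frame
    env = np.interp(np.linspace(0, len(power) - 1, n_time),
                    np.arange(len(power)), power)   # time-variance pattern
    peak = frames[int(power.argmax())]              # frame at the power peak
    spec = np.abs(np.fft.rfft(peak))[:n_freq]       # frequency-variance pattern
    return np.concatenate([env / (env.max() + 1e-12),
                           spec / (spec.max() + 1e-12)])

x = np.random.randn(16000)                          # stand-in for a 1 s sound at 16 kHz
print(intersection_pattern(x, 16000).shape)         # (96,): input to the perceptron net
```
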
6. Cheng, Xiefeng, Pengfei Wang, and Chenjun She. "Biometric Identification Method for Heart Sound Based on Multimodal Multiscale Dispersion Entropy". Entropy 22, no. 2 (February 20, 2020): 238. http://dx.doi.org/10.3390/e22020238.

Abstract:
In this paper, a new method of biometric characterization of heart sounds based on multimodal multiscale dispersion entropy is proposed. Firstly, the heart sound is periodically segmented, and each single-cycle heart sound is then decomposed into a group of intrinsic mode functions (IMFs) by improved complete ensemble empirical mode decomposition with adaptive noise (ICEEMDAN). These IMFs are then segmented into a series of frames, which are used to calculate the refined composite multiscale dispersion entropy (RCMDE) as the characteristic representation of the heart sound. In simulation experiment I, carried out on the open heart sound databases Michigan, Washington and Littman, this feature representation was combined with a heart sound segmentation method based on logistic regression (LR) and hidden semi-Markov models (HSMM), and feature selection was performed through the Fisher ratio (FR). Finally, the Euclidean distance (ED) and the closest-match principle were used for matching and identification, and the recognition accuracy was 96.08%. To improve the practical application value of this method, in experiment II the proposed method was applied to a database of 80 heart sounds constructed from the heart sounds of 40 volunteers, to discuss the effect of single-cycle heart sounds with different starting positions on performance. The experimental results show that single-cycle heart sounds starting at the onset of the first heart sound (S1) yield the highest recognition rate, 97.5%. In summary, the proposed method is effective for heart sound biometric recognition.
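
For orientation, the sketch below implements plain dispersion entropy, the quantity that RCMDE refines; it assumes NumPy and SciPy and omits the multiscale refinement, the ICEEMDAN decomposition, and the matching stage described in the abstract.

```python
# Plain dispersion entropy: map samples to classes via the normal CDF, count
# embedded patterns of length m, and take normalized Shannon entropy.
import numpy as np
from scipy.stats import norm

def dispersion_entropy(x, classes=6, m=3):
    x = np.asarray(x, dtype=float)
    y = norm.cdf(x, loc=x.mean(), scale=x.std())       # squash to (0, 1)
    z = np.clip(np.round(classes * y + 0.5).astype(int), 1, classes)
    counts = {}
    for i in range(len(z) - m + 1):
        key = tuple(z[i:i + m])                        # one dispersion pattern
        counts[key] = counts.get(key, 0) + 1
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log(p)).sum() / np.log(classes ** m))

print(dispersion_entropy(np.random.randn(2000)))       # value in [0, 1]
```
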
7. Norman-Haignere, Sam V., and Josh H. McDermott. "Sound recognition depends on real-world sound level". Journal of the Acoustical Society of America 139, no. 4 (April 2016): 2156. http://dx.doi.org/10.1121/1.4950385.
8. Zhai, Xiu, Fatemeh Khatami, Mina Sadeghi, Fengrong He, Heather L. Read, Ian H. Stevenson, and Monty A. Escabí. "Distinct neural ensemble response statistics are associated with recognition and discrimination of natural sound textures". Proceedings of the National Academy of Sciences 117, no. 49 (November 20, 2020): 31482–93. http://dx.doi.org/10.1073/pnas.2005644117.

Abstract:
The perception of sound textures, a class of natural sounds defined by statistical sound structure such as fire, wind, and rain, has been proposed to arise through the integration of time-averaged summary statistics. Where and how the auditory system might encode these summary statistics to create internal representations of these stationary sounds, however, is unknown. Here, using natural textures and synthetic variants with reduced statistics, we show that summary statistics modulate the correlations between frequency organized neuron ensembles in the awake rabbit inferior colliculus (IC). These neural ensemble correlation statistics capture high-order sound structure and allow for accurate neural decoding in a single trial recognition task with evidence accumulation times approaching 1 s. In contrast, the average activity across the neural ensemble (neural spectrum) provides a fast (tens of milliseconds) and salient signal that contributes primarily to texture discrimination. Intriguingly, perceptual studies in human listeners reveal analogous trends: the sound spectrum is integrated quickly and serves as a salient discrimination cue while high-order sound statistics are integrated slowly and contribute substantially more toward recognition. The findings suggest statistical sound cues such as the sound spectrum and correlation structure are represented by distinct response statistics in auditory midbrain ensembles, and that these neural response statistics may have dissociable roles and time scales for the recognition and discrimination of natural sounds.
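
The two ensemble response statistics contrasted in the abstract can be illustrated on any frequency-by-time representation; below, a random spectrogram-like array stands in for the frequency-organized ensemble. This is a loose analogy, not the study's decoding procedure.

```python
# Two summary statistics of a (channels x time) response array: the mean
# activity per channel ("spectrum") and the between-channel correlations.
import numpy as np

def ensemble_statistics(resp):
    spectrum = resp.mean(axis=1)   # fast cue, tied to discrimination above
    corr = np.corrcoef(resp)       # slow cue, tied to recognition above
    return spectrum, corr

resp = np.abs(np.random.randn(32, 500))  # stand-in for 32 frequency channels
spectrum, corr = ensemble_statistics(resp)
print(spectrum.shape, corr.shape)        # (32,), (32, 32)
```
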
9. Song, Hang, Bin Zhao, Jun Hu, Haonan Sun, and Zheng Zhou. "Research on Improved DenseNets Pig Cough Sound Recognition Model Based on SENets". Electronics 11, no. 21 (October 31, 2022): 3562. http://dx.doi.org/10.3390/electronics11213562.

Abstract:
In order to monitor the health status of pigs in real time during breeding and to provide early warning of swine respiratory diseases, an SE-DenseNet-121 recognition model was established to recognize pig cough sounds. The 13-dimensional MFCC, ΔMFCC and Δ²MFCC were transversely spliced to obtain six groups of parameters that reflect the static, dynamic and mixed characteristics of pig sound signals, respectively, and the DenseNet-121 recognition model was used to compare the performance of the six sets of parameters and obtain the optimal set. The DenseNet-121 model was then improved with the SENets attention module to enhance its ability to extract effective features from pig sound signals. The results showed that the optimal set of parameters was the 26-dimensional MFCC + ΔMFCC, and the recognition accuracy, recall, precision and F1 score of the SE-DenseNet-121 model for pig cough sounds were 93.8%, 98.6%, 97% and 97.8%, respectively. These results can be used to develop a pig cough sound recognition system for early warning of pig respiratory diseases.
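
The optimal feature set named above, 13 MFCCs transversely spliced with their first-order deltas into 26 dimensions per frame, can be sketched with librosa as follows; the signal is a synthetic stand-in, and the SE-DenseNet-121 network itself is not reproduced.

```python
# 26-dimensional frame features: 13 MFCCs stacked with their deltas.
# Assumes librosa; the input is a synthetic stand-in for a cough recording.
import numpy as np
import librosa

def mfcc_delta_features(y, sr):
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # static features
    d1 = librosa.feature.delta(mfcc)                    # dynamic features
    return np.vstack([mfcc, d1])                        # (26, n_frames)

sr = 16000
y = np.random.randn(2 * sr)   # stand-in for a two-second recording
print(mfcc_delta_features(y, sr).shape)
```
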
10. Binh, Nguyen Dang. "Gestures Recognition from Sound Waves". EAI Endorsed Transactions on Context-aware Systems and Applications 3, no. 10 (September 12, 2016): 151679. http://dx.doi.org/10.4108/eai.12-9-2016.151679.

Theses on the topic "Sound recognition"

1. Kawaguchi, Nobuo, and Yuya Negishi. "Instant Learning Sound Sensor: Flexible Environmental Sound Recognition System". IEEE, 2007. http://hdl.handle.net/2237/15456.
2. Chapman, David P. "Playing with sounds: a spatial solution for computer sound synthesis". Thesis, University of Bath, 1996. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.307047.
3. Stäger, Mathias. "Low-power sound-based user activity recognition". Zürich: ETH, 2006. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=16719.
4. Medhat, Fady. "Masked conditional neural networks for sound recognition". Thesis, University of York, 2018. http://etheses.whiterose.ac.uk/21594/.

Abstract:
Sound recognition has been studied for decades to grant machines a human-like hearing ability. Advances in this field help in a range of applications, from industrial ones such as fault detection in machines and noise monitoring to household applications such as surveillance and hearing aids. The problem of sound recognition, like any pattern recognition task, involves the reliability of the extracted features and of the recognition model. The problem has been approached through decades of crafted features used collaboratively with models based on neural networks or statistical models such as Gaussian mixtures and hidden Markov models. Neural networks are currently being considered as a method to automate the feature extraction stage together with their already incorporated role in recognition, and the performance of such models is approaching that of handcrafted features. However, current neural network based models are not primarily designed for the nature of the sound signal and may not optimally harness its distinctive properties. This thesis proposes neural network models that exploit the nature of the time-frequency representation of the sound signal. We propose the ConditionaL Neural Network (CLNN) and the Masked ConditionaL Neural Network (MCLNN). The CLNN is designed to account for the temporal dimension of a signal and behaves as the framework for the MCLNN. The MCLNN allows a filterbank-like behaviour to be embedded within the network using a specially designed binary mask. The masking subdivides the frequency range of a signal into bands and allows concurrent consideration of different feature combinations, analogous to the manual handcrafting of the optimum set of features for a recognition task. The proposed models have been evaluated through an extensive set of experiments using a range of publicly available datasets of music genres and environmental sounds, where they surpass state-of-the-art Convolutional Neural Networks and several hand-crafted attempts.
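
As a loose sketch of the masking idea described in this abstract, the code below builds a binary mask that restricts each hidden unit to one contiguous frequency band, with bandwidth and overlap as the controlling parameters; the thesis' exact mask design and the conditional architecture may differ.

```python
# Filterbank-like binary mask: hidden unit j sees one band of input features.
# Bandwidth/overlap values are illustrative assumptions.
import numpy as np

def band_mask(n_features, n_hidden, bandwidth, overlap):
    mask = np.zeros((n_features, n_hidden))
    step = bandwidth - overlap                      # band shift per hidden unit
    for j in range(n_hidden):
        start = (j * step) % n_features
        rows = (start + np.arange(bandwidth)) % n_features  # wrap at the edge
        mask[rows, j] = 1.0
    return mask

M = band_mask(n_features=40, n_hidden=20, bandwidth=8, overlap=4)
# Element-wise multiply with a dense weight matrix to embed the filterbank
# behaviour: W_masked = W * M
print(M.sum(axis=0))  # every hidden unit covers exactly `bandwidth` features
```
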
5. Rodeia, José Pedro dos Santos. "Analysis and recognition of similar environmental sounds". Master's thesis, FCT - UNL, 2009. http://hdl.handle.net/10362/2305.

Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, for the Master's degree in Informatics Engineering (Engenharia Informática).
Humans can identify sound sources just by hearing a sound. Adapting the same problem to computers is called (automatic) sound recognition. Several sound recognizers have been developed over the years. The accuracy of these recognizers is influenced by the features they use and the classification method implemented. While there are many approaches to sound feature extraction and sound classification, most have been used to classify sounds with very different characteristics. Here, we implemented a similar-sound recognizer, which handles sounds with very similar properties and therefore faces a harder recognition problem. We use both temporal and spectral properties of the sound, extracted using the Intrinsic Structures Analysis (ISA) method, which relies on Independent Component Analysis and Principal Component Analysis, and we implement a classification method based on the k-Nearest Neighbor algorithm. We show that features extracted in this way are powerful for sound recognition: we tested our recognizer with several of the feature sets the ISA method retrieves and achieved strong results. Finally, we conducted a user study comparing human performance at distinguishing similar sounds against our recognizer. The study allowed us to conclude that the sounds are in fact very similar and difficult to distinguish, and that our recognizer is far better than humans at identifying them.
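
A minimal stand-in for the pairing described above, PCA- and ICA-derived features fed to a k-Nearest-Neighbor classifier, using scikit-learn on synthetic data; the thesis' ISA feature extraction is not reproduced.

```python
# PCA -> ICA feature extraction followed by k-NN classification.
# Data, dimensions, and k are synthetic illustrative choices.
import numpy as np
from sklearn.decomposition import PCA, FastICA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 128))     # stand-in feature frames, one row per sound
y = rng.integers(0, 3, size=60)    # three similar sound classes

model = make_pipeline(PCA(n_components=20),
                      FastICA(n_components=10, random_state=0, max_iter=500),
                      KNeighborsClassifier(n_neighbors=3))
model.fit(X, y)
print(model.predict(X[:5]))
```
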
6. Martin, Keith Dana. "Sound-source recognition: a theory and computational model". Ph.D. thesis, Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1999. http://hdl.handle.net/1721.1/9468.

Abstract:
The ability of a normal human listener to recognize objects in the environment from only the sounds they produce is extraordinarily robust with regard to characteristics of the acoustic environment and of other competing sound sources. In contrast, computer systems designed to recognize sound sources function precariously, breaking down whenever the target sound is degraded by reverberation, noise, or competing sounds. Robust listening requires extensive contextual knowledge, but the potential contribution of sound-source recognition to the process of auditory scene analysis has largely been neglected by researchers building computational models of the scene analysis process. This thesis proposes a theory of sound-source recognition, casting recognition as a process of gathering information to enable the listener to make inferences about objects in the environment or to predict their behavior. In order to explore the process, attention is restricted to isolated sounds produced by a small class of sound sources, the non-percussive orchestral musical instruments. Previous research on the perception and production of orchestral instrument sounds is reviewed from a vantage point based on the excitation and resonance structure of the sound-production process, revealing a set of perceptually salient acoustic features. A computer model of the recognition process is developed that is capable of "listening" to a recording of a musical instrument and classifying the instrument as one of 25 possibilities. The model is based on current models of signal processing in the human auditory system. It explicitly extracts salient acoustic features and uses a novel improvisational taxonomic architecture (based on simple statistical pattern-recognition techniques) to classify the sound source. The performance of the model is compared directly to that of skilled human listeners, using both isolated musical tones and excerpts from compact disc recordings as test stimuli. The computer model's performance is robust with regard to the variations of reverberation and ambient noise (although not with regard to competing sound sources) in commercial compact disc recordings, and the system performs better than three out of fourteen skilled human listeners on a forced-choice classification task. This work has implications for research in musical timbre, automatic media annotation, human talker identification, and computational auditory scene analysis.
7. Hunter, Jane Louise. "Integrated sound synchronisation for computer animation". Thesis, University of Cambridge, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.239569.
8. Soltani-Farani, A. A. "Sound visualisation as an aid for the deaf: a new approach". Thesis, University of Surrey, 1998. http://epubs.surrey.ac.uk/844112/.

Abstract:
Visual translation of speech as an aid for the deaf has long been a subject of electronic research and development. This thesis is concerned with a technique of sound visualisation based upon the theory of the primacy of dynamic, rather than static, information in the perception of speech sounds. The goal is the design and evaluation of a system to display the perceptually important features of an input sound in a dynamic format as similar as possible to the auditory representation of that sound. The human auditory system, as the most effective system of sound representation, is first studied. Then, based on the latest theories of hearing and techniques of auditory modelling, a simplified model of the human ear is developed. In this model, the outer and middle ears together are simulated by a high-pass filter, and the inner ear is modelled by a bank of band-pass filters whose outputs, after rectification and compression, are applied to a visualiser block. To design an appropriate visualiser block, theories of sound and speech perception are reviewed. Then the perceptually important properties of sound, and their relations to the physical attributes of the sound pressure wave, are considered to map the outputs from the auditory model onto an informative and recognisable running image, like the one known as a cochleagram. This conveyor-like image is then sampled by a window of 20 milliseconds duration at a rate of 50 samples per second, so that a sequence of phase-locked, rectangular images is produced. Animation of these images results in a novel method of spectrography displaying both the time-varying and the time-independent information of the underlying sound with a high resolution in real time. The resulting system translates a spoken word into a visual gesture, and displays a still picture when the input is a steady-state sound. Finally, the implementation of this visualiser system is evaluated through several experiments undertaken by normal-hearing subjects. In these experiments, recognition of the gestures of a number of spoken words is examined through a set of two-word and multi-word forced-choice tests. The results of these preliminary experiments show a high recognition score (40-90 percent, where zero represents chance expectation) after only 10 learning trials. General conclusions from the results suggest: potentially quick learning of the gestures, language independence of the system, fidelity of the system in translating the auditory information, and persistence of the learned gestures in long-term memory. The results are very promising and motivate further investigation.
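
The simplified ear model described above can be sketched directly: a high-pass stage for the outer and middle ear, a bank of band-pass filters for the inner ear, then rectification and compression. Filter orders, cutoffs, and band edges below are assumptions, not the thesis' values.

```python
# High-pass stage + band-pass filterbank + rectification/compression,
# yielding a cochleagram-like (bands x samples) image. SciPy assumed.
import numpy as np
from scipy.signal import butter, lfilter

def ear_model(y, sr, n_bands=16, fmin=100.0, fmax=6000.0):
    b, a = butter(2, 80.0 / (sr / 2), btype="high")  # outer/middle ear
    y = lfilter(b, a, y)
    edges = np.geomspace(fmin, fmax, n_bands + 1)    # log-spaced band edges
    channels = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        b, a = butter(2, [lo / (sr / 2), hi / (sr / 2)], btype="band")
        channels.append(np.abs(lfilter(b, a, y)) ** 0.3)  # rectify + compress
    return np.array(channels)

sr = 16000
cochlea = ear_model(np.random.randn(sr), sr)
print(cochlea.shape)  # (16, 16000)
```
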
9. Gillespie, Bradford W. "Strategies for improving audible quality and speech recognition accuracy of reverberant speech". Thesis, University of Washington, 2002. http://hdl.handle.net/1773/5930.
10. Corbet, Remy. "A Sound for Recognition: Blues Music and the African American Community". Master's thesis, Southern Illinois University Carbondale, 2011. https://opensiuc.lib.siu.edu/theses/730.

Abstract:
Blues music is a reflection of all the changes that shaped the African American experience. It is an affirmation of the African American identity, looking forward to the future with one eye glancing at the past. It is a reminder of the tragedies and inequalities that accompanied African Americans from slavery to official freedom, then from freedom to equality. It is the witness of the development of African Americans, and of their acculturation to the individual voice, symbol of the American ethos, which made the link between their African past and their American future.

Books on the topic "Sound recognition"

1. Artificial perception and music recognition. Berlin: Springer-Verlag, 1993.
2. Minker, Wolfgang. Incorporating Knowledge Sources into Statistical Speech Recognition. Boston, MA: Springer Science+Business Media, LLC, 2009.
3. Carlin, Kevin John. A domestic sound recognition and identification alert system for the profoundly deaf. [S.l.]: The Author, 1996.
4. Kil, David H. Pattern recognition and prediction with applications to signal characterization. Woodbury, N.Y.: AIP Press, 1996.
5. Wolfman, Karen Anne Siegelman. The influence of spelling-sound consistency on the use of reading strategies. Ottawa: National Library of Canada, 1993.
6. Junqua, Jean-Claude. Robustness in automatic speech recognition: Fundamentals and applications. Boston: Kluwer Academic Publishers, 1996.
7. Word sorts and more: Sound, pattern, and meaning explorations K-3. New York: Guilford Press, 2006.
8. Benesty, Jacob, M. Mohan Sondhi, and Yiteng Huang, eds. Springer handbook of speech processing. Berlin: Springer, 2008.
9. Blauert, Jens, ed. Communication acoustics. Berlin: Springer-Verlag, 2005.
10. Kryssanov, Victor V., Hitoshi Ogawa, and Stephen Brewster, eds. Haptic and Audio Interaction Design: 6th International Workshop, HAID 2011, Kusatsu, Japan, August 25-26, 2011. Proceedings. Berlin, Heidelberg: Springer-Verlag GmbH Berlin Heidelberg, 2011.

Book chapters on the topic "Sound recognition"

1. Cai, Yang, and Károly D. Pados. "Sound Recognition". In Computing with Instinct, 16–34. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-19757-4_2.
2. Nam, Juhan, Gautham J. Mysore, and Paris Smaragdis. "Sound Recognition in Mixtures". In Latent Variable Analysis and Signal Separation, 405–13. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-28551-6_50.
3. Popova, Anastasiya S., Alexandr G. Rassadin, and Alexander A. Ponomarenko. "Emotion Recognition in Sound". In Advances in Neural Computation, Machine Learning, and Cognitive Research, 117–24. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66604-4_18.
4. Agus, Trevor R., Clara Suied, and Daniel Pressnitzer. "Timbre Recognition and Sound Source Identification". In Timbre: Acoustics, Perception, and Cognition, 59–85. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-14832-4_3.
5. Hayes, Kimberley, and Amit Rajput. "NDE 4.0: Image and Sound Recognition". In Handbook of Nondestructive Evaluation 4.0, 403–22. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-73206-6_26.
6. Ifukube, Tohru. "Speech Recognition Systems for the Hearing Impaired and the Elderly". In Sound-Based Assistive Technology, 145–67. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-47997-2_5.
7. Theodorou, Theodoros, Iosif Mporas, and Nikos Fakotakis. "Automatic Sound Recognition of Urban Environment Events". In Speech and Computer, 129–36. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-23132-7_16.
8. Bolat, Bülent, and Ünal Küçük. "Musical Sound Recognition by Active Learning PNN". In Multimedia Content Representation, Classification and Security, 474–81. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11848035_63.
9. Zhang, Zhichao, Shugong Xu, Tianhao Qiao, Shunqing Zhang, and Shan Cao. "Attention Based Convolutional Recurrent Neural Network for Environmental Sound Classification". In Pattern Recognition and Computer Vision, 261–71. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-31654-9_23.
10. Santos, Vasco C. F., Miguel F. M. Sousa, and Aníbal J. S. Ferreira. "Quality Assessment of Manufactured Roof-Tiles Using Digital Sound Processing". In Pattern Recognition and Image Analysis, 927–34. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-44871-6_107.

Conference papers on the topic "Sound recognition"

1. Mohanapriya, S. P., and R. Karthika. "Unsupervised environmental sound recognition". In 2014 International Conference on Embedded Systems (ICES). IEEE, 2014. http://dx.doi.org/10.1109/embeddedsys.2014.6953048.
2. Arslan, Yuksel, and Huseyin Canbolat. "A sound database development for environmental sound recognition". In 2017 25th Signal Processing and Communications Applications Conference (SIU). IEEE, 2017. http://dx.doi.org/10.1109/siu.2017.7960241.
3. Bear, Helen L., Inês Nolasco, and Emmanouil Benetos. "Towards Joint Sound Scene and Polyphonic Sound Event Recognition". In Interspeech 2019. ISCA, 2019. http://dx.doi.org/10.21437/interspeech.2019-2169.
4. Negishi, Yuya, and Nobuo Kawaguchi. "Instant Learning Sound Sensor: Flexible Environmental Sound Recognition System". In 2007 Fourth International Conference on Networked Sensing Systems. IEEE, 2007. http://dx.doi.org/10.1109/inss.2007.4297447.
5. Harb, H., and L. Chen. "Sound recognition: a connectionist approach". In Seventh International Symposium on Signal Processing and Its Applications, 2003. Proceedings. IEEE, 2003. http://dx.doi.org/10.1109/isspa.2003.1224953.
6. Damnong, Punyanut, Phimphaka Taninpong, and Jakramate Bootkrajang. "Steam Trap Opening Sound Recognition". In 2021 18th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON). IEEE, 2021. http://dx.doi.org/10.1109/ecti-con51831.2021.9454929.
7. Chachada, Sachin, and C. C. Jay Kuo. "Environmental sound recognition: A survey". In 2013 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA). IEEE, 2013. http://dx.doi.org/10.1109/apsipa.2013.6694338.
8. Vryzas, Nikolaos, Maria Matsiola, Rigas Kotsakis, Charalampos Dimoulas, and George Kalliris. "Subjective Evaluation of a Speech Emotion Recognition Interaction Framework". In AM'18: Sound in Immersion and Emotion. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3243274.3243294.
9. Schaefer, Edward M. "Representing pictures with sound". In 2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR). IEEE, 2014. http://dx.doi.org/10.1109/aipr.2014.7041934.
10. Fan, Changyuan, and Zhenfeng Li. "Research of artillery's sound recognition technology". In Instruments (ICEMI). IEEE, 2009. http://dx.doi.org/10.1109/icemi.2009.5274519.

Reports on the topic "Sound recognition"

1. Ballas, James A. Recognition of Environmental Sounds. Fort Belvoir, VA: Defense Technical Information Center, November 1989. http://dx.doi.org/10.21236/ada214942.
2. From Risk and Conflict to Peace and Prosperity: The urgency of securing community land rights in a turbulent world. Rights and Resources Initiative, February 2017. http://dx.doi.org/10.53892/sdos4115.

Abstract:
Amid the realities of major political turbulence, there was growing recognition in 2016 that the land rights of Indigenous Peoples and local communities are key to ensuring peace and prosperity, economic development, sound investment, and climate change mitigation and adaptation. Despite equivocation by governments, a critical mass of influential investors and companies now recognize the market rationale for respecting community land rights. There is also increased recognition that ignoring these rights carries significant financial and reputational risks; causes conflict with local peoples; and almost always fails to deliver on development promises.
