Academic literature on the topic 'Sound recognition'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference papers, and other scholarly sources on the topic 'Sound recognition.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Sound recognition"

1. Ishihara, Kazushi, Kazunori Komatani, Tetsuya Ogata, and Hiroshi G. Okuno. "Sound-Imitation Word Recognition for Environmental Sounds." Transactions of the Japanese Society for Artificial Intelligence 20 (2005): 229–36. http://dx.doi.org/10.1527/tjsai.20.229.

2. Okubo, Shota, Zhihao Gong, Kento Fujita, and Ken Sasaki. "Recognition of Transient Environmental Sounds Based on Temporal and Frequency Features." International Journal of Automation Technology 13, no. 6 (November 5, 2019): 803–9. http://dx.doi.org/10.20965/ijat.2019.p0803.

Abstract:
Environmental sound recognition (ESR) refers to the recognition of all sounds other than the human voice or musical sounds. Typical ESR methods utilize spectral information and its variation over time. In the case of transient sounds, however, spectral information is insufficient, because only an average quantity of a given signal within a time period can be captured. In this study, the waveforms of sound signals and their spectra were analyzed visually to extract temporal characteristics of the sound more directly. Based on these observations, features such as the initial rise time, duration, and smoothness of the sound signal; the distribution and smoothness of the spectrum; the clarity of the sustained sound components; and the number and interval of collisions in chattering were proposed. Experimental feature values were obtained for eight transient environmental sounds, and the distributions of the values were evaluated. A recognition experiment was then conducted on 11 transient sounds, with the Mel-frequency cepstral coefficient (MFCC) as the reference feature and a support vector machine as the classification algorithm. The recognition rates obtained with the MFCC were below 50% for five of the 11 sounds, and the overall recognition rate was 69%. In contrast, the recognition rates obtained using the proposed features were above 50% for all sounds, and the overall rate was 86%.
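
The baseline described here pairs MFCC features with a support vector machine, which is easy to reproduce in outline. Below is a minimal sketch of that baseline (not the paper's exact pipeline), assuming librosa and scikit-learn and substituting two toy synthetic "transient" classes for the paper's recordings:

```python
# Hedged sketch of an MFCC + SVM baseline (assumed details: librosa for
# MFCCs, an RBF-kernel SVM, and synthetic toy "transient" classes).
import numpy as np
import librosa
from sklearn.svm import SVC

def mfcc_features(y, sr, n_mfcc=13):
    """Summarize a signal by the temporal mean of its MFCCs."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

sr = 16000
rng = np.random.default_rng(0)
# Class 0: short clicks; class 1: exponentially decaying noise bursts.
clicks = [np.pad(rng.normal(size=80), (0, sr - 80)) for _ in range(10)]
bursts = [rng.normal(size=sr) * np.exp(-np.linspace(0, 8, sr)) for _ in range(10)]

X = np.array([mfcc_features(s.astype(np.float32), sr) for s in clicks + bursts])
y = np.array([0] * 10 + [1] * 10)
clf = SVC(kernel="rbf").fit(X, y)
print(clf.score(X, y))  # training accuracy on the toy data
```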

3. Hanna, S. A., and Ann Stuart Laubstein. "Speaker-independent sound recognition." Journal of the Acoustical Society of America 92, no. 4 (October 1992): 2475–76. http://dx.doi.org/10.1121/1.404442.

4. Ibrahim Alsaif, Omar, Kifaa Hadi Thanoon, and Asmaa Hadi Al_bayati. "Auto electronic recognition of the Arabic letters sound." Indonesian Journal of Electrical Engineering and Computer Science 28, no. 2 (November 1, 2022): 769–76. http://dx.doi.org/10.11591/ijeecs.v28.i2.pp769-776.

Abstract:
In this research, Arabic speech sounds were studied and investigated in order to find the distinctive features of each articulated sound. Arabic sounds that share approximately the same significant distinctive features were chosen in order to study how well they can be distinguished by extracting characteristic features for them. The speech signals of the sounds were recorded through a microphone and represented as a binary matrix, preparing them for a processing stage in which two features of the co-occurrence matrix (contrast and energy) were computed. The values of these features were studied and compared from one person to another to discover which speech sounds share common distinguishing features and approximate articulations. The analysis of the results showed that these features can be relied upon to distinguish the speaker's voice, and that they provide a high ability to distinguish among the Arabic letters, with no connection found between the co-occurrence matrix elements and the signal features of any particular Arabic letter.
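
The two co-occurrence features named here, contrast and energy, are standard texture descriptors. The sketch below shows how they might be computed with scikit-image's graycomatrix and graycoprops; applying them to an 8-level quantized log-spectrogram is an assumption made for illustration, not the paper's exact signal matrix:

```python
# Hedged sketch: contrast and energy from a gray-level co-occurrence matrix,
# computed here on an 8-level quantized log-spectrogram (an assumption made
# for illustration; the paper applies the features to its own signal matrix).
import numpy as np
from scipy.signal import spectrogram
from skimage.feature import graycomatrix, graycoprops

def cooccurrence_features(y, fs, levels=8):
    _, _, S = spectrogram(y, fs=fs)
    logS = np.log(S + 1e-10)
    # Quantize the log-spectrogram into `levels` gray levels.
    edges = np.linspace(logS.min(), logS.max(), levels)
    q = np.clip(np.digitize(logS, edges) - 1, 0, levels - 1).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0], levels=levels, normed=True)
    return (graycoprops(glcm, "contrast")[0, 0],
            graycoprops(glcm, "energy")[0, 0])

fs = 8000
t = np.linspace(0, 1, fs, endpoint=False)
print(cooccurrence_features(np.sin(2 * np.pi * 440 * t), fs))
```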

5. Guo, Xuan, Yoshiyuki Toyoda, Huankang Li, Jie Huang, Shuxue Ding, and Yong Liu. "Environmental Sound Recognition Using Time-Frequency Intersection Patterns." Applied Computational Intelligence and Soft Computing 2012 (2012): 1–6. http://dx.doi.org/10.1155/2012/650818.

Abstract:
Environmental sound recognition is an important function of robots and intelligent computer systems. In this research, we use a multistage perceptron neural network system for environmental sound recognition. The input data are a combination of the time-variance pattern of the instantaneous power and the frequency-variance pattern of the instantaneous spectrum at the power peak, referred to as a time-frequency intersection pattern. The spectra of many environmental sounds change more slowly than those of speech or voice, so the intersectional time-frequency pattern preserves the major features of environmental sounds with drastically reduced data requirements. Two experiments were conducted using an original database and an open database created by the RWCP project. The recognition rate for 20 kinds of environmental sounds was 92%, about 12% higher than that of methods using only an instantaneous spectrum. The results are also comparable with those of HMM-based methods, although those methods must treat the time variance of an input vector series with more complicated computations.
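
One plausible reading of the feature construction is: summarize the power envelope over time, take the spectrum of the frame at the power peak, and concatenate the two. The sketch below follows that reading with assumed frame sizes and a small scikit-learn MLP standing in for the paper's multistage perceptron:

```python
# Hedged sketch of a time-frequency intersection feature: the frame-power
# envelope concatenated with the spectrum of the frame at the power peak.
# Frame/hop sizes and the MLP layout are assumptions, not the paper's values.
import numpy as np
from sklearn.neural_network import MLPClassifier

def intersection_pattern(y, frame=256, hop=128, n_env=32):
    frames = np.lib.stride_tricks.sliding_window_view(y, frame)[::hop]
    power = (frames ** 2).mean(axis=1)              # instantaneous power per frame
    env = np.interp(np.linspace(0, len(power) - 1, n_env),
                    np.arange(len(power)), power)   # resampled time pattern
    peak = frames[np.argmax(power)]                 # frame at the power peak
    spec = np.abs(np.fft.rfft(peak * np.hanning(frame)))
    return np.concatenate([env / (env.max() + 1e-12),
                           spec / (spec.max() + 1e-12)])

rng = np.random.default_rng(1)
decays = [rng.normal(size=4096) * np.exp(-np.linspace(0, 10, 4096)) for _ in range(8)]
tones = [np.sin(2 * np.pi * 880 * np.linspace(0, 1, 4096)) for _ in range(8)]
X = np.array([intersection_pattern(s) for s in decays + tones])
labels = [0] * 8 + [1] * 8
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
print(clf.fit(X, labels).score(X, labels))
```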

6. Cheng, Xiefeng, Pengfei Wang, and Chenjun She. "Biometric Identification Method for Heart Sound Based on Multimodal Multiscale Dispersion Entropy." Entropy 22, no. 2 (February 20, 2020): 238. http://dx.doi.org/10.3390/e22020238.

Abstract:
In this paper, a new method of biometric characterization of heart sounds based on multimodal multiscale dispersion entropy is proposed. First, the heart sound is periodically segmented, and each single-cycle heart sound is decomposed into a group of intrinsic mode functions (IMFs) by improved complete ensemble empirical mode decomposition with adaptive noise (ICEEMDAN). These IMFs are then segmented into a series of frames, which are used to calculate the refined composite multiscale dispersion entropy (RCMDE) as the characteristic representation of the heart sound. In simulation experiment I, carried out on the open heart sound databases Michigan, Washington, and Littman, this feature representation was combined with a heart sound segmentation method based on logistic regression (LR) and hidden semi-Markov models (HSMM), and feature selection was performed through the Fisher ratio (FR). Finally, the Euclidean distance (ED) and the closeness principle were used for matching and identification, yielding a recognition accuracy of 96.08%. In experiment II, to improve the practical application value of the method, it was applied to a database of 80 heart sounds constructed from 40 volunteers to examine the effect of single-cycle heart sounds with different starting positions on performance. The experimental results show that the single-cycle heart sound starting at the onset of the first heart sound (S1) has the highest recognition rate, 97.5%. In summary, the proposed method is effective for heart sound biometric recognition.
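
RCMDE builds on plain dispersion entropy, which is compact enough to sketch. In the sketch below, the class count c, embedding dimension m, and delay d are assumed defaults, and the refined composite multiscale variant would additionally average entropies over shifted coarse-grainings at multiple scales:

```python
# Hedged sketch of plain dispersion entropy, the building block behind RCMDE.
# c (classes), m (embedding), and d (delay) are assumed defaults; the refined
# composite multiscale variant also averages over shifted coarse-grainings.
import numpy as np
from collections import Counter
from scipy.stats import norm

def dispersion_entropy(x, c=6, m=3, d=1):
    x = np.asarray(x, dtype=float)
    # Map samples to c classes through the normal CDF of the standardized signal.
    z = norm.cdf((x - x.mean()) / (x.std() + 1e-12))
    classes = np.clip(np.round(c * z + 0.5).astype(int), 1, c)
    # Count embedded dispersion patterns of length m.
    n = len(classes) - (m - 1) * d
    patterns = Counter(tuple(classes[i:i + m * d:d]) for i in range(n))
    p = np.array(list(patterns.values()), dtype=float) / n
    return float(-np.sum(p * np.log(p)))

print(dispersion_entropy(np.random.default_rng(0).normal(size=2000)))
```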

7. Norman-Haignere, Sam V., and Josh H. McDermott. "Sound recognition depends on real-world sound level." Journal of the Acoustical Society of America 139, no. 4 (April 2016): 2156. http://dx.doi.org/10.1121/1.4950385.

8. Zhai, Xiu, Fatemeh Khatami, Mina Sadeghi, Fengrong He, Heather L. Read, Ian H. Stevenson, and Monty A. Escabí. "Distinct neural ensemble response statistics are associated with recognition and discrimination of natural sound textures." Proceedings of the National Academy of Sciences 117, no. 49 (November 20, 2020): 31482–93. http://dx.doi.org/10.1073/pnas.2005644117.

Abstract:
The perception of sound textures, a class of natural sounds defined by statistical sound structure such as fire, wind, and rain, has been proposed to arise through the integration of time-averaged summary statistics. Where and how the auditory system might encode these summary statistics to create internal representations of these stationary sounds, however, is unknown. Here, using natural textures and synthetic variants with reduced statistics, we show that summary statistics modulate the correlations between frequency organized neuron ensembles in the awake rabbit inferior colliculus (IC). These neural ensemble correlation statistics capture high-order sound structure and allow for accurate neural decoding in a single trial recognition task with evidence accumulation times approaching 1 s. In contrast, the average activity across the neural ensemble (neural spectrum) provides a fast (tens of milliseconds) and salient signal that contributes primarily to texture discrimination. Intriguingly, perceptual studies in human listeners reveal analogous trends: the sound spectrum is integrated quickly and serves as a salient discrimination cue while high-order sound statistics are integrated slowly and contribute substantially more toward recognition. The findings suggest statistical sound cues such as the sound spectrum and correlation structure are represented by distinct response statistics in auditory midbrain ensembles, and that these neural response statistics may have dissociable roles and time scales for the recognition and discrimination of natural sounds.
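
One of the summary statistics at issue, the correlation structure across frequency channels, can be illustrated on an ordinary spectrogram as a crude stand-in for the paper's neural ensemble responses. A hedged sketch:

```python
# Hedged illustration of one summary statistic discussed here: pairwise
# correlations between frequency channels, computed on an ordinary
# spectrogram as a stand-in for the paper's neural ensemble responses.
import numpy as np
from scipy.signal import spectrogram

def channel_correlations(y, fs, n_bands=32):
    _, _, S = spectrogram(y, fs=fs, nperseg=256, noverlap=128)
    # Pool linear frequency bins into coarser bands, then correlate envelopes.
    bands = np.array_split(np.log(S + 1e-10), n_bands, axis=0)
    env = np.stack([b.mean(axis=0) for b in bands])
    return np.corrcoef(env)  # (n_bands, n_bands) correlation matrix

fs = 16000
texture = np.random.default_rng(0).normal(size=2 * fs)  # 2 s noise "texture"
print(channel_correlations(texture, fs).shape)
```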

9. Song, Hang, Bin Zhao, Jun Hu, Haonan Sun, and Zheng Zhou. "Research on Improved DenseNets Pig Cough Sound Recognition Model Based on SENets." Electronics 11, no. 21 (October 31, 2022): 3562. http://dx.doi.org/10.3390/electronics11213562.

Abstract:
In order to monitor the health status of pigs in real time during breeding, and to provide early warning of swine respiratory diseases, an SE-DenseNet-121 recognition model was established to recognize pig cough sounds. The 13-dimensional MFCC, ΔMFCC, and Δ²MFCC were spliced transversely to obtain six groups of parameters reflecting the static, dynamic, and mixed characteristics of pig sound signals, and the DenseNet-121 recognition model was used to compare the performance of the six sets of parameters and determine the optimal set. The DenseNet-121 model was then improved with the SENets attention module to enhance its ability to extract effective features from pig sound signals. The results showed that the optimal parameter set was the 26-dimensional MFCC + ΔMFCC, and that the recognition accuracy, recall, precision, and F1 score of the SE-DenseNet-121 model for pig cough sounds were 93.8%, 98.6%, 97%, and 97.8%, respectively. These results can be used to develop a pig cough sound recognition system for early warning of respiratory diseases in pigs.
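
The winning feature set, 13 MFCCs transversely spliced with their first-order deltas for 26 dimensions per frame, is straightforward to compute. A minimal sketch with librosa, using an assumed synthetic tone in place of pig sound recordings:

```python
# Hedged sketch of the optimal feature set named above: 13 MFCCs spliced
# with their deltas (26 dimensions per frame). Uses librosa; the synthetic
# tone stands in for actual pig sound recordings.
import numpy as np
import librosa

def mfcc_delta_features(y, sr):
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    delta = librosa.feature.delta(mfcc)   # first-order ΔMFCC
    return np.vstack([mfcc, delta])       # shape (26, n_frames)

sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
tone = np.sin(2 * np.pi * 300 * t).astype(np.float32)
print(mfcc_delta_features(tone, sr).shape)
```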

10. Binh, Nguyen Dang. "Gestures Recognition from Sound Waves." EAI Endorsed Transactions on Context-aware Systems and Applications 3, no. 10 (September 12, 2016): 151679. http://dx.doi.org/10.4108/eai.12-9-2016.151679.


Dissertations / Theses on the topic "Sound recognition"

1. Kawaguchi, Nobuo, and Yuya Negishi. "Instant Learning Sound Sensor: Flexible Environmental Sound Recognition System." IEEE, 2007. http://hdl.handle.net/2237/15456.

2. Chapman, David P. "Playing with sounds: a spatial solution for computer sound synthesis." Thesis, University of Bath, 1996. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.307047.

3. Stäger, Mathias. "Low-power sound-based user activity recognition." Zürich: ETH, 2006. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=16719.

4. Medhat, Fady. "Masked conditional neural networks for sound recognition." Thesis, University of York, 2018. http://etheses.whiterose.ac.uk/21594/.

Abstract:
Sound recognition has been studied for decades to grant machines the human hearing ability. Advances in this field help in a range of applications, from industrial ones such as fault detection in machines and noise monitoring to household applications such as surveillance and hearing aids. The problem of sound recognition, like any pattern recognition task, involves the reliability of the extracted features and of the recognition model. The problem has been approached through decades of crafted features used collaboratively with models based on neural networks or statistical models such as Gaussian mixtures and hidden Markov models. Neural networks are currently being considered as a method to automate the feature extraction stage together with their already established role in recognition, and the performance of such models is approaching that of handcrafted features. However, current neural network based models are not primarily designed for the nature of the sound signal and may not optimally harness its distinctive properties. This thesis proposes neural network models that exploit the nature of the time-frequency representation of the sound signal. We propose the ConditionaL Neural Network (CLNN) and the Masked ConditionaL Neural Network (MCLNN). The CLNN is designed to account for the temporal dimension of a signal and serves as the framework for the MCLNN. The MCLNN allows a filterbank-like behaviour to be embedded within the network using a specially designed binary mask. The masking subdivides the frequency range of a signal into bands and allows concurrent consideration of different feature combinations, analogous to the manual handcrafting of the optimum set of features for a recognition task. The proposed models have been evaluated through an extensive set of experiments on a range of publicly available datasets of music genres and environmental sounds, where they surpass state-of-the-art Convolutional Neural Networks and several hand-crafted attempts.
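
The binary mask idea can be pictured as restricting each hidden unit to a contiguous band of input frequency bins. The sketch below constructs such a mask; the bandwidth and stride parameterization is an assumption for illustration, not the thesis's exact scheme:

```python
# Hedged sketch of a binary band mask in the spirit of the MCLNN: each
# hidden unit sees only a contiguous slice of the input frequency bins.
# The bandwidth/stride parameterization is assumed, not the thesis's scheme.
import numpy as np

def band_mask(n_in, n_hidden, bandwidth):
    mask = np.zeros((n_in, n_hidden))
    # Slide the allowed band across the input as we move across hidden units.
    starts = np.linspace(0, n_in - bandwidth, n_hidden).astype(int)
    for j, s in enumerate(starts):
        mask[s:s + bandwidth, j] = 1.0
    return mask

M = band_mask(n_in=40, n_hidden=16, bandwidth=10)
print(M.sum(axis=0))  # every hidden unit is connected to exactly 10 bins
# Applied as: hidden = activation(x @ (W * M) + b), zeroing masked weights.
```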

5. Rodeia, José Pedro dos Santos. "Analysis and recognition of similar environmental sounds." Master's thesis, FCT - UNL, 2009. http://hdl.handle.net/10362/2305.

Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa to obtain the degree of Master in Informatics Engineering (Engenharia Informática).
Humans have the ability to identify sound sources just by hearing a sound. Adapting the same problem to computers is called (automatic) sound recognition. Several sound recognizers have been developed over the years. The accuracy of these recognizers depends on the features they use and the classification method they implement. While there are many approaches to sound feature extraction and classification, most have been used to classify sounds with very different characteristics. Here, we implemented a recognizer for similar sounds, that is, sounds with very similar properties, which makes the recognition process harder. We therefore use both temporal and spectral properties of the sound. These properties are extracted using the Intrinsic Structures Analysis (ISA) method, which relies on Independent Component Analysis and Principal Component Analysis. The classification method is based on the k-Nearest Neighbor algorithm. We show that features extracted in this way are powerful for sound recognition: we tested the recognizer with several sets of features retrieved by the ISA method and achieved very good results. Finally, we conducted a user study comparing human performance in distinguishing similar sounds against our recognizer. The study allowed us to conclude that the sounds are indeed very similar and difficult to distinguish, and that our recognizer identifies them considerably better than humans do.
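
The classification stage is plain k-Nearest Neighbor, which takes only a few lines with scikit-learn. A sketch on placeholder Gaussian feature vectors standing in for the ISA features:

```python
# Hedged sketch of the k-Nearest Neighbor classification stage, with
# placeholder Gaussian feature vectors standing in for the ISA features.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (20, 8)),   # class 0 feature vectors
               rng.normal(2.0, 1.0, (20, 8))])  # class 1 feature vectors
y = np.array([0] * 20 + [1] * 20)

knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
print(knn.predict(X[:3]))
```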

6. Martin, Keith Dana. "Sound-source recognition: a theory and computational model." Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/9468.

Abstract:
The ability of a normal human listener to recognize objects in the environment from only the sounds they produce is extraordinarily robust with regard to characteristics of the acoustic environment and of other competing sound sources. In contrast, computer systems designed to recognize sound sources function precariously, breaking down whenever the target sound is degraded by reverberation, noise, or competing sounds. Robust listening requires extensive contextual knowledge, but the potential contribution of sound-source recognition to the process of auditory scene analysis has largely been neglected by researchers building computational models of the scene analysis process. This thesis proposes a theory of sound-source recognition, casting recognition as a process of gathering information to enable the listener to make inferences about objects in the environment or to predict their behavior. In order to explore the process, attention is restricted to isolated sounds produced by a small class of sound sources, the non-percussive orchestral musical instruments. Previous research on the perception and production of orchestral instrument sounds is reviewed from a vantage point based on the excitation and resonance structure of the sound-production process, revealing a set of perceptually salient acoustic features. A computer model of the recognition process is developed that is capable of "listening" to a recording of a musical instrument and classifying the instrument as one of 25 possibilities. The model is based on current models of signal processing in the human auditory system. It explicitly extracts salient acoustic features and uses a novel improvisational taxonomic architecture (based on simple statistical pattern-recognition techniques) to classify the sound source. The performance of the model is compared directly to that of skilled human listeners, using both isolated musical tones and excerpts from compact disc recordings as test stimuli. The computer model's performance is robust with regard to the variations of reverberation and ambient noise (although not with regard to competing sound sources) in commercial compact disc recordings, and the system performs better than three out of fourteen skilled human listeners on a forced-choice classification task. This work has implications for research in musical timbre, automatic media annotation, human talker identification, and computational auditory scene analysis.

7. Hunter, Jane Louise. "Integrated sound synchronisation for computer animation." Thesis, University of Cambridge, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.239569.

8. Soltani-Farani, A. A. "Sound visualisation as an aid for the deaf: a new approach." Thesis, University of Surrey, 1998. http://epubs.surrey.ac.uk/844112/.

Abstract:
Visual translation of speech as an aid for the deaf has long been a subject of electronic research and development. This thesis is concerned with a technique of sound visualisation based upon the theory of the primacy of dynamic, rather than static, information in the perception of speech sounds. The goal is the design and evaluation of a system to display the perceptually important features of an input sound in a dynamic format as similar as possible to the auditory representation of that sound. The human auditory system, as the most effective system of sound representation, is first studied. Then, based on the latest theories of hearing and techniques of auditory modelling, a simplified model of the human ear is developed. In this model, the outer and middle ears together are simulated by a high-pass filter, and the inner ear is modelled by a bank of band-pass filters whose outputs, after rectification and compression, are applied to a visualiser block. To design an appropriate visualiser block, theories of sound and speech perception are reviewed. The perceptually important properties of sound, and their relations to the physical attributes of the sound pressure wave, are then considered in order to map the outputs of the auditory model onto an informative and recognisable running image, like the one known as a cochleagram. This conveyor-like image is sampled by a window of 20 milliseconds duration at a rate of 50 samples per second, so that a sequence of phase-locked rectangular images is produced. Animation of these images results in a novel method of spectrography displaying both the time-varying and the time-independent information of the underlying sound with high resolution in real time. The resulting system translates a spoken word into a visual gesture and displays a still picture when the input is a steady-state sound. Finally, the implementation of this visualiser system is evaluated through several experiments undertaken by normal-hearing subjects. In these experiments, recognition of the gestures of a number of spoken words is examined through a set of two-word and multi-word forced-choice tests. The results of these preliminary experiments show a high recognition score (40-90 percent, where zero represents chance expectation) after only 10 learning trials. General conclusions from the results suggest: potentially quick learning of the gestures, language independence of the system, fidelity of the system in translating the auditory information, and persistence of the learned gestures in long-term memory. The results are very promising and motivate further investigations.
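
The simplified ear model is concrete enough to sketch: a high-pass stage for the outer and middle ears, a bank of log-spaced band-pass filters for the inner ear, then rectification and compression. In the sketch below, the filter orders, high-pass cutoff, and band edges are assumptions:

```python
# Hedged sketch of the simplified ear model described above: a high-pass
# stage (outer/middle ear), a bank of band-pass filters (inner ear), then
# rectification and compression. Filter orders and band edges are assumed.
import numpy as np
from scipy.signal import butter, sosfilt

def ear_model(y, fs, n_bands=16, fmin=100.0, fmax=6000.0):
    hp = butter(2, 80.0, btype="highpass", fs=fs, output="sos")
    y = sosfilt(hp, y)                              # outer/middle ear
    edges = np.geomspace(fmin, fmax, n_bands + 1)   # log-spaced band edges
    channels = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        bp = butter(2, [lo, hi], btype="bandpass", fs=fs, output="sos")
        channels.append(np.abs(sosfilt(bp, y)) ** 0.3)  # rectify + compress
    return np.stack(channels)                       # (n_bands, n_samples)

fs = 16000
t = np.linspace(0, 0.5, fs // 2, endpoint=False)
print(ear_model(np.sin(2 * np.pi * 1000 * t), fs).shape)
```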

9. Gillespie, Bradford W. "Strategies for improving audible quality and speech recognition accuracy of reverberant speech." Thesis, University of Washington, 2002. http://hdl.handle.net/1773/5930.

10. Corbet, Remy. "A Sound for Recognition: Blues Music and the African American Community." OpenSIUC, 2011. https://opensiuc.lib.siu.edu/theses/730.

Abstract:
Blues music is a reflection of all the changes that shaped the African American experience. It is an affirmation of the African American identity, looking forward to the future with one eye glancing at the past. It is a reminder of the tragedies and inequalities that accompanied African Americans from slavery to official freedom, then from freedom to equality. It is the witness of the development of African Americans, and of their acculturation to the individual voice, symbol of the American ethos, which made the link between their African past and their American future.

Books on the topic "Sound recognition"

1. Artificial perception and music recognition. Berlin: Springer-Verlag, 1993.

2. Minker, Wolfgang. Incorporating Knowledge Sources into Statistical Speech Recognition. Boston, MA: Springer Science+Business Media, LLC, 2009.

3. Carlin, Kevin John. A domestic sound recognition and identification alert system for the profoundly deaf. [s.l.]: The Author, 1996.

4. Kil, David H. Pattern recognition and prediction with applications to signal characterization. Woodbury, N.Y.: AIP Press, 1996.

5. Wolfman, Karen Anne Siegelman. The influence of spelling-sound consistency on the use of reading strategies. Ottawa: National Library of Canada, 1993.

6. Junqua, Jean-Claude. Robustness in automatic speech recognition: Fundamentals and applications. Boston: Kluwer Academic Publishers, 1996.

7. Word sorts and more: Sound, pattern, and meaning explorations K-3. New York: Guilford Press, 2006.

8. Benesty, Jacob, M. Mohan Sondhi, and Yiteng Huang, eds. Springer handbook of speech processing. Berlin: Springer, 2008.

9. Blauert, Jens, ed. Communication acoustics. Berlin: Springer-Verlag, 2005.

10. Kryssanov, Victor V., Hitoshi Ogawa, and Stephen Brewster, eds. Haptic and Audio Interaction Design: 6th International Workshop, HAID 2011, Kusatsu, Japan, August 25-26, 2011. Proceedings. Berlin, Heidelberg: Springer-Verlag, 2011.


Book chapters on the topic "Sound recognition"

1. Cai, Yang, and Károly D. Pados. "Sound Recognition." In Computing with Instinct, 16–34. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-19757-4_2.

2. Nam, Juhan, Gautham J. Mysore, and Paris Smaragdis. "Sound Recognition in Mixtures." In Latent Variable Analysis and Signal Separation, 405–13. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-28551-6_50.

3. Popova, Anastasiya S., Alexandr G. Rassadin, and Alexander A. Ponomarenko. "Emotion Recognition in Sound." In Advances in Neural Computation, Machine Learning, and Cognitive Research, 117–24. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66604-4_18.

4. Agus, Trevor R., Clara Suied, and Daniel Pressnitzer. "Timbre Recognition and Sound Source Identification." In Timbre: Acoustics, Perception, and Cognition, 59–85. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-14832-4_3.

5. Hayes, Kimberley, and Amit Rajput. "NDE 4.0: Image and Sound Recognition." In Handbook of Nondestructive Evaluation 4.0, 403–22. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-73206-6_26.

6. Ifukube, Tohru. "Speech Recognition Systems for the Hearing Impaired and the Elderly." In Sound-Based Assistive Technology, 145–67. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-47997-2_5.

7. Theodorou, Theodoros, Iosif Mporas, and Nikos Fakotakis. "Automatic Sound Recognition of Urban Environment Events." In Speech and Computer, 129–36. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-23132-7_16.

8. Bolat, Bülent, and Ünal Küçük. "Musical Sound Recognition by Active Learning PNN." In Multimedia Content Representation, Classification and Security, 474–81. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11848035_63.

9. Zhang, Zhichao, Shugong Xu, Tianhao Qiao, Shunqing Zhang, and Shan Cao. "Attention Based Convolutional Recurrent Neural Network for Environmental Sound Classification." In Pattern Recognition and Computer Vision, 261–71. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-31654-9_23.

10. Santos, Vasco C. F., Miguel F. M. Sousa, and Aníbal J. S. Ferreira. "Quality Assessment of Manufactured Roof-Tiles Using Digital Sound Processing." In Pattern Recognition and Image Analysis, 927–34. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-44871-6_107.


Conference papers on the topic "Sound recognition"

1. Mohanapriya, S. P., and R. Karthika. "Unsupervised environmental sound recognition." In 2014 International Conference on Embedded Systems (ICES). IEEE, 2014. http://dx.doi.org/10.1109/embeddedsys.2014.6953048.

2. Arslan, Yuksel, and Huseyin Canbolat. "A sound database development for environmental sound recognition." In 2017 25th Signal Processing and Communications Applications Conference (SIU). IEEE, 2017. http://dx.doi.org/10.1109/siu.2017.7960241.

3. Bear, Helen L., Inês Nolasco, and Emmanouil Benetos. "Towards Joint Sound Scene and Polyphonic Sound Event Recognition." In Interspeech 2019. ISCA, 2019. http://dx.doi.org/10.21437/interspeech.2019-2169.

4. Negishi, Yuya, and Nobuo Kawaguchi. "Instant Learning Sound Sensor: Flexible Environmental Sound Recognition System." In 2007 Fourth International Conference on Networked Sensing Systems. IEEE, 2007. http://dx.doi.org/10.1109/inss.2007.4297447.

5. Harb, H., and L. Chen. "Sound recognition: a connectionist approach." In Seventh International Symposium on Signal Processing and Its Applications, 2003. Proceedings. IEEE, 2003. http://dx.doi.org/10.1109/isspa.2003.1224953.

6. Damnong, Punyanut, Phimphaka Taninpong, and Jakramate Bootkrajang. "Steam Trap Opening Sound Recognition." In 2021 18th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON). IEEE, 2021. http://dx.doi.org/10.1109/ecti-con51831.2021.9454929.

7. Chachada, Sachin, and C. C. Jay Kuo. "Environmental sound recognition: A survey." In 2013 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA). IEEE, 2013. http://dx.doi.org/10.1109/apsipa.2013.6694338.

8. Vryzas, Nikolaos, Maria Matsiola, Rigas Kotsakis, Charalampos Dimoulas, and George Kalliris. "Subjective Evaluation of a Speech Emotion Recognition Interaction Framework." In AM'18: Sound in Immersion and Emotion. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3243274.3243294.

9. Schaefer, Edward M. "Representing pictures with sound." In 2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR). IEEE, 2014. http://dx.doi.org/10.1109/aipr.2014.7041934.

10. Fan, Changyuan, and Zhenfeng Li. "Research of artillery's sound recognition technology." In 2009 International Conference on Electronic Measurement & Instruments (ICEMI). IEEE, 2009. http://dx.doi.org/10.1109/icemi.2009.5274519.


Reports on the topic "Sound recognition"

1. Ballas, James A. Recognition of Environmental Sounds. Fort Belvoir, VA: Defense Technical Information Center, November 1989. http://dx.doi.org/10.21236/ada214942.

2. From Risk and Conflict to Peace and Prosperity: The urgency of securing community land rights in a turbulent world. Rights and Resources Initiative, February 2017. http://dx.doi.org/10.53892/sdos4115.

Abstract:
Amid the realities of major political turbulence, there was growing recognition in 2016 that the land rights of Indigenous Peoples and local communities are key to ensuring peace and prosperity, economic development, sound investment, and climate change mitigation and adaptation. Despite equivocation by governments, a critical mass of influential investors and companies now recognize the market rationale for respecting community land rights. There is also increased recognition that ignoring these rights carries significant financial and reputational risks; causes conflict with local peoples; and almost always fails to deliver on development promises.