Academic literature on the topic 'Mel-Frequency Cepstral coefficients'
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Mel-Frequency Cepstral coefficients.'
Journal articles on the topic "Mel-Frequency Cepstral coefficients"
Sato, Nobuo, and Yasunari Obuchi. "Emotion Recognition using Mel-Frequency Cepstral Coefficients." Journal of Natural Language Processing 14, no. 4 (2007): 83–96. http://dx.doi.org/10.5715/jnlp.14.4_83.
Hashad, F. G., T. M. Halim, S. M. Diab, B. M. Sallam, and F. E. Abd El-Samie. "Fingerprint recognition using mel-frequency cepstral coefficients." Pattern Recognition and Image Analysis 20, no. 3 (September 2010): 360–69. http://dx.doi.org/10.1134/s1054661810030120.
Park, Won Gyeong, Young Bae Lim, Dong Woo Kim, Ho Kyoung Lee, and Sdeongwon Cho. "Prediction Method of Electrical Abnormal States Using Simplified Mel-Frequency Cepstral Coefficients." Journal of Korean Institute of Intelligent Systems 28, no. 5 (October 31, 2018): 514–22. http://dx.doi.org/10.5391/jkiis.2018.28.5.514.
Indrawaty, Youllia, Irma Amelia Dewi, and Rizki Lukman. "Ekstraksi Ciri Pelafalan Huruf Hijaiyyah Dengan Metode Mel-Frequency Cepstral Coefficients [Feature Extraction of Hijaiyyah Letter Pronunciation Using the Mel-Frequency Cepstral Coefficients Method]." MIND Journal 4, no. 1 (June 1, 2019): 49–64. http://dx.doi.org/10.26760/mindjournal.v4i1.49-64.
Arora, Shruti, Sushma Jain, and Inderveer Chana. "A Fusion Framework Based on Cepstral Domain Features from Phonocardiogram to Predict Heart Health Status." Journal of Mechanics in Medicine and Biology 21, no. 04 (April 22, 2021): 2150034. http://dx.doi.org/10.1142/s0219519421500342.
Sheu, Jia-Shing, and Ching-Wen Chen. "Voice Recognition and Marking Using Mel-frequency Cepstral Coefficients." Sensors and Materials 32, no. 10 (October 9, 2020): 3209. http://dx.doi.org/10.18494/sam.2020.2860.
Koolagudi, Shashidhar G., Deepika Rastogi, and K. Sreenivasa Rao. "Identification of Language using Mel-Frequency Cepstral Coefficients (MFCC)." Procedia Engineering 38 (2012): 3391–98. http://dx.doi.org/10.1016/j.proeng.2012.06.392.
Saldanha, Jennifer C., T. Ananthakrishna, and Rohan Pinto. "Vocal Fold Pathology Assessment Using Mel-Frequency Cepstral Coefficients and Linear Predictive Cepstral Coefficients Features." Journal of Medical Imaging and Health Informatics 4, no. 2 (April 1, 2014): 168–73. http://dx.doi.org/10.1166/jmihi.2014.1253.
Pedalanka, P. S. Subhashini, M. SatyaSai Ram, and Duggirala Sreenivasa Rao. "Mel Frequency Cepstral Coefficients based Bacterial Foraging Optimization with DNN-RBF for Speaker Recognition." Indian Journal of Science and Technology 14, no. 41 (November 3, 2021): 3082–92. http://dx.doi.org/10.17485/ijst/v14i41.1858.
Fahmy, Maged M. M. "Palmprint recognition based on Mel frequency Cepstral coefficients feature extraction." Ain Shams Engineering Journal 1, no. 1 (September 2010): 39–47. http://dx.doi.org/10.1016/j.asej.2010.09.005.
Dissertations / Theses on the topic "Mel-Frequency Cepstral coefficients"
Darch, Jonathan J. A. "Robust acoustic speech feature prediction from Mel frequency cepstral coefficients." Thesis, University of East Anglia, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.445206.
Edman, Sebastian. "Radar target classification using Support Vector Machines and Mel Frequency Cepstral Coefficients." Thesis, KTH, Optimeringslära och systemteori, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-214794.
Full textI radar applikationer räcker det ibland inte med att veta att systemet observerat ett mål när en reflekted signal dekekteras, det är ofta också utav stort intresse att veta vilket typ av föremål som signalen reflekterades mot. Detta projekt undersöker möjligheterna att utifrån rå radardata transformera de reflekterade signalerna och använda sina mänskliga sinnen, mer specifikt våran hörsel, för att skilja på olika mål och också genom en maskininlärnings approach där med hjälp av mönster och karaktärsdrag för dessa signaler används för att besvara frågeställningen. Mer ingående avgränsas denna undersökning till två typer av mål, mindre obemannade flygande farkoster (UAV) och fåglar. Genom att extrahera komplexvärd radar video även känt som I/Q data från tidigare nämnda typer av mål via signalbehandlingsmetoder transformera denna data till reella signaler, därefter transformeras dessa signaler till hörbara signaler. För att klassificera dessa typer av signaler används typiska särdrag som också används inom taligenkänning, nämligen, Mel Frequency Cepstral Coefficients tillsammans med två modeller av en Support Vector Machine klassificerings metod. Med den linjära modellen uppnåddes en prediktions noggrannhet på 93.33%. Individuellt var noggrannheten 93.33 % korrekt klassificering utav UAV:n och 93.33 % på fåglar. Med radial bas modellen uppnåddes en prediktions noggrannhet på 98.33%. Individuellt var noggrannheten 100 % korrekt klassificering utav UAV:n och 96.76% på fåglar. Projektet är delvis utfört med J. Clemedson [2] vars fokus är att, som tidigare nämnt, transformera dessa signaler till hörbara signaler.
Yang, Chenguang. "Security in Voice Authentication." Digital WPI, 2014. https://digitalcommons.wpi.edu/etd-dissertations/79.
Wu, Qiming. "A robust audio-based symbol recognition system using machine learning techniques." University of the Western Cape, 2020. http://hdl.handle.net/11394/7614.
Full textThis research investigates the creation of an audio-shape recognition system that is able to interpret a user’s drawn audio shapes—fundamental shapes, digits and/or letters— on a given surface such as a table-top using a generic stylus such as the back of a pen. The system aims to make use of one, two or three Piezo microphones, as required, to capture the sound of the audio gestures, and a combination of the Mel-Frequency Cepstral Coefficients (MFCC) feature descriptor and Support Vector Machines (SVMs) to recognise audio shapes. The novelty of the system is in the use of piezo microphones which are low cost, light-weight and portable, and the main investigation is around determining whether these microphones are able to provide sufficiently rich information to recognise the audio shapes mentioned in such a framework.
Candel, Ramón Antonio José. "Verificación automática de locutores aplicando pruebas diagnósticas múltiples en serie y en paralelo basadas en DTW (Dynamic Time Warping) y MFCC (Mel-Frequency Cepstral Coefficients) [Automatic speaker verification applying multiple diagnostic tests in series and in parallel based on DTW and MFCC]." Doctoral thesis, Universidad de Murcia, 2015. http://hdl.handle.net/10803/300433.
This thesis presents the design of a system capable of performing automatic speaker verification, based on modeling with DTW (Dynamic Time Warping) and MFCC (Mel-Frequency Cepstral Coefficients) procedures. Once designed, the system was evaluated both with the individual tests, DTW and MFCC separately, and with multiple tests combining both in series and in parallel, using recordings obtained from the AHUMADA database of the Guardia Civil. All results were analyzed with regard to their statistical significance, derived from performing a given finite number of tests. Statistical results were obtained for different sizes of the databases used, which allowed us to assess their influence on the method and to fix its different variables a priori, in order to carry out the best possible study. To the same end, we identify the best system, in terms of model type and sample size, for a forensic study based on the intended purpose.
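DTW, one of the two building blocks named in the abstract above, aligns two variable-length feature sequences (e.g. per-frame MFCC vectors from two utterances) by dynamic programming and returns the cost of the best alignment. A minimal sketch, with illustrative toy inputs:

```python
import numpy as np

def dtw_distance(a, b):
    """DTW distance between two feature sequences (rows are frames)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])   # local frame distance
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

x = np.array([[0.0], [1.0], [2.0]])
y = np.array([[0.0], [1.0], [1.0], [2.0]])
print(dtw_distance(x, y))  # 0.0 -- y is x with one repeated frame
```

Because the warping path may repeat frames, the distance is insensitive to differences in speaking rate, which is what makes DTW usable for text-dependent speaker verification.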
Lindstål, Tim, and Daniel Marklund. "Application of LabVIEW and myRIO to voice controlled home automation." Thesis, Uppsala universitet, Signaler och System, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-380866.
Full textLarsson, Alm Kevin. "Automatic Speech Quality Assessment in Unified Communication : A Case Study." Thesis, Linköpings universitet, Programvara och system, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-159794.
Neville, Katrina Lee. "Channel Compensation for Speaker Recognition Systems." RMIT University, Electrical and Computer Engineering, 2007. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20080514.093453.
Alvarenga, Rodrigo Jorge. "Reconhecimento de comandos de voz por redes neurais [Voice command recognition using neural networks]." Universidade de Taubaté, 2012. http://www.bdtd.unitau.br/tedesimplificado/tde_busca/arquivo.php?codArquivo=587.
Systems for speech recognition are widely used in industry, to improve human operations and procedures, and in the entertainment and recreation sector. The specific objective of this study was to design and develop a voice recognition system capable of identifying voice commands, regardless of the speaker. The main purpose of the system is to control the movement of robots, with applications in industry and in aid of disabled people. We used a decision-making approach, by means of a neural network trained with the distinctive features of the speech of 16 speakers. The samples of the voice commands were collected under the criterion of convenience (age and sex), to ensure greater discrimination between voice characteristics and to achieve generalization of the neural network. Preprocessing consisted of determining the endpoints of each command signal and of adaptive Wiener filtering. Each speech command was segmented into 200 windows with 25% overlap. The features used were the zero-crossing rate, the short-term energy, and the mel-frequency cepstral coefficients. The first two coefficients of linear predictive coding and its error were also tested. The neural network classifier was a multilayer perceptron, trained with the backpropagation algorithm. Several experiments were performed to choose thresholds, practical values, features, and neural network configurations. Results were considered very good, reaching an acceptance rate of 89.16% under the worst-case conditions for the sampling of the commands.
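The framing and the two time-domain features described in the abstract above can be sketched in NumPy. The frame length, sampling rate, and test tone below are illustrative assumptions (the study fixes the number of windows at 200, so its actual frame length depends on each command's duration):

```python
import numpy as np

def frame_features(signal, frame_len=200, overlap=0.25):
    """Zero-crossing rate and short-term energy, per overlapping frame."""
    hop = int(frame_len * (1 - overlap))          # 25% overlap between frames
    n_frames = 1 + (len(signal) - frame_len) // hop
    zcr, energy = [], []
    for i in range(n_frames):
        frame = signal[i * hop : i * hop + frame_len]
        signs = np.sign(frame)
        # Fraction of sample pairs whose sign changes.
        zcr.append(np.mean(np.abs(np.diff(signs)) > 0))
        # Mean squared amplitude of the frame.
        energy.append(np.sum(frame ** 2) / frame_len)
    return np.array(zcr), np.array(energy)

sr = 8000
t = np.arange(sr) / sr
z, e = frame_features(np.sin(2 * np.pi * 100 * t))
print(z.shape, e.shape)  # one ZCR and one energy value per frame
```

In the study these per-window values, together with MFCCs, form the feature vector fed to the multilayer perceptron.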
Larsson, Joel. "Optimizing text-independent speaker recognition using an LSTM neural network." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-26312.
Book chapters on the topic "Mel-Frequency Cepstral coefficients"
Sueur, Jérôme. "Mel-Frequency Cepstral and Linear Predictive Coefficients." In Sound Analysis and Synthesis with R, 381–98. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-77647-7_12.
Srivastava, Sumit, Mahesh Chandra, and G. Sahoo. "Phase Based Mel Frequency Cepstral Coefficients for Speaker Identification." In Advances in Intelligent Systems and Computing, 309–16. New Delhi: Springer India, 2016. http://dx.doi.org/10.1007/978-81-322-2757-1_31.
Karahoda, Bertan, Krenare Pireva, and Ali Shariq Imran. "Mel Frequency Cepstral Coefficients Based Similar Albanian Phonemes Recognition." In Human Interface and the Management of Information: Information, Design and Interaction, 491–500. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-40349-6_47.
Ezeiza, Aitzol, Karmele López de Ipiña, Carmen Hernández, and Nora Barroso. "Combining Mel Frequency Cepstral Coefficients and Fractal Dimensions for Automatic Speech Recognition." In Advances in Nonlinear Speech Processing, 183–89. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-25020-0_24.
Palo, Hemanta Kumar, Mahesh Chandra, and Mihir Narayan Mohanty. "Recognition of Human Speech Emotion Using Variants of Mel-Frequency Cepstral Coefficients." In Advances in Systems, Control and Automation, 491–98. Singapore: Springer Singapore, 2017. http://dx.doi.org/10.1007/978-981-10-4762-6_47.
Husain, Moula, S. M. Meena, and Manjunath K. Gonal. "Speech Based Arithmetic Calculator Using Mel-Frequency Cepstral Coefficients and Gaussian Mixture Models." In Proceedings of 3rd International Conference on Advanced Computing, Networking and Informatics, 209–18. New Delhi: Springer India, 2015. http://dx.doi.org/10.1007/978-81-322-2538-6_22.
Benkedjouh, Tarak, Taha Chettibi, Yassine Saadouni, and Mohamed Afroun. "Gearbox Fault Diagnosis Based on Mel-Frequency Cepstral Coefficients and Support Vector Machine." In Computational Intelligence and Its Applications, 220–31. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-89743-1_20.
Traboulsi, Ahmad, and Michel Barbeau. "Identification of Drone Payload Using Mel-Frequency Cepstral Coefficients and LSTM Neural Networks." In Proceedings of the Future Technologies Conference (FTC) 2020, Volume 1, 402–12. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-63128-4_30.
Xue, Yang. "Speaker Recognition System Using Dynamic Time Warping Matching and Mel-Scale Frequency Cepstral Coefficients." In Lecture Notes in Electrical Engineering, 961–67. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-15-8411-4_127.
Dhonde, S. B., Amol Chaudhari, and S. M. Jagade. "Integration of Mel-frequency Cepstral Coefficients with Log Energy and Temporal Derivatives for Text-Independent Speaker Identification." In Proceedings of the International Conference on Data Engineering and Communication Technology, 791–97. Singapore: Springer Singapore, 2016. http://dx.doi.org/10.1007/978-981-10-1675-2_78.
Conference papers on the topic "Mel-Frequency Cepstral coefficients"
Jithendra, Uppu, Usha Mittal, and Priyanka Chawla. "Audio Detection using Mel-frequency Cepstral Coefficients." In 2021 9th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO). IEEE, 2021. http://dx.doi.org/10.1109/icrito51393.2021.9596443.
Nguyen Viet Cuong, Vu Dinh, and Lam Si Tung Ho. "Mel-frequency Cepstral Coefficients for Eye Movement Identification." In 2012 IEEE 24th International Conference on Tools with Artificial Intelligence (ICTAI 2012). IEEE, 2012. http://dx.doi.org/10.1109/ictai.2012.42.
Zhou, Xinhui, Daniel Garcia-Romero, Ramani Duraiswami, Carol Espy-Wilson, and Shihab Shamma. "Linear versus mel frequency cepstral coefficients for speaker recognition." In 2011 IEEE Workshop on Automatic Speech Recognition & Understanding (ASRU). IEEE, 2011. http://dx.doi.org/10.1109/asru.2011.6163888.
Thaine, Patricia, and Gerald Penn. "Extracting Mel-Frequency and Bark-Frequency Cepstral Coefficients from Encrypted Signals." In Interspeech 2019. ISCA, 2019. http://dx.doi.org/10.21437/interspeech.2019-1136.
Ramirez, Angel David Pedroza, Jose Ismael de la Rosa Vargas, Rogelio Rosas Valdez, and Aldonso Becerra. "A comparative between Mel Frequency Cepstral Coefficients (MFCC) and Inverse Mel Frequency Cepstral Coefficients (IMFCC) features for an Automatic Bird Species Recognition System." In 2018 IEEE Latin American Conference on Computational Intelligence (LA-CCI). IEEE, 2018. http://dx.doi.org/10.1109/la-cci.2018.8625230.
Talal, T. M., and Ayman El-Sayed. "Identification of satellite images based on mel frequency cepstral coefficients." In 2009 International Conference on Computer Engineering & Systems (ICCES). IEEE, 2009. http://dx.doi.org/10.1109/icces.2009.5383270.
Korkmaz, Onur Erdem, and Ayten Atasoy. "Emotion recognition from speech signal using mel-frequency cepstral coefficients." In 2015 9th International Conference on Electrical and Electronics Engineering (ELECO). IEEE, 2015. http://dx.doi.org/10.1109/eleco.2015.7394435.
Jokic, Ivan D., Stevan D. Jokic, Vlado D. Delic, and Zoran H. Peric. "Mel-frequency cepstral coefficients as features for automatic speaker recognition." In 2015 23rd Telecommunications Forum Telfor (TELFOR). IEEE, 2015. http://dx.doi.org/10.1109/telfor.2015.7377497.
Lin, Ming, Shangping Zhong, and Lingli Lin. "Chicken Sound Recognition Using Anti-noise Mel Frequency Cepstral Coefficients." In 2015 Third International Conference on Robot, Vision and Signal Processing (RVSP). IEEE, 2015. http://dx.doi.org/10.1109/rvsp.2015.60.
Cooney, Ciaran, Rafaella Folli, and Damien Coyle. "Mel Frequency Cepstral Coefficients Enhance Imagined Speech Decoding Accuracy from EEG." In 2018 29th Irish Signals and Systems Conference (ISSC). IEEE, 2018. http://dx.doi.org/10.1109/issc.2018.8585291.