Academic literature on the topic 'Acoustic speech features'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Acoustic speech features.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "Acoustic speech features"
Masih, Dawa A. A., Nawzad K. Jalal, Manar N. A. Mohammed, and Sulaiman A. Mustafa. "The Assessment of Acoustical Characteristics for Recent Mosque Buildings in Erbil City of Iraq." ARO – The Scientific Journal of Koya University 9, no. 1 (March 1, 2021): 51–66. http://dx.doi.org/10.14500/aro.10784.
Vyaltseva, Darya. "Acoustic Features of Twins’ Speech." Vestnik Volgogradskogo gosudarstvennogo universiteta. Serija 2. Jazykoznanije 16, no. 3 (November 15, 2017): 227–34. http://dx.doi.org/10.15688/jvolsu2.2017.3.24.
Sepulveda-Sepulveda, Alexander, and German Castellanos-Domínguez. "Time-Frequency Energy Features for Articulator Position Inference on Stop Consonants." Ingeniería y Ciencia 8, no. 16 (November 30, 2012): 37–56. http://dx.doi.org/10.17230/ingciencia.8.16.2.
Ishimoto, Yuichi, and Noriko Suzuki. "Acoustic features of speech after glossectomy." Journal of the Acoustical Society of America 120, no. 5 (November 2006): 3350–51. http://dx.doi.org/10.1121/1.4781416.
Shuiskaya, Tatiana V., and Svetlana V. Androsova. "Acoustic Features of Child Speech Sounds: Consonants." Theoretical and Applied Linguistics 2, no. 3 (2016): 123–37. http://dx.doi.org/10.22250/2410-7190_2016_2_3_123_137.
Kobayashi, Maori, Yasuhiro Hamada, and Masato Akagi. "Acoustic features in speech for emergency perception." Journal of the Acoustical Society of America 144, no. 3 (September 2018): 1835. http://dx.doi.org/10.1121/1.5068086.
Roh, Yong-Wan, Dong-Ju Kim, Woo-Seok Lee, and Kwang-Seok Hong. "Novel acoustic features for speech emotion recognition." Science in China Series E: Technological Sciences 52, no. 7 (June 9, 2009): 1838–48. http://dx.doi.org/10.1007/s11431-009-0204-3.
Yamamoto, Katsuhiko, Toshio Irino, Toshie Matsui, Shoko Araki, Keisuke Kinoshita, and Tomohiro Nakatani. "Analysis of acoustic features for speech intelligibility prediction models." Journal of the Acoustical Society of America 140, no. 4 (October 2016): 3114. http://dx.doi.org/10.1121/1.4969744.
Jiang, Wei, Zheng Wang, Jesse S. Jin, Xianfeng Han, and Chunguang Li. "Speech Emotion Recognition with Heterogeneous Feature Unification of Deep Neural Network." Sensors 19, no. 12 (June 18, 2019): 2730. http://dx.doi.org/10.3390/s19122730.
Zlokarnik, Igor. "Adding articulatory features to acoustic features for automatic speech recognition." Journal of the Acoustical Society of America 97, no. 5 (May 1995): 3246. http://dx.doi.org/10.1121/1.411699.
Full textDissertations / Theses on the topic "Acoustic speech features"
Leung, Ka Yee. "Combining acoustic features and articulatory features for speech recognition /." View Abstract or Full-Text, 2002. http://library.ust.hk/cgi/db/thesis.pl?ELEC%202002%20LEUNGK.
Includes bibliographical references (leaves 92–96). Also available in electronic version. Access restricted to campus users.
Juneja, Amit. "Speech recognition based on phonetic features and acoustic landmarks." College Park, Md. : University of Maryland, 2004. http://hdl.handle.net/1903/2148.
Thesis research directed by: Electrical Engineering. Title from title page of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
Tyson, Na'im R. "Exploration of Acoustic Features for Automatic Vowel Discrimination in Spontaneous Speech." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1339695879.
Sun, Rui. "The evaluation of the stability of acoustic features in affective conveyance across multiple emotional databases." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/49041.
Torres, Juan Félix. "Estimation of glottal source features from the spectral envelope of the acoustic speech signal." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/34736.
Ishizuka, Kentaro. "Studies on Acoustic Features for Automatic Speech Recognition and Speaker Diarization in Real Environments." 京都大学 (Kyoto University), 2009. http://hdl.handle.net/2433/123834.
Diekema, Emily D. "Acoustic Measurements of Clear Speech Cue Fade in Adults with Idiopathic Parkinson Disease." Bowling Green State University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1460063159.
Tran, Thi-Anh-Xuan. "Acoustic gesture modeling. Application to a Vietnamese speech recognition system." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAT023/document.
Speech plays a vital role in human communication. Selecting relevant acoustic speech features is key to the design of any system that uses speech processing. For some 40 years, speech was typically treated as a sequence of quasi-stable portions of signal (vowels) separated by transitions (consonants). Despite a wealth of studies that clearly document the importance of coarticulation and reveal that articulatory and acoustic targets are not context-independent, the view that each vowel has an acoustic target specifiable in a context-independent manner remains widespread. This point of view entails strong limitations. It is well known that formant frequencies are acoustic characteristics that bear a clear relationship to speech production and that can distinguish among vowels. Vowels are therefore generally described by static articulatory configurations represented as targets in the acoustic space, typically by formant frequencies in the F1-F2 and F2-F3 planes. Plosive consonants can be described in terms of places of articulation, represented by loci or locus equations in an acoustic plane. But formant frequency trajectories in fluent speech rarely display a steady state for each vowel: they vary with speaker, consonantal environment (coarticulation), and speaking rate (relating to the continuum between hypo- and hyper-articulation). In view of the inherent limitations of static approaches, the approach adopted here studies both vowels and consonants from a dynamic point of view.

First, we studied the effects of the impulse response at the beginning of the signal, at its end, and during transitions, both in the speech signal and at the perception level. Variations of the phases of the components were then examined. The results show that the effects of these parameters can be observed in spectrograms. Crucially, the amplitudes of the spectral components distinguished under the approach advocated here are sufficient for perceptual discrimination. From this result, all subsequent speech analysis focuses on the amplitude domain only, deliberately leaving aside phase information. Next, we extend the work to vowel-consonant-vowel perception from a dynamic point of view. These perceptual results, together with those obtained earlier by Carré (2009a), show that vowel-to-vowel and vowel-consonant-vowel stimuli can be characterized and separated by the direction and rate of the transitions in the formant plane, even when absolute frequency values lie outside the vowel triangle (i.e., the vowel acoustic space in absolute values).

Because of the limitations of formant measurement, the dynamic approach requires new tools based on parameters that can replace formant frequency estimation. Spectral Subband Centroid Frequency (SSCF) features were studied. Comparison with vowel formant frequencies shows that SSCFs can replace formant frequencies and act as "pseudo-formants", even during consonant production.

On this basis, SSCF is used as a tool to compute dynamic characteristics. We propose a new way to model dynamic speech features, which we call SSCF Angles. Our analysis of SSCF Angles was performed on transitions of vowel-to-vowel (V1V2) sequences in both Vietnamese and French. SSCF Angles prove to be reliable and robust parameters.
For each language, the analysis shows that: (i) SSCF Angles can distinguish V1V2 transitions; (ii) V1V2 and V2V1 transitions have symmetrical properties in the acoustic domain based on SSCF Angles; (iii) SSCF Angles for male and female speakers are fairly similar for the same V1V2 transition; and (iv) they are more or less invariant with respect to speech rate (normal and fast). Finally, these dynamic acoustic speech features were used in a Vietnamese automatic speech recognition system, with several interesting results.
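To make the SSCF idea above concrete, here is a minimal sketch in Python, assuming simplified definitions rather than the thesis's actual implementation: each subband centroid is taken as the power-weighted mean frequency of a band, and an "SSCF Angle" is taken as the arctangent of the displacement between two SSCF vectors in a chosen SSCF plane. The subband edges, the Gaussian toy spectra, and the angle convention are illustrative assumptions.

```python
import numpy as np

def sscf(power_spectrum, freqs, band_edges):
    """Spectral Subband Centroid Frequencies: the power-weighted mean
    frequency of each subband, acting as a 'pseudo-formant'."""
    centroids = []
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        band = (freqs >= lo) & (freqs < hi)
        p, f = power_spectrum[band], freqs[band]
        centroids.append(float(np.sum(f * p) / (np.sum(p) + 1e-12)))
    return np.array(centroids)

def sscf_angle(sscf_a, sscf_b, i=0, j=1):
    """Direction of the transition from sscf_a to sscf_b in the
    (SSCF_i, SSCF_j) plane, in degrees (assumed angle convention)."""
    return float(np.degrees(np.arctan2(sscf_b[j] - sscf_a[j],
                                       sscf_b[i] - sscf_a[i])))

def toy_spectrum(freqs, peak_freqs, bandwidth=120.0):
    """Synthetic short-time power spectrum with Gaussian formant-like
    peaks; stands in for one analysis frame of a real spectrogram."""
    return sum(np.exp(-0.5 * ((freqs - f0) / bandwidth) ** 2)
               for f0 in peak_freqs)

sr, n_fft = 16000, 1024
freqs = np.fft.rfftfreq(n_fft, d=1.0 / sr)
edges = [200.0, 1500.0, 3500.0]  # assumed subband edges in Hz

# Frames roughly imitating the endpoints of an /a/-to-/i/ transition.
frame_a = toy_spectrum(freqs, [700.0, 1200.0])  # /a/-like F1, F2
frame_i = toy_spectrum(freqs, [300.0, 2300.0])  # /i/-like F1, F2

sscf_start = sscf(frame_a, freqs, edges)
sscf_end = sscf(frame_i, freqs, edges)
print("SSCFs at /a/:", sscf_start.round(1))
print("SSCFs at /i/:", sscf_end.round(1))
print("transition angle (deg):", round(sscf_angle(sscf_start, sscf_end), 1))
```

On real speech, the centroids would be computed frame by frame over a spectrogram, so that the direction and rate of a V1V2 transition can be tracked over its actual time course.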
Wang, Yuxuan. "Supervised Speech Separation Using Deep Neural Networks." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1426366690.
Chen, Jitong. "On Generalization of Supervised Speech Separation." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1492038295603502.
Books on the topic "Acoustic speech features"
Gabsdil, Malte. Automatic classification of speech recognition hypotheses using acoustic and pragmatic features. Saarbrücken: DFKI & Universität des Saarlandes, 2005.
Bolla, Kálmán. A phonetic conspectus of Polish: The articulatory and acoustic features of Polish speech sounds. Budapest: Linguistics Institute of the Hungarian Academy of Sciences, 1987.
Bolla, Kálmán. A phonetic conspectus of English: The articulatory and acoustic features of British English speech sounds. Budapest: Linguistics Institute of the Hungarian Academy of Sciences, 1989.
Kubozono, Haruo, ed. The Phonetics and Phonology of Geminate Consonants. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198754930.001.0001.
Lamel, Lori, and Jean-Luc Gauvain. "Speech Recognition." Edited by Ruslan Mitkov. Oxford University Press, 2012. http://dx.doi.org/10.1093/oxfordhb/9780199276349.013.0016.
Stanford, James N. New England English. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780190625658.001.0001.
Book chapters on the topic "Acoustic speech features"
Mizera, Petr, and Petr Pollak. "Improved Estimation of Articulatory Features Based on Acoustic Features with Temporal Context." In Text, Speech, and Dialogue, 560–68. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-24033-6_63.
Žibert, Janez, and France Mihelič. "Fusion of Acoustic and Prosodic Features for Speaker Clustering." In Text, Speech and Dialogue, 210–17. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04208-9_31.
Lyakso, Elena, Olga Frolova, and Aleksey Grigorev. "Perception and Acoustic Features of Speech of Children with Autism Spectrum Disorders." In Speech and Computer, 602–12. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66429-3_60.
Tomashenko, Natalia, Yuri Khokhlov, Anthony Larcher, and Yannick Estève. "Exploring GMM-derived Features for Unsupervised Adaptation of Deep Neural Network Acoustic Models." In Speech and Computer, 304–11. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-43958-7_36.
Kocharov, Daniil, Tatiana Kachkovskaia, Aliya Mirzagitova, and Pavel Skrelin. "Combining Syntactic and Acoustic Features for Prosodic Boundary Detection in Russian." In Statistical Language and Speech Processing, 68–79. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-45925-7_6.
Proença, Jorge, Arlindo Veiga, Sara Candeias, João Lemos, Cristina Januário, and Fernando Perdigão. "Characterizing Parkinson’s Disease Speech by Acoustic and Phonetic Features." In Lecture Notes in Computer Science, 24–35. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-09761-9_3.
Yasmin, Ghazaala, and Asit K. Das. "Speech and Non-speech Audio Files Discrimination Extracting Textural and Acoustic Features." In Recent Trends in Signal and Image Processing, 197–206. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-10-8863-6_20.
Verkhodanova, Vasilisa, and Vladimir Shapranov. "Filled Pauses and Lengthenings Detection Based on the Acoustic Features for the Spontaneous Russian Speech." In Speech and Computer, 227–34. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-11581-8_28.
Pao, Tsang-Long, Yu-Te Chen, Jun-Heng Yeh, and Wen-Yuan Liao. "Combining Acoustic Features for Improved Emotion Recognition in Mandarin Speech." In Affective Computing and Intelligent Interaction, 279–85. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11573548_36.
Lyakso, Elena, Olga Frolova, and Aleksey Grigorev. "A Comparison of Acoustic Features of Speech of Typically Developing Children and Children with Autism Spectrum Disorders." In Speech and Computer, 43–50. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-43958-7_4.
Conference papers on the topic "Acoustic speech features"
Chen, Shizhe, Qin Jin, Xirong Li, Gang Yang, and Jieping Xu. "Speech emotion classification using acoustic features." In 2014 9th International Symposium on Chinese Spoken Language Processing (ISCSLP). IEEE, 2014. http://dx.doi.org/10.1109/iscslp.2014.6936664.
Muroi, Takashi, Ryoichi Takashima, Tetsuya Takiguchi, and Yasuo Ariki. "Gradient-based acoustic features for speech recognition." In 2009 International Symposium on Intelligent Signal Processing and Communications Systems (ISPACS 2009). IEEE, 2009. http://dx.doi.org/10.1109/ispacs.2009.5383805.
Li, Lujun, Chuxiong Qin, and Dan Qu. "Improvements of Acoustic Features for Speech Separation." In 2016 Joint International Information Technology, Mechanical and Electronic Engineering Conference. Paris, France: Atlantis Press, 2016. http://dx.doi.org/10.2991/jimec-16.2016.23.
Kocharov, Daniil, András Zolnay, Ralf Schlüter, and Hermann Ney. "Articulatory motivated acoustic features for speech recognition." In Interspeech 2005. ISCA, 2005. http://dx.doi.org/10.21437/interspeech.2005-122.
Dominguez, Mónica, Mireia Farrús, and Leo Wanner. "Combining acoustic and linguistic features in phrase-oriented prosody prediction." In Speech Prosody 2016. ISCA, 2016. http://dx.doi.org/10.21437/speechprosody.2016-163.
Tang, Jianfang, and Haiyan Zhang. "Acoustic features comparison from Chinese speech emotion recognition." In 2012 IEEE International Conference on Oxide Materials for Electronic Engineering (OMEE). IEEE, 2012. http://dx.doi.org/10.1109/omee.2012.6343661.
Ramdinmawii, Esther, and Vinay Kumar Mittal. "Emotional speech discrimination using sub-segmental acoustic features." In 2017 2nd International Conference on Telecommunication and Networks (TEL-NET). IEEE, 2017. http://dx.doi.org/10.1109/tel-net.2017.8343515.
Das, Rohan Kumar, Jichen Yang, and Haizhou Li. "Long Range Acoustic Features for Spoofed Speech Detection." In Interspeech 2019. ISCA, 2019. http://dx.doi.org/10.21437/interspeech.2019-1887.
Jin, Qin, Chengxin Li, Shizhe Chen, and Huimin Wu. "Speech emotion recognition with acoustic and lexical features." In ICASSP 2015 - 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2015. http://dx.doi.org/10.1109/icassp.2015.7178872.
Harding, Philip, and Ben Milner. "Speech enhancement by reconstruction from cleaned acoustic features." In Interspeech 2011. ISCA, 2011. http://dx.doi.org/10.21437/interspeech.2011-420.