Table of contents
A selection of scholarly literature on the topic "Acoustic speech features"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Acoustic speech features".
Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the selected work is generated automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read its online annotation, provided the relevant parameters are available in the metadata.
Journal articles on the topic "Acoustic speech features"
Masih, Dawa A. A., Nawzad K. Jalal, Manar N. A. Mohammed, and Sulaiman A. Mustafa. "The Assessment of Acoustical Characteristics for Recent Mosque Buildings in Erbil City of Iraq." ARO-THE SCIENTIFIC JOURNAL OF KOYA UNIVERSITY 9, no. 1 (March 1, 2021): 51–66. http://dx.doi.org/10.14500/aro.10784.
Vyaltseva, Darya. "Acoustic Features of Twins' Speech." Vestnik Volgogradskogo gosudarstvennogo universiteta. Serija 2. Jazykoznanije 16, no. 3 (November 15, 2017): 227–34. http://dx.doi.org/10.15688/jvolsu2.2017.3.24.
Sepulveda-Sepulveda, Alexander, and German Castellanos-Domínguez. "Time-Frequency Energy Features for Articulator Position Inference on Stop Consonants." Ingeniería y Ciencia 8, no. 16 (November 30, 2012): 37–56. http://dx.doi.org/10.17230/ingciencia.8.16.2.
Ishimoto, Yuichi, and Noriko Suzuki. "Acoustic features of speech after glossectomy." Journal of the Acoustical Society of America 120, no. 5 (November 2006): 3350–51. http://dx.doi.org/10.1121/1.4781416.
Shuiskaya, Tatiana V., and Svetlana V. Androsova. "Acoustic Features of Child Speech Sounds: Consonants." Theoretical and Applied Linguistics 2, no. 3 (2016): 123–37. http://dx.doi.org/10.22250/2410-7190_2016_2_3_123_137.
Kobayashi, Maori, Yasuhiro Hamada, and Masato Akagi. "Acoustic features in speech for emergency perception." Journal of the Acoustical Society of America 144, no. 3 (September 2018): 1835. http://dx.doi.org/10.1121/1.5068086.
Roh, Yong-Wan, Dong-Ju Kim, Woo-Seok Lee, and Kwang-Seok Hong. "Novel acoustic features for speech emotion recognition." Science in China Series E: Technological Sciences 52, no. 7 (June 9, 2009): 1838–48. http://dx.doi.org/10.1007/s11431-009-0204-3.
Yamamoto, Katsuhiko, Toshio Irino, Toshie Matsui, Shoko Araki, Keisuke Kinoshita, and Tomohiro Nakatani. "Analysis of acoustic features for speech intelligibility prediction models." Journal of the Acoustical Society of America 140, no. 4 (October 2016): 3114. http://dx.doi.org/10.1121/1.4969744.
Jiang, Wei, Zheng Wang, Jesse S. Jin, Xianfeng Han, and Chunguang Li. "Speech Emotion Recognition with Heterogeneous Feature Unification of Deep Neural Network." Sensors 19, no. 12 (June 18, 2019): 2730. http://dx.doi.org/10.3390/s19122730.
Zlokarnik, Igor. "Adding articulatory features to acoustic features for automatic speech recognition." Journal of the Acoustical Society of America 97, no. 5 (May 1995): 3246. http://dx.doi.org/10.1121/1.411699.
Der volle Inhalt der QuelleDissertationen zum Thema "Acoustic speech features"
Leung, Ka Yee. "Combining acoustic features and articulatory features for speech recognition." View Abstract or Full-Text, 2002. http://library.ust.hk/cgi/db/thesis.pl?ELEC%202002%20LEUNGK.
Includes bibliographical references (leaves 92–96). Also available in electronic version. Access restricted to campus users.
Juneja, Amit. "Speech recognition based on phonetic features and acoustic landmarks." College Park, Md.: University of Maryland, 2004. http://hdl.handle.net/1903/2148.
Thesis research directed by: Electrical Engineering. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
Tyson, Na'im R. "Exploration of Acoustic Features for Automatic Vowel Discrimination in Spontaneous Speech." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1339695879.
Sun, Rui. "The evaluation of the stability of acoustic features in affective conveyance across multiple emotional databases." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/49041.
Torres, Juan Félix. "Estimation of glottal source features from the spectral envelope of the acoustic speech signal." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/34736.
Ishizuka, Kentaro. "Studies on Acoustic Features for Automatic Speech Recognition and Speaker Diarization in Real Environments." 京都大学 (Kyoto University), 2009. http://hdl.handle.net/2433/123834.
Diekema, Emily D. "Acoustic Measurements of Clear Speech Cue Fade in Adults with Idiopathic Parkinson Disease." Bowling Green State University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1460063159.
Tran, Thi-Anh-Xuan. "Acoustic gesture modeling. Application to a Vietnamese speech recognition system." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAT023/document.
Speech plays a vital role in human communication. Selection of relevant acoustic speech features is key in the design of any system that uses speech processing. For some 40 years, speech was typically considered a sequence of quasi-stable portions of signal (vowels) separated by transitions (consonants). Despite a wealth of studies that clearly document the importance of coarticulation, and reveal that articulatory and acoustic targets are not context-independent, the view that each vowel has an acoustic target that can be specified in a context-independent manner remains widespread. This point of view entails strong limitations. It is well known that formant frequencies are acoustic characteristics that bear a clear relationship with speech production and that can distinguish among vowels. Therefore, vowels are generally described with static articulatory configurations represented by targets in the acoustic space, typically by formant frequencies in the F1-F2 and F2-F3 planes. Plosive consonants can be described in terms of places of articulation, represented by a locus or by locus equations in an acoustic plane. But formant frequency trajectories in fluent speech rarely display a steady state for each vowel. They vary with speaker, consonantal environment (coarticulation) and speaking rate (relating to the continuum between hypo- and hyper-articulation). In view of the inherent limitations of static approaches, the approach adopted here studies both vowels and consonants from a dynamic point of view.

First, we studied the effects of the impulse response at the beginning, at the end, and during transitions of the signal, both in the speech signal and at the perception level. Variations of the phases of the components were then examined. Results show that the effects of these parameters can be observed in spectrograms. Crucially, the amplitudes of the spectral components distinguished under the approach advocated here are sufficient for perceptual discrimination. On the basis of this result, all subsequent speech analysis focuses on the amplitude domain, deliberately leaving aside phase information. Next, we extend the work to vowel-consonant-vowel perception from a dynamic point of view. These perceptual results, together with those obtained earlier by Carré (2009a), show that vowel-to-vowel and vowel-consonant-vowel stimuli can be characterized and separated by the direction and rate of the transitions in the formant plane, even when absolute frequency values are outside the vowel triangle (i.e., the vowel acoustic space in absolute values).

Because of the limitations of formant measurements, the dynamic approach requires new tools based on parameters that can replace formant frequency estimation. Spectral Subband Centroid Frequency (SSCF) features were studied. Comparison with vowel formant frequencies shows that SSCFs can replace formant frequencies and act as "pseudo-formants" even during consonant production.

On this basis, SSCF is used as a tool to compute dynamic characteristics. We propose a new way to model dynamic speech features, which we call SSCF Angles. Our analysis of SSCF Angles was performed on transitions of vowel-to-vowel (V1V2) sequences in both Vietnamese and French. SSCF Angles appear to be reliable and robust parameters.
For each language, the analysis shows that: (i) SSCF Angles can distinguish V1V2 transitions; (ii) V1V2 and V2V1 have symmetrical properties in the acoustic domain based on SSCF Angles; (iii) SSCF Angles for male and female speakers are fairly similar for the same V1V2 transition; and (iv) they are also more or less invariant to speech rate (normal and fast). Finally, these dynamic acoustic speech features are used in a Vietnamese automatic speech recognition system, with several interesting results.
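The SSCF idea outlined in the abstract above amounts to taking, frame by frame, the power-weighted mean frequency of a few spectral subbands and then tracking how those centroids move from one frame to the next. The snippet below is a minimal illustrative sketch of that computation, not the implementation from the thesis; the band edges, STFT settings, and the exact angle definition are assumptions made for the example.

import numpy as np
from scipy.signal import stft

def sscf(signal, fs, band_edges_hz=(200, 1000, 2500, 3500, 4500),
         nperseg=512, noverlap=384):
    # Per-frame centroid frequency of each spectral subband ("pseudo-formants").
    # The default band edges are illustrative and assume fs of at least ~9 kHz.
    freqs, _, Z = stft(signal, fs=fs, nperseg=nperseg, noverlap=noverlap)
    power = np.abs(Z) ** 2                        # power spectrogram
    centroids = []
    for lo, hi in zip(band_edges_hz[:-1], band_edges_hz[1:]):
        band = (freqs >= lo) & (freqs < hi)
        num = (freqs[band, None] * power[band]).sum(axis=0)
        den = power[band].sum(axis=0) + 1e-12     # guard against silent frames
        centroids.append(num / den)
    return np.array(centroids)                    # shape: (n_bands, n_frames)

def sscf_angles(centroids):
    # Direction of movement between consecutive frames in each plane spanned by
    # a pair of adjacent centroids (SSCF_k on x, SSCF_k+1 on y), in radians.
    return np.arctan2(np.diff(centroids[1:], axis=1),
                      np.diff(centroids[:-1], axis=1))

# Hypothetical usage: x, fs = soundfile.read("v1v2.wav")
# angles = sscf_angles(sscf(x, fs))

Tracking the sign and size of these angles across a V1V2 sequence gives the kind of direction-and-rate description of a transition that the abstract attributes to SSCF Angles.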
Wang, Yuxuan. "Supervised Speech Separation Using Deep Neural Networks." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1426366690.
Chen, Jitong. "On Generalization of Supervised Speech Separation." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1492038295603502.
Der volle Inhalt der QuelleBücher zum Thema "Acoustic speech features"
Gabsdil, Malte. Automatic classification of speech recognition hypotheses using acoustic and pragmatic features. Saarbrücken: DFKI & Universität des Saarlandes, 2005.
Kálmán, Bolla. A phonetic conspectus of Polish: The articulatory and acoustic features of Polish speech sounds. Budapest: Linguistics Institute of the Hungarian Academy of Sciences, 1987.
Kálmán, Béla. A Phonetic conspectus of English: The articulatory and acoustic features of British English speech sounds. Budapest: Linguistics Institute of the Hungarian Academy of Sciences, 1989.
Bolla, Kálmán. A phonetic conspectus of English: The articulatory and acoustic features of British English speech sounds. Budapest: Linguistics Institute of the Hungarian Academy of Sciences, 1989.
Den vollen Inhalt der Quelle findenKubozono, Haruo, Hrsg. The Phonetics and Phonology of Geminate Consonants. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198754930.001.0001.
Der volle Inhalt der QuelleLamel, Lori, und Jean-Luc Gauvain. Speech Recognition. Herausgegeben von Ruslan Mitkov. Oxford University Press, 2012. http://dx.doi.org/10.1093/oxfordhb/9780199276349.013.0016.
Der volle Inhalt der QuelleStanford, James N. New England English. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780190625658.001.0001.
Der volle Inhalt der QuelleBuchteile zum Thema "Acoustic speech features"
Mizera, Petr, and Petr Pollak. "Improved Estimation of Articulatory Features Based on Acoustic Features with Temporal Context." In Text, Speech, and Dialogue, 560–68. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-24033-6_63.
Žibert, Janez, and France Mihelič. "Fusion of Acoustic and Prosodic Features for Speaker Clustering." In Text, Speech and Dialogue, 210–17. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04208-9_31.
Lyakso, Elena, Olga Frolova, and Aleksey Grigorev. "Perception and Acoustic Features of Speech of Children with Autism Spectrum Disorders." In Speech and Computer, 602–12. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66429-3_60.
Tomashenko, Natalia, Yuri Khokhlov, Anthony Larcher, and Yannick Estève. "Exploring GMM-derived Features for Unsupervised Adaptation of Deep Neural Network Acoustic Models." In Speech and Computer, 304–11. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-43958-7_36.
Kocharov, Daniil, Tatiana Kachkovskaia, Aliya Mirzagitova, and Pavel Skrelin. "Combining Syntactic and Acoustic Features for Prosodic Boundary Detection in Russian." In Statistical Language and Speech Processing, 68–79. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-45925-7_6.
Proença, Jorge, Arlindo Veiga, Sara Candeias, João Lemos, Cristina Januário, and Fernando Perdigão. "Characterizing Parkinson's Disease Speech by Acoustic and Phonetic Features." In Lecture Notes in Computer Science, 24–35. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-09761-9_3.
Yasmin, Ghazaala, and Asit K. Das. "Speech and Non-speech Audio Files Discrimination Extracting Textural and Acoustic Features." In Recent Trends in Signal and Image Processing, 197–206. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-10-8863-6_20.
Verkhodanova, Vasilisa, and Vladimir Shapranov. "Filled Pauses and Lengthenings Detection Based on the Acoustic Features for the Spontaneous Russian Speech." In Speech and Computer, 227–34. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-11581-8_28.
Pao, Tsang-Long, Yu-Te Chen, Jun-Heng Yeh, and Wen-Yuan Liao. "Combining Acoustic Features for Improved Emotion Recognition in Mandarin Speech." In Affective Computing and Intelligent Interaction, 279–85. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11573548_36.
Lyakso, Elena, Olga Frolova, and Aleksey Grigorev. "A Comparison of Acoustic Features of Speech of Typically Developing Children and Children with Autism Spectrum Disorders." In Speech and Computer, 43–50. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-43958-7_4.
Der volle Inhalt der QuelleKonferenzberichte zum Thema "Acoustic speech features"
Chen, Shizhe, Qin Jin, Xirong Li, Gang Yang, and Jieping Xu. "Speech emotion classification using acoustic features." In 2014 9th International Symposium on Chinese Spoken Language Processing (ISCSLP). IEEE, 2014. http://dx.doi.org/10.1109/iscslp.2014.6936664.
Muroi, Takashi, Ryoichi Takashima, Tetsuya Takiguchi, and Yasuo Ariki. "Gradient-based acoustic features for speech recognition." In 2009 International Symposium on Intelligent Signal Processing and Communications Systems (ISPACS 2009). IEEE, 2009. http://dx.doi.org/10.1109/ispacs.2009.5383805.
Li, Lujun, Chuxiong Qin, and Dan Qu. "Improvements of Acoustic Features for Speech Separation." In 2016 Joint International Information Technology, Mechanical and Electronic Engineering Conference. Paris, France: Atlantis Press, 2016. http://dx.doi.org/10.2991/jimec-16.2016.23.
Kocharov, Daniil, András Zolnay, Ralf Schlüter, and Hermann Ney. "Articulatory motivated acoustic features for speech recognition." In Interspeech 2005. ISCA: ISCA, 2005. http://dx.doi.org/10.21437/interspeech.2005-122.
Dominguez, Mónica, Mireia Farrús, and Leo Wanner. "Combining acoustic and linguistic features in phrase-oriented prosody prediction." In Speech Prosody 2016. ISCA, 2016. http://dx.doi.org/10.21437/speechprosody.2016-163.
Jianfang Tang and Haiyan Zhang. "Acoustic features comparison from Chinese speech emotion recognition." In 2012 IEEE International Conference on Oxide Materials for Electronic Engineering (OMEE). IEEE, 2012. http://dx.doi.org/10.1109/omee.2012.6343661.
Ramdinmawii, Esther, and Vinay Kumar Mittal. "Emotional speech discrimination using sub-segmental acoustic features." In 2017 2nd International Conference on Telecommunication and Networks (TEL-NET). IEEE, 2017. http://dx.doi.org/10.1109/tel-net.2017.8343515.
Das, Rohan Kumar, Jichen Yang, and Haizhou Li. "Long Range Acoustic Features for Spoofed Speech Detection." In Interspeech 2019. ISCA: ISCA, 2019. http://dx.doi.org/10.21437/interspeech.2019-1887.
Jin, Qin, Chengxin Li, Shizhe Chen, and Huimin Wu. "Speech emotion recognition with acoustic and lexical features." In ICASSP 2015 - 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2015. http://dx.doi.org/10.1109/icassp.2015.7178872.
Harding, Philip, and Ben Milner. "Speech enhancement by reconstruction from cleaned acoustic features." In Interspeech 2011. ISCA: ISCA, 2011. http://dx.doi.org/10.21437/interspeech.2011-420.