Ready-made bibliography on the topic "Articulatory data"
Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles
Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Articulatory data".
An "Add to bibliography" button is available next to every work in the list. Use it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online, whenever the relevant details are available in the work's metadata.
Journal articles on the topic "Articulatory data"
Silva, Samuel, Nuno Almeida, Conceição Cunha, Arun Joseph, Jens Frahm, and António Teixeira. "Data-Driven Critical Tract Variable Determination for European Portuguese". Information 11, no. 10 (21 October 2020): 491. http://dx.doi.org/10.3390/info11100491.
Abirami, S., L. Anirudh, and P. Vijayalakshmi. "Silent Speech Interface: An Inversion Problem". Journal of Physics: Conference Series 2318, no. 1 (1 August 2022): 012008. http://dx.doi.org/10.1088/1742-6596/2318/1/012008.
Browman, Catherine P., and Louis Goldstein. "Articulatory gestures as phonological units". Phonology 6, no. 2 (August 1989): 201–51. http://dx.doi.org/10.1017/s0952675700001019.
Wang, Jun, Jordan R. Green, Ashok Samal, and Yana Yunusova. "Articulatory Distinctiveness of Vowels and Consonants: A Data-Driven Approach". Journal of Speech, Language, and Hearing Research 56, no. 5 (October 2013): 1539–51. http://dx.doi.org/10.1044/1092-4388(2013/12-0030).
Kuruvilla-Dugdale, Mili, and Antje S. Mefferd. "Articulatory Performance in Dysarthria: Using a Data-Driven Approach to Estimate Articulatory Demands and Deficits". Brain Sciences 12, no. 10 (20 October 2022): 1409. http://dx.doi.org/10.3390/brainsci12101409.
M., Dhanalakshmi, Nagarajan T., and Vijayalakshmi P. "Significant sensors and parameters in assessment of dysarthric speech". Sensor Review 41, no. 3 (26 July 2021): 271–86. http://dx.doi.org/10.1108/sr-01-2021-0004.
Byrd, Dani, Edward Flemming, Carl Andrew Mueller, and Cheng Cheng Tan. "Using Regions and Indices in EPG Data Reduction". Journal of Speech, Language, and Hearing Research 38, no. 4 (August 1995): 821–27. http://dx.doi.org/10.1044/jshr.3804.821.
Lee, Jimin, Michael Bell, and Zachary Simmons. "Articulatory Kinematic Characteristics Across the Dysarthria Severity Spectrum in Individuals With Amyotrophic Lateral Sclerosis". American Journal of Speech-Language Pathology 27, no. 1 (6 February 2018): 258–69. http://dx.doi.org/10.1044/2017_ajslp-16-0230.
Stevens, Kenneth N. "Inferring articulatory movements from acoustic data". Journal of the Acoustical Society of America 93, no. 4 (April 1993): 2416. http://dx.doi.org/10.1121/1.405910.
Baum, Shari R., David H. McFarland, and Mai Diab. "Compensation to articulatory perturbation: Perceptual data". Journal of the Acoustical Society of America 99, no. 6 (June 1996): 3791–94. http://dx.doi.org/10.1121/1.414996.
Doctoral dissertations on the topic "Articulatory data"
Berry, Jeffrey James. "Machine Learning Methods for Articulatory Data". Diss., The University of Arizona, 2012. http://hdl.handle.net/10150/223348.
Moody, Jay T. "Visualizing speech with a recurrent neural network trained on human acoustic-articulatory data". Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 1999. http://wwwlib.umi.com/cr/ucsd/fullcit?p9930904.
Drake, Eleanor Katherine Elizabeth. "The involvement of the speech production system in prediction during comprehension: an articulatory imaging investigation". Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/22912.
Chen, Cheng. "Inter-gestural Coordination in Temporal and Spatial Domains in Italian: Synchronous EPG + UTI Data". Doctoral thesis, Scuola Normale Superiore, 2019. http://hdl.handle.net/11384/86022.
Douros, Ioannis. "Towards a 3 dimensional dynamic generic speaker model to study geometry simplifications of the vocal tract using magnetic resonance imaging data". Electronic Thesis or Diss., Université de Lorraine, 2020. http://www.theses.fr/2020LORR0115.
Pełny tekst źródłaIn this thesis we used MRI (Magnetic Resonance Imaging) data of the vocal tract to study speech production. The first part consist of the study of the impact that the velum, the epiglottis and the head position has on the phonation of five french vowels. Acoustic simulations were used to compare the formants of the studied cases with the reference in order to measure their impact. For this part of the work, we used 3D static MR (Magnetic Resonance) images. As speech is usually a dynamic phenomenon, a question arose, whether it would be possible to process the 3D data in order to incorporate dynamic information of continuous speech. Therefore the second part presents some algorithms that one can use in order to enhance speech production data. Several image transformations were combined in order to generate estimations of vocal tract shapes which are more informative than the original ones. At this point, we envisaged apart from enhancing speech production data, to create a generic speaker model that could provide enhanced information not for a specific subject, but globally for speech. As a result, we devoted the third part in the investigation of an algorithm that one can use to create a spatiotemporal atlas of the vocal tract which can be used as a reference or standard speaker for speech studies as it is speaker independent. Finally, the last part of the thesis, refers to a selection of open questions of the field that are still left unanswered, some interesting directions that one can expand this thesis and some potential approaches that could help someone move forward towards these directions
Blackwood Ximenes, Arwen. "The relation between acoustic and articulatory variation in vowels: data from American and Australian English". Thesis, 2022. http://hdl.handle.net/1959.7/uws:68957.
Steiner, Ingmar Michael A. [author]. "Observations on the dynamic control of an articulatory synthesizer using speech production data / vorgelegt von Ingmar Michael Augustus Steiner". 2010. http://d-nb.info/1005833303/34.
Books on the topic "Articulatory data"
Seminar on Speech Production (5th, 2000, Kloster Seeon). Proceedings of the 5th Seminar on Speech Production: Models and data & CREST Workshop on Models of Speech Production: motor planning and articulatory modelling. Munich: SPS5, 2000.
Ahlers, M. Oliver. Simulation of occlusion in restorative dentistry: The Artex system; an up-to-date concept regarding facebow-registration, individual recordings, articulators and measuring instruments. Hamburg: DentaConcept, 2000.
Gibson, Mark, and Juana Gil, eds. Romance Phonetics and Phonology. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198739401.001.0001.
Recasens, Daniel. Phonetic Causes of Sound Change. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780198845010.001.0001.
Vihman, Marilyn May. Phonological Templates in Development. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780198793564.001.0001.
Book chapters on the topic "Articulatory data"
Bauer, Dominik, Jim Kannampuzha, and Bernd J. Kröger. "Articulatory Speech Re-synthesis: Profiting from Natural Acoustic Speech Data". In Cross-Modal Analysis of Speech, Gestures, Gaze and Facial Expressions, 344–55. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-03320-9_32.
Perkell, J. S. "Testing Theories of Speech Production: Implications of Some Detailed Analyses of Variable Articulatory Data". In Speech Production and Speech Modelling, 263–88. Dordrecht: Springer Netherlands, 1990. http://dx.doi.org/10.1007/978-94-009-2037-8_11.
Sepulveda-Sepulveda, Alexander, and German Castellanos-Dominguez. "Assessment of the Relation Between Low-Frequency Features and Velum Opening by Using Real Articulatory Data". In Speech and Computer, 131–39. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-43958-7_15.
Badin, Pierre, Frédéric Elisei, Gérard Bailly, and Yuliya Tarabalka. "An Audiovisual Talking Head for Augmented Speech Generation: Models and Animations Based on a Real Speaker’s Articulatory Data". In Articulated Motion and Deformable Objects, 132–43. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-70517-8_14.
Zampaulo, André. "The phonetics of palatals". In Palatal Sound Change in the Romance Languages, 31–45. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780198807384.003.0003.
Recasens, Daniel, and Meritxell Mira. "Articulatory setting, articulatory symmetry, and production mechanisms for Catalan consonant sequences". In Romance Phonetics and Phonology, 146–58. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198739401.003.0009.
Recasens, Daniel. "Velar palatalization". In Phonetic Causes of Sound Change, 22–76. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780198845010.003.0003.
Recasens, Daniel. "Introduction". In Phonetic Causes of Sound Change, 1–12. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780198845010.003.0001.
Chitoran, Ioana, and Stefania Marin. "Vowels and diphthongs". In Romance Phonetics and Phonology, 118–32. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198739401.003.0007.
Celata, Chiara, Alessandro Vietti, and Lorenzo Spreafico. "An articulatory account of rhotic variation in Tuscan Italian". In Romance Phonetics and Phonology, 91–117. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198739401.003.0006.
Conference papers on the topic "Articulatory data"
Kato, Tsuneo, Sungbok Lee, and Shrikanth Narayanan. "An analysis of articulatory-acoustic data based on articulatory strokes". In ICASSP 2009 - 2009 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2009. http://dx.doi.org/10.1109/icassp.2009.4960628.
Wrench, Alan A., and Korin Richmond. "Continuous speech recognition using articulatory data". In 6th International Conference on Spoken Language Processing (ICSLP 2000). ISCA: ISCA, 2000. http://dx.doi.org/10.21437/icslp.2000-772.
Payan, Yohan. "A 2D Biomechanical Model of the Human Tongue". In ASME 1998 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 1998. http://dx.doi.org/10.1115/imece1998-0306.
Maharana, Sarthak Kumar, Aravind Illa, Renuka Mannem, Yamini Belur, Preetie Shetty, Veeramani Preethish Kumar, Seena Vengalil, Kiran Polavarapu, Nalini Atchayaram, and Prasanta Kumar Ghosh. "Acoustic-to-Articulatory Inversion for Dysarthric Speech by Using Cross-Corpus Acoustic-Articulatory Data". In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2021. http://dx.doi.org/10.1109/icassp39728.2021.9413625.
Ouni, Slim, Loïc Mangeonjean, and Ingmar Steiner. "Visartico: a visualization tool for articulatory data". In Interspeech 2012. ISCA: ISCA, 2012. http://dx.doi.org/10.21437/interspeech.2012-510.
Aron, Michael, Nicolas Ferveur, Erwan Kerrien, Marie-Odile Berger, and Yves Laprie. "Acquisition and synchronization of multimodal articulatory data". In Interspeech 2007. ISCA: ISCA, 2007. http://dx.doi.org/10.21437/interspeech.2007-25.
Wang, Jun, Ashok Samal, Jordan R. Green, and Tom D. Carrell. "Vowel recognition from articulatory position time-series data". In 2009 3rd International Conference on Signal Processing and Communication Systems (ICSPCS 2009). IEEE, 2009. http://dx.doi.org/10.1109/icspcs.2009.5306418.
Prom-on, Santitham, Peter Birkholz, and Yi Xu. "Training an articulatory synthesizer with continuous acoustic data". In Interspeech 2013. ISCA: ISCA, 2013. http://dx.doi.org/10.21437/interspeech.2013-98.
Krug, Paul Konstantin, Peter Birkholz, Branislav Gerazov, Daniel Rudolph van Niekerk, Anqi Xu, and Yi Xu. "Articulatory Synthesis for Data Augmentation in Phoneme Recognition". In Interspeech 2022. ISCA: ISCA, 2022. http://dx.doi.org/10.21437/interspeech.2022-10874.
Toth, Arthur R., and Alan W. Black. "Cross-speaker articulatory position data for phonetic feature prediction". In Interspeech 2005. ISCA: ISCA, 2005. http://dx.doi.org/10.21437/interspeech.2005-132.