A ready-made bibliography on the topic "Conversion of Human Signed Language"
Create a correct reference in APA, MLA, Chicago, Harvard, and many other citation styles
Browse lists of up-to-date articles, books, dissertations, abstracts, and other scholarly sources on the topic "Conversion of Human Signed Language".
An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of a scholarly publication in ".pdf" format and read its abstract online, provided the relevant details are available in the work's metadata.
Journal articles on the topic "Conversion of Human Signed Language"
Willoughby, Louisa, Howard Manns, Shimako Iwasaki, and Meredith Bartlett. "Are you trying to be funny? Communicating humour in deafblind conversations". Discourse Studies 21, no. 5 (May 15, 2019): 584–602. http://dx.doi.org/10.1177/1461445619846704.
Rowe, Meredith L. "Gesture, speech, and sign. Lynn Messing and Ruth Campbell (Eds.). New York: Oxford University Press, 1999. Pp. 227." Applied Psycholinguistics 22, no. 4 (December 2001): 643–47. http://dx.doi.org/10.1017/s0142716401224084.
Corina, David P., and Heather Patterson Knapp. "Signed Language and Human Action Processing". Annals of the New York Academy of Sciences 1145, no. 1 (December 2008): 100–112. http://dx.doi.org/10.1196/annals.1416.023.
Robinson, Octavian. "Puppets, Jesters, Memes, and Benevolence Porn: The Spectacle of Access". Przegląd Kulturoznawczy, no. 3 (53) (December 14, 2022): 329–44. http://dx.doi.org/10.4467/20843860pk.22.024.16613.
Wolfe, Rosalee, John C. McDonald, Thomas Hanke, Sarah Ebling, Davy Van Landuyt, Frankie Picron, Verena Krausneker, Eleni Efthimiou, Evita Fotinea, and Annelies Braffort. "Sign Language Avatars: A Question of Representation". Information 13, no. 4 (April 18, 2022): 206. http://dx.doi.org/10.3390/info13040206.
Gabarró-López, Sílvia, and Laurence Meurant. "Contrasting signed and spoken languages". Languages in Contrast 22, no. 2 (August 23, 2022): 169–94. http://dx.doi.org/10.1075/lic.00024.gab.
Corina, David. "Sign language and the brain: Apes, apraxia, and aphasia". Behavioral and Brain Sciences 19, no. 4 (December 1996): 633–34. http://dx.doi.org/10.1017/s0140525x00043338.
Slobin, Dan Isaac. "Breaking the Molds: Signed Languages and the Nature of Human Language". Sign Language Studies 8, no. 2 (2008): 114–30. http://dx.doi.org/10.1353/sls.2008.0004.
Thompson, Robin L., David P. Vinson, Bencie Woll, and Gabriella Vigliocco. "The Road to Language Learning Is Iconic". Psychological Science 23, no. 12 (November 12, 2012): 1443–48. http://dx.doi.org/10.1177/0956797612459763.
Mcburney, Susan Lloyd. "William Stokoe and the discipline of sign language linguistics". Historiographia Linguistica 28, no. 1-2 (September 7, 2001): 143–86. http://dx.doi.org/10.1075/hl.28.1.10mcb.
Pełny tekst źródłaRozprawy doktorskie na temat "Conversion of Human Signed Language"
Schneider, Andréia Rodrigues de Assunção. "Animação de humanos virtuais aplicada para língua brasileira de sinais". Biblioteca Digital de Teses e Dissertações da UFRGS, 2008. http://hdl.handle.net/10183/15313.
Deaf people have a limited capacity to use oral language to communicate and therefore use gestural languages as their native languages. This makes it especially difficult for them to access basic services satisfactorily and to integrate into the hearing world to which the majority of the population belongs. Because this language is purely gestural, the signs it comprises can be simulated through the animation of virtual humans without losing the correct perception of their inherent meanings (the words they represent). This work describes an animation technique for LIBRAS. The main idea is to take the movement of a sign from a description of its animation and execute it in a wider or narrower manner, so as to make better use of the space available for gesticulation without losing the meaning. The computer animation of a sign must be as close to the real gesture as possible: its meaning must be easily understood and its execution must be natural (smooth and continuous). For that, the signs must be defined in accordance with the movement limitations imposed by the human joints and with the field of view of the receiver. In addition, some relevant parameters must be analyzed and defined: the speed of the movement, and the timing and amplitude of the signs. Another important aspect is the space available for the execution of the sign: depending on the area, the sign must be animated so that it fits properly within it. The implementation of the technique resulted in an animation system for LIBRAS consisting of three modules:
• a virtual human modeler, so that the joints and degrees of freedom (DOFs) are anatomically consistent with reality;
• a gesture generator, responsible for processing parameters such as speed, execution time of the gesture, and joint configuration into a file that describes the animation of the pose. It is worth emphasizing that the words of LIBRAS are known as signs; a sign, in turn, is composed of one or more gestures, and gestures are composed of poses;
• an animator, responsible for generating the animation of a previously created sign, fitting (if necessary) the sign's amplitude to the space available for its animation.
The resulting system was submitted to tests in order to validate the technique. The goal of the tests was to check whether the generated signs were understandable, that is, whether the generated animation represented the intended word. All the aspects above are presented and analyzed in detail.
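The amplitude-fitting step described in this abstract, scaling a sign's movement to the space available for gesticulation while respecting anatomical joint limits, can be illustrated with a minimal sketch. This is not the thesis's actual implementation; the function and class names (fit_sign_amplitude, JointLimit), the joint names, and the numeric values below are hypothetical, chosen only to show the general idea.

```python
# Hedged sketch: scale a sign's keyframe joint angles so the gesture fits the
# available gesticulation space, clamping each angle to an anatomically valid
# range. All names and numbers are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class JointLimit:
    min_deg: float  # smallest anatomically valid angle, in degrees
    max_deg: float  # largest anatomically valid angle, in degrees


def fit_sign_amplitude(keyframes, joint_limits, available_space, nominal_space):
    """Scale each keyframe's joint angles by the ratio of available to nominal
    gesticulation space, then clamp to the human joint limits."""
    scale = min(1.0, available_space / nominal_space)
    fitted = []
    for frame in keyframes:  # frame: dict of joint name -> angle in degrees
        new_frame = {}
        for joint, angle in frame.items():
            limit = joint_limits[joint]
            scaled = angle * scale
            new_frame[joint] = max(limit.min_deg, min(limit.max_deg, scaled))
        fitted.append(new_frame)
    return fitted


# Example: shrink a two-pose sign to 70% of its nominal gesticulation space.
limits = {"shoulder_flex": JointLimit(-60.0, 180.0),
          "elbow_flex": JointLimit(0.0, 145.0)}
poses = [{"shoulder_flex": 90.0, "elbow_flex": 120.0},
         {"shoulder_flex": 40.0, "elbow_flex": 30.0}]
print(fit_sign_amplitude(poses, limits, available_space=0.7, nominal_space=1.0))
```

In the thesis's terms, such a step would sit in the animator module, after the gesture generator has produced the pose descriptions; smoothing and timing (speed, execution time) are deliberately omitted from this sketch.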
Books on the topic "Conversion of Human Signed Language"
Olivier, Pietquin, and SpringerLink (Online service), eds. Data-Driven Methods for Adaptive Spoken Dialogue Systems: Computational Learning for Conversational Interfaces. New York, NY: Springer New York, 2012.
Sang-in, Chŏn, ed. Hanʼguk hyŏndaesa: Chinsil kwa haesŏk. Kyŏnggi-do Pʻaju-si: Nanam Chʻulpʻan, 2005.
Wilcox, Sherman, and Corrine Occhino. Historical Change in Signed Languages. Oxford University Press, 2016. http://dx.doi.org/10.1093/oxfordhb/9780199935345.013.24.
Lemon, Oliver, and Olivier Pietquin. Data-Driven Methods for Adaptive Spoken Dialogue Systems: Computational Learning for Conversational Interfaces. Springer, 2014.
Lemon, Oliver, and Olivier Pietquin. Data-Driven Methods for Adaptive Spoken Dialogue Systems: Computational Learning for Conversational Interfaces. Springer, 2012.
Coletânea de pragmática: grupo de pesquisa linguagem, comunicação e cognição - VOL II. Brazil Publishing, 2022. http://dx.doi.org/10.31012/978-65-5861-431-9.
Ruokanen, Miikka. Trinitarian Grace in Martin Luther's The Bondage of the Will. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780192895837.001.0001.
Pełny tekst źródłaCzęści książek na temat "Conversion of Human Signed Language"
Wilcox, Sherman, and Jill P. Morford. "Empirical methods in signed language research". In Human Cognitive Processing, 171–200. Amsterdam: John Benjamins Publishing Company, 2007. http://dx.doi.org/10.1075/hcp.18.14wil.
Bono, Mayumi, Tomohiro Okada, Kouhei Kikuchi, Rui Sakaida, Victor Skobov, Yusuke Miyao, and Yutaka Osugi. "Chapter 13. Utterance unit annotation for the Japanese Sign Language Dialogue Corpus". In Advances in Sign Language Corpus Linguistics, 353–82. Amsterdam: John Benjamins Publishing Company, 2023. http://dx.doi.org/10.1075/scl.108.13bon.
Crasborn, Onno, and Menzo Windhouwer. "ISOcat Data Categories for Signed Language Resources". In Gesture and Sign Language in Human-Computer Interaction and Embodied Communication, 118–28. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-34182-3_11.
Kallimani, Jagadish S., V. K. Ananthashayana, and Debjani Goswami. "The Feature Extraction Algorithm for the Production of Emotions in Text-to-Speech (TTS) System for an Indian Regional Language". In Advances in Business Information Systems and Analytics, 17–30. IGI Global, 2010. http://dx.doi.org/10.4018/978-1-60566-996-0.ch003.
Evans, Carolyn, and Timnah Rachel Baker. "Communal Religious Rights or Majoritarian Oppression". In Freedom of Religion, Secularism, and Human Rights, 69–94. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780198812067.003.0004.
Morosov, Ivete. "A PRAGMÁTICA E OS JOGOS COMPORTAMENTAIS NAS COMUNICAÇÕES ORGANIZACIONAIS". In Coletânea de Pragmática: Grupo de Pesquisa Linguagem, comunicação e cognição. Brazil Publishing, 2020. http://dx.doi.org/10.31012/978-65-5861-075-5-10.
Dias, Luzia Schalkoski, and Angela Mari Gusso. "ANÁLISE MULTIMODAL DAS ESTRATÉGIAS DE POLIDEZ EM CAMPANHA DE DOAÇÃO DE SANGUE DO MINISTÉRIO DA SAÚDE". In Coletânea de Pragmática: Grupo de Pesquisa Linguagem, comunicação e cognição. Brazil Publishing, 2020. http://dx.doi.org/10.31012/978-65-5861-075-5-11.
Ferreira, Marina Xavier. "A OSTENSÃO COMO ELEMENTO PRAGMÁTICO DA LIBRAS". In Coletânea de Pragmática: Grupo de Pesquisa Linguagem, comunicação e cognição. Brazil Publishing, 2020. http://dx.doi.org/10.31012/978-65-5861-075-5-12.
Santos, Sebastião Lourenço dos. "LINGUAGEM E COGNIÇÃO: UMA ABORDAGEM INTERDISCIPLINAR DOS PROCESSOS DE INTERPRETAÇÃO HUMANA". In Coletânea de Pragmática: Grupo de Pesquisa Linguagem, comunicação e cognição. Brazil Publishing, 2020. http://dx.doi.org/10.31012/978-65-5861-075-5-2.
Ferreira, Rodrigo Bueno, and Elena Godoy. "POÉTICA COGNITIVA: A PRAGMÁTICA NA COMUNICAÇÃO LITERÁRIA". In Coletânea de Pragmática: Grupo de Pesquisa Linguagem, comunicação e cognição. Brazil Publishing, 2020. http://dx.doi.org/10.31012/978-65-5861-075-5-3.
Pełny tekst źródłaStreszczenia konferencji na temat "Conversion of Human Signed Language"
Vaswani, Vaishali, Akriti Kumari Singh, Anshuman Shastri, and Namrata Arora Charpe. "COMM-G: A Communication Glove for Smart Communication". In Human Interaction and Emerging Technologies (IHIET-AI 2022) Artificial Intelligence and Future Applications. AHFE International, 2022. http://dx.doi.org/10.54941/ahfe100905.
Krishnan, Vinodh, and Jacob Eisenstein. ""You’re Mr. Lebowski, I’m the Dude": Inducing Address Term Formality in Signed Social Networks". In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stroudsburg, PA, USA: Association for Computational Linguistics, 2015. http://dx.doi.org/10.3115/v1/n15-1185.
Murtaza, Zain, Hadia Akmal, Wardah Afzal, Hasan Erteza Gelani, Zain ul Abdin, and Muhammad Hamza Gulzar. "Human Computer Interaction Based on Gestural Cues Recognition/Sign Language to Text Conversion". In 2019 International Conference on Engineering and Emerging Technologies (ICEET). IEEE, 2019. http://dx.doi.org/10.1109/ceet1.2019.8711835.
Berg-Kirkpatrick, Taylor, and Dan Klein. "GPU-Friendly Local Regression for Voice Conversion". In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stroudsburg, PA, USA: Association for Computational Linguistics, 2015. http://dx.doi.org/10.3115/v1/n15-1148.
Kim, Hwa-Yeon, Jong-Hwan Kim, and Jae-Min Kim. "Fast Bilingual Grapheme-To-Phoneme Conversion". In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track. Stroudsburg, PA, USA: Association for Computational Linguistics, 2022. http://dx.doi.org/10.18653/v1/2022.naacl-industry.32.
Aldeneh, Zakaria, Matthew Perez, and Emily Mower Provost. "Learning Paralinguistic Features from Audiobooks through Style Voice Conversion". In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stroudsburg, PA, USA: Association for Computational Linguistics, 2021. http://dx.doi.org/10.18653/v1/2021.naacl-main.377.
Yamasaki, Tomohiro. "Grapheme-to-Phoneme Conversion for Thai using Neural Regression Models". In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stroudsburg, PA, USA: Association for Computational Linguistics, 2022. http://dx.doi.org/10.18653/v1/2022.naacl-main.315.
Godage, Ishika, Ruvan Weerasignhe, and Damitha Sandaruwan. "Sign Language Recognition for Sentence Level Continuous Signings". In 10th International Conference on Natural Language Processing (NLP 2021). Academy and Industry Research Collaboration Center (AIRCC), 2021. http://dx.doi.org/10.5121/csit.2021.112305.
Alhalwachi, Ali, Omar Moreno-Flores, Shelbie Davis, Matthew Torrey, Khalid Altamimi, and Shawn Duan. "Design and Assembly of a New Methane Generation System for Energy Conversion From Biowaste". In ASME 2020 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/imece2020-23482.
Rama, Taraka, Anil Kumar Singh, and Sudheer Kolachina. "Modeling letter-to-phoneme conversion as a phrase based statistical machine translation problem with minimum error rate training". In Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Student Research Workshop and Doctoral Consortium. Morristown, NJ, USA: Association for Computational Linguistics, 2009. http://dx.doi.org/10.3115/1620932.1620948.