A selection of scholarly literature on the topic "Conversion of Human Signed Language"
Format your sources in APA, MLA, Chicago, Harvard, and other citation styles
Browse lists of current articles, books, dissertations, theses, and other scholarly sources on the topic "Conversion of Human Signed Language."
Next to each work in the list of references you will find an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of a scholarly publication in .pdf format and read its abstract online, when these are available in the source's metadata.
Journal articles on the topic "Conversion of Human Signed Language"
Willoughby, Louisa, Howard Manns, Shimako Iwasaki, and Meredith Bartlett. "Are you trying to be funny? Communicating humour in deafblind conversations." Discourse Studies 21, no. 5 (May 15, 2019): 584–602. http://dx.doi.org/10.1177/1461445619846704.
Rowe, Meredith L. "Gesture, speech, and sign. Lynn Messing and Ruth Campbell (Eds.). New York: Oxford University Press, 1999. Pp. 227." Applied Psycholinguistics 22, no. 4 (December 2001): 643–47. http://dx.doi.org/10.1017/s0142716401224084.
Corina, David P., and Heather Patterson Knapp. "Signed Language and Human Action Processing." Annals of the New York Academy of Sciences 1145, no. 1 (December 2008): 100–112. http://dx.doi.org/10.1196/annals.1416.023.
Robinson, Octavian. "Puppets, Jesters, Memes, and Benevolence Porn: The Spectacle of Access." Przegląd Kulturoznawczy, no. 3 (53) (December 14, 2022): 329–44. http://dx.doi.org/10.4467/20843860pk.22.024.16613.
Wolfe, Rosalee, John C. McDonald, Thomas Hanke, Sarah Ebling, Davy Van Landuyt, Frankie Picron, Verena Krausneker, Eleni Efthimiou, Evita Fotinea, and Annelies Braffort. "Sign Language Avatars: A Question of Representation." Information 13, no. 4 (April 18, 2022): 206. http://dx.doi.org/10.3390/info13040206.
Gabarró-López, Sílvia, and Laurence Meurant. "Contrasting signed and spoken languages." Languages in Contrast 22, no. 2 (August 23, 2022): 169–94. http://dx.doi.org/10.1075/lic.00024.gab.
Corina, David. "Sign language and the brain: Apes, apraxia, and aphasia." Behavioral and Brain Sciences 19, no. 4 (December 1996): 633–34. http://dx.doi.org/10.1017/s0140525x00043338.
Slobin, Dan Isaac. "Breaking the Molds: Signed Languages and the Nature of Human Language." Sign Language Studies 8, no. 2 (2008): 114–30. http://dx.doi.org/10.1353/sls.2008.0004.
Thompson, Robin L., David P. Vinson, Bencie Woll, and Gabriella Vigliocco. "The Road to Language Learning Is Iconic." Psychological Science 23, no. 12 (November 12, 2012): 1443–48. http://dx.doi.org/10.1177/0956797612459763.
McBurney, Susan Lloyd. "William Stokoe and the discipline of sign language linguistics." Historiographia Linguistica 28, no. 1-2 (September 7, 2001): 143–86. http://dx.doi.org/10.1075/hl.28.1.10mcb.
Повний текст джерелаДисертації з теми "Conversion of Human Signed Language"
Schneider, Andréia Rodrigues de Assunção. "Animação de humanos virtuais aplicada para língua brasileira de sinais." Biblioteca Digital de Teses e Dissertações da UFRGS, 2008. http://hdl.handle.net/10183/15313.
Deaf people have limited access to oral language and therefore use gestural languages as their native languages. This makes it especially difficult for them to use basic services satisfactorily and to integrate into the hearing world, to which the majority of the population belongs. Because these languages are purely gestural, their signs can be simulated by animating virtual humans without losing the correct perception of their inherent meanings (the words they represent). This work describes an animation technique for LIBRAS. The main idea is to take the movement of a sign from a description of its animation and execute it over a wider or narrower extent, making the best use of the space available for gesticulation without losing the meaning. The computer animation of a sign must be as close to the real gesture as possible: its meaning must be easily understood, and its execution must be natural (smooth and continuous). To achieve this, signs must be defined in accordance with the movement limitations imposed by the human joints and with the receiver's field of view. In addition, some relevant parameters must be analyzed and defined: the speed, duration, and amplitude of the signs. Another important aspect is the space available for executing a sign: depending on the area, the sign must be animated so that it fits properly within it. The implementation of this technique resulted in an animation system for LIBRAS consisting of three modules:
• a virtual human modeler, which keeps the joints and degrees of freedom (DOFs) anatomically consistent with reality;
• a gesture generator, which processes parameters such as speed, execution time, and joint configuration into a file describing the animation of each pose (note that in LIBRAS the words are known as signs; a sign is composed of one or more gestures, which are in turn composed of poses);
• an animator, which generates the animation of a previously created sign, fitting its amplitude (if necessary) to the space available for its animation.
The resulting system was tested in order to validate the technique. The goal of the tests was to check whether the generated signs were understandable, i.e., whether each generated animation represented the intended word. All of these aspects are presented and analyzed in detail.
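The amplitude-fitting step this abstract describes (adapting a sign's articulation to the available gesticulation space while respecting joint limits) can be sketched as a uniform scaling of joint rotations clamped to anatomical ranges. The names, data layout, and uniform-scaling rule below are illustrative assumptions, not the thesis's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class JointPose:
    """One joint's rotation (degrees) at a key pose of a sign."""
    name: str
    angle: float        # rotation relative to the rest pose
    min_angle: float    # anatomical lower limit
    max_angle: float    # anatomical upper limit

def fit_sign_amplitude(poses: list[JointPose], scale: float) -> list[JointPose]:
    """Scale a sign's articulation to the available gesticulation space.

    scale < 1 shrinks the sign for a small display area, scale > 1 widens
    it; each scaled angle is clamped to the joint's anatomical range so
    the virtual human stays physically plausible.
    """
    fitted = []
    for p in poses:
        scaled = max(p.min_angle, min(p.max_angle, p.angle * scale))
        fitted.append(JointPose(p.name, scaled, p.min_angle, p.max_angle))
    return fitted

# Example: shrink a two-joint key pose to 60% of its nominal amplitude.
pose = [
    JointPose("r_shoulder_abduction", 80.0, -30.0, 150.0),
    JointPose("r_elbow_flexion", 120.0, 0.0, 145.0),
]
small = fit_sign_amplitude(pose, 0.6)
print([round(p.angle, 1) for p in small])  # [48.0, 72.0]
```

In a full system, the clamping would also have to preserve the perceptual identity of the sign (per the abstract, amplitude may vary only as long as the meaning survives), which is what the described user tests validate.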
Books on the topic "Conversion of Human Signed Language"
Olivier, Pietquin, and SpringerLink (Online service), eds. Data-Driven Methods for Adaptive Spoken Dialogue Systems: Computational Learning for Conversational Interfaces. New York, NY: Springer New York, 2012.
Sang-in, Chŏn, ed. Hanʼguk hyŏndaesa: Chinsil kwa haesŏk. Kyŏnggi-do Pʻaju-si: Nanam Chʻulpʻan, 2005.
Wilcox, Sherman, and Corrine Occhino. Historical Change in Signed Languages. Oxford University Press, 2016. http://dx.doi.org/10.1093/oxfordhb/9780199935345.013.24.
Lemon, Oliver, and Olivier Pietquin. Data-Driven Methods for Adaptive Spoken Dialogue Systems: Computational Learning for Conversational Interfaces. Springer, 2014.
Lemon, Oliver, and Olivier Pietquin. Data-Driven Methods for Adaptive Spoken Dialogue Systems: Computational Learning for Conversational Interfaces. Springer, 2012.
Coletânea de pragmática: grupo de pesquisa linguagem, comunicação e cognição - VOL II. Brazil Publishing, 2022. http://dx.doi.org/10.31012/978-65-5861-431-9.
Ruokanen, Miikka. Trinitarian Grace in Martin Luther's The Bondage of the Will. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780192895837.001.0001.
Повний текст джерелаЧастини книг з теми "Conversion of Human Signed Language"
Wilcox, Sherman, and Jill P. Morford. "Empirical methods in signed language research." In Human Cognitive Processing, 171–200. Amsterdam: John Benjamins Publishing Company, 2007. http://dx.doi.org/10.1075/hcp.18.14wil.
Bono, Mayumi, Tomohiro Okada, Kouhei Kikuchi, Rui Sakaida, Victor Skobov, Yusuke Miyao, and Yutaka Osugi. "Chapter 13. Utterance unit annotation for the Japanese Sign Language Dialogue Corpus." In Advances in Sign Language Corpus Linguistics, 353–82. Amsterdam: John Benjamins Publishing Company, 2023. http://dx.doi.org/10.1075/scl.108.13bon.
Crasborn, Onno, and Menzo Windhouwer. "ISOcat Data Categories for Signed Language Resources." In Gesture and Sign Language in Human-Computer Interaction and Embodied Communication, 118–28. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-34182-3_11.
Kallimani, Jagadish S., V. K. Ananthashayana, and Debjani Goswami. "The Feature Extraction Algorithm for the Production of Emotions in Text-to-Speech (TTS) System for an Indian Regional Language." In Advances in Business Information Systems and Analytics, 17–30. IGI Global, 2010. http://dx.doi.org/10.4018/978-1-60566-996-0.ch003.
Evans, Carolyn, and Timnah Rachel Baker. "Communal Religious Rights or Majoritarian Oppression." In Freedom of Religion, Secularism, and Human Rights, 69–94. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780198812067.003.0004.
Morosov, Ivete. "A PRAGMÁTICA E OS JOGOS COMPORTAMENTAIS NAS COMUNICAÇÕES ORGANIZACIONAIS." In Coletânea de Pragmática: Grupo de Pesquisa Linguagem, comunicação e cognição. Brazil Publishing, 2020. http://dx.doi.org/10.31012/978-65-5861-075-5-10.
Dias, Luzia Schalkoski, and Angela Mari Gusso. "ANÁLISE MULTIMODAL DAS ESTRATÉGIAS DE POLIDEZ EM CAMPANHA DE DOAÇÃO DE SANGUE DO MINISTÉRIO DA SAÚDE." In Coletânea de Pragmática: Grupo de Pesquisa Linguagem, comunicação e cognição. Brazil Publishing, 2020. http://dx.doi.org/10.31012/978-65-5861-075-5-11.
Ferreira, Marina Xavier. "A OSTENSÃO COMO ELEMENTO PRAGMÁTICO DA LIBRAS." In Coletânea de Pragmática: Grupo de Pesquisa Linguagem, comunicação e cognição. Brazil Publishing, 2020. http://dx.doi.org/10.31012/978-65-5861-075-5-12.
Santos, Sebastião Lourenço dos. "LINGUAGEM E COGNIÇÃO: UMA ABORDAGEM INTERDISCIPLINAR DOS PROCESSOS DE INTERPRETAÇÃO HUMANA." In Coletânea de Pragmática: Grupo de Pesquisa Linguagem, comunicação e cognição. Brazil Publishing, 2020. http://dx.doi.org/10.31012/978-65-5861-075-5-2.
Ferreira, Rodrigo Bueno, and Elena Godoy. "POÉTICA COGNITIVA: A PRAGMÁTICA NA COMUNICAÇÃO LITERÁRIA." In Coletânea de Pragmática: Grupo de Pesquisa Linguagem, comunicação e cognição. Brazil Publishing, 2020. http://dx.doi.org/10.31012/978-65-5861-075-5-3.
Повний текст джерелаТези доповідей конференцій з теми "Conversion of Human Signed Language"
Vaswani, Vaishali, Akriti Kumari Singh, Anshuman Shastri, and Namrata Arora Charpe. "COMM-G: A Communication Glove for Smart Communication." In Human Interaction and Emerging Technologies (IHIET-AI 2022) Artificial Intelligence and Future Applications. AHFE International, 2022. http://dx.doi.org/10.54941/ahfe100905.
Krishnan, Vinodh, and Jacob Eisenstein. ""You’re Mr. Lebowski, I’m the Dude": Inducing Address Term Formality in Signed Social Networks." In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stroudsburg, PA, USA: Association for Computational Linguistics, 2015. http://dx.doi.org/10.3115/v1/n15-1185.
Murtaza, Zain, Hadia Akmal, Wardah Afzal, Hasan Erteza Gelani, Zain ul Abdin, and Muhammad Hamza Gulzar. "Human Computer Interaction Based on Gestural Cues Recognition/Sign Language to Text Conversion." In 2019 International Conference on Engineering and Emerging Technologies (ICEET). IEEE, 2019. http://dx.doi.org/10.1109/ceet1.2019.8711835.
Berg-Kirkpatrick, Taylor, and Dan Klein. "GPU-Friendly Local Regression for Voice Conversion." In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stroudsburg, PA, USA: Association for Computational Linguistics, 2015. http://dx.doi.org/10.3115/v1/n15-1148.
Kim, Hwa-Yeon, Jong-Hwan Kim, and Jae-Min Kim. "Fast Bilingual Grapheme-To-Phoneme Conversion." In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track. Stroudsburg, PA, USA: Association for Computational Linguistics, 2022. http://dx.doi.org/10.18653/v1/2022.naacl-industry.32.
Aldeneh, Zakaria, Matthew Perez, and Emily Mower Provost. "Learning Paralinguistic Features from Audiobooks through Style Voice Conversion." In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stroudsburg, PA, USA: Association for Computational Linguistics, 2021. http://dx.doi.org/10.18653/v1/2021.naacl-main.377.
Yamasaki, Tomohiro. "Grapheme-to-Phoneme Conversion for Thai using Neural Regression Models." In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stroudsburg, PA, USA: Association for Computational Linguistics, 2022. http://dx.doi.org/10.18653/v1/2022.naacl-main.315.
Godage, Ishika, Ruvan Weerasignhe, and Damitha Sandaruwan. "Sign Language Recognition for Sentence Level Continuous Signings." In 10th International Conference on Natural Language Processing (NLP 2021). Academy and Industry Research Collaboration Center (AIRCC), 2021. http://dx.doi.org/10.5121/csit.2021.112305.
Alhalwachi, Ali, Omar Moreno-Flores, Shelbie Davis, Matthew Torrey, Khalid Altamimi, and Shawn Duan. "Design and Assembly of a New Methane Generation System for Energy Conversion From Biowaste." In ASME 2020 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/imece2020-23482.
Rama, Taraka, Anil Kumar Singh, and Sudheer Kolachina. "Modeling letter-to-phoneme conversion as a phrase based statistical machine translation problem with minimum error rate training." In Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Student Research Workshop and Doctoral Consortium. Morristown, NJ, USA: Association for Computational Linguistics, 2009. http://dx.doi.org/10.3115/1620932.1620948.