A selection of scholarly literature on the topic "Human Signed Language"

Browse the lists of current articles, books, dissertations, conference papers, and other scholarly sources on the topic "Human Signed Language".

Next to every work in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a publication in .pdf format and read its abstract online, provided the relevant details are available in the metadata.

Journal articles on the topic "Human Signed Language"

1

Corina, David P., and Heather Patterson Knapp. "Signed Language and Human Action Processing." Annals of the New York Academy of Sciences 1145, no. 1 (December 2008): 100–112. http://dx.doi.org/10.1196/annals.1416.023.

2

Gabarró-López, Sílvia, and Laurence Meurant. "Contrasting signed and spoken languages." Languages in Contrast 22, no. 2 (August 23, 2022): 169–94. http://dx.doi.org/10.1075/lic.00024.gab.

Abstract:
For years, the study of spoken languages, on the basis of written and then also oral productions, was the only way to investigate the human language capacity. As an introduction to this first volume of Languages in Contrast devoted to the comparison of spoken and signed languages, we propose to look at the reasons for the late emergence of the consideration of signed languages and multimodality in language studies. Next, the main stages of the history of sign language research are summarized. We highlight the benefits of studying cross-modal and multimodal data, as opposed to the isolated investigation of signed or spoken languages, and point out the remaining methodological obstacles to this approach. This contextualization prefaces the presentation of the outline of the volume.
3

Slobin, Dan Isaac. "Breaking the Molds: Signed Languages and the Nature of Human Language." Sign Language Studies 8, no. 2 (2008): 114–30. http://dx.doi.org/10.1353/sls.2008.0004.

4

Robinson, Octavian. "Puppets, Jesters, Memes, and Benevolence Porn: The Spectacle of Access." Przegląd Kulturoznawczy, no. 3 (53) (December 14, 2022): 329–44. http://dx.doi.org/10.4467/20843860pk.22.024.16613.

Abstract:
Signed language interpreters’ proximity to significant political figures and entertainers invites the nondisabled gaze. The spotlight on interpreters in the media is a symptom of celebrity culture intersected with toxic benevolence. This paper considers media attention given to interpreters as a site of tension surrounding attitudes toward access for disabled people. Signed language interpretation is provided for deaf people’s access. The presence of signed language interpreters in public spaces and their proximity to significant figures subjects signed languages to public consumption, which is then rendered into sources of entertainment for nonsigning people. The reduction of signed language interpreters to entertainment material signifies the value placed upon accessibility, creates hostile workspaces for signed language interpreters, and reinforces notions of signed languages as novelties. Such actions have adverse effects on signing deaf people’s linguistic human rights and their ability to participate as informed citizens in their respective communities. The media, its audiences, and some of the ways that interpreters have embraced such attention have actively co-produced signed language interpretation as a venue for ableism, linguistic chauvinism, and displacement.
5

Corina, David. "Sign language and the brain: Apes, apraxia, and aphasia." Behavioral and Brain Sciences 19, no. 4 (December 1996): 633–34. http://dx.doi.org/10.1017/s0140525x00043338.

Abstract:
The study of signed languages has inspired scientific speculation regarding the foundations of human language. Relationships between the acquisition of sign language in apes and man are discounted on logical grounds. Evidence from the differential breakdown of sign language and manual pantomime places limits on the degree of overlap between language and nonlanguage motor systems. Evidence from functional magnetic resonance imaging reveals neural areas of convergence and divergence underlying signed and spoken languages.
6

Thompson, Robin L., David P. Vinson, Bencie Woll, and Gabriella Vigliocco. "The Road to Language Learning Is Iconic." Psychological Science 23, no. 12 (November 12, 2012): 1443–48. http://dx.doi.org/10.1177/0956797612459763.

Abstract:
An arbitrary link between linguistic form and meaning is generally considered a universal feature of language. However, iconic (i.e., nonarbitrary) mappings between properties of meaning and features of linguistic form are also widely present across languages, especially signed languages. Although recent research has shown a role for sign iconicity in language processing, research on the role of iconicity in sign-language development has been mixed. In this article, we present clear evidence that iconicity plays a role in sign-language acquisition for both the comprehension and production of signs. Signed languages were taken as a starting point because they tend to encode a higher degree of iconic form-meaning mappings in their lexicons than spoken languages do, but our findings are more broadly applicable: Specifically, we hypothesize that iconicity is fundamental to all languages (signed and spoken) and that it serves to bridge the gap between linguistic form and human experience.
7

Wolfe, Rosalee, John C. McDonald, Thomas Hanke, Sarah Ebling, Davy Van Landuyt, Frankie Picron, Verena Krausneker, Eleni Efthimiou, Evita Fotinea, and Annelies Braffort. "Sign Language Avatars: A Question of Representation." Information 13, no. 4 (April 18, 2022): 206. http://dx.doi.org/10.3390/info13040206.

Abstract:
Given the achievements in automatically translating text from one language to another, one would expect to see similar advancements in translating between signed and spoken languages. However, progress in this effort has lagged in comparison. Typically, machine translation consists of processing text from one language to produce text in another. Because signed languages have no generally-accepted written form, translating spoken to signed language requires the additional step of displaying the language visually as animation through the use of a three-dimensional (3D) virtual human commonly known as an avatar. Researchers have been grappling with this problem for over twenty years, and it is still an open question. With the goal of developing a deeper understanding of the challenges posed by this question, this article gives a summary overview of the unique aspects of signed languages, briefly surveys the technology underlying avatars and performs an in-depth analysis of the features in a textual representation for avatar display. It concludes with a comparison of these features and makes observations about future research directions.
8

Mcburney, Susan Lloyd. "William Stokoe and the discipline of sign language linguistics." Historiographia Linguistica 28, no. 1-2 (September 7, 2001): 143–86. http://dx.doi.org/10.1075/hl.28.1.10mcb.

Abstract:
The first modern linguistic analysis of a signed language was published in 1960 – William Clarence Stokoe’s (1919–2000) Sign Language Structure. Although the initial impact of Stokoe’s monograph on linguistics and education was minimal, his work formed a solid base for what was to become a new field of research: American Sign Language (ASL) Linguistics. Together with the work of those that followed (in particular Ursula Bellugi and colleagues), Stokoe’s ground-breaking work on the structure of ASL has led to an acceptance of signed languages as autonomous linguistic systems that exhibit the complex structure characteristic of all human languages.
9

Wilbur, Ronnie B. "What does the study of signed languages tell us about ‘language’?" Investigating Understudied Sign Languages - Croatian SL and Austrian SL, with comparison to American SL 9, no. 1-2 (December 31, 2006): 5–32. http://dx.doi.org/10.1075/sll.9.1.04wil.

Abstract:
Linguists focusing on what all languages have in common seek to identify universals, tendencies, and other patterns to construct a general model of human language, Universal Grammar (UG). The design features of this model are that it must account for linguistic universals, account for linguistic diversity, and account for language learnability. Sign languages contribute to the construction of this model by providing a new source of data, permitting the claims and assumptions of UG to be rigorously tested and modified. One result of this research has been that the notion of ‘language’ itself has been clarified, clearly separating it from speech. It has also been possible to identify the design features of ‘natural languages’ themselves, and then to explain why pedagogical signing systems are not natural languages. This paper provides an overview of these issues.
10

Knapp, Heather Patterson, and David P. Corina. "A human mirror neuron system for language: Perspectives from signed languages of the deaf." Brain and Language 112, no. 1 (January 2010): 36–43. http://dx.doi.org/10.1016/j.bandl.2009.04.002.


Dissertations on the topic "Human Signed Language"

1

Schneider, Andréia Rodrigues de Assunção. "Animação de humanos virtuais aplicada para língua brasileira de sinais." Biblioteca Digital de Teses e Dissertações da UFRGS, 2008. http://hdl.handle.net/10183/15313.

Abstract:
Deaf people have a limited capacity to use oral language for communication and therefore have gestural languages as their native languages. This makes it difficult for them to use basic services in a satisfactory way and to integrate into the hearing society to which the majority of the population belongs. Because this language is gestural, its signs can be simulated through the animation of virtual humans without losing the correct perception of their meaning (which word a sign represents). This work describes an animation technique applied to LIBRAS (Brazilian Sign Language). The main idea is, starting from the description of the animation of a given sign, to execute its movement in a more or less ample way so as to make use of the space available for gesticulation without losing the meaning of the sign. The computer animation of a sign must be as close to the real gesture as possible: its meaning must be easily understood and its execution must be natural (smooth and continuous). For that, signs must be defined in accordance with the movement limitations of the human joints and with the receiver's field of view. In addition, some parameters must be analyzed and defined: speed of the movement, timing, and amplitude of the signs. Another important aspect is the space available for the execution of the sign: depending on that space, the sign must be animated so as to fit within it. The implementation of the technique resulted in an animation system for LIBRAS composed of three modules: • a virtual human modeler, ensuring that the joints and DOFs are anatomically consistent with reality; • a gesture generator, responsible for transforming parameters such as speed, execution time of the gesture, and joint configuration into a file that describes the animation of a pose (it is worth emphasizing that words in LIBRAS are known as signs; a sign is composed of one or more gestures, and gestures are composed of poses); • an animator, responsible for generating the animation of a previously created sign, fitting (if necessary) the amplitude of the sign to the space available for its execution. The system was submitted to tests in order to validate the technique. The goal of the tests was to verify whether the generated signs were understandable, that is, whether the generated animation represented a given word. All of the aspects mentioned above are presented and analyzed in detail.
2

Losson, Olivier. "Modélisation du geste communicatif et réalisation d'un signeur virtuel de phrases en langue des signes française." Phd thesis, Université des Sciences et Technologie de Lille - Lille I, 2000. http://tel.archives-ouvertes.fr/tel-00003332.

Abstract:
Within the field of communicative gesture, French Sign Language (LSF) is a privileged object of study, precisely because of the richness conferred by its status as a language. Our work aims at a system for synthesizing LSF sentences from an intermediate textual representation, with a view to full translation from French into signs. A formal grammar derived from linguistically relevant features is proposed to specify signs, based on their decomposition into formational primitives (handshapes, movement, etc.). The resulting hierarchical description includes spatio-temporal characteristics (body locations, articulator symmetry, repetition); a detailed study of velocity profiles was also carried out to represent movement dynamics accurately. At the discourse level, the grammatical processes specific to gestural languages come into play: parameterizing signs makes it possible not only to describe generic lexical items but also to handle localization and pronominal reference mechanisms. Non-manual expressiveness, particularly facial expression, is of prime importance for marking clause type. The system has been fully implemented, resulting in the animation of a virtual signer. The requirement of natural configurations for the articulated chains called for a realistic avatar model and dedicated inverse-kinematics methods for hand orientation and positioning. The whole pipeline, from the syntactic parser to the three-dimensional graphics generation module, constitutes an efficient prototype for producing signed sentences. Equipped with a graphical interface, it points (as an illustrative example shows) to a whole range of applications for which video is not suitable, chiefly by exploiting the compactness of the encoding and the speed with which signs are produced.
3

Benchiheub, Mohamed-El-Fatah. "Contribution à l'analyse des mouvements 3D de la Langue des Signes Française (LSF) en Action et en Perception." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS559/document.

Abstract:
Sign Language (SL) is still little described, particularly as concerns the movement of the articulators. Research on SL has focused on understanding and modeling linguistic properties; few investigations have been carried out to understand the kinematics and dynamics of the movement itself and what they contribute to the comprehensibility of SL generated by models. This thesis deals with the analysis of movement in French Sign Language (LSF), focusing both on its production and on its comprehension by deaf people. Better understanding movement in SL requires the creation of new resources for the scientific community studying SL. In this framework, we created and annotated a corpus of 3D motion data of the upper body and face, using a motion capture system. Processing this corpus made it possible to specify the kinematics of movement in SL during signs and transitions. The first contribution of this thesis was to quantify to what extent certain classical laws known in motor control remain valid during SL movements, in order to determine whether knowledge acquired in motor control can be exploited for SL. Finding which information in the movement is crucial for understanding SL constituted the second part of this thesis; the aim was to know which aspects of movement SL production models should replicate as a priority. In this approach, we examined to what extent deaf individuals, whether signers or not, were able to understand SL depending on the amount of information available to them.
4

Borgia, Fabrizio. "Informatisation d'une forme graphique des Langues des Signes : application au système d'écriture SignWriting." Thesis, Toulouse 3, 2015. http://www.theses.fr/2015TOU30030/document.

Abstract:
The studies and the software presented in this work are addressed to a significant minority within our society, namely deaf people. Many studies demonstrate that, for several reasons, deaf people experience significant difficulties in using a vocal language (VL: English, Chinese, etc.); many of them therefore prefer to communicate in a Sign Language (SL). From the standpoint of computer science, SLs currently constitute a group of linguistic minorities that are underrepresented in the digital world, and deaf people are among the individuals most affected by the digital divide. This work is our contribution towards narrowing the digital divide affecting deaf people. In particular, we focus on the computer handling of SignWriting, one of the most promising systems devised for writing SLs.
5

Dielmann, Alfred. "Automatic recognition of multiparty human interactions using dynamic Bayesian networks." Thesis, University of Edinburgh, 2009. http://hdl.handle.net/1842/4022.

Abstract:
Relating statistical machine learning approaches to the automatic analysis of multiparty communicative events, such as meetings, is an ambitious research area. We have investigated automatic meeting segmentation both in terms of “Meeting Actions” and “Dialogue Acts”. Dialogue acts model the discourse structure at a fine grained level highlighting individual speaker intentions. Group meeting actions describe the same process at a coarse level, highlighting interactions between different meeting participants and showing overall group intentions. A framework based on probabilistic graphical models such as dynamic Bayesian networks (DBNs) has been investigated for both tasks. Our first set of experiments is concerned with the segmentation and structuring of meetings (recorded using multiple cameras and microphones) into sequences of group meeting actions such as monologue, discussion and presentation. We outline four families of multimodal features based on speaker turns, lexical transcription, prosody, and visual motion that are extracted from the raw audio and video recordings. We relate these low-level multimodal features to complex group behaviours, proposing a multistream modelling framework based on dynamic Bayesian networks. Later experiments are concerned with the automatic recognition of Dialogue Acts (DAs) in multiparty conversational speech. We present a joint generative approach based on a switching DBN for DA recognition in which segmentation and classification of DAs are carried out in parallel. This approach models a set of features, related to lexical content and prosody, and incorporates a weighted interpolated factored language model. In conjunction with this joint generative model, we have also investigated the use of a discriminative approach, based on conditional random fields, to perform a reclassification of the segmented DAs. The DBN based approach yielded significant improvements when applied both to the meeting action and the dialogue act recognition task. On both tasks, the DBN framework provided an effective factorisation of the state-space and a flexible infrastructure able to integrate a heterogeneous set of resources such as continuous and discrete multimodal features, and statistical language models. Although our experiments have been principally targeted on multiparty meetings; features, models, and methodologies developed in this thesis can be employed for a wide range of applications. Moreover both group meeting actions and DAs offer valuable insights about the current conversational context providing valuable cues and features for several related research areas such as speaker addressing and focus of attention modelling, automatic speech recognition and understanding, topic and decision detection.
6

Héloir, Alexis. "Agent virtuel signeur - Aide à la communication des personnes sourdes." Phd thesis, Université de Bretagne Sud, 2008. http://tel.archives-ouvertes.fr/tel-00516280.

Abstract:
The work presented in this thesis is devoted to the design and animation of autonomous, realistic virtual agents capable of producing Sign Language gestures. The emphasis is on the expressive quality of the gestures. We show that the expressive aspects of gesture, besides making the movement look realistic, contribute to the construction of meaning within the iconic dimension of sign language. We propose a discourse model that accounts for the illustrative and non-illustrative intents of sign language as well as its main highly iconic structures. This model calls for a gesture specification capable of handling different expressive modes. An analysis phase then allows us to identify a set of expressiveness parameters; the analysis is applied to a sequence of French Sign Language gestures performed by a deaf signer with different expressive qualities. Finally, we propose a motion generation method compatible with the gesture description model. It combines several motion generation techniques, including a sensorimotor control model able to adapt to the spatial and temporal variability induced by the expressiveness parameters.
7

Gustavsson, Lisa. "The language learning infant : effects of speech input, vocal output, and feedback /." Doctoral thesis, Stockholm : Department of Linguistics, Stockholm University, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-26735.

8

Mourenas, Argoud Line. "Les noms de parties du corps en anglais : approche lexico-cognitive." Toulouse 2, 2008. http://www.theses.fr/2008TOU20066.

Abstract:
The fundamental question this thesis sets out to explore is the following: does the lexicon of contemporary English exhibit traces of a notional invariance possibly dating back to a very early stage of the language? The answer is provided through the study of three initial consonant segments: bl-, kn-, and sk-. With this aim in view, three heuristic classes are set up, namely 'bl- words', 'kn- words', and 'SK- words', the latter including lexemes beginning in sc-, sch-, sh-, sk-, and sq-. These three initial consonant segments have been chosen because they are representative of the three main types of English phonesthemes (CR-, ØR-, and SC-). The overall objective is to show that the semiological invariance of each of these classes corresponds to a notional invariance: I claim that these submorphemic segments are the surface marks of an invariance whose source may be traced back to a very ancient process of conceptualization and nomination of the human body. My study involves four dimensions: a lexicological analysis leading on to semantic and etymological analyses. Finally, drawing on various approaches of 'cognitive' linguistics, I suggest a description of the way the brain conceptualizes the referents of the lexemes belonging to the three classes examined, based on the works of Lakoff & Johnson, Fauconnier & Turner, Langacker, and Talmy.
9

Mercier, Hugo. "Modélisation et suivi des déformations faciales : applications à la description des expressions du visage dans le contexte de la langue des signes." Phd thesis, Université Paul Sabatier - Toulouse III, 2007. http://tel.archives-ouvertes.fr/tel-00185084.

Abstract:
The face plays a major role in sign language, notably through the meaning carried by its expressions. Few studies of facial expressions in sign language exist, owing to the lack of description tools. This thesis develops methods allowing the most precise and exhaustive possible description of the facial movements observable over the course of a sign language video sequence.

The Active Appearance Model (AAM) formalism is used here to model the face in terms of the displacements of a number of landmark points and of texture variations. Combined with an optimization method, this formalism makes it possible to locate the coordinates of the landmarks on a face. We use an "inverse compositional" optimization method, which allows an efficient implementation and yields accurate results.

In the sign language context, out-of-plane rotations and occlusions by the hands are frequent, so methods that are robust to these conditions must be developed. A robust variant of AAM optimization exists for this purpose, which can handle a possibly noisy input image. We have extended this variant so that occlusions can be detected automatically, assuming that the behaviour of the algorithm in the non-occluded case is known. The output of the algorithm then consists of the 2D coordinates of each landmark of the model in every frame of a video sequence, possibly associated with a confidence score. These raw data can then be exploited in several applications.

As a first application, we propose to describe an expressive video sequence, at each instant, as a combination of elementary deformations activated with different intensities. Another original application consists in processing a video so as to prevent the identification of a face without disturbing the recognition of its expressions.
10

Sofia, Estanislao. "Le problème de la définition des entités linguistiques chez Ferdinand de Saussure." Phd thesis, Paris 10, 2009. http://tel.archives-ouvertes.fr/tel-00465625/en/.

Abstract:
The question at the heart of this thesis can be formulated in an apparently simple way: if language is a system, what are the elements that constitute it? This simplicity is only apparent, however, and in fact conceals great complexity. An acceptable answer to this question would not consist simply in a statement specifying, for example, that the elements of the system Langue are such and such. It would also have to include an explanation of their mode (or modes) of interaction, a formulation of their laws, a definition of their intrinsic properties and shared characteristics; in short, an explication of everything that justifies speaking of "elements" belonging to a "system" (in this case, a language), and of a "system" composed of this type (or these types) of "elements". As Saussure taught, describing an element amounts to describing the system in which that element participates, that is, to determining the (types of) relations that link the elements to one another. From this point of view, the question of which entities make up the system Langue bears directly on the notion of the system "Langue" itself, as Saussure conceived it. This thesis comprises three parts. The first, devoted to the notion of "system", tries to show that there are fluctuations in Saussure's writings and that at least two clearly different configurations can be distinguished: one that Saussure calls a "system of oppositions", the other a "grammatical" "system" (or "mechanism", or "organism"). The second part, devoted to the notion of "value", attempts to show that at least two different configurations can likewise be found in Saussure: one following a purely negative and differential path; the other, more complex, involving elements that cannot be reduced to pure differences. Our hypothesis has been that these distinct theoretical configurations stem, in Saussure, from the treatment of different problems, involving elements that must consequently be defined in different ways. The wager of this work has been to try to explain these two configurations on the basis of the notion of "entity", whose definition, Saussure said, is "the first task" of linguistics.

Books on the topic "Human Signed Language"

1

Gardner, R. Allen, Beatrix T. Gardner, and Thomas E. Van Cantfort, eds. Teaching sign language to chimpanzees. Albany: State University of New York Press, 1989.

2

Davidson, Iain, ed. Human evolution, language, and mind: A psychological and archaeological inquiry. Cambridge: Cambridge University Press, 1996.

3

Silent partners: The legacy of ape language experiments. New York: Ballantine, 1987.

4

Silent partners: The legacy of the ape language experiments. New York: Times Books, 1986.

5

Bauer, Stephanie, ill. There's a story in my head: Sign language for body parts. Minneapolis: Magic Wagon, 2012.

6

Wachsmuth, Ipke, and Martin Fröhlich, eds. Gesture and sign language in human-computer interaction: International Gesture Workshop, Bielefeld, Germany, September 17-19, 1997: proceedings. Berlin: Springer, 1998.

7

Neustein, Amy. Where Humans Meet Machines: Innovative Solutions for Knotty Natural-Language Problems. New York, NY: Springer New York, 2013.

8

Hess, Elizabeth. Nim Chimpsky: The chimp who would be human. New York: Bantam Books, 2008.

9

Hess, Elizabeth. Nim Chimpsky: The chimp who would be human. New York: Bantam Books, 2008.

10

Hess, Elizabeth. Nim chimpsky: The chimp who would be human. Waterville, Me: Thorndike Press, 2008.


Book chapters on the topic "Human Signed Language"

1

Wilcox, Sherman, and Jill P. Morford. "Empirical methods in signed language research." In Human Cognitive Processing, 171–200. Amsterdam: John Benjamins Publishing Company, 2007. http://dx.doi.org/10.1075/hcp.18.14wil.

2

Crasborn, Onno, and Menzo Windhouwer. "ISOcat Data Categories for Signed Language Resources." In Gesture and Sign Language in Human-Computer Interaction and Embodied Communication, 118–28. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-34182-3_11.

3

Kosecki, Krzysztof. "Western Conception of Time in Signed Languages: a Cognitive Linguistic Perspective." In Human Cognitive Processing, 85–102. Amsterdam: John Benjamins Publishing Company, 2016. http://dx.doi.org/10.1075/hcp.52.05kos.

4

Bono, Mayumi, Tomohiro Okada, Kouhei Kikuchi, Rui Sakaida, Victor Skobov, Yusuke Miyao, and Yutaka Osugi. "Chapter 13. Utterance unit annotation for the Japanese Sign Language Dialogue Corpus." In Advances in Sign Language Corpus Linguistics, 353–82. Amsterdam: John Benjamins Publishing Company, 2023. http://dx.doi.org/10.1075/scl.108.13bon.

Abstract:
This chapter defines ‘utterance units’ and describes their annotation in the Japanese Sign Language (JSL) dialogue corpus, first focusing on how human annotators – native signers of JSL – identify and annotate utterance units, before reporting on part of speech (POS) tagging for JSL and semi-automatic annotation of utterance units. The utterance unit is an original concept for segmenting and annotating movement features in sign language dialogue, based on signers’ native sense. We postulate a fundamental interaction-specific unit for understanding interactional mechanisms (such as turn-taking) in sign language social interactions from the perspectives of conversation analysis and multimodal interaction studies. We explain differences between sentence and utterance units, the corpus construction and composition, and the annotation scheme, before analyzing how JSL native annotators annotated the units. Finally, we show the application potential of this research by presenting two case studies, the first exploring POS annotations, and the second a first attempt at automatic annotation using OpenPose software.
5

Fang, Gaolin, Wen Gao, Xilin Chen, Chunli Wang, and Jiyong Ma. "Signer-Independent Continuous Sign Language Recognition Based on SRN/HMM." In Gesture and Sign Language in Human-Computer Interaction, 76–85. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-47873-6_8.

6

Krňoul, Zdeněk, Pavel Jedlička, Miloš Železný, and Luděk Müller. "Motion Capture 3D Sign Language Resources." In European Language Grid, 307–12. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-17258-8_21.

Abstract:
The new 3D motion capture data corpus expands the portfolio of existing language resources by a corpus of 18 hours of Czech sign language. This helps alleviate the current problem, which is a critical lack of quality data necessary for research and subsequent deployment of machine learning techniques in this area. We currently provide the largest collection of annotated sign language recordings acquired by state-of-the-art 3D human body recording technology for the successful future deployment of communication technologies, especially machine translation and sign language synthesis.
7

Edwards, Alistair D. N. "Progress in sign language recognition." In Gesture and Sign Language in Human-Computer Interaction, 13–21. Berlin, Heidelberg: Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/bfb0052985.

8

da Rocha Costa, Antônio Carlos, and Graçaliz Pereira Dimuro. "SignWriting-Based Sign Language Processing." In Gesture and Sign Language in Human-Computer Interaction, 202–5. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-47873-6_22.

9

Fröhlich, Martin, and Ipke Wachsmuth. "Gesture recognition of the upper limbs — From signal to symbol." In Gesture and Sign Language in Human-Computer Interaction, 173–84. Berlin, Heidelberg: Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/bfb0052998.

10

Antzakas, Klimis, and Bencie Woll. "Head Movements and Negation in Greek Sign Language." In Gesture and Sign Language in Human-Computer Interaction, 193–96. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-47873-6_20.


Conference papers on the topic "Human Signed Language"

1

Godage, Ishika, Ruvan Weerasinghe, and Damitha Sandaruwan. "Sign Language Recognition for Sentence Level Continuous Signings." In 10th International Conference on Natural Language Processing (NLP 2021). Academy and Industry Research Collaboration Center (AIRCC), 2021. http://dx.doi.org/10.5121/csit.2021.112305.

Abstract:
There is no doubt that communication plays a vital role in human life. There is, however, a significant population of hearing-impaired people who use non-verbal techniques for communication, which the majority of people cannot understand. The predominant such technique is sign language, the main communication protocol among hearing-impaired people. In this research, we propose a method to bridge the communication gap between hearing-impaired people and others by translating signed gestures into text. Most existing solutions, based on technologies such as Kinect, Leap Motion, computer vision, EMG, and IMU, try to recognize and translate individual signs of hearing-impaired people. The few approaches to sentence-level sign language recognition suffer from not being user-friendly or even practical owing to the devices they use. The proposed system is designed to give the user full freedom to sign an uninterrupted full sentence at a time. For this purpose, we employ two Myo armbands for gesture capturing. Using signal processing and supervised learning based on a vocabulary of 49 words and 346 sentences for training with a single signer, we were able to achieve 75-80% word-level accuracy and 45-50% sentence-level accuracy using gestural (EMG) and spatial (IMU) features in our signer-dependent experiment.
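
A hypothetical sketch of the kind of pipeline this abstract describes: window-level features extracted from multichannel EMG/IMU recordings and fed to an off-the-shelf supervised classifier. The window size, channel count, feature set, classifier, and synthetic data are all assumptions made for the example; this is not the authors' implementation.

```python
# Sketch: window-level EMG/IMU features + a generic supervised classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def window_features(window: np.ndarray) -> np.ndarray:
    """Per-channel time-domain features: mean absolute value, RMS, waveform length."""
    mav = np.mean(np.abs(window), axis=0)
    rms = np.sqrt(np.mean(window ** 2, axis=0))
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)
    return np.concatenate([mav, rms, wl])

# Synthetic stand-in data: 300 gesture windows of 200 samples x 16 channels
# (e.g., 8 EMG channels per armband), labelled with 5 hypothetical word classes.
rng = np.random.default_rng(0)
windows = rng.normal(size=(300, 200, 16))
labels = rng.integers(0, 5, size=300)

X = np.stack([window_features(w) for w in windows])
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("word-level accuracy on held-out windows:", clf.score(X_test, y_test))
```
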
2

Krishnan, Vinodh, and Jacob Eisenstein. ""You’re Mr. Lebowski, I’m the Dude": Inducing Address Term Formality in Signed Social Networks." In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stroudsburg, PA, USA: Association for Computational Linguistics, 2015. http://dx.doi.org/10.3115/v1/n15-1185.

3

K. Aryal, Saurav, Howard Prioleau, and Gloria Washington. "Sentiment Classification of Code-Switched Text using Pre-Trained Multilingual Embeddings and Segmentation." In 8th International Conference on Signal, Image Processing and Embedded Systems (SIGEM 2022). Academy and Industry Research Collaboration Center (AIRCC), 2022. http://dx.doi.org/10.5121/csit.2022.122013.

Abstract:
With increasing globalization and immigration, various studies have estimated that about half of the world population is bilingual. Consequently, individuals concurrently use two or more languages or dialects in casual conversational settings. However, most research in natural language processing is focused on monolingual text. To further the work in code-switched sentiment analysis, we propose a multi-step natural language processing algorithm utilizing points of code-switching in mixed text and conduct sentiment analysis around those identified points. The proposed sentiment analysis algorithm uses semantic similarity derived from large pre-trained multilingual models with a handcrafted set of positive and negative words to determine the polarity of code-switched text. The proposed approach outperforms a comparable baseline model by 11.2% for accuracy and 11.64% for F1-score on a Spanish-English dataset. Theoretically, the proposed algorithm can be expanded for sentiment analysis of multiple languages with limited human expertise.
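
A rough sketch of the general idea (multilingual embeddings compared against handcrafted polarity seed lists): the snippet labels a code-switched segment by whichever seed centroid it is closer to. The sentence-transformers model name, the seed words, and the centroid comparison are assumptions for the example, not details taken from the paper.

```python
# Sketch: polarity of a code-switched segment via multilingual sentence embeddings.
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed dependency

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")  # assumed model choice

positive_seeds = ["good", "great", "wonderful", "feliz", "excelente"]
negative_seeds = ["bad", "awful", "terrible", "triste", "horrible"]

def embed(texts):
    """Encode texts and L2-normalize so dot products act as cosine similarity."""
    vecs = model.encode(texts)
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

pos_centroid = embed(positive_seeds).mean(axis=0)
neg_centroid = embed(negative_seeds).mean(axis=0)

def polarity(segment: str) -> str:
    """Label a (possibly code-switched) segment by its closer seed centroid."""
    v = embed([segment])[0]
    return "positive" if v @ pos_centroid >= v @ neg_centroid else "negative"

print(polarity("la película was amazing"))
```
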
4

Fink, Jérôme, Pierre Poitier, Maxime André, Loup Meurice, Benoît Frénay, Anthony Cleve, Bruno Dumas, and Laurence Meurant. "Sign Language-to-Text Dictionary with Lightweight Transformer Models." In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/662.

Abstract:
The recent advances in deep learning have been beneficial to automatic sign language recognition (SLR). However, free-to-access, usable, and accessible tools are still not widely available to the deaf community. The need for a sign language-to-text dictionary was raised by a bilingual deaf school in Belgium and linguist experts in sign languages (SL) in order to improve the autonomy of students. To meet that need, an efficient SLR system was built based on a specific transformer model. The proposed system is able to recognize 700 different signs, with a top-10 accuracy of 83%. Those results are competitive with other systems in the literature while using 10 times less parameters than existing solutions. The integration of this model into a usable and accessible web application for the dictionary is also introduced. A user-centered human-computer interaction (HCI) methodology was followed to design and implement the user interface. To the best of our knowledge, this is the first publicly released sign language-to-text dictionary using video captured by a standard camera.
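
As a note on the reported metric only: the snippet below shows a generic way to compute top-k accuracy from a matrix of per-class scores. The random scores and the 700-class setup are placeholders; nothing here is taken from the authors' system.

```python
# Sketch: top-k accuracy from per-class scores (k = 10 as in the figure reported above).
import numpy as np

def top_k_accuracy(scores: np.ndarray, labels: np.ndarray, k: int = 10) -> float:
    """scores: (n_samples, n_classes); labels: (n_samples,) integer class ids."""
    top_k = np.argsort(scores, axis=1)[:, -k:]     # k best-scoring classes per sample
    hits = (top_k == labels[:, None]).any(axis=1)  # is the true class among them?
    return float(hits.mean())

rng = np.random.default_rng(0)
scores = rng.normal(size=(500, 700))    # 700 candidate signs, random placeholder scores
labels = rng.integers(0, 700, size=500)
print(top_k_accuracy(scores, labels, k=10))
```
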
5

Nyaga, Casam, and Ruth Wario. "Towards Kenyan Sign Language Hand Gesture Recognition Dataset." In 14th International Conference on Applied Human Factors and Ergonomics (AHFE 2023). AHFE International, 2023. http://dx.doi.org/10.54941/ahfe1003281.

Abstract:
Datasets for hand gesture recognition are now an important aspect of machine learning. Many datasets have been created for machine learning purposes. Some of the notable datasets include the Modified National Institute of Standards and Technology (MNIST) dataset, the Common Objects in Context (COCO) dataset, the Canadian Institute For Advanced Research (CIFAR-10) dataset, LeNet-5, AlexNet, GoogLeNet, The American Sign Language Lexicon Video Dataset, and the 2D Static Hand Gesture Colour Image Dataset for ASL Gestures. However, there is no dataset for Kenyan Sign Language (KSL). This paper proposes the creation of a KSL hand gesture recognition dataset. The dataset is intended to be twofold: one part for static hand gestures and one for dynamic hand gestures. With respect to dynamic hand gestures, short videos of the KSL alphabet a to z and numbers 0 to 10 will be considered. Likewise, for the static gestures, the KSL alphabet a to z will be considered. It is anticipated that this dataset will be vital in the creation of sign language hand gesture recognition systems, not only for Kenyan Sign Language but for other sign languages as well. This will be possible because of transfer learning when implementing sign language systems using neural network models.
6

Reis, Luana Silva, Tiago Maritan U. De Araújo, Yuska Paola Costa Aguiar, Manuella Aschoff C. B. Lima, and Angelina S. da Silva Sales. "Assessment of the Treatment of Grammatical Aspects of Machine Translators to Libras." In XXIV Simpósio Brasileiro de Sistemas Multimídia e Web. Sociedade Brasileira de Computação - SBC, 2018. http://dx.doi.org/10.5753/webmedia.2018.4570.

Abstract:
Currently, a set of technologies has been developed with the aim of reducing barriers to access to information for deaf people, such as machine translation tools for sign languages. However, these technologies have some limitations related to the difficulty of handling certain specific grammatical aspects of sign languages, which can make the translations less fluent and influence the deaf user's experience. To address this problem, this study analyzes the machine translation of content from Brazilian Portuguese (Pt-br) into Brazilian Sign Language (Libras) performed by three machine translators: ProDeaf, HandTalk and VLibras. More specifically, we performed an experiment in which Brazilian human interpreters evaluated the treatment of some specific grammatical aspects in these three applications. As a result, we observed a significant weakness in the evaluation regarding the adequate treatment of homonymous words, negation adverbs and directional verbs in the translations performed by the applications, which indicates the need for these tools to improve the treatment of these grammatical aspects.
7

Wu, Annie, and Yu Sun. "A Machine Learning Model that Analyzes Surrounding Road Signs to Eliminate Dangers Caused by Human Operational Error." In 4th International Conference on Natural Language Processing and Machine Learning. Academy and Industry Research Collaboration Center (AIRCC), 2023. http://dx.doi.org/10.5121/csit.2023.130813.

Abstract:
Autonomous vehicles are a potential solution to preventing crashes caused by human error. Although road signs are intended to attract drivers' attention and help them operate, drivers can still misinterpret signs, resulting in an accident. An autonomous vehicle system can implement artificial intelligence to detect and recognize known patterns in input graphics to minimize the human aspect of driving. In this study, we present an implementation of the CNN architecture to classify four regulatory instruments (stop, crosswalk, speed limit sign, and traffic light) using the TensorFlow library. We used a training dataset of 877 images of the four distinct classes to optimize the model. The goal of the study was to create a lightweight and accessible image classification model. Experimental results show a 92% model accuracy.
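
A minimal sketch of the kind of model the abstract mentions: a small TensorFlow/Keras CNN over four road-sign classes. The input size, layer widths, and random placeholder data are assumptions for the example, not the authors' architecture or dataset.

```python
# Sketch: a small Keras CNN for a 4-class road-sign classifier.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Input(shape=(64, 64, 3)),          # assumed input resolution
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(4, activation="softmax"),    # stop, crosswalk, speed limit, traffic light
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Random placeholder images standing in for the 877-image training set mentioned above.
rng = np.random.default_rng(0)
x = rng.random((32, 64, 64, 3)).astype("float32")
y = rng.integers(0, 4, size=32)
model.fit(x, y, epochs=1, verbose=0)
print(model.predict(x[:1]).shape)  # (1, 4) class probabilities
```
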
8

Tokumaru, Kumon. "The Three Stage Digital Evolution of Linguistic Humans." In GLOCAL Conference on Asian Linguistic Anthropology 2019. The GLOCAL Unit, SOAS University of London, 2019. http://dx.doi.org/10.47298/cala2019.12-2.

Abstract:
Digital Linguistics (DL) is an interdisciplinary study that identifies human language as a digital evolution of mammal analog vocal sign communications, founded on the vertebrate spinal sign reflex mechanism [Tokumaru 2017 a/b, 2018 a/b/c/d]. Analog signs are unique with their physical sound waveforms but limited in number, whilst human digital word signs are infinite by permutation of their logical property, phonemes. The first digital evolution took place 66,000 years ago with South African Neolithic industries, Howiesons Poort, when linguistic humans acquired a hypertrophied mandibular bone to house a descended larynx for vowel accented syllables containing logical properties of phonemes and morae. Morae made each syllable distinctive in the time axis and enabled grammatical modulation by alternately transmitting conceptual and grammatical syllables. The sign reflex mechanism is an unconscious self-protection and life-support mechanism, operated by immune cell networks inside the ventricle system. DL identified cellular and molecular structures for the sign (=concept) device as a B lymphocyte (or, in other words, Mobile Ad-Hoc Networking Neuron), connects to sensory, conceptual and networking memories, which consist of its meanings [Table 1]. Its antibodies can network with antigens of CSF-Contacting Neurons at the brainstem reticular formation and of Microglia cells at the neocortex [Figure 1]. It is plausible that the 3D structure of the antigen molecule takes the shape of word sound waveform multiplexing intensity and pitch, and that specifically pairing the antibody molecule consists of three CDRs (Complementality Defining Regions) in the Antibody Variable Region network with the logic of dichotomy and dualism. As sign reflex deals with survival issues such as food, safety and reproduction, it is stubborn, passive and inflexible: It does not spontaneously look for something new, and it is not designed to revise itself. These characteristics are not desirable for the development of human intelligence, and thus are to be overcome. All the word, sensory and network memories in the brain must be acquired postnatally through individual learning and thought. The reason and intelligence of humans depend on how correctly and efficiently humans learn new words and acquire appropriate meanings for them.
9

Miral Kazmi, Syeda. "Hand Gesture Recognition for Sign language." In Human Interaction and Emerging Technologies (IHIET-AI 2022) Artificial Intelligence and Future Applications. AHFE International, 2022. http://dx.doi.org/10.54941/ahfe100925.

Abstract:
We have come to know a very genuine issue of sign language recognition, that problem being the issue of two-way communication i.e. between normal person and deaf/dumb. Current sign language recognition applications lack basic characteristics which are very necessary for the interaction with environment. Our project is focused on providing a portable and customizable solution for understanding sign language through an android app. The report summarizes the basic concepts and methods in creating this android application that uses gestures recognition to understand American sign language words. The project uses different image processing tools to separate the hand from the rest and then uses pattern recognition techniques for gesture recognition. A complete summary of the results obtained from the various tests performed is also provided to demonstrate the validity of the application.
10

Li, Zhengxue, and Gaoyun An. "Human-Object Interaction Prediction with Natural Language Supervision." In 2022 16th IEEE International Conference on Signal Processing (ICSP). IEEE, 2022. http://dx.doi.org/10.1109/icsp56322.2022.9965210.


Reports on the topic "Human Signed Language"

1

Sayers, Dave, Rui Sousa-Silva, Sviatlana Höhn, Lule Ahmedi, Kais Allkivi-Metsoja, Dimitra Anastasiou, Štefan Beňuš, et al. The Dawn of the Human-Machine Era: A forecast of new and emerging language technologies. Open Science Centre, University of Jyväskylä, May 2021. http://dx.doi.org/10.17011/jyx/reports/20210518/1.

Abstract:
New language technologies are coming, thanks to the huge and competing private investment fuelling rapid progress; we can either understand and foresee their effects, or be taken by surprise and spend our time trying to catch up. This report sketches out some transformative new technologies that are likely to fundamentally change our use of language. Some of these may feel unrealistically futuristic or far-fetched, but a central purpose of this report - and the wider LITHME network - is to illustrate that these are mostly just the logical development and maturation of technologies currently in prototype. But will everyone benefit from all these shiny new gadgets? Throughout this report we emphasise a range of groups who will be disadvantaged and issues of inequality. Important issues of security and privacy will accompany new language technologies. A further caution is to re-emphasise the current limitations of AI. Looking ahead, we see many intriguing opportunities and new capabilities, but a range of other uncertainties and inequalities. New devices will enable new ways to talk, to translate, to remember, and to learn. But advances in technology will reproduce existing inequalities among those who cannot afford these devices, among the world’s smaller languages, and especially for sign language. Debates over privacy and security will flare and crackle with every new immersive gadget. We will move together into this curious new world with a mix of excitement and apprehension - reacting, debating, sharing and disagreeing as we always do. Plug in, as the human-machine era dawns.
2

Crispin, Darla. Artistic Research as a Process of Unfolding. Norges Musikkhøgskole, August 2018. http://dx.doi.org/10.22501/nmh-ar.503395.

Abstract:
As artistic research work in various disciplines and national contexts continues to develop, the diversity of approaches to the field becomes ever more apparent. This is to be welcomed, because it keeps alive ideas of plurality and complexity at a particular time in history when the gross oversimplifications and obfuscations of political discourses are compromising the nature of language itself, leading to what several commentators have already called ‘a post-truth’ world. In this brutal environment where ‘information’ is uncoupled from reality and validated only by how loudly and often it is voiced, the artist researcher has a responsibility that goes beyond the confines of our discipline to articulate the truth-content of his or her artistic practice. To do this, they must embrace daring and risk-taking, finding ways of communicating that flow against the current norms. In artistic research, the empathic communication of information and experience – and not merely the ‘verbally empathic’ – is a sign of research transferability, a marker for research content. But this, in some circles, is still a heretical point of view. Research, in its more traditional manifestations mistrusts empathy and individually-incarnated human experience; the researcher, although a sentient being in the world, is expected to behave dispassionately in their professional discourse, and with a distrust for insights that come primarily from instinct. For the construction of empathic systems in which to study and research, our structures still need to change. So, we need to work toward a new world (one that is still not our idea), a world that is symptomatic of what we might like artistic research to be. Risk is one of the elements that helps us to make the conceptual twist that turns subjective, reflexive experience into transpersonal, empathic communication and/or scientifically-viable modes of exchange. It gives us something to work with in engaging with debates because it means that something is at stake. To propose a space where such risks may be taken, I shall revisit Gillian Rose’s metaphor of ‘the fold’ that I analysed in the first Symposium presented by the Arne Nordheim Centre for Artistic Research (NordART) at the Norwegian Academy of Music in November 2015. I shall deepen the exploration of the process of ‘unfolding’, elaborating on my belief in its appropriateness for artistic research work; I shall further suggest that Rose’s metaphor provides a way to bridge some of the gaps of understanding that have already developed between those undertaking artistic research and those working in the more established music disciplines.