Theses on the topic "Sign language"
Create an accurate citation in APA, MLA, Chicago, Harvard and other styles
Consult the top 50 theses for your research on the topic "Sign language".
You can also download the full text of each academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Explore theses across a wide variety of disciplines and organize your bibliography correctly.
Sinander, Pierre and Tomas Issa. "Sign Language Translation". Thesis, KTH, Mekatronik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-296169.
The aim of the thesis was to create a data glove that can translate ASL by reading finger and hand movements. It was also investigated whether conductive fabric can be used as stretch sensors. To read the hand gestures, conductive fabric was attached to each finger of the glove to measure how much it was bent. Hand movements were registered with a 3-axis accelerometer mounted on the glove. The sensor values were read by an Arduino Nano 33 IoT mounted on the wrist, which translated them into the corresponding signs. The microcontroller then transmitted the result wirelessly to another device via Bluetooth Low Energy. The glove correctly translated all signs of the ASL alphabet with an average accuracy of 93%. Signs with small differences in hand gesture, such as S and T, proved harder to distinguish, resulting in an accuracy of 70% for these specific signs.
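The glove-to-letter mapping described in this abstract can be illustrated as a nearest-template classifier over the eight sensor values (five flex readings plus three accelerometer axes). This is a sketch only, not the thesis code; the template values and letter set are hypothetical.

```python
import math

# Hypothetical calibration templates: letter -> 8 sensor values
# (five flex readings, 0 = straight finger, 1 = fully bent,
#  followed by a 3-axis accelerometer reading).
TEMPLATES = {
    "A": (0.2, 0.9, 0.9, 0.9, 0.9, 0.0, 0.0, 1.0),  # fist, thumb alongside
    "B": (0.8, 0.1, 0.1, 0.1, 0.1, 0.0, 0.0, 1.0),  # flat hand, thumb folded
    "S": (0.9, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0),  # fist, thumb over fingers
}

def classify(reading):
    """Return the letter whose template is closest to the sensor reading."""
    return min(TEMPLATES, key=lambda letter: math.dist(reading, TEMPLATES[letter]))

print(classify((0.25, 0.88, 0.92, 0.9, 0.87, 0.02, 0.05, 0.97)))  # → A
```

Letters whose templates lie close together in this space (such as S and T in the thesis) are exactly the ones such a classifier confuses most easily.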
Eichmann, Hanna. ""Hands off our language!" : deaf sign language teachers' perspectives on sign language standardisation". Thesis, University of Central Lancashire, 2008. http://clok.uclan.ac.uk/21824/.
Santoro, Mirko. "Compounds in sign languages : the case of Italian and French Sign Language". Thesis, Paris Sciences et Lettres (ComUE), 2018. http://www.theses.fr/2018PSLEH204.
In this dissertation, I investigate the domain of compounds in sign languages. Compounding has been documented as a key strategy for enriching the lexicon of sign languages, even in situations of emergent sign languages. I address this topic from three main angles: typological/empirical, theoretical and experimental. In the typological/empirical part, I offer a thorough description of compounds in two sign languages: Italian and French Sign Language (LIS and LSF). I offer a refined and more comprehensive typology of compounds, in which classifiers and simultaneous forms are also taken into account. In the theoretical part, I provide a formal account of how to derive the whole typology of compounds found in LIS and LSF. I show i) that compounds can be derived in multiple ways depending on their morphosyntactic properties and ii) that morphosyntactic derivation is not the only process that affects the combinatorial options of compounding. Post-syntactic processes, especially linearization, have to have access to at least partial representations in order to distinguish between forms that have to be spelled out either sequentially or simultaneously. In the experimental part, I investigate whether phonological reduction is a sufficient condition for identifying compounds in SL. I show that criteria can be imported from one SL to another, but only with extreme caution.
Ann, Jean. "Against [lateral]: Evidence from Chinese Sign Language and American Sign Language". Department of Linguistics, University of Arizona (Tucson, AZ), 1990. http://hdl.handle.net/10150/227260.
Fekete, Emily. "SIGNS IN SPACE: AMERICAN SIGN LANGUAGE AS SPATIAL LANGUAGE AND CULTURAL WORLDVIEW". Kent State University / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=kent1279060612.
Eichmann, Hanna [Verfasser]. "''Hands off our language!'' : Deaf sign language teachers' perspectives on sign language standardisation / Hanna Eichmann". Aachen : Shaker, 2013. http://d-nb.info/1051572126/34.
Xu, Wang. "A Comparison of Chinese and Taiwan Sign Languages: Towards a New Model for Sign Language Comparison". The Ohio State University, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=osu1363617703.
Texto completoHerman, Rosalind. "Assessing British sign language development". Thesis, City University London, 2002. http://openaccess.city.ac.uk/8446/.
Texto completoBull, Hannah. "Learning sign language from subtitles". Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG013.
Texto completoSign languages are an essential means of communication for deaf communities. Sign languages are visuo-gestual languages using the modalities of hand gestures, facial expressions, gaze and body movements. They possess rich grammar structures and lexicons that differ considerably from those found among spoken languages. The uniqueness of transmission medium, structure and grammar of sign languages requires distinct methodologies. The performance of automatic translations systems between high-resource written languages or spoken languages is currently sufficient for many daily use cases, such as translating videos, websites, emails and documents. On the other hand, automatic translation systems for sign languages do not exist outside of very specific use cases with limited vocabulary. Automatic sign language translation is challenging for two main reasons. Firstly, sign languages are low-resource languages with little available training data. Secondly, sign languages are visual-spatial languages with no written form, naturally represented as video rather than audio or text. To tackle the first challenge, we contribute large datasets for training and evaluating automatic sign language translation systems with both interpreted and original sign language video content, as well as written text subtitles. Whilst interpreted data allows us to collect large numbers of hours of videos, original sign language video is more representative of sign language usage within deaf communities. Written subtitles can be used as weak supervision for various sign language understanding tasks. To address the second challenge, we develop methods to better understand visual cues from sign language video. Whilst sentence segmentation is mostly trivial for written languages, segmenting sign language video into sentence-like units relies on detecting subtle semantic and prosodic cues from sign language video. 
We use prosodic cues to learn to automatically segment sign language video into sentence-like units, determined by subtitle boundaries. Expanding upon this segmentation method, we then learn to align text subtitles to sign language video segments using both semantic and prosodic cues, in order to create sentence-level pairs between sign language video and text. This task is particularly important for interpreted TV data, where subtitles are generally aligned to the audio and not to the signing. Using these automatically aligned video-text pairs, we develop and improve several methods to densely annotate lexical signs by querying words in the subtitle text and searching for visual cues in the sign language video for the corresponding signs.
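The subtitle-to-segment alignment step can be illustrated with a minimal sketch: given subtitle time spans (typically aligned to the audio) and automatically detected signing segments, pair each subtitle with the segment it overlaps most. All times below are invented, and the thesis relies on learned semantic and prosodic cues rather than raw temporal overlap.

```python
def overlap(a, b):
    """Length of the temporal intersection of two (start, end) spans, in seconds."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def align(subtitles, segments):
    """Map each subtitle index to the index of its best-overlapping segment."""
    return {
        i: max(range(len(segments)), key=lambda j: overlap(sub, segments[j]))
        for i, sub in enumerate(subtitles)
    }

subs = [(0.0, 2.0), (2.5, 5.0)]              # subtitle spans (aligned to audio)
segs = [(0.2, 1.8), (2.4, 4.9), (5.0, 6.0)]  # detected signing segments
print(align(subs, segs))  # → {0: 0, 1: 1}
```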
Holzrichter, Amanda Sue. "A crosslinguistic study of child-directed signing : American Sign Language and sign language of Spain /". Digital version accessible at:, 2000. http://wwwlib.umi.com/cr/utexas/main.
Moemedi, Kgatlhego Aretha. "Rendering an avatar from sign writing notation for sign language animation". Thesis, University of the Western Cape, 2010. http://etd.uwc.ac.za/index.php?module=etd&action=viewtitle&id=gen8Srv25Nme4_9989_1307516277.
Texto completoThis thesis presents an approach for automatically generating signing animations from a sign language notation. An avatar endowed with expressive gestures, as subtle as changes in facial expression, is used to render the sign language animations. SWML, an XML format of SignWriting is provided as input. It transcribes sign language gestures in a format compatible to virtual signing. Relevant features of sign language gestures are extracted from the SWML. These features are then converted to body animation pa- rameters, which are used to animate the avatar. Using key-frame animation techniques, intermediate key-frames approximate the expected sign language gestures. The avatar then renders the corresponding sign language gestures. These gestures are realistic and aesthetically acceptable and can be recognized and understood by Deaf people.
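The SWML-to-parameter extraction step might look like the following sketch. The real SWML schema differs; the tag and attribute names here are simplified assumptions for illustration only.

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified SWML-like fragment (not the real schema).
SWML = """<sign gloss="HELLO">
  <symbol category="handshape" value="flat" x="10" y="40"/>
  <symbol category="movement" value="up" x="12" y="20"/>
</sign>"""

def to_animation_params(swml_text):
    """Extract per-symbol features to be mapped onto body animation parameters."""
    root = ET.fromstring(swml_text)
    return [
        {
            "category": sym.get("category"),
            "value": sym.get("value"),
            "pos": (int(sym.get("x")), int(sym.get("y"))),
        }
        for sym in root.iter("symbol")
    ]

params = to_animation_params(SWML)
print(params[0])  # → {'category': 'handshape', 'value': 'flat', 'pos': (10, 40)}
```

Each extracted record would then drive key-frame generation for the corresponding body part of the avatar.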
Ross, Danielle S. (Danielle Suzanne). "Learning to read with sign language : how beginning deaf readers relate sign language to written words". Thesis, McGill University, 1992. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=22492.
These results indicate that deaf children organize their recognition of written words around their knowledge of sign language. Further, the children's responses to legal versus illegal pseudowords in the lexical decision task indicate that they can learn the orthographic rules of written English words.
Börstell, Carl. "Object marking in the signed modality : Verbal and nominal strategies in Swedish Sign Language and other sign languages". Doctoral thesis, Stockholms universitet, Institutionen för lingvistik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-141669.
Alba de la Torre, Celia. "Wh-questions in Catalan sign language". Doctoral thesis, Universitat Pompeu Fabra, 2016. http://hdl.handle.net/10803/397751.
A characterization and analysis of wh-questions in Catalan Sign Language is offered; these questions show the particularity of preferentially placing wh-expressions at the end of the sentence. This feature, characteristic of sign languages, has been difficult to handle within traditional models, which have often considered wh-movement to be universally leftward and have often assumed that syntactic structure encodes information about the linear order of linguistic elements. It is proposed that syntactic hierarchy and linear order are two different objects with limited impact on each other, and that the latter depends mainly on language-processing mechanisms and, specifically, on Working Memory. In this light, it is hypothesized that the difference in the placement of wh-elements between sign languages and spoken languages reflects differences in Working Memory. To explore this hypothesis, the results of two experiments with Deaf and hearing participants are presented.
Haseeb, Ahmed Abdul and Asim Ilyas. "Speech Translation into Pakistan Sign Language". Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-5095.
This research investigated a computer-based solution to facilitate communication between deaf and unimpaired people. The investigation was performed through a literature review and visits to institutes to gain deeper knowledge about sign language and specifically how it is used in the Pakistani context. Secondly, the challenges deaf people face when interacting with the unimpaired were analyzed through interviews with domain experts (instructors of deaf institutes) and by directly observing deaf people in everyday life situations. We conclude that deaf people rely on sign language for communication with unimpaired people. Deaf people in Pakistan use PSL for communication. English is taught as a secondary language all over Pakistan in all educational institutes, and instructors of the deaf not only need domain expertise in the area they teach, such as Math, History or Science, but must also know PSL very well in order to teach the deaf. It is very difficult for deaf institutes to find instructors who know both. Whenever deaf people need to communicate with unimpaired people, they either need to hire a translator or ask the unimpaired person to write everything down for them. Translators are difficult to find at all times and are also expensive. Moreover, writing is a very slow process, and not all unimpaired people are willing to do it. We observed this phenomenon ourselves, as instructors at the institutes gave us the opportunity to work with deaf people to understand their feelings and challenges in everyday life. We accompanied deaf people to shopping malls, banks, post offices and so on, and with their permission we observed their interactions. We concluded that their interaction with hearing people is sometimes very slow and embarrassing.
Based on the above findings, we concluded that there is definitely a need for an automated system that can facilitate communication between deaf and unimpaired people. These factors led to the subsequent objective of this research. The main objective of this thesis is to identify a generic, automated system, without any human intervention, that converts English speech into PSL as a solution to bridge the communication gap between deaf and unimpaired. We found that existing work related to this problem area does not fulfill our objective. Current solutions are either very specific to a domain, e.g. a post office, or need human intervention, i.e. they are not automatic, and none of the existing systems can be extended towards our desired solution. We explored state-of-the-art techniques such as machine translation, speech recognition and NLP, and utilized these in our proposed solution. A prototype of the proposed solution was developed and validated both functionally and non-functionally. Since none of the existing work exactly matches our problem statement, we have not compared the validation of our prototype to any existing system; instead, we validated the prototype with respect to our problem domain, iteratively with domain experts, i.e. experts of PSL and English-to-PSL human translators. We found this user-centric approach very useful for understanding the problem at the ground level, keeping our work user-focused, and assessing user satisfaction throughout the process. This work has opened a new world of opportunities in which the deaf can communicate with others who have no knowledge of PSL. If the system is further developed from a prototype into a functioning system, deaf institutes will have wider scope for choosing the best instructors for a given domain, even those without PSL expertise.
Deaf people will have more opportunities to interact with other members of society at every level, as communication is the basic pillar of this. Automatic speech-to-sign-language translation is an attractive prospect, and its potential applications are worthwhile. In the field of Human-Computer Interface (HCI), we hope that our thesis will be an important addition to the ongoing research.
Kaneko, Michiko. "The poetics of sign language haiku". Thesis, University of Bristol, 2008. http://hdl.handle.net/1983/10e0f467-8d9d-4568-8215-9a3e1d77147d.
Feng, Qianli. "Automatic American Sign Language Imitation Evaluator". The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1461233570.
Nel, Warren. "An integrated sign language recognition system". Thesis, University of the Western Cape, 2014. http://hdl.handle.net/11394/3584.
Research has shown that five parameters are required to recognize any sign language gesture: hand shape, location, orientation and motion, as well as facial expressions. The South African Sign Language (SASL) research group at the University of the Western Cape has created systems to recognize sign language gestures using single parameters. Using a single parameter can cause ambiguities in the recognition of similarly signed gestures, restricting the possible vocabulary size. This research pioneers work at the group towards combining multiple parameters to achieve a larger recognition vocabulary set. The proposed methodology combines hand location and hand shape recognition into one combined recognition system. The system is shown to be able to recognize a large vocabulary of 50 signs at a high average accuracy of 74.1%. This vocabulary is much larger than that of existing SASL recognition systems, and the system achieves a higher accuracy than these systems in spite of the larger vocabulary. It is also shown that the system is highly robust to variations in test subjects such as skin colour, gender and body dimensions. Furthermore, the group pioneers research towards continuously recognizing signs from a video stream, whereas existing systems recognized a single sign at a time. To this end, a highly accurate continuous gesture segmentation strategy is proposed and shown to be able to accurately recognize sentences consisting of five isolated SASL gestures.
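The general idea of combining two single-parameter recognizers to resolve ambiguity can be sketched as a simple probability fusion. The sign names and scores below are invented, and the actual system's combination method may well differ.

```python
def fuse(shape_scores, location_scores):
    """Combine per-sign probabilities from two recognizers by product, renormalized."""
    signs = shape_scores.keys() & location_scores.keys()
    fused = {s: shape_scores[s] * location_scores[s] for s in signs}
    total = sum(fused.values()) or 1.0
    return {s: v / total for s, v in fused.items()}

# Hand shape alone cannot separate HELLO from GOODBYE here...
shape = {"HELLO": 0.45, "GOODBYE": 0.45, "THANKS": 0.10}
# ...but hand location strongly favours HELLO.
location = {"HELLO": 0.70, "GOODBYE": 0.10, "THANKS": 0.20}

fused = fuse(shape, location)
print(max(fused, key=fused.get))  # → HELLO
```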
Petronio, Karen M. "Clause structure in American sign language /". Thesis, Connect to this title online; UW restricted, 1993. http://hdl.handle.net/1773/8418.
Reiniche, Ruth Mary. "Sign Language: Flannery O'Connor's Pictorial Text". Diss., The University of Arizona, 2014. http://hdl.handle.net/10150/325225.
Cheek, Davina Adrianne. "The phonetics and phonology of handshape in American Sign Language /". Full text (PDF) from UMI/Dissertation Abstracts International, 2001. http://wwwlib.umi.com/cr/utexas/fullcit?p3008299.
Leyhe, Anya A. "An Ethnographic Inquiry: Contemporary Language Ideologies of American Sign Language". Scholarship @ Claremont, 2014. http://scholarship.claremont.edu/scripps_theses/473.
Ganiso, Mirriam Nosiphiwo. "Sign language in South Africa: language planning and policy challenges". Thesis, Rhodes University, 2011. http://hdl.handle.net/10962/d1002163.
Belissen, Valentin. "From Sign Recognition to Automatic Sign Language Understanding : Addressing the Non-Conventionalized Units". Electronic Thesis or Diss., université Paris-Saclay, 2020. http://www.theses.fr/2020UPASG064.
Sign Languages (SLs) have developed naturally in Deaf communities. With no written form, they are oral languages, using the gestural channel for expression and the visual channel for reception. These low-resource languages do not yet meet with a broad consensus at the linguistic level. They make use of lexical signs, i.e. conventionalized units of language whose form is supposed to be arbitrary, but also, unlike vocal languages if co-verbal gestures are set aside, iconic structures, using space to organize discourse. Iconicity, defined as the existence of a similarity between the form of a sign and the meaning it carries, is indeed used at several levels of SL discourse. Most research in automatic Sign Language Recognition (SLR) has in fact focused on recognizing lexical signs, at first in the isolated case and then within continuous SL. The video corpora associated with such research are often relatively artificial, consisting of the repetition of elicited utterances in written form. Other corpora consist of interpreted SL, which may also differ significantly from natural SL, as it is strongly influenced by the surrounding vocal language. In this thesis, we wish to show the limits of this approach, broadening the perspective to consider the recognition of elements used for the construction of discourse or within illustrative structures. To do so, we show the interest and the limits of the corpora developed by linguists. In these corpora, the language is natural and the annotations are sometimes detailed, but not always usable as input data for machine learning systems, as they are not necessarily complete or coherent.
We then propose the redesign of a French Sign Language dialogue corpus, Dicta-Sign-LSF-v2, with rich and consistent annotations, following an annotation scheme shared by many linguists. We then propose a redefinition of the problem of automatic SLR, consisting in the recognition of various linguistic descriptors, rather than focusing on lexical signs only. At the same time, we discuss adapted metrics for relevant performance assessment. In order to perform a first experiment on the recognition of linguistic descriptors that are not only lexical, we then develop a compact and generalizable representation of signers in videos. This is done by parallel processing of the hands, face and upper body, using existing tools and models that we have set up. Besides, we preprocess these parallel representations to obtain a relevant feature vector. We then present an adapted and modular architecture for automatic learning of linguistic descriptors, consisting of a recurrent and convolutional neural network. Finally, we show through a quantitative and qualitative analysis the effectiveness of the proposed model, tested on Dicta-Sign-LSF-v2. We first carry out an in-depth analysis of the parameterization, evaluating both the learning model and the signer representation. The study of the model predictions then demonstrates the merits of the proposed approach, with a very interesting performance for the continuous recognition of four linguistic descriptors, especially in view of the uncertainty related to the annotations themselves. The segmentation of the latter is indeed subjective, and the very relevance of the categories used is not strongly demonstrated. Indirectly, the proposed model could therefore make it possible to measure the validity of these categories.
With several areas for improvement being considered, particularly in terms of signer representation and the use of larger corpora, the results are very encouraging and pave the way for a wider understanding of continuous Sign Language Recognition.
Nurena-Jara, Roberto, Cristopher Ramos-Carrion and Pedro Shiguihara-Juarez. "Data collection of 3D spatial features of gestures from static peruvian sign language alphabet for sign language recognition". Institute of Electrical and Electronics Engineers Inc, 2020. http://hdl.handle.net/10757/656634.
Peruvian Sign Language Recognition (PSL) is approached as a classification problem. Previous work has employed 2D features from the position of the hands to tackle this problem. In this paper, we propose a method to construct a dataset consisting of 3D spatial positions of static gestures from the PSL alphabet, using the HTC Vive device and a well-known technique to extract 21 keypoints from the hand to obtain a feature vector. A dataset of 35,400 gesture instances for PSL was constructed and a novel way to extract data was described. To validate the appropriateness of this dataset, four baseline classifiers were compared on the Peruvian Sign Language Recognition (PSLR) task, achieving an average F1 measure of 99.32% in the best case.
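The feature construction described in this abstract (21 hand keypoints flattened into a vector) can be sketched as follows. The nearest-centroid classifier below is a stand-in for the paper's four baseline classifiers, and all coordinate and centroid values are invented.

```python
import math

def keypoints_to_vector(keypoints):
    """Flatten 21 (x, y, z) hand keypoints into a 63-dimensional feature vector."""
    assert len(keypoints) == 21
    return [coord for point in keypoints for coord in point]

# Hypothetical per-letter centroids in the 63-dimensional feature space.
CENTROIDS = {
    "A": [0.1] * 63,
    "B": [0.9] * 63,
}

def classify(vector):
    """Assign the letter whose centroid is nearest in Euclidean distance."""
    return min(CENTROIDS, key=lambda letter: math.dist(vector, CENTROIDS[letter]))

sample = keypoints_to_vector([(0.12, 0.09, 0.11)] * 21)
print(len(sample), classify(sample))  # → 63 A
```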
Peer reviewed
Potrus, Dani. "Swedish Sign Language Skills Training and Assessment". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-209129.
Sign language is widely used around the world as a first language by those who cannot use everyday spoken language and by groups of people with a disability (e.g. a hearing impairment). The importance of effective sign language learning and its applications in modern computer science has grown considerably in modern society, and research on sign language recognition has branched in many different directions, for example using statistical models such as hidden Markov models to train models to recognize different sign language patterns (including Swedish Sign Language, American Sign Language, Korean Sign Language and German Sign Language, among others). This report examines the assessment and skill outcomes of a simple sign language game developed to teach basic Swedish Sign Language patterns to children aged 10 to 11 who have no learning disorders or general health problems. In the project's experiment, 38 children are divided into two equal groups of 19, each of which plays a sign language game. The context of the game is the same for both groups: they hear and see a three-dimensional figure (3D avatar) speaking to them in both spoken and sign language. The first group plays the game and answers the questions posed using sign language, while the second group answers by clicking one of five options on the game's screen. One week after the experiment, the children's sign language skills acquired from the game are assessed by asking them to reproduce some of the signs they saw during the game.
The report's hypothesis is that the children in the group that answered the questions in sign language outperform the other group, both in remembering the signs and in reproducing them correctly. A statistical hypothesis test is performed, and the hypothesis is confirmed. Finally, the report's last chapter discusses future research on sign language assessment with video games and their effectiveness.
Barnhart, Lindsay J. "Development of sign language for young children". Menomonie, WI : University of Wisconsin--Stout, 2006. http://www.uwstout.edu/lib/thesis/2006/2006barnhartl.pdf.
Zafrulla, Zahoor. "Automatic recognition of American sign language classifiers". Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/53461.
Sze, Yim Binh Felix. "Topic construction in Hong Kong sign language". Thesis, University of Bristol, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.479529.
Barker, Dean. "Computer facial animation for sign language visualization". Thesis, Stellenbosch : Stellenbosch University, 2005. http://hdl.handle.net/10019.1/50300.
ENGLISH ABSTRACT: Sign Language is a fully-fledged natural language possessing its own syntax and grammar; a fact which implies that the problem of machine translation from a spoken source language to Sign Language is at least as difficult as machine translation between two spoken languages. Sign Language, however, is communicated in a modality fundamentally different from all spoken languages. Machine translation to Sign Language is therefore burdened not only by a mapping from one syntax and grammar to another, but also by a non-trivial transformation from one communicational modality to another. With regard to the computer visualization of Sign Language, what is required is a three-dimensional, temporally accurate visualization of signs, including both the manual and non-manual components, which can be viewed from arbitrary perspectives, making accurate understanding and imitation more feasible. Moreover, given that facial expressions and movements represent a fundamental basis for the majority of non-manual signs, any system concerned with the accurate visualization of Sign Language must rely heavily on a facial animation component capable of representing a well-defined set of emotional expressions as well as a set of arbitrary facial movements. This thesis investigates the development of such a computer facial animation system. We address the problem of delivering coordinated, temporally constrained facial animation sequences in an online environment using VRML. Furthermore, we investigate the animation, using a muscle model process, of arbitrary three-dimensional facial models consisting of multiple aligned NURBS surfaces of varying refinement.
Our results showed that this approach is capable of representing and manipulating high fidelity three-dimensional facial models in such a manner that localized distortions of the models result in the recognizable and realistic display of human facial expressions and that these facial expressions can be displayed in a coordinated, synchronous manner.
Pollitt, Kyra Margaret. "Signart: (British) sign language poetry as Gesamtkunstwerk". Thesis, University of Bristol, 2014. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.658072.
Zhou, Mingjie. "Deep networks for sign language video caption". HKBU Institutional Repository, 2020. https://repository.hkbu.edu.hk/etd_oa/848.
Cole, Jessica. "American Sign Language poetry literature in motion /". Diss., [La Jolla, Calif.] : University of California, San Diego, 2009. http://wwwlib.umi.com/cr/ucsd/fullcit?p1462125.
Title from first page of PDF file (viewed April 3, 2009). Available via ProQuest Digital Dissertations. Includes bibliographical references (p. 73-76).
Nayak, Sunita. "Representation and learning for sign language recognition". [Tampa, Fla] : University of South Florida, 2008. http://purl.fcla.edu/usf/dc/et/SFE0002362.
Mantovan, Lara <1985>. "Nominal modification in Italian sign language (LIS)". Doctoral thesis, Università Ca' Foscari Venezia, 2014. http://hdl.handle.net/10579/5642.
Fornasiero, Elena <1991>. "EVALUATIVE MORPHOLOGY IN ITALIAN SIGN LANGUAGE (LIS)". Master's Degree Thesis, Università Ca' Foscari Venezia, 2016. http://hdl.handle.net/10579/8145.
Schembri, Adam C. "Issues in the analysis of polycomponential verbs in Australian Sign Language (Auslan)". Phd thesis, Department of Linguistics, 2002. http://hdl.handle.net/2123/6272.
Texto completoMcBurney, Susan Lloyd. "Referential morphology in signed languages /". Thesis, Connect to this title online; UW restricted, 2004. http://hdl.handle.net/1773/8436.
Texto completoBörstell, Carl. "Revisiting Reduplication : Toward a description of reduplication in predicative signs in Swedish Sign Language". Thesis, Stockholms universitet, Institutionen för lingvistik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-63510.
Texto completoCasey, Shannon Kerry. ""Agreement" in gestures and signed languages : the use of directionality to indicate referents involved in actions /". Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2003. http://wwwlib.umi.com/cr/ucsd/fullcit?p3094623.
Texto completoZorzi, Giorgia. "Coordination and gapping in Catalan Sign Language (LSC)". Doctoral thesis, Universitat Pompeu Fabra, 2018. http://hdl.handle.net/10803/665045.
Texto completoAquesta tesi ofereix una descripció i una anàlisi sintàctica per a la coordinació i el “gapping” en coordinació conjuntiva en llengua de signes catalana (LSC), dins el marc generativista i minimista. Pel que fa a la coordinació, la categoria sintàctica que es proposa és “Coordination Phrase” (CoP) per a la coordinació conjuntiva, disjuntiva i adversativa. A l’estructura, ramificada a la dreta, els constituents de la conjunció són especificadors i complements de CoP. La derivació per a cada tipus de coordinació s’aplica a partir d’aquest model. Pel que fa al “gapping”, mostra proprietats similars a l’el·lipsi de SV (VP-ellipsis), sobretot perquè pot aparèixer en subordinació. A més, la l’existència només d’un abast distribuït de la negació (¬A&¬B) i la presència de tòpic i focus contrastius mostra la necessitat de tenir una coordinació “àmplia” on els dos conjunts siguin CPs. En la derivació de “gapping”, els arguments es mouen a TopP i FocP, seguits de l’eliminació del TP a PF, ambel tret [E] posicionat al nucli de FocP.
Adam, Jameel. "Video annotation wiki for South African sign language". Thesis, University of the Western Cape, 2011. http://etd.uwc.ac.za/index.php?module=etd&action=viewtitle&id=gen8Srv25Nme4_1540_1304499135.
The SASL project at the University of the Western Cape aims to develop a fully automated translation system between English and South African Sign Language (SASL). Three important aspects of this system require SASL documentation and knowledge: recognition of SASL from a video sequence, linguistic translation between SASL and English, and the rendering of SASL. Unfortunately, SASL documentation is a scarce resource, and no official or complete documentation exists. This research focuses on creating an online collaborative video annotation knowledge management system for SASL, to which various members of the community can upload SASL videos and annotate them in any of the sign language notation systems SignWriting, HamNoSys and/or Stokoe. In this way, knowledge about SASL structure is pooled into a central and freely accessible knowledge base that can be used as required. The usability and performance of the system were evaluated. Usability was graded by users on a rating scale from one to five for a specific set of tasks; the system achieved an overall usability score of 3.1, slightly better than average. The performance evaluation included load and stress tests, which measured the system's response time for a number of users performing a specific set of tasks. The system was found to be stable and able to scale to an increasing user base through improvements to the underlying hardware.
Cooper, H. M. "Sign language recognition : generalising to more complex corpora". Thesis, University of Surrey, 2010. http://epubs.surrey.ac.uk/843617/.
Yi, Beifang. "A framework for a sign language interfacing system". abstract and full text PDF (free order & download UNR users only), 2006. http://0-gateway.proquest.com.innopac.library.unr.edu/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3210068.
Quinto-Pozos, David Gilbert. "Contact between Mexican sign language and American sign language in two Texas border areas". Thesis, 2002. http://wwwlib.umi.com/cr/utexas/fullcit?p3082889.
Cormier, Kearsy Annette. "Grammaticization of indexic signs how American Sign Language expresses numerosity /". 2002. http://wwwlib.umi.com/cr/utexas/fullcit?p3077627.
Yoel, Judith. "Canada's Maritime sign language". 2009. http://hdl.handle.net/1993/21581.
Belaldavar, Amruthraj. "American Sign Language generator /". 2008. http://proquest.umi.com/pqdweb?did=1597619941&sid=3&Fmt=2&clientId=10361&RQT=309&VName=PQD.
Reed, Lauren W. "Sign Languages of Western Highlands, Papua New Guinea, and their Challenges for Sign Language Typology". Master's thesis, 2019. http://hdl.handle.net/1885/165444.
Lin, Chien-hung and 林建宏. "Tense in Taiwan Sign Language". Thesis, 2007. http://ndltd.ncl.edu.tw/handle/94258836592973163892.
National Chung Cheng University
Graduate Institute of Linguistics
95
Tense allows a speaker to locate a situation relative to speech time. Taiwan Sign Language (hereafter TSL) does not mark tense morphologically; in TSL, tense is determined directly by temporal adverbials. The main purpose of this thesis is to investigate how TSL expresses tense, how modality effects influence tense expression, and what kind of conceptual structure is employed in locating events in time. The thesis addresses three major issues. The first concerns the difference between discreteness-oriented and gradience-oriented analyses. Traditional linguistics generally defines language so as to exclude not only meaningful gestures but also meaningful gradient aspects of the speech signal; the language signal is thus characterized by discreteness. Under the discreteness view, temporal signs are treated as directed toward predetermined loci along time lines. We point out, however, that time lines face the problem of unpredictability of placement. Following Liddell's (2003) claim, we assume that grammar, gradience and gesture are tightly intertwined in expressing meaning. On this view, the placement of a temporal sign is not limited to a predetermined set of possible loci: since the hand can move in an unlimited number of directions, the range of directions is gradient. Meaning is realized through the mapping between mental space and semantic space, and the directionality of a temporal sign toward a blended element of the blend space provides a mapping instruction. Secondly, the lexicon formation strategy of locating temporal adverbials (hereafter LTA) is elucidated. Like Mandarin, TSL resorts to LTA to locate events in time. We further divide LTA into deictic, anaphoric and referential adverbials, and by means of the metaphor "Future is Ahead" and directionality we offer a systematic account of LTA.
Meanwhile, we also address the controversial issue of numeral incorporation, and we compare the lexicon formation strategies of LTA in TSL and Mandarin. Mandarin is the official language of Taiwan, and most TSL signers are familiar with it. Mandarin employs several types of metaphors to express time, such as shang / xia (up / down), qian / hou (front / back) and lai / qu (come / go). TSL, however, has an independent metaphor schema employed in the formation of LTA, uninfluenced by Mandarin. Thirdly, we examine temporal relations in TSL discourse. By comparing the grammatical tense of English with the non-grammatical tense of Mandarin, we identify the functions of LTA and the absence of present-time marking in TSL. Viewing LTA as establishing the temporal frame to which subsequent events are anchored, we further investigate how temporal frames are established, allocated and rearranged in the signing space. Once a temporal frame is set up in signing space, events can be located in time by modifying the directionality of a sign to associate it with that time, without repeating the LTA.