Dissertations / Theses on the topic 'Gesture'

Consult the top 50 dissertations / theses for your research on the topic 'Gesture.'

1

Lindberg, Martin. "Introducing Gestures: Exploring Feedforward in Touch-Gesture Interfaces." Thesis, Malmö universitet, Fakulteten för kultur och samhälle (KS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-23555.

Full text
Abstract:
This interaction design thesis aimed to explore how users could be introduced to the different functionalities of a gesture-based touch-screen interface. This was done through a user-centred design research process in which the designer was taught different artefacts by experienced users. Insights from this process laid the foundation for an interactive, digital gesture-introduction prototype. Testing this prototype with users yielded the study's results. While it contained several areas for improvement regarding implementation and behaviour, the prototype's underlying methods and qualities were well received. Further development would be needed to fully assess its viability. The user-centred research methods used in this project proved valuable for the later ideation and prototyping stages. Activities and results from this project indicate a potential for designers to further explore ways of ensuring the discoverability of touch-gesture interactions. For future projects the author suggests more extensive research and testing with a greater sample size and a wider demographic.
2

Campbell, Lee Winston. "Visual classification of co-verbal gestures for gesture understanding." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/8707.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2001.
Includes bibliographical references (leaves 86-92).
A person's communicative intent can be better understood by either a human or a machine if the person's gestures are understood. This thesis project demonstrates an expansion of both the range of co-verbal gestures a machine can identify and the range of communicative intents the machine can infer. We develop an automatic system that uses realtime video as sensory input and then segments, classifies, and responds to co-verbal gestures made by users in realtime as they converse with a synthetic character known as REA, which is being developed in parallel by Justine Cassell and her students at the MIT Media Lab. A set of 670 natural gestures, videotaped and visually tracked in the course of conversational interviews and then hand segmented and annotated according to a widely used gesture classification scheme, is used in an offline training process that trains Hidden Markov Model classifiers. A number of feature sets are extracted and tested in the offline training process, and the best performer is employed in an online HMM segmenter and classifier that requires no encumbering attachments to the user. Modifications made to the REA system enable REA to respond to the user's beat and deictic gestures as well as turn-taking requests the user may convey in gesture. The recognition results obtained are far above chance, but too low for use in a production recognition system. The results provide a measure of validity for the gesture categories chosen, and they provide positive evidence for an appealing but difficult-to-prove proposition: to the extent that a machine can recognize and use these categories of gestures to infer information not present in the words spoken, there is exploitable complementary information in the gesture stream.
by Lee Winston Campbell.
Ph.D.
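Campbell's offline training step fits one Hidden Markov Model per gesture class and then classifies a sequence by which model assigns it the highest likelihood. A minimal sketch of that scheme follows; the two-state, binary-symbol toy models and the gesture names are invented here for illustration and are not taken from the thesis:

```python
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | HMM) for a discrete-symbol
    HMM with initial probs pi, transition matrix A, emission matrix B."""
    alpha = pi * B[:, obs[0]]
    log_prob = 0.0
    for symbol in obs[1:]:
        scale = alpha.sum()
        log_prob += np.log(scale)
        alpha = (alpha / scale) @ A * B[:, symbol]
    return log_prob + np.log(alpha.sum())

def classify(obs, models):
    """Pick the gesture class whose HMM assigns the highest likelihood."""
    return max(models, key=lambda name: forward_log_likelihood(obs, *models[name]))

# Two toy 2-state models over a binary feature alphabet: a "beat"-like
# model that tends to alternate states, and a "deictic"-like model that
# tends to hold one state.  (pi, A, B) per class.
models = {
    "beat": (np.array([0.5, 0.5]),
             np.array([[0.1, 0.9], [0.9, 0.1]]),
             np.array([[0.9, 0.1], [0.1, 0.9]])),
    "deictic": (np.array([0.5, 0.5]),
                np.array([[0.9, 0.1], [0.1, 0.9]]),
                np.array([[0.9, 0.1], [0.1, 0.9]])),
}
```

An alternating observation sequence then scores higher under the "beat" model, while a constant sequence favours "deictic".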
3

Smith, Jason Alan. "Naturalistic skeletal gesture movement and rendered gesture decoding." Diss., Online access via UMI:, 2006.

Find full text
4

Davis, James W. "Gesture recognition." Honors in the Major Thesis, University of Central Florida, 1994. http://digital.library.ucf.edu/cdm/ref/collection/ETH/id/126.

Full text
Abstract:
This item is only available in print in the UCF Libraries. If this is your Honors Thesis, you can help us make it available online for use by researchers around the world by following the instructions on the distribution consent form at http://library.ucf.edu/Systems/DigitalInitiatives/DigitalCollections/InternetDistributionConsentAgreementForm.pdf. You may also contact the project coordinator, Kerri Bottorff, at kerri.bottorff@ucf.edu for more information.
Bachelors
Arts and Sciences
Computer Science
5

Yunus, Fajrian. "Prediction of Gesture Timing and Study About Image Schema for Metaphoric Gestures." Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS551.

Full text
Abstract:
Communicative gestures and speech are tightly linked, and we want to predict gestures automatically from speech. Speech itself has two constituents: the acoustics and the content of the speech (i.e. the text). In one part of this dissertation, we develop a model based on a recurrent neural network with an attention mechanism to predict gesture timing, that is, when a gesture should happen and what kind of gesture it should be. We use a sequence-comparison technique to evaluate the model's performance, and we conduct a subjective study to measure how respondents judge the naturalness, temporal consistency, and semantic consistency of the generated gestures. In another part of the dissertation, we address the generation of metaphoric gestures. Metaphoric gestures carry meaning, so the relevant semantics must be extracted from the content of the speech. This is done using the concept of image schema, as demonstrated by Ravenet et al. However, to use image schemas in machine learning techniques, they have to be converted into vectors of real numbers. We therefore investigate how image schemas can be transformed into vectors using word-embedding techniques. Lastly, we investigate how to represent hand gesture shapes. The representation has to be compact, yet broad enough to cover a sufficient range of shapes and, through them, a sufficient range of semantics.
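The image-schema-to-vector step can be sketched with a toy word-embedding lookup. The four-dimensional vectors and the schema labels below are invented for illustration; a real pipeline of this kind would use pretrained embeddings such as word2vec or GloVe:

```python
import numpy as np

# Made-up 4-D "embeddings" standing in for pretrained word vectors.
EMB = {
    "container": np.array([0.9, 0.1, 0.0, 0.2]),
    "inside":    np.array([0.8, 0.2, 0.1, 0.1]),
    "path":      np.array([0.1, 0.9, 0.3, 0.0]),
    "movement":  np.array([0.2, 0.8, 0.4, 0.1]),
}

def schema_vector(schema):
    """Embed an image-schema label by averaging its word vectors."""
    return np.mean([EMB[w] for w in schema.lower().split()], axis=0)

def cosine(u, v):
    """Cosine similarity between two embedded schemas."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
```

Related schemas such as CONTAINER and INSIDE then end up closer in the vector space than unrelated ones, which is what makes the vectors usable as machine-learning features.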
6

Cheng, You-Chi. "Robust gesture recognition." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/53492.

Full text
Abstract:
It is a challenging problem to make a general hand gesture recognition system work in a practical operating environment. This study focuses on recognizing English letters and digits performed near the steering wheel of a car and captured by a video camera. Like most human-computer interaction (HCI) scenarios, in-car gesture recognition suffers from various robustness issues, including multiple human factors and highly varying lighting conditions, which raises several research issues. First, multiple gesturing alternatives may share the same meaning, which is not typical of most previous systems. Next, gestures may not be performed as expected because users cannot see what exactly has been written, which increases gesture diversity significantly. In addition, varying illumination conditions make hand detection difficult and thus result in noisy hand gestures. Most severely, users tend to perform letters at a fast pace, which may leave too few frames to describe a gesture well. Since users are allowed to perform gestures freestyle, multiple alternatives and variations must be considered when modeling gestures. The main contribution of this work is to analyze and address these challenging issues step by step so that the robustness of the whole system can be effectively improved. By choosing a suitable color-space representation and applying compensation techniques for varying recording conditions, hand detection performance under multiple illumination conditions is first enhanced. The issues of low frame rate and differing gesture tempo are then resolved via cubic B-spline interpolation and the i-vector method for feature extraction, respectively. Finally, the remaining issues are handled by other modeling techniques such as sub-letter stroke modeling. Experimental results based on these strategies show that the proposed framework clearly improves system robustness, encouraging future research on more discriminative features and modeling techniques.
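The low-frame-rate fix, interpolating extra points along the captured trajectory, can be sketched as follows. A Catmull-Rom spline is used here as a simple cubic stand-in for the thesis's cubic B-spline step, and the upsampling factor is arbitrary:

```python
import numpy as np

def catmull_rom_upsample(points, factor):
    """Upsample a 2-D trajectory with a Catmull-Rom cubic spline,
    inserting `factor` samples per original segment (endpoints clamped)."""
    pts = np.asarray(points, dtype=float)
    padded = np.vstack([pts[0], pts, pts[-1]])   # clamp both endpoints
    out = []
    for i in range(len(pts) - 1):
        p0, p1, p2, p3 = padded[i], padded[i + 1], padded[i + 2], padded[i + 3]
        for t in np.linspace(0.0, 1.0, factor, endpoint=False):
            # Standard Catmull-Rom basis between p1 and p2.
            out.append(0.5 * ((2 * p1) + (-p0 + p2) * t
                              + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                              + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3))
    out.append(pts[-1])
    return np.array(out)
```

The spline passes through every original sample, so a sparse fast-pace gesture gains in-between frames without distorting the captured path.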
7

Cometti, Jean Pierre. "The architect's gesture." Pontificia Universidad Católica del Perú - Departamento de Humanidades, 2012. http://repositorio.pucp.edu.pe/index/handle/123456789/112899.

Full text
8

Kaâniche, Mohamed Bécha. "Human gesture recognition." Nice, 2009. http://www.theses.fr/2009NICE4032.

Full text
Abstract:
In this thesis, we aim to recognize gestures (e.g. raising a hand) and, more generally, short actions (e.g. falling, bending) performed by an individual. Many techniques have been proposed for gesture recognition in specific environments (e.g. a laboratory) using the cooperation of several sensors (e.g. a camera network, or an individual equipped with markers). Despite these strong hypotheses, gesture recognition remains brittle and often depends on the position of the individual relative to the cameras. We propose to relax these hypotheses in order to devise a general algorithm that recognizes the gestures of an individual moving in an unconstrained environment and observed through a limited number of cameras, the goal being to estimate the likelihood of correct recognition as a function of the observation conditions. Our method classifies a set of gestures by learning motion descriptors: local signatures of the motion of corner points, associated with local textural descriptions of the points' neighbourhoods. We validate the approach on the public KTH action database, with encouraging results.
9

Alon, Jonathan. "Spatiotemporal Gesture Segmentation." Boston University Computer Science Department, 2006. https://hdl.handle.net/2144/1884.

Full text
Abstract:
Spotting patterns of interest in an input signal is a very useful task in many different fields including medicine, bioinformatics, economics, speech recognition and computer vision. Example instances of this problem include spotting an object of interest in an image (e.g., a tumor), a pattern of interest in a time-varying signal (e.g., audio analysis), or an object of interest moving in a specific way (e.g., a human's body gesture). Traditional spotting methods, which are based on Dynamic Time Warping or hidden Markov models, use some variant of dynamic programming to register the pattern and the input while accounting for temporal variation between them. At the same time, those methods often suffer from several shortcomings: they may give meaningless solutions when input observations are unreliable or ambiguous, they require a high complexity search across the whole input signal, and they may give incorrect solutions if some patterns appear as smaller parts within other patterns. In this thesis, we develop a framework that addresses these three problems, and evaluate the framework's performance in spotting and recognizing hand gestures in video. The first contribution is a spatiotemporal matching algorithm that extends the dynamic programming formulation to accommodate multiple candidate hand detections in every video frame. The algorithm finds the best alignment between the gesture model and the input, and simultaneously locates the best candidate hand detection in every frame. This allows for a gesture to be recognized even when the hand location is highly ambiguous. The second contribution is a pruning method that uses model-specific classifiers to reject dynamic programming hypotheses with a poor match between the input and model. Pruning improves the efficiency of the spatiotemporal matching algorithm, and in some cases may improve the recognition accuracy. 
The pruning classifiers are learned from training data, and cross-validation is used to reduce the chance of over-pruning. The third contribution is a subgesture reasoning process that models the fact that some gesture models can falsely match parts of other, longer gestures. By integrating subgesture reasoning, the spotting algorithm can avoid the premature detection of a subgesture when the longer gesture is actually being performed. Subgesture relations between pairs of gestures are automatically learned from training data. The performance of the approach is evaluated on two challenging video datasets: hand-signed digits gestured by users wearing short-sleeved shirts in front of a cluttered background, and American Sign Language (ASL) utterances gestured by native ASL signers. The experiments demonstrate that the proposed method is more accurate and efficient than competing approaches, and that it can be applied generally to alignment or search problems with multiple input observations that use dynamic programming to find a solution.
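The dynamic-programming registration that such spotting methods build on is classic dynamic time warping. A minimal 1-D version, without the thesis's multiple-candidate, pruning, and subgesture extensions, looks like this:

```python
import numpy as np

def dtw(a, b):
    """Dynamic time warping distance between two 1-D sequences:
    minimal cumulative |a_i - b_j| cost over all monotonic alignments."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three predecessor alignments.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Because warping absorbs timing differences, a sequence matches a template perfectly even when one of its samples is held for an extra frame, which is exactly the temporal variation the abstract describes.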
10

Macleod, Tracy. "Gesture signs in social interaction : how group size influences gesture communication." Thesis, University of Glasgow, 2009. http://theses.gla.ac.uk/1205/.

Full text
Abstract:
This thesis explores the effects of group size on gesture communication. Signs in general change, both in the kind of information they convey and in the way they convey it, and these changes depend on interactive communication. For instance, speech in smaller groups resembles dialogue, while in larger groups it resembles monologue. It was predicted that gestures would be influenced by group size in a similar way. In line with predictions, communication in groups of 5 was dialogue-like, whereas in groups of 8 it was monologue-like. This was evident from the types of gesture that occurred, with more beat and deictic gestures being produced in groups of 5. Iconic gesture production was comparable across group sizes but, as predicted, gestures were more complex in groups of 8; this was also the case for social gestures. The findings fit with dialogue models of communication, in particular the Alignment Model. Also in line with this model, group members aligned on gesture production and form.
11

Bodiroža, Saša [Verfasser], Verena V. [Gutachter] Hafner, Yael [Gutachter] Edan, and Bruno [Gutachter] Lara. "Gestures in human-robot interaction : development of intuitive gesture vocabularies and robust gesture recognition / Saša Bodiroža ; Gutachter: Verena V. Hafner, Yael Edan, Bruno Lara." Berlin : Mathematisch-Naturwissenschaftliche Fakultät, 2017. http://d-nb.info/1126553557/34.

Full text
12

Arvidsson, Carina. "Tal och gesters samverkan i undervisningen : En empirisk studie på lågstadiet." Thesis, Linnéuniversitetet, Institutionen för svenska språket (SV), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-96181.

Full text
Abstract:
The study aims to investigate the relationship between speech and gestures on the basis of teachers' reasoning and language use in the classroom. It takes an embodiment perspective, drawing on the concepts of deictic, metaphoric, iconic, and rhythmic gestures from McNeill's (1992) classification of gestures into categories. Data were collected through unstructured observation and semi-structured interviews. The observations showed that deictic movements, mainly pointing at the board, were the most common gestures among the teachers, while rhythmic gestures did not occur at all. The next most common were metaphoric gestures, which symbolize an abstract idea or an action being performed. The interviews revealed that the teachers considered it important to use gestures in their teaching. The didactic implication of the study is that gestures bring life and movement into teaching and are a good aid for clarifying words and concepts, both for pupils with Swedish as their first language and for pupils with Swedish as a second language. From the results it is concluded that gestures and verbal communication are a useful combination in teaching for helping pupils take in the message.
13

Kuhlman, Lane M. "Gesture Mapping for Interaction Design: An Investigative Process for Developing Interactive Gesture Libraries." The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1244003264.

Full text
14

Semprini, Mattia. "Gesture Recognition: una panoramica." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/15672/.

Full text
Abstract:
For decades, people interacted with computers and other devices almost exclusively by pressing keys and clicking a mouse. Today a great change is under way, driven by a wave of new technologies that respond to more natural actions such as movements of the hands or of the whole body. The technology market was first shaken by the replacement of standard interaction techniques with touch- and motion-sensing approaches; the next step is the introduction of techniques and technologies that let users access and manipulate information by interacting with a computer system through gestures and body actions alone. Gesture recognition arises in this context as a substantial part of computer science and language technology, with the goal of interpreting and processing human gestures through computer algorithms. In the first two chapters, this thesis traces the history of wearable technologies, from the first watches that went beyond simply telling the time to the systems used today for gesture recognition. The third chapter surveys the most widely used gesture-classification algorithms. The fourth examines one of the first frameworks designed to let developers concentrate on the application while leaving aside the encoding and classification of gestures. The last part examines one of the most capable and effective devices in this field, the Myo armband, together with two studies that demonstrate its validity.
15

Gingir, Emrah. "Hand Gesture Recognition System." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612532/index.pdf.

Full text
Abstract:
This thesis presents a hand gesture recognition system that replaces input devices such as the keyboard and mouse with static and dynamic hand gestures for interactive computer applications. Despite the growing attention such systems receive, they still face certain limitations in the literature: most applications impose constraints such as controlled lighting conditions, a specific camera, a multi-colored glove worn by the user, or large amounts of training data. The system described in this study removes all these restrictions and provides an adaptive, effort-free environment for the user. The study starts with an analysis of the performance of different color spaces for skin-color extraction; this analysis is independent of the working system and is performed only to gain insight into the color spaces. The working system consists of two steps, hand detection and hand gesture recognition. In the hand detection process, a skin locus in normalized RGB color space is used to threshold the coarse skin pixels in the image. An adaptive skin locus, whose varying boundaries are estimated from the coarse skin-region pixels, then segments the distinct skin color in the image for the current conditions. Since the face has a distinctive shape, it is detected among the connected groups of skin pixels using shape analysis, and non-face connected groups of skin pixels are identified as hands. The gesture of the hand is recognized by an improved centroidal profile method applied around the detected hand. A 3D flight war game, a boxing game, and a media player, all controlled remotely using only static and dynamic hand gestures, were developed as human-machine interface applications on the theoretical background of this study. In the experiments, recorded videos were used to measure the performance of the system, and a correct recognition rate of about 90% was achieved in near real time.
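The coarse thresholding step can be sketched as below. The chromaticity bounds are illustrative placeholders; the thesis instead estimates adaptive boundaries from the coarse skin region for the current conditions:

```python
import numpy as np

def skin_mask(img, r_range=(0.36, 0.465), g_range=(0.28, 0.363)):
    """Coarse skin mask in normalized-RGB (chromaticity) space.
    img: array-like of shape (H, W, 3) with RGB values.
    Normalizing by R+G+B removes much of the brightness variation,
    so the bounds act on color alone."""
    img = np.asarray(img, dtype=float)
    s = img.sum(axis=2) + 1e-8          # avoid division by zero
    r, g = img[..., 0] / s, img[..., 1] / s
    return ((r_range[0] <= r) & (r <= r_range[1])
            & (g_range[0] <= g) & (g <= g_range[1]))
```

A brightly or dimly lit skin pixel lands at roughly the same (r, g) point, which is why this representation helps under the varying illumination the abstract describes.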
16

Metais, Thierry. "A dynamic gesture interface." Thesis, University of Ottawa (Canada), 2005. http://hdl.handle.net/10393/26983.

Full text
Abstract:
Technological advances in recent years have made three-dimensional virtual environments ubiquitous computer applications. Traditional input devices such as keyboards and mice have reached their limits, as they are no longer intuitive for managing three-dimensional interactions. Over the last decades, digital gloves were invented in an attempt to capture hand gestures. The work presented in this thesis concerns recognizing hand-shape-based gestures for virtual environment navigation. We address this issue in two steps: first, we look for gesture representations that allow good discrimination between gestures; second, we examine three classification strategies, namely template matching, neural networks, and hidden Markov models, applied to suitably formatted inputs. Experiments on classifying gesture samples with the three methods are encouraging (more than 90 percent recognition), but when looking at real-time gestural sequences we conclude that only the Viterbi and neural network approaches should be used for a gesture interface.
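Of the three strategies, template matching is the simplest to sketch: classify a glove's finger-flexion reading by its nearest stored template. The five-value flexion vectors and the gesture names below are invented for illustration, not taken from the thesis:

```python
import math

# Illustrative templates: flexion in [0, 1] per finger (thumb..little).
TEMPLATES = {
    "fist":  [1.0, 1.0, 1.0, 1.0, 1.0],
    "open":  [0.0, 0.0, 0.0, 0.0, 0.0],
    "point": [1.0, 0.0, 1.0, 1.0, 1.0],  # index finger extended
}

def classify_shape(sample):
    """Nearest-template classification by Euclidean distance."""
    return min(TEMPLATES, key=lambda name: math.dist(sample, TEMPLATES[name]))
```

The appeal is that adding a gesture means adding one template; the drawback, which motivates the neural-network and HMM alternatives, is that a static distance cannot model the temporal structure of a gesture sequence.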
17

Easton, Beth Louise. "Gesture of the book." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0018/MQ49744.pdf.

Full text
18

Kang, Angela. "Chinoiserie as musical gesture." Thesis, Click to view the E-thesis via HKUTO, 2005. http://sunzi.lib.hku.hk/hkuto/record/B40040240.

Full text
19

Rajagopal, Manoj Kumar. "Cloning with gesture expressivity." Phd thesis, Institut National des Télécommunications, 2012. http://tel.archives-ouvertes.fr/tel-00719301.

Full text
Abstract:
Virtual environments allow human beings to be represented by virtual humans, or avatars. Users can share a sense of virtual presence if the avatar looks like the real human it represents; this classically involves turning the avatar into a clone with the real human's appearance and voice. However, the possibility of cloning the gesture expressivity of a real person has received little attention so far. Gesture expressivity combines the style and mood of a person, and expressivity parameters have been defined in earlier work for animating embodied conversational agents. In this work, we focus on expressivity in wrist motion. First, we propose algorithms to estimate three expressivity parameters from captured 3D wrist trajectories: repetition, spatial extent, and temporal extent. We then conduct a perceptual study, through a user survey, of the relevance of expressivity for recognizing an individual human. We animated a virtual agent using the expressivity estimated from individual humans, and asked users whether they could recognize the individual behind each animation. We found that when gestures are repeated in the animation, users perceive this as a discriminative feature for recognizing humans, whereas the absence of repetition is matched with any human, regardless of whether they repeat gestures or not. More importantly, we found that 75% or more of users could recognize the real human (out of two proposed) from an animated virtual avatar based only on the spatial and temporal extents. Consequently, gesture expressivity is a relevant cue for cloning and can be used as another element in the development of a virtual clone that represents a person.
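Two of the three parameters can be estimated from a captured trajectory with simple geometry. The definitions below, bounding-box diagonal for spatial extent and duration for temporal extent, are illustrative simplifications rather than the thesis's actual estimators:

```python
import numpy as np

def expressivity(traj, fps):
    """Estimate two expressivity cues from a wrist trajectory of shape
    (frames, 3): spatial extent as the diagonal of the motion's 3-D
    bounding box, temporal extent as the motion's duration in seconds."""
    traj = np.asarray(traj, dtype=float)
    spatial_extent = float(np.linalg.norm(traj.max(axis=0) - traj.min(axis=0)))
    temporal_extent = len(traj) / fps
    return spatial_extent, temporal_extent
```

A sweeping, slow gesture thus yields large values for both cues, while a small, quick flick yields small ones, which is the kind of per-person signature the study asks users to recognize.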
20

Kozel, Susan. "As Vision Becomes Gesture." Thesis, University of Essex, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.506150.

Full text
21

Dang, Darren Phi Bang. "Template based gesture recognition." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/41404.

Full text
Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1996.
Includes bibliographical references (p. 65-66).
by Darren PHi Bang Dang.
M.S.
22

Harrison, Simon Mark. "Grammar, gesture and cognition." Bordeaux 3, 2009. http://www.theses.fr/2009BOR30071.

Full text
Abstract:
In this thesis, we examine how English speakers use gesture when they express negation. We identify nine gestures of negation and analyse their forms, their relation to grammatical negation, and their organisation with respect to discourse. Drawing on an audiovisual corpus, we demonstrate that gesture plays a fundamental role in negative constructions, such as the node and scope of negation, implicit negation, and cumulative negation. Over the course of our analyses we show how gesture exhibits universal tendencies to express negation as early and as frequently as possible in a negative sentence. We hypothesise that discourse context and the type of grammatical negation combine to determine which gestures speakers use and how they organise them, which allows us to build arguments in favour of a multimodal grammar. We accompany our analysis with a methodology for collecting, transcribing, and analysing an audiovisual corpus, and in a final chapter we apply this methodology to areas of the linguistic system other than negation: progressive aspect, epistemic modality, and focus operations. This thesis offers a multimodal analysis of grammatical notions in English, above all negation, and establishes a link between grammar, gesture, and cognition. While expressing negation, English speakers perform a multitude of gestures: movements of the hand and head that are linked to speech. These gestures can illustrate ideas, place emphasis on a hypothesis, and manage the interaction. The gestures that connect specifically to the expression of negation are conventional forms recognised as 'gestures of negation' regardless of the speaker and the precise context (cf. Ladewig 2008).
Following Geneviève Calbris (1990, 2003, 2005) and Adam Kendon (2002, 2004), we focused on the gestures of negation characterised by an open hand shape and a prone forearm. These gestures divide into two subgroups: horizontal-palm gestures and vertical-palm gestures. Adopting a novel approach to gesture analysis, which requires investigating the formal properties of gesture through slowed-down, muted video clips, we were able to identify and describe three gestures of negation with the horizontal palm and six with the vertical palm. These gestures may seem similar; however, after an introductory chapter in which we present our methodology and motivations, we demonstrate over the course of the second chapter that they are distinguished by when, how, and why speakers use them. In particular, we use examples to show that the type of grammatical particle (no, not, n't, none, etc.) combines with the type of negative context (an interruption, an apology, a refusal, etc.) to determine which gesture of negation the speaker will use and how they will deploy it with respect to their speech. In addition, cognitive linguistics provides us with tools to analyse the utterances and to explain the co-expression of negation in the vocal and gestural modalities from a cognitive perspective. In the third chapter, we take the study beyond negation and look for links between grammar, gesture, and cognition in other areas of the linguistic system.
We analyse progressive aspect, epistemic modality, and focus operations in order to link BE + -ING with cyclic gestures (Figure 3), epistemic modals and adverbs with rocking gestures, and syntactic dislocation with the pointing gestures that speakers use to make an element of their utterance salient. Throughout history, linguists have struggled against the dominant ideology in linguistics to expose dimensions of grammar other than its formal, morphosyntactic dimension: its social, functional, interactional, and cognitive dimensions. This thesis provides a methodology for analysing grammar from a multimodal perspective and uses negation, progressive aspect, epistemic modality, and focus operations to expose the gestural dimension of grammar.
In this thesis, I examine the way English speakers gesture when they negate. I identify nine gestures of negation and analyse their forms, their relation to grammatical negation, and their organisation with regard to speech. Drawing examples from an audiovisual corpus, I demonstrate that gesture plays a role in negative constructions, such as the node and scope of negation, inherent negation, and cumulative negation, and I show how these gestures also exhibit universal tendencies to express negation early and frequently in negative sentences. I argue that discourse context and type of grammatical negation combine to determine which gestures speakers use and how they use them, establishing arguments toward a multimodal grammar. I accompany this analysis with a methodology for collecting, transcribing, and analysing multimodal data, which in a final chapter I apply to areas of the linguistic system other than negation: namely, progressivity, epistemic modality, and focus operations. This thesis offers an in-depth multimodal analysis of grammatical notions in English, especially negation, and establishes a link between grammar, gesture, and cognition.
APA, Harvard, Vancouver, ISO, and other styles
23

Nilsson, Rebecca, and de Val Almida Winquist. "Hand Gesture Controlled Wheelchair." Thesis, KTH, Skolan för industriell teknik och management (ITM), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-264512.

Full text
Abstract:
Haptic technology is a field under constant development that already appears in many of today's products, for example in VR games and in vehicle controls. The same kind of technology could make life easier for people with disabilities by letting them control a wheelchair using hand gestures. The purpose of this project is to investigate whether a wheelchair can be controlled with hand gestures and, if so, which way of doing this is optimal. To answer the project's research questions, a small-scale prototype wheelchair was developed. The prototype is based on a microcontroller (an Arduino) driven by an IMU sensor that reads the angle of the user's hand; together, these components control two motors and steer the wheelchair. The result shows how hand gestures can steer the wheelchair forward, backward, left and right at constant speed, as well as make it stop. The prototype follows the movements of the user's hand but reacts more slowly than would be desirable in a real situation. Although many aspects of haptic wheelchair steering remain open, this project shows that there is great potential in implementing this kind of technology in an actual wheelchair.
Haptic steering is a technology that is developing quickly and is being incorporated into many of today's products, in everything from VR games to vehicle controls. In the same way, this technology could help people with impaired mobility by offering wheelchair control through hand movements. The purpose of this project was therefore to investigate whether a wheelchair can be steered with hand movements and, if so, which way is optimal. To answer the report's research question, a small-scale wheelchair prototype was built. It is based on a microcontroller, an Arduino, driven by an IMU sensor that measures the angle of the user's hand; with these, the motors can be controlled and the wheelchair manoeuvred. The work resulted in a proposal for how hand movements can steer the wheelchair forward, backward, left and right at constant speed, and make it stop. The prototype follows the gestures of the user's hand but reacts more slowly than would be desirable in reality. Although many development opportunities remain for haptic wheelchair steering, this work shows that there is great potential in implementing this hand-gesture technology in a real wheelchair.
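The steering idea described in the abstracts above, an IMU reading the angle of the user's hand which is then mapped to a drive command, can be sketched as follows. This is a minimal illustration under assumed conventions, not the thesis's Arduino code; the dead-zone value and command names are placeholders.

```python
import math

def tilt_angles(ax, ay, az):
    """Estimate hand pitch and roll in degrees from raw accelerometer
    readings (in g) -- the quantity an IMU-based controller reads."""
    pitch = math.degrees(math.atan2(ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, math.hypot(ax, az)))
    return pitch, roll

def drive_command(pitch, roll, dead_zone=15.0):
    """Map hand tilt to one of five commands (dead_zone is an assumed
    threshold in degrees below which the chair stays still)."""
    if abs(pitch) < dead_zone and abs(roll) < dead_zone:
        return "stop"
    if abs(pitch) >= abs(roll):
        return "forward" if pitch > 0 else "backward"
    return "right" if roll > 0 else "left"
```

A flat hand (acceleration only along the vertical axis) maps to "stop", while tilting the hand past the dead zone selects a driving direction.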
APA, Harvard, Vancouver, ISO, and other styles
24

Sulaiman, Amil, and Erik Janerdal. "Gesture controlled robot hand." Thesis, Högskolan i Halmstad, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-44940.

Full text
Abstract:
This project was chosen by the students themselves. The idea behind the proposed system was to offer an alternative method for controlling robotics in remote places and harsh environments using a vision-based approach. The purpose of the project is to introduce a more natural form of human-to-computer interaction, and to let humans working in hazardous environments perform their duties remotely. The components used in the project are a mechanical hand, a Raspberry Pi, a Raspberry Pi camera module v2, servos, a servo driver, and a Raspberry Pi display. The code is written in C++, and the libraries used were OpenCV for the computer-vision part and wiringPi for the servo control. The image processing is divided into four parts: finding the region of interest, which is responsible for segmenting the hand region; locating the fingertips and the palm centre; calculating the distances in order to detect movements of the user's hand; and finally a part responsible for the servo control. With respect to the specification and the goals of the project, the work resulted in a successful working system, with some limitations. However, the proposed system relies on the binary mask created in one of the first steps of the image-processing part, and the results show that the creation of this mask is heavily dependent on good lighting conditions in the scene. There is still room for improvements in the image processing, and for alternative methods, in order to achieve better results.
This project was chosen by the students themselves, and the idea behind the proposed system was to offer an alternative method for controlling robots in remote places and harsh environments using computer vision. The purpose of the project was to introduce a more natural way for humans and computers to interact, and to move humans away from dangerous environments while still letting them perform their work tasks. The components used in the project consist of a mechanical hand, a Raspberry Pi, a Raspberry Pi camera module v2, servos, a servo-driver board, and a Raspberry Pi display. The code is written in C++, and the libraries used are OpenCV for the image analysis and wiringPi for the servo control. The image processing is divided into four parts: determining the region in which the hand is located, locating the fingertips and the centre of the palm, calculating distances in order to detect the user's movements, and finally a part responsible for controlling the servos. With regard to the requirements specification and the goals of the project, it resulted in a successfully working system with some limitations. The proposed system depends, however, on the binary mask created in one of the first steps of the image-processing part, and the results show that creating this mask is strongly dependent on the lighting conditions of the scene. There is still room for further improvements in the image processing, and for alternative methods, in order to achieve better results.
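The abstracts above note that the whole pipeline hinges on a binary mask of the hand, and that this mask is sensitive to lighting. The thresholding idea behind such a mask (what OpenCV's `cv2.inRange` performs) can be sketched in NumPy; the HSV skin-tone range used here is a hypothetical example, not the thesis's calibration.

```python
import numpy as np

def binary_hand_mask(hsv_image, lower=(0, 40, 60), upper=(25, 255, 255)):
    """Return a binary mask (0 or 255) marking pixels whose HSV values
    fall inside a skin-tone window -- the test behind cv2.inRange.
    Because the window is fixed, the result is sensitive to lighting,
    which matches the limitation the thesis reports."""
    lo = np.array(lower)
    hi = np.array(upper)
    in_range = np.all((hsv_image >= lo) & (hsv_image <= hi), axis=-1)
    return in_range.astype(np.uint8) * 255
```

From this mask, contour analysis would then locate the fingertips and the palm centre.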
APA, Harvard, Vancouver, ISO, and other styles
25

Mann, Pamela Florence. "Meaning, gesture and Gauguin." Thesis, Queensland University of Technology, 1998. https://eprints.qut.edu.au/35891/1/35891_Mann_1998.pdf.

Full text
Abstract:
This thesis discusses the difference between two-dimensional painting and sculpture, although they have proceeded along parallel paths. Feminist, post-colonial and psychoanalytical discourses intertwine to cover complexities within the artwork of Gauguin and my own. Just as painting and sculpture run along parallel paths while covering problems inherent in their differences, so too do I attempt to describe similarities while recognising the areas where the two do not meet. Issues covered include the search for meaning, in such a way as to make the artworks relevant at the close of the twentieth century. A major consideration for a sculptor, which will be discussed, is that reading visual, two-dimensional images is something everyone is practised at, in contrast to the reception of three-dimensional works: audiences are less familiar with semiotic readings of sculpture. It is my contention that Gauguin's art and life can be compared to the way an audience views sculpture. His relations with women, in particular, carried an overwhelming double standard: at the same time as he adored and idealised the female, he totally rejected women's involvement in his life in any meaningful way.
APA, Harvard, Vancouver, ISO, and other styles
26

Kuhlman, Lane Marie. "Gesture mapping for interaction design an investigative process for developing interactive gesture libraries /." Columbus, Ohio : Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1244003264.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Duvillard, Jean. ""L'introspection gestuée" - La place des gestes et micro-gestes professionnels dans la formation initiale et continue des métiers de l'enseignement." Thesis, Lyon 1, 2014. http://www.theses.fr/2014LYO10191/document.

Full text
Abstract:
For the last thirty years, research and the scientific literature on education have highlighted the importance of the analysis of practice in the training of teachers and teacher educators. The question of professional gestures is today one of the central issues in teacher training. Drawing on varied and complementary theoretical approaches, such as anthropology, semiotics and cognitive ergonomics, our research and our object of study focus on the identification of the 'professional micro-gestures' that are at stake in the enactment and dynamics of professional gestures. The study attempts to measure and assess the importance of an embodied reflexive awareness, 'gestured introspection', of these micro-gestures in the appropriation and/or implementation of professional gestures by novices and experts in different disciplines and at different levels of schooling (primary and secondary). A good part of the difficulties teachers encounter lies in their lack of mastery of certain 'action micro-gestures' experienced in their didactic and pedagogical communication. Starting from two professional gestures, that of observing (oneself) and that of staging (oneself) (Alin, 2010), we have identified five micro-gestures that constantly interact between the protagonists of the classroom scene: gestured posture, voice, gaze, word choice, and tactical positioning (placement/movement). Methodologically, our data-collection protocol relies on video recordings of professional situations followed by self-confrontation interviews. This qualitative approach draws on both the analysis of work and the analysis of discourse (verbal and non-verbal language). The recorded material is examined and analysed within a theoretical framework whose principal reference is the semiotic approach of C. S. Peirce.
It is our experience as a choir director and orchestra conductor that led us to question the meaning of the acts performed, in their smallest and most significant details. The teacher as designer creates and innovates, but is also a performer. Like a musician, the teacher must know how to interpret the score they have created or borrowed, through the use of precise micro-gestures embodied in gestured, situated actions. Awareness of, and attention to, professional gestures and the micro-gestures that constitute their dynamics appears to us as one of the strong supports of initial and continuing training in building the pedagogical expertise of teachers and/or teacher educators.
For the last thirty years, research and the scientific literature on education have focused on the importance of analysing best practices in teacher training, from the point of view of both teachers and trainers. Today the subject of professional gestures has become one of the most important issues in effective teacher training. This study used data from varied and complementary theoretical approaches such as anthropology, semiotics, and cognitive ergonomics, and our research aims at identifying vibrant 'professional micro-gestures' which are to be put into practice. It tries to measure and assess the importance of the awareness of these micro-gestures, referred to as 'introspection on gestures', in the appropriation and the implementation of professional gestures by both novice teachers and experts in different subjects at primary and secondary levels. Most of the difficulties that teachers have to deal with are due to the fact that they do not control certain 'action micro-gestures' experienced in their communication, both didactic and educational. From two professional gestures, 'Observing' and 'Acting', we have highlighted five micro-gestures constantly interacting among the protagonists of the classroom: posture, voice, eye contact, speech, and the use of space and movement. The method that we used to collect data for our research is based on recordings of professional situations followed by self-assessment interviews and feedback. This qualitative approach deals with the analysis of both the work and the speech (verbal and non-verbal).
APA, Harvard, Vancouver, ISO, and other styles
28

Parra, González Luis Otto. "gestUI: a model-driven method for including gesture-based interaction in user interfaces." Doctoral thesis, Universitat Politècnica de València, 2017. http://hdl.handle.net/10251/89090.

Full text
Abstract:
The research reported and discussed in this thesis represents a novel approach to defining custom gestures and including gesture-based interaction in the user interfaces of software systems, with the aim of helping to solve the problems identified in the literature on the development of gesture-based user interfaces. The research is conducted according to the Design Science methodology, which is based on the design and investigation of artefacts in a context. In this thesis, the new artefact is the model-driven method for including gesture-based interaction in user interfaces. The methodology comprises two cycles. The main cycle is an engineering cycle, in which we design a model-driven method to include gesture-based interaction. The second is the research cycle, of which we define two: the first corresponds to the validation of the proposed method through an empirical evaluation, and the second to a technical action research study validating the method in an industrial context. Additionally, Design Science gives us clues on how to conduct the research, be rigorous, and put scientific rules into practice. Design Science has been key to organising our research, and we acknowledge this framework for helping us report our findings clearly. The thesis presents a theoretical framework introducing concepts related to the research performed, followed by a state of the art covering related work in three areas: human-computer interaction, the model-driven paradigm in human-computer interaction, and empirical software engineering. The design and implementation of gestUI are presented following the model-driven paradigm and the Model-View-Controller design pattern. We then performed two evaluations of gestUI: (i) an empirical evaluation based on ISO 25062-2006 to evaluate usability in terms of effectiveness, efficiency, and satisfaction,
where satisfaction is measured through perceived ease of use, perceived usefulness, and intention to use; and (ii) a technical action research study to evaluate user experience and usability. We use the Model Evaluation Method, the User Experience Questionnaire, and Microsoft Reaction Cards as guides for the aforementioned evaluations. The contributions of the thesis and the limitations of the tool support and the approach are discussed, and further work is presented.
The research reported and discussed in this thesis represents a new method for defining custom gestures and including gesture-based interaction in the user interfaces of software systems, with the goal of helping to solve the problems found in the related literature on the development of gesture-based user interfaces. This research work has been carried out according to the Design Science methodology, which is based on the design and investigation of artefacts in a context. In this thesis, the new artefact is the model-driven method for including gesture-based interaction in user interfaces. The methodology considers two cycles: the main cycle, called the engineering cycle, in which a model-driven method for including gesture-based interaction was designed, and the research cycle, of which two are defined. The first corresponds to the validation of the proposed method through an empirical evaluation, and the second to a Technical Action Research study validating the method in an industrial context. Additionally, Design Science provides the keys to conducting the research, being rigorous, and putting scientific rules into practice; it has also been a key resource for organising the research carried out in this thesis, and we acknowledge the application of this framework since it helped us report our findings clearly. This thesis presents a theoretical framework introducing concepts related to the research performed, followed by a state of the art covering related work in three areas: human-computer interaction, the model-driven paradigm in human-computer interaction, and empirical software engineering.
The design and implementation of gestUI are presented following the model-driven paradigm and the Model-View-Controller design pattern. We then performed two evaluations of gestUI: (i) an empirical evaluation based on ISO 25062-2006 to evaluate usability in terms of effectiveness, efficiency, and satisfaction, where satisfaction is measured through perceived ease of use, perceived usefulness, and intention to use; and (ii) a Technical Action Research study to evaluate user experience and usability. We used the Model Evaluation Method, the User Experience Questionnaire, and Microsoft Reaction Cards as guides for the aforementioned evaluations. The contributions of the thesis, the limitations of the method and the supporting tool, and future work are discussed and presented.
Parra González, LO. (2017). gestUI: a model-driven method for including gesture-based interaction in user interfaces [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/89090
TESIS
APA, Harvard, Vancouver, ISO, and other styles
29

Lascarides, Alex, and Matthew Stone. "Formal semantics for iconic gesture." Universität Potsdam, 2006. http://opus.kobv.de/ubp/volltexte/2006/1033/.

Full text
Abstract:
We present a formal analysis of iconic coverbal gesture. Our model describes the incomplete meaning of gesture that’s derivable from its form, and the pragmatic reasoning that yields a more specific interpretation. Our formalism builds on established models of discourse interpretation to capture key insights from the descriptive literature on gesture: synchronous speech and gesture express a single thought, but while the form of iconic gesture is an important clue to its interpretation, the content of gesture can be resolved only by linking it to its context.
APA, Harvard, Vancouver, ISO, and other styles
30

Stendahl, Jonas, and Johan Arnör. "Gesture Keyboard USING MACHINE LEARNING." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-157141.

Full text
Abstract:
The market for mobile devices is expanding rapidly. Text input is a large part of using a mobile device, so an input method that is convenient and fast is of great interest. Gesture keyboards allow the user to input text by dragging a finger over the letters in the desired word. This study investigates whether gesture keyboards can be enhanced using machine learning. A gesture keyboard based on a Multilayer Perceptron with backpropagation was developed and evaluated. The results indicate that the evaluated implementation is not an optimal solution to the problem of recognizing swiped words.
The market for mobile devices is expanding rapidly. Input is an important part of using such products, and an input method that is convenient and fast is therefore very interesting. A gesture keyboard offers the user the possibility of typing by dragging a finger over the letters in the desired word. This study investigates whether gesture keyboards can be improved with the help of machine learning. A keyboard using a Multilayer Perceptron with backpropagation was developed and evaluated. The results show that the examined implementation is not an optimal solution to the problem of recognising words entered with gestures.
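The classifier at the core of the study is a Multilayer Perceptron trained with backpropagation. A minimal sketch of that core, assuming a one-hidden-layer network with tanh units and a softmax cross-entropy loss; layer sizes, the learning rate, and the toy data are illustrative, and a real gesture keyboard would first encode each swipe as a fixed-length feature vector:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(n_in, n_hidden, n_out):
    """Small one-hidden-layer network; sizes are illustrative."""
    return {
        "W1": rng.normal(0.0, 0.5, (n_in, n_hidden)), "b1": np.zeros(n_hidden),
        "W2": rng.normal(0.0, 0.5, (n_hidden, n_out)), "b2": np.zeros(n_out),
    }

def forward(p, x):
    """tanh hidden layer followed by a softmax over word candidates."""
    h = np.tanh(x @ p["W1"] + p["b1"])
    z = h @ p["W2"] + p["b2"]
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return h, e / e.sum(axis=-1, keepdims=True)

def train_step(p, x, y, lr=0.1):
    """One backpropagation step on a batch, cross-entropy loss."""
    h, probs = forward(p, x)
    dz = (probs - y) / len(x)              # gradient at the softmax output
    dh = dz @ p["W2"].T * (1.0 - h ** 2)   # backprop through tanh
    p["W2"] -= lr * h.T @ dz
    p["b2"] -= lr * dz.sum(axis=0)
    p["W1"] -= lr * x.T @ dh
    p["b1"] -= lr * dh.sum(axis=0)
    return float(-np.mean(np.sum(y * np.log(probs + 1e-12), axis=1)))
```

Repeated calls to `train_step` on labelled swipe features drive the loss down, which is the mechanism the thesis evaluates for swiped-word recognition.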
APA, Harvard, Vancouver, ISO, and other styles
31

Eisenstein, Jacob (Jacob Richard). "Gesture in automatic discourse processing." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/44401.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008.
Includes bibliographical references (p. 145-153).
Computers cannot fully understand spoken language without access to the wide range of modalities that accompany speech. This thesis addresses the particularly expressive modality of hand gesture, and focuses on building structured statistical models at the intersection of speech, vision, and meaning. My approach is distinguished in two key respects. First, gestural patterns are leveraged to discover parallel structures in the meaning of the associated speech. This differs from prior work that attempted to interpret individual gestures directly, an approach that was prone to a lack of generality across speakers. Second, I present novel, structured statistical models for multimodal language processing, which enable learning about gesture in its linguistic context, rather than in the abstract. These ideas find successful application in a variety of language processing tasks: resolving ambiguous noun phrases, segmenting speech into topics, and producing keyframe summaries of spoken language. In all three cases, the addition of gestural features - extracted automatically from video - yields significantly improved performance over a state-of-the-art text-only alternative. This marks the first demonstration that hand gesture improves automatic discourse processing.
by Jacob Eisenstein.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
32

Wang, Lei. "Personalized Dynamic Hand Gesture Recognition." Thesis, KTH, Medieteknik och interaktionsdesign, MID, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-231345.

Full text
Abstract:
Human gestures, with their spatial-temporal variability, are difficult to recognize with a generic model or classifier applicable to everyone. To address this problem, personalized dynamic gesture recognition approaches are proposed in this thesis. Specifically, based on Dynamic Time Warping (DTW), a novel concept of the Subject Relation Network is introduced to describe the similarity of subjects in performing dynamic gestures, which offers a brand-new view of gesture recognition. By clustering or arranging training subjects based on the network, two personalization algorithms are proposed, for generative models and discriminative models respectively. Moreover, three basic recognition methods, DTW-based template matching, the Hidden Markov Model (HMM), and Fisher Vector classification, are compared and integrated into the proposed personalized gesture recognition. The proposed approaches are evaluated on a challenging dynamic hand gesture recognition dataset, DHG14/28, which contains the depth images and skeleton coordinates returned by the Intel RealSense depth camera. Experimental results show that the proposed personalized algorithms can significantly improve the performance of basic generative and discriminative models and achieve a state-of-the-art accuracy of 86.2%.
Human gestures, with their spatial and temporal variability, are difficult to recognise with a generic model or classifier. To address the problem, personalised dynamic gesture recognition approaches are proposed, based on Dynamic Time Warping (DTW) and a new concept, the Subject Relation Network, which describes similarity between subjects when performing dynamic gestures and thus offers a new view of gesture recognition. By clustering or arranging training subjects based on the network, two personalisation algorithms are proposed, for generative and discriminative models respectively. In addition, three basic recognition methods, DTW-based template matching, the Hidden Markov Model (HMM) and Fisher Vector classification, are compared and integrated into the proposed personalised gesture recognition approach. The proposed approaches are evaluated on a challenging dynamic hand gesture dataset, DHG14/28, which contains the depth images and skeleton coordinates returned by the Intel RealSense depth camera. Experimental results show that the proposed personalised algorithms can improve the performance of basic generative and discriminative models and achieve a state-of-the-art accuracy of 86.2%.
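DTW-based template matching, the first of the three basic recognition methods the thesis compares, can be sketched as follows. This is a generic illustration, not the thesis's implementation: the label names are made up, and a real system would warp multi-dimensional skeleton trajectories rather than scalar sequences.

```python
import math

def dtw_distance(a, b, dist=lambda p, q: abs(p - q)):
    """Dynamic Time Warping distance between two sequences of possibly
    different lengths, via the standard O(len(a)*len(b)) DP recurrence."""
    n, m = len(a), len(b)
    cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost[i][j] = dist(a[i - 1], b[j - 1]) + min(
                cost[i - 1][j],      # insertion
                cost[i][j - 1],      # deletion
                cost[i - 1][j - 1])  # match
    return cost[n][m]

def nearest_template(query, templates):
    """Classify a gesture as the label of its closest stored template."""
    return min(templates, key=lambda lbl: dtw_distance(query, templates[lbl]))
```

Because DTW absorbs differences in speed, a slower rendition of the same gesture still matches its template, which is what makes it a natural basis for comparing how different subjects perform the same gesture.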
APA, Harvard, Vancouver, ISO, and other styles
33

NORMELIUS, ANTON, and KARL BECKMAN. "Hand Gesture Controlled Omnidirectional Vehicle." Thesis, KTH, Skolan för industriell teknik och management (ITM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279822.

Full text
Abstract:
The purpose of this project was to study how hand gesture control can be implemented on a vehicle that uses mecanum wheels in order to move in all directions. Furthermore, it was investigated how the steering of such a vehicle can be made wireless to increase mobility. A prototype vehicle with four mecanum wheels was constructed. Mecanum wheels are wheels that enable translation in all directions: by varying the rotational direction of each wheel, the direction of the resulting force on the vehicle is altered, making it move in the desired direction. Hand gesture control was enabled by constructing a second prototype, attached to the hand, consisting of an IMU (Inertial Measurement Unit) and a transceiver. With the IMU, the hand's angle relative to the horizontal plane can be calculated, and instructions can be sent to the vehicle through the transceiver. These instructions contain a short message that specifies in which direction the vehicle should move; the vehicle rotates its wheels accordingly and moves in that direction. The results show that wireless hand-gesture-based control of an omnidirectional vehicle works without any noticeable delay in the transmission, and that the signals sent contain the correct information about the direction of movement.
The purpose of this project was to study how hand-gesture control can be implemented on a vehicle that uses mecanum wheels to move in all directions. It was also investigated how the steering of such a vehicle can be made wireless for increased mobility. A prototype vehicle consisting of four mecanum wheels was constructed; mecanum wheels make translation in all directions possible. By varying the direction of rotation of each motor, the direction of the resulting force on the vehicle is changed, allowing it to move in the desired direction. Hand-gesture control was enabled by building a second prototype, attached to the hand, consisting of an IMU and a transceiver. With the IMU, the angle of the hand relative to the horizontal plane can be calculated, and instructions can be sent to the vehicle via the transceiver. These instructions contain a short message specifying the direction in which the vehicle should move. The results show that wireless hand-gesture steering of a vehicle works without noticeable delay in the signal transmission, and that the signals sent to the vehicle contain the correct instructions about the directions of movement.
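The mecanum behaviour described above, varying each wheel's rotation to change the direction of the resulting force, follows the standard inverse-kinematics equations for a four-mecanum-wheel platform. A sketch under assumed geometry: the wheel radius and axle offsets are placeholders, and sign conventions vary between builds.

```python
def mecanum_wheel_speeds(vx, vy, wz, r=0.03, lx=0.1, ly=0.1):
    """Map a desired body velocity (vx forward, vy sideways, wz yaw)
    to the angular speeds of the four wheels, in the common textbook
    layout (front-left, front-right, rear-left, rear-right).
    r is the wheel radius; lx and ly are half the wheelbase and track."""
    k = lx + ly
    return (
        (vx - vy - k * wz) / r,  # front-left
        (vx + vy + k * wz) / r,  # front-right
        (vx + vy - k * wz) / r,  # rear-left
        (vx - vy + k * wz) / r,  # rear-right
    )
```

Driving straight forward spins all four wheels at the same speed, while pure sideways translation spins the diagonal pairs in opposite directions, which is how the rollers cancel the forward component and leave a lateral force.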
APA, Harvard, Vancouver, ISO, and other styles
34

Espinoza, Victor. "Gesture Recognition in Tennis Biomechanics." Master's thesis, Temple University Libraries, 2018. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/530096.

Full text
Abstract:
Electrical and Computer Engineering
M.S.E.E.
The purpose of this study is to create a gesture recognition system that interprets motion capture data of a tennis player to determine which biomechanical aspects of a tennis swing best correlate with swing efficacy. For the learning set, this work aimed to record 50 tennis athletes of similar competency with the Microsoft Kinect, performing standard tennis swings in the presence of different targets. From the acquired data we extracted biomechanical features hypothesized to correlate with ball trajectory under proper technique and tested them as sequential inputs to our classifiers. This work implements deep learning algorithms as variable-length sequence classifiers, recurrent neural networks (RNN), to predict tennis ball trajectory. To learn the temporal dependencies within a tennis swing, we implemented gate-augmented RNNs, comparing the baseline RNN to two gated models: gated recurrent units (GRU) and long short-term memory (LSTM) units. We observed similar classification performance across models, while the gated methods reached convergence twice as fast as the baseline RNN. The results showed 1.2 entropy loss and 50% classification accuracy, indicating that the hypothesized biomechanical features were only loosely correlated with swing efficacy, or that they were not accurately depicted by the sensor.
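The gated models the study compares can be illustrated with a minimal plain-NumPy GRU step folded over a variable-length sequence. The parameter layout, sizes, and initialization below are assumptions for the sketch, not the study's configuration:

```python
import numpy as np

def gru_step(x, h, params):
    """One GRU step: update gate z, reset gate r, candidate state."""
    Wz, Uz, bz = params["z"]
    Wr, Ur, br = params["r"]
    Wh, Uh, bh = params["h"]
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sigmoid(Wz @ x + Uz @ h + bz)              # update gate
    r = sigmoid(Wr @ x + Ur @ h + br)              # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h) + bh)  # candidate state
    return (1.0 - z) * h + z * h_tilde

def init_params(n_in, n_hid, rng):
    """Small random weights for each gate (illustrative only)."""
    make = lambda: (0.1 * rng.standard_normal((n_hid, n_in)),
                    0.1 * rng.standard_normal((n_hid, n_hid)),
                    np.zeros(n_hid))
    return {"z": make(), "r": make(), "h": make()}

def run_sequence(xs, n_hid, params):
    """Fold a variable-length feature sequence into a final hidden state."""
    h = np.zeros(n_hid)
    for x in xs:
        h = gru_step(x, h, params)
    return h
```

In a classifier like the study's, the final hidden state would feed a softmax layer over the target classes; a plain RNN omits the gates, and an LSTM adds a separate cell state.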
Temple University--Theses
APA, Harvard, Vancouver, ISO, and other styles
35

Scoble, Joselynne. "Stuttering blocks the flow of speech and gesture : the speech-gesture relationship in chronic stutterers." Thesis, McGill University, 1993. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=69730.

Full text
Abstract:
This thesis investigated the speech-gesture relationship of chronic adult stutterers in comparison to fluent controls, based on previous work by McNeill (1979, 1986). Significant differences were found between the speech and gesture characteristics of stutterers' narratives and those of fluent controls on a cartoon retelling task. Stutterers produced fewer cartoon details in their speech and fewer meanings per gesture. As well, stutterers were unable to begin a representational gesture at the same moment as a stuttered disfluency, resulting in gestures freezing or the hand being held at rest. A second experiment showed that stutterers were able to maintain and initiate non-communicative hand movements at the same moment as stuttering. Gesture did not replace speech during moments of stuttering even though manual movement during stuttering was possible. The results demonstrate the strength of the speech-gesture relationship and show that stuttering affects both modalities of expression.
APA, Harvard, Vancouver, ISO, and other styles
36

Nygård, Espen Solberg. "Multi-touch Interaction with Gesture Recognition." Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2010. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9126.

Full text
Abstract:

This master's thesis explores the world of multi-touch interaction with gesture recognition. The focus is on camera-based multi-touch techniques, as these add a new dimension to multi-touch with their ability to recognize objects. During the project, a multi-touch table based on the technology Diffused Surface Illumination was built. In addition to building the table, a complete gesture recognition system was implemented, and different gesture recognition algorithms were successfully tested in a multi-touch environment. The goal of this table, and the accompanying gesture recognition system, is to create an open and affordable multi-touch solution, with the purpose of bringing multi-touch out to the masses. By doing this, more people will be able to enjoy the benefits of a more natural interaction with computers. In a larger perspective, multi-touch is just the beginning; by adding additional modalities to our applications, such as speech recognition and full-body tracking, a whole new level of computer interaction will be possible.

APA, Harvard, Vancouver, ISO, and other styles
37

Jannedy, Stefanie, and Norma Mendoza-Denton. "Structuring information through gesture and intonation." Universität Potsdam, 2005. http://opus.kobv.de/ubp/volltexte/2006/877/.

Full text
Abstract:
Face-to-face communication is multimodal. In unscripted spoken discourse we can observe the interaction of several "semiotic layers", modalities of information such as syntax, discourse structure, gesture, and intonation.
We explore the role of gesture and intonation in structuring and aligning information in spoken discourse through a study of the co-occurrence of pitch accents and gestural apices.
Metaphorical spatialization through gesture also plays a role in conveying the contextual relationships between the speaker, the government and other external forces in a naturally-occurring political speech setting.
APA, Harvard, Vancouver, ISO, and other styles
38

Khan, Muhammad. "Hand Gesture Detection & Recognition System." Thesis, Högskolan Dalarna, Datateknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:du-6496.

Full text
Abstract:
The project introduces an application using computer vision for hand gesture recognition. A camera records a live video stream, from which a snapshot is taken with the help of the interface. The system is trained for each type of counting hand gesture (one, two, three, four, and five) at least once. After that, a test gesture is given to it and the system tries to recognize it. Research was carried out on a number of algorithms that could best differentiate hand gestures; the diagonal sum algorithm was found to give the highest accuracy rate. In the preprocessing phase, a self-developed algorithm removes the background of each training gesture. The image is then converted into a binary image, and the sums of all diagonal elements of the picture are taken. These sums help in differentiating and classifying different hand gestures. Previous systems have used data gloves or markers for input; this system has no such constraints, and the user can make hand gestures in view of the camera naturally. A completely robust hand gesture recognition system is still under heavy research and development; the implemented system serves as an extendible foundation for future work.
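The abstract gives only the idea of the diagonal sum algorithm, not its exact formulation. One plausible reading, sketched here with NumPy, is to sum the binary image along the main diagonal and a few off-diagonals (the function name and the number of offsets are illustrative assumptions):

```python
import numpy as np

def diagonal_sum_features(binary_img, n_offsets=5):
    """Sum the elements along diagonals of a binary hand image.

    Returns the sums of the main diagonal and of n_offsets diagonals
    on either side of it; differences between these sums across hand
    shapes are what the classifier would key on.
    """
    return np.array([np.trace(binary_img, offset=k)
                     for k in range(-n_offsets, n_offsets + 1)])
```

Applied to the background-removed, binarized snapshot, the resulting feature vector could then be compared against the stored training vectors for each count gesture.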
APA, Harvard, Vancouver, ISO, and other styles
39

Glatt, Ruben [UNESP]. "Deep learning architecture for gesture recognition." Universidade Estadual Paulista (UNESP), 2014. http://hdl.handle.net/11449/115718.

Full text
Abstract:
Activity recognition from computer vision plays an important role in research towards applications like human-computer interfaces, intelligent environments, surveillance or medical systems. In this work, a gesture recognition system based on a deep learning architecture is proposed. It is used to analyze the performance when trained with multi-modal input data on an Italian sign language dataset. The underlying research area is a field called human-machine interaction. It combines research on natural user interfaces, gesture and activity recognition, machine learning and sensor technologies, which are used to capture the environmental input for further processing. Those areas are introduced and the basic concepts are described. The development environment for preprocessing data and programming machine learning algorithms with Python is described and the main libraries are discussed. The gathering of the multi-modal data streams is explained and the dataset used is outlined. The proposed learning architecture consists of two steps: the preprocessing of the input data and the actual learning architecture. The preprocessing is limited to three different strategies, which are combined to offer six different preprocessing profiles. In the second step, a Deep Belief Network is introduced and its components are explained. With this setup, 294 experiments are conducted with varying configuration settings. The variables that are altered are the preprocessing settings, the layer structure of the model, the pretraining learning rate and the fine-tuning learning rate. The evaluation of these experiments shows that the approach of using a deep learning architecture on an activity or gesture recognition task yields acceptable results, but has not yet reached a level of maturity which would allow the developed models to be used in serious applications.
APA, Harvard, Vancouver, ISO, and other styles
40

Gillian, N. E. "Gesture recognition for musician computer interaction." Thesis, Queen's University Belfast, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.546348.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Cairns, Alistair Y. "Towards the automatic recognition of gesture." Thesis, University of Dundee, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.385803.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Harding, Peter Reginald George. "Gesture recognition by Fourier analysis techniques." Thesis, City University London, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.440735.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

GOMES, MARIA LUZIA DE CERQUEIRA. "OBJECT AND GESTURE - NA INTERDISCIPLYNARY LOOK." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2008. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=13140@1.

Full text
Abstract:
CONSELHO NACIONAL DE DESENVOLVIMENTO CIENTÍFICO E TECNOLÓGICO
This project is a research proposal on the interaction between man and object through the elementary perception found in this inter-world: the gesture. The manifestation of this human action - either spontaneous or technical - is the goal of this narrative, which aims to nourish the human relations present in people's lived experiences through awareness of their extensions, that is, their gestures and objects.
APA, Harvard, Vancouver, ISO, and other styles
44

Tanguay, Donald O. (Donald Ovila). "Hidden Markov models for gesture recognition." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/37796.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1995.
Includes bibliographical references (p. 41-42).
by Donald O. Tanguay, Jr.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
45

Wilson, Andrew David. "Learning visual behavior for gesture analysis." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/62924.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Yao, Yi. "Hand gesture recognition in uncontrolled environments." Thesis, University of Warwick, 2014. http://wrap.warwick.ac.uk/74268/.

Full text
Abstract:
Human Computer Interaction has been relying on mechanical devices to feed information into computers with low efficiency for a long time. With the recent developments in image processing and machine learning methods, the computer vision community is ready to develop the next generation of Human Computer Interaction methods, including Hand Gesture Recognition methods. A comprehensive Hand Gesture Recognition based semantic level Human Computer Interaction framework for uncontrolled environments is proposed in this thesis. The framework contains novel methods for Hand Posture Recognition, Hand Gesture Recognition and Hand Gesture Spotting. The Hand Posture Recognition method in the proposed framework is capable of recognising predefined still hand postures from cluttered backgrounds. Texture features are used in conjunction with Adaptive Boosting to form a novel feature selection scheme, which can effectively detect and select discriminative texture features from the training samples of the posture classes. A novel Hand Tracking method called Adaptive SURF Tracking is proposed in this thesis. Texture key points are used to track multiple hand candidates in the scene. This tracking method matches texture key points of hand candidates within adjacent frames to calculate the movement directions of hand candidates. With the gesture trajectories provided by the Adaptive SURF Tracking method, a novel classifier called Partition Matrix is introduced to perform gesture classification for uncontrolled environments with multiple hand candidates. The trajectories of all hand candidates extracted from the original video under different frame rates are used to analyse the movements of hand candidates. An alternative gesture classifier based on Convolutional Neural Network is also proposed. The input images of the Neural Network are approximate trajectory images reconstructed from the tracking results of the Adaptive SURF Tracking method.
For Hand Gesture Spotting, a forward spotting scheme is introduced to detect the starting and ending points of the predefined gestures in the continuously signed gesture videos. A Non-Sign Model is also proposed to simulate meaningless hand movements between the meaningful gestures. The proposed framework can perform well with unconstrained scene settings, including frontal occlusions, background distractions and changing lighting conditions. Moreover, it is invariant to changing scales, speed and locations of the gesture trajectories.
APA, Harvard, Vancouver, ISO, and other styles
47

Gonçalves, Duarte Nuno de Jesus. "Gesture based interface for image annotation." Master's thesis, Faculdade de Ciências e Tecnologia, 2008. http://hdl.handle.net/10362/7828.

Full text
Abstract:
Dissertation presented to obtain the degree of Master in Computer Engineering from the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia
Given the complexity of visual information, multimedia content search presents more problems than textual search. This complexity is tied to the difficulty of automatic image and video tagging, i.e., describing content with a set of keywords. Generally, this annotation is performed manually (e.g., Google Image) and the search is based on pre-defined keywords. However, this task is time-consuming and can be dull. The objective of this dissertation project is to define and implement a game for annotating personal digital photos with a semi-automatic system: the game engine tags images automatically, and the player's role is to contribute correct annotations. The application is composed of the following main modules: a module for automatic image annotation, a module that manages the game's graphical interface (showing images and tags), a module for the game engine, and a module for human interaction. Interaction is performed with a pre-defined set of gestures, using a web camera; these gestures are detected using computer vision techniques and interpreted as user actions. The dissertation also presents a detailed analysis of this application, its computational modules and design, as well as a series of usability tests.
APA, Harvard, Vancouver, ISO, and other styles
48

Glatt, Ruben. "Deep learning architecture for gesture recognition /." Guaratinguetá, 2014. http://hdl.handle.net/11449/115718.

Full text
Abstract:
Advisor: José Celso Freire Junior
Co-advisor: Daniel Julien Barros da Silva Sampaio
Committee member: Galeno José de Sena
Committee member: Luiz de Siqueira Martins Filho
Abstract: Activity recognition from computer vision plays an important role in research towards applications like human-computer interfaces, intelligent environments, surveillance or medical systems. In this work, a gesture recognition system based on a deep learning architecture is proposed. It is used to analyze the performance when trained with multi-modal input data on an Italian sign language dataset. The underlying research area is a field called human-machine interaction. It combines research on natural user interfaces, gesture and activity recognition, machine learning and sensor technologies, which are used to capture the environmental input for further processing. Those areas are introduced and the basic concepts are described. The development environment for preprocessing data and programming machine learning algorithms with Python is described and the main libraries are discussed. The gathering of the multi-modal data streams is explained and the dataset used is outlined. The proposed learning architecture consists of two steps: the preprocessing of the input data and the actual learning architecture. The preprocessing is limited to three different strategies, which are combined to offer six different preprocessing profiles. In the second step, a Deep Belief Network is introduced and its components are explained. With this setup, 294 experiments are conducted with varying configuration settings. The variables that are altered are the preprocessing settings, the layer structure of the model, the pretraining learning rate and the fine-tuning learning rate. The evaluation of these experiments shows that the approach of using a deep learning architecture on an activity or gesture recognition task yields acceptable results, but has not yet reached a level of maturity which would allow the developed models to be used in serious applications.
Master's
APA, Harvard, Vancouver, ISO, and other styles
49

Caceres, Carlos Antonio. "Machine Learning Techniques for Gesture Recognition." Thesis, Virginia Tech, 2014. http://hdl.handle.net/10919/52556.

Full text
Abstract:
Classification of human movement is a large field of interest to Human-Machine Interface researchers. The reason for this lies in the large emphasis humans place on gestures while communicating with each other and while interacting with machines. Such gestures can be digitized in a number of ways, including both passive methods, such as cameras, and active methods, such as wearable sensors. While passive methods might be ideal, they are not always feasible, especially in unstructured environments. Instead, wearable sensors have gained interest as a method of gesture classification, especially for the upper limbs. Lower arm movements are made up of a combination of multiple electrical signals known as Motor Unit Action Potentials (MUAPs). These signals can be recorded from surface electrodes placed on the skin and used for prosthetic control, sign language recognition, human-machine interfaces, and a myriad of other applications. In order to move a step closer to these goal applications, this thesis compares three different machine learning tools, Hidden Markov Models (HMMs), Support Vector Machines (SVMs), and Dynamic Time Warping (DTW), for recognizing a number of different gesture classes. It further contrasts the applicability of these tools to noisy data in the form of the Ninapro dataset, a benchmarking tool put forth by a conglomerate of universities. Using this dataset as a basis, this work paves a path for the analysis required to optimize each of the three classifiers. Ultimately, care is taken to compare the three classifiers for their utility against noisy data, and a comparison is made against classification results put forth by other researchers in the field. The outcome of this work is 90+% recognition of individual gestures from the Ninapro dataset using two of the three distinct classifiers. Comparison against previous works by other researchers shows these results to outperform all others reported thus far.
Through further work with these tools, an end user might control a robotic or prosthetic arm, or translate sign language, or perhaps simply interact with a computer.
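Of the three classifiers this thesis compares, Dynamic Time Warping is the simplest to sketch. The following is a generic textbook DTW distance paired with nearest-template classification, not the thesis's tuned implementation; sequence contents and template labels are illustrative:

```python
from math import inf

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1-D sequences.

    Classic O(len(a) * len(b)) dynamic programme: cost[i][j] is the
    best alignment cost of a[:i] against b[:j].
    """
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

def classify(sample, templates):
    """Nearest-template classification: label of the closest exemplar."""
    return min(templates, key=lambda label: dtw_distance(sample, templates[label]))
```

For multi-channel surface-EMG sequences like Ninapro's, the per-step distance would be a vector norm rather than a scalar absolute difference, and each class would typically keep several templates.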
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
50

Pfister, Tomas. "Advancing human pose and gesture recognition." Thesis, University of Oxford, 2015. http://ora.ox.ac.uk/objects/uuid:64e5b1be-231e-49ed-b385-e87db6dbeed8.

Full text
Abstract:
This thesis presents new methods in two closely related areas of computer vision: human pose estimation, and gesture recognition in videos. In human pose estimation, we show that random forests can be used to estimate human pose in monocular videos. To this end, we propose a co-segmentation algorithm for segmenting humans out of videos, and an evaluator that predicts whether the estimated poses are correct or not. We further extend this pose estimator to new domains (with a transfer learning approach), and enhance its predictions by predicting the joint positions sequentially (rather than independently) in an image, and using temporal information in the videos (rather than predicting the poses from a single frame). Finally, we go beyond random forests, and show that convolutional neural networks can be used to estimate human pose even more accurately and efficiently. We propose two new convolutional neural network architectures, and show how optical flow can be employed in convolutional nets to further improve the predictions. In gesture recognition, we explore the idea of using weak supervision to learn gestures. We show that we can learn sign language automatically from signed TV broadcasts with subtitles by letting algorithms 'watch' the TV broadcasts and 'match' the signs with the subtitles. We further show that if even a small amount of strong supervision is available (as there is for sign language, in the form of sign language video dictionaries), this strong supervision can be combined with weak supervision to learn even better models.
APA, Harvard, Vancouver, ISO, and other styles