Dissertations / Theses on the topic 'Gesture'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Gesture.'
Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Lindberg, Martin. "Introducing Gestures: Exploring Feedforward in Touch-Gesture Interfaces." Thesis, Malmö universitet, Fakulteten för kultur och samhälle (KS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-23555.
Campbell, Lee Winston. "Visual classification of co-verbal gestures for gesture understanding." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/8707.
Includes bibliographical references (leaves 86-92).
A person's communicative intent can be better understood, by either a human or a machine, if the person's gestures are understood. This thesis project demonstrates an expansion of both the range of co-verbal gestures a machine can identify and the range of communicative intents the machine can infer. We develop an automatic system that uses real-time video as sensory input and then segments, classifies, and responds in real time to co-verbal gestures made by users as they converse with a synthetic character known as REA, which is being developed in parallel by Justine Cassell and her students at the MIT Media Lab. A set of 670 natural gestures, videotaped and visually tracked in the course of conversational interviews and then hand segmented and annotated according to a widely used gesture classification scheme, is used in an offline training process that trains Hidden Markov Model classifiers. A number of feature sets are extracted and tested in the offline training process, and the best performer is employed in an online HMM segmenter and classifier that requires no encumbering attachments to the user. Modifications made to the REA system enable REA to respond to the user's beat and deictic gestures as well as turn-taking requests the user may convey in gesture.
(cont.) The recognition results obtained are far above chance, but too low for use in a production recognition system. The results provide a measure of validity for the gesture categories chosen, and they provide positive evidence for an appealing but difficult to prove proposition: to the extent that a machine can recognize and use these categories of gestures to infer information not present in the words spoken, there is exploitable complementary information in the gesture stream.
by Lee Winston Campbell.
Ph.D.
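Campbell's system trains Hidden Markov Model classifiers offline and, at run time, picks the gesture class whose model best explains the observed feature sequence. A toy sketch of that decision rule over quantised motion symbols (all model parameters below are invented for illustration, not taken from the thesis):

```python
def forward_likelihood(obs, start, trans, emit):
    """Likelihood of a symbol sequence under a discrete HMM (forward
    algorithm; plain probabilities are fine for sequences this short)."""
    states = range(len(start))
    alpha = [start[s] * emit[s][obs[0]] for s in states]
    for o in obs[1:]:
        alpha = [emit[s][o] * sum(alpha[p] * trans[p][s] for p in states)
                 for s in states]
    return sum(alpha)

# Invented 2-state models over quantised motion symbols:
# 0 = still, 1 = small repetitive stroke, 2 = large outward stroke.
MODELS = {
    "beat":    ([0.7, 0.3], [[0.6, 0.4], [0.4, 0.6]],
                [[0.2, 0.7, 0.1], [0.5, 0.4, 0.1]]),
    "deictic": ([0.5, 0.5], [[0.7, 0.3], [0.2, 0.8]],
                [[0.3, 0.1, 0.6], [0.6, 0.1, 0.3]]),
}

def classify(obs):
    """Pick the gesture class whose HMM explains the sequence best."""
    return max(MODELS, key=lambda name: forward_likelihood(obs, *MODELS[name]))

print(classify([1, 1, 0, 1, 1]))  # repetitive small strokes -> beat
print(classify([2, 2, 0, 2]))     # large outward strokes    -> deictic
```

In a real pipeline the symbols would be feature vectors extracted from tracked video, and the model parameters would come from the offline training step rather than being written by hand.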
Smith, Jason Alan. "Naturalistic skeletal gesture movement and rendered gesture decoding." Diss., Online access via UMI:, 2006.
Davis, James W. "Gesture recognition." Honors in the Major Thesis, University of Central Florida, 1994. http://digital.library.ucf.edu/cdm/ref/collection/ETH/id/126.
Bachelors
Arts and Sciences
Computer Science
Yunus, Fajrian. "Prediction of Gesture Timing and Study About Image Schema for Metaphoric Gestures." Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS551.
Communicative gestures and speech are tightly linked. We want to automatically predict gestures based on the speech. The speech itself has two constituents, namely the acoustics and the content of the speech (i.e. the text). In one part of this dissertation, we develop a model based on a recurrent neural network with an attention mechanism to predict gesture timing, that is, when a gesture should happen and what kind of gesture it should be. We use a sequence comparison technique to evaluate the model's performance. We also perform a subjective study to measure how our respondents judge the naturalness, the time consistency, and the semantic consistency of the generated gestures. In another part of the dissertation, we deal with the generation of metaphoric gestures. Metaphoric gestures carry meaning, and thus it is necessary to extract the relevant semantics from the content of the speech. This is done by using the concept of image schema, as demonstrated by Ravenet et al. However, to be able to use image schemas in machine learning techniques, they have to be converted into vectors of real numbers. Therefore, we investigate how image schemas can be transformed into vectors by using word embedding techniques. Lastly, we investigate how to represent hand gesture shapes. The representation has to be compact enough, yet also broad enough to cover a sufficient range of shapes and the semantics they can represent.
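The image-schema-to-vector step can be sketched by averaging the embeddings of a schema's trigger words. The tiny hand-made table below merely stands in for real pretrained vectors (e.g. word2vec or GloVe), and the schema and word choices are illustrative only, not taken from the dissertation:

```python
# Hand-made 4-D "embeddings" standing in for pretrained word vectors.
EMBED = {
    "up": [0.9, 0.1, 0.0, 0.2], "rise": [0.8, 0.2, 0.1, 0.1],
    "grow": [0.7, 0.3, 0.2, 0.2], "in": [0.1, 0.9, 0.1, 0.0],
    "inside": [0.2, 0.8, 0.2, 0.1], "contain": [0.1, 0.7, 0.3, 0.1],
}

def schema_vector(trigger_words):
    """Represent an image schema as the mean of its trigger words' vectors."""
    vecs = [EMBED[w] for w in trigger_words if w in EMBED]
    dim = len(vecs[0])
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

verticality = schema_vector(["up", "rise", "grow"])       # VERTICALITY schema
containment = schema_vector(["in", "inside", "contain"])  # CONTAINMENT schema
```

The resulting fixed-length vectors can then be fed to a downstream network in place of the symbolic schema labels.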
Cheng, You-Chi. "Robust gesture recognition." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/53492.
Cometti, Jean Pierre. "The architect's gesture." Pontificia Universidad Católica del Perú - Departamento de Humanidades, 2012. http://repositorio.pucp.edu.pe/index/handle/123456789/112899.
Kaâniche, Mohamed Bécha. "Human gesture recognition." Nice, 2009. http://www.theses.fr/2009NICE4032.
In this thesis, we aim to recognize gestures (e.g. hand raising) and, more generally, short actions (e.g. falling, bending) performed by an individual. Many techniques have already been proposed for gesture recognition in specific environments (e.g. a laboratory) using the cooperation of several sensors (e.g. a camera network, an individual equipped with markers). Despite these strong hypotheses, gesture recognition is still brittle and often depends on the position of the individual relative to the cameras. We propose to relax these hypotheses in order to conceive a general algorithm enabling the recognition of the gestures of an individual evolving in an unconstrained environment and observed through a limited number of cameras. The goal is to estimate the likelihood of gesture recognition as a function of the observation conditions. Our method consists of classifying a set of gestures by learning motion descriptors. These motion descriptors are local signatures of the motion of corner points, which are associated with their local textural description. We demonstrate the effectiveness of our motion descriptors by recognizing the actions of the public KTH database.
Alon, Jonathan. "Spatiotemporal Gesture Segmentation." Boston University Computer Science Department, 2006. https://hdl.handle.net/2144/1884.
Macleod, Tracy. "Gesture signs in social interaction : how group size influences gesture communication." Thesis, University of Glasgow, 2009. http://theses.gla.ac.uk/1205/.
Bodiroža, Saša [Verfasser], Verena V. [Gutachter] Hafner, Yael [Gutachter] Edan, and Bruno [Gutachter] Lara. "Gestures in human-robot interaction : development of intuitive gesture vocabularies and robust gesture recognition / Saša Bodiroža ; Gutachter: Verena V. Hafner, Yael Edan, Bruno Lara." Berlin : Mathematisch-Naturwissenschaftliche Fakultät, 2017. http://d-nb.info/1126553557/34.
Arvidsson, Carina. "Tal och gesters samverkan i undervisningen : En empirisk studie på lågstadiet." Thesis, Linnéuniversitetet, Institutionen för svenska språket (SV), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-96181.
Kuhlman, Lane M. "Gesture Mapping for Interaction Design: An Investigative Process for Developing Interactive Gesture Libraries." The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1244003264.
Semprini, Mattia. "Gesture Recognition: una panoramica." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/15672/.
Gingir, Emrah. "Hand Gesture Recognition System." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612532/index.pdf.
Metais, Thierry. "A dynamic gesture interface." Thesis, University of Ottawa (Canada), 2005. http://hdl.handle.net/10393/26983.
Easton, Beth Louise. "Gesture of the book." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0018/MQ49744.pdf.
Kang, Angela. "Chinoiserie as musical gesture." Thesis, The University of Hong Kong, 2005. http://sunzi.lib.hku.hk/hkuto/record/B40040240.
Rajagopal, Manoj Kumar. "Cloning with gesture expressivity." Phd thesis, Institut National des Télécommunications, 2012. http://tel.archives-ouvertes.fr/tel-00719301.
Kozel, Susan. "As Vision Becomes Gesture." Thesis, University of Essex, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.506150.
Dang, Darren Phi Bang. "Template based gesture recognition." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/41404.
Includes bibliographical references (p. 65-66).
by Darren Phi Bang Dang.
M.S.
Harrison, Simon Mark. "Grammar, gesture and cognition." Bordeaux 3, 2009. http://www.theses.fr/2009BOR30071.
In this thesis, I examine the way English speakers gesture when they negate. I identify nine gestures of negation and analyse their forms, their relation to grammatical negation, and their organisation with regard to speech. Drawing examples from an audiovisual corpus, I demonstrate that gesture plays a role in negative constructs, such as the node and scope of negation, inherent negation, and cumulative negation, and show how these gestures also exhibit the universal tendencies to express negation early and frequently in negative sentences. I argue that discourse context and type of grammatical negation combine to determine which gestures speakers use and how they use them, establishing arguments toward a multimodal grammar. I accompany this analysis with a methodology for collecting, transcribing, and analysing multimodal data, which in a final chapter I apply to areas of the linguistic system other than negation, namely progressivity, epistemic modality, and focus operations. This thesis offers an in-depth multimodal analysis of grammatical notions in English, especially negation, and establishes a link between grammar, gesture, and cognition.
Nilsson, Rebecca, and de Val Almida Winquist. "Hand Gesture Controlled Wheelchair." Thesis, KTH, Skolan för industriell teknik och management (ITM), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-264512.
Haptic control is a rapidly developing technology that is being incorporated into many of today's products, in everything from VR games to vehicle control. In the same way, this technology could assist people with motor impairments by offering wheelchair control through hand movements. The purpose of this project was therefore to investigate whether a wheelchair can be controlled by hand gestures and, if so, which approach is optimal. To answer this question, a small-scale wheelchair prototype was built. It is based on a microcontroller, an Arduino, driven by a sensor, an IMU, that measures the angle of the user's hand; with these, the motors can be controlled and the wheelchair manoeuvred. The project resulted in a proposal for how hand movements can steer the wheelchair forwards, backwards, left, and right at constant speed, and bring it to a stop. The prototype follows the gestures of the user's hand but reacts more slowly than would be desirable in practice. Although many development opportunities remain for haptic wheelchair control, this work shows great potential in implementing hand-gesture control in a real wheelchair.
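The thesis above maps IMU-measured hand angles to discrete wheelchair motion commands. A minimal sketch of such a mapping, with hypothetical thresholds, axis conventions, and command names (none taken from the thesis):

```python
def command_from_hand_angle(pitch_deg, roll_deg, dead_zone=15.0):
    """Map hand tilt measured by an IMU to a drive command.
    Tilts inside the dead zone mean 'stop'; the dominant axis wins."""
    if abs(pitch_deg) <= dead_zone and abs(roll_deg) <= dead_zone:
        return "STOP"
    if abs(pitch_deg) >= abs(roll_deg):
        return "FORWARD" if pitch_deg < 0 else "BACKWARD"
    return "RIGHT" if roll_deg > 0 else "LEFT"

# Tilting the hand nose-down past the dead zone drives the chair forward.
print(command_from_hand_angle(-30.0, 5.0))  # -> FORWARD
```

A dead zone like this is the usual way to keep sensor noise and small involuntary hand movements from producing spurious motion.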
Sulaiman, Amil, and Erik Janerdal. "Gesture controlled robot hand." Thesis, Högskolan i Halmstad, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-44940.
This project was chosen by the students themselves, and the idea behind the proposed system was to offer an alternative method for controlling robots in remote locations and harsh environments using computer vision. The aim of the project was to introduce a more natural form of human-computer interaction and to move people out of hazardous environments while still allowing them to carry out their work tasks. The components used in the project consist of a mechanical hand, a Raspberry Pi, a Raspberry Pi Camera Module v2, servos, a servo driver board, and a Raspberry Pi display. The code is written in C++, using the OpenCV library for image analysis and wiringPi for servo control. The image processing is divided into four parts: determining the region in which the hand is located, locating the fingertips and the centre of the palm, computing distances to detect the user's movements, and finally controlling the servos. With respect to the requirement specification and project goals, the project resulted in a successfully working system with some limitations. The proposed system depends, however, on the binary mask created in one of the first steps of the image-processing pipeline, and the results show that the creation of this binary mask is strongly dependent on the lighting conditions of the scene. There is still room for further improvements to the image processing and for alternative methods to achieve better results.
Mann, Pamela Florence. "Meaning, gesture and Gauguin." Thesis, Queensland University of Technology, 1998. https://eprints.qut.edu.au/35891/1/35891_Mann_1998.pdf.
Kuhlman, Lane Marie. "Gesture mapping for interaction design an investigative process for developing interactive gesture libraries /." Columbus, Ohio : Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1244003264.
Duvillard, Jean. ""L'introspection gestuée" - La place des gestes et micro-gestes professionnels dans la formation initiale et continue des métiers de l'enseignement." Thesis, Lyon 1, 2014. http://www.theses.fr/2014LYO10191/document.
For the last thirty years, research and the scientific literature on education have focused on the importance of analysing best practices in teacher training, from the point of view of both teachers and trainers. Today the subject of professional gestures has become one of the most important issues in effective teacher training. This study draws on varied and complementary theoretical approaches, such as anthropology, semiotics, and cognitive ergonomics, and our research aims at identifying the vital "professional micro-gestures" that are to be put into practice. It tries to measure and assess the importance of the awareness of these micro-gestures, referred to as "introspection on gestures", in the appropriation and implementation of professional gestures by both novice and expert teachers in different subjects at primary and secondary levels. Most of the difficulties that teachers have to deal with are due to the fact that they do not control certain "action micro-gestures" in their communication, both didactic and educational. From two professional gestures, 'Observing' and 'Acting', we have highlighted five micro-gestures constantly interacting with the protagonists of the classroom: posture, voice, eye contact, speech, and use of space and movement. The method we used to collect data is based on recordings of professional situations followed by self-assessment interviews and feedback. This qualitative approach deals with the analysis of both the work and the speech (verbal and non-verbal).
Parra, González Luis Otto. "gestUI: a model-driven method for including gesture-based interaction in user interfaces." Doctoral thesis, Universitat Politècnica de València, 2017. http://hdl.handle.net/10251/89090.
The research reported and discussed in this thesis presents a new method for defining custom gestures and for including gesture-based interaction in the user interfaces of software systems, with the aim of addressing the problems identified in the related literature on developing gesture-based user interfaces. The research was carried out according to the Design Science methodology, which is based on the design and investigation of artefacts in a context. In this thesis, the new artefact is the model-driven method for including gesture-based interaction in user interfaces. The methodology involves two cycles: the main cycle, called the engineering cycle, in which a model-driven method for including gesture-based interaction is designed; and the research cycle, of which two instances are defined. The first corresponds to the validation of the proposed method with an empirical evaluation, and the second to a Technical Action Research study validating the method in an industrial context. Design Science also provides guidance on how to conduct the research, how to be rigorous, and how to put scientific rules into practice, and it has been a key resource for organising the research in this thesis and reporting our findings clearly. The thesis presents a theoretical framework introducing the concepts related to the research, followed by a state of the art covering related work in three areas: Human-Computer Interaction, the model-driven paradigm in Human-Computer Interaction, and Empirical Software Engineering.
The design and implementation of gestUI are presented following the model-driven paradigm and the Model-View-Controller design pattern. We then carried out two evaluations of gestUI: (i) an empirical evaluation based on ISO 25062-2006 to assess usability in terms of effectiveness, efficiency, and satisfaction, where satisfaction is measured through perceived ease of use, perceived usefulness, and intention to use; and (ii) a Technical Action Research study to evaluate user experience and usability. We used the Model Evaluation Method, the User Experience Questionnaire, and Microsoft Reaction Cards as guides for these evaluations. The contributions of the thesis, the limitations of the method and its supporting tool, and future work are also discussed.
Parra González, LO. (2017). gestUI: a model-driven method for including gesture-based interaction in user interfaces [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/89090
Lascarides, Alex, and Matthew Stone. "Formal semantics for iconic gesture." Universität Potsdam, 2006. http://opus.kobv.de/ubp/volltexte/2006/1033/.
Stendahl, Jonas, and Johan Arnör. "Gesture Keyboard Using Machine Learning." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-157141.
The market for mobile devices is expanding rapidly. Text entry is an important part of using such products, so an input method that is both convenient and fast is of great interest. A gesture keyboard lets the user type by sliding a finger over the letters of the desired word. This study investigates whether gesture keyboards can be improved using machine learning. A keyboard using a Multilayer Perceptron with backpropagation was developed and evaluated. The results show that the examined implementation is not an optimal solution to the problem of recognising words entered by gestures.
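The model class evaluated above, a Multilayer Perceptron trained with backpropagation, can be sketched in a few dozen lines. The data here are invented 2-D trace features, not real keyboard gestures, and the network sizes and learning rate are arbitrary:

```python
import math
import random

def train_mlp(samples, hidden=6, epochs=2000, lr=0.5, seed=1):
    """One-hidden-layer perceptron trained with plain backpropagation."""
    random.seed(seed)
    n_in = len(samples[0][0])
    w1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(hidden)]
    w2 = [random.uniform(-1, 1) for _ in range(hidden)]
    sig = lambda x: 1.0 / (1.0 + math.exp(-x))
    for _ in range(epochs):
        for x, y in samples:
            h = [sig(sum(w * xi for w, xi in zip(row, x))) for row in w1]
            o = sig(sum(w * hi for w, hi in zip(w2, h)))
            delta_o = (o - y) * o * (1 - o)      # output-layer error term
            for j in range(hidden):              # propagate the error backwards
                delta_h = delta_o * w2[j] * h[j] * (1 - h[j])
                w2[j] -= lr * delta_o * h[j]
                for i in range(n_in):
                    w1[j][i] -= lr * delta_h * x[i]
    def predict(x):
        h = [sig(sum(w * xi for w, xi in zip(row, x))) for row in w1]
        return sig(sum(w * hi for w, hi in zip(w2, h)))
    return predict

# Invented features (normalised trace length, curvature): 0 = short word, 1 = long word.
data = [([0.1, 0.2], 0), ([0.2, 0.1], 0), ([0.9, 0.8], 1), ([0.8, 0.9], 1)]
model = train_mlp(data)
```

A production gesture keyboard would of course use far richer features of the swipe path and a vocabulary-sized output layer; this sketch only shows the training mechanics.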
Eisenstein, Jacob (Jacob Richard). "Gesture in automatic discourse processing." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/44401.
Includes bibliographical references (p. 145-153).
Computers cannot fully understand spoken language without access to the wide range of modalities that accompany speech. This thesis addresses the particularly expressive modality of hand gesture, and focuses on building structured statistical models at the intersection of speech, vision, and meaning. My approach is distinguished in two key respects. First, gestural patterns are leveraged to discover parallel structures in the meaning of the associated speech. This differs from prior work that attempted to interpret individual gestures directly, an approach that was prone to a lack of generality across speakers. Second, I present novel, structured statistical models for multimodal language processing, which enable learning about gesture in its linguistic context, rather than in the abstract. These ideas find successful application in a variety of language processing tasks: resolving ambiguous noun phrases, segmenting speech into topics, and producing keyframe summaries of spoken language. In all three cases, the addition of gestural features - extracted automatically from video - yields significantly improved performance over a state-of-the-art text-only alternative. This marks the first demonstration that hand gesture improves automatic discourse processing.
by Jacob Eisenstein.
Ph.D.
Wang, Lei. "Personalized Dynamic Hand Gesture Recognition." Thesis, KTH, Medieteknik och interaktionsdesign, MID, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-231345.
Human gestures, with their spatial and temporal variations, are hard to recognise with a generic model or classification method. To address this problem, personalised dynamic gesture recognition approaches are proposed, based on Dynamic Time Warping (DTW) and a new concept, the Subject Relation Network, which describes similarities in how subjects perform dynamic gestures and offers a new perspective on gesture recognition. By clustering or ranking training subjects based on the network, two personalisation algorithms are proposed for generative and discriminative models. In addition, three baseline recognition methods, DTW-based template matching, Hidden Markov Models (HMM), and Fisher Vector classification, are compared and integrated into the proposed personalised gesture recognition approach. The proposed approaches are evaluated on the challenging dynamic hand gesture dataset DHG14/28, which contains the depth images and skeleton coordinates returned by an Intel RealSense depth camera. Experimental results show that the proposed personalised algorithms improve performance compared with the baseline generative and discriminative models, achieving a best accuracy of 86.2%.
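One of the baselines above, DTW-based template matching, compares a query sequence against stored per-gesture templates and returns the nearest label. A minimal sketch with invented 1-D sequences (real inputs would be skeleton-joint trajectories):

```python
def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1-D sequences."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping moves
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

def nearest_template(query, templates):
    """1-nearest-neighbour gesture label under DTW distance."""
    return min(templates, key=lambda label: dtw_distance(query, templates[label]))

# Invented 1-D 'motion energy' templates; labels are illustrative only.
templates = {"wave": [0, 1, 0, 1, 0], "push": [0, 2, 4, 4, 4]}
print(nearest_template([0, 1, 1, 0, 1, 0], templates))  # -> wave
```

Because DTW aligns sequences elastically in time, the same gesture performed faster or slower still matches its template, which is exactly the property that makes it a natural baseline for personalised gesture data.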
Normelius, Anton, and Karl Beckman. "Hand Gesture Controlled Omnidirectional Vehicle." Thesis, KTH, Skolan för industriell teknik och management (ITM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279822.
The purpose of this project was to study how hand-gesture control can be implemented on a vehicle that uses mecanum wheels to move in any direction, and how such a vehicle can be controlled wirelessly for greater mobility. A prototype vehicle with four mecanum wheels was built. Mecanum wheels enable translation in every direction: by varying the direction of rotation of each motor, the direction of the resulting force on the vehicle changes, allowing it to move as desired. Hand control was made possible by building a second prototype, attached to the hand, consisting of an IMU and a transceiver. With the IMU, the angle of the hand relative to the horizontal plane can be computed, and instructions can be sent to the vehicle via the transceiver. These instructions contain a short message specifying the direction in which the vehicle should move. The results show that wireless hand control of a vehicle works without noticeable delay in signal transmission and that the signals sent to the vehicle contain correct movement instructions.
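The omnidirectional motion described above follows from standard mecanum inverse kinematics: each commanded body velocity maps to a signed mix of the four wheel speeds. A sketch of that mixing (sign conventions vary between platforms; these follow one common choice and are not taken from the thesis):

```python
def mecanum_wheel_speeds(vx, vy, omega):
    """Inverse kinematics of a four-wheel mecanum platform:
    vx = forward speed, vy = leftward strafe, omega = yaw rate.
    Returns (front_left, front_right, rear_left, rear_right)."""
    front_left  = vx - vy - omega
    front_right = vx + vy + omega
    rear_left   = vx + vy - omega
    rear_right  = vx - vy + omega
    return front_left, front_right, rear_left, rear_right

# Pure forward motion spins all wheels equally; strafing mixes the signs.
print(mecanum_wheel_speeds(1.0, 0.0, 0.0))  # -> (1.0, 1.0, 1.0, 1.0)
```

On real hardware the four values would also be scaled by the wheel radius and clamped to the motors' speed range before being sent to the drivers.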
Espinoza, Victor. "Gesture Recognition in Tennis Biomechanics." Master's thesis, Temple University Libraries, 2018. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/530096.
M.S.E.E.
The purpose of this study is to create a gesture recognition system that interprets motion capture data of a tennis player to determine which biomechanical aspects of a tennis swing best correlate to swing efficacy. For the learning set, this work aimed to record 50 tennis athletes of similar competency with the Microsoft Kinect performing standard tennis swings in the presence of different targets. From the acquired data we extracted biomechanical features that hypothetically correlated to ball trajectory under proper technique and tested them as sequential inputs to our designed classifiers. This work implements deep learning algorithms as variable-length sequence classifiers, recurrent neural networks (RNN), to predict tennis ball trajectory. In an attempt to learn temporal dependencies within a tennis swing, we implemented gate-augmented RNNs. This study compared the RNN to two gated models: gated recurrent units (GRU) and long short-term memory (LSTM) units. We observed similar classification performance across models, while the gated methods reached convergence twice as fast as the baseline RNN. The results displayed 1.2 entropy loss and 50% classification accuracy, indicating that the hypothesized biomechanical features were only loosely correlated to swing efficacy or were not accurately captured by the sensor.
Temple University--Theses
Scoble, Joselynne. "Stuttering blocks the flow of speech and gesture : the speech-gesture relationship in chronic stutterers." Thesis, McGill University, 1993. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=69730.
Nygård, Espen Solberg. "Multi-touch Interaction with Gesture Recognition." Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2010. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9126.
This master's thesis explores the world of multi-touch interaction with gesture recognition. The focus is on camera based multi-touch techniques, as these provide a new dimension to multi-touch with its ability to recognize objects. During the project, a multi-touch table based on the technology Diffused Surface Illumination has been built. In addition to building a table, a complete gesture recognition system has been implemented, and different gesture recognition algorithms have been successfully tested in a multi-touch environment. The goal with this table, and the accompanying gesture recognition system, is to create an open and affordable multi-touch solution, with the purpose of bringing multi-touch out to the masses. By doing this, more people will be able to enjoy the benefits of a more natural interaction with computers. In a larger perspective, multi-touch is just the beginning, and by adding additional modalities to our applications, such as speech recognition and full body tracking, a whole new level of computer interaction will be possible.
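Template-style stroke recognisers of the kind commonly tested on such multi-touch tables usually begin by resampling each stroke to a fixed number of evenly spaced points, so that strokes drawn at different speeds become directly comparable. A sketch of that preprocessing step (the point counts are arbitrary, and this is not code from the thesis):

```python
import math

def resample(points, n=16):
    """Resample a 2-D stroke to n evenly spaced points, as done in
    '$1'-style template recognisers."""
    pts = [list(p) for p in points]
    step = sum(math.dist(pts[i - 1], pts[i]) for i in range(1, len(pts))) / (n - 1)
    out, acc, i = [pts[0]], 0.0, 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if acc + d >= step and d > 0:
            t = (step - acc) / d  # interpolate the next evenly spaced point
            q = [pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1])]
            out.append(q)
            pts.insert(i, q)      # continue measuring from the new point
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(out) < n:           # guard against floating-point shortfall
        out.append(list(pts[-1]))
    return out[:n]
```

After resampling, two strokes can be compared point-by-point (typically after also normalising for translation, scale, and rotation), which is what makes simple nearest-template matching workable on touch input.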
Jannedy, Stefanie, and Norma Mendoza-Denton. "Structuring information through gesture and intonation." Universität Potsdam, 2005. http://opus.kobv.de/ubp/volltexte/2006/877/.
We explore the role of gesture and intonation in structuring and aligning information in spoken discourse through a study of the co-occurrence of pitch accents and gestural apices.
Metaphorical spatialization through gesture also plays a role in conveying the contextual relationships between the speaker, the government and other external forces in a naturally-occurring political speech setting.
Khan, Muhammad. "Hand Gesture Detection & Recognition System." Thesis, Högskolan Dalarna, Datateknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:du-6496.
Glatt, Ruben [UNESP]. "Deep learning architecture for gesture recognition." Universidade Estadual Paulista (UNESP), 2014. http://hdl.handle.net/11449/115718.
Full textO reconhecimento de atividade de visão de computador desempenha um papel importante na investigação para aplicações como interfaces humanas de computador, ambientes inteligentes, vigilância ou sistemas médicos. Neste trabalho, é proposto um sistema de reconhecimento de gestos com base em uma arquitetura de aprendizagem profunda. Ele é usado para analisar o desempenho quando treinado com os dados de entrada multi-modais em um conjunto de dados de linguagem de sinais italiana. A área de pesquisa subjacente é um campo chamado interação homem-máquina. Ele combina a pesquisa sobre interfaces naturais, reconhecimento de gestos e de atividade, aprendizagem de máquina e tecnologias de sensores que são usados para capturar a entrada do meio ambiente para processamento posterior. Essas áreas são introduzidas e os conceitos básicos são descritos. O ambiente de desenvolvimento para o pré-processamento de dados e algoritmos de aprendizagem de máquina programada em Python é descrito e as principais bibliotecas são discutidas. A coleta dos fluxos de dados é explicada e é descrito o conjunto de dados utilizado. A arquitetura proposta de aprendizagem consiste em dois passos. O pré-processamento dos dados de entrada e a arquitetura de aprendizagem. O pré-processamento é limitado a três estratégias diferentes, que são combinadas para oferecer seis diferentes perfis de préprocessamento. No segundo passo, um Deep Belief Network é introduzido e os seus componentes são explicados. Com esta definição, 294 experimentos são realizados com diferentes configurações. As variáveis que são alteradas são as definições de pré-processamento, a estrutura de camadas do modelo, a taxa de aprendizagem de pré-treino e a taxa de aprendizagem de afinação. A avaliação dessas experiências mostra que a abordagem de utilização de uma arquitetura ... (Resumo completo, clicar acesso eletrônico abaixo)
Activity recognition from computer vision plays an important role in research towards applications like human computer interfaces, intelligent environments, surveillance or medical systems. In this work, a gesture recognition system based on a deep learning architecture is proposed. It is used to analyze the performance when trained with multi-modal input data on an Italian sign language dataset. The underlying research area is a field called human-machine interaction. It combines research on natural user interfaces, gesture and activity recognition, machine learning and sensor technologies, which are used to capture the environmental input for further processing. Those areas are introduced and the basic concepts are described. The development environment for preprocessing data and programming machine learning algorithms with Python is described and the main libraries are discussed. The gathering of the multi-modal data streams is explained and the used dataset is outlined. The proposed learning architecture consists of two steps. The preprocessing of the input data and the actual learning architecture. The preprocessing is limited to three different strategies, which are combined to offer six different preprocessing profiles. In the second step, a Deep Belief network is introduced and its components are explained. With this setup, 294 experiments are conducted with varying configuration settings. The variables that are altered are the preprocessing settings, the layer structure of the model, the pretraining and the fine-tune learning rate. The evaluation of these experiments show that the approach of using a deep learning architecture on an activity or gesture recognition task yields acceptable results, but has not yet reached a level of maturity, which would allow to use the developed models in serious applications.
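The experimental design described in the abstract above (a grid over preprocessing profiles, layer structures, and two learning rates) can be sketched as a simple configuration enumeration. All concrete values below are illustrative placeholders, not taken from the thesis; the thesis's own grid totals 294 experiments, while this sketch enumerates a smaller hypothetical grid.

```python
from itertools import product

# Hypothetical experiment grid in the style described by the abstract:
# vary preprocessing profile, layer structure, pretraining learning rate,
# and fine-tuning learning rate, then enumerate every combination.
preprocessing_profiles = ["p1", "p2", "p3", "p4", "p5", "p6"]  # 6 profiles
layer_structures = [(500, 500), (1000, 500), (1000, 1000, 500)]
pretrain_lrs = [0.1, 0.01]
finetune_lrs = [0.1, 0.01]

experiments = [
    {"profile": p, "layers": l, "pretrain_lr": plr, "finetune_lr": flr}
    for p, l, plr, flr in product(
        preprocessing_profiles, layer_structures, pretrain_lrs, finetune_lrs
    )
]
print(len(experiments))  # 6 * 3 * 2 * 2 = 72 configurations in this sketch
```

Each dictionary would then parameterize one Deep Belief Network training run; collecting results per configuration makes the four varied factors directly comparable.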
Gillian, N. E. "Gesture recognition for musician computer interaction." Thesis, Queen's University Belfast, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.546348.
Full text
Cairns, Alistair Y. "Towards the automatic recognition of gesture." Thesis, University of Dundee, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.385803.
Full text
Harding, Peter Reginald George. "Gesture recognition by Fourier analysis techniques." Thesis, City University London, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.440735.
Full text
GOMES, MARIA LUZIA DE CERQUEIRA. "OBJECT AND GESTURE - AN INTERDISCIPLINARY LOOK." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2008. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=13140@1.
Full text
This work is a reflection on the interaction between man and object, starting from the elementary perception found in this "inter-world": the gesture. Making this human action manifest - whether spontaneous or technical - is the goal of this narrative, which seeks to nourish human relations, occurring in the spaces of lived experience, through an awareness of their extensions, that is, of their gestures and objects.
Tanguay, Donald O. (Donald Ovila). "Hidden Markov models for gesture recognition." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/37796.
Full text
Includes bibliographical references (p. 41-42).
by Donald O. Tanguay, Jr.
M.Eng.
Wilson, Andrew David. "Learning visual behavior for gesture analysis." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/62924.
Full text
Yao, Yi. "Hand gesture recognition in uncontrolled environments." Thesis, University of Warwick, 2014. http://wrap.warwick.ac.uk/74268/.
Full text
Gonçalves, Duarte Nuno de Jesus. "Gesture based interface for image annotation." Master's thesis, Faculdade de Ciências e Tecnologia, 2008. http://hdl.handle.net/10362/7828.
Full text
Given the complexity of visual information, multimedia content search presents more problems than textual search. This complexity stems from the difficulty of automatically tagging images and video with a set of keywords describing the content. Generally, this annotation is performed manually (e.g., Google Image) and the search is based on pre-defined keywords. However, this task is time-consuming and can be dull. The objective of this dissertation project is to define and implement a game for annotating personal digital photos with a semi-automatic system. The game engine tags images automatically, and the player's role is to contribute correct annotations. The application is composed of the following main modules: a module for automatic image annotation, a module that manages the game's graphical interface (showing images and tags), a module for the game engine, and a module for human interaction. Interaction uses a pre-defined set of gestures, captured with a web camera; these gestures are detected using computer vision techniques and interpreted as user actions. The dissertation also presents a detailed analysis of the application, its computational modules and design, as well as a series of usability tests.
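The game loop described above (an automatic annotator proposes tags, and the player confirms or rejects them via webcam gestures) can be sketched as follows. Every name here is hypothetical and illustrative; none comes from the dissertation, and the gesture detector is stubbed out rather than implemented with a real camera pipeline.

```python
# Minimal sketch of a semi-automatic annotation game, under the assumption
# that a separate module maps webcam input to "accept"/"reject" gestures.
ACCEPT, REJECT = "accept", "reject"

def auto_annotate(image):
    """Stand-in for the automatic image-annotation module."""
    return ["beach", "dog"]  # proposed tags (dummy values for illustration)

def play_round(image, detect_gesture):
    """Offer each proposed tag to the player; keep only accepted ones."""
    confirmed = []
    for tag in auto_annotate(image):
        gesture = detect_gesture(tag)  # e.g. wave = accept, swipe = reject
        if gesture == ACCEPT:
            confirmed.append(tag)
    return confirmed

# Example with a scripted "player" that accepts every proposal:
print(play_round("photo.jpg", lambda tag: ACCEPT))  # ['beach', 'dog']
```

Separating `auto_annotate` and `detect_gesture` mirrors the module decomposition the abstract describes: annotation engine, game engine, and human-interaction module stay independently replaceable.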
Glatt, Ruben. "Deep learning architecture for gesture recognition." Guaratinguetá, 2014. http://hdl.handle.net/11449/115718.
Full text
Co-advisor: Daniel Julien Barros da Silva Sampaio
Committee member: Galeno José de Sena
Committee member: Luiz de Siqueira Martins Filho
Abstract: Activity recognition from computer vision plays an important role in research toward applications such as human-computer interfaces, intelligent environments, surveillance, and medical systems. In this work, a gesture recognition system based on a deep learning architecture is proposed. It is used to analyze performance when trained with multi-modal input data on an Italian sign language dataset. The underlying research area is the field of human-machine interaction, which combines research on natural user interfaces, gesture and activity recognition, machine learning, and the sensor technologies used to capture environmental input for further processing. These areas are introduced and their basic concepts described. The Python development environment for data preprocessing and machine learning algorithms is described and the main libraries are discussed. The gathering of the multi-modal data streams is explained and the dataset used is outlined. The proposed learning architecture consists of two steps: the preprocessing of the input data and the learning architecture itself. Preprocessing is limited to three different strategies, which are combined to offer six different preprocessing profiles. In the second step, a Deep Belief Network is introduced and its components are explained. With this setup, 294 experiments are conducted with varying configurations; the variables altered are the preprocessing settings, the layer structure of the model, the pretraining learning rate, and the fine-tuning learning rate. The evaluation of these experiments shows that using a deep learning architecture for an activity or gesture recognition task yields acceptable results, but the approach has not yet reached a level of maturity that would allow the developed models to be used in serious applications.
Master's
Caceres, Carlos Antonio. "Machine Learning Techniques for Gesture Recognition." Thesis, Virginia Tech, 2014. http://hdl.handle.net/10919/52556.
Full text
Master of Science
Pfister, Tomas. "Advancing human pose and gesture recognition." Thesis, University of Oxford, 2015. http://ora.ox.ac.uk/objects/uuid:64e5b1be-231e-49ed-b385-e87db6dbeed8.
Full text