Theses on the topic "Pointing gestures"


Browse the 31 best theses for your research on the topic "Pointing gestures".


1

Wu, Zhen. "The role of pointing gestures in facilitating word learning". Diss., University of Iowa, 2015. https://ir.uiowa.edu/etd/1805.

Full text
Abstract
Previous natural observations have found a robust correlation between infants’ spontaneous gesture production and vocabulary development: the onset and frequency of infants’ pointing gestures are significantly correlated with their subsequent vocabulary size (Colonnesi, Stams, Koster, & Noom, 2010). The present study first examined the correlations between pointing and vocabulary size in an experimental setting, and then experimentally manipulated responses to pointing, to investigate the role of pointing in infants’ forming word-object associations. In the first experiment, we elicited 12- to 24-month-old infants’ pointing gestures to 8 familiar and 8 novel objects. Their vocabulary was assessed by the MacArthur Communicative Development Inventory (MCDI): Words and Gestures. Results showed that 12- to 16-month-old infants’ receptive vocabulary was positively correlated with their spontaneous pointing. This correlation, however, was not significant in 19- to 24-month-old infants. This experiment thus generalizes the previous naturalistic observation findings to an experimental setting, and shows a developmental change in the relation between pointing and receptive vocabulary. Together with prior studies, it suggests a possible positive social feedback loop of pointing and language skills in infants younger than 18 months old: the larger infants’ vocabularies, the more likely they are to point, the more words they hear, and the faster their vocabularies grow. In the second experiment, we tested whether 16-month-old infants’ pointing gestures facilitate infants’ word learning in the moment. Infants were randomly assigned to one of three conditions: the experimenter labeled an unfamiliar object with a novel name 1) immediately after the infant pointed to it (the point contingent condition); 2) when the infant looked at it; or 3) on a schedule predetermined by a vocabulary-matched infant in the point contingent condition.
After hearing the objects’ names, infants were presented with a word learning test. Results showed that infants successfully selected the correct referent above chance level only in the point contingent condition, and their performance was significantly better in the point contingent condition than the other two conditions. Therefore, only words that were provided contingently after pointing were learned. Taken together, these two studies further our understanding of the correlation between early gesture and vocabulary development and suggest that pointing plays a role in early word learning.
2

Råman, J. (Joonas). "Pointing gestures as embodied resources of references, requests and directives in the car". Master's thesis, University of Oulu, 2013. http://urn.fi/URN:NBN:fi:oulu-201304171193.

Full text
Abstract
This thesis studies the different ways the driver uses pointing gestures as resources of embodied references, requests and directives, as well as how these gestures are modified to ensure the successful transfer of meaning to the recipient even when facing the challenges of mobility and multitasking that are inherent in the conversational setting of the car. Specific focus is on the pointing gesture’s witnessability, duration, domain of scrutiny, and the apex location in relation to the verbal resources used. As a source of data, this thesis employs three corpora of naturally occurring conversation. The Habitable Cars corpus features mostly native English speakers, the Talk&Drive corpus features speakers of several different nationalities and languages, and the Kokkola corpus features only native Finnish speakers. In keeping with the research methods of conversation analysis, transcripts of the conversation situations are provided and serve as a source of analysis. The main finding of this research is that in order to distinguish pointing gestures used for referring, requesting and directing from each other, the driver modifies the gesture’s witnessability. Pointing gestures used for referring tend to be the shortest in duration and the least witnessable from the recipient’s point of view. Pointing gestures used for directing tend to be the longest in duration and the most witnessable, with the requesting gestures falling somewhere between these two extremes. Factors such as urgency and the increased need to multitask between talking and driving also have an effect on the gesture’s delivery, increasing or decreasing the overall witnessability of the gesture depending on the situation, and sometimes blurring the line between the three social actions examined in this thesis. However, the embodied resources used for performing the three social actions are distinguishable from each other and the initial categorization is justifiable.
This thesis continues the research into gesture witnessability and its importance in prioritization and acquisition of the referent. However, this thesis moves the concept of witnessability away from the world of ‘professional vision’, and further develops it by examining its interplay with the verbal resources, gesture duration and domain of scrutiny.
3

Cochet, Hélène. "Hand shape, function and hand preference of communicative gestures in young children : insights into the origins of human communication". Thesis, Aix-Marseille 1, 2011. http://www.theses.fr/2011AIX10076/document.

Full text
Abstract
Even though children’s early use of communicative gestures is recognized as being closely related to language development (e.g., Colonnesi et al., 2010), the nature of speech–gesture links still needs to be clarified. This dissertation aims to investigate the production of pointing gestures during development to determine whether the predictive and facilitative relationship between gestures and language acquisition involves specific functions of pointing, in association with specific features in terms of hand shape, gaze and accompanying vocalizations. Moreover, special attention was paid to the study of hand preferences in order to better understand the development of left hemisphere specialization for communicative behaviors. Our results revealed complex relationships between language, communicative gestures and manipulative activities depending on the function of gestures (i.e., imperative versus declarative pointing) as well as on specific stages of language acquisition. Declarative gestures were found to be more closely associated with speech development than imperative gestures, at least before the lexical spurt period. In addition, the comparison of hand-preference patterns in adults and infants showed stronger similarity for gestures than for object manipulation. The right-sided asymmetry for communicative gestures is thus established in early stages, which suggests a primary role of gestures in hemispheric specialization. Finally, our findings have highlighted the existence of a left-lateralized communication system controlling both gestural and vocal communication, which has been suggested to have a deep phylogenetic origin (e.g., Corballis, 2010). Therefore, the present work may improve current understanding of the evolutionary roots of language, including the mechanisms of cerebral specialization for communicative behaviors.
4

Roustan, Benjamin. "Etude de la coordination gestes manuels/parole dans le cadre de la désignation". PhD thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00759199.

Full text
Abstract
The work presented in this thesis investigates the coordination between manual gestures and speech in the production of multimodal utterances, focusing in particular on the temporal relations between the two modalities. This coordination was studied in the context of designation, which can be performed both in the manual modality (pointing gesture) and in the speech modality ("showing with the voice", e.g. through focus and/or demonstratives). The studies were conducted in a controlled laboratory environment in order to obtain precise, reproducible measurements while minimizing external sources of intra- and inter-participant variation. Speakers' productions can thus be compared with one another, concentrating on the factors of interest while holding everything else as equal as possible. Careful protocol design nevertheless kept the tasks fairly natural, so as not to induce overly artificial productions. The first two studies examined the joint production of manual gestures and speech containing focus. Several gesture types (pointing gesture, beat gesture, and pressing a button) were compared in a designation task. It was shown that the production of focus attracts the manual gesture whatever its type, but that the attraction is more precise and fine-grained for pointing. Moreover, the apex of the pointing gesture appears to co-occur with an articulatory rather than an acoustic target. The second study manipulates the designation link between the pointing gesture and speech; by revealing two strategies adopted by participants, it shows the complexity of the mechanisms at play in this coordination. Finally, a third study examines coordination in a more natural interactive, collaborative task.
In this task, speakers naturally use pointing gestures to show their interlocutor where to place a card, by means of a carrier sentence containing a demonstrative. The results show that the "showing" part of the gesture co-occurs with the speech information that complements it, i.e. with the name of the object to be placed at the location designated by the pointing gesture, rather than with the part of speech that designates, i.e. the demonstrative. The effect of disturbing the interaction with ambient noise was also tested: while speech undergoes a classic Lombard effect, gesture production is little modified, apart from the duration of the "showing" part of the gesture adapting to the lengthening of speech. The thesis also explores the multimodal annotation procedures developed for annotating semi-controlled tasks, which are applicable to more general cases. The manuscript concludes by putting the results in perspective for improving certain models of joint manual gesture/speech production, and offers some leads for conversational agents as well as for the detection of pathologies.
5

Madsen, Elainie Alenkær. "Attention following and nonverbal referential communication in bonobos (Pan paniscus), chimpanzees (Pan troglodytes) and orangutans (Pongo pygmaeus)". Thesis, University of St Andrews, 2011. http://hdl.handle.net/10023/1893.

Full text
Abstract
A central issue in the study of primate communication is the extent to which individuals adjust their behaviour to the attention and signals of others, and manipulate others’ attention to communicate about external events. I investigated whether 13 chimpanzees (Pan troglodytes spp.), 11 bonobos (Pan paniscus), and 7 orangutans (Pongo pygmaeus pygmaeus) followed conspecific attention and led others to distal locations. Individuals were presented with a novel stimulus, to test whether they would lead a conspecific to detect it in two experimental conditions. In one, the conspecific faced the communicator; the other required the communicator to first attract the attention of a conspecific. All species followed conspecific attention, but only bonobos did so in conditions that required geometric attention following and that required the communicator to first attract the conspecific’s attention. There was a clear trend for the chimpanzees to selectively produce a stimulus-directional ‘hunching’ posture when viewing the stimulus in the presence of a conspecific rather than alone (the comparison was statistically non-significant, but very closely approached significance [p = 0.056]), and the behaviour consistently led conspecifics to look towards the stimulus. An observational study showed that ‘hunching’ only occurred in the context of attention following. Some chimpanzees and bonobos consistently and selectively combined functionally different behaviours (sequential auditory and stimulus-directional behaviours) when viewing the stimulus in the presence of a non-attentive conspecific, although at the species level this did not yield significant effects. While the design did not eliminate the possibility of a social referencing motive (“look and help me decide how to respond”), the coupling of auditory cues followed by directional cues towards a novel object is consistent with a declarative and social referential interpretation of non-verbal deixis.
An exploratory study, which applied the ‘Social Attention Hypothesis’ (that individuals accord and receive attention as a function of dominance) to attention following, showed that chimpanzees were more likely to follow the attention of the dominant individual. Overall, the results suggest that the paucity of observed referential behaviours in apes may be due to the inconspicuousness and multi-faceted nature of the behaviours.
6

Ben, Chikha Houssem. "Impact des gestes de pointage et du regard de l’entraîneur sur la mémorisation d'une scène tactique de basket-ball : Études oculométriques". Electronic Thesis or Diss., Valenciennes, Université Polytechnique Hauts-de-France, 2023. https://ged.uphf.fr/nuxeo/site/esupversions/62b8d414-a45c-4a04-8f10-da43ab0dc578.

Full text
Abstract
Pointing gestures and guided gaze are commonly used as bodily cues to enhance visual attention and comprehension in various academic domains. However, their specific effectiveness in the sports context, particularly in teaching basketball tactical patterns, remains relatively unexplored. Therefore, the central objective of this thesis was to examine the impact of the coach's pointing gestures and/or guided gaze on visual attention and memorization of tactical scenes. The key findings revealed significant interactions between the use of these cues and the players' level of expertise, demonstrating an expertise reversal effect. In most experiments conducted, pedagogical methods that were effective for novice players proved to be ineffective or even detrimental for expert players. Consequently, these results emphasize the importance of adapting the use of pointing gestures and guided gaze to variations in players' expertise level when presenting training materials for basketball game plans and/or phases.
7

Grover, Lesley Ann. "Comprehension of the manual pointing gesture in human infants : a developmental study of the cognitive and social-cognitive processes involved in the comprehension of the gesture". Thesis, University of Southampton, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.329150.

Full text
8

Racine, Timothy Philip. "The role of shared practice in the origins of joint attention and pointing /". Burnaby B.C. : Simon Fraser University, 2005. http://ir.lib.sfu.ca/handle/1892/2056.

Full text
9

Hatzopoulou, Marianna. "Acquisition of reference to self and others in Greek Sign Language : From pointing gesture to pronominal pointing signs". Doctoral thesis, Stockholm : Sign Language Section, Department of Linguistics, Stockholm University, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-8293.

Full text
10

Nugent, Susie P. "Infant cross-fostered chimpanzees develop indexical pointing". Thesis, University of Nevada, Reno, 2006. http://0-gateway.proquest.com.innopac.library.unr.edu/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:1433288.

Full text
11

Winnerhall, Louise. "The effect of breed selection on interpreting human directed cues in the domestic dog". Thesis, Linköpings universitet, Biologi, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-108847.

Full text
Abstract
During the course of time, artificial selection has given rise to a great diversity among today's dogs. Humans and dogs have evolved side by side, and dogs have come to understand human body language relatively well. This study investigates whether selection pressure and domestication reveal differences in dogs' ability to interpret human directional cues, such as distal pointing. In this study, 46 pet dogs from 27 breeds and 6 crossbreeds were tested for performance in the two-way object choice task. Breeds selected to work in eye contact with humans were compared with breeds selected to work more independently. Dogs of different skull shapes were also compared, as well as by age, sex and previous training on similar tasks. No significant differences in performance were found between dogs of various age, sex or skull shape. There was a trend toward a significant difference in performance when the dog had previously been trained on similar tasks. When dogs that made 100% one-sided choices were excluded, a trend appeared toward a difference between the cooperative worker breeds and the other breeds in the time it took to make a choice. There was a correlation between the number of correct choices made and the latency from being released to making a choice (choice latency). All groups of dogs, regardless of my categorization, performed above chance level, showing that dogs have a general ability to follow, and understand, human distal pointing.
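As a hedged aside (not part of the thesis), "performed above chance level" in a two-way object-choice task is commonly established with a one-sided binomial test against a chance rate of 0.5. A minimal standard-library sketch, with invented trial counts:

```python
from math import comb

def p_above_chance(successes: int, trials: int, chance: float = 0.5) -> float:
    """One-sided binomial test: P(X >= successes) if every choice were random."""
    return sum(comb(trials, k) * chance**k * (1 - chance)**(trials - k)
               for k in range(successes, trials + 1))

# Invented example: a dog picks the pointed-at container in 14 of 16 trials.
p = p_above_chance(14, 16)
print(f"p = {p:.4f}")  # → p = 0.0021, i.e. above chance at the 0.05 level
```

At group level the same logic is applied to the pooled or per-dog success counts; the thesis reports the actual statistics used.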
12

Esteve, Gibert Núria. "The integration of prosody and gesture in early intentional communication". Doctoral thesis, Universitat Pompeu Fabra, 2014. http://hdl.handle.net/10803/284226.

Full text
Abstract
This dissertation comprises four experimental studies which investigate the way infants integrate prosody and gesture for intentional communicative purposes. As adult speakers we automatically integrate prosody and gestures at a temporal and pragmatic level, and we use these cues together with social contextual information to convey and understand intentional meanings. My aim is to investigate whether infants use prosodic and gesture features in an integrated way for communicative purposes prior to their use of lexical-semantic cues. The dissertation includes four studies, each one described in a separate chapter. The first study is a longitudinal analysis of how a group of infants produce gesture and speech combinations in natural interactions, with results that show that already at 12 and 15 months of age infants temporally align prosodic and gesture prominences. The second study uses a habituation/test procedure to test infants’ early sensitivity to temporal gesture-prosody integration, showing that 9-month-old infants are sensitive to the alignment between prosodic and gesture prominences. The third study analyzes the longitudinal productions of four infants at the pre-lexical stage and provides evidence that infants use prosodic cues such as pitch range and duration to convey specific intentions like requests, statements, responses, and expressions of satisfaction or discontent. Finally, the fourth study examines how infants responded at 12 months of age to different types of pointing-speech combinations and shows that infants use prosodic and gestural cues to comprehend communicative intentions behind an attention-directing act. Altogether, this dissertation shows that the temporal integration of gesture and speech occurs at the early stages of language and cognitive development, and that pragmatic uses of prosody and gesture develop before infants master the use of lexical cues.
Thus, prosody is the first grammatical component of language that infants use for communicative purposes, revealing that linguistic communication emerges before infants have the ability to use lexical items with semantic meanings. I further claim that infants’ integration of prosody and gesture at the temporal and pragmatic levels reflects an early emergence of language pragmatics.
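The "temporal alignment of prosodic and gesture prominences" studied here is typically quantified by comparing the timestamps of gesture apexes with those of accented syllables. A sketch of that measurement, with invented timestamps rather than data from the dissertation:

```python
# Invented timestamps (in seconds), for illustration only.
apex_times = [0.42, 1.10, 2.35]    # pointing-gesture apexes
accent_times = [0.45, 1.05, 2.40]  # accented-syllable peaks

# Signed lag per event: negative means the apex precedes the accent.
lags = [a - b for a, b in zip(apex_times, accent_times)]
mean_lag_ms = 1000 * sum(lags) / len(lags)
print(f"mean apex-to-accent lag = {mean_lag_ms:.0f} ms")  # → -10 ms
```

A mean lag near zero with small spread is what "temporal alignment" amounts to operationally; the dissertation reports the actual alignment measures.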
13

Scheider, Linda. "The command hypothesis versus the information hypothesis : how do domestic dogs (Canis familiaris) comprehend the human pointing gesture?" Berlin : Freie Universität Berlin, 2011. http://d-nb.info/1025939069/34.

Full text
14

Liu, Xiaoxing. "What role does effort play: the effect of effort for gesture interfaces and the effect of pointing on spatial working memory". Diss., University of Iowa, 2016. https://ir.uiowa.edu/etd/2110.

Full text
Abstract
Automatically recognizing gestures of the hand is a promising approach to communicating with computers, particularly when keyboard and mouse interactions are inconvenient, when only a brief interaction is necessary, or when a command involves a three-dimensional, spatial component. Which gestures are most convenient or preferred in various circumstances is unknown. This work explores the idea that the perceived physical effort of a hand gesture influences users’ preference for using it when communicating with a computer. First, the hypothesis that people prefer gestures with less effort is tested by measuring the perceived effort and appeal of simple gestures. The results demonstrate that gestures perceived as less effortful are more likely to be accepted and preferred. The second experiment tests a similar hypothesis with three-dimensional selection tasks. Participants used the tapping gesture to select among 16 targets in two environments that differ primarily in the physical distance required to finish the task. Participants, again, favor the less effortful environment over the other. Together the experiments suggest that effort is an important factor in user preference for gestures. The effort-to-reliability tradeoff existing in the majority of current gesture interfaces is then studied in experiment 3. Participants were presented with 10 different levels of effort-to-reliability tradeoff and decided which tradeoff they preferred. Extreme conditions were intentionally avoided. On average they rated their preferred condition 4.23 on a 10-point scale of perceived effort, and could achieve a success rate of approximately 70%. Finally, the question of whether pointing to objects enhances recall of their visuospatial position in a three-dimensional virtual environment is explored. The results show that pointing actually decreases memory relative to passively viewing.
All in all, this work suggests that effort is an important factor and that there is an optimal balance in the effort-to-reliability tradeoff from a user’s perspective. Understanding and carefully considering this point can help make future gesture interfaces more usable.
Los estilos APA, Harvard, Vancouver, ISO, etc.
15

Kato, Carolyn K. "A comparison between pre-verbal "you-me" pointing and the acquisition of verbal pronouns : does gestural knowledge facilitate the acquisition of verbal pronouns?" Thesis, McGill University, 1988. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=61834.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
16

Melo, Silvia Beatriz Fonseca de. "O gato doméstico (Felis catus) responde à sinais gestuais? Possíveis implicações do convívio social". Universidade Federal do Rio Grande do Norte, 2008. http://repositorio.ufrn.br:8080/jspui/handle/123456789/17272.

Texto completo
Resumen
Cats (Felis catus) were domesticated about 9,500 years ago with the advent of agriculture, being used to control the pests that devastated harvested food. These animals went through artificial selection, and over generations and millennia their behavior and morphology were changed by humans. This process of domestication gave rise to a special ability, the understanding of human pointing gestures, clearly noticeable when we feed our pets. Our goal in this study was to assess the comprehension of pointing gestures by cats and also to verify the influence that social interaction exerts on the development of this ability. We found that experimental subjects from both groups, solitary animals and social animals, were able to follow human indication in order to find hidden food. However, social interaction had no effect on the cats' performance. The ability tested here probably evolved during the process of domestication of this species, and social interaction seems to exert little or no influence upon its expression.
Os gatos (Felis catus) foram domesticados há cerca de 9.500 anos devido à agricultura, onde eram utilizados no combate às pragas que assolavam os alimentos colhidos. Esses animais passaram por uma seleção artificial e ao longo das gerações e milênios tiveram seus comportamentos e morfologia modificadas pelos humanos. O processo de domesticação pelo homem fez surgir uma habilidade em especial, a compreensão de sinais gestuais humanos, que é bem observada nos momentos em que alimentamos nossos animais. Nosso objetivo neste estudo foi testar a resposta à sinalização gestual (comportamento de apontar) em gatos, emitida por humanos e também verificar a influência do convívio social sobre o desenvolvimento desta habilidade. Observamos que os sujeitos experimentais de ambos os grupos, animais solitários e de convívio em grupo, foram capazes de seguir os sinais de indicação humana para localizar o alimento escondido. Porém, a forma de convívio social não influenciou no desempenho dos gatos. A habilidade aqui testada possivelmente evoluiu durante o processo de domesticação dessa espécie, e a interação social parece exercer pouca ou nenhuma influência sobre a sua expressão.
Los estilos APA, Harvard, Vancouver, ISO, etc.
17

Delamare, William. "Interaction à distance en environnement physique augmenté". Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAM032/document.

Texto completo
Resumen
Nous nous intéressons à l'interaction dans le contexte d'environnements physiques augmentés, plus précisément avec les objets physiques qui les composent. Bien que l'augmentation de ces objets offre de nouvelles possibilités d'interaction, notamment celle d'interagir à distance, le monde physique possède des caractéristiques propres rendant difficile l'adaptation de techniques d'interaction existantes en environnements virtuels. Il convient alors d'identifier ces caractéristiques afin de concevoir des techniques d'interaction à la fois efficaces et plaisantes dédiées à ces environnements physiques augmentés. Dans nos travaux, nous décomposons cette interaction à distance avec des objets physiques augmentés en deux étapes complémentaires : la sélection et le contrôle. Nous apportons deux contributions à chacun de ces champs de recherche. Ces contributions sont à la fois conceptuelles, avec la création d'espaces de conception, et pratiques, avec la conception, la réalisation logicielle et l'évaluation expérimentale de techniques d'interaction :- Pour l'étape de sélection, nous explorons la désambiguïsation potentielle après un geste de pointage à distance définissant un volume de sélection comme avec une télécommande infrarouge par exemple. En effet, bien que ce type de pointage sollicite moins de précision de la part de l'utilisateur, il peut néanmoins impliquer la sélection de plusieurs objets dans le volume de sélection et donc nécessiter une phase de désambiguïsation. Nous définissons et utilisons un espace de conception afin de concevoir et évaluer expérimentalement deux techniques de désambiguïsation visant à maintenir l'attention visuelle de l'utilisateur sur les objets physiques.- Pour l'étape de contrôle, nous explorons le guidage de gestes 3D lors d'une interaction gestuelle afin de spécifier des commandes à distance. Ce guidage est nécessaire afin d'indiquer à l'utilisateur les commandes disponibles ainsi que les gestes associés. 
Nous définissons un espace de conception capturant les caractéristiques comportementales d'un large ensemble de guides ainsi qu'un outil en ligne facilitant son utilisation. Nous explorons ensuite plusieurs options de conception afin d'étudier expérimentalement leurs impacts sur la qualité du guidage de gestes 3D
We explore interaction with augmented physical objects within physical environments. Augmented physical objects allow new ways of interaction, including distant interaction. However, the physical world has specificities that make it difficult to adapt interaction techniques already existing in virtual environments. These specificities need to be identified in order to design efficient and enjoyable interaction techniques dedicated to augmented physical environments. In our work, we split distant interaction into two complementary stages: the selection and the control of augmented physical objects. For each of these stages, our contribution is two-fold. These contributions are both theoretical, with the establishment of design spaces, and practical, with the design, implementation and experimental evaluation of interaction techniques: - For the selection stage, we study the disambiguation potentially needed after a distal pointing gesture using a volume selection, such as with an infrared remote control. Indeed, although volume selection can facilitate the aiming action, several objects can fall into the selected volume; users must then disambiguate this coarse pointing selection. We define and use a design space in order to design and experimentally evaluate two disambiguation techniques that maintain the user's focus on the physical objects. - For the control stage, we study the guidance of 3D hand gestures used to trigger commands at a distance. Such guidance is essential in order to reveal the available commands and the associated gestures. We define a design space capturing the specificities of a wide range of guiding systems. We also provide an online tool easing the use of such a large design space. We then explore the impact of several design options on the quality of 3D gesture guidance.
Los estilos APA, Harvard, Vancouver, ISO, etc.
18

Rashdan, Khalid. "Entre "Ça" et "Comme ça", différences entre la deixis ad oculos et la deixis am phantasma au niveau gestuel, intonatif et syntaxique : étude chez des enfants entre 4 et 7 ans". Thesis, Paris 5, 2014. http://www.theses.fr/2014PA05H031/document.

Texto completo
Resumen
Dans cette thèse, je traite la question de la Deixis sur trois niveaux du langage : Syntaxique, phonétique et gestuel. L’idée principale, c’est que la Deixis ne s’arrête pas sur le fait de pointer un objet présent et peut aller plus loin avec des caractéristiques beaucoup plus abstraites. Cette thèse est accomplie sur trois parties fondamentales. La première partie contient un chapitre définitoire et dans le deuxième chapitre je présente les trois processus fondamentaux de la Deixis : la catégorisation, la nomination et la mémorisation. Dans la deuxième partie, j’étudie les processus cognitifs de la Deixis d’un point de vue dynamique et se servir des théories et concepts pour éclaircir la méthodologie utilisée. J’explique la dynamique de la Deixis Ad Oculos et la Deixis Am Phantasma en montrant comment cette dynamique est complètement dépendante des représentations personnelles du sujet. Les concepts de partage et d’égocentrage nous donnent un sens aux analyses que nous pourrons utiliser comme critères pour décider si un tel sujet a une bonne gestuelle communicative pour passer clairement son message à l’autrui. Ces concepts pourront être bien intéressantes en ce qui concerne des recherches étudiant le comportement des acteurs et les études de détection. Ainsi, avec quelques recherches plus profondes nous pourrons créer des listes de critères et de caractéristiques qui nous aideront à distinguer et détecter un comportement correct d’un autre qui est incorrecte. De même, nous pourrons définir un comportement normal d’un autre anormal et appliquer ces indices et critères sur des enfants souffrant d’incapacité pour s’exprimer tels que des enfants dyslexiques et autistes
In this thesis, I treat the question of Deixis on three levels of language: the syntactic, the phonetic and the gestural. The main idea is that Deixis is not limited to pointing out a present object and can extend to far more abstract features. The thesis has three basic parts. The first contains a definitional chapter, and in the second chapter I present the three basic processes of Deixis: categorization, nomination and memorization. In the second part, I study the cognitive processes of Deixis from a dynamic point of view, using theories and concepts to clarify the methodology. I explain the dynamics of Deixis Ad Oculos and Deixis Am Phantasma, showing how these dynamics depend entirely on the personal representations of the subject. The concepts of sharing and egocentring give our analyses criteria for deciding whether a subject has communicative gestures clear enough to convey a message to others. These concepts can be very interesting for research on actors' behavior and for detection studies. Thus, with deeper research, we could create lists of criteria and characteristics to help distinguish and detect appropriate behavior from inappropriate behavior. Similarly, we could distinguish normal from abnormal behavior and apply these criteria and indicators to children with expressive difficulties, such as dyslexic and autistic children, to help them express themselves in the future.
Los estilos APA, Harvard, Vancouver, ISO, etc.
19

YASUI, Eiko y 永子 安井. "語りの開始にともなう他者への指さし : 多人数会話における指さしのマルチモーダル分析". 名古屋大学文学部, 2014. http://hdl.handle.net/2237/19747.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
20

Andersson, Elin. "Dogs' understanding of human pointing gestures". Thesis, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-109038.

Texto completo
Resumen
To investigate animals' ability to understand human communication signals, and the communication between animals and humans more generally, scientists often study the understanding of human gestural cues. Dogs (Canis lupus familiaris), which have a long history of co-evolution with humans, have been shown to make good use of human gestural cues. In the present study, I investigated whether dogs in general understand a human pointing gesture and whether there are differences by sex, age or breed. In total, 46 dogs of different breeds participated in the study, which was carried out at a dog center in Linköping, Hundens och djurens beteendecenter. To test whether dogs understand human pointing gestures, a two-way object-choice test was used, in which an experimenter pointed at a baited bowl at a distance of three metres from the dog. The results showed that dogs in general can understand human pointing gestures; however, no significant differences were found for sex, age or breed, so these factors did not affect the ability.
Los estilos APA, Harvard, Vancouver, ISO, etc.
21

"Phonetic Realization of Narrow Focus in Hong Kong Cantonese and its Temporal Relationship with Pointing Gestures". 2016. http://repository.lib.cuhk.edu.hk/en/item/cuhk-1292307.

Texto completo
Resumen
西方文獻顯示在語言產出中,韻律與語伴手勢(co-speech gesture)的顯著部分有規律的同步性,並指手勢的比劃(stroke)或頂峰(apex)與焦點/重音詞(focused/stressed/accentedword)、重音音節(stressed/accented syllable)或基頻峰(F0 peak)等單位有密切的時間連繫。惟文獻只涵蓋重音語言,故此韻律與手勢在如香港粵語等無重音聲調語言中的時間關係仍然未明。另一方面,文獻中對香港粵語窄焦點句的韻律特徵仍然存在分歧,爭議主要在於焦點部分的音高和音域是否有上升和擴展,及焦點後是否有壓縮(post-focus compression)。因此,本文研究香港粵語窄焦點句的韻律特徵,並透過分析伴隨焦點產出之指示手勢(pointing gesture)的時間規律,初探韻律和語伴手勢在無重音聲調語言中的同步關係。
10名香港粵語母語者參與了一項產出實驗。實驗中參與者須辨識一些圖片,並回答問題,確認或糾正圖片顯示的物件名稱。糾正時,參與者須同時以慣用手指向相關的圖片。實驗結果顯示香港粵語窄焦點句的焦點部分時長增加,音高與音域普遍沒有顯著改變。焦點後部分則時長、音高與音域三者均沒顯著改變。手勢方面,結果顯示大部分參加者的指示手勢的頂峰均與焦點詞同步,其中在焦點詞為雙音節時,手勢頂峰多與首音節同步。亦有小部分參加者展示出其他同步規律。總括實驗結果,香港粵語中韻律和語伴手勢的顯著部分雖有同步關係,但其同步規律跟重音非聲調語言有明顯差別,顯示不同語言的韻律特徵對語伴手勢產出的影響。
A growing number of empirical studies show that speech prosody and co-speech gestures unfold with a regular temporal relationship. More precisely, the prominent units of both channels are said to be “synchronised”, or closely aligned to one another. Gestural prominence is commonly measured either by the stroke, i.e., the most meaningful phase of a gesture, or the apex, i.e., the peak of the stroke. On the prosody side of the alignment, several speech units and prosodic landmarks have been suggested to attract gestural prominence, including the focused/stressed word, the stressed syllable of that word and, even more precisely, the F0 peak of that syllable. However, these results are based only on studies of stress languages, and no study has yet investigated the temporal relationship between prosody and gesture in non-stress tone languages, e.g. Hong Kong Cantonese. Previous studies on the prosodic realisation of focus in Hong Kong Cantonese reported mixed results as to what the acoustic correlates of focus are and whether changes in them take place locally in the on-focus element or extend to the post-focus domain as well. Two main research questions were therefore raised in this study: (1) how narrow focus is realised prosodically in Hong Kong Cantonese, and (2) whether and how it coordinates temporally with co-speech pointing gestures.
To address the two questions, 10 native speakers of Hong Kong Cantonese participated in a picture-naming task, in which each of them was presented with pictures of two objects at a time and asked to verify them. Pointing was elicited along with verbal corrections. Acoustic results show that narrow (contrastive) focus is marked solely by on-focus durational increase. Gestural results reveal alignment between prosodic and gestural prominences, as most of the gestural apices were produced within the focused words. However, in contrast with previous findings, no significant effect of F0 (tone) or focus position is found. Rather, most speakers consistently aligned their apices to syllables of the same position. Based on the current findings, the prosodic anchor of prosody-gesture alignment in Hong Kong Cantonese is suggested to be the focused word.
Fung, Sze Ho.
Thesis M.Phil. Chinese University of Hong Kong 2016.
Includes bibliographical references (leaves ).
Abstracts also in Chinese.
Title from PDF title page (viewed on …).
Detailed summary in vernacular field only.
Los estilos APA, Harvard, Vancouver, ISO, etc.
22

Jorge, Clinton Luis. "Remote presence: supporting deictic gestures through a handheld multi-touch device". Master's thesis, 2011. http://hdl.handle.net/10400.13/474.

Texto completo
Resumen
This thesis argues for the possibility of supporting deictic gestures through handheld multi-touch devices in remote presentation scenarios. In [1], Clark distinguishes the indicative techniques of placing-for and directing-to, where placing-for refers to placing a referent into the addressee’s attention, and directing-to refers to directing the addressee’s attention towards a referent. Keynote, PowerPoint, FuzeMeeting and others support placing-for efficiently with slide transitions and animations, but offer little to no support for directing-to. The traditional “pointing feature” present in some presentation tools comes as a virtual laser pointer or mouse cursor. [12, 13] have shown that the mouse cursor and laser pointer offer very little informational expressiveness and do not do justice to human communicative gestures. In this project, a prototype application was implemented for the iPad in order to explore, develop, and test the concept of pointing in remote presentations. The prototype offers visualizing and navigating the slides as well as “pointing” and zooming. To further investigate the problem and possible solutions, a theoretical framework was designed representing the relationships between the presenter’s intention and gesture and the resulting visual effect (cursor) that enables audience members to interpret the meaning of the effect and the presenter’s intention. Two studies were performed to investigate people’s appreciation of different ways of presenting remotely: an initial qualitative study performed at The Hague, followed by an online quantitative user experiment. The results indicate that subjects found pointing helpful for understanding and concentrating, while the detached video feed of the presenter was considered distracting. The positive qualities of having the video feed were the emotion and social presence it adds to presentations.
For a number of subjects, pointing displayed some of the same social and personal qualities [2] that video affords, though less intensely. The combination of pointing and video proved the most successful, with 10 of 19 subjects scoring it highest, while pointing alone came a close second at 8 of 19. Video was the least preferred, with only one subject favoring it. We suggest that the research performed here could provide a basis for future research and could be applied in a variety of distributed collaborative settings.
Universidade da Madeira - Madeira Interactive Technologies Institute
Los estilos APA, Harvard, Vancouver, ISO, etc.
23

Das, Shome Subhra. "Techniques for estimating the direction of pointing gestures using depth images in the presence of orientation and distance variations from the depth sensor". Thesis, 2021. https://etd.iisc.ac.in/handle/2005/5604.

Texto completo
Resumen
A significant part of our daily life is spent interacting with various computing devices, mobile phones, etc. Soon, robots and drones may become common in daily life, as might virtual reality based interfaces. Currently, the interaction with such devices is through the use of a mouse, touch-pad, joystick, virtual reality wand, etc. Most of these interaction devices (such as the mouse, keyboard and joystick) are immobile and restricted to table-top usage. Some interaction devices, such as drone controllers, are mobile but need significant training before usage. Devices such as virtual reality wands are mobile, but inhibit the immersive experience due to their handheld mode of operation and their size. Also, these interaction devices need to be touched with the hand during operation, making them prone to transmission of viruses, bacteria, etc. due to their use by multiple subjects. To overcome the limitations of the existing interaction devices, the recent trend is to move towards gesture-based user interfaces, which enable more natural interaction. Gestural interfaces facilitate better interaction in 3D, since the user is not constrained to operate on planar surfaces (as with the mouse and touch-pad) or at a fixed location (as with the joystick). Also, gesture based interaction requires minimal training, as human beings use gestures in day-to-day life. Gesture based interfaces are immersive as they mainly use the hands rather than cumbersome external devices. Also, gesture based interfaces are non-contact and minimize the probability of transmission of viruses, bacteria, etc. All the interaction modes (gestural or device based) mentioned above are mostly used for selection tasks, pick-and-place tasks and direction indication tasks. These tasks primarily consist of pointing tasks. Existing gesture-based pointing interfaces suffer from one or more of the following limitations.
Techniques using RGB cameras need multiple cameras for operation, thereby imposing constraints on camera placement and on the operational area of the setup. RGB camera based techniques are also not tolerant to variation in skin color and illumination. The majority of the existing techniques based on depth sensors rely on multiple joint locations (such as the head, shoulder, elbow, and wrist) to find the pointing direction. This requires that the entire upper human body be visible to the depth sensor, which may lead to occlusion-related problems and constrains the operational area of the setup. There are a few techniques which use only the hand region data from a depth sensor; however, these techniques either work in a very constrained setting or have very poor accuracy. This thesis addresses two problems, namely pointing direction estimation (hereafter referred to as PDE) and detection of pointing gestures (a prerequisite for any PDE technique). The proposed techniques use depth images from a single depth sensor, thus avoiding the pitfalls of RGB based techniques, and use only the hand region, thus avoiding the pitfalls of using multiple body parts. To our knowledge, this is the maiden attempt at creating depth- and orientation-tolerant, accurate methods for estimating the pointing direction using only depth images of the hand region. The proposed methods achieve accuracies comparable to or better than those of existing methods while avoiding their limitations. The key contributions of this thesis are summarized below.
• Proposing an accurate, real-time technique for estimating the pointing direction using a nine-axis inertial motion unit (IMU) and depth data from an RGB-D sensor. It is the first method to fuse information from the IMU and depth sensor to obtain the pointing direction by finding the axis vector of the index finger. Further, this is the first method to obtain the ground-truth pointing direction of index finger-based pointing gestures using only the data from the index finger region.
• Creation of a large data-set of 107K samples with accurate ground-truth for pointing direction estimation from depth images. Each sample consists of the segmented depth image of a hand, the fingertip location (2D + 3D), the pointing vector (as a unit vector and in terms of the yaw and pitch values), and the mean depth of the hand. To the best of our knowledge, this is the first data-set for depth image and hand region based PDE that has accurate ground-truth and a large number of samples. The data-set has been made publicly available.
• Proposing a new 3D convolutional neural network based method to estimate the pointing direction. To the best of our knowledge, this is the first deep learning-based method for PDE that uses only the depth image of the hand region. It is tolerant to variation in the orientation and depth of the hand with respect to the camera and is suitable for real-time applications.
• Proposing a technique that uses the depth image of the hand region for estimating the pointing direction with the aid of a global registration technique. The pointing direction is estimated by aligning a pointing hand model (captured using a Kinect fusion-based method) with the point-cloud from the test depth data. Unlike the other methods proposed by us, it does not need an accurate segmentation of the hand region as a prerequisite. It is tolerant to variation in the orientation and depth of the hand w.r.t. the RGB-D sensor, and achieves lower net angular, yaw and pitch errors than most hand region based PDE techniques in the literature.
• Creation of a large data-set of approximately 100K (46,918 positive and 53,477 negative) samples for detection of pointing gestures from depth images of the hand region. A deep learning based technique is proposed using the created data-set to distinguish pointing gestures from other hand gestures. The proposed method achieves significantly better performance over various metrics (accuracy, precision, recall, true negative rate, false positive rate, false negative rate) w.r.t. the only other existing technique for pointing gesture detection using the hand region and depth image.
• Proposing an accurate, inexpensive setup for calibrating a nine-axis IMU. This calibration is used in some works reported in the thesis.
• Proposing a technique to find the absolute orientation of an RGB-D sensor in the North-East-Down (NED) coordinate frame. The absolute orientation of the RGB-D sensor is used in some works reported in this thesis.
In chapter 1, we state the motivation to create a pointing gesture based interface for natural human interaction with computers, robots, drones and virtual reality setups. We elucidate the limitations of the existing techniques and devices to show the necessity of designing pointing gesture-based interfaces that use a single depth sensor and the image of the hand region only. In chapter 2, we describe the experimental setups created and used for the work reported in chapters 3 and 4. First, we propose an inexpensive and accurate setup to calibrate a nine-axis IMU. Then we propose a method to find the orientation of an RGB-D sensor in the NED frame. In chapter 3, we propose a technique for estimating the pointing direction using a nine-axis IMU and depth data from an RGB-D sensor. Sensor fusion is applied to the data from the magnetometer, accelerometer and gyroscope to find the pointing direction in the NED frame, and a coordinate transformation is used to find the pointing direction in the frame of the RGB-D sensor. The computationally expensive parts of the algorithm are executed on the GPU, due to which it takes only eight milliseconds to process a frame.
Thus, the proposed method is suitable for real-time operation, while achieving a mean accuracy of 90.5% over the depth range of the RGB-D sensor. Chapter 4 reports on the creation of a data-set for index finger-based pointing direction estimation with accurate ground truth and a large number of samples. The data-set was collected using the IMU-based technique proposed in chapter 3. Chapter 4 also proposes a 3D convolutional neural network-based method to find the pointing direction from depth images of the hand region. This method achieves a mean accuracy of 94.49% over the depth range of the RGB-D sensor and real-time performance. In chapter 5, we propose a global registration-based method to find the pointing direction. A pointing hand model is captured using the Kinect fusion technique, while the pointing direction is found by aligning the pointing hand model to the point-cloud from the test depth data. We show that our method is tolerant to small changes between the model and the actual hand data. This method achieves a mean accuracy of 86.33% over the depth range of the RGB-D sensor. In chapter 6, we address the challenge of detecting pointing gestures from depth images of the hand region. Identifying pointing gestures among generic gestures is a prerequisite for any pointing gesture-based interface. We have created a data-set of nearly 100K samples consisting of a comparable number of pointing and non-pointing gestures. A 3D convolutional neural network-based method is proposed using the created data-set to distinguish pointing gestures from other gestures using only the depth images of the hand region. The proposed method achieves significantly better performance over various metrics (accuracy, precision, recall, true negative rate, false positive rate, false negative rate) w.r.t. the only other technique for pointing gesture detection that uses only the hand region and depth image. The proposed method is suitable for real-time operation.
Chapter 7 concludes the thesis by summarizing the contributions from all the chapters and proposing directions for possible future work.
Los estilos APA, Harvard, Vancouver, ISO, etc.
24

Yang, Feng-Ming y 楊豐名. "Vision-based Remote Pointing with Finger Gesture for Interactive English Learning". Thesis, 2014. http://ndltd.ncl.edu.tw/handle/28743408623740691136.

Texto completo
Resumen
碩士
國立東華大學
資訊工程學系
102
In this thesis, we propose a human-computer interaction system in which the user wears a head-mounted camera to obtain live images from the user’s viewpoint. The system tracks the real projection-screen area in front of the user in the captured image sequence. Once the user points to the targeted area with the fingers, the system extracts the hand region through background subtraction using a screenshot and the transformed projection area. By determining the positions and the number of fingertips, users can manipulate the content on the screen using intuitive finger gestures, such as selecting, dragging, and dropping. Because our system is suitable for real-time group cooperation and interaction on a projection screen, we propose multi-user interactive English-learning activities based on unscrambling word order. After we implement multi-camera image processing for our system in the future, we will evaluate the learning interactivity, the participants’ motivation, and the learning effectiveness of the system relative to other remote pointing interfaces.
Los estilos APA, Harvard, Vancouver, ISO, etc.
25

Fisher, Tamara L. "Declarative pointing : the capacity to share experience with others in infants with and without down syndrome /". 2006. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&res_dat=xri:pqdiss&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&rft_dat=xri:pqdiss:MR19674.

Texto completo
Resumen
Thesis (M.A.)--York University, 2006. Graduate Programme in Psychology.
Typescript. Includes bibliographical references (leaves 48-55). Also available on the Internet.
Los estilos APA, Harvard, Vancouver, ISO, etc.
26

Ko, Chih-han y 柯志函. "Design and Development of Hand-free and 3-D Pointing Gesture Recognition System". Thesis, 2009. http://ndltd.ncl.edu.tw/handle/92207115965771180412.

Texto completo
Resumen
碩士
國立臺灣科技大學
工業管理系
97
Pointing gestures are a very intuitive form of human communication. Basically, the pointing orientation has full spatial compatibility, and the meaning of a gesture lies in its pointing trajectory. The objective of this study is to design and develop a "free-hand, 3-D, real-time pointing gesture recognition system" using two cameras. When pointing at an object, the eye, the finger, and the object should be collinear. Based upon this principle, sub-systems for 3-D camera calibration, hand/head area detection, and hand/head tracking were developed to continuously track the 3-D positions of the hand and head. The 3-D pointing positions were reconstructed on a large screen with a refresh rate of 20 Hz, and the velocities and accelerations of the pointing trajectories were then calculated. To verify the advantages of the system, we built a remotely controlled music player with free-hand commands for play, pause, volume control, move to the previous track, and move to the next track. Experimental results showed that the average time spent per gesture command is less than 4 seconds, and the recognition rate over all commands is more than 90%.
27

Ching-Yu, Chien. "Vision-based Real-time Pointing Arm Gesture Tracking and Recognition System using Multiple Cameras". 2006. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0016-0109200613415356.

Full text
28

Chien, Ching-Yu and 簡敬宇. "Vision-based Real-time Pointing Arm Gesture Tracking and Recognition System using Multiple Cameras". Thesis, 2006. http://ndltd.ncl.edu.tw/handle/84402742103207006919.

Full text
Abstract
Master's thesis
National Tsing Hua University
Department of Electrical Engineering
94
In this thesis, we develop a real-time arm-pointing system. Its main contribution is the use of three cameras to track the pointing arm and identify several pointing targets in 3-D space. The system allows the user to point with the arm and walk around the workspace at the same time. The novelty of our method is that it directly tracks two 3-D points representing the pointing line and then refines the tracking results. We take advantage of the Direct Linear Transformation (DLT) to extend the particle-filter samples into 3-D space. In our system, the pointing targets need not be visible in any of the three views. Experiments show that the system analyzes each video frame in about 1/6 second. Pointing accuracy was measured over 80 pointing trials to eight designated 3-D targets by five users; the success rate was above 90%.
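The Direct Linear Transformation step mentioned above can be sketched as follows (a minimal illustration under toy calibration assumptions, not the thesis implementation): each calibrated camera, described by a 3x4 projection matrix P, contributes two linear constraints on the homogeneous 3-D point, and the least-squares solution is the smallest singular vector of the stacked system.

```python
import numpy as np

def triangulate(points_2d, projections):
    """Least-squares DLT triangulation of one 3-D point from >= 2 views."""
    rows = []
    for (u, v), P in zip(points_2d, projections):
        rows.append(u * P[2] - P[0])   # each view adds two linear
        rows.append(v * P[2] - P[1])   # constraints on the point X
    _, _, vt = np.linalg.svd(np.asarray(rows))
    X = vt[-1]                         # null-space direction
    return X[:3] / X[3]                # de-homogenise

# Two toy normalized cameras: one at the origin, one shifted 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
uv1 = (X_true[0] / X_true[2], X_true[1] / X_true[2])
uv2 = ((X_true[0] - 1.0) / X_true[2], X_true[1] / X_true[2])
print(triangulate([uv1, uv2], [P1, P2]))   # -> approx [0.5, 0.2, 4.0]
```

Tracking two such triangulated points per frame (e.g. shoulder and fingertip) recovers the 3-D pointing line the thesis describes.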
29

Jhang, Jia-Hao and 張家豪. "Performance evaluation of wearable controller for pointing and gesture task in 2D and 3D situation". Thesis, 2017. http://ndltd.ncl.edu.tw/handle/19201645252925024529.

Full text
Abstract
Master's thesis
Chung Yuan Christian University
Graduate Institute of Industrial and Systems Engineering
105
Motion-control technology and pointing operation are used in daily life and in virtual assembly in the automotive industry, typically in combination with a monitor screen or virtual-reality equipment. Current motion controllers mostly rely on camera sensing and handheld devices for input. In recent years, wearable devices have been developed that detect human EMG signals, allowing body gestures to be recognized for operation. The MYO armband is one of the few wearable devices that combines motion sensing with human EMG signals and also provides an interface for cursor control (click, drag). Nevertheless, the pointing approaches and applicability of such wearable devices remain unclear and need further evaluation. In this study, 18 subjects operated a MYO armband and an air mouse, with the interface projected onto a screen. Two task difficulty levels (simple, medium) and two upper-limb gesture pivots (shoulder, elbow) were used as the basis for pointing and gesture tasks, with gesture movements separated in time. Movement time, target re-entries, throughput, arm movement range, and a subjective questionnaire were analyzed to determine the advantages and disadvantages of each controller's pointing and gesture operation under different conditions, and relevant recommendations were provided.
The results for the wearable controller showed that: 1) gestures pivoting at the elbow gave better performance and comfort in both two-dimensional and three-dimensional operation; 2) in two-dimensional operation, harder tasks had higher error rates, whereas in three-dimensional operation task difficulty had no effect on error rate, indicating that simple gesture settings help users; 3) the arm movement range with the elbow as pivot was smaller in the vertical directions in two-dimensional operation and smaller in the horizontal directions in three-dimensional operation; 4) three-dimensional gestures outperformed the air mouse. The study suggests that making the wearable controller's gestures customizable, improving the nine-axis IMU technology, and adding auxiliary wearable devices would greatly improve the gesture and pointing performance of wearable EMG armbands, providing a reference for both designers and users.
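The throughput metric cited in this evaluation is usually the Fitts-law formulation (ISO 9241-9 style): index of difficulty ID = log2(D/W + 1) for movement distance D and target width W, and throughput TP = ID / MT for movement time MT. A minimal sketch, assuming that convention (the thesis may use the effective-width variant):

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(distance / width + 1)

def throughput(distance, width, movement_time):
    """Throughput in bits per second for one pointing movement."""
    return index_of_difficulty(distance, width) / movement_time

# A 300 mm reach to a 100 mm target completed in one second:
print(round(throughput(300, 100, 1.0), 2))  # -> 2.0 (bits/s)
```

Averaging this quantity over trials and conditions gives the per-controller throughput figures compared in the study.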
30

Kelly, Spencer Dougan. "Children's understanding of pragmatically ambiguous speech : have we been missing the point? /". 1999. http://gateway.proquest.com/openurl?url%5Fver=Z39.88-2004&res%5Fdat=xri:pqdiss&rft%5Fval%5Ffmt=info:ofi/fmt:kev:mtx:dissertation&rft%5Fdat=xri:pqdiss:9951806.

Full text
31

Lin, Wei Lun and 林瑋倫. "Modular systems for gesture recognition and pointing direction analysis using Kinect and the Qi software environment on service robot". Thesis, 2012. http://ndltd.ncl.edu.tw/handle/56905827599116023874.

Full text
Abstract
Master's thesis
National Chung Cheng University
Graduate Institute of Electrical Engineering
100
With the recent rapid growth of computer and robotic technology, intelligent robot systems have been applied to industrial automation, hospital automation, military applications, home service, and more. The main topic of this thesis is using user gestures to command robot behavior, and allowing those behaviors to be configured, so that the robot can genuinely help people. It is important for a service robot to provide service as soon as it detects the user. In this thesis, we use the Kinect and the OpenNI library to build a skeleton model of the user. Gestures are derived from the direction of the user's hand motion in the skeleton and used to command the service robot. These capabilities are implemented as modules in the Qi software environment, which manages and controls the system.
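The "gesture from hand-motion direction" idea described above can be sketched as a classifier over the displacement of the hand joint between two skeleton frames (a hypothetical illustration with made-up joint coordinates, not the thesis modules):

```python
def classify_swipe(start, end, threshold=0.15):
    """Classify a hand swipe by its dominant displacement axis.

    `start` and `end` are (x, y) hand-joint positions, in meters, as a
    skeleton tracker such as Kinect/OpenNI would report between two frames.
    """
    dx = end[0] - start[0]
    dy = end[1] - start[1]
    if max(abs(dx), abs(dy)) < threshold:
        return "none"                          # movement too small to count
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "up" if dy > 0 else "down"

print(classify_swipe((0.1, 0.9), (0.6, 1.0)))  # -> right
```

Each recognized direction would then be mapped to a robot command by the surrounding module, which is the kind of modular decomposition the Qi environment is used for here.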