Dissertations / Theses on the topic 'Pointing gestures'
Consult the top 31 dissertations / theses for your research on the topic 'Pointing gestures.'
Wu, Zhen. "The role of pointing gestures in facilitating word learning." Diss., University of Iowa, 2015. https://ir.uiowa.edu/etd/1805.
Råman, J. (Joonas). "Pointing gestures as embodied resources of references, requests and directives in the car." Master's thesis, University of Oulu, 2013. http://urn.fi/URN:NBN:fi:oulu-201304171193.
The study examines a car driver's pointing gestures as embodied resources for accomplishing three social actions: referring, requesting, and directing. It also examines how gestures are modified so that they still convey the action to the recipient when the environment becomes more challenging, for instance through mobility or an increased need to produce several parallel courses of action. The study focuses in particular on the noticeability and duration of the gesture, the domain of scrutiny it creates, and the position of its apex relative to verbal resources. The data consist of three corpora of naturally occurring conversation: the Habitable Cars corpus contains mostly native speakers of English, the Talk&Drive corpus speakers of several different languages, and the Kokkola corpus exclusively Finnish speakers. Following the principles of conversation analysis, transcripts of the interactions were produced and served as the basis of the analysis. The main finding is that the driver primarily modifies the noticeability of the gesture to distinguish embodied references, requests, and directives from one another. The pointing gesture of an embodied reference is the shortest in duration and the least noticeable, whereas the gesture used for a directive is the longest and most noticeable; gestures used for requests fall between the two in both duration and noticeability. The urgency of the situation and parallel courses of action can increase or decrease the noticeability of a gesture, often blurring the boundaries between the three social actions examined. Nevertheless, the gestures of the three action categories remain distinguishable from one another, which justifies the original categorization.
This study continues earlier research on the noticeability of gestures and its effect on locating and prioritizing a target. Here, however, the notion of noticeability is moved from the domain of so-called professional vision closer to everyday life. In addition, the notion of noticeability is deepened by examining its relationship and interaction with verbal resources, gesture duration, and the domain of scrutiny.
Cochet, Hélène. "Hand shape, function and hand preference of communicative gestures in young children : insights into the origins of human communication." Thesis, Aix-Marseille 1, 2011. http://www.theses.fr/2011AIX10076/document.
Even though children's early use of communicative gestures is recognized as being closely related to language development (e.g., Colonnesi et al., 2010), the nature of speech-gesture links still needs to be clarified. This dissertation investigates the production of pointing gestures during development to determine whether the predictive and facilitative relationship between gestures and language acquisition involves specific functions of pointing, in association with specific features of hand shape, gaze, and accompanying vocalizations. Moreover, special attention was paid to the study of hand preferences in order to better understand the development of left-hemisphere specialization for communicative behaviors. Our results revealed complex relationships between language, communicative gestures, and manipulative activities, depending on the function of gestures (i.e., imperative versus declarative pointing) as well as on specific stages of language acquisition. Declarative gestures were found to be more closely associated with speech development than imperative gestures, at least before the lexical spurt period. In addition, the comparison of hand-preference patterns in adults and infants showed stronger similarity for gestures than for object manipulation. The right-sided asymmetry for communicative gestures is thus established at early stages, which suggests a primary role of gestures in hemispheric specialization. Finally, our findings highlight the existence of a left-lateralized communication system controlling both gestural and vocal communication, which has been suggested to have a deep phylogenetic origin (e.g., Corballis, 2010). The present work may therefore improve current understanding of the evolutionary roots of language, including the mechanisms of cerebral specialization for communicative behaviors.
Roustan, Benjamin. "Etude de la coordination gestes manuels/parole dans le cadre de la désignation." Phd thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00759199.
Madsen, Elainie Alenkær. "Attention following and nonverbal referential communication in bonobos (Pan paniscus), chimpanzees (Pan troglodytes) and orangutans (Pongo pygmaeus)." Thesis, University of St Andrews, 2011. http://hdl.handle.net/10023/1893.
Full textBen, Chikha Houssem. "Impact des gestes de pointage et du regard de l’entraîneur sur la mémorisation d'une scène tactique de basket-ball : Études oculométriques." Electronic Thesis or Diss., Valenciennes, Université Polytechnique Hauts-de-France, 2023. https://ged.uphf.fr/nuxeo/site/esupversions/62b8d414-a45c-4a04-8f10-da43ab0dc578.
Pointing gestures and guided gaze are commonly used as bodily cues to enhance visual attention and comprehension in various academic domains. However, their specific effectiveness in the sports context, particularly in teaching basketball tactical patterns, remains relatively unexplored. The central objective of this thesis was therefore to examine the impact of the coach's pointing gestures and/or guided gaze on visual attention and memorization of tactical scenes. The key findings revealed significant interactions between the use of these cues and the players' level of expertise, demonstrating an expertise reversal effect: in most experiments, pedagogical methods that were effective for novice players proved ineffective or even detrimental for expert players. These results emphasize the importance of adapting the use of pointing gestures and guided gaze to players' expertise level when presenting training materials for basketball game plans and/or phases.
Grover, Lesley Ann. "Comprehension of the manual pointing gesture in human infants : a developmental study of the cognitive and social-cognitive processes involved in the comprehension of the gesture." Thesis, University of Southampton, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.329150.
Racine, Timothy Philip. "The role of shared practice in the origins of joint attention and pointing /." Burnaby B.C. : Simon Fraser University, 2005. http://ir.lib.sfu.ca/handle/1892/2056.
Full textHatzopoulou, Marianna. "Acquisition of reference to self and others in Greek Sign Language : From pointing gesture to pronominal pointing signs." Doctoral thesis, Stockholm : Sign Language Section, Department of Linguistics, Stockholm University, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-8293.
Nugent, Susie P. "Infant cross-fostered chimpanzees develop indexical pointing." abstract and full text PDF (free order & download UNR users only), 2006. http://0-gateway.proquest.com.innopac.library.unr.edu/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:1433288.
Full textWinnerhall, Louise. "The effect of breed selection on interpreting human directed cues in the domestic dog." Thesis, Linköpings universitet, Biologi, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-108847.
Full textEsteve, Gibert Núria. "The integration of prosody and gesture in early intentional communication." Doctoral thesis, Universitat Pompeu Fabra, 2014. http://hdl.handle.net/10803/284226.
This dissertation comprises four experimental studies investigating how infants integrate prosody and gesture for communicative purposes. Adults integrate prosody and gesture temporally and pragmatically and, together with socio-contextual information, use them to convey and understand intentional meanings. This thesis investigates whether infants use prosody and gesture in an integrated way for communicative purposes before they are able to use lexico-semantic elements. It comprises four studies, each in a separate section. The first study analyses infants' gesture-speech combinations longitudinally in spontaneous interactions and shows that from 12 to 15 months of age infants temporally align prosodic and gestural prominence. The second study uses the habituation/test method to examine infants' early ability to perceive the temporal integration of prosody and gesture, and shows that at 9 months infants can already perceive the temporal alignment between gestural and prosodic prominence. The third study also analyses infants' productions longitudinally in spontaneous interactions and shows that, before producing their first words, infants already use prosodic cues such as pitch range and duration, in addition to gesture, to convey speech acts such as requests, responses, statements, and expressions of satisfaction or discontent. Finally, the fourth study investigates infants' reactions to various combinations of speech and pointing gestures, showing that 12-month-olds use the prosodic and gestural cues of the speech stream to understand the communicative intentions addressed to them.
Taken together, this thesis shows that the temporal integration of prosody and gesture occurs at the earliest stages of linguistic and cognitive development, and that the pragmatic uses of prosody and gesture emerge before infants master the use of lexical items. Prosody is thus the first grammatical component of language that infants use for communicative purposes, which indicates that linguistic communication emerges before infants are able to produce lexical items with semantic content. The general conclusion is that infants' temporal and pragmatic integration of prosody and gesture indicates the early development of linguistic pragmatics.
Scheider, Linda [Verfasser]. "The command hypothesis versus the information hypothesis : how do domestic dogs (Canis familiaris) comprehend the human pointing gesture? / Linda Scheider." Berlin : Freie Universität Berlin, 2011. http://d-nb.info/1025939069/34.
Full textLiu, Xiaoxing. "What role does effort play: the effect of effort for gesture interfaces and the effect of pointing on spatial working memory." Diss., University of Iowa, 2016. https://ir.uiowa.edu/etd/2110.
Full textKato, Carolyn K. "A comparison between pre-verbal "you-me" pointing and the acquisition of verbal pronouns : does gestural knowledge facilitate the acquisition of verbal pronouns?" Thesis, McGill University, 1988. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=61834.
Full textMelo, Silvia Beatriz Fonseca de. "O gato dom?stico (Felis catus) responde ? sinais gestuais? poss?veis implica??es do conv?vio social." Universidade Federal do Rio Grande do Norte, 2008. http://repositorio.ufrn.br:8080/jspui/handle/123456789/17272.
Cats (Felis catus) were domesticated about 9,500 years ago with the advent of agriculture, being used to control the pests that devastated harvested food. These animals went through artificial selection, and over generations and millennia their behavior and morphology were changed by humans. This process of domestication gave rise to a special ability, the understanding of human pointing gestures, which is clearly noticeable when we feed our pets. Our goal in this study was to assess the comprehension of pointing gestures by cats and to verify the influence that social interaction exerts on the development of this ability. We found that experimental subjects from both groups, solitary animals and social animals, were able to follow human indication in order to find hidden food. However, social interaction had no effect on the cats' performance. The ability tested here probably evolved during the domestication of this species, and social interaction seems to exert little or no influence upon its expression.
Delamare, William. "Interaction à distance en environnement physique augmenté." Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAM032/document.
We explore interaction with augmented physical objects within physical environments. Augmented physical objects allow new ways of interacting, including interaction at a distance. However, the physical world has specificities that make it difficult to adapt interaction techniques that already exist in virtual environments. These specificities need to be identified in order to design efficient and enjoyable interaction techniques dedicated to augmented physical environments. In our work, we split distant interaction into two complementary stages: the selection and the control of augmented physical objects. For each stage, our contribution is two-fold, theoretical, with the establishment of design spaces, and practical, with the design, implementation, and experimental evaluation of interaction techniques. For the selection stage, we study the disambiguation potentially needed after a distal pointing gesture that uses a volume selection, such as an infrared remote controller. Although volume selection can facilitate the aiming action, several objects can fall into the selected volume, so users must disambiguate this coarse pointing selection. We define and use a design space to design and experimentally evaluate two disambiguation techniques that maintain the user's focus on the physical objects. For the control stage, we study the guidance of 3D hand gestures used to trigger commands at a distance. Such guidance is essential to reveal the available commands and their associated gestures. We define a design space capturing the specificities of a wide range of guiding systems, and provide an online tool that eases the use of such a large design space. We then explore the impact of several design options on the quality of 3D gesture guidance.
Rashdan, Khalid. "Entre "Ça" et "Comme ça", différences entre la deixis ad oculos et la deixis am phantasma au niveau gestuel, intonatif et syntaxique : étude chez des enfants entre 4 et 7 ans." Thesis, Paris 5, 2014. http://www.theses.fr/2014PA05H031/document.
In this thesis, I treat the question of deixis on three levels of language: syntactic, phonetic, and gestural. The main idea is that deixis is not limited to pointing out a present object and can extend to more abstract functions. The thesis has three basic parts. The first contains a definitional chapter, and in its second chapter I present the three basic processes of deixis: categorization, nomination, and memorization. In the second part, I study the cognitive process of deixis from a dynamic point of view and use theories and concepts to clarify its methodology. I explain the dynamics of deixis ad oculos and deixis am phantasma, showing how this dynamic is completely dependent on the personal representations of the subject. The concepts of sharing and egocentring give us criteria for deciding whether a given subject uses communicative gesture well to clarify his message to others. These concepts can be of great interest for research on actors' behavior and for detection studies: with deeper research we can create lists of criteria and characteristics that help distinguish appropriate from inappropriate behavior, and likewise normal from abnormal behavior, and apply these criteria and indicators to children with disabilities, such as dyslexia and autism, to help them express themselves in the future.
YASUI, Eiko, and 永子 安井. "語りの開始にともなう他者への指さし : 多人数会話における指さしのマルチモーダル分析 [Pointing at others at the start of storytelling: a multimodal analysis of pointing in multi-party conversation]." 名古屋大学文学部 (Nagoya University, School of Letters), 2014. http://hdl.handle.net/2237/19747.
Andersson, Elin. "Dogs' understanding of human pointing gestures." Thesis, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-109038.
"Phonetic Realization of Narrow Focus in Hong Kong Cantonese and its Temporal Relationship with Pointing Gestures." 2016. http://repository.lib.cuhk.edu.hk/en/item/cuhk-1292307.
Ten native speakers of Hong Kong Cantonese took part in a production experiment in which they identified pictures and answered questions, confirming or correcting the names of the objects shown; when correcting, they simultaneously pointed at the relevant picture with their dominant hand. The results show that in Hong Kong Cantonese the focused portion of a narrow-focus sentence is lengthened, while its pitch and pitch range generally show no significant change, and the post-focus portion shows no significant change in duration, pitch, or pitch range. As for gesture, the apex of most participants' pointing gestures was synchronised with the focused word; when the focused word was disyllabic, the apex was mostly synchronised with the first syllable, while a minority of participants showed other alignment patterns. In sum, although the prominent units of prosody and co-speech gesture are synchronised in Hong Kong Cantonese, the alignment pattern differs clearly from that of non-tonal stress languages, showing the influence of a language's prosodic properties on co-speech gesture production.
A growing number of empirical studies show that speech prosody and co-speech gestures unfold with a regular temporal relationship. More precisely, the prominent units of both channels are said to be "synchronised", or closely aligned with one another. Gestural prominence is commonly measured either by the stroke, i.e., the most meaningful phase of a gesture, or by the apex, i.e., the peak of the stroke. On the prosody side of the alignment, several speech units and prosodic landmarks have been suggested to attract gestural prominence, including the focused/stressed word, the stressed syllable of that word, and, even more precisely, the F0 peak of that syllable. However, these results are based only on studies of stress languages, and no study has yet investigated the temporal relationship between prosody and gesture in non-stress tone languages such as Hong Kong Cantonese. Previous studies on the prosodic realisation of Hong Kong Cantonese reported mixed results as to what the acoustic correlates of focus are and whether changes in them take place locally in the on-focus element or extend to the post-focus domain as well. Two main research questions were therefore raised in this study: (1) how narrow focus is realised prosodically in Hong Kong Cantonese, and (2) whether and how it coordinates temporally with co-speech pointing gestures.
To address the two questions, 10 native speakers of Hong Kong Cantonese participated in a picture-naming task, in which each of them was presented with pictures of two objects at a time and asked to verify them. Pointing was elicited along with verbal corrections. Acoustic results show that narrow (contrastive) focus is marked solely by on-focus durational increase. Gestural results reveal alignment between prosodic and gestural prominences, as most of the gestural apices were produced within the focused words. However, in contrast with previous findings, no significant effect of F0 (tone) or focus position was found. Rather, most speakers consistently aligned their apices with syllables of the same position. Based on the current findings, the prosodic anchor of prosody-gesture alignment in Hong Kong Cantonese is suggested to be the focused word.
Fung, Sze Ho.
Thesis M.Phil. Chinese University of Hong Kong 2016.
Includes bibliographical references (leaves ).
Abstracts also in Chinese.
Title from PDF title page (viewed on …).
Detailed summary in vernacular field only.
Jorge, Clinton Luis. "Remote presence: supporting deictic gestures through a handheld multi-touch device." Master's thesis, 2011. http://hdl.handle.net/10400.13/474.
Universidade da Madeira - Madeira Interactive Technologies Institute
Das, Shome Subhra. "Techniques for estimating the direction of pointing gestures using depth images in the presence of orientation and distance variations from the depth sensor." Thesis, 2021. https://etd.iisc.ac.in/handle/2005/5604.
Full textYang, Feng-Ming, and 楊豐名. "Vision-based Remote Pointing with Finger Gesture for Interactive English Learning." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/28743408623740691136.
Full text國立東華大學
資訊工程學系
102
In this thesis, we propose a human-computer interaction system in which the user wears a head-mounted camera that captures live images from the user's viewpoint. We track the projection-screen area in front of the user in the captured image sequence. Once the user points at the target area with a finger, the system extracts the hand region through background subtraction between the screenshot and the perspective-transformed projection area. By determining the positions and number of fingertips, users can manipulate on-screen content with intuitive finger gestures such as selecting, dragging, and dropping. Because the system is suitable for real-time group cooperation and interaction on a projection screen, we propose multi-user interactive English-learning activities based on unscrambling word order. After we implement multi-camera image processing for the system in the future, we will evaluate the system's interactivity, the participants' motivation, and its learning effectiveness relative to other remote pointing interfaces.
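The select/drag/drop interaction described in this abstract can be sketched as a small mapping from successive fingertip detections to commands. This is an illustrative sketch only; the function name, event encoding, and state transitions are assumptions, not the thesis's implementation.

```python
# Hypothetical sketch: mapping detected fingertip counts in successive
# frames to the select/drag/drop commands described in the abstract.

def interpret_gesture(prev_count, count, position):
    """Classify a pointing event from successive fingertip counts.

    prev_count / count: number of extended fingertips in the previous
    and current frame; position: (x, y) of the primary fingertip in
    screen coordinates.
    """
    if prev_count == 0 and count == 1:
        return ("select", position)   # a single fingertip appears: select
    if prev_count == 1 and count == 1:
        return ("drag", position)     # fingertip persists: drag content
    if prev_count == 1 and count == 0:
        return ("drop", position)     # fingertip withdrawn: drop
    return ("idle", position)

# A fingertip appears, persists for one frame, then is withdrawn:
events = [(0, 1), (1, 1), (1, 0)]
states = [interpret_gesture(p, c, (100, 200)) for p, c in events]
# states yields "select", then "drag", then "drop"
```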
Fisher, Tamara L. "Declarative pointing : the capacity to share experience with others in infants with and without down syndrome /." 2006. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&res_dat=xri:pqdiss&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&rft_dat=xri:pqdiss:MR19674.
Typescript. Includes bibliographical references (leaves 48-55). Also available on the Internet.
Ko, Chih-han, and 柯志函. "Design and Development of Hand-free and 3-D Pointing Gesture Recognition System." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/92207115965771180412.
Full text國立臺灣科技大學
工業管理系
97
Pointing is a very intuitive form of human communication. The pointing orientation has full spatial compatibility, and the meaning of a gesture lies in its pointing trajectory. The objective of this study is to design and develop a free-hand, 3-D, real-time pointing gesture recognition system using two cameras. When pointing at an object, the eye, the finger, and the object should be collinear. Based on this principle, sub-systems for 3-D camera calibration, hand/head area detection, and hand/head tracking were developed to continuously track the 3-D positions of the hand and head. The 3-D pointing positions were reconstructed on a large screen at a refresh rate of 20 Hz, and the velocities and accelerations of the pointing trajectories were then calculated. To verify the advantages of the system, we built a remotely controlled music player with free-hand commands for play, pause, volume control, previous track, and next track. Experimental results showed that the average time spent per gesture command is less than 4 seconds, and the recognition rate across all commands is above 90%.
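Once the eye and fingertip have been reconstructed in 3-D, the eye-finger-object collinearity principle in this abstract reduces to intersecting the eye-to-fingertip ray with the screen plane. A minimal pure-Python sketch, with illustrative coordinates and names (not the thesis's code):

```python
# Collinearity principle: eye E and fingertip F define a pointing ray;
# the pointed-at location is the ray's intersection with the screen plane.

def point_on_screen(eye, finger, plane_point, plane_normal):
    """Intersect the ray eye->finger with the plane (plane_point, plane_normal)."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    d = [f - e for e, f in zip(eye, finger)]          # ray direction
    denom = dot(d, plane_normal)
    if abs(denom) < 1e-9:
        return None                                    # ray parallel to screen
    w = [p - e for e, p in zip(eye, plane_point)]
    t = dot(w, plane_normal) / denom
    if t < 0:
        return None                                    # screen behind the user
    return tuple(e + t * di for e, di in zip(eye, d))  # intersection point

# Eye at the origin, fingertip 40 cm ahead and slightly to the right;
# screen is the plane z = 2 m with normal facing the user:
hit = point_on_screen((0, 0, 0), (0.1, 0.0, 0.4), (0, 0, 2.0), (0, 0, -1))
# hit is approximately (0.5, 0.0, 2.0)
```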
Ching-Yu, Chien. "Vision-based Real-time Pointing Arm Gesture Tracking and Recognition System using Multiple Cameras." 2006. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0016-0109200613415356.
Full textChien, Ching-Yu, and 簡敬宇. "Vision-based Real-time Pointing Arm Gesture Tracking and Recognition System using Multiple Cameras." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/84402742103207006919.
Full text國立清華大學
電機工程學系
94
In this thesis, we develop a real-time arm-pointing system. The main contribution is the use of three cameras to track the pointing arm and identify several pointing targets in 3-D space. The system allows the user to point with the arm while walking in the workspace. The novelty of our method is directly tracking two 3-D points representing the pointing line in 3-D space and then refining the tracking results. We take advantage of the Direct Linear Transformation (DLT) to extend the samples of a particle filter to 3-D space. In our system, the pointing targets need not be visible in any of the three views. In the experiments, we show that the system analyzes each video frame in about 1/6 second. The pointing accuracy was measured over 80 pointing trials toward eight designated 3-D targets by five users, with a success rate above 90%.
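The DLT step mentioned in this abstract can be illustrated by the standard linear triangulation it builds on: each camera's 3x4 projection matrix contributes two linear constraints per observed pixel, and the homogeneous 3-D point is recovered from the null space via SVD. A toy sketch, with made-up camera matrices and point (not the thesis's setup):

```python
import numpy as np

# Linear (DLT-style) triangulation: camera i with 3x4 projection matrix
# P_i observing pixel (u_i, v_i) contributes two linear constraints on
# the homogeneous 3-D point X; the least-squares solution is the right
# singular vector for the smallest singular value.

def triangulate(projections, pixels):
    rows = []
    for P, (u, v) in zip(projections, pixels):
        rows.append(u * P[2] - P[0])   # u * (p3 . X) = p1 . X
        rows.append(v * P[2] - P[1])   # v * (p3 . X) = p2 . X
    A = np.vstack(rows)
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                # dehomogenise

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[0] / x[2], x[1] / x[2]

# Two toy cameras: identity pose and a camera shifted 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
X_est = triangulate([P1, P2], [project(P1, X_true), project(P2, X_true)])
# X_est recovers X_true up to numerical precision
```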
Jhang, Jia-Hao, and 張家豪. "Performance evaluation of wearable controller for pointing and gesture task in 2D and 3D situation." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/19201645252925024529.
Full text中原大學
工業與系統工程研究所
105
Motion-control technology and directional operation have been applied in daily life and in virtual assembly in the automotive industry, typically in combination with a monitor screen or virtual-reality equipment. Current motion controllers mostly rely on camera sensing and handheld devices for input. In recent years, wearable devices have been developed that detect human EMG signals, allowing body gestures to be recognized for operation. The MYO armband is one of the few wearable devices that combines motion-control technology with EMG sensing and includes an interface for cursor control (click, drag). Nevertheless, the pointing approaches and overall applicability of such wearable devices remain unclear and need further evaluation. In this study, 18 subjects operated a MYO armband and an air mouse, with the operation interface projected through a projector. Two task difficulty levels (simple, medium) and two upper-limb gesture pivots (shoulder, elbow) defined the pointing and gesture tasks, with gesture movements separated in time. Movement time, target re-entries, throughput, arm movement range, and a subjective questionnaire were analyzed to determine the advantages and disadvantages of pointing and gesture operation with the various controllers under different conditions, and relevant recommendations were provided.
Based on the results, the wearable controllers showed that: 1) gestures pivoting at the elbow yield better performance and comfort in both two-dimensional and three-dimensional operation; 2) in two-dimensional operation, harder tasks have higher error rates, whereas in three-dimensional operation difficulty has no influence on error rate, indicating that simple gesture settings help users operate; 3) the arm movement range with the elbow as pivot is smaller in the up and down directions in two-dimensional operation, and smaller in the left and right directions in three-dimensional operation; 4) three-dimensional gestures outperform air-mouse operation. This study suggests that allowing the wearable controllers to change gestures, improving the nine-axis IMU technology, and using additional wearable devices for assistance would greatly help the gesture and pointing performance of wearable EMG armbands, providing a reference for both designers and users.
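The throughput measure listed among the dependent variables above is conventionally computed from Fitts' law as index of difficulty divided by movement time. A hedged sketch of that computation, assuming the Shannon formulation; the thesis's exact protocol (e.g. any effective-width correction) may differ:

```python
import math

# Fitts' law throughput: index of difficulty (bits) over movement time (s).

def index_of_difficulty(distance, width):
    """Shannon formulation: ID = log2(D / W + 1), in bits."""
    return math.log2(distance / width + 1)

def throughput(distance, width, movement_time):
    """Throughput in bits per second."""
    return index_of_difficulty(distance, width) / movement_time

# A 480-px reach to a 32-px target completed in 1.2 s:
tp = throughput(480, 32, 1.2)   # ID = log2(16) = 4 bits, so 4 / 1.2 bits/s
```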
Kelly, Spencer Dougan. "Children's understanding of pragmatically ambiguous speech : have we been missing the point? /." 1999. http://gateway.proquest.com/openurl?url%5Fver=Z39.88-2004&res%5Fdat=xri:pqdiss&rft%5Fval%5Ffmt=info:ofi/fmt:kev:mtx:dissertation&rft%5Fdat=xri:pqdiss:9951806.
Full textLin, Wei Lun, and 林瑋倫. "Modular systems for gesture recognition and pointing direction analysis using Kinect and the Qi software environment on service robot." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/56905827599116023874.
Full text國立中正大學
電機工程研究所
100
With the recent rapid growth of computer and robotic technology, intelligent robot systems have been applied to industrial automation, hospital automation, military applications, home service, and so on. The main focus of this thesis is enabling user gestures to command robot behaviors, which can also be used to configure those behaviors, so that the robot can genuinely help people. It is very important for a service robot to provide service once it detects the user. In this thesis, we use the Kinect and the OpenNI library to build the user's skeleton model. We recognize gestures from the direction of the user's hand motion derived from the skeleton, and use them to command the service robot. We build modules in the Qi software environment to manage and control the system.
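The idea of reading a command off the direction of hand motion in the skeleton can be sketched as classifying the dominant displacement axis between two hand-joint positions. Joint coordinates, the threshold, and command names below are illustrative assumptions, not the thesis's implementation:

```python
# Minimal sketch: derive a gesture command from the dominant direction
# of hand motion between two skeleton frames (x right, y up, metres).

def hand_motion_command(start, end, threshold=0.15):
    """Classify hand displacement into a directional command."""
    dx = end[0] - start[0]
    dy = end[1] - start[1]
    if max(abs(dx), abs(dy)) < threshold:
        return "none"                   # movement too small to count
    if abs(dx) >= abs(dy):
        return "swipe_right" if dx > 0 else "swipe_left"
    return "swipe_up" if dy > 0 else "swipe_down"

# Hand moves 0.3 m to the right between frames:
cmd = hand_motion_command((0.0, 1.0), (0.3, 1.05))
# cmd == "swipe_right"
```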