Journal articles on the topic 'Gesture'

Consult the top 50 journal articles for your research on the topic 'Gesture.'

1

Pika, Simone, Elena Nicoladis, and Paula F. Marentette. "A cross-cultural study on the use of gestures: Evidence for cross-linguistic transfer?" Bilingualism: Language and Cognition 9, no. 3 (October 20, 2006): 319–27. http://dx.doi.org/10.1017/s1366728906002665.

Abstract:
Anecdotal reports provide evidence of so-called “hybrid” gesturers, whose non-verbal behavior from one language/culture becomes visible in the other. The direction of this gestural transfer seems to be from a high-frequency to a low-frequency gesture language. The purpose of this study was therefore to test systematically 1) whether gestural transfer occurs from a high-frequency gesture language to a low-frequency gesture language, 2) whether the production frequency of some gesture types is more likely to be transferred than that of others, and 3) whether gestural transfer can also occur bi-directionally. To address these questions, we investigated the use of gestures by English–Spanish bilinguals, French–English bilinguals, and English monolinguals while retelling a cartoon. Our analysis focused on the rate of gestures and the frequency of production of gesture types. There was a significant difference in the overall rate of gestures: both bilingual groups gestured more than monolingual participants. This difference was particularly salient for iconic gestures. In addition, we found that French–English bilinguals used more deictic gestures in their L2. The results suggest that knowledge of a high-frequency gesture language affects the gesture rate in a low-frequency gesture language.
2

Braddock, Barbara A., Christina Gabany, Meera Shah, Eric S. Armbrecht, and Kimberly A. Twyman. "Patterns of Gesture Use in Adolescents With Autism Spectrum Disorder." American Journal of Speech-Language Pathology 25, no. 3 (August 2016): 408–15. http://dx.doi.org/10.1044/2015_ajslp-14-0112.

Abstract:
Purpose The purpose of this study was to examine patterns of spontaneous gesture use in a sample of adolescents with autism spectrum disorder (ASD). Method Thirty-five adolescents with ASD ages 11 to 16 years participated (mean age = 13.51 years; 29 boys, 6 girls). Participants' spontaneous speech and gestures produced during a narrative task were later coded from videotape. Parents were also asked to complete questionnaires to quantify adolescents' general communication ability and autism severity. Results No significant subgroup differences in general communication ability or autism severity were apparent between adolescents who did not gesture and those who produced at least 1 gesture. Subanalyses including only adolescents who produced gesture indicated a statistically significant negative association between gesture rate and general communication ability, specifically speech and syntax subscale scores. Adolescents who gestured produced higher proportions of iconic gestures and used gesture mostly to add information to speech. Conclusions The findings relate spontaneous gesture use to underlying strengths and weaknesses in adolescents' speech and syntactical language development. More research examining cospeech gesture in fluent speakers with ASD is needed.
3

Sekine, Kazuki, and Miranda L. Rose. "The Relationship of Aphasia Type and Gesture Production in People With Aphasia." American Journal of Speech-Language Pathology 22, no. 4 (November 2013): 662–72. http://dx.doi.org/10.1044/1058-0360(2013/12-0030).

Abstract:
Purpose For many individuals with aphasia, gestures form a vital component of message transfer and are the target of speech-language pathology intervention. What remains unclear are the participant variables that predict successful outcomes from gesture treatments. The authors examined the gesture production of a large number of individuals with aphasia—in a consistent discourse sampling condition and with a detailed gesture coding system—to determine patterns of gesture production associated with specific types of aphasia. Method The authors analyzed story retell samples from AphasiaBank (TalkBank, n.d.), gathered from 98 individuals with aphasia resulting from stroke and 64 typical controls. Twelve gesture types were coded. Descriptive statistics were used to describe the patterns of gesture production. Possible significant differences in production patterns according to aphasia type were examined using a series of chi-square, Fisher exact, and logistic regression statistics. Results A significantly higher proportion of individuals with aphasia gestured as compared to typical controls, and for many individuals with aphasia, this gesture was iconic and capable of carrying communicative load. Aphasia type impacted significantly on gesture type in specific identified patterns, detailed here. Conclusion These type-specific patterns suggest the opportunity for gestures as targets of aphasia therapy.
4

Kong, Anthony Pak-Hin, Sam-Po Law, and Gigi Wan-Chi Chak. "A Comparison of Coverbal Gesture Use in Oral Discourse Among Speakers With Fluent and Nonfluent Aphasia." Journal of Speech, Language, and Hearing Research 60, no. 7 (July 12, 2017): 2031–46. http://dx.doi.org/10.1044/2017_jslhr-l-16-0093.

Abstract:
Purpose Coverbal gesture use, which is affected by the presence and degree of aphasia, can be culturally specific. The purpose of this study was to compare gesture use among Cantonese-speaking individuals: 23 neurologically healthy speakers, 23 speakers with fluent aphasia, and 21 speakers with nonfluent aphasia. Method Multimedia data of discourse samples from these speakers were extracted from the Cantonese AphasiaBank. Gestures were independently annotated on their forms and functions to determine how gesturing rate and distribution of gestures differed across speaker groups. A multiple regression was conducted to determine the most predictive variable(s) for gesture-to-word ratio. Results Although speakers with nonfluent aphasia gestured most frequently, the rate of gesture use in counterparts with fluent aphasia did not differ significantly from controls. Different patterns of gesture functions in the 3 speaker groups revealed that gesture plays a minor role in lexical retrieval whereas its role in enhancing communication dominates among the speakers with aphasia. The percentages of complete sentences and dysfluency strongly predicted the gesturing rate in aphasia. Conclusions The current results supported the sketch model of language–gesture association. The relationship between gesture production and linguistic abilities and clinical implications for gesture-based language intervention for speakers with aphasia are also discussed.
5

Parrill, Fey, Brittany Lavanty, Austin Bennett, Alayna Klco, and Ozlem Ece Demir-Lira. "The relationship between character viewpoint gesture and narrative structure in children." Language and Cognition 10, no. 3 (July 12, 2018): 408–34. http://dx.doi.org/10.1017/langcog.2018.9.

Abstract:
When children tell stories, they gesture; their gestures can predict how their narrative abilities will progress. Five-year-olds who gestured from the point of view of a character (CVPT gesture) when telling stories produced better-structured narratives at later ages (Demir, Levine, & Goldin-Meadow, 2014). But does gesture just predict narrative structure, or can asking children to gesture in a particular way change their narratives? To explore this question, we instructed children to produce CVPT gestures and measured their narrative structure. Forty-four kindergarteners were asked to tell stories after being trained to produce CVPT gestures, gestures from an observer’s viewpoint (OVPT gestures), or after no instruction in gesture. Gestures were coded as CVPT or OVPT, and stories were scored for narrative structure. Children trained to produce CVPT gestures produced more of these gestures, and also had higher narrative structure scores compared to those who received the OVPT training. Children returned for a follow-up session one week later and narrated the stories again. The training received in the first session did not impact narrative structure or recall for the events of the stories. Overall, these results suggest a brief gestural intervention has the potential to enhance narrative structure. Because stronger narrative abilities have been correlated with greater success in developing writing and reading skills at later ages, this research has important implications for literacy and education.
6

Cooperrider, Kensy. "Foreground gesture, background gesture." Gesture 16, no. 2 (December 31, 2017): 176–202. http://dx.doi.org/10.1075/gest.16.2.02coo.

Abstract:
Do speakers intend their gestures to communicate? Central as this question is to the study of gesture, researchers cannot seem to agree on the answer. According to one common framing, gestures are an “unwitting” window into the mind (McNeill, 1992); but, according to another common framing, they are designed along with speech to form “composite utterances” (Enfield, 2009). These two framings correspond to two cultures within gesture studies – the first cognitive and the second interactive in orientation – and they appear to make incompatible claims. In this article I attempt to bridge the cultures by developing a distinction between foreground gestures and background gestures. Foreground gestures are designed in their particulars to communicate a critical part of the speaker’s message; background gestures are not designed in this way. These are two fundamentally different kinds of gesture, not two different ways of framing the same monolithic behavior. Foreground gestures can often be identified by one or more of the following hallmarks: they are produced along with demonstratives; they are produced in the absence of speech; they are co-organized with speaker gaze; and they are produced with conspicuous effort. The distinction between foreground and background gestures helps dissolve the apparent tension between the two cultures: interactional researchers have focused on foreground gestures and elevated them to the status of a prototype, whereas cognitive researchers have done the same with background gestures. The distinction also generates a number of testable predictions about gesture production and understanding, and it opens up new lines of inquiry into gesture across child development and across cultures.
7

Casey, Shannon, Karen Emmorey, and Heather Larrabee. "The effects of learning American Sign Language on co-speech gesture." Bilingualism: Language and Cognition 15, no. 4 (January 3, 2012): 677–86. http://dx.doi.org/10.1017/s1366728911000575.

Abstract:
Given that the linguistic articulators for sign language are also used to produce co-speech gesture, we examined whether one year of academic instruction in American Sign Language (ASL) impacts the rate and nature of gestures produced when speaking English. A survey study revealed that 75% of ASL learners (N = 95), but only 14% of Romance language learners (N = 203), felt that they gestured more after one year of language instruction. A longitudinal study confirmed this perception. Twenty-one ASL learners and 20 Romance language learners (French, Italian, Spanish) were filmed re-telling a cartoon story before and after one academic year of language instruction. Only the ASL learners exhibited an increase in gesture rate, an increase in the production of iconic gestures, and an increase in the number of handshape types exploited in co-speech gesture. Five ASL students also produced at least one ASL sign when re-telling the cartoon. We suggest that learning ASL may (i) lower the neural threshold for co-speech gesture production, (ii) pose a unique challenge for language control, and (iii) have the potential to improve cognitive processes that are linked to gesture.
8

Foran, Lori, and Brenda Beverly. "Points to Ponder: Gesture and Language in Math Talk." Perspectives on Language Learning and Education 22, no. 2 (March 2015): 72–81. http://dx.doi.org/10.1044/lle22.2.71.

Abstract:
With the introduction of the Common Core State Standards, mathematical learning and problem solving in the academic environment are more linguistically demanding. Speech-language pathologists (SLPs) can support students with language impairment and teachers charged with new curricular demands. The role of gestural communication as a support for children's math learning and as an instructional strategy during math education is reviewed. Findings are presented from a recent pilot study on the gesture and language production of 3-, 4- and 5-year-old children as they solve early arithmetic and fraction problems. Children spontaneously produced deictic and representational gestures that most often matched their spoken solutions. A few children exhibited gesture-speech mismatches in which the gesture contained semantic content not contained in the speech alone. This can suggest some underlying knowledge that would not be apparent without the gesture. Furthermore, the investigator introduced gestured prompts with some preschool participants using spontaneous gestures previously observed in successful peers. Gesture's role in early mathematics preceding kindergarten and specific gesturing strategies effective in the academic environment continue to be explored.
9

Jasim, Mahmood, Tao Zhang, and Md Hasanuzzaman. "A Real-Time Computer Vision-Based Static and Dynamic Hand Gesture Recognition System." International Journal of Image and Graphics 14, no. 01n02 (January 2014): 1450006. http://dx.doi.org/10.1142/s0219467814500065.

Abstract:
This paper presents a novel method for computer vision-based static and dynamic hand gesture recognition. A Haar-like feature-based cascaded classifier is used for hand area segmentation. Static hand gestures are recognized using linear discriminant analysis (LDA) and local binary pattern (LBP)-based feature extraction methods, and classified using the nearest neighbor (NN) algorithm. Dynamic hand gestures are recognized using the novel text-based principal directional features (PDFs), which are generated from the segmented image sequences. The longest common subsequence (LCS) algorithm is used to classify the dynamic gestures. For testing, a Chinese numeral gesture dataset containing static hand poses and a directional gesture dataset containing complex dynamic gestures were prepared. The mean accuracy of LDA-based static hand gesture recognition on the Chinese numeral gesture dataset is 92.42%. The mean accuracy of LBP-based static hand gesture recognition on the Chinese numeral gesture dataset is 87.23%. The mean accuracy of the novel dynamic hand gesture recognition method using PDFs on the directional gesture dataset is 94%.
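
The LCS matching stage is easy to illustrate. Below is a minimal Python sketch of classifying a direction-coded gesture trajectory by a length-normalized longest-common-subsequence score; the 8-direction chain codes and the template strings are hypothetical stand-ins for the paper's principal directional features, not the authors' data.

```python
def lcs_length(a: str, b: str) -> int:
    """Classic O(len(a) * len(b)) dynamic-programming LCS length."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ca in enumerate(a, 1):
        for j, cb in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if ca == cb else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

# Hypothetical templates: trajectories quantized to 8-direction chain codes 0-7.
TEMPLATES = {
    "swipe_right": "000000",  # repeated 'east' moves
    "swipe_up":    "222222",  # repeated 'north' moves
    "l_shape":     "666000",  # a 'south' run followed by an 'east' run
}

def classify(sequence: str) -> str:
    """Return the template with the highest length-normalized LCS score."""
    def score(name: str) -> float:
        template = TEMPLATES[name]
        return lcs_length(sequence, template) / max(len(sequence), len(template))
    return max(TEMPLATES, key=score)

print(classify("66600000"))  # -> l_shape
```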
10

Kelly, Spencer D., Peter Creigh, and James Bartolotti. "Integrating Speech and Iconic Gestures in a Stroop-like Task: Evidence for Automatic Processing." Journal of Cognitive Neuroscience 22, no. 4 (April 2010): 683–94. http://dx.doi.org/10.1162/jocn.2009.21254.

Abstract:
Previous research has demonstrated a link between language and action in the brain. The present study investigates the strength of this neural relationship by focusing on a potential interface between the two systems: cospeech iconic gesture. Participants performed a Stroop-like task in which they watched videos of a man and a woman speaking and gesturing about common actions. The videos differed as to whether the gender of the speaker and gesturer was the same or different and whether the content of the speech and gesture was congruent or incongruent. The task was to identify whether a man or a woman produced the spoken portion of the videos while accuracy rates, RTs, and ERPs were recorded to the words. Although not relevant to the task, participants paid attention to the semantic relationship between the speech and the gesture, producing a larger N400 to words accompanied by incongruent versus congruent gestures. In addition, RTs were slower to incongruent versus congruent gesture–speech stimuli, but this effect was greater when the gender of the gesturer and speaker was the same versus different. These results suggest that the integration of gesture and speech during language comprehension is automatic but also under some degree of neurocognitive control.
11

Hupp, Julie M., and Mary C. Gingras. "The role of gesture meaningfulness in word learning." Gesture 15, no. 3 (November 28, 2016): 340–56. http://dx.doi.org/10.1075/gest.15.3.04hup.

Abstract:
Adults regularly use word-gesture combinations in communication, and meaningful gestures facilitate word learning. However, it is not clear whether this benefit of gestures is due to the speaker’s movement increasing the listener’s attention or requires the gesture to be meaningful, whether the difficulty of the task results in disparate reliance on gestures, and whether word classes are differentially affected by gestures. In the present research, participants were measured on their novel word learning across four gesture conditions: meaningful gesture, beat gesture, nonsense gesture, and no gesture with extended training (Study 1, n = 139) and brief training (Study 2, n = 128). Overall, meaningful gestures and high frequency words led to the highest word learning accuracy. This effect of word frequency did not hold true for beat gestures after brief training, suggesting that adding rhythmic information — if not adding semantic information — may detract from word learning. This research highlights the importance of considering task difficulty when analyzing the effects of gestures.
12

Villarreal-Narvaez, Santiago, Arthur Sluÿters, Jean Vanderdonckt, and Efrem Mbaki Luzayisu. "Theoretically-defined vs. user-defined squeeze gestures." Proceedings of the ACM on Human-Computer Interaction 6, ISS (November 14, 2022): 73–102. http://dx.doi.org/10.1145/3567805.

Abstract:
This paper presents theoretical and empirical results about user-defined gesture preferences for squeezable objects by focusing on a particular object: a deformable cushion. We start with a theoretical analysis of potential gestures for this squeezable object by defining a multi-dimension taxonomy of squeeze gestures composed of 82 gesture classes. We then empirically analyze the results of a gesture elicitation study, resulting in a set of N=32 participants × 21 referents = 672 elicited gestures, further classified into 26 gesture classes. We also contribute to the practice of gesture elicitation studies by explaining why we started from a theoretical analysis (by systematically exploring a design space of potential squeeze gestures) and ended with an empirical analysis (by conducting a gesture elicitation study afterward): the intersection of the results from these sources confirms or disconfirms consensus gestures. Based on these findings, we extract from the taxonomy a subset of recommended gestures that give rise to design implications for gesture interaction with squeezable objects.
13

Bhuyan, M. K., P. K. Bora, and D. Ghosh. "An integrated approach to the recognition of a wide class of continuous hand gestures." International Journal of Pattern Recognition and Artificial Intelligence 25, no. 02 (March 2011): 227–52. http://dx.doi.org/10.1142/s0218001411008592.

Abstract:
Gesture segmentation distinguishes meaningful gestures from unintentional movements; it is a prerequisite stage of continuous gesture recognition that locates the start and end points of each gesture in an input sequence. Yet, this is an extremely difficult task due to both the multitude of possible gesture variations in spatio-temporal space and the co-articulation/movement epenthesis of successive gestures. In this paper, we focus our attention on coping with this problem associated with continuous gesture recognition. This requires gesture spotting that distinguishes meaningful gestures from co-articulation and unintentional movements. In our method, we first segment the input video stream by detecting gesture boundaries at which the hand pauses for a while during gesturing. Next, every segment is checked for movement epenthesis and co-articulation via finite state machine (FSM) matching or by using hand motion information. Thus, movement epenthesis phases are detected and eliminated from the sequence, and we are left with a set of isolated gestures. Finally, we apply different recognition schemes to identify each individual gesture in the sequence. Our experimental results show that the proposed scheme is suitable for recognition of continuous gestures having different spatio-temporal behavior.
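
The pause-based boundary detection described here can be sketched generically: declare a boundary wherever the tracked hand's speed stays under a threshold for several consecutive frames. The speed threshold and minimum pause length below are illustrative assumptions, not the paper's parameters.

```python
import math

def segment_on_pauses(centroids, speed_thresh=2.0, min_pause_frames=5):
    """centroids: per-frame (x, y) hand positions.
    Returns (start, end) frame-index pairs for candidate gesture segments."""
    speeds = [math.dist(centroids[i], centroids[i - 1])
              for i in range(1, len(centroids))]
    pauses, run = [], 0
    for i, s in enumerate(speeds):
        run = run + 1 if s < speed_thresh else 0
        if run == min_pause_frames:        # pause confirmed: record a boundary
            pauses.append(i + 1)
    cuts = [0] + pauses + [len(centroids) - 1]
    return [(cuts[k], cuts[k + 1])
            for k in range(len(cuts) - 1) if cuts[k + 1] > cuts[k]]
```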
14

Suttora, Chiara, Annalisa Guarini, Mariagrazia Zuccarini, Arianna Aceti, Luigi Corvaglia, and Alessandra Sansavini. "Integrating Gestures and Words to Communicate in Full-Term and Low-Risk Preterm Late Talkers." International Journal of Environmental Research and Public Health 19, no. 7 (March 25, 2022): 3918. http://dx.doi.org/10.3390/ijerph19073918.

Abstract:
Young children use gestures to practice communicative functions that foster their receptive and expressive linguistic skills. Studies investigating the use of gestures by late talkers are limited. This study aimed to investigate the use of gestures and gesture–word combinations and their associations with word comprehension and word and sentence production in late talkers. A further purpose was to examine whether a set of individual and environmental factors accounted for interindividual differences in late talkers’ gesture and gesture–word production. Sixty-one late talkers, including 35 full-term and 26 low-risk preterm children, participated in the study. Parents filled out the Italian short forms of the MacArthur–Bates Communicative Development Inventories (MB–CDI), “Gesture and Words” and “Words and Sentences” when their children were 30-months-old, and they were then invited to participate in a book-sharing session with their child. Children’s gestures and words produced during the book-sharing session were transcribed and coded into CHAT of CHILDES and analyzed with CLAN. Types of spontaneous gestures (pointing and representational gestures) and gesture–word combinations (complementary, equivalent, and supplementary) were coded. Measures of word tokens and MLU were also computed. Correlational analyses documented that children’s use of gesture–word combinations, particularly complementary and supplementary forms, in the book-sharing session was positively associated with linguistic skills both observed during the session (word tokens and MLU) and reported by parents (word comprehension, word production, and sentence production at the MB–CDI). Concerning individual factors, male gender was negatively associated with gesture and gesture–word use, as well as with MB–CDI action/gesture production. In contrast, having a low-risk preterm condition and being later-born were positively associated with the use of gestures and pointing gestures, and having a family history of language and/or learning disorders was positively associated with the use of representational gestures. Furthermore, a low-risk preterm status and a higher cognitive score were positively associated with gesture–word combinations, particularly complementary and supplementary types. With regard to environmental factors, older parental age was negatively associated with late talkers’ use of gestures and pointing gestures. Interindividual differences in late talkers’ gesture and gesture–word production were thus related to several intertwined individual and environmental factors. Among late talkers, use of gestures and gesture–word combinations represents a point of strength promoting receptive and expressive language acquisition.
15

K, Srinivas, and Manoj Kumar Rajagopal. "Study of hand gesture recognition and classification." Asian Journal of Pharmaceutical and Clinical Research 10, no. 13 (April 1, 2017): 25. http://dx.doi.org/10.22159/ajpcr.2017.v10s1.19540.

Abstract:
The objective is to recognize different hand gestures and achieve efficient classification of the static and dynamic hand movements used for communication. Static and dynamic hand movements are first captured using gesture recognition devices including the Kinect device, hand movement sensors, connecting electrodes, and accelerometers. These gestures are processed using hand gesture recognition algorithms such as multivariate fuzzy decision trees, hidden Markov models (HMM), the dynamic time warping framework, latent regression forests, support vector machines, and surface electromyography. Hand movements made with one or both hands are captured by gesture capture devices under proper illumination conditions. These captured gestures are processed for occlusions and close finger interactions to identify the correct gesture, classify it, and ignore intermittent gestures. Real-time hand gesture recognition needs robust algorithms like HMM to detect only the intended gesture. Classified gestures are then compared for effectiveness against standard training and test datasets such as sign language alphabets and the KTH dataset. Hand gesture recognition plays a very important role in applications such as sign language recognition, robotics, television control, rehabilitation, and music orchestration.
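
Of the algorithms named above, dynamic time warping is compact enough to sketch in full; it scores how well two gesture trajectories of different lengths align. The (x, y) trajectories below are toy data, not from the cited datasets.

```python
import math

def dtw_distance(seq_a, seq_b):
    """Dynamic time warping distance between two point sequences."""
    n, m = len(seq_a), len(seq_b)
    dp = [[float("inf")] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(seq_a[i - 1], seq_b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j], dp[i][j - 1], dp[i - 1][j - 1])
    return dp[n][m]

# A slow horizontal swipe matches a fast one better than a vertical one.
fast = [(0, 0), (2, 0), (4, 0)]
slow = [(0, 0), (1, 0), (2, 0), (3, 0), (4, 0)]
vertical = [(0, 0), (0, 2), (0, 4)]
assert dtw_distance(fast, slow) < dtw_distance(fast, vertical)
```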
16

Emmorey, Karen, and Shannon Casey. "Gesture, thought and spatial language." Gesture 1, no. 1 (December 31, 2001): 35–50. http://dx.doi.org/10.1075/gest.1.1.04emm.

Abstract:
This study explores the conceptual and communicative roles of gesture by examining the consequences of gesture prevention for the type of spatial language used to solve a spatial problem. English speakers were asked to describe where to place a group of blocks so that the blocks completely filled a puzzle grid. Half the subjects were allowed to gesture and half were prevented from gesturing. In addition, half the subjects could see their addressee and half could not. Addressee visibility affected how reliant subjects were on specifying puzzle grid co-ordinates, regardless of gesture condition. When describing block locations, subjects who were allowed to gesture were more likely to describe block orientation and rotation, but only when they could see the addressee. Further, gesture and speech complemented each other such that subjects were less likely to lexically specify rotation direction when this information was expressed by gesture; however, this was not a deliberate communicative choice because subjects who were not visible to their addressee also tended to leave rotation direction unspecified when they gestured. Finally, speakers produced deictic anaphoric constructions (e.g., “turn it this way”) which referred to their own gestures only when they could see the addressee. Together, these findings support the hypothesis that gesture is both an act of communication and an act of thought, and the results fail to support the hypothesis that gesture functions primarily to facilitate lexical retrieval.
17

McNeill, David, Bennett Bertenthal, Jonathan Cole, and Shaun Gallagher. "Gesture-first, but no gestures?" Behavioral and Brain Sciences 28, no. 2 (April 2005): 138–39. http://dx.doi.org/10.1017/s0140525x05360031.

Abstract:
Although Arbib's extension of the mirror-system hypothesis neatly sidesteps one problem with the “gesture-first” theory of language origins, it overlooks the importance of gestures that occur in current-day human linguistic performance, and this lands it with another problem. We argue that, instead of gesture-first, a system of combined vocalization and gestures would have been a more natural evolutionary unit.
18

Alyamani, Hasan J. "Gesture Vocabularies for Hand Gestures for Controlling Air Conditioners in Home and Vehicle Environments." Electronics 12, no. 7 (March 23, 2023): 1513. http://dx.doi.org/10.3390/electronics12071513.

Abstract:
With the growing prevalence of modern technologies as part of everyday life, mid-air gestures have become a promising input method in the field of human–computer interaction. This paper analyses the gestures of actual users to define a preliminary gesture vocabulary for home air conditioning (AC) systems and suggests a gesture vocabulary for controlling the AC that applies to both home and vehicle environments. In this study, a user elicitation experiment was conducted. A total of 36 participants were filmed while employing their preferred hand gestures to manipulate a home air conditioning system. Comparisons were drawn between our proposed gesture vocabulary (HomeG) and a previously proposed gesture vocabulary which was designed to identify the preferred hand gestures for in-vehicle air conditioners. The findings indicate that HomeG successfully identifies and describes the employed gestures in detail. To gain a gesture taxonomy that is suitable for manipulating the AC at home and in a vehicle, some modifications were applied to HomeG based on suggestions from other studies. The modified gesture vocabulary (CrossG) can identify the gestures of our study, although CrossG has a less detailed gesture pattern. Our results will help designers to understand user preferences and behaviour prior to designing and implementing a gesture-based user interface.
19

Nguyen, Ngoc-Hoang, Tran-Dac-Thinh Phan, Soo-Hyung Kim, Hyung-Jeong Yang, and Guee-Sang Lee. "3D Skeletal Joints-Based Hand Gesture Spotting and Classification." Applied Sciences 11, no. 10 (May 20, 2021): 4689. http://dx.doi.org/10.3390/app11104689.

Abstract:
This paper presents a novel approach to continuous dynamic hand gesture recognition. Our approach contains two main modules: gesture spotting and gesture classification. Firstly, the gesture spotting module pre-segments the video sequence with continuous gestures into isolated gestures. Secondly, the gesture classification module identifies the segmented gestures. In the gesture spotting module, the motion of the hand palm and fingers are fed into the Bidirectional Long Short-Term Memory (Bi-LSTM) network for gesture spotting. In the gesture classification module, three residual 3D Convolution Neural Networks based on ResNet architectures (3D_ResNet) and one Long Short-Term Memory (LSTM) network are combined to efficiently utilize the multiple data channels such as RGB, Optical Flow, Depth, and 3D positions of key joints. The promising performance of our approach is obtained through experiments conducted on three public datasets—Chalearn LAP ConGD dataset, 20BN-Jester, and NVIDIA Dynamic Hand gesture Dataset. Our approach outperforms the state-of-the-art methods on the Chalearn LAP ConGD dataset.
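
The overall shape of such a spotting module can be sketched in PyTorch: a bidirectional LSTM emits per-frame gesture/no-gesture logits that pre-segment the continuous stream. The 63-dimensional joint feature and the layer sizes are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class GestureSpotter(nn.Module):
    def __init__(self, feat_dim=63, hidden=128):  # e.g., 21 joints x 3 coords
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 2)      # per-frame: gesture / no-gesture

    def forward(self, x):                         # x: (batch, frames, feat_dim)
        h, _ = self.lstm(x)                       # (batch, frames, 2 * hidden)
        return self.head(h)                       # per-frame class logits

spotter = GestureSpotter()
logits = spotter(torch.randn(1, 100, 63))         # one 100-frame clip
print(logits.shape)                               # torch.Size([1, 100, 2])
```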
20

Ma, Xianmin, and Xiaofeng Li. "Dynamic Gesture Contour Feature Extraction Method Using Residual Network Transfer Learning." Wireless Communications and Mobile Computing 2021 (October 13, 2021): 1–11. http://dx.doi.org/10.1155/2021/1503325.

Abstract:
Current dynamic gesture contour feature extraction methods suffer from a low recognition rate of dynamic gesture contour features, low recognition accuracy of dynamic gesture types, long recognition time, and poor overall performance. Therefore, we propose a dynamic gesture contour feature extraction method using residual network transfer learning. Sensors are used to integrate dynamic gesture information. The distance between the dynamic gesture and the acquisition device is detected by transfer learning, the dynamic gesture image is segmented, and the characteristic contour image is initialized. The residual network method is used to accurately identify the contour and texture features of dynamic gestures. Fusion processing weights are used to trace the contour features of dynamic gestures frame by frame, and the contour area of dynamic gestures is processed by graying and binarization to realize the extraction of contour features of dynamic gestures. The results show that the dynamic gesture contour feature recognition rate of the proposed method is 91%, the recognition time is 11.6 s, and the dynamic gesture type recognition accuracy rate is 92%. Therefore, this method can effectively improve the recognition rate and type recognition accuracy of dynamic gesture contour features and shorten the time for dynamic gesture contour feature recognition; the F value is 0.92, indicating good overall performance.
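
The closing step described here (gray conversion plus binarization, then contour extraction) corresponds to standard image operations; the OpenCV sketch below covers only that step, not the paper's residual-network or transfer-learning stages.

```python
import cv2

def extract_gesture_contour(bgr_image):
    """Return the largest external contour of a segmented hand image."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea) if contours else None
```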
21

Wong, Alex Ming Hui, and Dae-Ki Kang. "Stationary Hand Gesture Authentication Using Edit Distance on Finger Pointing Direction Interval." Scientific Programming 2016 (2016): 1–15. http://dx.doi.org/10.1155/2016/7427980.

Abstract:
One of the latest authentication methods is discerning human gestures. Previous research has shown that different people can develop distinct gesture behaviours even when executing the same gesture. Hand gestures are among the most commonly studied gestures in both communication and authentication research, since they require less room to perform than other bodily gestures. Many types of hand gesture have been researched, but stationary hand gestures have yet to be thoroughly explored. General hand gesture authentication suffers from a number of weaknesses in reliability, usability, and computational cost. Although stationary hand gestures cannot solve all of these problems, they still provide benefits and advantages over other hand gesture authentication methods: the gesture becomes a motion flow instead of trivial image capturing, requires less room to perform, needs fewer vision cues during performance, and so forth. In this paper, we introduce stationary hand gesture authentication by implementing edit distance on finger pointing direction interval (ED-FPDI) from hand gestures to model a behaviour-based authentication system. The accuracy rate of the proposed ED-FPDI shows promising results.
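
The core mechanism (comparing a string of quantized pointing directions against an enrolled template by edit distance) can be sketched directly. The direction alphabet, sample strings, and acceptance threshold below are hypothetical, not the paper's.

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance with a single rolling row."""
    dp = list(range(len(b) + 1))
    for i in range(1, len(a) + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, len(b) + 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,           # deletion
                                     dp[j - 1] + 1,       # insertion
                                     prev + (a[i - 1] != b[j - 1]))  # substitution
    return dp[len(b)]

def authenticate(template: str, attempt: str, max_dist: int = 2) -> bool:
    return edit_distance(template, attempt) <= max_dist

enrolled = "NNEESSW"                       # pointing direction per interval
print(authenticate(enrolled, "NNEESW"))    # True: one symbol short of the template
print(authenticate(enrolled, "WWSSEEN"))   # False: too far from the template
```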
22

Liao, Ting. "Application of Gesture Recognition Based on Spatiotemporal Graph Convolution Network in Virtual Reality Interaction." Journal of Cases on Information Technology 24, no. 5 (February 21, 2022): 1–12. http://dx.doi.org/10.4018/jcit.295246.

Abstract:
To address the low recognition rate of traditional gesture recognition methods, a gesture recognition algorithm based on a spatiotemporal graph convolution network is proposed in this paper. Firstly, the dynamic gesture data are preprocessed, including removal of invalid gesture frames, completion of gesture frame data, and normalization of joint lengths. Then, the key frames of the gesture are extracted according to the given coordinate information of the hand joints, and a connected graph is constructed according to the natural connections of the time-series information and the gesture skeleton. A spatio-temporal convolutional network with a multi-attention mechanism is used to learn spatio-temporal features to predict gestures. Finally, experiments are carried out on the 14 gesture classes of the DHG-14 dynamic gesture dataset. Experimental results show that this method can recognize gestures accurately.
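
A minimal sketch of one spatio-temporal graph-convolution block makes the idea concrete: joint features are mixed through a normalized skeleton adjacency matrix, then convolved along time. The five-joint toy skeleton is illustrative, and the paper's multi-attention mechanism is omitted.

```python
import torch
import torch.nn as nn

# Toy hand skeleton: wrist (0) connected to four finger-base joints (1-4).
A = torch.eye(5)
for i, j in [(0, 1), (0, 2), (0, 3), (0, 4)]:
    A[i, j] = A[j, i] = 1.0
A = A / A.sum(dim=1, keepdim=True)         # row-normalized adjacency, self-loops

class STGCBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.spatial = nn.Linear(in_ch, out_ch)   # per-joint feature transform
        self.temporal = nn.Conv1d(out_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x):                  # x: (frames, joints, in_ch)
        x = A @ self.spatial(x)            # mix features along skeleton edges
        x = x.permute(1, 2, 0)             # -> (joints, channels, frames)
        x = self.temporal(x)               # convolve along time, per joint
        return x.permute(2, 0, 1)          # -> (frames, joints, out_ch)

block = STGCBlock(3, 16)                   # 3 input channels: (x, y, z) per joint
print(block(torch.randn(20, 5, 3)).shape)  # torch.Size([20, 5, 16])
```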
23

Parrill, Fey, John Cabot, Hannah Kent, Kelly Chen, and Ann Payneau. "Do people gesture more when instructed to?" Gesture 15, no. 3 (November 28, 2016): 357–71. http://dx.doi.org/10.1075/gest.15.3.05par.

Abstract:
Does being instructed to gesture encourage those with low gesture rates to produce more gestures? If participants do gesture more when asked to, do they produce the same kinds of gestures? Does this vary as a function of the type of discourse being produced? We asked participants to take part in three tasks, a quasi-conversational task, a spatial problem solving task, and a narrative task, in two phases. In the first they received no instruction, and in the second they were asked to gesture. The instruction to gesture did not change gesture rate or gesture type across phases. We suggest that while explicitly asking participants to gesture may not always achieve higher gesture rates, it also does not negatively impact natural behavior.
24

Katagami, Daisuke, Yusuke Ikeda, and Katsumi Nitta. "Behavior Generation and Evaluation of Negotiation Agent Based on Negotiation Dialogue Instances." Journal of Advanced Computational Intelligence and Intelligent Informatics 14, no. 7 (November 20, 2010): 840–51. http://dx.doi.org/10.20965/jaciii.2010.p0840.

Abstract:
This study focuses on gestures in negotiation dialogs. Analyzing the situation/gesture relationship, we suggest how to enable agents to produce adequately human-like gestures and evaluate whether an agent's gestures can give an impression similar to those made by a human being. We collected negotiation dialogs to study common human gestures. We studied gesture frequency in different situations, extracted high-frequency gestures, and built an agent gesture module based on these characteristics. Using a questionnaire, we evaluated the impressions of gestures made by human users and agents, confirming that the agent expresses the same state of mind as the human being by generating an adequately human-like gesture.
25

Nyirarugira, Clementine, Hyo-rim Choi, and TaeYong Kim. "Hand Gesture Recognition Using Particle Swarm Movement." Mathematical Problems in Engineering 2016 (2016): 1–8. http://dx.doi.org/10.1155/2016/1919824.

Abstract:
We present a gesture recognition method derived from particle swarm movement for free-air hand gesture recognition. Online gesture recognition remains a difficult problem due to uncertainty in vision-based gesture boundary detection methods. We suggest an automated process of segmenting meaningful gesture trajectories based on particle swarm movement. A subgesture detection and reasoning method is incorporated in the proposed recognizer to avoid premature gesture spotting. Evaluation of the proposed method shows promising recognition results: 97.6% on preisolated gestures, 94.9% on stream gestures with assistive boundary indicators, and 94.2% for blind gesture spotting on digit gesture vocabulary. The proposed recognizer requires fewer computation resources; thus it is a good candidate for real-time applications.
26

Child, Simon, Anna Theakston, and Simone Pika. "How do modelled gestures influence preschool children’s spontaneous gesture production?" Gesture 14, no. 1 (December 31, 2014): 1–25. http://dx.doi.org/10.1075/gest.14.1.01chi.

Abstract:
Around the age of nine months, children start to communicate by using first words and gestures during interactions with caregivers. The question remains as to how older preschool children incorporate the gestures they observe into their own gestural representations of previously unseen objects. Two accounts of gesture production (the ‘gesture learning’ and ‘simulated representation’ accounts) offer different predictions for how preschool children use the gestures they observe when describing objects. To test these two competing accounts underlying gesture production, we showed 42 children (mean age: 45 months 14 days) four novel objects using speech only, or speech accompanied by either movement or physical feature gestures. Analyses revealed that (a) overall symbolic gesture production showed a high degree of individual variability, and (b) distinct observed gesture types influenced the children’s subsequent gesture use. Specifically, it was found that children preferred to match movement gestures in a subsequent communicative interaction including the same objects, but not physical feature gestures. We conclude that the observation of gestures (in particular gestures that depict movement) may act to change preschool children’s object representations, which in turn influences how they depict objects in space.
27

De Froy, Adrienne, and Pamela Rosenthal Rollins. "The cross-racial/ethnic gesture production of young autistic children and their parents." Autism & Developmental Language Impairments 8 (January 2023): 239694152311595. http://dx.doi.org/10.1177/23969415231159548.

Abstract:
Background & Aims Early gesture plays an important role in prelinguistic/emerging linguistic communication and may provide insight into a child's social communication skills before the emergence of spoken language. Social interactionist theories suggest children learn to gesture through daily interactions with their social environment (e.g., their parents). As such, it is important to understand how parents gesture within interactions with their children when studying child gesture. Parents of typically developing (TD) children exhibit cross-racial/ethnic differences in gesture rate. Correlations between parent and child gesture rates arise prior to the first birthday, although TD children at this developmental level do not yet consistently exhibit the same cross-racial/ethnic differences as their parents. While these relationships have been explored in TD children, less is known about the gesture production of young autistic children and their parents. Further, studies of autistic children have historically been conducted with predominantly White, English-speaking participants. As a result, there is little data regarding the gesture production of young autistic children and their parents from diverse racial/ethnic backgrounds. In the present study, we examined the gesture rates of racially/ethnically diverse autistic children and their parents. Specifically, we explored (1) cross-racial/ethnic differences in the gesture rate of parents of autistic children, (2) the correlation between parent and child gesture rates, and (3) cross-racial/ethnic differences in the gesture rates of autistic children. Methods Participants were 77 racially/ethnically diverse cognitively and linguistically impaired autistic children (age 18 to 57 months) and a parent who participated in one of two larger intervention studies. Naturalistic parent–child and structured clinician–child interactions were video recorded at baseline. Parent and child gesture rate (number of gestures produced per 10 min) were extracted from these recordings. Results (1) Parents exhibited cross-racial/ethnic differences in gesture rate such that Hispanic parents gestured more frequently than Black/African American parents, replicating previous findings in parents of TD children. Further, South Asian parents gestured more than Black/African American parents. (2) The gesture rate of autistic children was not correlated with parent gesture, a finding that differs from TD children of a similar developmental level. (3) Autistic children did not exhibit the same cross-racial/ethnic differences in gesture rate as their parents, a result consistent with findings from TD children. Conclusions Parents of autistic children—like parents of TD children—exhibit cross-racial/ethnic differences in gesture rate. However, parent and child gesture rates were not related in the present study. Thus, while parents of autistic children from different ethnic/racial backgrounds appear to be conveying differences in gestural communication to their children, these differences are not yet evident in child gesture. Implications Our findings enhance our understanding of the early gesture production of racially/ethnically diverse autistic children in the prelinguistic/emerging linguistic stage of development, as well as the role of parent gesture. More research is needed with developmentally more advanced autistic children, as these relationships may change with development.
28

Kok, Kasper I., and Alan Cienki. "Cognitive Grammar and gesture: Points of convergence, advances and challenges." Cognitive Linguistics 27, no. 1 (February 1, 2016): 67–100. http://dx.doi.org/10.1515/cog-2015-0087.

Abstract:
Given its usage-oriented character, Cognitive Grammar (CG) can be expected to be consonant with a multimodal, rather than text-only, perspective on language. Whereas several scholars have acknowledged this potential, the question as to how speakers’ gestures can be incorporated in CG-based grammatical analysis has not been conclusively addressed. In this paper, we aim to advance the CG-gesture relationship. We first elaborate on three important points of convergence between CG and gesture research: (1) CG’s conception of grammar as a prototype category, with central and more peripheral structures, aligns with the variable degrees to which speakers’ gestures are conventionalized in human communication. (2) Conceptualization, which lies at the basis of grammatical organization according to CG, is known to be of central importance for gestural expression. In fact, all of the main dimensions of construal postulated in CG (specificity, perspective, profile-base relationship, conceptual archetypes) receive potential gestural expression. (3) CG’s intensive use of diagrammatic notation allows for the incorporation of spatial features of gestures. Subsequently, we demonstrate how CG can be applied to analyze the structure of multimodal, spoken-gestured utterances. These analyses suggest that the constructs and tools developed by CG can be employed to analyze the compositionality that exists within a single gesture (between conventional and more idiosyncratic components) as well as in the grammatical relations that may exist between gesture and speech. Finally, we raise a number of theoretical and empirical challenges.
29

Hwang, Bon-Woo, Sungmin Kim, and Seong-Whan Lee. "A full-body gesture database for human gesture analysis." International Journal of Pattern Recognition and Artificial Intelligence 21, no. 06 (September 2007): 1069–84. http://dx.doi.org/10.1142/s0218001407005806.

Abstract:
This paper presents a full-body gesture database which contains 2D video data and 3D motion data of 14 normal gestures, 10 abnormal gestures and 30 command gestures for 20 subjects. We call this database the Korea University Gesture (KUG) database. Using 3D motion cameras and 3 sets of stereo cameras, we captured 3D motion data and 3 pairs of stereo-video data in 3 different directions for normal and abnormal gestures. In the case of command gestures, 2 pairs of stereo-video data were obtained by 2 sets of stereo cameras with different focal lengths in order to capture views of the whole body and upper body simultaneously. The 2D silhouette data was synthesized by separating the subject from the background in the 2D stereo-video data. In this paper, we describe the gesture capture system, the organization of the database, the potential usages of the database, and the contact point for the KUG database. We expect that this database will be very useful for the study of 2D/3D human gesture and its applications.
30

Cartmill, Erica A., Sian Beilock, and Susan Goldin-Meadow. "A word in the hand: action, gesture and mental representation in humans and non-human primates." Philosophical Transactions of the Royal Society B: Biological Sciences 367, no. 1585 (January 12, 2012): 129–43. http://dx.doi.org/10.1098/rstb.2011.0162.

Abstract:
The movements we make with our hands both reflect our mental processes and help to shape them. Our actions and gestures can affect our mental representations of actions and objects. In this paper, we explore the relationship between action, gesture and thought in both humans and non-human primates and discuss its role in the evolution of language. Human gesture (specifically representational gesture) may provide a unique link between action and mental representation. It is kinaesthetically close to action and is, at the same time, symbolic. Non-human primates use gesture frequently to communicate, and do so flexibly. However, their gestures mainly resemble incomplete actions and lack the representational elements that characterize much of human gesture. Differences in the mirror neuron system provide a potential explanation for non-human primates' lack of representational gestures; the monkey mirror system does not respond to representational gestures, while the human system does. In humans, gesture grounds mental representation in action, but there is no evidence for this link in other primates. We argue that gesture played an important role in the transition to symbolic thought and language in human evolution, following a cognitive leap that allowed gesture to incorporate representational elements.
31

Asmoro, Jeffri Dian, Achmad Teguh Wibowo, and Mujib Ridwan. "Virtual mouse with hand gesture recognition based on hand landmark model for pointing device." JURTEKSI (Jurnal Teknologi dan Sistem Informasi) 9, no. 2 (March 28, 2023): 261–68. http://dx.doi.org/10.33330/jurteksi.v9i2.2073.

Abstract:
Technology is growing rapidly and has become one of the human needs that must be met to solve the problems being faced. The development of touchless input devices, or hand gesture recognition using a camera, is a form of machine learning. Gestures can be defined as physical movements of the hands, arms, or body that convey expressive messages; a hand gesture system can also express the content of commands that carry meaning. In this research, a virtual mouse system is developed using hand gesture recognition based on the hand landmark model for pointing devices. The resulting application can be run on a desktop device using a webcam. The results of tests carried out to analyze the implementation of the hand landmark model in the system show that the average system accuracy reaches 96% and the speed reaches 0.05 seconds. Keywords: hand gesture recognition, hand landmark models, machine learning, virtual mouse
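
The landmark-to-pointer mapping at the heart of such a system can be sketched with MediaPipe's hand-landmark model and a GUI automation library: the normalized index-fingertip position is scaled to screen coordinates. This is a generic sketch (smoothing and click gestures omitted), not the authors' code.

```python
import cv2
import mediapipe as mp
import pyautogui

hands = mp.solutions.hands.Hands(max_num_hands=1)
screen_w, screen_h = pyautogui.size()
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        tip = results.multi_hand_landmarks[0].landmark[8]  # index fingertip
        pyautogui.moveTo(tip.x * screen_w, tip.y * screen_h)
    cv2.imshow("preview", frame)
    if cv2.waitKey(1) & 0xFF == 27:                        # Esc to quit
        break
cap.release()
```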
32

Laurent, Angélique, and Elena Nicoladis. "Gesture restriction affects French–English bilinguals’ speech only in French." Bilingualism: Language and Cognition 18, no. 2 (May 22, 2014): 340–49. http://dx.doi.org/10.1017/s1366728914000042.

Abstract:
Some studies have shown that bilinguals gesture more than monolinguals. One possible reason for the high gesture frequency is that bilinguals rely on gestures even more than monolinguals in constructing their message. To test this, we asked French–English bilingual adults and English monolingual adults to tell a story twice; on one occasion they could move their hands and on the other they could not. If gestures aid bilinguals in information packaging and/or lexical access, bilinguals should tell shorter stories with fewer word types than monolinguals when their gestures are restricted. In fact, we found that gesture restriction affected bilinguals’ stories only in French, the language in which they used more gestures. These findings challenge the interpretation that bilinguals gesture frequently as an aid in constructing their message. We argue that cultural norms in gesture frequency interact with gesture use in message construction.
33

Valle, Chelsea La, Karen Chenausky, and Helen Tager-Flusberg. "How do minimally verbal children and adolescents with autism spectrum disorder use communicative gestures to complement their spoken language abilities?" Autism & Developmental Language Impairments 6 (January 2021): 239694152110350. http://dx.doi.org/10.1177/23969415211035065.

Abstract:
Background and aims Prior work has examined how children and adolescents with autism spectrum disorder who are minimally verbal use their spoken language abilities during interactions with others. However, social communication includes other aspects beyond speech. To our knowledge, no studies have examined how minimally verbal children and adolescents with autism spectrum disorder are using their gestural communication during social interactions. Such work can provide important insights into how gestures may complement their spoken language abilities. Methods Fifty minimally verbal children and adolescents with autism spectrum disorder participated (mean age = 12.41 years; 38 males). Gestural communication was coded from the Autism Diagnostic Observation Schedule. Children (n = 25) and adolescents (n = 25) were compared on their production of gestures, gesture–speech combinations, and communicative functions. Communicative functions were also assessed by the type of communication modality: gesture, speech, and gesture–speech to examine the range of communicative functions across different modalities of communication. To explore the role gestures may play, the relation between speech utterances and gestural production was investigated. Results Analyses revealed that (1) minimally verbal children and adolescents with autism spectrum disorder did not differ in their total number of gestures. The most frequently produced gesture across children and adolescents was a reach gesture, followed by a point gesture (deictic gesture), and then conventional gestures. However, adolescents produced more gesture–speech combinations (reinforcing gesture-speech combinations) and displayed a wider range of communicative functions. (2) Overlap was found in the types of communicative functions expressed across different communication modalities. However, requests were conveyed via gesture more frequently compared to speech or gesture–speech. In contrast, dis/agree/acknowledging and responding to a question posed by the conversational partner was expressed more frequently via speech compared to gesture or gesture–speech. (3) The total number of gestures was negatively associated with total speech utterances after controlling for chronological age, receptive communication ability, and nonverbal IQ. Conclusions Adolescents may be employing different communication strategies to maintain the conversational exchange and to further clarify the message they want to convey to the conversational partner. Although overlap occurred in communicative functions across gesture, speech, and gesture–speech, nuanced differences emerged in how often they were expressed across different modalities of communication. Given their speech production abilities, gestures may play a compensatory role for some individuals with autism spectrum disorder who are minimally verbal. Implications Findings underscore the importance of assessing multiple modalities of communication to provide a fuller picture of their social communication abilities. Our results identified specific communicative strengths and areas for growth that can be targeted and expanded upon within gesture and speech to optimize social communication development.
APA, Harvard, Vancouver, ISO, and other styles
34

ÖZYÜREK, ASLI, REYHAN FURMAN, and SUSAN GOLDIN-MEADOW. "On the way to language: event segmentation in homesign and gesture." Journal of Child Language 42, no. 1 (March 20, 2014): 64–94. http://dx.doi.org/10.1017/s0305000913000512.

Full text
Abstract:
Languages typically express semantic components of motion events, such as manner (roll) and path (down), in separate lexical items. We explore how these combinatorial possibilities of language arise by focusing on (i) gestures produced by deaf children who lack access to input from a conventional language (homesign); (ii) gestures produced by hearing adults and children while speaking; and (iii) gestures produced without speech by hearing adults asked to describe motion events containing simultaneous manner and path. Homesigners tended to conflate manner and path in one gesture, but also used a mixed form, adding a manner and/or path gesture to the conflated form sequentially. Hearing speakers, with or without speech, used the conflated form, gestured manner, or gestured path, but rarely used the mixed form. The mixed form may serve as an intermediate structure on the way to the discrete and sequenced forms found in natural languages.
APA, Harvard, Vancouver, ISO, and other styles
35

Park, Jisun, Yong Jin, Seoungjae Cho, Yunsick Sung, and Kyungeun Cho. "Advanced Machine Learning for Gesture Learning and Recognition Based on Intelligent Big Data of Heterogeneous Sensors." Symmetry 11, no. 7 (July 16, 2019): 929. http://dx.doi.org/10.3390/sym11070929.

Full text
Abstract:
With intelligent big data, a variety of gesture-based recognition systems have been developed to enable intuitive interaction by utilizing machine learning algorithms. Achieving high gesture recognition accuracy is crucial, and current systems learn extensive sets of gestures in advance to improve their accuracy. However, accurate recognition depends on identifying and editing the numerous gestures collected from the system's actual end users, and this final end-user learning step remains troublesome for most existing gesture recognition systems. This paper proposes a method that facilitates end-user gesture learning and recognition by improving the editing process applied to the intelligent big data collected through end-user gestures. The proposed method enables the recognition of more complex and precise gestures by merging gestures collected from multiple sensors and processing them as a single gesture. To evaluate the proposed method, it was used in a shadow puppet performance that could interact with on-screen animations. An average gesture recognition rate of 90% was achieved in the experimental evaluation, demonstrating the efficacy and intuitiveness of the proposed method for editing visualized learning gestures.
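The abstract does not spell out how gestures from heterogeneous sensors are merged into a single gesture. A minimal sketch of one plausible interpretation follows: time-align each sensor trace by resampling and concatenate the features. The function names and the fixed frame count are illustrative assumptions, not the authors' procedure.

```python
# Illustrative sketch: merge heterogeneous sensor traces into one gesture
# sample by resampling each trace to a common length and concatenating
# features. This is an assumption about the merge step, not the paper's code.
import numpy as np

def resample(trace: np.ndarray, length: int) -> np.ndarray:
    """Linearly resample a (T, D) sensor trace to a fixed number of frames."""
    t_old = np.linspace(0.0, 1.0, len(trace))
    t_new = np.linspace(0.0, 1.0, length)
    return np.stack([np.interp(t_new, t_old, trace[:, d])
                     for d in range(trace.shape[1])], axis=1)

def merge_sensors(traces: list[np.ndarray], length: int = 64) -> np.ndarray:
    """Merge traces from several sensors into a single (length, sum_D) gesture."""
    return np.concatenate([resample(t, length) for t in traces], axis=1)
```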
APA, Harvard, Vancouver, ISO, and other styles
36

Namy, Laura L., Rebecca Vallas, and Jennifer Knight-Schwarz. "Linking parent input and child receptivity to symbolic gestures." Gesture 8, no. 3 (December 12, 2008): 302–24. http://dx.doi.org/10.1075/gest.8.3.03nam.

Full text
Abstract:
This study explored the relation between parents' production of gestures and symbolic play during free play and children's production and comprehension of symbolic gestures. Thirty-one 16- to 22-month-olds and their parents participated in a free play session. Children also participated in a forced-choice novel gesture-learning task. Parents' pretend play with objects in hand was predictive of children's gesture production during play and of gesture vocabulary according to parental report. No relationship was found between parent gesture and child performance on the forced-choice gesture-learning task, although children's performance was negatively correlated with their verbal vocabulary size. These data suggest a strong link between parental input and children's use of gestures as symbols, although not a direct link from parent gesture to child gesture. The data also suggest that children's overall expectation that gestures can be symbols is unaffected by parental input, and highlight the possibility that children play a role in transforming the symbolic play behaviors they observe into communicative signals.
APA, Harvard, Vancouver, ISO, and other styles
37

Zhao, Yiming, Yanchao Zhao, Huawei Tu, Qihan Huang, Wenlai Zhao, and Wenhao Jiang. "Motion Gesture Delimiters for Smartwatch Interaction." Wireless Communications and Mobile Computing 2022 (July 12, 2022): 1–11. http://dx.doi.org/10.1155/2022/6879206.

Full text
Abstract:
Smartwatches are increasingly popular in our daily lives. Motion gestures are a common way of interacting with smartwatches; for example, users can move the arm wearing the watch through the air to trigger a specific smartwatch command. Motion gesture interaction can compensate to some extent for the small screen size of the smartwatch and enrich smartwatch-based interactions. An important aspect of motion gesture interaction is determining the start and end of a motion gesture. This paper aims to select gestures that are suitable delimiters for motion gesture interaction with the smartwatch. We designed six gestures ("shaking wrist left and right," "shaking wrist up and down," "holding fist and opening," "turning wrist clockwise," "turning wrist anticlockwise," and "shaking wrist up") and conducted two experiments to compare their performance. First, we recognized the six gestures using dynamic time warping (DTW) and, separately, feature extraction with KNN (K-nearest neighbors). The average recognition rate of the latter algorithm across the six gestures was higher than that of the former, and with the latter algorithm the recognition rate for the first three of the six gestures exceeded 98%. Based on experiment one, gesture 1 (shaking wrist left and right), gesture 2 (shaking wrist up and down), and gesture 3 (holding fist and opening) were selected as candidate delimiters; a questionnaire analysis led to the same conclusion. We then conducted a second experiment to investigate the performance of these three candidate gestures in daily scenes and obtain their misoperation rates. The misoperation rates of two candidate gestures ("shaking wrist left and right" and "shaking wrist up and down") were approximately zero, significantly lower than that of the third candidate gesture. Based on these experimental results, the gestures "shaking wrist left and right" and "shaking wrist up and down" are suitable motion gesture delimiters for smartwatch interaction.
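For readers unfamiliar with the DTW template matching this abstract mentions, here is a minimal sketch, not the authors' implementation: DTW distance between two accelerometer traces plus a one-nearest-neighbor classifier over labeled templates.

```python
# A minimal sketch (not the authors' code) of DTW-based template matching,
# as commonly used to classify 3-axis accelerometer gesture traces.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Dynamic time warping distance between two (T, 3) accelerometer traces."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])  # frame-to-frame distance
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

def classify_1nn(sample: np.ndarray, templates: list) -> str:
    """templates: list of (label, trace); returns label of nearest template."""
    return min(templates, key=lambda t: dtw_distance(sample, t[1]))[0]
```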
APA, Harvard, Vancouver, ISO, and other styles
38

Ng, Chloe, and Nicolai Marquardt. "Eliciting user-defined touch and mid-air gestures for co-located mobile gaming." Proceedings of the ACM on Human-Computer Interaction 6, ISS (November 14, 2022): 303–27. http://dx.doi.org/10.1145/3567722.

Full text
Abstract:
Many interaction techniques have been developed to best support mobile gaming, but the resulting gestures and techniques might not always match user behaviour or preferences. To inform this design space of gesture input for co-located mobile gaming, we present insights from a gesture elicitation user study for touch and mid-air input, focusing specifically on board and card games due to the materiality of game artefacts and the rich interaction between players. We obtained touch and mid-air gesture proposals for 11 game tasks with 12 dyads and gained insights into user preferences. We contribute a classification and analysis of 622 elicited gestures (showing more collaborative gestures in the mid-air modality), a resulting consensus gesture set, and agreement rates showing higher consensus for touch gestures. Furthermore, we identified interaction patterns, such as the benefits of situational awareness, social etiquette, gestures fostering interaction between players, and the role of gestures in providing fun, excitement, and suspense to the game, which can inform future games and gesture design.
APA, Harvard, Vancouver, ISO, and other styles
39

GARCÍA-GÁMEZ, ANA B., and PEDRO MACIZO. "Learning nouns and verbs in a foreign language: The role of gestures." Applied Psycholinguistics 40, no. 2 (December 11, 2018): 473–507. http://dx.doi.org/10.1017/s0142716418000656.

Full text
Abstract:
We evaluated the impact of gestures on second language (L2) vocabulary learning with nouns (Experiment 1) and verbs (Experiment 2). Four training methods were compared: the learning of L2 words with congruent gestures, incongruent gestures, meaningless gestures, and no gestures. Better vocabulary learning was found in both experiments when participants learned L2 words with congruent gestures relative to the no gesture condition. This result indicates that gestures have a positive effect on L2 learning when the gesture matches the word meaning. However, recall of words in the incongruent and meaningless gesture conditions was lower than in the no gesture condition, suggesting that mismatching gestures can have a negative impact on L2 learning. The facilitation and interference effects found with the use of gestures in L2 vocabulary acquisition are discussed.
APA, Harvard, Vancouver, ISO, and other styles
40

Patil, Anuradha, and Chandrashekhar M. Tavade. "Methods on Real Time Gesture Recognition System." International Journal of Engineering & Technology 7, no. 3.12 (July 20, 2018): 982. http://dx.doi.org/10.14419/ijet.v7i3.12.17617.

Full text
Abstract:
Gesture recognition encompasses a variety of methods, techniques, and associated algorithms. It draws on simple, basic sign cues such as hand movements, lip position, eyeball position, and eyelid position. This paper compares methods for image capture, gesture recognition, gesture tracking, gesture segmentation, and smoothing, weighing the advantages of different gesture recognition approaches and their applications. Gesture recognition is now widely used in the gaming industry, in biomedical applications, and in medical diagnostics for deaf and nonspeaking people. Owing to their wide applicability, high efficiency, high accuracy, and low cost, gestures are used in many applications, including robotics. To build a gesture-based human-computer interaction (HCI) method, it is necessary to identify the correct, meaningful gesture from different gesture images. Gesture recognition also avoids the need for costly hardware devices to interpret user activity; for example, the use of I/O devices such as keyboards and mice can be limited.
APA, Harvard, Vancouver, ISO, and other styles
41

Izuta, Ryo, Kazuya Murao, Tsutomu Terada, and Masahiko Tsukamoto. "Early gesture recognition method with an accelerometer." International Journal of Pervasive Computing and Communications 11, no. 3 (September 7, 2015): 270–87. http://dx.doi.org/10.1108/ijpcc-03-2015-0016.

Full text
Abstract:
Purpose – This paper aims to propose a method for recognizing gestures at an early stage of their motion. An accelerometer is installed in most current mobile devices, such as iPhones, Android-powered devices, and video game controllers for the Wii or PS3, enabling easy and intuitive operation. Many gesture-based user interfaces that use accelerometers are therefore expected to appear in the future. Gesture recognition systems with an accelerometer generally have to construct models from a user's gesture data before use and then recognize unknown gestures by comparing them with the models. Because the recognition process generally starts only after the gesture has finished, the recognition result and any feedback are delayed, which may cause users to retry gestures and degrades the interface usability. Design/methodology/approach – The simplest way to achieve early recognition is to start it at a fixed time after a gesture begins. However, accuracy would decrease for gestures whose early stages resemble other gestures. Moreover, the recognition timing would have to be capped by the length of the shortest gesture, which may be too early for longer gestures, while a later timing would exceed the length of the shorter gestures. In addition, a proper length of training data has to be found, since partway through a gesture the input matches only a prefix of the full-length training data. To recognize gestures at an early stage, both a proper recognition timing and a proper length of training data must be determined. This paper proposes an early gesture recognition method that sequentially calculates the distance between the input and the training data and outputs a result only when one candidate has a sufficiently stronger likelihood than the other candidates, so that similar but incorrect gestures are not output. Findings – The proposed method was experimentally evaluated on 27 kinds of gestures, and it was confirmed that the recognition process finished, on average, 1,000 ms before the end of the gestures without deteriorating accuracy. Gestures were recognized at an early stage of motion, which should improve interface usability and reduce the number of incorrect operations such as retried gestures. Moreover, a gesture-based photo viewer was implemented as a practical application of the proposed method; the early gesture recognition system was used in a live unscripted performance, confirming its effectiveness. Originality/value – Gesture recognition methods with accelerometers generally learn a given user's gesture data before the system is used and then recognize unknown gestures by comparing them with the training data. The recognition process starts after a gesture has finished; therefore, any interaction or feedback depending on the recognition result is delayed. For example, an image on a smartphone screen rotates a few seconds after the device has been tilted, which may cause the user to tilt the smartphone again even if the first gesture was correctly recognized. Although many studies on gesture recognition using accelerometers have been conducted, to the best of the authors' knowledge none has taken these output delays into consideration.
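The key idea in the design section, emitting a result as soon as one candidate clearly dominates the others, can be sketched as follows. This is an illustrative reconstruction, not the authors' code; in particular, the prefix-distance measure and the margin threshold are assumptions.

```python
# Illustrative sketch only: early recognition by emitting a result as soon as
# one candidate template is sufficiently better than the runner-up. The
# margin criterion is an assumption; the paper's exact test may differ.
import numpy as np

def early_recognize(stream, templates: dict, margin: float = 1.5):
    """stream: iterable of 3-axis samples; templates: {label: (T, 3) array}.
    Returns a label as soon as the best prefix match beats the runner-up
    by `margin`, or the best label seen when the stream ends."""
    frames, best = [], None
    for sample in stream:
        frames.append(sample)
        x = np.asarray(frames)
        # Compare the input prefix against an equal-length prefix of each template.
        scores = {
            label: np.linalg.norm(x - tpl[: len(x)]) / len(x)
            for label, tpl in templates.items()
            if len(tpl) >= len(x)
        }
        if len(scores) < 2:  # too few templates still long enough to compare
            break
        ranked = sorted(scores.items(), key=lambda kv: kv[1])
        best = ranked[0][0]
        if ranked[1][1] > margin * ranked[0][1]:  # clear winner: stop early
            return best
    return best
```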
APA, Harvard, Vancouver, ISO, and other styles
42

Tran, Dinh-Son, Ngoc-Huynh Ho, Hyung-Jeong Yang, Eu-Tteum Baek, Soo-Hyung Kim, and Gueesang Lee. "Real-Time Hand Gesture Spotting and Recognition Using RGB-D Camera and 3D Convolutional Neural Network." Applied Sciences 10, no. 2 (January 20, 2020): 722. http://dx.doi.org/10.3390/app10020722.

Full text
Abstract:
Using hand gestures is a natural method of interaction between humans and computers. We use gestures to express meaning and thoughts in our everyday conversations. Gesture-based interfaces are used in many applications across a variety of fields, such as smartphones, televisions (TVs), and video gaming. With advancements in technology, hand gesture recognition is becoming an increasingly promising and attractive technique in human–computer interaction. In this paper, we propose a novel method for real-time fingertip detection and hand gesture recognition using an RGB-D camera and a 3D convolutional neural network (3DCNN). This system can accurately and robustly extract fingertip locations and recognize gestures in real time. We demonstrate the accuracy and robustness of the interface by evaluating hand gesture recognition across a variety of gestures. In addition, we developed a tool for manipulating computer programs to show the practicality of hand gesture recognition. The experimental results showed that our system achieves a high level of hand gesture recognition accuracy, making it a promising approach to gesture-based interfaces for human–computer interaction.
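To make the 3DCNN idea concrete, here is a minimal classifier sketch in PyTorch. The framework and architecture are assumptions, not the paper's model; the input is a clip of RGB-D frames shaped (batch, 4 channels, T frames, H, W).

```python
# A minimal 3D-CNN gesture classifier sketch (illustrative, not the paper's
# architecture). 3D convolutions mix spatial and temporal information.
import torch
import torch.nn as nn

class Gesture3DCNN(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(4, 16, kernel_size=3, padding=1),  # RGB-D -> 16 channels
            nn.ReLU(inplace=True),
            nn.MaxPool3d(2),                             # halve T, H, W
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),                     # global spatio-temporal pool
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# Usage: logits = Gesture3DCNN(num_classes=10)(torch.randn(2, 4, 16, 64, 64))
```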
APA, Harvard, Vancouver, ISO, and other styles
43

Procházka, David, Jaromír Landa, Tomáš Koubek, and Vít Ondroušek. "Mainstreaming gesture based interfaces." Acta Universitatis Agriculturae et Silviculturae Mendelianae Brunensis 61, no. 7 (2013): 2655–60. http://dx.doi.org/10.11118/actaun201361072655.

Full text
Abstract:
Gestures are a common way of interacting with mobile devices. They emerged especially with the launch of the iPhone, and gestures in currently used devices are usually based on the original set presented by Apple in iOS (the iPhone operating system). There is therefore wide agreement on mobile gesture design. In recent years, experiments with gesture usage have also appeared in other areas of consumer electronics and computing, for example televisions and large projections. These gestures can be described as spatial or 3D gestures: they are connected with a natural 3D environment rather than with a flat 2D screen. Nevertheless, it is hard to find comparable design agreement for spatial gestures; various projects are based on completely different gesture sets. This situation confuses users and slows down the adoption of spatial gestures. This paper focuses on the standardization of spatial gestures. The first part reviews projects using spatial gestures, with the main emphasis on usability. On the basis of our analysis, we argue that usability is the key issue enabling wide adoption: mobile gestures could emerge easily because the iPhone gestures were natural and therefore did not need to be learned. The second part of the paper outlines the design and implementation of our gesture-controlled presentation software, together with usability testing results. We tested our application on a group of users who had not been instructed in the implemented gesture design and compared these results with those obtained with our original implementation. The evaluation can serve as a basis for implementing similar projects.
APA, Harvard, Vancouver, ISO, and other styles
44

Lee, Seongjo, Sohyun Sim, Kyhyun Um, Young-Sik Jeong, Seung-won Jung, and Kyungeun Cho. "Development of a Hand Gestures SDK for NUI-Based Applications." Mathematical Problems in Engineering 2015 (2015): 1–10. http://dx.doi.org/10.1155/2015/212639.

Full text
Abstract:
Concomitant with the advent of the ubiquitous era, research into better human–computer interaction (HCI) through human-focused interfaces has intensified. Natural user interfaces (NUI), in particular, are being actively investigated with the objective of simpler, more intuitive interaction between humans and computers. However, developing NUI-based applications without special NUI-related knowledge is difficult. This paper proposes an NUI-specific SDK, called "Gesture SDK," for the development of NUI-based applications. Gesture SDK provides a gesture generator with which developers can directly define gestures, and a "Gesture Recognition Component" that enables the defined gestures to be recognized by applications. We generated gestures using the proposed SDK and developed "Smart Interior," an NUI-based application using the Gesture Recognition Component. The experimental results indicate that the recognition rate of the generated gestures was 96% on average.
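To illustrate the two pieces the abstract names, a gesture generator and a recognition component, here is a hypothetical API sketch. All class and method names are illustrative inventions, not the actual SDK interface.

```python
# Hypothetical API sketch of a generator for defining gestures and a
# recognition component that fires callbacks. Names are illustrative only.
from typing import Callable
import numpy as np

class GestureGenerator:
    """Lets a developer define a named gesture from recorded sample traces."""
    def __init__(self):
        self.gestures: dict[str, np.ndarray] = {}

    def define(self, name: str, samples: list[np.ndarray]) -> None:
        # Store the mean trace as a simple template
        # (assumes equal-length, time-normalized samples).
        self.gestures[name] = np.mean(np.stack(samples), axis=0)

class GestureRecognitionComponent:
    """Matches incoming traces against defined gestures and fires callbacks."""
    def __init__(self, generator: GestureGenerator):
        self.templates = generator.gestures
        self.handlers: dict[str, Callable[[], None]] = {}

    def on(self, name: str, handler: Callable[[], None]) -> None:
        self.handlers[name] = handler

    def feed(self, trace: np.ndarray) -> None:
        # Nearest template wins (assumes trace resampled to template shape).
        name = min(self.templates,
                   key=lambda n: np.linalg.norm(self.templates[n] - trace))
        self.handlers.get(name, lambda: None)()
```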
APA, Harvard, Vancouver, ISO, and other styles
45

Wang, Tao, Xiaolong Cai, Liping Wang, and Haoye Tian. "Interactive Design of 3D Dynamic Gesture Based on SVM-LSTM Model." International Journal of Mobile Human Computer Interaction 10, no. 3 (July 2018): 49–63. http://dx.doi.org/10.4018/ijmhci.2018070104.

Full text
Abstract:
Visual hand gesture interaction is one of the main modes of human-computer interaction, offering users more interactive degrees of freedom and a more realistic interactive experience. The authors present a hybrid SVM-LSTM model and design a three-dimensional dynamic gesture interaction system. The system uses Leap Motion to capture gesture information, combining the SVM's strong static gesture classification ability with the LSTM's strength at processing variable-length time-series gestures, enabling real-time recognition of user gestures. The gesture interaction method can automatically determine the start and end of gestures and achieves a recognition accuracy of 96.4%, greatly reducing the cost of learning. Experiments show that the proposed gesture interaction method is effective: in a simulated mobile environment, the average gesture prediction takes only 0.15 seconds, and ordinary users can quickly master the method.
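The SVM/LSTM division of labor the abstract describes can be sketched as two branches: an SVM labeling static hand poses per frame, and an LSTM classifying variable-length dynamic sequences. This is an illustrative reconstruction under those assumptions, not the authors' implementation.

```python
# Illustrative sketch of an SVM + LSTM split: SVM for static per-frame pose
# classification, LSTM for variable-length dynamic gesture sequences.
import torch
import torch.nn as nn
from sklearn.svm import SVC

# Static branch: per-frame hand-feature vectors -> pose label.
static_clf = SVC(kernel="rbf")
# static_clf.fit(frame_features, pose_labels)  # (N, D) features, e.g. from Leap Motion

class DynamicGestureLSTM(nn.Module):
    """Dynamic branch: a batch of frame-feature sequences -> gesture class."""
    def __init__(self, feat_dim: int, hidden: int, num_classes: int):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        _, (h_n, _) = self.lstm(x)   # final hidden state summarizes the sequence
        return self.head(h_n[-1])

# Usage: logits = DynamicGestureLSTM(feat_dim=30, hidden=64, num_classes=8)(
#     torch.randn(4, 50, 30))  # 4 sequences of 50 frames, 30 features each
```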
APA, Harvard, Vancouver, ISO, and other styles
46

BHASKORO, SUSETYO BAGAS, and MUHAMMAD AZHAR ABDUL AZIZ. "Pengendalian Gerak Robot menggunakan Semantik Citra Gestur Tangan Manusia" [Robot Motion Control Using the Semantics of Human Hand Gesture Images]. ELKOMIKA: Jurnal Teknik Energi Elektrik, Teknik Telekomunikasi, & Teknik Elektronika 8, no. 1 (January 31, 2020): 80. http://dx.doi.org/10.26760/elkomika.v8i1.80.

Full text
Abstract:
ABSTRACT (translated from Indonesian): Remote control of robots is becoming essential in industrial automation. The objective of this research was to develop a wheeled arm robot that moves objects using semantic commands derived from images of human hand gestures. Evaluation was divided into two categories: semantic detection of human hand gesture images and free-motion maneuvering of the wheeled arm robot. The success rate of hand gesture identification was 77.73%. Forward maneuvers had an average error of 4.125%, backward motion 4.85%, left rotation 14.36%, and right rotation 6.66%. Keywords: wheeled arm robot, semantic vision, human hand gesture, motion maneuvers, Lab-VIEW.
APA, Harvard, Vancouver, ISO, and other styles
47

Gonzalez, Glebys, Naveen Madapana, Rahul Taneja, Lingsong Zhang, Richard Rodgers, and Juan P. Wachs. "Looking Beyond the Gesture: Vocabulary Acceptability Criteria for Gesture Elicitation Studies." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 62, no. 1 (September 2018): 997–1001. http://dx.doi.org/10.1177/1541931218621230.

Full text
Abstract:
The choice of which gestures should be part of a gesture language is a critical step in the design of gesture-based interfaces. This step is especially important when time and accuracy are key factors of the user experience, such as gestural interfaces for vehicle control and for sterile control of a picture archiving and communication system (PACS) in the operating room (OR). Agreement studies are commonly used to find the gesture preferences of end users. These studies hypothesize that the best available gesture lexicon is the one preferred by a majority. However, such agreement approaches offer no metric for assessing the qualitative aspects of gestures. In this work, we propose an experimental framework to quantify, compare, and evaluate gestures, grounded in the expert knowledge of speech and language professionals (SLPs). The development consisted of three studies: 1) creation, 2) evaluation, and 3) validation. In the creation study, we followed an adapted version of the Delphi interview/discussion procedure with SLPs to obtain the Vocabulary Acceptability Criteria (VAC) for evaluating gestures. Next, in the evaluation study, a modified method of pairwise comparisons was used to rank and quantify the gestures on each criterion (VAC). Lastly, in the validation study, we formulated an odd-one-out procedure to show that the VAC values of a gesture are representative and sufficiently distinctive to select that particular gesture from a pool of gestures. We applied this framework to the gestures obtained from a gesture elicitation study conducted with nine neurosurgeons to control an imaging software system. In addition, 29 SLPs, comprising 17 experts and 12 graduate students, participated in the VAC study. The best lexicons from the available pool were obtained through both agreement and VAC metrics. We used binomial tests to show that the results obtained from the validation procedure are significantly better than the baseline. These results support our hypothesis that the VAC are representative of the gestures and that subjects can select the right gesture given its VAC values.
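As a simple stand-in for the pairwise-comparison ranking the abstract mentions (the study's "modified method" is not specified here), items can be ranked by counting wins across all pairs:

```python
# A minimal sketch of ranking items from pairwise comparisons by win counts;
# illustrative only, not the study's modified pairwise-comparison method.
from collections import Counter
from itertools import combinations

def rank_by_pairwise(items, prefer):
    """prefer(a, b) -> the item a rater prefers; returns items ranked by wins."""
    wins = Counter({item: 0 for item in items})
    for a, b in combinations(items, 2):
        wins[prefer(a, b)] += 1
    return [item for item, _ in wins.most_common()]

# Usage (toy preference function for illustration):
# ranking = rank_by_pairwise(["point", "swipe", "grab"],
#                            prefer=lambda a, b: a if len(a) < len(b) else b)
```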
APA, Harvard, Vancouver, ISO, and other styles
48

Bomsdorf, Birgit, Rainer Blum, and Daniel Künkel. "Towards ProGesture, a Tool Supporting Early Prototyping of 3D-Gesture Interaction." International Journal of People-Oriented Programming 4, no. 2 (July 2015): 54–70. http://dx.doi.org/10.4018/ijpop.2015070103.

Full text
Abstract:
Development of gesture interaction requires a combination of three design matters: gesture, presentation, and dialog. However, current work on rapid prototyping focuses on gestures, taking only the presentation into account. Model-based development incorporating gestures, in contrast, supports the gesture and dialog dimensions. The work on ProGesture aims at a rapid prototyping tool that supports coherent development within the whole gesture-presentation-dialog design space. This contribution introduces a first version of ProGesture. Gestures are specified by demonstrating the movements or are composed from other gestures. The tool also provides a dialog editor that allows gestures to be assigned to dialog models. Based on its executable runtime system, the models and gestures can be tested and evaluated. In addition, gestures can be bound to first presentations or existing applications and evaluated in their context.
APA, Harvard, Vancouver, ISO, and other styles
49

Driskell, James E., and Paul H. Radtke. "The Effect of Gesture on Speech Production and Comprehension." Human Factors: The Journal of the Human Factors and Ergonomics Society 45, no. 3 (September 2003): 445–54. http://dx.doi.org/10.1518/hfes.45.3.445.27258.

Full text
Abstract:
Hand gestures are ubiquitous in communication. However, there is considerable debate regarding the fundamental role that gesture plays in communication and, subsequently, regarding the value of gesture for telecommunications. Controversy exists regarding whether gesture has a primarily communicative function (enhancing listener comprehension) or a primarily noncommunicative function (enhancing speech production). Moreover, some have argued that gesture seems to enhance listener comprehension only because of the effect gesture has on speech production. The purpose of this study was to examine the extent to which gesture enhances listener comprehension and the extent to which the effect of gesture on listener comprehension is mediated by the effects of gesture on speech production. Results indicated that gesture enhanced both listener comprehension and speech production. When the effects of gesture on speech production were controlled, the relationship between gesture and listener comprehension was reduced but still remained significant. These results suggest that gesture aids the listener as well as the speaker and that gesture has a direct effect on listener comprehension, independent of the effects gesture has on speech production. Implications for understanding the value of gestural information in telecommunications are discussed. Potential applications of this research include the design of computer-mediated communication systems and displays in which the visibility of gestures may be beneficial.
APA, Harvard, Vancouver, ISO, and other styles
50

Xi, Xiaotong, Peng Li, Florence Baills, and Pilar Prieto. "Hand Gestures Facilitate the Acquisition of Novel Phonemic Contrasts When They Appropriately Mimic Target Phonetic Features." Journal of Speech, Language, and Hearing Research 63, no. 11 (November 13, 2020): 3571–85. http://dx.doi.org/10.1044/2020_jslhr-20-00084.

Full text
Abstract:
Purpose Research has shown that observing hand gestures mimicking pitch movements or rhythmic patterns can improve the learning of second language (L2) suprasegmental features. However, less is known about the effects of hand gestures on the learning of novel phonemic contrasts. This study examines (a) whether hand gestures mimicking phonetic features can boost L2 segment learning by naive learners and (b) whether a mismatch between the hand gesture form and the target phonetic feature influences the learning effect. Method Fifty Catalan native speakers undertook a short multimodal training session on two types of Mandarin Chinese consonants (plosives and affricates) in either of two conditions: Gesture and No Gesture. In the Gesture condition, a fist-to-open-hand gesture was used to mimic air burst, while the No Gesture condition included no such use of gestures. Crucially, while the hand gesture appropriately mimicked the air burst produced in plosives, this was not the case for affricates. Before and after training, participants were tested on two tasks, namely, the identification task and the imitation task. Participants' speech output was rated by five Chinese native speakers. Results The perception results showed that training with or without gestures yielded similar degrees of improvement for the identification of aspiration contrasts. By contrast, the production results showed that, while training without gestures did not help improve L2 pronunciation, training with gestures improved pronunciation, but only when the given gestures appropriately mimicked the phonetic properties they represented. Conclusions Results revealed that the efficacy of observing hand gestures on the learning of nonnative phonemes depends on the appropriateness of the form of those gestures relative to the target phonetic features. That is, hand gestures seem to be more useful when they appropriately mimic phonetic features. Supplemental Material https://doi.org/10.23641/asha.13105442
APA, Harvard, Vancouver, ISO, and other styles