
Journal articles on the topic 'Gesture'


Consult the top 50 journal articles for your research on the topic 'Gesture.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Pika, Simone, Elena Nicoladis, and Paula F. Marentette. "A cross-cultural study on the use of gestures: Evidence for cross-linguistic transfer?" Bilingualism: Language and Cognition 9, no. 3 (2006): 319–27. http://dx.doi.org/10.1017/s1366728906002665.

Abstract:
Anecdotal reports provide evidence of so-called “hybrid” gesturers, whose non-verbal behavior of one language/culture becomes visible in the other. The direction of this gestural transfer seems to occur from a high to a low frequency gesture language. The purpose of this study was therefore to test systematically 1) whether gestural transfer occurs from a high frequency gesture language to a low frequency gesture language, 2) if the frequency of production of some gesture types is more likely to be transferred than others, and 3) whether gestural transfer can also occur bi-directionally. To addr
2

Braddock, Barbara A., Christina Gabany, Meera Shah, Eric S. Armbrecht, and Kimberly A. Twyman. "Patterns of Gesture Use in Adolescents With Autism Spectrum Disorder." American Journal of Speech-Language Pathology 25, no. 3 (2016): 408–15. http://dx.doi.org/10.1044/2015_ajslp-14-0112.

Abstract:
Purpose The purpose of this study was to examine patterns of spontaneous gesture use in a sample of adolescents with autism spectrum disorder (ASD). Method Thirty-five adolescents with ASD ages 11 to 16 years participated (mean age = 13.51 years; 29 boys, 6 girls). Participants' spontaneous speech and gestures produced during a narrative task were later coded from videotape. Parents were also asked to complete questionnaires to quantify adolescents' general communication ability and autism severity. Results No significant subgroup differences were apparent between adolescents who did not gestu
3

Sekine, Kazuki, and Miranda L. Rose. "The Relationship of Aphasia Type and Gesture Production in People With Aphasia." American Journal of Speech-Language Pathology 22, no. 4 (2013): 662–72. http://dx.doi.org/10.1044/1058-0360(2013/12-0030).

Abstract:
Purpose For many individuals with aphasia, gestures form a vital component of message transfer and are the target of speech-language pathology intervention. What remains unclear are the participant variables that predict successful outcomes from gesture treatments. The authors examined the gesture production of a large number of individuals with aphasia—in a consistent discourse sampling condition and with a detailed gesture coding system—to determine patterns of gesture production associated with specific types of aphasia. Method The authors analyzed story retell samples from AphasiaBank (Tal
4

Kong, Anthony Pak-Hin, Sam-Po Law, and Gigi Wan-Chi Chak. "A Comparison of Coverbal Gesture Use in Oral Discourse Among Speakers With Fluent and Nonfluent Aphasia." Journal of Speech, Language, and Hearing Research 60, no. 7 (2017): 2031–46. http://dx.doi.org/10.1044/2017_jslhr-l-16-0093.

Abstract:
Purpose Coverbal gesture use, which is affected by the presence and degree of aphasia, can be culturally specific. The purpose of this study was to compare gesture use among Cantonese-speaking individuals: 23 neurologically healthy speakers, 23 speakers with fluent aphasia, and 21 speakers with nonfluent aphasia. Method Multimedia data of discourse samples from these speakers were extracted from the Cantonese AphasiaBank. Gestures were independently annotated on their forms and functions to determine how gesturing rate and distribution of gestures differed across speaker groups. A multiple reg
5

Parrill, Fey, Brittany Lavanty, Austin Bennett, Alayna Klco, and Ozlem Ece Demir-Lira. "The relationship between character viewpoint gesture and narrative structure in children." Language and Cognition 10, no. 3 (2018): 408–34. http://dx.doi.org/10.1017/langcog.2018.9.

Abstract:
When children tell stories, they gesture; their gestures can predict how their narrative abilities will progress. Five-year-olds who gestured from the point of view of a character (CVPT gesture) when telling stories produced better-structured narratives at later ages (Demir, Levine, & Goldin-Meadow, 2014). But does gesture just predict narrative structure, or can asking children to gesture in a particular way change their narratives? To explore this question, we instructed children to produce CVPT gestures and measured their narrative structure. Forty-four kindergarteners were aske
6

Cooperrider, Kensy. "Foreground gesture, background gesture." Gesture 16, no. 2 (2017): 176–202. http://dx.doi.org/10.1075/gest.16.2.02coo.

Abstract:
Do speakers intend their gestures to communicate? Central as this question is to the study of gesture, researchers cannot seem to agree on the answer. According to one common framing, gestures are an “unwitting” window into the mind (McNeill, 1992); but, according to another common framing, they are designed along with speech to form “composite utterances” (Enfield, 2009). These two framings correspond to two cultures within gesture studies – the first cognitive and the second interactive in orientation – and they appear to make incompatible claims. In this article I attempt to bridge
7

Casey, Shannon, Karen Emmorey, and Heather Larrabee. "The effects of learning American Sign Language on co-speech gesture." Bilingualism: Language and Cognition 15, no. 4 (2012): 677–86. http://dx.doi.org/10.1017/s1366728911000575.

Abstract:
Given that the linguistic articulators for sign language are also used to produce co-speech gesture, we examined whether one year of academic instruction in American Sign Language (ASL) impacts the rate and nature of gestures produced when speaking English. A survey study revealed that 75% of ASL learners (N = 95), but only 14% of Romance language learners (N = 203), felt that they gestured more after one year of language instruction. A longitudinal study confirmed this perception. Twenty-one ASL learners and 20 Romance language learners (French, Italian, Spanish) were filmed re-telling a cart
8

Foran, Lori, and Brenda Beverly. "Points to Ponder: Gesture and Language in Math Talk." Perspectives on Language Learning and Education 22, no. 2 (2015): 72–81. http://dx.doi.org/10.1044/lle22.2.71.

Abstract:
With the introduction of the Common Core State Standards, mathematical learning and problem solving in the academic environment are more linguistically demanding. Speech-language pathologists (SLPs) can support students with language impairment and teachers charged with new curricular demands. The role of gestural communication as a support for children's math learning and as an instructional strategy during math education is reviewed. Findings are presented from a recent pilot study on the gesture and language production of 3-, 4- and 5-year-old children as they solve early arithmetic and fractio
9

Jasim, Mahmood, Tao Zhang, and Md Hasanuzzaman. "A Real-Time Computer Vision-Based Static and Dynamic Hand Gesture Recognition System." International Journal of Image and Graphics 14, no. 01n02 (2014): 1450006. http://dx.doi.org/10.1142/s0219467814500065.

Abstract:
This paper presents a novel method for computer vision-based static and dynamic hand gesture recognition. Haar-like feature-based cascaded classifier is used for hand area segmentation. Static hand gestures are recognized using linear discriminant analysis (LDA) and local binary pattern (LBP)-based feature extraction methods. Static hand gestures are classified using nearest neighbor (NN) algorithm. Dynamic hand gestures are recognized using the novel text-based principal directional features (PDFs), which are generated from the segmented image sequences. Longest common subsequence (LCS) algor
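The recognition pipeline sketched in this abstract (Haar-like cascade for hand segmentation, LBP features, nearest-neighbour classification of static gestures) can be illustrated roughly as follows. This is not the authors' implementation: the cascade file `hand_cascade.xml`, the 64x64 ROI size, and the training data are hypothetical placeholders, and OpenCV, scikit-image, and scikit-learn stand in for the components named in the abstract.

```python
# Hedged sketch of a cascade -> LBP -> nearest-neighbour static gesture pipeline.
# "hand_cascade.xml" and the training data are hypothetical placeholders.
import cv2
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.neighbors import KNeighborsClassifier

def lbp_histogram(gray_roi, points=8, radius=1):
    """Uniform LBP histogram used as the feature vector for a hand region."""
    lbp = local_binary_pattern(gray_roi, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist

def detect_hand(gray, cascade):
    """Return the first hand region found by the Haar-like cascade, or None."""
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(boxes) == 0:
        return None
    x, y, w, h = boxes[0]
    return cv2.resize(gray[y:y + h, x:x + w], (64, 64))

cascade = cv2.CascadeClassifier("hand_cascade.xml")   # hypothetical model file
knn = KNeighborsClassifier(n_neighbors=1)             # nearest-neighbour classifier

# Training would use a labelled static-gesture dataset (hypothetical names):
# knn.fit([lbp_histogram(roi) for roi in train_rois], train_labels)
# At run time, classify the segmented hand region of each frame:
# roi = detect_hand(gray_frame, cascade)
# if roi is not None:
#     gesture = knn.predict([lbp_histogram(roi)])[0]
```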
10

Meng, Yuting, Haibo Jiang, Nengquan Duan, and Haijun Wen. "Real-Time Hand Gesture Monitoring Model Based on MediaPipe’s Registerable System." Sensors 24, no. 19 (2024): 6262. http://dx.doi.org/10.3390/s24196262.

Abstract:
Hand gesture recognition plays a significant role in human-to-human and human-to-machine interactions. Currently, most hand gesture detection methods rely on fixed hand gesture recognition. However, with the diversity and variability of hand gestures in daily life, this paper proposes a registerable hand gesture recognition approach based on Triple Loss. By learning the differences between different hand gestures, it can cluster them and identify newly added gestures. This paper constructs a registerable gesture dataset (RGDS) for training registerable hand gesture recognition models. Addition
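A registerable scheme of this kind can be approximated by storing one embedding per enrolled gesture and matching new samples by distance. The sketch below uses MediaPipe Hands for landmarks but replaces the triplet-loss-trained encoder with a trivial normalisation step (`embed`), and the distance threshold is an assumption, so it only illustrates the register-then-match idea rather than the paper's model.

```python
# Hedged sketch: register gestures as landmark embeddings, match by cosine distance.
# The real system trains an encoder with a triplet-style loss; `embed` below is only
# a placeholder (wrist-centred, scale-normalised landmarks flattened to a vector).
import numpy as np
import mediapipe as mp

mp_hands = mp.solutions.hands

def embed(hand_landmarks):
    pts = np.array([[lm.x, lm.y, lm.z] for lm in hand_landmarks.landmark])
    pts -= pts[0]                               # centre on the wrist (landmark 0)
    scale = np.linalg.norm(pts, axis=1).max() or 1.0
    return (pts / scale).ravel()

registry = {}                                   # gesture name -> stored embedding

def register(name, emb):
    registry[name] = emb

def recognise(emb, threshold=0.15):
    best, best_d = None, np.inf
    for name, ref in registry.items():
        d = 1 - np.dot(emb, ref) / (np.linalg.norm(emb) * np.linalg.norm(ref))
        if d < best_d:
            best, best_d = name, d
    return best if best_d < threshold else "unknown"

# Usage with a BGR frame from OpenCV (assumed):
# with mp_hands.Hands(max_num_hands=1) as hands:
#     result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
#     if result.multi_hand_landmarks:
#         print(recognise(embed(result.multi_hand_landmarks[0])))
```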
11

Kelly, Spencer D., Peter Creigh, and James Bartolotti. "Integrating Speech and Iconic Gestures in a Stroop-like Task: Evidence for Automatic Processing." Journal of Cognitive Neuroscience 22, no. 4 (2010): 683–94. http://dx.doi.org/10.1162/jocn.2009.21254.

Abstract:
Previous research has demonstrated a link between language and action in the brain. The present study investigates the strength of this neural relationship by focusing on a potential interface between the two systems: cospeech iconic gesture. Participants performed a Stroop-like task in which they watched videos of a man and a woman speaking and gesturing about common actions. The videos differed as to whether the gender of the speaker and gesturer was the same or different and whether the content of the speech and gesture was congruent or incongruent. The task was to identify whether a man or
12

Hupp, Julie M., and Mary C. Gingras. "The role of gesture meaningfulness in word learning." Gesture 15, no. 3 (2016): 340–56. http://dx.doi.org/10.1075/gest.15.3.04hup.

Abstract:
Adults regularly use word-gesture combinations in communication, and meaningful gestures facilitate word learning. However, it is not clear whether this benefit of gestures is due to the speaker’s movement increasing the listener’s attention or whether the gesture needs to be meaningful, whether the difficulty of the task results in disparate reliance on gestures, and whether word classes are differentially affected by gestures. In the present research, participants were measured on their novel word learning across four gesture conditions: meaningful gesture, beat gesture, nonsense gesture, and no gesture with e
13

Priyanayana, Kodikarage Sahan, A. G. Buddhika P. Jayasekara, and R. A. R. C. Gopura. "Filtering Unintentional Hand Gestures to Enhance the Understanding of Multimodal Navigational Commands in an Intelligent Wheelchair." Electronics 14, no. 10 (2025): 1909. https://doi.org/10.3390/electronics14101909.

Abstract:
Natural human–human communication consists of multiple modalities interacting together. When an intelligent robot or wheelchair is being developed, it is important to consider this aspect. One of the most common modality pairs in multimodal human–human communication is speech–hand gesture interaction. However, not all the hand gestures that can be identified in this type of interaction are useful. Some hand movements can be misinterpreted as useful hand gestures or intentional hand gestures. Failing to filter out these unintentional gestures could lead to severe faulty identifications of impor
14

Villarreal-Narvaez, Santiago, Arthur Sluÿters, Jean Vanderdonckt, and Efrem Mbaki Luzayisu. "Theoretically-defined vs. user-defined squeeze gestures." Proceedings of the ACM on Human-Computer Interaction 6, ISS (2022): 73–102. http://dx.doi.org/10.1145/3567805.

Abstract:
This paper presents theoretical and empirical results about user-defined gesture preferences for squeezable objects by focusing on a particular object: a deformable cushion. We start with a theoretical analysis of potential gestures for this squeezable object by defining a multi-dimension taxonomy of squeeze gestures composed of 82 gesture classes. We then empirically analyze the results of a gesture elicitation study resulting in a set of N=32 participants X 21 referents = 672 elicited gestures, further classified into 26 gesture classes. We also contribute to the practice of gesture elicitat
15

Bhuyan, M. K., P. K. Bora, and D. Ghosh. "An Integrated Approach to the Recognition of a Wide Class of Continuous Hand Gestures." International Journal of Pattern Recognition and Artificial Intelligence 25, no. 02 (2011): 227–52. http://dx.doi.org/10.1142/s0218001411008592.

Abstract:
Gesture segmentation is a method that distinguishes meaningful gestures from unintentional movements. Gesture segmentation, which locates the start and end points of a gesture in an input sequence, is a prerequisite stage for continuous gesture recognition. Yet, this is an extremely difficult task due to both the multitude of possible gesture variations in spatio-temporal space and the co-articulation/movement epenthesis of successive gestures. In this paper, we focus our attention on coping with this problem associated with continuous gesture recognition. This requires gesture spotting that
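Gesture spotting can be illustrated with a much simpler stand-in than the integrated approach described here: threshold the frame-to-frame motion energy of the hand and keep sufficiently long high-motion runs as candidate gestures. The threshold and minimum segment length below are arbitrary illustrative values, not the paper's parameters.

```python
# Hedged sketch of gesture spotting by motion energy: a gesture is "spotted" while
# frame-to-frame hand motion stays above a threshold for a minimum number of frames.
import numpy as np

def spot_gestures(hand_positions, motion_thresh=0.02, min_len=5):
    """hand_positions: (T, 2) array of normalised hand centroids per frame.
    Returns a list of (start_frame, end_frame) candidate gesture segments."""
    motion = np.linalg.norm(np.diff(hand_positions, axis=0), axis=1)
    segments, start = [], None
    for t, m in enumerate(motion):
        if m > motion_thresh and start is None:
            start = t                        # candidate gesture begins
        elif m <= motion_thresh and start is not None:
            if t - start >= min_len:
                segments.append((start, t))  # keep only sufficiently long segments
            start = None
    if start is not None and len(motion) - start >= min_len:
        segments.append((start, len(motion)))
    return segments

# Example on synthetic data: still hand, a burst of movement, still again.
pos = np.concatenate([np.zeros((20, 2)),
                      np.cumsum(np.full((15, 2), 0.05), axis=0),
                      np.full((20, 2), 0.75)])
print(spot_gestures(pos))   # -> one segment covering the moving portion
```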
16

Lubis, Muhammad Firdaus Syawaludin, Mikail Fauzan Athallah, and Catur Apriono. "Pengembangan dan Evaluasi Agen Virtual dengan Model Generasi Gestur berbasis Aturan Sederhana [Development and Evaluation of a Virtual Agent with a Simple Rule-Based Gesture Generation Model]." ELKOMIKA: Jurnal Teknik Energi Elektrik, Teknik Telekomunikasi, & Teknik Elektronika 12, no. 4 (2024): 953. https://doi.org/10.26760/elkomika.v12i4.953.

Abstract:
Previous gesture-generation studies have highlighted the benefits of deep learning-based approaches for producing human-like movements; however, such approaches require large datasets and intensive computation. The authors' model distinguishes between short and long dialogues, producing context-specific gestures for short exchanges (greetings, farewells, agreement/disagreement) and emotion-based gestures for longer dialogues (neutral, happy, aggressive). The authors compared the system's performance against ground truth gestures, random gestures, and idling gestures, using
17

K, Srinivas, and Manoj Kumar Rajagopal. "Study of Hand Gesture Recognition and Classification." Asian Journal of Pharmaceutical and Clinical Research 10, no. 13 (2017): 25. http://dx.doi.org/10.22159/ajpcr.2017.v10s1.19540.

Abstract:
To recognize different hand gestures and achieve efficient classification to understand static and dynamic hand movements used for communications. Static and dynamic hand movements are first captured using gesture recognition devices including Kinect device, hand movement sensors, connecting electrodes, and accelerometers. These gestures are processed using hand gesture recognition algorithms such as multivariate fuzzy decision tree, hidden Markov models (HMM), dynamic time warping framework, latent regression forest, support vector machine, and surface electromyogram. Hand movements made by bo
18

Suttora, Chiara, Annalisa Guarini, Mariagrazia Zuccarini, Arianna Aceti, Luigi Corvaglia, and Alessandra Sansavini. "Integrating Gestures and Words to Communicate in Full-Term and Low-Risk Preterm Late Talkers." International Journal of Environmental Research and Public Health 19, no. 7 (2022): 3918. http://dx.doi.org/10.3390/ijerph19073918.

Abstract:
Young children use gestures to practice communicative functions that foster their receptive and expressive linguistic skills. Studies investigating the use of gestures by late talkers are limited. This study aimed to investigate the use of gestures and gesture–word combinations and their associations with word comprehension and word and sentence production in late talkers. A further purpose was to examine whether a set of individual and environmental factors accounted for interindividual differences in late talkers’ gesture and gesture–word production. Sixty-one late talkers, including 35 full
19

Emmorey, Karen, and Shannon Casey. "Gesture, thought and spatial language." Gesture 1, no. 1 (2001): 35–50. http://dx.doi.org/10.1075/gest.1.1.04emm.

Abstract:
This study explores the conceptual and communicative roles of gesture by examining the consequences of gesture prevention for the type of spatial language used to solve a spatial problem. English speakers were asked to describe where to place a group of blocks so that the blocks completely filled a puzzle grid. Half the subjects were allowed to gesture and half were prevented from gesturing. In addition, half the subjects could see their addressee and half could not. Addressee visibility affected how reliant subjects were on specifying puzzle grid co-ordinates, regardless of gesture condition.
20

Alyamani, Hasan J. "Gesture Vocabularies for Hand Gestures for Controlling Air Conditioners in Home and Vehicle Environments." Electronics 12, no. 7 (2023): 1513. http://dx.doi.org/10.3390/electronics12071513.

Abstract:
With the growing prevalence of modern technologies as part of everyday life, mid-air gestures have become a promising input method in the field of human–computer interaction. This paper analyses the gestures of actual users to define a preliminary gesture vocabulary for home air conditioning (AC) systems and suggests a gesture vocabulary for controlling the AC that applies to both home and vehicle environments. In this study, a user elicitation experiment was conducted. A total of 36 participants were filmed while employing their preferred hand gestures to manipulate a home air conditioning sy
21

Nguyen, Ngoc-Hoang, Tran-Dac-Thinh Phan, Soo-Hyung Kim, Hyung-Jeong Yang, and Guee-Sang Lee. "3D Skeletal Joints-Based Hand Gesture Spotting and Classification." Applied Sciences 11, no. 10 (2021): 4689. http://dx.doi.org/10.3390/app11104689.

Abstract:
This paper presents a novel approach to continuous dynamic hand gesture recognition. Our approach contains two main modules: gesture spotting and gesture classification. Firstly, the gesture spotting module pre-segments the video sequence with continuous gestures into isolated gestures. Secondly, the gesture classification module identifies the segmented gestures. In the gesture spotting module, the motion of the hand palm and fingers are fed into the Bidirectional Long Short-Term Memory (Bi-LSTM) network for gesture spotting. In the gesture classification module, three residual 3D Convolution
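The Bi-LSTM spotting module can be sketched as a per-frame binary classifier over skeletal features; the feature dimensionality (21 joints x 3 coordinates), hidden size, and two-class output below are assumptions rather than the paper's exact configuration.

```python
# Hedged sketch of a Bi-LSTM gesture-spotting module in PyTorch: per-frame hand
# palm/finger features go in, a per-frame gesture / no-gesture score comes out.
import torch
import torch.nn as nn

class BiLSTMSpotter(nn.Module):
    def __init__(self, feat_dim=63, hidden=128, num_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, num_classes)   # gesture vs. non-gesture

    def forward(self, x):          # x: (batch, frames, feat_dim)
        out, _ = self.lstm(x)      # (batch, frames, 2 * hidden)
        return self.head(out)      # per-frame logits

# Per-frame predictions can then be grouped into isolated gestures and handed to a
# separate classifier (the paper uses 3D convolutional networks for that stage).
model = BiLSTMSpotter()
frames = torch.randn(1, 100, 63)               # one clip of 100 frames
labels = model(frames).argmax(dim=-1)          # (1, 100) gesture/no-gesture per frame
```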
22

Ma, Xianmin, and Xiaofeng Li. "Dynamic Gesture Contour Feature Extraction Method Using Residual Network Transfer Learning." Wireless Communications and Mobile Computing 2021 (October 13, 2021): 1–11. http://dx.doi.org/10.1155/2021/1503325.

Abstract:
Current dynamic gesture contour feature extraction methods suffer from low recognition rates for dynamic gesture contour features, low recognition accuracy for dynamic gesture types, long recognition times, and poor comprehensiveness. Therefore, we propose a dynamic gesture contour feature extraction method using residual network transfer learning. Sensors are used to integrate dynamic gesture information. The distance between the dynamic gesture and the acquisition device is detected by transfer learning, the dynamic gesture image is segmented, and the characteristic c
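The transfer-learning step can be illustrated with a standard recipe: reuse a residual network pretrained on ImageNet as a frozen feature extractor and retrain only the final layer on gesture classes. The class count, ResNet-18 backbone, and training hyperparameters below are assumptions, not the paper's settings.

```python
# Hedged sketch of residual-network transfer learning for gesture images: a
# pretrained ResNet-18 is frozen and only a new classification head is trained.
import torch
import torch.nn as nn
from torchvision import models

num_gesture_classes = 10                       # assumed number of dynamic gestures

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():               # freeze the pretrained backbone
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, num_gesture_classes)  # trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One training step on a (batch, 3, 224, 224) tensor of gesture frames:
frames = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, num_gesture_classes, (4,))
loss = criterion(model(frames), labels)
loss.backward()
optimizer.step()
```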
23

Wong, Alex Ming Hui, and Dae-Ki Kang. "Stationary Hand Gesture Authentication Using Edit Distance on Finger Pointing Direction Interval." Scientific Programming 2016 (2016): 1–15. http://dx.doi.org/10.1155/2016/7427980.

Abstract:
One of the latest authentication methods is by discerning human gestures. Previous research has shown that different people can develop distinct gesture behaviours even when executing the same gesture. Hand gesture is one of the most commonly used gestures in both communication and authentication research since it requires less room to perform as compared to other bodily gestures. There are different types of hand gesture and they have been researched by many researchers, but stationary hand gesture has yet to be thoroughly explored. There are a number of disadvantages and flaws in general han
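The core idea, comparing two gestures as sequences of quantised finger-pointing directions with an edit distance, can be sketched as follows. The 8-way direction quantisation and the acceptance threshold are illustrative assumptions, not the paper's parameters.

```python
# Hedged sketch: authenticate a stationary hand gesture by computing the Levenshtein
# edit distance between sequences of quantised finger-pointing directions.
import math

def quantise(dx, dy, bins=8):
    """Map a pointing vector to one of `bins` discrete direction symbols."""
    angle = math.atan2(dy, dx) % (2 * math.pi)
    return int(angle / (2 * math.pi / bins)) % bins

def edit_distance(a, b):
    """Classic Levenshtein distance between two symbol sequences."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # deletion
                                     dp[j - 1] + 1,      # insertion
                                     prev + (ca != cb))  # substitution
    return dp[-1]

def authenticate(enrolled, attempt, max_dist=2):
    """Accept the attempt if it is within `max_dist` edits of the enrolled gesture."""
    return edit_distance(enrolled, attempt) <= max_dist

enrolled = [quantise(dx, dy) for dx, dy in [(1, 0), (1, 1), (0, 1), (-1, 1)]]
attempt  = [quantise(dx, dy) for dx, dy in [(1, 0), (1, 1), (0, 1), (-1, 0)]]
print(authenticate(enrolled, attempt))   # True: one substitution away
```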
24

Sitorus, Wilda Wardani, and Rita Hartati. "Gestures and Their Meaning of Main Character in Alex Hardcastle's Movie “Senior Year” (2022)." REGISTER: Journal of English Language Teaching of FBS-Unimed 12, no. 3 (2023): 179–85. https://doi.org/10.24114/reg.v12i3.46058.

Abstract:
This study analyzes gestures and their meaning in the movie Senior Year (2022). The objectives of this study are (1) to find out the types and their meaning used by the main character, and (2) to analyze the meanings of gestures in relation to the semantic meaning of the main character. This study used a qualitative descriptive method using the theory of types and meaning of gestures by David McNeill and Levy (2005), and Semantic Meaning by Chaer (1994). This study showed that 86 data had been found. There are 12 (14%) data as iconic gestures, 13 (15%) data as deictic gestures, 16 (19%) data as met
25

Liao, Ting. "Application of Gesture Recognition Based on Spatiotemporal Graph Convolution Network in Virtual Reality Interaction." Journal of Cases on Information Technology 24, no. 5 (2022): 1–12. http://dx.doi.org/10.4018/jcit.295246.

Abstract:
Aiming at the low recognition rate of traditional gesture recognition, a gesture recognition algorithm based on a spatiotemporal graph convolution network is proposed in this paper. Firstly, the dynamic gesture data were preprocessed, including removing invalid gesture frames, completing gesture frame data, and normalizing joint length. Then, the key frame of the gesture is extracted according to the given coordinate information of the hand joint. A connected graph is constructed according to the natural connection of time series information and gesture skeleton. A spatio-temporal convolutional network
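The spatial half of such a spatiotemporal graph convolution can be sketched as one graph-convolution layer over a hand-skeleton adjacency matrix; a full ST-GCN style recogniser would stack this with temporal convolutions. The 21-joint chain graph and feature sizes below are assumptions used only to make the example runnable.

```python
# Hedged sketch of one spatial graph-convolution step over a hand skeleton, the
# building block that spatiotemporal graph-convolution gesture recognisers stack
# with temporal convolutions.
import torch
import torch.nn as nn

class SpatialGraphConv(nn.Module):
    def __init__(self, in_feats, out_feats, adjacency):
        super().__init__()
        A = adjacency + torch.eye(adjacency.size(0))          # add self-loops
        deg_inv_sqrt = A.sum(dim=1).pow(-0.5)                 # symmetric normalisation
        self.register_buffer("A_hat", deg_inv_sqrt[:, None] * A * deg_inv_sqrt[None, :])
        self.linear = nn.Linear(in_feats, out_feats)

    def forward(self, x):            # x: (batch, frames, joints, in_feats)
        x = torch.einsum("ij,btjf->btif", self.A_hat, x)      # aggregate neighbours
        return torch.relu(self.linear(x))

# Toy example: 21 hand joints connected in a chain (a real hand graph would follow
# the anatomical finger/palm connections), with 3D coordinates as input features.
num_joints = 21
adj = torch.zeros(num_joints, num_joints)
for i in range(num_joints - 1):
    adj[i, i + 1] = adj[i + 1, i] = 1.0

layer = SpatialGraphConv(in_feats=3, out_feats=16, adjacency=adj)
clip = torch.randn(2, 30, num_joints, 3)      # 2 clips, 30 frames, 21 joints, xyz
print(layer(clip).shape)                      # torch.Size([2, 30, 21, 16])
```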
26

Parrill, Fey, John Cabot, Hannah Kent, Kelly Chen, and Ann Payneau. "Do people gesture more when instructed to?" Gesture 15, no. 3 (2016): 357–71. http://dx.doi.org/10.1075/gest.15.3.05par.

Abstract:
Does being instructed to gesture encourage those with low gesture rates to produce more gestures? If participants do gesture more when asked to, do they produce the same kinds of gestures? Does this vary as a function of the type of discourse being produced? We asked participants to take part in three tasks, a quasi-conversational task, a spatial problem solving task, and a narrative task, in two phases. In the first they received no instruction, and in the second they were asked to gesture. The instruction to gesture did not change gesture rate or gesture type across phases. We suggest that w
27

Katagami, Daisuke, Yusuke Ikeda, and Katsumi Nitta. "Behavior Generation and Evaluation of Negotiation Agent Based on Negotiation Dialogue Instances." Journal of Advanced Computational Intelligence and Intelligent Informatics 14, no. 7 (2010): 840–51. http://dx.doi.org/10.20965/jaciii.2010.p0840.

Abstract:
This study focuses on gestures in negotiation dialogs. Analyzing the situation/gesture relationship, we suggest how to enable agents to conduct adequate human-like gestures and evaluated whether an agent’s gestures could give an impression similar to those by a human being. We collected negotiation dialogs to study common human gestures. We studied gesture frequency in different situations and extracted gestures with high frequency, making an agent gesture module based on the number of characteristics. Using a questionnaire, we evaluated the impressions of gestures by human users and agents, conf
28

Nyirarugira, Clementine, Hyo-rim Choi, and TaeYong Kim. "Hand Gesture Recognition Using Particle Swarm Movement." Mathematical Problems in Engineering 2016 (2016): 1–8. http://dx.doi.org/10.1155/2016/1919824.

Abstract:
We present a gesture recognition method derived from particle swarm movement for free-air hand gesture recognition. Online gesture recognition remains a difficult problem due to uncertainty in vision-based gesture boundary detection methods. We suggest an automated process of segmenting meaningful gesture trajectories based on particle swarm movement. A subgesture detection and reasoning method is incorporated in the proposed recognizer to avoid premature gesture spotting. Evaluation of the proposed method shows promising recognition results: 97.6% on preisolated gestures, 94.9% on stream gest
29

Child, Simon, Anna Theakston, and Simone Pika. "How do modelled gestures influence preschool children’s spontaneous gesture production?" Gesture 14, no. 1 (2014): 1–25. http://dx.doi.org/10.1075/gest.14.1.01chi.

Abstract:
Around the age of nine months, children start to communicate by using first words and gestures, during interactions with caregivers. The question remains as to how older preschool children utilise the gestures they observe into their own gestural representations of previously unseen objects. Two accounts of gesture production (the ‘gesture learning’, and ‘simulated representation’ accounts) offer different predictions for how preschool children use the gestures they observe when describing objects. To test these two competing accounts underlying gesture production, we showed 42 children (mean
30

De Froy, Adrienne, and Pamela Rosenthal Rollins. "The cross-racial/ethnic gesture production of young autistic children and their parents." Autism & Developmental Language Impairments 8 (January 2023): 239694152311595. http://dx.doi.org/10.1177/23969415231159548.

Abstract:
Background & Aims Early gesture plays an important role in prelinguistic/emerging linguistic communication and may provide insight into a child's social communication skills before the emergence of spoken language. Social interactionist theories suggest children learn to gesture through daily interactions with their social environment (e.g., their parents). As such, it is important to understand how parents gesture within interactions with their children when studying child gesture. Parents of typically developing (TD) children exhibit cross-racial/ethnic differences in gesture rate. Corre
31

McNeill, David, Bennett Bertenthal, Jonathan Cole, and Shaun Gallagher. "Gesture-first, but no gestures?" Behavioral and Brain Sciences 28, no. 2 (2005): 138–39. http://dx.doi.org/10.1017/s0140525x05360031.

Abstract:
Although Arbib's extension of the mirror-system hypothesis neatly sidesteps one problem with the “gesture-first” theory of language origins, it overlooks the importance of gestures that occur in current-day human linguistic performance, and this lands it with another problem. We argue that, instead of gesture-first, a system of combined vocalization and gestures would have been a more natural evolutionary unit.
32

Cartmill, Erica A., Sian Beilock, and Susan Goldin-Meadow. "A word in the hand: action, gesture and mental representation in humans and non-human primates." Philosophical Transactions of the Royal Society B: Biological Sciences 367, no. 1585 (2012): 129–43. http://dx.doi.org/10.1098/rstb.2011.0162.

Abstract:
The movements we make with our hands both reflect our mental processes and help to shape them. Our actions and gestures can affect our mental representations of actions and objects. In this paper, we explore the relationship between action, gesture and thought in both humans and non-human primates and discuss its role in the evolution of language. Human gesture (specifically representational gesture) may provide a unique link between action and mental representation. It is kinaesthetically close to action and is, at the same time, symbolic. Non-human primates use gesture frequently to communic
33

Kok, Kasper I., and Alan Cienki. "Cognitive Grammar and gesture: Points of convergence, advances and challenges." Cognitive Linguistics 27, no. 1 (2016): 67–100. http://dx.doi.org/10.1515/cog-2015-0087.

Abstract:
Given its usage-oriented character, Cognitive Grammar (CG) can be expected to be consonant with a multimodal, rather than text-only, perspective on language. Whereas several scholars have acknowledged this potential, the question as to how speakers’ gestures can be incorporated in CG-based grammatical analysis has not been conclusively addressed. In this paper, we aim to advance the CG-gesture relationship. We first elaborate on three important points of convergence between CG and gesture research: (1) CG’s conception of grammar as a prototype category, with central and more peripheral
34

Laurent, Angélique, and Elena Nicoladis. "Gesture restriction affects French–English bilinguals’ speech only in French." Bilingualism: Language and Cognition 18, no. 2 (2014): 340–49. http://dx.doi.org/10.1017/s1366728914000042.

Abstract:
Some studies have shown that bilinguals gesture more than monolinguals. One possible reason for the high gesture frequency is that bilinguals rely on gestures even more than monolinguals in constructing their message. To test this, we asked French–English bilingual adults and English monolingual adults to tell a story twice; on one occasion they could move their hands and on the other they could not. If gestures aid bilinguals in information packaging and/or lexical access, bilinguals should tell shorter stories with fewer word types than monolinguals when their gestures are restricted. In fac
35

Valle, Chelsea La, Karen Chenausky, and Helen Tager-Flusberg. "How do minimally verbal children and adolescents with autism spectrum disorder use communicative gestures to complement their spoken language abilities?" Autism & Developmental Language Impairments 6 (January 2021): 239694152110350. http://dx.doi.org/10.1177/23969415211035065.

Abstract:
Background and aims Prior work has examined how children and adolescents with autism spectrum disorder who are minimally verbal use their spoken language abilities during interactions with others. However, social communication includes other aspects beyond speech. To our knowledge, no studies have examined how minimally verbal children and adolescents with autism spectrum disorder are using their gestural communication during social interactions. Such work can provide important insights into how gestures may complement their spoken language abilities. Methods Fifty minimally verbal children an
36

Park, Jisun, Yong Jin, Seoungjae Cho, Yunsick Sung, and Kyungeun Cho. "Advanced Machine Learning for Gesture Learning and Recognition Based on Intelligent Big Data of Heterogeneous Sensors." Symmetry 11, no. 7 (2019): 929. http://dx.doi.org/10.3390/sym11070929.

Abstract:
With intelligent big data, a variety of gesture-based recognition systems have been developed to enable intuitive interaction by utilizing machine learning algorithms. Realizing a high gesture recognition accuracy is crucial, and current systems learn extensive gestures in advance to augment their recognition accuracies. However, the process of accurately recognizing gestures relies on identifying and editing numerous gestures collected from the actual end users of the system. This final end-user learning component remains troublesome for most existing gesture recognition systems. This paper p
37

Namy, Laura L., Rebecca Vallas, and Jennifer Knight-Schwarz. "Linking parent input and child receptivity to symbolic gestures." Gesture 8, no. 3 (2008): 302–24. http://dx.doi.org/10.1075/gest.8.3.03nam.

Abstract:
This study explored the relation between parents’ production of gestures and symbolic play during free play and children’s production and comprehension of symbolic gestures. Thirty-one 16- to 22-month-olds and their parents participated in a free play session. Children also participated in a forced-choice novel gesture-learning task. Parents’ pretend play with objects in hand was predictive of children’s gesture production during play and gesture vocabulary according to parental report. No relationship was found between parent gesture and child performance on the forced-choice gesture-learning
38

Asmoro, Jeffri Dian, Achmad Teguh Wibowo, and Mujib Ridwan. "Virtual Mouse with Hand Gesture Recognition Based on Hand Landmark Model for Pointing Device." JURTEKSI (Jurnal Teknologi dan Sistem Informasi) 9, no. 2 (2023): 261–68. http://dx.doi.org/10.33330/jurteksi.v9i2.2073.

Abstract:
Technology is growing rapidly and has become a human need for solving the problems being faced. The development of touchless input devices, or hand gesture recognition using a camera, is a form of machine learning. Gestures can be defined as physical movements of the hands, arms, or body that convey expressive messages; such a hand gesture system can also convey commands that carry meaning. In this research, a virtual mouse system is developed using hand gesture recognition based on the hand landmark model for pointing devices. The resulting appl
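A minimal version of such a virtual mouse can be sketched by mapping the index-fingertip landmark to screen coordinates and using a thumb-index pinch as a click. MediaPipe Hands and pyautogui are assumed here as the landmark model and pointer backend, and the pinch threshold is arbitrary; the excerpt does not confirm these specific choices.

```python
# Hedged sketch of a landmark-based virtual mouse: MediaPipe Hands tracks the hand,
# the index fingertip (landmark 8) drives the cursor, and a thumb-index "pinch"
# triggers a click. The pinch threshold is an assumption, not the paper's design.
import cv2
import mediapipe as mp
import pyautogui

screen_w, screen_h = pyautogui.size()
hands = mp.solutions.hands.Hands(max_num_hands=1)
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        lm = result.multi_hand_landmarks[0].landmark
        tip, thumb = lm[8], lm[4]                       # index fingertip, thumb tip
        pyautogui.moveTo(tip.x * screen_w, tip.y * screen_h)
        pinch = ((tip.x - thumb.x) ** 2 + (tip.y - thumb.y) ** 2) ** 0.5
        if pinch < 0.04:                                # assumed pinch threshold
            pyautogui.click()
    cv2.imshow("camera", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):               # quit on 'q'
        break

cap.release()
cv2.destroyAllWindows()
```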
39

Özyürek, Aslı, Reyhan Furman, and Susan Goldin-Meadow. "On the way to language: event segmentation in homesign and gesture." Journal of Child Language 42, no. 1 (2014): 64–94. http://dx.doi.org/10.1017/s0305000913000512.

Abstract:
Languages typically express semantic components of motion events such as manner (roll) and path (down) in separate lexical items. We explore how these combinatorial possibilities of language arise by focusing on (i) gestures produced by deaf children who lack access to input from a conventional language (homesign); (ii) gestures produced by hearing adults and children while speaking; and (iii) gestures used by hearing adults without speech when asked to do so in elicited descriptions of motion events with simultaneous manner and path. Homesigners tended to conflate manner and path in o
40

Zhao, Yiming, Yanchao Zhao, Huawei Tu, Qihan Huang, Wenlai Zhao, and Wenhao Jiang. "Motion Gesture Delimiters for Smartwatch Interaction." Wireless Communications and Mobile Computing 2022 (July 12, 2022): 1–11. http://dx.doi.org/10.1155/2022/6879206.

Abstract:
Smartwatches are increasingly popular in our daily lives. Motion gestures are a common way of interacting with smartwatches, e.g., users can make a movement in the air with their arm wearing the watch to trigger a specific command of the smartwatch. Motion gesture interaction can compensate for the small screen size of the smartwatch to some extent and enrich smartwatch-based interactions. An important aspect of motion gesture interaction lies in how to determine the start and end of a motion gesture. This paper is aimed at selecting gestures as suitable delimiters for motion gesture interacti
41

Bao, Yihua, Dongdong Weng, and Nan Gao. "Editable Co-Speech Gesture Synthesis Enhanced with Individual Representative Gestures." Electronics 13, no. 16 (2024): 3315. http://dx.doi.org/10.3390/electronics13163315.

Abstract:
Co-speech gesture synthesis is a challenging task due to the complexity and uncertainty between gestures and speech. Gestures that accompany speech (i.e., Co-Speech Gesture) are an essential part of natural and efficient embodied human communication, as they work in tandem with speech to convey information more effectively. Although data-driven approaches have improved gesture synthesis, existing deep learning-based methods use deterministic modeling which could lead to averaging out predicted gestures. Additionally, these methods lack control over gesture generation such as user editing of ge
42

Ng, Chloe, and Nicolai Marquardt. "Eliciting user-defined touch and mid-air gestures for co-located mobile gaming." Proceedings of the ACM on Human-Computer Interaction 6, ISS (2022): 303–27. http://dx.doi.org/10.1145/3567722.

Abstract:
Many interaction techniques have been developed to best support mobile gaming – but developed gestures and techniques might not always match user behaviour or preferences. To inform this design space of gesture input for co-located mobile gaming, we present insights from a gesture elicitation user study for touch and mid-air input, specifically focusing on board and card games due to the materiality of game artefacts and rich interaction between players. We obtained touch and mid-air gesture proposals for 11 game tasks with 12 dyads and gained insights into user preferences. We contribute our
43

Hwang, Bon-Woo, Sungmin Kim, and Seong-Whan Lee. "A Full-Body Gesture Database for Human Gesture Analysis." International Journal of Pattern Recognition and Artificial Intelligence 21, no. 06 (2007): 1069–84. http://dx.doi.org/10.1142/s0218001407005806.

Abstract:
This paper presents a full-body gesture database which contains 2D video data and 3D motion data of 14 normal gestures, 10 abnormal gestures and 30 command gestures for 20 subjects. We call this database the Korea University Gesture (KUG) database. Using 3D motion cameras and 3 sets of stereo cameras, we captured 3D motion data and 3 pairs of stereo-video data in 3 different directions for normal and abnormal gestures. In case of command gestures, 2 pairs of stereo-video data were obtained by 2 sets of stereo cameras with different focal lengths in order to capture views of whole body and uppe
44

García-Gámez, Ana B., and Pedro Macizo. "Learning nouns and verbs in a foreign language: The role of gestures." Applied Psycholinguistics 40, no. 2 (2018): 473–507. http://dx.doi.org/10.1017/s0142716418000656.

Abstract:
We evaluated the impact of gestures on second language (L2) vocabulary learning with nouns (Experiment 1) and verbs (Experiment 2). Four training methods were compared: the learning of L2 words with congruent gestures, incongruent gestures, meaningless gestures, and no gestures. Better vocabulary learning was found in both experiments when participants learned L2 words with congruent gestures relative to the no gesture condition. This result indicates that gestures have a positive effect on L2 learning when there is a match between the word meaning and the gesture. However, the recall
45

Patil, Anuradha, and Chandrashekhar M. Tavade. "Methods on Real Time Gesture Recognition System." International Journal of Engineering & Technology 7, no. 3.12 (2018): 982. http://dx.doi.org/10.14419/ijet.v7i3.12.17617.

Abstract:
Gesture recognition deals with the discussion of various methods, techniques, and related algorithms. Gesture recognition uses simple, basic sign languages such as hand movements, lip positions, and eyeball and eyelid positions. The various methods for image capturing, gesture recognition, gesture tracking, gesture segmentation, and smoothing are compared, weighing the advantages of different gesture recognition approaches and their applications. In recent days gesture recognition is widely utilized in gaming industries, biomedical applications, and medical di
46

Izuta, Ryo, Kazuya Murao, Tsutomu Terada, and Masahiko Tsukamoto. "Early gesture recognition method with an accelerometer." International Journal of Pervasive Computing and Communications 11, no. 3 (2015): 270–87. http://dx.doi.org/10.1108/ijpcc-03-2015-0016.

Abstract:
Purpose – This paper aims to propose a gesture recognition method at an early stage. An accelerometer is installed in most current mobile phones, such as iPhones, Android-powered devices and video game controllers for the Wii or PS3, which enables easy and intuitive operations. Therefore, many gesture-based user interfaces that use accelerometers are expected to appear in the future. Gesture recognition systems with an accelerometer generally have to construct models with user’s gesture data before use and recognize unknown gestures by comparing them with the models. Because the recognition pr
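Early recognition of this kind can be illustrated by matching the growing prefix of an accelerometer stream against stored gesture templates and committing to a label as soon as one template is clearly closest. The DTW matcher, templates, margin, and minimum prefix length below are illustrative assumptions, not the paper's method.

```python
# Hedged sketch of "early" accelerometer gesture recognition: as samples stream in,
# the growing prefix is compared against stored templates (at least two assumed)
# with DTW, and a label is emitted as soon as one template is clearly closest.
import numpy as np

def dtw(a, b):
    """Dynamic time warping distance between two (T, 3) accelerometer sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def early_recognise(stream, templates, min_prefix=10, margin=0.8):
    """Return a label as soon as the best template beats the runner-up by `margin`."""
    for t in range(min_prefix, len(stream) + 1):
        prefix = stream[:t]
        dists = sorted((dtw(prefix, tpl), name) for name, tpl in templates.items())
        (best_d, best), (second_d, _) = dists[0], dists[1]
        if best_d < margin * second_d:
            return best, t          # label decided after only t samples
    return None, len(stream)

# Toy templates: a "shake" (alternating x) vs. a "lift" (steadily increasing z).
shake = np.array([[(-1) ** i, 0, 0] for i in range(30)], dtype=float)
lift = np.array([[0, 0, 0.1 * i] for i in range(30)], dtype=float)
templates = {"shake": shake, "lift": lift}
label, decided_at = early_recognise(shake + 0.05, templates)
print(label, decided_at)            # recognised as "shake" well before 30 samples
```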
47

Bai, Yujing, Jun Wang, Penghui Chen, Ziwei Gong, and Qingxu Xiong. "Hand Trajectory Recognition by Radar with a Finite-State Machine and a Bi-LSTM." Applied Sciences 14, no. 15 (2024): 6782. http://dx.doi.org/10.3390/app14156782.

Abstract:
Gesture plays an important role in human–machine interaction. However, the insufficient accuracy and high complexity of gesture recognition have blocked its widespread application. A gesture recognition method that combines state machine and bidirectional long short-term memory (Bi-LSTM) fusion neural network is proposed to improve the accuracy and efficiency. Firstly, gestures with large movements are categorized into simple trajectory gestures and complex trajectory gestures in advance. Afterwards, different recognition methods are applied for the two categories of gestures, and the final re
48

Grasanando, Arfando, and Paniran Paniran. "Implementasi Pengendalian Karakter Game Mobile Legends Berbasis Gesture Tangan Menggunakan MediaPipe [Implementation of Hand Gesture-Based Mobile Legends Game Character Control Using MediaPipe]." Jurnal Nasional Komputasi dan Teknologi Informasi (JNKTI) 7, no. 6 (2024): 1613–19. https://doi.org/10.32672/jnkti.v7i6.8240.

Abstract:
Advances in computer vision and machine learning have opened new opportunities for creating innovative human-computer interaction methods. This research aims to develop a control system for Mobile Legends game characters using hand gestures based on MediaPipe and OpenCV. By leveraging hand gesture recognition technology, the system allows players to control character movement and skill use through hand gestures, offering a more natural and intuitive way of interacting. The research method used is
49

Tran, Dinh-Son, Ngoc-Huynh Ho, Hyung-Jeong Yang, Eu-Tteum Baek, Soo-Hyung Kim, and Gueesang Lee. "Real-Time Hand Gesture Spotting and Recognition Using RGB-D Camera and 3D Convolutional Neural Network." Applied Sciences 10, no. 2 (2020): 722. http://dx.doi.org/10.3390/app10020722.

Abstract:
Using hand gestures is a natural method of interaction between humans and computers. We use gestures to express meaning and thoughts in our everyday conversations. Gesture-based interfaces are used in many applications in a variety of fields, such as smartphones, televisions (TVs), video gaming, and so on. With advancements in technology, hand gesture recognition is becoming an increasingly promising and attractive technique in human–computer interaction. In this paper, we propose a novel method for fingertip detection and hand gesture recognition in real-time using an RGB-D camera and a 3D co
50

Wang, Tao, Xiaolong Cai, Liping Wang, and Haoye Tian. "Interactive Design of 3D Dynamic Gesture Based on SVM-LSTM Model." International Journal of Mobile Human Computer Interaction 10, no. 3 (2018): 49–63. http://dx.doi.org/10.4018/ijmhci.2018070104.

Abstract:
Visual hand gesture interaction is one of the main ways of human-computer interaction, and it provides users with more interactive degrees of freedom and a more realistic interactive experience. The authors present a hybrid model based on SVM-LSTM and design a three-dimensional dynamic gesture interaction system. The system uses Leap Motion to capture gesture information, combining the SVM's powerful static gesture classification ability and the LSTM's powerful variable-length time series gesture processing ability, enabling real-time recognition of user gestures. The gesture interaction method can automatically d