Academic literature on the topic 'Gesture'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Gesture.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Gesture"

1

Pika, Simone, Elena Nicoladis, and Paula F. Marentette. "A cross-cultural study on the use of gestures: Evidence for cross-linguistic transfer?" Bilingualism: Language and Cognition 9, no. 3 (October 20, 2006): 319–27. http://dx.doi.org/10.1017/s1366728906002665.

Abstract:
Anecdotal reports provide evidence of so-called "hybrid" gesturers whose non-verbal behavior from one language/culture becomes visible in the other. The direction of this gestural transfer seems to occur from a high-frequency to a low-frequency gesture language. The purpose of this study was therefore to test systematically 1) whether gestural transfer occurs from a high-frequency gesture language to a low-frequency gesture language, 2) whether the frequency of production of some gesture types is more likely to be transferred than others, and 3) whether gestural transfer can also occur bi-directionally. To address these questions, we investigated the use of gestures by English–Spanish bilinguals, French–English bilinguals, and English monolinguals while retelling a cartoon. Our analysis focused on the rate of gestures and the frequency of production of gesture types. There was a significant difference in the overall rate of gestures: both bilingual groups gestured more than monolingual participants. This difference was particularly salient for iconic gestures. In addition, we found that French–English bilinguals used more deictic gestures in their L2. The results suggest that knowledge of a high-frequency gesture language affects the gesture rate in a low-frequency gesture language.
2

Braddock, Barbara A., Christina Gabany, Meera Shah, Eric S. Armbrecht, and Kimberly A. Twyman. "Patterns of Gesture Use in Adolescents With Autism Spectrum Disorder." American Journal of Speech-Language Pathology 25, no. 3 (August 2016): 408–15. http://dx.doi.org/10.1044/2015_ajslp-14-0112.

Abstract:
Purpose: The purpose of this study was to examine patterns of spontaneous gesture use in a sample of adolescents with autism spectrum disorder (ASD). Method: Thirty-five adolescents with ASD ages 11 to 16 years participated (mean age = 13.51 years; 29 boys, 6 girls). Participants' spontaneous speech and gestures produced during a narrative task were later coded from videotape. Parents were also asked to complete questionnaires to quantify adolescents' general communication ability and autism severity. Results: No significant differences in general communication ability or autism severity were apparent between adolescents who did not gesture and those who produced at least 1 gesture. Subanalyses including only adolescents who produced gesture indicated a statistically significant negative association between gesture rate and general communication ability, specifically speech and syntax subscale scores. Adolescents who gestured produced higher proportions of iconic gestures and used gesture mostly to add information to speech. Conclusions: The findings relate spontaneous gesture use to underlying strengths and weaknesses in adolescents' speech and syntactical language development. More research examining cospeech gesture in fluent speakers with ASD is needed.
3

Sekine, Kazuki, and Miranda L. Rose. "The Relationship of Aphasia Type and Gesture Production in People With Aphasia." American Journal of Speech-Language Pathology 22, no. 4 (November 2013): 662–72. http://dx.doi.org/10.1044/1058-0360(2013/12-0030).

Abstract:
Purpose: For many individuals with aphasia, gestures form a vital component of message transfer and are the target of speech-language pathology intervention. What remains unclear are the participant variables that predict successful outcomes from gesture treatments. The authors examined the gesture production of a large number of individuals with aphasia—in a consistent discourse sampling condition and with a detailed gesture coding system—to determine patterns of gesture production associated with specific types of aphasia. Method: The authors analyzed story retell samples from AphasiaBank (TalkBank, n.d.), gathered from 98 individuals with aphasia resulting from stroke and 64 typical controls. Twelve gesture types were coded. Descriptive statistics were used to describe the patterns of gesture production. Possible significant differences in production patterns according to aphasia type were examined using a series of chi-square, Fisher exact, and logistic regression statistics. Results: A significantly higher proportion of individuals with aphasia gestured as compared to typical controls, and for many individuals with aphasia, this gesture was iconic and was capable of communicative load. Aphasia type impacted significantly on gesture type in specific identified patterns, detailed here. Conclusion: These type-specific patterns suggest the opportunity for gestures as targets of aphasia therapy.
4

Kong, Anthony Pak-Hin, Sam-Po Law, and Gigi Wan-Chi Chak. "A Comparison of Coverbal Gesture Use in Oral Discourse Among Speakers With Fluent and Nonfluent Aphasia." Journal of Speech, Language, and Hearing Research 60, no. 7 (July 12, 2017): 2031–46. http://dx.doi.org/10.1044/2017_jslhr-l-16-0093.

Abstract:
Purpose: Coverbal gesture use, which is affected by the presence and degree of aphasia, can be culturally specific. The purpose of this study was to compare gesture use among Cantonese-speaking individuals: 23 neurologically healthy speakers, 23 speakers with fluent aphasia, and 21 speakers with nonfluent aphasia. Method: Multimedia data of discourse samples from these speakers were extracted from the Cantonese AphasiaBank. Gestures were independently annotated on their forms and functions to determine how gesturing rate and distribution of gestures differed across speaker groups. A multiple regression was conducted to determine the most predictive variable(s) for gesture-to-word ratio. Results: Although speakers with nonfluent aphasia gestured most frequently, the rate of gesture use in counterparts with fluent aphasia did not differ significantly from controls. Different patterns of gesture functions in the 3 speaker groups revealed that gesture plays a minor role in lexical retrieval whereas its role in enhancing communication dominates among the speakers with aphasia. The percentages of complete sentences and dysfluency strongly predicted the gesturing rate in aphasia. Conclusions: The current results supported the sketch model of language–gesture association. The relationship between gesture production and linguistic abilities and clinical implications for gesture-based language intervention for speakers with aphasia are also discussed.
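As an aside for readers who want to reproduce this kind of analysis, here is a minimal sketch of a multiple regression predicting gesture-to-word ratio from the two predictors named above. It uses scikit-learn with made-up values; none of this is the study's actual data or code.

```python
# Hedged sketch: multiple regression of gesture-to-word ratio on two
# hypothetical predictors (% complete sentences, % dysfluency).
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[0.62, 0.10],   # each row: one speaker's predictor values
              [0.35, 0.28],
              [0.48, 0.19],
              [0.71, 0.05]])
y = np.array([0.12, 0.41, 0.25, 0.08])  # gesture-to-word ratios

model = LinearRegression().fit(X, y)
print(model.coef_, model.intercept_)  # which predictor carries more weight
print(model.score(X, y))              # R^2 of the fit
```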
5

Parrill, Fey, Brittany Lavanty, Austin Bennett, Alayna Klco, and Ozlem Ece Demir-Lira. "The relationship between character viewpoint gesture and narrative structure in children." Language and Cognition 10, no. 3 (July 12, 2018): 408–34. http://dx.doi.org/10.1017/langcog.2018.9.

Abstract:
When children tell stories, they gesture; their gestures can predict how their narrative abilities will progress. Five-year-olds who gestured from the point of view of a character (CVPT gesture) when telling stories produced better-structured narratives at later ages (Demir, Levine, & Goldin-Meadow, 2014). But does gesture just predict narrative structure, or can asking children to gesture in a particular way change their narratives? To explore this question, we instructed children to produce CVPT gestures and measured their narrative structure. Forty-four kindergarteners were asked to tell stories after being trained to produce CVPT gestures, gestures from an observer's viewpoint (OVPT gestures), or after no instruction in gesture. Gestures were coded as CVPT or OVPT, and stories were scored for narrative structure. Children trained to produce CVPT gestures produced more of these gestures and also had higher narrative structure scores compared to those who received the OVPT training. Children returned for a follow-up session one week later and narrated the stories again. The training received in the first session did not impact narrative structure or recall for the events of the stories. Overall, these results suggest a brief gestural intervention has the potential to enhance narrative structure. Because stronger narrative abilities have been correlated with greater success in developing writing and reading skills at later ages, this research has important implications for literacy and education.
6

Cooperrider, Kensy. "Foreground gesture, background gesture." Gesture 16, no. 2 (December 31, 2017): 176–202. http://dx.doi.org/10.1075/gest.16.2.02coo.

Abstract:
Do speakers intend their gestures to communicate? Central as this question is to the study of gesture, researchers cannot seem to agree on the answer. According to one common framing, gestures are an "unwitting" window into the mind (McNeill, 1992); but, according to another common framing, they are designed along with speech to form "composite utterances" (Enfield, 2009). These two framings correspond to two cultures within gesture studies – the first cognitive and the second interactive in orientation – and they appear to make incompatible claims. In this article I attempt to bridge the cultures by developing a distinction between foreground gestures and background gestures. Foreground gestures are designed in their particulars to communicate a critical part of the speaker's message; background gestures are not designed in this way. These are two fundamentally different kinds of gesture, not two different ways of framing the same monolithic behavior. Foreground gestures can often be identified by one or more of the following hallmarks: they are produced along with demonstratives; they are produced in the absence of speech; they are co-organized with speaker gaze; and they are produced with conspicuous effort. The distinction between foreground and background gestures helps dissolve the apparent tension between the two cultures: interactional researchers have focused on foreground gestures and elevated them to the status of a prototype, whereas cognitive researchers have done the same with background gestures. The distinction also generates a number of testable predictions about gesture production and understanding, and it opens up new lines of inquiry into gesture across child development and across cultures.
7

Casey, Shannon, Karen Emmorey, and Heather Larrabee. "The effects of learning American Sign Language on co-speech gesture." Bilingualism: Language and Cognition 15, no. 4 (January 3, 2012): 677–86. http://dx.doi.org/10.1017/s1366728911000575.

Abstract:
Given that the linguistic articulators for sign language are also used to produce co-speech gesture, we examined whether one year of academic instruction in American Sign Language (ASL) impacts the rate and nature of gestures produced when speaking English. A survey study revealed that 75% of ASL learners (N = 95), but only 14% of Romance language learners (N = 203), felt that they gestured more after one year of language instruction. A longitudinal study confirmed this perception. Twenty-one ASL learners and 20 Romance language learners (French, Italian, Spanish) were filmed re-telling a cartoon story before and after one academic year of language instruction. Only the ASL learners exhibited an increase in gesture rate, an increase in the production of iconic gestures, and an increase in the number of handshape types exploited in co-speech gesture. Five ASL students also produced at least one ASL sign when re-telling the cartoon. We suggest that learning ASL may (i) lower the neural threshold for co-speech gesture production, (ii) pose a unique challenge for language control, and (iii) have the potential to improve cognitive processes that are linked to gesture.
8

Foran, Lori, and Brenda Beverly. "Points to Ponder: Gesture and Language in Math Talk." Perspectives on Language Learning and Education 22, no. 2 (March 2015): 72–81. http://dx.doi.org/10.1044/lle22.2.71.

Abstract:
With the introduction of Common Core State Standards, mathematical learning and problem solving in the academic environment are more linguistically demanding. Speech-language pathologists (SLPs) can support students with language impairment and teachers charged with new curricular demands. The role of gestural communication as a support for children's math learning and as an instructional strategy during math education is reviewed. Findings are presented from a recent pilot study on the gesture and language production of 3-, 4-, and 5-year-old children as they solve early arithmetic and fraction problems. Children spontaneously produced deictic and representational gestures that most often matched their spoken solutions. A few children exhibited gesture-speech mismatches in which the gesture contained semantic content not contained in the speech alone. This can suggest some underlying knowledge that would not be apparent without the gesture. Furthermore, the investigator introduced gestured prompts with some preschool participants, using spontaneous gestures previously observed in successful peers. Gesture's role in early mathematics areas preceding kindergarten and specific gesturing strategies effective in the academic environment continue to be explored.
9

Jasim, Mahmood, Tao Zhang, and Md Hasanuzzaman. "A Real-Time Computer Vision-Based Static and Dynamic Hand Gesture Recognition System." International Journal of Image and Graphics 14, no. 01n02 (January 2014): 1450006. http://dx.doi.org/10.1142/s0219467814500065.

Abstract:
This paper presents a novel method for computer vision-based static and dynamic hand gesture recognition. A Haar-like feature-based cascaded classifier is used for hand area segmentation. Static hand gestures are recognized using linear discriminant analysis (LDA) and local binary pattern (LBP)-based feature extraction methods and are classified using the nearest neighbor (NN) algorithm. Dynamic hand gestures are recognized using novel text-based principal directional features (PDFs), which are generated from the segmented image sequences. The longest common subsequence (LCS) algorithm is used to classify the dynamic gestures. For testing, a Chinese numeral gesture dataset containing static hand poses and a directional gesture dataset containing complex dynamic gestures were prepared. The mean accuracy of LDA-based static hand gesture recognition on the Chinese numeral gesture dataset is 92.42%, and the mean accuracy of LBP-based static hand gesture recognition on the same dataset is 87.23%. The mean accuracy of the novel dynamic hand gesture recognition method using PDFs on the directional gesture dataset is 94%.
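For readers who want to experiment with the static-gesture stage described in this abstract, the sketch below pairs LBP histogram features with a nearest-neighbor classifier. It uses scikit-image and scikit-learn as stand-ins; the parameters and the data interface are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch: LBP features + 1-NN classification of segmented hand images.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.neighbors import KNeighborsClassifier

def lbp_histogram(gray, P=8, R=1.0):
    """Pool uniform LBP codes into a normalized histogram feature vector."""
    codes = local_binary_pattern(gray, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def train_classifier(train_images, train_labels):
    """train_images: segmented grayscale hand images; train_labels: gesture ids."""
    feats = np.array([lbp_histogram(img) for img in train_images])
    return KNeighborsClassifier(n_neighbors=1).fit(feats, train_labels)
```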
10

Kelly, Spencer D., Peter Creigh, and James Bartolotti. "Integrating Speech and Iconic Gestures in a Stroop-like Task: Evidence for Automatic Processing." Journal of Cognitive Neuroscience 22, no. 4 (April 2010): 683–94. http://dx.doi.org/10.1162/jocn.2009.21254.

Abstract:
Previous research has demonstrated a link between language and action in the brain. The present study investigates the strength of this neural relationship by focusing on a potential interface between the two systems: cospeech iconic gesture. Participants performed a Stroop-like task in which they watched videos of a man and a woman speaking and gesturing about common actions. The videos differed as to whether the gender of the speaker and gesturer was the same or different and whether the content of the speech and gesture was congruent or incongruent. The task was to identify whether a man or a woman produced the spoken portion of the videos while accuracy rates, RTs, and ERPs were recorded to the words. Although not relevant to the task, participants paid attention to the semantic relationship between the speech and the gesture, producing a larger N400 to words accompanied by incongruent versus congruent gestures. In addition, RTs were slower to incongruent versus congruent gesture–speech stimuli, but this effect was greater when the gender of the gesturer and speaker was the same versus different. These results suggest that the integration of gesture and speech during language comprehension is automatic but also under some degree of neurocognitive control.

Dissertations / Theses on the topic "Gesture"

1

Lindberg, Martin. "Introducing Gestures: Exploring Feedforward in Touch-Gesture Interfaces." Thesis, Malmö universitet, Fakulteten för kultur och samhälle (KS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-23555.

Abstract:
This interaction design thesis aimed to explore how users could be introduced to the different functionalities of a gesture-based touch screen interface. This was done through a user-centred design research process in which the designer was taught different artefacts by experienced users. Insights from this process laid the foundation for an interactive, digital gesture-introduction prototype. Testing said prototype with users yielded this study's results. While containing several areas for improvement regarding implementation and behaviour, the prototype's base methods and qualities were well received. Further development would be needed to fully assess its viability. The user-centred research methods used in this project proved valuable for later ideation and prototyping stages. Activities and results from this project indicate a potential for designers to further explore the possibilities for ensuring the discoverability of touch-gesture interactions. For future projects the author suggests more extensive research and testing using a greater sample size and wider demographic.
2

Campbell, Lee Winston. "Visual classification of co-verbal gestures for gesture understanding." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/8707.

Abstract:
A person's communicative intent can be better understood by either a human or a machine if the person's gestures are understood. This thesis project demonstrates an expansion of both the range of co-verbal gestures a machine can identify, and the range of communicative intents the machine can infer. We develop an automatic system that uses realtime video as sensory input and then segments, classifies, and responds to co-verbal gestures made by users in realtime as they converse with a synthetic character known as REA, which is being developed in parallel by Justine Cassell and her students at the MIT Media Lab. A set of 670 natural gestures, videotaped and visually tracked in the course of conversational interviews and then hand segmented and annotated according to a widely used gesture classification scheme, is used in an offline training process that trains Hidden Markov Model classifiers. A number of feature sets are extracted and tested in the offline training process, and the best performer is employed in an online HMM segmenter and classifier that requires no encumbering attachments to the user. Modifications made to the REA system enable REA to respond to the user's beat and deictic gestures as well as turntaking requests the user may convey in gesture. The recognition results obtained are far above chance, but too low for use in a production recognition system. The results provide a measure of validity for the gesture categories chosen, and they provide positive evidence for an appealing but difficult to prove proposition: to the extent that a machine can recognize and use these categories of gestures to infer information not present in the words spoken, there is exploitable complementary information in the gesture stream.
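The offline HMM training described above can be sketched roughly as follows, using the hmmlearn library (our stand-in; the thesis used its own implementation, and feature extraction is omitted here):

```python
# Hedged sketch: train one HMM per gesture class, classify by log-likelihood.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_hmms(sequences_by_class, n_states=5):
    """sequences_by_class: {label: [ndarray of shape (T_i, D) per example]}"""
    models = {}
    for label, seqs in sequences_by_class.items():
        X = np.vstack(seqs)                 # hmmlearn takes stacked sequences
        lengths = [len(s) for s in seqs]    # ...plus their individual lengths
        hmm = GaussianHMM(n_components=n_states, covariance_type="diag",
                          n_iter=25)
        models[label] = hmm.fit(X, lengths)
    return models

def classify(models, seq):
    # The class whose HMM assigns the highest log-likelihood wins.
    return max(models, key=lambda label: models[label].score(seq))
```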
3

Smith, Jason Alan. "Naturalistic skeletal gesture movement and rendered gesture decoding." Diss., online access via UMI, 2006.

4

Davis, James W. "Gesture recognition." Honors in the Major Thesis, University of Central Florida, 1994. http://digital.library.ucf.edu/cdm/ref/collection/ETH/id/126.

5

Yunus, Fajrian. "Prediction of Gesture Timing and Study About Image Schema for Metaphoric Gestures." Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS551.

Abstract:
Communicative gestures and speech are tightly linked. We want to automatically predict gestures based on speech. The speech itself has two constituents, namely the acoustics and the content of the speech (i.e., the text). In one part of this dissertation, we develop a model based on a recurrent neural network with an attention mechanism to predict gesture timing, that is, when a gesture should happen and what kind of gesture should happen. We use a sequence comparison technique to evaluate the model performance. We also perform a subjective study to measure how our respondents judge the naturalness, the time consistency, and the semantic consistency of the generated gestures. In another part of the dissertation, we deal with the generation of metaphoric gestures. Metaphoric gestures carry meaning, and thus it is necessary to extract the relevant semantics from the content of the speech. This is done by using the concept of image schema, as demonstrated by Ravenet et al. However, to be able to use image schemas in machine learning techniques, they have to be converted into vectors of real numbers. Therefore, we investigate how we can transform an image schema into a vector by using word embedding techniques. Lastly, we investigate how we can represent hand gesture shapes. The representation has to be compact enough, yet broad enough to cover shapes that can represent a sufficient range of semantics.
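To make the word-embedding step concrete, here is a hedged sketch that maps image-schema labels to vectors with gensim's pretrained GloVe model; the model name and the schema labels are illustrative, not the dissertation's actual setup:

```python
# Hedged sketch: embed image-schema labels by averaging pretrained word vectors.
import numpy as np
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # small pretrained embedding

def embed_schema(label):
    """Average the vectors of the words in a (possibly hyphenated) label."""
    words = [w for w in label.lower().split("-") if w in vectors]
    return np.mean([vectors[w] for w in words], axis=0)

containment = embed_schema("containment")
up_down = embed_schema("up-down")  # multi-word schema: mean of its parts
```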
6

Cheng, You-Chi. "Robust gesture recognition." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/53492.

Abstract:
It is a challenging problem to make a general hand gesture recognition system work in a practical operation environment. This study focuses mainly on recognizing English letters and digits performed near the steering wheel of a car and captured by a video camera. Like most human computer interaction (HCI) scenarios, in-car gesture recognition suffers from various robustness issues, including multiple human factors and highly varying lighting conditions. It therefore brings up quite a few research issues to be addressed. First, multiple gesturing alternatives may share the same meaning, which is not typical in most previous systems. Next, gestures may not be the same as expected because users cannot see what exactly has been written, which increases the gesture diversity significantly. In addition, varying illumination conditions make hand detection non-trivial and thus result in noisy hand gestures. And most severely, users tend to perform letters at a fast pace, which may result in a lack of frames for well-describing gestures. Since users are allowed to perform gestures in free style, multiple alternatives and variations should be considered while modeling gestures. The main contribution of this work is to analyze and address these challenging issues step by step so that the robustness of the whole system can be effectively improved. By choosing a color-space representation and performing compensation techniques for varying recording conditions, the hand detection performance for multiple illumination conditions is first enhanced. Furthermore, the issues of low frame rate and different gesturing tempo are separately resolved via cubic B-spline interpolation and the i-vector method for feature extraction. Finally, remaining issues are handled by other modeling techniques such as sub-letter stroke modeling. According to experimental results based on the above strategies, the proposed framework clearly improved the system robustness and thus encouraged future research on exploring more discriminative features and modeling techniques.
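The cubic B-spline remedy for low frame rate mentioned above can be sketched with SciPy: resample a sparse hand trajectory onto a fixed number of frames. The data and frame counts are illustrative, not the thesis's pipeline.

```python
# Hedged sketch: cubic B-spline upsampling of a sparse gesture trajectory.
import numpy as np
from scipy.interpolate import make_interp_spline

def resample_trajectory(points, n_out=64):
    """points: (T, 2) hand positions captured at a low frame rate."""
    t = np.linspace(0.0, 1.0, len(points))
    spline = make_interp_spline(t, points, k=3)   # cubic B-spline fit
    return spline(np.linspace(0.0, 1.0, n_out))   # densely resampled path

sparse = np.array([[0, 0], [1, 2], [3, 3], [5, 1], [6, 0]], dtype=float)
dense = resample_trajectory(sparse)               # now 64 evenly spaced points
```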
7

Cometti, Jean Pierre. "The architect's gesture." Pontificia Universidad Católica del Perú - Departamento de Humanidades, 2012. http://repositorio.pucp.edu.pe/index/handle/123456789/112899.

8

Kaâniche, Mohamed Bécha. "Human gesture recognition." Nice, 2009. http://www.theses.fr/2009NICE4032.

Abstract:
In this thesis, we aim to recognize gestures (e.g., hand raising) and, more generally, short actions (e.g., falling, bending) performed by an individual. Many techniques have already been proposed for gesture recognition in specific environments (e.g., a laboratory) using the cooperation of several sensors (e.g., a camera network, or an individual equipped with markers). Despite these strong hypotheses, gesture recognition is still brittle and often depends on the position of the individual relative to the cameras. We propose to relax these hypotheses in order to conceive a general algorithm enabling the recognition of gestures of an individual moving in an unconstrained environment and observed through a limited number of cameras. The goal is to estimate the likelihood of gesture recognition as a function of the observation conditions. Our method consists of classifying a set of gestures by learning motion descriptors. These motion descriptors are local signatures of the motion of corner points, which are associated with their local textural description. We demonstrate the effectiveness of our motion descriptors by recognizing the actions of the public KTH database.
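A hedged sketch of the corner-point motion measurement underlying such descriptors, using OpenCV's corner detector and pyramidal Lucas-Kanade tracker (the specific functions are our assumption, not the thesis's code):

```python
# Hedged sketch: detect corner points and measure their frame-to-frame motion.
import cv2
import numpy as np

def track_corners(prev_gray, next_gray, max_corners=100):
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=max_corners,
                                  qualityLevel=0.01, minDistance=7)
    if pts is None:                          # no corners found in this frame
        return np.empty((0, 2)), np.empty((0, 2))
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray,
                                                 pts, None)
    ok = status.ravel() == 1                 # keep successfully tracked points
    motion = (nxt - pts)[ok].reshape(-1, 2)  # local motion signatures
    return pts[ok].reshape(-1, 2), motion
```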
9

Alon, Jonathan. "Spatiotemporal Gesture Segmentation." Boston University Computer Science Department, 2006. https://hdl.handle.net/2144/1884.

Abstract:
Spotting patterns of interest in an input signal is a very useful task in many different fields including medicine, bioinformatics, economics, speech recognition and computer vision. Example instances of this problem include spotting an object of interest in an image (e.g., a tumor), a pattern of interest in a time-varying signal (e.g., audio analysis), or an object of interest moving in a specific way (e.g., a human's body gesture). Traditional spotting methods, which are based on Dynamic Time Warping or hidden Markov models, use some variant of dynamic programming to register the pattern and the input while accounting for temporal variation between them. At the same time, those methods often suffer from several shortcomings: they may give meaningless solutions when input observations are unreliable or ambiguous, they require a high complexity search across the whole input signal, and they may give incorrect solutions if some patterns appear as smaller parts within other patterns. In this thesis, we develop a framework that addresses these three problems, and evaluate the framework's performance in spotting and recognizing hand gestures in video. The first contribution is a spatiotemporal matching algorithm that extends the dynamic programming formulation to accommodate multiple candidate hand detections in every video frame. The algorithm finds the best alignment between the gesture model and the input, and simultaneously locates the best candidate hand detection in every frame. This allows for a gesture to be recognized even when the hand location is highly ambiguous. The second contribution is a pruning method that uses model-specific classifiers to reject dynamic programming hypotheses with a poor match between the input and model. Pruning improves the efficiency of the spatiotemporal matching algorithm, and in some cases may improve the recognition accuracy. The pruning classifiers are learned from training data, and cross-validation is used to reduce the chance of overpruning. The third contribution is a subgesture reasoning process that models the fact that some gesture models can falsely match parts of other, longer gestures. By integrating subgesture reasoning the spotting algorithm can avoid the premature detection of a subgesture when the longer gesture is actually being performed. Subgesture relations between pairs of gestures are automatically learned from training data. The performance of the approach is evaluated on two challenging video datasets: hand-signed digits gestured by users wearing short sleeved shirts, in front of a cluttered background, and American Sign Language (ASL) utterances gestured by ASL native signers. The experiments demonstrate that the proposed method is more accurate and efficient than competing approaches. The proposed approach can be generally applied to alignment or search problems with multiple input observations, that use dynamic programming to find a solution.
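To illustrate the central matching idea, here is a deliberately minimal dynamic-programming sketch that aligns a gesture model to an input while choosing among multiple candidate hand detections in every frame; the cost function and transitions are simplified far beyond the thesis's full algorithm:

```python
# Hedged sketch: DP alignment with multiple candidate detections per frame.
import numpy as np

def spotting_cost(model, candidates):
    """model: (M, D) feature sequence; candidates[t]: list of (D,) vectors."""
    INF = float("inf")
    M, T = len(model), len(candidates)
    D = [[INF] * T for _ in range(M)]
    for t, options in enumerate(candidates):   # gesture may start at any frame
        D[0][t] = min(np.linalg.norm(model[0] - c) for c in options)
    for i in range(1, M):
        for t in range(1, T):
            step = min(D[i - 1][t - 1], D[i][t - 1], D[i - 1][t])
            local = min(np.linalg.norm(model[i] - c) for c in candidates[t])
            D[i][t] = step + local             # best candidate chosen per cell
    return min(D[M - 1])                       # cheapest complete alignment
```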
10

Macleod, Tracy. "Gesture signs in social interaction : how group size influences gesture communication." Thesis, University of Glasgow, 2009. http://theses.gla.ac.uk/1205/.

Abstract:
This thesis explores the effects of group size on gesture communication. Signs in general change in the kind of information they convey and the way in which they do so, and these changes depend on interactive communication. For instance, speech is like dialogue in smaller groups but like monologue in larger groups. It was predicted that gestures would be influenced in a similar way by group size. In line with predictions, communication in groups of 5 was like dialogue whereas in groups of 8 it was like monologue. This was evident from the types of gesture that occurred, with more beat and deictic gestures being produced in groups of 5. Iconic gesture production was comparable across group size, but as predicted gestures were more complex in groups of 8. This was also the case for social gestures. Findings fit with dialogue models of communication, in particular the Alignment Model. Also in line with this model, group members aligned on gesture production and form.

Books on the topic "Gesture"

1

Stam, Gale, and Mika Ishino, eds. Integrating Gestures: The Interdisciplinary Nature of Gesture. Amsterdam: John Benjamins Pub., 2011.

2

Ormerod, Roger. Farewell Gesture. New York: Doubleday, 1991.

3

Connors, April. Gesture Drawing. Boca Raton, FL: CRC Press, 2017. http://dx.doi.org/10.1201/9781315156385.

4

Church, R. Breckinridge, Martha W. Alibali, and Spencer D. Kelly, eds. Why Gesture? Amsterdam: John Benjamins Publishing Company, 2017. http://dx.doi.org/10.1075/gs.7.

5

Escalera, Sergio, Isabelle Guyon, and Vassilis Athitsos, eds. Gesture Recognition. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-57021-1.

6

Konar, Amit, and Sriparna Saha. Gesture Recognition. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-62212-5.

7

Corriveau, Linda, and Nona Hatay, eds. Pure Gesture. Nevada City, CA: Gateways, 1990.

8

Kunstmuseum Winterthur, ed. Frozen Gesture: Gesten in der Malerei = Gestures in Painting. München: Hirmer Verlag, 2019.

9

McNeill, David. Gesture and Thought. Chicago: University of Chicago Press, 2008.

10

Cienki, Alan, and Cornelia Müller, eds. Metaphor and Gesture. Amsterdam: John Benjamins Publishing Company, 2008. http://dx.doi.org/10.1075/gs.3.


Book chapters on the topic "Gesture"

1

Vermeerbergen, Myriam, and Eline Demey. "Sign + Gesture = Speech + Gesture?" In Simultaneity in Signed Languages, 257–82. Amsterdam: John Benjamins Publishing Company, 2007. http://dx.doi.org/10.1075/cilt.281.12ver.

2

Schneider, Rebecca. "Gesture." In Critical Terms in Futures Studies, 145–49. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28987-4_23.

3

Peacock, Steven. "Gesture." In Hollywood and Intimacy, 56–77. London: Palgrave Macmillan UK, 2012. http://dx.doi.org/10.1057/9780230355330_3.

4

Lamb, Jonathan. "Gesture." In The Routledge Handbook of Reenactment Studies, 94–96. 1st ed. New York: Routledge, 2019. http://dx.doi.org/10.4324/9780429445637-19.

5

Cartmill, Erica A., and Susan Goldin-Meadow. "Gesture." In APA Handbook of Nonverbal Communication, 307–33. Washington: American Psychological Association, 2016. http://dx.doi.org/10.1037/14669-012.

6

Holme, Randal. "Gesture." In Cognitive Linguistics and Language Teaching, 54–62. London: Palgrave Macmillan UK, 2009. http://dx.doi.org/10.1057/9780230233676_4.

7

Reichl, Karl. "Gesture." In The Oral Epic, 85–106. New York: Routledge, 2021. http://dx.doi.org/10.4324/9781003189114-8.

8

Smith, Roger. "Gesture." In Kinaesthesia in the Psychology, Philosophy and Culture of Human Experience, 109–14. London: Routledge, 2023. http://dx.doi.org/10.4324/9781003368021-14.

9

Stanchfield, Walt. "Gesture." In Drawn to Life: 20 Golden Years of Disney Master Classes, 43–100. 2nd ed. Boca Raton: CRC Press, 2023. http://dx.doi.org/10.1201/9781003215363-2.

10

Wulf, Christoph. "Gesture." In Handbook of the Anthropocene, 1429–33. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-25910-4_233.


Conference papers on the topic "Gesture"

1

Chen, Ting, and Wencheng Tang. "Interactive Gesture of Exhibition Hall Mobile Follow Service Robot." In Human Systems Engineering and Design (IHSED 2021) Future Trends and Applications. AHFE International, 2021. http://dx.doi.org/10.54941/ahfe1001110.

Abstract:
In order to make gesture-based interaction more natural for users, this research aims to find suitable gestures for interaction between users and exhibition hall service robots. The research process is divided into two stages. The first stage conducts demand analysis and task definition to analyze user needs and tasks in different scenarios during a visit to the exhibition hall, and thus to define the task set for the exhibition hall service robot's gesture interaction. The second stage conducts two experiments. Experiment one carries out gesture elicitation: participants are invited to design gestures for different tasks in each scene, thereby producing user-defined gesture sets. Experiment two conducts gesture evaluation tests: experts are invited to select three candidate gestures for each task. To collect user preferences for the candidate gestures, participants are asked to watch demonstration videos of each candidate gesture and score it in terms of ease of use, comfort, matching, and memorability. The candidate gestures are sorted according to their scores. The most suitable gestures for human-computer interaction between exhibition hall service robots and users are obtained for the different scenarios and tasks.
2

DeVito, Matthew P., and Karthik Ramani. "Talking to TAD: Animating an Everyday Object for Use in Augmented Workspaces." In ASME 2014 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/detc2014-34189.

Abstract:
Workspaces augmented by multitouch and gesture-sensing systems are quickly becoming a reality, but studies appear to limit themselves to interacting with displays. With the continued progress of the ubiquitous computing movement, everyday objects are coming to life and will soon enter these augmented spaces. Little has been studied regarding gestural control of everyday objects capable of movement in three-dimensional space. In the present study, we augment an office lamp for gestural interaction and use it toward finding more natural gestures for augmented workspace interaction with physical objects. We begin by surveying the current literature on user-defined gesture sets and digital augmentation of lamps to determine features desirable in the design of an actuated desk lamp. A prototypical Tabletop Assistive Droid (TAD) is then used in a study conducted to determine and analyze a feasible user-defined gesture set.
3

Yang, Sicheng, Zhiyong Wu, Minglei Li, Zhensong Zhang, Lei Hao, Weihong Bao, Ming Cheng, and Long Xiao. "DiffuseStyleGesture: Stylized Audio-Driven Co-Speech Gesture Generation with Diffusion Models." In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/650.

Abstract:
The art of communication extends beyond speech to gestures. Automatic co-speech gesture generation draws much attention in computer animation. It is a challenging task due to the diversity of gestures and the difficulty of matching the rhythm and semantics of the gesture to the corresponding speech. To address these problems, we present DiffuseStyleGesture, a diffusion-model-based speech-driven gesture generation approach. It generates high-quality, speech-matched, stylized, and diverse co-speech gestures based on given speeches of arbitrary length. Specifically, we introduce cross-local attention and self-attention to the gesture diffusion pipeline to generate better speech-matched and realistic gestures. We then train our model with classifier-free guidance to control the gesture style by interpolation or extrapolation. Additionally, we improve the diversity of generated gestures with different initial gestures and noise. Extensive experiments show that our method outperforms recent approaches on speech-driven gesture generation. Our code, pre-trained models, and demos are available at https://github.com/YoungSeng/DiffuseStyleGesture.
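For orientation, the classifier-free guidance step named in this abstract amounts to blending conditional and unconditional denoiser outputs; the sketch below assumes a PyTorch-style denoiser interface, which is our own stand-in rather than the paper's API:

```python
# Hedged sketch: classifier-free guidance for a gesture diffusion denoiser.
import torch

def guided_noise(denoiser, x_t, t, speech_cond, style, scale=2.5):
    eps_uncond = denoiser(x_t, t, cond=None, style=None)
    eps_cond = denoiser(x_t, t, cond=speech_cond, style=style)
    # scale > 1 extrapolates toward the condition (stronger style);
    # 0 < scale < 1 interpolates (weaker style), matching the paper's
    # description of style control by interpolation or extrapolation.
    return eps_uncond + scale * (eps_cond - eps_uncond)
```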
4

Liang, Cao. "Bridging the Cognitive Gap: Optimizing Gesture Interaction Design for the Elderly." In Intelligent Human Systems Integration (IHSI 2024) Integrating People and Intelligent Systems. AHFE International, 2024. http://dx.doi.org/10.54941/ahfe1004543.

Abstract:
With the advancement of technology, individuals are on the verge of embracing an era of artificial intelligence (AI). While the use of innovative products enhances convenience in people's lives, it often brings new challenges and problems. The elderly, in particular, face additional obstacles in using AI products due to physical and cognitive aging. However, gestural interaction, as a new means of interaction, has the potential to address this issue. In this paper, we employ qualitative research methods to explore guidelines for gesture interaction between AI systems and elderly users. By conducting focus group interviews, we identify a suitable set of gesture commands and analyze the factors that affect the elderly's comprehension of gestures. The results of our study provide valuable research methods and guidelines for future gesture command design.
5

He, Yanming, Shumeng Hou, and Peiyao Cheng. "Generating a Gesture Set Using the User-defined Method in Smart Home Contexts." In 13th International Conference on Applied Human Factors and Ergonomics (AHFE 2022). AHFE International, 2022. http://dx.doi.org/10.54941/ahfe1002181.

Abstract:
Gesture interaction is a natural interaction method that has been widely applied in various smart contexts. The smart home system is a promising area for integrating gesture interaction. Against this background, it is necessary to generate a set of gestures that can support users' intuitive interaction with smart home devices. The gesture elicitation study (GES) is an effective method for generating gestures. In this study, following GES, we develop a gesture set for controlling a smart TV via a smart speaker, which is common in smart home contexts. Two studies were conducted. In study 1, we conducted a diary study to generate target tasks, resulting in the fifteen most frequent tasks in domestic contexts. In study 2, a GES was conducted with twelve participants to generate gestures for each command. The generated gestures were analyzed by combining frequency, match, ease of use, learnability, memorability, and preference, resulting in a set of gestures for smart home contexts.
Keywords: Gesture Interaction, Smart Home System, Gesture Elicitation Study
6

Jung, Euichul, Young Joo Jang, and Whang Jae Lee. "Study on Preferred Gestural Interaction of Playing Music for Wrist Wearable Devices." In Applied Human Factors and Ergonomics Conference. AHFE International, 2021. http://dx.doi.org/10.54941/ahfe100581.

Abstract:
Recently, many gesture-based interactive devices have been developed. Gesture is one of the most intuitive and natural ways for people to communicate with each other, so gesture recognition technology is becoming a major topic in interaction design. Wrist wearable devices such as smart watches, the Nike FuelBand, and the Samsung Galaxy Gear are thriving on the market, and there are attempts to control wrist wearable devices with gestural interaction. In order to design more user-centered devices, it becomes very important to develop gesture standards defining which gesture is appropriate for which operation. In particular, there are two different situations in which gesture interaction is required: 1) people control objects that exist around them, such as a TV or a vehicle, and 2) people control objects worn on their body, such as a smart watch. This paper assumes that the two situations may require different gesture interactions. The goal of this paper is to reveal preferred gesture interactions for wrist wearable devices. The function of playing music is selected for the experiment because it is the most common and popular function on almost all digital devices. This paper consists of three parts: 1) collect existing gesture signal conventions and categorize them, 2) conduct a survey to find out preferred gestures for each function of playing music in the two different situations, and 3) analyze the results to define the most preferred gesture interactions and consider rationales for designing gesture interaction for wrist wearable devices.
7

Huang, Jinmiao, and Rahul Rai. "Hand Gesture Based Intuitive CAD Interface." In ASME 2014 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/detc2014-34070.

Abstract:
A key objective of a gesture-based computer aided design (CAD) interface is to enable humans to manipulate 3D models in virtual environments in a manner similar to how such objects are manipulated in real life. In this paper, we outline the development of a novel real-time gesture-based conceptual computer aided design tool which enables intuitive hand gesture-based interaction with a given design interface. Recognized hand gestures along with hand position information are converted into commands for rotating, scaling, and translating 3D models. In the presented system, gestures are identified based solely on the depth information obtained via inexpensive depth sensing cameras (SoftKinetic DepthSense 311). Since the gesture recognition system is entirely based on depth images, the developed system is robust and insensitive to variations in lighting conditions, hand color, and background noise. The difference between the input hand shape and the nearest neighboring point in the database is employed as the criterion to recognize different gestures. Extensive experiments with a design interface are also presented to demonstrate the accuracy, robustness, and effectiveness of the presented system.
8

Nyaga, Casam, and Ruth Wario. "Towards Kenyan Sign Language Hand Gesture Recognition Dataset." In 14th International Conference on Applied Human Factors and Ergonomics (AHFE 2023). AHFE International, 2023. http://dx.doi.org/10.54941/ahfe1003281.

Abstract:
Datasets for hand gesture recognition are now an important aspect of machine learning, and many datasets have been created for machine learning purposes. Some of the notable datasets include the Modified National Institute of Standards and Technology (MNIST) dataset, the Common Objects in Context (COCO) dataset, the Canadian Institute For Advanced Research (CIFAR-10) dataset, LeNet-5, AlexNet, GoogLeNet, the American Sign Language Lexicon Video Dataset, and the 2D Static Hand Gesture Colour Image Dataset for ASL Gestures. However, there is no dataset for Kenyan Sign Language (KSL). This paper proposes the creation of a KSL hand gesture recognition dataset. The dataset is intended to be twofold: one part for static hand gestures and one for dynamic hand gestures. With respect to dynamic hand gestures, short videos of the KSL alphabet a to z and numbers 0 to 10 will be considered. Likewise, for the static gestures, the KSL alphabet a to z will be considered. It is anticipated that this dataset will be vital in the creation of sign language hand gesture recognition systems, not only for Kenyan Sign Language but for other sign languages as well. This will be possible because of transfer learning when implementing sign language systems using neural network models.
9

Rafiq, Riyad Bin, Weishi Shi, and Mark V. Albert. "Wearable Sensor-Based Few-Shot Continual Learning on Hand Gestures for Motor-Impaired Individuals via Latent Embedding Exploitation." In Thirty-Third International Joint Conference on Artificial Intelligence {IJCAI-24}. California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/823.

Abstract:
Hand gestures can provide a natural means of human-computer interaction and enable people who cannot speak to communicate efficiently. Existing hand gesture recognition methods heavily depend on pre-defined gestures; however, motor-impaired individuals require new gestures tailored to each individual's gesture motion and style. Gesture samples collected from different persons have distribution shifts due to their health conditions, the severity of the disability, motion patterns of the arms, etc. In this paper, we introduce the Latent Embedding Exploitation (LEE) mechanism in our replay-based Few-Shot Continual Learning (FSCL) framework, which significantly improves the performance of fine-tuning a model for out-of-distribution data. Our method produces a diversified latent feature space by leveraging a preserved latent embedding known as gesture prior knowledge, along with intra-gesture divergence derived from two additional embeddings. Thus, the model can capture latent statistical structure in highly variable gestures with limited samples. We conduct an experimental evaluation using the SmartWatch Gesture and the Motion Gesture datasets. The proposed method results in an average test accuracy of 57.0%, 64.6%, and 69.3% by using one, three, and five samples for six different gestures. Our method helps motor-impaired persons leverage wearable devices, and their unique styles of movement can be learned and applied in human-computer interaction and social communication. Code is available at: https://github.com/riyadRafiq/wearable-latent-embedding-exploitation.
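To make the few-shot setting concrete, here is a generic prototype-based classifier over latent embeddings. It is far simpler than the paper's LEE mechanism and only illustrates classifying gestures from a handful of samples per class:

```python
# Hedged sketch: few-shot gesture classification via class prototypes.
import numpy as np

def build_prototypes(embeddings, labels):
    """Average the few support embeddings available for each gesture class."""
    return {lbl: np.mean([e for e, l in zip(embeddings, labels) if l == lbl],
                         axis=0)
            for lbl in set(labels)}

def predict(prototypes, query):
    def cosine(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
    # The gesture whose prototype is most similar to the query wins.
    return max(prototypes, key=lambda lbl: cosine(prototypes[lbl], query))
```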
10

Radkowski, Rafael, and Christian Stritzke. "Comparison Between 2D and 3D Hand Gesture Interaction for Augmented Reality Applications." In ASME 2011 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2011. http://dx.doi.org/10.1115/detc2011-48155.

Abstract:
This paper presents a comparison between 2D and 3D interaction techniques for Augmented Reality (AR) applications. The interaction techniques are based on hand gestures and a computer vision-based hand gesture recognition system. We have compared 2D gestures and 3D gestures for interaction in AR applications. The 3D recognition system is based on a video camera which provides an additional depth image for each 2D color image, so spatial interactions become possible. Our major question during this work was: do depth images and 3D interaction techniques improve interaction with AR applications, and with virtual 3D objects? Therefore, we have tested and compared the hand gesture recognition systems. The results show two things. First, they show that depth images facilitate more robust hand recognition and gesture identification. Second, the results are a strong indication that 3D hand gesture interaction techniques are more intuitive than 2D hand gesture interaction techniques. In summary, the results emphasize that depth images improve hand gesture interaction for AR applications.

Reports on the topic "Gesture"

1

Yang, Jie, and Yangsheng Xu. Hidden Markov Model for Gesture Recognition. Fort Belvoir, VA: Defense Technical Information Center, May 1994. http://dx.doi.org/10.21236/ada282845.

2

Morton, Paul R., Edward L. Fix, and Gloria L. Calhoun. Hand Gesture Recognition Using Neural Networks. Fort Belvoir, VA: Defense Technical Information Center, May 1996. http://dx.doi.org/10.21236/ada314933.

3

Cassell, Justine, Matthew Stone, Brett Douville, Scott Prevost, and Brett Achorn. Modeling the Interaction between Speech and Gesture. Fort Belvoir, VA: Defense Technical Information Center, May 1994. http://dx.doi.org/10.21236/ada290549.

4

Vira, Naren. Gesture Recognition Development for the Interactive Datawall. Fort Belvoir, VA: Defense Technical Information Center, January 2008. http://dx.doi.org/10.21236/ada476755.

5

Zhao, Ruyin. CSI-based Gesture Recognition and Object Detection. Ames (Iowa): Iowa State University, January 2021. http://dx.doi.org/10.31274/cc-20240624-456.

6

Lampton, Donald R., Bruce W. Knerr, Bryan R. Clark, Glenn A. Martin, and Donald A. Washburn. Gesture Recognition System for Hand and Arm Signals. Fort Belvoir, VA: Defense Technical Information Center, November 2002. http://dx.doi.org/10.21236/ada408459.

7

Venetsky, Larry, Mark Husni, and Mark Yager. Gesture Recognition for UCAV-N Flight Deck Operations. Fort Belvoir, VA: Defense Technical Information Center, January 2003. http://dx.doi.org/10.21236/ada422629.

8

Yacoob, Yaser, and Larry Davis. Gesture-Based Control of Spaces and Objects in Augmented Reality. Fort Belvoir, VA: Defense Technical Information Center, October 2002. http://dx.doi.org/10.21236/ada408623.

9

Perzanowski, Dennis, Alan C. Schultz, William Adams, and Elaine Marsh. Using a Natural Language and Gesture Interface for Unmanned Vehicles. Fort Belvoir, VA: Defense Technical Information Center, January 2000. http://dx.doi.org/10.21236/ada435161.

10

Elliott, Linda R., Susan G. Hill, and Michael Barnes. Gesture-Based Controls for Robots: Overview and Implications for Use by Soldiers. Fort Belvoir, VA: Defense Technical Information Center, July 2016. http://dx.doi.org/10.21236/ad1011904.
