Journal articles on the topic "Digital gesture"

To see the other types of publications on this topic, follow this link: Digital gesture.

Create an accurate reference in APA, MLA, Chicago, Harvard, and several other styles.

Choose a source:

Consult the top 50 journal articles for your research on the topic "Digital gesture".

Next to each source in the list of references there is an "Add to bibliography" button. Click this button, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever these details are included in the metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Hesenius, Marc, Markus Kleffmann, and Volker Gruhn. "AugIR Meets GestureCards: A Digital Sketching Environment for Gesture-Based Applications." Interacting with Computers 33, no. 2 (March 2021): 134–54. http://dx.doi.org/10.1093/iwcomp/iwab017.

Full text
Abstract:
To gain a common understanding of an application's layouts, dialogs, and interaction flows, development teams often sketch user interfaces (UIs). Nowadays, they must also define multi-touch gestures, but tools for sketching UIs often lack support for custom gestures and typically just integrate a basic predefined gesture set, which might not suffice to specifically tailor the interaction to the desired use cases. Furthermore, sketching can be enhanced with digital means, but it remains unclear whether digital sketching is actually beneficial when designing gesture-based applications. We extended the AugIR, a digital sketching environment, with GestureCards, a hybrid gesture notation, to allow software engineers to define custom gestures when sketching UIs. We evaluated our approach in a user study contrasting digital and analog sketching of gesture-based UIs.
Citation styles: APA, Harvard, Vancouver, ISO, etc.
2

van den Hoven, Elise, and Ali Mazalek. "Grasping gestures: Gesturing with physical artifacts." Artificial Intelligence for Engineering Design, Analysis and Manufacturing 25, no. 3 (July 11, 2011): 255–71. http://dx.doi.org/10.1017/s0890060411000072.

Full text
Abstract:
Gestures play an important role in communication. They support the listener, who is trying to understand the speaker. However, they also support the speaker by facilitating the conceptualization and verbalization of messages and reducing cognitive load. Gestures thus play an important role in collaboration and also in problem-solving tasks. In human–computer interaction, gestures are also used to facilitate communication with digital applications, because their expressive nature can enable less constraining and more intuitive digital interactions than conventional user interfaces. Although gesture research in the social sciences typically considers empty-handed gestures, digital gesture interactions often make use of hand-held objects or touch surfaces to capture gestures that would be difficult to track in free space. In most cases, the physical objects used to make these gestures serve primarily as a means of sensing or input. In contrast, tangible interaction makes use of physical objects as embodiments of digital information. The physical objects in a tangible interface thus serve as representations as well as controls for the digital information they are associated with. Building on this concept, gesture interaction has the potential to make use of the physical properties of hand-held objects to enhance or change the functionality of the gestures made. In this paper, we look at the design opportunities that arise at the intersection of gesture and tangible interaction. We believe that gesturing while holding physical artifacts opens up a new interaction design space for collaborative digital applications that is largely unexplored. We provide a survey of gesture interaction work as it relates to tangible and touch interaction.
Based on this survey, we define the design space of tangible gesture interaction as the use of physical devices for facilitating, supporting, enhancing, or tracking gestures people make for digital interaction purposes, and outline the design opportunities in this space.
3

Vogiatzidakis, Panagiotis, and Panayiotis Koutsabasis. "Gesture Elicitation Studies for Mid-Air Interaction: A Review." Multimodal Technologies and Interaction 2, no. 4 (September 29, 2018): 65. http://dx.doi.org/10.3390/mti2040065.

Full text
Abstract:
Mid-air interaction involves touchless manipulation of digital content or remote devices, based on sensor tracking of body movements and gestures. No established, universal gesture vocabularies exist for such interactions; on the contrary, it is widely acknowledged that the identification of appropriate gestures depends on the context of use, so identifying mid-air gestures is an important design decision. The method of gesture elicitation is increasingly applied by designers to help them identify appropriate gesture sets for mid-air applications. This paper presents a review of elicitation studies in mid-air interaction based on a selected set of 47 papers published within 2011–2018. It reports on: (1) the application domains of mid-air interactions examined; (2) the level of technological maturity of the systems at hand; (3) the gesture elicitation procedure and its variations; (4) the appropriateness criteria for a gesture; (5) the number and profile of participants; (6) user evaluation methods (of the gesture vocabulary); and (7) data analysis and related metrics. The paper confirms that the elicitation method has been applied extensively, but with variability and some ambiguity, and discusses under-explored research questions and potential improvements of related research.
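Elicitation studies such as those reviewed above typically quantify consensus with the agreement rate of Vatavu and Wobbrock. As an illustration only (this code is not from the paper), a minimal implementation:

```python
from collections import Counter

def agreement_rate(proposals):
    """Agreement rate AR(r) for one referent, after Vatavu & Wobbrock
    (2015): AR = (|P| / (|P| - 1)) * sum((|Pi| / |P|)**2) - 1 / (|P| - 1),
    where P is the multiset of proposed gestures and the Pi are the
    groups of identical proposals."""
    n = len(proposals)
    if n < 2:
        return 1.0
    s = sum((k / n) ** 2 for k in Counter(proposals).values())
    return (n / (n - 1)) * s - 1 / (n - 1)

# Hypothetical elicitation data: 20 participants propose a gesture
# for the referent "next slide".
props = ["swipe-left"] * 12 + ["tap"] * 5 + ["flick"] * 3
print(round(agreement_rate(props), 3))  # → 0.416
```

AR ranges from 0 (no two participants agreed) to 1 (unanimous), which is what makes gesture vocabularies from different studies comparable.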
4

Zhong, Yushan, Yifan Jia, and Liang Ma. "Design and implementation of children's gesture education games based on AI gesture recognition technology." MATEC Web of Conferences 355 (2022): 03043. http://dx.doi.org/10.1051/matecconf/202235503043.

Full text
Abstract:
In order to cultivate children's imagination and creativity in the cognitive process, and building on the traditional hand-shadow game, a children's gesture education game based on AI gesture recognition technology is designed and developed. Built on the Unity development platform, with children's digital gesture recognition as its content, the game implements the basic functions involved, including AI gesture recognition, character animation, interface interaction, AR photo taking, and a question-answering system, and is released on mobile devices. Players can have their gestures recognized through the phone camera, interact with the game's virtual cartoon characters, watch character animations, learn popular-science knowledge, and complete in-game quizzes. The educational game helps children learn digital gestures, enriches their ways of knowing, expands their imagination, and lets them learn easily through play.
5

McNamara, Alison. "Digital Gesture-Based Games." International Journal of Game-Based Learning 6, no. 4 (October 2016): 52–72. http://dx.doi.org/10.4018/ijgbl.2016100104.

Full text
Abstract:
This study provides an account of phase three of a doctoral project in which both students' and teachers' views contribute to the design and development of a gesture-based game at post-primary level in Ireland. The research showed that school policies influenced the supporting Information and Communication Technology (ICT) infrastructure, that classroom environments influenced students' ability to participate, and that teachers' perspectives determined whether they adopted games in their classrooms. While research has been conducted on training schemes for teachers, it is agreed that teachers are the main change agents in the classroom. This study therefore focuses on the game itself and the design elements that support and enhance mathematics education within the Irish context. Based on the research, practical guidelines for the game, school policies, and classroom environments are provided for mathematics educators and practitioners of game-based learning strategies.
6

Santosh Kumar J, Vamsi, Vinod, Madhusudhan, and Tejas. "Design and Development of IoT Device that Recognizes Hand Gestures using Sensors." September 2021 7, no. 09 (September 27, 2021): 29–34. http://dx.doi.org/10.46501/ijmtst0709006.

Full text
Abstract:
A hand gesture is a non-verbal means of communication involving the motion of fingers to convey information. Hand gestures are used in sign language as a means of communication for deaf and mute people, and they are also used to control devices. The purpose of gesture recognition in devices has always been to bridge the gap between the physical world and the digital world; gestures can be tracked using gyroscopes, accelerometers, and other sensors. In this project we provide a cost-effective electronic method for hand gesture recognition using flex sensors and an ESP32 board. A flex sensor works on the principle of a change in its internal resistance to detect the angle made by the user's finger at any given time. Different combinations of finger flexes form a gesture, and each gesture can be converted into a signal or displayed as text on the screen. A smart glove is designed, equipped with custom-made flex sensors that detect the gestures, and an ESP32 board that processes the sensor readings and converts them to text. This helps machines understand human sign language, identify a word from hand gestures, and respond accordingly.
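The glove pipeline described above (flex resistance → bend angle → gesture) can be sketched as follows; the ADC calibration values and the three-gesture vocabulary are hypothetical placeholders, not values from the paper:

```python
def adc_to_angle(adc, flat=1800, bent=3000, max_angle=90.0):
    """Map a raw ESP32 ADC reading (0-4095) from a flex-sensor voltage
    divider to an approximate bend angle, assuming the divider output
    grows roughly linearly with bend between the 'flat' and fully
    'bent' calibration readings (hypothetical values)."""
    t = (adc - flat) / (bent - flat)
    return max(0.0, min(max_angle, t * max_angle))

def classify(angles, threshold=45.0):
    """Turn five finger angles (thumb..pinky) into a binary flex
    pattern (1 = bent) and look it up in a tiny example vocabulary."""
    vocab = {
        (1, 1, 1, 1, 1): "fist",
        (0, 0, 0, 0, 0): "open hand",
        (1, 0, 0, 1, 1): "peace",  # index and middle extended
    }
    pattern = tuple(1 if a > threshold else 0 for a in angles)
    return vocab.get(pattern, "unknown")

# One frame of raw readings: thumb, ring, and pinky bent.
print(classify([adc_to_angle(a) for a in [2900, 1850, 1900, 2950, 3000]]))  # → peace
```

On the real device the same logic would run in the ESP32 firmware loop; the Python form just makes the mapping testable.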
7

Powar, Salonee, Shweta Kadam, Sonali Malage, and Priyanka Shingane. "Automated Digital Presentation Control using Hand Gesture Technique." ITM Web of Conferences 44 (2022): 03031. http://dx.doi.org/10.1051/itmconf/20224403031.

Full text
Abstract:
In today's digital world, a slideshow presentation is an effective and attractive way for speakers to convey information and convince an audience. Slides can be controlled with devices such as a mouse, keyboard, or laser pointer, but the drawback is that the speaker needs prior familiarity with those devices. Gesture recognition has gained importance in recent years and is used to control applications such as media players, robots, and games. Some hand gesture recognition systems make use of gloves, markers, and the like; however, such equipment increases the cost of the system. In this proposed system, an artificial-intelligence-based hand gesture detection method is proposed: users can move the presentation slides both forward and backward simply by making hand gestures. Using hand gestures makes the interaction simple and convenient and requires no additional device. The suggested method helps speakers deliver a productive presentation with natural, improved communication with the computer. In particular, the proposed system is more effective than a laser pointer, since the hand is more visible and can better hold the audience's attention.
8

Zhao, Shichao. "Exploring How Interactive Technology Enhances Gesture-Based Expression and Engagement: A Design Study." Multimodal Technologies and Interaction 3, no. 1 (February 27, 2019): 13. http://dx.doi.org/10.3390/mti3010013.

Full text
Abstract:
The interpretation and understanding of physical gestures play a significant role in various forms of art. Interactive technology and digital devices offer a plethora of opportunities for personal gesture-based experience, and they assist in the creation of collaborative artwork. In this study, three prototypes for use with different digital devices (digital camera, PC camera, and Kinect) were designed. Subsequently, a series of workshops and in-depth interviews were conducted with participants from different cultural and occupational backgrounds, designed to explore how to design personalised gesture-based expressions and how to engage the creativity of the participants in their gesture-based experiences. The findings indicated that, in terms of gesture-based interaction, the participants preferred to engage with the visual traces that were displayed at specific timings in multi-experience spaces. Their gesture-based interactions could effectively support non-verbal emotional expression. In addition, the participants were shown to be strongly inclined to combine their personal stories and emotions into their own gesture-based artworks. Given the participants' different cultural and occupational backgrounds, their artistic creation could form spontaneously.
9

Wattamwar, Aniket. "Sign Language Recognition using CNN." International Journal for Research in Applied Science and Engineering Technology 9, no. 9 (September 30, 2021): 826–30. http://dx.doi.org/10.22214/ijraset.2021.38058.

Full text
Abstract:
This research work presents a prototype system that recognizes hand gestures so that hearing people can communicate more effectively with deaf people. The work focuses on the problem of real-time recognition of the sign language gestures used by the deaf community. The problem is addressed with digital image processing using CNNs (Convolutional Neural Networks), skin detection, and image segmentation techniques. The system recognizes gestures of ASL (American Sign Language), including the alphabet and a subset of its words. Keywords: gesture recognition, digital image processing, CNN (Convolutional Neural Networks), image segmentation, ASL (American Sign Language), alphabet
10

Adak, Nitin, and S. D. Lokhande. "An Accelerometer-Based Digital Pen for Handwritten Digit and Gesture Recognition." Indian Journal of Applied Research 3, no. 12 (October 1, 2011): 207–10. http://dx.doi.org/10.15373/2249555x/dec2013/61.

Full text
11

Kolhe, Ashwini, R. R. Itkarkar, and Anilkumar V. Nandani. "Robust Part-Based Hand Gesture Recognition Using Finger-Earth Mover's Distance." International Journal of Advanced Research in Computer Science and Software Engineering 7, no. 7 (July 29, 2017): 131. http://dx.doi.org/10.23956/ijarcsse/v7i7/0196.

Full text
Abstract:
Hand gesture recognition is of great importance for human-computer interaction (HCI) because of its extensive applications in virtual reality, sign language recognition, and computer games. Despite much previous work, traditional vision-based hand gesture recognition methods are still far from satisfactory for real-life applications. Because of the nature of optical sensing, the quality of the captured images is sensitive to lighting conditions and cluttered backgrounds, so optical-sensor-based methods are usually unable to detect and track hands robustly, which largely affects the performance of hand gesture recognition. Compared to the entire human body, the hand is a smaller object with more complex articulations that is more easily affected by segmentation errors, making hand gesture recognition a very challenging problem. This work focuses on building a robust part-based hand gesture recognition system. To handle the noisy hand shapes obtained from a digital camera, we propose a novel distance metric, Finger-Earth Mover's Distance (FEMD), to measure the dissimilarity between hand shapes. As it matches only the finger parts rather than the whole hand, it can better distinguish hand gestures with slight differences. The experiments demonstrate that the proposed hand gesture recognition system achieves a mean accuracy of 80.4% on a 6-gesture database.
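The core idea behind FEMD, comparing shapes by the "work" needed to transform one signature into another, can be illustrated with the one-dimensional special case of the Earth Mover's Distance, where for equal-mass signatures the distance reduces to a summed cumulative difference. This is a simplified sketch, not the paper's full part-based FEMD:

```python
def emd_1d(a, b):
    """Earth Mover's Distance between two equal-length, equal-mass 1D
    signatures (e.g. normalised finger-segment lengths binned along the
    hand contour). In 1D this is the sum of absolute cumulative
    differences; the real FEMD additionally allows partial matching,
    with a penalty for unmatched finger mass."""
    assert len(a) == len(b), "signatures must share a bin layout"
    flow, total = 0.0, 0.0
    for x, y in zip(a, b):
        flow += x - y          # mass carried past this bin boundary
        total += abs(flow)     # work = mass * distance (bin width 1)
    return total

# Moving one unit of mass two bins to the right costs 2.
print(emd_1d([1, 0, 0], [0, 0, 1]))  # → 2.0
```

Matching only finger regions, as the paper does, keeps small palm-segmentation errors from dominating this distance.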
12

Fahrudin, Fikri, Mesi Andriani, Muallimin, and Eka Altiarika. "Gerakan Tangan Pemain Otomatis Menggunakan Computer Vision" [Automatic Player Hand Gestures Using Computer Vision]. Journal of Information Technology and Society 1, no. 1 (June 24, 2023): 15–19. http://dx.doi.org/10.35438/jits.v1i1.19.

Full text
Abstract:
Gesture recognition allows users to interact with their digital devices more conveniently. Gesture recognition technology can be helpful in a variety of contexts, such as automated household appliances, automobiles, and the interpretation of hand gestures; it determines what message a given hand movement is meant to convey. In developing this automatic hand-movement system we use segmentation and object detection, in which algorithms detect and identify objects or regions related to hand movements, such as human skin and fingers. Detecting human hand movements using computer vision is a digital image processing technique that aims to recognize hand movements from image or video data, and it can be applied in areas such as human-computer interaction, hand gesture recognition, and video games. According to the research and development carried out, automatic player hand movements using computer vision have the potential to improve the user experience when playing games or using interactive applications that require hand movements: games could be controlled more precisely, with less need for additional hardware such as joysticks or controllers, if computer vision technology can accurately distinguish hand movements.
13

Attygalle, Nuwan T., Luis A. Leiva, Matjaž Kljun, Christian Sandor, Alexander Plopski, Hirokazu Kato, and Klen Čopič Pucihar. "No Interface, No Problem: Gesture Recognition on Physical Objects Using Radar Sensing." Sensors 21, no. 17 (August 27, 2021): 5771. http://dx.doi.org/10.3390/s21175771.

Full text
Abstract:
Physical objects are usually not designed with interaction capabilities to control digital content. Nevertheless, they provide an untapped source for interactions since every object could be used to control our digital lives. We call this the missing interface problem: Instead of embedding computational capacity into objects, we can simply detect users’ gestures on them. However, gesture detection on such unmodified objects has to date been limited in the spatial resolution and detection fidelity. To address this gap, we conducted research on micro-gesture detection on physical objects based on Google Soli’s radar sensor. We introduced two novel deep learning architectures to process range Doppler images, namely a three-dimensional convolutional neural network (Conv3D) and a spectrogram-based ConvNet. The results show that our architectures enable robust on-object gesture detection, achieving an accuracy of approximately 94% for a five-gesture set, surpassing previous state-of-the-art performance results by up to 39%. We also showed that the decibel (dB) Doppler range setting has a significant effect on system performance, as accuracy can vary up to 20% across the dB range. As a result, we provide guidelines on how to best calibrate the radar sensor.
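The reported sensitivity to the dB Doppler range setting can be made concrete: before range-Doppler magnitudes are fed to a ConvNet, they are typically converted to decibels relative to the frame peak and clipped at a floor, and that floor decides how much low-energy micro-motion survives. A minimal sketch (the floor value is an arbitrary example, not the study's calibration):

```python
import math

def to_db(magnitudes, floor_db=-60.0):
    """Convert linear range-Doppler magnitudes to decibels relative to
    the frame peak, clipping at floor_db. Raising the floor discards
    weak returns (possibly including subtle finger motion); lowering
    it keeps more noise, which is why the setting affects accuracy."""
    peak = max(magnitudes)
    out = []
    for m in magnitudes:
        db = 20.0 * math.log10(m / peak) if m > 0 else floor_db
        out.append(max(db, floor_db))
    return out

print(to_db([1.0, 0.1, 0.0], floor_db=-40.0))
```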
14

Herman, L., Z. Stachoň, R. Stuchlík, J. Hladík, and P. Kubíček. "TOUCH INTERACTION WITH 3D GEOGRAPHICAL VISUALIZATION ON WEB: SELECTED TECHNOLOGICAL AND USER ISSUES." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W2 (October 5, 2016): 33–40. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w2-33-2016.

Full text
Abstract:
The use of both 3D visualization and devices with touch displays is increasing. In this paper, we focus on web technologies for 3D visualization of spatial data and interaction with it via touch-screen gestures. In the first stage, we compared the support for touch interaction in selected JavaScript libraries on different hardware (desktop PCs with touch screens, tablets, and smartphones) and software platforms. Afterward, we carried out a simple empirical test (within-subject design, 6 participants, 2 simple tasks, an Acer LCD touch monitor, and digital terrain models as stimuli) focusing on the ability of users to solve simple spatial tasks via touch screens. An in-house testing web tool was developed, based on JavaScript, PHP, and X3DOM and the Hammer.js library. The correctness of answers, the speed of users' performance, the gestures used, and a simple gesture metric were recorded and analysed. Preliminary results revealed that the pan gesture is the one most frequently used by test participants and is also supported by the majority of 3D libraries. Possible gesture metrics and future developments, including interpersonal differences, are discussed in the conclusion.
15

Qian, Hao, Yangbin Chi, Zining Dong, Feng Yan, and Limin Zhang. "A Gesture Recognition Method with a Charge Induction Array of Nine Electrodes." Sensors 22, no. 3 (February 3, 2022): 1158. http://dx.doi.org/10.3390/s22031158.

Full text
Abstract:
In order to develop a non-contact and simple gesture recognition technology, a recognition method with a charge induction array of nine electrodes is proposed. First, the principle of signal acquisition based on charge induction is introduced, and the whole system is described. Second, the recognition algorithms, including the pre-processing algorithm and a back-propagation neural network (BPNN) algorithm, are given to recognize three input modes of hand gestures: digital input, direction input, and key input. Finally, experiments on the three input modes are carried out, and the recognition accuracy is 97.2%, 94%, and 100% for digital input, direction input, and key input, respectively. The outstanding characteristic of this method is real-time recognition of the three hand gestures at a distance of 2 cm without the need to wear any device, as well as being low cost and easy to implement.
16

Sruthi S and Swetha S. "Hand Gesture Controlled Presentation using OpenCV and MediaPipe." International Journal of Engineering Technology and Management Sciences 7, no. 4 (2023): 338–42. http://dx.doi.org/10.46647/ijetms.2023.v07i04.046.

Full text
Abstract:
In today's digital era, presentations play a crucial role in various domains, ranging from education to business. However, traditional manual presentation methods, reliant on input devices such as keyboards or clickers, have inherent limitations in terms of mobility, interactivity, and user experience. To address these limitations, gesture-controlled presentations have emerged as a promising solution, harnessing the power of computer vision techniques to interpret hand gestures and enable natural interaction with presentation content. This paper presents a comprehensive system for hand gesture-controlled presentations using OpenCV and MediaPipe libraries. OpenCV is employed to capture video input from a webcam, while MediaPipe is utilized for hand tracking and landmark extraction. By analyzing finger positions and movements, the system accurately recognizes predefined gestures. Presenters can seamlessly control the slides, hold a pointer, annotate the content, and engage with the audience in a more interactive manner. The responsiveness and real-time performance contribute to an enhanced presentation experience.
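The recognition step in such a pipeline can operate directly on the 21 hand landmarks MediaPipe returns per frame, with no further vision processing. The sketch below assumes MediaPipe's landmark numbering (fingertip/PIP indices) and normalised image coordinates with y growing downward; the count-to-action mapping is an invented example, not the paper's:

```python
TIPS = [8, 12, 16, 20]   # MediaPipe fingertip landmarks (index..pinky)
PIPS = [6, 10, 14, 18]   # the corresponding PIP joints

def extended_fingers(landmarks):
    """landmarks: 21 (x, y) pairs in normalised image coordinates.
    A finger counts as extended when its tip lies above its PIP joint
    (smaller y). The thumb is ignored for simplicity, since it flexes
    sideways rather than vertically."""
    return sum(1 for t, p in zip(TIPS, PIPS)
               if landmarks[t][1] < landmarks[p][1])

def action_for(count):
    """Illustrative mapping from extended-finger count to a command."""
    return {1: "next slide", 2: "previous slide", 4: "pointer"}.get(count, "idle")

# Synthetic frame: index and middle fingertips raised above their PIPs.
lm = [[0.5, 0.5] for _ in range(21)]
lm[8][1], lm[12][1] = 0.2, 0.2
print(action_for(extended_fingers(lm)))  # → previous slide
```

In a live system the same two functions would sit inside the webcam loop, fed by OpenCV capture and MediaPipe's hand-tracking output.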
17

Volioti, Christina, Apostolos Tsagaris, Dimitrios Trigkas, Theodoros Iliou, Menelaos N. Katsantonis, and Ioannis Mavridis. "HapticSOUND: An Interactive Learning Experience with a Digital Musical Instrument." Applied Sciences 13, no. 12 (June 14, 2023): 7149. http://dx.doi.org/10.3390/app13127149.

Full text
Abstract:
In this paper, an interactive learning experience is proposed, aiming to involve museum visitors in a personalized experience of the transmittal of cultural knowledge in an active and creative way. The proposed system, called HapticSOUND, consists of three subsystems: (a) the Information, where visitors are informed about the traditional musical instruments; (b) the Entertainment, where visitors are entertained by playing serious games to virtually assemble traditional musical instruments by a set of 3D objects; and (c) the Interaction, where visitors interact with a digital musical instrument which is an exact 3D-printed replica of a traditional musical instrument, where cameras have been placed to capture user gestures and machine learning algorithms have been implemented for gesture recognition. The museum visitor can interact with the lifelike replica to tactilely and aurally explore the instrument’s abilities, producing sounds guided by the system and receiving real-time visual and audio feedback. Emphasis is given to the Interaction Subsystem, where a pilot study was conducted to evaluate the usability of the subsystem. Preliminary results were promising since the usability was satisfactory, indicating that it is an innovative approach that utilizes sensorimotor learning and machine learning techniques in the context of playing sounds based on real-time gesture and fingering recognition.
18

Mo, Dong-Han, Chuen-Lin Tien, Yu-Ling Yeh, Yi-Ru Guo, Chern-Sheng Lin, Chih-Chin Chen, and Che-Ming Chang. "Design of Digital-Twin Human-Machine Interface Sensor with Intelligent Finger Gesture Recognition." Sensors 23, no. 7 (March 27, 2023): 3509. http://dx.doi.org/10.3390/s23073509.

Full text
Abstract:
In this study, the design of a digital-twin human-machine interface sensor (DT-HMIS) is proposed. This is a digital-twin sensor (DT-Sensor) that can meet the demands of human-machine collaboration in Industry 5.0. The DT-HMIS allows users/patients to add, modify, delete, query, and restore their previously memorized DT finger-gesture mapping model and programmable logic controller (PLC) logic program, enabling operation of or access to the programmable controller input-output (I/O) interface and extending the limb-collaboration capability of users/patients. The system has two main functions. The first is gesture-encoded virtual manipulation, which indirectly accesses the PLC through the DT mapping model to control electronic peripherals, extending limb ability by executing logic-control program instructions. The second is gesture-based virtual manipulation that helps non-verbal individuals compose spoken sentences through gesture commands, improving their ability to express themselves. The design method uses primitive image processing and an eight-way dual-bit signal processing algorithm to capture the movement of human finger gestures and convert it into digital signals. The system service maps control instructions by observing the digital signals of the DT-HMIS and drives motion control through mechatronic integration or speech-synthesis feedback, expressing the operational requirements of inconvenient work or of complex handheld physical tools. Based on computer vision, the DT-HMIS can reflect the user's command status without additional wearable devices and promotes interaction with the virtual world. When used by patients, the system ensures that the user's virtual control is mapped to physical device control, providing the convenience of independent operation while reducing caregiver fatigue.
This study shows that the recognition accuracy can reach 99%, demonstrating practicality and application prospects. In future applications, users/patients will be able to interact virtually with other peripheral devices through the DT-HMIS to meet their own interaction needs and promote industry progress.
19

Sahithi, G. "A Model for Sign Language Recognition System using Deep Learning." International Journal for Research in Applied Science and Engineering Technology 10, no. 6 (June 30, 2022): 2270–76. http://dx.doi.org/10.22214/ijraset.2022.44286.

Full text
Abstract:
Conversing with someone who has a hearing disability is often a major challenge. Sign language has become the final panacea and is a very effective tool for people with hearing and speech disabilities to communicate their feelings and opinions to the world. It makes their integration with others easy and less complex. However, the invention of sign language alone is not enough: sign gestures often get mixed up and become confusing for someone who has never learned the language or knows it in a different form. This communication gap, which has existed for years, can now be narrowed with the advent of various techniques to automate the detection of sign gestures. In this paper, we introduce a sign language recognition system. The user captures images of a hand gesture using a web camera, and the system predicts and displays the name of the captured gesture. We use an HSV color algorithm to detect the hand and set the background to black. The images undergo a series of processing steps, including computer vision techniques such as conversion to grayscale, dilation, and mask operations, and the region of interest, in our case the hand gesture, is segmented. The features extracted are the binary pixels of the images. We employ a Convolutional Neural Network (CNN) for training and classifying the images. We are able to recognize 10 sign gesture alphabets with high accuracy; our model achieves an accuracy above 90%.
20

Mehta, Devika. "Panchatantra Storytelling using Hand Gesture and Digital System." International Journal for Research in Applied Science and Engineering Technology V, no. VIII (August 29, 2017): 538–43. http://dx.doi.org/10.22214/ijraset.2017.8075.

Full text
21

Sundari, P. Gnana, K. Jawahar, J. Elango, M. Anburaj, and K. Harish. "Digital Art using Hand Gesture Control with IOT." International Journal of Engineering Trends and Technology 45, no. 8 (March 25, 2017): 412–15. http://dx.doi.org/10.14445/22315381/ijett-v45p277.

Full text
22

K, Jishma, and Anupama Jims. "Hand Gesture Controlled Presentation Viewer with AI Virtual Painter." YMER Digital 21, no. 05 (May 23, 2022): 1016–25. http://dx.doi.org/10.37896/ymer21.05/b6.

Full text
Abstract:
Online teaching has been encouraged for many years, but the COVID-19 pandemic has promoted it to an even greater extent. Teachers had to shift quickly to online teaching methods and processes and conduct all classroom activities online. The global pandemic has accelerated the transition from chalk-and-board learning to mouse-and-click digital learning. Even though online whiteboards are available for teaching, teachers often find it difficult to draw using a mouse. One solution would be an external digital board and stylus, but not everyone can afford them. The Hand-Gesture Controlled Presentation Viewer with AI Virtual Painter is a project in which one can navigate through presentation slides and draw anything on them, just as one would on a normal board, using only one's fingers. The project aims to digitalise the traditional blackboard-and-chalk system and eliminate the need for a mouse or keyboard while teaching. Hand-gesture-controlled devices, especially laptops and computers, have recently gained a lot of attention. The system works by detecting landmarks on one's hand to recognise gestures. The project recognises five hand gestures: the thumb for moving to the next slide, the little finger for moving to the previous slide, two fingers for displaying the pointer, one finger for drawing on the screen, and three fingers for erasing whatever has been drawn. The interface between the user and the system is provided by a camera alone, and the camera's output is presented on the system's screen so that the user can further calibrate it. Keywords: Hand Gestures, Gesture Detection, Virtual Painter.
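The five-gesture vocabulary described in the abstract maps naturally onto a lookup table keyed by which fingers are raised. The finger patterns below are one plausible reading of the gestures named (thumb → next, little finger → previous, and so on); detecting the raised fingers themselves would come from the hand-landmark model:

```python
# Finger-state tuples are (thumb, index, middle, ring, pinky), 1 = raised.
GESTURES = {
    (1, 0, 0, 0, 0): "next slide",      # thumb
    (0, 0, 0, 0, 1): "previous slide",  # little finger
    (0, 1, 1, 0, 0): "pointer",         # two fingers
    (0, 1, 0, 0, 0): "draw",            # one finger
    (0, 1, 1, 1, 0): "erase",           # three fingers
}

def dispatch(fingers_up):
    """Return the action for a finger-state tuple, or 'idle' when the
    pattern is not part of the vocabulary."""
    return GESTURES.get(tuple(fingers_up), "idle")

print(dispatch([1, 0, 0, 0, 0]))  # → next slide
```

Keeping the vocabulary in a table like this makes it easy to recalibrate or extend without touching the detection code.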
23

Mohammed Musharaf Z, Meril Akash J and M. Malleswari. « Dynamic virtual assistance of I/O functionalities ». World Journal of Advanced Engineering Technology and Sciences 8, no. 2 (30 March 2023): 023–33. http://dx.doi.org/10.30574/wjaets.2023.8.2.0061.

Abstract:
With significant advancements witnessed in the engineering industry daily, it has become increasingly vital to seek new ways of interacting with computer technology and automation as demand for them grows. Today many devices adopt touch-screen technology, although it is not cost-effective in all applications. A specialized system, similar to a virtual device, that provides object tracking and gesture interaction could be an effective alternative to the standard touch screen and to solid physical gadgets. The goal is to create an object-tracking program that communicates with the computer system. The proposed model is a computer-vision-based control system in which hand movements are captured by a digital camera and processed with a hand-detection technique implemented with OpenCV libraries. The project applies gesture recognition, a topic spanning the fields of augmented reality and human-computer interaction, and implements a virtual gesture system that interprets human gestures through mathematical algorithms. Users can control or interact with the system with simple finger or hand gestures, without physically touching it; voice assistance is included to start and end the gesture-control system. Gesture recognition can be viewed as a way for computers to begin to understand human body language and signs, filling the void between computing systems and humans left by earlier text and graphical user interfaces, which still limit most input to the keyboard and mouse and may not be efficient at all times. The gesture-detection algorithm is based on deep learning. The proposed system also helps in the COVID-19 pandemic by reducing human contact with devices.
24

Purse, Lisa. « Digital Visceral: Textural Play and the Flamboyant Gesture in Digital Screen Violence ». Journal of Popular Film and Television 45, no. 1 (2 January 2017): 16–25. http://dx.doi.org/10.1080/01956051.2017.1270137.

25

Freire, Davi Soares, Renata Imaculada Soares Pereira, Maiara Jéssica Ribeiro Silva and Sandro César Silveira Jucá. « Embedded Linux System for Digital Image Recognition using Internet of Things ». Journal of Mechatronics Engineering 1, no. 2 (7 October 2018): 2. http://dx.doi.org/10.21439/jme.v1i2.14.

Abstract:
This paper describes the use of digital image processing and the Internet of Things for gesture recognition using the depth sensors of the Kinect device. Using the open-source libraries OpenCV and libfreenect, image data are translated and used for communication between a Raspberry Pi embedded Linux system and a PIC microcontroller board that controls peripheral devices. An LED is triggered according to the hand gesture representing the corresponding number. Data are stored on a PHP Apache server running locally on the Raspberry Pi. The proposed system can be used as a multifunctional tool in areas such as learning, post-traumatic rehabilitation, and visual and motor cognition timing. Using image binarization and a Naive-Bayes classifier, the achieved results show an error rate lower than 5%.
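The binarization-plus-Naive-Bayes combination named in the abstract can be sketched in a few lines. The threshold value, the toy 2x2 "images", and the Bernoulli formulation are assumptions for illustration; the paper's actual pipeline runs on OpenCV/libfreenect depth data.

```python
import math

def binarize(image, threshold=128):
    """Fixed-threshold binarization of a 2-D list of 0-255 intensities."""
    return [[1 if px >= threshold else 0 for px in row] for row in image]

def flatten(image):
    return [px for row in image for px in row]

def train_nb(samples, labels):
    """Per-class Bernoulli pixel probabilities with Laplace smoothing."""
    model = {}
    for cls in set(labels):
        imgs = [flatten(s) for s, l in zip(samples, labels) if l == cls]
        probs = [(sum(col) + 1) / (len(imgs) + 2) for col in zip(*imgs)]
        model[cls] = (len(imgs) / len(samples), probs)
    return model

def predict_nb(model, image):
    """Most probable class for a binarized image under the trained model."""
    flat = flatten(image)
    scores = {}
    for cls, (prior, probs) in model.items():
        lp = math.log(prior)
        for px, p in zip(flat, probs):
            lp += math.log(p if px else 1 - p)
        scores[cls] = lp
    return max(scores, key=scores.get)
```

Binarizing first collapses each pixel to one bit, which is exactly what makes the Bernoulli Naive-Bayes likelihood applicable.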
26

Adema, Janneke, and Kamila Kuc. « Unruly Gestures: Seven Cine-Paragraphs on Reading/Writing Practices in our Post-Digital Condition ». Culture Unbound 11, no. 1 (12 April 2019): 190–208. http://dx.doi.org/10.3384/cu.2000.1525.2019111190.

Abstract:
Unruly gestures presents a hybrid performative intervention by means of video, text, and still images. With this experimental essay we aspire to break down various preconceptions about reading/writing gestures. Breaking away from a narrative that sees these gestures foremost as passive entities – as either embodiments of pure subjective intentionality, or as bodily movements shaped and controlled by media technologies (enabling specific sensory engagements with texts) – we aim to reappraise them. Indeed, in this essay we identify numerous dominant narratives that relate to gestural agency, to the media-specificity of gestures, and to their (linear) historicity, naturalness and humanism. This essay disrupts these preconceptions, and by doing so, it unfolds an alternative genealogy of ‘unruly gestures.’ These are gestures that challenge gestural conditioning through particular media technologies, cultural power structures, hegemonic discourses, and the biopolitical self. We focus on reading/writing gestures that have disrupted gestural hegemonies and material-discursive forms of gestural control through time and across media. Informed by Tristan Tzara’s cut-up techniques, where through the gesture of cutting the Dadaists subverted established traditions of authorship, intentionality, and linearity, this essay has been cut-up into seven semi-autonomous cine-paragraphs (accessible in video and print). Each of these cine-paragraphs confronts specific gestural preconceptions while simultaneously showcasing various unruly gestures.
27

Choe, Jong-Hoon. « Gesture Interaction of Digital Frame for Visual Image Content ». Journal of the Korea Contents Association 10, no. 10 (28 October 2010): 120–27. http://dx.doi.org/10.5392/jkca.10.10.120.

28

Jia, Lesong, Xiaozhou Zhou, Hao Qin, Ruidong Bai, Liuqing Wang and Chengqi Xue. « Research on Discrete Semantics in Continuous Hand Joint Movement Based on Perception and Expression ». Sensors 21, no. 11 (27 May 2021): 3735. http://dx.doi.org/10.3390/s21113735.

Abstract:
Continuous movements of the hand contain discrete expressions of meaning, forming a variety of semantic gestures. For example, the bending of a finger is generally considered to include three semantic states: bent, half bent, and straight. However, there has been no research on the number of semantic states each movement primitive of the hand can convey, in particular the interval of each semantic state and its representative movement angle. To clarify these issues, we conducted perception and expression experiments. Experiments 1 and 2 focused on the perceivable semantic levels and boundaries of different motion primitive units from the perspective of visual semantic perception. Experiment 3 verified and optimized the segmentation results obtained above and further determined the typical motion values of each semantic state. In Experiment 4, the empirical application of this semantic state segmentation was illustrated using Leap Motion as an example. We ultimately obtained a discrete gesture semantic expression space, in both the real world and the Leap Motion digital world, containing a clearly defined number of semantic states for each hand motion primitive unit together with the boundaries and typical motion angle values of each state. This quantitative semantic expression space will help guide and advance research in gesture coding, gesture recognition, and gesture design.
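The segmentation the study measures amounts to a quantizer from a continuous bend angle to a discrete semantic state. The boundary angles below are assumed round numbers for illustration, not the values determined in the paper's experiments.

```python
STATE_BOUNDARIES = [   # (upper bound in degrees, state label) - assumed values
    (30.0, "straight"),
    (100.0, "half-bent"),
    (180.0, "bent"),
]

def bend_state(angle_deg):
    """Quantize a finger-bend angle (0 deg = fully extended) to a state."""
    for upper, label in STATE_BOUNDARIES:
        if angle_deg <= upper:
            return label
    raise ValueError("angle out of 0-180 degree range")
```

In a Leap Motion pipeline the angle would come from the tracked joint positions; here it is simply a number, so the state table is the only moving part.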
29

Koptyra, Katarzyna, and Marek R. Ogiela. « Steganography in IoT: Information Hiding with APDS-9960 Proximity and Gestures Sensor ». Sensors 22, no. 7 (29 March 2022): 2612. http://dx.doi.org/10.3390/s22072612.

Abstract:
This article describes a steganographic system for IoT based on an APDS-9960 gesture sensor. The sensor is used in two modes: as a trigger or data input. In trigger mode, gestures control when to start and finish the embedding process; then, the data come from an external source or are pre-existing. In data input mode, the data to embed come directly from the sensor that may detect gestures or RGB color. The secrets are embedded in time-lapse photographs, which are later converted to videos. Selected hardware and steganographic methods allowed for smooth operation in the IoT environment. The system may cooperate with a digital camera and other sensors.
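The abstract does not name the embedding method, so as a hedged illustration here is the classic least-significant-bit scheme, one plausible way gesture- or sensor-derived bytes could be hidden in a photograph's channel values:

```python
def embed_lsb(pixels, payload):
    """Hide payload bytes in the LSBs of a flat list of 0-255 channel values."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("cover too small for payload")
    stego = list(pixels)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & ~1) | bit   # overwrite lowest bit only
    return stego

def extract_lsb(pixels, n_bytes):
    """Recover n_bytes previously embedded with embed_lsb."""
    out = []
    for b in range(n_bytes):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        out.append(byte)
    return bytes(out)
```

Because only the lowest bit of each value changes, every stego value differs from its cover value by at most 1, which keeps the time-lapse frames visually unchanged.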
30

Patel, Nihar. « Smart City using Sixth Sense Technology ». International Journal for Research in Applied Science and Engineering Technology 10, no. 1 (31 January 2022): 1205–8. http://dx.doi.org/10.22214/ijraset.2022.40024.

Abstract:
Villages are considered the heart of India, and the economic development of villages along with cities matters to us. Therefore, to bring development to the grassroots level, the focus should be on the progress of the village. 'Sixth Sense' is a gesture interface that augments the physical world around us with digital information and lets us use natural hand gestures to interact with that information. Sixth Sense technology helps bridge the gap between the tangible and intangible worlds: in this new age of technology, the most important information for making the right decision is often something that cannot be seen and analyzed by our natural senses. The Sixth Sense concept is an attempt to connect that digital data to the real world. Using this technology, developing countries and cities can more easily be transformed into smart cities and developed countries. Keywords: Sixth Sense Technology, digital world, natural hand gestures, IoT
31

Van Nort, Doug. « Instrumental Listening: sonic gesture as design principle ». Organised Sound 14, no. 2 (29 June 2009): 177–87. http://dx.doi.org/10.1017/s1355771809000284.

Abstract:
In the majority of discussions surrounding the design of digital instruments and real-time performance systems, notions such as control and mapping are seen from a classical systems point of view: the former is often seen as a variable from an input device or perhaps some driving signal, while the latter is considered as the liaison between input and output parameters. At the same time there is a large body of research regarding gesture in performance that is concerned with the expressive and communicative nature of musical performance. While these views are certainly central to a conceptual understanding of ‘instrument’, it can be limiting to consider them a priori as the only proper model, and to mediate one’s conception of digital instrument design by fixed notions of control, mapping and gesture. As an example of an alternative way to view instrumental response, control structuring and mapping design, this paper discusses the concept of gesture from the point of view of the perception of human intentionality in sound and how one might consider this in interaction design.
32

Moldovan, Constantin Catalin, and Ionel Staretu. « Real-Time Gesture Recognition for Controlling a Virtual Hand ». Advanced Materials Research 463-464 (February 2012): 1147–50. http://dx.doi.org/10.4028/www.scientific.net/amr.463-464.1147.

Abstract:
Object tracking in three-dimensional environments is an area of research that has lately attracted a lot of attention for its potential regarding the interaction between man and machine. Real-time hand gesture detection and recognition from a video stream plays a significant role in human-computer interaction and remains a difficult task for current digital image processing applications. This paper presents a new method for human hand control in virtual environments that eliminates the need for the external devices currently used for hand motion capture and digitization. A first step in this direction is the detection of the human hand, followed by the detection of gestures and their use to control a virtual hand in a virtual environment.
33

Zhou, Zhenkun, Yin Zhang, Jiangqin Wu and Yueting Zhuang. « CReader: Multi-Touch Gesture Supported Reading Environment in Digital Library ». Advanced Science Letters 10, no. 1 (15 May 2012): 421–27. http://dx.doi.org/10.1166/asl.2012.3328.

34

Kum, Junyeong, and Myungho Lee. « Can Gestural Filler Reduce User-Perceived Latency in Conversation with Digital Humans? » Applied Sciences 12, no. 21 (29 October 2022): 10972. http://dx.doi.org/10.3390/app122110972.

Abstract:
The demand for a conversational system with digital humans has increased with the development of artificial intelligence. Latency can occur in such conversational systems because of natural language processing and network issues, which can deteriorate the user’s performance and the availability of the systems. There have been attempts to mitigate user-perceived latency by using conversational fillers in human–agent interaction and human–robot interaction. However, non-verbal cues, such as gestures, have received less attention in such attempts, despite their essential roles in communication. Therefore, we designed gestural fillers for the digital humans. This study examined the effects of whether the conversation type and gesture filler matched or not. We also compared the effects of the gestural fillers with conversational fillers. The results showed that the gestural fillers mitigate user-perceived latency and affect the willingness, impression, competence, and discomfort in conversations with digital humans.
35

Rahmad, Cahya, Arief Prasetyo and Riza Awwalul Baqy. « PowerPoint slideshow navigation control with hand gestures using Hidden Markov Model method ». Matrix: Jurnal Manajemen Teknologi dan Informatika 12, no. 1 (29 March 2022): 7–18. http://dx.doi.org/10.31940/matrix.v12i1.7-18.

Abstract:
Gesture is the easiest and most expressive way of communication between humans and computers, especially gestures of the hand and face. Users can communicate their ideas to a computer with simple gestures, without physical interaction. One setting for such communication between users and machines is the teaching and learning process in college, in particular the way speakers deliver material in the classroom. Most speakers nowadays use projectors that show PowerPoint slides from a connected laptop, and while presenting, the speaker needs to move from one slide to the next or back to the previous one. A hand gesture recognition system is therefore needed to implement these interactions. In this study, a PowerPoint navigation control system was built using a combination of digital imaging techniques. The YCbCr threshold method detects skin color, morphological operations refine the detection results, and background subtraction detects moving objects. Classification uses the Hidden Markov Model (HMM). On 526 hand images, the confusion-matrix accuracy is 74.5% and the sensitivity is 76.47%. From these values it can be concluded that the Hidden Markov Model detects gestures well enough to serve as a PowerPoint slide navigation control.
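The skin-detection step above can be sketched as follows. The RGB-to-YCbCr conversion is the standard ITU-R BT.601 form; the Cb/Cr skin ranges are commonly used defaults and are assumed here, since the paper's exact thresholds are not given in the abstract.

```python
def rgb_to_ycbcr(r, g, b):
    """ITU-R BT.601 full-range RGB -> YCbCr conversion."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin(r, g, b, cb_range=(77, 127), cr_range=(133, 173)):
    """Classify one pixel as skin if Cb and Cr fall in the given ranges."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return cb_range[0] <= cb <= cb_range[1] and cr_range[0] <= cr <= cr_range[1]
```

Ignoring the Y (luma) channel is the point of this colour space choice: skin chromaticity stays in a compact Cb/Cr box across a wide range of lighting levels.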
36

Keefe, Daniel F. « From Gesture to Form: The Evolution of Expressive Freehand Spatial Interfaces ». Leonardo 44, no. 5 (October 2011): 460–61. http://dx.doi.org/10.1162/leon_a_00261.

Abstract:
This paper presents a series of insights from an ongoing investigation into refining custom spatial computer interfaces and graphical primitives for suggesting 3D form in immersive digital spaces. Technical innovations utilizing 3D gesture capture, force feedback, and stereoscopic presentation are described through reference to specific free-form digital sculptures created with the CavePainting and Drawing on Air interfaces. The role of the human hand in digital art practice and the potential of interfaces that tightly couple freehand movements with geometric algorithms are discussed.
37

Keefe, Daniel F. « From Gesture to Form: The Evolution of Expressive Freehand Spatial Interfaces ». Leonardo 46, no. 1 (February 2013): 82–83. http://dx.doi.org/10.1162/leon_a_00492.

Abstract:
This paper presents a series of insights from an ongoing investigation into refining custom spatial computer interfaces and graphical primitives for suggesting 3D form in immersive digital spaces. Technical innovations utilizing 3D gesture capture, force feedback, and stereoscopic presentation are described through reference to specific free-form digital sculptures created with the CavePainting and Drawing on Air interfaces. The role of the human hand in digital art practice and the potential of interfaces that tightly couple freehand movements with geometric algorithms are discussed.
38

Zhang, Ning. « 3D Digital Model of Folk Dance Based on Few-Shot Learning and Gesture Recognition ». Computational Intelligence and Neuroscience 2022 (30 June 2022): 1–11. http://dx.doi.org/10.1155/2022/3682261.

Abstract:
Folk dance is a very distinctive local culture in China, and dances in different regions have different characteristics. With the development of 3D digital technology and human gesture recognition, how to apply them to folk dance is a question worth considering. This paper recognizes and collects dance movements through the human body detection and tracking techniques of gesture recognition, writes the data into an AAM model for 3D digital modeling, and retains the information by integrating manifold ordering. Finally, it designs a folk dance learning method based on Few-Shot learning. The paper also designs a data set test experiment, an algorithm data set comparison experiment, and a target matching algorithm comparison experiment to optimize the proposed learning method. The final results show that the Few-Shot learning method based on gesture-recognition 3D digital modeling of folk dances reduces learning time by 17% compared with traditional folk dance learning methods and improves the dance action score by 14%.
39

Rhodes, Chris, Richard Allmendinger and Ricardo Climent. « New Interfaces and Approaches to Machine Learning When Classifying Gestures within Music ». Entropy 22, no. 12 (7 December 2020): 1384. http://dx.doi.org/10.3390/e22121384.

Abstract:
Interactive music uses wearable sensors (i.e., gestural interfaces—GIs) and biometric datasets to reinvent traditional human–computer interaction and enhance music composition. In recent years, machine learning (ML) has been important for the artform. This is because ML helps process complex biometric datasets from GIs when predicting musical actions (termed performance gestures). ML allows musicians to create novel interactions with digital media. Wekinator is a popular ML software amongst artists, allowing users to train models through demonstration. It is built on the Waikato Environment for Knowledge Analysis (WEKA) framework, which is used to build supervised predictive models. Previous research has used biometric data from GIs to train specific ML models. However, previous research does not inform optimum ML model choice, within music, or compare model performance. Wekinator offers several ML models. Thus, we used Wekinator and the Myo armband GI and study three performance gestures for piano practice to solve this problem. Using these, we trained all models in Wekinator and investigated their accuracy, how gesture representation affects model accuracy and if optimisation can arise. Results show that neural networks are the strongest continuous classifiers, mapping behaviour differs amongst continuous models, optimisation can occur and gesture representation disparately affects model mapping behaviour; impacting music practice.
40

Nam, Jung, and Daniel F. Keefe. « Spatial Correlation: An Interactive Display of Virtual Gesture Sculpture ». Leonardo 50, no. 1 (February 2017): 94–95. http://dx.doi.org/10.1162/leon_a_01226.

Abstract:
Spatial Correlation is an interactive digital artwork that provides a new window into the process of creating freeform handcrafted virtual sculptures while standing in an immersive Cave virtual reality (VR) environment. The piece originates in the lab, where the artist’s full-body, dance-like sculpting process is recorded using a combination of spatial tracking devices and an array of nine synchronized video cameras. Later, in the gallery, these raw data are reinterpreted as part of an interactive visualization that relates the three spaces in which the sculpture exists: 1) the physical lab/studio space in which the sculpture was created, 2) the digital virtual space in which the sculpture is mathematically defined and stored, and 3) the physical gallery space in which viewers now interact with the sculpture.
41

Le Cor, Gwen. « From erasure poetry to e-mash-ups, "reel on/ another! power!" ». Convergence: The International Journal of Research into New Media Technologies 24, no. 3 (23 November 2016): 305–20. http://dx.doi.org/10.1177/1354856516675254.

Abstract:
This article builds on an analysis of Sea and Spar Between by Nick Montfort and Stephanie Strickland and Tree of Codes by Jonathan Safran Foer to examine print and digital forms of writing through resonance, replication, and repetition. It explores the plastic and textual space of the page and screen and focuses more specifically on the composition of fragments and the way they can be apprehended by readers. Conversely, digital borrowing is not a mechanical process of self-identical recurrence, and like its print counterpart, it is a gesture of differenciation and a play of singularities (Deleuze). In investigating the entanglement of a work with a source text, this article also explores how creative gestures initiate a “floating” space as theorized by Jean-François Lyotard, that is, a space at once rigid and flexible where the reader is both bound and floating.
42

Holmes, Tiffany. « The Corporeal Stenographer: Language, Gesture, and Cyberspace ». Leonardo 32, no. 5 (October 1999): 383–89. http://dx.doi.org/10.1162/002409499553613.

Abstract:
The author describes her research and creative practice exploring the intersection between digital, biomedical, and linguistic modes of bodily representation. She synthesizes traditional forms of painting with new computer and medical imaging technologies to call into question the relationship between visible and invisible bodily forms and actions. Her paintings and installations use scientifically rendered images of the body (from symbolic DNA sequences to developing cellular structures) in order to consider the role of the tools and technologies used to organize and view these images. The issue of visual, linguistic, and scientific literacy is thus a corollary concern.
43

Mustafa, Sriyanti, Toto Nusantara, Subanji Subanji and Santi Irawati. « Mathematical Thinking Process of Autistic Students in Terms of Representational Gesture ». International Education Studies 9, no. 6 (26 May 2016): 93. http://dx.doi.org/10.5539/ies.v9n6p93.

Abstract:
The aim of this study is to describe the mathematical thinking process of autistic students in terms of gesture, using a qualitative approach. Data collection was conducted with three audio-visual cameras: during the learning process, both teacher and student activity were recorded using a handycam and a digital camera (full HD). Once the recording was complete, the data were analyzed exploratively until triangulation was carried out. The results describe the mathematical thinking process of students with autism, in terms of gesture, in three categories: correctly processed, partially processed, and contradictorily processed. Correctly processed is a series of problem-solving actions processed properly, marked by matching gestures; partially processed is a series of actions only partially processed properly, marked by some discrepant gestures; contradictorily processed is a series of actions processed incorrectly, marked by a dominance of discrepant gestures. Matching gestures show movements or facial expressions consistent with observing, pointing at, and naming the object being observed, while discrepant gestures indicate a mismatch in those movements or expressions.
44

Sreyas, S., Sreeja Kochuvila, S. Vignesh and R. Pranav. « AIR TOUCH: Human Machine Interface Using Electromyography Signals ». Journal of Physics: Conference Series 2251, no. 1 (1 April 2022): 012001. http://dx.doi.org/10.1088/1742-6596/2251/1/012001.

Abstract:
Novel interaction between futuristic devices and humans is gaining momentum in the ever-expanding digital world. In this paper, a system is proposed in which electromyography (EMG) signals are used to control the cursor on a PC with the movement of the hand, making interaction between user and computer effortless. The hand movements are detected using an accelerometer, and EMG signals acquired with electrodes are used to classify the hand gestures. Time-domain features are extracted from the EMG signals, and the gestures are classified using a K-Nearest Neighbor (KNN) classifier. The operation to be performed on the PC is determined from the gesture through a suitable interface. The system is implemented to position the cursor and to perform the two most common mouse actions, single click and double click, and achieved an accuracy of 98% in classifying the gestures.
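A minimal sketch of the classifier stage, assuming mean absolute value and root mean square as the time-domain features (two common choices; the paper's exact feature set is not listed in the abstract) and k=1 for brevity:

```python
import math

def features(window):
    """Mean absolute value (MAV) and root mean square (RMS) of one EMG window."""
    n = len(window)
    mav = sum(abs(x) for x in window) / n
    rms = math.sqrt(sum(x * x for x in window) / n)
    return (mav, rms)

def knn_predict(train, query, k=1):
    """train: list of (feature_tuple, label); returns the majority label
    among the k nearest neighbours in Euclidean feature space."""
    neighbours = sorted(train, key=lambda t: math.dist(t[0], query))[:k]
    labels = [label for _, label in neighbours]
    return max(set(labels), key=labels.count)
```

With only two features per window the KNN search is cheap enough for the real-time cursor control the paper targets.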
45

Repp, Bruno H. « A Constraint on the Expressive Timing of a Melodic Gesture: Evidence from Performance and Aesthetic Judgment ». Music Perception 10, no. 2 (1992): 221–41. http://dx.doi.org/10.2307/40285608.

Abstract:
Discussions of music performance often stress diversity and artistic freedom, yet there is general agreement that interpretation is not arbitrary and that there are standards that performances can be judged by. However, there have been few objective demonstrations of any extant constraints on music performance and judgment, particularly at the level of expressive microstructure. This study illustrates such a constraint in one specific case: the expressive timing of a melodic gesture that occurs repeatedly in Robert Schumann's famous piano piece, "Träumerei." Tone onset timing measurements in 28 recorded performances by famous pianists suggest that the most common "temporal shape" of this (nominally isochronous) musical gesture is parabolic and that individual variations can be described largely by varying a single degree of freedom of the parabolic timing function. The aesthetic validity of this apparent constraint on local performance timing was investigated in a perceptual experiment. Listeners judged a variety of timing patterns (original parabolic, shifted parabolic, and nonparabolic) imposed on the same melodic gesture, produced on an electronic piano under control of a Musical Instrument Digital Interface (MIDI). The original parabolic patterns received the highest ratings from musically trained listeners. (Musically untrained listeners were unable to give consistent judgments.) The results support the hypothesis that there are classes of optimal temporal shapes for melodic gestures in music performance and that musically acculturated listeners know and expect these shapes. Being classes of shapes, they represent flexible constraints within which artistic freedom and individual preference can manifest themselves.
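The parabolic timing function can be made concrete: nominally isochronous inter-onset intervals (IOIs) are modulated so that they follow a parabola over a centred event index, with a single coefficient controlling the curvature. The base tempo and curvature values below are illustrative, not Repp's measured coefficients.

```python
def parabolic_iois(n, base_ms=500.0, curvature_ms=40.0):
    """IOIs following base + curvature * k^2 over a centred, normalized
    event index k: one degree of freedom (curvature_ms) shapes the gesture."""
    centre = (n - 1) / 2.0
    return [base_ms + curvature_ms * ((k - centre) / centre) ** 2
            for k in range(n)]

def onsets(iois, start_ms=0.0):
    """Cumulative tone-onset times implied by a list of IOIs."""
    times, t = [start_ms], start_ms
    for ioi in iois:
        t += ioi
        times.append(t)
    return times
```

Varying `curvature_ms` alone reproduces the paper's one-degree-of-freedom family of timing shapes; setting it to zero recovers strictly isochronous onsets.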
46

Tafler, David I. « Drawing Spirits in the Sand: Performative Storytelling in the Digital Age ». Religions 10, no. 9 (21 August 2019): 492. http://dx.doi.org/10.3390/rel10090492.

Abstract:
For First Nations people living in the central desert of Australia, the performance of oral storytelling drawing in the sand drives new agency in the cultural metamorphosis of communication practices accelerated by the proliferation of portable digital devices. Drawing on the ground sustains the proxemic and kinesthetic aspects of performative storytelling as a sign gesture system. When rendering this drawing supra-language, the people negotiate and ride the ontological divide symbolized by traditional elders in First Nations communities and digital engineers who program and code. In particular, storytelling’s chronemic encounter offsets the estrangement of the recorded event and maintains every participants’ ability to shape identity and navigate space-time relationships. Drawing storytelling demonstrates a concomitant capacity to mediate changes in tradition and spiritual systems. While the digital portals of the global arena remain open and luring, the force enabled by the chiasmic entwinement of speech, gesture and sand continues to map the frontier of First Nations identity formation and reformation.
47

Padmasari, Ayung Candra, Yona Wahyuningsih and Deti Rostika. « Design of Digital Map based on Hand Gesture as a Preservation of West Java History Sites for Elementary School ». Letters in Information Technology Education (LITE) 2, no. 2 (9 November 2019): 23. http://dx.doi.org/10.17977/um010v2i22019p023.

Abstract:
The social science study of historical content does not currently seem to keep pace with the development of Industry 4.0, because of rote learning styles and text-based, teacher-centered teaching methods without technology-aided modification. Given these problems, design innovations and new learning media are needed; one option is a hand-gesture-based map equipped with a Leap Motion controller. This study aims to design a digital hand-gesture-based map as a means of preserving West Java historical sites for elementary school children. The method used is Design and Development (D&D). The results are the design of an interactive map as a teaching media innovation and the tool's configuration response, with tap gesture response results of 13 ms / FPS 33 ms / 60 FPS.
48

Kalra, Siddharth, Sarika Jain and Amit Agarwal. « Gesture Controlled Tactile Augmented Reality Interface for the Visually Impaired ». Journal of Information Technology Research 14, no. 2 (April 2021): 125–51. http://dx.doi.org/10.4018/jitr.2021040107.

Abstract:
This paper proposes to create an augmented reality interface for the visually impaired, enabling a way of haptically interacting with the computer system by creating a virtual workstation, facilitating a natural and intuitive way to accomplish a multitude of computer-based tasks (such as emailing, word processing, storing and retrieving files from the computer, making a phone call, searching the web, etc.). The proposed system utilizes a combination of a haptic glove device, a gesture-based control system, and an augmented reality computer interface which creates an immersive interaction between the blind user and the computer. The gestures are recognized, and the user is provided with audio and vibratory haptic feedbacks. This user interface allows the user to actually “touch, feel, and physically interact” with digital controls and virtual real estate of a computer system. A test of applicability was conducted which showcased promising positive results.
49

Zhu, Penghua, Jie Zhu, Xiaofei Xue, and Yongtao Song. "Stretchable Filler/Solid Rubber Piezoresistive Thread Sensor for Gesture Recognition." Micromachines 13, no. 1 (22 December 2021): 7. http://dx.doi.org/10.3390/mi13010007.

Full text
Abstract:
Recently, stretchable piezoresistive composites have become a focus in biomechanical sensing and human posture recognition because they can be attached directly and conformally to bodies and clothes. Here, we present a stretchable piezoresistive thread sensor (SPTS) based on an Ag-plated glass microsphere (Ag@GM)/solid rubber (SR) composite, prepared using a new shear-dispersion and extrusion-vulcanization technology. The SPTS has high gauge factors (7.8–11.1) over a large stretching range (0–50%) and an approximately linear relationship between the relative change in resistance and the applied strain. The SPTS also exhibits hysteresis as low as 2.6% and great stability over 1000 stretching/releasing cycles at 50% strain. Given this excellent mechanical strain-driven characteristic, the SPTS was used to monitor posture and facial movements. Moreover, the novel SPTS can be integrated with software and hardware information modules to realize an intelligent gesture recognition system that promptly and accurately reflects the electrical signals produced by digital gestures, which are successfully translated into text and voice. This work demonstrates great progress in stretchable piezoresistive sensors and provides a new strategy for a real-time, effective-communication intelligent gesture recognition system.
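The gauge factor the abstract reports is the standard piezoresistive figure of merit, GF = (ΔR/R₀)/ε, i.e. the relative resistance change per unit strain. A minimal sketch with illustrative numbers (the resistance values are assumptions, not data from the paper):

```python
def gauge_factor(r0, r, strain):
    """Gauge factor GF = (dR/R0) / strain for a piezoresistive strain sensor."""
    return ((r - r0) / r0) / strain

# Illustrative: a 30% resistance rise at 3% strain gives GF = 10,
# within the 7.8-11.1 range reported for the SPTS.
print(gauge_factor(100.0, 130.0, 0.03))
```

A higher GF means a larger, easier-to-read resistance swing for the same finger or facial movement, which is why the 7.8–11.1 range combined with the 0–50% stretch window matters for gesture recognition.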
50

Zhang, Yajun, Yan Yang, Zijian Li, Zhixiong Yang, Xu Liu, and Bo Yuan. "RF-Alphabet: Cross Domain Alphabet Recognition System Based on RFID Differential Threshold Similarity Calculation Model." Sensors 23, no. 2 (13 January 2023): 920. http://dx.doi.org/10.3390/s23020920.

Full text
Abstract:
Gesture recognition can help people with a speech impairment to communicate and promotes the development of Human-Computer Interaction (HCI) technology. With the development of wireless technology, passive gesture recognition based on RFID has become a research hotspot. In this paper, we propose a low-cost, non-invasive, and scalable gesture recognition technology and implement RF-alphabet, a gesture recognition system for the complex, fine-grained, domain-independent 26 English letters. RF-alphabet has three major advantages. First, it achieves complete capture of complex, fine-grained gesture data through a dual-tag, dual-antenna layout. Second, to overcome the large training sets and long training times of traditional deep learning, we design a differential threshold similarity calculation prediction model that extracts digital signal features for real-time feature analysis of gesture signals. Finally, RF-alphabet resolves letters with confusingly similar signal characteristics by comparing the phase values of their feature points. By performing feature analysis on similar signals, RF-alphabet achieves average accuracies of 90.28% and 89.7% in different domains for new users and new environments, respectively. Its real-time performance, robustness, and scalability are proven.
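The abstract does not specify the differential threshold similarity model in detail; one plausible template-matching reading is to score how many phase samples of a gesture trace fall within a fixed threshold of each letter's reference trace and pick the best match. All function names, thresholds, and trace values below are hypothetical illustrations:

```python
def threshold_similarity(sig_a, sig_b, threshold=0.1):
    """Fraction of aligned sample pairs whose absolute difference is within threshold."""
    assert len(sig_a) == len(sig_b)
    within = sum(1 for a, b in zip(sig_a, sig_b) if abs(a - b) <= threshold)
    return within / len(sig_a)

def classify(phase_trace, templates, threshold=0.1):
    """Return the letter whose template trace is most similar to the input trace."""
    return max(templates, key=lambda letter: threshold_similarity(phase_trace, templates[letter], threshold))

# Hypothetical two-letter template set with toy phase values.
templates = {"A": [0.1, 0.2, 0.3], "B": [0.9, 0.8, 0.7]}
print(classify([0.12, 0.19, 0.31], templates))
```

Compared with a deep network, such a model needs only one reference trace per letter and no training epochs, which matches the paper's motivation of avoiding large training sets and long training times.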