Journal articles on the topic 'Gesture – Computer simulation'

Consult the top 50 journal articles for your research on the topic 'Gesture – Computer simulation.'

1

Wu, Yingnian, Guojun Yang, and Lin Zhang. "Mouse simulation in human–machine interface using kinect and 3 gear systems." International Journal of Modeling, Simulation, and Scientific Computing 05, no. 04 (September 29, 2014): 1450015. http://dx.doi.org/10.1142/s1793962314500159.

Full text
Abstract:
We never stop finding better ways to communicate with machines. To interact with computers, we have tried several approaches, from punched tape and tape readers to QWERTY keyboards and command lines, from graphical user interfaces and the mouse to multi-touch screens. The ways we communicate with computers and devices are becoming more direct and easier. In this paper, we present a gesture-based mouse simulation for the human–computer interface built on 3 Gear Systems using two Kinect sensors. The Kinect sensor is an ideal device for dynamic gesture tracking and pose recognition. The goal is for the 3 Gear Systems setup to work as a mouse; more specifically, gestures are used to perform click, double-click, and scroll actions. We use a coordinate-converting matrix and a Kalman filter to reduce the jitter caused by tracking errors, giving the interface a better user experience. Finally, the future of the human–computer interface is discussed.
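
As a rough illustration of the jitter-reduction idea (not the authors' implementation), the sketch below applies a constant-velocity Kalman filter to a stream of noisy 2D cursor positions; the process and measurement noise values are assumed placeholders.

    import numpy as np

    def make_kalman(dt=1/30, q=1e-3, r=4.0):
        # State: [x, y, vx, vy]; constant-velocity motion model.
        F = np.array([[1, 0, dt, 0],
                      [0, 1, 0, dt],
                      [0, 0, 1, 0],
                      [0, 0, 0, 1]], dtype=float)
        H = np.array([[1, 0, 0, 0],
                      [0, 1, 0, 0]], dtype=float)   # only the position is measured
        Q = q * np.eye(4)                            # process noise (assumed)
        R = r * np.eye(2)                            # measurement noise (assumed)
        return F, H, Q, R, np.zeros(4), np.eye(4)

    def kalman_step(z, F, H, Q, R, x, P):
        # Predict, then update with the measured cursor position z = [px, py].
        x = F @ x
        P = F @ P @ F.T + Q
        y = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(4) - K @ H) @ P
        return x, P

    F, H, Q, R, x, P = make_kalman()
    for z in [np.array([100.0, 200.0]), np.array([103.0, 198.0]), np.array([101.0, 203.0])]:
        x, P = kalman_step(z, F, H, Q, R, x, P)
        print("smoothed cursor:", x[:2])
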
APA, Harvard, Vancouver, ISO, and other styles
2

Shahan Yamin Siddiqui, Ghanwa Batool, Muhammad Sohail Irshad, Hafiz Muhammad Usama, Muhammad Tariq Siddique, Bilal Shoaib, Sajid Farooq, and Arfa Hassan. "Time Complexity of Color Camera Depth Map Hand Edge Closing Recognition Algorithm." Lahore Garrison University Research Journal of Computer Science and Information Technology 4, no. 3 (September 25, 2020): 1–22. http://dx.doi.org/10.54692/lgurjcsit.2020.040395.

Full text
Abstract:
The objective of this paper is to calculate the time complexity of the colored camera depth map hand edge closing algorithm used in hand gesture recognition. The technique performs hand gesture recognition for human-computer interaction using a color camera and a depth map, and its time complexity is analyzed using 2D minima methods, brute force, and plane sweep. Human-computer interaction is an essential component of most people's daily lives. The goal of gesture recognition research is to establish a system that can classify specific human gestures and use them to convey information for device control. These methods use different input types and different classifiers and techniques to identify hand gestures. This paper presents the "color camera depth map hand edge recognition" algorithm, its time complexity, and its simulation in MATLAB.
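
To make the complexity comparison concrete, here is a small illustrative sketch (not taken from the paper) of finding the minimal, non-dominated points of a 2D point set by brute force in O(n^2) versus a sort-and-sweep pass in O(n log n).

    def minima_brute_force(points):
        # A point p is minimal if no other point q has q.x <= p.x and q.y <= p.y.
        result = []
        for p in points:
            if not any(q != p and q[0] <= p[0] and q[1] <= p[1] for q in points):
                result.append(p)
        return result            # O(n^2) comparisons

    def minima_plane_sweep(points):
        # Sort by x (then y) and sweep, keeping the smallest y seen so far: O(n log n).
        best_y = float("inf")
        result = []
        for p in sorted(points):
            if p[1] < best_y:
                result.append(p)
                best_y = p[1]
        return result

    pts = [(3, 1), (1, 4), (2, 2), (4, 0), (1, 5)]
    print(minima_brute_force(pts))   # same set of minimal points,
    print(minima_plane_sweep(pts))   # produced in a single sweep
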
APA, Harvard, Vancouver, ISO, and other styles
3

Ding, Xueyan, and Yi Zhang. "Human-Computer Interaction System Application in Hotel Management Teaching Practice." Mobile Information Systems 2022 (July 12, 2022): 1–8. http://dx.doi.org/10.1155/2022/6215736.

Full text
Abstract:
With the increasing demand for the performance and security of communication networks, the fifth-generation mobile technology has developed rapidly and has attracted unprecedented attention. This article analyzes the current research status of visual gesture recognition and human-computer interaction based on the Internet of Things. In view of the current shortcomings of gesture recognition, the article proposes a solution that uses Kinect somatosensory sensors to recognize gestures and explore human-computer interaction. It then analyzes how the Kinect somatosensory sensor works to obtain depth images, studies how to obtain gesture positions and joint points from the depth information, and combines the depth information with a skin color model to create a three-dimensional image of the simulated gesture. With the rapid development of China's tourism industry, China's hotel industry has entered an era in which domestic and foreign competitors coexist. The development of hotels urgently needs high-quality hotel professionals who have received professional training and are familiar with hotel management. In hotel management teaching, human-computer interactive learning can effectively improve learning interest. In this paper, the structure of a human-computer interaction system based on gesture recognition is established, which can effectively improve recognition accuracy and is of great significance for the hotel management teaching system.
APA, Harvard, Vancouver, ISO, and other styles
4

Aneela, Banda. "Implementing a Real Time Virtual Mouse System and Fingertip Detection based on Artificial Intelligence." International Journal for Research in Applied Science and Engineering Technology 9, no. VI (June 25, 2021): 2265–70. http://dx.doi.org/10.22214/ijraset.2021.35485.

Full text
Abstract:
Artificial intelligence refers to the simulation of human intelligence in computers that have been trained to think and act like humans. It is a broad branch of computer science devoted to the creation of intelligent machines capable of performing activities that would normally require human intelligence. Although artificial intelligence is a heterogeneous science with several techniques, developments in machine learning and deep learning are driving a paradigm shift in practically every industry. Human-computer interaction requires the identification of hand gestures using vision-based technology. The keyboard and mouse have grown more significant in human-computer interaction in recent decades, followed by the progression from buttons to touch technology and a variety of other gesture control modalities. A normal camera can be used to build a hand-tracking-based virtual mouse application. We combine camera and computer vision techniques, such as fingertip identification and gesture recognition, in the proposed system to handle mouse operations (volume control, right click, left click), and show that it can perform everything that existing mouse devices can.
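
As a hedged sketch of the kind of pipeline the abstract describes (the library choices, MediaPipe and PyAutoGUI, are assumptions rather than the authors' stack), an index fingertip can be tracked from a webcam and mapped to the system cursor roughly as follows:

    import cv2
    import mediapipe as mp
    import pyautogui

    screen_w, screen_h = pyautogui.size()
    hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.7)
    cap = cv2.VideoCapture(0)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        result = hands.process(rgb)
        if result.multi_hand_landmarks:
            lm = result.multi_hand_landmarks[0].landmark
            tip = lm[8]                                  # index fingertip (normalized 0..1)
            pyautogui.moveTo(tip.x * screen_w, tip.y * screen_h)
            thumb = lm[4]                                # simple "pinch to click" heuristic
            if abs(tip.x - thumb.x) < 0.03 and abs(tip.y - thumb.y) < 0.03:
                pyautogui.click()
        cv2.imshow("hand", frame)
        if cv2.waitKey(1) == 27:                         # Esc to quit
            break
    cap.release()
    cv2.destroyAllWindows()
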
APA, Harvard, Vancouver, ISO, and other styles
5

Stančić, Ivo, Josip Musić, Tamara Grujić, Mirela Kundid Vasić, and Mirjana Bonković. "Comparison and Evaluation of Machine Learning-Based Classification of Hand Gestures Captured by Inertial Sensors." Computation 10, no. 9 (September 14, 2022): 159. http://dx.doi.org/10.3390/computation10090159.

Full text
Abstract:
Gesture recognition is a topic in computer science and language technology that aims to interpret human gestures with computer programs and many different algorithms. It can be seen as the way computers can understand human body language. Today, the main interaction tools between computers and humans are still the keyboard and mouse. Gesture recognition can be used as a tool for communication with the machine and interaction without any mechanical device such as a keyboard or mouse. In this paper, we present the results of a comparison of eight different machine learning (ML) classifiers in the task of human hand gesture recognition and classification, to explore how to efficiently implement one or more of the tested ML algorithms on an 8-bit AVR microcontroller for on-line human gesture recognition, with the intention of using gestures to control a mobile robot. The 8-bit AVR microcontrollers are still widely used in industry, but due to their lack of computational power and limited memory, it is a challenging task to efficiently implement ML algorithms on them for on-line classification. Gestures were recorded using inertial sensors (gyroscopes and accelerometers) placed at the wrist and index finger. One thousand eight hundred (1800) hand gestures were recorded and labelled. Six important features were defined for the identification of nine different hand gestures using eight different machine learning classifiers: Decision Tree (DT), Random Forests (RF), Logistic Regression (LR), Linear Discriminant Analysis (LDA), Support Vector Machine (SVM) with linear kernel, Naïve Bayes classifier (NB), K-Nearest Neighbours (KNN), and Stochastic Gradient Descent (SGD). All tested algorithms were ranked according to Precision, Recall, and F1-score (abb.: P-R-F1). The best algorithms were SVM (P-R-F1: 0.9865, 0.9861, and 0.9863) and RF (P-R-F1: 0.9863, 0.9861, and 0.9862), but their main disadvantage is their unsuitability for on-line implementation on 8-bit AVR microcontrollers, as proven in the paper. The next best algorithms had only slightly poorer performance than SVM and RF: KNN (P-R-F1: 0.9835, 0.9833, and 0.9834) and LR (P-R-F1: 0.9810, 0.9810, and 0.9810). Regarding the implementation on 8-bit microcontrollers, KNN has proven to be inadequate, like SVM and RF. However, the analysis for LR has shown that this classifier could be efficiently implemented on the targeted microcontrollers. Given its high F1-score (comparable to SVM, RF, and KNN), this leads to the conclusion that LR is the most suitable of the tested classifiers for on-line applications in resource-constrained environments, such as embedded devices based on 8-bit AVR microcontrollers, due to its lower computational complexity in comparison with the other tested algorithms.
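
The ranking procedure described above can be reproduced in outline with scikit-learn; this is a sketch with synthetic data standing in for the 1800 recorded gestures, and the feature extraction step is omitted.

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import precision_recall_fscore_support
    from sklearn.svm import SVC
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import KNeighborsClassifier

    # Synthetic stand-in: 1800 samples, 6 features, 9 gesture classes.
    X, y = make_classification(n_samples=1800, n_features=6, n_informative=6,
                               n_redundant=0, n_classes=9, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    classifiers = {
        "SVM (linear)": SVC(kernel="linear"),
        "Random Forest": RandomForestClassifier(),
        "Logistic Regression": LogisticRegression(max_iter=1000),
        "KNN": KNeighborsClassifier(),
    }
    for name, clf in classifiers.items():
        clf.fit(X_train, y_train)
        p, r, f1, _ = precision_recall_fscore_support(y_test, clf.predict(X_test),
                                                      average="macro")
        print(f"{name}: P={p:.4f} R={r:.4f} F1={f1:.4f}")
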
APA, Harvard, Vancouver, ISO, and other styles
6

Khaled, Hazem, Samir G. Sayed, El Sayed M. Saad, and Hossam Ali. "Hand Gesture Recognition Using Modified 1$ and Background Subtraction Algorithms." Mathematical Problems in Engineering 2015 (2015): 1–8. http://dx.doi.org/10.1155/2015/741068.

Full text
Abstract:
Computers and computerized machines have tremendously penetrated all aspects of our lives. This raises the importance of the Human-Computer Interface (HCI). Common HCI techniques still rely on simple devices such as keyboards, mice, and joysticks, which are not enough to keep pace with the latest technology. Hand gestures have become one of the most attractive alternatives to existing traditional HCI techniques. This paper proposes a new hand gesture detection system for human-computer interaction using real-time video streaming. This is achieved by removing the background using an average-background algorithm and using the 1$ algorithm for hand template matching. Every hand gesture is then translated into commands that can be used to control robot movements. The simulation results show that the proposed algorithm achieves a high detection rate and a short recognition time under different lighting changes, scales, rotations, and backgrounds.
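
The average-background step mentioned above can be illustrated with OpenCV's running average; this is a minimal sketch rather than the paper's code, and the learning rate and threshold values are assumptions.

    import cv2

    cap = cv2.VideoCapture(0)
    background = None

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (21, 21), 0)
        if background is None:
            background = gray.astype("float")
            continue
        # Update the running average of the background.
        cv2.accumulateWeighted(gray, background, 0.05)
        # Foreground (the hand) = difference between the frame and the averaged background.
        diff = cv2.absdiff(gray, cv2.convertScaleAbs(background))
        _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        cv2.imshow("hand mask", mask)
        if cv2.waitKey(1) == 27:
            break
    cap.release()
    cv2.destroyAllWindows()
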
APA, Harvard, Vancouver, ISO, and other styles
7

Jagodziński, Piotr, and Robert Wolski. "THE EXAMINATION OF THE IMPACT ON STUDENTS’ USE OF GESTURES WHILE WORKING IN A VIRTUAL CHEMICAL LABORATORY FOR THEIR COGNITIVE ABILITIES." Problems of Education in the 21st Century 61, no. 1 (October 5, 2014): 46–57. http://dx.doi.org/10.33225/pec/14.61.46.

Full text
Abstract:
One of the cognitive theories is the embodied cognition theory. According to this theory, it is important to use appropriate gestures in the process of assimilating new information and acquiring new skills. Advances in information and communication technologies have enabled the development of interfaces that allow the user to control computer programs and electronic devices by using gestures. These Natural User Interfaces (NUI) were used in teaching Chemistry in middle school and secondary school. A virtual chemical laboratory was developed in which students can simulate the performance of laboratory activities similar to those performed in a real lab. The Kinect sensor was used to detect and analyze hand movement. The research established the educational effectiveness of the virtual laboratory, which is an example of a gesture-based system (GBS). It was examined how these teaching methods were used and to what extent they improved students' understanding. The results indicate that the use of the gesture-based system in teaching makes it more attractive and increases the quality of Chemistry teaching. Key words: chemistry experiments, educational simulation, gesture based system, embodied cognition theory.
APA, Harvard, Vancouver, ISO, and other styles
8

Kara, Tolgay, and Ahmad Soliman Masri. "Modeling and Analysis of a Visual Feedback System to Support Efficient Object Grasping of an EMG-Controlled Prosthetic Hand." Current Directions in Biomedical Engineering 5, no. 1 (September 1, 2019): 207–10. http://dx.doi.org/10.1515/cdbme-2019-0053.

Full text
Abstract:
Millions of people around the world have lost their upper limbs, mainly due to accidents and wars. Recently in the Middle East, the demand for prosthetic limbs has increased dramatically due to ongoing wars in the region. Commercially available prosthetic limbs are expensive, while the most economical method available for controlling prosthetic limbs is Electromyography (EMG). Researchers on EMG-controlled prosthetic limbs are facing several challenges, which include efficiency problems in terms of functionality, especially in prosthetic hands. A major issue that needs to be solved is the fact that currently available low-cost EMG-controlled prosthetic hands cannot enable the user to grasp various types of objects in various shapes, and cannot provide efficient use of the object by deciding the necessary hand gesture. In this paper, a computer vision-based mechanism is proposed with the purpose of detecting and recognizing objects and applying the optimal hand gesture through visual feedback. The objects are classified into groups, and the optimal hand gesture to grasp and use the targeted object most efficiently for the user is implemented. A simulation model of the human hand kinematics is developed for simulation tests to reveal the efficacy of the proposed method. Eighty different types of objects are detected, recognized, and classified for simulation tests, which can be realized by using two electrodes supplying the input to perform the action. Simulation results reveal the performance of the proposed EMG-controlled prosthetic hand in maintaining optimal hand gestures in a computer environment. Results are promising to help disabled people handle and use objects more efficiently without higher costs.
APA, Harvard, Vancouver, ISO, and other styles
9

Yan, Xu, and Wang Wei Lan. "The Research of Thangka Buddha Gesture Detection Algorithm." JOURNAL OF ADVANCES IN MATHEMATICS 11, no. 4 (September 16, 2015): 5089–93. http://dx.doi.org/10.24297/jam.v11i4.1258.

Full text
Abstract:
This paper describes the meaning of segmenting the Thangka Buddha gesture and the steps involved, and then chooses the Canny operator for edge detection of the Thangka Buddha's gesture. Through a simple form of human-computer interaction, the Buddha's gesture is cut out from the Thangka Buddha image, and simulation experiments are then carried out in MATLAB. Finally, through analysis and comparison, the advantages and disadvantages of the algorithm are discussed.
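
For reference (not the paper's code), the Canny edge detection step can be reproduced in a few lines with OpenCV; the file name and hysteresis thresholds are placeholders.

    import cv2

    # Load a Thangka image (placeholder path), convert to grayscale, and detect edges.
    img = cv2.imread("thangka_buddha.jpg", cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(img, 100, 200)          # lower/upper thresholds (assumed)
    cv2.imwrite("buddha_gesture_edges.png", edges)
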
APA, Harvard, Vancouver, ISO, and other styles
10

Haber, Jeffrey, and Joon Chung. "Assessment of UAV operator workload in a reconfigurable multi-touch ground control station environment." Journal of Unmanned Vehicle Systems 4, no. 3 (September 1, 2016): 203–16. http://dx.doi.org/10.1139/juvs-2015-0039.

Full text
Abstract:
Multi-touch computer inputs allow users to interact with a virtual environment through the use of gesture commands on a monitor instead of a mouse and keyboard. This style of input is easy for the human mind to adapt to because gestures directly reflect how one interacts with the natural environment. This paper presents and assesses a personal-computer-based unmanned aerial vehicle ground control station that utilizes multi-touch gesture inputs and system reconfigurability to enhance operator performance. The system was developed at Ryerson University’s Mixed-Reality Immersive Motion Simulation Laboratory using commercial-off-the-shelf Presagis software. The ground control station was then evaluated using NASA’s task load index to determine if the inclusion of multi-touch gestures and reconfigurability provided an improvement in operator workload over the more traditional style of mouse and keyboard inputs. To conduct this assessment, participants were tasked with flying a simulated aircraft through a specified number of waypoints, and had to utilize a payload controller within a predetermined area. The task load index results from these flight tests have initially shown that the developed touch-capable ground control station improved operator workload while reducing the impact of all six related human factors.
APA, Harvard, Vancouver, ISO, and other styles
11

Hoa Tat Thang. "Computer control in human-machine interaction systems by hand movements." Journal of Military Science and Technology, CSCE5 (December 15, 2021): 42–48. http://dx.doi.org/10.54939/1859-1043.j.mst.csce5.2021.42-48.

Full text
Abstract:
Computers have become popular in recent years, and the forms of human-computer interaction are increasingly diverse. In many cases, the computer is controlled not only through the mouse and keyboard; humans must also control the computer through body language and gestures. For some people with physical disabilities, controlling the computer through hand movements is essential to help them interact with it. The field of simulation also needs these interactive applications. This paper studies a solution for building a hand tracking and gesture recognition system that allows cursor movement and the corresponding mouse and keyboard actions. Through implementation and evaluation, the research team confirms that the system works stably and accurately and can control the computer in place of a conventional mouse and keyboard.
APA, Harvard, Vancouver, ISO, and other styles
12

Jarsaillon, Pierre J., Naohisa Sakamoto, and Akira Kageyama. "Flexible visualization framework for head-mounted display with gesture interaction interface." International Journal of Modeling, Simulation, and Scientific Computing 09, no. 03 (May 24, 2018): 1840002. http://dx.doi.org/10.1142/s1793962318400020.

Full text
Abstract:
As new visualization systems in the field of simulation offer users more insight into their simulations, immersive systems are becoming part of the available visualization techniques. With recent advances in head-mounted displays (HMDs) and the popularity of motion sensors, humans and computers are becoming more interactive. This study aims to evaluate the potential of such systems as visualization tools through the development of a new flexible framework for visualization within a virtual reality (VR) environment, using an Oculus Rift and a Leap Motion. Two approaches are then compared: an approach focused on high-quality 3D object rendering within the virtual scene, and a user-experience-oriented system with an intuitive interface. To assess the quality of the interface and its relevance for the user, different types of gestures are implemented and tested. Based on a user experiment evaluating the developed system as a visualization tool, HMDs paired with a motion sensor to form a gesture-controlled interface appear to be a promising medium, despite various development constraints arising from current technology limitations.
APA, Harvard, Vancouver, ISO, and other styles
13

Alvarez-Lopez, Fernando, Marcelo Fabián Maina, and Francesc Saigí-Rubió. "Use of Commercial Off-The-Shelf Devices for the Detection of Manual Gestures in Surgery: Systematic Literature Review." Journal of Medical Internet Research 21, no. 5 (May 3, 2019): e11925. http://dx.doi.org/10.2196/11925.

Full text
Abstract:
Background The increasingly pervasive presence of technology in the operating room raises the need to study the interaction between the surgeon and computer system. A new generation of tools known as commercial off-the-shelf (COTS) devices enabling touchless gesture–based human-computer interaction is currently being explored as a solution in surgical environments. Objective The aim of this systematic literature review was to provide an account of the state of the art of COTS devices in the detection of manual gestures in surgery and to identify their use as a simulation tool for motor skills teaching in minimally invasive surgery (MIS). Methods For this systematic literature review, a search was conducted in PubMed, Excerpta Medica dataBASE, ScienceDirect, Espacenet, OpenGrey, and the Institute of Electrical and Electronics Engineers databases. Articles published between January 2000 and December 2017 on the use of COTS devices for gesture detection in surgical environments and in simulation for surgical skills learning in MIS were evaluated and selected. Results A total of 3180 studies were identified, 86 of which met the search selection criteria. Microsoft Kinect (Microsoft Corp) and the Leap Motion Controller (Leap Motion Inc) were the most widely used COTS devices. The most common intervention was image manipulation in surgical and interventional radiology environments, followed by interaction with virtual reality environments for educational or interventional purposes. The possibility of using this technology to develop portable low-cost simulators for skills learning in MIS was also examined. As most of the articles identified in this systematic review were proof-of-concept or prototype user testing and feasibility testing studies, we concluded that the field was still in the exploratory phase in areas requiring touchless manipulation within environments and settings that must adhere to asepsis and antisepsis protocols, such as angiography suites and operating rooms. Conclusions COTS devices applied to hand and instrument gesture–based interfaces in the field of simulation for skills learning and training in MIS could open up a promising field to achieve ubiquitous training and presurgical warm up.
APA, Harvard, Vancouver, ISO, and other styles
14

Kryuchkov, B. I., V. M. Usov, V. A. Chertopolokhov, A. L. Ronzhin, and A. A. Karpov. "SIMULATION OF THE «COSMONAUT-ROBOT» SYSTEM INTERACTION ON THE LUNAR SURFACE BASED ON METHODS OF MACHINE VISION AND COMPUTER GRAPHICS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W4 (May 10, 2017): 129–33. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w4-129-2017.

Full text
Abstract:
Extravehicular activity (EVA) on the lunar surface, necessary for the future exploration of the Moon, involves extensive use of robots. One of the factors of safe EVA is a proper interaction between cosmonauts and robots in extreme environments. This requires a simple and natural man-machine interface, e.g. multimodal contactless interface based on recognition of gestures and cosmonaut’s poses. When travelling in the "Follow Me" mode (master/slave), a robot uses onboard tools for tracking cosmonaut’s position and movements, and on the basis of these data builds its itinerary. The interaction in the system "cosmonaut-robot" on the lunar surface is significantly different from that on the Earth surface. For example, a man, dressed in a space suit, has limited fine motor skills. In addition, EVA is quite tiring for the cosmonauts, and a tired human being less accurately performs movements and often makes mistakes. All this leads to new requirements for the convenient use of the man-machine interface designed for EVA. To improve the reliability and stability of human-robot communication it is necessary to provide options for duplicating commands at the task stages and gesture recognition. New tools and techniques for space missions must be examined at the first stage of works in laboratory conditions, and then in field tests (proof tests at the site of application). The article analyzes the methods of detection and tracking of movements and gesture recognition of the cosmonaut during EVA, which can be used for the design of human-machine interface. A scenario for testing these methods by constructing a virtual environment simulating EVA on the lunar surface is proposed. Simulation involves environment visualization and modeling of the use of the "vision" of the robot to track a moving cosmonaut dressed in a spacesuit.
APA, Harvard, Vancouver, ISO, and other styles
15

Tiwari, Shamik. "A Blur Classification Approach Using Deep Convolution Neural Network." International Journal of Information System Modeling and Design 11, no. 1 (January 2020): 93–111. http://dx.doi.org/10.4018/ijismd.2020010106.

Full text
Abstract:
Computer vision-based gesture identification is designed to recognize human actions with the help of images. During the process of gesture image acquisition, images suffer various degradations. The method of recovering these degraded images is called restoration. In the case of blind restoration of such a degraded image, where blur information is unavailable, it is essential to determine the exact blur type. This article presents a convolution neural network model for blur classification which categorizes a blur found in a hand gesture image into one of four blur categories: motion, defocus, Gaussian, and box blur. The simulation results demonstrate the improved preciseness of the CNN model when compared to the MLP model.
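
A minimal Keras sketch of a four-class blur classifier of the kind described; the layer sizes and input resolution are assumptions, not the paper's architecture.

    import tensorflow as tf
    from tensorflow.keras import layers, models

    NUM_CLASSES = 4   # motion, defocus, Gaussian, box blur

    model = models.Sequential([
        layers.Input(shape=(64, 64, 1)),          # grayscale gesture patches (assumed size)
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()
    # model.fit(train_images, train_labels, epochs=10, validation_data=(val_images, val_labels))
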
APA, Harvard, Vancouver, ISO, and other styles
16

Hikawa, Hiroomi, Yuta Ichikawa, Hidetaka Ito, and Yutaka Maeda. "Dynamic Gesture Recognition System with Gesture Spotting Based on Self-Organizing Maps." Applied Sciences 11, no. 4 (February 22, 2021): 1933. http://dx.doi.org/10.3390/app11041933.

Full text
Abstract:
In this paper, a real-time dynamic hand gesture recognition system with a gesture spotting function is proposed. In the proposed system, input video frames are converted to feature vectors, and they are used to form a posture sequence vector that represents the input gesture. Then, gesture identification and gesture spotting are carried out in the self-organizing map (SOM)-Hebb classifier. The gesture spotting function detects the end of the gesture by using the vector distance between the posture sequence vector and the winner neuron's weight vector. The proposed gesture recognition method was tested by simulation and a real-time gesture recognition experiment. Results revealed that the system could recognize nine types of gestures with an accuracy of 96.6%, and it successfully outputted the recognition result at the end of the gesture using the spotting result.
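
The spotting rule described above (distance between the posture sequence vector and the winner neuron's weight vector) can be sketched as follows; the SOM weights and the threshold are toy placeholders, not the trained network from the paper.

    import numpy as np

    def spot_gesture(posture_sequence_vec, som_weights, threshold=0.5):
        # som_weights: (num_neurons, dim) weight vectors of a trained SOM.
        dists = np.linalg.norm(som_weights - posture_sequence_vec, axis=1)
        winner = int(np.argmin(dists))
        # Accept the winner only if its weight vector is close enough, i.e. the
        # gesture has ended and matches a learned pattern.
        if dists[winner] < threshold:
            return winner          # recognized gesture id
        return None                # still within a gesture / no match yet

    rng = np.random.default_rng(0)
    som_weights = rng.random((25, 16))           # toy 5x5 SOM over 16-dim posture sequences
    vec = som_weights[7] + 0.01 * rng.standard_normal(16)
    print(spot_gesture(vec, som_weights))        # -> 7 in this toy example
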
APA, Harvard, Vancouver, ISO, and other styles
17

He, Na Na, Zhi Quan Feng, Zhong Zhu Huang, and Xue Wen Yang. "Visual Attention Distribution and its Application in the Gesture Interaction System." Applied Mechanics and Materials 713-715 (January 2015): 2185–88. http://dx.doi.org/10.4028/www.scientific.net/amm.713-715.2185.

Full text
Abstract:
To make the computer simulation of human visual attention behavior more realistic, and starting from an analysis of the operator's cognitive model, a gesture tracking algorithm is put forward based on a distribution model of visual attention. First, by analyzing changes in the operator's eye gaze, a visual attention model was built. Second, the basic characteristics of the visual attention model were studied. Finally, a three-Gaussian formula is used to describe the model. Experimental results show that the algorithm can effectively improve the speed and tracking accuracy of gesture interaction.
APA, Harvard, Vancouver, ISO, and other styles
18

Feng, Yehua. "Research on Human Skeleton Teaching System Based on Leap Motion." Academic Journal of Science and Technology 1, no. 1 (February 24, 2022): 1–4. http://dx.doi.org/10.54097/ajst.v1i1.233.

Full text
Abstract:
With the development of virtual reality technology, virtual teaching technology continues to improve, and virtual reality is more and more widely used in medicine. Virtual reality technology can improve teaching efficiency by simulating real scenes with high realism. Learning human anatomy is essential for medical professionals; however, limited teaching resources create obstacles in the teaching process. Therefore, this paper presents a virtual teaching system for the human skeleton based on Leap Motion: the human skeleton is simulated through computer modeling, and human-computer interaction is implemented with a gesture recognition algorithm to realize simulated teaching of the human skeleton. Finally, experiments show that the virtual assistant teaching system can clearly improve the learning efficiency of learners.
APA, Harvard, Vancouver, ISO, and other styles
19

Wu, Bi-Xiao, Chen-Guang Yang, and Jun-Pei Zhong. "Research on Transfer Learning of Vision-based Gesture Recognition." International Journal of Automation and Computing 18, no. 3 (March 8, 2021): 422–31. http://dx.doi.org/10.1007/s11633-020-1273-9.

Full text
Abstract:
Gesture recognition has been widely used for human-robot interaction. At present, a problem in gesture recognition is that the researchers did not use the learned knowledge in existing domains to discover and recognize gestures in new domains. For each new domain, it is required to collect and annotate a large amount of data, and the training of the algorithm does not benefit from prior knowledge, leading to redundant calculation workload and excessive time investment. To address this problem, the paper proposes a method that could transfer gesture data in different domains. We use a red-green-blue (RGB) Camera to collect images of the gestures, and use Leap Motion to collect the coordinates of 21 joint points of the human hand. Then, we extract a set of novel feature descriptors from two different distributions of data for the study of transfer learning. This paper compares the effects of three classification algorithms, i.e., support vector machine (SVM), broad learning system (BLS) and deep learning (DL). We also compare learning performances with and without using the joint distribution adaptation (JDA) algorithm. The experimental results show that the proposed method could effectively solve the transfer problem between RGB Camera and Leap Motion. In addition, we found that when using DL to classify the data, excessive training on the source domain may reduce the accuracy of recognition in the target domain.
APA, Harvard, Vancouver, ISO, and other styles
20

Angelidis, Alexis, Geoff Wyvill, and Marie-Paule Cani. "Sweepers: Swept deformation defined by gesture." Graphical Models 68, no. 1 (January 2006): 2–14. http://dx.doi.org/10.1016/j.gmod.2005.08.002.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Eisenstein, Jacob. "Sylvie Gibet, Nicolas Courty, & Jean-François Kamp (Eds.) (2006). Gesture in human computer interaction and simulation." Gesture 7, no. 1 (April 18, 2007): 119–27. http://dx.doi.org/10.1075/gest.7.1.08eis.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Nariman, Dahlan, Hiroaki Nishino, Kouichi Utsumiya, and Kazuyoshi Korida. "Interaction Using Multiview Representation in a Large-Scale Virtual Environment." Journal of Robotics and Mechatronics 12, no. 1 (February 20, 2000): 35–39. http://dx.doi.org/10.20965/jrm.2000.p0035.

Full text
Abstract:
The evolution of virtual reality and 3D computer graphics has made digital visual simulation increasingly popular. Realistic visual simulation as in urban planning requires large-scale virtual environments using a large number of objects and space expansion. In such large systems, it is difficult to get information on the overall view and status using conventional representation. Effective representation and interaction must be considered for objects in a large-scale virtual environment. We propose large-scale 3D virtual environment interaction using multiview representation incorporating local and global views. Local view displays the detailed environment and global view provides a simple small-scale model of the environment giving users an overall view and status. Gesture interface enables them to directly interact with the model using natural and intuitive actions.
APA, Harvard, Vancouver, ISO, and other styles
23

Wahid, Md Ferdous, Reza Tafreshi, Mubarak Al-Sowaidi, and Reza Langari. "Subject-independent hand gesture recognition using normalization and machine learning algorithms." Journal of Computational Science 27 (July 2018): 69–76. http://dx.doi.org/10.1016/j.jocs.2018.04.019.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Fernández-Baena, Adso, Raúl Montaño, Marc Antonijoan, Arturo Roversi, David Miralles, and Francesc Alías. "Gesture synthesis adapted to speech emphasis." Speech Communication 57 (February 2014): 331–50. http://dx.doi.org/10.1016/j.specom.2013.06.005.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Colley, Mark, Pascal Jansen, Enrico Rukzio, and Jan Gugenheimer. "SwiVR-Car-Seat." Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 5, no. 4 (December 27, 2021): 1–26. http://dx.doi.org/10.1145/3494968.

Full text
Abstract:
Autonomous vehicles provide new input modalities to improve interaction with in-vehicle information systems. However, due to the road and driving conditions, the user input can be perturbed, resulting in reduced interaction quality. One challenge is assessing the vehicle motion effects on the interaction without an expensive high-fidelity simulator or a real vehicle. This work presents SwiVR-Car-Seat, a low-cost swivel seat to simulate vehicle motion using rotation. In an exploratory user study (N=18), participants sat in a virtual autonomous vehicle and performed interaction tasks using the input modalities touch, gesture, gaze, or speech. Results show that the simulation increased the perceived realism of vehicle motion in virtual reality and the feeling of presence. Task performance was not influenced uniformly across modalities; gesture and gaze were negatively affected while there was little impact on touch and speech. The findings can advise automotive user interface design to mitigate the adverse effects of vehicle motion on the interaction.
APA, Harvard, Vancouver, ISO, and other styles
26

Conway, Rylan T., and Evan W. Sangaline. "A Monte Carlo simulation approach for quantitatively evaluating keyboard layouts for gesture input." International Journal of Human-Computer Studies 99 (March 2017): 37–47. http://dx.doi.org/10.1016/j.ijhcs.2016.10.001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Nichols, Charles. "The vBow: a virtual violin bow controller for mapping gesture to synthesis with haptic feedback." Organised Sound 7, no. 2 (August 2002): 215–20. http://dx.doi.org/10.1017/s135577180200211x.

Full text
Abstract:
The vBow, a virtual violin bow musical controller, has been designed to provide the computer musician with most of the gestural freedom of a bow on a violin string. Four cable and servomotor systems allow for four degrees of freedom, including the lateral motion of a bow stroke across a string, the rotational motion of a bow crossing strings, the vertical motion of a bow approaching and pushing into a string, and the longitudinal motion of a bow travelling along the length of a string. Encoders, attached to the shaft of the servomotors, sense the gesture of the performer, through the rotation of the servomotor shafts, turned by the motion of the cables. The data from each encoder is mapped to a parameter in synthesis software of a bowed-string physical model. The software also sends control voltages to the servomotors, engaging them and the cables attached to them with a haptic feedback simulation of friction, vibration, detents and elasticity.
APA, Harvard, Vancouver, ISO, and other styles
28

Wagner, Petra, Zofia Malisz, and Stefan Kopp. "Gesture and speech in interaction: An overview." Speech Communication 57 (February 2014): 209–32. http://dx.doi.org/10.1016/j.specom.2013.09.008.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Rudd, Grant, Liam Daly, and Filip Cuckov. "Intuitive gesture-based control system with collision avoidance for robotic manipulators." Industrial Robot: the international journal of robotics research and application 47, no. 2 (January 13, 2020): 243–51. http://dx.doi.org/10.1108/ir-03-2019-0045.

Full text
Abstract:
Purpose This paper aims to present an intuitive control system for robotic manipulators that pairs a Leap Motion, a low-cost optical tracking and gesture recognition device, with the ability to record and replay trajectories and operation to create an intuitive method of controlling and programming a robotic manipulator. This system was designed to be extensible and includes modules and methods for obstacle detection and dynamic trajectory modification for obstacle avoidance. Design/methodology/approach The presented control architecture, while portable to any robotic platform, was designed to actuate a six degree-of-freedom robotic manipulator of our own design. From the data collected by the Leap Motion, the manipulator was controlled by mapping the position and orientation of the human hand to values in the joint space of the robot. Additional recording and playback functionality was implemented to allow for the robot to repeat the desired tasks once the task had been demonstrated and recorded. Findings Experiments were conducted on our custom-built robotic manipulator by first using a simulation model to characterize and quantify the robot’s tracking of the Leap Motion generated trajectory. Tests were conducted in the Gazebo simulation software in conjunction with Robot Operating System, where results were collected by recording both the real-time input from the Leap Motion sensor, and the corresponding pose data. The results of these experiments show that the goal of accurate and real-time control of the robot was achieved and validated our methods of transcribing, recording and repeating six degree-of-freedom trajectories from the Leap Motion camera. Originality/value As robots evolve in complexity, the methods of programming them need to evolve to become more intuitive. Humans instinctively teach by demonstrating the task to a given subject, who then observes the various poses and tries to replicate the motions. This work aims to integrate the natural human teaching methods into robotics programming through an intuitive, demonstration-based programming method.
APA, Harvard, Vancouver, ISO, and other styles
30

Liu, Xinyang, Lei Wang, Jie Xiong, Chi Lin, Xinhua Gao, Jiale Li, and Yibo Wang. "UQRCom." Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 6, no. 4 (December 21, 2022): 1–22. http://dx.doi.org/10.1145/3571588.

Full text
Abstract:
While communication in the air has become the norm with the pervasiveness of WiFi and LTE infrastructure, underwater communication still faces a lot of challenges. Even nowadays, the main communication method for divers in underwater environments is hand gesture. There are multiple issues associated with gesture-based communication, including a limited amount of information and ambiguity. On the other hand, traditional RF-based wireless communication technologies, which have achieved great success in the air, can hardly work in underwater environments due to the extremely severe attenuation. In this paper, we propose UQRCom, an underwater wireless communication system designed for divers. We design a UQR code which stems from the QR code and address the unique challenges in underwater environments such as color cast, contrast reduction, and light interference. With both real-world experiments and simulation, we show that the proposed system can achieve robust real-time communication in underwater environments. For UQR codes with a size of 19.8 cm x 19.8 cm, the communication distance can be 11.2 m and the achieved data rate (6.9 kbps ~ 13.6 kbps) is high enough for voice communication between divers.
APA, Harvard, Vancouver, ISO, and other styles
31

Song, Peng, Hang Yu, and Stefan Winkler. "Vision-based 3D Finger Interactions for Mixed Reality Games with Physics Simulation." International Journal of Virtual Reality 8, no. 2 (January 1, 2009): 1–6. http://dx.doi.org/10.20870/ijvr.2009.8.2.2717.

Full text
Abstract:
Mixed reality applications can provide users with enhanced interaction experiences by integrating virtual and real-world objects in a mixed environment. Through the mixed reality interface, a more realistic and immersive control style is achieved compared to traditional keyboard and mouse input devices. The interface proposed in this paper consists of a stereo camera, which tracks the user's hands and fingers robustly and accurately in 3D space. To enable a physically realistic interaction experience, a physics engine is adopted for simulating the physics of virtual object manipulation. The objects can be picked up and tossed with physical characteristics, such as gravity and collisions, as in the real world. Detection and interaction in our system are fully computer-vision based, without any markers or additional sensors. We demonstrate this gesture-based interface using two mixed reality game implementations: finger fishing, in which a player can fish for virtual objects with his/her fingers as in a real environment, and Jenga, a simulation of the well-known tower-building game. A user study is conducted and reported to demonstrate the accuracy, effectiveness, and comfort of this interactive interface.
APA, Harvard, Vancouver, ISO, and other styles
32

Han, Zijun, Zhaoming Lu, Xiangming Wen, Jingbo Zhao, Lingchao Guo, and Yue Liu. "In-Air Handwriting by Passive Gesture Tracking Using Commodity WiFi." IEEE Communications Letters 24, no. 11 (November 2020): 2652–56. http://dx.doi.org/10.1109/lcomm.2020.3007982.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Pelachaud, Catherine. "Studies on gesture expressivity for a virtual agent." Speech Communication 51, no. 7 (July 2009): 630–39. http://dx.doi.org/10.1016/j.specom.2008.04.009.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Hoetjes, Marieke, Emiel Krahmer, and Marc Swerts. "Does our speech change when we cannot gesture?" Speech Communication 57 (February 2014): 257–67. http://dx.doi.org/10.1016/j.specom.2013.06.007.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Kelner, Judith, and Eduardo Albuquerque. "Guest Editorial: Foreword to the special Issue of the XVII Symposium on Virtual and Augmented Reality – SVR 2015." Journal on Interactive Systems 6, no. 2 (December 1, 2015): 1. http://dx.doi.org/10.5753/jis.2015.658.

Full text
Abstract:
This special issue of the JIS (SBC Journal on Interactive Systems) acknowledges the best papers of the XVII Symposium on Virtual and Augmented Reality (SVR 2015). SVR has been the most important event on Virtual and Augmented Reality in Brazil for the last 17 years. Academic and professional members of the Brazilian Computer Society (SBC) have supported the conference since its beginning. To meet reader expectations, the selected papers come from different sub-areas of Virtual and Augmented Reality. The developments introduced in the papers reflect important recent advances and achievements by the community. More specifically, this issue includes studies on crowd simulation, gesture-driven interaction, the use of a camera as a pointing device, and tracking by applying grayscale conversion.
APA, Harvard, Vancouver, ISO, and other styles
36

Henderson, Jay, Tanya R. Jonker, Edward Lank, Daniel Wigdor, and Ben Lafreniere. "Investigating Cross-Modal Approaches for Evaluating Error Acceptability of a Recognition-Based Input Technique." Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 6, no. 1 (March 29, 2022): 1–24. http://dx.doi.org/10.1145/3517262.

Full text
Abstract:
Emerging input techniques that rely on sensing and recognition can misinterpret a user's intention, resulting in errors and, potentially, a negative user experience. To enhance the development of such input techniques, it is valuable to understand the implications of these errors, but they can be very costly to simulate. Through two controlled experiments, this work explores various low-cost methods for evaluating the error acceptability of freehand mid-air gestural input in virtual reality. Using a gesture-driven game and a drawing application, the first experiment elicited error characteristics through text descriptions, video demonstrations, and a touchscreen-based interactive simulation. The results revealed that video effectively conveyed the dynamics of errors, whereas the interactive modalities effectively reproduced the user experience of effort and frustration. The second experiment contrasts the interactive touchscreen simulation with the target modality, a full VR simulation, and highlights the relative costs and benefits of assessment in an alternative, but still interactive, modality. These findings introduce a spectrum of low-cost methods for evaluating recognition-based errors in VR and a series of characteristics that can be understood in each.
APA, Harvard, Vancouver, ISO, and other styles
37

Ali Shakroum, Moamer, Kok Wai Wong, and Lance Chun Che Fung. "The Effectiveness of the Gesture-Based Learning System (GBLS) and Its Impact on Learning Experience." Journal of Information Technology Education: Research 15 (2016): 191–210. http://dx.doi.org/10.28945/3518.

Full text
Abstract:
Several studies and experiments have been conducted in recent years to examine the value and the advantage of using the Gesture-Based Learning System (GBLS). The investigation of the influence of the GBLS mode on the learning outcomes is still scarce. Most previous studies did not address more than one category of learning outcomes (cognitive, affective outcomes, etc.) at the same time when used to understand the impact of GBLS. Moreover, none of these studies considered the difference in students’ characteristics such as learning styles and spatial abilities. Therefore, a comprehensive empirical research on the impact of the GBLS mode on learning outcomes is needed. The purpose of this paper is to fill in the gap and to investigate the effectiveness of the GBLS mode on learning using Technology Mediated Learning (TML) models. This study revealed that the GBLS mode has greater positive impact on students’ learning outcomes (cognitive and affective outcomes) when compared with other two learning modes that are classified as Computer Simulation Software Learning (CSSL) mode and conventional learning mode. In addition, this study also found that the GBLS mode is capable of serving all students with different learning styles and spatial ability levels. The results of this study revealed that the GBLS mode outperformed the existing learning methods by providing a unique learning experience that considers the differences between students. The results have also shown that the Kinect user interface can create an interactive and an enjoyable learning experience.
APA, Harvard, Vancouver, ISO, and other styles
38

Bastwesy, Marwa R. M., Nada M. ElShennawy, and Mohamed T. Faheem Saidahmed. "Deep Learning Sign Language Recognition System Based on Wi-Fi CSI." International Journal of Intelligent Systems and Applications 12, no. 6 (December 8, 2020): 33–45. http://dx.doi.org/10.5815/ijisa.2020.06.03.

Full text
Abstract:
Many sensing-based gesture recognition systems using Wi-Fi signals have been introduced because they rely on commercial off-the-shelf Wi-Fi devices without any need for additional equipment. In this paper, a deep learning-based sign language recognition system is proposed. Wi-Fi CSI amplitude and phase information is used as input to the proposed model. The proposed model uses three types of deep learning: CNN, LSTM, and ABLSTM, with a complete study of the impact of optimizers, the use of the amplitude and phase of CSI, and the preprocessing phase. Accuracy, F-score, precision, and recall are used as performance metrics to evaluate the proposed model. The proposed model achieves 99.855%, 99.674%, 99.734%, and 93.84% average recognition accuracy for the lab, home, lab + home, and 5 different users in a lab environment, respectively. Experimental results show that the proposed model can effectively detect sign gestures in complex environments compared with some deep learning recognition models.
APA, Harvard, Vancouver, ISO, and other styles
39

Hu, Bin, and Jiacun Wang. "Deep Learning Based Hand Gesture Recognition and UAV Flight Controls." International Journal of Automation and Computing 17, no. 1 (September 30, 2019): 17–29. http://dx.doi.org/10.1007/s11633-019-1194-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Gonseth, Chloe, Anne Vilain, and Coriandre Vilain. "An experimental study of speech/gesture interactions and distance encoding." Speech Communication 55, no. 4 (May 2013): 553–71. http://dx.doi.org/10.1016/j.specom.2012.11.003.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Saqib, Shazia, Allah Ditta, Muhammad Adnan Khan, Syed Asad Raza Kazmi, and Hani Alquhayz. "Intelligent Dynamic Gesture Recognition Using CNN Empowered by Edit Distance." Computers, Materials & Continua 66, no. 2 (2021): 2061–76. http://dx.doi.org/10.32604/cmc.2020.013905.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Ur Rehman, Muneeb, Fawad Ahmed, Muhammad Attique Khan, Usman Tariq, Faisal Abdulaziz Alfouzan, Nouf M. Alzahrani, and Jawad Ahmad. "Dynamic Hand Gesture Recognition Using 3D-CNN and LSTM Networks." Computers, Materials & Continua 70, no. 3 (2022): 4675–90. http://dx.doi.org/10.32604/cmc.2022.019586.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Harrison, Reginald Langford, Stefan Bilbao, James Perry, and Trevor Wishart. "An Environment for Physical Modeling of Articulated Brass Instruments." Computer Music Journal 39, no. 4 (December 2015): 80–95. http://dx.doi.org/10.1162/comj_a_00332.

Full text
Abstract:
This article presents a synthesis environment for physical modeling of valved brass instrument sounds. Synthesis is performed using finite-difference time-domain methods that allow for flexible simulation of time-varying systems. Users have control over the instrument configuration as well as player parameters, such as mouth pressure, lip dynamics, and valve depressions, which can be varied over the duration of a gesture. This article introduces the model used in the environment, the development of code from prototyping in MATLAB and optimization in C, and the incorporation of the executable file in the Sound Loom interface of the Composers Desktop Project. Planned additions to the environment are then discussed. The environment binaries are available to download online along with example sounds and input files.
APA, Harvard, Vancouver, ISO, and other styles
44

Park, Ben Joonyeon, Taekjin Jang, Jong Woo Choi, and Namkug Kim. "Gesture-Controlled Interface for Contactless Control of Various Computer Programs with a Hooking-Based Keyboard and Mouse-Mapping Technique in the Operating Room." Computational and Mathematical Methods in Medicine 2016 (2016): 1–7. http://dx.doi.org/10.1155/2016/5170379.

Full text
Abstract:
We developed a contactless interface that exploits hand gestures to effectively control medical images in the operating room. We developed an in-house program called GestureHook that exploits message hooking techniques to convert gestures into specific functions. For quantitative evaluation of this program, we used gestures to control images of a dynamic biliary CT study and compared the results with those of a mouse (8.54±1.77 s to 5.29±1.00 s; p<0.001) and measured the recognition rates of specific gestures and the success rates of tasks based on clinical scenarios. For clinical applications, this program was set up in the operating room to browse images for plastic surgery. A surgeon browsed images from three different programs: CT images from a PACS program, volume-rendered images from a 3D PACS program, and surgical planning photographs from a basic image viewing program. All programs could be seamlessly controlled by gestures and motions. This approach can control all operating room programs without source code modification and provide surgeons with a new way to safely browse through images and easily switch applications during surgical procedures.
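
A rough cross-platform analogue of the gesture-to-input mapping idea: the authors' GestureHook uses Windows message hooking, whereas this sketch uses the pynput library, and the gesture labels, key bindings, and recognizer are hypothetical.

    from pynput.keyboard import Controller, Key

    keyboard = Controller()

    # Map recognized gesture labels to key presses an image viewer already understands.
    GESTURE_TO_KEYS = {
        "swipe_left":  [Key.page_up],       # previous image
        "swipe_right": [Key.page_down],     # next image
        "zoom_in":     ["+"],
        "zoom_out":    ["-"],
    }

    def dispatch(gesture_label):
        # Translate a recognized gesture into synthetic key events.
        for key in GESTURE_TO_KEYS.get(gesture_label, []):
            keyboard.press(key)
            keyboard.release(key)

    # recognize_gesture() would come from a Kinect/Leap-style recognizer (not shown).
    dispatch("swipe_right")
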
APA, Harvard, Vancouver, ISO, and other styles
45

Padmini, Palli, C. Paramasivam, G. Jyothish Lal, Sadeen Alharbi, and Kaustav Bhowmick. "A Real-Time Oral Cavity Gesture Based Words Synthesizer Using Sensors." Computers, Materials & Continua 71, no. 3 (2022): 4523–54. http://dx.doi.org/10.32604/cmc.2022.022857.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Baek, Taeseung, and Yong-Gu Lee. "Traffic control hand signal recognition using convolution and recurrent neural networks." Journal of Computational Design and Engineering 9, no. 2 (February 25, 2022): 296–309. http://dx.doi.org/10.1093/jcde/qwab080.

Full text
Abstract:
Gesture understanding is one of the most challenging problems in computer vision. Among them, traffic hand signal recognition requires the consideration of speed and the validity of the commanding signal. The lack of available datasets is also a serious problem. Most classifiers approach these problems using the skeletons of target actors in an image. Extracting the three-dimensional coordinates of skeletons is simplified when depth information accompanies the images. However, depth cameras cost significantly more than RGB cameras. Furthermore, the extraction of the skeleton needs to be performed in prior. Here, we show a hand signal detection algorithm without skeletons. Instead of skeletons, we use simple object detectors trained to acquire hand directions. The variance in the time length of gestures mixed with random pauses and noise is handled with a recurrent neural network (RNN). Furthermore, we have developed a flag sequence algorithm to assess the validity of the commanding signal. In whole, the computed hand directions are sent to the RNN, which identifies six types of hand signals given by traffic controllers with the ability to distinguish time variations and intermittent randomly appearing noises. We constructed a hand signal dataset composed of 100 thousand RGB images that is made publicly available. We achieved correct recognition of the hand signals with various backgrounds at 91% accuracy. A processing speed of 30 FPS in FHD video streams, which is a 52% improvement over the best among previous works, was achieved. Despite the extra burden of deciding the validity of the hand signals, this method surpasses methods that solely use RGB video streams. Our work is capable of performing with nonstationary viewpoints, such as those taken from moving vehicles. To accomplish this goal, we set a higher priority for the speed and validity assessment of the recognized commanding signals. The collected dataset is made publicly available through the Korean government portal under the URL “data.go.kr/data/15075814/fileData.do.”
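
A toy sketch of the sequence-classification stage described above; the direction encoding, window length, and layer sizes are assumptions, not the paper's network.

    import tensorflow as tf
    from tensorflow.keras import layers, models

    SEQ_LEN = 60        # frames per window (assumed, roughly 2 s at 30 FPS)
    NUM_DIRS = 8        # quantized hand-direction codes per frame (assumed)
    NUM_SIGNALS = 7     # six traffic hand signals plus a "no signal" class

    model = models.Sequential([
        layers.Input(shape=(SEQ_LEN, 2 * NUM_DIRS)),   # one-hot directions for both hands
        layers.Masking(mask_value=0.0),                 # tolerate padded or missing frames
        layers.GRU(64),
        layers.Dense(NUM_SIGNALS, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()
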
APA, Harvard, Vancouver, ISO, and other styles
47

Rusiewicz, Heather Leavy, Susan Shaiman, Jana M. Iverson, and Neil Szuminsky. "Effects of perturbation and prosody on the coordination of speech and gesture." Speech Communication 57 (February 2014): 283–300. http://dx.doi.org/10.1016/j.specom.2013.06.004.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Esteve-Gibert, Núria, and Pilar Prieto. "Infants temporally coordinate gesture-speech combinations before they produce their first words." Speech Communication 57 (February 2014): 301–16. http://dx.doi.org/10.1016/j.specom.2013.06.006.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Adikari, Sasadara B., Naleen C. Ganegoda, Ravinda G. N. Meegama, and Indika L. Wanniarachchi. "Applicability of a Single Depth Sensor in Real-Time 3D Clothes Simulation: Augmented Reality Virtual Dressing Room Using Kinect Sensor." Advances in Human-Computer Interaction 2020 (May 18, 2020): 1–10. http://dx.doi.org/10.1155/2020/1314598.

Full text
Abstract:
A busy lifestyle led people to buy readymade clothes from retail stores with or without fit-on, expecting a perfect match. The existing online cloth shopping systems are capable of providing only 2D images of the clothes, which does not lead to a perfect match for the individual user. To overcome this problem, the apparel industry conducts many studies to reduce the time gap between cloth selection and final purchase by introducing “virtual dressing rooms.” This paper discusses the design and implementation of augmented reality “virtual dressing room” for real-time simulation of 3D clothes. The system is developed using a single Microsoft Kinect V2 sensor as the depth sensor, to obtain user body parameter measurements, including 3D measurements such as the circumferences of chest, waist, hip, thigh, and knee to develop a unique model for each user. The size category of the clothes is chosen based on the measurements of each customer. The Unity3D game engine was incorporated for overlaying 3D clothes virtually on the user in real time. The system is also equipped with gender identification and gesture controllers to select the cloth. The developed application successfully augmented the selected dress model with physics motions according to the physical movements made by the user, which provides a realistic fitting experience. The performance evaluation reveals that a single depth sensor can be applied in the real-time simulation of 3D cloth with less than 10% of the average measurement error.
APA, Harvard, Vancouver, ISO, and other styles
50

Chen, Chang, Jiteng Sun, Long Wang, Guojin Chen, Ming Xu, Jing Ni, Rizauddin Ramli, Shaohui Su, and Changyong Chu. "Pneumatic Bionic Hand with Rigid-Flexible Coupling Structure." Materials 15, no. 4 (February 13, 2022): 1358. http://dx.doi.org/10.3390/ma15041358.

Full text
Abstract:
This paper presents a rigid-flexible composite bionic hand structure design to address the problem of the low load capacity of soft gripping hands. The bionic hand was designed based on the Fast Pneumatic Network (FPN) approach, which can produce a soft finger bending drive mechanism. A soft finger bending driver was developed and assembled into a human-like soft gripping hand which includes a thumb for omnidirectional movement and four modular soft fingers. An experimental comparison of silicone rubber materials with different properties was conducted to determine suitable materials. A combination of 3D printing technology and mold pouring technology was adopted to complete the prototype preparation of the bionic hand. Based on the second-order Yeoh model, a mathematical model of the soft bionic finger was established, and the ABAQUS simulation analysis software was used for correction to verify the feasibility of the soft finger bending. We adopted a pneumatic control scheme based on a motor micro-pump and developed a human–computer interface in LabVIEW. A comparative experiment was carried out on the bending performance of the finger, and the experimental data were analyzed to verify the accuracy of the mathematical model and simulation. In this study, the control system was designed, and human-like finger gesture and grasping experiments were carried out.
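
For reference, the second-order Yeoh strain-energy function mentioned above has the standard form, with C_{10} and C_{20} the material constants fitted for the chosen silicone rubber and I_1 the first invariant of the left Cauchy-Green deformation tensor:

    W = C_{10}\,(I_1 - 3) + C_{20}\,(I_1 - 3)^2
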
APA, Harvard, Vancouver, ISO, and other styles