Academic literature on the topic 'Gesture – Computer simulation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Gesture – Computer simulation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Gesture – Computer simulation"

1

Wu, Yingnian, Guojun Yang, and Lin Zhang. "Mouse simulation in human–machine interface using kinect and 3 gear systems." International Journal of Modeling, Simulation, and Scientific Computing 5, no. 4 (September 29, 2014): 1450015. http://dx.doi.org/10.1142/s1793962314500159.

Full text
Abstract:
We never stop finding better ways to communicate with machines. To interact with computers, we have tried many approaches, from punched tape and tape readers to QWERTY keyboards and command lines, from the graphical user interface and mouse to multi-touch screens. The way we communicate with computers and devices is becoming more direct and easier. In this paper, we present gesture-based mouse simulation in a human–computer interface built on 3 Gear Systems using two Kinect sensors. The Kinect sensor is well suited to dynamic gesture tracking and pose recognition. The 3 Gear Systems setup works as a mouse; more specifically, gestures perform click, double-click, and scroll. We use a coordinate-converting matrix and a Kalman filter to reduce the jitter caused by sensor errors and make the interface deliver a better user experience. Finally, the future of the human–computer interface is discussed.
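The abstract's smoothing step suggests a concrete illustration. Below is a minimal sketch, assuming nothing about the authors' actual implementation, of a constant-velocity Kalman filter in Python that smooths a noisy 2D cursor track; the noise levels q and r are assumed values, not the paper's parameters.

```python
import numpy as np

def kalman_smooth(points, dt=1/30, q=1.0, r=25.0):
    """Smooth a noisy 2D cursor track with a constant-velocity Kalman filter.

    points : (N, 2) array of raw (x, y) measurements
    q, r   : assumed process / measurement noise levels
    """
    F = np.array([[1, 0, dt, 0],   # state transition: position + velocity
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]])
    H = np.array([[1, 0, 0, 0],    # we only measure position
                  [0, 1, 0, 0]])
    Q = q * np.eye(4)              # process noise covariance (assumed)
    R = r * np.eye(2)              # measurement noise covariance (assumed)

    x = np.array([points[0][0], points[0][1], 0.0, 0.0])  # initial state
    P = np.eye(4)
    smoothed = []
    for z in points:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new measurement
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
        smoothed.append(x[:2].copy())
    return np.array(smoothed)

# toy usage: a straight-line gesture corrupted by sensor noise
raw = np.linspace(0, 100, 60)[:, None] * [1, 0.5] + np.random.randn(60, 2) * 3
print(kalman_smooth(raw)[:3])
```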
APA, Harvard, Vancouver, ISO, and other styles
2

Siddiqui, Shahan Yamin, Ghanwa Batool, Muhammad Sohail Irshad, Hafiz Muhammad Usama, Muhammad Tariq Siddique, Bilal Shoaib, Sajid Farooq, and Arfa Hassan. "Time Complexity of Color Camera Depth Map Hand Edge Closing Recognition Algorithm." Lahore Garrison University Research Journal of Computer Science and Information Technology 4, no. 3 (September 25, 2020): 1–22. http://dx.doi.org/10.54692/lgurjcsit.2020.040395.

Full text
Abstract:
The objective of this paper is to calculate the time complexity of the color camera depth map hand edge closing algorithm, a hand gesture recognition technique. Hand gesture recognition through human-computer interaction using a color camera and a depth map is analyzed, and the time complexity of the algorithms is derived using 2D minima methods, brute force, and plane sweep. Human-computer interaction is an essential component of most people's daily lives. The goal of gesture recognition research is to establish a system that can classify specific human gestures and use them to convey information for device control. These methods differ in their input types, classifiers, and techniques for identifying hand gestures. This paper presents the 'color camera depth map hand edge recognition' algorithm, its time complexity, and its simulation in MATLAB.
APA, Harvard, Vancouver, ISO, and other styles
3

Ding, Xueyan, and Yi Zhang. "Human-Computer Interaction System Application in Hotel Management Teaching Practice." Mobile Information Systems 2022 (July 12, 2022): 1–8. http://dx.doi.org/10.1155/2022/6215736.

Full text
Abstract:
With the increasing demand for the performance and security of communication networks, the fifth-generation mobile technology has developed rapidly and attracted unprecedented attention. This article analyzes the current state of research on visual gesture recognition and human-computer interaction based on the Internet of Things. In view of the current shortcomings of gesture recognition, it proposes a solution that uses Kinect somatosensory sensors to recognize gestures and explore human-computer interaction. We then analyze how the Kinect somatosensory sensor obtains depth images, study the method of obtaining gesture positions and joint points from the depth information, and combine the depth information with a skin color model to create a three-dimensional image of the simulated gesture. With the rapid development of China's tourism industry, China's hotel industry has entered an era in which domestic and foreign competitors coexist. The development of hotels urgently needs high-quality professionals who have received specialized training and are familiar with hotel management. In hotel management teaching, human-computer interactive learning can effectively improve learning interest. In this paper, the structure of a human-computer interaction system based on gesture recognition is established, which can effectively improve recognition accuracy and is of great significance for hotel management teaching systems.
APA, Harvard, Vancouver, ISO, and other styles
4

Aneela, Banda. "Implementing a Real Time Virtual Mouse System and Fingertip Detection based on Artificial Intelligence." International Journal for Research in Applied Science and Engineering Technology 9, no. VI (June 25, 2021): 2265–70. http://dx.doi.org/10.22214/ijraset.2021.35485.

Full text
Abstract:
Artificial intelligence refers to the simulation of human intelligence in computers that have been trained to think and act like humans. It is a broad branch of computer science devoted to the creation of intelligent machines capable of performing activities that would normally need human intelligence. Although artificial intelligence is a heterogeneous science with several techniques, developments in machine learning and deep learning are driving a paradigm shift in practically every business. Human-computer interaction requires the identification of hand gestures using vision-based technology. The keyboard and mouse have grown more significant in human-computer interaction in recent decades, alongside the progression from buttons to touch technology and a variety of other gesture control modalities. A normal camera may be used to construct a hand-tracking-based virtual mouse application. We combine camera and computer vision technologies, such as fingertip identification and gesture recognition, into the proposed system to handle mouse operations (volume control, right click, left click), and show that it can perform everything existing mouse devices can.
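As a loose sketch of the kind of pipeline such a virtual mouse involves (not the authors' implementation), the fragment below maps fingertip coordinates from a camera frame to screen coordinates and fires a click on a thumb-index pinch. The upstream hand tracker is assumed and hypothetical; pyautogui is an assumed dependency for synthetic mouse events.

```python
import pyautogui  # assumed dependency for synthetic mouse events

SCREEN_W, SCREEN_H = pyautogui.size()
CAM_W, CAM_H = 640, 480  # assumed camera resolution

def to_screen(x_cam, y_cam):
    """Map camera-space fingertip coordinates to screen coordinates."""
    return x_cam * SCREEN_W / CAM_W, y_cam * SCREEN_H / CAM_H

def handle_frame(index_tip, thumb_tip, pinch_px=30):
    """index_tip / thumb_tip: (x, y) pixels from a hand tracker
    (hypothetical upstream model). Move the cursor with the index
    fingertip; click when thumb and index pinch together."""
    x, y = to_screen(*index_tip)
    pyautogui.moveTo(x, y)
    dx = index_tip[0] - thumb_tip[0]
    dy = index_tip[1] - thumb_tip[1]
    if (dx * dx + dy * dy) ** 0.5 < pinch_px:  # fingers close together -> click
        pyautogui.click()
```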
APA, Harvard, Vancouver, ISO, and other styles
5

Stančić, Ivo, Josip Musić, Tamara Grujić, Mirela Kundid Vasić, and Mirjana Bonković. "Comparison and Evaluation of Machine Learning-Based Classification of Hand Gestures Captured by Inertial Sensors." Computation 10, no. 9 (September 14, 2022): 159. http://dx.doi.org/10.3390/computation10090159.

Full text
Abstract:
Gesture recognition is a topic in computer science and language technology that aims to interpret human gestures with computer programs and many different algorithms. It can be seen as the way computers understand human body language. Today, the main interaction tools between computers and humans are still the keyboard and mouse. Gesture recognition can be used as a tool for communication with the machine and interaction without any mechanical device such as a keyboard or mouse. In this paper, we present the results of a comparison of eight different machine learning (ML) classifiers in the task of human hand gesture recognition and classification, to explore how to efficiently implement one or more of the tested ML algorithms on an 8-bit AVR microcontroller for on-line human gesture recognition, with the intention of gesturally controlling a mobile robot. The 8-bit AVR microcontrollers are still widely used in industry, but due to their lack of computational power and limited memory, it is a challenging task to implement ML algorithms on them efficiently for on-line classification. Gestures were recorded using inertial sensors, gyroscopes, and accelerometers placed at the wrist and index finger. One thousand eight hundred (1800) hand gestures were recorded and labelled. Six important features were defined for the identification of nine different hand gestures using eight different machine learning classifiers: Decision Tree (DT), Random Forests (RF), Logistic Regression (LR), Linear Discriminant Analysis (LDA), Support Vector Machine (SVM) with linear kernel, Naïve Bayes classifier (NB), K-Nearest Neighbours (KNN), and Stochastic Gradient Descent (SGD). All tested algorithms were ranked according to Precision, Recall, and F1-score (abb.: P-R-F1). The best algorithms were SVM (P-R-F1: 0.9865, 0.9861, and 0.9863) and RF (P-R-F1: 0.9863, 0.9861, and 0.9862), but their main disadvantage is their unusability for on-line implementation on 8-bit AVR microcontrollers, as proven in the paper. The next best algorithms had only slightly poorer performance than SVM and RF: KNN (P-R-F1: 0.9835, 0.9833, and 0.9834) and LR (P-R-F1: 0.9810, 0.9810, and 0.9810). Regarding implementation on 8-bit microcontrollers, KNN proved to be inadequate, like SVM and RF. However, the analysis for LR proved that this classifier could be efficiently implemented on the targeted microcontrollers. Given its high F1-score (comparable to SVM, RF, and KNN), this leads to the conclusion that LR is the most suitable of the tested classifiers for on-line applications in resource-constrained environments, such as embedded devices based on 8-bit AVR microcontrollers, due to its lower computational complexity in comparison with the other tested algorithms.
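A hedged sketch of the kind of comparison reported here: training several scikit-learn classifiers and ranking them by macro-averaged precision, recall, and F1. The synthetic data stands in for the paper's six inertial-sensor features and nine gesture classes; everything else is an assumption, not the authors' setup.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_recall_fscore_support
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

# stand-in for 1800 gestures x 6 features, 9 gesture classes
X, y = make_classification(n_samples=1800, n_features=6, n_informative=6,
                           n_redundant=0, n_classes=9, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "LR":  LogisticRegression(max_iter=1000),
    "SVM": SVC(kernel="linear"),
    "RF":  RandomForestClassifier(n_estimators=100),
    "KNN": KNeighborsClassifier(n_neighbors=5),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    p, r, f1, _ = precision_recall_fscore_support(
        y_te, model.predict(X_te), average="macro")
    print(f"{name}: P={p:.4f} R={r:.4f} F1={f1:.4f}")
```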
APA, Harvard, Vancouver, ISO, and other styles
6

Khaled, Hazem, Samir G. Sayed, El Sayed M. Saad, and Hossam Ali. "Hand Gesture Recognition Using Modified 1$ and Background Subtraction Algorithms." Mathematical Problems in Engineering 2015 (2015): 1–8. http://dx.doi.org/10.1155/2015/741068.

Full text
Abstract:
Computers and computerized machines have tremendously penetrated all aspects of our lives. This raises the importance of the Human-Computer Interface (HCI). Common HCI techniques still rely on simple devices such as keyboards, mice, and joysticks, which are not enough to keep pace with the latest technology. Hand gestures have become one of the most attractive alternatives to existing traditional HCI techniques. This paper proposes a new hand gesture detection system for human-computer interaction using real-time video streaming. This is achieved by removing the background using an average background algorithm and using the $1 algorithm for hand template matching. Every hand gesture is then translated into commands that can be used to control robot movements. The simulation results show that the proposed algorithm achieves a high detection rate and a small recognition time under different light changes, scales, rotations, and backgrounds.
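A minimal sketch of the running-average background subtraction step the abstract describes (the $1 template-matching stage is omitted). The OpenCV calls are standard, but the learning rate and threshold are assumed values, not the paper's parameters.

```python
import cv2
import numpy as np

def segment_hand(frames, alpha=0.02, thresh=25):
    """Yield binary foreground masks from a stream of BGR frames.

    alpha  : background learning rate (assumed)
    thresh : absolute-difference threshold in gray levels (assumed)
    """
    background = None
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype("float32")
        if background is None:
            background = gray.copy()   # bootstrap with the first frame
            continue
        cv2.accumulateWeighted(gray, background, alpha)  # running average
        diff = cv2.absdiff(gray, background)
        _, mask = cv2.threshold(diff.astype("uint8"), thresh, 255,
                                cv2.THRESH_BINARY)
        yield mask  # white pixels = moving hand candidate region
```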
APA, Harvard, Vancouver, ISO, and other styles
7

Jagodziński, Piotr, and Robert Wolski. "THE EXAMINATION OF THE IMPACT ON STUDENTS’ USE OF GESTURES WHILE WORKING IN A VIRTUAL CHEMICAL LABORATORY FOR THEIR COGNITIVE ABILITIES." Problems of Education in the 21st Century 61, no. 1 (October 5, 2014): 46–57. http://dx.doi.org/10.33225/pec/14.61.46.

Full text
Abstract:
One cognitive theory is the embodied cognition theory. According to this theory, it is important to use appropriate gestures in the process of assimilating new information and acquiring new skills. The development of information and communication technologies has enabled interfaces that allow the user to control computer programs and electronic devices by using gestures. These Natural User Interfaces (NUI) were used in teaching Chemistry in middle school and secondary school. A virtual chemical laboratory was developed in which students can simulate the performance of laboratory activities similar to those performed in a real lab. The Kinect sensor was used to detect and analyze hand movement. The research established the educational effectiveness of the virtual laboratory, which is an example of a gesture-based system (GBS). The study examined how these teaching methods were used and the extent to which they increase students' understanding. The results indicate that the use of a gesture-based system makes teaching more attractive and increases the quality of Chemistry teaching. Key words: chemistry experiments, educational simulation, gesture based system, embodied cognition theory.
APA, Harvard, Vancouver, ISO, and other styles
8

Kara, Tolgay, and Ahmad Soliman Masri. "Modeling and Analysis of a Visual Feedback System to Support Efficient Object Grasping of an EMG-Controlled Prosthetic Hand." Current Directions in Biomedical Engineering 5, no. 1 (September 1, 2019): 207–10. http://dx.doi.org/10.1515/cdbme-2019-0053.

Full text
Abstract:
Millions of people around the world have lost their upper limbs, mainly due to accidents and wars. Recently, in the Middle East, the demand for prosthetic limbs has increased dramatically due to ongoing wars in the region. Commercially available prosthetic limbs are expensive, while the most economical method available for controlling prosthetic limbs is electromyography (EMG). Researchers on EMG-controlled prosthetic limbs face several challenges, including efficiency problems in terms of functionality, especially in prosthetic hands. A major issue that needs to be solved is that currently available low-cost EMG-controlled prosthetic hands cannot enable the user to grasp various types of objects in various shapes and cannot provide efficient use of the object by deciding the necessary hand gesture. In this paper, a computer vision-based mechanism is proposed to detect and recognize objects and apply the optimal hand gesture through visual feedback. The objects are classified into groups, and the optimal hand gesture to grasp and use the targeted object most efficiently for the user is implemented. A simulation model of human hand kinematics is developed, and simulation tests reveal the efficacy of the proposed method. Eighty different types of objects are detected, recognized, and classified in simulation tests, which can be realized using two electrodes supplying the input to perform the action. Simulation results reveal the performance of the proposed EMG-controlled prosthetic hand in maintaining optimal hand gestures in a computer environment. The results are promising for helping disabled people handle and use objects more efficiently without higher costs.
APA, Harvard, Vancouver, ISO, and other styles
9

Yan, Xu, and Wang Wei Lan. "The Research of Thangka Buddha Gesture Detection Algorithm." JOURNAL OF ADVANCES IN MATHEMATICS 11, no. 4 (September 16, 2015): 5089–93. http://dx.doi.org/10.24297/jam.v11i4.1258.

Full text
Abstract:
This paper describes the meaning of segmenting the Thangka Buddha gesture and the steps involved, and then chooses the Canny operator method of edge detection for the Thangka Buddha's gesture. Through a simple form of human-computer interaction, the Buddha's gesture is cut out from the Thangka Buddha image, and simulation experiments are carried out in MATLAB. Finally, through analysis and comparison, the advantages and disadvantages of the algorithm are discussed.
APA, Harvard, Vancouver, ISO, and other styles
10

Haber, Jeffrey, and Joon Chung. "Assessment of UAV operator workload in a reconfigurable multi-touch ground control station environment." Journal of Unmanned Vehicle Systems 4, no. 3 (September 1, 2016): 203–16. http://dx.doi.org/10.1139/juvs-2015-0039.

Full text
Abstract:
Multi-touch computer inputs allow users to interact with a virtual environment through the use of gesture commands on a monitor instead of a mouse and keyboard. This style of input is easy for the human mind to adapt to because gestures directly reflect how one interacts with the natural environment. This paper presents and assesses a personal-computer-based unmanned aerial vehicle ground control station that utilizes multi-touch gesture inputs and system reconfigurability to enhance operator performance. The system was developed at Ryerson University’s Mixed-Reality Immersive Motion Simulation Laboratory using commercial-off-the-shelf Presagis software. The ground control station was then evaluated using NASA’s task load index to determine if the inclusion of multi-touch gestures and reconfigurability provided an improvement in operator workload over the more traditional style of mouse and keyboard inputs. To conduct this assessment, participants were tasked with flying a simulated aircraft through a specified number of waypoints, and had to utilize a payload controller within a predetermined area. The task load index results from these flight tests have initially shown that the developed touch-capable ground control station improved operator workload while reducing the impact of all six related human factors.
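For context on the assessment method: NASA's task load index (TLX) combines six subscale ratings into a weighted workload score via 15 pairwise comparisons. The sketch below computes the standard weighted score with example numbers; it illustrates the instrument, not the paper's analysis code.

```python
# NASA-TLX: six subscales rated 0-100, weighted by how many times each
# dimension was picked in the 15 pairwise comparisons (weights sum to 15).
RATINGS = {"mental": 55, "physical": 20, "temporal": 40,
           "performance": 30, "effort": 45, "frustration": 25}  # example values
WEIGHTS = {"mental": 4, "physical": 1, "temporal": 3,
           "performance": 2, "effort": 3, "frustration": 2}     # example tallies

def tlx_score(ratings, weights):
    """Weighted mean of the six subscales, on a 0-100 scale."""
    assert sum(weights.values()) == 15, "pairwise tallies must total 15"
    return sum(ratings[k] * weights[k] for k in ratings) / 15.0

print(f"Overall workload: {tlx_score(RATINGS, WEIGHTS):.1f} / 100")
```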
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Gesture – Computer simulation"

1

Kolesnik, Paul. "Conducting gesture recognition, analysis and performance system." Thesis, McGill University, 2004. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=81499.

Full text
Abstract:
A number of conducting gesture analysis and performance systems have been developed over the years. However, most previous projects either concentrated primarily on tracking tempo- and amplitude-indicating gestures, or implemented individual mapping techniques for expressive gestures that varied from research to research. There is a clear need for a uniform process that could be applied to the analysis of both indicative and expressive gestures. The proposed system provides a set of tools with extensive functionality for identification, classification, and performance with conducting gestures. The gesture recognition procedure is based on Hidden Markov Models (HMMs). A set of HMM tools was developed for the Max/MSP software. Training and recognition procedures are applied to both right-hand beat- and amplitude-indicative gestures and left-hand expressive gestures. Continuous recognition of right-hand gestures is incorporated into a real-time gesture analysis and performance system in the Eyesweb and Max/MSP/Jitter environments.
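A hedged sketch of HMM-based gesture classification in the spirit of the thesis (whose actual tools target Max/MSP): train one Gaussian HMM per gesture class and label a new observation sequence with the class whose model scores it highest. hmmlearn is an assumed stand-in library, not the thesis's toolset.

```python
import numpy as np
from hmmlearn import hmm  # assumed stand-in for the thesis's Max/MSP HMM tools

def train_models(examples_by_gesture, n_states=5):
    """examples_by_gesture: {name: [(T_i, D) feature arrays]} -> {name: model}"""
    models = {}
    for name, seqs in examples_by_gesture.items():
        X = np.vstack(seqs)                 # concatenated observation sequences
        lengths = [len(s) for s in seqs]    # per-sequence lengths for training
        m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                            n_iter=50)
        m.fit(X, lengths)                   # Baum-Welch training
        models[name] = m
    return models

def classify(models, seq):
    """Return the gesture whose HMM gives the highest log-likelihood."""
    return max(models, key=lambda name: models[name].score(seq))

# toy usage with random 2-D "beat" trajectories
rng = np.random.default_rng(0)
data = {"downbeat": [rng.normal(0, 1, (30, 2)) for _ in range(5)],
        "upbeat":   [rng.normal(2, 1, (30, 2)) for _ in range(5)]}
models = train_models(data)
print(classify(models, rng.normal(2, 1, (30, 2))))  # likely "upbeat"
```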
APA, Harvard, Vancouver, ISO, and other styles
2

Morales González, Rafael. "Rich multi-touch input with passive tokens." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS309/document.

Full text
Abstract:
This thesis investigates a novel input technique for enriching the gesture vocabulary on a multi-touch surface, based on the fingers' relative locations and passive tokens. The first project, TouchTokens, presents a novel technique for interacting with multi-touch surfaces and tokens. Its originality is that the tokens are totally passive (they need no additional electronic components) and their design features notches that guide the user's grasp, so that each token is associated with a finger configuration of its own. When users hold a token and place it on the surface, touching it simultaneously, the system recognizes the resulting touch pattern with a very high level of accuracy (>95%). The recognition algorithm and the token design stem from a formative study in which we collected and analyzed touch patterns while users held tokens of varying size and shape; this study showed that users adopt consistent but individual grasp strategies, which led us to add notches so that users always grasp a given token in the same way. This approach works on any touch-sensitive surface and makes it possible to build low-cost interfaces that combine non-conductive tangibles and gestural input. We discuss the roles that TouchTokens can play in interactive systems and present a set of demonstration applications. This initial design, however, is limited to the two-state model of touch interaction, as the system only knows the tokens' positions and cannot detect tokens that are not touched. In the second project of the thesis, we introduce a laser-cut lattice-hinge technique for making the tokens flexible, so that users can, for example, bend or squeeze them in addition to sliding them on the surface. We then extend our recognizer to analyze the micro-movements of the fingers while users are holding and deforming those tokens, which also lets us discriminate, when users lift their fingers, whether the token was taken off the surface or left on it. We ran experiments to calibrate algorithms for discriminating the three following types of manipulation: (1) whether a token is left on the surface or taken off it (On/Off); (2) whether a token has been bent; and (3) whether it has been squeezed. Our results show that the algorithms recognize these manipulations with accuracies of 90.1% (On/Off), 91.1% (Bent), and 96.9% (Squeezed). The thesis concludes with the presentation of two tools, TouchTokenBuilder and TouchTokenTracker, which facilitate the development of tailor-made tangibles using a simple direct-manipulation interface. TouchTokenBuilder is a software application that assists interface designers in placing notches on arbitrarily-shaped vector contours to create conflict-free token sets, warning them about potential recognition conflicts. It outputs two files: a vector-graphics description of all tokens in the set, for their construction, and a numerical description of each token's geometry, for their recognition. TouchTokenTracker is a software library that takes the numerical description produced by TouchTokenBuilder as input and enables developers to track the tokens' full geometry (location, orientation, and shape) throughout their manipulation on the multi-touch surface.
APA, Harvard, Vancouver, ISO, and other styles
3

Bainville, Eric. "Modélisation géométrique et dynamique d'un geste chirurgical." Phd thesis, Université Joseph Fourier (Grenoble), 1996. http://tel.archives-ouvertes.fr/tel-00004351.

Full text
Abstract:
This work addresses the problem of computer-based simulation of and assistance for a surgical operation. We tackle three aspects of this problem. First, we describe a real-time assistance system for a particular surgical operation: retroperitoneoscopy. The system continuously displays synthetic images that follow the movements of the surgical instrument, giving the surgeon additional information that makes his or her gesture faster and more precise. We detail the design and implementation of this system, as well as experiments on an anatomical specimen. Second, to go further and actually simulate the behavior of the patient's organs during the operation, we designed a new model of a system of solids. This model combines rigid polygonal solids with highly deformable solids represented by a finite-element-style mesh. The assumption of quasi-static evolution of the system and the use of elastic constitutive laws from mechanics yield a robust and realistic model. We detail the implementation of this model and present some results close to 'surgical' behaviors. Finally, we study some mathematical and algorithmic tools used in the two preceding systems: the representation of rotations, continuum mechanics, rigid registration of paired point clouds, and the detection of circular and elliptical objects and of grids in grayscale images.
APA, Harvard, Vancouver, ISO, and other styles
4

Aubry, Matthieu. "Modélisation et apprentissage de synergies pour le contrôle du mouvement de personnages virtuels - Application au geste d'atteinte de cible." Phd thesis, Université de Bretagne occidentale - Brest, 2010. http://tel.archives-ouvertes.fr/tel-00506601.

Full text
Abstract:
Virtual characters have become increasingly common in many domains, for example video games, computer-aided design, and the study of disability. Obtaining realistic movements that reproduce the sensorimotor mechanisms of human movement is a major scientific challenge. This thesis belongs to the family of motion-synthesis models that use captured data to improve realism. Our proposal is to integrate the notion of synergy into a sensorimotor control loop to improve the trade-off between realism, simplicity, and autonomy. A synergy is formed when different entities collaborate to reach a common goal. We studied the possibility of modeling synergies across the different degrees of freedom of the arm during target-reaching movements. The synergy model corrects the behavior of the control loop by adjusting the dynamics of the movement and the joint trajectories. The characteristics common to the synergies involved are represented by spatial and temporal functions that can be parameterized to reproduce individual specificities. A learning method adjusts the parameter values so that the synthesized movements are as close as possible to the captured examples. The experiments compare different synergy models and different learning heuristics, and the results are used to find the best compromise between model complexity, quality of the result, and learning speed. A study of the characteristics of the synthesized movements shows that the proposed modeling can reproduce the dynamics of the movement and the joint trajectories. Finally, a mechanism for classifying synergies is proposed. The result represents the links between the conditions under which a movement is performed and the synergies involved. In this work, these links are exploited for extrapolating and analyzing the synergies employed by a person. The extrapolation mechanism uses a selection criterion to find, among the links obtained from learning, the synergy to apply when synthesizing new movements. The analysis of similarities between different synergies is made possible by the semantics that the synergy model gives to its parameters. Perspectives of this work concern, on the one hand, extending this model to other types of gestures and, on the other hand, exploiting it to analyze the synergies of particular subjects, for example people with motor disabilities.
APA, Harvard, Vancouver, ISO, and other styles
5

Ramstein, Christophe. "Analyse, représentation et traitement du geste instrumental : application aux instruments à clavier." Phd thesis, Grenoble INPG, 1991. http://tel.archives-ouvertes.fr/tel-00340367.

Full text
Abstract:
In the context of designing and building a computer tool for musical creation, we study the instrumental gesture as a means of controlling, in real time, sound-synthesis processes based on the simulation of instrumental mechanisms, and we study its relation to musical composition. To describe and classify the instrumental gesture, treated as a sequence of gestural events captured and stored as sampled signals, we pose the problem of segmentation. Considering instrumental play on a keyboard, we propose elements of a gestural syntax and derive criteria for automatic segmentation. We then turn to the graphical representation of the segmented gestural events and to their composition and transformation. We define several levels of representation and, for each of them, processing procedures that are either manual or taken over, in whole or in part, by composition models. The gesture editor, integrated into the CORDIS-ANIMA sound-synthesis system, synthesizes all of these possibilities.
APA, Harvard, Vancouver, ISO, and other styles
6

Wolf, Rémi. "Quantification de la qualité d'un geste chirurgical à partir de connaissances a priori." Phd thesis, Université de Grenoble, 2013. http://tel.archives-ouvertes.fr/tel-00965163.

Full text
Abstract:
The development of laparoscopic surgery creates new challenges for the surgeon, whose visual and tactile perception of the operating site is modified relative to previous experience. Many devices have been developed around the surgical procedure to help the surgeon perform the gesture with the best possible quality. These devices aim to let the surgeon better perceive the context of the intervention, plan the operative strategy optimally, and assist during the performance of the gesture. Designing a system for analyzing the surgical procedure, capable of identifying risky situations and improving the quality of the gesture, is a major challenge in the field of computer-assisted medical and surgical interventions. Assessing the quality of the gesture involves several components of its performance: the surgeon's technical skills, theoretical knowledge, and judgment. The objective of this thesis was to develop a method for assessing the technical quality of the surgeon's gestures from prior knowledge, suited to the specific constraints of the operating room and without modifying the surgeon's environment. This assessment relies on the definition of metrics predictive of the quality of the surgical gesture, derived from the trajectories of the instruments during the procedure. The first step of this work was therefore the development of a method for tracking the position of laparoscopic instruments in the abdominal cavity during surgery, based on the endoscopic images and without adding markers. This tracking combines geometric models of the camera, the instrument, and its orientation, as well as statistical models describing the latter's evolution. The method allows several laparoscopic instruments to be tracked under training-bench conditions, offline for the moment. The second step consisted of extracting, from the trajectories, parameters predictive of the quality of the surgical gesture, using partial least squares regression and k-means classifiers. Several new metrics were identified, relating to the coordination of the surgeon's hands and the optimization of the working space. This device is intended to be integrated into a larger system providing the surgeon, in real time, with contextualized information about the gesture, for example by fusing trajectory data with multimodal intra-operative imaging data.
APA, Harvard, Vancouver, ISO, and other styles
7

Chouly, Franz. "Modélisation physique des voies aériennes supérieures pour le Syndrome d'Apnées Obstructives du Sommeil." Phd thesis, Grenoble INPG, 2005. http://tel.archives-ouvertes.fr/tel-00012061.

Full text
Abstract:
Obstructive Sleep Apnea Syndrome is characterized by the frequent occurrence of episodes of upper-airway obstruction. The interest of a physical model is that it allows a finer understanding of the phenomenon and holds out hope of improved treatments. The goal was therefore to design, and then validate, a numerical simulation algorithm for the interaction between living tissue and the airflow at the origin of an apneic episode. To lighten the computations and reduce simulation time, simplifying hypotheses were considered. On the one hand, as far as living tissue is concerned, the finite element method allows a realistic prediction of its deformation; the framework of small perturbations and linear elasticity moreover implies a fast computation of the mechanical response. On the other hand, the airflow is simulated via an asymptotic formulation of the Navier-Stokes equations (Reduced Navier-Stokes / Prandtl equations), which eases the numerical solution. To validate the physical hypotheses and the numerical method, an in-vitro mock-up was used. It reproduces, under controlled conditions, an interaction between airflow and a deformable wall analogous to the one occurring at the base of the tongue at the onset of an obstruction. A precise measurement of the deformation of the flow channel is obtained with a digital camera. A series of quantitative comparisons showed that, despite the simplifications made, the error between prediction and measurements is small. Finally, to come closer to clinical reality, upper-airway models of four apneic patients were built from sagittal radiographs. Comparisons between simulations based on pre-operative and post-operative radiographs showed that the predictions were globally consistent with the consequences of the surgical gesture. They also highlighted certain limits of our approach, due to the complexity of the phenomenon.
APA, Harvard, Vancouver, ISO, and other styles
8

Bouënard, Alexandre. "Synthesis of Music Performances: Virtual Character Animation as a Controller of Sound Synthesis." Phd thesis, Université de Bretagne Sud, 2009. http://tel.archives-ouvertes.fr/tel-00497292.

Full text
Abstract:
Recent years have seen the emergence of many musical interfaces whose main objective is to offer new instrumental experiences. The specification of such interfaces generally highlights the expertise of musicians in apprehending multiple, heterogeneous sensory data (visual, auditory, and tactile). These interfaces thus involve processing these different data to design new modes of interaction. This thesis focuses more specifically on the analysis, modeling, and synthesis of percussion performance situations. We propose a system for synthesizing the visual and auditory feedback of percussion performances, in which a virtual percussionist controls sound-synthesis processes. The analysis stage shows the importance of the control of the mallet tip by expert percussionists playing the timpani. This analysis requires the prior capture of the instrumental gestures of several percussionists. It leads to the extraction of parameters from the captured tip trajectories for various playing variations. These parameters are quantitatively evaluated by their ability to represent these variations. The synthesis system proposed in this work implements the physics-based animation of a virtual percussionist able to control sound-synthesis processes. The physics-based animation involves a new mode of controlling the physical model by specifying only the trajectory of the mallet tip. This control mode is particularly relevant given the importance of mallet control highlighted in the preceding analysis. The physical approach is moreover used to enable the virtual percussionist to interact with a physical model of a timpani. Lastly, the proposed system is used from a musical-composition perspective. New percussion performance situations are constructed through gestural scores, obtained by assembling and articulating canonical gestural units available in the captured data. This approach is applied to the composition and synthesis of percussion exercises and is qualitatively evaluated by a percussion teacher.
APA, Harvard, Vancouver, ISO, and other styles
9

Luciani, Annie. "Un outil informatique de création d'images animées : modèles d'objets, langage, contrôle gestuel en temps réel : le système ANIMA." Phd thesis, 1985. http://tel.archives-ouvertes.fr/tel-00319267.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Luboz, Vincent. "Chirurgie de l'exophtalmie dysthyroïdienne : planning et assistance au geste." Phd thesis, 2003. http://tel.archives-ouvertes.fr/tel-00005178.

Full text
Abstract:
This thesis studies the behavior of intra-orbital soft tissues in the treatment of exophthalmia, which is characterized by a protrusion of the eyeball. The surgical part of this treatment consists of decompressing the orbit through an osteotomy of the orbital walls to reduce the protrusion of the globe. Our work aims to predict the relations between the volume of decompressed tissue, the osteotomy surface, and the resulting backward displacement of the eye, in order to support surgical planning. To this end, two models were developed. The first is a simple analytical model of the orbit that treats the orbital walls as a cone and the globe as a sphere; it satisfactorily determines the decompressed volume as a function of the desired displacement. The second is a biomechanical model of the intra-orbital soft tissues and their interactions with the bony walls and the globe. It is a finite element mesh using a poroelastic material and taking into account the morphology of the patient's orbit and the mechanical properties of the soft tissues. It quantifies the ocular displacement and the volume of decompressed tissue as a function of the force (or displacement) imposed by the surgeon and of the osteotomy (surface and position). Its results are quite promising and allow the behavior of the intra-orbital tissues to be evaluated. The automatic finite element mesh generation method developed in this thesis made it possible to simulate various osteotomies and to conclude that the morphology of the patient's orbit has an impact on the displacement and the decompressed volume, and that the influence of the osteotomy surface is moderate. A rheological analysis of the orbital soft tissues was carried out: the in vitro tests laid the groundwork for future measurements, and the in vivo tests, performed with an ad hoc sensor, determined the stiffness of the soft tissues. Although the models presented in this thesis are not yet usable for surgical planning, they provide satisfactory results and a good estimate of the phenomena observed during orbital decompression.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Gesture – Computer simulation"

1

Sales Dias, Miguel, Sylvie Gibet, Marcelo M. Wanderley, and Rafael Bastos, eds. Gesture-Based Human-Computer Interaction and Simulation. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-540-92865-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Gibet, Sylvie, Nicolas Courty, and Jean-François Kamp, eds. Gesture in Human-Computer Interaction and Simulation. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11678816.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Human activity recognition and gesture spotting with body-worn sensors. Konstanz: Hartung-Gorre Verlag, 2005.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Hutchison, David. Gesture-Based Human-Computer Interaction and Simulation: 7th International Gesture Workshop, GW 2007, Lisbon, Portugal, May 23-25, 2007, Revised Selected Papers. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Zhou, S. Kevin, ed. Analysis and modeling of faces and gestures: Third international workshop, AMFG 2007, Rio de Janeiro, Brazil, October 20, 2007: proceedings. Berlin: Springer, 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Gibet, Sylvie, Nicolas Courty, and Jean-François Kamp. Gesture in Human-Computer Interaction and Simulation. Springer, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Gesture in Embodied Communication and Human-Computer Interaction: 8th International Gesture Workshop, GW 2009, Bielefeld, Germany, February 25-27, 2009, Revised Selected Papers. Springer, 2010.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Gibet, Sylvie, Nicolas Courty, and Jean-Francois Kamp. Gesture in Human-Computer Interaction and Simulation: 6th International Gesture Workshop, GW 2005, Berder Island, France, May 18-20, 2005, Revised Selected Papers. Springer London, Limited, 2006.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Wachsmuth, Ipke, and Stefan Kopp. Gesture in Embodied Communication and Human Computer Interaction: 8th International Gesture Workshop, GW 2009, Bielefeld, Germany, February 25-27, 2009 Revised Selected Papers. Springer, 2010.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Gibet, Sylvie, Nicolas Courty, and Jean-François Kamp, eds. Gesture in Human-Computer Interaction and Simulation: 6th International Gesture Workshop, GW 2005, Berder Island, France, May 18-20, 2005, Revised Selected Papers (Lecture Notes in Computer Science). Springer, 2006.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Gesture – Computer simulation"

1

Haouchine, Nazim, Danail Stoyanov, Frederick Roy, and Stephane Cotin. "DejaVu: Intra-operative Simulation for Surgical Gesture Rehearsal." In Lecture Notes in Computer Science, 523–31. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66185-8_59.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Rieser, Hannes, Kirsten Bergmann, and Stefan Kopp. "How Do Iconic Gestures Convey Visuo-Spatial Information? Bringing Together Empirical, Theoretical, and Simulation Studies." In Gesture and Sign Language in Human-Computer Interaction and Embodied Communication, 139–50. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-34182-3_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Kasemsap, Kijpokin. "The Fundamentals of Human-Computer Interaction." In Advanced Methodologies and Technologies in Artificial Intelligence, Computer Simulation, and Human-Computer Interaction, 524–35. IGI Global, 2019. http://dx.doi.org/10.4018/978-1-5225-7368-5.ch039.

Full text
Abstract:
This chapter explains the overview of human-computer interaction (HCI); cognitive models, socio-organizational issues, and stakeholder requirements; HCI and hand gesture recognition; and the multifaceted applications of HCI. HCI is a sociotechnological discipline whose goal is to bring the power of computers and communication systems to people in ways and forms that are both accessible and useful. HCI plays an important role in identifying the environmental and social issues that can affect the use of systems, and it provides techniques to ensure that the design of a system will be usable, effective, and safe. HCI draws on computer science, computer and communications engineering, graphic design, management, psychology, and sociology as it tries to make computer and communications systems ever more usable for executing tasks. HCI is an important consideration for any business that uses computers in its everyday operation.
APA, Harvard, Vancouver, ISO, and other styles
4

Reed, Stephen K. "Action." In Cognitive Skills You Need for the 21st Century, 15–26. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780197529003.003.0002.

Full text
Abstract:
Actions can be either physical, virtual, or mental and act on either physical, virtual, or mental objects. For instance, Maria Montessori constructed educational materials that enabled students to learn by manipulation. The materials required physical actions on physical objects, such as combining beads to depict operations on numbers. Nintendo’s Wii video game supported physical actions on virtual objects. Gestures are actions that often apply to imaginary objects. Virtual actions involve manipulating computer consoles such as those used in robotic surgery to operate on physical objects. Virtual actions on virtual objects occur in many video games and instructional software. Virtual actions on mental objects occur in computer systems that use audio feedback to help the blind learn to navigate. Mental actions can be captured in brain–computer interfaces to control both physical robots and information on a computer screen. Mental actions on mental objects produce mental simulations. The increasing popularity of augmented reality will require more research on the pairing of physical, virtual, and mental actions and objects.
APA, Harvard, Vancouver, ISO, and other styles
5

Uhl, Claude, Annie Luciani, and Jean-Loup Florens. "Hardware Architecture of a Real-Time Simulator for the CORDIS-ANIMA System: Physical Models, Images, Gestures, Sounds." In Computer Graphics, 421–34. Elsevier, 1995. http://dx.doi.org/10.1016/b978-0-12-227741-2.50034-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Gesture – Computer simulation"

1

Zhang, Guofeng, and Dongming Zhang. "Research on vision-based multi-user gesture recognition Human-Computer Interaction." In 2008 Asia Simulation Conference - 7th International Conference on System Simulation and Scientific Computing (ICSC). IEEE, 2008. http://dx.doi.org/10.1109/asc-icsc.2008.4675604.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ito, Teruaki. "Simple Gesture Distinction for Brief Message Exchange." In 2010 Fourth Asia International Conference on Mathematical/Analytical Modelling and Computer Simulation. IEEE, 2010. http://dx.doi.org/10.1109/ams.2010.17.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Pang, Yee Yong, Nor Azman Ismail, and Phuah Leong Siang Gilbert. "A Real Time Vision-Based Hand Gesture Interaction." In 2010 Fourth Asia International Conference on Mathematical/Analytical Modelling and Computer Simulation. IEEE, 2010. http://dx.doi.org/10.1109/ams.2010.55.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Starodubcev, Il'ya, Rustam Samedov, Igor' Gayniyarov, Il'ya Obabkov, et al. "Animatronic hand using ESP8266." In 29th International Conference on Computer Graphics, Image Processing and Computer Vision, Visualization Systems and the Virtual Environment GraphiCon'2019. Bryansk State Technical University, 2019. http://dx.doi.org/10.30987/graphicon-2019-1-274-278.

Full text
Abstract:
3D-printing technology raises the question of augmentation in the fields of rehabilitation, feedback provision, and interaction with real objects. Rapid manufacturing of prototypes and industrial designs opens new application areas for 3D-printing technology: for example, child-oriented hand prostheses, or animatronic models for communication. Each of these tasks raises the question of controlling a physical hand. This work presents a stand-mounted anthropomorphic hand. The main focus is on the software solution for gesture simulation, for which a special gesture format was developed. A prototype was built as a debug implementation by modifying the open hand model "InMoov". The article presents the original parts of the model, including the circuitry and the 3D stand model. The issue of anthropomorphic limb control is universal, and the problem is most acute in systems requiring accurate interaction. Our model addresses this problem field.
APA, Harvard, Vancouver, ISO, and other styles
5

Al-Behadili, Husam, Arne Grumpe, Lubaba Migdadi, and Christian Wohler. "Semi-supervised Learning Using Incremental Support Vector Machine and Extreme Value Theory in Gesture Data." In 2016 UKSim-AMSS 18th International Conference on Computer Modelling and Simulation (UKSim). IEEE, 2016. http://dx.doi.org/10.1109/uksim.2016.5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Batayneh, Wafa, Ahmad Bataineh, Samer Abandeh, Mohammad Al-Jarrah, Mohammad Banisaeed, and Bara’ah alzo’ubei. "Using EMG Signals to Remotely Control a 3D Industrial Robotic Arm." In ASME 2019 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/imece2019-10234.

Full text
Abstract:
In this paper, a muscle gesture computer interface (MGCI) system for robot navigation control employing a commercial wearable Myo gesture control armband, the motion and gesture control device from Thalmic Labs, is proposed. The software interface is developed using LabVIEW and Visual Studio C++. The hardware interface between the Myo armband and the robotic arm has been implemented using a National Instruments myRIO, which provides the real-time EMG data needed. The system allows the user to control a three-degrees-of-freedom robotic arm remotely through intuitive motion by combining the real-time electromyography (EMG) and inertial measurement unit (IMU) signals. Computer simulations and experiments are developed to evaluate the feasibility of the proposed system. The system allows a person to wear the armband and move his/her hand, and the robotic arm will imitate the motion. The armband picks up the EMG signals of the hand muscles, a time-varying noisy signal, which is then processed and classified in LabVIEW to compute the angles that are fed back to the servo motors moving the robotic arm. A simulation study of the system showed very good results. Tests show that the robotic arm imitates the arm motion at an acceptable rate and with very good accuracy.
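A hedged sketch of the signal path such a system implies (the paper's implementation is in LabVIEW, not Python): compute a moving RMS envelope of a raw EMG channel and map it linearly to a servo angle. The window size and calibration bounds are assumptions, not the paper's values.

```python
import numpy as np

def emg_to_servo_angle(emg, fs=200, window_s=0.15,
                       rest_rms=0.02, max_rms=0.6,
                       angle_range=(0.0, 180.0)):
    """Map a raw EMG channel to servo angles via a moving RMS envelope.

    emg      : 1-D array of raw EMG samples (the Myo streams EMG at 200 Hz)
    rest_rms : envelope at rest (assumed calibration value)
    max_rms  : envelope at maximum contraction (assumed calibration value)
    """
    win = max(1, int(fs * window_s))
    # moving RMS: square root of the moving average of the squared signal
    sq = np.convolve(emg ** 2, np.ones(win) / win, mode="same")
    env = np.sqrt(sq)
    # normalize to [0, 1] between rest and max contraction, then scale
    norm = np.clip((env - rest_rms) / (max_rms - rest_rms), 0.0, 1.0)
    lo, hi = angle_range
    return lo + norm * (hi - lo)

# toy usage: one second of synthetic EMG with a contraction burst in the middle
t = np.arange(200) / 200
emg = 0.05 * np.random.randn(200)
emg[80:120] += 0.5 * np.sin(2 * np.pi * 60 * t[80:120])
print(emg_to_servo_angle(emg)[::40])  # angles rise during the burst
```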
APA, Harvard, Vancouver, ISO, and other styles
7

Chen, Kai, Xiaozhou Zhou, and Chengqi Xue. "Investigating the Effect of Targets’ Spatial Distribution on the Performance of Gesture Interaction in Virtual Reality Environment." In Intelligent Human Systems Integration (IHSI 2022) Integrating People and Intelligent Systems. AHFE International, 2022. http://dx.doi.org/10.54941/ahfe100976.

Full text
Abstract:
In a virtual reality (VR) environment, accurately and effectively selecting target objects is an important part of interactive tasks such as virtual simulation and virtual assembly. Gesture is one of the most crucial interaction technologies in human-computer interaction. This study conducted a multi-factor experiment in a VR environment to explore the effects of different depths and viewing angles on pointing-task performance. Participants pointed at target objects in different viewing-angle areas at different depth levels using an HTC Vive headset and Noitom Hi5 gloves. The results show that the higher the depth level and the closer the target is to the visual center, the higher the pointing accuracy, and that the pointing deviation is smaller on the dominant-hand side than on the non-dominant-hand side. The results offer reference value for the spatial distribution of target objects in gesture-based interactive tasks in VR environments.
APA, Harvard, Vancouver, ISO, and other styles
8

Talbot, Thomas Brett, and Chinmay Chinara. "Open Medical Gesture: An Open-Source Experiment in Naturalistic Physical Interactions for Mixed and Virtual Reality Simulations." In 13th International Conference on Applied Human Factors and Ergonomics (AHFE 2022). AHFE International, 2022. http://dx.doi.org/10.54941/ahfe1002054.

Full text
Abstract:
Mixed (MR) and Virtual Reality (VR) simulations are hampered by requirements for hand controllers or attempts to perseverate in use of two-dimensional computer interface paradigms from the 1980s. From our efforts to produce more naturalistic interactions for combat medic training for the military, we have developed an open-source toolkit that enables direct hand controlled responsive interactions that is sensor independent and can function with depth sensing cameras, webcams or sensory gloves. From this research and review of current literature, we have discerned several best approaches for hand-based human computer interactions which provide intuitive, responsive, useful, and low frustration experiences for VR users. The center of an effective gesture system is a universal hand model that can map to inputs from several different kinds of sensors rather than depending on a specific commercial product. Parts of the hand are effectors in simulation space with a physics-based model. Therefore, translational and rotational forces from the hands will impact physical objects in VR which varies based on the mass of the virtual objects. We incorporate computer code w/ objects, calling them “Smart Objects”, which allows such objects to have movement properties and collision detection for expected manipulation. Examples of smart objects include scissors, a ball, a turning knob, a moving lever, or a human figure with moving limbs. Articulation points contain collision detectors and code to assist in expected hand actions. We include a library of more than 40 Smart Objects in the toolkit. Thus, is it possible to throw a ball, hit that ball with a bat, cut a bandage, turn on a ventilator or to lift and inspect a human arm.We mediate the interaction of the hands with virtual objects. Hands often violate the rules of a virtual world simply by passing through objects. One must interpret user intent. This can be achieved by introducing stickiness of the hands to objects. If the human’s hands overshoot an object, we place the hand onto that object’s surface unless the hand passes the object by a significant distance. We also make hands and fingers contact an object according to the object’s contours and do not allow fingers to sink into the interior of an object. Haptics, or a sense of physical resistance and tactile sensation from contacting physical objects is a supremely difficult technical challenge and is an expensive pursuit. Our approach ignores true haptics, but we have experimented with an alternative approach, called audio tactile synesthesia where we substitute the sensation of touch for that of sound. The idea is to associate parts of each hand with a tone of a specific frequency upon contacting objects. The attack rate of the sound envelope varies with the velocity of contact and hardness of the object being ‘touched’. Such sounds can feel softer or harder depending on the nature of ‘touch’ being experienced. This substitution technique can provide tactile feedback through indirect, yet still naturalistic means. The artificial intelligence (AI) technique to determine discrete hand gestures and motions within the physical space is a special form of AI called Long Short Term Memory (LSTM). LSTM allows much faster and flexible recognition than other machine learning approaches. LSTM is particularly effective with points in motion. Latency of recognition is very low. In addition to LSTM, we employ other synthetic vision & object recognition AI to the discrimination of real-world objects. 
These methods enable new ways to conduct virtual simulations. For example, it is possible to pick up a virtual syringe and inject a medication into a virtual patient through hand motions alone: we track the hand points to contact with the virtual syringe, and we also detect when the hand is compressing the syringe plunger. Virtual medications and instruments can likewise be used on human actors or manikins, not just on virtual objects. With object recognition AI, we can place a syringe on a tray in the physical world; the human user can pick up that syringe and use it on a virtual patient. Thus we are able to blend physical and virtual simulation together seamlessly in a highly intuitive and naturalistic manner.

The techniques and technologies explained here represent a baseline capability whereby interacting in mixed and virtual reality can be much more natural and intuitive than it has ever been. We have passed a threshold where we can do away with game controllers and magnetic trackers for VR, an advancement that should contribute to greater adoption of VR solutions. To foster this, our team has committed to sharing these technologies freely, for all purposes and at no cost, as an open-source tool. We encourage the scientific, research, educational, and medical communities to adopt these resources, determine their effectiveness, and use these tools and practices to grow the body of useful VR applications.
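The audio tactile synesthesia idea from this abstract also lends itself to a short sketch. Below, each hand part is assigned a fixed tone, and the envelope's attack shortens as contact velocity and object hardness increase; the frequency table and attack formula are invented for illustration and are not taken from the toolkit.

```python
# Hedged sketch of "audio tactile synesthesia": substitute sound for touch.
# Faster contact on harder objects -> sharper (shorter) attack, so the tone
# "feels" harder. All constants here are illustrative assumptions.
import numpy as np

SAMPLE_RATE = 44_100
HAND_PART_HZ = {"thumb": 440.0, "index": 494.0, "middle": 523.0,
                "ring": 587.0, "pinky": 659.0, "palm": 330.0}

def contact_tone(part, velocity, hardness, seconds=0.25):
    """Synthesize the tone heard when `part` contacts an object.

    velocity: contact speed in m/s; hardness: 0 (soft) .. 1 (rigid).
    """
    t = np.linspace(0.0, seconds, int(SAMPLE_RATE * seconds), endpoint=False)
    attack = max(0.005, 0.08 / (1.0 + velocity * (1.0 + 4.0 * hardness)))
    envelope = np.minimum(t / attack, 1.0) * np.exp(-3.0 * t)  # rise, then decay
    return envelope * np.sin(2.0 * np.pi * HAND_PART_HZ[part] * t)

samples = contact_tone("index", velocity=1.5, hardness=0.9)  # a crisp "hard" tap
```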
APA, Harvard, Vancouver, ISO, and other styles
9

Kuts, Vladimir, Yevhen Bondarenko, Marietta Gavriljuk, Andriy Paryshev, Sergei Jegorov, Simone Pizzagall, and Tauno Otto. "Digital Twin: Universal User Interface for Online Management of the Manufacturing System." In ASME 2021 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2021. http://dx.doi.org/10.1115/imece2021-69092.

Full text
Abstract:
The Industry 4.0 concept enables connecting a multitude of equipment to computer simulations through IoT and virtual commissioning, but using a conventional interface for each separate piece of equipment to control and maintain Digital Twins is not always an optimal solution. Industrial Digital Twin software toolkits usually consist of simulation or offline programming tools; some can even connect real machines, controllers, and sensors to feed a simulation with actual production data and later analyze it. Moreover, Virtual Reality (VR) and Augmented Reality (AR) are used in various ways for monitoring and design purposes. Many software tools for the simulation and re-programming of robots are already on the market, but only a limited number combine all of these features, and those send data in one direction only, without allowing machines to be re-programmed from the simulation. The related research aims to build a modular framework for designing and deploying Digital Twins of industrial equipment (i.e., robots and manufacturing lines), focusing on online connectivity for monitoring and control. The developed use-case solution enables an operator to control the equipment through VR, AR, Personal Computer (PC), and mobile interfaces from any point globally while receiving real-time feedback and state information from the machinery. Gamified multi-platform interfaces allow for more intuitive interactions with Digital Twins, providing a real-scale model of the real device augmented by spatial UIs, actuated physical elements, and gesture tracking. The introduced solution can control and simulate any aspect of the production line regardless of the brand or type of machine, and can be managed and can self-learn independently by exploiting Machine Learning algorithms. Moreover, the various interfaces (PC, mobile, VR, and AR) give a wide range of options for interacting with the manufacturing shop floor both offline and online. Furthermore, for manufacturing-floor data monitoring, all gathered data is used for statistical analysis, and in a later phase predictive maintenance functions are enabled on top of it. While the research scope is broader, this particular paper introduces a use-case interface on a mobile platform for monitoring and controlling a production unit of three different industrial robots and three different mobile robots, partially supported by data-monitoring sensors. The solution is developed using the Unity3D game engine, the Robot Operating System (ROS), and MQTT for connectivity. The result is a universal, modular, all-in-one Digital Twin software platform for users and operators that enables full control over the manufacturing system.
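The bidirectional connectivity this abstract emphasizes (state flowing out of the machine, commands flowing back in) can be sketched with a minimal MQTT client. The broker address, topic names, and message schema below are assumed for illustration; the paper's actual interfaces are built in Unity3D with ROS, and this Python sketch assumes paho-mqtt 2.x.

```python
# Hedged sketch: a two-way Digital Twin link over MQTT. Subscribes to a
# (hypothetical) robot state topic and publishes a command back, i.e. the
# "re-program machines from the simulation" direction the paper adds.
import json
import paho.mqtt.client as mqtt  # assumes paho-mqtt >= 2.0

BROKER = "factory-broker.local"          # illustrative broker address
STATE_TOPIC = "twin/robot1/state"        # illustrative topic names
COMMAND_TOPIC = "twin/robot1/command"

def on_message(client, userdata, msg):
    state = json.loads(msg.payload)      # assumed JSON state schema
    print(f"joint positions: {state.get('joints')}")

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.on_message = on_message
client.connect(BROKER, 1883)
client.subscribe(STATE_TOPIC)

# Command the machine from the simulation side: publish a target pose.
client.publish(COMMAND_TOPIC, json.dumps({"joints": [0.0, -1.2, 0.8]}))
client.loop_forever()
```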
APA, Harvard, Vancouver, ISO, and other styles
10

Shieh, Meng-Dar, and Chih-Chieh Yang. "Designing Product Forms Using a Virtual Hand and Deformable Models." In ASME 2006 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2006. http://dx.doi.org/10.1115/detc2006-99171.

Full text
Abstract:
This paper presents a computer-aided conceptual design system for developing product forms. The system integrates a virtual hand, manipulated by the designer, with deformable models representing the product forms. Designers can use gestural input and full-hand pointing in the system to discover potential new approaches to product form design. In the field of industrial design, styling and ergonomics are two important factors that determine a successful product design. Traditionally, designers explore possible concepts by sketching their ideas and then using clay or foam mock-ups to test them during the early phases of product design. Our deformable modeling simulation system provides a useful and efficient tool that enables industrial designers to produce product form proposals without unnecessary trial and error. Designers can input pre-scanned 3D raw data or a 3D CAD model as an initial prototype. The input model is then given elastic material properties via the construction of a volume-like mass-spring-damping system. The virtual hand in the system constantly changes gestures as the designer manipulates it with a glove-based input device, and the product form is deformed or shaped according to the amount of force exerted by the virtual hand. A mesh-smoothing feature called "PN triangles" is also used to improve the appearance of the deformed model. Finally, a physical prototype with volume and weight is generated using a rapid prototyping machine, and designers can use these mock-ups to conduct further ergonomic evaluations.
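The mass-spring-damping model at the heart of this abstract can be illustrated with a single explicit integration step. This is a hedged one-spring sketch under assumed constants, not the paper's implementation; a real system couples many such springs over a volumetric mesh, with the virtual hand supplying the external force.

```python
# Hedged sketch: one semi-implicit Euler step for a vertex attached to a
# neighbor by a damped spring, pushed by an external (virtual-hand) force.
import numpy as np

def spring_step(x_i, x_j, v_i, rest_len, k, c, mass, f_ext, dt):
    """Advance vertex i one time step under spring (i, j) and damping."""
    d = x_j - x_i
    length = np.linalg.norm(d)
    direction = d / length
    f_spring = k * (length - rest_len) * direction  # Hooke's law
    f_damp = -c * v_i                               # simple viscous damping
    a = (f_spring + f_damp + f_ext) / mass
    v_new = v_i + dt * a                            # update velocity first,
    return x_i + dt * v_new, v_new                  # then position (semi-implicit)

# A fingertip pressing down on a surface vertex; constants are illustrative.
x, v = spring_step(np.zeros(3), np.array([0.0, 0.0, 0.1]), np.zeros(3),
                   rest_len=0.1, k=500.0, c=2.0, mass=0.01,
                   f_ext=np.array([0.0, 0.0, -1.0]), dt=1e-3)
```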
APA, Harvard, Vancouver, ISO, and other styles
