Dissertations / Theses on the topic 'Human-Robot motion'

Consult the top 50 dissertations / theses for your research on the topic 'Human-Robot motion.'

1

Paulin, Rémi. "Human-robot motion: an attention-based approach." Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM018.

Full text
Abstract:
For autonomous mobile robots designed to share their environment with humans, path safety and efficiency are not the only aspects guiding their motion: they must follow social rules so as not to cause discomfort to surrounding people. Most socially-aware path planners rely heavily on the concept of social spaces; however, social spaces are hard to model and they are of limited use in the context of human-robot interaction where intrusion into social spaces is necessary. In this work, a new approach for socially-aware path planning is presented that performs well in complex environments as well as in the context of human-robot interaction. Specifically, the concept of attention is used to model how the influence of the environment as a whole affects how the robot's motion is perceived by people within close proximity. A new computational model of attention is presented that estimates how our attentional resources are shared amongst the salient elements in our environment. Based on this model, the novel concept of attention field is introduced and a path planner that relies on this field is developed in order to produce socially acceptable paths. To do so, a state-of-the-art many-objective optimization algorithm is successfully applied to the path planning problem. The capacities of the proposed approach are illustrated in several case studies where the robot is assigned different tasks. Firstly, when the task is to navigate in the environment without causing distraction, our approach produces promising results even in complex situations. Secondly, when the task is to attract a person's attention in view of interacting with him or her, the motion planner is able to automatically choose a destination that best conveys its desire to interact whilst keeping the motion safe, efficient and socially acceptable.
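The attentional model and attention field described above can be caricatured numerically. The sketch below is an illustration only, not the thesis's formulation: the softmax sharing of attentional resources and the inverse-distance decay are assumptions, as are the saliency values.

```python
import math

def attention_shares(saliencies):
    """Split a unit budget of attentional resources across the salient
    elements of the environment, in proportion to exp(saliency) (a
    softmax) -- a simplified stand-in for the thesis's attentional model."""
    exps = [math.exp(s) for s in saliencies]
    total = sum(exps)
    return [e / total for e in exps]

def attention_field(point, elements):
    """Evaluate a scalar 'attention field' at a candidate robot position:
    each element's attentional share decays with distance, so positions
    near highly attended elements score higher (a robot placed there
    would be more distracting)."""
    shares = attention_shares([s for _, s in elements])
    value = 0.0
    for ((ex, ey), _), share in zip(elements, shares):
        d = math.hypot(point[0] - ex, point[1] - ey)
        value += share / (1.0 + d)
    return value

# Two salient elements: a person (saliency 2.0) and a screen (saliency 1.0).
elements = [((0.0, 0.0), 2.0), ((4.0, 0.0), 1.0)]
near_person = attention_field((0.5, 0.0), elements)
far_corner = attention_field((10.0, 10.0), elements)
```

A planner in this spirit would then trade a low field value off against path length and comfort terms when scoring candidate paths.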
APA, Harvard, Vancouver, ISO, and other styles
2

Lasota, Przemyslaw A. (Przemyslaw Andrzej). "Robust human motion prediction for safe and efficient human-robot interaction." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122497.

Full text
Abstract:
Thesis: Ph. D. in Autonomous Systems, Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2019
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 175-188).
From robotic co-workers in factories to assistive robots in homes, human-robot interaction (HRI) has the potential to revolutionize a large array of domains by enabling robotic assistance where it was previously not possible. Introducing robots into human-occupied domains, however, requires strong consideration for the safety and efficiency of the interaction. One particularly effective method of supporting safe and efficient human-robot interaction is through the use of human motion prediction. By predicting where a person might reach or walk toward in the upcoming moments, a robot can adjust its motions to proactively resolve motion conflicts and avoid impeding the person's movements. Current approaches to human motion prediction, however, often lack the robustness required for real-world deployment. Many methods are designed for predicting specific types of tasks and motions, and do not necessarily generalize well to other domains.
It is also possible that no single predictor is suitable for predicting motion in a given scenario, and that multiple predictors are needed. Due to these drawbacks, without expert knowledge in the field of human motion prediction, it is difficult to deploy prediction on real robotic systems. Another key limitation of current human motion prediction approaches lies in deficiencies in partial trajectory alignment. Alignment of partially executed motions to a representative trajectory for a motion is a key enabling technology for many goal-based prediction methods. Current approaches of partial trajectory alignment, however, do not provide satisfactory alignments for many real-world trajectories. Specifically, due to reliance on Euclidean distance metrics, overlapping trajectory regions and temporary stops lead to large alignment errors.
In this thesis, I introduce two frameworks designed to improve the robustness of human motion prediction in order to facilitate its use for safe and efficient human-robot interaction. First, I introduce the Multiple-Predictor System (MPS), a data-driven approach that uses given task and motion data in order to synthesize a high-performing predictor by automatically identifying informative prediction features and combining the strengths of complementary prediction methods. With the use of three distinct human motion datasets, I show that using the MPS leads to lower prediction error in a variety of HRI scenarios, and allows for accurate prediction for a range of time horizons. Second, in order to address the drawbacks of prior alignment techniques, I introduce the Bayesian ESTimator for Partial Trajectory Alignment (BEST-PTA).
This Bayesian estimation framework uses a combination of optimization, supervised learning, and unsupervised learning components that are trained and synthesized based on a given set of example trajectories. Through an evaluation on three human motion datasets, I show that BEST-PTA reduces alignment error when compared to state-of-the-art baselines. Furthermore, I demonstrate that this improved alignment reduces human motion prediction error. Lastly, in order to assess the utility of the developed methods for improving safety and efficiency in HRI, I introduce an integrated framework combining prediction with robot planning in time. I describe an implementation and evaluation of this framework on a real physical system. Through this demonstration, I show that the developed approach leads to automatically derived adaptive robot behavior. I show that the developed framework leads to improvements in quantitative metrics of safety and efficiency with the use of a simulated evaluation.
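The partial-trajectory-alignment problem at the core of this work can be made concrete with a minimal Euclidean-distance baseline of the kind the abstract criticizes (and which BEST-PTA improves on): it works on clean motion, but temporary stops and overlapping regions make purely spatial matching ambiguous. This sketch is an assumption-laden illustration, not the thesis's estimator.

```python
import math

def align_partial(partial, reference):
    """Greedy monotone alignment of a partially executed trajectory onto a
    representative reference trajectory using plain Euclidean distance.
    Returns, for each observed point, the index of the matched reference
    point; the last index serves as a task-progress estimate for
    goal-based prediction."""
    idx = 0
    matches = []
    for p in partial:
        # Advance along the reference while doing so brings us closer.
        while idx + 1 < len(reference) and \
                math.dist(p, reference[idx + 1]) < math.dist(p, reference[idx]):
            idx += 1
        matches.append(idx)
    return matches

# A straight-line reference sampled at unit spacing.
reference = [(float(i), 0.0) for i in range(10)]
partial = [(0.0, 0.0), (2.1, 0.0), (4.9, 0.0)]
matches = align_partial(partial, reference)
# Estimated progress along the motion, from the last matched index.
progress = matches[-1] / (len(reference) - 1)
```

On a trajectory that loops back over itself, several reference indices are nearly equidistant from an observed point, which is exactly where a probabilistic treatment like BEST-PTA pays off.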
"Funded by the NASA Space Technology Research Fellowship Program and the National Science Foundation"--Page 6
by Przemyslaw A. Lasota.
Ph. D. in Autonomous Systems
Massachusetts Institute of Technology, Department of Aeronautics and Astronautics
3

Cobb, Susan Valerie Gray. "Perception and orientation issues in human control of robot motion." Thesis, University of Nottingham, 1991. http://eprints.nottingham.ac.uk/11237/.

Full text
Abstract:
The use of remote teach controls for programming industrial robots has led to concern over programmer safety and reliability. The primary issue is the close proximity to the robot arm required for the programmer or maintainer to clearly see the tool actions, and it is feared that errors in robot control could result in injury. The further concern that variations in teach control design could cause "negative transfer" of learning has led to a call for standardisation of robot teach controls. However, at present there is insufficient data to provide suitable design recommendations. This is because previous researchers have measured control performance on very general, and completely different, programming tasks. This work set out to examine the motion control task, from which a framework was developed to represent the robot motion control process. This showed the decisions and actions required to achieve robot movement, together with the factors which may influence them. Two types of influencing factors were identified: robot system factors and human cognitive factors. Robot system factors add complexity to the control task by producing motion reversals which alter the control-robot motion relationship. These motion reversals were identified during the experimental programme which examined observers' perception of robot motion under different conditions of human-robot orientation and robot arm configuration. These determine the orientation of the robot with respect to the observer at any given time. It was found that changes in orientation may influence the observer's perception of robot movement, producing inconsistent descriptions of the same movement viewed under different orientations. Furthermore, due to the strong association between perceived movement and control selection demonstrated in these experiments, no particular differences in error performance using different control designs were observed.
It is concluded that human cognitive factors, specifically the operators' perception of robot movement and their ability to recognise motion reversals, have greater influence on control selection errors than control design per se.
4

Huang, Chien-Ming. "Joint attention in human-robot interaction." Thesis, Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/41196.

Full text
Abstract:
Joint attention, a crucial component in interaction and an important milestone in human development, has drawn a lot of attention from the robotics community recently. Robotics researchers have studied and implemented joint attention for robots for the purposes of achieving natural human-robot interaction and facilitating social learning. Most previous work on the realization of joint attention in the robotics community has focused only on responding to joint attention and/or initiating joint attention. Responding to joint attention is the ability to follow another's direction of gaze and gestures in order to share common experience. Initiating joint attention is the ability to manipulate another's attention to a focus of interest in order to share experience. A third important component of joint attention is ensuring, whereby the initiator ensures that the responder has changed their attention. However, to the best of our knowledge, there is no work explicitly addressing the ability for a robot to ensure that joint attention is reached by interacting agents. We refer to this ability as ensuring joint attention and recognize its importance in human-robot interaction. We propose a computational model of joint attention consisting of three parts: responding to joint attention, initiating joint attention, and ensuring joint attention. This modular decomposition is supported by psychological findings and matches the developmental timeline of humans. Infants start with the skill of following a caregiver's gaze, and then they exhibit imperative and declarative pointing gestures to get a caregiver's attention. Importantly, as they age and their social skills mature, initiating actions often come with an ensuring behavior: looking back and forth between the caregiver and the referred object to see if the caregiver is paying attention to the referential object. We conducted two experiments to investigate joint attention in human-robot interaction.
The first experiment explored the effects of responding to joint attention. We hypothesized that humans would find robots that respond to joint attention more transparent, more competent, and more socially interactive. Transparency helps people understand a robot's intention, facilitating a better human-robot interaction, and positive perception of a robot improves the human-robot relationship. Our hypotheses were supported by quantitative data, questionnaire results, and behavioral observations. The second experiment studied the importance of ensuring joint attention. The results confirmed our hypotheses that robots that ensure joint attention yield better performance in interactive human-robot tasks and that ensuring joint attention behaviors are perceived as natural behaviors by humans. The findings suggest that social robots should use ensuring joint attention behaviors.
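The ensuring component of the three-part model can be sketched as a simple check-and-re-cue loop. Both callbacks below are hypothetical stand-ins for perception and behavior modules; the retry limit is an illustrative choice, not part of the proposed model.

```python
def ensure_joint_attention(initiate_cue, partner_attends, max_checks=3):
    """Sketch of ensuring joint attention: after initiating (a gaze shift
    or pointing gesture), the robot alternates between checking whether
    the partner now attends to the target and re-cueing on failure --
    mirroring the infant's look back and forth between caregiver and
    referred object."""
    initiate_cue()
    for _ in range(max_checks):
        if partner_attends():
            return True          # joint attention ensured
        initiate_cue()           # look back at the partner and re-cue
    return False

# Simulated partner who only notices the second cue.
cues = []
responses = iter([False, True, True])
result = ensure_joint_attention(lambda: cues.append("cue"),
                                lambda: next(responses))
```

Here the robot cues twice before the partner attends, after which the loop reports that joint attention was reached.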
5

Narsipura, Sreenivasa Manish. "Modeling of human movement for the generation of humanoid robot motion." Thesis, Toulouse, INPT, 2010. http://www.theses.fr/2010INPT0120/document.

Full text
Abstract:
Humanoid robotics is coming of age with faster and more agile robots. To complement the physical complexity of humanoid robots, the robotics algorithms being developed to derive their motion have also become progressively complex. The work in this thesis spans two research fields, human neuroscience and humanoid robotics, and brings some ideas from the former to aid the latter. By exploring the anthropological link between the structure of a human and that of a humanoid robot, we aim to guide conventional robotics methods like local optimization and task-based inverse kinematics towards more realistic human-like solutions. First, we look at dynamic manipulation of human hand trajectories while playing with a yoyo. By recording human yoyo playing, we identify the control scheme used as well as a detailed dynamic model of the hand-yoyo system. Using optimization, this model is then used to implement stable yoyo-playing within the kinematic and dynamic limits of the humanoid HRP-2. The thesis then extends its focus to human and humanoid locomotion. We take inspiration from human neuroscience research on the role of the head in human walking and implement a humanoid robotics analogy to this. By allowing a user to steer the head of a humanoid, we develop a control method to generate deliberative whole-body humanoid motion, including stepping, purely as a consequence of the head movement. This idea of understanding locomotion as a consequence of reaching a goal is extended in the final study, where we look at human motion in more detail. Here, we aim to draw a link between “invariants” in neuroscience and “kinematic tasks” in humanoid robotics. We record and extract stereotypical characteristics of human movements during a walking and grasping task. These results are then normalized and generalized such that they can be regenerated for other anthropomorphic figures with kinematic limits different from those of humans.
The final experiments show a generalized stack of tasks that can generate realistic walking and grasping motion for the humanoid HRP-2. The general contribution of this thesis is in showing that while motion planning for humanoid robots can be tackled by classical methods of robotics, the production of realistic movements necessitates the combination of these methods with the systematic and formal observation of human behavior.
6

Umali, Antonio. "Framework For Robot-Assisted Doffing of Personal Protective Equipment." Digital WPI, 2016. https://digitalcommons.wpi.edu/etd-theses/940.

Full text
Abstract:
"When treating highly-infectious diseases such as Ebola, health workers are at high risk of infection during the doffing of Personal Protective Equipment (PPE). This is due to factors such as fatigue, hastiness, and inconsistency in training. The introduction of a semi-autonomous robot doffing assistant has the potential to increase the safety of the doffing procedure by assisting the human during high-risk sub-tasks. The addition of a robot into the procedure introduces the need to transform a purely human task into a sequence of safe and effective human-robot collaborative actions. We take advantage of the fact that the human can do the more intricate motions during the procedure. Since diseases like Ebola can spread through the mucous membranes of the eyes, ears, nose, and mouth our goal is to keep the human’s hands away from his or her face as much as possible. Thus our framework focuses on using the robot to help avoid such human risky motion. As secondary goals, we seek to also minimize the human’s effort and make the robot’s motion intuitive for the human. To address different versions and variants of PPE, we propose a way of segmenting the doffing procedure into a sequence of human and robot actions such that the robot only assists when necessary. Our framework then synthesizes assistive motions for the robot that perform parts of the tasks according to the metrics above. Our experiments on five doffing tasks suggest that the introduction of a robot assistant improves the safety of the procedure in three out of four of the high-risk doffing tasks while reducing effort in all five tasks."
7

Pai, Abhishek. "Distance-Scaled Human-Robot Interaction with Hybrid Cameras." University of Cincinnati / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1563872095430977.

Full text
8

Conte, Dean Edward. "Autonomous Robotic Escort Incorporating Motion Prediction with Human Intention." Thesis, Virginia Tech, 2021. http://hdl.handle.net/10919/102581.

Full text
Abstract:
This thesis presents a framework for a mobile robot to escort a human to their destination successfully and efficiently. The proposed technique uses accurate path prediction incorporating human intention to locate the robot in front of the human while walking. Human intention is inferred from the head pose, an effective, past-proven implicit indicator of intention, and fused with conventional physics-based motion prediction. The human trajectory is estimated and predicted using a particle filter because of the human's nonlinear and non-Gaussian behavior, and the robot control action is determined from the predicted human pose, allowing for anticipative autonomous escorting. Experimental analysis shows that the incorporation of the proposed human intention model reduces human position prediction error by approximately 35% when turning. Furthermore, experimental validation with an omnidirectional mobile robotic platform shows escorting up to 50% more accurate than conventional techniques, while achieving a 97% success rate.
Master of Science
This thesis presents a method for a mobile robot to escort a human to their destination successfully and efficiently. The proposed technique uses human intention to predict the walking path, allowing the robot to stay in front of the human while walking. Human intention is inferred from the head direction, an effective, past-proven indicator of intention, and is combined with conventional motion prediction. The robot motion is then determined from the predicted human position, allowing for anticipative autonomous escorting. Experimental analysis shows that the incorporation of the proposed human intention model reduces human position prediction error by approximately 35% when turning. Furthermore, experimental validation with a mobile robotic platform shows escorting up to 50% more accurate than conventional techniques, while achieving a 97% success rate. The unique escorting interaction method proposed has applications such as touch-less shopping cart robots, exercise companions, collaborative rescue robots, and sanitary transportation for hospitals.
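The fusion of a physics-based motion model with the head-pose intention cue inside a particle filter can be sketched as below. This is an illustrative toy, not the thesis's filter: the blending weight `alpha`, the constant speed, and the noise scale are all assumptions.

```python
import math
import random

def predict_step(particles, head_yaw, speed=1.0, dt=0.1, alpha=0.5, noise=0.05):
    """One prediction step of a particle filter over human pose: each
    particle's heading is nudged toward the observed head yaw (the
    intention cue) before being propagated with a constant-speed motion
    model, so the predicted path bends where the head is looking."""
    out = []
    for x, y, yaw in particles:
        yaw = (1 - alpha) * yaw + alpha * head_yaw + random.gauss(0.0, noise)
        x += speed * math.cos(yaw) * dt
        y += speed * math.sin(yaw) * dt
        out.append((x, y, yaw))
    return out

def mean_position(particles):
    """Point estimate of the predicted human position."""
    n = len(particles)
    return (sum(p[0] for p in particles) / n,
            sum(p[1] for p in particles) / n)

random.seed(0)
# Person at the origin walking along +x; their head has turned left (+y).
particles = [(0.0, 0.0, 0.0)] * 200
for _ in range(10):
    particles = predict_step(particles, head_yaw=math.pi / 2)
x_mean, y_mean = mean_position(particles)
```

Because the head turns before the body does, the predicted mean drifts toward +y well before a purely physics-based model would, which is what lets the escort robot anticipate the turn.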
9

Nitz, Pettersson Hannes, and Samuel Vikström. "VISION-BASED ROBOT CONTROLLER FOR HUMAN-ROBOT INTERACTION USING PREDICTIVE ALGORITHMS." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-54609.

Full text
Abstract:
The demand for robots to work in environments together with humans is growing. This calls for new requirements on robot systems, such as the need to be perceived as responsive and accurate in human interactions. This thesis explores the possibility of using AI methods to predict the movement of a human and evaluates whether that information can assist a robot with human interactions. The AI methods used are a Long Short-Term Memory (LSTM) network and an artificial neural network (ANN). Both networks were trained on data from a motion capture dataset and on four different prediction times: 1/2, 1/4, 1/8 and 1/16 of a second. The evaluation was performed directly on the dataset to determine the prediction error. The neural networks were also evaluated on a robotic arm in a simulated environment, to show whether the prediction methods would be suitable for a real-life system. Both methods show promising results when comparing the prediction error. From the simulated system, it could be concluded that with the LSTM prediction the robotic arm would generally precede the actual position. The results indicate that the methods described in this thesis could be used as a stepping stone for a human-robot interactive system.
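A useful reference point for such learned predictors is the constant-velocity baseline they are typically compared against. The sketch below evaluates it at the four horizons from the abstract on a hypothetical 10 Hz track; it stands in for the LSTM/ANN (which would be trained on motion-capture data) and is an illustration, not code from the thesis.

```python
def constant_velocity_predict(track, horizon_s, dt):
    """Extrapolate the last observed velocity over the prediction horizon.
    `track` is a list of (x, y) samples spaced `dt` seconds apart."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    steps = horizon_s / dt
    return (x1 + (x1 - x0) * steps, y1 + (y1 - y0) * steps)

# Straight-line motion sampled at 10 Hz; a linear track makes this
# baseline exact, so the errors below should be ~zero.
dt = 0.1
track = [(0.1 * i, 0.0) for i in range(20)]
horizons = [1 / 2, 1 / 4, 1 / 8, 1 / 16]   # the thesis's prediction times
errors = []
for h in horizons:
    observed = track[:10]                   # what the predictor has seen
    px, py = constant_velocity_predict(observed, h, dt)
    # Ground truth for linear motion at time (9 * dt + h):
    tx, ty = 0.1 * 9 + 0.1 * (h / dt), 0.0
    errors.append(abs(px - tx) + abs(py - ty))
```

The learned networks earn their keep on curved, accelerating human motion, where this baseline's error grows quickly with the horizon.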
10

Hayne, Rafi. "Toward Enabling Safe & Efficient Human-Robot Manipulation in Shared Workspaces." Digital WPI, 2016. https://digitalcommons.wpi.edu/etd-theses/1012.

Full text
Abstract:
"When humans interact, there are many avenues of physical communication available, ranging from vocal to physical gestures. In our past observations, when humans collaborate on manipulation tasks in shared workspaces there is often minimal to no verbal or physical communication, yet the collaboration is still fluid, with minimal interference between partners. However, when humans perform similar tasks in the presence of a robot collaborator, manipulation can be clumsy, disconnected, or simply not human-like. The focus of this work is to leverage our observations of human-human interaction in a robot's motion planner in order to facilitate more safe, efficient, and human-like collaborative manipulation in shared workspaces. We first present an approach to formulating the cost function for a motion planner intended for human-robot collaboration such that robot motions are both safe and efficient. To achieve this, we propose two factors to consider in the cost function for the robot's motion planner: (1) avoidance of the workspace previously occupied by the human, so robot motion is as safe as possible, and (2) consistency of the robot's motion, so that the motion is as predictable as possible for the human and they can perform their task without focusing undue attention on the robot. Our experiments in simulation and a human-robot workspace-sharing study compare a cost function that uses only the first factor, and a combined cost that uses both factors, against a baseline method that is perfectly consistent but does not account for the human's previous motion. We find that using either cost function we outperform the baseline method in terms of task success rate without degrading the task completion time. The best task success rate is achieved with the cost function that includes both the avoidance and consistency terms. Next, we present an approach to human-attention-aware robot motion generation which attempts to convey the intent of the robot's task to its collaborator.
We capture human attention through the combined use of a wearable eye-tracker and motion capture system. Since human attention isn't static, we present a method of generating a motion policy that can be queried online. Finally, we show preliminary tests of this method."
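The two-term cost described in the abstract can be sketched as follows. The inverse-distance avoidance penalty, the pointwise consistency term, and the weights are all assumptions for illustration, not the thesis's exact formulation.

```python
import math

def trajectory_cost(traj, human_points, previous_traj,
                    w_avoid=1.0, w_consist=0.5):
    """Score a candidate robot trajectory: penalize waypoints near
    workspace regions the human previously occupied (safety), and
    penalize deviation from the robot's own previous trajectory
    (consistency, i.e. predictability for the human)."""
    avoid = sum(1.0 / (1.0 + math.dist(p, h))
                for p in traj for h in human_points)
    consist = sum(math.dist(p, q) for p, q in zip(traj, previous_traj))
    return w_avoid * avoid + w_consist * consist

human_points = [(1.0, 0.0)]                      # where the human just worked
previous = [(0.0, 1.0), (1.0, 1.0), (2.0, 1.0)]  # robot's last path
repeat_old_path = trajectory_cost(previous, human_points, previous)
cut_through = trajectory_cost([(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)],
                              human_points, previous)
```

Under this cost, repeating the familiar path that skirts the human's region scores better than cutting through it, which is the qualitative behavior the study's combined cost is designed to produce.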
11

Manasrah, Ahmad Adli. "Human Motion Tracking for Assisting Balance Training and Control of a Humanoid Robot." Scholar Commons, 2012. http://scholarcommons.usf.edu/etd/4141.

Full text
Abstract:
This project illustrates the use of the human's ability to balance according to their center of gravity, as demonstrated in two applications. The center of gravity of a human is explained in detail in order to use it in controlling the Aldebaran NAO robot and in robot-assisted balance training. The first application explains how a humanoid robot can mimic a human's movements via a three-dimensional depth sensor, where the sensor analyzes the position of a user's limbs, and how the robot can lift one foot and balance on the other by redistributing its body mass when the user lifts his foot. The results showed that this algorithm enabled NAO to successfully mimic the users' arms, and that it was able to balance on one foot by repositioning its center of mass. The second application investigates how individuals with stroke lean when undergoing robot-assisted balance training. In some instances, they can develop inappropriate leaning behaviors during the training. The Kinect sensor is used to assist in optimizing patients' results by integrating it with the training program. The results showed that the Kinect sensor can improve the efficiency of the process by giving users graphical information about their mass distribution and whether they are leaning correctly or not.
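Both applications hinge on the same quantity: the whole-body center of mass as a mass-weighted average of body-segment positions, checked against the support region. The sketch below uses made-up segment masses and a 1-D support check; it is an illustration of the principle, not the project's NAO controller or Kinect pipeline.

```python
def center_of_mass(segments):
    """Whole-body center of mass from (position, mass) pairs, where each
    position is the (x, y) center of a body segment."""
    total = sum(m for _, m in segments)
    x = sum(p[0] * m for p, m in segments) / total
    y = sum(p[1] * m for p, m in segments) / total
    return (x, y)

def over_support(com, foot_x_min, foot_x_max):
    """Static balance check: the horizontal center of mass must project
    inside the support foot's footprint."""
    return foot_x_min <= com[0] <= foot_x_max

# Illustrative segment masses (kg) for a one-foot stance, with the arms
# shifted slightly to counterbalance the lifted leg.
segments = [((0.0, 1.0), 30.0),   # trunk
            ((0.1, 1.5), 5.0),    # head
            ((-0.2, 1.1), 4.0),   # arms, shifted to counterbalance
            ((0.0, 0.4), 10.0)]   # support leg
com = center_of_mass(segments)
```

A controller in this spirit repositions segments until `over_support` holds; the lean-feedback application instead displays how far the measured center of mass strays from the intended support region.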
12

Ososky, Scott. "Influence of Task-Role Mental Models on Human Interpretation of Robot Motion Behavior." Doctoral diss., University of Central Florida, 2013. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/6331.

Full text
Abstract:
The transition in robotics from tools to teammates has begun. However, the benefit autonomous robots provide will be diminished if human teammates misinterpret robot behaviors. Applying mental model theory as the organizing framework for human understanding of robots, the current empirical study examined the influence of task-role mental models of robots on the interpretation of robot motion behaviors, and the resulting impact on subjective ratings of robots. Observers (N = 120) were exposed to robot behaviors that were either congruent or incongruent with their task-role mental model, by experimental manipulation of preparatory robot task-role information to influence mental models (i.e., security guard, groundskeeper, or no information), the robot's actual task-role behaviors (i.e., security guard or groundskeeper), and the order in which these robot behaviors were presented. The results of the research supported the hypothesis that observers with congruent mental models were significantly more accurate in interpreting the motion behaviors of the robot than observers without a specific mental model. Additionally, an incongruent mental model, under certain circumstances, significantly hindered an observer's interpretation accuracy, resulting in subjective confidence in inaccurate interpretations. The strength of the effects that mental models had on the interpretation and assessment of robot behaviors was thought to have been moderated by the ease with which a particular mental model could reasonably explain the robot's behavior, termed mental model applicability. Finally, positive associations were found between differences in observers' interpretation accuracy and differences in subjective ratings of robot intelligence, safety, and trustworthiness. The current research offers implications for the relationships between mental model components, as well as implications for designing robot behaviors to appear more transparent, or opaque, to humans.
Ph.D.
Doctorate
Graduate Studies
Sciences
Modeling & Simulation
13

Busch, Baptiste. "Optimization techniques for an ergonomic human-robot interaction." Thesis, Bordeaux, 2018. http://www.theses.fr/2018BORD0027/document.

Full text
Abstract:
Human-robot interaction is a rapidly expanding research field within the robotics community. By its nature it brings together researchers from varied domains, such as psychology, sociology and, of course, robotics. Together, they define and design the robots we will interact with in our daily lives. As humans and robots begin to work in shared environments, the diversity of tasks they can accomplish increases drastically. This creates numerous challenges and questions that must be addressed in terms of safety and acceptance of robotic systems. Human beings have very specific needs and expectations that cannot be ignored when designing robotic interactions. In a sense, there is a strong need for the emergence of a truly ergonomic human-robot interaction. In this thesis, we developed methods to include ergonomic and human criteria in decision-making algorithms, in order to automate the process of generating an ergonomic interaction. The solutions we propose rely on cost functions that encapsulate human needs and make it possible to optimize the robot's motions and its choice of actions. We then applied this method to two common problems of human-robot interaction. First, we proposed a technique to improve the legibility of the robot's motions in order to reach a better understanding of its intentions. Our approach does not require modeling the concept of motion legibility, but penalizes trajectories that lead to an erroneous or late interpretation of the robot's intentions during the accomplishment of a shared task. In several user studies we observed a substantial gain in prediction time and a reduction of interpretation errors. We then addressed the problem of choosing the actions and motions that maximize the physical ergonomics of the human partner. Using an ergonomic measure of human postures, we simulate the actions and motions of the robot and the human to accomplish a given task, while avoiding situations where the human would be in a hazardous working posture. The user studies conducted show that our method leads to safer working postures and to an interaction perceived as better.
Human-Robot Interaction (HRI) is a growing field in the robotic community. By its very nature it brings together researchers from various domains including psychology, sociology and, obviously, robotics, who are shaping and designing the robots people will interact with on a daily basis. As humans and robots start working in a shared environment, the diversity of tasks they can accomplish together is rapidly increasing. This creates challenges and raises concerns to be addressed in terms of safety and acceptance of the robotic systems. Human beings have specific needs and expectations that have to be taken into account when designing robotic interactions. In a sense, there is a strong need for a truly ergonomic human-robot interaction. In this thesis, we propose methods to include ergonomics and human factors in the motion and decision planning algorithms, to automate this process of generating an ergonomic interaction. The solutions we propose make use of cost functions that encapsulate the human needs and enable the optimization of the robot's motions and choices of actions. We have applied our method to two common problems of human-robot interaction. First, we propose a method to increase the legibility of the robot motions to achieve a better understanding of its intentions. Our approach does not require modeling the concept of legible motions, but penalizes the trajectories that lead to late predictions or mispredictions of the robot's intentions during a live execution of a shared task. In several user studies we achieve substantial gains in terms of prediction time and reduced interpretation errors. Second, we tackle the problem of choosing actions and planning motions that maximize the physical ergonomics on the human side. Using a well-accepted ergonomic evaluation function of human postures, we simulate the actions and motions of both the human and the robot to accomplish a specific task, while avoiding situations where the human could be at risk in terms of working posture. The conducted user studies show that our method leads to safer working postures and a better perceived interaction.
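The idea of penalizing trajectories that lead to late or erroneous intent predictions can be illustrated with a toy observer model. The sketch below is purely illustrative and is not the thesis's actual formulation: the function names and the cosine-similarity observer are assumptions, and the cost simply counts the fraction of timesteps at which a naive observer would guess the wrong goal.

```python
import math

def predicted_goal(pos, prev_pos, goals):
    """Naive observer model: predict the goal whose direction from the
    current position best matches the instantaneous motion direction."""
    dx, dy = pos[0] - prev_pos[0], pos[1] - prev_pos[1]
    norm = math.hypot(dx, dy) or 1.0
    best, best_score = None, -2.0
    for g in goals:
        gx, gy = g[0] - pos[0], g[1] - pos[1]
        gnorm = math.hypot(gx, gy) or 1.0
        score = (dx * gx + dy * gy) / (norm * gnorm)  # cosine similarity
        if score > best_score:
            best, best_score = g, score
    return best

def misprediction_cost(trajectory, goals, true_goal):
    """Fraction of timesteps at which the simulated observer predicts the
    wrong goal; lower means the motion reveals its intent earlier."""
    wrong = 0
    for prev, cur in zip(trajectory, trajectory[1:]):
        if predicted_goal(cur, prev, goals) != true_goal:
            wrong += 1
    return wrong / max(len(trajectory) - 1, 1)

# A straight-line path toward (1, 0) is unambiguous from the first step:
path = [(0.1 * i, 0.0) for i in range(11)]
print(misprediction_cost(path, [(1.0, 0.0), (1.0, 1.0)], (1.0, 0.0)))  # 0.0
```

A trajectory optimizer in this spirit would minimize such a cost alongside path length, rather than requiring an explicit model of legibility.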
APA, Harvard, Vancouver, ISO, and other styles
14

Mielke, Erich Allen. "Force and Motion Based Methods for Planar Human-Robot Co-manipulation of Extended Objects." BYU ScholarsArchive, 2018. https://scholarsarchive.byu.edu/etd/6767.

Full text
Abstract:
As robots become more common operating in close proximity to people, new opportunities arise for physical human-robot interaction, such as co-manipulation of extended objects. Co-manipulation involves physical interaction between two partners where an object held by both is manipulated in tandem. There is a dearth of viable high degree-of-freedom co-manipulation controllers, especially for extended objects, as well as a lack of information about how human-human teams perform in high degree-of-freedom tasks. One method for creating co-manipulation controllers is to pattern them off of human data. This thesis uses this technique by exploring a previously completed experimental study. The study involved human-human dyads in leader-follower format performing co-manipulation tasks with an extended object in 6 degrees of freedom. Two important tasks performed in this experiment were lateral translation and planar rotation tasks. This thesis focuses on these two tasks because they represent planar motion. Most previous control methods are for 1 or 2 degrees of freedom. The study provided information about how human-human dyads perform planar tasks. Most notably, planar tasks generally adhere to minimum-jerk trajectories and do not minimize interaction forces between users. The study also helped solve the translation-versus-rotation problem: from the experimental data, torque patterns were discovered at the beginning of the trial that defined intent to translate or rotate. From these patterns, a new method of planar co-manipulation control was developed, called Extended Variable Impedance Control. This is a novel 3-degree-of-freedom method that is applicable to a variety of planar co-manipulation scenarios. Additionally, the data was fed through a recurrent neural network that takes in a series of motion data and predicts the next step in the series.
The predicted data was used as an intent estimate in another novel 3-degree-of-freedom method called Neural Network Prediction Control. This method is capable of generalizing to 6 degrees of freedom, but is limited in this thesis for comparison with the other method. An experiment involving 16 participants was developed to test the capabilities of both controllers for planar tasks. A dual-manipulator robot with an omnidirectional base was used in the experiment. The results from the study show that both the Neural Network Prediction Control and Extended Variable Impedance Control controllers performed comparably to blindfolded human-human dyads. A survey showed that participants preferred the Extended Variable Impedance Control. These two unique controllers are the major results of this work.
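The variable-impedance idea can be sketched in one dimension: an admittance law whose damping changes with the sensed human force, so the robot yields when pushed hard and holds steady otherwise. This is a hedged toy illustration, not the thesis's Extended Variable Impedance Controller; the function name, parameter values and switching rule are assumptions.

```python
def variable_impedance_step(v, f_h, dt, m=10.0, b_low=5.0, b_high=40.0, f_thresh=8.0):
    """One Euler step of a 1-DOF admittance law  m*dv/dt + b*v = f_h.
    Damping b is lowered when the human applies a large force (the robot
    yields) and raised for small forces (the shared object stays steady)."""
    b = b_low if abs(f_h) > f_thresh else b_high
    a = (f_h - b * v) / m
    return v + a * dt

# A sustained 12 N push gradually accelerates the shared object
# toward the low-damping steady-state velocity f_h / b_low = 2.4 m/s:
v = 0.0
for _ in range(200):  # 2 s at dt = 0.01 s
    v = variable_impedance_step(v, f_h=12.0, dt=0.01)
print(round(v, 3))
```

A planar 3-DOF version would apply the same law per axis (x, y, yaw), with the intent-dependent switching replacing the simple force threshold used here.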
APA, Harvard, Vancouver, ISO, and other styles
15

Yussof, Hanafiah, Mitsuhiro Yamano, Yasuo Nasu, and Masahiro Ohka. "Design of a 21-DOF Humanoid Robot to Attain Flexibility in Human-Like Motion." IEEE, 2006. http://hdl.handle.net/2237/9506.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Rivera, Francisco. "Using Motion Capture and Virtual Reality to test the advantages of Human Robot Collaboration." Thesis, Högskolan i Skövde, Institutionen för ingenjörsvetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-17205.

Full text
Abstract:
Nowadays Virtual Reality (VR) and Human Robot Collaboration (HRC) are becoming more and more important in industry as well as in science. This investigation studies the applications of these two technologies in the field of ergonomics by developing a system able to visualise and present ergonomics evaluation results during real-time assembly tasks in a VR environment, and by evaluating the advantages of Human Robot Collaboration through a VR study of a specific operation carried out at Volvo Global Trucks Operations' factory in Skövde. In the first part of this investigation, an innovative system was developed that shows ergonomic feedback in real time and performs ergonomic evaluations of the whole workload inside a VR environment. This system can be useful for future research in the virtual ergonomics field on matters such as the ergonomic learning rate of workers performing assembly tasks, the design of ergonomic workstations, the effect of different types of assembly instructions in VR, and a wide variety of other applications. The assembly operation, with and without the robot, was created in IPS, whose VR functionality makes it possible to test the assembly task with real users making natural body movements. The posture data of the users performing the tasks in virtual reality was collected: the users performed the task first without and then with the collaborative robot. Their posture data was collected using a motion capture system called Smart Textiles (developed at the University of Skövde), and the two ergonomic evaluations (using Smart Textiles' criteria) of the two tasks were compared. The results show that when the robot is introduced in this specific assembly task, the posture of the workers (especially the posture of the arms) improves greatly compared to the same task without the robot.
APA, Harvard, Vancouver, ISO, and other styles
17

Smith, Christian. "Input Estimation for Teleoperation : Using Minimum Jerk Human Motion Models to Improve Telerobotic Performance." Doctoral thesis, KTH, Datorseende och robotik, CVAP, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-11590.

Full text
Abstract:
This thesis treats the subject of applying human motion models to create estimators for the input signals of human operators controlling a telerobotic system. In telerobotic systems, the control signal input by the operator is often treated as a known quantity. However, there are instances where this is not the case. For example, a well-studied problem is teleoperation under time delay, where the robot at the remote site does not have access to current operator input due to time delays in the communication channel. Another is where the hardware sensors in the input device have low accuracy. Both these cases are studied in this thesis. A solution to these types of problems is to apply an estimator to the input signal. There exist several models that describe human hand motion, and these can be used to create a model-based estimator. In the present work, we propose the use of the minimum jerk (MJ) model. This choice of model is based mainly on the simplicity of the MJ model, which can be described as a fifth-degree polynomial in the Cartesian space of the position of the subject's hand. Estimators incorporating the MJ model are implemented and inserted into control systems for a teleoperated robot arm. We perform experiments showing that these estimators can be used as predictors that increase task performance in the presence of time delays. We also show how similar estimators can be used to implement direct position control using a handheld device equipped only with accelerometers.
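The minimum jerk model mentioned above has a well-known closed form (Flash-Hogan): for a point-to-point reach of duration T, the position along each Cartesian axis is a fifth-degree polynomial with zero velocity and acceleration at both endpoints. A minimal sketch:

```python
def minimum_jerk(x0, xf, T, t):
    """Minimum-jerk position profile: x(t) = x0 + (xf - x0) * (10*tau^3
    - 15*tau^4 + 6*tau^5), tau = t/T, with zero boundary velocity and
    acceleration. Applied independently to each Cartesian axis."""
    tau = min(max(t / T, 0.0), 1.0)  # clamp outside the movement window
    return x0 + (xf - x0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)

# Midpoint of a 2 s reach from 0 m to 1 m:
print(minimum_jerk(0.0, 1.0, 2.0, 1.0))  # 0.5
```

Given a few observed samples of an ongoing reach, fitting the free parameters (x0, xf, T) of this polynomial yields a predictor for where the hand will be after the communication delay.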
QC 20100810
APA, Harvard, Vancouver, ISO, and other styles
18

Vassallo, Christian. "Using human-inspired models for guiding robot locomotion." Thesis, Toulouse 3, 2016. http://www.theses.fr/2016TOU30177/document.

Full text
Abstract:
This thesis was carried out within the European project Koroibot, whose objective is the development of advanced walking algorithms for humanoid robots. In order to steer robots safely and efficiently among humans, it is necessary to understand the rules, principles and strategies of human locomotion and to transfer them to robots. The goal of this thesis is to study and identify human locomotion strategies and to create algorithms that could be used to improve robot capabilities. The main contribution is the analysis of the pedestrian principles that guide collision avoidance strategies. In particular, we observe how humans adapt a goal-directed locomotion task when they have to interfere with a moving obstacle crossing their path. We show the differences between the strategy humans use to avoid a non-collaborative obstacle and the strategy used to avoid another human, and how humans interact with an object moving in a human-like manner. Second, we present work done in collaboration with computational neuroscientists. We propose a new approach to synthesize realistic, complex humanoid robot movements from motion primitives. Human walking-to-grasp trajectories were recorded. The whole-body movements were retargeted and scaled to match the kinematics of humanoid robots. From this database of movements, we extract the motion primitives. We show that these source signals can be expressed as stable solutions of an autonomous dynamical system, which can be regarded as a system of central pattern generators (CPGs). Based on this approach, reactive walking-to-grasp strategies were developed and successfully tested on the humanoid robot HRP-2 at LAAS-CNRS. In the third part of the thesis, we present a new approach to the problem of steering a robot subject to non-holonomic constraints through a door using visual servoing. The door is represented by two landmarks located on its vertical supports. The planar geometry built around the door consists of bundles of hyperbolae, ellipses and orthogonal circles. We show that this geometry can be measured directly in the camera image plane and that the presented vision-based strategy can also be related to human behavior. Realistic simulations and experiments are presented to show the effectiveness of our solutions.
This thesis has been done within the framework of the European project Koroibot, which aims at developing advanced algorithms to improve humanoid robot locomotion. It is organized in three parts. With the aim of steering robots in a safe and efficient manner among humans, it is required to understand the rules, principles and strategies of humans during locomotion and to transfer them to robots. The goal of this thesis is to investigate and identify human locomotion strategies and create algorithms that could be used to improve robot capabilities. A first contribution is the analysis of the pedestrian principles which guide collision avoidance strategies. In particular, we observe how humans adapt a goal-directed locomotion task when they have to interfere with a moving obstacle crossing their way. We show differences both in the strategy set by humans to avoid a non-collaborative obstacle with respect to avoiding another human, and in the way humans interact with an object moving in a human-like way. Secondly, we present work done in collaboration with computational neuroscientists. We propose a new approach to synthesize realistic, complex humanoid robot movements with motion primitives. Human walking-to-grasp trajectories have been recorded. The whole-body movements are retargeted and scaled in order to match the humanoid robot kinematics. Based on this database of movements, we extract the motion primitives. We prove that these source signals can be expressed as stable solutions of an autonomous dynamical system, which can be regarded as a system of coupled central pattern generators (CPGs). Based on this approach, reactive walking-to-grasp strategies have been developed and successfully experimented on the humanoid robot HRP-2 at LAAS-CNRS. In the third part of the thesis, we present a new approach to the problem of vision-based steering of a robot subject to non-holonomic constraints that must pass through a door.
The door is represented by two landmarks located on its vertical supports. The planar geometry that has been built around the door consists of bundles of hyperbolae, ellipses and orthogonal circles. We prove that this geometry can be directly measured in the camera image plane and that the proposed vision-based control strategy can also be related to human behavior. Realistic simulations and experiments are reported to show the effectiveness of our solutions.
APA, Harvard, Vancouver, ISO, and other styles
19

Boberg, Arvid. "Virtual lead-through robot programming : Programming virtual robot by demonstration." Thesis, Högskolan i Skövde, Institutionen för ingenjörsvetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-11403.

Full text
Abstract:
This report describes the development of an application which allows a user to program a robot in a virtual environment by the use of hand motions and gestures. The application is inspired by robot lead-through programming, an easy and hands-on approach for programming robots; but instead of performing it online, which causes a loss in productivity, it also draws on the strength of offline programming, where the user operates in a virtual environment. This method thus saves costs and leaves the physical production environment undisturbed. To convey hand gesture information into the application, which will be implemented for RobotStudio, a Kinect sensor is used for entering the data into the virtual environment. Similar work has been performed before in which a physical robot's movement is manipulated by hand movements, but much less so for virtual robots. The results could simplify the process of programming robots and support the work towards Human-Robot Collaboration, a major focus of this work, as it allows people to interact and communicate with robots. The application was developed in the programming language C# and has two different functions that interact with each other: one for the Kinect and its tracking, and the other for installing the application in RobotStudio and feeding the calculated data into the robot. The Kinect's functionality is utilized through three simple hand gestures to jog and create targets for the robot: open, closed and "lasso". A prototype of this application was completed which, through motions, allowed the user to teach a virtual robot desired tasks by moving it to different positions and saving them with hand gestures. The prototype could be applied both to one-armed robots and to a two-armed robot such as ABB's YuMi.
Controlling the robot's orientation while running proved too complicated to develop and implement in time and became the application's main bottleneck; it remains one of several suggestions for further work in this project.
APA, Harvard, Vancouver, ISO, and other styles
20

Desormeaux, Kevin. "Temporal models of motions and forces for Human-Robot Interactive manipulation." Thesis, Toulouse 3, 2019. http://www.theses.fr/2019TOU30221.

Full text
Abstract:
Interest in robotics emerged in the 1970s, and robots have been replacing humans in industry ever since. Full automation, however, does not bring only advantages: it requires perfectly controlled environments, and reprogramming a task is long and tedious. The growing need for adaptability and reusability of assembly systems is forcing robotics to reinvent itself, notably by bringing humans and robots to interact. This new type of collaboration makes it possible to combine the respective strengths of humans and robots. However, humans can only be included as active agents in these new collaborative workspaces if robots are safe, intuitive and easily reprogrammable. In this light we can see the crucial role of motion generation for the robots of tomorrow. For humans and robots to collaborate, the latter must generate safe motions that guarantee both the physical and the psychological safety of humans. Trajectories are an excellent model for generating motions suited to collaborative robots, because they offer a simple and precise description of the evolution of the motion. Smooth trajectories are well known to generate safe and comfortable motions for humans. In this thesis we propose a real-time trajectory generation algorithm based on sequences of segments of third-degree polynomial functions to build smooth trajectories. These trajectories are built from arbitrary initial and final conditions, a necessary condition for robots to be able to react instantly to unforeseen events. The approach, based on a jerk-constrained model, offers performance-oriented solutions: the trajectories are time-optimal under safety constraints.
These safety constraints are kinematic constraints that depend on the task and the context and must be specified. To guide the choice of these constraints, we studied the role of kinematics in defining the ergonomic properties of motion.[...]
It was in the 70s that interest in robotics really emerged. That was barely half a century ago, and since then robots have been replacing humans in industry. This robot-oriented solution does not come without drawbacks, as full automation requires time-consuming programming as well as rigid environments. With the increased need for adaptability and reusability of assembly systems, robotics is undergoing major changes and sees the emergence of a new type of collaboration between humans and robots. Human-robot collaboration gets the best of both worlds by combining the respective strengths of humans and robots. But, to include the human as an active agent in these new collaborative workspaces, safe and flexible robots are required. It is in this context that we can apprehend the crucial role of motion generation in tomorrow's robotics. For the emergence of human-robot cooperation, robots have to generate motions ensuring the safety of humans, both physical and psychological. For this reason motion generation has been a restricting factor to the growth of robotics in the past. Trajectories are excellent candidates for building desirable motions designed for collaborative robots, because they allow the motions to be described simply and precisely. Smooth trajectories are well known to provide safe motions with good ergonomic properties. In this thesis we propose an Online Trajectory Generation algorithm based on sequences of segments of third-degree polynomial functions to build smooth trajectories. These trajectories are built from arbitrary initial and final conditions, a requirement for robots to be able to react instantaneously to unforeseen events. Our approach, built on a constrained-jerk model, offers performance-oriented solutions: the trajectories are time-optimal under safety constraints. These safety constraints are kinematic constraints that are task- and context-dependent and must be specified.
To guide the choice of these constraints we investigated the role of kinematics in the definition of ergonomic properties of motions. We also extended our algorithm to cope with non-admissible initial configurations, opening the way to trajectory generation under non-constant motion constraints. [...]
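Within each constant-jerk segment of such a trajectory, position evolves as a third-degree polynomial of time, so the state can be propagated in closed form from one segment boundary to the next. A minimal sketch (the segment durations and jerk value are illustrative, not taken from the thesis):

```python
def integrate_segment(x, v, a, j, dt):
    """Closed-form state update over one constant-jerk segment: within a
    segment, position is a third-degree polynomial of time."""
    x_new = x + v * dt + a * dt**2 / 2 + j * dt**3 / 6
    v_new = v + a * dt + j * dt**2 / 2
    a_new = a + j * dt
    return x_new, v_new, a_new

# Symmetric jerk pulse (+J then -J): acceleration ramps smoothly up and
# back to zero, leaving the axis with a velocity increment.
state = (0.0, 0.0, 0.0)            # position, velocity, acceleration
for jerk in (2.0, -2.0):           # J = 2 m/s^3, 0.5 s per segment
    state = integrate_segment(*state, jerk, 0.5)
print([round(s, 3) for s in state])
```

A time-optimal generator chains such segments, choosing the jerk sign and the segment durations so that the jerk, acceleration and velocity bounds are saturated without ever being violated.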
APA, Harvard, Vancouver, ISO, and other styles
21

Lazarov, Kristiyan, and Badi Mirzai. "Behaviour-Aware Motion Planning for Autonomous Vehicles Incorporating Human Driving Style." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254224.

Full text
Abstract:
This paper proposes a model to ensure safe and realistic human-robot interaction for an autonomous vehicle interacting with a human-driven vehicle, by incorporating the driving style of the human driver. The interaction is modeled as a game where both agents try to maximize future rewards. The driving style of the human is captured via the role of the human driver in the game, reflecting the fact that humans with different driving styles reason differently. The solution of the game is obtained using a numerical approximation and is used by the autonomous vehicle to plan optimally ahead. The model is validated via simulations of a safety-critical scenario, where realistic driving-style-dependent behaviour emerges naturally.
APA, Harvard, Vancouver, ISO, and other styles
22

Glorieux, Emile. "Multi-Robot Motion Planning Optimisation for Handling Sheet Metal Parts." Doctoral thesis, Högskolan Väst, Avdelningen för produktionssystem (PS), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:hv:diva-10947.

Full text
Abstract:
Motion planning for robot operations is concerned with path planning and trajectory generation. In multi-robot systems, i.e. with multiple robots operating simultaneously in a shared workspace, the motion planning also needs to coordinate the robots' motions to avoid collisions between them. The multi-robot coordination decides the cycle-time for the planned paths and trajectories, since it determines to what extent the operations can take place simultaneously without colliding. To obtain the quickest cycle-time, there needs to be an optimal balance between, on the one hand, short paths and fast trajectories and, on the other hand, possibly longer paths and slower trajectories that allow the operations to take place simultaneously in the shared workspace. Due to these inter-dependencies, it becomes necessary to consider the path planning, trajectory generation and multi-robot coordination together as one optimisation problem in order to find this optimal balance. This thesis focusses on optimising the motion planning for multi-robot material handling systems for sheet metal parts. A methodology is proposed to model the relevant aspects of this motion planning problem together as one multi-disciplinary optimisation problem for Simulation-based Optimisation (SBO). The identified relevant aspects include path planning, trajectory generation, multi-robot coordination, collision avoidance, motion smoothness, the end-effectors' holding force, cycle-time, robot wear, energy efficiency, part deformations, induced stresses in the part, and the end-effectors' design. The cycle-time is not always the (only) objective, since it is sometimes equally or more important to minimise robot wear, energy consumption and/or part deformations. Different scenarios for these other objectives are therefore also investigated. Specialised single- and multi-objective algorithms are proposed for optimising the motion planning of these multi-robot systems.
This thesis also investigates how to optimise the velocity and acceleration profiles of the coordinated trajectories for multi-robot material handling of sheet metal parts. Another modelling methodology is proposed, based on a novel mathematical model that parametrises the velocity and acceleration profiles of the trajectories while including the relevant aspects of the motion planning problem, excluding the path planning since the paths are now predefined. This enables generating optimised trajectories with velocity and acceleration profiles tailored to the specific material handling operations, in order to minimise the cycle-time, energy consumption, or deformations of the handled parts. The proposed methodologies are evaluated in different scenarios on real-world industrial case studies that consider the multi-robot material handling of a multi-stage tandem sheet metal press line, which is used in the automotive industry to produce cars' body panels. The optimisation results show that significant improvements can be obtained compared to the current industrial practice.
APA, Harvard, Vancouver, ISO, and other styles
23

Taqi, Sarah M. A. M. "Reproduction of Observed Trajectories Using a Two-Link Robot." The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1308031627.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Waldhart, Jules. "A NEW FORMULATION AND RESOLUTION SCHEMES FOR PLANNING COLLABORATIVE HUMAN-ROBOT TASKS." Thesis, Toulouse, INSA, 2018. http://www.theses.fr/2018ISAT0047.

Full text
Abstract:
Robots interacting with humans must behave in accordance with some of our socio-cultural rules, which must be considered by every component of the robot. When deciding which action to perform and how to perform it, the system needs to communicate the relevant contextual information to each of its components so that they can respect these rules. It is essential that such robots can coordinate smoothly with their human partners. We humans use many synchronization cues, notably through gaze, the legibility of our gestures, or dialogue. We efficiently infer the action possibilities of our partners, which helps us anticipate what they will or should do in order to better plan our own actions. In the field of human-robot interaction, these capabilities are essential. This thesis presents our approach to solving two tasks where humans and robots collaborate closely: an object transport problem, where several robots and humans must or may hand an object from hand to hand to bring it from one place to another, and a guiding task, where the robot helps humans orient themselves using dialogue, navigation and deictic (pointing) gestures. We present our implementation of these components and their articulation within an architecture in which contextual information is transmitted from the highest decision levels to the lower ones, which use it to adapt. The robot also plans for the humans' actions, as in a multi-robot system, which allows it not to wait for the humans' actions but to be proactive in proposing a solution and to anticipate their future actions.
When interacting with humans, robotic systems shall behave in compliance with some of our socio-cultural rules, and every component of the robot has to take them into account. When deciding which action to perform and how to perform it, the system then needs to communicate pertinent contextual information to its components so they can plan respecting these rules. It is also essential for such a robot to ensure smooth coordination with its human partners. We humans use many cues for synchronization, like gaze, legible motions or speech. We are good at inferring what actions are available to our partner, which helps us get an idea of what others are going to do (or what they should do) so that we can better plan our own actions. Enabling the robot with such capacities is key in the domain of human-robot interaction. This thesis presents our approach to solving two tasks where humans and robots collaborate closely: a transport problem, where multiple robots and humans need to or can hand over an object to bring it from one place to another, and a guiding task, where the robot helps humans to orient themselves using speech, navigation and deictic gestures (pointing). We present our implementation of the components and their articulation in an architecture where contextual information is transmitted from higher-level decision components to lower ones, which use it to adapt. Our planners also plan for the human actions, as in a multi-robot system: this allows the robot not to wait for humans to act, but rather to be proactive in proposing a solution and to try to predict the actions they will take.
APA, Harvard, Vancouver, ISO, and other styles
25

Gharbi, Mamoun. "Geometric reasoning planning in the context of Human-Robot Interaction." Thesis, Toulouse, INSA, 2015. http://www.theses.fr/2015ISAT0047/document.

Full text
Abstract:
Au cours des dernières années, la communauté robotique s'est largement intéressée au domaine de l'interaction homme-robot (HRI). Un des aspects de ce domaine est de faire agir les robots en présence de l'homme, tout en respectant sa sécurité ainsi que son confort. Pour atteindre cet objectif, un robot doit planifier ses actions tout en prenant explicitement en compte les humains afin d'adapter le plan à leurs positions, leurs capacités et leurs préférences. La première partie de cette thèse concerne les transferts d'objets entre humains et robots : où, quand et comment les effectuer ? Selon les préférences de l'homme, il est parfois préférable, ou non, de partager l'effort du transfert d'objet entre lui et le robot ; par ailleurs, dans certains cas, un seul transfert d'objet n'est pas suffisant pour atteindre l'objectif (amener l'objet à un agent cible) : le robot doit alors planifier une séquence de transferts d'objet entre plusieurs agents afin d'arriver à ses fins. Quel que soit le cas, pendant le transfert d'objet, un certain nombre de signaux doivent être échangés par les deux protagonistes afin de réussir l'action. Un des signaux les plus utilisés est le regard. Lorsque le donneur tend le bras afin de transférer l'objet, il doit regarder successivement le receveur puis l'objet afin de faciliter le transfert. Le transfert d'objet peut être considéré comme une action de base dans un plan plus vaste, nous amenant à la seconde partie de cette thèse, qui présente une formalisation de ce type d'« actions de base » et d'actions plus complexes utilisant des conditions, des espaces de recherche et des contraintes. Cette partie rend aussi compte du framework et des différents algorithmes utilisés pour résoudre et calculer ces actions en fonction de leur description.
La dernière partie de la thèse montre comment ce framework peut s'adapter à un planificateur de plus haut niveau (un planificateur de tâches par exemple) et une méthode pour combiner la planification symbolique et géométrique. Le planificateur de tâches utilise des appels à des fonctions externes lui permettant de vérifier la faisabilité de la tâche courante et, en cas de succès, de récupérer l'état du monde fourni par le raisonneur géométrique et de l'utiliser afin de poursuivre la planification. Cette partie montre également différentes extensions de cet algorithme, telles que les « validations géométriques », où nous testons l'infaisabilité de plusieurs actions à la fois, les « contraintes », où l'ajout de contraintes au niveau symbolique peut diriger la recherche géométrique, ou encore la « recherche dirigée par le coût », où le planificateur symbolique utilise les informations fournies par la partie géométrique afin d'éviter le calcul de plans moins intéressants.
In the last few years, the human-robot interaction (HRI) field has been in the spotlight of the robotics community. One aspect of this field is making robots act in the presence of humans, while keeping them safe and comfortable. In order to achieve this, a robot needs to plan its actions while explicitly taking the humans into account and adapt its plans to their whereabouts, capacities and preferences. The first part of this thesis is about human-robot handovers: where, when and how to perform them? Depending on the human's preferences, it may be better, or not, to share the handover effort between them and the robot, while in other cases a single handover might not be enough to achieve the goal (bringing the object to a target agent) and a sequence of handovers might be needed. In any case, during a handover, a number of cues should be used by both protagonists. One of the most used cues is gaze. When the giver reaches out with his arm, he should look at the object, and when the motion is finished, he should look at the receiver's face to facilitate the transfer. The handover can be considered as a basic action in a bigger plan. The second part of this thesis reports on a formalization of these kinds of "basic actions" and more complex ones, through the use of conditions, search spaces and constraints. It also reports on a framework and the different algorithms used to solve and compute these actions based on their description. The last part of the thesis shows how the previously cited framework can fit in with a higher-level planner (such as a task planner) and presents a method to combine a symbolic and a geometric planner. The task planner uses external calls to the geometric planner to assess the feasibility of the current task and, in case of success, retrieves the state of the world provided by the geometric reasoner and uses it to continue planning. This part also shows different extensions enabling a faster search.
Some of these extensions are "geometric checks", where we test the infeasibility of multiple actions at once, "constraints", where adding constraints at the symbolic level can drive the geometric search, and "cost-driven search", where the symbolic planner uses information from the geometric one to prune out overly costly plans.
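The interleaving of symbolic search with external geometric feasibility calls described in this abstract can be illustrated with a minimal sketch. The toy world model, action format, and function names below are invented for illustration and are not taken from the thesis:

```python
# Toy symbolic planner that validates each candidate action with an
# external "geometric" check before committing to it (illustrative only).

def geometric_check(action, world):
    """Stand-in geometric reasoner: an action is feasible only if its
    target location is reachable in the current world state."""
    return action["target"] in world["reachable"]

def apply_action(action, world):
    """Return the new world state reported back by the geometric layer."""
    return {"reachable": set(world["reachable"]), "at": action["target"]}

def plan(actions, world, goal, depth=3):
    """Depth-first symbolic search; geometrically infeasible branches
    are pruned, mirroring the external feasibility calls."""
    if world.get("at") == goal:
        return []
    if depth == 0:
        return None
    for a in actions:
        if not geometric_check(a, world):  # prune infeasible expansions
            continue
        rest = plan(actions, apply_action(a, world), goal, depth - 1)
        if rest is not None:
            return [a["name"]] + rest
    return None
```

In the thesis's setting the geometric check is a full motion/manipulation query rather than a set-membership test, but the control flow, symbolic expansion gated by geometric validation, is the same.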
APA, Harvard, Vancouver, ISO, and other styles
26

Adorno, Bruno. "Two-arm Manipulation : from Manipulators to Enhanced Human-Robot Collaboration." Thesis, Montpellier 2, 2011. http://www.theses.fr/2011MON20064/document.

Full text
Abstract:
Cette thèse est consacrée à l'étude de la manipulation et de la coordination robotique à deux bras, ayant pour objectif le développement d'une approche unifiée dans laquelle différentes tâches seront décrites dans le même formalisme. Afin de fournir un cadre théorique compact et rigoureux, les techniques présentées utilisent les quaternions duaux afin de représenter les différents aspects de la modélisation cinématique ainsi que de la commande. Une nouvelle représentation de la manipulation à deux bras est proposée, l'espace dual des tâches de coopération, laquelle exploite l'algèbre des quaternions duaux afin d'unifier les précédentes approches présentées dans la littérature. La méthode est étendue pour prendre en compte l'ensemble des chaînes cinématiques couplées, incluant la simulation d'un manipulateur mobile. Une application originale de l'espace dual des tâches de coopération est développée afin de représenter de manière intuitive les tâches principales impliquées dans une collaboration homme-robot. Plusieurs expérimentations sont réalisées pour valider les techniques proposées. De plus, cette thèse propose une nouvelle classe de tâches d'interaction homme-robot dans laquelle le robot contrôle tous les aspects de la coordination. Ainsi, au-delà du contrôle de son propre bras, le robot contrôle le bras de l'humain par le biais de la stimulation électrique fonctionnelle (FES) dans le cadre d'applications d'interaction robot / personne handicapée. Grâce à cette approche générique développée tout au long de cette thèse, les outils théoriques qui en résultent sont compacts et capables de décrire et de contrôler un large éventail de tâches de manipulation robotique complexes.
This thesis is devoted to the study of robotic two-arm coordination/manipulation from a unified perspective, and conceptually different bimanual tasks are thus described within the same formalism. In order to provide a consistent and compact theory, the techniques presented herein use dual quaternions to represent every single aspect of robot kinematic modeling and control. A novel representation for two-arm manipulation is proposed, the cooperative dual task-space, which exploits the dual quaternion algebra to unify the various approaches found in the literature. The method is further extended to take into account any serially coupled kinematic chain, and a case study is performed using a simulated mobile manipulator. An original application of the cooperative dual task-space is proposed to intuitively represent general human-robot collaboration (HRC) tasks, and several experiments were performed to validate the proposed techniques. Furthermore, the thesis proposes a novel class of HRC tasks wherein the robot controls all the coordination aspects; that is, in addition to controlling its own arm, the robot controls the human arm by means of functional electrical stimulation (FES). Thanks to the holistic approach developed throughout the thesis, the resultant theory is compact, uses a small set of mathematical tools, and is capable of describing and controlling a broad range of robot manipulation tasks.
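The dual quaternion algebra that underpins this work represents a rigid motion (rotation r, translation t) as q = r + ε · ½ t ⊗ r, and composes motions by dual quaternion multiplication. A minimal numeric sketch of that algebra (function names are ours, not the thesis's library):

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions given as [w, x, y, z]."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def dq_from_pose(r, t):
    """Unit dual quaternion (primary, dual) = (r, 0.5 * t ⊗ r) encoding
    rotation quaternion r and translation vector t = [tx, ty, tz]."""
    tq = np.array([0.0, *t])
    return r, 0.5 * qmul(tq, r)

def dq_mul(a, b):
    """Dual quaternion product: (p + εq)(p' + εq') = pp' + ε(pq' + qp')."""
    pr1, du1 = a
    pr2, du2 = b
    return qmul(pr1, pr2), qmul(pr1, du2) + qmul(du1, pr2)

def dq_translation(dq):
    """Recover the translation: t = 2 * dual ⊗ conj(primary)."""
    pr, du = dq
    conj = np.array([pr[0], -pr[1], -pr[2], -pr[3]])
    return (2.0 * qmul(du, conj))[1:]
```

Composing the poses of two arms in exactly this algebra is what lets the cooperative dual task-space describe relative and absolute bimanual variables with one set of operations.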
APA, Harvard, Vancouver, ISO, and other styles
27

Houda, Taha. "Human Interaction in a large workspace parallel robot platform with a virtual environment." Electronic Thesis or Diss., université Paris-Saclay, 2020. http://www.theses.fr/2020UPASG047.

Full text
Abstract:
L'objectif de la thèse porte sur la définition, la mise en oeuvre et l'évaluation d'un algorithme de restitution de mouvement prenant en compte les contraintes de perception du système vestibulaire chez l'humain et les contraintes liées à la physique du simulateur de mouvement utilisé. Ce dernier est constitué par une plateforme robotique série-parallèle à 8 degrés de liberté entièrement conçue dans le laboratoire et destinée principalement à l'assistance aux personnes présentant un handicap moteur. Cette restitution sensorielle nécessite des travaux de recherche pluridisciplinaires en robotique et en réalité virtuelle. Aussi, une formalisation de la modélisation dynamique, basée sur l'état de l'art, a été adaptée, et les paramètres dynamiques ont été optimisés et identifiés pour la plateforme de mouvement à 8 degrés de liberté. Plusieurs méthodes de génération de mouvement, gérant la redondance de la plateforme, ont été étudiées, mises en oeuvre et comparées. La méthode la plus performante, basée sur l'optimisation par essaims particulaires (PSO), a été retenue. Cet algorithme est par la suite utilisé pour optimiser les paramètres du contrôleur par mode glissant de la plateforme. Le simulateur a été utilisé pour une application de ski en réalité virtuelle reproduisant la station de Combloux en Haute-Savoie, dédiée aux personnes handicapées. Les résultats de simulation montrent un très bon suivi des consignes et une bonne réduction des oscillations. Ces travaux seront poursuivis par l'utilisation d'interfaces multisensorielles de réalité virtuelle d'assistance à l'humain.
The thesis objective relates to the definition, implementation and evaluation of a motion cueing algorithm taking into account the perceptual constraints of the vestibular system in humans and the constraints related to the movement physics of the simulator used. The latter consists of a series-parallel robotic platform with 8 degrees of freedom, entirely designed in the laboratory and intended primarily to assist people with motor disabilities. This sensory restitution requires multidisciplinary research work in robotics and virtual reality. Moreover, a formalization of the dynamic modeling, based on the state of the art, was adapted, and the dynamic parameters were optimized and identified for the 8-degree-of-freedom motion platform. Several methods of trajectory generation, exploiting the platform redundancy, have been studied, implemented, and compared. The most efficient, based on particle swarm optimization (PSO), was chosen. This algorithm is then used to optimize the parameters of the platform's sliding-mode controller. The simulator was used for a virtual reality ski application reproducing the Combloux resort in Haute-Savoie, dedicated to disabled people. The simulation results show very good trajectory tracking and a good reduction of oscillations. This work will be continued through the use of multi-sensory virtual reality interfaces for human assistance.
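Particle swarm optimization, the method retained here for both redundancy resolution and controller tuning, is simple to sketch. The following is a generic global-best PSO minimizer, not the thesis's implementation; all parameter values are common defaults chosen for illustration:

```python
import random

def pso(f, dim, n_particles=30, iters=200, lo=-5.0, hi=5.0,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer (global-best topology).
    Each particle tracks its personal best; velocities blend inertia,
    attraction to the personal best, and attraction to the swarm best."""
    rng = random.Random(seed)
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(x) for x in xs]
    pbest_f = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = list(pbest[g]), pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] = min(hi, max(lo, xs[i][d] + vs[i][d]))
            fx = f(xs[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = list(xs[i]), fx
                if fx < gbest_f:
                    gbest, gbest_f = list(xs[i]), fx
    return gbest, gbest_f
```

In the thesis the objective f would score, e.g., joint-space solutions of the redundant platform or sliding-mode controller gains, rather than the toy quadratic used below.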
APA, Harvard, Vancouver, ISO, and other styles
28

Eshelman-Haynes, Candace Lee. "Visual contributions to spatial perception during a remote navigation task." Wright State University / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=wright1247510065.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Diaz-Mercado, Yancy J. "Interactions in multi-robot systems." Diss., Georgia Institute of Technology, 2016. http://hdl.handle.net/1853/55020.

Full text
Abstract:
The objective of this research is to develop a framework for multi-robot coordination and control with emphasis on human-swarm and inter-agent interactions. We focus on two problems: in the first we address how to enable a single human operator to externally influence large teams of robots. By directly imposing density functions on the environment, the user is able to abstract away the size of the swarm and manipulate it as a whole, e.g., to achieve specified geometric configurations, or to maneuver it around. In order to pursue this approach, contributions are made to the problem of coverage of time-varying density functions. In the second problem, we address the characterization of inter-agent interactions and enforcement of desired interaction patterns in a provably safe (i.e., collision free) manner, e.g., for achieving rich motion patterns in a shared space, or for mixing of sensor information. We use elements of the braid group, which allows us to symbolically characterize classes of interaction patterns. We further construct a new specification language that allows us to provide rich, temporally-layered specifications to the multi-robot mixing framework, and present algorithms that significantly reduce the search space of specification-satisfying symbols with exactness guarantees. We also synthesize provably safe controllers that generate and track trajectories to satisfy these symbolic inputs. These controllers allow us to find bounds on the amount of safe interactions that can be achieved in a given bounded domain.
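The density-based swarm influence described above builds on coverage control: each robot repeatedly moves toward the density-weighted centroid of its Voronoi cell. A discretized Lloyd-style step is sketched below; the grid sampling, unit gain, and names are illustrative assumptions, and the dissertation's contribution additionally handles time-varying densities:

```python
import numpy as np

def coverage_step(agents, density, pts, gain=1.0):
    """One Lloyd-style coverage step on a sampled domain: assign each
    sample point to its nearest agent (Voronoi cell), then move every
    agent toward the density-weighted centroid of its cell."""
    agents = np.asarray(agents, dtype=float)
    phi = density(pts)                                   # density at samples
    d = np.linalg.norm(pts[:, None, :] - agents[None, :, :], axis=2)
    owner = d.argmin(axis=1)                             # Voronoi assignment
    new = agents.copy()
    for i in range(len(agents)):
        m = owner == i
        mass = phi[m].sum()
        if mass > 0:
            c = (phi[m, None] * pts[m]).sum(axis=0) / mass
            new[i] += gain * (c - agents[i])             # move toward centroid
    return new
```

Imposing a density peaked where the operator points makes the swarm concentrate there, which is the abstraction-of-size idea: the user shapes the density, not individual robots.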
APA, Harvard, Vancouver, ISO, and other styles
30

Bartholomew, Paul D. "Optimal behavior composition for robotics." Thesis, Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/51872.

Full text
Abstract:
The development of a humanoid robot that mimics human motion requires extensive programming as well as understanding the motion limitations of the robot. Programming the countless possibilities for a robot’s response to observed human motion can be time-consuming. To simplify this process, this thesis presents a new approach for mimicking captured human motion data through the development of a composition routine. This routine is built upon a behavior-based framework and is coupled with optimization by calculus to determine the appropriate weightings of predetermined motion behaviors. The completion of this thesis helps to fill a void in human/robot interactions involving mimicry and behavior-based design. Technological advancements in the way computers and robots identify human motion and determine for themselves how to approximate that motion have helped make possible the mimicry of observed human subjects. In fact, many researchers have developed humanoid systems that are capable of mimicking human motion data; however, these systems do not use behavior-based design. This thesis will explain the framework and theory behind our optimal behavior composition algorithm and the selection of sinusoidal motion primitives that make up a behavior library. This algorithm breaks captured motion data into various time intervals, then optimally weights the defined behaviors to best approximate the captured data. Since this routine does not reference previous or following motion sequences, discontinuities may exist between time intervals. To address this issue, the addition of a PI controller to regulate and smooth out the transitions between time intervals will be shown. The effectiveness of using the optimal behavior composition algorithm to create an approximated motion that mimics captured motion data will be demonstrated through an example configuration of hardware and a humanoid robot platform.
An example of arm motion mimicry will be presented and includes various image sequences from the mimicry as well as trajectories containing the joint positions for both the human and the robot.
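Weighting a fixed library of sinusoidal primitives to best approximate a captured trajectory segment is, per interval, a least-squares problem. A compact sketch of that core step (function name and basis choice are ours; the thesis derives the weights by calculus and adds a PI controller for interval transitions):

```python
import numpy as np

def compose_behaviors(primitives, target, t):
    """Optimally weight motion primitives (basis functions of time) so
    their weighted sum best approximates a captured trajectory segment."""
    B = np.column_stack([p(t) for p in primitives])  # one column per behavior
    w, *_ = np.linalg.lstsq(B, target, rcond=None)   # least-squares weights
    return w, B @ w                                  # weights, approximation
```

Running this per time interval reproduces the composition idea; the discontinuities the abstract mentions arise because each interval is solved independently.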
APA, Harvard, Vancouver, ISO, and other styles
31

Wei, Junqing. "Autonomous Vehicle Social Behavior for Highway Driving." Research Showcase @ CMU, 2017. http://repository.cmu.edu/dissertations/919.

Full text
Abstract:
In recent years, autonomous driving has become an increasingly practical technology. With state-of-the-art computer and sensor engineering, autonomous vehicles may be produced and widely used for travel and logistics in the near future. They have great potential to reduce traffic accidents, improve transportation efficiency, and release people from driving tasks while commuting. Researchers have built autonomous vehicles that can drive on public roads and handle normal surrounding traffic and obstacles. However, in situations like lane changing and merging, the autonomous vehicle faces the challenge of performing smooth interaction with human-driven vehicles. To do this, autonomous vehicle intelligence still needs to be improved so that it can better understand and react to other human drivers on the road. In this thesis, we argue for the importance of implementing "socially cooperative driving", which is an integral part of everyday human driving, in autonomous vehicles. An intention-integrated Prediction- and Cost function-Based algorithm (iPCB) framework is proposed to enable an autonomous vehicle to perform cooperative social behaviors. We also propose a behavioral planning framework to enable the socially cooperative behaviors with the iPCB algorithm. The new architecture is implemented in an autonomous vehicle and can coordinate the existing Adaptive Cruise Control (ACC) and Lane Centering interface to perform socially cooperative behaviors. The algorithm has been tested in over 500 entrance ramp and lane change scenarios on public roads in multiple cities in the US, and in over 10,000 simulated cases and statistical tests. Results show that the proposed algorithm and framework improve the performance of autonomous lane changes and entrance ramp handling. Compared with rule-based algorithms that were previously developed on an autonomous vehicle for these scenarios, over 95% of potentially unsafe situations are avoided.
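The prediction- and cost-function-based decision idea can be sketched in miniature: predict other vehicles forward, score each candidate maneuver with a cost combining safety and efficiency, and pick the cheapest. Everything below (constant-velocity predictor, cost terms, thresholds, names) is an invented toy, not the iPCB algorithm itself:

```python
# Toy prediction + cost-function maneuver selection (illustrative only).

def predict_gap(other_pos, other_speed, ego_pos, ego_speed, horizon=3.0):
    """Constant-velocity prediction of the gap to another vehicle."""
    return (other_pos + other_speed * horizon) - (ego_pos + ego_speed * horizon)

def maneuver_cost(gap, ego_speed, desired_speed, safe_gap=10.0):
    """Safety term (reject predicted-unsafe gaps) plus efficiency term."""
    if gap < safe_gap:
        return float("inf")                    # predicted unsafe: reject
    return 1.0 / gap + abs(desired_speed - ego_speed)

def choose_maneuver(candidates):
    """Pick the candidate maneuver with the lowest predicted cost."""
    return min(candidates, key=lambda c: c["cost"])
```

The "intention-integrated" part of iPCB goes further, conditioning the prediction on inferred cooperative intentions of the other drivers.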
APA, Harvard, Vancouver, ISO, and other styles
32

Guerriero, Brian A. "Haptic control and operator-guided gait coordination of a pneumatic hexapedal rescue robot." Thesis, Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/24626.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Gielniak, Michael Joseph. "Adaptation of task-aware, communicative variance for motion control in social humanoid robotic applications." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/43591.

Full text
Abstract:
An algorithm for generating communicative, human-like motion for social humanoid robots was developed. Anticipation, exaggeration, and secondary motion were demonstrated as examples of communication. Spatiotemporal correspondence was presented as a metric for human-like motion, and the metric was used to both synthesize and evaluate motion. An algorithm for generating an infinite number of variants from a single exemplar was established to avoid repetitive motion. The algorithm was made task-aware by including the functionality of satisfying constraints. User studies were performed with the algorithm using human participants. Results showed that communicative, human-like motion can be harnessed to direct partner attention and communicate state information. Furthermore, communicative, human-like motion for social robots produced by the algorithm allows human partners to feel more engaged in the interaction, recognize motion earlier, label intent sooner, and remember interaction details more accurately.
APA, Harvard, Vancouver, ISO, and other styles
34

Mainprice, Jim. "Planification de mouvement pour la manipulation d'objets sous contraintes d'interaction homme-robot." Phd thesis, INSA de Toulouse, 2012. http://tel.archives-ouvertes.fr/tel-00782708.

Full text
Abstract:
Un robot agit sur son environnement par le mouvement ; sa capacité à planifier ses mouvements est donc une composante essentielle de son autonomie. La planification de mouvement est un domaine de recherche qui a largement été étudié durant ces dernières décennies. L'objectif de cette thèse est de concevoir des méthodes algorithmiques performantes permettant le calcul automatique de trajectoires pour des systèmes robotiques complexes dans le cadre de la robotique d'assistance. Ce champ applicatif émergeant de la robotique autonome apporte de nouvelles contraintes et de nouveaux défis. Les systèmes considérés, qui ont pour vocation de servir l'homme et de l'accompagner dans des tâches du quotidien, doivent tenir compte de la sécurité et du bien-être de l'homme. Pour cela, les mouvements du robot doivent être générés en considérant explicitement le partenaire humain, en raisonnant sur un modèle du comportement social de l'homme, de ses capacités et de ses limites, afin de produire un comportement synergique optimal. Dans cette thèse nous étendons les travaux pionniers menés au LAAS dans ce domaine afin de produire des mouvements considérant l'homme de manière explicite dans des environnements encombrés. Des algorithmes d'exploration de l'espace des configurations par échantillonnage aléatoire sont combinés à des algorithmes d'optimisation de trajectoire afin de produire des mouvements sûrs et agréables. Nous proposons dans un deuxième temps un planificateur de tâche d'échange d'objet prenant en compte la mobilité du receveur humain, permettant ainsi de partager l'effort lors du transfert. La pertinence de cette approche a été étudiée dans une étude utilisateur. Finalement, nous présentons une architecture logicielle qui permet de prendre en compte l'homme de manière dynamique lors de la réalisation de tâches de manipulation interactive.
Cette architecture, développée en collaboration avec un partenaire du projet européen Dexmart, a également été évaluée dans une étude utilisateur.
APA, Harvard, Vancouver, ISO, and other styles
35

Lallement, Raphael. "Symbolic and Geometric Planning for teams of Robots and Humans." Thesis, Toulouse, INSA, 2016. http://www.theses.fr/2016ISAT0010/document.

Full text
Abstract:
La planification HTN (Hierarchical Task Network, ou Réseau Hiérarchique de Tâches) est une approche très souvent utilisée pour produire des séquences de tâches servant à contrôler des systèmes intelligents. Cette thèse présente le planificateur HATP (Hierarchical Agent-based Task Planner, ou Planificateur Hiérarchique centré Agent) qui étend la planification HTN classique en enrichissant la représentation des domaines et leur sémantique afin d'être plus adaptées à la robotique, tout en offrant aussi une prise en compte des humains. Quand on souhaite générer un plan pour des robots tout en prenant en compte les humains, il apparaît que les problèmes sont complexes et fortement interdépendants. Afin de faire face à cette complexité, nous avons intégré à HATP un planificateur géométrique apte à déduire l'effet réel des actions sur l'environnement et ainsi permettre de considérer la visibilité et l'accessibilité des éléments. Cette thèse se concentre sur l'intégration de ces deux planificateurs de nature différente et étudie comment, par leur combinaison, ils permettent de résoudre de nouvelles classes de problèmes de planification pour la robotique.
Hierarchical Task Network (HTN) planning is a popular approach to build task plans to control intelligent systems. This thesis presents the HATP (Hierarchical Agent-based Task Planner) planning framework, which extends the traditional HTN planning domain representation and semantics by making them more suitable for roboticists, and by offering human-awareness capabilities. When computing human-aware robot plans, it appears that the problems are very complex and highly intricate. To deal with this complexity we have integrated a geometric planner to reason about the actual impact of actions on the environment, allowing affordances (reachability, visibility) to be taken into account. This thesis presents in detail this integration between two heterogeneous planning layers and explores how they can be combined to solve new classes of robotic planning problems.
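The HTN idea that HATP builds on, recursively decomposing compound tasks into primitive actions via methods, can be sketched in a few lines. The domain, task names, and single-method simplification below are invented for illustration (real HTN planners, HATP included, handle multiple methods, preconditions, and state):

```python
def htn_plan(task, methods, primitive):
    """Tiny HTN decomposition: expand each compound task through its
    method's subtasks until only primitive actions remain."""
    if task in primitive:
        return [task]
    plan = []
    for sub in methods[task]:   # one method per compound task, for brevity
        plan += htn_plan(sub, methods, primitive)
    return plan
```

HATP's extension is that each primitive expanded this way can additionally be validated by the geometric planner before it is kept in the plan.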
APA, Harvard, Vancouver, ISO, and other styles
36

Zanlongo, Sebastian A. "Multi-Robot Coordination and Scheduling for Deactivation & Decommissioning." FIU Digital Commons, 2018. https://digitalcommons.fiu.edu/etd/3897.

Full text
Abstract:
Large quantities of high-level radioactive waste were generated during WWII. This waste is being stored in facilities such as double-shell tanks in Washington, and the Waste Isolation Pilot Plant in New Mexico. Due to the dangerous nature of radioactive waste, these facilities must undergo periodic inspections to ensure that leaks are detected quickly. In this work, we provide a set of methodologies to aid in the monitoring and inspection of these hazardous facilities. This allows inspection of dangerous regions without a human operator, and for the inspection of locations where a person would not be physically able to enter. First, we describe a robot equipped with sensors which uses a modified A* path-planning algorithm to navigate in a complex environment with a tether constraint. This is then augmented with an adaptive informative path planning approach that uses the assimilated sensor data within a Gaussian Process distribution model. The model's predictive outputs are used to adaptively plan the robot's path, to quickly map and localize areas from an unknown field of interest. The work was validated in extensive simulation testing and early hardware tests. Next, we focused on how to assign tasks to a heterogeneous set of robots. Task assignment is done in a manner which allows for task-robot dependencies, prioritization of tasks, collision checking, and more realistic travel estimates among other improvements from the state-of-the-art. Simulation testing of this work shows an increase in the number of tasks which are completed ahead of a deadline. Finally, we consider the case where robots are not able to complete planned tasks fully autonomously and require operator assistance during parts of their planned trajectory. We present a sampling-based methodology for allocating operator attention across multiple robots, or across different parts of a more sophisticated robot. 
This allows few operators to oversee large numbers of robots, allowing for a more scalable robotic infrastructure. This work was tested in simulation for both multi-robot deployment, and high degree-of-freedom robots, and was also tested in multi-robot hardware deployments. The work here can allow robots to carry out complex tasks, autonomously or with operator assistance. Altogether, these three components provide a comprehensive approach towards robotic deployment within the deactivation and decommissioning tasks faced by the Department of Energy.
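The tether-constrained navigation mentioned above can be sketched as A* with a pay-out budget: any partial path whose length plus an admissible estimate of the remaining distance exceeds the tether length is pruned. The grid world, straight-line tether model, and names are simplifying assumptions, not the dissertation's modified planner:

```python
import heapq
import itertools

def tethered_astar(grid, start, goal, tether):
    """A* on a 4-connected grid (0 = free, 1 = obstacle) where the total
    path length (tether pay-out) may not exceed the tether length."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    tie = itertools.count()                                  # heap tie-breaker
    frontier = [(h(start), 0, next(tie), start, [start])]
    best_g = {}
    while frontier:
        _, g, _, cur, path = heapq.heappop(frontier)
        if cur == goal:
            return path
        if best_g.get(cur, float("inf")) <= g:
            continue
        best_g[cur] = g
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = cur[0] + dr, cur[1] + dc
            ng = g + 1
            # prune branches the tether cannot stretch to the goal from
            if (0 <= r < rows and 0 <= c < cols and grid[r][c] == 0
                    and ng + h((r, c)) <= tether):
                heapq.heappush(frontier,
                               (ng + h((r, c)), ng, next(tie), (r, c), path + [(r, c)]))
    return None
```

A real tether model must also account for the cable wrapping around obstacles, which is why the dissertation modifies A* rather than just bounding path length.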
APA, Harvard, Vancouver, ISO, and other styles
37

Fernández, Baena Adso. "Animation and Interaction of Responsive, Expressive, and Tangible 3D Virtual Characters." Doctoral thesis, Universitat Ramon Llull, 2015. http://hdl.handle.net/10803/311800.

Full text
Abstract:
This thesis is framed within the field of 3D Character Animation. Virtual characters are used in many Human Computer Interaction applications, such as video games and serious games. Within these virtual worlds they move and act in ways similar to humans, controlled by users through some form of interface or by artificial intelligence. This work addresses the challenges of developing smoother movements and more natural behaviors, and of driving motions in real time, intuitively and accurately. The interaction between virtual characters and intelligent objects is also explored. This research contributes to creating more responsive, expressive, and tangible virtual characters. Navigation within virtual worlds uses locomotion such as walking, running, etc. To achieve maximum realism, actors' movements are captured and used to animate virtual characters. This is the philosophy of motion graphs: a structure that embeds movements, where a continuous motion stream is generated by concatenating motion pieces. However, locomotion synthesis using motion graphs involves a tradeoff between the number of possible transitions between different kinds of locomotion and the quality of these transitions, i.e., how smoothly one pose connects to another. To overcome this drawback, we propose the method of progressive transitions using Body Part Motion Graphs (BPMGs). This method deals with partial movements and generates specific, synchronized transitions for each body part (group of joints) within a window of time. Therefore, the connectivity within the system is not linked to the similarity between global poses, allowing us to find more and better-quality transition points while increasing the speed of response and execution of these transitions, in contrast to the standard motion graph method. Secondly, beyond getting faster transitions and smoother movements, virtual characters also interact with each other and with users by speaking.
This interaction requires the creation of gestures appropriate to the voice they reproduce. Gestures are the nonverbal language that accompanies spoken language. The credibility of virtual characters when speaking is linked to the naturalness of their movements and their synchrony with the voice, in both speech and intonation. Consequently, we analyzed the relationship between gestures and speech, and the gestures performed according to that speech. We defined intensity indicators for both gestures (GSI, Gesture Strength Indicator) and speech (PSI, Pitch Strength Indicator). We studied the relationship in time and intensity of these cues in order to establish synchrony and intensity rules. We then applied these rules to select gestures appropriate to the speech input (tagged text from the speech signal) in the Gesture Motion Graph (GMG). The evaluation of the resulting animations shows the importance of relating the intensity of speech and gestures to generate believable animations, beyond time synchronization. Subsequently, we present a system for automatic generation of gestures and facial animation from a speech signal: BodySpeech. This system also includes animation improvements such as increased use of input data, more flexible time synchronization, and new features like editing the style of output animations. In addition, the facial animation also takes speech intonation into account. Finally, we have moved virtual characters from virtual environments to the physical world in order to explore their interaction possibilities with real objects. To this end, we present AvatARs, virtual characters that have a tangible representation and are integrated into reality through augmented reality apps on mobile devices. Users choose a physical object to manipulate in order to control the animation. They can select and configure the animation, which serves as a support for the virtual character represented.
Then, we explored the interaction of AvatARs with intelligent physical objects like the Pleo social robot. Pleo is used to assist hospitalized children in therapy or simply for playing. Despite its benefits, there is a lack of emotional relationship and interaction between the children and Pleo which makes children lose interest eventually. This is why we have created a mixed reality scenario where Vleo (AvatAR as Pleo, virtual element) and Pleo (real element) interact naturally. This scenario has been tested and the results conclude that AvatARs enhances children's motivation to play with Pleo, opening a new horizon in the interaction between virtual characters and robots.
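The transition-point search at the heart of motion graphs, and applied per body part in the BPMGs proposed in this thesis, is a pose-similarity scan between clips. A simplified sketch (whole poses instead of per-body-part joint groups; names and threshold are illustrative):

```python
import numpy as np

def transition_points(clip_a, clip_b, threshold):
    """Candidate transition frames between two motion clips: pairs of
    frames whose pose distance falls below a similarity threshold.
    In a BPMG this comparison is done per body part (group of joints)."""
    pairs = []
    for i, pose_a in enumerate(clip_a):
        for j, pose_b in enumerate(clip_b):
            if np.linalg.norm(np.asarray(pose_a) - np.asarray(pose_b)) < threshold:
                pairs.append((i, j))
    return pairs
```

Comparing partial poses instead of global ones is what lets BPMGs admit more (and smoother) transitions than the global-pose scan shown here.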
APA, Harvard, Vancouver, ISO, and other styles
38

Cavalcante, Fernando Zuher Mohamad Said. "Reconhecimento de movimentos humanos para imitação e controle de um robô humanoide." Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-30112012-160848/.

Full text
Abstract:
In human-robot interaction there are still many limitations to overcome in providing communication that feels natural to the human senses. The ability to interact with humans in a natural way in social contexts (through speech, gestures, facial expressions, and body movements) is a key point in ensuring the acceptance of robots in a society of people who are not specialists in operating robotic devices. Moreover, most existing robots have limited abilities of perception, cognition, and behavior in comparison with humans. In this context, this research project investigated the potential of the NAO humanoid robot's architecture for interacting with humans through the imitation of a person's body movements and through control of the robot. As for sensors, we used a non-intrusive depth camera built into the Kinect device. As for techniques, some mathematical concepts were used to abstract the spatial configurations of certain joints/limbs of the human body; these configurations were captured using the OpenNI library. The experiments covered both imitation and control of the robot and were evaluated by several users. The results of these experiments showed satisfactory performance of the developed system.
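The joint-configuration abstraction mentioned above can be pictured with basic vector geometry: given three tracked 3D positions, the flexion at the middle joint is the angle between the two limb segments. A minimal sketch, with made-up skeleton coordinates rather than actual OpenNI output:

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (radians) formed by segments b->a and b->c."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    return math.acos(dot / (n1 * n2))

# Hypothetical skeleton positions (metres), standing in for tracker output.
shoulder, elbow, wrist = (0.0, 1.4, 0.0), (0.0, 1.1, 0.1), (0.2, 1.1, 0.3)
elbow_flexion = joint_angle(shoulder, elbow, wrist)
```

A controller would then clamp such angles to the robot's joint limits before sending them to the humanoid.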
APA, Harvard, Vancouver, ISO, and other styles
39

Sisbot, Akin. "Towards human-aware robot motions." Phd thesis, Université Paul Sabatier - Toulouse III, 2008. http://tel.archives-ouvertes.fr/tel-00343633.

Full text
Abstract:
Introducing robots into everyday life adds an important problem to the "standard challenge" of autonomous robots: the presence of humans in the robot's environment and the need to interact with them. This work addresses close human-robot interaction from the standpoint of the motion decisions the robot must make to ensure movements that are safe, effective, understandable, and comfortable for the human. We present a general motion-planning framework that explicitly takes the presence of humans into account. This framework is embodied in two planners. The first, the "Human-Aware Navigation Planner", reasons about the human's safety, visibility, posture, and preferences to generate navigation motions that are safe and comfortable for the human. The second, the "Human-Aware Manipulation Planner", handles object hand-over between human and robot; it transforms the initial motion-planning problem into the much richer problem of finding a path "to accomplish a task", making it possible to reason at a higher level of abstraction. Both planners are integrated on two robotic platforms, Jido and Rackham, and validated through user studies within the European project COGNIRON.
APA, Harvard, Vancouver, ISO, and other styles
40

Sisbot, Emrah Akin. "Towards human-aware robot motions." Toulouse 3, 2008. http://thesesups.ups-tlse.fr/755/.

Full text
Abstract:
In an environment where a robot has to move among people, the notion of safety becomes more important and should be studied in every detail. For an interactive robot, the mere feasibility of a task gives way to the "comfort" of the humans involved. For a robot that physically interacts with humans, accomplishing a task at the expense of human comfort is not acceptable, even if the robot does not harm anyone. The robot has to perform motion and manipulation actions and should be able to determine where a given task should be achieved, how to place itself relative to a human, how to approach him/her, how to hand over an object, and how to move in a relatively constrained environment, taking into account the safety and comfort of all the humans in the environment. In this work, we propose a novel motion-planning framework answering these questions, along with its implementation in a navigation planner and a manipulation planner. We present the Human-Aware Navigation Planner, which takes into account the safety, fields of view, preferences, and states of all the humans, as well as the environment, and generates paths that are not only collision-free but also comfortable. We also present the Human-Aware Manipulation Planner, which breaks with the commonly used human-centric approaches and allows the robot to decide and take the initiative about how an object transfer takes place. The human's safety, field of view, state, and preferences, as well as his/her kinematic structure, are taken into account to generate safe and, most importantly, comfortable and legible motions that make the robot's intention clear to its human partner.
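As a rough illustration of the kind of criteria such a planner combines (a sketch with arbitrary weights and decay, not the thesis's actual cost model), a grid cell's cost can mix a safety term that grows near the human with a visibility term that penalizes cells behind the human's back:

```python
import math

def human_aware_cost(cell, human_pos, human_heading, w_safe=1.0, w_vis=0.5):
    """Cost of occupying a grid cell near a human.

    safety: decays exponentially with distance to the human.
    visibility: 0 directly in front of the human, 1 directly behind.
    """
    dx, dy = cell[0] - human_pos[0], cell[1] - human_pos[1]
    dist = math.hypot(dx, dy)
    safety = math.exp(-dist)                  # large when close to the human
    angle = abs(math.atan2(dy, dx) - human_heading)
    angle = min(angle, 2 * math.pi - angle)   # wrap to [0, pi]
    visibility = angle / math.pi
    return w_safe * safety + w_vis * visibility
```

At equal distance, a cell behind the human then costs more than one in front, which is the qualitative behavior a human-aware path optimizer exploits.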
APA, Harvard, Vancouver, ISO, and other styles
41

Sunardi, Mathias I. "Expressive Motion Synthesis for Robot Actors in Robot Theatre." PDXScholar, 2010. https://pdxscholar.library.pdx.edu/open_access_etds/720.

Full text
Abstract:
Lately, personal and entertainment robots are becoming more and more common. This thesis studies the application of entertainment robots in the context of a Robot Theatre; specifically, it focuses on the synthesis of expressive movements, or animations, for the robot performers (Robot Actors). A paradigm that emerged from computer animation is to represent motion data as a set of signals, so that pre-programmed motion data can be quickly modified using common signal-processing techniques such as multiresolution filtering and spectral analysis. However, manual adjustment of the filtering and spectral parameters, and good artistic skills, are still required to obtain the desired expression in the resulting animation. Music contains timing, timbre, and rhythm information which humans can translate into affect and express through movement dynamics, as in dancing. Music data is therefore assumed to contain affective information that can be expressed in the movements of a robot. In this thesis, music data is used as the input signal to generate motion data (Dance) for a custom-made Lynxmotion robot and to modify a sequence of pre-programmed motion data (Scenario) for a KHR-1 robot. The music data, in MIDI format, is parsed for timing and melodic information, which are then mapped to joint-angle values. Surveys were conducted to validate the usefulness and contribution of music signals in adding expressiveness to the movements of a robot for the Robot Theatre application.
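The melodic mapping described above can be sketched as a linear map from MIDI pitch to a joint-angle range. The note events, pitch range, and servo limits below are assumptions for illustration, not the thesis's parameters:

```python
# Note events as (time_sec, MIDI_pitch); in practice these would be
# parsed from a MIDI file's note-on messages.
notes = [(0.0, 60), (0.5, 64), (1.0, 67), (1.5, 72)]

PITCH_LO, PITCH_HI = 48, 84        # assumed melodic range
ANGLE_LO, ANGLE_HI = -45.0, 45.0   # assumed servo limits (degrees)

def pitch_to_angle(pitch):
    """Linearly map a MIDI pitch onto a joint-angle range, clamping
    pitches that fall outside the expected melodic range."""
    t = (pitch - PITCH_LO) / (PITCH_HI - PITCH_LO)
    t = min(max(t, 0.0), 1.0)
    return ANGLE_LO + t * (ANGLE_HI - ANGLE_LO)

# Keyframes: (time, joint angle) pairs driving one servo.
keyframes = [(t, pitch_to_angle(p)) for t, p in notes]
```

Timing information (note onsets, rhythm) would similarly drive when each keyframe is executed.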
APA, Harvard, Vancouver, ISO, and other styles
42

Velor, Tosan. "A Low-Cost Social Companion Robot for Children with Autism Spectrum Disorder." Thesis, Université d'Ottawa / University of Ottawa, 2020. http://hdl.handle.net/10393/41428.

Full text
Abstract:
Robot-assisted therapy is becoming increasingly popular. Research has shown it can benefit persons dealing with a variety of disorders, such as Autism Spectrum Disorder (ASD) and Attention Deficit Hyperactivity Disorder (ADHD), and it can also provide a source of emotional support, e.g. to persons living in seniors' residences. Advances in technology and the falling cost of consumer electronics, computing, and communication products have enabled the development of more advanced social robots at a lower cost, bringing us closer to tools affordable to lower-income individuals and families. Currently, in several cases, intensive treatment for patients with certain disorders (at a level where it becomes effective) is practically impossible through the public health system due to resource limitations and a large existing backlog, while pursuing treatment through the private sector is expensive and unattainable for those with a lower income, placing them at a disadvantage. Effective design and integration of technology, such as social robots used in treatment, reduces the cost considerably, potentially making it financially accessible to lower-income individuals and families in need. The objective of the research reported in this manuscript is to design and implement a social robot that meets the low-cost criteria while also providing the functions required to support children with ASD. The design draws on knowledge acquired through past research involving the use of various types of technology for the treatment of mental and/or emotional disabilities.
APA, Harvard, Vancouver, ISO, and other styles
43

Júnior, Valdir Grassi. "Arquitetura híbrida para robôs móveis baseada em funções de navegação com interação humana." Universidade de São Paulo, 2006. http://www.teses.usp.br/teses/disponiveis/3/3132/tde-19092006-145159/.

Full text
Abstract:
There are some applications in mobile robotics that require human user interaction besides the autonomous navigation control of the robot. For these applications, in a semi-autonomous control mode, the human user can locally modify the autonomous pre-planned robot trajectory by sending continuous commands to the robot. In this case, independently of the user's commands, the intelligent control system must continuously avoid collisions, modifying the user's commands if necessary. This approach creates a safe navigation system that can be used in robotic wheelchairs and manned robotic vehicles, where human safety must be guaranteed. A control system with those characteristics should be based on a suitable mobile-robot architecture. This architecture must integrate the human user's commands with the autonomous control layer of the system, which is responsible for avoiding static and dynamic obstacles and for driving the robot to its navigation goal. In this work we propose a hybrid (deliberative/reactive) mobile-robot architecture with human interaction. This architecture was developed mainly for navigation tasks and allows the robot to be operated at different levels of autonomy: the user can share control of the robot with the system while the system ensures the user's and the robot's safety. In this architecture, a navigation function represents the robot's navigation plan. We propose a method for combining the deliberative behavior responsible for executing the navigation plan with the reactive behaviors defined for navigation and with the continuous human user's inputs. The intelligent control system defined by the proposed architecture was implemented in a robotic wheelchair, and we present some experimental results of the chair operating in different autonomy modes.
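One way to picture this kind of shared control (a sketch under assumed semantics, not the architecture's actual control law) is to scale the user's authority by the clearance to the nearest obstacle, falling back to the descent direction of the navigation function when an obstacle is close:

```python
def blend_command(nav_grad, user_cmd, obstacle_dist, d_safe=0.5, alpha=0.6):
    """Blend the planner's descent direction with the user's command.

    nav_grad:      gradient of the navigation function at the robot's pose
                   (descending it drives the robot toward the goal).
    user_cmd:      continuous velocity command from the human user.
    obstacle_dist: clearance to the nearest obstacle (m).
    The user's authority shrinks to zero as an obstacle gets closer,
    so the autonomous layer always prevents collisions.
    """
    w = min(max(obstacle_dist / d_safe, 0.0), 1.0)   # user authority in [0, 1]
    return tuple(-g * (1.0 - alpha * w) + u * alpha * w
                 for g, u in zip(nav_grad, user_cmd))
```

With full clearance the command is a weighted mix of user and planner; at zero clearance the planner's collision-free direction takes over entirely.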
APA, Harvard, Vancouver, ISO, and other styles
44

Montecillo, Puente Francisco Javier. "Transfert de Mouvement Humain vers Robot Humanoïde." Thesis, Toulouse, INPT, 2010. http://www.theses.fr/2010INPT0119/document.

Full text
Abstract:
The aim of this thesis is to transfer human motion to a humanoid robot online. In the first part of this work, the human motion recorded by a motion-capture system is analyzed to extract salient features that are to be transferred to the humanoid robot; we introduce the humanoid normalized model as the set of motion properties. In the second part, the robot motion that includes the human motion features is computed using inverse kinematics with priority. In order to transfer the motion properties, a stack of tasks is predefined: each motion property in the humanoid normalized model corresponds to one target in the stack of tasks. We propose a framework to transfer human motion online, as close as possible to the human performance, for the upper body. Finally, we study the problem of transferring feet motion. The motion of the feet is analyzed to extract Euclidean trajectories adapted to the robot, and the trajectory of the center of mass which ensures that the robot does not fall is calculated from the feet positions and the inverted-pendulum model of the robot. Using these results, it is possible to achieve complete imitation of upper-body movements, including feet motion.
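The inverse kinematics with priority used here can be sketched, for two tasks, with the classic pseudoinverse and null-space projection scheme, in which the secondary task is resolved without disturbing the primary one (a generic textbook formulation, not the thesis's implementation):

```python
import numpy as np

def two_task_ik(J1, e1, J2, e2):
    """Prioritized IK step: task 1 (Jacobian J1, error e1) is satisfied
    first; task 2 is resolved only inside the null space of task 1."""
    J1_pinv = np.linalg.pinv(J1)
    dq1 = J1_pinv @ e1
    N1 = np.eye(J1.shape[1]) - J1_pinv @ J1      # null-space projector of task 1
    dq2 = np.linalg.pinv(J2 @ N1) @ (e2 - J2 @ dq1)
    return dq1 + N1 @ dq2

# Toy 3-DoF example: task 1 controls the first coordinate, task 2 the second.
dq = two_task_ik(np.array([[1.0, 0.0, 0.0]]), np.array([1.0]),
                 np.array([[0.0, 1.0, 0.0]]), np.array([2.0]))
```

A stack of tasks generalizes this recursively: each new task is projected into the combined null space of all higher-priority tasks.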
APA, Harvard, Vancouver, ISO, and other styles
45

Menychtas, Dimitrios. "Human Body Motions Optimization for Able-Bodied Individuals and Prosthesis Users During Activities of Daily Living Using a Personalized Robot-Human Model." Scholar Commons, 2018. https://scholarcommons.usf.edu/etd/7547.

Full text
Abstract:
Current clinical practice regarding upper body prosthesis prescription and training is lacking a standarized, quantitative method to evaluate the impact of the prosthetic device. The amputee care team typically uses prior experiences to provide prescription and training customized for each individual. As a result, it is quite challenging to determine the right type and fit of a prosthesis and provide appropriate training to properly utilize it early in the process. It is also very difficult to anticipate expected and undesired compensatory motions due to reduced degrees of freedom of a prosthesis user. In an effort to address this, a tool was developed to predict and visualize the expected upper limb movements from a prescribed prosthesis and its suitability to the needs of the amputee. It is expected to help clinicians make decisions such as choosing between a body-powered or a myoelectric prosthesis, and whether to include a wrist joint. To generate the motions, a robotics-based model of the upper limbs and torso was created and a weighted least-norm (WLN) inverse kinematics algorithm was used. The WLN assigns a penalty (i.e. the weight) on each joint to create a priority between redundant joints. As a result, certain joints will contribute more to the total motion. Two main criteria were hypothesized to dictate the human motion. The first one was a joint prioritization criterion using a static weighting matrix. Since different joints can be used to move the hand in the same direction, joint priority will select between equivalent joints. The second criterion was to select a range of motion (ROM) for each joint specifically for a task. The assumption was that if the joints' ROM is limited, then all the unnatural postures that still satisfy the task will be excluded from the available solutions solutions. 
Three sets of static joint prioritization weights were investigated: a set of optimized weights specifically for each task, a general set of static weights optimized for all tasks, and a set of joint absolute average velocity-based weights. Additionally, task joint limits were applied both independently and in conjunction with the static weights to assess the simulated motions they can produce. Using a generalized weighted inverse control scheme to resolve for redundancy, a human-like posture for each specific individual was created. Motion capture (MoCap) data were utilized to generate the weighting matrices required to resolve the kinematic redundancy of the upper limbs. Fourteen able-bodied individuals and eight prosthesis users with a transradial amputation on the left side participated in MoCap sessions. They performed ROM and activities of daily living (ADL) tasks. The methods proposed here incorporate patient's anthropometrics, such as height, limb lengths, and degree of amputation, to create an upper body kinematic model. The model has 23 degrees-of-freedom (DoFs) to reflect a human upper body and it can be adjusted to reflect levels of amputation. The weighting factors resulted from this process showed how joints are prioritized during each task. The physical meaning of the weighting factors is to demonstrate which joints contribute more to the task. Since the motion is distributed differently between able-bodied individuals and prosthesis users, the weighting factors will shift accordingly. This shift highlights the compensatory motion that exist on prosthesis users. The results show that using a set of optimized joint prioritization weights for each specific task gave the least RMS error compared to common optimized weights. The velocity-based weights had a slightly higher RMS error than the task optimized weights but it was not statistically significant. 
The biggest benefit of the velocity-based weight set is its simplicity to implement compared to the optimized weights. Another benefit is that velocity-based weights explicitly show how mobile each joint is during a task, and they can be used alongside the ROM to identify compensatory motion. The inclusion of task joint limits gave lower RMS error when the joint movements were similar across subjects, so the ROM of each joint for the task could be established more accurately. When the joint movements differed too much among participants, the inclusion of task limits was detrimental to the simulation. Therefore, the static set of task-specific optimized weights was found to be the most accurate and robust method, while the velocity-based weights method was simpler with similar accuracy. The methods presented here were integrated into a previously developed graphical user interface (GUI) that allows the clinician to input the data of prospective prosthesis users. The simulated motions can be presented as an animation performing the requested task. Ultimately, the final animation can serve as a proposed kinematic strategy that a prosthesis user and a clinician can refer to as a guideline during the rehabilitation process. This work has the potential to impact current prosthesis prescription and training by providing personalized proposed motions for a task.
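The weighted least-norm resolution this abstract describes can be illustrated with a small numerical sketch. The planar three-link arm, link lengths, and weight values below are illustrative stand-ins (the 23-DoF model is not reproduced in the abstract); the point is that raising a joint's penalty shifts the task motion onto the other redundant joints:

```python
import numpy as np

def wln_ik_step(jacobian, dx, weights):
    """One weighted least-norm step: minimize dq^T W dq subject to J dq = dx.
    A higher weight penalizes a joint's motion, so low-penalty joints
    absorb more of the task velocity."""
    W_inv = np.diag(1.0 / np.asarray(weights, dtype=float))
    JWJt = jacobian @ W_inv @ jacobian.T
    return W_inv @ jacobian.T @ np.linalg.solve(JWJt, dx)

def planar_jacobian(q, link_lengths=(0.3, 0.25, 0.2)):
    """Hand-position Jacobian of a planar 3-link arm (toy stand-in)."""
    l1, l2, l3 = link_lengths
    s1, s12, s123 = np.sin(q[0]), np.sin(q[0] + q[1]), np.sin(q.sum())
    c1, c12, c123 = np.cos(q[0]), np.cos(q[0] + q[1]), np.cos(q.sum())
    return np.array([
        [-l1 * s1 - l2 * s12 - l3 * s123, -l2 * s12 - l3 * s123, -l3 * s123],
        [ l1 * c1 + l2 * c12 + l3 * c123,  l2 * c12 + l3 * c123,  l3 * c123],
    ])

q = np.array([0.4, 0.6, 0.2])
J = planar_jacobian(q)
dx = np.array([0.01, 0.0])                          # small hand displacement
dq_even   = wln_ik_step(J, dx, [1.0, 1.0, 1.0])     # no priority
dq_biased = wln_ik_step(J, dx, [1.0, 1.0, 100.0])   # heavily penalize joint 3
# Both solutions satisfy J dq = dx, but the biased one barely moves joint 3.
```

Both steps reproduce the same hand displacement; the weighting only redistributes which joints provide it, which is exactly the mechanism the abstract uses to encode joint priority.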
APA, Harvard, Vancouver, ISO, and other styles
46

Montecillo, Puente Francisco Javier. "Human Motion Transfer on Humanoid Robot." Phd thesis, 2010. http://oatao.univ-toulouse.fr/7261/1/montecillo.pdf.

Full text
Abstract:
The aim of this thesis is to transfer human motion to a humanoid robot online. In the first part of this work, the human motion recorded by a motion capture system is analyzed to extract the salient features that are to be transferred to the humanoid robot. We introduce the humanoid normalized model as the set of motion properties. In the second part of this work, the robot motion that includes the human motion features is computed using inverse kinematics with priority. In order to transfer the motion properties, a stack of tasks is predefined; each motion property in the humanoid normalized model corresponds to one target in the stack of tasks. We propose a framework to transfer upper-body human motion online, as close as possible to the original human performance. Finally, we study the problem of transferring feet motion. In this study, the motion of the feet is analyzed to extract Euclidean trajectories adapted to the robot. Moreover, the trajectory of the center of mass which ensures that the robot does not fall is calculated from the feet positions and the inverted pendulum model of the robot. Using this result, it is possible to achieve complete imitation of upper-body movements, including feet motion.
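Inverse kinematics with priority, as used in this thesis, resolves a stack of tasks so that a lower-priority target is executed only in the nullspace of all higher-priority ones. A minimal sketch of that recursion follows (the damped pseudoinverse and the toy one-row Jacobians are my own illustrative choices, not the thesis implementation):

```python
import numpy as np

def prioritized_ik(jacobians, targets, n_joints, damping=1e-8):
    """Resolve a stack of tasks in strict priority order: each task is
    solved within the accumulated nullspace of the tasks above it,
    so it cannot disturb them."""
    dq = np.zeros(n_joints)
    N = np.eye(n_joints)                  # nullspace projector of tasks so far
    for J, dx in zip(jacobians, targets):
        Jp = J @ N                        # task restricted to remaining freedom
        # damped pseudoinverse for robustness near singularities
        Jp_pinv = Jp.T @ np.linalg.inv(Jp @ Jp.T + damping * np.eye(J.shape[0]))
        dq = dq + Jp_pinv @ (dx - J @ dq)
        N = N @ (np.eye(n_joints) - Jp_pinv @ Jp)
    return dq

# Two one-dimensional tasks on a 4-DoF chain; they act on disjoint joints,
# so both can be satisfied exactly.
J1 = np.array([[1.0, 0.0, 1.0, 0.0]])    # higher-priority task
J2 = np.array([[0.0, 1.0, 0.0, 1.0]])    # lower-priority task
dq = prioritized_ik([J1, J2], [np.array([0.5]), np.array([0.2])], 4)
```

When the tasks conflict instead of being compatible, the same code silently sacrifices the lower-priority target: its contribution is projected into whatever freedom the first task leaves over, which is the behavior a stack of tasks is designed to guarantee.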
APA, Harvard, Vancouver, ISO, and other styles
47

Chen, Yi-Ru, and 陳羿如. "Human Robot Interaction with Motion Platform." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/43989787584905587513.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Electrical Engineering
97
This thesis presents an upper body tracking method using a monocular camera. The human model is defined in a high-dimensional state space. We propose a hierarchical structure model to solve the tracking problem with a particle filter using partitioned sampling. Spatial and temporal information from the image is used to track the human body and estimate the human posture. During human-robot interaction, a static monocular camera may not obtain enough information from the 2D images, so the camera platform must be moved to a better position to acquire richer image information. The proposed upper body tracking technique then self-adjusts to estimate the human posture during the camera movement. To validate the effectiveness of the proposed tracking approach, extensive experiments were performed, and the results appear quite promising.
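Partitioned sampling, the technique named in this abstract, tames a high-dimensional pose space by propagating, weighting, and resampling one state partition at a time (e.g. torso before arms), so the particle set concentrates hierarchically instead of requiring exponentially many particles. A toy sketch with a two-part state follows; all distributions, noise levels, and the Gaussian "image likelihoods" are illustrative assumptions, not the thesis's observation model:

```python
import numpy as np

rng = np.random.default_rng(0)

def resample(particles, weights):
    """Systematic resampling: duplicates high-weight particles."""
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    idx = np.searchsorted(np.cumsum(weights), positions * weights.sum())
    return particles[np.minimum(idx, n - 1)]

def partitioned_step(particles, partitions, likelihoods, noise=0.05):
    """One partitioned-sampling step: each partition is diffused, weighted
    by its partial likelihood, and resampled before the next partition
    is processed."""
    for part, lik in zip(partitions, likelihoods):
        particles = particles.copy()
        particles[:, part] += rng.normal(0.0, noise, particles[:, part].shape)
        w = lik(particles)
        particles = resample(particles, w / w.sum())
    return particles

# Toy 2-D "pose": component 0 plays the torso, component 1 the arm.
true_pose = np.array([1.0, 2.0])
lik = [lambda p: np.exp(-0.5 * ((p[:, 0] - true_pose[0]) / 0.2) ** 2),
       lambda p: np.exp(-0.5 * ((p[:, 1] - true_pose[1]) / 0.2) ** 2)]
particles = rng.normal(0.0, 1.0, size=(500, 2))
for _ in range(15):
    particles = partitioned_step(particles, [[0], [1]], lik)
est = particles.mean(axis=0)    # posterior mean drifts toward true_pose
```

The per-partition resampling is the key design choice: resampling on the torso likelihood before sampling the arms means arm hypotheses are only spent on torso configurations that already fit the image.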
APA, Harvard, Vancouver, ISO, and other styles
48

Lin, Fu-Wei, and 林富偉. "Intelligent humanoid robot design in human motion imitation." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/02000110582389028831.

Full text
Abstract:
Master's thesis
Ming Chuan University
Department of Computer and Communication Engineering, Master's Program
102
In this paper, we designed a system that enables a humanoid robot to imitate human motion. The system can not only imitate human motion but also control the humanoid robot through it. We use the Kinect to capture human motion, and the corresponding body joint data are transmitted over WiFi to the DARwIn-OP humanoid robot. DARwIn-OP calculates the motor parameters and balances itself to perform movements similar to the human's. The experimental results show that the humanoid robot can follow several kinds of human movements. Finally, this intelligent imitation humanoid robot system can run as a real-time demonstration.
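Mapping captured joint positions to motor commands typically reduces to recovering segment angles from the skeleton points the sensor reports. As a hedged illustration (not the authors' code), an elbow flexion angle can be computed from three Kinect joint positions via the dot product of the two arm segments:

```python
import numpy as np

def joint_angle(parent, joint, child):
    """Angle at `joint` (radians) between the parent->joint and
    joint->child segments, e.g. elbow flexion from the shoulder,
    elbow, and wrist positions reported by a skeleton tracker."""
    u = np.asarray(parent, dtype=float) - np.asarray(joint, dtype=float)
    v = np.asarray(child, dtype=float) - np.asarray(joint, dtype=float)
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cosang, -1.0, 1.0))  # clip guards rounding error

straight = joint_angle([0, 0, 0], [1, 0, 0], [2, 0, 0])  # fully extended: pi
bent     = joint_angle([0, 0, 0], [1, 0, 0], [1, 1, 0])  # 90-degree flexion
```

An angle like this, computed per frame, is what would be rescaled to a servo's range and streamed to the robot; the balancing itself is a separate controller on the robot side.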
APA, Harvard, Vancouver, ISO, and other styles
49

Chen, Wen-Chien, and 陳文建. "SOPC Based Human Biped Motion Tracking Controlfor Human-Sized Biped Robot." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/87205375922556367275.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Department of Electrical Engineering, Master's/Doctoral Program
96
This thesis presents the motion control system design of the human-sized biped robot aiRobot-HBR1 and proposes a human biped motion (HBM) tracking control approach in which humans can control the robot through an integrated sensor control module (ISCM). aiRobot-HBR1 is a human-sized biped robot, 110 cm tall and 40 kg in weight, with a total of 12 DoFs. First, this thesis presents the control structure of the motion control system and the motion pattern planning of the robot. By designing the motor controller, the graphical user interface, and the integrated sensor control module along with the central processing unit, a Nios FPGA, we construct an SOPC-based control platform for developing the control strategies of the robot. Furthermore, this thesis establishes the dynamic model of the integrated sensor control module, which combines a gyro and an accelerometer. A Kalman filter is utilized to estimate the states of the model in order to track the human biped motion. Combining the biped robot with the tracking result of the human-body motion, we propose a real-time HBM tracking control and an HBM recognition approach. The estimated posture is used to control the motion and the behavior of aiRobot-HBR1. Finally, the experimental results indicate the validity of the proposed motion control system, the real-time HBM tracking control, and the HBM recognition.
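The gyro/accelerometer fusion this abstract describes can be sketched in one dimension: the gyro rate drives the Kalman prediction, and the accelerometer-derived tilt angle corrects the accumulated drift. The scalar model and all noise values below are illustrative assumptions, not the thesis's actual ISCM dynamic model:

```python
import numpy as np

def kalman_tilt(gyro_rates, accel_angles, dt, q=1e-4, r=1e-2):
    """Scalar Kalman filter for a single tilt angle.
    Predict by integrating the gyro rate (process noise q), then
    correct with the accelerometer angle (measurement noise r)."""
    angle, P = 0.0, 1.0
    estimates = []
    for omega, z in zip(gyro_rates, accel_angles):
        angle += omega * dt           # predict: integrate the gyro
        P += q
        K = P / (P + r)               # Kalman gain
        angle += K * (z - angle)      # correct with the accelerometer
        P *= 1.0 - K
        estimates.append(angle)
    return np.array(estimates)

# Simulated leaning motion: constant 0.1 rad/s tilt rate for 1 second.
rng = np.random.default_rng(1)
dt, n = 0.01, 100
true_angle = 0.1 * dt * np.arange(1, n + 1)
gyro = 0.1 + rng.normal(0.0, 0.02, n)           # noisy rate gyro
accel = true_angle + rng.normal(0.0, 0.1, n)    # noisy accelerometer tilt
est = kalman_tilt(gyro, accel, dt)
```

The division of labor mirrors the ISCM design: the gyro gives smooth short-term motion but drifts, the accelerometer gives an absolute but noisy tilt reference, and the filter's gain blends them according to their modeled uncertainties.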
APA, Harvard, Vancouver, ISO, and other styles
50

Yu, Yueh-Chi. "Environment and Human Behavior Learning for Robot Motion Control." 2008. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0001-2207200814024900.

Full text
APA, Harvard, Vancouver, ISO, and other styles
