Dissertations / Theses on the topic 'Human-Robot motion'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Human-Robot motion.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Paulin, Rémi. "human-robot motion : an attention-based approach." Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM018.
Full text
For autonomous mobile robots designed to share their environment with humans, path safety and efficiency are not the only aspects guiding their motion: they must follow social rules so as not to cause discomfort to surrounding people. Most socially-aware path planners rely heavily on the concept of social spaces; however, social spaces are hard to model and are of limited use in the context of human-robot interaction, where intrusion into social spaces is necessary. In this work, a new approach for socially-aware path planning is presented that performs well in complex environments as well as in the context of human-robot interaction. Specifically, the concept of attention is used to model how the environment as a whole influences how the robot's motion is perceived by people in close proximity. A new computational model of attention is presented that estimates how our attentional resources are shared amongst the salient elements in our environment. Based on this model, the novel concept of the attention field is introduced, and a path planner that relies on this field is developed to produce socially acceptable paths. To do so, a state-of-the-art many-objective optimization algorithm is successfully applied to the path planning problem. The capabilities of the proposed approach are illustrated in several case studies where the robot is assigned different tasks. First, when the task is to navigate in the environment without causing distraction, our approach produces promising results even in complex situations. Second, when the task is to attract a person's attention in view of interacting with him or her, the motion planner is able to automatically choose a destination that best conveys its desire to interact whilst keeping the motion safe, efficient, and socially acceptable.
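The abstract above mentions applying a many-objective optimization algorithm to path planning. As a generic illustration of the core mechanism — keeping only Pareto-non-dominated candidates across several path costs — here is a minimal sketch. The objective names and numbers are invented for the example and are not taken from the thesis.

```python
# Illustrative sketch (not the thesis' implementation): many-objective path
# selection keeps the Pareto-non-dominated candidates, i.e. paths for which
# no other path is at least as good on every objective and strictly better
# on at least one. The cost vectors below (length, clearance penalty,
# attention disturbance) are hypothetical stand-ins.

def dominates(a, b):
    """True if cost vector `a` Pareto-dominates `b` (all costs minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Return the non-dominated subset of {name: cost-vector} candidates."""
    return {
        name: costs
        for name, costs in candidates.items()
        if not any(dominates(other, costs)
                   for o_name, other in candidates.items() if o_name != name)
    }

paths = {
    "direct":   (4.0, 0.9, 0.8),   # short but intrusive
    "detour":   (6.5, 0.2, 0.1),   # long but discreet
    "balanced": (5.0, 0.4, 0.3),
    "bad":      (7.0, 0.5, 0.4),   # dominated by "balanced"
}
front = pareto_front(paths)
print(sorted(front))  # ['balanced', 'detour', 'direct']
```

With more than three objectives, plain Pareto filtering keeps most candidates, which is why dedicated many-objective algorithms add further selection pressure.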
Lasota, Przemyslaw A. (Przemyslaw Andrzej). "Robust human motion prediction for safe and efficient human-robot interaction." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122497.
Full text
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 175-188).
From robotic co-workers in factories to assistive robots in homes, human-robot interaction (HRI) has the potential to revolutionize a large array of domains by enabling robotic assistance where it was previously not possible. Introducing robots into human-occupied domains, however, requires strong consideration for the safety and efficiency of the interaction. One particularly effective method of supporting safe and efficient human-robot interaction is the use of human motion prediction. By predicting where a person might reach or walk toward in the upcoming moments, a robot can adjust its motions to proactively resolve motion conflicts and avoid impeding the person's movements. Current approaches to human motion prediction, however, often lack the robustness required for real-world deployment. Many methods are designed for predicting specific types of tasks and motions, and do not necessarily generalize well to other domains.
It is also possible that no single predictor is suitable for predicting motion in a given scenario, and that multiple predictors are needed. Due to these drawbacks, it is difficult to deploy prediction on real robotic systems without expert knowledge in the field of human motion prediction. Another key limitation of current human motion prediction approaches lies in deficiencies in partial trajectory alignment. Aligning partially executed motions to a representative trajectory for a motion is a key enabling technology for many goal-based prediction methods. Current approaches to partial trajectory alignment, however, do not provide satisfactory alignments for many real-world trajectories. Specifically, due to reliance on Euclidean distance metrics, overlapping trajectory regions and temporary stops lead to large alignment errors.
In this thesis, I introduce two frameworks designed to improve the robustness of human motion prediction in order to facilitate its use for safe and efficient human-robot interaction. First, I introduce the Multiple-Predictor System (MPS), a data-driven approach that uses given task and motion data to synthesize a high-performing predictor by automatically identifying informative prediction features and combining the strengths of complementary prediction methods. With the use of three distinct human motion datasets, I show that using the MPS leads to lower prediction error in a variety of HRI scenarios, and allows for accurate prediction over a range of time horizons. Second, in order to address the drawbacks of prior alignment techniques, I introduce the Bayesian ESTimator for Partial Trajectory Alignment (BEST-PTA).
This Bayesian estimation framework uses a combination of optimization, supervised learning, and unsupervised learning components that are trained and synthesized from a given set of example trajectories. Through an evaluation on three human motion datasets, I show that BEST-PTA reduces alignment error when compared to state-of-the-art baselines. Furthermore, I demonstrate that this improved alignment reduces human motion prediction error. Lastly, in order to assess the utility of the developed methods for improving safety and efficiency in HRI, I introduce an integrated framework combining prediction with robot planning in time. I describe an implementation and evaluation of this framework on a real physical system. Through this demonstration, I show that the developed approach leads to automatically derived adaptive robot behavior. Using a simulated evaluation, I show that the developed framework leads to improvements in quantitative metrics of safety and efficiency.
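The Euclidean-distance partial alignment that the thesis critiques can be illustrated with a minimal dynamic-time-warping sketch: a partially executed motion is matched against the best prefix of a reference trajectory. This is only the textbook baseline idea, not BEST-PTA itself, and the 1-D trajectories are invented for the example.

```python
# Sketch of Euclidean-distance partial trajectory alignment: DTW of a
# partial motion against prefixes of a reference. Goal-based predictors
# use the matched prefix end to estimate task progress.

def dtw_partial(partial, reference):
    """Align `partial` to the best-matching prefix of `reference`.
    Returns (cost, end_index) of the cheapest prefix alignment."""
    n, m = len(partial), len(reference)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(partial[i - 1] - reference[j - 1])  # Euclidean in 1-D
            D[i][j] = d + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    # best prefix end: cheapest cell in the last row
    end = min(range(1, m + 1), key=lambda j: D[n][j])
    return D[n][end], end

reference = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
partial = [0.1, 0.9, 2.1]          # roughly the first half of the motion
cost, end = dtw_partial(partial, reference)
print(end)  # 3
```

A temporary stop in `partial` (repeated near-identical samples) would be absorbed into spurious warping steps here, which is the failure mode the abstract describes.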
"Funded by the NASA Space Technology Research Fellowship Program and the National Science Foundation"--Page 6
by Przemyslaw A. Lasota.
Ph. D. in Autonomous Systems
Massachusetts Institute of Technology, Department of Aeronautics and Astronautics
Gray, Cobb Susan Valerie. "Perception and orientation issues in human control of robot motion." Thesis, University of Nottingham, 1991. http://eprints.nottingham.ac.uk/11237/.
Full text
Huang, Chien-Ming. "Joint attention in human-robot interaction." Thesis, Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/41196.
Full text
Narsipura, Sreenivasa Manish. "Modeling of human movement for the generation of humanoid robot motion." Thesis, Toulouse, INPT, 2010. http://www.theses.fr/2010INPT0120/document.
Full text
Humanoid robotics is coming of age with faster and more agile robots. To complement the physical complexity of humanoid robots, the robotics algorithms developed to derive their motion have also become progressively complex. The work in this thesis spans two research fields, human neuroscience and humanoid robotics, and brings ideas from the former to aid the latter. By exploring the anthropological link between the structure of a human and that of a humanoid robot, we aim to guide conventional robotics methods such as local optimization and task-based inverse kinematics towards more realistic, human-like solutions. First, we look at dynamic manipulation of human hand trajectories while playing with a yo-yo. By recording human yo-yo playing, we identify the control scheme used as well as a detailed dynamic model of the hand-yo-yo system. Using optimization, this model is then used to implement stable yo-yo playing within the kinematic and dynamic limits of the humanoid HRP-2. The thesis then extends its focus to human and humanoid locomotion. We take inspiration from human neuroscience research on the role of the head in human walking and implement a humanoid robotics analogy to it. By allowing a user to steer the head of a humanoid, we develop a control method that generates deliberative whole-body humanoid motion, including stepping, purely as a consequence of the head movement. This idea of understanding locomotion as a consequence of reaching a goal is extended in the final study, where we look at human motion in more detail. Here, we aim to draw a link between "invariants" in neuroscience and "kinematic tasks" in humanoid robotics. We record and extract stereotypical characteristics of human movements during a walking and grasping task. These results are then normalized and generalized so that they can be regenerated for other anthropomorphic figures with kinematic limits different from those of humans.
The final experiments show a generalized stack of tasks that can generate realistic walking and grasping motion for the humanoid HRP-2. The general contribution of this thesis is to show that while motion planning for humanoid robots can be tackled by classical methods of robotics, the production of realistic movements necessitates combining these methods with the systematic and formal observation of human behavior.
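The task-based inverse kinematics mentioned above can be illustrated with a standard damped-least-squares step for a planar 2-link arm. This is a generic textbook sketch, not the whole-body HRP-2 solver; link lengths, gains, and the damping value are arbitrary choices for the example.

```python
import math

# Generic damped-least-squares IK sketch for a 2-link planar arm:
# dq = J^T (J J^T + lambda^2 I)^{-1} e, iterated until the end-effector
# reaches the target. Not the thesis' whole-body method.

L1, L2 = 1.0, 1.0

def fk(q1, q2):
    """Forward kinematics: end-effector position of the 2-link arm."""
    x = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
    y = L1 * math.sin(q1) + L2 * math.sin(q1 + q2)
    return x, y

def ik_step(q1, q2, target, damping=0.1, gain=0.5):
    x, y = fk(q1, q2)
    ex, ey = target[0] - x, target[1] - y
    # Jacobian of the planar 2-link arm
    j11 = -L1 * math.sin(q1) - L2 * math.sin(q1 + q2)
    j12 = -L2 * math.sin(q1 + q2)
    j21 = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
    j22 = L2 * math.cos(q1 + q2)
    # solve (J J^T + lambda^2 I) w = e for the 2x2 system, then dq = J^T w
    a = j11 * j11 + j12 * j12 + damping ** 2
    b = j11 * j21 + j12 * j22
    d = j21 * j21 + j22 * j22 + damping ** 2
    det = a * d - b * b
    wx = (d * ex - b * ey) / det
    wy = (-b * ex + a * ey) / det
    dq1 = j11 * wx + j21 * wy
    dq2 = j12 * wx + j22 * wy
    return q1 + gain * dq1, q2 + gain * dq2

q1, q2 = 0.3, 0.6
target = (1.2, 0.8)
for _ in range(200):
    q1, q2 = ik_step(q1, q2, target)
x, y = fk(q1, q2)
print(round(x, 2), round(y, 2))  # converges near (1.2, 0.8)
```

The damping term keeps the step well-behaved near singular arm configurations, at the price of slower convergence.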
Umali, Antonio. "Framework For Robot-Assisted Doffing of Personal Protective Equipment." Digital WPI, 2016. https://digitalcommons.wpi.edu/etd-theses/940.
Full text
Pai, Abhishek. "Distance-Scaled Human-Robot Interaction with Hybrid Cameras." University of Cincinnati / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1563872095430977.
Full text
Conte, Dean Edward. "Autonomous Robotic Escort Incorporating Motion Prediction with Human Intention." Thesis, Virginia Tech, 2021. http://hdl.handle.net/10919/102581.
Full text
Master of Science
This thesis presents a method for a mobile robot to escort a human to their destination successfully and efficiently. The proposed technique uses human intention to predict the walking path, allowing the robot to stay in front of the human while walking. Human intention is inferred from the head direction, an effective and previously proven indicator of intention, and is combined with conventional motion prediction. The robot motion is then determined from the predicted human position, allowing for anticipative autonomous escorting. Experimental analysis shows that incorporating the proposed human intention reduces human position prediction error by approximately 35% when turning. Furthermore, experimental validation with a mobile robotic platform shows escorting up to 50% more accurate than conventional techniques, while achieving a 97% success rate. The unique escorting interaction method proposed has applications such as touch-less shopping cart robots, exercise companions, collaborative rescue robots, and sanitary transportation for hospitals.
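The idea of combining conventional motion prediction with head-direction intention can be sketched as a simple blend of the current velocity direction with the gaze direction. The blend weight, time step, and numbers below are invented for illustration; the thesis' actual model is more sophisticated.

```python
import math

# Illustrative sketch (not the thesis' model): predict a walker's next
# position by blending constant-velocity extrapolation with the direction
# the head is pointing, which tends to anticipate turns. The weight `w`
# is a hypothetical free parameter.

def predict(pos, vel, head_angle, dt=0.5, w=0.6):
    speed = math.hypot(*vel)
    # unit vector of the current walking direction
    vx, vy = (vel[0] / speed, vel[1] / speed) if speed else (0.0, 0.0)
    hx, hy = math.cos(head_angle), math.sin(head_angle)
    # blend motion direction with head-indicated intention, renormalize
    dx, dy = (1 - w) * vx + w * hx, (1 - w) * vy + w * hy
    norm = math.hypot(dx, dy) or 1.0
    return (pos[0] + speed * dt * dx / norm,
            pos[1] + speed * dt * dy / norm)

# walking along +x at 1 m/s but already looking 45 degrees left:
# the prediction bends toward the upcoming turn
p = predict((0.0, 0.0), (1.0, 0.0), math.radians(45))
print(tuple(round(v, 2) for v in p))  # (0.44, 0.23)
```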
Nitz, Pettersson Hannes, and Samuel Vikström. "VISION-BASED ROBOT CONTROLLER FOR HUMAN-ROBOT INTERACTION USING PREDICTIVE ALGORITHMS." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-54609.
Full text
Hayne, Rafi. "Toward Enabling Safe & Efficient Human-Robot Manipulation in Shared Workspaces." Digital WPI, 2016. https://digitalcommons.wpi.edu/etd-theses/1012.
Full text
Manasrah, Ahmad Adli. "Human Motion Tracking for Assisting Balance Training and Control of a Humanoid Robot." Scholar Commons, 2012. http://scholarcommons.usf.edu/etd/4141.
Full text
Ososky, Scott. "Influence of Task-Role Mental Models on Human Interpretation of Robot Motion Behavior." Doctoral diss., University of Central Florida, 2013. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/6331.
Full text
Ph.D.
Doctorate
Graduate Studies
Sciences
Modeling & Simulation
Busch, Baptiste. "Optimization techniques for an ergonomic human-robot interaction." Thesis, Bordeaux, 2018. http://www.theses.fr/2018BORD0027/document.
Full text
Human-Robot Interaction (HRI) is a growing field in the robotics community. By its very nature it brings together researchers from various domains, including psychology, sociology, and obviously robotics, who are shaping and designing the robots people will interact with on a daily basis. As humans and robots start working in a shared environment, the diversity of tasks they can accomplish together is rapidly increasing. This creates challenges and raises concerns to be addressed in terms of safety and acceptance of the robotic systems. Human beings have specific needs and expectations that have to be taken into account when designing robotic interactions. In a sense, there is a strong need for a truly ergonomic human-robot interaction. In this thesis, we propose methods to include ergonomics and human factors in the motion- and decision-planning algorithms, to automate the process of generating an ergonomic interaction. The solutions we propose make use of cost functions that encapsulate the human needs and enable the optimization of the robot's motions and choices of actions. We have applied our method to two common problems of human-robot interaction. First, we propose a method to increase the legibility of the robot's motions, to achieve a better understanding of its intentions. Our approach does not require modeling the concept of legible motions, but penalizes trajectories that lead to late predictions or mispredictions of the robot's intentions during live execution of a shared task. In several user studies we achieve substantial gains in terms of prediction time and reduced interpretation errors. Second, we tackle the problem of choosing actions and planning motions that maximize the physical ergonomics on the human side.
Using a well-accepted ergonomic evaluation function of human postures, we simulate the actions and motions of both the human and the robot to accomplish a specific task, while avoiding situations where the human could be at risk in terms of working posture. The conducted user studies show that our method leads to safer working postures and a better-perceived interaction.
Mielke, Erich Allen. "Force and Motion Based Methods for Planar Human-Robot Co-manipulation of Extended Objects." BYU ScholarsArchive, 2018. https://scholarsarchive.byu.edu/etd/6767.
Full text
Yussof, Hanafiah, Mitsuhiro Yamano, Yasuo Nasu, and Masahiro Ohka. "Design of a 21-DOF Humanoid Robot to Attain Flexibility in Human-Like Motion." IEEE, 2006. http://hdl.handle.net/2237/9506.
Full text
Rivera, Francisco. "Using Motion Capture and Virtual Reality to test the advantages of Human Robot Collaboration." Thesis, Högskolan i Skövde, Institutionen för ingenjörsvetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-17205.
Full text
Smith, Christian. "Input Estimation for Teleoperation : Using Minimum Jerk Human Motion Models to Improve Telerobotic Performance." Doctoral thesis, KTH, Datorseende och robotik, CVAP, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-11590.
Full textQC 20100810
Vassallo, Christian. "Using human-inspired models for guiding robot locomotion." Thesis, Toulouse 3, 2016. http://www.theses.fr/2016TOU30177/document.
Full text
This thesis was carried out within the framework of the European project Koroibot, which aims at developing advanced algorithms to improve the locomotion of humanoid robots. It is organized in three parts. To steer robots safely and efficiently among humans, it is necessary to understand the rules, principles, and strategies of human locomotion and to transfer them to robots. The goal of this thesis is to investigate and identify human locomotion strategies and to create algorithms that could be used to improve robot capabilities. A first contribution is the analysis of the pedestrian principles that guide collision avoidance strategies. In particular, we observe how humans adapt a goal-directed locomotion task when they have to interfere with a moving obstacle crossing their way. We show differences both in the strategy humans use to avoid a non-collaborative obstacle compared with avoiding another human, and in the way humans interact with an object moving in a human-like way. Second, we present work done in collaboration with computational neuroscientists. We propose a new approach to synthesize realistic, complex humanoid robot movements with motion primitives. Human walking-to-grasp trajectories were recorded. The whole-body movements are retargeted and scaled to match the humanoid robot's kinematics. Based on this database of movements, we extract the motion primitives. We prove that these source signals can be expressed as stable solutions of an autonomous dynamical system, which can be regarded as a system of coupled central pattern generators (CPGs). Based on this approach, reactive walking-to-grasp strategies have been developed and successfully tested on the humanoid robot HRP at LAAS-CNRS. In the third part of the thesis, we present a new approach to the problem of vision-based steering of a nonholonomic robot constrained to pass through a door.
The door is represented by two landmarks located on its vertical supports. The planar geometry built around the door consists of bundles of hyperbolae, ellipses, and orthogonal circles. We prove that this geometry can be measured directly in the camera image plane and that the proposed vision-based control strategy can also be related to human behavior. Realistic simulations and experiments are reported to show the effectiveness of our solutions.
Boberg, Arvid. "Virtual lead-through robot programming : Programming virtual robot by demonstration." Thesis, Högskolan i Skövde, Institutionen för ingenjörsvetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-11403.
Full text
Desormeaux, Kevin. "Temporal models of motions and forces for Human-Robot Interactive manipulation." Thesis, Toulouse 3, 2019. http://www.theses.fr/2019TOU30221.
Full text
It was in the 1970s that interest in robotics really emerged, barely half a century ago, and since then robots have been replacing humans in industry. This robot-oriented solution does not come without drawbacks, as full automation requires time-consuming programming as well as rigid environments. With the increased need for adaptability and reusability of assembly systems, robotics is undergoing major changes and is seeing the emergence of a new type of collaboration between humans and robots. Human-robot collaboration gets the best of both worlds by combining the respective strengths of humans and robots. But to include the human as an active agent in these new collaborative workspaces, safe and flexible robots are required. It is in this context that we can apprehend the crucial role of motion generation in tomorrow's robotics. For human-robot cooperation to emerge, robots have to generate motions ensuring the safety of humans, both physical and psychological. For this reason, motion generation has been a limiting factor for the growth of robotics in the past. Trajectories are excellent candidates for producing the desirable motions designed for collaborative robots, because they allow motions to be described simply and precisely. Smooth trajectories are well known to provide safe motions with good ergonomic properties. In this thesis we propose an Online Trajectory Generation algorithm based on sequences of segments of third-degree polynomial functions to build smooth trajectories. These trajectories are built from arbitrary initial and final conditions, a requirement for robots to be able to react instantaneously to unforeseen events. Our approach, built on a constrained-jerk model, offers performance-oriented solutions: the trajectories are time-optimal under safety constraints. These safety constraints are kinematic constraints that are task- and context-dependent and must be specified.
To guide the choice of these constraints, we investigated the role of kinematics in defining the ergonomic properties of motions. We also extended our algorithm to cope with non-admissible initial configurations, opening the way to trajectory generation under non-constant motion constraints. [...]
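The building block described above — a trajectory segment of a third-degree polynomial, i.e. constant jerk — has a simple closed form obtained by integrating the jerk. The values below are arbitrary illustrations, not the thesis' algorithm, which chains many such segments so that jerk, acceleration, and velocity stay within bounds.

```python
# Constant-jerk (cubic-position) segment: integrating jerk gives
# acceleration, velocity, and position in closed form. OTG algorithms
# concatenate such segments into smooth, bounded trajectories.

def segment_state(x0, v0, a0, jerk, t):
    """State (x, v, a) after time t on a constant-jerk segment."""
    a = a0 + jerk * t
    v = v0 + a0 * t + 0.5 * jerk * t**2
    x = x0 + v0 * t + 0.5 * a0 * t**2 + jerk * t**3 / 6.0
    return x, v, a

# accelerate from rest with jerk = 2 m/s^3 for 1 s
x, v, a = segment_state(0.0, 0.0, 0.0, 2.0, 1.0)
print(x, v, a)  # 0.333..., 1.0, 2.0
```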
Lazarov, Kristiyan, and Badi Mirzai. "Behaviour-Aware Motion Planning for Autonomous Vehicles Incorporating Human Driving Style." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254224.
Full text
Glorieux, Emile. "Multi-Robot Motion Planning Optimisation for Handling Sheet Metal Parts." Doctoral thesis, Högskolan Väst, Avdelningen för produktionssystem (PS), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:hv:diva-10947.
Full text
Taqi, Sarah M. A. M. "Reproduction of Observed Trajectories Using a Two-Link Robot." The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1308031627.
Full text
Waldhart, Jules. "A NEW FORMULATION AND RESOLUTION SCHEMES FOR PLANNING COLLABORATIVE HUMAN-ROBOT TASKS." Thesis, Toulouse, INSA, 2018. http://www.theses.fr/2018ISAT0047.
Full text
When interacting with humans, robotic systems must behave in compliance with some of our socio-cultural rules, and every component of the robot has to take them into account. When deciding which action to perform and how to perform it, the system then needs to communicate pertinent contextual information to its components so they can plan while respecting these rules. It is also essential for such a robot to ensure smooth coordination with its human partners. We humans use many cues for synchronization, such as gaze, legible motions, and speech. We are good at inferring what actions are available to our partner, which helps us get an idea of what others are going to do (or what they should do) so we can better plan our own actions. Enabling the robot with such capacities is key in the domain of human-robot interaction. This thesis presents our approach to solving two tasks where humans and robots collaborate closely: a transport problem where multiple robots and humans need to, or can, hand over an object to bring it from one place to another, and a guiding task where the robot helps humans orient themselves using speech, navigation, and deictic gestures (pointing). We present our implementation of the components and their articulation in an architecture where contextual information is transmitted from higher-level decision components to lower ones, which use it to adapt. Our planners also plan for the human actions, as in a multi-robot system: this allows the system not to wait for humans to act, but rather to be proactive in proposing a solution and to try to predict the actions they will take.
Gharbi, Mamoun. "Geometric reasoning planning in the context of Human-Robot Interaction." Thesis, Toulouse, INSA, 2015. http://www.theses.fr/2015ISAT0047/document.
Full text
In the last few years, the field of human-robot interaction (HRI) has been in the spotlight of the robotics community. One aspect of this field is making robots act in the presence of humans while keeping them safe and comfortable. In order to achieve this, a robot needs to plan its actions while explicitly taking the humans into account, and to adapt its plans to their whereabouts, capacities, and preferences. The first part of this thesis is about human-robot handover: where, when, and how to perform it? Depending on the human's preferences, it may or may not be better to share the handover effort between him or her and the robot, while in other cases a single handover might not be enough to achieve the goal (bringing the object to a target agent) and a sequence of handovers might be needed. In any case, during the handover a number of cues should be used by both protagonists. One of the most used cues is gaze: when the giver reaches out with his arm, he should look at the object, and when the motion is finished, he should look at the receiver's face to facilitate the transfer. The handover can be considered a basic action in a bigger plan. The second part of this thesis reports on a formalization of these kinds of "basic actions" and more complex ones through the use of conditions, search spaces, and constraints. It also reports on a framework and different algorithms used to solve and compute these actions based on their description. The last part of the thesis shows how the previously cited framework can fit with a higher-level planner (such as a task planner), and a method to combine a symbolic and a geometric planner. The task planner uses external calls to the geometric planner to assess the feasibility of the current task and, in case of success, retrieves the state of the world provided by the geometric reasoner and uses it to continue planning. This part also shows different extensions enabling a faster search.
Some of these extensions are "geometric checks", where we test the infeasibility of multiple actions at once; "constraints", where adding constraints at the symbolic level can drive the geometric search; and "cost-driven search", where the symbolic planner uses information from the geometric one to prune out overly costly plans.
Adorno, Bruno. "Two-arm Manipulation : from Manipulators to Enhanced Human-Robot Collaboration." Thesis, Montpellier 2, 2011. http://www.theses.fr/2011MON20064/document.
Full text
This thesis is devoted to the study of robotic two-arm coordination and manipulation from a unified perspective, and conceptually different bimanual tasks are thus described within the same formalism. In order to provide a consistent and compact theory, the techniques presented herein use dual quaternions to represent every single aspect of robot kinematic modeling and control. A novel representation for two-arm manipulation is proposed, the cooperative dual task-space, which exploits the dual quaternion algebra to unify the various approaches found in the literature. The method is further extended to take into account any serially coupled kinematic chain, and a case study is performed using a simulated mobile manipulator. An original application of the cooperative dual task-space is proposed to intuitively represent general human-robot collaboration (HRC) tasks, and several experiments were performed to validate the proposed techniques. Furthermore, the thesis proposes a novel class of HRC tasks wherein the robot controls all the coordination aspects; that is, in addition to controlling its own arm, the robot controls the human arm by means of functional electrical stimulation (FES). Thanks to the holistic approach developed throughout the thesis, the resultant theory is compact, uses a small set of mathematical tools, and is capable of describing and controlling a broad range of robot manipulation tasks.
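The dual-quaternion machinery underlying this line of work is standard: a rigid transform is written q = r + eps*d with d = (1/2) t r, where r is a unit rotation quaternion and t the translation as a pure quaternion, and transforms compose by dual-quaternion multiplication. A minimal sketch, independent of the thesis' cooperative task-space construction:

```python
# Minimal dual-quaternion sketch: a rigid transform as (real, dual)
# quaternion pair, composition by multiplication, and translation
# recovery via t = 2 d r*.

def qmul(a, b):
    """Hamilton product of quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def qadd(a, b):
    return tuple(u + v for u, v in zip(a, b))

def qscale(s, a):
    return tuple(s * u for u in a)

def dq_from_rt(r, t):
    """Dual quaternion (real, dual) from unit rotation r and translation t."""
    return r, qscale(0.5, qmul((0.0, *t), r))

def dq_mul(A, B):
    """Compose rigid transforms: (Ar + eps Ad)(Br + eps Bd)."""
    Ar, Ad = A
    Br, Bd = B
    return qmul(Ar, Br), qadd(qmul(Ar, Bd), qmul(Ad, Br))

def dq_translation(D):
    """Recover the translation vector: t = 2 d r*."""
    (w, x, y, z), d = D
    r_conj = (w, -x, -y, -z)
    t = qmul(qscale(2.0, d), r_conj)
    return t[1:]

# two pure translations compose by vector addition
A = dq_from_rt((1.0, 0.0, 0.0, 0.0), (1.0, 0.0, 0.0))
B = dq_from_rt((1.0, 0.0, 0.0, 0.0), (0.0, 2.0, 0.0))
print(dq_translation(dq_mul(A, B)))  # (1.0, 2.0, 0.0)
```

The appeal for two-arm manipulation is that one eight-number object carries rotation and translation together, so relative and absolute two-arm poses stay in a single algebra.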
Houda, Taha. "Human Interaction in a large workspace parallel robot platform with a virtual environment." Electronic Thesis or Diss., université Paris-Saclay, 2020. http://www.theses.fr/2020UPASG047.
Full text
The objective of this thesis concerns the definition, implementation, and evaluation of a motion cueing algorithm that takes into account the perceptual constraints of the human vestibular system and the constraints related to the movement physics of the simulator used. The latter consists of a series-parallel robotic platform with 8 degrees of freedom, entirely designed in the laboratory and intended primarily to assist people with motor disabilities. This sensory restitution requires multidisciplinary research work in robotics and virtual reality. Moreover, a formalization of dynamic modeling, based on the state of the art, was adapted, and the dynamic parameters were optimized and identified for the 8-degree-of-freedom motion platform. Several methods of trajectory generation, exploiting the platform's redundancy, were studied, implemented, and compared. The most efficient, the particle swarm optimization (PSO) method, was chosen. This algorithm is then used to optimize the parameters of the platform's sliding-mode controller. The simulator was used for a virtual reality skiing application dedicated to disabled people, reproducing the Combloux resort in Haute-Savoie. The simulation results show very good trajectory-tracking behavior and a good reduction in oscillations. This work will be continued through the use of multi-sensory human-assisted virtual reality interfaces.
Eshelman-Haynes, Candace Lee. "Visual contributions to spatial perception during a remote navigation task." Wright State University / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=wright1247510065.
Full text
Diaz-Mercado, Yancy J. "Interactions in multi-robot systems." Diss., Georgia Institute of Technology, 2016. http://hdl.handle.net/1853/55020.
Full text
Bartholomew, Paul D. "Optimal behavior composition for robotics." Thesis, Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/51872.
Full text
Wei, Junqing. "Autonomous Vehicle Social Behavior for Highway Driving." Research Showcase @ CMU, 2017. http://repository.cmu.edu/dissertations/919.
Full text
Guerriero, Brian A. "Haptic control and operator-guided gait coordination of a pneumatic hexapedal rescue robot." Thesis, Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/24626.
Full text
Gielniak, Michael Joseph. "Adaptation of task-aware, communicative variance for motion control in social humanoid robotic applications." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/43591.
Full text
Mainprice, Jim. "Planification de mouvement pour la manipulation d'objets sous contraintes d'interaction homme-robot." Phd thesis, INSA de Toulouse, 2012. http://tel.archives-ouvertes.fr/tel-00782708.
Full text
Lallement, Raphael. "Symbolic and Geometric Planning for teams of Robots and Humans." Thesis, Toulouse, INSA, 2016. http://www.theses.fr/2016ISAT0010/document.
Full text
Hierarchical Task Network (HTN) planning is a popular approach for building task plans to control intelligent systems. This thesis presents the HATP (Hierarchical Agent-based Task Planner) planning framework, which extends the traditional HTN planning domain representation and semantics by making them more suitable for roboticists and by offering human-awareness capabilities. When computing human-aware robot plans, the problems turn out to be very complex and highly intricate. To deal with this complexity, we have integrated a geometric planner to reason about the actual impact of actions on the environment and to take affordances (reachability, visibility) into account. This thesis presents this integration between two heterogeneous planning layers in detail and explores how they can be combined to solve new classes of robotic planning problems.
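The core HTN mechanism referenced above — recursively decomposing compound tasks via methods until only primitive actions remain — can be sketched in a few lines. The tiny domain below is invented for illustration and is not taken from HATP.

```python
# Minimal HTN-style decomposition sketch: compound tasks map to ordered
# subtask lists; anything without a method is treated as primitive.
# The "serve_drink" domain is hypothetical.

methods = {
    "serve_drink": ["fetch_cup", "fill_cup", "give_cup"],
    "fetch_cup": ["goto_shelf", "pick_cup", "goto_table", "place_cup"],
}

def decompose(task):
    """Depth-first expansion of a task into a primitive action sequence."""
    if task not in methods:          # primitive action
        return [task]
    plan = []
    for subtask in methods[task]:
        plan.extend(decompose(subtask))
    return plan

print(decompose("serve_drink"))
# ['goto_shelf', 'pick_cup', 'goto_table', 'place_cup', 'fill_cup', 'give_cup']
```

A real HTN planner additionally threads world state through the expansion, checks preconditions, and backtracks over alternative methods; combining this with a geometric planner means each candidate primitive is also tested for geometric feasibility.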
Zanlongo, Sebastian A. "Multi-Robot Coordination and Scheduling for Deactivation & Decommissioning." FIU Digital Commons, 2018. https://digitalcommons.fiu.edu/etd/3897.
Full text
Fernández, Baena Adso. "Animation and Interaction of Responsive, Expressive, and Tangible 3D Virtual Characters." Doctoral thesis, Universitat Ramon Llull, 2015. http://hdl.handle.net/10803/311800.
Full text
This thesis is framed within the world of the animation of three-dimensional virtual characters. Virtual characters are used in many human-computer interaction applications, such as video games and serious games, where they move and act similarly to humans inside virtual worlds, and where they are controlled by users through some interface, or otherwise by intelligent systems. Challenges such as achieving fluid movements and natural behavior, controlling motion in real time in an intuitive and precise way, and even exploring the interaction of virtual characters with intelligent physical elements are addressed here with the goal of contributing to the generation of responsive, expressive and tangible virtual characters. Navigation within virtual worlds makes use of locomotions such as walking, running, etc. To achieve maximum realism, the movements of actors are captured and reused to animate the virtual characters. This is how motion graphs work: a structure that encapsulates motions and, through searches within it, concatenates them to create a continuous stream. Synthesizing locomotion with motion graphs involves a trade-off between the number of transitions between the different locomotions and their quality (the similarity between the postures to be connected). To overcome this drawback, we propose the progressive transitions method using Body Part Motion Graphs (BPMGs). This method handles motions partially, generating specific, synchronized transitions for each body part (group of joints) within a time window. The connectivity of the system is therefore not tied to the similarity of global postures, allowing more and higher-quality transition points to be found and, above all, increasing the speed of response and execution of transitions with respect to standard motion graphs.
Secondly, beyond achieving fast transitions and fluid movements, virtual characters also interact with each other and with users by speaking, creating the need to generate movements appropriate to the speech they reproduce. Gestures are part of the non-verbal language that usually accompanies speech. The believability of talking virtual characters is tied to the naturalness of their movements and to how well these match the speech, especially its intonation. We have therefore analyzed the relationship between gestures and speech, and carried out the consequent generation of gestures according to speech. Intensity indicators have been defined both for gestures (GSI, Gesture Strength Indicator) and for speech (PSI, Pitch Strength Indicator), and the relationship between the timing and intensity of the two signals has been studied to establish rules of temporal and intensity synchrony. We then present the Gesture Motion Graph (GMG), which selects gestures suited to the input speech (text annotated from the speech signal) and the aforementioned rules. The evaluation of the resulting animations demonstrates the importance of relating intensity, beyond temporal synchronization, to generate believable animations. Subsequently, we present a system for the automatic generation of gestures and facial animation from a speech signal: BodySpeech. This system also includes improvements in animation, such as greater reuse of the input data and more flexible synchronization, and new functionalities such as editing the style of the output animations. In addition, the facial animation also takes the intonation of the speech into account. Finally, the virtual characters have been moved from virtual environments to the physical world in order to explore the possibilities of interaction with real objects.
To this end, we present AvatARs, virtual characters that have a tangible representation and are visualized integrated into reality through a mobile device thanks to augmented reality. The animation is controlled by means of a physical object that the user manipulates, selecting and parameterizing the animations, and which at the same time serves as a support for the representation of the virtual character. Subsequently, the interaction of AvatARs with intelligent physical objects such as the Pleo social robot has been explored. Pleo is used to assist hospitalized children in therapy or simply to play. Despite its benefits, there is a lack of emotional bonding and interaction between the children and Pleo, which over time causes the children to lose interest in it. We have therefore created a mixed interaction scenario where Vleo (an AvatAR in the form of a Pleo; the virtual element) and Pleo (the real element) interact naturally. This scenario has been tested, and the results conclude that AvatARs improve the motivation to play with Pleo, opening a new horizon in the interaction of virtual characters with robots.
Cavalcante, Fernando Zuher Mohamad Said. "Reconhecimento de movimentos humanos para imitação e controle de um robô humanoide." Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-30112012-160848/.
Full text
In human-robot interaction there are still many restrictions to overcome in providing communication that feels natural to the human senses. The ability to interact with humans in a natural way in social contexts (using speech, gestures, facial expressions and body movements) is a key point in ensuring the acceptance of robots in a society of people not specialized in the manipulation of robotic devices. Moreover, most existing robots have limited abilities of perception, cognition and behavior in comparison with humans. In this context, this research project investigated the potential of the robotic architecture of the NAO humanoid robot in terms of its ability to interact with humans through imitation of a person's body movements and through control of the robot. As for sensing, we used a non-intrusive depth-camera sensor built into the Kinect device. As for techniques, some mathematical concepts were applied to abstract the spatial configurations of certain joints/members of the human body; these configurations were captured using the OpenNI library. The experiments performed covered both imitation and control of the robot, evaluated by various users, and their results showed satisfactory performance of the developed system.
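The joint-configuration abstraction this abstract describes — recovering a joint angle from three 3D skeleton positions returned by a tracker such as OpenNI — reduces to simple vector geometry. A minimal sketch (the function name and example coordinates are illustrative, not the thesis's actual code):

```python
import math

def joint_angle(parent, joint, child):
    """Angle in degrees at `joint`, formed by the segments to `parent`
    and `child`, each given as an (x, y, z) skeleton position."""
    u = tuple(p - j for p, j in zip(parent, joint))
    v = tuple(c - j for c, j in zip(child, joint))
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    cos_t = max(-1.0, min(1.0, dot / (nu * nv)))  # clamp rounding error
    return math.degrees(math.acos(cos_t))

# Elbow angle from shoulder / elbow / hand positions (metres):
shoulder, elbow, hand = (0.0, 1.4, 0.0), (0.3, 1.4, 0.0), (0.3, 1.1, 0.0)
print(round(joint_angle(shoulder, elbow, hand)))  # 90
```

The same computation, applied per joint, yields the configuration vector that is then retargeted to the robot's actuators.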
Sisbot, Akin. "Towards human-aware robot motions." Phd thesis, Université Paul Sabatier - Toulouse III, 2008. http://tel.archives-ouvertes.fr/tel-00343633.
Full text
Sisbot, Emrah Akin. "Towards human-aware robot motions." Toulouse 3, 2008. http://thesesups.ups-tlse.fr/755/.
Full text
In an environment where a robot has to move among people, the notion of safety becomes more important and should be studied in every detail. For an interactive robot, the feasibility of a task gives way to the "comfort" of the humans involved. For a robot that physically interacts with humans, accomplishing a task at the expense of human comfort is not acceptable, even if the robot does not harm any person. The robot has to perform motion and manipulation actions and should be able to determine where a given task should be achieved, how to place itself relative to a human, how to approach him or her, how to hand over an object, and how to move in a relatively constrained environment, taking into account the safety and comfort of all the humans in the environment. In this work, we propose a novel motion planning framework answering these questions, along with its implementation in a navigation planner and a manipulation planner. We present the Human-Aware Navigation Planner, which takes into account the safety, fields of view, preferences and states of all the humans, as well as the environment, and generates paths that are not only collision-free but also comfortable. We also present the Human-Aware Manipulation Planner, which breaks with the commonly used human-centric approaches and allows the robot to decide and take the initiative about the way an object transfer takes place. The human's safety, field of view, state and preferences, as well as his or her kinematic structure, are taken into account to generate safe and, most importantly, comfortable and legible motions that make the robot's intention clear to its human partner.
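The human-centred costs such a planner combines can be sketched as a position-dependent discomfort function: a safety term that penalises proximity to the person and a visibility term that penalises approaching from outside the field of view. The weights and fall-off constants below are illustrative assumptions, not the thesis's tuned values:

```python
import math

def social_cost(robot_xy, human_xy, human_heading,
                w_safety=1.0, w_visibility=0.5):
    """Discomfort cost of a candidate robot position around a human.
    Safety: Gaussian penalty on proximity (sigma = 0.8 m).
    Visibility: penalty for positions behind the person, fading with
    distance (sigma = 2.0 m)."""
    dx, dy = robot_xy[0] - human_xy[0], robot_xy[1] - human_xy[1]
    d = math.hypot(dx, dy)
    safety = math.exp(-d ** 2 / (2 * 0.8 ** 2))
    bearing = math.atan2(dy, dx) - human_heading       # human -> robot angle
    bearing = math.atan2(math.sin(bearing), math.cos(bearing))
    behindness = (1 - math.cos(bearing)) / 2           # 0 in front, 1 behind
    visibility = behindness * math.exp(-d ** 2 / (2 * 2.0 ** 2))
    return w_safety * safety + w_visibility * visibility
```

A grid of such costs gives the planner a field in which approaching from the front at a respectful distance is cheap, while passing close behind the person is expensive.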
Sunardi, Mathias I. "Expressive Motion Synthesis for Robot Actors in Robot Theatre." PDXScholar, 2010. https://pdxscholar.library.pdx.edu/open_access_etds/720.
Full textVelor, Tosan. "A Low-Cost Social Companion Robot for Children with Autism Spectrum Disorder." Thesis, Université d'Ottawa / University of Ottawa, 2020. http://hdl.handle.net/10393/41428.
Full textJúnior, Valdir Grassi. "Arquitetura híbrida para robôs móveis baseada em funções de navegação com interação humana." Universidade de São Paulo, 2006. http://www.teses.usp.br/teses/disponiveis/3/3132/tde-19092006-145159/.
Full text
There are some applications in mobile robotics that require human user interaction besides the autonomous navigation control of the robot. For these applications, in a semi-autonomous control mode, the human user can locally modify the autonomous pre-planned robot trajectory by sending continuous commands to the robot. In this case, independently of the user's commands, the intelligent control system must continuously avoid collisions, modifying the user's commands if necessary. This approach creates a safe navigation system that can be used in robotic wheelchairs and manned robotic vehicles where human safety must be guaranteed. A control system with these characteristics should be based on a suitable mobile robot architecture. This architecture must integrate the human user's commands with the autonomous control layer of the system, which is responsible for avoiding static and dynamic obstacles and for driving the robot to its navigation goal. In this work we propose a hybrid (deliberative/reactive) mobile robot architecture with human interaction. This architecture was developed mainly for navigation tasks and allows the robot to be operated at different levels of autonomy. The user can share the robot control with the system while the system ensures the user's and the robot's safety. In this architecture, a navigation function is used to represent the robot's navigation plan. We propose a method for combining the deliberative behavior responsible for executing the navigation plan with the reactive behaviors defined to be used while navigating, and with the continuous human user's inputs. The intelligent control system defined by the proposed architecture was implemented in a robotic wheelchair, and we present experimental results of the chair operating in different autonomy modes.
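The shared-control idea — the user's continuous command is blended with the autonomous command derived from the navigation plan, and overridden only as far as safety requires — can be sketched in a few lines. The blending weight and the safety predicate are illustrative assumptions, not the thesis's actual policy:

```python
def blend_command(user_v, auto_v, alpha=0.6, is_safe=lambda v: True):
    """Blend the user's velocity command with the autonomous command
    derived from the navigation function; if the blend would be unsafe,
    fall back to the autonomous (collision-free) command."""
    v = tuple(alpha * u + (1 - alpha) * a for u, a in zip(user_v, auto_v))
    return v if is_safe(v) else auto_v

# The user pushes sideways while the plan says straight ahead:
print(blend_command((0.0, 1.0), (1.0, 0.0)))  # (0.4, 0.6)
```

The key property is that the user's input bends the trajectory locally but can never produce a command the safety layer rejects.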
Montecillo, Puente Francisco Javier. "Transfert de Mouvement Humain vers Robot Humanoïde." Thesis, Toulouse, INPT, 2010. http://www.theses.fr/2010INPT0119/document.
Full text
The aim of this thesis is to transfer human motion to a humanoid robot online. In the first part of this work, human motion recorded by a motion capture system is analyzed to extract the salient features that are to be transferred to the humanoid robot. We introduce the humanoid normalized model as the set of motion properties. In the second part, the robot motion that includes the human motion features is computed using inverse kinematics with priority. In order to transfer the motion properties, a stack of tasks is predefined: each motion property in the humanoid normalized model corresponds to one target in the stack of tasks. We propose a framework for transferring upper-body human motion online, as closely as possible to the human performance. Finally, we study the problem of transferring feet motion. The motion of the feet is analyzed to extract Euclidean trajectories adapted to the robot, and the trajectory of the center of mass that ensures the robot does not fall is computed from the feet positions and the robot's inverted pendulum model. Using this result, it is possible to achieve complete imitation of upper-body movements together with feet motion.
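The "inverse kinematics with priority" machinery behind such a stack of tasks is usually realised through nullspace projection: each lower-priority target is solved only in the nullspace of the tasks above it, so it can never disturb them. A generic two-task sketch with NumPy (the toy Jacobians in the usage below are stand-ins, not the robot's):

```python
import numpy as np

def prioritized_dq(J1, e1, J2, e2, rcond=1e-6):
    """Joint velocity realising task 1 in a least-squares sense and
    task 2 only within the nullspace of task 1."""
    J1p = np.linalg.pinv(J1, rcond=rcond)
    dq1 = J1p @ e1                            # solve the primary task
    N1 = np.eye(J1.shape[1]) - J1p @ J1       # nullspace projector of J1
    J2N = J2 @ N1                             # task 2 restricted to that nullspace
    dq2 = np.linalg.pinv(J2N, rcond=rcond) @ (e2 - J2 @ dq1)
    return dq1 + dq2
```

With compatible tasks both targets are met; with conflicting tasks, the primary one wins and the secondary contribution collapses to zero, which is exactly the behaviour a stack of tasks needs.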
Menychtas, Dimitrios. "Human Body Motions Optimization for Able-Bodied Individuals and Prosthesis Users During Activities of Daily Living Using a Personalized Robot-Human Model." Scholar Commons, 2018. https://scholarcommons.usf.edu/etd/7547.
Full textMontecillo, Puente Francisco Javier. "Human Motion Transfer on Humanoid Robot." Phd thesis, 2010. http://oatao.univ-toulouse.fr/7261/1/montecillo.pdf.
Full text
Chen, Yi-Ru, and 陳羿如. "Human Robot Interaction with Motion Platform." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/43989787584905587513.
Full text國立臺灣大學
電機工程學研究所
97
This thesis presents an upper-body tracking method using a monocular camera. The human model is defined in a high-dimensional state space. We propose a hierarchical structure model to solve the tracking problem with a particle filter using partitioned sampling. Spatial and temporal information from the image is used to track the human body and estimate the human posture. During human-robot interaction, a static monocular camera may not obtain enough information from the 2D images, so the camera platform must be moved to a better position to acquire richer image information. The proposed upper-body tracking technique then self-adjusts to estimate the human posture during the camera movement. To validate the effectiveness of the proposed tracking approach, extensive experiments have been performed, the results of which appear quite promising.
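Partitioned sampling is what keeps the particle filter tractable in a high-dimensional articulated-body state space: particles are diffused, weighted, and resampled one body-part partition at a time instead of jointly. A toy version for a three-angle chain (the partition order, noise levels, and Gaussian observation model are illustrative assumptions):

```python
import math
import random

def track_pose(observations, n=500, seed=1):
    """Estimate a 3-angle pose (say torso, upper arm, forearm) from noisy
    per-joint angle observations, resampling one partition at a time."""
    rng = random.Random(seed)
    particles = [[rng.uniform(-1.0, 1.0) for _ in range(3)] for _ in range(n)]
    for obs in observations:
        for d in range(3):                      # partitioned sampling
            for p in particles:                 # diffuse this partition only
                p[d] += rng.gauss(0.0, 0.05)
            # weight by the likelihood of this partition's measurement
            w = [math.exp(-(p[d] - obs[d]) ** 2 / (2 * 0.1 ** 2))
                 for p in particles]
            # resample complete particles using the partition weights
            particles = [list(p) for p in rng.choices(particles, weights=w, k=n)]
    return [sum(p[d] for p in particles) / n for d in range(3)]
```

Because each resampling stage conditions on a single low-dimensional partition, far fewer particles are needed than a joint filter over all body parts would require.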
Lin, Fu-Wei, and 林富偉. "Intelligent humanoid robot design in human motion imitation." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/02000110582389028831.
Full text
Ming Chuan University
Master's Program, Department of Computer and Communication Engineering
102
In this paper, we design a system to accomplish the task of human motion imitation by a humanoid robot. The system can be used not only to imitate human motion but also to control the humanoid robot through human motion. We use the Kinect to capture human motion, and the corresponding body joint positions are transmitted over WiFi to the DARwIn-OP humanoid robot. DARwIn-OP computes its motor parameters and balances itself to perform movements similar to the human's. The experimental results show that the humanoid robot can follow human movements across several kinds of motion. Finally, this intelligent imitation system runs in real time for live demonstration.
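Computing the motor parameters from a captured joint angle on DARwIn-OP comes down to a linear conversion into Dynamixel MX-28 position units (4096 counts over a 360° range, per the MX-28 datasheet; the clamping policy below is our assumption, not the paper's code):

```python
def angle_to_mx28(deg):
    """Convert a joint angle in degrees (0..360, with 180 = centre) to
    an MX-28 goal-position value (0..4095, with 2048 = centre)."""
    units = round(deg / 360.0 * 4096)
    return max(0, min(4095, units))            # clamp to the valid range

print(angle_to_mx28(180))  # 2048
```

One such conversion per tracked joint, applied every frame, turns the Kinect skeleton into the servo commands the robot executes.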
Chen, Wen-Chien, and 陳文建. "SOPC Based Human Biped Motion Tracking Control for Human-Sized Biped Robot." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/87205375922556367275.
Full text
National Cheng Kung University
Department of Electrical Engineering (Master's and Doctoral Program)
96
This thesis presents the motion control system design of the human-sized biped robot aiRobot-HBR1 and proposes a human biped motion (HBM) tracking control approach in which a human can control the robot through an integrated sensor control module (ISCM). aiRobot-HBR1 is 110 cm tall, weighs 40 kg, and has a total of 12 degrees of freedom. First, this thesis presents the control structure of the motion control system and the motion pattern planning of the robot. By designing the motor controller, the graphical user interface, and the integrated sensor control module along with the central processing unit, a Nios FPGA, we construct an SOPC-based control platform for developing the robot's control strategies. Furthermore, this thesis establishes the dynamic model of the integrated sensor control module, which combines a gyroscope and an accelerometer; a Kalman filter is used to estimate the states of this model in order to track human biped motion. Combining the biped robot with the human-body motion tracking result, we propose a real-time HBM tracking control and an HBM recognition approach: the estimated posture is used to control the motion and behavior of aiRobot-HBR1. Finally, the experimental results demonstrate the validity of the proposed motion control system, the real-time HBM tracking control, and the HBM recognition.
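The gyro/accelerometer fusion inside such a sensor module can be sketched as the textbook two-state Kalman filter: the state is the tilt angle plus the gyro bias, the gyro rate drives the prediction, and the accelerometer-derived angle drives the correction. The noise variances below are illustrative defaults, not the thesis's tuned values:

```python
class TiltKalman:
    """Estimate tilt by integrating the bias-corrected gyro rate
    (prediction) and correcting with the accelerometer angle (update)."""
    def __init__(self, q_angle=0.001, q_bias=0.003, r_acc=0.03):
        self.angle, self.bias = 0.0, 0.0
        self.P = [[1.0, 0.0], [0.0, 1.0]]       # state covariance
        self.q_angle, self.q_bias, self.r_acc = q_angle, q_bias, r_acc

    def step(self, gyro_rate, acc_angle, dt):
        P = self.P
        # predict: integrate the bias-corrected gyro rate
        self.angle += dt * (gyro_rate - self.bias)
        P[0][0] += dt * (dt * P[1][1] - P[0][1] - P[1][0] + self.q_angle)
        P[0][1] -= dt * P[1][1]
        P[1][0] -= dt * P[1][1]
        P[1][1] += self.q_bias * dt
        # update: correct with the accelerometer-derived angle
        y = acc_angle - self.angle
        s = P[0][0] + self.r_acc
        k0, k1 = P[0][0] / s, P[1][0] / s
        self.angle += k0 * y
        self.bias += k1 * y
        p00, p01 = P[0][0], P[0][1]
        P[0][0] -= k0 * p00
        P[0][1] -= k0 * p01
        P[1][0] -= k1 * p00
        P[1][1] -= k1 * p01
        return self.angle
```

Feeding a constant accelerometer angle while the gyro reports only a constant bias, the estimate settles on the accelerometer value and the bias state absorbs the gyro offset.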
Yu, Yueh-Chi. "Environment and Human Behavior Learning for Robot Motion Control." 2008. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0001-2207200814024900.
Full text