Academic literature on the topic 'Human-Robot motion'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Human-Robot motion.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Human-Robot motion"

1

Liu, Hongyi, and Lihui Wang. "Human motion prediction for human-robot collaboration." Journal of Manufacturing Systems 44 (July 2017): 287–94. http://dx.doi.org/10.1016/j.jmsy.2017.04.009.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

KIGUCHI, Kazuo, Subrata Kumar KUNDU, and Makoto Sasaki. "1P1-A24 An Inner Skeleton Robot for Human Elbow Motion Assist." Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) 2006 (2006): _1P1-A24_1-_1P1-A24_2. http://dx.doi.org/10.1299/jsmermd.2006._1p1-a24_1.

Full text
3

Gopura, Ranathunga Arachchilage Ruwan Chandra, and Kazuo Kiguchi. "1207 Control of an Exoskeleton Robot for Human Wrist Motion Support." Proceedings of the Conference on Information, Intelligence and Precision Equipment : IIP 2008 (2008): 67–68. http://dx.doi.org/10.1299/jsmeiip.2008.67.

Full text
4

Khoramshahi, Mahdi, and Aude Billard. "A dynamical system approach for detection and reaction to human guidance in physical human–robot interaction." Autonomous Robots 44, no. 8 (July 26, 2020): 1411–29. http://dx.doi.org/10.1007/s10514-020-09934-9.

Full text
Abstract:
A seamless interaction requires two robotic behaviors: the leader role, where the robot rejects external perturbations and focuses on the autonomous execution of the task, and the follower role, where the robot ignores the task and complies with intentional human forces. The goal of this work is to provide (1) a unified robotic architecture that produces these two roles, and (2) a human-guidance detection algorithm to switch between them. In the absence of human guidance, the robot performs its task autonomously; upon detection of such guidance, the robot passively follows the human motions. We employ dynamical systems to generate task-specific motion and admittance control to generate reactive motions toward the human guidance. This structure enables the robot to reject undesirable perturbations, track motions precisely, react to human guidance with proper compliant behavior, and re-plan its motion reactively. We provide an analytical investigation of our method in terms of tracking and compliant behavior. Finally, we evaluate our method experimentally using a 6-DoF manipulator.
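The leader/follower switching this abstract describes can be illustrated with a minimal admittance-control sketch. The linear dynamical system, force threshold, and gains below are illustrative assumptions, not the authors' actual formulation (their guidance detector is itself dynamical-system-based, not a fixed threshold):

```python
import numpy as np

def ds_velocity(x, attractor, gain=1.0):
    # Leader role: a linear dynamical system drives the robot toward its task attractor.
    return -gain * (x - attractor)

def admittance_step(x, v, f_ext, attractor, dt=0.01, mass=1.0, damping=5.0, f_thresh=0.5):
    """One control step blending the two roles: absent human force, follow the
    task DS (leader); when a guidance force is detected, admit it through a
    mass-damper model and ignore the task (follower)."""
    if np.linalg.norm(f_ext) > f_thresh:        # crude human-guidance detector
        a = (f_ext - damping * v) / mass        # follower: comply with the human force
        v = v + a * dt
    else:
        v = ds_velocity(x, attractor)           # leader: track the task, reject perturbations
    return x + v * dt, v
```

Iterating `admittance_step` with zero external force converges to the attractor; a sustained sensed force instead drags the state along the force direction.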
5

Lin, Hsien I., and Zan Sheng Chen. "Whole-Body Human-to-Humanoid Motion Imitation." Applied Mechanics and Materials 479-480 (December 2013): 617–21. http://dx.doi.org/10.4028/www.scientific.net/amm.479-480.617.

Full text
Abstract:
Human-to-humanoid motion imitation is an intuitive method to teach a humanoid robot how to act by human demonstration. For example, teaching a robot how to stand is simply showing the robot how a human stands. Much previous work on motion imitation focuses on either upper-body or lower-body imitation. In this paper, we propose a novel approach to imitate human whole-body motion with a humanoid robot. The main problem is how to control robot balance while simultaneously keeping the robot's motion as similar as possible to the taught human motion. Thus, we propose a balance criterion to assess how well the robot can balance, and use this criterion together with a genetic algorithm to search for a sub-optimal solution that keeps the robot balanced and its motion similar to the human's. We have validated the proposed work on an Aldebaran Robotics NAO robot with 25 degrees of freedom. The experimental results show that the robot can imitate human postures and autonomously keep itself balanced.
6

DARIUSH, BEHZAD, MICHAEL GIENGER, ARJUN ARUMBAKKAM, YOUDING ZHU, BING JIAN, KIKUO FUJIMURA, and CHRISTIAN GOERICK. "ONLINE TRANSFER OF HUMAN MOTION TO HUMANOIDS." International Journal of Humanoid Robotics 06, no. 02 (June 2009): 265–89. http://dx.doi.org/10.1142/s021984360900170x.

Full text
Abstract:
Transferring motion from a human demonstrator to a humanoid robot is an important step toward developing robots that are easily programmable and that can replicate or learn from observed human motion. The so-called motion retargeting problem has been well studied, and several off-line solutions exist based on optimization approaches that rely on pre-recorded human motion data collected from a marker-based motion capture system. From the perspective of human-robot interaction, there is a growing interest in online motion transfer, particularly without using markers. Such requirements have placed stringent demands on retargeting algorithms and limited the potential use of off-line and pre-recorded methods. To address these limitations, we present an online task-space control-theoretic retargeting formulation to generate robot joint motions that adhere to the robot's joint limit constraints, joint velocity constraints, and self-collision constraints. The inputs to the proposed method include low-dimensional normalized human motion descriptors, detected and tracked using a vision-based key-point detection and tracking algorithm. The proposed vision algorithm does not rely on markers placed on anatomical landmarks, nor does it require special instrumentation or calibration. The current implementation requires a depth image sequence, which is collected from a single time-of-flight imaging device. The feasibility of the proposed approach is shown by means of online experimental results on the Honda humanoid robot, ASIMO.
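The constrained task-space control described in this abstract can be sketched generically as damped-least-squares velocity control with joint-limit clamping. This is a standard textbook construction, not Honda's exact formulation, and the two-link arm used to exercise it is purely illustrative:

```python
import numpy as np

def retarget_step(q, x_target, fk, jac, q_min, q_max, damping=0.1, dt=0.05):
    """One velocity-level retargeting step: drive the joints so the task-space
    pose fk(q) approaches x_target, using a damped pseudo-inverse (robust near
    singularities) and clamping the result to the joint limits."""
    e = x_target - fk(q)                       # task-space error
    J = jac(q)
    lam = damping ** 2
    # damped least-squares: J^T (J J^T + lam I)^-1 e
    dq = J.T @ np.linalg.solve(J @ J.T + lam * np.eye(len(e)), e)
    return np.clip(q + dq * dt, q_min, q_max)  # enforce joint-limit constraints
```

Velocity limits and self-collision constraints would enter as additional clamps or inequality constraints on `dq`.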
7

TSUMUGIWA, Toru, Atsushi KAMIYOSHI, Ryuichi YOKOGAWA, and Hiroshi SHIBATA. "1A1-M08 Robot Motion Control based on Relative Motion Information between Human and Robot in Human-Robot Dynamical Interaction." Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) 2007 (2007): _1A1-M08_1-_1A1-M08_3. http://dx.doi.org/10.1299/jsmermd.2007._1a1-m08_1.

Full text
8

Kodama, Ryoji, Toru Nogai, and Katsumi Suzuki. "Effect of the Motion in Horizontal Plane on the Stability of Biped Walking." Journal of Robotics and Mechatronics 5, no. 6 (December 20, 1993): 531–36. http://dx.doi.org/10.20965/jrm.1993.p0531.

Full text
Abstract:
The human act of walking consists of 3-dimensional motion in the sagittal, frontal, and horizontal planes. However, many of the walking robots investigated so far consider motion only in the sagittal plane, or in the sagittal and frontal planes. If robot walking is to be modeled on real human walking, then motion in the horizontal plane should also be considered. In this paper, our purpose is to investigate the effect of motion in the horizontal plane on a biped walking robot. The authors study this effect using an inverse pendulum model. First, we explain horizontal motion in human walking and analyze the walking motion of a robot model. The results of computer simulation are also presented.
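The inverse-pendulum abstraction the abstract mentions is commonly written in closed form as the linear inverted pendulum (LIP). The sketch below is that standard model under an assumed constant center-of-mass height, not the authors' exact formulation:

```python
import math

def lip_state(x0, v0, t, z=0.8, g=9.81):
    """Closed-form linear-inverted-pendulum state: center-of-mass position and
    velocity relative to the stance foot after time t, for constant height z."""
    w = math.sqrt(g / z)                              # pendulum natural frequency
    x = x0 * math.cosh(w * t) + (v0 / w) * math.sinh(w * t)
    v = x0 * w * math.sinh(w * t) + v0 * math.cosh(w * t)
    return x, v
```

The same one-dimensional equation is typically applied per axis when studying motion components in different planes.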
9

IVANCEVIC, VLADIMIR G., and TIJANA T. IVANCEVIC. "HUMAN VERSUS HUMANOID ROBOT BIODYNAMICS." International Journal of Humanoid Robotics 05, no. 04 (December 2008): 699–713. http://dx.doi.org/10.1142/s0219843608001595.

Full text
Abstract:
In this paper we compare and contrast modern dynamical methodologies common to both humanoid robotics and human biomechanics. While the humanoid robot's motion is defined on the system of constrained rotational Lie groups SO(3) acting in all major robot joints, human motion is defined on the corresponding system of constrained Euclidean groups SE(3) of the full (rotational + translational) rigid motions acting in all synovial human joints. In both cases the smooth configuration manifolds, Q_rob and Q_hum, respectively, can be constructed. The autonomous Lagrangian dynamics are developed on the corresponding tangent bundles, TQ_rob and TQ_hum, respectively, which are themselves smooth Riemannian manifolds. Similarly, the autonomous Hamiltonian dynamics are developed on the corresponding cotangent bundles, T*Q_rob and T*Q_hum, respectively, which are themselves smooth symplectic manifolds. In this way a full rotational + translational biodynamics simulator has been created with 270 DOFs in total, called the Human Biodynamics Engine, which is currently in its validation stage. Finally, in both the human and the humanoid case, the time-dependent biodynamics generalizing the autonomous Lagrangian (or Hamiltonian) dynamics is naturally formulated in terms of jet manifolds.
10

Mori, Yoshikazu, Koji Ota, and Tatsuya Nakamura. "Robot Motion Algorithm Based on Interaction with Human." Journal of Robotics and Mechatronics 14, no. 5 (October 20, 2002): 462–70. http://dx.doi.org/10.20965/jrm.2002.p0462.

Full text
Abstract:
In this paper, we quantitatively analyze the weariness and impression that a human senses toward a robot when interacting with it through movement. A red ball and a blue ball are displayed on a simulation screen. The human moves the red ball with a mouse and the computer moves the blue ball. Using these balls, the impression that the robot's actions give to the human is examined. We analyze the relationship between the robot's interactive characteristics and the impressions it produces in human-robot interaction experiments, using methods from information theory. The difference in impression between the simulation and an actual robot is verified with an omni-directional robot.
More sources

Dissertations / Theses on the topic "Human-Robot motion"

1

Paulin, Rémi. "Human-robot motion: an attention-based approach." Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM018.

Full text
Abstract:
For autonomous mobile robots designed to share their environment with humans, path safety and efficiency are not the only aspects guiding their motion: they must follow social rules so as not to cause discomfort to surrounding people. Most socially-aware path planners rely heavily on the concept of social spaces; however, social spaces are hard to model, and they are of limited use in the context of human-robot interaction, where intrusion into social spaces is necessary. In this work, a new approach for socially-aware path planning is presented that performs well in complex environments as well as in the context of human-robot interaction. Specifically, the concept of attention is used to model how the influence of the environment as a whole affects how the robot's motion is perceived by people within close proximity. A new computational model of attention is presented that estimates how our attentional resources are shared amongst the salient elements in our environment. Based on this model, the novel concept of attention field is introduced, and a path planner that relies on this field is developed in order to produce socially acceptable paths. To do so, a state-of-the-art many-objective optimization algorithm is successfully applied to the path planning problem. The capacities of the proposed approach are illustrated in several case studies where the robot is assigned different tasks. Firstly, when the task is to navigate in the environment without causing distraction, our approach produces promising results even in complex situations. Secondly, when the task is to attract a person's attention in view of interacting with him or her, the motion planner is able to automatically choose a destination that best conveys its desire to interact whilst keeping the motion safe, efficient and socially acceptable.
2

Lasota, Przemyslaw A. (Przemyslaw Andrzej). "Robust human motion prediction for safe and efficient human-robot interaction." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122497.

Full text
Abstract:
Thesis: Ph. D. in Autonomous Systems, Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2019
From robotic co-workers in factories to assistive robots in homes, human-robot interaction (HRI) has the potential to revolutionize a large array of domains by enabling robotic assistance where it was previously not possible. Introducing robots into human-occupied domains, however, requires strong consideration for the safety and efficiency of the interaction. One particularly effective method of supporting safe and efficient human-robot interaction is the use of human motion prediction. By predicting where a person might reach or walk toward in the upcoming moments, a robot can adjust its motions to proactively resolve motion conflicts and avoid impeding the person's movements. Current approaches to human motion prediction, however, often lack the robustness required for real-world deployment. Many methods are designed for predicting specific types of tasks and motions, and do not necessarily generalize well to other domains.
It is also possible that no single predictor is suitable for predicting motion in a given scenario, and that multiple predictors are needed. Due to these drawbacks, without expert knowledge in the field of human motion prediction, it is difficult to deploy prediction on real robotic systems. Another key limitation of current human motion prediction approaches lies in deficiencies in partial trajectory alignment. Alignment of partially executed motions to a representative trajectory for a motion is a key enabling technology for many goal-based prediction methods. Current approaches of partial trajectory alignment, however, do not provide satisfactory alignments for many real-world trajectories. Specifically, due to reliance on Euclidean distance metrics, overlapping trajectory regions and temporary stops lead to large alignment errors.
In this thesis, I introduce two frameworks designed to improve the robustness of human motion prediction in order to facilitate its use for safe and efficient human-robot interaction. First, I introduce the Multiple-Predictor System (MPS), a data-driven approach that uses given task and motion data in order to synthesize a high-performing predictor by automatically identifying informative prediction features and combining the strengths of complementary prediction methods. With the use of three distinct human motion datasets, I show that using the MPS leads to lower prediction error in a variety of HRI scenarios, and allows for accurate prediction for a range of time horizons. Second, in order to address the drawbacks of prior alignment techniques, I introduce the Bayesian ESTimator for Partial Trajectory Alignment (BEST-PTA).
This Bayesian estimation framework uses a combination of optimization, supervised learning, and unsupervised learning components that are trained and synthesized based on a given set of example trajectories. Through an evaluation on three human motion datasets, I show that BEST-PTA reduces alignment error when compared to state-of-the-art baselines. Furthermore, I demonstrate that this improved alignment reduces human motion prediction error. Lastly, in order to assess the utility of the developed methods for improving safety and efficiency in HRI, I introduce an integrated framework combining prediction with robot planning in time. I describe an implementation and evaluation of this framework on a real physical system. Through this demonstration, I show that the developed approach leads to automatically derived adaptive robot behavior. I show that the developed framework leads to improvements in quantitative metrics of safety and efficiency with the use of a simulated evaluation.
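The Euclidean-distance alignment baselines whose weaknesses motivate BEST-PTA are typically variants of dynamic time warping; a minimal DTW alignment of a partial trajectory against a reference looks like the following (a generic sketch, not the thesis's estimator):

```python
import numpy as np

def dtw_align(partial, reference):
    """Return the reference index best aligned with the end of a partially
    executed trajectory, via dynamic time warping over Euclidean distances.
    On overlapping trajectory regions or temporary stops this style of
    alignment degrades, which is the failure mode BEST-PTA targets."""
    n, m = len(partial), len(reference)
    D = np.full((n + 1, m + 1), np.inf)   # cumulative-cost matrix
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(partial[i - 1] - reference[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return int(np.argmin(D[n, 1:]))       # 0-based index into reference
```

Goal-based predictors then read off progress along the representative trajectory from the returned index.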
"Funded by the NASA Space Technology Research Fellowship Program and the National Science Foundation"--Page 6
by Przemyslaw A. Lasota.
Ph. D. in Autonomous Systems
Ph.D.inAutonomousSystems Massachusetts Institute of Technology, Department of Aeronautics and Astronautics
3

Cobb, Susan Valerie Gray. "Perception and orientation issues in human control of robot motion." Thesis, University of Nottingham, 1991. http://eprints.nottingham.ac.uk/11237/.

Full text
Abstract:
The use of remote teach controls for programming industrial robots has led to concern over programmer safety and reliability. The primary issue is the close proximity to the robot arm required for the programmer or maintainer to clearly see the tool actions, and it is feared that errors in robot control could result in injury. The further concern that variations in teach control design could cause "negative transfer" of learning has led to a call for standardisation of robot teach controls. However, at present there is insufficient data to provide suitable design recommendations. This is because previous researchers have measured control performance on very general, and completely different, programming tasks. This work set out to examine the motion control task, from which a framework was developed to represent the robot motion control process. This showed the decisions and actions required to achieve robot movement, together with the factors which may influence them. Two types of influencing factors were identified: robot system factors and human cognitive factors. Robot system factors add complexity to the control task by producing motion reversals which alter the control-robot motion relationship. These motion reversals were identified during the experimental programme, which examined observers' perception of robot motion under different conditions of human-robot orientation and robot arm configuration. These determine the orientation of the robot with respect to the observer at any given time. It was found that changes in orientation may influence the observer's perception of robot movement, producing inconsistent descriptions of the same movement viewed under different orientations. Furthermore, due to the strong association between perceived movement and control selection demonstrated in these experiments, no particular differences in error performance using different control designs were observed.
It is concluded that human cognitive factors, specifically the operators' perception of robot movement and their ability to recognise motion reversals, have greater influence on control selection errors than control design per se.
4

Huang, Chien-Ming. "Joint attention in human-robot interaction." Thesis, Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/41196.

Full text
Abstract:
Joint attention, a crucial component in interaction and an important milestone in human development, has drawn much attention from the robotics community recently. Robotics researchers have studied and implemented joint attention for robots for the purposes of achieving natural human-robot interaction and facilitating social learning. Most previous work on the realization of joint attention in the robotics community has focused only on responding to joint attention and/or initiating joint attention. Responding to joint attention is the ability to follow another's direction of gaze and gestures in order to share common experience. Initiating joint attention is the ability to manipulate another's attention to a focus of interest in order to share experience. A third important component of joint attention is ensuring, whereby the initiator ensures that the responders have changed their attention. However, to the best of our knowledge, there is no work explicitly addressing the ability for a robot to ensure that joint attention is reached by interacting agents. We refer to this ability as ensuring joint attention and recognize its importance in human-robot interaction. We propose a computational model of joint attention consisting of three parts: responding to joint attention, initiating joint attention, and ensuring joint attention. This modular decomposition is supported by psychological findings and matches the developmental timeline of humans. Infants start with the skill of following a caregiver's gaze, and then they exhibit imperative and declarative pointing gestures to get a caregiver's attention. Importantly, as they age and their social skills mature, initiating actions often come with an ensuring behavior, that is, looking back and forth between the caregiver and the referred object to see if the caregiver is paying attention to the referential object. We conducted two experiments to investigate joint attention in human-robot interaction.
The first experiment explored the effects of responding to joint attention. We hypothesize that humans will find robots that respond to joint attention more transparent, more competent, and more socially interactive. Transparency helps people understand a robot's intention, facilitating a better human-robot interaction, and positive perception of a robot improves the human-robot relationship. Our hypotheses were supported by quantitative data, results from questionnaires, and behavioral observations. The second experiment studied the importance of ensuring joint attention. The results confirmed our hypotheses that robots that ensure joint attention yield better performance in interactive human-robot tasks and that ensuring joint attention behaviors are perceived as natural behaviors by humans. The findings suggest that social robots should use ensuring joint attention behaviors.
5

Narsipura, Sreenivasa Manish. "Modeling of human movement for the generation of humanoid robot motion." Thesis, Toulouse, INPT, 2010. http://www.theses.fr/2010INPT0120/document.

Full text
Abstract:
Humanoid robotics is coming of age with faster and more agile robots. To complement the physical complexity of humanoid robots, the robotics algorithms developed to derive their motion have also become progressively complex. The work in this thesis spans two research fields, human neuroscience and humanoid robotics, and brings ideas from the former to aid the latter. By exploring the anthropological link between the structure of a human and that of a humanoid robot, we aim to guide conventional robotics methods, like local optimization and task-based inverse kinematics, toward more realistic human-like solutions. First, we look at dynamic manipulation of human hand trajectories while playing with a yoyo. By recording human yoyo playing, we identify the control scheme used as well as a detailed dynamic model of the hand-yoyo system. Using optimization, this model is then used to implement stable yoyo-playing within the kinematic and dynamic limits of the humanoid HRP-2. The thesis then extends its focus to human and humanoid locomotion. We take inspiration from human neuroscience research on the role of the head in human walking and implement a humanoid robotics analogy to it. By allowing a user to steer the head of a humanoid, we develop a control method to generate deliberative whole-body humanoid motion, including stepping, purely as a consequence of the head movement. This idea of understanding locomotion as a consequence of reaching a goal is extended in the final study, where we look at human motion in more detail. Here, we aim to draw a link between "invariants" in neuroscience and "kinematic tasks" in humanoid robotics. We record and extract stereotypical characteristics of human movements during a walking and grasping task. These results are then normalized and generalized so that they can be regenerated for other anthropomorphic figures with kinematic limits different from those of humans.
The final experiments show a generalized stack of tasks that can generate realistic walking and grasping motion for the humanoid HRP-2. The general contribution of this thesis is in showing that while motion planning for humanoid robots can be tackled by classical methods of robotics, the production of realistic movements necessitates combining these methods with the systematic and formal observation of human behavior.
6

Umali, Antonio. "Framework For Robot-Assisted Doffing of Personal Protective Equipment." Digital WPI, 2016. https://digitalcommons.wpi.edu/etd-theses/940.

Full text
Abstract:
"When treating highly-infectious diseases such as Ebola, health workers are at high risk of infection during the doffing of Personal Protective Equipment (PPE). This is due to factors such as fatigue, hastiness, and inconsistency in training. The introduction of a semi-autonomous robot doffing assistant has the potential to increase the safety of the doffing procedure by assisting the human during high-risk sub-tasks. The addition of a robot into the procedure introduces the need to transform a purely human task into a sequence of safe and effective human-robot collaborative actions. We take advantage of the fact that the human can do the more intricate motions during the procedure. Since diseases like Ebola can spread through the mucous membranes of the eyes, ears, nose, and mouth our goal is to keep the human’s hands away from his or her face as much as possible. Thus our framework focuses on using the robot to help avoid such human risky motion. As secondary goals, we seek to also minimize the human’s effort and make the robot’s motion intuitive for the human. To address different versions and variants of PPE, we propose a way of segmenting the doffing procedure into a sequence of human and robot actions such that the robot only assists when necessary. Our framework then synthesizes assistive motions for the robot that perform parts of the tasks according to the metrics above. Our experiments on five doffing tasks suggest that the introduction of a robot assistant improves the safety of the procedure in three out of four of the high-risk doffing tasks while reducing effort in all five tasks."
7

Pai, Abhishek. "Distance-Scaled Human-Robot Interaction with Hybrid Cameras." University of Cincinnati / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1563872095430977.

Full text
8

Conte, Dean Edward. "Autonomous Robotic Escort Incorporating Motion Prediction with Human Intention." Thesis, Virginia Tech, 2021. http://hdl.handle.net/10919/102581.

Full text
Abstract:
This thesis presents a framework for a mobile robot to escort a human to their destination successfully and efficiently. The proposed technique uses accurate path prediction incorporating human intention to place the robot in front of the human while walking. Human intention is inferred from the head pose, an effective and proven implicit indicator of intention, and fused with conventional physics-based motion prediction. The human trajectory is estimated and predicted using a particle filter because of the human's nonlinear and non-Gaussian behavior, and the robot control action is determined from the predicted human pose, allowing for anticipative autonomous escorting. Experimental analysis shows that incorporating the proposed human intention model reduces human position prediction error by approximately 35% when turning. Furthermore, experimental validation with an omnidirectional mobile robotic platform shows escorting up to 50% more accurate than conventional techniques, while achieving a 97% success rate.
Master of Science
This thesis presents a method for a mobile robot to escort a human to their destination successfully and efficiently. The proposed technique uses human intention to predict the walking path, allowing the robot to stay in front of the human while walking. Human intention is inferred from the head direction, an effective, past-proven indicator of intention, and is combined with conventional motion prediction. The robot motion is then determined from the predicted human position, allowing for anticipative autonomous escorting. Experimental analysis shows that incorporating the proposed human intention model reduces human position prediction error by approximately 35% when turning. Furthermore, experimental validation with a mobile robotic platform shows escorting up to 50% more accurate than conventional techniques, while achieving a 97% success rate. The unique escorting interaction method proposed has applications such as touch-less shopping cart robots, exercise companions, collaborative rescue robots, and sanitary transportation for hospitals.
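The prediction scheme this thesis describes — a particle filter over the human's pose, with head pose fused in as an intention cue — can be sketched minimally as follows. The state layout, the simple heading-bias fusion, and all parameter values are illustrative assumptions, not the thesis's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_particles(particles, head_yaw, dt=0.1, intent_weight=0.3, noise=0.05):
    """Propagate (x, y, heading, speed) particles one time step.

    Each particle's heading is nudged toward the measured head yaw,
    encoding the "head pose indicates walking intention" idea.
    """
    x, y, heading, speed = particles.T
    heading = heading + intent_weight * (head_yaw - heading) \
              + rng.normal(0, noise, len(particles))
    x = x + speed * np.cos(heading) * dt
    y = y + speed * np.sin(heading) * dt
    return np.column_stack([x, y, heading, speed])

def update_weights(particles, measurement, sigma=0.2):
    """Reweight particles by closeness to the observed human position."""
    d2 = np.sum((particles[:, :2] - measurement) ** 2, axis=1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return w / w.sum()

# One predict/update cycle: human near the origin walking along +x,
# head turned slightly to the left (positive yaw).
particles = np.column_stack([
    rng.normal(0, 0.1, 500),    # x (m)
    rng.normal(0, 0.1, 500),    # y (m)
    rng.normal(0.0, 0.2, 500),  # heading (rad)
    np.full(500, 1.2),          # speed (m/s)
])
particles = predict_particles(particles, head_yaw=0.4)
weights = update_weights(particles, measurement=np.array([0.12, 0.0]))
estimate = np.average(particles[:, :2], axis=0, weights=weights)
```

A particle filter suits this problem because, as the abstract notes, human walking behavior is nonlinear and non-Gaussian; the weighted particle cloud can represent multi-modal predictions (e.g., "continue straight" vs. "turn left") that a Kalman filter cannot.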
APA, Harvard, Vancouver, ISO, and other styles
9

Nitz, Pettersson Hannes, and Samuel Vikström. "VISION-BASED ROBOT CONTROLLER FOR HUMAN-ROBOT INTERACTION USING PREDICTIVE ALGORITHMS." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-54609.

Full text
Abstract:
The demand for robots to work in environments together with humans is growing. This places new requirements on robot systems, such as the need to be perceived as responsive and accurate in human interactions. This thesis explores the possibility of using AI methods to predict the movement of a human and evaluates whether that information can assist a robot with human interactions. The AI methods used were a Long Short-Term Memory (LSTM) network and an artificial neural network (ANN). Both networks were trained on data from a motion capture dataset and on four different prediction times: 1/2, 1/4, 1/8, and 1/16 of a second. The evaluation was performed directly on the dataset to determine the prediction error. The neural networks were also evaluated on a robotic arm in a simulated environment, to show whether the prediction methods would be suitable for a real-life system. Both methods show promising results when comparing the prediction error. From the simulated system, it could be concluded that with the LSTM prediction the robotic arm would generally precede the actual position. The results indicate that the methods described in this thesis could be used as a stepping stone for a human-robot interactive system.
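The horizon-specific training setup described here amounts to pairing each window of past motion-capture frames with the pose some fraction of a second ahead, one dataset per prediction time. A minimal sketch of that data framing (the frame rate, window length, and names are assumptions, not taken from the thesis):

```python
import numpy as np

def make_horizon_dataset(poses, fps=120, horizon_s=0.25, window=16):
    """Slice a pose stream into (input window, future pose) training pairs.

    poses     : (T, D) array of motion-capture frames
    horizon_s : prediction time, e.g. 1/2, 1/4, 1/8, or 1/16 of a second
    """
    lead = int(round(fps * horizon_s))   # frames to look ahead
    X, y = [], []
    for t in range(window, len(poses) - lead):
        X.append(poses[t - window:t])    # past `window` frames
        y.append(poses[t + lead - 1])    # pose `horizon_s` after the last input frame
    return np.asarray(X), np.asarray(y)

# Toy stream: a 1-D sinusoidal joint angle sampled at 120 Hz for 4 seconds.
poses = np.sin(np.linspace(0, 4 * np.pi, 480)).reshape(-1, 1)
X, y = make_horizon_dataset(poses, horizon_s=1/4)
```

An LSTM (or a flattened-window ANN) would then be fit to map each `X` window to its `y` target; repeating the slicing with `horizon_s` set to 1/2, 1/8, and 1/16 yields the four evaluation conditions the thesis compares.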
APA, Harvard, Vancouver, ISO, and other styles
10

Hayne, Rafi. "Toward Enabling Safe & Efficient Human-Robot Manipulation in Shared Workspaces." Digital WPI, 2016. https://digitalcommons.wpi.edu/etd-theses/1012.

Full text
Abstract:
"When humans interact, there are many avenues of communication available, ranging from vocal to physical gestures. In our past observations, when humans collaborate on manipulation tasks in shared workspaces there is often minimal to no verbal or physical communication, yet the collaboration is still fluid, with minimal interference between partners. However, when humans perform similar tasks in the presence of a robot collaborator, manipulation can be clumsy, disconnected, or simply not human-like. The focus of this work is to leverage our observations of human-human interaction in a robot's motion planner in order to facilitate safer, more efficient, and more human-like collaborative manipulation in shared workspaces. We first present an approach to formulating the cost function for a motion planner intended for human-robot collaboration such that robot motions are both safe and efficient. To achieve this, we propose two factors to consider in the cost function for the robot's motion planner: (1) avoidance of the workspace previously occupied by the human, so that robot motion is as safe as possible, and (2) consistency of the robot's motion, so that the motion is as predictable as possible for the human and they can perform their task without focusing undue attention on the robot. Our experiments in simulation and a human-robot workspace-sharing study compare a cost function that uses only the first factor, a combined cost that uses both factors, and a baseline method that is perfectly consistent but does not account for the human's previous motion. We find that using either cost function outperforms the baseline method in terms of task success rate without degrading the task completion time. The best task success rate is achieved with the cost function that includes both the avoidance and consistency terms. Next, we present an approach to human-attention aware robot motion generation which attempts to convey the intent of the robot's task to its collaborator.
We capture human attention through the combined use of a wearable eye-tracker and motion capture system. Since human attention isn't static, we present a method of generating a motion policy that can be queried online. Finally, we show preliminary tests of this method."
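The two-factor cost described in this abstract — avoidance of recently human-occupied workspace plus consistency with the robot's previous motion — can be sketched as a simple weighted sum over candidate trajectories. The penalty forms, weights, and radius below are illustrative choices, not the thesis's actual formulation:

```python
import numpy as np

def trajectory_cost(traj, human_occupancy, prev_traj,
                    w_avoid=1.0, w_consist=0.5, radius=0.3):
    """Combined planning cost from the two factors described above.

    traj            : (N, 3) candidate end-effector waypoints
    human_occupancy : (M, 3) points the human recently occupied
    prev_traj       : (N, 3) robot trajectory from the previous planning cycle
    """
    # Factor 1: penalize waypoints that come within `radius` of workspace
    # the human just used (safety).
    d = np.linalg.norm(traj[:, None, :] - human_occupancy[None, :, :], axis=2)
    avoidance = np.sum(np.clip(radius - d.min(axis=1), 0, None) ** 2)
    # Factor 2: penalize deviation from the previous motion (predictability).
    consistency = np.sum((traj - prev_traj) ** 2)
    return w_avoid * avoidance + w_consist * consistency

# A path sweeping through the human's recent workspace scores worse than
# one that keeps its distance.
human = np.array([[0.5, 0.0, 0.0]])
through = np.array([[0.0, 0.0, 0.0], [0.5, 0.0, 0.0], [1.0, 0.0, 0.0]])
detour = np.array([[0.0, 0.5, 0.0], [0.5, 0.5, 0.0], [1.0, 0.5, 0.0]])
cost_through = trajectory_cost(through, human, through)
cost_detour = trajectory_cost(detour, human, detour)
```

Dropping the consistency term (`w_consist=0`) recovers the avoidance-only variant the study compares, while the baseline in the abstract corresponds to minimizing only the consistency term.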
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Human-Robot motion"

1

Martin, W. N. Motion Understanding: Robot and Human Vision. Boston, MA: Springer US, 1988.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Lenarčič, J., and M. M. Stanišić. Advances in robot kinematics: Motion in man and machine. Dordrecht: Springer, 2010.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Martin, W. N., and J. K. Aggarwal, eds. Motion understanding: Robot and human vision. Boston: Kluwer Academic Publishers, 1988.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Motion Understanding: Robot and Human Vision. Springer, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Noceti, Nicoletta, Alessandra Sciutti, and Francesco Rea. Modelling Human Motion: From Human Perception to Robot Design. Springer, 2020.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Martin, W. N., and J. K. Aggarwal, eds. Motion Understanding: Robot and Human Vision (The International Series in Engineering and Computer Science). Springer, 1988.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Barbagli, Federico, Domenico Prattichizzo, and Kenneth Salisbury, eds. Multi-point Interaction with Real and Virtual Objects (Springer Tracts in Advanced Robotics). Springer, 2005.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Metta, Giorgio. Humans and humanoids. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780199674923.003.0047.

Full text
Abstract:
This chapter outlines a number of research lines that, starting from the observation of nature, attempt to mimic human behavior in humanoid robots. Humanoid robotics is one of the most exciting proving grounds for the development of biologically inspired hardware and software—machines that try to recreate billions of years of evolution with some of the abilities and characteristics of living beings. Humanoids could be especially useful for their ability to “live” in human-populated environments, occupying the same physical space as people and using tools that have been designed for people. Natural human–robot interaction is also an important facet of humanoid research. Finally, learning and adapting from experience, the hallmark of human intelligence, may require some approximation to the human body in order to attain similar capacities to humans. This chapter focuses particularly on compliant actuation, soft robotics, biomimetic robot vision, robot touch, and brain-inspired motor control in the context of the iCub humanoid robot.
APA, Harvard, Vancouver, ISO, and other styles
9

Erdem, Uğur Murat, Nicholas Roy, John J. Leonard, and Michael E. Hasselmo. Spatial and episodic memory. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780199674923.003.0029.

Full text
Abstract:
The neuroscience of spatial memory is one of the most promising areas for developing biomimetic solutions to complex engineering challenges. Grid cells are neurons recorded in the medial entorhinal cortex that fire when rats are in an array of locations in the environment falling on the vertices of tightly packed equilateral triangles. Grid cells suggest an exciting new approach for enhancing robot simultaneous localization and mapping (SLAM) in changing environments and could provide a common map for situational awareness between human and robotic teammates. Current models of grid cells are well suited to robotics, as they utilize input from self-motion and sensory flow similar to inertial sensors and visual odometry in robots. Computational models, supported by in vivo neural activity data, demonstrate how grid cell representations could provide a substrate for goal-directed behavior using hierarchical forward planning that finds novel shortcut trajectories in changing environments.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Human-Robot motion"

1

Langer, Allison, and Shelly Levy-Tzedek. "Priming and Timing in Human-Robot Interactions." In Modelling Human Motion, 335–50. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-46732-6_16.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Liu, Changliu, Te Tang, Hsien-Chung Lin, and Masayoshi Tomizuka. "Efficiency in Real-Time Motion Planning." In Designing Robot Behavior in Human-Robot Interactions, 61–82. Boca Raton: CRC Press, 2019. http://dx.doi.org/10.1201/9780429058714-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Illmann, Jörg, Boris Kluge, Erwin Prassler, and Matthias Strobel. "Statistical Recognition of Motion Patterns." In Advances in Human-Robot Interaction, 69–87. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-31509-4_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Wösch, Thomas, Werner Neubauer, Georg v. Wichert, and Zsolt Kemény. "Motion Planning for Domestic Robot Assistants." In Advances in Human-Robot Interaction, 195–205. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-31509-4_17.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Ge, Shuzhi Sam, and Yanan Li. "Motion Synchronization for Human-Robot Collaboration." In Social Robotics, 248–57. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-34103-8_25.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Lohan, Katrin, Muneeb Imtiaz Ahmad, Christian Dondrup, Paola Ardón, Èric Pairet, and Alessandro Vinciarelli. "Adapting Movements and Behaviour to Favour Communication in Human-Robot Interaction." In Modelling Human Motion, 271–97. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-46732-6_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Gu, Edward Y. L. "Representations of Rigid Motion." In A Journey from Robot to Digital Human, 49–81. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-39047-0_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Prassler, Erwin, Andreas Stopp, Martin Hägele, Ioannis Iossifidis, Gisbert Lawitzky, Gerhard Grunwald, and Rüdiger Dillmann. "Co-existence: Physical Interaction and Coordinated Motion." In Advances in Human-Robot Interaction, 161–63. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-31509-4_14.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Ahmed, Rehan M., Anani V. Ananiev, and Ivan G. Kalaykov. "Compliant Motion Control for Safe Human Robot Interaction." In Robot Motion and Control 2009, 265–74. London: Springer London, 2009. http://dx.doi.org/10.1007/978-1-84882-985-5_24.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Gao, Robert X., Lihui Wang, Peng Wang, Jianjing Zhang, and Hongyi Liu. "Human Motion Recognition and Prediction for Robot Control." In Advanced Human-Robot Collaboration in Manufacturing, 261–82. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-69178-3_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Human-Robot motion"

1

Potkonjak, Veljko, Vladimir Petrović, Kosta Jovanović, and Dragan Kostić. "Human-Robot Analogy − How Physiology Shapes Human and Robot Motion." In European Conference on Artificial Life 2013. MIT Press, 2013. http://dx.doi.org/10.7551/978-0-262-31709-2-ch021.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Dragan, Anca D., Shira Bauman, Jodi Forlizzi, and Siddhartha S. Srinivasa. "Effects of Robot Motion on Human-Robot Collaboration." In HRI '15: ACM/IEEE International Conference on Human-Robot Interaction. New York, NY, USA: ACM, 2015. http://dx.doi.org/10.1145/2696454.2696473.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Dietz, Griffin, Jane L. E, Peter Washington, Lawrence H. Kim, and Sean Follmer. "Human Perception of Swarm Robot Motion." In CHI '17: CHI Conference on Human Factors in Computing Systems. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3027063.3053220.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Fukui, Kotaro, Toshihiro Kusano, Yoshikazu Mukaeda, Yuto Suzuki, Atsuo Takanishi, and Masaaki Honda. "Speech robot mimicking human articulatory motion." In Interspeech 2010. ISCA, 2010. http://dx.doi.org/10.21437/interspeech.2010-337.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Dragan, Anca, and Siddhartha Srinivasa. "Familiarization to robot motion." In HRI'14: ACM/IEEE International Conference on Human-Robot Interaction. New York, NY, USA: ACM, 2014. http://dx.doi.org/10.1145/2559636.2559674.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Zhou, Allan, Dylan Hadfield-Menell, Anusha Nagabandi, and Anca D. Dragan. "Expressive Robot Motion Timing." In HRI '17: ACM/IEEE International Conference on Human-Robot Interaction. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/2909824.3020221.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Sakata, Wataru, Futoshi Kobayashi, and Hiroyuki Nakamoto. "Robot-human handover based on motion prediction of human." In 2017 6th International Conference on Informatics, Electronics and Vision & 2017 7th International Symposium in Computational Medical and Health Technology (ICIEV-ISCMHT). IEEE, 2017. http://dx.doi.org/10.1109/iciev.2017.8338592.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Molina-Tanco, L., J. P. Bandera, R. Marfil, and F. Sandoval. "Real-time human motion analysis for human-robot interaction." In 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2005. http://dx.doi.org/10.1109/iros.2005.1545240.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Kang, Jie, Kai Jia, Fang Xu, Fengshan Zou, Yanan Zhang, and Hengle Ren. "Real-Time Human Motion Estimation for Human Robot Collaboration." In 2018 IEEE 8th Annual International Conference on CYBER Technology in Automation, Control, and Intelligent Systems (CYBER). IEEE, 2018. http://dx.doi.org/10.1109/cyber.2018.8688348.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Thobbi, Anand, Ye Gu, and Weihua Sheng. "Using human motion estimation for human-robot cooperative manipulation." In 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2011). IEEE, 2011. http://dx.doi.org/10.1109/iros.2011.6094904.

Full text
APA, Harvard, Vancouver, ISO, and other styles