
Journal articles on the topic 'Human-Robot motion'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Human-Robot motion.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Liu, Hongyi, and Lihui Wang. "Human motion prediction for human-robot collaboration." Journal of Manufacturing Systems 44 (July 2017): 287–94. http://dx.doi.org/10.1016/j.jmsy.2017.04.009.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

KIGUCHI, Kazuo, Subrata Kumar KUNDU, and Makoto SASAKI. "1P1-A24 An Inner Skeleton Robot for Human Elbow Motion Assist." Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) 2006 (2006): _1P1—A24_1—_1P1—A24_2. http://dx.doi.org/10.1299/jsmermd.2006._1p1-a24_1.

3

Gopura, Ranathunga Arachchilage Ruwan Chandra, and Kazuo Kiguchi. "1207 Control of an Exoskeleton Robot for Human Wrist Motion Support." Proceedings of the Conference on Information, Intelligence and Precision Equipment : IIP 2008 (2008): 67–68. http://dx.doi.org/10.1299/jsmeiip.2008.67.

4

Khoramshahi, Mahdi, and Aude Billard. "A dynamical system approach for detection and reaction to human guidance in physical human–robot interaction." Autonomous Robots 44, no. 8 (July 26, 2020): 1411–29. http://dx.doi.org/10.1007/s10514-020-09934-9.

Abstract:
A seamless interaction requires two robotic behaviors: the leader role, where the robot rejects external perturbations and focuses on the autonomous execution of the task, and the follower role, where the robot ignores the task and complies with intentional human forces. The goal of this work is to provide (1) a unified robotic architecture that produces these two roles, and (2) a human-guidance detection algorithm to switch between them. In the absence of human guidance, the robot performs its task autonomously; upon detection of such guidance, the robot passively follows the human motions. We employ dynamical systems to generate task-specific motion, and admittance control to generate reactive motions toward the human guidance. This structure enables the robot to reject undesirable perturbations, track motions precisely, react to human guidance with properly compliant behavior, and re-plan its motion reactively. We provide an analytical investigation of our method in terms of tracking and compliant behavior. Finally, we evaluate our method experimentally using a 6-DoF manipulator.
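The leader/follower switching this abstract describes can be sketched in a few lines. The admittance parameters, the force threshold, and all function names below are illustrative assumptions for the sketch, not the authors' implementation:

```python
import numpy as np

def ds_velocity(x, target, gain=1.0):
    """Task-level dynamical system: converge to the target (leader role)."""
    return gain * (target - x)

def admittance_step(v, f_ext, dt, mass=2.0, damping=8.0):
    """Discrete admittance dynamics M*dv/dt + D*v = f_ext (follower role)."""
    return v + (dt / mass) * (f_ext - damping * v)

def step(x, v, f_ext, target, dt=0.01, force_threshold=5.0):
    """One control cycle: detect human guidance by force magnitude and
    switch between autonomous tracking and passive following."""
    if np.linalg.norm(f_ext) > force_threshold:   # guidance detected -> follower
        v = admittance_step(v, f_ext, dt)
    else:                                         # no guidance -> leader
        v = ds_velocity(x, target)
    return x + dt * v, v

# with no external force the robot converges to its task target
x, v = np.zeros(2), np.zeros(2)
for _ in range(2000):
    x, v = step(x, v, f_ext=np.zeros(2), target=np.array([1.0, 0.0]))
```

Below the threshold the dynamical system drives the robot toward the target; above it, the admittance dynamics let the robot drift passively with the applied force.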
5

Lin, Hsien I., and Zan Sheng Chen. "Whole-Body Human-to-Humanoid Motion Imitation." Applied Mechanics and Materials 479-480 (December 2013): 617–21. http://dx.doi.org/10.4028/www.scientific.net/amm.479-480.617.

Abstract:
Human-to-humanoid motion imitation is an intuitive method for teaching a humanoid robot how to act by human demonstration. For example, teaching a robot how to stand is simply a matter of showing the robot how a human stands. Much previous work on motion imitation focuses on either upper-body or lower-body imitation. In this paper, we propose a novel approach for imitating human whole-body motion with a humanoid robot. The main problem addressed is how to control robot balance while simultaneously keeping the robot motion as similar as possible to the taught human motion. Thus, we propose a balance criterion to assess how well the robot can balance, and use this criterion together with a genetic algorithm to search for a sub-optimal solution that keeps the robot balanced and its motion similar to the human motion. We have validated the proposed work on an Aldebaran Robotics NAO robot with 25 degrees of freedom. The experimental results show that the robot can imitate human postures and autonomously keep itself balanced.
6

DARIUSH, BEHZAD, MICHAEL GIENGER, ARJUN ARUMBAKKAM, YOUDING ZHU, BING JIAN, KIKUO FUJIMURA, and CHRISTIAN GOERICK. "ONLINE TRANSFER OF HUMAN MOTION TO HUMANOIDS." International Journal of Humanoid Robotics 06, no. 02 (June 2009): 265–89. http://dx.doi.org/10.1142/s021984360900170x.

Abstract:
Transferring motion from a human demonstrator to a humanoid robot is an important step toward developing robots that are easily programmable and that can replicate or learn from observed human motion. The so-called motion retargeting problem has been well studied, and several off-line solutions exist based on optimization approaches that rely on pre-recorded human motion data collected from a marker-based motion capture system. From the perspective of human-robot interaction, there is a growing interest in online motion transfer, particularly without using markers. Such requirements have placed stringent demands on retargeting algorithms and limited the potential use of off-line and pre-recorded methods. To address these limitations, we present an online task-space control-theoretic retargeting formulation to generate robot joint motions that adhere to the robot's joint limit constraints, joint velocity constraints, and self-collision constraints. The inputs to the proposed method include low-dimensional normalized human motion descriptors, detected and tracked using a vision-based key-point detection and tracking algorithm. The proposed vision algorithm does not rely on markers placed on anatomical landmarks, nor does it require special instrumentation or calibration. The current implementation requires a depth image sequence, which is collected from a single time-of-flight imaging device. The feasibility of the proposed approach is shown by means of online experimental results on the Honda humanoid robot, ASIMO.
7

TSUMUGIWA, Toru, Atsushi KAMIYOSHI, Ryuichi YOKOGAWA, and Hiroshi SHIBATA. "1A1-M08 Robot Motion Control based on Relative Motion Information between Human and Robot in Human-Robot Dynamical Interaction." Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) 2007 (2007): _1A1—M08_1—_1A1—M08_3. http://dx.doi.org/10.1299/jsmermd.2007._1a1-m08_1.

8

Kodama, Ryoji, Toru Nogai, and Katsumi Suzuki. "Effect of the Motion in Horizontal Plane on the Stability of Biped Walking." Journal of Robotics and Mechatronics 5, no. 6 (December 20, 1993): 531–36. http://dx.doi.org/10.20965/jrm.1993.p0531.

Abstract:
The human act of walking consists of three-dimensional motion in the sagittal, frontal, and horizontal planes. However, in many of the walking robots investigated so far, motion was considered only in the sagittal plane, or in the sagittal and frontal planes. If robot walking is to be modeled on real human walking, then motion in the horizontal plane should also be considered. In this paper, our purpose is to investigate the effect of motion in the horizontal plane on a biped walking robot. The authors study this effect using an inverted pendulum model. First, we explain horizontal motion in human walking and analyze the walking motion of a robot model. The results of computer simulation are also presented.
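The inverted pendulum model mentioned in this abstract can be illustrated with a minimal simulation of the standard linear inverted pendulum, x'' = (g/z_c) x; the parameter values and the function name are assumptions for the sketch, not taken from the paper:

```python
import math

def simulate_lip(x0, v0, z_c=0.8, g=9.81, dt=1e-4, t_end=0.5):
    """Linear inverted pendulum: x'' = (g/z_c) * x, integrated with
    semi-implicit Euler. x is the CoM offset from the stance foot."""
    omega2 = g / z_c
    x, v = x0, v0
    for _ in range(int(round(t_end / dt))):
        v += dt * omega2 * x
        x += dt * v
    return x, v

# a small initial offset diverges from the upright position, as expected
x, v = simulate_lip(x0=0.05, v0=0.0)
```

The numerical result can be checked against the closed-form solution x(t) = x0·cosh(ωt) with ω = sqrt(g/z_c).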
9

IVANCEVIC, VLADIMIR G., and TIJANA T. IVANCEVIC. "HUMAN VERSUS HUMANOID ROBOT BIODYNAMICS." International Journal of Humanoid Robotics 05, no. 04 (December 2008): 699–713. http://dx.doi.org/10.1142/s0219843608001595.

Abstract:
In this paper we compare and contrast modern dynamical methodologies common to both humanoid robotics and human biomechanics. While the humanoid robot's motion is defined on the system of constrained rotational Lie groups SO(3) acting in all major robot joints, human motion is defined on the corresponding system of constrained Euclidean groups SE(3) of full (rotational + translational) rigid motions acting in all synovial human joints. In both cases the smooth configuration manifolds, Q_rob and Q_hum respectively, can be constructed. The autonomous Lagrangian dynamics are developed on the corresponding tangent bundles, TQ_rob and TQ_hum, which are themselves smooth Riemannian manifolds. Similarly, the autonomous Hamiltonian dynamics are developed on the corresponding cotangent bundles, T*Q_rob and T*Q_hum, which are themselves smooth symplectic manifolds. In this way a full rotational + translational biodynamics simulator with 270 DOFs in total, called the Human Biodynamics Engine, has been created; it is currently in its validation stage. Finally, in both the human and the humanoid case, the time-dependent biodynamics generalizing the autonomous Lagrangian (or Hamiltonian) dynamics is naturally formulated in terms of jet manifolds.
10

Mori, Yoshikazu, Koji Ota, and Tatsuya Nakamura. "Robot Motion Algorithm Based on Interaction with Human." Journal of Robotics and Mechatronics 14, no. 5 (October 20, 2002): 462–70. http://dx.doi.org/10.20965/jrm.2002.p0462.

Abstract:
In this paper, we quantitatively analyze the weariness and impression that a human senses toward a robot when interacting with it through movements. A red ball and a blue ball are displayed on a simulation screen. The human moves the red ball with a mouse, and the computer moves the blue ball. Using these balls, the impression that the robot's actions give to the human is examined. We analyze the relationship between the robot's interactive characteristics and the impressions it produces in human-robot interaction experiments, using methods from information theory. The difference in impression between the simulation and an actual robot is verified with an omni-directional robot.
11

Lin, Chiuhsiang Joe, and Rio Prasetyo Lukodono. "Sustainable Human–Robot Collaboration Based on Human Intention Classification." Sustainability 13, no. 11 (May 26, 2021): 5990. http://dx.doi.org/10.3390/su13115990.

Abstract:
Sustainable manufacturing plays a role in ensuring products’ economic characteristics and reducing energy and resource consumption by improving the well-being of human workers and communities and maintaining safety. Using robots is one way for manufacturers to increase their sustainable manufacturing practices. Nevertheless, there are limitations to directly replacing humans with robots due to work characteristics and practical conditions. Collaboration between robots and humans should accommodate human capabilities while reducing loads and ineffective human motions to prevent human fatigue and maximize overall performance. Moreover, there is a need to establish early and fast communication between humans and machines in human–robot collaboration, so that the robot knows the status of the human in the activity and can make immediate adjustments for maximum performance. This study used a deep learning algorithm to classify muscular signals of human motions with an accuracy of 88%. This indicates that the signal could be used as information for the robot to determine the intention of a human motion during its initial stage. This approach can not only increase the communication and efficiency of human–robot collaboration but also reduce human fatigue through the early detection of human motion patterns. To enhance human well-being, it is suggested that human–robot collaboration assembly lines adopt similar technologies for a sustainable human–robot collaboration workplace.
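As an illustration of intention classification from muscular signals, here is a deliberately simple stand-in: RMS features with a nearest-centroid classifier on synthetic two-channel data. The paper used a deep learning model; nothing below reproduces it, and all names and data are assumptions for the sketch:

```python
import numpy as np

def rms_features(window):
    """Root-mean-square of each EMG channel over a short window --
    a common time-domain feature for muscular signals."""
    return np.sqrt(np.mean(np.square(window), axis=0))

def fit_centroids(windows, labels):
    """One feature centroid per motion class (a simple stand-in for a
    learned classifier)."""
    feats = np.array([rms_features(w) for w in windows])
    classes = sorted(set(labels))
    return {c: feats[np.array(labels) == c].mean(axis=0) for c in classes}

def classify(window, centroids):
    """Assign the window to the class with the nearest feature centroid."""
    f = rms_features(window)
    return min(centroids, key=lambda c: np.linalg.norm(f - centroids[c]))

# synthetic 2-channel EMG: class 0 activates channel 0, class 1 channel 1
rng = np.random.default_rng(0)
def make(c):
    amp = np.array([1.0, 0.1]) if c == 0 else np.array([0.1, 1.0])
    return rng.normal(0.0, amp, size=(200, 2))

train = [make(c) for c in (0, 1) for _ in range(20)]
labels = [c for c in (0, 1) for _ in range(20)]
centroids = fit_centroids(train, labels)
```

Classifying on short early windows, rather than the whole motion, is what enables the early intention detection the abstract emphasizes.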
12

Paulin, Remi, Thierry Fraichard, and Patrick Reignier. "Using Human Attention to Address Human–Robot Motion." IEEE Robotics and Automation Letters 4, no. 2 (April 2019): 2038–45. http://dx.doi.org/10.1109/lra.2019.2899429.

13

LIU, CHAORAN, CARLOS T. ISHI, HIROSHI ISHIGURO, and NORIHIRO HAGITA. "GENERATION OF NODDING, HEAD TILTING AND GAZING FOR HUMAN–ROBOT SPEECH INTERACTION." International Journal of Humanoid Robotics 10, no. 01 (March 2013): 1350009. http://dx.doi.org/10.1142/s0219843613500096.

Abstract:
Head motion occurs naturally and in synchrony with speech during human dialogue communication, and may carry paralinguistic information, such as intentions, attitudes and emotions. Therefore, natural-looking head motion by a robot is important for smooth human–robot interaction. Based on rules inferred from analyses of the relationship between head motion and dialogue acts, this paper proposes a model for generating head tilting and nodding, and evaluates the model using three types of humanoid robot (a very human-like android, "Geminoid F", a typical humanoid robot with less facial degrees of freedom, "Robovie R2", and a robot with a 3-axis rotatable neck and movable lips, "Telenoid R2"). Analysis of subjective scores shows that the proposed model including head tilting and nodding can generate head motion with increased naturalness compared to nodding only or directly mapping people's original motions without gaze information. We also find that an upward motion of a robot's face can be used by robots which do not have a mouth in order to provide the appearance that utterance is taking place. Finally, we conduct an experiment in which participants act as visitors to an information desk attended by robots. As a consequence, we verify that our generation model performs equally to directly mapping people's original motions with gaze information in terms of perceived naturalness.
14

Tuli, Tadele Belay, and Martin Manns. "Real-Time Motion Tracking for Humans and Robots in a Collaborative Assembly Task." Proceedings 42, no. 1 (November 14, 2019): 48. http://dx.doi.org/10.3390/ecsa-6-06636.

Abstract:
Human-robot collaboration combines the extended capabilities of humans and robots to create a more inclusive and human-centered production system in the future. However, human safety is the primary concern for manufacturing industries. Therefore, real-time motion tracking is necessary to identify whether the human worker's body parts enter the restricted working space solely dedicated to the robot. Tracking these motions using decentralized and different tracking systems requires a generic model controller and consistent motion exchange formats. In this work, we investigate a concept for unified real-time motion tracking for human-robot collaboration. In this regard, a low-cost, game-based motion tracking system, e.g., HTC Vive, is utilized to capture human motion by mapping it onto a digital human model in the Unity3D environment. In this context, the human model is described using a biomechanical model that comprises joint segments defined by position and orientation. For robot motion tracking, a unified robot description format is used to describe the kinematic trees. Finally, a concept of an assembly operation that involves snap joining is simulated to analyze the real-time capability of the system. The distribution of joint variables in spatial-space and time-space is analyzed. The results suggest that real-time tracking in human-robot collaborative assembly environments can be adopted to maximize the safety of the human worker. However, the accuracy and reliability of the system with respect to disturbances still need to be justified.
15

Tomić, Marija, Christine Chevallereau, Kosta Jovanović, Veljko Potkonjak, and Aleksandar Rodić. "Human to humanoid motion conversion for dual-arm manipulation tasks." Robotica 36, no. 8 (April 25, 2018): 1167–87. http://dx.doi.org/10.1017/s0263574718000309.

Abstract:
A conversion process for the imitation of human dual-arm motion by a humanoid robot is presented. The conversion process consists of an imitation algorithm and an algorithm for generating human-like motion of the humanoid. The desired motions in Cartesian and joint spaces, obtained from the imitation algorithm, are used to generate the human-like motion of the humanoid. The proposed conversion process improves existing techniques and is developed with the aim of enabling imitation of human motion by a humanoid robot, to perform a task with and/or without contact between hands and equipment. A comparative analysis shows that our algorithm, which takes into account the situation of marker frames and the position of joint frames, ensures more precise imitation than previously proposed methods. The results of our conversion algorithm are tested on the robot ROMEO through a complex “open/close drawer” task.
16

Kuroki, Yoshihiro, Tatsuzo Ishida, Jin-ichi Yamaguchi, Masahiro Fujita, and Toshi T. Doi. "A Small Biped Entertainment Robot." Journal of Robotics and Mechatronics 14, no. 1 (February 20, 2002): 6–12. http://dx.doi.org/10.20965/jrm.2002.p0006.

Abstract:
We propose a small biped entertainment robot prototype, the Sony Dream Robot (SDR-3X), which realizes Motion Entertainment by entertaining people with its controlled dynamic motion performance. New key technologies developed for SDR-3X include the Intelligent Servo Actuator (ISA), an actuator composed of a motor, a built-in controller, and a gear. Another technology is Whole Body Coordinated Dynamic Motion Control. Both technologies realize dynamic, stable motion performance in dynamic biped walking, gymnastic motions, and dancing. Human-robot interactive performances are also described in this paper.
17

Matsunaga, Nobutomo, and Shigeyasu Kawaji. "Motion Analysis of Human Lifting Works with Heavy Objects." Journal of Robotics and Mechatronics 17, no. 6 (December 20, 2005): 628–35. http://dx.doi.org/10.20965/jrm.2005.p0628.

Abstract:
Advances in robot development involve autonomous work in the real world, where robots may lift or carry heavy objects. Motion control of autonomous robots is an important issue, in which configurations and motions differ depending on the robot and the object. Isaka et al. showed that lifting configuration is important in realizing efficient lifting that minimizes the burden on the lower back, but their analysis was limited to the lifting of a fixed weight. Biped robot control requires analyzing different lifting motions in diverse situations. Thus, motion analysis is important for clarifying the control strategy. We analyzed the dynamics of human lifting of barbells in different situations, and found that lifting can be divided into four motions.
18

SHIBATA, Satoru, Mohamed Sahbi BENLAMINE, Kanya TANAKA, and Akira SHIMIZU. "Research on Human-Friendly Robot Motions : Motion of a Robot Holding Out an Object to a Human." TRANSACTIONS OF THE JAPAN SOCIETY OF MECHANICAL ENGINEERS Series C 64, no. 617 (1998): 279–87. http://dx.doi.org/10.1299/kikaic.64.279.

19

Tsumugiwa, Toru, Yoshiki Takeuchi, and Ryuichi Yokogawa. "Maneuverability of Impedance-Controlled Motion in a Human-Robot Cooperative Task System." Journal of Robotics and Mechatronics 29, no. 4 (August 20, 2017): 746–56. http://dx.doi.org/10.20965/jrm.2017.p0746.

Abstract:
This paper presents an evaluation of the maneuverability of impedance-controlled robot motion during a human-robot cooperative positioning task. The objectives of this study are to reveal the results of a quantitative evaluation of the maneuverability of robot motion and to investigate the relationship between the results of the quantitative evaluation and an operator’s higher-order brain activity. Control strategies for the robot that are adequate for human-robot interaction have not yet been explicitly determined because of the difficulty in evaluating the maneuverability of robot motion. First, we analyzed the time normalized position and force/torque trajectories to reveal the characteristics of human motion and performed subjective evaluations for three types of impedance-controlled robot motion, which were controlled using the following strategies: (i) ordinary impedance control, (ii) impedance control with virtual Coulomb friction involved in the robot motion, and (iii) impedance control with a trajectory guidance force. Second, to confirm the analysis results based on the observed trajectories, we investigated differences in the operator’s higher-order brain activity when using the different control strategies by using a functional near-infrared spectroscopy system. The experimental results confirmed the relationship between the analysis results of the control strategies, the motion of the operator, and higher-order brain activity. Consequently, the investigation conducted in this study is effective for evaluating the maneuverability of robot motion during a human-robot cooperative task.
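The ordinary impedance control and the variant with virtual Coulomb friction that this study compares can be sketched as a one-dimensional discrete-time model; the mass, damping, and friction values below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def impedance_step(x, v, f_h, dt, m=5.0, d=20.0, coulomb=0.0):
    """Discrete impedance model m*a + d*v (+ coulomb*sign(v)) = f_h,
    where f_h is the force applied by the human operator."""
    a = (f_h - d * v - coulomb * np.sign(v)) / m
    v = v + dt * a
    return x + dt * v, v

def settle(f_h, coulomb, steps=5000, dt=0.001):
    """Steady-state velocity reached under a constant operator force."""
    x = v = 0.0
    for _ in range(steps):
        x, v = impedance_step(x, v, f_h, dt, coulomb=coulomb)
    return v

v_plain = settle(f_h=10.0, coulomb=0.0)   # ordinary impedance control
v_fric  = settle(f_h=10.0, coulomb=4.0)   # with virtual Coulomb friction
```

The steady-state velocities follow v = (f - c)/d, so the virtual friction visibly slows the guided motion for the same operator force, which is the kind of maneuverability difference the study evaluates.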
20

Zanlungo, Francesco, Florent Ferreri, Jani Even, Luis Yoichi Morales, Zeynep Yücel, and Takayuki Kanda. "Pedestrian Models for Robot Motion." Collective Dynamics 5 (August 12, 2020): A90. http://dx.doi.org/10.17815/cd.2020.90.

21

Mi, Jian, and Yasutake Takahashi. "Humanoid Robot Motion Modeling Based on Time-Series Data Using Kernel PCA and Gaussian Process Dynamical Models." Journal of Advanced Computational Intelligence and Intelligent Informatics 22, no. 6 (October 20, 2018): 965–77. http://dx.doi.org/10.20965/jaciii.2018.p0965.

Abstract:
In this article, in contrast to the many studies on human motion learning, we focus on modeling humanoid robot motions directly. The performance of different kernel functions for principal component analysis (PCA) in Gaussian process dynamical models (GPDM) is investigated to build efficient humanoid robot motion models. A novel kernel-PCA-GPDM method is proposed for building different types of humanoid robot motion models; compared with the standard-PCA-GPDM and auto-encoder-GPDM methods, the proposed method is more efficient for humanoid robot motion modeling. In this work, three types of NAO robot motion models are studied: a walk model, a lateral-walk model, and a wave-hand model, with motion data collected from an Aldebaran NAO robot using magnetic rotary encoder sensors. Using the kernel-PCA-GPDM method, the motion data are first projected from the 23-dimensional observation space to a low-dimensional 3D latent space, and the three types of humanoid robot motion models are then learned in that latent space. Finally, we realize humanoid robot motion representation to verify the motion models that we build. The experimental results show that the proposed kernel-PCA-GPDM method builds efficient and smooth motion models.
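A kernel PCA projection of the kind this paper applies before fitting the GPDM can be sketched in plain NumPy; the RBF bandwidth and the synthetic 23-dimensional data below are assumptions for illustration, not the paper's data or parameters:

```python
import numpy as np

def kernel_pca(X, n_components=3, gamma=0.05):
    """RBF-kernel PCA: project (n, d) data to an n_components latent space.
    gamma is an illustrative bandwidth choice."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T   # pairwise squared distances
    K = np.exp(-gamma * d2)
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one     # center in feature space
    w, V = np.linalg.eigh(Kc)
    idx = np.argsort(w)[::-1][:n_components]       # top eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 23))   # stand-in for 23-D joint-angle observations
Z = kernel_pca(X)               # 3-D latent coordinates, as in the paper
```

The GPDM itself would then be fitted on the latent trajectory Z; that step is not reproduced here.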
22

SANKAI, Yoshiyuki. "Robot Suit HAL for Human Motion Support." Journal of Life Support Engineering 19, Supplement (2007): 9–10. http://dx.doi.org/10.5136/lifesupport.19.supplement_9.

23

Sisbot, E. A., L. F. Marin-Urias, R. Alami, and T. Simeon. "A Human Aware Mobile Robot Motion Planner." IEEE Transactions on Robotics 23, no. 5 (October 2007): 874–83. http://dx.doi.org/10.1109/tro.2007.904911.

24

SANKAI, Yoshiyuki. "Robot Suit HAL for Human Motion Support." Proceedings of the JSME Symposium on Welfare Engineering 2007 (2007): 19–20. http://dx.doi.org/10.1299/jsmewes.2007.19.

25

Busch, Baptiste, Jonathan Grizou, Manuel Lopes, and Freek Stulp. "Learning Legible Motion from Human–Robot Interactions." International Journal of Social Robotics 9, no. 5 (March 8, 2017): 765–79. http://dx.doi.org/10.1007/s12369-017-0400-4.

26

He, Yucheng, Ying Hu, Peng Zhang, Baoliang Zhao, Xiaozhi Qi, and Jianwei Zhang. "Human–Robot Cooperative Control Based on Virtual Fixture in Robot-Assisted Endoscopic Sinus Surgery." Applied Sciences 9, no. 8 (April 22, 2019): 1659. http://dx.doi.org/10.3390/app9081659.

Abstract:
In endoscopic sinus surgery, the robot assists the surgeon in holding the endoscope and acts as the surgeon’s third hand, which helps to reduce the surgeon’s operating burden and improve the quality of the operation. This paper proposes a human–robot cooperative control method based on virtual fixture to realize accurate and safe human–robot interaction in endoscopic sinus surgery. Firstly, through endoscopic trajectory analysis, the endoscopic motion constraint requirements of different surgical stages are obtained, and three typical virtual fixtures suitable for endoscopic sinus surgery are designed and implemented. Based on the typical virtual fixtures, a composite virtual fixture is constructed, and then the overall robot motion constraint model is obtained. Secondly, based on the obtained robot motion constraint model, a human–robot cooperative control method based on virtual fixture is proposed. The method adopts admittance control to realize efficient human–robot interaction between the surgeon and robot during the surgery; the virtual fixture is used to restrain and guide the motion of the robot, thereby ensuring motion safety of the robot. Finally, the proposed method is evaluated through a robot-assisted nasal endoscopy experiment, and the result shows that the proposed method can improve the accuracy and safety of operation during endoscopic sinus surgery.
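A guidance-type virtual fixture, one of the constraint types such a system might use, can be sketched as anisotropic admittance: full compliance along a preferred direction and reduced compliance across it. The gains and names below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def virtual_fixture_velocity(f_hand, direction, gain=0.02, softness=0.1):
    """Guidance virtual fixture: the surgeon's hand force is mapped to a
    commanded velocity with full admittance along the preferred direction
    and a reduced one (softness < 1) across it, steering the tool along
    the fixture while still allowing small corrections."""
    d = direction / np.linalg.norm(direction)
    f_along = np.dot(f_hand, d) * d
    f_across = f_hand - f_along
    return gain * (f_along + softness * f_across)

# a diagonal push is mostly redirected along the x-axis fixture
v = virtual_fixture_velocity(np.array([10.0, 10.0, 0.0]),
                             direction=np.array([1.0, 0.0, 0.0]))
```

Setting softness to 0 yields a hard constraint (motion only along the fixture); values between 0 and 1 give the compliant guidance behavior.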
27

Morley, E. C., and J. R. Wilson. "The Matrix of Confusion—a Classification of Robot Movement." Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture 210, no. 3 (June 1996): 251–60. http://dx.doi.org/10.1243/pime_proc_1996_210_114_02.

Abstract:
The human-machine interface for robots has received only limited attention as robots have developed. Most research has focused on the design of teach pendants, hand-held devices for programming and manual motion control. Results from these studies have been generally inconclusive as to the best control design for teach pendants. In a fresh approach to the area, the human-robot interface has been analysed. This has resulted in the development of a method of classifying robot movements called the ‘matrix of confusion’. The classification shows the robot motions an operator would see when using a given control for a unique combination of operator position, robot position, programming mode and robot configuration. The use of the matrix has helped to highlight the most important factors in the task of manual motion control of the robot. This has helped in the development of a new motion system for a PUMA robot which is currently being tested in comparative trials.
28

Li, Shiqi, Haipeng Wang, Shuai Zhang, Shuze Wang, and Ke Han. "Human Motion Trajectory Prediction in Human-Robot Collaborative Tasks." IOP Conference Series: Materials Science and Engineering 646 (October 17, 2019): 012067. http://dx.doi.org/10.1088/1757-899x/646/1/012067.

29

Reyes-Uquillas, Daniel, and Tesheng Hsiao. "Compliant Human–Robot Collaboration with Accurate Path-Tracking Ability for a Robot Manipulator." Applied Sciences 11, no. 13 (June 25, 2021): 5914. http://dx.doi.org/10.3390/app11135914.

Abstract:
In this article, we aim to achieve manual guidance of a robot manipulator to perform tasks that require strict path following and would benefit from collaboration with a human to guide the motion. The robot can be used as a tool to increase the accuracy of a human operator while remaining compliant with the human instructions. We propose a dual-loop control structure where the outer admittance control loop allows the robot to be compliant along a path considering the projection of the external force to the tangential-normal-binormal (TNB) frame associated with the path. The inner motion control loop is designed based on a modified sliding mode control (SMC) law. We evaluate the system behavior to forces applied from different directions to the end-effector of a 6-DOF industrial robot in a linear motion test. Next, a second test using a 3D path as a tracking task is conducted, where we specify three interaction types: free motion (FM), force-applied motion (FAM), and combined motion with virtual forces (CVF). Results show that the difference of root mean square error (RMSE) among the cases is less than 0.1 mm, which proves the feasibility of applying this method for various path-tracking applications in compliant human–robot collaboration.
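Projecting the external force onto the tangential-normal-binormal (TNB) frame of the path, as the outer admittance loop described here does, can be sketched with discrete differences; the frame construction below is a generic Frenet approximation, not the authors' code:

```python
import numpy as np

def tnb_frame(path, i):
    """Discrete tangent-normal-binormal (Frenet) frame at sample i of a
    3-D path given as an (n, 3) array of points."""
    t = path[i + 1] - path[i - 1]           # central-difference tangent
    t = t / np.linalg.norm(t)
    n = path[i + 1] - 2 * path[i] + path[i - 1]   # second difference ~ curvature
    n = n - np.dot(n, t) * t                # remove tangential component
    n = n / np.linalg.norm(n)
    b = np.cross(t, n)                      # binormal completes the frame
    return t, n, b

def project_force(f, frame):
    """Express an external force in TNB coordinates, so that different
    compliance can be applied along and across the path."""
    t, n, b = frame
    return np.array([np.dot(f, t), np.dot(f, n), np.dot(f, b)])

# circular arc in the xy-plane: tangent along the circle, normal to center
s = np.linspace(0.0, np.pi / 2, 101)
path = np.stack([np.cos(s), np.sin(s), np.zeros_like(s)], axis=1)
frame = tnb_frame(path, 50)
```

With the force expressed in this frame, the tangential component can be given free admittance while the normal/binormal components are stiffened to keep the end-effector on the path.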
30

Huang, Qiang, Zhangguo Yu, Weimin Zhang, Wei Xu, and Xuechao Chen. "Design and similarity evaluation on humanoid motion based on human motion capture." Robotica 28, no. 5 (August 28, 2009): 737–45. http://dx.doi.org/10.1017/s0263574709990439.

Abstract:
This paper explores the design of humanoid complicated dynamic motion based on human motion capture. Captured human data must be adapted for the humanoid robot because its kinematics and dynamics mechanisms differ from those of the human actor. It is expected that humanoid movements are highly similar to those of the human actor. First, the kinematics constraints, including ground contact conditions, are formulated. Second, the similarity evaluation on the humanoid motion based on both the spatial and temporal factors compared with the human motion is proposed. Third, the method to obtain humanoid motion with high similarity is presented. Finally, the effectiveness of the proposed method is confirmed by simulations and experiments of our developed humanoid robot “sword” motion performance.
31

KATAOKA, Ryosuke, Yonghoon JI, and Kazunori UMEDA. "Smooth Motion Control of Mobile Robot in Human-robot Coexisting Environment." Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) 2019 (2019): 2P1—A13. http://dx.doi.org/10.1299/jsmermd.2019.2p1-a13.

32

Kataoka, Ryosuke, Shota Suzuki, Yonghoon Ji, and Kazunori Umeda. "Smooth Motion Control of Mobile Robot in Human-robot Coexisting Environment." IFAC-PapersOnLine 52, no. 22 (2019): 91–94. http://dx.doi.org/10.1016/j.ifacol.2019.11.054.

33

Wei, Yuan, and Jing Zhao. "Designing Human-like Behaviors for Anthropomorphic Arm in Humanoid Robot NAO." Robotica 38, no. 7 (September 30, 2019): 1205–26. http://dx.doi.org/10.1017/s026357471900136x.

Abstract:
Human-like motion of robots can improve human–robot interaction and increase the efficiency. In this paper, a novel human-like motion planning strategy is proposed to help anthropomorphic arms generate human-like movements accurately. The strategy consists of three parts: movement primitives, Bayesian network (BN), and a novel coupling neural network (CPNN). The movement primitives are used to decouple the human arm movements. The classification of arm movements improves the accuracy of human-like movements. The motion-decision algorithm based on BN is able to predict occurrence probabilities of the motions and choose appropriate mode of motion. Then, a novel CPNN is proposed to solve the inverse kinematics problems of anthropomorphic arms. The CPNN integrates different models into a single network and reflects the features of these models by changing the network structure. Through the strategy, the anthropomorphic arms can generate various human-like movements with satisfactory accuracy. Finally, the availability of the proposed strategy is verified by simulations for the general motion of humanoid NAO.
APA, Harvard, Vancouver, ISO, and other styles
34

Liang, Peidong, Lianzheng Ge, Yihuan Liu, Lijun Zhao, Ruifeng Li, and Ke Wang. "An Augmented Discrete-Time Approach for Human-Robot Collaboration." Discrete Dynamics in Nature and Society 2016 (2016): 1–13. http://dx.doi.org/10.1155/2016/9126056.

Full text
Abstract:
Human-robot collaboration (HRC) is a key feature distinguishing the new generation of robots from conventional robots. Relevant HRC topics have recently been investigated extensively in academic institutes and companies to improve interactive performance between humans and robots. Generally, human motor control regulates human motion adaptively to the external environment with safety, compliance, stability, and efficiency. Inspired by this, we propose an augmented approach that enables a robot to understand human motion behaviors based on human kinematics and human postural impedance adaptation. Human kinematics is identified by a geometric kinematics approach that maps the human arm configuration, as well as a stiffness index controlled by hand gesture, to an anthropomorphic arm. While human arm postural stiffness is estimated and calibrated within the robot's empirical stability region, human motion is captured by a geometric vector approach based on Kinect. A discrete-time biomimetic controller makes the Baxter robot arm imitate human arm behaviors based on Baxter robot dynamics. An object-moving task is implemented to validate the performance of the proposed methods on the Baxter robot simulator. Results show that the proposed approach to HRC is intuitive, stable, efficient, and compliant, and may have various applications in human-robot collaboration scenarios.
APA, Harvard, Vancouver, ISO, and other styles
35

Nakagawa, Yuki, and Noriaki Nakagawa. "Relationship Between Human and Robot in Nonverbal Communication." Journal of Advanced Computational Intelligence and Intelligent Informatics 21, no. 1 (January 20, 2017): 20–24. http://dx.doi.org/10.20965/jaciii.2017.p0020.

Full text
Abstract:
The role of robots that live together with humans ultimately converges on the problem of communication. We focus on nonverbal communication and the relationship between service robots and humans. We present nonverbal communication experiments in which robots build relationships with humans, and we describe several fictional robots that would be ideal to live with. Our experience shows that a relationship based on touch communication is governed by three elements: appearance, motion, and predictable behavior. These elements are grounded in embodiment: a human touches a robot only after its motion feels safe and natural. The motivation to build a relationship with a robot is physically determined by the three elements above, with appearance and motion being the most important. Evaluating such relationships is complicated because a relationship grows with time spent together and with the motivation to relate. These findings were demonstrated with a life-sized humanoid robot and a robot arm in an exhibition. Based on these results, we consider evaluation methods for understanding the relationship between robots and humans in near-future robot development.
APA, Harvard, Vancouver, ISO, and other styles
36

Et.al, JIBUM JUNG. "Use of Human Motion Data to Train Wearable Robots." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 6 (April 11, 2021): 807–11. http://dx.doi.org/10.17762/turcomat.v12i6.2100.

Full text
Abstract:
Development of wearable robots is accelerating. Walking robots mimic human behavior and must operate without accidents. Human motion data are needed to train these robots. We developed a system for extracting human motion data and displaying them graphically. We extracted motion data using a Perception Neuron motion capture system and used the Unity engine for the simulation. Several experiments were performed to demonstrate the accuracy of the extracted motion data. Of the various methods used to collect human motion data, markerless motion capture is highly inaccurate, while optical motion capture is very expensive, requiring several high-resolution cameras and a large number of markers. Motion capture using a magnetic field sensor is subject to environmental interference. Therefore, we used an inertial motion capture system. Each movement sequence involved four and was repeated 10 times. The data were stored and standardized. The motions of three individuals were compared to those of a reference person; the similarity exceeded 90% in all cases. Our rehabilitation robot accurately simulated human movements: individually tailored wearable robots could be designed based on our data. Safe and stable robot operation can be verified in advance via simulation. Walking stability can be increased using walking robots trained via machine learning algorithms.
APA, Harvard, Vancouver, ISO, and other styles
37

Li, Yanan, Keng Peng Tee, Rui Yan, and Shuzhi Sam Ge. "Reinforcement learning for human-robot shared control." Assembly Automation 40, no. 1 (October 3, 2019): 105–17. http://dx.doi.org/10.1108/aa-10-2018-0153.

Full text
Abstract:
Purpose: This paper aims to propose a general framework of shared control for human–robot interaction. Design/methodology/approach: Human dynamics are considered in the analysis of the coupled human–robot system. Motion intentions of both human and robot are taken into account in the control objective of the robot. Reinforcement learning is developed to achieve the control objective subject to unknown dynamics of human and robot. The closed-loop system performance is discussed through a rigorous proof. Findings: Simulations are conducted to demonstrate the learning capability of the proposed method and its feasibility in handling various situations. Originality/value: Compared to existing works, the proposed framework combines motion intentions of both human and robot in a human–robot shared control system, without requiring knowledge of the human's and robot's dynamics.
APA, Harvard, Vancouver, ISO, and other styles
38

Clever, Debora, Yue Hu, and Katja Mombaur. "Humanoid gait generation in complex environments based on template models and optimality principles learned from human beings." International Journal of Robotics Research 37, no. 10 (May 2, 2018): 1184–204. http://dx.doi.org/10.1177/0278364918765620.

Full text
Abstract:
In this paper, we present an inverse optimal control-based transfer of motions from human experiments to humanoid robots and apply it to walking in constrained environments. To this end, we introduce a 3D template model, which describes motion on the basis of center-of-mass trajectory, foot trajectories, upper-body orientation, and phase duration. Despite its abstract architecture, with prismatic joints combined with damped series elastic actuators instead of knees, the model (including dynamics and constraints) is suitable for describing both human and humanoid locomotion with appropriate parameters. We present and apply an inverse optimal control approach to identify optimality criteria based on human motion capture experiments. The identified optimal strategy is then transferred to a humanoid robot template model for gait generation by solving an optimal control problem, which takes into account the properties of the robot and differences in the environment. The results of this step are the center-of-mass trajectory, the foot trajectories, the torso orientation, and the single and double support phase durations for a sequence of steps, allowing the humanoid robot to walk within a new environment. In a previous paper, we have already presented one computational cycle (from motion capture data to an optimized robot template motion) for the example of walking over irregular stepping stones with the aim of transferring the motion to two very different humanoid robots (iCub@Heidelberg and HRP-2@LAAS). This study represents an extension, containing an entirely new part on the transfer of the optimized template motion to the iCub robot by means of inverse kinematics in a dynamic simulation environment and also on the real robot.
APA, Harvard, Vancouver, ISO, and other styles
39

KIM, JUNG-YUP, and YOUNG-SEOG KIM. "WHOLE-BODY MOTION GENERATION OF ANDROID ROBOT USING MOTION CAPTURE AND NONLINEAR CONSTRAINED OPTIMIZATION." International Journal of Humanoid Robotics 10, no. 02 (June 2013): 1350003. http://dx.doi.org/10.1142/s0219843613500035.

Full text
Abstract:
This paper describes a whole-body motion generation scheme for an android robot using motion capture and an optimization method. Android robots basically require human-like motions due to their human-like appearances. However, they have various limitations on joint angle and joint velocity, as well as different numbers of joints and dimensions compared to humans. Because of these limitations and differences, one appropriate approach is to apply an optimization technique to the motion capture data. Another important issue in whole-body motion generation is the gimbal lock problem, in which a degree of freedom at the three-DOF shoulder disappears. Since gimbal lock causes two DOFs at the shoulder joint to diverge, a simple and effective strategy is required to avoid the divergence. Therefore, we propose a novel algorithm using nonlinear constrained optimization with special cost functions to cope with the aforementioned problems. To verify our algorithm, we chose a fast boxing motion that has a large range of motion and frequent gimbal lock situations, as well as dynamic stepping motions. We then successfully obtained a suitable boxing motion very similar to the captured human motion and also derived a zero moment point (ZMP) trajectory that is realizable for a given android robot model. Finally, quantitative and qualitative evaluations in terms of kinematics and dynamics are carried out for the derived android boxing motion.
APA, Harvard, Vancouver, ISO, and other styles
40

Cheng, Lingbo, and Mahdi Tavakoli. "Switched-Impedance Control of Surgical Robots in Teleoperated Beating-Heart Surgery." Journal of Medical Robotics Research 03, no. 03n04 (September 2018): 1841003. http://dx.doi.org/10.1142/s2424905x18410039.

Full text
Abstract:
A novel switched-impedance control method is proposed and implemented for telerobotic beating-heart surgery. Differing from cardiopulmonary-bypass-based arrested-heart surgery, beating-heart surgery creates challenges for the human operator (surgeon) due to the heart’s fast motions and, in the case of a teleoperated surgical robot, the oscillatory haptic feedback to the operator. This paper designs two switched reference impedance models for the master and slave robots to achieve both motion compensation and nonoscillatory force feedback during slave–heart interaction. By changing the parameters of the impedance models, different performances for both robots are obtained: (a) when the slave robot does not make contact with the beating heart, the slave robot closely follows the motion of the master robot as in a regular teleoperation system, (b) when contact occurs, the slave robot automatically compensates for the fast motions of the beating heart while the human operator perceives the nonoscillatory component of the slave–heart interaction forces, creating the feeling of making contact with an idle heart for the human operator. The proposed method is validated through simulations and experiments.
APA, Harvard, Vancouver, ISO, and other styles
41

RATSAMEE, PHOTCHARA, YASUSHI MAE, KENICHI OHARA, TOMOHITO TAKUBO, and TATSUO ARAI. "HUMAN–ROBOT COLLISION AVOIDANCE USING A MODIFIED SOCIAL FORCE MODEL WITH BODY POSE AND FACE ORIENTATION." International Journal of Humanoid Robotics 10, no. 01 (March 2013): 1350008. http://dx.doi.org/10.1142/s0219843613500084.

Full text
Abstract:
The ability of robots to understand human characteristics and make themselves socially accepted by humans is an important issue if smooth collision avoidance between humans and robots is to be achieved. For smooth collision avoidance, a robot should understand not only physical components such as human position, but also social components such as body pose, face orientation, and proxemics (personal space during motion). We integrated these components in a modified social force model (MSFM), which allows robots to predict human motion and perform smooth collision avoidance. In the modified model, short-term intended direction is described by body pose, and a supplementary force related to face orientation is added for intention estimation. Face orientation is also the best indication of the direction of personal space during motion, which was verified in preliminary experiments. Our approach was implemented and tested on a real humanoid robot in a situation in which a human is confronted with the robot in an indoor environment. Experimental results showed that better human motion tracking was achieved with body pose and face orientation tracking. Provided with the face orientation as an indication of the intended direction, and observing the laws of proxemics in a human-like manner, the robot was able to perform avoidance motions that were more human-like compared to the original social force model (SFM) in a face-to-face confrontation.
APA, Harvard, Vancouver, ISO, and other styles
42

Bodden, Christopher, Daniel Rakita, Bilge Mutlu, and Michael Gleicher. "A flexible optimization-based method for synthesizing intent-expressive robot arm motion." International Journal of Robotics Research 37, no. 11 (September 2018): 1376–94. http://dx.doi.org/10.1177/0278364918792295.

Full text
Abstract:
We present an approach to synthesize robot arm trajectories that effectively communicate the robot’s intent to a human collaborator while achieving task goals. Our approach uses nonlinear constrained optimization to encode task requirements and desired motion properties. Our implementation allows for a wide range of constraints and objectives. We introduce a novel objective function to optimize robot arm motions for intent-expressiveness that works in a range of scenarios and robot arm types. Our formulation supports experimentation with different theories of how viewers interpret robot motion. Through a series of human-subject experiments on real and simulated robots, we demonstrate that our method leads to improved collaborative performance against other methods, including the current state of the art. These experiments also show how our perception heuristic can affect collaborative outcomes.
APA, Harvard, Vancouver, ISO, and other styles
43

Bin Hammam, Ghassan, Patrick M. Wensing, Behzad Dariush, and David E. Orin. "Kinodynamically Consistent Motion Retargeting for Humanoids." International Journal of Humanoid Robotics 12, no. 04 (November 27, 2015): 1550017. http://dx.doi.org/10.1142/s0219843615500176.

Full text
Abstract:
Human-to-humanoid motion retargeting is an important tool to generate human-like humanoid motions. This retargeting problem is often formulated as a Cartesian control problem for the humanoid from a set of task points in the captured human data. Classically, Cartesian control has been developed for redundant systems. While redundancy fundamentally adds new sub-task capabilities, the degree to which secondary objectives can be faithfully executed cannot be determined in advance. In fact, a robot that exhibits redundancy with respect to an operational task may have insufficient degrees of freedom (DOFs) to satisfy more critical constraints. In this paper, we present a Cartesian space resolved acceleration control framework to handle execution of operational tasks and constraints for redundant and nonredundant task specifications. The approach is well suited for online control of humanoid robots from captured human motion data expressed by Cartesian variables. The current formulation enforces kinematic constraints such as joint limits, self-collisions, and foot constraints and incorporates a dynamically-consistent redundancy resolution approach to minimize costly joint motions. The efficacy of the proposed algorithm is demonstrated by simulated and real-time experiments of human motion replication on a Honda humanoid robot model. The algorithm closely tracks all input motions while smoothly and automatically transitioning between regimes where different constraints are binding.
APA, Harvard, Vancouver, ISO, and other styles
44

Bian, Feifei, Danmei Ren, Ruifeng Li, Peidong Liang, Ke Wang, and Lijun Zhao. "Dynamical system based variable admittance control for physical human-robot interaction." Industrial Robot: the international journal of robotics research and application 47, no. 4 (May 15, 2020): 623–35. http://dx.doi.org/10.1108/ir-12-2019-0258.

Full text
Abstract:
Purpose: The purpose of this paper is to enable robots to intelligently adapt their damping characteristics and motions in a reactive fashion toward human inputs and task requirements during physical human–robot interaction. Design/methodology/approach: This paper exploits a combination of a dynamical system and an admittance model to create robot behaviors. The reference trajectories are generated by dynamical systems, while admittance control enables robots to compliantly follow the reference trajectories. To determine how control is divided between the two models, a collaborative arbitration algorithm is presented that changes their contributions to the robot motion based on the contact forces. In addition, the authors model the robot's impedance characteristics as a function of the task requirements and build a novel artificial damping field (ADF) to represent the virtual damping at arbitrary robot states. Findings: The authors evaluate their methods through experiments on a UR10 robot. The results show promising performance in achieving complex tasks in collaboration with human partners. Originality/value: The proposed method extends the dynamical system approach with an admittance control law to allow a robot motion to be adjusted in real time. In addition, the authors propose a novel ADF method to model the robot's impedance characteristics as a function of the task requirements.
APA, Harvard, Vancouver, ISO, and other styles
45

Islam, Md Jahidul, Marc Ho, and Junaed Sattar. "Understanding human motion and gestures for underwater human-robot collaboration." Journal of Field Robotics 36, no. 5 (November 15, 2018): 851–73. http://dx.doi.org/10.1002/rob.21837.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Beschi, Manuel, Marco Faroni, Cosmin Copot, and Nicola Pedrocchi. "How motion planning affects human factors in human-robot collaboration." IFAC-PapersOnLine 53, no. 5 (2020): 744–49. http://dx.doi.org/10.1016/j.ifacol.2021.04.167.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Rosell, Jan, Raúl Suárez, Néstor García, and Muhayy Ud Din. "Planning Grasping Motions for Humanoid Robots." International Journal of Humanoid Robotics 16, no. 06 (December 2019): 1950041. http://dx.doi.org/10.1142/s0219843619500415.

Full text
Abstract:
This paper addresses the problem of obtaining the required motions for a humanoid robot to perform grasp actions trying to mimic the coordinated hand–arm movements humans do. The first step is the data acquisition and analysis, which consists in capturing human movements while grasping several everyday objects (covering four possible grasp types), mapping them to the robot and computing the hand motion synergies for the pre-grasp and grasp phases (per grasp type). Then, the grasp and motion synthesis step is done, which consists in generating potential grasps for a given object using the four family types, and planning the motions using a bi-directional multi-goal sampling-based planner, which efficiently guides the motion planning following the synergies in a reduced search space, resulting in paths with human-like appearance. The approach has been tested in simulation, thoroughly compared with other state-of-the-art planning algorithms obtaining better results, and also implemented in a real robot.
APA, Harvard, Vancouver, ISO, and other styles
48

Chan, Chee Seng, Honghai Liu, and David Brown. "Adapting robot kinematics for human-arm motion recognition." International Journal of Knowledge-based and Intelligent Engineering Systems 11, no. 4 (September 18, 2007): 207–17. http://dx.doi.org/10.3233/kes-2007-11403.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Kiguchi, Kazuo, Shingo Kariya, Keigo Watanabe, and Toshio Fukuda. "Human Elbow Motion Support with an Exoskeltal Robot." Proceedings of the JSME Bioengineering Conference and Seminar 2000.11 (2000): 3–4. http://dx.doi.org/10.1299/jsmebs.2000.11.0_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

KAJIKAWA, Shinya. "Robot Motion Planning for Hand-Over with Human." Transactions of the Japan Society of Mechanical Engineers Series C 68, no. 674 (2002): 3007–14. http://dx.doi.org/10.1299/kikaic.68.3007.

Full text
APA, Harvard, Vancouver, ISO, and other styles