Academic literature on the topic 'Haptics, Robotics, Human guidance, Engineering'


Journal articles on the topic "Haptics, Robotics, Human guidance, Engineering"

1. Kanai, Satoshi, and Jouke C. Verlinden. "Special Issue on Augmented Prototyping and Fabrication for Advanced Product Design and Manufacturing." International Journal of Automation Technology 13, no. 4 (2019): 451–52. http://dx.doi.org/10.20965/ijat.2019.p0451.

Abstract:
“Don’t automate, augment!” This is the takeaway of the seminal book on the future of work by Davenport and Kirby.*1 The emergence of cyber-physical systems makes radical new products and systems possible and challenges the role of humankind. Throughout the design, manufacturing, use, maintenance, and end-of-life stages, digital aspects (sensing, inferencing, connecting) influence the physical (digital fabrication, robotics) and vice versa. A key takeaway is that such innovations can augment human capabilities, extending our mental and physical skills with computational and robotic support – a notion called “augmented well-being.” Furthermore, agile development methods, complemented by mixed-reality systems and 3D-printing systems, enable us to create and adapt such systems on the fly, with almost instant turnaround times. Following this line of thought, our special issue is entitled “Augmented Prototyping and Fabrication for Advanced Product Design and Manufacturing.” Heavily inspired by Prof. Jun Rekimoto’s Augmented Human framework,*2 we can discern two orthogonal axes: cognitive versus physical and reflective versus active. As depicted in Fig. 1, this creates four quadrants with important scientific domains that need to be juxtaposed. The contributions in this special issue are valuable steps towards this concept and are briefly discussed below.

AR/VR: To drive AR to the next level, robust tracking and tracing techniques are essential. The paper by Sumiyoshi et al. presents a new algorithm for object recognition and pose estimation in a strongly cluttered environment. As an example of how AR/VR can reshape human skills training, the development report of Komizunai et al. demonstrates an endotracheal suctioning simulator that establishes an optimized, spatial display with projector-based AR.

Robotics/Cyborg: Shor et al. present an augmentation display that uses haptics to go beyond the visual senses. The display has all the elements of a robotic system and is directly coupled to the human hand. In a completely different way, the article by Mitani et al. presents a development in soft robotics: a tongue simulator (smart sensing and production of soft material), with a detailed account of the production and the technical performance. Finally, to consider novel human-robot interaction, human body tracking is essential. The system presented by Maruyama et al. introduces IMU-based human motion capture, in this case of the motion of cycling.

Co-making: Augmented well-being has to consider human-centered design and new collaborative environments where the stakeholders involved in the whole product life-cycle work together to deliver better solutions. Inoue et al. propose a generalized decision-making scheme for universal design which considers anthropometric diversity. In the paper by Tanaka et al., paper inspection documents are electronically superimposed on 3D design models to enable design-inspection collaboration and more reliable maintenance activities for large-scale infrastructures.

Artificial Intelligence: Nakamura et al. propose an optimization-based search for interference-free paths and the poses of equipment in cluttered indoor environments, captured by interactive RGBD scans. AR-based guidance is provided to the user.

Finally, the editors would like to express their gratitude to the authors for their exceptional contributions and to the anonymous reviewers for their devoted work. We expect that this special issue will encourage a new departure for research on augmented prototyping for product design and manufacturing.

*1 T. H. Davenport and J. Kirby, “Only Humans Need Apply: Winners and Losers in the Age of Smart Machines,” Harper Business, 2016.
*2 https://lab.rekimoto.org/about/ [Accessed June 21, 2019]
2. Klatzky, Roberta L., Susan J. Lederman, and J. D. Balakrishnan. "Task-Driven Extraction of Object Contour by Human Haptics: Part 1." Robotica 9, no. 1 (1991): 43–51. http://dx.doi.org/10.1017/s0263574700015551.

Abstract:
The extraction of contour information from objects is essential for purposes of grasping and manipulation. We proposed that human haptic exploration of contours, in the absence of vision, would reveal specialized patterns. Task goals and intrinsic system capacities were assumed to constrain the breadth of processing and the precision with which contour is encoded, thus determining parameters of exploration and ultimately producing movement synergies or “contour exploration procedures.” A methodology for testing these assumptions is described, and the most frequently observed procedures are documented in Part 1. Part 2 will further analyze the procedures, test predictions, and develop implications of the research. The two-part paper is novel in its study of human manipulative behavior from a robotic standpoint; it is thus of interest both to researchers pursuing the long-term goals of robot manipulation and to those interested in an anthropomorphic approach to robotics studies.
3. Galambos, Péter, Péter Baranyi, and Gusztáv Arz. "Tensor product model transformation-based control design for force reflecting tele-grasping under time delay." Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science 228, no. 4 (2013): 765–77. http://dx.doi.org/10.1177/0954406213490375.

Abstract:
The improvement of direct human–robot physical interaction has recently become one of the strongest motivating factors in robotics research. Impedance/admittance control methods are key technologies in several directions of advanced robotics, such as dexterous manipulation, haptics, and telemanipulation. In this paper, we propose a control scheme and a design technique for stabilising shared impedance/admittance-based bilateral telemanipulation under varying time delay. The proposed scheme introduces delay-adaptive non-linear damping to stabilise the impedance model. A modified version of the tensor product model transformation is applied to determine the tensor product type polytopic linear parameter varying (LPV) representation of the impedance-controlled interaction model, such that the value of the actual time delay becomes an external parameter rather than an inherent property of the system. The main benefit of the proposed approach is that the model form it produces is amenable to the immediate application of modern, linear matrix inequality (LMI)-based multi-objective synthesis methods. The viability of the proposed methodology is demonstrated through a single-DoF force-reflecting tele-grasping application. The results are also confirmed through laboratory experiments, which further highlight the perspectives of this novel approach.
4. Ramirez-Zamora, Juan Daniel, Omar Arturo Dominguez-Ramirez, Luis Enrique Ramos-Velasco, et al. "HRpI System Based on Wavenet Controller with Human Cooperative-in-the-Loop for Neurorehabilitation Purposes." Sensors 22, no. 20 (2022): 7729. http://dx.doi.org/10.3390/s22207729.

Abstract:
There exist several methods aimed at human–robot physical interaction (HRpI) to provide physical therapy to patients. The use of haptics has become an option to display forces along a given path so as to guide the physiotherapy protocol. Critical in this regard is the motion control for haptic guidance to convey the specifications of the clinical protocol. Given the inherent patient variability, a key demand on these HRpI methods is the ability to modify their response online, neither rejecting nor neglecting interaction forces but processing them as patient interaction. In this paper, considering the nonlinear dynamics of the robot interacting bilaterally with a patient, we propose a novel adaptive control to guarantee stable haptic guidance by processing the causality of patient interaction forces, despite unknown robot dynamics and uncertainties. The controller implements a radial basis neural network with daughter RASP1 wavelet activation functions to identify the coupled interaction dynamics. For an efficient online implementation, an output infinite impulse response filter prunes negligible signals and nodes to deal with overparametrization. This contributes to adapting online the feedback gains of a globally stable discrete PID regulator to yield stiffness control, so the user is guided within a perceptual force field. The effectiveness of the proposed method is verified in real-time bimanual human-in-the-loop experiments.
5. Almeida, Luis, Paulo Menezes, and Jorge Dias. "Telepresence Social Robotics towards Co-Presence: A Review." Applied Sciences 12, no. 11 (2022): 5557. http://dx.doi.org/10.3390/app12115557.

Abstract:
Telepresence robots are becoming popular in social interactions involving health care, elderly assistance, guidance, or office meetings. There are two types of human psychological experiences to consider in robot-mediated interactions: (1) telepresence, in which a user develops a sense of being present near the remote interlocutor, and (2) co-presence, in which a user perceives the other person as being present locally with him or her. This work presents a literature review on developments supporting robotic social interactions, contributing to improving the sense of presence and co-presence via robot mediation. This survey aims to define social presence and co-presence, identify autonomous “user-adaptive systems” for social robots, and propose a taxonomy for “co-presence” mechanisms. It presents an overview of social robotics systems, application areas, and technical methods, and provides directions for telepresence and co-presence robot design given current and future challenges. Finally, we suggest evaluation guidelines for these systems, using face-to-face interaction as a reference.
6. Trovato, Gabriele, Alexander Lopez, Renato Paredes, Diego Quiroz, and Francisco Cuellar. "Design and Development of a Security and Guidance Robot for Employment in a Mall." International Journal of Humanoid Robotics 16, no. 05 (2019): 1950027. http://dx.doi.org/10.1142/s0219843619500270.

Abstract:
Among the possible fields of human society in which robotics can be applied, the possibility of a robotic guard has long been imagined. Human guards usually perform a range of tasks in which a robot can provide help. Security personnel not only perform security tasks and patrolling services but also have to interact with people, providing additional information about the area they safeguard. In this paper, we present the design, development, and preliminary tests of RobotMan, an anthropomorphic robot commissioned by a security company to serve in security roles, such as patrolling large indoor areas and acting as a telepresence platform for the human guards, and in guidance roles, such as welcoming visitors and providing information. In the preliminary experiment, the new robot and a human guard were employed in the roles of security and guidance in a manufacturing center. The results of the experiment, collected from 96 participants, highlighted differences in participants’ behavior when interacting with the robot rather than the human, and a different perception of the robot's likeability and authority depending on subtle differences in its appearance and behavior. These results provide useful indications for the employment of robot guards in real-world situations.
7. Bayat, Behzad, Julita Bermejo-Alonso, Joel Carbonera, et al. "Requirements for building an ontology for autonomous robots." Industrial Robot: An International Journal 43, no. 5 (2016): 469–80. http://dx.doi.org/10.1108/ir-02-2016-0059.

Abstract:
Purpose: The IEEE Ontologies for Robotics and Automation Working Group was divided into subgroups in charge of studying industrial robotics, service robotics, and autonomous robotics. This paper aims to present the work in progress developed by the autonomous robotics (AuR) subgroup. This group aims to extend the core ontology for robotics and automation to represent more specific concepts and axioms that are commonly used in autonomous robots. Design/methodology/approach: For autonomous robots, various concepts for aerial robots, underwater robots, and ground robots are described. Components of an autonomous system are defined, such as robotic platforms, actuators, sensors, control, state estimation, path planning, perception, and decision-making. Findings: AuR has identified the core concepts and domains needed to create an ontology for autonomous robots. Practical implications: AuR aims to create a standard ontology to represent the knowledge and reasoning needed to create autonomous systems comprising robots that can operate in air, ground, and underwater environments. The concepts in the developed ontology will endow a robot with autonomy, that is, the ability to perform desired tasks in unstructured environments without continuous explicit human guidance. Originality/value: Creating a standard for knowledge representation and reasoning in autonomous robotics will have a significant impact on all R&A domains, such as knowledge transmission among agents, including autonomous robots and humans. This will facilitate communication among them and also provide reasoning capabilities involving the knowledge of all elements using the ontology, resulting in improved autonomy of autonomous systems. The autonomy will have a considerable impact on how robots interact with humans. As a result, the use of robots will further benefit our society. Many tedious tasks that currently can only be performed by humans will be performed by robots, further improving the quality of life. To the best of the authors’ knowledge, AuR is the first group to adopt a systematic approach to developing ontologies consisting of specific concepts and axioms that are commonly used in autonomous robots.
8. Dong, Yiqun, Jianliang Ai, and Jiquan Liu. "Guidance and control for own aircraft in the autonomous air combat: A historical review and future prospects." Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering 233, no. 16 (2019): 5943–91. http://dx.doi.org/10.1177/0954410019889447.

Abstract:
The Autonomous Air Combat technique has been a lasting research topic for decades. However, no complete solutions seem to have appeared because of the highly dynamic and complex nature of the Autonomous Air Combat problem. In devising Autonomous Air Combat solutions, we follow methodologies similar to those of the robotics community and divide the overall scheme into two parts: the perception of other (enemy/friendly) aircraft, and the guidance/control of own aircraft. While the perception part serves as a foundation, this paper is mainly focused on the second part. Based on our survey, a review of own-aircraft guidance/control in (primarily one-to-one) Autonomous Air Combat solutions is presented. We divide the different Autonomous Air Combat solutions into three groups, i.e. mathematics-based, knowledge-encoded, and learning-driven. In each group, we present the representative methods first; the problem definition, solution, and a brief overview of the historical development are illustrated. We also comment on both the weaknesses and strengths of each group/method. We point out certain technical paths/challenges that need to be addressed in future Autonomous Air Combat development, i.e. to abstract and emulate human pilot experience, and to develop online learning capabilities. Inspired by state-of-the-art techniques in other similar fields (robotics, autonomous driving), we also propose potential solutions, i.e. traditional approaches enhanced by novel data-driven techniques. Via this paper, we hope to deliver an in-depth analysis of past experiences and potential challenges/solutions for the Autonomous Air Combat technique. We also advocate referring to the approaches/techniques that are utilized in other similar fields in devising Autonomous Air Combat solutions.
9. Sosa-Ceron, Arturo Daniel, Hugo Gustavo Gonzalez-Hernandez, and Jorge Antonio Reyes-Avendaño. "Learning from Demonstrations in Human–Robot Collaborative Scenarios: A Survey." Robotics 11, no. 6 (2022): 126. http://dx.doi.org/10.3390/robotics11060126.

Abstract:
Human–Robot Collaboration (HRC) is an interdisciplinary research area that has gained attention within the smart manufacturing context. To address changes within manufacturing processes, HRC seeks to combine the impressive physical capabilities of robots with the cognitive abilities of humans to design tasks with high efficiency, repeatability, and adaptability. During the implementation of an HRC cell, a key activity is robot programming that takes into account not only the robot's restrictions and the working space, but also human interactions. One of the most promising techniques is the so-called Learning from Demonstration (LfD). This approach is based on a collection of learning algorithms inspired by how humans imitate behaviors to learn and acquire new skills. In this way, the programming task could be simplified and performed by the shop-floor operator. The aim of this work is to present a survey of this programming technique, with emphasis on collaborative scenarios rather than just isolated tasks. The literature was classified and analyzed based on the main algorithms employed for skill/task learning and the human level of participation during the whole LfD process. Our analysis shows that human intervention has been poorly explored, and its implications have not been carefully considered. Among the different methods of data acquisition, the prevalent method is physical guidance. Regarding data modeling, techniques such as Dynamic Movement Primitives and Semantic Learning were the preferred methods for low-level and high-level task solving, respectively. This paper aims to provide guidance and insights for researchers looking for an introduction to LfD programming methods in a collaborative robotics context and to identify research opportunities.
10. Alnajjar, Fady, Abdul Rahman Hafiz, and Kazuyuki Murase. "HCBPM: An Idea toward a Social Learning Environment for Humanoid Robot." Journal of Robotics 2010 (2010): 1–13. http://dx.doi.org/10.1155/2010/241785.

Abstract:
To advance robotics toward real-world applications, a growing body of research has focused on the development of control systems for humanoid robots in recent years. Several approaches have been proposed to support the learning stage of such controllers, where the robot can learn new behaviors by observing and/or receiving direct guidance from a human or even another robot. These approaches require dynamic learning and memorization techniques, which the robot can use to reform and update its internal systems continuously while learning new behaviors. Against this background, this study investigates a new approach to the development of an incremental learning and memorization model. This approach was inspired by the principles of neuroscience, and the developed model was named “Hierarchical Constructive Backpropagation with Memory” (HCBPM). The validity of the model was tested by teaching a humanoid robot to recognize a group of objects through natural interaction. The experimental results indicate that the proposed model efficiently enhances real-time machine learning in general and can be used to establish an environment suitable for social learning between the robot and the user in particular.