Journal articles on the topic "Real Time Teleoperation of Robotic Interfaces"


Consult the top 50 journal articles for your research on the topic "Real Time Teleoperation of Robotic Interfaces."

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication as a .pdf file and read its abstract online, whenever these details are available in the source metadata.

Browse journal articles across a wide range of disciplines and compile your bibliography correctly.

1

Huang, Kevin, Divas Subedi, Rahul Mitra, Isabella Yung, Kirkland Boyd, Edwin Aldrich, and Digesh Chitrakar. "Telelocomotion—Remotely Operated Legged Robots." Applied Sciences 11, no. 1 (December 28, 2020): 194. http://dx.doi.org/10.3390/app11010194.

Abstract:
Teleoperated systems enable human control of robotic proxies and are particularly amenable to inaccessible environments unsuitable for autonomy. Examples include emergency response, underwater manipulation, and robot-assisted minimally invasive surgery. However, teleoperation architectures have been predominantly employed in manipulation tasks, and are thus only useful when the robot is within reach of the task. This work introduces the idea of extending teleoperation to enable online human remote control of legged robots, or telelocomotion, to traverse challenging terrain. Traversing unpredictable terrain remains a challenge for autonomous legged locomotion, as demonstrated by robots commonly falling in high-profile robotics contests. Telelocomotion can reduce the risk of mission failure by leveraging the high-level understanding of human operators to command the gaits of legged robots in real time. In this work, a haptic telelocomotion interface was developed. Two within-user studies validate the proof-of-concept interface: (i) the first compared basic interfaces with the haptic interface for control of a simulated hexapedal robot at various levels of traversal complexity; (ii) the second presents a physical implementation and investigated the efficacy of the proposed haptic virtual fixtures. The results are promising for the use of haptic feedback in telelocomotion for complex traversal tasks.
2

Weisbin, C., and D. Perillard. "R & D Profile Jet Propulsion Laboratory Robotic Facilities and Associated Research." Robotica 9, no. 1 (January 1991): 7–21. http://dx.doi.org/10.1017/s0263574700015526.

Abstract:
This paper describes the robotics facilities and associated research program of the Jet Propulsion Laboratory, lead center in telerobotics for the United States National Aeronautics and Space Administration. Emphasis is placed on the evolution from teleoperation to remote system automation. Research is described in manipulator modelling and control, real-time planning and monitoring, navigation in outdoor terrain, real-time sensing and perception, human-machine interfaces, and overall system architectures. Applications to NASA missions emphasize robotic spacecraft for solar system exploration, satellite servicing and retrieval, assembly of structures, and surveillance. Applications to military missions include battlefield navigation, surveillance, logistics, command and control.
3

Lumia, R. "Using NASREM for real-time sensory interactive robot control." Robotica 12, no. 2 (March 1994): 127–35. http://dx.doi.org/10.1017/s0263574700016714.

Abstract:
The Flight Telerobotic Servicer (FTS) is a robotic device which will be used to build and maintain Space Station Freedom. The FTS is expected to evolve from its initial capability of teleoperation toward greater autonomy by taking advantage of advances in technology as they become available. In order to support this evolution, NASA has chosen the NASA/NIST Standard Reference Model for Telerobot Control System Architecture (NASREM) as the FTS functional architecture. As a result of the definition of generic interfaces in NASREM, the system can be modified without major impact. Consequently, different approaches to solving a problem can be tested easily. This paper describes the implementation of NASREM in the NIST laboratory. The approach is to build a flexible testbed to enhance research in robot control, computer vision, and related areas. To illustrate the real-time aspects of the implementation, a sensory interactive motion control experiment is described.
4

Overholt, Dan, Edgar Berdahl, and Robert Hamilton. "Advancements in Actuated Musical Instruments." Organised Sound 16, no. 2 (June 28, 2011): 154–65. http://dx.doi.org/10.1017/s1355771811000100.

Abstract:
This article presents recent developments in actuated musical instruments created by the authors, who also describe an ecosystemic model of actuated performance activities that blur traditional boundaries between the physical and virtual elements of musical interfaces. Actuated musical instruments are physical instruments that have been endowed with virtual qualities controlled by a computer in real time, but which are nevertheless tangible. These instruments provide intuitive and engaging new forms of interaction. They are different from traditional (acoustic) and fully automated (robotic) instruments in that they produce sound via vibrating element(s) that are co-manipulated by humans and electromechanical systems. We examine the possibilities that arise when such instruments are played in different performative environments and music-making scenarios, and we postulate that such designs may give rise to new methods of musical performance. The Haptic Drum, the Feedback Resonance Guitar, the Electromagnetically Prepared Piano, the Overtone Fiddle and Teleoperation with Robothands are described, along with musical examples and reflections on the emergent properties of the performance ecologies that these instruments enable. We look at some of the conceptual and perceptual issues introduced by actuated musical instruments, and finally we propose some directions in which such research may be headed in the future.
5

Bouteraa, Yassine, and Ismail Ben Abdallah. "A gesture-based telemanipulation control for a robotic arm with biofeedback-based grasp." Industrial Robot: An International Journal 44, no. 5 (August 21, 2017): 575–87. http://dx.doi.org/10.1108/ir-12-2016-0356.

Abstract:
Purpose: The idea is to exploit the natural stability and performance of the human arm during movement, execution and manipulation. The purpose of this paper is to remotely control a handling robot with a low-cost but effective solution.
Design/methodology/approach: The developed approach is based on three different techniques to ensure movement and pattern recognition of the operator's arm, as well as effective control of the object manipulation task. First, the methodology relies on Kinect-based gesture recognition of the operator's arm. However, a vision-based approach alone is not sufficient for hand posture recognition, especially when the hand is occluded. The proposed approach therefore supports the vision-based system with an electromyography (EMG)-based biofeedback system for posture recognition. Moreover, the novel approach adds to the vision-based gesture control and the EMG-based posture recognition a force feedback that informs the operator of the real grasping state.
Findings: The main finding is a robust method for gesture-based control of a robot manipulator during movement, manipulation and grasping. The proposed approach uses a real-time gesture control technique based on a Kinect camera that provides the exact position of each joint of the operator's arm. The developed solution also integrates EMG biofeedback and force feedback in its control loop. In addition, the authors propose a user-friendly human-machine interface (HMI) that allows the user to control a robotic arm in real time. The robust trajectory-tracking challenge has been solved by the implementation of a sliding mode controller, and a fuzzy logic controller has been implemented to manage the grasping task based on the EMG signal. Experimental results have shown the high efficiency of the proposed approach.
Research limitations/implications: There are some constraints when applying the proposed method, such as the sensitivity of the desired trajectory generated by the human arm, even in the case of random and unwanted movements, which can damage the manipulated object during the teleoperation process. In such cases, operator skill is highly required.
Practical implications: The developed control approach can be used in all applications that require real-time human-robot cooperation.
Originality/value: The main advantage of the developed approach is that it benefits from three techniques at the same time: EMG biofeedback, a vision-based system and haptic feedback. Using only vision-based approaches, mainly for hand posture recognition, is not effective; recognition should therefore also be based on the biofeedback naturally generated by the muscles responsible for each posture. Moreover, the use of a force sensor in a closed-loop control scheme without operator intervention is ineffective in the special cases in which the manipulated objects vary over a wide range with different metallic characteristics. The human-in-the-loop technique can therefore imitate natural human postures in the grasping task.
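The sliding mode controller mentioned in this abstract is not specified further; purely as a hedged sketch, the single-joint tracking law below illustrates the technique, with all gains, the inertia value and the boundary-layer width being assumptions rather than values from the paper:

```python
import numpy as np

def sliding_mode_torque(q, dq, q_des, dq_des, ddq_des,
                        lam=5.0, k=2.0, inertia=0.1, phi=0.05):
    """Single-joint sliding-mode tracking law (illustrative values only).

    The sliding surface is s = de + lam*e; the control adds a
    computed-torque feedforward and a switching term. Using
    tanh(s/phi) instead of sign(s) bounds the chattering.
    """
    e, de = q_des - q, dq_des - dq
    s = de + lam * e
    u_eq = inertia * (ddq_des + lam * de)  # equivalent (model-based) control
    u_sw = k * np.tanh(s / phi)            # smoothed switching term
    return u_eq + u_sw

tau = sliding_mode_torque(q=0.0, dq=0.0, q_des=0.5, dq_des=0.0, ddq_des=0.0)
```

The boundary layer (tanh instead of a hard sign function) is a standard way to limit chattering at the cost of a small steady-state tracking error.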
6

Al-Badri, Mohammed, Svenja Ipsen, Sven Böttger, and Floris Ernst. "Robotic 4D ultrasound solution for real-time visualization and teleoperation." Current Directions in Biomedical Engineering 3, no. 2 (September 7, 2017): 559–61. http://dx.doi.org/10.1515/cdbme-2017-0116.

Abstract:
Automation of the image acquisition process via robotic solutions offers a large leap towards resolving ultrasound's user-dependency. This paper, as part of a larger project aimed at developing a multipurpose 4D-ultrasonic force-sensitive robot for medical applications, focuses on achieving real-time remote visualisation for 4D ultrasound image transfer. This was made possible by implementing our software modification on a GE Vivid 7 Dimension workstation, which operates a matrix array probe controlled by a 7-DOF KUKA LBR iiwa 7 robotic arm. With the help of robotic positioning and the matrix array probe, fast volumetric imaging of target regions was feasible. Testing with ultrasound volumes roughly 880 kB in size over a gigabit Ethernet connection achieved a latency of ~57 ms for volume transfer between the ultrasound station and a remote client application, which allows a frame rate of 17.4 fps. Our modification thus offers for the first time real-time remote visualization, recording and control of 4D ultrasound data, which can be implemented in teleoperation.
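A quick sanity check of the numbers reported in this abstract (assuming the quoted 880 kB volume size and a gigabit link; the breakdown of the 57 ms is not given in the abstract):

```python
volume_bytes = 880e3          # reported volume size (~880 kB)
link_bps = 1e9                # gigabit Ethernet

wire_time_ms = volume_bytes * 8 / link_bps * 1e3
print(f"raw transmission time: {wire_time_ms:.1f} ms")    # ~7.0 ms

latency_ms = 57.0             # reported end-to-end per-volume latency
print(f"implied frame rate: {1e3 / latency_ms:.1f} fps")  # ~17.5 fps
```

Only about 7 ms of the 57 ms is wire time, so the reported 17.4 fps is consistent with the pipeline being limited by per-volume processing latency rather than by the link.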
7

Liu, Rong. "AUDITORY DISPLAY WITH SENSORY SUBSTITUTION FOR INTERNET-BASED TELEOPERATION: A FEASIBILITY STUDY." Biomedical Engineering: Applications, Basis and Communications 21, no. 02 (April 2009): 131–37. http://dx.doi.org/10.4015/s1016237209001155.

Abstract:
A critical challenge in telerobotic systems is data communication over networks without performance guarantees. This paper proposes a novel way of using auditory feedback as the sensory feedback to ensure that a teleoperated robotic system still functions in a real-time fashion under unfavorable communication conditions, such as image losses, visual failures, and low-bandwidth communication links. The proposed method is tested through psychoacoustic experiments with 10 subjects conducting real-time robotic navigation tasks. Performance is analyzed from an objective point of view (time to finish the task, distance to the target), as well as through subjective workload assessments for the different sensory feedback conditions. Moreover, the bandwidth consumed when auditory information is applied is considerably lower compared with visual information. Preliminary results demonstrate the feasibility of auditory display as a complement or substitute to visual display for remote robotic navigation.
8

Wang, Ping, Xin Gao, Rong Xin Fu, Si Yu Han, Xiao Jing Fang, and Xiao Ou Liu. "The Construction of Augmented Reality Teleoperation System with Force Feedback." Applied Mechanics and Materials 494-495 (February 2014): 1064–67. http://dx.doi.org/10.4028/www.scientific.net/amm.494-495.1064.

Abstract:
Aiming at the problems of communication time delay and real-time correction in path planning, this paper presents a telerobot system based on augmented reality and force feedback technologies. Its core is the dynamic integration of live streaming video of the remote scene with a virtual robot, applying force feedback sensing and control technologies to solve the robotic arm's path-planning problem. Experiments prove that the system can largely overcome the delay problem and make up for the limitations of relying on virtual reality simulation technology alone.
9

Popov, Dmitrii. "Teleoperation of ground-based mobile robotic systems with time delays in data transmission channels." Robotics and Technical Cybernetics 10, no. 3 (September 2022): 213–18. http://dx.doi.org/10.31776/rtcj.10306.

Abstract:
The paper is devoted to the issues of teleoperation of ground mobile robots. The problems of moving a robot in an unstructured environment under the commands of a human operator are considered. A significant problem that reduces the quality of control and often leads to loss of stability is the time delays that occur in the information channels of the complex. To partially compensate for the negative impact of these time delays, an approach is proposed based on predicting the local goal of movement and the real position of the robot at the time of command formation, together with a model of the operator. To test the approach, a computer simulation of the robot control process was performed on the basis of a training complex built on the Unity engine. The task consisted of controlling the movement of the robot along a reference trajectory displayed on the screen. The task execution time and the similarity of the recorded trajectory to the reference one were evaluated. The experimental results confirmed the positive effect of the proposed compensation method on the efficiency of the control system.
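The abstract does not detail the predictor, so the following is only a sketch of the general idea: dead-reckoning the robot's pose forward by the known channel delay using its last reported state. The constant-velocity unicycle model and all interfaces here are assumptions:

```python
import numpy as np

def predict_pose(pose, velocity, delay_s):
    """Dead-reckon a ground robot's pose forward by the channel delay.

    pose:     (x, y, heading) from the last received state
    velocity: (v, omega) last reported linear/angular velocity
    Assumes a constant-velocity unicycle model; the cited paper also
    predicts the operator's local motion goal, which is omitted here.
    """
    x, y, th = pose
    v, w = velocity
    if abs(w) < 1e-6:  # straight-line segment
        return x + v * delay_s * np.cos(th), y + v * delay_s * np.sin(th), th
    th_new = th + w * delay_s  # circular-arc segment
    r = v / w
    return (x + r * (np.sin(th_new) - np.sin(th)),
            y - r * (np.cos(th_new) - np.cos(th)),
            th_new)

print(predict_pose((0.0, 0.0, 0.0), (0.5, 0.2), delay_s=0.4))
```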
10

Miehlbradt, Jenifer, Alexandre Cherpillod, Stefano Mintchev, Martina Coscia, Fiorenzo Artoni, Dario Floreano, and Silvestro Micera. "Data-driven body–machine interface for the accurate control of drones." Proceedings of the National Academy of Sciences 115, no. 31 (July 16, 2018): 7913–18. http://dx.doi.org/10.1073/pnas.1718648115.

Abstract:
The accurate teleoperation of robotic devices requires simple, yet intuitive and reliable control interfaces. However, current human–machine interfaces (HMIs) often fail to fulfill these characteristics, leading to systems requiring intensive practice to reach sufficient operation expertise. Here, we present a systematic methodology to identify the spontaneous gesture-based interaction strategies of naive individuals with a distant device, and to exploit this information to develop a data-driven body–machine interface (BoMI) to efficiently control this device. We applied this approach to the specific case of drone steering and derived a simple control method relying on upper-body motion. The identified BoMI allowed participants with no prior experience to rapidly master the control of both simulated and real drones, outperforming joystick users and comparing with the control ability reached by participants using the bird-like flight simulator Birdly.
11

Buongiorno, Domenico, Domenico Chiaradia, Simone Marcheschi, Massimiliano Solazzi, and Antonio Frisoli. "Multi-DoFs Exoskeleton-Based Bilateral Teleoperation with the Time-Domain Passivity Approach." Robotica 37, no. 9 (March 1, 2019): 1641–62. http://dx.doi.org/10.1017/s0263574719000171.

Abstract:
It is well known that the sense of presence in a tele-robot system, both for home-based tele-rehabilitation and for rescue operations, is enhanced by haptic feedback. Despite its several advantages, in the presence of communication delay haptic feedback can render a teleoperation system unstable. Over the last decades, several control techniques have been proposed to ensure a good trade-off between transparency and stability in bilateral teleoperation systems under time delays. These control approaches have been extensively tested with teleoperation systems based on identical master and slave robots having few degrees of freedom (DoF). However, a small number of DoFs cannot ensure both an effective restoration of multi-joint coordination in tele-rehabilitation and adequate dexterity during manipulation tasks in rescue scenarios. Thus, a deep understanding of the applicability of such control techniques on a real bilateral teleoperation setup is needed. In this work, we investigated the behavior of the time-domain passivity approach (TDPA) applied to an asymmetrical teleoperator system composed of a 5-DoF impedance-type upper-limb exoskeleton and a 4-DoF admittance-type anthropomorphic robot. The conceived teleoperation architecture is based on a velocity–force (measured) architecture with position drift compensation and has been tested with a representative set of tasks under communication delay (80 ms round-trip). The results have shown that the TDPA is suitable for a multi-DoF asymmetrical setup composed of two isomorphic haptic interfaces characterized by different mechanical features. The stability of the teleoperator has been proved during several tests: (1) high-force contacts against a stiff wall involving several Cartesian axes simultaneously, (2) continuous contacts with a stiff edge, (3) heavy-load handling while following a predefined path and (4) high-force contacts against a stiff wall while handling a load. These results demonstrate that the TDPA could be used in several teleoperation scenarios such as home-based tele-rehabilitation and rescue operations.
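The time-domain passivity approach itself is well documented in the literature: a passivity observer integrates the energy flowing through a port, and a variable damper dissipates any observed energy deficit. The sketch below is the textbook one-port version, not the paper's exact velocity–force architecture with position drift compensation:

```python
def tdpa_step(force, velocity, dt, state):
    """One step of a one-port passivity observer/controller.

    Sign convention: positive force*velocity is energy entering the
    port. When the accumulated energy would go negative (active
    behaviour), a variable damper removes exactly the deficit.
    """
    state["E"] += force * velocity * dt  # passivity observer
    damping = 0.0
    if state["E"] < 0.0 and abs(velocity) > 1e-9:
        damping = -state["E"] / (dt * velocity ** 2)  # passivity controller
        state["E"] = 0.0
    return force + damping * velocity, state

state = {"E": 0.0}
f_mod, state = tdpa_step(force=2.0, velocity=-0.1, dt=0.001, state=state)
```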
12

Su, Yun-Peng, Xiao-Qi Chen, Tony Zhou, Christopher Pretty, and Geoffrey Chase. "Mixed Reality-Enhanced Intuitive Teleoperation with Hybrid Virtual Fixtures for Intelligent Robotic Welding." Applied Sciences 11, no. 23 (November 29, 2021): 11280. http://dx.doi.org/10.3390/app112311280.

Abstract:
This paper presents an integrated scheme based on a mixed reality (MR) and haptic feedback approach for intuitive and immersive teleoperation of robotic welding systems. By incorporating MR technology, the user is fully immersed in a virtual operating space augmented by real-time visual feedback from the robot working space. The proposed robotic tele-welding system features imitative motion mapping from the user’s hand movements to the welding robot motions, and it enables the spatial velocity-based control of the robot tool center point (TCP). The proposed mixed reality virtual fixture (MRVF) integration approach implements hybrid haptic constraints to guide the operator’s hand movements following the conical guidance to effectively align the welding torch for welding and constrain the welding operation within a collision-free area. Onsite welding and tele-welding experiments identify the operational differences between professional and unskilled welders and demonstrate the effectiveness of the proposed MRVF tele-welding framework for novice welders. The MRVF-integrated visual/haptic tele-welding scheme reduced the torch alignment times by 56% and 60% compared to the MRnoVF and baseline cases, with minimized cognitive workload and optimal usability. The MRVF scheme effectively stabilized welders’ hand movements and eliminated undesirable collisions while generating smooth welds.
13

Erdemir, Gokhan, Ahmet Emin Kuzucuoglu, Erkan Kaplanoglu, and Yasser El-Kahlout. "Design and Implementation of Web Based Mobile Robot Control Platform for Robotics Education." Applied Mechanics and Materials 704 (December 2014): 283–87. http://dx.doi.org/10.4028/www.scientific.net/amm.704.283.

Abstract:
In this paper, we present a dynamic, real-time and efficient web-based mobile robot experiment platform (WEB.MREP) designed for mobile robotic applications. The main purpose of this study is the design and construction of a multipurpose WEB.MREP for applying different path-planning and tracking, simultaneous localization and mapping (SLAM), and robot vision techniques to mobile robotic systems. The designed and constructed experiment platform consists of five main components: Festo Robotino mobile robot sets, a purpose-built experimental area, server software, web interfaces (user interfaces), and security measures. The platform provides monitoring, real-time control and programming of mobile robots for experimental studies, and it helps users carry out these studies through a standard web browser without any additional supporting software.
14

Nakayama, Angelica, Daniel Ruelas, Jesus Savage, and Ernesto Bribiesca. "Teleoperated Service Robot with an Immersive Mixed Reality Interface." Informatics and Automation 20, no. 6 (September 10, 2021): 1187–223. http://dx.doi.org/10.15622/ia.20.6.1.

Abstract:
Teleoperated service robots can perform more complex and precise tasks as they combine robot skills and human expertise. Communication between the operator and the robot is essential for remote operation and strongly affects system efficiency. Immersive interfaces are being used to enhance the teleoperation experience. However, latency or time delay can impair the performance of the robot operation. Since remote visualization involves transmitting a large amount of video data, the challenge is to decrease communication instability. An efficient teleoperation system must therefore have a suitable operation interface capable of visualizing the remote environment, controlling the robot, and providing a fast response time. This work presents the development of a service robot teleoperation system with an immersive mixed reality operation interface in which the operator can visualize the real remote environment or a virtual 3D environment representing it. The virtual environment aims to reduce communication latency by reducing the amount of information sent over the network, and to improve user experience. The robot can perform navigation and simple tasks autonomously or change to the teleoperated mode for more complex tasks. The system was developed using ROS, Unity 3D, and sockets so that it can be exported with ease to different platforms. The experiments suggest that an immersive operation interface provides improved usability for the operator. Latency appears to improve when using the virtual environment. The user experience seems to benefit from the use of mixed reality techniques; this may lead to the broader use of teleoperated service robot systems.
15

Vallés, Marina, José Cazalilla, Ángel Valera, Vicente Mata, Álvaro Page, and Miguel Díaz-Rodríguez. "A 3-PRS parallel manipulator for ankle rehabilitation: towards a low-cost robotic rehabilitation." Robotica 35, no. 10 (March 13, 2015): 1939–57. http://dx.doi.org/10.1017/s0263574715000120.

Abstract:
This paper presents the design, kinematics, dynamics and control of a low-cost parallel rehabilitation robot developed at the Universitat Politècnica de Valencia. Several position and force controllers have been tested to ensure accurate tracking performance. An orthopedic boot, equipped with a force sensor, has been placed over the platform of the parallel robot to perform exercises for injured ankles. Passive, active-assistive and active-resistive exercises have been implemented to train dorsi/plantar flexion, inversion and eversion ankle movements. In order to implement the controllers, the component-based middleware Orocos has been used, with the advantage over other solutions that the whole control scheme can be implemented modularly. These modules are independent and can be configured and reconfigured both at configuration time and at runtime. This means that no specific knowledge is needed by medical staff, for example, to carry out rehabilitation exercises using this low-cost parallel robot. The integration between Orocos and ROS, with a CAD model displaying the actual position of the rehabilitation robot in real time, makes it possible to develop a teleoperation application. In addition, a teleoperated rehabilitation exercise can be performed by a specialist using a Wiimote (or any other Bluetooth device).
16

Puente, Santiago T., Lucía Más, Fernando Torres, and Francisco A. Candelas. "Virtualization of Robotic Hands Using Mobile Devices." Robotics 8, no. 3 (September 16, 2019): 81. http://dx.doi.org/10.3390/robotics8030081.

Abstract:
This article presents a multiplatform application for the tele-operation of a robot hand using virtualization in Unity 3D. This approach grants usability to users that need to control a robotic hand, allowing supervision in a collaborative way. This paper focuses on a user application designed for the 3D virtualization of a robotic hand and the tele-operation architecture. The designed system allows for the simulation of any robotic hand. It has been tested with the virtualization of the four-fingered Allegro Hand of SimLab with 16 degrees of freedom, and the Shadow hand with 24 degrees of freedom. The system allows for the control of the position of each finger by means of joint and Cartesian co-ordinates. All user control interfaces are designed using Unity 3D, such that a multiplatform philosophy is achieved. The server side allows the user application to connect to a ROS (Robot Operating System) server through a TCP/IP socket, to control a real hand or to share a simulation of it among several users. If a real robot hand is used, real-time control and feedback of all the joints of the hand is communicated to the set of users. Finally, the system has been tested with a set of users with satisfactory results.
17

Su, Yun-Peng, Xiao-Qi Chen, Tony Zhou, Christopher Pretty, and Geoffrey Chase. "Mixed-Reality-Enhanced Human–Robot Interaction with an Imitation-Based Mapping Approach for Intuitive Teleoperation of a Robotic Arm-Hand System." Applied Sciences 12, no. 9 (May 8, 2022): 4740. http://dx.doi.org/10.3390/app12094740.

Abstract:
This paper presents an integrated mapping of motion and visualization scheme based on a Mixed Reality (MR) subspace approach for the intuitive and immersive telemanipulation of robotic arm-hand systems. The effectiveness of different control-feedback methods for the teleoperation system is validated and compared. The robotic arm-hand system consists of a 6 Degrees-of-Freedom (DOF) industrial manipulator and a low-cost 2-finger gripper, which can be manipulated in a natural manner by novice users physically distant from the working site. By incorporating MR technology, the user is fully immersed in a virtual operating space augmented by real-time 3D visual feedback from the robot working site. Imitation-based velocity-centric motion mapping is implemented via the MR subspace to accurately track operator hand movements for robot motion control, and it enables spatial velocity-based control of the robot Tool Center Point (TCP). The user control space and robot working space are overlaid through the MR subspace, and the local user and a digital twin of the remote robot share the same environment in the MR subspace. The MR-based motion and visualization mapping scheme for telerobotics is compared to conventional 2D Baseline and MR tele-control paradigms over two tabletop object manipulation experiments. A user survey of 24 participants was conducted to demonstrate the effectiveness and performance enhancements enabled by the proposed system. The MR-subspace-integrated 3D mapping of motion and visualization scheme reduced the aggregate task completion time by 48% compared to the 2D Baseline module and by 29% compared to the MR SpaceMouse module. The perceived workload decreased by 32% and 22% compared to the 2D Baseline and MR SpaceMouse approaches, respectively.
18

Pacheco-Gutierrez, Salvador, Hanlin Niu, Ipek Caliskanelli, and Robert Skilton. "A Multiple Level-of-Detail 3D Data Transmission Approach for Low-Latency Remote Visualisation in Teleoperation Tasks." Robotics 10, no. 3 (July 14, 2021): 89. http://dx.doi.org/10.3390/robotics10030089.

Abstract:
In robotic teleoperation, knowledge of the state of the remote environment in real time is paramount. Advances in the development of highly accurate 3D cameras able to provide high-quality point clouds appear to be a feasible solution for generating live, up-to-date virtual environments. Unfortunately, the exceptional accuracy and high density of these data represent a burden for communications, requiring a large bandwidth and affecting setups where the local and remote systems are geographically distant. This paper presents a multiple level-of-detail (LoD) compression strategy for 3D data, based on tree-like codification structures capable of compressing a single data frame at multiple resolutions using dynamically configured parameters. The level of compression (resolution) of objects is prioritised based on (i) placement in the scene and (ii) the type of object. For the former, classical point cloud fitting and segmentation techniques are implemented; for the latter, user-defined prioritisation is considered. The results obtained are compared with those of a single-LoD (whole-scene) compression technique previously proposed by the authors. The results show a considerable improvement in the transmitted data size and the update frame rate, while maintaining low distortion after decompression.
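The paper's codec is not reproduced here; as a hedged sketch of the underlying idea, the snippet below quantizes a point cloud to the voxel grid implied by an octree of a given depth, so that a per-object priority (the depth values are hypothetical) selects the level of detail:

```python
import numpy as np

def octree_quantize(points, bounds, depth):
    """Quantize points to the grid of an octree of the given depth.

    Deeper tree -> finer cells -> more data. points is (N, 3);
    bounds is (mins, maxs). Returns deduplicated voxel indices,
    standing in for the compressed payload.
    """
    mins, maxs = bounds
    cells = 2 ** depth  # cells per axis
    idx = np.floor((points - mins) / (maxs - mins) * cells)
    return np.unique(np.clip(idx, 0, cells - 1).astype(np.int32), axis=0)

# Hypothetical priorities: task-relevant objects keep depth 9
# (~2 mm cells in a 1 m scene); background drops to depth 5 (~31 mm).
depths = {"manipulated_object": 9, "background": 5}
pts = np.random.rand(10_000, 3)
coarse = octree_quantize(pts, (np.zeros(3), np.ones(3)), depths["background"])
```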
19

Li, Bin, Xi Fan Yao, Chun Bao Wang, and Hui Dong Lou. "Dynamic Multibody Simulation of a 6-DOF Robotic Arm." Advanced Materials Research 139-141 (October 2010): 1001–4. http://dx.doi.org/10.4028/www.scientific.net/amr.139-141.1001.

Abstract:
Based on a kinematic model with Cartesian structure, a Newton-Euler-like algorithm employed to solve the nonlinear equations of motion and constraints in real-time applications, and dynamic multibody simulation, a novel integrated design for a 6-DOF robot is investigated, and the interfaces required for the implementation of the different computer-aided engineering (CAE) tools used in the design are addressed [1]. The presented method was analyzed and verified with numerical and physical 6-DOF robot models; the results show that the topologic projection method [2] is stable. The design experience accumulated will be very useful for future product design.
20

Ma, Jiaqi, Xiang Cheng, Pengfei Wang, Zhiwei Jiao, Yuan Yu, Meng Yu, Bin Luo, and Weimin Yang. "A Haptic Feedback Actuator Suitable for the Soft Wearable Device." Applied Sciences 10, no. 24 (December 10, 2020): 8827. http://dx.doi.org/10.3390/app10248827.

Abstract:
Gaining direct tactile sensation is becoming increasingly important for humans in human–computer interaction fields such as space robot teleoperation and augmented reality (AR). In this study, a novel electro-hydraulic soft actuator was designed and manufactured. The proposed actuator is composed of polydimethylsiloxane (PDMS) films, flexible electrodes, and an insulating liquid dielectric. The influence of two different voltage loading methods on the output characteristics of the actuator was studied. The special voltage loading method (AC voltage) enables the actuator to respond rapidly (within 0.15 s), output a stable displacement within 3 s, and hold it unchanged thereafter. By adjusting the voltages and frequencies, a maximum output displacement of 1.1 mm and an output force of 1 N/cm² can be rapidly achieved at a voltage of 12 kV (20 Hz). Finally, a haptic feedback system was built to control a robotic hand performing gripping tasks in real time, providing a more realistic tactile sensation, similar to that obtained when a human directly grabs objects. The actuator thus offers excellent portability, robustness, rapid response, and good compatibility with the human body for human–computer interaction.
21

Gundelakh, Filipp, Lev Stankevich, Konstantin Sonkin, Ganna Nagornova, and Natalia Shemyakina. "Application of Brain-computer Interfaces in Assistive Technologies." SPIIRAS Proceedings 19, no. 2 (April 23, 2020): 277–301. http://dx.doi.org/10.15622/sp.2020.19.2.2.

Abstract:
In this paper, issues of brain-computer interface applications in assistive technologies are considered, in particular for the control of robotic devices. Noninvasive brain-computer interfaces are built on the classification of electroencephalographic signals, which reflect bioelectrical activity in different zones of the brain. After training, such brain-computer interfaces are able to decode electroencephalographic patterns corresponding to different imaginary movements and to different audio-visual stimuli. The requirements that must be met by brain-computer interfaces operating in real time, so that biological feedback is effective and the user's brain can correctly associate responses with events, are formulated. The processing of electroencephalographic signals in a noninvasive brain-computer interface is examined, including spatial and temporal filtering, artifact removal, feature selection, and classification. Descriptions and a comparison of classifiers based on support vector machines, artificial neural networks, and Riemannian geometry are presented. It is shown that such classifiers can provide accuracy at the level of 60-80% for the recognition of imaginary movements of two to four classes. Examples of the application of the classifiers to the control of robotic devices are presented. The approach is intended both to help healthy users perform daily functions better and to increase the quality of life of people with movement disabilities. Tasks for increasing the efficiency of the technology's application are formulated.
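As an illustration of the processing chain described above (temporal filtering, feature extraction, classification), here is a minimal motor-imagery-style sketch; the sampling rate, band limits, data shapes and classifier are all assumptions, and artifact removal and spatial filtering are omitted:

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

FS = 250  # assumed EEG sampling rate (Hz)

def bandpower_features(epochs, lo=8.0, hi=30.0):
    """Log band power per channel in the mu/beta band (8-30 Hz),
    a common feature for imaginary-movement classification."""
    b, a = butter(4, [lo / (FS / 2), hi / (FS / 2)], btype="band")
    filtered = filtfilt(b, a, epochs, axis=-1)  # (trials, channels, samples)
    return np.log(np.var(filtered, axis=-1))    # (trials, channels)

# Placeholder data: 100 trials, 8 channels, 2-second epochs, 2 classes.
X = bandpower_features(np.random.randn(100, 8, 2 * FS))
y = np.random.randint(0, 2, 100)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)
```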
22

Connolly, Laura, Anton Deguet, Simon Leonard, Junichi Tokuda, Tamas Ungi, Axel Krieger, Peter Kazanzides, Parvin Mousavi, Gabor Fichtinger, and Russell H. Taylor. "Bridging 3D Slicer and ROS2 for Image-Guided Robotic Interventions." Sensors 22, no. 14 (July 17, 2022): 5336. http://dx.doi.org/10.3390/s22145336.

Abstract:
Developing image-guided robotic systems requires access to flexible, open-source software. For image guidance, the open-source medical imaging platform 3D Slicer is one of the most adopted tools that can be used for research and prototyping. Similarly, for robotics, the open-source middleware suite robot operating system (ROS) is the standard development framework. In the past, there have been several “ad hoc” attempts made to bridge both tools; however, they are all reliant on middleware and custom interfaces. Additionally, none of these attempts have been successful in bridging access to the full suite of tools provided by ROS or 3D Slicer. Therefore, in this paper, we present the SlicerROS2 module, which was designed for the direct use of ROS2 packages and libraries within 3D Slicer. The module was developed to enable real-time visualization of robots, accommodate different robot configurations, and facilitate data transfer in both directions (between ROS and Slicer). We demonstrate the system on multiple robots with different configurations, evaluate the system performance and discuss an image-guided robotic intervention that can be prototyped with this module. This module can serve as a starting point for clinical system development that reduces the need for custom interfaces and time-intensive platform setup.
23

Tucker, Luke A., Ji Chen, Lauren Hammel, Diane L. Damiano, and Thomas C. Bulea. "An open source graphical user interface for wireless communication and operation of wearable robotic technology." Journal of Rehabilitation and Assistive Technologies Engineering 7 (January 2020): 205566832096405. http://dx.doi.org/10.1177/2055668320964056.

Abstract:
Introduction: Wearable robotic exoskeletons offer the potential to move gait training from the clinic to the community, thereby providing greater therapy dosage in more naturalistic settings. To capitalize on this potential, intuitive and robust interfaces are necessary between robotic devices and end users. Such interfaces hold great promise for research if they are also designed to record data from the robot during its use.
Methods: We present the design and validation of an open source graphical user interface (GUI) for wireless operation of and real-time data logging from a pediatric robotic exoskeleton. The GUI was designed for trained users such as an engineer or clinician. A simplified mobile application is also provided to enable exoskeleton operation by an end-user or their caretaker. GUI function was validated during simulated walking with the exoskeleton using a motion capture system.
Results: Our results demonstrate the ability of the GUI to wirelessly operate and save data from exoskeleton sensors with high fidelity comparable to motion capture.
Conclusion: The GUI code, available in a public repository with a detailed description and step-by-step tutorial, is configurable to interact with any robotic device operated by a microcontroller and therefore represents a potentially powerful tool for deployment and evaluation of community-based robotics.
24

Holder, Sherrie, and Leia Stirling. "Effect of Gesture Interface Mapping on Controlling a Multi-degree-of-freedom Robotic Arm in a Complex Environment." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 64, no. 1 (December 2020): 183–87. http://dx.doi.org/10.1177/1071181320641045.

Abstract:
There are many robotic scenarios that require real-time function in large or unconstrained environments, for example, the robotic arm on the International Space Station (ISS). The use of fully wearable gesture control systems is well suited to human-robot interaction scenarios where users are mobile and must have their hands free. A human study examined operation of a simulated ISS robotic arm using three different gesture input mappings compared to the traditional joystick interface. Two gesture mappings permitted multiple simultaneous inputs (multi-input), while the third was a single-input method. Experimental results support the performance advantages of multi-input gesture methods over single input. Differences between the two multi-input methods in task completion and workload indicate an effect of user-directed attention on interface success. Mappings based on natural human arm movement are promising for gesture interfaces in mobile robotic applications. This study also highlights challenges in gesture mapping, including how users align gestures with their body and environment.
25

Klimaszewski, Jan, and Michał Władziński. "Human Body Parts Proximity Measurement Using Distributed Tactile Robotic Skin." Sensors 21, no. 6 (March 18, 2021): 2138. http://dx.doi.org/10.3390/s21062138.

Abstract:
Safety in human–machine cooperation is a current challenge in robotics. Safe human–robot interaction requires the development of sensors that detect human presence in the robot's workspace. Detection of this presence should occur before any physical collision between the robot and the human, and should be fast enough to allow machine elements to decelerate to velocities safe for human–machine contact. The paper presents a new, low-cost design of distributed robotic skin that allows real-time measurement of the proximity of human body parts. The main advantages of the proposed solution are the low cost of its implementation, based on a comb-electrode matrix, and real-time operation due to a fast and simple electronic design. The main contribution is the new idea of measuring the distance to human body parts by measuring the operating frequency of a rectangular signal generator, which depends on the capacitance of an open capacitor. This capacitor is formed between the comb-electrode matrix and a reference plate located next to the matrix; its capacitance changes when a human body part is in the vicinity. The applications of the developed device can be very wide. For example, in the field of cooperative robots, it can lead to improved human–machine interfaces and increased safety of human–machine cooperation. The proposed construction can help to meet the increasing requirements for cooperative robots.
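The abstract states the measurement principle (generator frequency depends on the open-capacitor capacitance) but not the circuit constants. Purely as a sketch, assuming an astable-555-style relaxation generator with placeholder resistor values and example readings:

```python
def capacitance_from_frequency(freq_hz, r1=10e3, r2=100e3):
    """Invert the astable-555 relation f = 1.44 / ((R1 + 2*R2) * C).

    The actual generator topology and constants of the cited design
    are not given; these values are placeholders for illustration.
    """
    return 1.44 / ((r1 + 2 * r2) * freq_hz)

# An approaching body part raises the open-capacitor capacitance, so
# the generator frequency drops; proximity is then read off a per-taxel
# calibration curve.
c0 = capacitance_from_frequency(68_571.0)  # free-space frequency (~100 pF)
c = capacitance_from_frequency(55_000.0)   # example reading, hand nearby
print(f"capacitance increase: {(c - c0) / c0:.1%}")
```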
26

Klimaszewski, Jan, Daniel Janczak, and Paweł Piorun. "Tactile Robotic Skin with Pressure Direction Detection." Sensors 19, no. 21 (October 29, 2019): 4697. http://dx.doi.org/10.3390/s19214697.

Abstract:
Tactile sensing is a current challenge in robotics and object manipulation by machines. A robot's agile interaction with the environment requires pressure sensors that detect not only the location and value of the pressure, but also the touch direction. The paper presents a new, two-layer construction of artificial robotic skin that allows measuring the location, value, and direction of the pressure from an external force. The main advantages of the proposed solution are its low cost of implementation, based on two FSR (Force Sensitive Resistor) matrices, and real-time operation thanks to direction detection using fast matching algorithms. The main contribution is the idea of detecting the pressure direction by determining the shift between the pressure maps of the skin's upper and lower layers. The pressure map of each layer is treated as an image and registered using a phase-only correlation (POC) method. The applications of the developed device can be very wide. For example, in the field of cooperative robots, it can lead to improved human-machine interfaces and increased safety of human–machine cooperation. The proposed construction can help meet the increasing requirements for robots cooperating with humans, and also enables agile manipulation of objects from their surroundings.
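Phase-only correlation itself is a fully specified, standard technique, so the shift-detection step can be sketched directly with NumPy; the taxel-grid size and the example shift are illustrative only:

```python
import numpy as np

def poc_shift(img_a, img_b):
    """Estimate the (dy, dx) translation between two equal-size pressure
    maps via phase-only correlation: normalize the cross-power spectrum
    to unit magnitude, inverse-transform, and take the peak location."""
    fa, fb = np.fft.fft2(img_a), np.fft.fft2(img_b)
    cross = fa * np.conj(fb)
    cross /= np.maximum(np.abs(cross), 1e-12)  # keep phase only
    corr = np.real(np.fft.ifft2(cross))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map peaks beyond the midpoint to negative shifts
    return tuple(int(p - s) if p > s // 2 else int(p)
                 for p, s in zip(peak, corr.shape))

# Example: lower layer shifted by (2, -1) taxels relative to the upper.
upper = np.zeros((16, 16)); upper[5:8, 6:9] = 1.0
lower = np.roll(upper, (2, -1), axis=(0, 1))
print(poc_shift(lower, upper))  # -> (2, -1)
```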
27

Springer, Scott L., and Nicola J. Ferrier. "Design and Control of a Force-Reflecting Haptic Interface for Teleoperational Grasping." Journal of Mechanical Design 124, no. 2 (May 16, 2002): 277–83. http://dx.doi.org/10.1115/1.1470493.

Abstract:
In this paper the design of a multi-finger force-reflecting haptic interface device for teleoperational grasping is introduced. The haptic interface or “master” controller device is worn on the human operator’s hand and measured human finger positions are used to control the finger positions of a remote grasping manipulator or “slave” device. The slave may be a physical robotic grasping manipulator, or a computer generated representation of a human hand such as used in virtual reality applications. The forces measured by the robotic slave, or calculated for the virtual slave, are presented to the operator’s fingertips through the master providing a means for deeper human sensation of presence and better control of grasping tasks in the slave environments. Design parameters and performance measures for haptic interfaces for teleoperation are discussed. One key performance issue involving the high-speed display of forces during initial contact, especially when interacting with rigid surfaces, is addressed by the present design, reducing slave controller computation requirements and overcoming actuator response time constraints. The design presented utilizes a planar four-bar linkage for each finger, to represent each finger bend motion as a single degree of freedom, and to provide a finger bend resistance force that is substantially perpendicular to the distal finger pad throughout the full 180 degrees of finger bend motion represented. The finger linkage design, in combination with a remote position measurement and force display assembly, provides a very lightweight and low inertia system with a large workspace. The concept of a replicated finger is introduced which, in combination with a decoupled actuator and feed forward control, provides improved performance in transparent free motion, and rapid, stable touch sensation of initial contact with rigid surfaces. A distributed computation architecture with a PC based haptic interface controller and associated control algorithms are also discussed.
28

Zander, Thorsten O., Kunal Shetty, Romy Lorenz, Daniel R. Leff, Laurens R. Krol, Ara W. Darzi, Klaus Gramann, and Guang-Zhong Yang. "Automated Task Load Detection with Electroencephalography: Towards Passive Brain–Computer Interfacing in Robotic Surgery." Journal of Medical Robotics Research 02, no. 01 (February 26, 2017): 1750003. http://dx.doi.org/10.1142/s2424905x17500039.

Abstract:
Automatic detection of the current task load of a surgeon in the theatre in real time could provide helpful information, to be used in supportive systems. For example, such information may enable the system to automatically support the surgeon when critical or stressful periods are detected, or to communicate to others when a surgeon is engaged in a complex maneuver and should not be disturbed. Passive brain–computer interfaces (BCI) infer changes in cognitive and affective state by monitoring and interpreting ongoing brain activity recorded via an electroencephalogram. The resulting information can then be used to automatically adapt a technological system to the human user. So far, passive BCI have mostly been investigated in laboratory settings, even though they are intended to be applied in real-world settings. In this study, a passive BCI was used to assess changes in task load of skilled surgeons performing both simple and complex surgical training tasks. Results indicate that the introduced methodology can reliably and continuously detect changes in task load in this realistic environment.
29

Wu, Chuhao, Jackie Cha, Jay Sulek, Tian Zhou, Chandru P. Sundaram, Juan Wachs, and Denny Yu. "Eye-Tracking Metrics Predict Perceived Workload in Robotic Surgical Skills Training." Human Factors: The Journal of the Human Factors and Ergonomics Society 62, no. 8 (September 27, 2019): 1365–86. http://dx.doi.org/10.1177/0018720819874544.

Abstract:
Objective: The aim of this study is to assess the relationship between eye-tracking measures and perceived workload in robotic surgical tasks.
Background: Robotic techniques provide improved dexterity, stereoscopic vision, and an ergonomic control system over laparoscopic surgery, but the complexity of the interfaces and operations may pose new challenges to surgeons and compromise patient safety. Few studies have objectively quantified workload and its impact on performance in robotic surgery. Although not yet implemented in robotic surgery, minimally intrusive and continuous eye-tracking metrics have been shown to be sensitive to changes in workload in other domains.
Methods: Eight surgical trainees participated in 15 robotic skills simulation sessions. In each session, participants performed up to 12 simulated exercises. Correlation and mixed-effects analyses were conducted to explore the relationships between eye-tracking metrics and perceived workload. Machine learning classifiers were used to determine the sensitivity of differentiating between low and high workload with eye-tracking features.
Results: Gaze entropy increased as perceived workload increased, with a correlation of .51. Pupil diameter and gaze entropy distinguished differences in workload between task difficulty levels, and both metrics increased as task difficulty increased. The classification model using eye-tracking features achieved an accuracy of 84.7% in predicting workload levels.
Conclusion: Eye-tracking measures can detect perceived workload during robotic tasks. They can potentially be used to identify task contributors to high workload and provide measures for robotic surgery training.
Application: Workload assessment can be used for real-time monitoring of workload in robotic surgical training and to provide assessments of performance and learning.
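As a hedged illustration of the classification step (the paper's exact features, labels and model are not given in the abstract), the sketch below uses one common definition of gaze transition entropy and a generic classifier on synthetic placeholder data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def gaze_transition_entropy(aoi_sequence, n_aoi):
    """Shannon entropy of gaze transitions between areas of interest
    (one common definition of 'gaze entropy'; the paper may use another)."""
    counts = np.zeros((n_aoi, n_aoi))
    for a, b in zip(aoi_sequence[:-1], aoi_sequence[1:]):
        counts[a, b] += 1
    p = counts.flatten() / max(counts.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Placeholder features per exercise: [gaze entropy, pupil diameter, ...]
rng = np.random.default_rng(0)
X = rng.normal(size=(96, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic low/high labels
print(cross_val_score(RandomForestClassifier(), X, y, cv=5).mean())
```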
30

Mourtzis, Dimitris, John Angelopoulos, and Nikos Panopoulos. "Closed-Loop Robotic Arm Manipulation Based on Mixed Reality." Applied Sciences 12, no. 6 (March 14, 2022): 2972. http://dx.doi.org/10.3390/app12062972.

Abstract:
Robotic manipulators have become part of manufacturing systems in recent decades. However, in the realm of Industry 4.0, a new type of manufacturing cell has been introduced—the so-called collaborative manufacturing cell. In such collaborative environments, communication between a human operator and robotic manipulators must be flawless, so that smooth collaboration and, above all, human safety are ensured at all times. Therefore, engineers have focused on the development of suitable human–robot interfaces (HRI) to tackle this issue. This research work proposes a closed-loop framework for the human–robot interface based on the utilization of digital technologies, such as Mixed Reality (MR). Concretely, the framework can be realized as a methodology for the remote and safe manipulation of the robotic arm in near real time, while safety zones are simultaneously displayed in the field of view of the shop-floor technician. The method is based on the creation of a Digital Twin of the robotic arm and the setup of a suitable communication framework for continuous and seamless communication between the user interface, the physical robot, and the Digital Twin. The development of the method is based on the utilization of ROS (Robot Operating System) for the modelling of the Digital Twin, a Cloud database for data handling, and Mixed Reality (MR) for the Human–Machine Interface (HMI). The developed MR application is tested in a laboratory-based machine shop incorporating collaborative cells.
31

Caliskanelli, Ipek, Matthew Goodliffe, Craig Whiffin, Michail Xymitoulias, Edward Whittaker, Swapnil Verma, and Robert Skilton. "Engineering Interoperable, Plug-and-Play, Distributed, Robotic Control Systems for Futureproof Fusion Power Plants." Robotics 10, no. 3 (September 16, 2021): 108. http://dx.doi.org/10.3390/robotics10030108.

Abstract:
Maintenance and inspection systems for future fusion power plants (e.g., STEP and DEMO) are expected to require the integration of hundreds of systems from multiple suppliers, with lifetime expectancies of several decades, where requirements evolve over time and obsolescence management is required. There are significant challenges associated with the integration, deployment, and maintenance of very large-scale robotic systems incorporating devices from multiple suppliers, where each may utilise bespoke, non-standardised control systems and interfaces. Additionally, the unstructured, experimental, or unknown operational conditions frequently result in new or changing system requirements, meaning extension and adaptation are necessary. Whilst existing control frameworks (e.g., ROS, OPC-UA) allow for the robust integration of complex robotic systems, they are not compatible with highly efficient maintenance and extension in the face of changing requirements and obsolescence issues over decades-long periods. We present the CorteX software framework as well as results showing its effectiveness in addressing the above issues, whilst being demonstrated through hardware that is representative of real-world fusion applications.
32

Stawicki, Piotr, Felix Gembler, and Ivan Volosyak. "Driving a Semiautonomous Mobile Robotic Car Controlled by an SSVEP-Based BCI." Computational Intelligence and Neuroscience 2016 (2016): 1–14. http://dx.doi.org/10.1155/2016/4909685.

Abstract:
Brain-computer interfaces represent a range of acknowledged technologies that translate brain activity into computer commands. The aim of our research is to develop and evaluate a BCI control application for certain assistive technologies that can be used for remote telepresence or remote driving. The communication channel to the target device is based on steady-state visual evoked potentials (SSVEP). In order to test the control application, a mobile robotic car (MRC) was introduced and a four-class BCI graphical user interface (with live video feedback and stimulation boxes on the same screen) for piloting the MRC was designed. For the purpose of evaluating a potential real-life scenario for such assistive technology, we present a study in which 61 subjects steered the MRC through a predetermined route. All 61 subjects were able to control the MRC and finish the experiment (mean time 207.08 s, SD 50.25) with a mean (SD) accuracy and information transfer rate (ITR) of 93.03% (5.73) and 14.07 bits/min (4.44), respectively. The results show that our proposed SSVEP-based BCI control application is suitable for mobile robots with a shared-control approach. We also did not observe any negative influence of the simultaneous live video feedback and SSVEP stimulation on the performance of the BCI system.
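The reported ITR is consistent with the standard Wolpaw formula. A worked check follows; note that the per-selection time is not stated in the abstract, so the 6.5 s value is back-calculated rather than reported:

```python
from math import log2

def wolpaw_itr_bits_per_min(n_classes, accuracy, selection_time_s):
    """Wolpaw information transfer rate for an N-class BCI."""
    p, n = accuracy, n_classes
    bits = log2(n) + p * log2(p) + (1 - p) * log2((1 - p) / (n - 1))
    return bits * 60.0 / selection_time_s

# 4 classes at 93.03% accuracy carry ~1.52 bits per selection; an
# assumed selection time of ~6.5 s reproduces the reported 14.07 bits/min.
print(wolpaw_itr_bits_per_min(4, 0.9303, 6.5))  # ~14.1
```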
33

Vladareanu, Luige. "Advanced Intelligent Control through Versatile Intelligent Portable Platforms." Sensors 20, no. 13 (June 29, 2020): 3644. http://dx.doi.org/10.3390/s20133644.

Abstract:
The main purpose of this research is deep investigation of, and communication of new trends in, the design, control and applications of the real-time control of intelligent sensor systems using advanced intelligent control methods and techniques. Innovative multi-sensor fusion techniques, integrated through Versatile Intelligent Portable (VIP) platforms, are developed and combined with computer vision, virtual and augmented reality (VR&AR) and intelligent communication, including remote control, adaptive sensor networks, human-robot (H2R) interaction systems and machine-to-machine (M2M) interfaces. Intelligent decision support systems (IDSS), including remote sensing, and their integration with DSS, GA-based DSS, fuzzy-set DSS, rough-set-based DSS, intelligent agent-assisted DSS, process mining integration into decision support, adaptive DSS, computer-vision-based DSS, and sensory and robotic DSS, are highlighted in the field of advanced intelligent control.
34

Edelman, B. J., J. Meng, D. Suma, C. Zurn, E. Nagarajan, B. S. Baxter, C. C. Cline, and B. He. "Noninvasive neuroimaging enhances continuous neural tracking for robotic device control." Science Robotics 4, no. 31 (June 19, 2019): eaaw6844. http://dx.doi.org/10.1126/scirobotics.aaw6844.

Abstract:
Brain-computer interfaces (BCIs) using signals acquired with intracortical implants have achieved successful high-dimensional robotic device control useful for completing daily tasks. However, the substantial amount of medical and surgical expertise required to correctly implant and operate these systems greatly limits their use beyond a few clinical cases. A noninvasive counterpart requiring less intervention that can provide high-quality control would profoundly improve the integration of BCIs into the clinical and home setting. Here, we present and validate a noninvasive framework using electroencephalography (EEG) to achieve the neural control of a robotic device for continuous random target tracking. This framework addresses and improves upon both the “brain” and “computer” components by increasing, respectively, user engagement through a continuous pursuit task and associated training paradigm and the spatial resolution of noninvasive neural data through EEG source imaging. In all, our unique framework enhanced BCI learning by nearly 60% for traditional center-out tasks and by more than 500% in the more realistic continuous pursuit task. We further demonstrated an additional enhancement in BCI control of almost 10% by using online noninvasive neuroimaging. Last, this framework was deployed in a physical task, demonstrating a near-seamless transition from the control of an unconstrained virtual cursor to the real-time control of a robotic arm. Such combined advances in the quality of neural decoding and the practical utility of noninvasive robotic arm control will have major implications for the eventual development and implementation of neurorobotics by means of noninvasive BCI.
35

Vörös, Viktor, Ruixuan Li, Ayoob Davoodi, Gauthier Wybaillie, Emmanuel Vander Poorten, and Kenan Niu. "An Augmented Reality-Based Interaction Scheme for Robotic Pedicle Screw Placement." Journal of Imaging 8, no. 10 (October 6, 2022): 273. http://dx.doi.org/10.3390/jimaging8100273.

Abstract:
Robot-assisted surgery is becoming popular in the operating room (OR) for procedures such as orthopedic surgery. However, robotic execution of surgical steps cannot simply rely on preoperative plans. Using pedicle screw placement as an example, extra adjustments are needed to adapt to intraoperative changes when the preoperative planning is outdated. During surgery, adjusting a surgical plan is non-trivial and typically rather complex, since the interfaces available in current robotic systems are not always intuitive to use. Recently, thanks to technical advancements in head-mounted displays (HMDs), augmented reality (AR)-based medical applications are emerging in the OR. Rendered virtual objects can be overlaid on real-world physical objects to offer intuitive displays of the surgical site and anatomy. Moreover, the potential of combining AR with robotics is even more promising; however, it has not been fully exploited. In this paper, an innovative AR-based robotic approach is proposed and its technical feasibility in simulated pedicle screw placement is demonstrated. An approach for spatial calibration between the robot and the HoloLens 2 without using an external 3D tracking system is proposed. The developed system offers an intuitive AR–robot interaction approach between the surgeon and the surgical robot by projecting the current surgical plan to the surgeon for fine-tuning and transferring the updated surgical plan immediately back to the robot side for execution. A series of bench-top experiments was conducted to evaluate system accuracy and human-related errors. A mean calibration error of 3.61 mm was found. The overall target pose error was 3.05 mm in translation and 1.12° in orientation. The average execution time for defining a target entry point intraoperatively was 26.56 s. This work offers an intuitive AR-based robotic approach, which could facilitate robotic technology in the OR and boost synergy between AR and robots for other medical applications.
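The paper's calibration procedure itself is not reproduced here, but spatial calibration between two devices that both observe the same points is commonly solved as a rigid point-set registration. A minimal sketch under that assumption (Kabsch/SVD, not necessarily the authors' method):

```python
import numpy as np

def rigid_transform(P: np.ndarray, Q: np.ndarray):
    """Least-squares R, t such that Q ~ R @ P + t (Kabsch algorithm).
    P, Q: (3, N) arrays of corresponding points in the two frames
    (e.g. robot base frame and HMD frame)."""
    cp = P.mean(axis=1, keepdims=True)
    cq = Q.mean(axis=1, keepdims=True)
    H = (P - cp) @ (Q - cq).T               # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Usage with synthetic data: recover a known rotation and translation
P = np.random.randn(3, 10)
R_true = np.eye(3)[[1, 0, 2]] * np.array([[1], [-1], [1]])
Q = R_true @ P + np.array([[0.1], [0.2], [0.3]])
R, t = rigid_transform(P, Q)
```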
36

Lebedev, Mikhail A., and Miguel A. L. Nicolelis. "Brain-Machine Interfaces: From Basic Science to Neuroprostheses and Neurorehabilitation." Physiological Reviews 97, no. 2 (April 2017): 767–837. http://dx.doi.org/10.1152/physrev.00027.2016.

Abstract:
Brain-machine interfaces (BMIs) combine methods, approaches, and concepts derived from neurophysiology, computer science, and engineering in an effort to establish real-time bidirectional links between living brains and artificial actuators. Although theoretical propositions and some proof of concept experiments on directly linking the brains with machines date back to the early 1960s, BMI research only took off in earnest at the end of the 1990s, when this approach became intimately linked to new neurophysiological methods for sampling large-scale brain activity. The classic goals of BMIs are 1) to unveil and utilize principles of operation and plastic properties of the distributed and dynamic circuits of the brain and 2) to create new therapies to restore mobility and sensations to severely disabled patients. Over the past decade, a wide range of BMI applications have emerged, which considerably expanded these original goals. BMI studies have shown neural control over the movements of robotic and virtual actuators that enact both upper and lower limb functions. Furthermore, BMIs have also incorporated ways to deliver sensory feedback, generated from external actuators, back to the brain. BMI research has been at the forefront of many neurophysiological discoveries, including the demonstration that, through continuous use, artificial tools can be assimilated by the primate brain's body schema. Work on BMIs has also led to the introduction of novel neurorehabilitation strategies. As a result of these efforts, long-term continuous BMI use has been recently implicated with the induction of partial neurological recovery in spinal cord injury patients.
37

Schweitzer, Frédéric, and Alexandre Campeau-Lecours. "IMU-Based Hand Gesture Interface Implementing a Sequence-Matching Algorithm for the Control of Assistive Technologies." Signals 2, no. 4 (October 21, 2021): 729–53. http://dx.doi.org/10.3390/signals2040043.

Abstract:
Assistive technologies (ATs) often have a high dimensionality of possible movements (e.g., an assistive robot with several degrees of freedom, or a computer), but users have to control them with low-dimensionality sensors and interfaces (e.g., switches). This paper presents the development of an open-source interface based on a sequence-matching algorithm for the control of ATs. Sequence matching allows the user to input several different commands with low-dimensionality sensors by recognizing not only their output but also their sequential pattern through time, similarly to Morse code. In this paper, the algorithm is applied to the recognition of hand gestures, inputted using an inertial measurement unit worn by the user. An SVM-based algorithm, designed to be robust with small training sets (e.g., five examples per class), is developed to recognize gestures in real time. Finally, the interface is applied to control a computer's mouse and keyboard. The interface was compared against (and combined with) the head movement-based AssystMouse software. The hand gesture interface showed encouraging results for this application but could also be used with other body parts (e.g., head and feet) and could control various ATs (e.g., an assistive robotic arm or a prosthesis).
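A minimal sketch of the Morse-code-like idea described above: commands are encoded as short sequences of recognized gesture labels, and a pause delimits sequences. The gesture names, the command table, and the 1 s timeout are invented for illustration; the paper's IMU + SVM recognizer would supply the labels.

```python
import time

# Hypothetical command table; sequences are chosen so that no entry is a
# strict prefix of another entry sharing the same first gesture.
COMMANDS = {
    ("flick", "flick"): "left_click",
    ("flick", "hold"): "right_click",
    ("roll", "flick", "flick"): "open_keyboard",
}

class SequenceMatcher:
    def __init__(self, timeout_s=1.0):
        self.timeout_s = timeout_s
        self.buffer = []
        self.last_event = 0.0

    def push(self, gesture, now=None):
        """Append a recognized gesture; return a command when a sequence matches."""
        now = time.monotonic() if now is None else now
        if now - self.last_event > self.timeout_s:
            self.buffer.clear()  # a long pause starts a new sequence
        self.last_event = now
        self.buffer.append(gesture)
        command = COMMANDS.get(tuple(self.buffer))
        if command is not None:
            self.buffer.clear()
        return command
```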
38

Mitterberger, Daniela, Kathrin Dörfler, Timothy Sandy, Foteini Salveridou, Marco Hutter, Fabio Gramazio, and Matthias Kohler. "Augmented bricklaying." Construction Robotics 4, no. 3-4 (October 14, 2020): 151–61. http://dx.doi.org/10.1007/s41693-020-00035-8.

Abstract:
Augmented bricklaying explores the manual construction of intricate brickwork through visual augmentation, and applies and validates the concept in a real-scale building project: a fair-faced brickwork facade for a winery in Greece. As shown in previous research, robotic systems have proven to be very suitable to achieve various differentiated brickwork designs with high efficiency but show certain limitations, for example, in regard to spatial freedom or the usage of mortar on site. Hence, this research aims to show that through the use of a craft-specific augmented reality system, the same geometric complexity and precision seen in robotic fabrication can be achieved with an augmented manual process. Towards this aim, a custom-built augmented reality system for in situ construction was established. This process allows bricklayers to not depend on physical templates, and it enables enhanced spatial freedom, preserving and capitalizing on the bricklayer's craft of mortar handling. In extension to conventional holographic representations seen in current augmented reality fabrication processes that have limited context-awareness and insufficient geometric feedback capabilities, this system is based on an object-based visual–inertial tracking method to achieve dynamic optical guidance for bricklayers with real-time tracking and highly precise 3D registration features in on-site conditions. By integrating findings from the field of human–computer interfaces and human–machine communication, this research establishes, explores, and validates a human–computer interactive fabrication system, in which explicit machine operations and implicit craftsmanship knowledge are combined. In addition to the overall concept, the method of implementation, and the description of the project application, this paper also quantifies process parameters of the applied augmented reality assembly method concerning building accuracy and assembly speed. In the outlook, this paper aims to outline future directions and potential application areas of object-aware augmented reality systems and their implications for architecture and digital fabrication.
39

Sanna, Andrea, Federico Manuri, Jacopo Fiorenza, and Francesco De Pace. "BARI: An Affordable Brain-Augmented Reality Interface to Support Human–Robot Collaboration in Assembly Tasks." Information 13, no. 10 (September 28, 2022): 460. http://dx.doi.org/10.3390/info13100460.

Abstract:
Human–robot collaboration (HRC) is a new and challenging discipline that plays a key role in Industry 4.0. Digital transformation of industrial plants aims to introduce flexible production lines able to adapt to different products quickly. In this scenario, HRC can be a booster to support flexible manufacturing, thus introducing new interaction paradigms between humans and machines. Augmented reality (AR) can convey much important information to users: for instance, information related to the status and the intention of the robot/machine the user is collaborating with. On the other hand, traditional input interfaces based on physical devices, gestures, and voice might be precluded in industrial environments. Brain–computer interfaces (BCIs) can be profitably used with AR devices to provide technicians with solutions to collaborate effectively with robots. This paper introduces a novel BCI–AR user interface based on the NextMind and the Microsoft Hololens 2. Compared to traditional BCI interfaces, the NextMind provides an intuitive selection mechanism based on visual cortex signals. This interaction paradigm is exploited to guide a collaborative robotic arm for a pick-and-place selection task. Since the ergonomic design of the NextMind allows its use in combination with the Hololens 2, users can visualize through AR the different parts composing the artifact to be assembled, the visual elements used by the NextMind to enable the selections, and the robot status. In this way, users' hands are always free, and the focus can always be on the objects to be assembled. Finally, user tests are performed to evaluate the proposed system, assessing both its usability and the task's workload; preliminary results are very encouraging, and the proposed solution can be considered a starting point for designing and developing affordable hybrid-augmented interfaces to foster real-time human–robot collaboration.
40

Orlinski, Adam, Klaas De Rycke, and Moritz Heimrath. "Optimizing Reinforcement." Open Conference Proceedings 1 (February 15, 2022): 97–98. http://dx.doi.org/10.52825/ocp.v1i.83.

Abstract:
The combination of parametric modelling with structural analysis has prompted new synergies between design and engineering. Connecting structural calculation models such as Karamba3d to algorithmic modelling platforms such as Grasshopper3d made it possible to embed tools for analysis and simulation into the environment of creative design processes. In this way, structural analysis advanced from a one-sided calculation "duty" towards participation in a design language of articulated structural expression. 3D concrete printing serves as a promising new territory in which to apply the potential of real-time parametric structural analysis, in a built environment of rapid prototyping and robotic fabrication. While 3D-printed concrete has advanced rapidly in technology and empirical know-how, so has the ambition to use it for larger purposes and bigger building projects. Known requirements such as structural integrity according to building codes, interfaces for construction, or waterproofing pose clear challenges for further realizations. To develop 3D concrete printing to its full potential, all challenges must serve as opportunities to dissect and rethink established norms and practices, and to construct a new interdisciplinary rule book for an emerging building technology. In this field, working through challenges via prototypes is a valuable basis for development and allows for an applied discourse within the wider community. Furthermore, to address these challenges, a flexible toolbox for structural analysis such as Karamba3d has the potential to serve as an open instrument and to promote 3D concrete printing to its next level within that interdisciplinary effort. On the one hand, the tool makes it possible to suggest customized and optimized rebar layouts, which could further support the development of 3D concrete printing; by enabling more customizable calculation and tailored real-time feedback for complex structures, solutions can become more structurally informed. On the other hand, recent developments with Karamba3d within our office of Bollinger+Grohmann have shown the possibility of real-time feedback loops between the physical printing process and the digital calculation model. Both developments show the potential that material systems can be optimized towards specific patterns or values of forces, or that structures can be evaluated in real time for their load-bearing capacity while hardening during printing.
41

Shoureshi, Rahmat A., and Christopher M. Aasted. "Wearable Hybrid Sensor Array for Motor Cortex Monitoring." Advances in Science and Technology 85 (September 2012): 23–27. http://dx.doi.org/10.4028/www.scientific.net/ast.85.23.

Abstract:
As part of the goal of developing wearable sensor technologies, we have designed and built a hybrid sensor headset for monitoring brain activity. Through the use of electroencephalography (EEG) and near-infrared spectroscopy (NIRS), the sensor array is capable of monitoring neural activity across the primary motor cortex and wirelessly transmitting data to a computer for real-time processing to generate control signals, which are transmitted to wireless devices for various applications. This paper focuses on current results using this technology for artificial limb control and discusses the development of the headset as well as the neural networks employed for processing motor cortex activity and determining the user’s intentions. Initial results relevant to artificial limb control are presented and discussed, including the performance of the system when actuating an artificial limb with four degrees of freedom. Our headset provides a more natural control mechanism than traditional solutions, through the use of direct brain control. The technology resulting from this research is currently also being investigated for application in areas including phantom limb pain treatment, robotic arm control, general brain-computer interfaces, lie detection, and even a video game interface.
42

Shoureshi, Rahmat A., and Christopher M. Aasted. "Fluctuations in Frequency Composition of Neural Activity Observed by Portable Brain Intention Detection Device." Advances in Science and Technology 96 (October 2014): 89–94. http://dx.doi.org/10.4028/www.scientific.net/ast.96.89.

Abstract:
As part of the goal of developing wearable sensor technologies, we have continued the development of a headset system for monitoring activity across the primary motor cortex of the brain. Through the combination of electroencephalography (EEG) and near-infrared spectroscopy (NIRS), the headsets are capable of monitoring event-related potentials and hemodynamic activity, which are wirelessly transmitted to a computer for real-time processing to generate control signals for a motorized prosthetic limb or a virtual embodiment of one or more limbs. This paper focuses on recent observations regarding the frequency content of EEG data, which we believe is responsible for the high performance we have previously reported using artificial neural networks to infer users' intentions. While the inference engine takes advantage of frequency content from 0-128 Hertz (Hz), distinct fluctuations in the alpha (8-13 Hz), beta (13-30 Hz), and gamma (30-100 Hz) frequency bands are observable at the group level across varying upper-limb motor exercises. In addition to prosthetic limbs, this technology continues to be investigated for application in areas including pain treatment, robotic arm control, lie detection, and more general brain-computer interfaces.
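As an illustration of the band decomposition these observations rely on, the sketch below computes mean power in the canonical alpha/beta/gamma bands from a raw EEG trace via Welch's method. The sampling rate and data are placeholders; this is not the authors' inference engine.

```python
import numpy as np
from scipy.signal import welch

BANDS = {"alpha": (8, 13), "beta": (13, 30), "gamma": (30, 100)}

def band_powers(eeg: np.ndarray, fs: float) -> dict:
    """Mean PSD per canonical band for a 1-D EEG trace (illustrative sketch)."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(fs))  # 1 s Welch segments
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

# e.g. four seconds of synthetic data sampled at 256 Hz:
print(band_powers(np.random.randn(1024), fs=256.0))
```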
43

Singh, Ravinder, Akshay Katyal, Mukesh Kumar, Kirti Singh, and Deepak Bhola. "Removal of sonar wave interference in multi-robot system for the efficient SLAM by randomized triggering time technique." World Journal of Engineering 17, no. 4 (June 4, 2020): 535–42. http://dx.doi.org/10.1108/wje-09-2019-0273.

Abstract:
Purpose: Sonar sensor-based mobile robot mapping is an efficient and low-cost technique for applications such as localization, autonomous navigation, SLAM, and path planning. In a multi-robot system, several sonar sensors are used, and the sound waves from one sonar interact with those of other sonars, causing wave interference. Because of this interference, the generated sonar grid maps become distorted, which decreases the reliability of mobile robot navigation in the generated grid maps. This research study focuses on removing the effect of wave interference in sonar mapping to achieve robust navigation of mobile robots. Design/methodology/approach: The wrong perception (occupancy grid map) of the environment due to cross-talk/wave interference is eliminated by randomizing the triggering time of each sonar, i.e., by varying the delay/sleep time of each sonar sensor. A software-based approach, the randomized triggering time (RTT) technique, is designed in Laboratory Virtual Instrument Engineering Workbench (LabVIEW) to randomize the triggering time of the sonar sensors and thereby eliminate the effect of wave interference/cross-talk when multiple sonars are placed in face-forward directions. Findings: To check the reliability of the RTT technique, various real-world experiments were performed, and a 64.8% improvement in the occupancy probabilities of the generated grid map was obtained compared with conventional approaches. Originality/value: The proposed RTT technique may be implemented for SLAM, reliable autonomous navigation, optimal path planning, efficient robotic vision, consistent multi-robot systems, etc.
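The core of the RTT idea is simple enough to sketch: each sensor adds a random component to its firing period so that simultaneous pings from face-forward sonars become unlikely. The timing constants and hardware hooks below are placeholders, not the paper's LabVIEW implementation.

```python
import random
import time

BASE_PERIOD_S = 0.10   # nominal ping period per sensor (assumption)
MAX_JITTER_S = 0.05    # randomized extra delay ("sleep time", assumption)

def sonar_loop(sensor_id, trigger, read_echo, update_grid, n_pings=100):
    """Fire one sonar with randomized triggering time.
    trigger(id) fires the ping; read_echo(id) returns range in metres;
    update_grid(id, distance) feeds the occupancy-grid update."""
    for _ in range(n_pings):
        # The random jitter decorrelates this sensor's pings from its
        # neighbours', so echoes are rarely in the air at the same time.
        time.sleep(BASE_PERIOD_S + random.uniform(0.0, MAX_JITTER_S))
        trigger(sensor_id)
        update_grid(sensor_id, read_echo(sensor_id))
```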
44

Taylor, R., X. Du, D. Proops, A. Reid, C. Coulson, and P. N. Brett. "A sensory-guided surgical micro-drill." Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science 224, no. 7 (April 27, 2010): 1531–37. http://dx.doi.org/10.1243/09544062jmes1933.

Abstract:
This article describes a surgical robotic device that is able to discriminate tissue interfaces and other controlling parameters ahead of the drill tip. The advantage in such surgery is that the tissues at the interfaces can be preserved. A smart tool detects ahead of the tool point and is able to control the interaction with respect to the flexing tissue, to avoid penetration or to control the extent of protrusion with respect to the position of the tissue. For surgical procedures where precision is required, the tool offers significant benefit. To interpret the drilling conditions and the conditions leading up to breakthrough at a tissue interface, a sensing scheme is used that discriminates between the variety of conditions posed in the drilling environment. The result is a fully autonomous system, which is able to respond to the tissue type, behaviour, and deflection in real time. The system is also robust in terms of disturbances encountered in the operating theatre. The device is pragmatic: it is intuitive to use, efficient to set up, and uses standard drill bits. The micro-drill, which has been used to prepare cochleostomies in the theatre, was used to remove the bone tissue leaving the endosteal membrane intact. This has enabled the preservation of sterility and the removal of drilling debris prior to the insertion of the electrode. It is expected that this technique will promote the preservation of hearing and reduce the possibility of complications. The article describes the device (including simulated drill progress and hardware set-up) and the stages leading up to its use in the theatre.
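The article's discrimination scheme is richer than this, but the basic breakthrough signature it exploits (thrust force climbing, then collapsing as the drill exits a bone layer) can be sketched as a simple detector. The window size and drop ratio below are invented thresholds, purely for illustration.

```python
def breakthrough_detector(forces, window=5, drop_ratio=0.6):
    """Yield True once the smoothed thrust force falls below
    drop_ratio * its running peak, a crude imminent-breakthrough flag."""
    history, peak = [], 0.0
    for f in forces:
        history.append(f)
        if len(history) > window:
            history.pop(0)
        smoothed = sum(history) / len(history)
        peak = max(peak, smoothed)
        yield peak > 0 and smoothed < drop_ratio * peak

# Synthetic force trace: rising while cutting, collapsing at the interface
trace = [0.2, 0.5, 0.9, 1.2, 1.3, 1.3, 0.7, 0.3, 0.1]
print(list(breakthrough_detector(trace)))  # flips to True near the end
```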
45

Tayeb, Zied, Juri Fedjaev, Nejla Ghaboosi, Christoph Richter, Lukas Everding, Xingwei Qu, Yingyu Wu, Gordon Cheng, and Jörg Conradt. "Validating Deep Neural Networks for Online Decoding of Motor Imagery Movements from EEG Signals." Sensors 19, no. 1 (January 8, 2019): 210. http://dx.doi.org/10.3390/s19010210.

Abstract:
Non-invasive, electroencephalography (EEG)-based brain-computer interfaces (BCIs) for motor imagery translate the subject's motor intention into control signals by classifying the EEG patterns caused by different imagination tasks, e.g., hand movements. This type of BCI has been widely studied and used as an alternative mode of communication and environmental control for disabled patients, such as those suffering from a brainstem stroke or a spinal cord injury (SCI). Notwithstanding the success of traditional machine learning methods in classifying EEG signals, these methods still rely on hand-crafted features. The extraction of such features is a difficult task due to the high non-stationarity of EEG signals, which is a major cause of the stagnating progress in classification performance. Remarkable advances in deep learning methods allow end-to-end learning without any feature engineering, which could benefit BCI motor imagery applications. We developed three deep learning models: (1) a long short-term memory network (LSTM); (2) a spectrogram-based convolutional neural network (CNN); and (3) a recurrent convolutional neural network (RCNN), for decoding motor imagery movements directly from raw EEG signals without any manual feature engineering. Results were evaluated on our own publicly available EEG dataset collected from 20 subjects and on the existing 2b EEG dataset from the "BCI Competition IV". Overall, better classification performance was achieved with the deep learning models compared to state-of-the-art machine learning techniques, which could chart a route ahead for developing new robust techniques for EEG signal decoding. We underpin this point by demonstrating the successful real-time control of a robotic arm using our CNN-based BCI.
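As a rough illustration of what end-to-end decoding from raw EEG means in practice, here is a minimal PyTorch CNN of the common temporal-then-spatial convolution kind. It is not the paper's architecture; all layer sizes, channel counts, and window lengths are assumptions.

```python
import torch
import torch.nn as nn

class TinyEEGNet(nn.Module):
    """Temporal convolution, spatial convolution across electrodes,
    pooling, then a linear classifier (2 motor-imagery classes)."""
    def __init__(self, n_channels=3, n_samples=1000, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=(1, 64), padding=(0, 32)),  # temporal
            nn.BatchNorm2d(8),
            nn.Conv2d(8, 16, kernel_size=(n_channels, 1)),          # spatial
            nn.BatchNorm2d(16),
            nn.ELU(),
            nn.AvgPool2d((1, 8)),
            nn.Dropout(0.5),
        )
        with torch.no_grad():  # infer the flattened feature size
            n_flat = self.features(torch.zeros(1, 1, n_channels, n_samples)).numel()
        self.classifier = nn.Linear(n_flat, n_classes)

    def forward(self, x):  # x: (batch, 1, channels, samples) of raw EEG
        return self.classifier(self.features(x).flatten(1))

model = TinyEEGNet()
logits = model(torch.randn(4, 1, 3, 1000))  # 4 raw EEG trials
```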
46

Pransky, Joanne. "The Pransky interview: Gianmarco Veruggio, Director of Research, CNR-IEIIT, Genoa Branch; Robotics Pioneer and Inventor." Industrial Robot: An International Journal 44, no. 1 (January 16, 2017): 6–10. http://dx.doi.org/10.1108/ir-10-2016-0271.

Abstract:
Purpose: The following paper is a "Q&A interview" conducted by Joanne Pransky of Industrial Robot journal as a method to impart the combined technological, business and personal experience of a prominent robotics industry engineer turned successful innovator and leader, regarding the challenges of bringing technological discoveries to fruition. Design/methodology/approach: The interviewee is Gianmarco Veruggio, who is responsible for the Operational Unit of Genoa of the Italian National Research Council Institute of Electronics, Computer and Telecommunication Engineering (CNR-IEIIT). Veruggio is an early pioneer of telerobotics in extreme environments and founded the new applicative field of Roboethics. In this interview, Veruggio shares some of his 30-year robotic journey along with his thoughts and concerns on robotics and society. Findings: Gianmarco Veruggio received a master's degree in electronic engineering, computer science, control and automation from Genoa University in 1980. From 1980 to 1983, he worked in the Automation Division of Ansaldo as a designer of fault-tolerant multiprocessor architectures for fail-safe control systems and was part of the development team for the new automation of the Italian railway stations. In 1984, he joined the CNR-Institute of Naval Automation (IAN) in Genoa as a Research Scientist. There, he worked on real-time computer graphics for simulation, control techniques, and naval and marine data-collection systems. In 1989, he founded the CNR-IAN Robotics Department (Robotlab), which he headed until 2003, to develop missions in experimental robotics in extreme environments. His approach utilized working prototypes in a virtual lab environment and focused on robot mission control, real-time human-machine interfaces, networked control system architectures for telerobotics, and Internet robotics. In 2000, he founded the association "Scuola di Robotica" (School of Robotics) to promote this new science among young people and society at large by means of educational robotics. He joined the CNR-IEIIT in 2007 to continue his research in robotics and also to develop studies on the philosophical, social and ethical implications of robotics. Originality/value: Veruggio led the first Italian underwater robotics campaigns in Antarctica during the Italian expeditions in 1993, 1997 and 2001, and in the Arctic during 2002. During the 2001-2002 Antarctic expedition, he carried out the E-Robot Project, the first experiment in internet robotics via satellite in Antarctica. In 2002, he designed and developed the Project E-Robot2, the first experiment in worldwide internet robotics ever carried out in the Arctic. During these projects, he organized a series of "live-science" sessions in collaboration with students and teachers of Italian schools. Beginning with his new School of Robotics, Veruggio continued to disseminate and educate young people on the complex relationship between robotics and society. This led him to coin the term and propose the concept of Roboethics in 2002, and he has since made worldwide efforts at dedicating resources to the development of this new field. He was the General Chair of the "First International Symposium on Roboethics" in 2004 and of the "EURON Roboethics Atelier" in 2006, which produced the Roboethics Roadmap. Veruggio is the author of more than 150 scientific publications.
In 2006, he was presented with the Ligurian Region Award for Innovation, and in 2009, for his merits in the field of science and society, he was awarded the title of Commander of the Order of Merit of the Italian Republic, one of Italy’s highest civilian honors.
47

Mikami, Sadayoshi, and Mitsuo Wada. "Special Issue on Complex Systems in Robotics." Journal of Robotics and Mechatronics 10, no. 4 (August 20, 1998): 283. http://dx.doi.org/10.20965/jrm.1998.p0283.

Abstract:
The Really ""intelligent"" robots predicted by science fiction have yet to appear, and robotics research seems to have reached a wall in dealing with the real-world environment. The robot is a unique device that it interfaces directly with the environments, including humans, machines, and nature. The world is very complex and changes dynamically. Robotic research must thus consider how to deal with such dynamcal complex world by means of machines. Our special issues on the complex systems in robotics introduce current representative approaches and attempts to answer these questions. The approach from a complex system point of view deals with new directions in robotics, for the above reasons and provides ways to view things dynamically, in a way that goes beyond traditional static control laws and rules. As these issues show approaches are divergent and ongoing. Modeling and forecasting the world is not haphazard. If requires direction. Even robots that navigate traffic, for example, must have a model to forecast unknown dynamics. Human interfacing requires far more difficult approaches than we take now. Recent developments in theory of chaos and non-linear predictions are expected to provide ways to enable these approaches. Robot interaction with the environment is one of the fundamental characteristics robots, and any interaction incorporates underlying dynamics; even robot-to-robot interaction exhibits deterministic dynamics. We will see how to deal with such complex phenomena through the articles predicting chaotic time series in these issues. Very rapid adaptation to the world is another way of coping using a brute-force approach. Reinforcement learning is a promising tool for working in a complex unknown environment. Learning robots affect both their environment and other robots. This is the situation in which we must think of the emergence of complexity. This may provide a rich source of possible tasks, and we must consider its dynamic nature of it. Many interesting phenomena are shown in the papers we present, applying reinforcement learning in multi-robots, for example. Finding good solutions wherever possible is a rather static solution but must incorporate the mechanism of how nature generates complexities and rich variations. Evolutionary methods, which many papers deal with in this issue, involves trends in complex systems sciences. Robotics applications must consider practical achievements such as rapidity, robustness, and appropriateness for specific applications. These issues provide a variety of robots and automation problems. Of course there are lots of other ways for this quite new approach and it should be worth cultivating because it is just the way we expect that robots should go. These special issues are organized from many papers submitted by researchers, all of whom we thank for their contributions. We hope these issues will help readers to familiarize themselves with the many trends in researches beyond engineering approaches and treat their practical implementation. This area is now very active, and we hope to see many papers related to this theme submitted to this journal in future.
48

Kazanzides, Peter, Balazs P. Vagvolgyi, Will Pryor, Anton Deguet, Simon Leonard, and Louis L. Whitcomb. "Teleoperation and Visualization Interfaces for Remote Intervention in Space." Frontiers in Robotics and AI 8 (December 1, 2021). http://dx.doi.org/10.3389/frobt.2021.747917.

Abstract:
Approaches to robotic manufacturing, assembly, and servicing of in-space assets range from autonomous operation to direct teleoperation, with many forms of semi-autonomous teleoperation in between. Because most approaches require one or more human operators at some level, it is important to explore the control and visualization interfaces available to those operators, taking into account the challenges due to significant telemetry time delay. We consider one motivating application of remote teleoperation, which is ground-based control of a robot on-orbit for satellite servicing. This paper presents a model-based architecture that: 1) improves visualization and situation awareness, 2) enables more effective human/robot interaction and control, and 3) detects task failures based on anomalous sensor feedback. We illustrate elements of the architecture by drawing on 10 years of our research in this area. The paper further reports the results of several multi-user experiments to evaluate the model-based architecture, on ground-based test platforms, for satellite servicing tasks subject to round-trip communication latencies of several seconds. The most significant performance gains were obtained by enhancing the operators’ situation awareness via improved visualization and by enabling them to precisely specify intended motion. In contrast, changes to the control interface, including model-mediated control or an immersive 3D environment, often reduced the reported task load but did not significantly improve task performance. Considering the challenges of fully autonomous intervention, we expect that some form of teleoperation will continue to be necessary for robotic in-situ servicing, assembly, and manufacturing tasks for the foreseeable future. We propose that effective teleoperation can be enabled by modeling the remote environment, providing operators with a fused view of the real environment and virtual model, and incorporating interfaces and control strategies that enable interactive planning, precise operation, and prompt detection of errors.
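One way to see why the paper finds precise motion specification and situation awareness so valuable is to simulate the round-trip delay itself. The toy loop below is a sketch rather than the paper's architecture: the operator's commands act on an immediately responsive local model, while the real robot only receives commands (and returns telemetry) several ticks late.

```python
from collections import deque

DELAY_TICKS = 30  # e.g. ~3 s one-way at 10 Hz (illustrative values)

class DelayedLink:
    """FIFO pipe: what arrives now was sent DELAY_TICKS ago."""
    def __init__(self, delay):
        self.pipe = deque([None] * delay)
    def send(self, msg):
        self.pipe.append(msg)
        return self.pipe.popleft()

uplink, downlink = DelayedLink(DELAY_TICKS), DelayedLink(DELAY_TICKS)
model_pose, robot_pose = 0.0, 0.0
for t in range(100):
    command = 0.1                      # operator nudges the local model
    model_pose += command              # immediate, predictive response
    cmd_at_robot = uplink.send(command)
    if cmd_at_robot is not None:
        robot_pose += cmd_at_robot     # the real robot executes late
    telemetry = downlink.send(robot_pose)
    # telemetry, once it arrives, would be fused back into model_pose here
```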
49

Zhong, Chuanyu, Shumi Zhao, Yang Liu, Zhijun Li, Zhen Kan, and Ying Feng. "A flexible wearable e-skin sensing system for robotic teleoperation." Robotica, September 16, 2022, 1–14. http://dx.doi.org/10.1017/s026357472200131x.

Abstract:
Electronic skin (e-skin) is playing an increasingly important role in health detection, robotic teleoperation, and human-machine interaction, but most e-skins currently lack the integration of on-site signal acquisition and transmission modules. In this paper, we develop a novel flexible wearable e-skin sensing system with 11 sensing channels for robotic teleoperation. The designed sensing system is mainly composed of three components: e-skin sensor, customized flexible printed circuit (FPC), and human-machine interface. The e-skin sensor has 10 stretchable resistors distributed at the proximal and metacarpal joints of each finger respectively and 1 stretchable resistor distributed at the purlicue. The e-skin sensor can be attached to the opisthenar, and thanks to its stretchability, the sensor can detect the bent angle of the finger. The customized FPC, with WiFi module, wirelessly transmits the signal to the terminal device with human-machine interface, and we design a graphical user interface based on the Qt framework for real-time signal acquisition, storage, and display. Based on this developed e-skin system and self-developed robotic multi-fingered hand, we conduct gesture recognition and robotic multi-fingered teleoperation experiments using deep learning techniques and obtain a recognition accuracy of 91.22%. The results demonstrate that the developed e-skin sensing system has great potential in human-machine interaction.
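To make the sensing chain concrete, here is a hypothetical per-channel readout of the kind such a glove implies: each stretchable resistor sits in a voltage divider, and a per-joint linear calibration maps resistance to a bend angle. Every constant (supply voltage, divider resistor, ohms-per-degree slope) is invented for illustration; the paper does not specify its analog front end.

```python
V_SUPPLY = 3.3        # volts (assumption)
R_FIXED = 10_000.0    # ohms, divider resistor (assumption)

def resistance_from_adc(v_out: float) -> float:
    """Invert the divider, with the sensor between supply and the ADC node."""
    return R_FIXED * (V_SUPPLY - v_out) / v_out

def bend_angle_deg(r: float, r_flat: float, ohms_per_deg: float) -> float:
    """Linear per-joint calibration from resistance to bend angle."""
    return (r - r_flat) / ohms_per_deg

# 11 channels: proximal + metacarpal joint per finger (10) plus the purlicue.
adc_volts = [1.65] * 11  # placeholder frame as it might arrive from the FPC
angles = [bend_angle_deg(resistance_from_adc(v), r_flat=10_000.0,
                         ohms_per_deg=40.0) for v in adc_volts]
```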
50

Naceri, Abdeldjallil, Dario Mazzanti, Joao Bimbo, Yonas T. Tefera, Domenico Prattichizzo, Darwin G. Caldwell, Leonardo S. Mattos, and Nikhil Deshpande. "The Vicarios Virtual Reality Interface for Remote Robotic Teleoperation." Journal of Intelligent & Robotic Systems 101, no. 4 (April 2021). http://dx.doi.org/10.1007/s10846-021-01311-7.

Abstract:
Intuitive interaction is the cornerstone of accurate and effective performance in remote robotic teleoperation. It requires high fidelity in control actions as well as in perception (vision, haptic, and other sensory feedback) of the remote environment. This paper presents Vicarios, a Virtual Reality (VR) based interface with the aim of facilitating intuitive real-time remote teleoperation, while utilizing the inherent benefits of VR, including immersive visualization, freedom of user viewpoint selection, and fluidity of interaction through natural action interfaces. Vicarios aims to enhance situational awareness, using the concept of viewpoint-independent mapping between the operator and the remote scene, thereby giving the operator better control in the perception-action loop. The article describes the overall system of Vicarios, with its software, hardware, and communication framework. A comparative user study quantifies the impact of the interface and its features, including immersion and instantaneous user viewpoint changes, termed "teleporting", on users' performance. The results show that users' performance with the VR-based interface was either similar to or better than the baseline condition of traditional stereo video feedback, confirming the realistic nature of the Vicarios interface. Furthermore, including the teleporting feature in VR significantly improved participants' performance and their appreciation for it, which was evident in the post-questionnaire results. Vicarios capitalizes on the intuitiveness and flexibility of VR to improve accuracy in remote teleoperation.
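The viewpoint-independent mapping mentioned above can be illustrated with a small frame-transformation sketch: motion commands expressed in whatever viewpoint the user has teleported to are rotated into the fixed remote-scene frame before being sent to the robot, so "push away from me" always means away from the current view. The rotation matrix below is an arbitrary example, not taken from the paper.

```python
import numpy as np

def to_scene_frame(delta_view: np.ndarray, R_view_in_scene: np.ndarray) -> np.ndarray:
    """delta_view: (3,) motion in viewpoint axes;
    R_view_in_scene: rotation from viewpoint frame to remote-scene frame."""
    return R_view_in_scene @ delta_view

# After a 90-degree teleport about the vertical axis, the same "forward"
# gesture maps to a different direction in the remote scene:
R = np.array([[0.0, 0.0, 1.0],
              [0.0, 1.0, 0.0],
              [-1.0, 0.0, 0.0]])
print(to_scene_frame(np.array([0.0, 0.0, -1.0]), R))  # -> [-1.  0.  0.]
```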