Journal articles on the topic "Deep Learning and Perception for Grasping and Manipulation"

Consult the 50 best journal articles for your research on the topic "Deep Learning and Perception for Grasping and Manipulation".

1

Han, Dong, Hong Nie, Jinbao Chen, Meng Chen, Zhen Deng, and Jianwei Zhang. "Multi-modal haptic image recognition based on deep learning". Sensor Review 38, no. 4 (September 17, 2018): 486–93. http://dx.doi.org/10.1108/sr-08-2017-0160.

Purpose: This paper aims to improve the diversity and richness of haptic perception by recognizing multi-modal haptic images.
Design/methodology/approach: First, the multi-modal haptic data collected by BioTac sensors from different objects are pre-processed, and then combined into haptic images. Second, a multi-class and multi-label deep learning model is designed, which can simultaneously learn four haptic features (hardness, thermal conductivity, roughness and texture) from the haptic images, and recognize objects based on these features. Haptic images with different dimensions and modalities are provided for testing the recognition performance of this model.
Findings: The results imply that multi-modal data fusion has a better performance than single-modal data on tactile understanding, and that haptic images with larger dimension are conducive to more accurate haptic measurement.
Practical implications: The proposed method has important potential applications in unknown-environment perception, dexterous grasping manipulation and other intelligent robotics domains.
Originality/value: This paper proposes a new deep learning model for extracting multiple haptic features and recognizing objects from multi-modal haptic images.
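
The model described here is essentially a shared convolutional backbone with one classification head per haptic attribute. The sketch below is a minimal, hypothetical PyTorch rendering of that idea; layer sizes, channel counts, and per-head class counts are our placeholders, not the authors' architecture.

```python
# Minimal multi-head CNN sketch: one shared backbone, four classifier
# heads (hardness, thermal conductivity, roughness, texture).
# All layer sizes and class counts are illustrative assumptions.
import torch
import torch.nn as nn

class HapticNet(nn.Module):
    def __init__(self, in_channels=3, classes_per_head=(3, 3, 3, 10)):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.heads = nn.ModuleList(
            [nn.Linear(64, c) for c in classes_per_head])

    def forward(self, x):
        z = self.backbone(x)
        return [head(z) for head in self.heads]  # one logit set per feature

model = HapticNet()
logits = model(torch.randn(1, 3, 64, 64))        # dummy haptic "image"
labels = [torch.zeros(1, dtype=torch.long)] * 4  # dummy per-head labels
loss = sum(nn.functional.cross_entropy(l, y) for l, y in zip(logits, labels))
```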

2

Valarezo Añazco, Edwin, Sara Guerrero, Patricio Rivera Lopez, Ji-Heon Oh, Ga-Hyeon Ryu, and Tae-Seong Kim. "Deep Learning-Based Ensemble Approach for Autonomous Object Manipulation with an Anthropomorphic Soft Robot Hand". Electronics 13, no. 2 (January 17, 2024): 379. http://dx.doi.org/10.3390/electronics13020379.

Autonomous object manipulation is a challenging task in robotics because it requires an essential understanding of the object's parameters such as position, 3D shape, grasping (i.e., touching) areas, and orientation. This work presents an autonomous object manipulation system using an anthropomorphic soft robot hand with deep learning (DL) vision intelligence for object detection, 3D shape reconstruction, and object grasping area generation. Object detection is performed using Faster-RCNN and an RGB-D sensor to produce a partial depth view of the objects randomly located in the working space. Three-dimensional object shape reconstruction is performed using a U-Net based on 3D convolutions with bottleneck layers and skip connections, generating a complete 3D shape of the object from the sensed single depth view. Then, the grasping position and orientation are computed from the reconstructed 3D object information (e.g., object shape and size) using the 3D-convolutional U-Net and Principal Component Analysis (PCA), respectively. The proposed autonomous object manipulation system is evaluated by grasping and relocating twelve objects not included in the training database, achieving an average of 95% successful object grasping and 93% successful object relocations.
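
The orientation step, deriving a grasp orientation from the reconstructed shape with PCA, reduces to an eigen-decomposition of the point cloud's covariance. A generic NumPy sketch follows (our illustration, not the authors' code; the axis-to-gripper mapping is an assumption):

```python
# PCA on a reconstructed object point cloud: the major axis gives the
# object's elongation, the minor axis a plausible finger-closing direction.
import numpy as np

def grasp_axes_from_pointcloud(points):
    """points: (N, 3) array sampled from the reconstructed 3D shape."""
    centroid = points.mean(axis=0)
    cov = np.cov((points - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    major_axis = eigvecs[:, -1]              # largest extent of the object
    approach_axis = eigvecs[:, 0]            # thinnest extent: close fingers here
    return centroid, major_axis, approach_axis

pts = np.random.randn(500, 3) * np.array([0.10, 0.03, 0.02])  # elongated dummy
center, major, approach = grasp_axes_from_pointcloud(pts)
```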

3

Wang, Cong, Qifeng Zhang, Qiyan Tian, Shuo Li, Xiaohui Wang, David Lane, Yvan Petillot, and Sen Wang. "Learning Mobile Manipulation through Deep Reinforcement Learning". Sensors 20, no. 3 (February 10, 2020): 939. http://dx.doi.org/10.3390/s20030939.

Mobile manipulation has a broad range of applications in robotics. However, it is usually more challenging than fixed-base manipulation due to the complex coordination of a mobile base and a manipulator. Although recent works have demonstrated that deep reinforcement learning is a powerful technique for fixed-base manipulation tasks, most of them are not applicable to mobile manipulation. This paper investigates how to leverage deep reinforcement learning to tackle whole-body mobile manipulation tasks in unstructured environments using only on-board sensors. A novel mobile manipulation system that integrates state-of-the-art deep reinforcement learning algorithms with visual perception is proposed. Its framework decouples visual perception from the deep reinforcement learning control, which enables generalization from simulation training to real-world testing. Extensive simulation and real-world experiments show that the proposed system is able to grasp different types of objects autonomously in various scenarios, verifying its effectiveness.

4

Zhao, Wenhui, Bin Xu, and Xinzhong Wu. "Robot grasping system based on deep learning target detection". Journal of Physics: Conference Series 2450, no. 1 (March 1, 2023): 012071. http://dx.doi.org/10.1088/1742-6596/2450/1/012071.

The traditional robot grasping system often uses fixed-point grasping or demonstration-based grasping, but with the increasing diversity of grasping targets and the randomness of their poses, these traditional methods are no longer sufficient. A robot grasping method based on deep learning target detection is proposed to address the high error rate of target recognition and the low success rate of grasping in the robot grasping process. The method investigates robotic-arm hand-eye calibration and a deep learning-based target detection and pose estimation algorithm. A Basler camera is used as the visual perception tool of the robot arm, an AUBO i10 robot arm serves as the experimental platform, and the PP-YOLO deep learning algorithm performs target detection and pose estimation on the object. Several grasping experiments were conducted on a diverse set of targets randomly placed in various poses in real scenes. The results show a target detection success rate of 94.93% and a robot grasping success rate of 93.37%.

5

Zhou, Hongyu, Jinhui Xiao, Hanwen Kang, Xing Wang, Wesley Au, and Chao Chen. "Learning-Based Slip Detection for Robotic Fruit Grasping and Manipulation under Leaf Interference". Sensors 22, no. 15 (July 22, 2022): 5483. http://dx.doi.org/10.3390/s22155483.

Robotic harvesting research has seen significant achievements in the past decade, with breakthroughs in machine vision, robot manipulation, autonomous navigation, and mapping. However, the missing capability of obstacle handling during the grasping process has severely reduced harvest success rates and limited the overall performance of robotic harvesting. This work focuses on detecting and handling slip caused by leaf interference, and proposes solutions for robotic grasping in an unstructured environment. Through analysis of the motion and force of fruit grasping under leaf interference, the connection between leaf-interference-induced object slip and inadequate harvest performance is identified for the first time in the literature. A learning-based perception and manipulation method is proposed to detect slip that causes problematic grasps, allowing the robot to react in a timely manner. Our results indicate that the proposed algorithm detects grasp slip with an accuracy of 94%. The proposed sensing-based manipulation demonstrated great potential in robotic fruit harvesting and could be extended to other pick-and-place applications.

6

Zhang, Ruihua, Xujun Chen, Zhengzhong Wan, Meng Wang, and Xinqing Xiao. "Deep Learning-Based Oyster Packaging System". Applied Sciences 13, no. 24 (December 8, 2023): 13105. http://dx.doi.org/10.3390/app132413105.

With consumers' deepening understanding of the nutritional value of oysters, oysters are gradually entering the market as high-quality seafood. Raw edible oyster production lines mainly rely on manual sorting and packaging, which hinders improvements in packaging efficiency and quality and can easily cause secondary pollution and cross-contamination, resulting in wasted oysters. To enhance the production efficiency, technical level, and hygiene safety of raw aquatic product production lines, this study proposes and constructs a deep learning-based oyster packaging system. The system achieves an intelligent, automated oyster packaging production line by integrating deep learning algorithms, machine vision, and robotic arm control. An oyster visual perception model is established with deep learning object detection to realize fast, real-time detection of oysters. Using the simple online and real-time tracking (SORT) algorithm, the grasping position of each oyster can be predicted, enabling dynamic grasping. Using robotic arm control, an automatic oyster packaging production line was designed and built to realize automated grasping and packaging of raw edible oysters, improving packaging efficiency and quality. System tests showed that the absolute error in oyster pose estimation was less than 7 mm, which allowed the mechanical claw to consistently grasp and transport oysters. Static grasping and packing of a single oyster took about 7.8 s, with a grasping success rate of 94.44%. The grasping success rate at the different transport speeds tested remained above 68%.
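
SORT tracks detections with a constant-velocity motion model, which is what makes grasp-point prediction on a moving line possible. The toy sketch below (our simplification with made-up numbers, not the paper's implementation) extrapolates a tracked oyster's position over the arm's motion delay:

```python
# Constant-velocity extrapolation of a tracked object's position,
# the core idea behind predicting a dynamic grasp point from SORT tracks.
import numpy as np

def predict_grasp_point(track, dt_motion):
    """track: list of (t, x, y) detections for one tracked oyster."""
    (t0, x0, y0), (t1, x1, y1) = track[-2], track[-1]
    vx, vy = (x1 - x0) / (t1 - t0), (y1 - y0) / (t1 - t0)
    # extrapolate to the moment the gripper reaches the belt
    return np.array([x1 + vx * dt_motion, y1 + vy * dt_motion])

track = [(0.0, 100.0, 50.0), (0.1, 108.0, 50.0)]   # belt moving along +x
print(predict_grasp_point(track, dt_motion=0.5))   # -> [148.  50.]
```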

7

Liu, Ning, Cangui Guo, Rongzhao Liang, and Deping Li. "Collaborative Viewpoint Adjusting and Grasping via Deep Reinforcement Learning in Clutter Scenes". Machines 10, no. 12 (November 29, 2022): 1135. http://dx.doi.org/10.3390/machines10121135.

For robotic grasping of randomly stacked objects in a cluttered environment, active multiple-viewpoint methods can improve grasping performance by improving environment perception. However, in many scenes it is redundant to always use multiple viewpoints for grasp detection, which reduces the robot's grasping efficiency. To improve grasping performance, we present a Viewpoint Adjusting and Grasping Synergy (VAGS) strategy based on deep reinforcement learning that directly coordinates viewpoint adjustment and grasping. To improve the training efficiency of VAGS, we propose a Dynamic Action Exploration Space (DAES) method based on ε-greedy to reduce training time. To address the sparse-reward problem in reinforcement learning, a reward function is created to evaluate the impact of adjusting the camera pose on grasping performance. According to experimental findings in simulation and the real world, the VAGS method improves grasping success and the scene clearing rate. Compared with direct grasping alone, our proposed strategy increases the grasping success rate and the scene clearing rate by 10.49% and 11%, respectively.
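
The abstract does not detail DAES beyond its ε-greedy basis; one plausible reading, sketched below purely as our own assumption, restricts random exploration to a window around the greedy action that shrinks as training progresses:

```python
# Hypothetical DAES-style epsilon-greedy: random actions are drawn from a
# neighbourhood of the greedy action whose size decays over training steps.
import random

def daes_select(q_values, step, eps=0.1, decay=0.999):
    """q_values: Q estimates over a discretised, ordered action set."""
    greedy = max(range(len(q_values)), key=q_values.__getitem__)
    if random.random() > eps:
        return greedy                                   # exploit
    k = max(1, int(len(q_values) * decay ** step))      # shrinking window
    lo, hi = max(0, greedy - k), min(len(q_values) - 1, greedy + k)
    return random.randint(lo, hi)                       # explore near greedy

action = daes_select([0.1, 0.4, 0.3, 0.2], step=500)
```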

8

Han, Dong, Beni Mulyana, Vladimir Stankovic, and Samuel Cheng. "A Survey on Deep Reinforcement Learning Algorithms for Robotic Manipulation". Sensors 23, no. 7 (April 5, 2023): 3762. http://dx.doi.org/10.3390/s23073762.

Robotic manipulation challenges, such as grasping and object manipulation, have been tackled successfully with the help of deep reinforcement learning systems. We give an overview of the recent advances in deep reinforcement learning algorithms for robotic manipulation tasks in this review. We begin by outlining the fundamental ideas of reinforcement learning and the parts of a reinforcement learning system. The many deep reinforcement learning algorithms, such as value-based methods, policy-based methods, and actor–critic approaches, that have been suggested for robotic manipulation tasks are then covered. We also examine the numerous issues that have arisen when applying these algorithms to robotics tasks, as well as the various solutions that have been put forth to deal with these issues. Finally, we highlight several unsolved research issues and talk about possible future directions for the subject.
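
As a reference point for the families the survey covers, the value-based branch bottoms out in the tabular Q-learning backup; a minimal generic example (not taken from any surveyed paper):

```python
# Tabular Q-learning update: the simplest member of the value-based
# family of reinforcement learning algorithms discussed in the survey.
def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.99):
    """Q: dict mapping (state, action) -> value estimate."""
    best_next = max(Q.get((s_next, b), 0.0) for b in actions)
    td_error = r + gamma * best_next - Q.get((s, a), 0.0)
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * td_error
    return td_error

Q = {}
q_update(Q, s="pregrasp", a="close", r=1.0, s_next="holding",
         actions=["close", "open", "move"])
```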

9

Mohammed, Marwan Qaid, Lee Chung Kwek, Shing Chyi Chua, Abdulaziz Salamah Aljaloud, Arafat Al-Dhaqm, Zeyad Ghaleb Al-Mekhlafi, and Badiea Abdulkarem Mohammed. "Deep Reinforcement Learning-Based Robotic Grasping in Clutter and Occlusion". Sustainability 13, no. 24 (December 10, 2021): 13686. http://dx.doi.org/10.3390/su132413686.

In robotic manipulation, object grasping is a basic yet challenging task. Dexterous grasping necessitates intelligent visual observation of the target objects by emphasizing the importance of spatial equivariance to learn the grasping policy. In this paper, two significant challenges associated with robotic grasping in both clutter and occlusion scenarios are addressed. The first challenge is the coordination of push and grasp actions, in which the robot may occasionally fail to disrupt the arrangement of the objects in a well-ordered object scenario. On the other hand, when employed in a randomly cluttered object scenario, the pushing behavior may be less efficient, as many objects are more likely to be pushed out of the workspace. The second challenge is the avoidance of occlusion that occurs when the camera itself is entirely or partially occluded during a grasping action. This paper proposes a multi-view change observation-based approach (MV-COBA) to overcome these two problems. The proposed approach is divided into two parts: 1) using multiple cameras to set up multiple views to address the occlusion issue; and 2) using visual change observation on the basis of the pixel depth difference to address the challenge of coordinating push and grasp actions. According to experimental simulation findings, the proposed approach achieved an average grasp success rate of 83.6%, 86.3%, and 97.8% in the cluttered, well-ordered object, and occlusion scenarios, respectively.
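
The second part of MV-COBA, visual change observation from pixel depth differences, can be pictured with a small NumPy check; the thresholds below are our guesses rather than the paper's values:

```python
# Did a push action actually rearrange the scene? Compare depth maps
# taken before and after the action and count the pixels that moved.
import numpy as np

def scene_changed(depth_before, depth_after, tol=0.005, min_frac=0.01):
    """Depth maps in metres; True if enough valid pixels changed by > tol."""
    valid = (depth_before > 0) & (depth_after > 0)   # ignore missing depth
    moved = np.abs(depth_after - depth_before) > tol
    return (moved & valid).sum() > min_frac * valid.sum()

d0 = np.ones((480, 640)); d1 = d0.copy(); d1[100:200, 100:200] -= 0.02
print(scene_changed(d0, d1))  # True: the push displaced something
```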

10

Sayour, Malak H., Sharbel E. Kozhaya, and Samer S. Saab. "Autonomous Robotic Manipulation: Real-Time, Deep-Learning Approach for Grasping of Unknown Objects". Journal of Robotics 2022 (June 30, 2022): 1–14. http://dx.doi.org/10.1155/2022/2585656.

Recent advancements in vision-based robotics and deep-learning techniques have enabled the use of intelligent systems in a wider range of applications requiring object manipulation. Finding a robust solution for object grasping and autonomous manipulation has become the focus of many engineers and is still one of the most demanding problems in modern robotics. This paper presents a full grasping pipeline proposing a real-time, data-driven, deep-learning approach for robotic grasping of unknown objects using MATLAB and convolutional neural networks. The proposed approach employs RGB-D image data acquired from an eye-in-hand camera, centering the object of interest in the field of view using visual servoing. Our approach aims at reducing propagation errors and eliminating the need for complex hand-tracking algorithms, image segmentation, or 3D reconstruction. The proposed approach efficiently generates reliable multi-view object grasps regardless of the geometric complexity and physical properties of the object in question. The proposed system architecture enables simple and effective path generation and real-time tracking control. In addition, our system is modular, reliable, and accurate in both end-effector path generation and control. We experimentally demonstrate the efficacy and effectiveness of our overall system on the Barrett Whole Arm Manipulator.
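
The centering step is classic image-based visual servoing: a proportional law maps the detected object's pixel offset from the image centre to a camera velocity. A bare-bones sketch under a pinhole model follows (focal length and gain are placeholder values, not the paper's):

```python
# Proportional eye-in-hand centring: pixel error -> camera-frame velocity.
import numpy as np

def centring_velocity(obj_px, image_size, depth, f_px=600.0, gain=0.5):
    u, v = obj_px
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    # back-project the pixel offset to a metric error in the camera frame
    ex = (u - cx) * depth / f_px
    ey = (v - cy) * depth / f_px
    return -gain * np.array([ex, ey, 0.0])  # drive the error toward zero

vel = centring_velocity((400, 260), (640, 480), depth=0.6)
```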

11

Rivera, Patricio, Edwin Valarezo Añazco, and Tae-Seong Kim. "Object Manipulation with an Anthropomorphic Robotic Hand via Deep Reinforcement Learning with a Synergy Space of Natural Hand Poses". Sensors 21, no. 16 (August 5, 2021): 5301. http://dx.doi.org/10.3390/s21165301.

Anthropomorphic robotic hands are designed to attain dexterous movements and flexibility much like human hands. Achieving human-like object manipulation remains a challenge especially due to the control complexity of the anthropomorphic robotic hand with a high degree of freedom. In this work, we propose a deep reinforcement learning (DRL) to train a policy using a synergy space for generating natural grasping and relocation of variously shaped objects using an anthropomorphic robotic hand. A synergy space is created using a continuous normalizing flow network with point clouds of haptic areas, representing natural hand poses obtained from human grasping demonstrations. The DRL policy accesses the synergistic representation and derives natural hand poses through a deep regressor for object grasping and relocation tasks. Our proposed synergy-based DRL achieves an average success rate of 88.38% for the object manipulation tasks, while the standard DRL without synergy space only achieves 50.66%. Qualitative results show the proposed synergy-based DRL policy produces human-like finger placements over the surface of each object including apple, banana, flashlight, camera, lightbulb, and hammer.

12

Zhang, Tengteng, and Hongwei Mo. "Research on Perception and Control Technology for Dexterous Robot Operation". Electronics 12, no. 14 (July 13, 2023): 3065. http://dx.doi.org/10.3390/electronics12143065.

Robotic grasping in cluttered environments is a fundamental and challenging task in robotics research. The ability to autonomously grasp objects in cluttered scenes is crucial for robots to perform complex tasks in real-world scenarios. Conventional grasping is based on known object models in a structured environment, but its adaptability to unknown objects and complicated situations is constrained. In this paper, we present a robotic grasping architecture based on attention-based deep reinforcement learning. To prevent the loss of local information, the prominent characteristics of input images are automatically extracted using a fully convolutional network. In contrast to previous model-based and data-driven methods, the reward is remodeled to address sparse rewards. The experimental results show that our method can double the learning speed in grasping a series of randomly placed objects. In real-world experiments, the grasping success rate of the robot platform reaches 90.4%, which outperforms several baselines.

13

Mohammed, Marwan Qaid, Lee Chung Kwek, Shing Chyi Chua, Arafat Al-Dhaqm, Saeid Nahavandi, Taiseer Abdalla Elfadil Eisa, Muhammad Fahmi Miskon, et al. "Review of Learning-Based Robotic Manipulation in Cluttered Environments". Sensors 22, no. 20 (October 18, 2022): 7938. http://dx.doi.org/10.3390/s22207938.

Robotic manipulation refers to how robots intelligently interact with the objects in their surroundings, such as grasping and carrying an object from one place to another. Dexterous manipulating skills enable robots to assist humans in accomplishing various tasks that might be too dangerous or difficult to do. This requires robots to intelligently plan and control the actions of their hands and arms. Object manipulation is a vital skill in several robotic tasks. However, it poses a challenge to robotics. The motivation behind this review paper is to review and analyze the most relevant studies on learning-based object manipulation in clutter. Unlike other reviews, this review paper provides valuable insights into the manipulation of objects using deep reinforcement learning (deep RL) in dense clutter. Various studies are examined by surveying existing literature and investigating various aspects, namely, the intended applications, the techniques applied, the challenges faced by researchers, and the recommendations adopted to overcome these obstacles. In this review, we divide deep RL-based robotic manipulation tasks in cluttered environments into three categories, namely, object removal, assembly and rearrangement, and object retrieval and singulation tasks. We then discuss the challenges and potential prospects of object manipulation in clutter. The findings of this review are intended to assist in establishing important guidelines and directions for academics and researchers in the future.

14

Lopez, Patricio Rivera, Ji-Heon Oh, Jin Gyun Jeong, Hwanseok Jung, Jin Hyuk Lee, Ismael Espinoza Jaramillo, Channabasava Chola, Won Hee Lee, and Tae-Seong Kim. "Dexterous Object Manipulation with an Anthropomorphic Robot Hand via Natural Hand Pose Transformer and Deep Reinforcement Learning". Applied Sciences 13, no. 1 (December 28, 2022): 379. http://dx.doi.org/10.3390/app13010379.

Dexterous object manipulation using anthropomorphic robot hands is of great interest for natural object manipulations across the areas of healthcare, smart homes, and smart factories. Deep reinforcement learning (DRL) is a particularly promising approach to solving dexterous manipulation tasks with five-fingered robot hands. Yet, controlling an anthropomorphic robot hand via DRL in order to obtain natural, human-like object manipulation with high dexterity remains a challenging task in the current robotic field. Previous studies have utilized some predefined human hand poses to control the robot hand’s movements for successful object-grasping. However, the hand poses derived from these grasping taxonomies are limited to a partial range of adaptability that could be performed by the robot hand. In this work, we propose a combinatory approach of a deep transformer network which produces a wider range of natural hand poses to configure the robot hand’s movements, and an adaptive DRL to control the movements of an anthropomorphic robot hand according to these natural hand poses. The transformer network learns and infers the natural robot hand poses according to the object affordance. Then, DRL trains a policy using the transformer output to grasp and relocate the object to the designated target location. Our proposed transformer-based DRL (T-DRL) has been tested using various objects, such as an apple, a banana, a light bulb, a camera, a hammer, and a bottle. Additionally, its performance is compared with a baseline DRL model via natural policy gradient (NPG). The results demonstrate that our T-DRL achieved an average manipulation success rate of 90.1% for object manipulation and outperformed NPG by 24.8%.

15

Bütepage, Judith, Silvia Cruciani, Mia Kokic, Michael Welle, and Danica Kragic. "From Visual Understanding to Complex Object Manipulation". Annual Review of Control, Robotics, and Autonomous Systems 2, no. 1 (May 3, 2019): 161–79. http://dx.doi.org/10.1146/annurev-control-053018-023735.

Planning and executing object manipulation requires integrating multiple sensory and motor channels while acting under uncertainty and complying with task constraints. As the modern environment is tuned for human hands, designing robotic systems with similar manipulative capabilities is crucial. Research on robotic object manipulation is divided into smaller communities interested in, e.g., motion planning, grasp planning, sensorimotor learning, and tool use. However, few attempts have been made to combine these areas into holistic systems. In this review, we aim to unify the underlying mechanics of grasping and in-hand manipulation by focusing on the temporal aspects of manipulation, including visual perception, grasp planning and execution, and goal-directed manipulation. Inspired by human manipulation, we envision that an emphasis on the temporal integration of these processes opens the way for human-like object use by robots.

16

Cirillo, Andrea, Gianluca Laudante, and Salvatore Pirozzi. "Tactile Sensor Data Interpretation for Estimation of Wire Features". Electronics 10, no. 12 (June 18, 2021): 1458. http://dx.doi.org/10.3390/electronics10121458.

Tactile perception is now essential for robotic applications performing complex manipulation tasks, e.g., grasping objects of different shapes and sizes, distinguishing between different textures, and avoiding slips by grasping an object with minimal force. Considering Deformable Linear Object manipulation applications, this paper presents an efficient and straightforward method to allow robots to autonomously work with thin objects, e.g., wires, and to recognize their features, i.e., diameter, by relying on tactile sensors developed by the authors. The method, based on machine learning algorithms, is described in depth in the paper to make it easily reproducible by readers. Experimental tests show the effectiveness of the approach, which is able to properly recognize the considered object features with a recognition rate of up to 99.9%. Moreover, a pick-and-place task, which uses the method to classify and organize a set of wires by diameter, is presented.
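
The underlying task is conventional supervised classification over tactile-array frames. A generic scikit-learn stand-in, with synthetic data in place of the authors' sensor recordings (array size and class count are invented), looks like this:

```python
# Generic wire-diameter classification from flattened tactile frames.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X = np.random.rand(300, 36)           # stand-in for 6x6 taxel frames
y = np.random.randint(0, 3, 300)      # three wire-diameter classes
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```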

17

Zhou, Hongyu, Hanwen Kang, Xing Wang, Wesley Au, Michael Yu Wang, and Chao Chen. "Branch Interference Sensing and Handling by Tactile Enabled Robotic Apple Harvesting". Agronomy 13, no. 2 (February 9, 2023): 503. http://dx.doi.org/10.3390/agronomy13020503.

In the dynamic and unstructured environments where horticultural crops grow, obstacles and interference frequently occur but are rarely addressed, which poses significant challenges for robotic harvesting. This work proposes a tactile-enabled robotic grasping method that combines deep learning, tactile sensing, and soft robots. By integrating fin-ray fingers with embedded tactile sensing arrays and customized perception algorithms, the robot gains the ability to sense and handle branch interference during the harvesting process and thus reduce potential mechanical fruit damage. Experimental validation demonstrated an overall grasping-status detection success rate of 83.3–87.0% and a promising interference-handling method. The proposed grasping method can also be extended to broader robotic grasping applications wherever undesirable foreign-object intrusion needs to be addressed.

18

Xie, Zhen, Josh Ye Seng Chen, Guo Wei Lim, and Fengjun Bai. "Data-Driven Robotic Tactile Grasping for Hyper-Personalization Line Pick-and-Place". Actuators 12, no. 5 (May 1, 2023): 192. http://dx.doi.org/10.3390/act12050192.

Industries such as manufacturing and logistics need algorithms flexible enough to handle novel or unknown objects. Many current solutions on the market are unsuitable for grasping these objects in high-mix, low-volume scenarios, and there are still gaps in grasping accuracy and speed that we would like to address in this research. This project aims to improve robotic grasping capability for novel objects with varying shapes and textures through the use of soft grippers and data-driven learning in a hyper-personalization line. A literature review was conducted to understand the tradeoffs between the deep reinforcement learning (DRL) approach and the deep learning (DL) approach. The DRL approach was found to be data-intensive, complex, and collision-prone. As a result, we opted for a data-driven approach, specifically PointNetGPD. In addition, a comprehensive market survey was performed on tactile sensors and soft grippers, considering factors such as price, sensitivity, simplicity, and modularity. Based on our study, we chose the Rochu two-fingered soft gripper with our customized force-sensing resistor (FSR) sensors mounted on the fingertips due to its modularity and compatibility with tactile sensors. A software architecture was proposed, including a perception module, picking module, transfer module, and packing module. Finally, we trained the model with the soft-gripper configuration and evaluated grasping on objects unknown to the robot prior to grasping, such as fast-moving consumer goods (FMCG) products, fruits, and vegetables. Grasping accuracy improved from 75% with a push-and-grasp baseline to 80% with PointNetGPD. This versatile grasping platform is independent of gripper configurations and robot models. Future work is proposed to further enhance tactile sensing and grasping stability.

19

Caldera, Shehan, Alexander Rassau, and Douglas Chai. "Review of Deep Learning Methods in Robotic Grasp Detection". Multimodal Technologies and Interaction 2, no. 3 (September 7, 2018): 57. http://dx.doi.org/10.3390/mti2030057.

For robots to attain more general-purpose utility, grasping is a necessary skill to master. Such general-purpose robots may use their perception abilities to visually identify grasps for a given object. A grasp describes how a robotic end-effector can be arranged to securely grab an object and successfully lift it without slippage. Traditionally, grasp detection requires expert human knowledge to analytically form the task-specific algorithm, but this is an arduous and time-consuming approach. During the last five years, deep learning methods have enabled significant advancements in robotic vision, natural language processing, and automated driving applications. The successful results of these methods have driven robotics researchers to explore the use of deep learning methods in task-generalised robotic applications. This paper reviews the current state of the art in the application of deep learning methods to generalised robotic grasping and discusses how each element of the deep learning approach has improved the overall performance of robotic grasp detection. Several of the most promising approaches are evaluated, and the one-shot detection method is identified as the most suitable for real-time grasp detection. The availability of suitable volumes of appropriate training data is identified as a major obstacle to the effective utilisation of deep learning approaches, and the use of transfer learning techniques is proposed as a potential mechanism to address this. Finally, current trends in the field and future potential research directions are discussed.

20

Zhang, Tengteng, and Hongwei Mo. "Towards Multi-Objective Object Push-Grasp Policy Based on Maximum Entropy Deep Reinforcement Learning under Sparse Rewards". Entropy 26, no. 5 (May 12, 2024): 416. http://dx.doi.org/10.3390/e26050416.

In unstructured environments, robots need to deal with a wide variety of objects with diverse shapes, and often, the instances of these objects are unknown. Traditional methods rely on training with large-scale labeled data, but in environments with continuous and high-dimensional state spaces, the data become sparse, leading to weak generalization ability of the trained models when transferred to real-world applications. To address this challenge, we present an innovative maximum entropy Deep Q-Network (ME-DQN), which leverages an attention mechanism. The framework solves complex and sparse reward tasks through probabilistic reasoning while eliminating the trouble of adjusting hyper-parameters. This approach aims to merge the robust feature extraction capabilities of Fully Convolutional Networks (FCNs) with the efficient feature selection of the attention mechanism across diverse task scenarios. By integrating an advantage function with the reasoning and decision-making of deep reinforcement learning, ME-DQN propels the frontier of robotic grasping and expands the boundaries of intelligent perception and grasping decision-making in unstructured environments. Our simulations demonstrate a remarkable grasping success rate of 91.6%, while maintaining excellent generalization performance in the real world.
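
The "maximum entropy" ingredient of ME-DQN refers to entropy-regularized value backups, in which the hard max over actions becomes a temperature-weighted log-sum-exp. A generic soft Bellman target in PyTorch, shown as our illustration of the principle rather than the paper's code:

```python
# Soft Bellman target: V(s') = alpha * log sum_a exp(Q(s',a) / alpha),
# which reduces to the ordinary max as the temperature alpha -> 0.
import torch

def soft_bellman_target(q_next, reward, alpha=0.1, gamma=0.99):
    """q_next: (batch, n_actions) Q-values at the next state."""
    soft_v = alpha * torch.logsumexp(q_next / alpha, dim=1)
    return reward + gamma * soft_v

target = soft_bellman_target(torch.randn(8, 16), torch.zeros(8))
```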

21

Chen, Ao, Yongchun Xie, Yong Wang, and Linfeng Li. "Knowledge Graph-Based Image Recognition Transfer Learning Method for On-Orbit Service Manipulation". Space: Science & Technology 2021 (August 6, 2021): 1–9. http://dx.doi.org/10.34133/2021/9807452.

Visual perception provides state information about the current manipulation scene to the control system, which plays an important role in on-orbit service manipulation. With the development of deep learning, deep convolutional neural networks (CNNs) have achieved many successful applications in the field of visual perception. However, deep CNNs are only effective when a large amount of training data with the same distribution as the test data is available, and real space images are difficult to obtain in the quantities needed for large-scale training. Therefore, deep CNNs cannot be directly adopted for image recognition in on-orbit service manipulation tasks. To solve this few-shot learning problem, this paper proposes a knowledge graph-based image recognition transfer learning method (KGTL), which learns from a training dataset containing dense source-domain data and sparse target-domain data, and can be transferred to a test dataset containing a large amount of target-domain data. The average recognition precision of the proposed method is 80.5% and the average recall is 83.5%, higher than those of ResNet50-FC, whose average precision is 60.2% and average recall is 67.5%. The proposed method significantly improves the training efficiency of the network and the generalization performance of the model.

22

Desingh, Karthik. "Perception for General-purpose Robot Manipulation". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 13 (June 26, 2023): 15435. http://dx.doi.org/10.1609/aaai.v37i13.26802.

To autonomously perform tasks, a robot should continually perceive the state of its environment, reason with the task at hand, plan and execute appropriate actions. In this pipeline, perception is largely unsolved and one of the more challenging problems. Common indoor environments typically pose two main problems: 1) inherent occlusions leading to unreliable observations of objects, and 2) the presence and involvement of a wide range of objects with varying physical and visual attributes (i.e., rigid, articulated, deformable, granular, transparent, etc.). Thus, we need algorithms that can accommodate perceptual uncertainty in the state estimation and generalize to a wide range of objects. Probabilistic inference methods have been highly suitable for modeling perceptual uncertainty, and data-driven approaches using deep learning techniques have shown promising advancements toward generalization. Perception for manipulation is a more intricate setting requiring the best from both worlds. My research aims to develop robot perception algorithms that can generalize over objects and tasks while accommodating perceptual uncertainty to support robust task execution in the real world. In this presentation, I will briefly highlight my research in these two research threads.

23

Zhu, Bo-Rui, Jin-Siang Shaw, and Shih-Hao Lee. "Development of Annulus-Object Random Bin Picking System based on Rapid Establishment of RGB-D Images". WSEAS TRANSACTIONS ON INFORMATION SCIENCE AND APPLICATIONS 21 (February 28, 2024): 128–38. http://dx.doi.org/10.37394/23209.2024.21.13.

With the development of the automation industry, robotic arm and vision applications are no longer limited to the fixed actions of the past. Production lines increasingly require the recognition and grasping of objects in complex environments, emphasizing quick setup and stability. In this paper, a rapidly constructed eye-hand system for robotic arm grasping is introduced, which enables fast and efficient object manipulation, particularly for stacked objects. Initially, images were captured using a camera to generate extensive datasets from a limited number of images. Objects were then segmented and categorized using deep learning networks for object detection and instance segmentation. Three-dimensional position information was obtained from an RGB-D camera. Finally, object poses were determined based on plane normal vectors, and gripping positions were manually marked. This reduced the time required for grasp-point identification, model training, and pose localization. Based on experimental results, the grasping procedure proposed in this paper is suitable for various object-grasping scenarios. It achieved picking success rates of 96% for unstacked annular objects and 90.86% for annular objects randomly piled in a bin. In the final experiment, after depth-information filtering, a success rate of 95.1% was attained for random-bin annular object picking.
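
Estimating pose from plane normal vectors typically reduces to a least-squares plane fit over the segmented 3D points, with the normal taken from the smallest singular vector. A minimal NumPy sketch (generic, not the authors' implementation):

```python
# Least-squares plane normal of a segmented RGB-D point patch via SVD.
import numpy as np

def plane_normal(points):
    """points: (N, 3) 3D points belonging to one (roughly planar) surface."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[-1]  # right singular vector with the smallest singular value

pts = np.column_stack([np.random.rand(200, 2), np.zeros(200)])  # z = 0 plane
print(plane_normal(pts))  # approximately [0, 0, +-1]
```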

24

Huang, Shiyao, and Hao Wu. "Texture Recognition Based on Perception Data from a Bionic Tactile Sensor". Sensors 21, no. 15 (August 2, 2021): 5224. http://dx.doi.org/10.3390/s21155224.

Texture recognition is important for robots to discern the characteristics of an object's surface and adjust grasping and manipulation strategies accordingly. It is still challenging to develop texture classification approaches that are accurate and do not require high computational costs. In this work, we adopt a bionic tactile sensor to collect vibration data while sliding against materials of interest. Under a fixed contact pressure and speed, a total of 1000 sets of vibration data from ten different materials were collected. With the tactile perception data, four types of texture recognition algorithms are proposed. Three machine learning algorithms, namely support vector machine, random forest, and K-nearest neighbor, are established for texture recognition, with test accuracies of 95%, 94%, and 94%, respectively. In the machine learning results, the asamoto and polyester materials are easily confused with each other. A convolutional neural network is established to further increase the test accuracy to 98.5%. The three machine learning models and the convolutional neural network demonstrate high accuracy and excellent robustness.
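
The reported three-way comparison maps directly onto a few lines of scikit-learn. The sketch below substitutes synthetic features for the sliding-vibration recordings (the real pipeline's feature extraction is not specified here):

```python
# Compare SVM, random forest, and k-NN on (synthetic) texture features.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

X = np.random.rand(1000, 32)        # placeholder vibration features
y = np.repeat(np.arange(10), 100)   # ten materials, 100 samples each
for name, clf in [("SVM", SVC()),
                  ("Random forest", RandomForestClassifier()),
                  ("k-NN", KNeighborsClassifier())]:
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```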

25

Zhou, Huaidong, Wusheng Chou, Wanchen Tuo, Yongfeng Rong, and Song Xu. "Mobile Manipulation Integrating Enhanced AMCL High-Precision Location and Dynamic Tracking Grasp". Sensors 20, no. 22 (November 23, 2020): 6697. http://dx.doi.org/10.3390/s20226697.

Mobile manipulation, which has more flexibility than fixed-base manipulation, has always been an important topic in the field of robotics. However, for sophisticated operation in complex environments, efficient localization and dynamic tracking grasp still face enormous challenges. To address these challenges, this paper proposes a mobile manipulation method integrating laser-reflector-enhanced adaptive Monte Carlo localization (AMCL) algorithm and a dynamic tracking and grasping algorithm. First, by fusing the information of laser-reflector landmarks to adjust the weight of particles in AMCL, the localization accuracy of mobile platforms can be improved. Second, deep-learning-based multiple-object detection and visual servo are exploited to efficiently track and grasp dynamic objects. Then, a mobile manipulation system integrating the above two algorithms into a robotic with a 6-degrees-of-freedom (DOF) operation arm is implemented in an indoor environment. Technical components, including localization, multiple-object detection, dynamic tracking grasp, and the integrated system, are all verified in real-world scenarios. Experimental results demonstrate the efficacy and superiority of our method.

26

Zapata-Impata, Brayan S., Pablo Gil, and Fernando Torres. "Tactile-Driven Grasp Stability and Slip Prediction". Robotics 8, no. 4 (September 26, 2019): 85. http://dx.doi.org/10.3390/robotics8040085.

One of the challenges in robotic grasping tasks is the problem of detecting whether a grip is stable or not. The lack of stability during a manipulation operation usually causes the slippage of the grasped object due to poor contact forces. Frequently, an unstable grip can be caused by an inadequate pose of the robotic hand or by insufficient contact pressure, or both. The use of tactile data is essential to check such conditions and, therefore, predict the stability of a grasp. In this work, we present and compare different methodologies based on deep learning in order to represent and process tactile data for both stability and slip prediction.

27

Cordeiro, Artur, João Pedro Souza, Carlos M. Costa, Vítor Filipe, Luís F. Rocha, and Manuel F. Silva. "Bin Picking for Ship-Building Logistics Using Perception and Grasping Systems". Robotics 12, no. 1 (January 18, 2023): 15. http://dx.doi.org/10.3390/robotics12010015.

Bin picking is a challenging task involving many research domains within the perception and grasping fields, for which there are no perfect and reliable solutions applicable to the wide range of unstructured and cluttered environments present in industrial factories and logistics centers. This paper contributes research on object segmentation in cluttered scenarios, independent of previous object shape knowledge, for textured and textureless objects. In addition, it addresses the demand for extended datasets in deep learning tasks with realistic data. We propose a solution using a Mask R-CNN for 2D object segmentation, trained with real data acquired from an RGB-D sensor and synthetic data generated in Blender, combined with 3D point-cloud segmentation to extract a segmented point cloud belonging to a single object from the bin. Next, a re-configurable pipeline for 6-DoF object pose estimation is employed, followed by a grasp planner to select a feasible grasp pose. The experimental results show that the object segmentation approach is efficient and accurate in cluttered scenarios with several occlusions. The neural network model was trained with both real and simulated data, enhancing the success rate over the previous classical segmentation and displaying an overall grasping success rate of 87.5%.

28

Pastor, Francisco, Da-hui Lin-Yang, Jesús M. Gómez-de-Gabriel, and Alfonso J. García-Cerezo. "Dataset with Tactile and Kinesthetic Information from a Human Forearm and Its Application to Deep Learning". Sensors 22, no. 22 (November 12, 2022): 8752. http://dx.doi.org/10.3390/s22228752.

There are physical Human–Robot Interaction (pHRI) applications where the robot has to grab the human body, such as rescue or assistive robotics. Being able to precisely estimate the grasping location when grabbing a human limb is crucial to perform a safe manipulation of the human. Computer vision methods provide pre-grasp information with strong constraints imposed by the field environments. Force-based compliant control, after grasping, limits the amount of applied strength. On the other hand, valuable tactile and proprioceptive information can be obtained from the pHRI gripper, which can be used to better know the features of the human and the contact state between the human and the robot. This paper presents a novel dataset of tactile and kinesthetic data obtained from a robot gripper that grabs a human forearm. The dataset is collected with a three-fingered gripper with two underactuated fingers and a fixed finger with a high-resolution tactile sensor. A palpation procedure is performed to record the shape of the forearm and to recognize the bones and muscles in different sections. Moreover, an application for the use of the database is included. In particular, a fusion approach is used to estimate the actual grasped forearm section using both kinesthetic and tactile information on a regression deep-learning neural network. First, tactile and kinesthetic data are trained separately with Long Short-Term Memory (LSTM) neural networks, considering the data are sequential. Then, the outputs are fed to a Fusion neural network to enhance the estimation. The experiments conducted show good results in training both sources separately, with superior performance when the fusion approach is considered.

29

Imtiaz, Muhammad Babar, Yuansong Qiao, and Brian Lee. "Prehensile and Non-Prehensile Robotic Pick-and-Place of Objects in Clutter Using Deep Reinforcement Learning". Sensors 23, no. 3 (January 29, 2023): 1513. http://dx.doi.org/10.3390/s23031513.

In this study, we develop a framework for an intelligent and self-supervised industrial pick-and-place operation for cluttered environments. Our target is to have the agent learn to perform prehensile and non-prehensile robotic manipulations to improve the efficiency and throughput of the pick-and-place task. To achieve this target, we specify the problem as a Markov decision process (MDP) and deploy a deep reinforcement learning (RL) temporal difference model-free algorithm known as the deep Q-network (DQN). We consider three actions in our MDP; one is ‘grasping’ from the prehensile manipulation category and the other two are ‘left-slide’ and ‘right-slide’ from the non-prehensile manipulation category. Our DQN is composed of three fully convolutional networks (FCN) based on the memory-efficient architecture of DenseNet-121 which are trained together without causing any bottleneck situations. Each FCN corresponds to each discrete action and outputs a pixel-wise map of affordances for the relevant action. Rewards are allocated after every forward pass and backpropagation is carried out for weight tuning in the corresponding FCN. In this manner, non-prehensile manipulations are learnt which can, in turn, lead to possible successful prehensile manipulations in the near future and vice versa, thus increasing the efficiency and throughput of the pick-and-place task. The Results section shows performance comparisons of our approach to a baseline deep learning approach and a ResNet architecture-based approach, along with very promising test results at varying clutter densities across a range of complex scenario test cases.
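
Action selection in this pixel-wise affordance formulation is an argmax over the three maps jointly: the winning primitive and the image location at which to execute it. A toy sketch (primitive names follow the abstract; map size and the rest are our assumptions):

```python
# Pick the best (primitive, pixel) pair from per-action affordance maps.
import numpy as np

def select_action(maps):
    """maps: dict primitive-name -> (H, W) affordance map from its FCN."""
    best = max(maps, key=lambda k: maps[k].max())
    y, x = np.unravel_index(np.argmax(maps[best]), maps[best].shape)
    return best, (y, x)

maps = {name: np.random.rand(224, 224)
        for name in ("grasp", "left-slide", "right-slide")}
print(select_action(maps))  # e.g. ('grasp', (17, 133))
```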

30

Yang, Zeshi, Kangkang Yin, and Libin Liu. "Learning to use chopsticks in diverse gripping styles". ACM Transactions on Graphics 41, no. 4 (July 2022): 1–17. http://dx.doi.org/10.1145/3528223.3530057.

Learning dexterous manipulation skills is a long-standing challenge in computer graphics and robotics, especially when the task involves complex and delicate interactions between the hands, tools and objects. In this paper, we focus on chopsticks-based object relocation tasks, which are common yet demanding. The key to successful chopsticks skills is steady gripping of the sticks that also supports delicate maneuvers. We automatically discover physically valid chopsticks holding poses by Bayesian Optimization (BO) and Deep Reinforcement Learning (DRL), which works for multiple gripping styles and hand morphologies without the need for example data. Given as input the discovered gripping poses and desired objects to be moved, we build physics-based hand controllers to accomplish relocation tasks in two stages. First, kinematic trajectories are synthesized for the chopsticks and hand in a motion planning stage. The key components of our motion planner include a grasping model to select suitable chopsticks configurations for grasping the object, and a trajectory optimization module to generate collision-free chopsticks trajectories. Then we train physics-based hand controllers through DRL again to track the desired kinematic trajectories produced by the motion planner. We demonstrate the capabilities of our framework by relocating objects of various shapes and sizes, in diverse gripping styles and holding positions for multiple hand morphologies. Our system achieves faster learning speed and better control robustness when compared to vanilla systems that attempt to learn chopstick-based skills without a gripping pose optimization module and/or without a kinematic motion planner. Our code and models are publicly available.

31

Li, Guozhen, Shiqiang Liu, Liangqi Wang, and Rong Zhu. "Skin-inspired quadruple tactile sensors integrated on a robot hand enable object recognition". Science Robotics 5, no. 49 (December 16, 2020): eabc8134. http://dx.doi.org/10.1126/scirobotics.abc8134.

Robot hands with tactile perception can improve the safety of object manipulation and also improve the accuracy of object identification. Here, we report the integration of quadruple tactile sensors onto a robot hand to enable precise object recognition through grasping. Our quadruple tactile sensor consists of a skin-inspired multilayer microstructure. It works as thermoreceptor with the ability to perceive thermal conductivity of a material, measure contact pressure, as well as sense object temperature and environment temperature simultaneously and independently. By combining tactile sensing information and machine learning, our smart hand has the capability to precisely recognize different shapes, sizes, and materials in a diverse set of objects. We further apply our smart hand to the task of garbage sorting and demonstrate a classification accuracy of 94% in recognizing seven types of garbage.

32

Agarwal, Aditya, Yash Oza, Maxim Likhachev, and Chad Kessens. "Fast and High-Quality, GPU-based, Deliberative, Object-Pose Estimation". Field Robotics 1, no. 1 (October 19, 2021): 34–69. http://dx.doi.org/10.55417/fr.2021002.

Pose estimation of recognized objects is fundamental to tasks such as robotic grasping and manipulation. The need for reliable grasping imposes stringent accuracy requirements on pose estimation in cluttered, occluded scenes in dynamic environments. Modern methods employ large sets of training data to learn features and object templates in order to find correspondence between models and observed data. However, these methods require extensive annotation of ground-truth poses. An alternative is to use algorithms, such as PERCH (PErception Via SeaRCH) that seek an optimal explanation of the observed scene in a space of possible rendered versions. While PERCH offers strong guarantees on accuracy, the initial formulation suffers from poor scalability owing to its high runtime. In this work, we present PERCH 2.0, a deliberative approach that takes advantage of GPU acceleration and RGB data by formulating pose estimation as a single-shot, fully parallel approach. We show that PERCH 2.0 achieves a two orders of magnitude speedup (∼100X) over the hierarchical PERCH by evaluating thousands of poses in parallel. In addition, we propose a combined deliberative and discriminative framework for 6-DoF pose estimation that doesn’t require any ground-truth pose-annotation. Our work shows that PERCH 2.0 achieves, on the YCB-Video Dataset, a higher accuracy than DenseFusion, a state-of-the-art, end-to-end, learning-based approach. We also demonstrate that our work leads directly to an extension of deliberative pose estimation methods like PERCH to new domains, such as conveyor picking, which was previously infeasible due to high runtime. Our code is available at https://sbpl-cruz.github.io/perception/

33

Schwarz, Max, Anton Milan, Arul Selvam Periyasamy, and Sven Behnke. "RGB-D object detection and semantic segmentation for autonomous manipulation in clutter". International Journal of Robotics Research 37, no. 4-5 (June 20, 2017): 437–51. http://dx.doi.org/10.1177/0278364917713117.

Autonomous robotic manipulation in clutter is challenging. A large variety of objects must be perceived in complex scenes, where they are partially occluded and embedded among many distractors, often in restricted spaces. To tackle these challenges, we developed a deep-learning approach that combines object detection and semantic segmentation. The manipulation scenes are captured with RGB-D cameras, for which we developed a depth fusion method. Employing pretrained features makes learning from small annotated robotic datasets possible. We evaluate our approach on two challenging datasets: one captured for the Amazon Picking Challenge 2016, where our team NimbRo came in second in the Stowing and third in the Picking task; and one captured in disaster-response scenarios. The experiments show that object detection and semantic segmentation complement each other and can be combined to yield reliable object perception.

34

Liu, Kainan, Meiyun Zhang, and Mohammed K. Hassan. "Intelligent image recognition system for detecting abnormal features of scenic spots based on deep learning". Journal of Intelligent & Fuzzy Systems 39, no. 4 (October 21, 2020): 5149–59. http://dx.doi.org/10.3233/jifs-189000.

To monitor scene anomalies in real time through video and images, identify emergencies, respond quickly at the start of an emergency, and reduce losses, this paper focuses on building an image recognition system for the anomalous characteristics of tourism emergencies, studying crowd levels in scenic spots on the basis of scenic-spot monitoring. The video-based crowd anomaly monitoring method improves the AUC index by 0.423 over the W-SFM method and by 0.0844 over the optical-flow method. A degree-enhanced algorithm (BCOF) comprehensively predicts the overall comfort of the tourists currently in the scenic spot by collecting microblog data related to the spot and establishes a tourist-state expression model; compared with the BN and NEG algorithms, the BCOF algorithm improves the accuracy and recall for tourists in scenic spots by 14% and 18%, respectively. An image recognition system for tourism emergency anomalies was established, and an early-warning model for tourism emergencies based on group-intelligence perception was used to monitor scenic spots, achieving an overall accuracy of 83.33%; the model has strong predictive ability and enables real-time monitoring of events in scenic spots.

35

Massalim, Yerkebulan, Zhanat Kappassov, and Huseyin Atakan Varol. "Deep Vibro-Tactile Perception for Simultaneous Texture Identification, Slip Detection, and Speed Estimation". Sensors 20, no. 15 (July 25, 2020): 4121. http://dx.doi.org/10.3390/s20154121.

Autonomous dexterous manipulation relies on the ability to recognize an object and detect its slippage. Dynamic tactile signals are important for object recognition and slip detection: an object can be identified from the signals generated at contact points during tactile interaction, and vibrotactile sensors can increase the accuracy of texture recognition and preempt the slippage of a grasped object. In this work, we present a Deep Learning (DL) based method for simultaneous texture recognition and slip detection. The method detects slip and non-slip events, estimates sliding speed, and discriminates textures, all within 17 ms. We evaluate the method on three objects grasped by an industrial gripper with accelerometers installed on its fingertips. A comparative analysis of convolutional neural networks (CNNs), feed-forward neural networks, and long short-term memory networks confirmed that deep CNNs have higher generalization accuracy. We also evaluated the performance of the most accurate method at different signal bandwidths, which showed that a bandwidth of 125 Hz is enough to classify textures with 80% accuracy.
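The multi-task setup can be pictured as a single temporal encoder with three output heads. The following is our own illustrative sketch under assumed layer sizes and head definitions, not the architecture from the paper:

```python
import torch
import torch.nn as nn

class VibroTactileNet(nn.Module):
    """Shared 1D-CNN encoder over accelerometer windows, three task heads."""
    def __init__(self, channels=3, n_textures=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten())
        self.texture = nn.Linear(64, n_textures)   # texture class logits
        self.slip = nn.Linear(64, 2)                # slip / non-slip logits
        self.speed = nn.Linear(64, 1)               # sliding speed regression

    def forward(self, x):                           # x: (batch, channels, time)
        z = self.encoder(x)
        return self.texture(z), self.slip(z), self.speed(z)
```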
APA, Harvard, Vancouver, ISO, etc. styles
36

Meckel, Miriam and Léa Steinacker. "Hybrid Reality: The Rise of Deepfakes and Diverging Truths". Morals & Machines 1, no. 1 (2021): 12–23. http://dx.doi.org/10.5771/2747-5182-2021-1-12.

Full text
Abstract
While the manipulation of media has existed as long as their creation, recent advances in Artificial Intelligence (AI) have expanded the range of tampering techniques. Pictures, sound, and moving images can now be altered and even generated entirely by computation. We argue that this development contributes to a "hybrid reality", a construct of both human perception and technologically driven fabrications. Using synthetic media produced with deep learning, known as deepfakes, as one manifestation, we show how this technological progress leads to a distorted marketplace of ideas and truths that necessitates a renegotiation of democratic processes. We synthesize implications and conclude with recommendations for how to reach a new consensus on the construction of reality.
APA, Harvard, Vancouver, ISO, etc. styles
37

Zhang, Haiming, Mingchang Wang, Yongxian Zhang and Guorui Ma. "TDA-Net: A Novel Transfer Deep Attention Network for Rapid Response to Building Damage Discovery". Remote Sensing 14, no. 15 (August 1, 2022): 3687. http://dx.doi.org/10.3390/rs14153687.

Full text
Abstract
The rapid and accurate discovery of damage information for affected buildings is of great significance for post-disaster emergency rescue. In related studies, existing models can detect damaged buildings relatively accurately, but their time cost is high; models that guarantee both detection accuracy and efficiency are urgently needed. In this paper, we propose a new transfer-learning deep attention network (TDA-Net) that achieves a balance of accuracy and efficiency. The benchmark network for TDA-Net uses a pair of deep residual networks and is pretrained on a large-scale dataset of disaster-damaged buildings. The pretrained deep residual networks are strongly sensitive to damage information, which ensures that the network captures relevant features effectively. To give the network a more robust perception of change features, a set of deep attention bidirectional encoding and decoding modules is connected after the TDA-Net benchmark network. When performing a new task, only a small number of samples are needed to train the network, and the damage information of buildings across the whole area can be extracted. The bidirectional encoding and decoding structure allows the two images to be input into the model independently, which effectively captures the features of each image and thereby improves detection accuracy. Our experiments on the xView2 dataset and three datasets of disaster regions achieve high detection accuracy, which demonstrates the feasibility of our method.
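The two-stream, shared-encoder pattern described above is a common change-detection design. As a rough, generic illustration (this siamese sketch, its backbone, difference operation, and head are our assumptions, not TDA-Net itself):

```python
import torch
import torch.nn as nn
import torchvision

class SiameseChangeNet(nn.Module):
    """Shared encoder embeds pre- and post-disaster images; a small head
    maps the feature difference to a per-pixel damage mask."""
    def __init__(self):
        super().__init__()
        backbone = torchvision.models.resnet18(weights="IMAGENET1K_V1")
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])  # (B,512,h,w)
        self.head = nn.Sequential(
            nn.Conv2d(512, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 1))                     # per-pixel damage logit

    def forward(self, pre, post):                    # two (B,3,H,W) images
        f_pre, f_post = self.encoder(pre), self.encoder(post)
        logits = self.head(torch.abs(f_pre - f_post))
        return nn.functional.interpolate(logits, size=pre.shape[-2:],
                                         mode="bilinear", align_corners=False)
```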
APA, Harvard, Vancouver, ISO, etc. styles
38

Lee, Jongseok, Ribin Balachandran, Konstantin Kondak, Andre Coelho, Marco De Stefano, Matthias Humt, Jianxiang Feng, Tamim Asfour and Rudolph Triebel. "Virtual Reality via Object Pose Estimation and Active Learning: Realizing Telepresence Robots with Aerial Manipulation Capabilities". Field Robotics 3, no. 1 (January 10, 2023): 323–67. http://dx.doi.org/10.55417/fr.2023010.

Full text
Abstract
This paper presents a novel telepresence system for advancing aerial manipulation in dynamic and unstructured environments. The proposed system not only features a haptic device, but also a virtual reality (VR) interface that provides real-time 3D displays of the robot's workspace as well as haptic guidance to its remotely located operator. To realize this, multiple sensors, namely a LiDAR, cameras, and IMUs, are utilized. For processing of the acquired sensory data, pose estimation pipelines are devised for industrial objects of both known and unknown geometries. We further propose an active learning pipeline to increase the sample efficiency of a pipeline component that relies on a Deep Neural Network (DNN) based object detector. All these algorithms jointly address the various challenges encountered during the execution of perception tasks in industrial scenarios. In the experiments, exhaustive ablation studies are provided to validate the proposed pipelines. Methodologically, these results commonly suggest how an awareness of the algorithms' own failures and uncertainty ("introspection") can be used to tackle the encountered problems. Moreover, outdoor experiments are conducted to evaluate the effectiveness of the overall system in enhancing aerial manipulation capabilities. In particular, with flight campaigns over days and nights, from spring to winter, and with different users and locations, we demonstrate over 70 robust executions of pick-and-place, force application, and peg-in-hole tasks with the DLR cable-Suspended Aerial Manipulator (SAM). As a result, we show the viability of the proposed system in future industrial applications.
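Uncertainty-driven active learning of the kind mentioned above typically ranks unlabeled data by the detector's confidence and requests annotation only for the least certain samples. A minimal sketch under that assumption, where the detector callable and the scoring rule are hypothetical rather than the paper's pipeline:

```python
import numpy as np

def select_for_labeling(frames, detector, budget=50):
    """Return indices of the `budget` frames the detector is least sure about.

    detector: assumed callable returning an iterable of per-detection
              confidence scores for one frame (empty if nothing is found).
    """
    confidences = np.array([max(detector(f), default=0.0) for f in frames])
    return np.argsort(confidences)[:budget]   # least confident frames first
```

Labeling only these frames concentrates annotation effort where the model fails, which is what raises sample efficiency.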
APA, Harvard, Vancouver, ISO, etc. styles
39

Seetohul, Jenna and Mahmood Shafiee. "Snake Robots for Surgical Applications: A Review". Robotics 11, no. 3 (May 5, 2022): 57. http://dx.doi.org/10.3390/robotics11030057.

Full text
Abstract
Although substantial advancements have been achieved in robot-assisted surgery, the blueprint for existing snake robots predominantly focuses on preliminary structural design, control, and human–robot interfaces, with several features not yet explored in the literature. This paper reviews planning and operation concepts of hyper-redundant serpentine robots for surgical use, as well as future challenges and solutions for better manipulation. Researchers working on the manufacture and navigation of snake robots face issues such as low dexterity of the end-effectors around delicate organs, state estimation, and the lack of depth perception on two-dimensional screens. A wide range of robots is analysed, such as the i²Snake robot, inspiring the use of force and position feedback, visual servoing, and augmented reality (AR). We present the types of actuation methods, robot kinematics, dynamics, sensing, and prospects of AR integration in snake robots, while addressing their shortcomings to facilitate the surgeon's task. For smoother gait control, validation and optimization algorithms, such as deep-learning databases, are examined to mitigate redundancy in module linkage backlash and accidental self-collision. In essence, we aim to provide an outlook on robot configurations during motion by enhancing their material compositions within anatomical biocompatibility standards.
APA, Harvard, Vancouver, ISO, etc. styles
40

Sharma, Santosh Kumar. "Failed Nerve Blocks: Prevention and Management". Journal of Anaesthesia and Critical Care Reports 4, no. 3 (2018): 3–6. http://dx.doi.org/10.13107/jaccr.2018.v04i03.101.

Full text
Abstract
"The secret of success is constancy of purpose" – Benjamin Disraeli, British politician. Success and failure go side by side in regional anesthesia. No anesthesiologist can claim a 100% success record while giving nerve blocks; hence, it is always better to focus on preventing the causes of block failure than on managing a failed block. Abdallah and Brull conducted a comprehensive literature search into the meaning of block "success" as used by various authors and found it highly variable, with no consensus on its meaning [1]. The most common definition of block success was the achievement of a surgical block within a designated period. There are essentially four stakeholders in defining success criteria: the patient, the anesthesiologist, the surgeon, and the hospital administrator. Patient-related parameters of success, including post-operative pain and patient satisfaction, were evaluated in only four trials; anesthesiologist-related indicators, such as block onset time and complications, were reported most frequently; surgeon- and hospital-administrator-related indicators were not collected in any trial. For all practical purposes, a block failure may be accepted when any one of the following applies after an adequate time of approximately 30 min: conversion to general anesthesia (GA) after surgical incision; use of intravenous (IV) opioid analgesics ≥100 μg fentanyl or equivalent after incision; a rescue peripheral nerve block (a second block after completion of an initial block); or infiltration of local anesthetic agent (LA) into the surgical site. These four criteria are routinely recorded in medical records and have been accepted in previous research papers. Failures may be classified as (a) a total failure, in which the bolus of LA completely misses its target and surgery cannot proceed; (b) an incomplete block, in which the patient has numbness in the area of nerve distribution but not enough for incision; (c) a patchy block, in which some areas in the distribution of the plexus escape; (d) a worn-off block, or secondary failure, seen when surgery outlasts the duration of the block; and (e) a misdirected block, in which part or all of the drug is injected into neighboring structures, for example a different fascial or muscular plane or a vessel. Morgan stated that "Regional anesthesia always works – provided you put the right dose of the right drug in the right place." Failure occurs from blocking the wrong nerve or not blocking all the nerves needed for the planned surgery. "Three primary keys to successful regional anesthesia are, therefore, nerve location, nerve location, and nerve location!" – N.M. Denny. Every anesthesiologist must "pause" just before placing the needle to re-confirm the patient's identity, the intended procedure, and the correct side of the intended nerve block.

Numerous factors play a crucial role in the success or failure of a peripheral nerve block. The operator's technical skill and experience play a substantial role; an unskilled anesthesiologist is perhaps the biggest cause of failure. Exposure to multiple techniques at the same time is confusing for the beginner, and a pearl of wisdom is to avoid "over-selling" regional anesthesia (RA) techniques in the initial days of independent practice. Dr. Gaston Labat wisely advised in 1924 that "A thorough knowledge of the descriptive and topographic anatomy with regard to nerve distribution is a condition which anyone desirous of attempting to study regional anesthesia should fulfill." If ultrasound (US) is being used, knowledge of sono-anatomy is equally essential; gross anatomic distortion will, however, remain a challenge to the success of nerve blocks. It is essential to give appropriate blocks for the appropriate surgery. According to Hilton's law, the nerve trunk innervating a joint also supplies the overlying skin and the muscles that move that joint, and one must block all the relevant nerves for a successful block; conversely, one must understand the limitations of a particular nerve plexus block and the nerves most commonly spared by it. It is better to choose one technique, become familiar, confident, and comfortable with it, and stay with it for a reasonable time rather than trying unfamiliar nerve block techniques at the first go. Sub-optimal placement of LA in landmark-based techniques leads to their highly variable success rates. Using proper equipment is always advisable, and both the peripheral nerve stimulator (PNS) and US have been validated to increase success rates in multiple studies; block success rates are similar between US and PNS when the block is performed by experts [2]. Whatever the equipment, knowing and familiarizing oneself with it is a bare minimum requirement. While using a nerve stimulator, the current intensity is the most important factor: an evoked motor response at a current of ≤0.5 mA (0.3–0.5 mA) indicates a successful nerve block, knowledge of the appropriate motor response of the innervating nerve is crucial, and non-ideal motor responses increase failure rates. In recent times, much emphasis has been laid on US-guided blocks, and the target nerve is no longer invisible. Do US-guided blocks lead to 100% success? Sites et al. identified 398 errors committed by US novices across 520 peripheral nerve blocks [3]. The crux is that US may not eliminate failures completely; its major limitation is dependence on the operator, who needs adequate training and faces a definite learning curve in honing the skills. The most common errors during US-guided blocks are excessive hand motion while holding the needle or probe, poor choice of needle-insertion site and angle, difficulty aligning the needle with the US beam (preventing needle visualization), failure to recognize the needle tip before injection, anatomic artifacts (tissue resembling the target nerve), and failure to recognize maldistribution of LA [4]. The combination of US and PNS ("dual guidance") for nerve identification and blockade has also been proposed: using both facilitates learning, improves trainee performance, and provides an increased level of confidence and comfort. For superficial blocks, US alone is usually sufficient, with PNS used to monitor for overlooked intraneural placement; for deep or anatomically challenging US-guided blocks with inadequate images, PNS can identify the nerve structures of interest. Multi-stimulation, a technique in which each component of the nerve plexus is stimulated separately, has been shown to increase the success rate and reduce the dose of LA, although it requires multiple passes or multiple skin punctures with the block needle. The best results are seen for the infra-clavicular, mid-humeral, axillary, and popliteal or sciatic blocks, and for most US-guided nerve blocks; no additional risk of nerve injury has been reported when redirecting the needle through partially anesthetized nerves.

Excessively anxious or uncooperative patients, and patients with mental illness, are not ideal candidates for RA. A patient's anxiety may affect the anesthesiologist adversely, making him anxious, denting his confidence, and consequently ruining his chances of a successful nerve block (Table 1) [5]. Underlying comorbidities such as obesity, arthritis, and diabetes may affect positioning, access, nerve localization, and identification, whereas a good previous experience of anesthesia or surgery predicts a more relaxed patient and a successful block. Management comprises good pre-operative counseling and gentle, unhurried handling, which may be followed by light anxiolytic premedication, lifting drapes off the patient's eyes, shielding the ears from noise, and playing soft music through headphones in the operation theater (OT) (Fig. 1). Patients may still claim that their block has failed owing to conscious awareness of the OT setting and "sensations" transmitted through unblocked nerve fibers; IV analgesia or sedation with appropriate monitoring to relieve anxiety and pain is essential, is considered standard care, and should not be counted as a failure.

Table 1: Anesthesiologists' perception of patients' anxiety, its frequency, and its effects during regional anesthesia (adapted from Jlala et al., "Anaesthesiologists' perception of patients' anxiety under regional anesthesia", Local and Regional Anesthesia, 2010). n: number of respondents who agree/disagree with each statement; %: percentage.

Statement | Agree n (%) | Disagree n (%)
Patients' anxiety is common during regional anesthesia | 36 (33) | 74 (67)
Anxiety is mostly pre-operative | 69 (62) | 41 (38)
Patients' anxiety concerns me a lot | 25 (23) | 85 (77)
I underestimate patients' anxiety | 49 (44) | 61 (55)
I am always prepared to manage patients' anxiety | 66 (60) | 44 (40)
Patients' anxiety may affect my anxiety | 59 (53) | 51 (46)
Patients' anxiety affects my confidence in performing regional anesthesia | 39 (35) | 71 (65)
Patients' anxiety may affect block success | 63 (57) | 47 (43)
Differing advice from surgeon and anesthesiologist increases patient anxiety | 100 (90) | 10 (9)

Drugs are an important factor in the success of nerve blocks. A sufficient volume and appropriate concentration of LA solution are key: too great a volume or concentration raises the risk of side effects rather than increasing efficacy, while too little increases the chance of failure. The anesthesiologist should personally check for wrong dispensing and the drug's expiry date before proceeding with the nerve block. Mixing LAs is often misinterpreted as providing significant advantages such as longer block duration and decreased toxicity; instead, mixtures merely mimic an intermediate-acting agent, with a higher chance of toxicity. Isolated case reports professing very low volumes of LA must be taken in the right context and should not be made the universal rule. Perineural opioid and non-opioid adjuvants prolong block duration, but none beyond 24 h; alkalinization does not improve the success rate; and adjuvants allow only dose reductions of LA rather than preventing block failure. Anesthesiologists who are in a hurry or work under undue pressure often face higher failure rates. Organizational changes such as instituting a "block room" for RA improve success and indirectly standardize block procedures in the institution; an area separate from the operating table also allows adequate time to test and top up ineffective blocks, and block rooms pool expertise, creating excellent teaching opportunities for trainees. Poor ergonomics leads to increased fatigue and poor performance, especially among anesthesia residents and novice operators using US-guided blocks [6].

The personalities and technical skills of our teammates (specifically the surgeons) play a role in the selection of the type of anesthesia, nerve block technique, choice of drug, and need for adjuvants. An uncooperative surgeon is a strong predictor of nerve block failure. One should always discuss the surgical plan, site of incision, area to be operated, and intra-operative patient position with the surgeon. A clinical pearl is not to allow surgeons and OT staff to interrupt while one is giving the block, as it will invariably increase the anxiety level. Once a patient is in the OT, the momentum shifts in favor of performing the surgery, and only a few surgeons (including mine) have the patience to wait for the block to work. Allowing adequate "soak time" (time for a block to take effect) is mandatory for a block to be successful; 30 min is considered the minimum waiting time before calling any block a failure. Once an incomplete block has been diagnosed preoperatively, the management options are a re-block, additional injections or rescue blocks, a different nerve block, spinal or combined spinal-epidural anesthesia in lower-limb surgeries, systemic analgesia with opioids or adjuvants, local infiltration anesthesia, or GA. "It is not a failure to fail, it is a failure not to have a plan in case you fail" (unknown). The decision to re-block depends on the dose already administered and the time allowed before initiating surgery; the lower volume of LA required in US-guided blocks allows a repeat block within the maximum permissible dose of LA. Once the surgical procedure has begun, options are limited: we may still conduct the surgery with analgesic supplementation in the form of opioids and anesthetic supplementation such as ketamine or propofol in incremental doses when the block is partial or expected to take effect with time. Local infiltration anesthesia (LIA) must be considered as one of the options [7]; the volume of LA depends on the extent of the incision, the upper dose limit should not be exceeded, and incisional infiltration comprises not only subcutaneous injection but also intramuscular, interfascial, and deep tissue injections. LIA is also an integral component of multimodal analgesia. If all feasible efforts have been unsuccessful and the patient continues to have persistent pain, GA is the last resort: surgical manipulation should be stopped momentarily and GA induced with rapid sequence induction and intubation without further delay, or alternatively continued with a face mask or a laryngeal mask airway and spontaneous ventilation. Rebound pain is pain that appears once the block has worn off in patients who had little or no pain while it was working [8]. Various interventions may be tried to prevent rebound failure [9]; educating patients during the pre-anesthetic check-up about what to expect when the block wears off remains the most useful strategy. Continuous peripheral nerve block (CPNB) using perineural catheters (PNCs) is most efficacious; other options are wound catheter infusion, oral or IV multimodal analgesics, and IV or perineural adjuvants. A secondary failure is seen in CPNB when a repeat dose of LA fails to provide effective analgesia after the initial primary block has resolved.

PNC failures may be addressed by US guidance, which improves the success of catheter insertion compared with nerve stimulation. The sub-circumneural space is considered the ideal space for catheter placement. Tunneling improves catheter security and prevents inadvertent misplacement, tissue glue applied to puncture sites stops leakage of LA, intermittent bolus doses are better than continuous basal infusion, and commercially available stimulating catheters have been reported to decrease secondary failure rates. Multimodal analgesia should be provided in all cases, more so with a non-functioning catheter. There is an ongoing debate on whether blocks should be performed after GA: Melissa et al. have rightly asked, "Nerve Blocks Under GA: Time to Liberalize Indications?" [10], and Marhofer has tried to demystify the myths related to regional blocks carried out during GA or deep sedation [11]. So how ready are we to change the rules? Taking a lead from past successful experience in pediatric patients and in truncal or chest blocks, it is advantageous to combine GA with low-volume, low-concentration single-shot blocks or CPNB [12]; there is then no risk of failure, no delay, and all stakeholders (surgeon, anesthesiologist, patient, and hospital administrator) are satisfied. As healthcare systems move toward patient-centered parameters, the patient's criteria for the success of a nerve block will become foremost, and broader questions will emerge beyond mere pain relief. Barriers to nerve blocks will always remain in the attainment of success (Fig. 2). Every effort should be made to encourage every anesthesiologist to practice RA; not utilizing it is probably the biggest failure.
APA, Harvard, Vancouver, ISO, etc. styles
41

Gorjup, Gal, Lucas Gerez and Minas Liarokapis. "Leveraging Human Perception in Robot Grasping and Manipulation Through Crowdsourcing and Gamification". Frontiers in Robotics and AI 8 (April 29, 2021). http://dx.doi.org/10.3389/frobt.2021.652760.

Full text
Abstract
Robot grasping in unstructured and dynamic environments is heavily dependent on object attributes. Although Deep Learning approaches have delivered exceptional performance in robot perception, human perception and reasoning are still superior in processing novel object classes, and training such models requires large, difficult-to-obtain datasets. This work combines crowdsourcing and gamification to leverage human intelligence, enhancing the object recognition and attribute estimation processes of robot grasping. The framework employs an attribute matching system that encodes visual information into an online puzzle game, utilizing the collective intelligence of players to expand the attribute database and react to real-time perception conflicts. The framework is deployed and evaluated in two proof-of-concept applications: enhancing the control of a robotic exoskeleton glove and improving object identification for autonomous robot grasping. In addition, a model for estimating the framework's response time is proposed. The obtained results demonstrate that the framework is capable of rapid adaptation to novel object classes, based purely on visual information and human experience.
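One simple way to turn many players' answers into a usable attribute label, offered purely as an illustration of the crowdsourcing idea (the vote thresholds and conflict rule are our assumptions, not the authors' system):

```python
from collections import Counter

def crowd_attribute(votes, min_votes=5, min_agreement=0.6):
    """Aggregate player votes for one object attribute.

    votes: list of labels collected from the game, e.g. ['rigid', 'rigid', 'soft'].
    Returns (label, status); label is None until the crowd agrees.
    """
    if len(votes) < min_votes:
        return None, "waiting for more players"
    label, count = Counter(votes).most_common(1)[0]
    if count / len(votes) < min_agreement:
        return None, "conflict: re-issue puzzle to more players"
    return label, "accepted"
```

Requiring both a minimum number of votes and a minimum agreement ratio is one plausible way to trade response time against label reliability.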
APA, Harvard, Vancouver, ISO, etc. styles
42

Zhao, Min, Guoyu Zuo, Shuangyue Yu, Daoxiong Gong, Zihao Wang and Ouattara Sie. "Position‐aware pushing and grasping synergy with deep reinforcement learning in clutter". CAAI Transactions on Intelligence Technology, August 2, 2023. http://dx.doi.org/10.1049/cit2.12264.

Full text
Abstract
The positional information of objects is crucial for robots performing grasping and pushing manipulations in clutter: robots need to perceive the coordinates of objects and the spatial relationships between them (e.g., proximity, adjacency). The authors propose an end-to-end position-aware deep Q-learning framework to achieve efficient collaborative pushing and grasping in clutter. Specifically, a pair of conjugate pushing and grasping attention modules capture the position information of objects and generate high-quality affordance maps of operating positions from features of the pushing and grasping operations. In addition, the authors propose an object isolation metric and a clutter metric, based on instance segmentation, to measure the spatial relationships between objects in cluttered environments. To further enhance the perception of object positions, the change in the object isolation and clutter metrics before and after performing an action is incorporated into the reward function. A series of experiments carried out in simulation and the real world indicate that the method improves sample efficiency, task completion rate, grasping success rate, and action efficiency compared with state-of-the-art end-to-end methods. Notably, the system can be robustly applied in the real world and extended to novel objects. Supplementary material is available at https://youtu.be/NhG_k5v3NnM.
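The reward-shaping idea can be sketched concretely. The metric below is our own stand-in for the paper's isolation metric, not its exact formula: it rewards a push that increases the mean nearest-neighbour distance between segmented object centroids.

```python
import numpy as np

def isolation_metric(centroids: np.ndarray) -> float:
    """Mean nearest-neighbour distance between object centroids, shape (N, 2)."""
    if len(centroids) < 2:
        return 0.0
    d = np.linalg.norm(centroids[:, None] - centroids[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # ignore self-distances
    return float(d.min(axis=1).mean())

def push_reward(centroids_before, centroids_after, scale=1.0):
    """Positive reward when a push spreads the clutter apart."""
    gain = isolation_metric(centroids_after) - isolation_metric(centroids_before)
    return scale * gain
```

Tying the reward to the measured change in spatial separation gives the pushing policy a dense learning signal even on steps where no grasp is attempted.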
APA, Harvard, Vancouver, ISO, etc. styles
43

Park, Su-Young, Cheonghwa Lee, Suhwan Jeong, Junghyuk Lee, Dohyeon Kim, Youhyun Jang, Woojin Seol, Hyungjung Kim and Sung-Hoon Ahn. "Digital Twin and Deep Reinforcement Learning-Driven Robotic Automation System for Confined Workspaces: A Nozzle Dam Replacement Case Study in Nuclear Power Plants". International Journal of Precision Engineering and Manufacturing-Green Technology, March 18, 2024. http://dx.doi.org/10.1007/s40684-023-00593-6.

Full text
Abstract
Robotic automation has emerged as a leading solution for replacing human workers in dirty, dangerous, and demanding industries to ensure the safety of human workers. However, practical implementation of this technology remains limited, requiring substantial effort and costs. This study addresses the challenges specific to nuclear power plants, characterized by hazardous environments and physically demanding tasks such as nozzle dam replacement in confined workspaces. We propose a digital twin and deep-reinforcement-learning-driven robotic automation system with an autonomous mobile manipulator. The study follows a four-step process. First, we establish a simplified testbed for a nozzle dam replacement task and implement a high-fidelity digital twin model of the real-world testbed. Second, we employ a hybrid visual perception system that combines deep object pose estimation and an iterative closest point algorithm to enhance the accuracy of the six-dimensional pose estimation. Third, we use a deep-reinforcement-learning method, particularly the proximal policy optimization algorithm with an inverse reachability map, and a centroidal waypoint strategy, to improve the controllability of the autonomous mobile manipulator. Finally, we conduct pre-performed simulations of the nozzle dam replacement in the digital twin and evaluate the system on a robot in the real-world testbed. The nozzle dam replacement with precise object pose estimation, navigation, target object grasping, and collision-free motion generation was successful. The robotic automation system achieved a 92.0% success rate in the digital twin. Our proposed method can improve the efficiency and reliability of robotic automation systems for extreme workspaces and other perilous environments.
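The hybrid perception step, refining a learned pose estimate with the iterative closest point algorithm, can be illustrated with Open3D. This is a generic sketch of that pattern, not the study's implementation; the coarse pose is assumed to come from the learned estimator (e.g., deep object pose estimation).

```python
import numpy as np
import open3d as o3d

def refine_pose(model_pcd: o3d.geometry.PointCloud,
                scene_pcd: o3d.geometry.PointCloud,
                coarse_pose: np.ndarray,           # (4, 4) pose from the DNN
                max_dist: float = 0.02) -> np.ndarray:
    """Refine a coarse 6-DoF pose with point-to-point ICP."""
    result = o3d.pipelines.registration.registration_icp(
        model_pcd, scene_pcd, max_dist, coarse_pose,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation               # refined (4, 4) pose
```

The learned estimator supplies a good initialization, and ICP then removes the residual millimetre-scale error, which is the usual division of labour in such hybrid pipelines.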
APA, Harvard, Vancouver, ISO, etc. styles
44

Zheng, Senjing and Marco Castellani. "Primitive shape recognition from real-life scenes using the PointNet deep neural network". International Journal of Advanced Manufacturing Technology, August 2, 2022. http://dx.doi.org/10.1007/s00170-022-09791-z.

Full text
Abstract
In many industrial applications, it is possible to approximate the shape of mechanical parts with geometric primitives such as spheres, boxes, and cylinders. This information can be used to plan robotic grasping and manipulation procedures. The work presented in this paper investigated the use of the state-of-the-art PointNet deep neural network for primitive shape recognition in 3D scans of real-life objects. To obviate the need for collecting a large set of training models, PointNet was trained using examples generated from artificial geometric models. The motivation of the study was the achievement of fully automated disassembly operations in remanufacturing applications. PointNet was chosen for its suitability for processing 3D models and its ability to recognise objects irrespective of their poses. The use of simpler shallow neural network procedures was also evaluated. Twenty-eight point cloud scenes of everyday objects selected from the popular Yale-CMU-Berkeley benchmark model set were used in the experiments. Experimental evidence showed that PointNet is able to generalise the knowledge gained on artificial shapes to recognise shapes in ordinary objects with reasonable accuracy. However, the experiments revealed limits to this generalisation, in terms of average accuracy (circa 78%) and the consistency of the learning procedure. Using a feature extraction procedure, a multi-layer-perceptron architecture achieved nearly 83% classification accuracy. A practical solution was proposed to improve PointNet's generalisation capabilities: by training the neural network using an error-corrupted scene, its accuracy could be raised to nearly 86%, and the consistency of the learning results was visibly improved.
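The error-corruption trick can be illustrated simply. The exact corruption model used in the paper is not specified here, so the jitter-plus-dropout scheme below is our own assumption of its general form:

```python
import numpy as np

def corrupt_cloud(points: np.ndarray, sigma=0.005, dropout=0.1, seed=None):
    """Corrupt an artificial training cloud with sensor-like noise.

    points: (N, 3) array in metres. Randomly drops a fraction of points
    (simulating missing returns) and jitters the rest with Gaussian noise.
    """
    rng = np.random.default_rng(seed)
    keep = rng.random(len(points)) > dropout
    noisy = points[keep] + rng.normal(0.0, sigma, size=(int(keep.sum()), 3))
    return noisy
```

Training on clouds degraded this way narrows the gap between clean synthetic primitives and noisy real scans, which is consistent with the accuracy gain the paper reports.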
APA, Harvard, Vancouver, ISO, etc. styles
45

Ku, Subyeong, Byung-Hyun Song, Taejun Park, Younghoon Lee and Yong-Lae Park. "Soft modularized robotic arm for safe human–robot interaction based on visual and proprioceptive feedback". International Journal of Robotics Research, January 20, 2024. http://dx.doi.org/10.1177/02783649241227249.

Full text
Abstract
This study proposes a modularized soft robotic arm with integrated sensing of human touches for physical human–robot interaction. The proposed robotic arm is constructed by connecting multiple soft manipulator modules, each of which consists of three bellow-type soft actuators, pneumatic valves, and an on-board sensing and control circuit. By employing a stereolithography three-dimensional (3D) printing technique, the bellow actuator incorporates embedded organogel channels in the thin wall of its body that are used for detecting human touches. The organogel thus serves as a soft interface for recognizing the intentions of human operators, enabling the robot to interact with them while generating desired motions of the manipulator. In addition to the touch sensors, each manipulator module has compact, soft string sensors for detecting the displacements of the bellow actuators; combined with an inertial measurement unit (IMU), the module can estimate its own pose or orientation internally. We also propose a localization method that estimates the location of the manipulator module and acquires the 3D information of the target point in an uncontrolled environment. The proposed method uses only a single depth camera combined with a deep learning model and is thus much simpler than conventional motion capture systems, which usually require multiple cameras in a controlled environment. Using the feedback from the internal sensors and the camera, we implemented closed-loop control algorithms for reaching and grasping objects. The manipulator module shows structural robustness and reliable performance over 5,000 cycles of repeated actuation, with a steady-state error of 0.8 mm and a standard deviation of 0.3 mm using the proposed localization method and the string sensor data. We demonstrate an application example of human–robot interaction that uses human touches as triggers to pick up and manipulate target objects. The proposed soft robotic arm can be easily installed in a variety of human workspaces, since it interacts safely with humans and eliminates the need for strictly controlled environments for visual perception. We believe the proposed system has the potential to integrate soft robots into our daily lives.
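For bellows-driven modules like these, internal pose estimation from actuator lengths is often based on a constant-curvature model. The sketch below shows that standard textbook mapping for three actuators spaced 120 degrees apart at radial offset d from the module axis; it is offered as background, not necessarily the authors' estimator.

```python
import math

def bending_from_lengths(l1, l2, l3, d):
    """Constant-curvature kinematics: return (kappa, phi).

    l1, l2, l3: measured actuator lengths (e.g., from string sensors), metres.
    d:          radial distance of each actuator from the module axis, metres.
    kappa:      curvature of the module backbone [1/m].
    phi:        orientation of the bending plane [rad].
    """
    s = l1 + l2 + l3
    g = l1*l1 + l2*l2 + l3*l3 - l1*l2 - l2*l3 - l1*l3
    kappa = 2.0 * math.sqrt(g) / (d * s)
    phi = math.atan2(math.sqrt(3.0) * (l2 - l3), l2 + l3 - 2.0 * l1)
    return kappa, phi
```

Fusing such a length-based estimate with the IMU reading is one plausible way to obtain the internal orientation estimate the paper describes.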
APA, Harvard, Vancouver, ISO, etc. styles
46

Duan, Haonan, Peng Wang, Yayu Huang, Guangyun Xu, Wei Wei and Xiaofei Shen. "Robotics Dexterous Grasping: The Methods Based on Point Cloud and Deep Learning". Frontiers in Neurorobotics 15 (June 9, 2021). http://dx.doi.org/10.3389/fnbot.2021.658280.

Full text
Abstract
Dexterous manipulation, especially dexterous grasping, is a primitive and crucial robot ability that enables human-like behaviors. Deploying this ability lets robots assist or substitute for humans in more complex tasks in daily life and industrial production. This paper gives a comprehensive review of point cloud and deep learning based methods for robotic dexterous grasping from three perspectives. The proposed generation-evaluation framework, a new categorization scheme for the mainstream methods, is the core concept of the classification; the other two classifications, based on learning modes and on applications, are briefly described afterwards. This review aims to offer a guideline for researchers and developers in robotic dexterous grasping.
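The generation-evaluation framework named above reduces, at its simplest, to a propose-then-score loop. A hedged sketch in which both networks are placeholders rather than any specific published model:

```python
def plan_grasp(point_cloud, generator, evaluator, n_candidates=64):
    """Generation-evaluation grasp planning in one loop.

    generator: assumed callable, cloud -> list of candidate grasp poses.
    evaluator: assumed callable, (cloud, grasp) -> scalar quality score.
    """
    candidates = generator(point_cloud, n_candidates)   # generation stage
    scored = [(evaluator(point_cloud, g), g) for g in candidates]
    best_score, best_grasp = max(scored, key=lambda t: t[0])  # evaluation stage
    return best_grasp, best_score
```

Separating proposal from scoring is what lets the two stages be trained, benchmarked, and swapped independently, which is why the review uses it as its organizing scheme.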
APA, Harvard, Vancouver, ISO, etc. styles
47

Wang, Lufeng, Qu Li, Wei Fu, Fei Jiang, Tianxing Song, Guangbo Pi and Shijie Sun. "Enhancing Automated Loading and Unloading of Ship Unloaders through Dynamic 3D Coordinate System with Deep Learning". INTERNATIONAL JOURNAL OF COMPUTERS COMMUNICATIONS & CONTROL 19, no. 2 (March 1, 2024). http://dx.doi.org/10.15837/ijccc.2024.2.6234.

Full text
Abstract
This paper proposes a deep learning approach for accurate pose estimation in ship unloaders, improving grasping accuracy by reconstructing 3D coordinates. A convolutional neural network predicts a depth map from RGB images, and a conditional generative adversarial network further refines its quality. Evaluation on simulated ship unloading tasks showed a grasping success rate of over 90%, outperforming baseline methods. This research offers valuable insights into advanced visual perception and deep learning for next-generation automated cargo handling.
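Once a depth map has been predicted, reconstructing 3D grasp coordinates is a standard pinhole back-projection; the intrinsics (fx, fy, cx, cy) are assumed to come from camera calibration. A minimal sketch of that geometric step (an illustration, not the paper's code):

```python
import numpy as np

def depth_to_xyz(depth: np.ndarray, fx, fy, cx, cy):
    """Back-project a depth map to camera-frame coordinates.

    depth: (H, W) array in metres -> returns (H, W, 3) XYZ per pixel.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel grids, shape (H, W)
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)
```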
APA, Harvard, Vancouver, ISO, etc. styles