Academic literature on the topic "Deep Learning and Perception for Grasping and Manipulation"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles


Browse the thematic lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Deep Learning and Perception for Grasping and Manipulation".

Next to each source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Deep Learning and Perception for Grasping and Manipulation"

1. Han, Dong, Hong Nie, Jinbao Chen, Meng Chen, Zhen Deng, and Jianwei Zhang. "Multi-modal haptic image recognition based on deep learning." Sensor Review 38, no. 4 (September 17, 2018): 486–93. http://dx.doi.org/10.1108/sr-08-2017-0160.

Abstract:
Purpose: This paper aims to improve the diversity and richness of haptic perception by recognizing multi-modal haptic images. Design/methodology/approach: First, the multi-modal haptic data collected by BioTac sensors from different objects are pre-processed and then combined into haptic images. Second, a multi-class and multi-label deep learning model is designed, which can simultaneously learn four haptic features (hardness, thermal conductivity, roughness and texture) from the haptic images and recognize objects based on these features. Haptic images with different dimensions and modalities are provided for testing the recognition performance of this model. Findings: The results imply that multi-modal data fusion performs better than single-modal data on tactile understanding, and that haptic images with larger dimensions are conducive to more accurate haptic measurement. Practical implications: The proposed method has important potential applications in unknown environment perception, dexterous grasping manipulation and other intelligent robotics domains. Originality/value: This paper proposes a new deep learning model for extracting multiple haptic features and recognizing objects from multi-modal haptic images.
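
The multi-class, multi-label design described in this abstract lends itself to a shared convolutional backbone with one classification head per haptic feature. The PyTorch sketch below is an editor's illustration of that pattern, not the authors' architecture: layer sizes, class counts per feature, and the input image size are all assumptions.

```python
# Hypothetical multi-label haptic-image model: one CNN backbone, four heads
# (hardness, thermal conductivity, roughness, texture). All sizes are assumed.
import torch
import torch.nn as nn

class HapticNet(nn.Module):
    def __init__(self, classes_per_feature=(3, 3, 3, 5)):  # assumed class counts
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        self.heads = nn.ModuleList(
            [nn.Linear(64 * 16, c) for c in classes_per_feature])

    def forward(self, x):
        z = self.backbone(x)
        return [head(z) for head in self.heads]  # one logit vector per feature

model = HapticNet()
x = torch.randn(8, 1, 32, 32)       # a batch of single-channel haptic images
labels = [torch.randint(0, c, (8,)) for c in (3, 3, 3, 5)]
loss = sum(nn.functional.cross_entropy(out, y)
           for out, y in zip(model(x), labels))  # joint multi-task objective
```
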
2. Valarezo Añazco, Edwin, Sara Guerrero, Patricio Rivera Lopez, Ji-Heon Oh, Ga-Hyeon Ryu, and Tae-Seong Kim. "Deep Learning-Based Ensemble Approach for Autonomous Object Manipulation with an Anthropomorphic Soft Robot Hand." Electronics 13, no. 2 (January 17, 2024): 379. http://dx.doi.org/10.3390/electronics13020379.

Abstract:
Autonomous object manipulation is a challenging task in robotics because it requires an essential understanding of the object’s parameters such as position, 3D shape, grasping (i.e., touching) areas, and orientation. This work presents an autonomous object manipulation system using an anthropomorphic soft robot hand with deep learning (DL) vision intelligence for object detection, 3D shape reconstruction, and object grasping area generation. Object detection is performed using Faster-RCNN and an RGB-D sensor to produce a partial depth view of the objects randomly located in the working space. Three-dimensional object shape reconstruction is performed using U-Net based on 3D convolutions with bottle-neck layers and skip connections generating a complete 3D shape of the object from the sensed single-depth view. Then, the grasping position and orientation are computed based on the reconstructed 3D object information (e.g., object shape and size) using U-Net based on 3D convolutions and Principal Component Analysis (PCA), respectively. The proposed autonomous object manipulation system is evaluated by grasping and relocating twelve objects not included in the training database, achieving an average of 95% successful object grasping and 93% object relocations.
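
The PCA step mentioned above, deriving a grasp orientation from the reconstructed shape, can be illustrated in a few lines of NumPy. This is a hedged sketch that reads the principal axes of the point cloud as the object's major and minor extents; the paper's grasp-generation network itself is not reproduced here.

```python
# Sketch: estimate a grasp frame from a reconstructed 3D point cloud with PCA.
# Interpreting the axes as approach/closing directions is an assumption.
import numpy as np

def grasp_frame_from_points(points):
    """points: (N, 3) array sampled from the reconstructed object surface."""
    centroid = points.mean(axis=0)
    cov = np.cov((points - centroid).T)      # 3x3 covariance of the cloud
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    major_axis = eigvecs[:, -1]              # longest object extent
    closing_axis = eigvecs[:, 0]             # shortest extent: close gripper here
    return centroid, major_axis, closing_axis

pts = np.random.rand(500, 3) * [0.2, 0.05, 0.05]   # elongated dummy object
center, major, closing = grasp_frame_from_points(pts)
```
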
3. Wang, Cong, Qifeng Zhang, Qiyan Tian, Shuo Li, Xiaohui Wang, David Lane, Yvan Petillot, and Sen Wang. "Learning Mobile Manipulation through Deep Reinforcement Learning." Sensors 20, no. 3 (February 10, 2020): 939. http://dx.doi.org/10.3390/s20030939.

Abstract:
Mobile manipulation has a broad range of applications in robotics. However, it is usually more challenging than fixed-base manipulation due to the complex coordination of a mobile base and a manipulator. Although recent works have demonstrated that deep reinforcement learning is a powerful technique for fixed-base manipulation tasks, most of them are not applicable to mobile manipulation. This paper investigates how to leverage deep reinforcement learning to tackle whole-body mobile manipulation tasks in unstructured environments using only on-board sensors. A novel mobile manipulation system is proposed which integrates state-of-the-art deep reinforcement learning algorithms with visual perception. Its framework efficiently decouples visual perception from deep reinforcement learning control, which enables generalization from simulation training to real-world testing. Extensive simulation and experimental results show that the proposed system is able to grasp different types of objects autonomously in various simulation and real-world scenarios, verifying its effectiveness.
4. Zhao, Wenhui, Bin Xu, and Xinzhong Wu. "Robot grasping system based on deep learning target detection." Journal of Physics: Conference Series 2450, no. 1 (March 1, 2023): 012071. http://dx.doi.org/10.1088/1742-6596/2450/1/012071.

Abstract:
The traditional robot grasping system often uses fixed-point grasping or grasping by demonstration, but with the increasing diversity of grasping targets and the randomness of their poses, the traditional grasping method is no longer sufficient. A robot grasping method based on deep learning target detection is proposed to address the high error rate of target recognition and low success rate of grasping in the robot grasping process. The method investigates robotic arm hand-eye calibration and a deep learning-based target detection and pose estimation algorithm. A Basler camera is used as the visual perception tool of the robot arm, the AUBO i10 robot arm is used as the main body of the experiment, and the PP-YOLO deep learning algorithm performs target detection and pose estimation on the object. Through the collection of experimental data, several grasping experiments were conducted on diverse targets randomly placed in various poses in real scenes. The results showed that the success rate of grasping target detection was 94.93% and the robot grasping success rate was 93.37%.
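
Hand-eye calibration, the first component the method investigates, is commonly solved with OpenCV's calibrateHandEye. The sketch below only shows the call shape, with randomly generated placeholder poses; in a real setup the inputs come from the arm's forward kinematics and a calibration target observed by the camera, and nothing here is taken from the paper beyond the general technique.

```python
# Hedged sketch of eye-in-hand calibration with OpenCV. The pose lists below
# are random placeholders, so the returned transform is meaningless; real data
# would come from robot kinematics and a detected calibration board.
import cv2
import numpy as np

rng = np.random.default_rng(0)
R_gripper2base, t_gripper2base, R_target2cam, t_target2cam = [], [], [], []
for _ in range(10):                                # several robot poses
    R1, _ = cv2.Rodrigues(rng.uniform(-0.5, 0.5, 3))
    R2, _ = cv2.Rodrigues(rng.uniform(-0.5, 0.5, 3))
    R_gripper2base.append(R1)
    t_gripper2base.append(rng.uniform(-0.1, 0.1, (3, 1)))
    R_target2cam.append(R2)
    t_target2cam.append(rng.uniform(-0.1, 0.1, (3, 1)))

R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(
    R_gripper2base, t_gripper2base, R_target2cam, t_target2cam,
    method=cv2.CALIB_HAND_EYE_TSAI)
```
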
5. Zhou, Hongyu, Jinhui Xiao, Hanwen Kang, Xing Wang, Wesley Au, and Chao Chen. "Learning-Based Slip Detection for Robotic Fruit Grasping and Manipulation under Leaf Interference." Sensors 22, no. 15 (July 22, 2022): 5483. http://dx.doi.org/10.3390/s22155483.

Abstract:
Robotic harvesting research has seen significant achievements in the past decade, with breakthroughs being made in machine vision, robot manipulation, autonomous navigation and mapping. However, the missing capability of obstacle handling during the grasping process has severely reduced harvest success rates and limited the overall performance of robotic harvesting. This work focuses on the detection and handling of slip caused by leaf interference, and proposes solutions to robotic grasping in an unstructured environment. Through analysis of the motion and force of fruit grasping under leaf interference, the connection between object slip caused by leaf interference and inadequate harvest performance is identified for the first time in the literature. A learning-based perception and manipulation method is proposed to detect slip that causes problematic grasps of objects, allowing the robot to implement a timely reaction. Our results indicate that the proposed algorithm detects grasp slip with an accuracy of 94%. The proposed sensing-based manipulation demonstrated great potential in robotic fruit harvesting, and could be extended to other pick-and-place applications.
6. Zhang, Ruihua, Xujun Chen, Zhengzhong Wan, Meng Wang, and Xinqing Xiao. "Deep Learning-Based Oyster Packaging System." Applied Sciences 13, no. 24 (December 8, 2023): 13105. http://dx.doi.org/10.3390/app132413105.

Abstract:
With consumers' deepening understanding of the nutritional value of oysters, oysters as high-quality seafood are gradually entering the market. Raw edible oyster production lines mainly rely on manual sorting and packaging, which hinders the improvement of oyster packaging efficiency and quality and can easily cause secondary pollution and cross-contamination, resulting in the waste of oysters. To enhance the production efficiency, technical level, and hygiene safety of the raw aquatic products production line, this study proposes and constructs a deep learning-based oyster packaging system. The system achieves intelligence and automation of the oyster packaging production line by integrating a deep learning algorithm, machine vision technology, and mechanical arm control technology. The oyster visual perception model is established by deep learning object detection techniques to realize fast, real-time detection of oysters. Using a simple online and real-time tracking (SORT) algorithm, the grasping position of the oyster can be predicted, which enables dynamic grasping. Utilizing mechanical arm control technology, an automatic oyster packaging production line was designed and constructed to realize the automated grasping and packaging of raw edible oysters, improving the efficiency and quality of oyster packaging. System tests showed that the absolute error in oyster pose estimation was less than 7 mm, which allowed the mechanical claw to consistently grasp and transport oysters. The static grasping and packing of a single oyster took about 7.8 s, and the success rate of grasping was 94.44%. The success rate of grasping under different transportation speeds was above 68%.
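
SORT, mentioned above, rests on a constant-velocity Kalman filter, which is what makes it possible to predict the grasping position of an oyster moving on a conveyor. The NumPy sketch below illustrates that predictive core; the noise parameters, update period, and lead time are assumptions, not the paper's tracker settings.

```python
# Constant-velocity Kalman filter predicting where a tracked object will be
# when the gripper arrives. All parameters are illustrative assumptions.
import numpy as np

dt = 0.1                                    # tracker update period (s), assumed
F = np.array([[1, 0, dt, 0],                # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1.0]])
H = np.array([[1, 0, 0, 0],                 # detections measure position only
              [0, 1, 0, 0.0]])
Q, R = 0.01 * np.eye(4), 0.05 * np.eye(2)
x, P = np.zeros(4), np.eye(4)

def kf_step(x, P, z):
    x, P = F @ x, F @ P @ F.T + Q           # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (z - H @ x)                 # update with detection z = [x, y]
    P = (np.eye(4) - K @ H) @ P
    return x, P

for t in range(20):                         # simulated detections moving in +x
    x, P = kf_step(x, P, np.array([0.02 * t, 0.5]))
lead = 0.5                                  # seconds until the gripper arrives
predicted_xy = x[:2] + lead * x[2:]         # grasp at the predicted position
```
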
7. Liu, Ning, Cangui Guo, Rongzhao Liang, and Deping Li. "Collaborative Viewpoint Adjusting and Grasping via Deep Reinforcement Learning in Clutter Scenes." Machines 10, no. 12 (November 29, 2022): 1135. http://dx.doi.org/10.3390/machines10121135.

Abstract:
For the robotic grasping of randomly stacked objects in a cluttered environment, the active multiple viewpoints method can improve grasping performance by improving the environment perception ability. However, in many scenes, it is redundant to always use multiple viewpoints for grasping detection, which will reduce the robot’s grasping efficiency. To improve the robot’s grasping performance, we present a Viewpoint Adjusting and Grasping Synergy (VAGS) strategy based on deep reinforcement learning which coordinates the viewpoint adjusting and grasping directly. For the training efficiency of VAGS, we propose a Dynamic Action Exploration Space (DAES) method based on ε-greedy to reduce the training time. To address the sparse reward problem in reinforcement learning, a reward function is created to evaluate the impact of adjusting the camera pose on the grasping performance. According to experimental findings in simulation and the real world, the VAGS method can improve grasping success and scene clearing rate. Compared with only direct grasping, our proposed strategy increases the grasping success rate and the scene clearing rate by 10.49% and 11%.
8. Han, Dong, Beni Mulyana, Vladimir Stankovic, and Samuel Cheng. "A Survey on Deep Reinforcement Learning Algorithms for Robotic Manipulation." Sensors 23, no. 7 (April 5, 2023): 3762. http://dx.doi.org/10.3390/s23073762.

Abstract:
Robotic manipulation challenges, such as grasping and object manipulation, have been tackled successfully with the help of deep reinforcement learning systems. We give an overview of the recent advances in deep reinforcement learning algorithms for robotic manipulation tasks in this review. We begin by outlining the fundamental ideas of reinforcement learning and the parts of a reinforcement learning system. The many deep reinforcement learning algorithms, such as value-based methods, policy-based methods, and actor–critic approaches, that have been suggested for robotic manipulation tasks are then covered. We also examine the numerous issues that have arisen when applying these algorithms to robotics tasks, as well as the various solutions that have been put forth to deal with these issues. Finally, we highlight several unsolved research issues and talk about possible future directions for the subject.
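
As a concrete anchor for the survey's taxonomy, the sketch below implements the simplest value-based method it covers: tabular Q-learning with an ε-greedy behavior policy. The state and action counts and hyperparameters are generic illustrations, unrelated to any particular manipulation task in the survey.

```python
# Tabular Q-learning, the canonical value-based method. Generic sizes assumed.
import numpy as np

n_states, n_actions = 16, 4
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.99, 0.1
rng = np.random.default_rng()

def act(s):
    # epsilon-greedy behavior policy used while training
    return int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())

def q_update(s, a, r, s_next):
    # TD target bootstraps from the greedy value of the next state
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
```
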
9. Mohammed, Marwan Qaid, Lee Chung Kwek, Shing Chyi Chua, Abdulaziz Salamah Aljaloud, Arafat Al-Dhaqm, Zeyad Ghaleb Al-Mekhlafi, and Badiea Abdulkarem Mohammed. "Deep Reinforcement Learning-Based Robotic Grasping in Clutter and Occlusion." Sustainability 13, no. 24 (December 10, 2021): 13686. http://dx.doi.org/10.3390/su132413686.

Abstract:
In robotic manipulation, object grasping is a basic yet challenging task. Dexterous grasping necessitates intelligent visual observation of the target objects by emphasizing the importance of spatial equivariance to learn the grasping policy. In this paper, two significant challenges associated with robotic grasping in both clutter and occlusion scenarios are addressed. The first challenge is the coordination of push and grasp actions, in which the robot may occasionally fail to disrupt the arrangement of the objects in a well-ordered object scenario. On the other hand, when employed in a randomly cluttered object scenario, the pushing behavior may be less efficient, as many objects are more likely to be pushed out of the workspace. The second challenge is the avoidance of occlusion that occurs when the camera itself is entirely or partially occluded during a grasping action. This paper proposes a multi-view change observation-based approach (MV-COBA) to overcome these two problems. The proposed approach is divided into two parts: 1) using multiple cameras to set up multiple views to address the occlusion issue; and 2) using visual change observation on the basis of the pixel depth difference to address the challenge of coordinating push and grasp actions. According to experimental simulation findings, the proposed approach achieved an average grasp success rate of 83.6%, 86.3%, and 97.8% in the cluttered, well-ordered object, and occlusion scenarios, respectively.
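
The "visual change observation on the basis of the pixel depth difference" can be pictured as a thresholded per-pixel comparison of depth maps captured before and after a push. The sketch below is one plausible reading of that signal; both thresholds are assumed values rather than the paper's.

```python
# Sketch: did a push action actually rearrange the scene? Compare depth maps.
# Thresholds are assumptions for illustration.
import numpy as np

def scene_changed(depth_before, depth_after, diff_thresh=0.005, frac_thresh=0.01):
    """depth_*: (H, W) depth maps in meters from one camera view."""
    valid = (depth_before > 0) & (depth_after > 0)   # ignore missing depth pixels
    changed = np.abs(depth_after - depth_before) > diff_thresh
    return (changed & valid).sum() > frac_thresh * valid.sum()
```
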
10. Sayour, Malak H., Sharbel E. Kozhaya, and Samer S. Saab. "Autonomous Robotic Manipulation: Real-Time, Deep-Learning Approach for Grasping of Unknown Objects." Journal of Robotics 2022 (June 30, 2022): 1–14. http://dx.doi.org/10.1155/2022/2585656.

Abstract:
Recent advancements in vision-based robotics and deep-learning techniques have enabled the use of intelligent systems in a wider range of applications requiring object manipulation. Finding a robust solution for object grasping and autonomous manipulation has become the focus of many engineers and is still one of the most demanding problems in modern robotics. This paper presents a full grasping pipeline, proposing a real-time, data-driven deep-learning approach for robotic grasping of unknown objects using MATLAB and convolutional neural networks. The proposed approach employs RGB-D image data acquired from an eye-in-hand camera, centering the object of interest in the field of view using visual servoing. Our approach aims at reducing propagation errors and eliminating the need for complex hand-tracking algorithms, image segmentation, or 3D reconstruction. The proposed approach is able to efficiently generate reliable multi-view object grasps regardless of the geometric complexity and physical properties of the object in question. The proposed system architecture enables simple and effective path generation and real-time tracking control. In addition, our system is modular, reliable, and accurate in both end-effector path generation and control. We experimentally justify the efficacy and effectiveness of our overall system on the Barrett Whole Arm Manipulator.
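
The visual-servoing step, centering the object of interest in the field of view, reduces to a proportional law on the pixel error between the detected object center and the image center. The sketch below illustrates the idea in Python rather than the paper's MATLAB pipeline; the gain, sign convention, and velocity interface are assumptions.

```python
# Sketch: proportional image-based servoing that laterally moves an
# eye-in-hand camera until the object sits at the image center.
import numpy as np

def centering_velocity(obj_center_px, image_size=(640, 480), gain=0.002):
    """Return a lateral camera velocity command (m/s) from the pixel error."""
    err = np.asarray(obj_center_px, dtype=float) - np.asarray(image_size) / 2.0
    vx, vy = -gain * err              # move opposite to the pixel error
    return np.array([vx, vy, 0.0])    # no commanded motion along the optical axis

cmd = centering_velocity((400, 310))  # object detected right of and below center
```
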

Theses on the topic "Deep Learning and Perception for Grasping and Manipulation"

1. Zapata-Impata, Brayan S. "Robotic manipulation based on visual and tactile perception." Doctoral thesis, Universidad de Alicante, 2020. http://hdl.handle.net/10045/118217.

Abstract:
We still struggle to deliver autonomous robots that perform manipulation tasks as simple for a human as picking up items. A portion of the difficulty of this task lies in the fact that such an operation requires a robot that can deal with uncertainty in an unstructured environment. We propose in this thesis the use of visual and tactile perception for providing solutions that can improve the robustness of a robotic manipulator in such an environment. In this thesis, we approach robotic grasping using a single 3D point cloud with a partial view of the objects present in the scene. Moreover, the objects are unknown: they have not been previously recognised and we do not have a 3D model to compute candidate grasping points. In experimentation, we prove that our solution is fast and robust, taking on average 17 ms to find a grasp which is stable 85% of the time. Tactile sensors provide a rich source of information regarding the contact experienced by a robotic hand during the manipulation of an object. In this thesis, we exploit this type of data with deep learning for approaching the prediction of the stability of a grasp and the detection of the direction of slip of a contacted object. We prove that our solutions can correctly predict stability 76% of the time with a single tactile reading. We also demonstrate that learning temporal and spatial patterns leads to detections of the direction of slip which are correct up to 82% of the time and are delayed only 50 ms after the actual slip event begins. Despite the good results achieved on the previous two tactile tasks, this data modality has a serious flaw: it can only be registered during contact. In contrast, humans can estimate the feeling of grasping an object just by looking at it. Inspired by this, we present in this thesis our contributions for learning to generate tactile responses from vision. We propose a supervised solution based on training a deep neural network that models the behaviour of a tactile sensor, given 3D visual information of the target object and grasp data as input. As a result, our system has to learn to link vision to touch. We prove in experimentation that our system learns to generate tactile responses on a set of 12 items, being off by only 0.06 relative error points. Furthermore, we also experiment with a semi-supervised solution for learning this task with a reduced need for labelled data. In experimentation, we show that it learns our tactile data generation task with 50% less data than the supervised solution, increasing the error by only 17%. Last, we introduce our work on the generation of candidate grasps which are improved through simulation of the tactile responses they would generate. This work unifies the contributions presented in this thesis, as it applies modules for calculating grasps, stability prediction and tactile data generation. In early experimentation, it finds grasps which are more stable than the original ones produced by our method based on 3D point clouds.
This doctoral thesis has been carried out with the support of the Spanish Ministry of Economy, Industry and Competitiveness through the grant BES-2016-078290.
2. Tahoun, Mohamed. "Object Shape Perception for Autonomous Dexterous Manipulation Based on Multi-Modal Learning Models." Electronic thesis or dissertation, INSA Centre Val de Loire, Bourges, 2021. http://www.theses.fr/2021ISAB0003.

Abstract:
This thesis proposes 3D object reconstruction methods based on multimodal deep learning strategies. The targeted applications concern robotic manipulation. First, the thesis proposes a 3D visual reconstruction method from a single view of the object obtained by an RGB-D sensor. Then, in order to improve the quality of 3D reconstruction of objects from a single view, a new method combining visual and tactile information is proposed, based on a learning-based reconstruction model. The proposed method has been validated on a visual-tactile dataset respecting the kinematic constraints of a multi-fingered robotic hand. This dataset, created in the framework of this PhD work, is unique in the literature and is itself a contribution of the thesis. The validation results show that tactile information can contribute significantly to the prediction of the complete shape of an object, especially the part that is not visible to the RGB-D sensor. They also show that the proposed model obtains better results than the best-performing methods of the state of the art.
3. Morrison, Douglas. "Robotic grasping in unstructured and dynamic environments." Thesis, Queensland University of Technology, 2021. https://eprints.qut.edu.au/207886/1/Douglas_Morrison_Thesis.pdf.

Abstract:
Grasping and transporting objects is a fundamental trait that underpins many robotics applications, but existing works in this area are not robust to real-world challenges such as moving objects, human interaction, clutter and occlusion. In this thesis, we combine state-of-the-art computer vision techniques with real-time robotic control to overcome these limitations. We present a number of algorithms that can compute grasps for new items in a fraction of a second, react to dynamic changes in the environment, and intelligently choose improved viewpoints of occluded objects in clutter.

Book chapters on the topic "Deep Learning and Perception for Grasping and Manipulation"

1. Blank, Andreas, Lukas Zikeli, Sebastian Reitelshöfer, Engin Karlidag, and Jörg Franke. "Augmented Virtuality Input Demonstration Refinement Improving Hybrid Manipulation Learning for Bin Picking." In Lecture Notes in Mechanical Engineering, 332–41. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-18326-3_32.

Abstract:
Beyond conventional automated tasks, autonomous robot capabilities alongside human cognitive skills are gaining importance in industrial applications. Although machine learning is a major enabler of autonomous robots, system adaptation remains challenging and time-consuming. The objective of this research work is to propose and evaluate an augmented virtuality-based input demonstration refinement method that improves hybrid manipulation learning for industrial bin picking. To this end, deep reinforcement and imitation learning are combined to shorten the adaptation timespans required for new components and changing scenarios. The method covers initial learning and dataset tuning during ramp-up as well as fault intervention and dataset refinement. For evaluation, standard industrial components and systems are used within a real-world experimental bin-picking setup utilizing an articulated robot. As part of the quantitative evaluation, the method is benchmarked against conventional learning methods. As a result, the annotation effort required for successful object grasping is reduced, and final grasping success rates are increased. Implementation samples are available at: https://github.com/FAU-FAPS/hybrid_manipulationlearning_unity3dros
2. Mehman Sefat, Amir, Saad Ahmad, Alexandre Angleraud, Esa Rahtu, and Roel Pieters. "Robotic grasping in agile production." In Deep Learning for Robot Perception and Cognition, 407–33. Elsevier, 2022. http://dx.doi.org/10.1016/b978-0-32-385787-1.00021-x.

3. Kantor, George, and Francisco Yandun. "Advances in grasping techniques in agricultural robots." In Burleigh Dodds Series in Agricultural Science, 355–86. Burleigh Dodds Science Publishing, 2024. http://dx.doi.org/10.19103/as.2023.0124.09.

Abstract:
This chapter provides an overview of the state of the art for grasping and manipulation in agricultural settings. It begins with a review of the robotic mechanisms commonly used for manipulation and grasping. The discussion then addresses issues associated with the integration of the different technologies required to create fieldable manipulation systems, namely perception and control. Finally, a review of some specific application areas being addressed is provided, including harvesting, pruning and food handling. The chapter is intended to serve as a useful starting point for researchers and practitioners interested in learning more about the challenges and associated approaches being used for grasping and manipulation in agricultural applications.
4. "Deep learning techniques for modelling human manipulation and its translation for autonomous robotic grasping with soft end-effectors." In AI for Emerging Verticals: Human-robot computing, sensing and networking, 3–28. Institution of Engineering and Technology, 2020. http://dx.doi.org/10.1049/pbpc034e_ch1.


Conference papers on the topic "Deep Learning and Perception for Grasping and Manipulation"

1. Chu, You-Rui, Haiyue Zhu, and Zhiping Lin. "Intelligent 6-DoF Robotic Grasping and Manipulation System Using Deep Learning." In International Conference of Asian Society for Precision Engineering and Nanotechnology. Singapore: Research Publishing Services, 2022. http://dx.doi.org/10.3850/978-981-18-6021-8_or-02-0217.html.

2. Zhang, Chi, and Yingzhao Zhu. "A review of robot grasping tactile perception based on deep learning." In Third International Conference on Control and Intelligent Robotics (ICCIR 2023), edited by Kechao Wang and M. Vijayalakshmi. SPIE, 2023. http://dx.doi.org/10.1117/12.3011588.

3. Pavlichenko, Dmytro, and Sven Behnke. "Deep Reinforcement Learning of Dexterous Pre-Grasp Manipulation for Human-Like Functional Categorical Grasping." In 2023 IEEE 19th International Conference on Automation Science and Engineering (CASE). IEEE, 2023. http://dx.doi.org/10.1109/case56687.2023.10260385.

4. Fang, Jianhao, Weifei Hu, Chuxuan Wang, Zhenyu Liu, and Jianrong Tan. "Deep Reinforcement Learning Enhanced Convolutional Neural Networks for Robotic Grasping." In ASME 2021 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2021. http://dx.doi.org/10.1115/detc2021-67225.

Abstract:
Robotic grasping is an important task for various industrial applications. However, combining detection and grasping to perform dynamic and efficient object moving is still a challenge for robotic grasping. Meanwhile, training and testing robotic algorithms in the real world is time-consuming. Here we present a framework for dynamic robotic grasping based on a deep Q-network (DQN) in a virtual grasping space. The proposed dynamic robotic grasping framework mainly consists of the DQN, a convolutional neural network (CNN), and the virtual model of robotic grasping. After observing the result generated by applying the generative grasping convolutional neural network (GG-CNN), the robotic manipulator conducts actions according to the Q-network. Different actions generate different rewards, which are used to update the neural network through the loss function. The goal of this method is to find a reasonable strategy to optimize the total reward and finally accomplish a dynamic grasping process. In tests in the virtual space, we achieve an 85.5% grasp success rate on a set of previously unseen objects, which demonstrates the accuracy of the DQN-enhanced GG-CNN model. The experimental results show that the DQN can efficiently enhance the GG-CNN by considering the grasping procedure (i.e., the grasping time and the gripper's posture), which makes the grasping procedure stable and increases the success rate of robotic grasping.
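
The DQN ingredient the abstract relies on can be grounded with the standard temporal-difference loss against a frozen target network, sketched below in PyTorch. The state and action encodings are placeholders; how the paper couples the Q-network to GG-CNN grasp maps and rewards is not reproduced here.

```python
# Minimal DQN update: TD loss with a frozen target network. Shapes assumed.
import torch
import torch.nn as nn

q_net = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 4))
target_net = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 4))
target_net.load_state_dict(q_net.state_dict())    # periodically synced copy
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma = 0.99

def dqn_loss(s, a, r, s_next, done):
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)   # Q(s, a)
    with torch.no_grad():                                   # bootstrapped target
        target = r + gamma * (1 - done) * target_net(s_next).max(dim=1).values
    return nn.functional.smooth_l1_loss(q_sa, target)

s, s2 = torch.randn(16, 32), torch.randn(16, 32)   # dummy transition batch
a, r, d = torch.randint(0, 4, (16,)), torch.rand(16), torch.zeros(16)
loss = dqn_loss(s, a, r, s2, d)
opt.zero_grad(); loss.backward(); opt.step()
```
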
5. Lu, Jingpei, Ambareesh Jayakumari, Florian Richter, Yang Li, and Michael C. Yip. "SuPer Deep: A Surgical Perception Framework for Robotic Tissue Manipulation using Deep Learning for Feature Extraction." In 2021 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2021. http://dx.doi.org/10.1109/icra48506.2021.9561249.

6. Rakhimkul, Sanzhar, Anton Kim, Askarbek Pazylbekov, and Almas Shintemirov. "Autonomous Object Detection and Grasping Using Deep Learning for Design of an Intelligent Assistive Robot Manipulation System." In 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC). IEEE, 2019. http://dx.doi.org/10.1109/smc.2019.8914465.

7. Imran, Alishba, William Escobar, and Fred Barez. "Design of an Affordable Prosthetic Arm Equipped With Deep Learning Vision-Based Manipulation." In ASME 2021 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2021. http://dx.doi.org/10.1115/imece2021-68714.

Abstract:
Many amputees throughout the world are left with limited options to personally own a prosthetic arm due to the expensive cost, mechanical system complexity, and lack of availability. The three main control methods for prosthetic hands are: (1) body-powered control, (2) extrinsic mechanical control, and (3) myoelectric control. These methods can perform well in controlled situations but will often break down in clinical and everyday use due to poor robustness, weak adaptability, long-term training, and heavy mental burden during use. This paper lays out the complete design process of an affordable and easily accessible novel prosthetic arm that reduces the cost of prosthetics from $10,000 to $700 on average. The 3D-printed prosthetic arm is equipped with a depth camera and a closed-loop, off-policy deep learning algorithm to help form grasps on the object in view. Current work in reinforcement learning masters only individual skills and is heavily focused on parallel-jaw grippers for in-hand manipulation. In order to achieve generalization that performs better in real-world manipulation, the focus is specifically on using the general framework of the Markov Decision Process (MDP) through scalable learning with off-policy algorithms such as the deep deterministic policy gradient (DDPG), and on studying this question in the context of grasping with a prosthetic arm. We were able to achieve a 78% grasp success rate on previously unseen objects and generalize across multiple objects for manipulation tasks.
8. Chen, Zhu, Xiao Liang, and Minghui Zheng. "Including Image-Based Perception in Disturbance Observer for Warehouse Drones." In ASME 2020 Dynamic Systems and Control Conference. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/dscc2020-3284.

Abstract:
Grasping and releasing objects can cause oscillations in delivery drones in the warehouse. To reduce such undesired oscillations, this paper treats the to-be-delivered object as an unknown external disturbance and presents an image-based disturbance observer (DOB) to estimate and reject it. Different from the existing DOB technique, which can only compensate for the disturbance after the oscillations happen, the proposed image-based one incorporates image-based disturbance prediction into the control loop to further improve the performance of the DOB. The proposed image-based DOB consists of two parts. The first is deep-learning-based disturbance prediction: by taking an image of the to-be-delivered object, a sequential disturbance signal is predicted in advance using a connected pre-trained convolutional neural network (CNN) and a long short-term memory (LSTM) network. The second part is a conventional DOB in the feedback loop with a feedforward correction, which utilizes the deep learning prediction to generate a learning signal. Numerical studies are performed to validate the proposed image-based DOB regarding oscillation reduction for delivery drones during the grasping and releasing periods of the objects.
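
The first part of the observer, a CNN encoding of the object image unrolled by an LSTM into a disturbance time series, can be sketched as follows. The shapes, layer sizes, and prediction horizon are illustrative assumptions, not the paper's trained network.

```python
# Sketch: image -> CNN code -> LSTM rollout of a predicted disturbance signal.
# All sizes are assumptions for illustration.
import torch
import torch.nn as nn

class DisturbancePredictor(nn.Module):
    def __init__(self, horizon=50):
        super().__init__()
        self.horizon = horizon
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())          # image -> 32-d code
        self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, 1)                        # one sample per step

    def forward(self, img):
        code = self.cnn(img)                                # (B, 32)
        seq = code.unsqueeze(1).repeat(1, self.horizon, 1)  # feed code each step
        h, _ = self.lstm(seq)
        return self.head(h).squeeze(-1)                     # (B, horizon) signal

pred = DisturbancePredictor()(torch.randn(2, 3, 64, 64))    # 2 images -> 2 signals
```
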
