Theses on the topic "Robot vision"
Cite a source in APA, MLA, Chicago, Harvard, and many other styles
Consult the top 50 dissertations (master's and doctoral theses) for research on the topic "Robot vision".
Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scientific publication as a .pdf and read its abstract online, if one is included in the metadata.
Browse theses from many scientific disciplines and compile an accurate bibliography.
Grech, Raphael. "Multi-robot vision". Thesis, Kingston University, 2013. http://eprints.kingston.ac.uk/27790/.
Li, Wan-chiu. "Localization of a mobile robot by monocular vision /". Hong Kong : University of Hong Kong, 2001. http://sunzi.lib.hku.hk/hkuto/record.jsp?B23765896.
Baba, Akihiko. "Robot navigation using ultrasonic feedback". Morgantown, W. Va. : [West Virginia University Libraries], 1999. http://etd.wvu.edu/templates/showETD.cfm?recnum=677.
Title from document title page. Document formatted into pages; contains viii, 122 p. : ill. Includes abstract. Includes bibliographical references (p. 57-59).
Roth, Daniel R. (Daniel Risner) 1979. "Vision based robot navigation". Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/17978.
Includes bibliographical references (p. 53-54).
In this thesis we propose a vision-based robot navigation system that constructs a high level topological representation of the world. A robot using this system learns to recognize rooms and spaces by building a hidden Markov model of the environment. Motion planning is performed by doing bidirectional heuristic search with a discrete set of actions that account for the robot's nonholonomic constraints. The intent of this project is to create a system that allows a robot to be able to explore and to navigate in a wide variety of environments in a way that facilitates goal-oriented tasks.
by Daniel R. Roth.
M.Eng.
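As an illustrative aside, the bidirectional search over a topological map that Roth's abstract describes can be sketched as a meet-in-the-middle graph search (all names here are invented, and the HMM place recognition and the nonholonomic action set are deliberately omitted):

```python
from collections import deque

def bidirectional_search(graph, start, goal):
    """Meet-in-the-middle breadth-first search over a topological map.

    `graph` maps each place to the places reachable from it; transitions
    are assumed symmetric, as in a topological map of rooms."""
    if start == goal:
        return [start]
    parents_f, parents_b = {start: None}, {goal: None}
    frontier_f, frontier_b = deque([start]), deque([goal])
    while frontier_f and frontier_b:
        # Alternate between the forward and the backward search.
        for frontier, parents, other in ((frontier_f, parents_f, parents_b),
                                         (frontier_b, parents_b, parents_f)):
            node = frontier.popleft()
            for nxt in graph.get(node, ()):
                if nxt in parents:
                    continue
                parents[nxt] = node
                if nxt in other:        # the two searches have met
                    return _stitch(nxt, parents_f, parents_b)
                frontier.append(nxt)
    return None                         # goal unreachable

def _stitch(meet, parents_f, parents_b):
    """Join the two half-paths at the meeting node."""
    path, n = [], meet
    while n is not None:                # walk back to the start
        path.append(n)
        n = parents_f[n]
    path.reverse()
    n = parents_b[meet]
    while n is not None:                # walk forward to the goal
        path.append(n)
        n = parents_b[n]
    return path
```

A real planner would expand configurations with the robot's discrete action set and score them with a heuristic; plain breadth-first expansion keeps the sketch short.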
Skinner, John R. "Simulation for robot vision". Thesis, Queensland University of Technology, 2022. https://eprints.qut.edu.au/227404/1/John_Skinner_Thesis.pdf.
李宏釗 and Wan-chiu Li. "Localization of a mobile robot by monocular vision". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2001. http://hub.hku.hk/bib/B31226371.
Luh, Cheng-Jye 1960. "Hierarchical modelling of mobile, seeing robots". Thesis, The University of Arizona, 1989. http://hdl.handle.net/10150/276998.
Chen, Haoyao. "Towards multi-robot formations : study on vision-based localization system /". access full-text access abstract and table of contents, 2009. http://libweb.cityu.edu.hk/cgi-bin/ezdb/thesis.pl?phd-meem-b3008295xf.pdf.
"Submitted to Department of Manufacturing Engineering and Engineering Management in partial fulfillment of the requirements for the degree of Doctor of Philosophy." Includes bibliographical references (leaves 87-100)
Öfjäll, Kristoffer. "Online Learning for Robot Vision". Licentiate thesis, Linköpings universitet, Datorseende, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-110892.
Devillard, François. "Vision du robot mobile Mithra". Grenoble INPG, 1993. http://www.theses.fr/1993INPG0112.
Sequeira, Gerard. "Vision based leader-follower formation control for mobile robots". Diss., Rolla, Mo. : University of Missouri-Rolla, 2007. http://scholarsmine.mst.edu/thesis/pdf/Sequeira_09007dcc804429d4.pdf.
Vita. The entire thesis text is included in file. Title from title screen of thesis/dissertation PDF file (viewed February 13, 2008). Includes bibliographical references (p. 39-41).
Michaud, Christian 1958. "Multi-robot workcell with vision for integrated circuit assembly". Thesis, McGill University, 1986. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=65433.
Wirbel, Émilie. "Localisation et navigation d’un robot humanoïde en environnement domestique". Thesis, Paris, ENMP, 2014. http://www.theses.fr/2014ENMP0058/document.
This thesis covers the topic of low-cost humanoid robot localization and navigation in a dynamic, unconstrained environment. It is the result of a collaboration between the Centre for Robotics of Mines ParisTech and Aldebaran, whose robots, NAO and Pepper, are used as experimental platforms. We describe how to derive information on the orientation and the position of the robot under strong constraints on computing power, sensor field of view and environment genericity. The environment is represented using a topological formalism: places are stored in vertices and connected by transitions. The environment is learned in a preliminary phase, which allows the robot to construct a reference. The main contribution of this PhD thesis lies in orientation and approximate position measurement methods, based on monocular cameras with a restricted field of view, and their integration into a topological structure. To localize the robot, we mainly use data provided by its monocular cameras, while also allowing extensions, for example with a 3D camera. The different localization methods are combined into a hierarchical structure, which makes the whole process more robust and merges the estimations. A trajectory controller has also been developed in order to transition accurately from one vertex to another, and incidentally to provide feedback on the robot's walk. The results of this thesis have been integrated into Aldebaran's software suite and thoroughly tested in various conditions, in order to validate the conclusions and prepare a client delivery.
Harper, Jason W. "Fast Template Matching For Vision-Based Localization". Cleveland, Ohio : Case Western Reserve University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=case1238689057.
Department of Computer Engineering. Title from OhioLINK abstract screen (viewed on 13 April 2009). Available online via the OhioLINK ETD Center.
Switzer, Barbara T. "Robotic path planning with obstacle avoidance /". Online version of thesis, 1993. http://hdl.handle.net/1850/11712.
Garratt, Matthew A. "Biologically inspired vision and control for an autonomous flying vehicle /". View thesis entry in Australian Digital Theses Program, 2007. http://thesis.anu.edu.au/public/adt-ANU20090116.154822/index.html.
Bautista Ballester, Jordi. "Human-robot interaction and computer-vision-based services for autonomous robots". Doctoral thesis, Universitat Rovira i Virgili, 2016. http://hdl.handle.net/10803/398647.
Imitation Learning (IL), or robot Programming by Demonstration (PbD), covers methods by which a robot learns new skills through human guidance and imitation. PbD takes its inspiration from the way humans learn new skills by imitation in order to develop methods by which new tasks can be transferred to robots. This thesis is motivated by the generic question of "what to imitate?", which concerns the problem of how to extract the essential features of a task. To this end, we adopt an Action Recognition (AR) perspective in order to allow the robot to decide what has to be imitated or inferred when interacting with a human. The proposed approach is based on a well-known method from natural language processing: namely, Bag of Words (BoW). This method is applied to large databases in order to obtain a trained model. Although BoW is a machine learning technique used in various fields of research, in action classification for robot learning it is far from accurate. Moreover, it focuses on the classification of objects and gestures rather than actions. Thus, in this thesis we show that the method is suitable in action classification scenarios for merging information from different sources or different trials. This thesis makes three contributions: (1) it proposes a general method for dealing with action recognition and thus contributes to imitation learning; (2) the methodology can be applied to large databases which include different modes of action capture; and (3) the method is applied in a real international innovation project called Vinbot.
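As an illustration of the bag-of-words representation the abstract builds on (the codebook and descriptors below are invented; a real system would learn the codebook from training descriptors, e.g. with k-means, and feed the histograms to a classifier):

```python
import math

def quantize(descriptor, codebook):
    """Index of the nearest codeword under Euclidean distance."""
    return min(range(len(codebook)),
               key=lambda i: math.dist(descriptor, codebook[i]))

def bow_histogram(descriptors, codebook):
    """Pool a clip's local descriptors into a normalized word histogram.

    The histogram is the fixed-length representation handed to the
    classifier, regardless of how many descriptors the clip produced."""
    hist = [0.0] * len(codebook)
    for d in descriptors:
        hist[quantize(d, codebook)] += 1.0
    total = sum(hist) or 1.0
    return [h / total for h in hist]
```

Because the histograms are fixed-length and normalized, histograms from different sources or trials can be averaged or concatenated before classification, which is the kind of information merging the thesis argues BoW supports.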
Onder, Murat. "Face Detection And Active Robot Vision". Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/2/12605290/index.pdf.
Davison, Andrew John. "Mobile robot navigation using active vision". Thesis, University of Oxford, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.298606.
Burbridge, Christopher James Charles. "Efficient robot navigation with omnidirectional vision". Thesis, University of Ulster, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.522396.
Marshall, Christopher. "Robot trajectory generation using computer vision". Thesis, University of Newcastle Upon Tyne, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.443107.
Ribeiro, Luís Miguel Saraiva. "Object recognition for semantic robot vision". Master's thesis, Universidade de Aveiro, 2008. http://hdl.handle.net/10773/2057.
Recognizing objects in an everyday scene is a major step toward unsupervised image understanding. An intelligent agent first needs to identify each object in a scene before it can understand the scene's semantic content. However, unsupervised learning and unsupervised object recognition remain great challenges in vision research. Our work is a transverse approach to unsupervised object learning and recognition. We built an agent capable of locating, in a complex scene, an instance of a requested category. The name of a category is uploaded to the agent's system, and the agent autonomously learns the category's appearance by searching the Internet for examples of it. It then explores a static picture of the surrounding environment, looking for an instance of the previously learned category. This dissertation focuses on object detection and recognition in a complex scene. We use the Scale Invariant Feature Transform (SIFT) and Roy's Shape Representation (RSR) to represent an object, and an ensemble of several classification techniques to recognize it. To locate objects of interest in the scene we use scene segmentation based on colour saliencies, followed by object extraction based on contour analysis.
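The ensemble of several classification techniques mentioned above can be caricatured as a majority vote over independent classifiers (a sketch with invented names; the thesis's actual combination rule may differ):

```python
from collections import Counter

def ensemble_predict(classifiers, sample):
    """Combine several single-method classifiers by majority vote.

    `classifiers` is a list of callables, each returning a label for
    `sample`; ties are resolved in favour of the earliest vote cast."""
    votes = [clf(sample) for clf in classifiers]
    return Counter(votes).most_common(1)[0][0]
```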
Arthur, Richard B. "Vision-Based Human Directed Robot Guidance". Diss., CLICK HERE for online access, 2004. http://contentdm.lib.byu.edu/ETD/image/etd564.pdf.
Nitz Pettersson, Hannes, and Samuel Vikström. "VISION-BASED ROBOT CONTROLLER FOR HUMAN-ROBOT INTERACTION USING PREDICTIVE ALGORITHMS". Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-54609.
Stark, Per. "Machine vision camera calibration and robot communication". Thesis, University West, Department of Technology, Mathematics and Computer Science, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:hv:diva-1351.
This thesis is part of a larger project included in the European project AFFIX. The aim of the project is to develop a new method for assembling an aircraft engine part so that weight and manufacturing costs are reduced. The proposal is to weld sheet-metal parts instead of using cast parts. A machine vision system is suggested for detecting the joints for the weld assembly operation of the sheet metal. The final system aims to locate a hidden curve on an object. The coordinates of the curve are calculated by the machine vision system and sent to a robot, which should create and follow a path using the coordinates. The accuracy in locating the curve must be within +/- 0.5 mm for an approved weld joint. This report investigates the accuracy of the camera calibration and the positioning of the robot. It also touches on the importance of good lighting when acquiring images for a vision system, and includes the development of a robot program that receives the coordinates and transforms them into robot movements. The camera calibration is done in a toolbox for MatLab and extracts the intrinsic camera parameters, such as the distance between the centre of the lens and the optical detector in the camera (the focal length f), the lens distortion parameters and the principal point. It also returns the location and orientation of the camera at each image obtained during the calibration, i.e. the extrinsic parameters. The intrinsic parameters are used when translating between image coordinates and camera coordinates, and the extrinsic parameters when translating between camera coordinates and world coordinates. The results of this project are a transformation matrix that relates the robot's position to the camera's position, and a robot program that can receive a large number of coordinates, store them and create a path to move along for the weld application.
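The role of the two parameter sets can be sketched as a bare pinhole projection, with the extrinsics mapping world to camera coordinates and the intrinsics mapping camera coordinates to pixels (lens distortion omitted; all names and numbers are illustrative):

```python
def world_to_image(Xw, R, t, f, cx, cy):
    """Project a 3-D world point to pixel coordinates.

    The extrinsic parameters (R, t) take world coordinates to camera
    coordinates; the intrinsics (focal length f and principal point
    (cx, cy)) take camera coordinates to the image plane."""
    # Camera coordinates: Xc = R * Xw + t
    Xc = [sum(R[i][j] * Xw[j] for j in range(3)) + t[i] for i in range(3)]
    # Perspective division onto the image plane.
    u = f * Xc[0] / Xc[2] + cx
    v = f * Xc[1] / Xc[2] + cy
    return u, v
```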
MARTURI, ANJANILAKSHMIKRISNANARESH. "Vision Based Grasp Planning for Robot Assembly". Thesis, Örebro universitet, Akademin för naturvetenskap och teknik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-12402.
Zhang, Chi. "Vision-based robot localization without explicit landmarks". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape10/PQDD_0003/MQ44050.pdf.
Zhang, Chi 1969. "Vision based robot localization without explicit landmarks". Thesis, McGill University, 1998. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=20527.
This thesis presents a new approach to mobile robot localization that avoids the selection of landmarks and the use of an explicit model. Instead, it uses low-level primitive features from video data and learns to convert these features into a representation of the robot pose. The conversion from video data to robot poses is implemented using a multi-layer neural network trained by back-propagation. In addition, a key aspect of the approach is the use of a confidence measure to eliminate incorrect components of the estimated pose vectors, and of dead reckoning to complement the neural network estimates. Finally, the approach is generalized to allow a mobile robot to navigate in a large environment.
Through a number of experimental results in several sample environments, the thesis shows that the accuracy of the technique is good while the on-line computational cost is very low. Accurate localization of a mobile robot is thus achievable in real time.
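The confidence-gated combination of network estimates and dead reckoning can be caricatured per pose component as follows (the threshold and all names are invented; the thesis's actual fusion rule may differ):

```python
def fuse_pose(nn_pose, nn_conf, dr_pose, threshold=0.5):
    """Per-component pose fusion.

    Keep the neural-network estimate for each pose component whose
    confidence clears the threshold; otherwise fall back on the
    dead-reckoning value for that component."""
    return [nn if conf >= threshold else dr
            for nn, conf, dr in zip(nn_pose, nn_conf, dr_pose)]
```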
Herrod, Nicholas John. "Three-dimensional robot vision using structured illumination". Thesis, University of Cambridge, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.257308.
Beardsley, Paul Anthony. "Applications of projective geometry to robot vision". Thesis, University of Oxford, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.316854.
Snailum, Nicholas. "Mobile robot navigation using single camera vision". Thesis, University of East London, 2001. http://roar.uel.ac.uk/3565/.
Betke, Margrit. "Learning and vision algorithms for robot navigation". Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/11402.
Miura, Kanako. "Robot hand positioning and grasping using vision". Université Louis Pasteur (Strasbourg) (1971-2008), 2004. https://publication-theses.unistra.fr/public/theses_doctorat/2004/MIURA_Kanako_2004.pdf.
Recently, significant developments have been made in the design of practical robot manipulators and hands that can perform the various manipulation tasks required in different fields. However, most industrial robots have been designed to perform only specific movements based on a priori knowledge of the object to be manipulated. They therefore cannot accomplish tasks when properties of the target object (e.g., its mass, shape, or position) are unknown, or when the relative position of the vision system with respect to the robot is unknown. In this thesis, the total grasping task is investigated. The manipulator has an uncalibrated camera system and a simple precision gripper with two fingers on the end-effector. The problem is divided into two tasks: the positioning task of the manipulator, and the grasping task of the robot hand. Most previous work on visual servoing assumes that the kinematic model of the robot, a model of the object, and the camera intrinsic parameters are known, and would fail if the robot and the vision system were not fully known. We employ an indirect (look-and-move) scheme for the versatility and stability brought by the internal joint controllers. A novel approach to uncalibrated, model-less visual servoing using a modified simplex iterative search method is proposed. The basic idea behind this method is to compare the value of the objective function in several configurations and to move to the next configurations so as to decrease this value. Demonstrations with a 6-DOF industrial manipulator show the efficiency of this method. Humans have the ability to touch an object without inducing large displacements, even if the object is light and could easily fall. Although such a skill is very important when the object is fragile, few investigations of soft grasping have been made so far, and it has not yet been applied to the control laws of robot hands.
In this thesis, experimental studies are carried out on human grasping with the index finger and the thumb (precision grasp). The features of contact motions obtained by measuring human motions are applied to a robot grasping task. The "soft" contact motion is demonstrated with a robot hand with two individually controlled fingers. Each finger has two pairs of strain gauges as force sensors. A vision system with a camera is also available for real-time visual feedback.
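A reflection-only downhill-simplex step gives the flavour of such a simplex search: candidate joint configurations are compared purely by a measured image-error cost, with no robot or camera model (a toy sketch with invented names; the modified method in the thesis is more elaborate):

```python
def simplex_step(vertices, cost):
    """One reflection step of a minimal downhill-simplex search.

    The worst configuration is reflected through the centroid of the
    remaining vertices; the move is kept only if it lowers the cost."""
    vs = sorted(vertices, key=cost)
    worst = vs[-1]
    centroid = [sum(v[i] for v in vs[:-1]) / (len(vs) - 1)
                for i in range(len(worst))]
    reflected = [2 * c - w for c, w in zip(centroid, worst)]
    if cost(reflected) < cost(worst):
        vs[-1] = reflected
    return vs

def simplex_search(vertices, cost, iters=200):
    """Iterate reflection steps and return the best vertex found."""
    for _ in range(iters):
        vertices = simplex_step(vertices, cost)
    return min(vertices, key=cost)
```

In a servoing context the cost would be the image error measured after physically moving to each configuration, which is what makes the approach calibration- and model-free.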
Alkhulayfi, Khalid Abdullah. "Vision-Based Motion for a Humanoid Robot". PDXScholar, 2016. https://pdxscholar.library.pdx.edu/open_access_etds/3176.
Mikhalsky, Maxim. "Efficient biomorphic vision for autonomous mobile robots". Thesis, Queensland University of Technology, 2006. https://eprints.qut.edu.au/16206/1/Maxim_Mikhalsky_Thesis.pdf.
Mikhalsky, Maxim. "Efficient biomorphic vision for autonomous mobile robots". Queensland University of Technology, 2006. http://eprints.qut.edu.au/16206/.
Gu, Lifang. "Visual guidance of robot motion". University of Western Australia. Dept. of Computer Science, 1996. http://theses.library.uwa.edu.au/adt-WU2003.0004.
Pan, Wendy. "A simulated shape recognition system using feature extraction /". Online version of thesis, 1989. http://hdl.handle.net/1850/10496.
Wooden, David T. "Graph-based Path Planning for Mobile Robots". Diss., Available online, Georgia Institute of Technology, 2006, 2006. http://etd.gatech.edu/theses/available/etd-11092006-180958/.
Magnus Egerstedt, Committee Chair ; Patricio Vela, Committee Member ; Ayanna Howard, Committee Member ; Tucker Balch, Committee Member ; Wayne Book, Committee Member.
Shah, Syed Irtiza Ali. "Vision based 3D obstacle detection". Thesis, Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/29741.
Committee Co-Chair: Johnson, Eric; Committee Co-Chair: Lipkin, Harvey; Committee Member: Sadegh, Nader. Part of the SMARTech Electronic Thesis and Dissertation Collection.
Pudney, Christopher John. "Surface modelling and surface following for robots equipped with range sensors". University of Western Australia. Dept. of Computer Science, 1994. http://theses.library.uwa.edu.au/adt-WU2003.0002.
Fung, Hong Chee. "Image processing & robot positioning". Ohio University / OhioLINK, 1990. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1183490009.
Futterlieb, Marcus. "Vision based navigation in a dynamic environment". Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30191/document.
This thesis addresses the autonomous long-range navigation of wheeled robots in dynamic environments. It takes place within the Air-Cobot project, which aims at designing a collaborative robot (cobot) able to perform the preflight inspection of an aircraft. The considered environment is highly structured (airport runways and hangars) and may be cluttered with both static and dynamic unknown obstacles (luggage or refueling trucks, pedestrians, etc.). Our navigation framework relies on previous works and is based on switching between different control laws (go-to-goal controller, visual servoing, obstacle avoidance) depending on the context. Our contribution is twofold. First, we have designed a visual servoing controller able to make the robot move over a long distance thanks to a topological map and to the choice of suitable targets. In addition, multi-camera visual servoing control laws have been built to benefit from the image data provided by the different cameras embedded on the Air-Cobot system. The second contribution is related to obstacle avoidance. A control law based on equiangular spirals has been designed to guarantee non-collision; it is fully sensor-based and avoids static and dynamic obstacles alike. It thus provides a general solution for dealing efficiently with the collision problem. Experimental results, obtained both at LAAS and in Airbus hangars and on runways, show the efficiency of the developed techniques.
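An equiangular (logarithmic) spiral keeps a constant angle between the path and circles centred on the obstacle, so the clearance changes at a controlled rate. A minimal sketch of such a path in an obstacle-centred frame (parameter names are invented; the actual control law is closed-loop on sensor data rather than a precomputed path):

```python
import math

def spiral_waypoints(r0, alpha_deg, dtheta, steps):
    """Waypoints on the equiangular spiral r = r0 * exp(theta / tan(alpha)).

    `alpha_deg` is the constant pitch angle between the spiral and the
    local circle around the obstacle: 90 degrees gives pure circling,
    smaller angles make the robot recede faster per radian travelled."""
    k = 1.0 / math.tan(math.radians(alpha_deg))
    pts = []
    for i in range(steps):
        theta = i * dtheta
        r = r0 * math.exp(k * theta)
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts
```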
Foster, D. J. "Pipelining : an approach for machine vision". Thesis, University of Oxford, 1987. http://ora.ox.ac.uk/objects/uuid:1258e292-2603-4941-87db-d2a56b8856a2.
Simon, D. G. "A new sensor for robot arm and tool calibration". Thesis, University of Surrey, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.266337.
Westerling, Magnus, and Nils Eriksson. "Förstudie för automatisering av slipningsprocess". Thesis, Örebro universitet, Institutionen för naturvetenskap och teknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-31054.
This thesis was carried out at Ovako Bar in Hellefors in order to examine the possibility of automating the manual grinding process for bars with surface defects. The main motivation for the thesis is the vibration damage to operators' hands, which in turn leads to sick leave. Work began with an analysis of the current situation at KS-7 to observe how the process works, understand how the operators perceive it, and gather information on the existing equipment. Potential suppliers were also contacted for further discussion of candidate products. After the current-situation analysis, an improvement-suggestions phase took place, in which solutions to the problem arose out of brainstorming against the specifications. The approved solutions were sent to various vendors, IM Teknik and Robotslipning, for further exchange of ideas and assessment of feasibility. The investment analysis shows that robotic grinding with a vision system is the most cost-efficient solution that meets the demands and requirements to be fulfilled.
Gaskett, Chris. "Q-Learning for Robot Control". The Australian National University, Research School of Information Sciences and Engineering, 2002. http://thesis.anu.edu.au./public/adt-ANU20041108.192425.
Scaramuzza, Davide. "Omnidirectional vision : from calibration to robot motion estimation". Zürich : ETH, 2007. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=17635.
Hallenberg, Johan. "Robot Tool Center Point Calibration using Computer Vision". Thesis, Linköping University, Department of Electrical Engineering, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-9520.
Today, tool center point calibration is mostly done manually. The procedure is very time consuming and the result varies with the skill of the operator.
This thesis proposes a new automated iterative method for tool center point calibration of industrial robots, by making use of computer vision and image processing techniques. The new method has several advantages over the manual calibration method. Experimental verifications have shown that the proposed method is much faster, still delivering a comparable or even better accuracy. The setup of the proposed method is very easy, only one USB camera connected to a laptop computer is needed and no contact with the robot tool is necessary during the calibration procedure.
The method can be split into three parts. First, the transformation between the robot wrist and the tool is determined by solving a closed loop of homogeneous transformations. Second, an image segmentation procedure is described for finding point correspondences on a rotationally symmetric robot tool; this segmentation step is necessary for measuring the camera-to-tool transformation in all six degrees of freedom. The last part of the proposed method is an iterative procedure that automates an ordinary four-point tool center point calibration algorithm. The iterative procedure ensures that the accuracy of the tool center point calibration depends only on the accuracy with which the camera registers a movement between two positions.
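The iterative part can be caricatured as a feedback loop in which the camera measures the residual tool-tip displacement between robot poses and the TCP estimate is corrected until the residual falls below tolerance (all names and the damping factor below are invented for illustration):

```python
def calibrate_tcp(measure_error, apply_correction, tol=0.05, max_iter=50):
    """Iterative tool-center-point refinement.

    `measure_error` returns the residual tip displacement (mm) currently
    observed by the camera; `apply_correction` updates the TCP estimate.
    Returns the number of corrections applied before convergence."""
    for i in range(max_iter):
        residual = measure_error()
        if max(abs(e) for e in residual) < tol:
            return i
        apply_correction(residual)
    raise RuntimeError("TCP calibration did not converge")
```

Here `measure_error` and `apply_correction` stand in for the camera measurement and the robot-side TCP update; the loop structure is what makes the final accuracy depend only on the camera's measurement accuracy.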
Marin Hernandez, Antonio. "Vision dynamique pour la navigation d'un robot mobile". PhD thesis, Toulouse, INPT, 2004. http://oatao.univ-toulouse.fr/7346/1/marin_hernandez.pdf.