Dissertations / Theses on the topic 'Robot vision systems'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Robot vision systems.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Öfjäll, Kristoffer. "Online Learning for Robot Vision." Licentiate thesis, Linköpings universitet, Datorseende, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-110892.
Pudney, Christopher John. "Surface modelling and surface following for robots equipped with range sensors." University of Western Australia. Dept. of Computer Science, 1994. http://theses.library.uwa.edu.au/adt-WU2003.0002.
Karr, Roger W. "The assembly of a microcomputer controlled low cost vision-robot system and the design of software." Ohio : Ohio University, 1985. http://www.ohiolink.edu/etd/view.cgi?ohiou1184010908.
Sridaran, S. "Off-line robot vision system programming using a computer aided design system." Thesis, Virginia Polytechnic Institute and State University, 1985. http://hdl.handle.net/10919/54373.
Master of Science
Damweber, Michael Frank. "Model independent offset tracking with virtual feature points." Thesis, Georgia Institute of Technology, 2000. http://hdl.handle.net/1853/17651.
Ma, Mo. "Navigation using one camera in structured environment /." View abstract or full-text, 2007. http://library.ust.hk/cgi/db/thesis.pl?ECED%202007%20MA.
Cipolla, Roberto. "Active visual inference of surface shape." Thesis, University of Oxford, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.293392.
Jansen van Nieuwenhuizen, Rudolph Johannes. "Development of an automated robot vision component handling system." Thesis, Bloemfontein : Central University of Technology, Free State, 2013. http://hdl.handle.net/11462/213.
In the industry, automation is used to optimize production, improve product quality and increase profitability. By properly implementing automation systems, the risk of injury to workers can be minimized. Robots are used in many low-level tasks to perform repetitive, undesirable or dangerous work. Robots can perform a task with higher precision and accuracy to lower errors and waste of material. Machine Vision makes use of cameras, lighting and software to do visual inspections that a human would normally do. Machine Vision is useful in applications where repeatability, high speed and accuracy are important. This study concentrates on the development of a dedicated robot vision system to automatically place components exiting from a conveyor system onto Automatic Guided Vehicles (AGVs). A personal computer (PC) controls the automated system. Software modules were developed to perform image processing for the Machine Vision system, as well as software to control a Cartesian robot. These modules were integrated to work in a real-time system. The vision system is used to determine the parts' position and orientation. The orientation data are used to rotate a gripper and the position data are used by the Cartesian robot to position the gripper over the part. Hardware for the control of the gripper, pneumatics and safety systems was developed. The automated system's hardware was integrated by the use of different communication protocols, namely DeviceNet (Cartesian robot), RS-232 (gripper) and Firewire (camera).
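As a small illustration of the orientation-from-vision step this abstract describes (our own sketch using image moments, not the thesis's actual implementation; the function name and image conventions are assumptions):

```python
import numpy as np

def part_pose(mask):
    """Estimate centroid (pixels) and orientation (degrees) of a binary part mask."""
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()            # centroid from first-order moments
    mu20 = ((xs - cx) ** 2).mean()           # second-order central moments
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    # The axis of least inertia gives the part's orientation in the image plane.
    angle = 0.5 * np.degrees(np.arctan2(2 * mu11, mu20 - mu02))
    return (cx, cy), angle
```

The angle would drive the gripper rotation and the centroid the Cartesian positioning, as in the system described above.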
Ukidve, Chinmay S. "Quantifying optimum fault tolerance of manipulators and robotic vision systems." Laramie, Wyo. : University of Wyoming, 2008. http://proquest.umi.com/pqdweb?did=1605147571&sid=1&Fmt=2&clientId=18949&RQT=309&VName=PQD.
Hallenberg, Johan. "Robot Tool Center Point Calibration using Computer Vision." Thesis, Linköping University, Department of Electrical Engineering, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-9520.
Today, tool center point calibration is mostly done by a manual procedure. The method is very time-consuming, and the result may vary with the skill of the operator.
This thesis proposes a new automated iterative method for tool center point calibration of industrial robots, making use of computer vision and image processing techniques. The new method has several advantages over the manual calibration method. Experimental verifications have shown that the proposed method is much faster while delivering comparable or even better accuracy. The setup of the proposed method is very easy: only one USB camera connected to a laptop computer is needed, and no contact with the robot tool is necessary during the calibration procedure.
The method can be split into three parts. Initially, the transformation between the robot wrist and the tool is determined by solving a closed loop of homogeneous transformations. Second, an image segmentation procedure is described for finding point correspondences on a rotation-symmetric robot tool. The image segmentation part is necessary for performing a measurement with six degrees of freedom of the camera-to-tool transformation. The last part of the proposed method is an iterative procedure which automates an ordinary four-point tool center point calibration algorithm. The iterative procedure ensures that the accuracy of the tool center point calibration depends only on the accuracy of the camera when registering a movement between two positions.
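To make the classical multi-pose calibration step concrete: if the robot brings the same fixed tip point c into several wrist poses (R_i, p_i), the unknown tool offset t in the wrist frame satisfies R_i t + p_i = c for every pose. A minimal least-squares sketch of that step (our own illustration under these assumptions, not code from the thesis):

```python
import numpy as np

def calibrate_tcp(rotations, positions):
    """Solve R_i @ t + p_i = c for the tool offset t (wrist frame) and the
    common tip point c (base frame), by stacking [R_i | -I][t; c] = -p_i
    and solving in the least-squares sense."""
    rows_A, rows_b = [], []
    for R, p in zip(rotations, positions):
        rows_A.append(np.hstack([R, -np.eye(3)]))
        rows_b.append(-np.asarray(p, dtype=float))
    x, *_ = np.linalg.lstsq(np.vstack(rows_A), np.concatenate(rows_b), rcond=None)
    return x[:3], x[3:]  # tool offset t, tip point c

def axis_angle(axis, theta):
    """Rodrigues' formula: rotation matrix from a (non-unit) axis and angle."""
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
```

With four or more distinct wrist orientations the stacked system is overdetermined, which is what makes an iterative, camera-driven refinement of the kind described above possible.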
Bayraktar, Hakan. "Development Of A Stereo Vision System For An Industrial Robot." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12605732/index.pdf.
Wu, Jianxin. "Visual place categorization." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/29784.
Committee Chair: Rehg, James M.; Committee Member: Christensen, Henrik; Committee Member: Dellaert, Frank; Committee Member: Essa, Irfan; Committee Member: Malik, Jitendra. Part of the SMARTech Electronic Thesis and Dissertation Collection.
Brooks, Douglas Antwonne. "Control of reconfigurability and navigation of a wheel-legged robot based on active vision." Thesis, Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/26545.
Committee Chair: Howard, Ayanna; Committee Member: Egerstedt, Magnus; Committee Member: Vela, Patricio. Part of the SMARTech Electronic Thesis and Dissertation Collection.
Brink, Wikus. "Stereo vision for simultaneous localization and mapping." Thesis, Stellenbosch : Stellenbosch University, 2012. http://hdl.handle.net/10019.1/71593.
ENGLISH ABSTRACT: Simultaneous localization and mapping (SLAM) is vital for autonomous robot navigation. The robot must build a map of its environment while tracking its own motion through that map. Although many solutions to this intricate problem have been proposed, one of the most prominent issues that still needs to be resolved is to accurately measure and track landmarks over time. In this thesis we investigate the use of stereo vision for this purpose. In order to find landmarks in images we explore the use of two feature detectors: the scale-invariant feature transform (SIFT) and speeded-up robust features (SURF). Both these algorithms find salient points in images and calculate a descriptor for each point that is invariant to scale, rotation and illumination. By using the descriptors we match these image features between stereo images and use the geometry of the system to calculate a set of 3D landmark measurements. A Taylor approximation of this transformation is used to derive a Gaussian noise model for the measurements. The measured landmarks are matched to landmarks in a map to find correspondences. We find that this process often incorrectly matches ambiguous landmarks. To find these mismatches we develop a novel outlier detection scheme based on the random sample consensus (RANSAC) framework. We use a similarity transformation for the RANSAC model and derive a probabilistic consensus measure that takes the uncertainties of landmark locations into account. Through simulation and practical tests we find that this method is a significant improvement on the standard approach of using the fundamental matrix. With accurately identified landmarks we are able to perform SLAM. We investigate the use of three popular SLAM algorithms: EKF SLAM, FastSLAM and FastSLAM 2. EKF SLAM uses a Gaussian distribution to describe the system's states and linearizes the motion and measurement equations with Taylor approximations.
The two FastSLAM algorithms are based on the Rao-Blackwellized particle filter that uses particles to describe the robot states, and EKFs to estimate the landmark states. FastSLAM 2 uses a refinement process to decrease the size of the proposal distribution and in doing so decreases the number of particles needed for accurate SLAM. We test the three SLAM algorithms extensively in a simulation environment and find that all three are capable of very accurate results under the right circumstances. EKF SLAM displays extreme sensitivity to landmark mismatches. FastSLAM, on the other hand, is considerably more robust against landmark mismatches but is unable to describe the six-dimensional state vector required for 3D SLAM. FastSLAM 2 offers a good compromise between efficiency and accuracy, and performs well overall. In order to evaluate the complete system we test it with real world data. We find that our outlier detection algorithm is very effective and greatly increases the accuracy of the SLAM systems. We compare results obtained by all three SLAM systems, with both feature detection algorithms, against DGPS ground truth data and achieve accuracies comparable to other state-of-the-art systems. From our results we conclude that stereo vision is viable as a sensor for SLAM.
AFRIKAANSE OPSOMMING: Gelyktydige lokalisering en kartering (simultaneous localization and mapping, SLAM) is ’n noodsaaklike proses in outomatiese robot-navigasie. Die robot moet ’n kaart bou van sy omgewing en tegelykertyd sy eie beweging deur die kaart bepaal. Alhoewel daar baie oplossings vir hierdie ingewikkelde probleem bestaan, moet een belangrike saak nog opgelos word, naamlik om landmerke met verloop van tyd akkuraat op te spoor en te meet. In hierdie tesis ondersoek ons die moontlikheid om stereo-visie vir hierdie doel te gebruik. Ons ondersoek die gebruik van twee beeldkenmerk-onttrekkers: scale-invariant feature transform (SIFT) en speeded-up robust features (SURF). Altwee algoritmes vind toepaslike punte in beelde en bereken ’n beskrywer vir elke punt wat onveranderlik is ten opsigte van skaal, rotasie en beligting. Deur die beskrywer te gebruik, kan ons ooreenstemmende beeldkenmerke soek en die geometrie van die stelsel gebruik om ’n stel driedimensionele landmerkmetings te bereken. Ons gebruik ’n Taylor- benadering van hierdie transformasie om ’n Gaussiese ruis-model vir die metings te herlei. Die gemete landmerke se beskrywers word dan vergelyk met dié van landmerke in ’n kaart om ooreenkomste te vind. Hierdie proses maak egter dikwels foute. Om die foutiewe ooreenkomste op te spoor het ons ’n nuwe uitskieterherkenningsalgoritme ontwikkel wat gebaseer is op die RANSAC-raamwerk. Ons gebruik ’n gelykvormigheidstransformasie vir die RANSAC-model en lei ’n konsensusmate af wat die onsekerhede van die ligging van landmerke in ag neem. Met simulasie en praktiese toetse stel ons vas dat die metode ’n beduidende verbetering op die standaardprosedure, waar die fundamentele matriks gebruik word, is. Met ons akkuraat geïdentifiseerde landmerke kan ons dan SLAM uitvoer. Ons ondersoek die gebruik van drie SLAM-algoritmes: EKF SLAM, FastSLAM en FastSLAM 2. 
EKF SLAM gebruik ’n Gaussiese verspreiding om die stelseltoestande te beskryf en Taylor-benaderings om die bewegings- en meetvergelykings te lineariseer. Die twee FastSLAM-algoritmes is gebaseer op die Rao-Blackwell partikelfilter wat partikels gebruik om robottoestande te beskryf en EKF’s om die landmerktoestande af te skat. FastSLAM 2 gebruik ’n verfyningsproses om die grootte van die voorstelverspreiding te verminder en dus die aantal partikels wat vir akkurate SLAM benodig word, te verminder. Ons toets die drie SLAM-algoritmes deeglik in ’n simulasie-omgewing en vind dat al drie onder die regte omstandighede akkurate resultate kan behaal. EKF SLAM is egter baie sensitief vir foutiewe landmerkooreenkomste. FastSLAM is meer bestand daarteen, maar kan nie die sesdimensionele verspreiding wat vir 3D SLAM vereis word, beskryf nie. FastSLAM 2 bied ’n goeie kompromie tussen effektiwiteit en akkuraatheid, en presteer oor die algemeen goed. Ons toets die hele stelsel met werklike data om dit te evalueer, en vind dat ons uitskieterherkenningsalgoritme baie effektief is en die akkuraatheid van die SLAM-stelsels beduidend verbeter. Ons vergelyk resultate van die drie SLAM-stelsels met onafhanklike DGPS-data, wat as korrek beskou kan word, en behaal akkuraatheid wat vergelykbaar is met ander toonaangewende stelsels. Ons resultate lei tot die gevolgtrekking dat stereo-visie ’n lewensvatbare sensor vir SLAM is.
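The outlier detection scheme in the abstract above pairs a similarity transformation with RANSAC. The following simplified sketch (our own illustration; it omits the thesis's probabilistic consensus measure and uses a plain distance threshold) estimates the transform between two 3D landmark sets with Umeyama's closed-form method and flags mismatched correspondences:

```python
import numpy as np

def fit_similarity(P, Q):
    """Least-squares similarity transform q = s * R @ p + t (Umeyama's method)."""
    muP, muQ = P.mean(axis=0), Q.mean(axis=0)
    X, Y = P - muP, Q - muQ
    cov = Y.T @ X / len(P)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                       # avoid reflections
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / (X ** 2).sum(axis=1).mean()
    t = muQ - s * R @ muP
    return s, R, t

def ransac_matches(P, Q, iters=200, thresh=0.05, seed=0):
    """Flag landmark correspondences consistent with one similarity transform."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(P), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(P), size=3, replace=False)   # minimal sample
        s, R, t = fit_similarity(P[idx], Q[idx])
        resid = np.linalg.norm(Q - (s * P @ R.T + t), axis=1)
        inliers = resid < thresh
        if inliers.sum() > best.sum():
            best = inliers
    return best
```

A final `fit_similarity` on the inlier set then gives the refined transform, mirroring the fit-then-verify structure described in the abstract.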
Shah, Syed Irtiza Ali. "Single camera based vision systems for ground and aerial robots." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/37143.
Yung, Ho-lam, and 容浩霖. "Position and pose estimation for visual control of robot manipulators in planar tasks." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2009. http://hub.hku.hk/bib/B43224283.
Viljoen, Vernon. "Integration of a vision-guided robot into a reconfigurable component-handling platform." Thesis, [Bloemfontein?] : Central University of Technology, Free State, 2014. http://hdl.handle.net/11462/120.
The latest technological trend in manufacturing worldwide is automation. Reducing human labour by using robots to do the work is purely a business decision. The reasons for automating a plant include improving productivity, reducing labour and equipment costs, reducing product damage, monitoring system reliability and improving plant safety. The use of robots in the automation sector adds value to the production line because of their versatility. They can be programmed to follow specific paths when moving material from one point to another, and their biggest advantage is that they can operate for twenty-four hours a day while delivering consistent quality and accuracy. Vision-Guided Robots (VGRs) are developed for many different applications and therefore many different combinations of VGR systems are available. All VGRs are equipped with vision sensors which are used to locate and inspect various objects. In this study a robot and a vision system were combined for a pick-and-place application. Research was done on the design of a robot for locating, inspecting and picking selected components from a moving conveyor system.
Irwansyah, Arif [Verfasser], Ulrich [Akademischer Betreuer] Rückert, and Franz [Akademischer Betreuer] Kummert. "Heterogeneous computing systems for vision-based multi-robot tracking / Arif Irwansyah ; Ulrich Rückert, Franz Kummert." Bielefeld : Universitätsbibliothek Bielefeld, 2017. http://d-nb.info/1140586009/34.
Kontitsis, Michail. "Design and implementation of an integrated dynamic vision system for autonomous systems operating in uncertain domains." [Tampa, Fla] : University of South Florida, 2009. http://purl.fcla.edu/usf/dc/et/SFE0002852.
Entschev, Peter Andreas. "Efficient construction of multi-scale image pyramids for real-time embedded robot vision." Universidade Tecnológica Federal do Paraná, 2013. http://repositorio.utfpr.edu.br/jspui/handle/1/720.
Interest point detectors, or keypoint detectors, have been of great interest for embedded robot vision for a long time, especially those which provide robustness against geometrical variations, such as rotation, affine transformations and changes in scale. The detection of scale invariant features is normally done by constructing multi-scale image pyramids and performing an exhaustive search for extrema in the scale space, an approach that is present in object recognition methods such as SIFT and SURF. These methods are able to find very robust interest points with suitable properties for object recognition, but at the same time are computationally expensive. In this work we present an efficient method for the construction of SIFT-like image pyramids in embedded systems such as the BeagleBoard-xM. The method we present here aims at using computationally less expensive techniques and reusing already processed information in an efficient manner in order to reduce the overall computational complexity. To simplify the pyramid building process we use binomial filters instead of conventional Gaussian filters used in the original SIFT method to calculate multiple scales of an image. Binomial filters have the advantage of being able to be implemented by using fixed-point notation, which is a big advantage for many embedded systems that do not provide native floating-point support. We also reduce the amount of convolution operations needed by resampling already processed scales of the pyramid. After presenting our efficient pyramid construction method, we show how to implement it in an efficient manner in an SIMD (Single Instruction, Multiple Data) platform -- the SIMD platform we use is the ARM Neon extension available in the BeagleBoard-xM ARM Cortex-A8 processor.
SIMD platforms in general are very useful for multimedia applications, where normally it is necessary to perform the same operation over several elements, such as pixels in images, enabling multiple data to be processed with a single instruction of the processor. However, the Neon extension in the Cortex-A8 processor does not support floating-point operations, so the whole method was carefully implemented to overcome this limitation. Finally, we provide some comparison results regarding the method we propose here and the original SIFT approach, including performance regarding execution time and repeatability of detected keypoints. With a straightforward implementation (without the use of the SIMD platform), we show that our method takes approximately 1/4 of the time taken to build the entire original SIFT pyramid, while repeating up to 86% of the interest points found with the original method. With a complete fixed-point approach (including vectorization within the SIMD platform) we show that repeatability reaches up to 92% of the original SIFT keypoints while reducing the processing time to less than 3%.
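The integer-only binomial filtering described above can be sketched in a few lines. This is a plain NumPy illustration of the idea (a separable 5-tap binomial blur with a right shift instead of a division, then downsampling), not the author's ARM Neon implementation:

```python
import numpy as np

KERNEL = np.array([1, 4, 6, 4, 1], dtype=np.int64)  # binomial approximation of a Gaussian

def binomial_blur(img):
    """Separable 5-tap binomial filter using only integer arithmetic."""
    out = img.astype(np.int64)
    for axis in (0, 1):
        pad = [(2, 2) if a == axis else (0, 0) for a in (0, 1)]
        padded = np.pad(out, pad, mode='reflect')
        acc = np.zeros_like(out)
        for k, w in enumerate(KERNEL):
            sl = [slice(None), slice(None)]
            sl[axis] = slice(k, k + out.shape[axis])
            acc += w * padded[tuple(sl)]
        out = acc >> 4                     # divide by 16 with a shift
    return out

def build_pyramid(img, levels):
    """Blur and downsample by 2 repeatedly, reusing each processed scale."""
    pyr = [img.astype(np.int64)]
    for _ in range(levels - 1):
        pyr.append(binomial_blur(pyr[-1])[::2, ::2])
    return pyr
```

Because the kernel weights sum to 16, the normalization is a single shift, which is what makes the filter attractive on fixed-point hardware.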
Hussein, Mustafa Turki [Verfasser], Dirk [Akademischer Betreuer] Söffker, and Josef [Akademischer Betreuer] Pauli. "Vision-Based Control of Flexible Robot Systems / Mustafa Turki Hussein. Gutachter: Josef Pauli. Betreuer: Dirk Söffker." Duisburg, 2014. http://d-nb.info/1064264611/34.
Meger, David Paul. "Planning, localization, and mapping for a mobile robot in a camera network." Thesis, McGill University, 2007. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=101623.
Adeboye, Taiyelolu. "Robot Goalkeeper : A robotic goalkeeper based on machine vision and motor control." Thesis, Högskolan i Gävle, Avdelningen för elektronik, matematik och naturvetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-27561.
Andersson, Olov. "Methods for Scalable and Safe Robot Learning." Licentiate thesis, Linköpings universitet, Artificiell intelligens och integrerade datorsystem, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-138398.
Modi, Kalpesh Prakash. "Vision application of human robot interaction : development of a ping pong playing robotic arm /." Link to online version, 2005. https://ritdml.rit.edu/dspace/handle/1850/943.
Fredriksson, Scott. "Design, Development and Control of a Quadruped Robot." Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-86897.
Watanabe, Yoko. "Stochastically optimized monocular vision-based navigation and guidance." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/22545.
Committee Chair: Johnson, Eric; Committee Co-Chair: Calise, Anthony; Committee Member: Prasad, J.V.R.; Committee Member: Tannenbaum, Allen; Committee Member: Tsiotras, Panagiotis.
Damaryam, Gideon Kanji. "Vision systems for a mobile robot based on line detection using the Hough Transform and artificial neural networks." Thesis, Robert Gordon University, 2008. http://hdl.handle.net/10059/450.
Wang, Xuerui, and Li Zhao. "Navigation and Automatic Ground Mapping by Rover Robot." Thesis, Högskolan i Halmstad, Halmstad Embedded and Intelligent Systems Research (EIS), 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-6185.
Kira, Zsolt. "Communication and alignment of grounded symbolic knowledge among heterogeneous robots." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/33941.
Strineholm, Philippe. "Exploring Human-Robot Interaction Through Explainable AI Poetry Generation." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-54606.
Norén, Karl. "Obstacle Avoidance for an Autonomous Robot Car using Deep Learning." Thesis, Linköpings universitet, Programvara och system, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-160551.
Wikander, Gustav. "Three dimensional object recognition for robot conveyor picking." Thesis, Linköping University, Department of Electrical Engineering, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-18373.
Shape-based matching (SBM) is a method for matching objects in greyscale images. It extracts edges from search images and matches them to a model using a similarity measure. In this thesis we extend SBM to find the tilt and height position of the object in addition to the z-plane rotation and x-y-position. The search is conducted using a scale pyramid to improve the search speed. A 3D matching can be done for small tilt angles by using SBM on height data and extending it with additional steps to calculate the tilt of the object. The full pose is useful for picking objects with an industrial robot.
The tilt of the object is calculated using a RANSAC plane estimator. After the 2D search, the differences in height between all corresponding points of the model and the live image are calculated. By fitting a plane to these differences, the tilt of the object can be calculated. Using the tilt, the model edges are tilted in order to improve the matching at the next scale level.
The problems that arise with occlusion and missing data have been studied. Missing data and erroneous data have been thresholded manually after conducting tests where automatic filling of missing data did not noticeably improve the matching. The automatic filling could introduce new false edges and remove true ones, thus lowering the score.
Experiments have been conducted where objects have been placed at increasing tilt angles. The results show that the matching algorithm is object dependent and correct matches are almost always found for tilt angles less than 10 degrees. This is very similar to the original 2D SBM because the model edges do not change much for such small angles. For tilt angles up to about 25 degrees most objects can be matched, and for well-suited objects correct matches can be made at large tilt angles of up to 40 degrees.
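The tilt-estimation step described in this abstract (a RANSAC plane fit to the height differences) can be sketched as follows; the function names and thresholds are our own illustrative choices, not the thesis implementation:

```python
import numpy as np

def ransac_plane(points, iters=100, thresh=0.01, seed=0):
    """Fit a plane to 3D points with RANSAC; return a unit normal n and
    offset d such that n @ p + d ≈ 0 for inlier points p."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        a, b, c = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(b - a, c - a)
        norm = np.linalg.norm(n)
        if norm < 1e-12:                       # degenerate (collinear) sample
            continue
        n /= norm
        inliers = np.abs((points - a) @ n) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refine on the inliers: the normal is the smallest right singular vector.
    inl = points[best_inliers]
    centroid = inl.mean(axis=0)
    normal = np.linalg.svd(inl - centroid)[2][-1]
    if normal[2] < 0:
        normal = -normal                       # orient the normal upward
    return normal, -normal @ centroid

def tilt_angle(normal):
    """Tilt of the plane: angle between its normal and the z-axis, in degrees."""
    return np.degrees(np.arccos(np.clip(normal[2], -1.0, 1.0)))
```

The recovered tilt would then be used to re-tilt the model edges before matching at the next pyramid level, as the abstract describes.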
Sattigeri, Ramachandra Jayant. "Adaptive Estimation and Control with Application to Vision-based Autonomous Formation Flight." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/16272.
Chen, Haoyao. "Towards multi-robot formations : study on vision-based localization system /." access full-text access abstract and table of contents, 2009. http://libweb.cityu.edu.hk/cgi-bin/ezdb/thesis.pl?phd-meem-b3008295xf.pdf.
"Submitted to Department of Manufacturing Engineering and Engineering Management in partial fulfillment of the requirements for the degree of Doctor of Philosophy." Includes bibliographical references (leaves 87-100).
Somani, Nikhil [Verfasser], Alois C. [Akademischer Betreuer] Knoll, Torsten [Gutachter] Kröger, and Alois C. [Gutachter] Knoll. "Constraint-based Approaches for Robotic Systems: from Computer Vision to Real-Time Robot Control / Nikhil Somani ; Gutachter: Torsten Kröger, Alois C. Knoll ; Betreuer: Alois C. Knoll." München : Universitätsbibliothek der TU München, 2018. http://d-nb.info/1172414947/34.
Buason, Gunnar. "Competitive co-evolution of sensory-motor systems." Thesis, University of Skövde, Department of Computer Science, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-733.
A recent trend in evolutionary robotics and artificial life research is to maximize self-organization in the design of robotic systems, in particular using artificial evolutionary techniques, in order to reduce the human designer bias. This dissertation presents experiments in competitive co-evolutionary robotics that integrate and extend previous work on competitive co-evolution of neural robot controllers in a predator-prey scenario with work on the ‘co-evolution’ of robot morphology and control systems. The focus here is on a systematic investigation of tradeoffs and interdependencies between morphological parameters and behavioral strategies through a series of predator-prey experiments in which increasingly many aspects are subject to self-organization through competitive co-evolution. The results show that there is a strong interdependency between morphological parameters and behavioral strategies evolved, and that the competitive co-evolutionary process was able to find a balance between and within these two aspects. It is therefore concluded that competitive co-evolution has great potential as a method for the automatic design of robotic systems.
Billing, Erik. "Cognition Rehearsed : Recognition and Reproduction of Demonstrated Behavior." Doctoral thesis, Umeå universitet, Institutionen för datavetenskap, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-50980.
Wernersson, Björn, and Mikael Södergren. "Automatiserad inlärning av detaljer för igenkänning och robotplockning." Thesis, Linköping University, Department of Electrical Engineering, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-170.
Just how far is it possible to make learning of new parts for recognition and robot picking autonomous? This thesis initially gives the prerequisites for the steps in learning and calibration that are to be automated. Among these tasks are to select a suitable part model from numerous candidates with the help of a new part segmenter, as well as computing the spatial extent of this part, facilitating robotic collision handling. Other tasks are to analyze the part model in order to highlight correct and suitable edge segments for increasing pattern matching certainty, and to choose appropriate acceptance levels for pattern matching. Furthermore, tasks deal with simplifying camera calibration by analyzing the calibration pattern, as well as compensating for differences in perspective at great depth variations, by calculating the centre of perspective of the image. The image processing algorithms created in order to solve the tasks are described and evaluated thoroughly. This thesis shows that simplification of the steps of learning and calibration, with the help of advanced image processing, really is possible.
Rafikova, Elvira. "Controle de um robô móvel através de realimentação de estados utilizando visão estereoscópica." [s.n.], 2010. http://repositorio.unicamp.br/jspui/handle/REPOSIP/264560.
Doctoral thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica
Resumo: O enfoque principal desse trabalho é o controle de trajetória e navegação no ambiente através da visão estereoscópica de um robô móvel de duas rodas de acionamento diferencial. Para o controle de posicionamento, são utilizadas: uma estratégia de controle ótima linear e uma estratégia subótima, não linear, em tempo contínuo, chamada de SDRE (State Dependent Riccati Equation), e por fim, uma estratégia de controle SDRE em tempo discreto. Todas essas estratégias são baseadas em funções de Lyapunov e aplicadas ao problema de regulação do robô a uma referência. Para a navegação do robô no ambiente é considerado um modelo navegação por odometria e um mecanismo de visão estereoscópica. A estimação do estado é realizada através do filtro de Kalman clássico. São apresentadas duas estratégias para a navegação do robô no ambiente. Uma delas, totalmente discreta com a utilização do métodos de controle SDRE discreto, observação de estado discreta através das câmeras e estimação de estado através do filtro de Kalman discreto. Outra, com a abordagem de horizonte recuável, utilizando controle SDRE contínuo e, observação e estimação de estado discretas. A eficácia dos métodos de controle e das estratégias de navegação do robô é verificada através de simulações computacionais, nas quais a estratégia de navegação com horizonte recuável se mostra eficaz para a navegação precisa no ambiente
Abstract: The main focus of this thesis is the trajectory control and in-environment navigation of a differential-steering mobile robot. For the position control problem, three methods are used: a continuous-time linear feedback control; a suboptimal, nonlinear, continuous-time feedback control called SDRE (State-Dependent Riccati Equation) control; and a discrete-time SDRE control method. All of these methods are based on Lyapunov functions and applied to the reference tracking problem of the nonholonomic robot. For the purpose of environmental navigation, an odometry and stereo vision state observation model is considered, while the state estimation is given by the classic Kalman filter. Furthermore, two different navigation strategies are presented. One is fully discrete, using the discrete SDRE control method together with discrete state observation through the cameras and state estimation by the discrete Kalman filter. The other is a receding horizon strategy, using continuous-time SDRE control with discrete-time state observation and estimation. The efficacy of the control methods and navigation strategies is verified through numerical simulations. Both navigation strategies demonstrate good results, although the receding horizon one provides more precise navigation.
Doutorado
Mecanica dos Sólidos e Projeto Mecanico
Doutor em Engenharia Mecânica
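As a toy illustration of the discrete-time SDRE idea used in this thesis (our own example system, not the robot model from the work): factor the nonlinear dynamics as x_{k+1} = A(x_k) x_k + B u_k, solve a discrete Riccati equation at the current state, and apply the resulting LQR-style gain at each step.

```python
import numpy as np

def solve_dare(A, B, Q, R, iters=500):
    """Discrete algebraic Riccati equation by fixed-point (value) iteration."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return P

def sdre_step(x, dt=0.05):
    """One discrete SDRE control step for the toy system x1' = x2,
    x2' = -x1**3 + u, factored as A(x) = [[0, 1], [-x1**2, 0]]
    and Euler-discretized."""
    A = np.eye(2) + dt * np.array([[0.0, 1.0], [-x[0] ** 2, 0.0]])
    B = dt * np.array([[0.0], [1.0]])
    Q, R = np.eye(2), np.array([[1.0]])
    P = solve_dare(A, B, Q, R)
    u = -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A) @ x
    return u[0]

def simulate(x0, steps=300, dt=0.05):
    """Drive the toy system to the origin with state-dependent gains."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        u = sdre_step(x, dt)
        x = x + dt * np.array([x[1], -x[0] ** 3 + u])   # Euler integration
    return x
```

Re-solving the Riccati equation at every state is what distinguishes SDRE from a fixed-gain LQR; the factorization A(x) is not unique, which is part of the design freedom the method exploits.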
Einevik, Johan, and John Kurri. "Emulering av en produktioncell med Visionguidning : Virtuell idrifttagning." Thesis, Högskolan i Skövde, Institutionen för ingenjörsvetenskap, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-14143.
Using a virtual twin of a production cell makes it possible to perform programming and functional testing of panels in early stages of development. A virtual twin contributes to simpler debugging, helping to identify problems and minimize the cost of commissioning the production cell. The aim of the project is to investigate how well an emulated cell performs compared to the real production cell in a factory acceptance test. Another objective is to investigate how real CAD models can be used in the emulation and what criteria the models should meet. The project had many challenges, and one of them was the difficulty of emulating the safety systems. This was solved by bypassing the safety in the PLC program. One important aspect of emulation is communication between the different software packages used in the system. In this project, it proved successful to distribute the software across three computers to ease the workload of the programs used in the emulation. Using the emulated model instead of the real system is still in the research phase, but in this project many useful applications were identified that could change commissioning in the future.
Zhou, Dingfu. "Vision-based moving pedestrian recognition from imprecise and uncertain data." Thesis, Compiègne, 2014. http://www.theses.fr/2014COMP2162/document.
Full text
Vision-based Advanced Driver Assistance Systems (ADAS) face a complex and challenging task in real-world traffic scenarios. An ADAS aims at perceiving and understanding the surrounding environment of the ego-vehicle and providing necessary assistance to the driver in emergencies. In this thesis, we focus only on detecting and recognizing moving objects, because they are more dangerous than static ones. Detecting these objects, estimating their positions, and recognizing their categories are significantly important for ADAS and autonomous navigation. Consequently, we propose to build a complete system for moving-object detection and recognition based on vision sensors. The proposed approach can detect any kind of moving object based on two adjacent frames only. The core idea is to detect moving pixels using the Residual Image Motion Flow (RIMF), defined as the residual image changes caused by moving objects after compensating for camera motion. In order to robustly detect all kinds of motion and remove false positive detections, uncertainties in the ego-motion estimation and disparity computation are also considered. The main steps of the general algorithm are as follows: first, the relative camera pose is estimated by minimizing the sum of the reprojection errors of matched features, and its covariance matrix is computed using a first-order error-propagation strategy. Next, a motion likelihood for each pixel is obtained by propagating the uncertainties of the ego-motion and disparity to the RIMF. Finally, the motion likelihood and the depth gradient are used in a graph-cut-based approach to obtain the moving-object segmentation, while bounding boxes of moving objects are generated from the U-disparity map. After obtaining the bounding boxes, we want to classify each moving object as a pedestrian or not.
Compared to supervised classification algorithms (such as boosting and SVM), which require a large amount of labeled training instances, our proposed semi-supervised boosting algorithm is trained with only a few labeled instances and many unlabeled ones. First, the labeled instances are used to estimate probabilistic class labels for the unlabeled instances using Gaussian Mixture Models, after a dimension-reduction step performed via Principal Component Analysis. Then, a boosting strategy is applied to decision stumps trained on the computed soft-labeled instances. The performance of the proposed method is evaluated on several state-of-the-art classification datasets, as well as on a pedestrian detection and recognition problem. Finally, both the moving-object detection and recognition algorithms are tested on the public KITTI image dataset, and the experimental results show that the proposed methods achieve good performance in different urban scenarios.
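The semi-supervised labeling pipeline in the abstract above (PCA reduction, then Gaussian-based probabilistic labels for the unlabeled instances, then boosting on decision stumps) can be sketched in NumPy. This is an illustrative simplification: the per-class Gaussians below are fitted directly from the labeled data rather than with an EM-trained Gaussian Mixture Model, and the boosting stage is only indicated by the confidence weights that would feed it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a few labeled instances per class, many unlabeled ones
X_lab = np.vstack([rng.normal(0, 1, (5, 10)), rng.normal(3, 1, (5, 10))])
y_lab = np.array([0] * 5 + [1] * 5)
X_unl = np.vstack([rng.normal(0, 1, (50, 10)), rng.normal(3, 1, (50, 10))])

# 1) Dimension reduction via PCA (SVD on centred data, keep 2 components)
X_all = np.vstack([X_lab, X_unl])
mu = X_all.mean(axis=0)
_, _, Vt = np.linalg.svd(X_all - mu, full_matrices=False)

def project(X):
    return (X - mu) @ Vt[:2].T

Z_lab, Z_unl = project(X_lab), project(X_unl)

# 2) Probabilistic class labels for the unlabeled data from per-class
#    diagonal Gaussians fitted on the labeled instances
def soft_labels(Z):
    logp = []
    for c in (0, 1):
        m = Z_lab[y_lab == c].mean(axis=0)
        v = Z_lab[y_lab == c].var(axis=0) + 1e-6
        logp.append(-0.5 * (((Z - m) ** 2) / v + np.log(v)).sum(axis=1))
    logp = np.stack(logp, axis=1)
    p = np.exp(logp - logp.max(axis=1, keepdims=True))
    return p / p.sum(axis=1, keepdims=True)

p_unl = soft_labels(Z_unl)      # probabilistic (soft) class labels
pseudo = p_unl.argmax(axis=1)   # hard pseudo-labels
weights = p_unl.max(axis=1)     # confidences to weight the boosting stage
```

In the full method, the soft-labeled instances and their confidences would then train a sequence of decision stumps under a boosting strategy.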
Roos, André Filipe. "Controle de fixação atentivo para uma cabeça robótica com visão binocular." Universidade Tecnológica Federal do Paraná, 2016. http://repositorio.utfpr.edu.br/jspui/handle/1/2648.
Full text
Computer vision research is still far from replicating the adaptability and performance of the human visual system. Most of its consolidated techniques are valid only over static scenes and restrictive conditions. Robot heads represent an advance in terms of flexibility by carrying cameras that can be freely moved to explore the surroundings. Artificial observation of dynamic environments requires the solution of at least two problems: determining what relevant perceptual information should be extracted from the sensors, and how to control their movement in order to shift and hold gaze on targets with arbitrary shapes and motions. In this work, a general binocular gaze control system is proposed, and the subsystem responsible for targeting and following lateral displacements is designed, tested, and assessed on a four-degree-of-freedom robot head. The subsystem employs a popular low-level visual attention model to detect the most salient point in the scene, and a proportional-integral controller generates a conjunctive movement of the cameras to center it in the left camera image, assumed to be dominant. The development started with a detailed physical modeling of the pan-and-tilt mechanism that drives the cameras. The linearized structure obtained was then fitted to experimental input-output data via least-squares estimation. Finally, the controller gains were tuned by optimization and manual adjustment. The OpenCV-based implementation in C++ allowed real-time execution at 30 Hz. Experiments demonstrate that the system is capable of fixating highly salient, static targets without any prior knowledge or strong assumptions. Targets describing harmonic motion are naturally pursued, albeit with a phase shift. In cluttered scenes, where multiple potential targets compete for attention, the system may present oscillatory behavior, requiring fine adjustment of the algorithm weights for smooth operation.
The addition of a controller for the neck and a vergence controller to compensate for depth displacements are the next steps towards a generic artificial observer.
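The gaze control loop described in the abstract above (most salient pixel, then horizontal pixel error from the image center, then a proportional-integral pan command) can be illustrated with a minimal sketch. The gains, image size, and saliency map here are hypothetical stand-ins for the thesis's tuned controller and attention model:

```python
import numpy as np

class PIGazeController:
    """PI controller that turns the horizontal pixel error of the most
    salient point into a pan-velocity command (gains are illustrative)."""
    def __init__(self, kp=0.004, ki=0.0008, dt=1 / 30):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def step(self, saliency, image_width):
        # Most salient pixel (stand-in for a full visual-attention model)
        _, col = np.unravel_index(np.argmax(saliency), saliency.shape)
        error = col - image_width / 2           # pixels off-centre
        self.integral += error * self.dt        # accumulate for the I term
        return self.kp * error + self.ki * self.integral  # pan command

# A target left of centre in a 640x480 image yields a leftward (negative)
# pan command that recenters it in the dominant camera.
saliency = np.zeros((480, 640))
saliency[240, 100] = 1.0
ctrl = PIGazeController()
cmd = ctrl.step(saliency, image_width=640)
```

Running the loop at the camera frame rate (30 Hz in the thesis) lets the integral term remove steady-state offset while the proportional term drives the fast recentering.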
Pan, Wendy. "A simulated shape recognition system using feature extraction /." Online version of thesis, 1989. http://hdl.handle.net/1850/10496.
Full text
Mikhalsky, Maxim. "Efficient biomorphic vision for autonomous mobile robots." Queensland University of Technology, 2006. http://eprints.qut.edu.au/16206/.
Full text
Benlamri, Rachid. "A multiple-sensor based system for image inspection." Thesis, University of Manchester, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.307427.
Full text
Ng, Romney K. H. "Geon recognition using a mobile robot vision system." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape11/PQDD_0025/MQ50648.pdf.
Full textLagarde, Matthieu, Philippe Gaussier, and Pierre Andry. "Apprentissage de nouveaux comportements: vers le développement épigénétique d'un robot autonome." Phd thesis, Université de Cergy Pontoise, 2010. http://tel.archives-ouvertes.fr/tel-00749761.
Full text
Berg, Paula M. "Integrating vision into a computer integrated manufacturing system." Thesis, Virginia Tech, 1989. http://hdl.handle.net/10919/43754.
Full text
Master of Science