Theses on the topic "Robot vision"


Consult the top 50 theses for your research on the topic "Robot vision".


You can also download the full text of each publication in PDF format and read its abstract online whenever it is available in the metadata.

Explore theses on a wide variety of disciplines and organize your bibliography correctly.

1

Grech, Raphael. "Multi-robot vision". Thesis, Kingston University, 2013. http://eprints.kingston.ac.uk/27790/.

Full text
Abstract
It is expected nowadays that robots are able to work in real-life environments, possibly also sharing the same space with humans. These environments are generally cluttered and hard to train for. The work presented in this thesis focuses on developing an online, real-time, biologically inspired model that lets teams of robots collectively learn and memorise their visual environment in a very concise and compact manner, whilst sharing their experience with their peers (robots and possibly also humans). This work forms part of a larger project to develop a multi-robot platform capable of performing security patrol checks whilst also assisting people with physical and cognitive impairments, to be used in public places such as museums and airports. The main contribution of this thesis is the development of a model which makes robots capable of handling visual information, retaining information that is relevant to whatever task is at hand and eliminating superfluous information, trying to mimic human performance. This leads towards the great milestone of having a fully autonomous team of robots capable of collectively surveying, learning and sharing salient visual information of the environment even without any prior information. Solutions to endow a distributed team of robots with object detection and environment understanding capabilities are also provided. The way in which humans process, interpret and store visual information is studied, and these visual processes are emulated by a team of robots. In an ideal scenario, robots are deployed in a totally unknown environment and incrementally learn and adapt to operate within that environment. Each robot is an expert in its own area; however, all robots possess enough knowledge about other areas to guide users sufficiently until a more knowledgeable robot takes over. Although this is not a strict limitation, it is assumed that, once deployed, each robot operates in its own environment for most of its lifetime, and the longer the robots remain in the area the more refined their memory becomes. Robots should be able to automatically recognize previously learnt features, such as faces and known objects, whilst also learning other new information. Salient information extracted from the incoming video streams can be used to select keyframes to be fed into a visual memory, thus allowing the robot to learn new interesting areas within its environment. The cooperating robots are to operate successfully within their environment, automatically gathering visual information and storing it in a compact yet meaningful representation. The storage has to be dynamic, as the visual information extracted by the robot team might change. Due to the initial lack of knowledge, small sets of visual memory classes need to evolve as the robots acquire visual information. Keeping memory size within limits whilst maximising the information content is one of the main factors to consider.
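As an illustration of the saliency-driven keyframe selection sketched in this abstract, the fragment below keeps a frame only when a crude saliency score departs noticeably from the last stored keyframe. The gradient-energy score, the threshold and the greyscale numpy input are all illustrative assumptions, not the model developed in the thesis:

    import numpy as np

    def saliency(frame):
        # crude stand-in for a saliency measure: mean gradient energy
        gy, gx = np.gradient(frame.astype(float))
        return float(np.mean(np.hypot(gx, gy)))

    def select_keyframes(frames, threshold=2.0):
        # store a frame only when its saliency departs noticeably from the
        # most recent keyframe, keeping the visual memory compact
        memory = []
        for frame in frames:
            if not memory or abs(saliency(frame) - saliency(memory[-1])) > threshold:
                memory.append(frame)
        return memory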
2

Li, Wan-chiu. "Localization of a mobile robot by monocular vision /". Hong Kong : University of Hong Kong, 2001. http://sunzi.lib.hku.hk/hkuto/record.jsp?B23765896.

Full text
3

Baba, Akihiko. "Robot navigation using ultrasonic feedback". Morgantown, W. Va. : [West Virginia University Libraries], 1999. http://etd.wvu.edu/templates/showETD.cfm?recnum=677.

Full text
Abstract
Thesis (M.S.)--West Virginia University, 1999.
Title from document title page. Document formatted into pages; contains viii, 122 p. : ill. Includes abstract. Includes bibliographical references (p. 57-59).
4

Roth, Daniel R. (Daniel Risner) 1979. "Vision based robot navigation". Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/17978.

Full text
Abstract
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004.
Includes bibliographical references (p. 53-54).
In this thesis we propose a vision-based robot navigation system that constructs a high level topological representation of the world. A robot using this system learns to recognize rooms and spaces by building a hidden Markov model of the environment. Motion planning is performed by doing bidirectional heuristic search with a discrete set of actions that account for the robot's nonholonomic constraints. The intent of this project is to create a system that allows a robot to be able to explore and to navigate in a wide variety of environments in a way that facilitates goal-oriented tasks.
by Daniel R. Roth.
M.Eng.
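The room-recognition idea in this abstract can be pictured with a toy hidden Markov model: rooms are hidden states, the vision front end emits discrete appearance classes, and the forward algorithm maintains a belief over rooms. All numbers below are invented for illustration; this is a sketch of the general technique, not Roth's system:

    import numpy as np

    A = np.array([[0.8, 0.2, 0.0],    # transition probabilities between rooms
                  [0.1, 0.8, 0.1],
                  [0.0, 0.2, 0.8]])
    B = np.array([[0.7, 0.2, 0.1],    # P(appearance class | room)
                  [0.1, 0.7, 0.2],
                  [0.2, 0.1, 0.7]])
    pi = np.array([1/3, 1/3, 1/3])    # uniform prior over rooms

    def room_belief(observations):
        # forward algorithm: posterior belief over rooms after each observation
        belief = pi * B[:, observations[0]]
        belief /= belief.sum()
        for o in observations[1:]:
            belief = (A.T @ belief) * B[:, o]
            belief /= belief.sum()
        return belief

    print(room_belief([0, 0, 1, 2]))  # posterior over the three rooms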
5

Skinner, John R. "Simulation for robot vision". Thesis, Queensland University of Technology, 2022. https://eprints.qut.edu.au/227404/1/John_Skinner_Thesis.pdf.

Full text
Abstract
This thesis examined the effectiveness of using computer graphics technologies to create simulated data for vision-enabled robots. The research demonstrated that while the behaviour of a robot is greatly affected by what it is looking at, simulated scenes can produce position estimates similar to specific real locations. The findings show that robots need to be tested in a very wide range of different contexts to understand their performance, and simulation provides a cost-effective route to that evaluation.
6

李宏釗 and Wan-chiu Li. "Localization of a mobile robot by monocular vision". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2001. http://hub.hku.hk/bib/B31226371.

Full text
7

Luh, Cheng-Jye 1960. "Hierarchical modelling of mobile, seeing robots". Thesis, The University of Arizona, 1989. http://hdl.handle.net/10150/276998.

Full text
Abstract
This thesis describes the implementation of a hierarchical robot simulation environment which supports the design of robots with vision and mobility. A seeing robot model applies a classification expert system for visual identification of laboratory objects. The visual data acquisition algorithm used by the robot vision system has been developed to exploit multiple viewing distances and perspectives. Several different simulations have been run testing the visual logic in a laboratory environment. Much work remains to integrate the vision system with the rest of the robot system.
8

Chen, Haoyao. "Towards multi-robot formations : study on vision-based localization system /". City University of Hong Kong, 2009. http://libweb.cityu.edu.hk/cgi-bin/ezdb/thesis.pl?phd-meem-b3008295xf.pdf.

Full text
Abstract
Thesis (Ph.D.)--City University of Hong Kong, 2009.
"Submitted to Department of Manufacturing Engineering and Engineering Management in partial fulfillment of the requirements for the degree of Doctor of Philosophy." Includes bibliographical references (leaves 87-100)
9

Öfjäll, Kristoffer. "Online Learning for Robot Vision". Licentiate thesis, Linköpings universitet, Datorseende, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-110892.

Full text
Abstract
In tele-operated robotics applications, the primary information channel from the robot to its human operator is a video stream. For autonomous robotic systems however, a much larger selection of sensors is employed, although the most relevant information for the operation of the robot is still available in a single video stream. The issue lies in autonomously interpreting the visual data and extracting the relevant information, something humans and animals perform strikingly well. On the other hand, humans have great difficulty expressing what they are actually looking for on a low level, suitable for direct implementation on a machine. For instance objects tend to be already detected when the visual information reaches the conscious mind, with almost no clues remaining regarding how the object was identified in the first place. This became apparent already when Seymour Papert gathered a group of summer workers to solve the computer vision problem 48 years ago [35]. Artificial learning systems can overcome this gap between the level of human visual reasoning and low-level machine vision processing. If a human teacher can provide examples of what to be extracted and if the learning system is able to extract the gist of these examples, the gap is bridged. There are however some special demands on a learning system for it to perform successfully in a visual context. First, low level visual input is often of high dimensionality such that the learning system needs to handle large inputs. Second, visual information is often ambiguous such that the learning system needs to be able to handle multimodal outputs, i.e. multiple hypotheses. Typically, the relations to be learned are non-linear and there is an advantage if data can be processed at video rate, even after presenting many examples to the learning system. In general, there seems to be a lack of such methods. This thesis presents systems for learning perception-action mappings for robotic systems with visual input. A range of problems are discussed, such as vision based autonomous driving, inverse kinematics of a robotic manipulator and controlling a dynamical system. Operational systems demonstrating solutions to these problems are presented. Two different approaches for providing training data are explored, learning from demonstration (supervised learning) and explorative learning (self-supervised learning). A novel learning method fulfilling the stated demands is presented. The method, qHebb, is based on associative Hebbian learning on data in channel representation. Properties of the method are demonstrated on a vision-based autonomously driving vehicle, where the system learns to directly map low-level image features to control signals. After an initial training period, the system seamlessly continues autonomously. In a quantitative evaluation, the proposed online learning method performed comparably with state of the art batch learning methods.
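A minimal sketch of the two ingredients named here, channel representation and associative Hebbian learning, assuming scalar inputs and outputs in [0, 1]; the cos² kernels and the plain outer-product update are simplifications of the normalized qHebb rule described in the thesis:

    import numpy as np

    def encode(x, centers, width=0.1):
        # channel (population) coding: a few overlapping cos^2 kernels respond
        d = np.abs(x - centers) / width
        a = np.cos(np.pi * d / 3) ** 2
        a[d > 1.5] = 0.0
        return a

    centers = np.linspace(0.0, 1.0, 10)
    W = np.zeros((10, 10))            # associative weights, input -> output channels

    # Hebbian outer-product updates from (input, output) training pairs
    for x, y in [(0.1, 0.9), (0.5, 0.5), (0.9, 0.1)]:
        W += np.outer(encode(y, centers), encode(x, centers))

    response = W @ encode(0.5, centers)
    print(centers[np.argmax(response)])  # decodes to roughly 0.5 (limited by channel spacing)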
10

Devillard, François. "Vision du robot mobile Mithra". Grenoble INPG, 1993. http://www.theses.fr/1993INPG0112.

Full text
Abstract
We propose an on-board stereoscopic vision system intended for the navigation of a mobile robot on an industrial site. In mobile robotics, vision systems are subject to severe operating constraints (real-time processing, volume, power consumption, etc.). For 3D modelling of the environment, the vision system must use visual cues that allow a compact, precise and robust encoding of the observed scene. To best meet the speed constraints, we set out to extract from the images the most significant information from a topological point of view. For missions on industrial sites, most premises exhibit orthogonal geometries such as wall intersections, doors, windows and furniture. Detecting near-vertical geometry provides a sufficient description of the environment while reducing the redundancy of the visual information to a satisfactory degree. The cues used are vertical line segments extracted from two stereoscopic images. We propose algorithmic solutions for edge detection and polygonal approximation suited to a real-time implementation. We then present the vision system that was built. It consists of two VME boards: the first is a hard-wired systolic operator implementing image acquisition and edge detection; the second is designed around a digital signal processor and performs the polygonal approximation. The design and construction of this vision system were carried out within the mobile robotics project EUREKA EU 110 (Mithra).
11

Sequeira, Gerard. "Vision based leader-follower formation control for mobile robots". Diss., Rolla, Mo. : University of Missouri-Rolla, 2007. http://scholarsmine.mst.edu/thesis/pdf/Sequeira_09007dcc804429d4.pdf.

Full text
Abstract
Thesis (M.S.)--University of Missouri--Rolla, 2007.
Vita. The entire thesis text is included in the file. Title from title screen of thesis/dissertation PDF file (viewed February 13, 2008). Includes bibliographical references (p. 39-41).
12

Michaud, Christian 1958. "Multi-robot workcell with vision for integrated circuit assembly". Thesis, McGill University, 1986. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=65433.

Full text
13

Wirbel, Émilie. "Localisation et navigation d’un robot humanoïde en environnement domestique". Thesis, Paris, ENMP, 2014. http://www.theses.fr/2014ENMP0058/document.

Full text
Abstract
This thesis covers the topic of low-cost humanoid robot localization and navigation in a dynamic unconstrained environment. It is the result of a collaboration between the Centre for Robotics of Mines ParisTech and Aldebaran, whose robots, NAO and Pepper, are used as experimental platforms. We describe how to derive information on the orientation and the position of the robot under tight constraints on computing power, sensor field of view and environment genericity. The environment is represented using a topological formalism: places are stored in vertices and connected by transitions. The environment is learned in a preliminary phase, which allows the robot to construct a reference. The main contribution of this PhD thesis lies in orientation and approximate position measurement methods, based on monocular cameras with a restricted field of view, and their integration into a topological structure. To localize the robot in the graph, we mainly use data provided by the robot's monocular cameras, while also allowing extensions, for example with a 3D camera. The different localization methods are combined into a hierarchical structure, which makes the whole process more robust and merges the estimations. A trajectory control has also been developed in order to transition accurately from one vertex to another, and incidentally to provide feedback on the walk of the robot. The results of this thesis have been integrated into Aldebaran's software suite and thoroughly tested in various conditions, in order to validate the conclusions and prepare a client delivery.
14

Harper, Jason W. "Fast Template Matching For Vision-Based Localization". Cleveland, Ohio : Case Western Reserve University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=case1238689057.

Full text
Abstract
Thesis (M.S.)--Case Western Reserve University, 2009
Department of Computer Engineering. Title from OhioLINK abstract screen (viewed on 13 April 2009). Available online via the OhioLINK ETD Center.
15

Switzer, Barbara T. "Robotic path planning with obstacle avoidance /". Online version of thesis, 1993. http://hdl.handle.net/1850/11712.

Full text
16

Garratt, Matthew A. "Biologically inspired vision and control for an autonomous flying vehicle /". Australian Digital Theses Program, 2007. http://thesis.anu.edu.au/public/adt-ANU20090116.154822/index.html.

Full text
17

Bautista, Ballester Jordi. "Human-robot interaction and computer-vision-based services for autonomous robots". Doctoral thesis, Universitat Rovira i Virgili, 2016. http://hdl.handle.net/10803/398647.

Full text
Abstract
Imitation Learning (IL), or robot Programming by Demonstration (PbD), covers methods by which a robot learns new skills through human guidance and imitation. PbD takes its inspiration from the way humans learn new skills by imitation in order to develop methods by which new tasks can be transmitted to robots. This thesis is motivated by the generic question of "what to imitate?", which concerns the problem of how to extract the essential features of a task. To this end, we adopt an Action Recognition (AR) perspective in order to allow the robot to decide what has to be imitated or inferred when interacting with a human. The proposed approach is based on a well-known method from natural language processing: namely, Bag of Words (BoW). This method is applied to large databases in order to obtain a trained model. Although BoW is a machine learning technique used in various fields of research, in action classification for robot learning it is far from accurate. Moreover, it focuses on the classification of objects and gestures rather than actions. Thus, in this thesis we show that the method is suitable in action classification scenarios for merging information from different sources or different trials. This thesis makes three contributions: (1) it proposes a general method for dealing with action recognition and thus contributes to imitation learning; (2) the methodology can be applied to large databases which include different modes of action capture; and (3) the method is applied in a real international innovation project called Vinbot.
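The Bag of Words pipeline referred to here can be sketched in a few lines: cluster local descriptors into a codebook, represent each clip as a normalized word histogram, and train a classifier on the histograms. The variable names and the choice of KMeans plus a linear SVM are assumptions for illustration, not the exact pipeline of the thesis:

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import LinearSVC

    def bow_histograms(descriptors_per_clip, k=50):
        # build a codebook from all local descriptors, then describe each
        # clip as a normalized histogram of visual-word occurrences
        codebook = KMeans(n_clusters=k, n_init=10).fit(np.vstack(descriptors_per_clip))
        hists = []
        for desc in descriptors_per_clip:
            h = np.bincount(codebook.predict(desc), minlength=k).astype(float)
            hists.append(h / h.sum())
        return np.array(hists), codebook

    # descriptors_per_clip: list of (n_i, d) feature arrays, one per video clip,
    # and labels: one action label per clip (both hypothetical inputs)
    # hists, codebook = bow_histograms(descriptors_per_clip)
    # clf = LinearSVC().fit(hists, labels)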
18

Onder, Murat. "Face Detection And Active Robot Vision". Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/2/12605290/index.pdf.

Full text
Abstract
The main task in this thesis is to design a robot vision system with face detection and tracking capability. The thesis therefore comprises two main pieces of work. First, a face must be detected in an image taken from the camera on the robot. This is a demanding real-time image processing task, so timing constraints are critical: a processing rate of 1 frame/second was targeted, which required a fast face detection algorithm. The Eigenface method and the Subspace LDA (Linear Discriminant Analysis) method were implemented, tested and compared for face detection, and the Eigenface method proposed by Turk and Pentland was selected. The images are first passed through a number of preprocessing algorithms, such as skin detection and histogram equalization, to obtain better performance. After this filtering, the face candidate regions are put through the face detection algorithm to decide whether the image contains a face. Some modifications were applied to the Eigenface algorithm to detect faces better and faster. Second, the robot must move towards the face in the image, which involves robot motion. The robot used for this purpose is a Pioneer 2-DX8 Plus, a product of ActivMedia Robotics Inc.; only the interfaces needed to move the robot were implemented in the thesis software. The robot detects faces at different distances and adjusts its position according to the distance between the human and the robot. A scaling mechanism must therefore be applied either to the training images or to the input image taken from the camera. Because of the timing constraint and the low camera resolution, only a limited number of scales is applied in the face detection process, so the faces of people who are very far from or very close to the robot are not detected. The aim was a background-independent face detection system; however, the resulting algorithm is slightly dependent on the background. There are no other constraints on the system.
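The Eigenface decision criterion hinted at above (is a candidate region close to the "face space"?) can be sketched as follows, using the Turk and Pentland trick of taking eigenvectors of the small Gram matrix; the thresholds, image sizes and training set are left as assumptions:

    import numpy as np

    def train_eigenfaces(faces, k=20):
        # faces: (n, h*w) flattened training face images, with k <= n
        mean = faces.mean(axis=0)
        X = faces - mean
        # eigenvectors of the small n x n matrix (Turk & Pentland trick)
        evals, evecs = np.linalg.eigh(X @ X.T)
        top = np.argsort(evals)[::-1][:k]
        eigenfaces = X.T @ evecs[:, top]
        eigenfaces /= np.linalg.norm(eigenfaces, axis=0)
        return mean, eigenfaces

    def distance_from_face_space(patch, mean, eigenfaces):
        # small reconstruction error suggests the candidate region is a face
        x = patch - mean
        return np.linalg.norm(x - eigenfaces @ (eigenfaces.T @ x))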
19

Davison, Andrew John. "Mobile robot navigation using active vision". Thesis, University of Oxford, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.298606.

Full text
20

Burbridge, Christopher James Charles. "Efficient robot navigation with omnidirectional vision". Thesis, University of Ulster, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.522396.

Full text
21

Marshall, Christopher. "Robot trajectory generation using computer vision". Thesis, University of Newcastle Upon Tyne, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.443107.

Full text
22

Ribeiro, Luís Miguel Saraiva. "Object recognition for semantic robot vision". Master's thesis, Universidade de Aveiro, 2008. http://hdl.handle.net/10773/2057.

Full text
Abstract
Master's degree in Computer and Telematics Engineering
Recognizing objects in an everyday scene is a major step towards unsupervised image understanding. An intelligent agent needs first to identify each object in an environmental scene so that it can eventually understand all the dynamics of the semantic content. However, unsupervised learning and unsupervised object recognition remain great challenges in the vision research area. Our work is a transverse approach to unsupervised object learning and object recognition. We built an agent capable of locating, in a complex scene, an instance of a requested category. The name of a category is uploaded to the agent's system and it autonomously learns the category's appearance by searching the Internet for examples of the category. It then explores a static picture of the surrounding environment, looking for an instance of the previously learned category. This dissertation focuses on object detection and object recognition in a complex picture scene. We use the Scale Invariant Feature Transform (SIFT) and Roy's Shape Representation (RSR) to represent an object, and an ensemble of several classification techniques to recognize an object. To obtain the object's location in the complex scene we used scene segmentation, based on image colour saliencies, and object extraction based on contour analysis.
23

Arthur, Richard B. "Vision-Based Human Directed Robot Guidance". Diss., 2004. http://contentdm.lib.byu.edu/ETD/image/etd564.pdf.

Full text
24

Nitz, Pettersson Hannes y Samuel Vikström. "VISION-BASED ROBOT CONTROLLER FOR HUMAN-ROBOT INTERACTION USING PREDICTIVE ALGORITHMS". Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-54609.

Full text
Abstract
The demand for robots that work in environments together with humans is growing. This places new requirements on robot systems, such as the need to be perceived as responsive and accurate in human interactions. This thesis explores the possibility of using AI methods to predict the movement of a human and evaluates whether that information can assist a robot in human interactions. The AI methods used are a Long Short-Term Memory (LSTM) network and an artificial neural network (ANN). Both networks were trained on data from a motion-capture dataset and on four different prediction horizons: 1/2, 1/4, 1/8 and 1/16 of a second. The evaluation was performed directly on the dataset to determine the prediction error. The neural networks were also evaluated on a robotic arm in a simulated environment, to show whether the prediction methods would be suitable for a real-life system. Both methods show promising results when comparing the prediction error. From the simulated system, it could be concluded that with the LSTM prediction the robotic arm would generally precede the actual position. The results indicate that the methods described in this thesis could be used as a stepping stone for a human-robot interactive system.
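A minimal PyTorch sketch of an LSTM predictor of the kind evaluated in the thesis: a window of past 3-D positions goes in, the position at a chosen horizon (e.g. 1/16 s to 1/2 s ahead) comes out. Shapes, hidden size and the random stand-in data are illustrative assumptions:

    import torch
    import torch.nn as nn

    class MotionPredictor(nn.Module):
        # predicts a future position from a window of past positions
        def __init__(self, dim=3, hidden=64):
            super().__init__()
            self.lstm = nn.LSTM(dim, hidden, batch_first=True)
            self.head = nn.Linear(hidden, dim)

        def forward(self, x):              # x: (batch, time, dim)
            out, _ = self.lstm(x)
            return self.head(out[:, -1])   # position at the chosen horizon

    model = MotionPredictor()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # windows: past positions; targets: the position at the prediction horizon,
    # both normally taken from a motion-capture set (random stand-ins here)
    windows, targets = torch.randn(32, 30, 3), torch.randn(32, 3)
    loss = loss_fn(model(windows), targets)
    opt.zero_grad(); loss.backward(); opt.step()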
25

Stark, Per. "Machine vision camera calibration and robot communication". Thesis, University West, Department of Technology, Mathematics and Computer Science, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:hv:diva-1351.

Full text
Abstract

This thesis is part of a larger project included in the European project AFFIX. The aim of the project is to develop a new method of assembling an aircraft engine part so that weight and manufacturing costs are reduced. The proposal is to weld sheet-metal parts instead of using cast parts. A machine vision system is suggested for detecting the joints for the weld assembly operation of the sheet metal. The final system aims to locate a hidden curve on an object. The coordinates of the curve are calculated by the machine vision system and sent to a robot, which should create and follow a path using those coordinates. The accuracy in locating the curve, for an approved weld joint, must be within +/- 0.5 mm. This report investigates the accuracy of the camera calibration and the positioning of the robot. It also touches on the importance of good lighting when obtaining images for a vision system, and includes the development of a robot program that receives these coordinates and transforms them into robot movements. The camera calibration is done in a MATLAB toolbox, which extracts the intrinsic camera parameters, such as the focal length f (the distance between the centre of the lens and the optical detector in the camera), the lens distortion parameters and the principal point. It also returns the location and orientation of the camera at each image obtained during the calibration: the extrinsic parameters. The intrinsic parameters are used when translating between image coordinates and camera coordinates, and the extrinsic parameters are used when translating between camera coordinates and world coordinates. The results of this project are a transformation matrix that translates the robot's position into the camera's position, and a robot program that can receive a large number of coordinates, store them and create a path to move along for the weld application.
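The role of the intrinsic and extrinsic parameters described here can be shown with a small pinhole-camera sketch: K maps between pixels and camera rays, and (R, t) maps between the camera and world frames. The numeric values are placeholders, not the calibrated parameters of the project:

    import numpy as np

    K = np.array([[800.0, 0.0, 320.0],    # intrinsics: focal lengths and
                  [0.0, 800.0, 240.0],    # principal point (illustrative values)
                  [0.0,   0.0,   1.0]])
    R = np.eye(3)                         # extrinsics: camera orientation
    t = np.array([0.0, 0.0, 0.5])         # and position

    def image_to_world(u, v, depth):
        # pixel -> camera ray (via K^-1) -> world point (via extrinsics),
        # assuming the working distance (depth) along the ray is known
        ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
        p_cam = ray * depth
        return R.T @ (p_cam - t)

    print(image_to_world(320, 240, 1.0))  # the principal point maps onto the optical axis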

26

MARTURI, ANJANILAKSHMIKRISNANARESH. "Vision Based Grasp Planning for Robot Assembly". Thesis, Örebro universitet, Akademin för naturvetenskap och teknik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-12402.

Full text
27

Zhang, Chi. "Vision-based robot localization without explicit landmarks". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape10/PQDD_0003/MQ44050.pdf.

Full text
28

Zhang, Chi 1969. "Vision based robot localization without explicit landmarks". Thesis, McGill University, 1998. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=20527.

Full text
Abstract
The problem of locating a robot within an environment is significant particularly in the context of mobile robot localization and navigation.
This thesis presents a new approach to mobile robot localization that avoids the selection of landmarks and the use of an explicit model. Instead, it uses low-level primitive features from video data and learns to convert these features into a representation of the robot pose. The conversion from video data to robot poses is implemented using a multi-layer neural network trained by back-propagation. In addition, a key aspect of the approach is the use of a confidence measure to eliminate incorrect components of pose estimates, and of dead reckoning to complement the neural network estimates. Finally, the approach is generalized to allow a mobile robot to navigate in a large environment.
Presenting a number of experimental results in several sample environments, the thesis suggests the accuracy of the technique is good while the on-line computational cost is very low. Thus, accurate localization of a mobile robot is achievable in real time.
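One way to picture the confidence gating mentioned in this abstract: accept each component of the network's pose estimate only when its confidence passes a threshold, and fall back on dead reckoning otherwise. The threshold and the per-component confidence format are assumptions of this sketch, not Zhang's exact mechanism:

    import numpy as np

    def fuse_pose(nn_pose, nn_conf, dr_pose, conf_min=0.6):
        # per-component gating: trust the network where it is confident,
        # fall back on the dead-reckoning estimate elsewhere
        nn_pose, dr_pose = np.asarray(nn_pose), np.asarray(dr_pose)
        use_nn = np.asarray(nn_conf) >= conf_min
        return np.where(use_nn, nn_pose, dr_pose)

    # x, y, heading from the network with per-component confidences,
    # plus the odometry estimate; the low-confidence y falls back to odometry
    print(fuse_pose([1.2, 0.8, 0.30], [0.9, 0.4, 0.7], [1.0, 1.0, 0.25]))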
29

Herrod, Nicholas John. "Three-dimensional robot vision using structured illumination". Thesis, University of Cambridge, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.257308.

Full text
30

Beardsley, Paul Anthony. "Applications of projective geometry to robot vision". Thesis, University of Oxford, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.316854.

Full text
31

Snailum, Nicholas. "Mobile robot navigation using single camera vision". Thesis, University of East London, 2001. http://roar.uel.ac.uk/3565/.

Full text
Abstract
This thesis describes the research carried out in overcoming the problems encountered during the development of an autonomous mobile robot (AMR) which uses a single television camera for navigation in environments with visible edges, such as corridors and hallways. The objective was to determine the minimal sensing and signal processing requirements for a real AMR that could achieve self-steering, navigation and obstacle avoidance in real unmodified environments. A goal was to design algorithms that could meet the objective while being able to run on a laptop personal computer (PC). This constraint confined the research to computationally efficient algorithms and memory management techniques. The methods by which the objective was successfully achieved are described. A number of noise reduction and feature extraction algorithms have been tested to determine their suitability in this type of environment, and where possible these have been modified to improve their performance. The current methods of locating lines of perspective and vanishing points in images are described, and a novel algorithm has been devised for this application which is more efficient in both its memory usage and execution time. A novel obstacle avoidance mechanism is described which is shown to provide the low level piloting capability necessary to deal with unexpected situations. The difficulties of using a single camera are described, and it is shown that a second camera is required in order to provide robust performance. A prototype AMR was built and used to demonstrate reliable navigation and obstacle avoidance in real time in real corridors. Test results indicate that the prototype could be developed into a competitively priced commercial service robot.
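The vanishing-point computation at the heart of this kind of corridor navigation can be sketched as a least-squares intersection of detected edge lines written as a*x + b*y + c = 0; the example lines below are fabricated so they meet near (100, 50). This is the textbook formulation, not the thesis's novel memory- and time-efficient algorithm:

    import numpy as np

    def vanishing_point(lines):
        # least-squares intersection of lines a*x + b*y + c = 0:
        # perspective edges of a corridor are concurrent at the vanishing point
        A = lines[:, :2]
        c = lines[:, 2]
        vp, *_ = np.linalg.lstsq(A, -c, rcond=None)
        return vp

    lines = np.array([[ 0.50, -1.0,   0.0],   # y = 0.5 x
                      [-0.25, -1.0,  75.0],   # y = 75 - 0.25 x
                      [ 1.00, -1.0, -50.0]])  # y = x - 50
    print(vanishing_point(lines))             # approximately (100, 50)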
32

Betke, Margrit. "Learning and vision algorithms for robot navigation". Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/11402.

Full text
33

Miura, Kanako. "Robot hand positioning and grasping using vision". Université Louis Pasteur (Strasbourg) (1971-2008), 2004. https://publication-theses.unistra.fr/public/theses_doctorat/2004/MIURA_Kanako_2004.pdf.

Full text
Abstract
Recently, significant developments have been made in the design of practical robot manipulators and hands that can perform various manipulation tasks required in different fields. However, most industrial robots have been designed to perform only specific movements based on a priori knowledge of the object to be manipulated. Therefore they cannot accomplish tasks when the target object (e.g., its mass, shape, or position) is unknown, or when the relative position of the vision system with respect to the robot is unknown. In this thesis, the complete grasping task is investigated. The manipulator has an uncalibrated camera system, and the simple precision gripper has two fingers on the end-effector. The problem is divided into two tasks: the positioning task of the manipulator, and the grasping task of the robot hand. Most previous works on visual servoing assume that the kinematic model of the robot, a model of the object, and the camera intrinsic parameters are known; they would fail if the robot and the vision system were not fully known. We employ an indirect (look-and-move) scheme for the versatility and stability brought by the internal joint controllers. A novel approach for uncalibrated and model-less visual servoing using a modified simplex iterative search method is proposed. The basic idea behind this method is to compare the value of the objective function in several configurations and to move to the next configurations in order to decrease this value. Demonstrations with a 6-DOF industrial manipulator show the efficiency of this method. Humans have the ability to touch an object without inducing large displacements, even if it is light and could easily fall. Though such a skill is very important when the object is fragile, few investigations have been made so far on soft grasping, and it has not yet been applied to the control laws of robot hands. In this thesis, experimental studies are carried out on human grasping with the index finger and the thumb (precision grasp). The features of contact motions obtained by measuring human motions are applied to a robot grasping task. The "soft" contact motion is demonstrated with a robot hand with two fingers controlled individually. Each finger has two pairs of strain gauges as force sensors. A vision system is also available, with a camera for real-time visual feedback.
34

Alkhulayfi, Khalid Abdullah. "Vision-Based Motion for a Humanoid Robot". PDXScholar, 2016. https://pdxscholar.library.pdx.edu/open_access_etds/3176.

Full text
Abstract
The overall objective of this thesis is to build an integrated, inexpensive, human-sized humanoid robot from scratch that looks and behaves like a human. More specifically, the goal is to build an android robot called the Marie Curie robot that can act like a human actor in the Portland Cyber Theater in the play Quantum Debate, with a known script for every robot behavior. To achieve this goal, the humanoid robot needs to have degrees of freedom (DOF) similar to human DOFs. Each part of the Curie robot was built to achieve the goal of building a complete humanoid robot. The important additional constraints of this project were: 1) to build the robot from available components, 2) to minimize costs, and 3) to be simple enough that the design can be replicated by non-experts, so they can create robot theaters worldwide. Furthermore, the robot appears lifelike because it executes two main behaviors like a human being. The first behavior is tracking, where the humanoid robot uses a tracking algorithm to follow a human being: the algorithm allows the robot to control its neck, using information taken from the vision system, to look at the nearest human face. The robot uses the same vision system to track labeled objects. The second behavior is grasping, where the inverse kinematics (IK) is calculated so the robot can move its hand to a specific coordinate in the surrounding space. IK gives the robot the ability to move its end-effector (hand) in a manner closer to how humans move their hands.
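The grasping behavior relies on inverse kinematics; the closed-form solution for a 2-link planar arm below conveys the idea. The link lengths and the elbow-down choice are illustrative assumptions, and the Curie robot's arm has more degrees of freedom:

    import numpy as np

    def two_link_ik(x, y, l1=0.3, l2=0.25):
        # closed-form IK for a 2-link planar arm (elbow-down solution)
        d2 = x * x + y * y
        c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
        if abs(c2) > 1:
            raise ValueError("target out of reach")
        q2 = np.arccos(c2)
        q1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(q2), l1 + l2 * np.cos(q2))
        return q1, q2

    q1, q2 = two_link_ik(0.4, 0.2)
    # forward-kinematics check: the hand lands back on the requested point
    print(0.3 * np.cos(q1) + 0.25 * np.cos(q1 + q2),
          0.3 * np.sin(q1) + 0.25 * np.sin(q1 + q2))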
35

Mikhalsky, Maxim. "Efficient biomorphic vision for autonomous mobile robots". Thesis, Queensland University of Technology, 2006. https://eprints.qut.edu.au/16206/1/Maxim_Mikhalsky_Thesis.pdf.

Full text
Abstract
Autonomy is the most enabling and the least developed robot capability. A mobile robot is autonomous if capable of independently attaining its objectives in unpredictable environment. This requires interaction with the environment by sensing, assessing, and responding to events. Such interaction has not been achieved. The core problem consists in limited understanding of robot autonomy and its aspects, and is exacerbated by the limited resources available in a small autonomous mobile robot such as energy, information, and space. This thesis describes an efficient biomorphic visual capability that can provide purposeful interaction with environment for a small autonomous mobile robot. The method used for achieving this capability comprises synthesis of an integral paradigm of a purposeful autonomous mobile robot, formulation of requirements for the visual capability, and development of efficient algorithmic and technological solutions. The paradigm is a product of analysis of fundamental aspects of the problem, and the insights found in inherently autonomous biological organisms. Based on this paradigm, analysis of the biological vision and the available technological basis, and the state-of-the-art in vision algorithms, the requirements were formulated for a biomorphic visual capability that provides the situation awareness capability for a small autonomous mobile robot. The developed visual capability is comprised of a sensory and processing architecture, an integral set of motion vision algorithms, and a method for visual ranging of still objects that is based on them. These vision algorithms provide motion detection, fixation, and tracking functionality with low latency and computational complexity. High temporal resolution of CMOS imagers is exploited for reducing the logical complexity of image analysis, and consequently the computational complexity of the algorithms. The structure of the developed algorithms conforms to the arithmetic and memory resources available in a system on a programmable chip (SoPC), which allows complete confinement of the high-bandwidth datapath within a SoPC device and therefore high-speed operation by design. The algorithms proved to be functional, which validates the developed visual capability. The experiments confirm that high temporal resolution imaging simplifies image motion structure, and ultimately the design of the robot vision system.
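The point made here, that high temporal resolution keeps inter-frame motion small and so keeps image analysis cheap, can be illustrated with plain temporal differencing; the threshold and the synthetic frames are assumptions of this sketch rather than the thesis's SoPC design:

    import numpy as np

    def detect_motion(prev, curr, threshold=12):
        # temporal differencing: at high frame rates inter-frame motion is
        # small, so a simple thresholded difference isolates moving regions
        diff = np.abs(curr.astype(int) - prev.astype(int))
        mask = diff > threshold
        if not mask.any():
            return None
        ys, xs = np.nonzero(mask)
        return xs.mean(), ys.mean()          # centroid to fixate on / track

    prev = np.zeros((120, 160), dtype=np.uint8)
    curr = prev.copy()
    curr[40:60, 70:90] = 200                 # a bright patch appears
    print(detect_motion(prev, curr))         # ~ (79.5, 49.5)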
36

Mikhalsky, Maxim. "Efficient biomorphic vision for autonomous mobile robots". Queensland University of Technology, 2006. http://eprints.qut.edu.au/16206/.

Full text
37

Gu, Lifang. "Visual guidance of robot motion". University of Western Australia. Dept. of Computer Science, 1996. http://theses.library.uwa.edu.au/adt-WU2003.0004.

Full text
Abstract
Future robots are expected to cooperate with humans in daily activities. Efficient cooperation requires new techniques for transferring human skills to robots. This thesis presents an approach by which a robot can extract and replicate a motion by observing how a human instructor conducts it. In this way, the robot can be taught without any explicit instructions and the human instructor does not need any expertise in robot programming. A system has been implemented which consists of two main parts. The first part is data acquisition and motion extraction. Vision is the most important sensor with which a human can interact with the surrounding world. Therefore two cameras are used to capture the image sequences of a moving rigid object. In order to compress the incoming images from the cameras and extract 3D motion information of the rigid object, feature detection and tracking are applied to the images. Corners are chosen as the main features because they are more stable under perspective projection and during motion. A reliable corner detector is implemented and a new corner tracking algorithm is proposed based on smooth motion constraints. With both spatial and temporal constraints, 3D trajectories of a set of points on the object can be obtained and the 3D motion parameters of the object can be reliably calculated by the algorithm proposed in this thesis. Once the 3D motion parameters are available through the vision system, the robot should be programmed to replicate this motion. Since we are interested in smooth motion and the similarity between two motions, the task of the second part of our system is therefore to extract motion characteristics and to transfer these to the robot. It can be proven that the characteristics of a parametric cubic B-spline curve are completely determined by its control points, which can be obtained by the least-squares fitting method, given some data points on the curve. Therefore a parametric cubic B-spline curve is fitted to the motion data and its control points are calculated. Given the robot configuration, the obtained control points can be scaled, translated, and rotated so that a motion trajectory can be generated for the robot to replicate the given motion in its own workspace with the required smoothness and similarity, although the absolute motion trajectories of the robot and the instructor can be different. All the above modules have been integrated, and results of an experiment with the whole system show that the approach proposed in this thesis can extract motion characteristics and transfer these to a robot. A robot arm has successfully replicated a human arm movement with similar shape characteristics by our approach. In conclusion, such a system collects human skills and intelligence through vision and transfers them to the robot. Therefore, a robot with such a system can interact with its environment and learn by observation.
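A minimal SciPy sketch of the least-squares cubic B-spline fitting described above, for one coordinate of a trajectory: the control points returned are exactly the quantities the thesis scales, translates and rotates into the robot's workspace. The knot layout and sample data are illustrative assumptions:

    import numpy as np
    from scipy.interpolate import make_lsq_spline

    # noisy samples of one coordinate of an observed motion (stand-in data)
    t = np.linspace(0, 1, 100)
    y = np.sin(2 * np.pi * t) + 0.01 * np.random.randn(100)

    k = 3                                  # cubic B-spline
    n_ctrl = 8                             # number of control points
    # clamped (open-uniform) knot vector, as usual for B-spline fitting
    knots = np.concatenate(([0.0] * (k + 1),
                            np.linspace(0, 1, n_ctrl - k + 1)[1:-1],
                            [1.0] * (k + 1)))
    spline = make_lsq_spline(t, y, knots, k=k)
    ctrl = spline.c                        # control points characterise the motion
    print(ctrl.shape)                      # (8,) -- transform these for the robot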
38

Pan, Wendy. "A simulated shape recognition system using feature extraction /". Online version of thesis, 1989. http://hdl.handle.net/1850/10496.

Full text
39

Wooden, David T. "Graph-based Path Planning for Mobile Robots". Diss., Georgia Institute of Technology, 2006. http://etd.gatech.edu/theses/available/etd-11092006-180958/.

Full text
Abstract
Thesis (Ph. D.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2007.
Magnus Egerstedt, Committee Chair; Patricio Vela, Committee Member; Ayanna Howard, Committee Member; Tucker Balch, Committee Member; Wayne Book, Committee Member.
40

Shah, Syed Irtiza Ali. "Vision based 3D obstacle detection". Thesis, Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/29741.

Full text
Abstract
Thesis (M. S.)--Mechanical Engineering, Georgia Institute of Technology, 2010.
Committee Co-Chair: Johnson, Eric; Committee Co-Chair: Lipkin, Harvey; Committee Member: Sadegh, Nader. Part of the SMARTech Electronic Thesis and Dissertation Collection.
41

Pudney, Christopher John. "Surface modelling and surface following for robots equipped with range sensors". University of Western Australia. Dept. of Computer Science, 1994. http://theses.library.uwa.edu.au/adt-WU2003.0002.

Full text
Abstract
The construction of surface models from sensor data is an important part of perceptive robotics. When the sensor data are obtained from fixed sensors the problem of occlusion arises. To overcome occlusion, sensors may be mounted on a robot that moves the sensors over the surface. In this thesis the sensors are single-point range finders. The range finders provide a set of sensor points, that is, the surface points detected by the sensors. The sets of sensor points obtained during the robot's motion are used to construct a surface model. The surface model is used in turn in the computation of the robot's motion, so surface modelling is performed on-line, that is, the surface model is constructed incrementally from the sensor points as they are obtained. A planar polyhedral surface model is used that is amenable to incremental surface modelling. The surface model consists of a set of model segments, where a neighbour relation allows model segments to share edges. Also sets of adjacent shared edges may form corner vertices. Techniques are presented for incrementally updating the surface model using sets of sensor points. Various model segment operations are employed to do this: model segments may be merged, fissures in model segment perimeters are filled, and shared edges and corner vertices may be formed. Details of these model segment operations are presented. The robot's control point is moved over the surface model at a fixed distance. This keeps the sensors around the control point within sensing range of the surface, and keeps the control point from colliding with the surface. The remainder of the robot body is kept from colliding with the surface by using redundant degrees of freedom. The goal of surface modelling and surface following is to model as much of the surface as possible. The incomplete parts of the surface model (non-shared edges) indicate where sections of surface that have not been exposed to the robot's sensors lie. The direction of the robot's motion is chosen such that the robot's control point is directed to non-shared edges, and then over the unexposed surface near the edge. These techniques have been implemented and results are presented for a variety of simulated robots combined with real range sensor data.
42

Fung, Hong Chee. "Image processing & robot positioning". Ohio University / OhioLINK, 1990. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1183490009.

Full text
43

Futterlieb, Marcus. "Vision based navigation in a dynamic environment". Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30191/document.

Full text
Abstract
This thesis addresses the autonomous long-range navigation of wheeled robots in dynamic environments. It was carried out within the FUI Air-Cobot project, led by Akka Technologies in collaboration with several companies (Akka, Airbus, 2MORROW, Sterela) and two research laboratories, LAAS and Mines Albi. The project aims at designing a collaborative robot (cobot) able to perform the preflight inspection of an aircraft, either on the runway or in a hangar. The environment under consideration is therefore highly structured and subject to strict traffic rules (forbidden zones, etc.), and it may be cluttered with both static and dynamic obstacles (luggage or refuelling trucks, pedestrians, etc.) that must be avoided to guarantee the safety of people and equipment. The navigation framework builds on previous work and switches between different control laws (go-to-goal, visual servoing, obstacle avoidance) depending on the context. The contribution is twofold. First, a visual servoing controller was designed that allows the robot to travel long distances (around the aircraft or in the hangar) thanks to a topological map and the choice of suitable targets; multi-camera visual servoing laws were built to exploit the image data provided by all the cameras embedded on the Air-Cobot system. Second, an obstacle avoidance control law based on equiangular spirals was designed to guarantee non-collision. This law is purely sensor-based, using only the data provided by the on-board lasers, and avoids static and dynamic obstacles alike, providing a general solution to the collision problem. Experimental results, obtained both at LAAS and on the Airbus site at Blagnac, show the efficiency of the developed strategy.
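To give an intuition for the equiangular-spiral idea mentioned in this abstract, here is a minimal Python sketch of a heading command that keeps a constant angle with the line of sight to the obstacle (a circle at the safe distance) and opens or tightens the spiral according to the distance error. It is not the thesis controller; the function name, gain, and sign convention are invented.

```python
import math

def spiral_avoidance_heading(obstacle_bearing, obstacle_dist,
                             safe_dist=1.0, k=0.8, side=+1):
    """Heading command (robot frame, radians) to skirt an obstacle
    observed by a range sensor at (bearing, dist).

    On an equiangular spiral the path keeps a constant angle with the
    line of sight to the spiral centre; modulating that angle with the
    distance error makes the spiral converge to safe_dist.
    """
    # Distance error: positive when too close, negative when far away.
    err = (safe_dist - obstacle_dist) / safe_dist
    # An angle of exactly pi/2 to the line of sight gives a circle;
    # a larger angle spirals outwards, a smaller one spirals inwards.
    alpha = math.pi / 2 + k * err
    # side selects whether the obstacle is skirted to the left or right.
    return obstacle_bearing + side * alpha
```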
44

Foster, D. J. "Pipelining : an approach for machine vision". Thesis, University of Oxford, 1987. http://ora.ox.ac.uk/objects/uuid:1258e292-2603-4941-87db-d2a56b8856a2.

Full text
Abstract
Much effort has been spent over the last decade in producing so-called "Machine Vision" systems for use in robotics, automated inspection, assembly and numerous other fields. Because of the large amount of data involved in an image (typically ¼ MByte) and the complexity of many algorithms used, the processing times required have been far in excess of real time on a VAX-class serial processor. We review a number of image understanding algorithms that compute a globally defined "state", and show that they may be computed using simple local operations that are suited to parallel implementation. In recent years, many massively parallel machines have been designed to apply local operations rapidly across an image. We review several vision machines. We develop an algebraic analysis of the performance of a vision machine and show that, contrary to the commonly held belief, the time taken to relay images between serial streams can far exceed the time spent processing. We proceed to investigate the roles that a variety of pipelining techniques might play. We then present three pipelined designs for vision, one of which has been built. This is a parallel pipelined bit-slice convolution processor, capable of operating at video rates. This design is examined in detail, and its performance analysed in relation to the theoretical framework of the preceding chapters. The construction and debugging of the device, whose hardware is now operational, are detailed.
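The pipelining idea at the heart of this abstract, namely that stages should overlap so that data relay does not dominate, can be mimicked in software with lazy streams. The Python sketch below chains a frame source, a naive 3x3 convolution stage, and a threshold stage so that frames flow through one at a time; all names and parameters are invented for illustration, and the hardware in the thesis is of course very different.

```python
import numpy as np

# Each stage consumes a stream of frames and yields processed frames,
# so downstream stages start before upstream ones have finished the
# whole sequence: a software analogue of a hardware pipeline.

def source(num_frames, shape=(64, 64)):
    for _ in range(num_frames):
        yield np.random.rand(*shape)

def convolve3x3(frames, kernel):
    """Naive 3x3 convolution stage (valid region only)."""
    for f in frames:
        out = np.zeros((f.shape[0] - 2, f.shape[1] - 2))
        for dy in range(3):
            for dx in range(3):
                out += kernel[dy, dx] * f[dy:dy + out.shape[0],
                                          dx:dx + out.shape[1]]
        yield out

def threshold(frames, level=0.5):
    for f in frames:
        yield f > level

# Compose the pipeline; nothing is computed until frames are consumed.
edges = threshold(convolve3x3(source(10), np.full((3, 3), 1 / 9)))
for frame in edges:
    pass  # consume the stream frame by frame
```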
45

Simon, D. G. "A new sensor for robot arm and tool calibration". Thesis, University of Surrey, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.266337.

Full text
46

Westerling, Magnus and Nils Eriksson. "Förstudie för automatisering av slipningsprocess". Thesis, Örebro universitet, Institutionen för naturvetenskap och teknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-31054.

Full text
Abstract
This thesis was carried out at Ovako Bar in Hällefors in order to assess the possibility of automating the manual grinding process performed on bars with surface defects. The main motivation for the work is the vibration injuries that arise in the operators' hands, which in turn lead to sick leave. The work began with an analysis of the current situation at control station KS-7 to examine how the process works and how the operators perceive the situation, and to gain a better understanding of it; information about the existing equipment was also gathered in order to evaluate the options. Potential suppliers were contacted as well, to serve as additional sounding boards. After the current-situation analysis, the improvement-proposal phase followed: solutions to the problem emerged from brainstorming against the requirements specification. The approved solutions were sent to the previously contacted suppliers, IM Teknik and Robotslipning AB, for further exchange of ideas and assessment of feasibility. The investment analysis shows that robotic grinding with a vision system is the most cost-effective solution that meets the stated demands and requirements.
47

Gaskett, Chris. "Q-Learning for Robot Control". The Australian National University. Research School of Information Sciences and Engineering, 2002. http://thesis.anu.edu.au./public/adt-ANU20041108.192425.

Full text
Abstract
Q-Learning is a method for solving reinforcement learning problems. Reinforcement learning problems require improvement of behaviour based on received rewards. Q-Learning has the potential to reduce robot programming effort and increase the range of robot abilities. However, most current Q-learning systems are not suitable for robotics problems: they treat continuous variables, for example speeds or positions, as discretised values. Discretisation does not allow smooth control and does not fully exploit sensed information. A practical algorithm must also cope with real-time constraints, sensing and actuation delays, and incorrect sensor data. This research describes an algorithm that deals with continuous state and action variables without discretising. The algorithm is evaluated with vision-based mobile robot and active head gaze control tasks. As well as learning the basic control tasks, the algorithm learns to compensate for delays in sensing and actuation by predicting the behaviour of its environment. Although the learned dynamic model is implicit in the controller, it is possible to extract some aspects of the model. The extracted models are compared to theoretically derived models of environment behaviour. The difficulty of working with robots motivates development of methods that reduce experimentation time. This research exploits Q-learning's ability to learn by passively observing the robot's actions, rather than necessarily controlling the robot. This is a valuable tool for shortening the duration of learning experiments.
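For readers unfamiliar with Q-learning, the sketch below shows the standard tabular update rule that the thesis generalises; the thesis itself replaces the discrete table with a continuous function approximator, so this is background, not the thesis algorithm, and the action set and gains here are made up.

```python
import random
from collections import defaultdict

Q = defaultdict(float)            # Q[(state, action)] -> value
alpha, gamma, epsilon = 0.1, 0.95, 0.1
actions = [-1, 0, +1]             # hypothetical discretised action set

def choose_action(state):
    """Epsilon-greedy exploration over the discrete action set."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """One-step Q-learning backup towards reward + discounted best
    next-state value; works from observed transitions, which is why
    learning can also proceed by passively watching the robot act."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next
                                   - Q[(state, action)])
```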
48

Scaramuzza, Davide. "Omnidirectional vision : from calibration to robot motion estimation". Zürich : ETH, 2007. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=17635.

Full text
49

Hallenberg, Johan. "Robot Tool Center Point Calibration using Computer Vision". Thesis, Linköping University, Department of Electrical Engineering, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-9520.

Full text
Abstract

Today, tool center point calibration is mostly done by a manual procedure. The procedure is very time-consuming and the result varies with how skilled the operator is.

This thesis proposes a new automated iterative method for tool center point calibration of industrial robots, making use of computer vision and image processing techniques. The new method has several advantages over the manual calibration method. Experimental verification has shown that the proposed method is much faster while delivering comparable or even better accuracy. The setup is very simple: only one USB camera connected to a laptop computer is needed, and no contact with the robot tool is necessary during the calibration procedure.

The method can be split into three parts. First, the transformation between the robot wrist and the tool is determined by solving a closed loop of homogeneous transformations. Second, an image segmentation procedure is described for finding point correspondences on a rotation-symmetric robot tool; this segmentation is necessary for measuring the camera-to-tool transformation with six degrees of freedom. The last part is an iterative procedure which automates an ordinary four-point tool center point calibration algorithm. The iteration ensures that the accuracy of the calibration depends only on the accuracy of the camera when registering a movement between two positions.
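For context, the ordinary four-point calibration that the thesis automates can be posed as a small least-squares problem: if the tool tip touches the same fixed point c from several wrist poses (R_i, p_i), then R_i t + p_i = c for all i, and differencing pairs of poses eliminates c. The Python sketch below solves for the tool offset t under those assumptions; it shows only this underlying textbook step, not the thesis's vision-based iteration, and the function name is invented.

```python
import numpy as np

def tcp_from_poses(rotations, positions):
    """Estimate the tool center point t in the wrist frame from wrist
    poses (3x3 numpy rotation matrices and 3-vector positions) taken
    while the tool tip touches one fixed point: R_i @ t + p_i = c.
    Differencing poses i, j gives (R_i - R_j) @ t = p_j - p_i."""
    A, b = [], []
    for i in range(len(rotations)):
        for j in range(i + 1, len(rotations)):
            A.append(rotations[i] - rotations[j])
            b.append(positions[j] - positions[i])
    A = np.vstack(A)          # (3 * n_pairs, 3)
    b = np.concatenate(b)     # (3 * n_pairs,)
    t, *_ = np.linalg.lstsq(A, b, rcond=None)
    return t
```

At least four sufficiently different orientations are needed for the stacked system to be well conditioned, which is where the traditional name "four-point calibration" comes from.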

50

Marin, Hernandez Antonio. "Vision dynamique pour la navigation d'un robot mobile". Phd thesis, Toulouse, INPT, 2004. http://oatao.univ-toulouse.fr/7346/1/marin_hernandez.pdf.

Full text
Abstract
The work presented in this thesis concerns the study of visual functions on dynamic scenes and their applications to mobile robotics. These visual functions deal more precisely with the visual tracking of objects in image sequences. Four visual tracking methods were studied, three of which were developed specifically within this thesis. These methods are: (1) contour tracking with a snake, with two variants allowing its application to colour image sequences or the incorporation of constraints on the shape of the tracked object, (2) region tracking by template differences, (3) contour tracking by 1D correlation, and finally (4) a method for tracking a set of points, based on the Hausdorff distance, developed in a previous thesis. These methods were analysed for different tasks related to mobile robot navigation; a comparison in different contexts was carried out, leading to a characterisation of the targets and conditions for which each method gives good results. The results of this analysis are taken into account by a perceptual planning module, which determines which objects (planar landmarks) the robot must track to guide itself along a trajectory. In order to control the execution of such a perceptual plan, several protocols for collaboration or hand-over between visual tracking methods were proposed. Finally, these methods, together with a control module for an active camera (pan, tilt, zoom), were integrated on a robot. Three experiments were carried out: (a) road following in an outdoor environment, (b) tracking of primitives for visual navigation indoors, and (c) tracking of planar landmarks for navigation based on explicit robot localisation.
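To make method (2), region tracking by template differences, concrete, here is a minimal Python sketch of the simplest variant of the idea: an exhaustive sum-of-squared-differences search in a window around the previous location. The thesis methods are considerably more refined; the names and search strategy below are invented for illustration.

```python
import numpy as np

def ssd_track(frame, template, prev_xy, radius=8):
    """Find the patch of `frame` nearest to `prev_xy` (column, row)
    that minimises the sum of squared differences with `template`."""
    h, w = template.shape
    best, best_xy = np.inf, prev_xy
    x0, y0 = prev_xy
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            x, y = x0 + dx, y0 + dy
            # Skip candidate windows that fall outside the image.
            if x < 0 or y < 0 or y + h > frame.shape[0] \
                    or x + w > frame.shape[1]:
                continue
            patch = frame[y:y + h, x:x + w]
            err = np.sum((patch - template) ** 2)
            if err < best:
                best, best_xy = err, (x, y)
    return best_xy
```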