To view the other types of publications on this topic, follow this link: Machine vision for robot guidance.

Dissertations on the topic "Machine vision for robot guidance"

Browse the Top 50 dissertations for research on the topic "Machine vision for robot guidance".

Next to each work in the bibliography, an "Add to bibliography" option is available. Use it, and the bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read an online abstract of the work, if the relevant parameters are provided in its metadata.

Browse dissertations from a wide range of disciplines and compile your bibliography correctly.

1

Arthur, Richard B. „Vision-Based Human Directed Robot Guidance“. Diss., Brigham Young University, 2004. http://contentdm.lib.byu.edu/ETD/image/etd564.pdf.

2

Pearson, Christopher Mark. „Linear array cameras for mobile robot guidance“. Thesis, University of Oxford, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.318875.

3

Pretlove, John. „Stereoscopic eye-in-hand active machine vision for real-time adaptive robot arm guidance“. Thesis, University of Surrey, 1993. http://epubs.surrey.ac.uk/843230/.

Abstract:
This thesis describes the design, development and implementation of a robot mounted active stereo vision system for adaptive robot arm guidance. This provides a very flexible and intelligent system that is able to react to uncertainty in a manufacturing environment. It is capable of tracking and determining the 3D position of an object so that the robot can move towards, and intercept, it. Such a system has particular applications in remotely controlled robot arms, typically working in hostile environments. The stereo vision system is designed on mechatronic principles and is modular, light-weight and uses state-of-the-art dc servo-motor technology. Based on visual information, it controls camera vergence and focus independently while making use of the flexibility of the robot for positioning. Calibration and modelling techniques have been developed to determine the geometry of the stereo vision system so that the 3D position of objects can be estimated from the 2D camera information. 3D position estimates are obtained by stereo triangulation. A method for obtaining a quantitative measure of the confidence of the 3D position estimate is presented which is a useful built-in error checking mechanism to reject false or poor 3D matches. A predictive gaze controller has been incorporated into the stereo head control system. This anticipates the relative 3D motion of the object to alleviate the effect of computational delays and ensures a smooth trajectory. Validation experiments have been undertaken with a Puma 562 industrial robot to show the functional integration of the camera system with the robot controller. The vision system is capable of tracking moving objects and the information this provides is used to update command information to the controller. The vision system has been shown to be in full control of the robot during a tracking and intercept duty cycle.
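The 3D position estimation by stereo triangulation mentioned above can be illustrated with a minimal sketch. This is not Pretlove's implementation: it assumes two already-calibrated pinhole cameras whose 3x4 projection matrices P1 and P2 are known, and uses the standard linear (DLT) triangulation method; the reprojection check at the end only loosely mirrors the confidence measure described in the abstract.

    import numpy as np

    def triangulate(P1, P2, x1, x2):
        """Linear (DLT) triangulation of one point seen by two calibrated cameras.

        P1, P2: 3x4 projection matrices (intrinsics times extrinsics).
        x1, x2: (u, v) pixel coordinates of the same scene point in each image.
        Returns the estimated 3D point in the world frame.
        """
        A = np.vstack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        # The homogeneous solution is the right singular vector belonging to
        # the smallest singular value.
        _, _, vt = np.linalg.svd(A)
        X = vt[-1]
        return X[:3] / X[3]

    def reprojection_error(P, X, x):
        """Residual used to reject false or poor stereo matches."""
        proj = P @ np.append(X, 1.0)
        return np.linalg.norm(proj[:2] / proj[2] - np.asarray(x, dtype=float))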
4

Grepl, Pavel. „Strojové vidění pro navádění robotu“. Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2021. http://www.nusl.cz/ntk/nusl-443727.

Abstract:
This master's thesis deals with the design, assembly, and testing of a camera system for localizing randomly placed and oriented objects on a conveyor belt, with the purpose of guiding a robot to those objects. The theoretical part surveys the individual components that make up a camera system and the field of 2D and 3D object localization. The practical part covers two possible arrangements of the camera system, the detailed solution of the chosen arrangement, the creation of test images, the programming of the image-processing algorithm, the creation of the HMI, and the testing of the complete system.
5

Bohora, Anil R. „Visual robot guidance in time-varying environment using quadtree data structure and parallel processing“. Ohio : Ohio University, 1989. http://www.ohiolink.edu/etd/view.cgi?ohiou1182282896.

6

Gu, Lifang. „Visual guidance of robot motion“. University of Western Australia. Dept. of Computer Science, 1996. http://theses.library.uwa.edu.au/adt-WU2003.0004.

Abstract:
Future robots are expected to cooperate with humans in daily activities. Efficient cooperation requires new techniques for transferring human skills to robots. This thesis presents an approach on how a robot can extract and replicate a motion by observing how a human instructor conducts it. In this way, the robot can be taught without any explicit instructions and the human instructor does not need any expertise in robot programming. A system has been implemented which consists of two main parts. The first part is data acquisition and motion extraction. Vision is the most important sensor with which a human can interact with the surrounding world. Therefore two cameras are used to capture the image sequences of a moving rigid object. In order to compress the incoming images from the cameras and extract 3D motion information of the rigid object, feature detection and tracking are applied to the images. Corners are chosen as the main features because they are more stable under perspective projection and during motion. A reliable corner detector is implemented and a new corner tracking algorithm is proposed based on smooth motion constraints. With both spatial and temporal constraints, 3D trajectories of a set of points on the object can be obtained and the 3D motion parameters of the object can be reliably calculated by the algorithm proposed in this thesis. Once the 3D motion parameters are available through the vision system, the robot should be programmed to replicate this motion. Since we are interested in smooth motion and the similarity between two motions, the task of the second part of our system is therefore to extract motion characteristics and to transfer these to the robot. It can be proven that the characteristics of a parametric cubic B-spline curve are completely determined by its control points, which can be obtained by the least-squares fitting method, given some data points on the curve. Therefore a parametric cubic B–spline curve is fitted to the motion data and its control points are calculated. Given the robot configuration the obtained control points can be scaled, translated, and rotated so that a motion trajectory can be generated for the robot to replicate the given motion in its own workspace with the required smoothness and similarity, although the absolute motion trajectories of the robot and the instructor can be different. All the above modules have been integrated and results of an experiment with the whole system show that the approach proposed in this thesis can extract motion characteristics and transfer these to a robot. A robot arm has successfully replicated a human arm movement with similar shape characteristics by our approach. In conclusion, such a system collects human skills and intelligence through vision and transfers them to the robot. Therefore, a robot with such a system can interact with its environment and learn by observation.
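The second stage described above, fitting a parametric cubic B-spline to the observed motion data and re-targeting its control points to the robot workspace, can be sketched with SciPy. This is only a generic illustration, not the author's code: the trajectory data, the smoothing factor, and the rigid transform applied to the control points are placeholders.

    import numpy as np
    from scipy.interpolate import splprep, splev

    # Placeholder 3D trajectory samples, e.g. produced by the stereo vision system.
    traj = np.random.rand(50, 3).cumsum(axis=0)

    # Least-squares smoothing fit of a parametric cubic B-spline (k=3).
    # tck holds the knot vector, the control-point coefficients and the degree.
    tck, u = splprep(traj.T, k=3, s=0.01)
    knots, coeffs, degree = tck
    control_points = np.array(coeffs).T            # shape (n_ctrl, 3)

    # The control points determine the curve, so the motion can be re-targeted by
    # scaling, rotating and translating them (placeholder transform shown here)...
    scale, R, t = 0.5, np.eye(3), np.array([0.4, 0.0, 0.2])
    new_ctrl = scale * control_points @ R.T + t

    # ...and the robot trajectory is regenerated by evaluating the transformed spline.
    new_tck = (knots, list(new_ctrl.T), degree)
    robot_path = np.array(splev(np.linspace(0.0, 1.0, 200), new_tck)).T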
7

Sonmez, Ahmet Coskun. „Robot guidance using image features and fuzzy logic“. Thesis, University of Cambridge, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.259476.

8

Stark, Per. „Machine vision camera calibration and robot communication“. Thesis, University West, Department of Technology, Mathematics and Computer Science, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:hv:diva-1351.

Abstract:

This thesis is part of a larger project within the European project AFFIX. The aim of the project is to develop a new method for assembling an aircraft engine part so that weight and manufacturing costs are reduced. The proposal is to weld sheet-metal parts instead of using cast parts. A machine vision system is suggested for detecting the joints for the weld assembly operation of the sheet metal. The final system aims to locate a hidden curve on an object. The coordinates of the curve are calculated by the machine vision system and sent to a robot, which should create and follow a path using these coordinates. The accuracy for locating the curve to produce an approved weld joint must be within +/- 0.5 mm. This report investigates the accuracy of the camera calibration and the positioning of the robot. It also touches on the importance of good lighting when acquiring images for a vision system, and includes the development of a robot program that receives the coordinates and transforms them into robot movements. The camera calibration is done in a toolbox for MatLab and extracts the intrinsic camera parameters, such as the distance between the centre of the lens and the optical detector in the camera (the focal length f), the lens distortion parameters, and the principal point. It also returns the location and orientation of the camera at each image obtained during the calibration, the extrinsic parameters. The intrinsic parameters are used when translating between image coordinates and camera coordinates, and the extrinsic parameters are used when translating between camera coordinates and world coordinates. The results of this project are a transformation matrix that translates the robot's position into the camera's position, and a robot program that can receive a large number of coordinates, store them, and create a path to move along for the weld application.
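The coordinate chain described above (image coordinates to camera coordinates via the intrinsic parameters, camera coordinates to world coordinates via the extrinsic parameters) can be sketched with the pinhole model. This is a generic illustration rather than the thesis' MatLab toolbox output; the intrinsic matrix K, the pose (R, t) and the assumption of negligible lens distortion are placeholders.

    import numpy as np

    # Intrinsic parameters (placeholder values, in pixels): focal length and principal point.
    K = np.array([[1500.0,    0.0, 640.0],
                  [   0.0, 1500.0, 512.0],
                  [   0.0,    0.0,   1.0]])

    # Extrinsic parameters (placeholder): camera orientation and position in the world frame.
    R = np.eye(3)
    t = np.array([0.0, 0.0, 1.0])

    def world_to_pixel(X_world):
        """Project a 3D world point into the image (lens distortion ignored)."""
        X_cam = R @ X_world + t          # world -> camera coordinates
        uvw = K @ X_cam                  # camera -> homogeneous pixel coordinates
        return uvw[:2] / uvw[2]

    def pixel_to_world_on_plane(u, v, plane_z=0.0):
        """Back-project a pixel onto the known world plane Z = plane_z.

        A single camera only defines a viewing ray; intersecting that ray with a
        known plane (for example the sheet-metal surface) gives a unique 3D point.
        """
        ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
        ray_world = R.T @ ray_cam        # rotate the ray into the world frame
        origin = -R.T @ t                # camera centre expressed in the world frame
        s = (plane_z - origin[2]) / ray_world[2]
        return origin + s * ray_world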

9

Foster, D. J. „Pipelining : an approach for machine vision“. Thesis, University of Oxford, 1987. http://ora.ox.ac.uk/objects/uuid:1258e292-2603-4941-87db-d2a56b8856a2.

Abstract:
Much effort has been spent over the last decade on producing so-called "Machine Vision" systems for use in robotics, automated inspection, assembly and numerous other fields. Because of the large amount of data involved in an image (typically ¼ MByte) and the complexity of many algorithms used, the processing times required have been far in excess of real time on a VAX-class serial processor. We review a number of image understanding algorithms that compute a globally defined "state", and show that they may be computed using simple local operations that are suited to parallel implementation. In recent years, many massively parallel machines have been designed to apply local operations rapidly across an image. We review several vision machines. We develop an algebraic analysis of the performance of a vision machine and show that, contrary to the commonly-held belief, the time taken to relay images between serial streams can far exceed the time spent processing. We proceed to investigate the roles that a variety of pipelining techniques might play. We then present three pipelined designs for vision, one of which has been built. This is a parallel pipelined bit-slice convolution processor, capable of operating at video rates. This design is examined in detail, and its performance analysed in relation to the theoretical framework of the preceding chapters. The construction and debugging of the device, which is now operational in hardware, are detailed.
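The claim above, that relaying images between serial streams can take longer than the processing itself, is easy to check with a back-of-the-envelope calculation. All numbers below (a 512x512 8-bit image, i.e. the quarter MByte mentioned in the abstract, an assumed inter-stage transfer rate and an assumed per-pixel cost) are illustrative placeholders, not figures from the thesis.

    # Rough comparison of inter-stage image transfer time versus per-frame processing time.
    pixels         = 512 * 512        # about 0.25 MByte at 8 bits per pixel
    bus_rate       = 250_000          # bytes/s moved between two serial stages (assumed)
    ops_per_pixel  = 10               # e.g. a small convolution kernel (assumed)
    processor_rate = 5_000_000        # operations/s of one serial stage (assumed)

    transfer_time = pixels / bus_rate                        # ~1.05 s
    process_time  = pixels * ops_per_pixel / processor_rate  # ~0.52 s
    print(f"transfer {transfer_time:.2f} s vs processing {process_time:.2f} s")

    # With these figures the transfer dominates.  Pipelining overlaps the transfer of
    # frame n+1 with the processing of frame n, so the steady-state frame period
    # becomes max(transfer_time, process_time) instead of their sum.
    frame_period = max(transfer_time, process_time)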
10

Leidenkrantz, Axel, and Erik Westbrandt. „Implementation of machine vision on a collaborative robot“. Thesis, Högskolan i Skövde, Institutionen för ingenjörsvetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-17039.

Abstract:
This project was developed with the University of Skövde and Volvo GTO. The purpose of the project is to complement and facilitate quality assurance when gluing the engine frame. Quality defects in today's industry are a major concern because of how costly they are to fix. With competition rising and quality demands increasing, companies are looking for new and more efficient ways to ensure quality. Collaborative robots are a rising and largely unexplored technology in most industries. It is an emerging field with great flexibility that could solve many issues and assist in processes that are difficult to automate. The project aims to investigate whether it is possible and beneficial to implement a vision system on a collaborative robot to ensure quality, and whether the collaborative robot could handle other tasks as well. The project also includes training an artificial neural network with CAD-generated models and real-life prototypes. The project had a lot of challenges with both training the AI and establishing how the robot would communicate with it. The final results state that a collaborative robot, more specifically the UR10e, can work with machine vision. This solution was based on using a camera that was compatible with the built-in robot software. However, this does not mean that other types of cameras cannot be used for this kind of function as well. Using machine vision based on artificial intelligence is a valid solution but requires further development and training to get a software function working in industry. Working with collaborative robots could change the industry for the better in many ways. Implementing collaborative robots could ease the work of operators by aiding in heavy lifting and repetitive work. Being able to combine a collaborative robot with a vision system could increase productivity and economic benefits.
11

Brohan, Kevin Patrick. „Search and attention for machine vision“. Thesis, University of Manchester, 2012. https://www.research.manchester.ac.uk/portal/en/theses/search-and-attention-for-machine-vision(a4747c9b-ac13-46d1-8895-5f2d88523d80).html.

Abstract:
This thesis addresses the generation of behaviourally useful, robust representations of the sensory world in the context of machine vision and behaviour. The goals of the work presented in this thesis are to investigate strategies for representing the visual world in a way which is behaviourally useful, to investigate the use of a neurally inspired early perceptual organisation system upon high-level processing in an object recognition system and to investigate the use of a perceptual organisation system on driving an object-based selection process. To address these problems, a biologically inspired framework for machine attention has been developed at a high level of neural abstraction, which has been heavily inspired by the psychological and physiological literature. The framework is described in this thesis, and three system implementations, which investigate the above issues, are described and analysed in detail. The primate brain has access to a coherent representation of the external world, which appears as objects at different spatial locations. It is through these representations that appropriate behavioural responses may be generated. For example, we do not become confused by cluttered scenes or by occluded objects. The representation of the visual scene is generated in a hierarchical computing structure in the primate brain: while shape and position information are able to drive attentional selection rapidly, high-level processes such as object recognition must be performed serially, passing through an attentional bottleneck. Through the process of attentional selection, the primate visual system identifies behaviourally relevant regions of the visual scene, which allows it to prioritise serial attentional shifts towards certain locations. In primates, the process of attentional selection is complex, operating upon surface representations which are robust to occlusion. Attention itself suppresses neural activity related to distractor objects, while sustaining activity relating to the target, allowing the target object to have a clear neural representation upon which the recognition process can operate. This thesis concludes that dynamic representations that are both early and robust against occlusion have the potential to be highly useful in machine vision and behaviour applications.
12

Dunn, Mark. „Applications of vision sensing in agriculture“. University of Southern Queensland, Faculty of Engineering and Surveying, 2007. http://eprints.usq.edu.au/archive/00004102/.

Abstract:
Machine vision systems in agricultural applications are becoming commonplace as the technology becomes both affordable and robust. Applications such as fruit and vegetable grading were amongst the earliest, but the field has diversified into areas such as yield monitoring, weed identification and spraying, and tractor guidance. Machine vision systems generally consist of a number of steps that are similar between applications, including image pre-processing, analysis, and post-processing. This leads the way towards a generalisation of such systems, an almost 'colour by number' methodology where the platform may be consistent between many applications and only the algorithms specific to the application differ. Shape analysis is an important part of many machine vision applications. Many methods exist for determining the existence of particular objects, such as Hough transforms and statistical matching. A method of describing the outline of objects, called s-ψ (s-psi), offers advantages over other methods in that it reduces a two-dimensional object to a series of one-dimensional numbers. This graph, or chain, of numbers may be directly manipulated to perform tasks such as determining the convex hull or template matching. A machine vision system to automate yield monitoring in macadamia harvesting is proposed as a partial solution to the labour shortage problems facing researchers undertaking macadamia varietal trials in Australia. A novel method for objectively measuring citrus texture is to measure the shape of a light terminator as the fruit is spun in front of a video camera; a system to accomplish this task is described. S-psi template matching is used to identify animals to species level in another case study. The system implemented has the capability to identify animals, record video, and also open or shut a gate remotely, allowing control over limited resources.
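The s-ψ description referred to above reduces a closed 2D outline to a one-dimensional signature: the tangent direction ψ as a function of arc length s. A minimal sketch of that reduction (not the author's implementation) could look like this, given an ordered array of boundary points.

    import numpy as np

    def s_psi_signature(contour):
        """Turn an ordered (N, 2) array of boundary points into an s-psi chain.

        s   : normalised arc length along the outline (0..1)
        psi : unwrapped tangent angle of the outline at each step.
        """
        d = np.diff(contour, axis=0, append=contour[:1])   # edge vectors, outline closed
        seg_len = np.hypot(d[:, 0], d[:, 1])
        s = np.cumsum(seg_len) / seg_len.sum()
        psi = np.unwrap(np.arctan2(d[:, 1], d[:, 0]))
        return s, psi

    # Two outlines can then be compared by resampling both signatures onto a common
    # s grid and measuring their distance, which is the basis of template matching
    # with this one-dimensional representation.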
13

Subramanian, Vijay. „Autonomous vehicle guidance using machine vision and laser radar for agricultural applications“. [Gainesville, Fla.] : University of Florida, 2005. http://purl.fcla.edu/fcla/etd/UFE0011323.

14

Larsson, Mathias. „Machine vision for finding a joint to guide a welding robot“. Thesis, University West, Department of Engineering Science, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:hv:diva-1783.

Abstract:

This report describes how it is possible to guide a robot along an edge by using a camera mounted on the robot. If stereo matching is used to calculate the 3D coordinates of an object or an edge, two images from different known positions and orientations are required to calculate where it is. In the image analysis in this project, the Canny edge filter has been used. The result from the filter is not directly useful, because it finds too many edges and misses some pixels. The Canny edge result must be sorted and finally filled in before the final calculations can be started. This additional processing of the image unfortunately decreases the accuracy of the calculations. The accuracy is estimated by comparing coordinates of the edge measured with a coordinate measuring machine against the calculated coordinates; there is a deviation of up to three mm in the calculated edge. The camera calibration has been described in an earlier thesis, so it is not covered in this report, although it is a prerequisite of this project.
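The image-analysis step described above (Canny output that has to be sorted and filled before it is usable) can be sketched with OpenCV. This is a generic illustration under assumed thresholds, not the thesis code: it bridges small pixel gaps by morphological closing and keeps only the largest connected edge component.

    import cv2
    import numpy as np

    def extract_dominant_edge(gray, low=50, high=150):
        """Return a cleaned binary image of the dominant edge in a grayscale image."""
        edges = cv2.Canny(gray, low, high)

        # Canny finds too many edges and leaves one-pixel gaps; close small gaps first.
        kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
        closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)

        # Keep only the largest connected edge component and drop the clutter.
        n, labels, stats, _ = cv2.connectedComponentsWithStats(closed, connectivity=8)
        if n <= 1:
            return np.zeros_like(gray)
        largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
        return np.where(labels == largest, 255, 0).astype(np.uint8)

    # With the same edge extracted in two images taken from known camera poses, the
    # corresponding edge pixels can be stereo-matched and triangulated to 3D coordinates.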

15

Watanabe, Yoko. „Stochastically optimized monocular vision-based navigation and guidance“. Diss., Atlanta, Ga. : Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/22545.

Abstract:
Thesis (Ph. D.)--Aerospace Engineering, Georgia Institute of Technology, 2008.
Committee Chair: Johnson, Eric; Committee Co-Chair: Calise, Anthony; Committee Member: Prasad, J.V.R.; Committee Member: Tannenbaum, Allen; Committee Member: Tsiotras, Panagiotis.
16

Adeboye, Taiyelolu. „Robot Goalkeeper : A robotic goalkeeper based on machine vision and motor control“. Thesis, Högskolan i Gävle, Avdelningen för elektronik, matematik och naturvetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-27561.

Abstract:
This report shows a robust and efficient implementation of a speed-optimized algorithm for object recognition, 3D real-world localization and tracking in real time. It details a design that was focused on detecting and following objects in flight, as applied to a football in motion. An overall goal of the design was to develop a system capable of recognizing an object and its present and near-future location, while also actuating a robotic arm in response to the motion of the ball in flight. The implementation made use of image processing functions in C++, an NVIDIA Jetson TX1, and Stereolabs' ZED stereoscopic camera setup connected to an embedded system controller for the robot arm. The image processing was done against a textured background, and the 3D location coordinates were fed into the correction step of a Kalman filter model that was used for estimating and predicting the ball location. A capture and processing speed of 59.4 frames per second was obtained with good accuracy in depth detection, and the ball was tracked well in the tests carried out.
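The Kalman filter used above for estimating and predicting the ball location can be sketched as a constant-velocity model in 3D. This is a generic filter, not the thesis implementation: the constant-velocity assumption, the noise covariances and the initial state are placeholders; only the frame period comes from the abstract.

    import numpy as np

    dt = 1.0 / 59.4                                  # frame period reported above
    F = np.eye(6); F[:3, 3:] = dt * np.eye(3)        # constant-velocity state transition
    H = np.hstack([np.eye(3), np.zeros((3, 3))])     # only 3D position is measured
    Q = 1e-3 * np.eye(6)                             # process noise (assumed)
    Rm = 5e-3 * np.eye(3)                            # measurement noise (assumed)

    x = np.zeros(6)                                  # state: [px, py, pz, vx, vy, vz]
    P = np.eye(6)

    def step(x, P, z):
        """One predict/update cycle given a new 3D ball measurement z from the stereo camera."""
        # Predict where the ball will be one frame ahead.
        x = F @ x
        P = F @ P @ F.T + Q
        # Correct the prediction with the measurement.
        S = H @ P @ H.T + Rm
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(6) - K @ H) @ P
        return x, P

    # Repeating the prediction step a few frames ahead (x = F @ x) gives the
    # "near future location" used to move the goalkeeper arm in time.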
17

Tessier, Cédric. „Système de localisation basé sur une stratégie de perception cognitive appliqué à la navigation autonome d'un robot mobile“. Clermont-Ferrand 2, 2007. http://www.theses.fr/2007CLF21784.

Abstract:
This thesis concerns the automatic guidance of agricultural vehicles in open environments. The autonomous navigation systems presented in the literature combine a localization algorithm with a control algorithm. Unfortunately, simply juxtaposing these two modules guarantees neither the stability nor the accuracy of the guidance. This thesis therefore presents a localization system dedicated to the autonomous navigation of robots. The algorithms developed rely on an approach radically different from classical localization methods. The system integrates a cognitive perception strategy that drives and queries the sensors to look for the information it needs, when it needs it. Results from real-world experiments in automatic path following demonstrated the relevance of this approach for the real-time guidance of an off-road vehicle travelling at about 12 km/h.
18

Harper, Jason W. „Fast Template Matching For Vision-Based Localization“. Cleveland, Ohio : Case Western Reserve University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=case1238689057.

Abstract:
Thesis (M.S.)--Case Western Reserve University, 2009. Department of Computer Engineering. Title from OhioLINK abstract screen (viewed on 13 April 2009). Available online via the OhioLINK ETD Center.
19

Mathavan, Senthan. „Trajectory solutions for a game-playing robot using nonprehensile manipulation methods and machine vision“. Thesis, Loughborough University, 2009. https://dspace.lboro.ac.uk/2134/34146.

Abstract:
The need for autonomous systems designed to play games, both strategy-based and physical, comes from the quest to model human behaviour under tough and competitive environments that require human skill at its best. In the last two decades, and especially after the 1996 defeat of the world chess champion by a chess-playing computer, physical games have been receiving greater attention. RoboCup, i.e. robotic football, is a well-known example, with the participation of thousands of researchers all over the world. The robots created to play snooker/pool/billiards are placed in this context. Snooker, as well as being a game of strategy, also requires accurate physical manipulation skills from the player, and these two aspects qualify snooker as a potential game for autonomous system development research. Although research into playing strategy in snooker has made considerable progress using various artificial intelligence methods, the physical manipulation part of the game is not fully addressed by the robots created so far. This thesis looks at the different ball manipulation options snooker players use, like the shots that impart spin to the ball in order to accurately position the balls on the table, by trying to predict the ball trajectories under the action of various dynamic phenomena, such as impacts. A 3-degree-of-freedom robot, which can manipulate the snooker cue on a par with humans, at high velocities, using a servomotor, and position the snooker cue on the ball accurately with the help of a stepper drive, is designed and fabricated.
20

SORIANO, PINTER JAUME. „Machine learning-based image processing for human-robot collaboration“. Thesis, KTH, Skolan för industriell teknik och management (ITM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-278899.

Abstract:
Human-robot collaboration, as a new paradigm in manufacturing, has become a hot topic in manufacturing science, production research, intelligent robotics, and computer science. Thanks to the boost of deep learning technologies over the last ten years, advanced information processing technologies bring new possibilities to human-robot collaboration. Meanwhile, machine learning-based image processing, such as convolutional neural networks, has become a powerful tool for dealing with problems like target recognition and localization. These kinds of technologies show potential for robotic manufacturing and human-robot collaboration. A challenge is to implement well-designed deep neural networks linked to a robotic system that can carry out collaborative work with a human; accuracy and robustness also need to be considered in the development. This thesis addresses that challenge. It implements a solution based on machine learning methods for image detection which makes it possible, using a low-cost imaging setup (a single RGB camera), to detect and localize manufacturing components so that an industrial robot can pick them and finish an assembly, helping the human co-workers and also simplifying the IT tasks needed to run it.
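Because the setup above uses a single RGB camera and the components lie on a planar work surface, a detected part's pixel position can be mapped to robot coordinates with a plane-to-plane homography. The sketch below shows that common mapping and is not taken from the thesis; the four reference correspondences are placeholders that would in practice come from touching known table points with the robot.

    import cv2
    import numpy as np

    # Reference points seen both in the image (pixels) and in the robot base frame
    # (millimetres on the table plane); placeholder values.
    px = np.array([[100, 80], [520, 90], [510, 400], [110, 390]], dtype=np.float32)
    xy = np.array([[0, 0], [400, 0], [400, 300], [0, 300]], dtype=np.float32)

    H, _ = cv2.findHomography(px, xy)

    def pixel_to_robot(u, v):
        """Map the centre of a detected component to robot table coordinates (mm)."""
        pt = cv2.perspectiveTransform(np.array([[[u, v]]], dtype=np.float32), H)
        return pt[0, 0]

    # Example: feed in the bounding-box centre returned by the CNN detector; the pick
    # pose sent to the robot is then (x_mm, y_mm, fixed table height).
    x_mm, y_mm = pixel_to_robot(312.0, 215.0)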
21

Krajcar, Milan. „Robotické vidění s průmyslovými roboty Kuka“. Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2009. http://www.nusl.cz/ntk/nusl-228676.

Abstract:
This master's thesis describes the main terms in machine vision. It defines the basic working principles, their advantages and disadvantages, and divides machine vision systems into several classes. The thesis covers the design of an end effector, of an inspection process and program for the Siemens SIMATIC VS722A smart camera, of a program for the KUKA KR3 robot, and of the mutual communication between them. Finally, a demonstration application verifying the functionality of the entire system is presented.
22

Klein, Joëlle. „Contribution à la commande orale d'un robot doté d'un système de vision“. Nancy 1, 1990. http://www.theses.fr/1990NAN10393.

Abstract:
Commanding a robot equipped with a vision system requires the recognition and interpretation of the complex natural-language expressions needed to describe objects. In order to give our system great flexibility, we allow ellipses, anaphora and even asyntactic constructions. The means used to reach these objectives rely on a case grammar and local grammars combined with linguistic rules for the qualification and coordination of phrase groups. The interpretation consists, first, in completing the cases and, second, in searching for the described objects in the robot's world. We also address the problem of ambiguity, for which we propose several resolution strategies.
23

Condom, Jean-Marie. „Un système de dialogue multimodal pour la communication avec un robot manipuleur“. Toulouse 3, 1992. http://www.theses.fr/1992TOU30155.

Abstract:
This work concerns the development of a multimodal dialogue system allowing an operator to interact with a manipulator robot equipped with a vision system, by means of speech (as input and output), graphical display and, to a lesser extent, text. With respect to multimodal communication, our system opens new perspectives by extending the linking of events coming from different media and the same source (the operator or the system) to events coming from different media and different sources (the operator and the environment). We put forward a reactive system based, on the one hand, on very fast decision-making that makes the best use of pragmatic knowledge and, on the other hand, on a loose protocol for validating voice commands and managing the limits of scene analysis. We use a similar formalism to describe the structure of the dialogue and the activities of the robot, which are closely linked. The dialogue model is represented by a grammar described by augmented transition networks. The robot's activities are modelled by finite-state automata. A multi-agent architecture is proposed; its main justification lies in the speed constraints imposed by the chosen mode of operation, which leads us to consider the perception, decision and action tasks as concurrent.
24

Miller, Michael E. „The development of an improved low cost machine vision system for robotic guidance and manipulation of randomly oriented, straight edged objects“. Ohio : Ohio University, 1989. http://www.ohiolink.edu/etd/view.cgi?ohiou1182445639.

25

Modi, Kalpesh Prakash. „Vision application of human robot interaction : development of a ping pong playing robotic arm /“. Link to online version, 2005. https://ritdml.rit.edu/dspace/handle/1850/943.

26

Massé, Benoît. „Etude de la direction du regard dans le cadre d'interactions sociales incluant un robot“. Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM055/document.

Abstract:
Robots are used more and more in a social context. They are required not only to share physical space with humans but also to interact with them. In this context, the robot is expected to understand some verbal and non-verbal ambiguous cues, constantly used in natural human interaction. In particular, knowing who or what people are looking at is very valuable information for understanding each individual's mental state as well as the interaction dynamics. It is called the Visual Focus of Attention, or VFOA. In this thesis, we are interested in using the inputs from an active humanoid robot, participating in a social interaction, to estimate who is looking at whom or what. On the one hand, we want the robot to look at people, so it can extract meaningful visual information from its video camera. We propose a novel reinforcement learning method for robotic gaze control. The model is based on a recurrent neural network architecture. The robot autonomously learns a strategy for moving its head (and camera) using audio-visual inputs. It is able to focus on groups of people in a changing environment. On the other hand, information from the video camera images is used to infer the VFOAs of people over time. We estimate the 3D head pose (location and orientation) for each face, as it is highly correlated with the gaze direction. We use it in two tasks. First, we note that objects may be looked at while not being visible from the robot's point of view. Under the assumption that objects of interest are being looked at, we propose to estimate their locations relying solely on the gaze direction of visible people. We formulate an ad hoc spatial representation based on probability heat-maps. We design several convolutional neural network models and train them to perform a regression from the space of head poses to the space of object locations. This provides a set of object locations from a sequence of head poses. Second, we suppose that the locations of objects of interest are known. In this context, we introduce a Bayesian probabilistic model, inspired from psychophysics, that describes the dependency between head poses, object locations, eye-gaze directions, and VFOAs, over time. The formulation is based on a switching state-space Markov model. A specific filtering procedure is detailed to infer the VFOAs, as well as an adapted training algorithm. The proposed contributions use data-driven approaches and are addressed within the context of machine learning. All methods have been tested on publicly available datasets. Some training procedures additionally require simulating synthetic scenarios; the generation process is then explicitly detailed.
27

Chevalier, Pauline. „Impact of sensory preferences in individuals with autism spectrum disorder on their social interaction with a robot“. Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLY017/document.

Abstract:
The goal of this thesis is to provide contributions that will, in the long term, help enable personalized robot-based social interaction for individuals with Autism Spectrum Disorders (ASD). This work was done in collaboration with three care facilities for people with ASD: IME MAIA and IME Notre Ecole, medical and educational schools for children and teenagers with ASD, and FAM La Lendemaine (France), a medical care home for adults with ASD. Inter-individual differences are present in ASD and affect the behaviour of each individual in daily life and, in this study, during their interactions with a robot. The first step of our work was to propose an appropriate method to define the proprioceptive and visual profiles of each of our participants. We based our work on the hypothesis that the integration of proprioceptive cues (the ability of an individual to determine body segment positions, i.e. joint position sense, and to detect limb movements in space) and visual cues by an individual with ASD is an indicator of their social and communication skills. We posit that a mitigated behavioural response (i.e., hyporeactivity) to visual motion and an overreliance on proprioceptive information are linked, in individuals with ASD, to their difficulties in integrating social cues and engaging in successful social interactions. We used two methods to define the proprioceptive and visual profile of each participant: a well-known questionnaire on sensory preferences and an experimental setup involving a moving virtual scene, used to evaluate visual and proprioceptive dependence. With the setup, we were able to observe three different groups of postural behaviours among our participants. Thanks to these individual profiles, we could make assumptions about the behaviours that one can expect from each participant during interactions with the robot. We then assessed various social skills of our participants with regard to their profiles. We designed three single-case studies: (1) recognition of emotions expressed by different embodiments (two robots, a virtual agent and a human); (2) a short greeting social task with the robot Nao; and (3) a game evaluating the response to joint attention initiated by the robot Nao. We also conducted repeated imitation sessions with Nao over eight weeks. Through these studies, we observed that the participants who display an overreliance on proprioceptive cues and hyporeactivity to visual cues had more difficulties interacting with the robot (less gaze towards the robot, fewer answers to joint attention initiation behaviours, more difficulties recognizing emotions and imitating a partner) than the other participants. We also observed that the repeated sessions with the robot Nao were beneficial for participants with ASD: after the sessions, the participants showed an improvement in their social skills (gaze towards the partner, imitation) with a human imitation partner. Defining such individual profiles could provide promising strategies for designing successful and adapted human-robot interaction for individuals with ASD in future research.
28

Marín, Urías Luis Felipe. „Reasoning about space for human-robot interaction“. Toulouse 3, 2009. http://thesesups.ups-tlse.fr/1195/.

Abstract:
Human-robot interaction is a research area that has grown exponentially in recent years. This brings new challenges to the robot's geometric reasoning and space-sharing abilities. The robot should not only reason about its own capacities but also consider the actual situation by looking from the human's eyes, thus "putting itself into the human's perspective". In humans, the "visual perspective taking" ability begins to appear by 24 months of age and is used to determine whether another person can see an object or not. The implementation of this kind of social ability will improve the robot's cognitive capabilities and help the robot to interact better with human beings. In this work, we present a geometric spatial reasoning mechanism that employs the psychological concepts of "perspective taking" and "mental rotation" in two general frameworks: motion planning for human-robot interaction, where the robot uses "egocentric perspective taking" to evaluate several configurations in which it is able to perform different interaction tasks; and face-to-face human-robot interaction, where the robot uses perspective taking of the human as a geometric tool to understand the human's attention and intention in order to perform cooperative tasks.
29

Sattigeri, Ramachandra Jayant. „Adaptive Estimation and Control with Application to Vision-based Autonomous Formation Flight“. Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/16272.

Abstract:
The role of vision as an additional sensing mechanism has received a lot of attention in recent years in the context of autonomous flight applications. Modern Unmanned Aerial Vehicles (UAVs) are equipped with vision sensors because of their light-weight, low-cost characteristics and also their ability to provide a rich variety of information of the environment in which the UAVs are navigating in. The problem of vision based autonomous flight is very difficult and challenging since it requires bringing together concepts from image processing and computer vision, target tracking and state estimation, and flight guidance and control. This thesis focuses on the adaptive state estimation, guidance and control problems involved in vision-based formation flight. Specifically, the thesis presents a composite adaptation approach to the partial state estimation of a class of nonlinear systems with unmodeled dynamics. In this approach, a linear time-varying Kalman filter is the nominal state estimator which is augmented by the output of an adaptive neural network (NN) that is trained with two error signals. The benefit of the proposed approach is in its faster and more accurate adaptation to the modeling errors over a conventional approach. The thesis also presents two approaches to the design of adaptive guidance and control (G&C) laws for line-of-sight formation flight. In the first approach, the guidance and autopilot systems are designed separately and then combined together by assuming time-scale separation. The second approach is based on integrating the guidance and autopilot design process. The developed G&C laws using both approaches are adaptive to unmodeled leader aircraft acceleration and to own aircraft aerodynamic uncertainties. The thesis also presents theoretical justification based on Lyapunov-like stability analysis for integrating the adaptive state estimation and adaptive G&C designs. All the developed designs are validated in nonlinear, 6DOF fixed-wing aircraft simulations. Finally, the thesis presents a decentralized coordination strategy for vision-based multiple-aircraft formation control. In this approach, each aircraft in formation regulates range from up to two nearest neighboring aircraft while simultaneously tracking nominal desired trajectories common to all aircraft and avoiding static obstacles.
30

Envall, Zakarias. „Robot Racking : A Racking Solution for Autonomous Production“. Thesis, Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-70097.

Abstract:
As an engineering student, the most natural way of summarizing this thesis project is by relating it to a mathematical equation. The solution to this equation is given, in the form of a racking concept that enables the use of robots. The other side of the equation is, however, a bit more complex: it contains several undefined variables, which can only be solved by delving into various theoretical fields and exploring uncharted depths of the creative space. The project's main objective is to design a concept rack for Gestamp HardTech in Luleå, Sweden, for storage and in-house transport of the beams produced at the HardTech facility. The rack is meant to be loaded and unloaded by robots and should suit as wide an array of beams as possible. To determine the possibilities and limitations of the rack's robot user, several automation aspects are researched, centred on industrial robots and machine vision. The beams currently produced at the Gestamp HardTech Luleå plant are analysed, and twelve of them are ultimately chosen for the rack's design to focus on. What follows is a creative process consisting of an idea-generating phase, an evaluative phase focused on implementation of the ideas, and a refinement phase where the rack concept is finalized. The process includes various methods of idea generation, a great deal of sketching, physical testing of the concepts, and finally CAD modelling. The result, named 4.0-Rack, is a modular rack concept that balances flexibility, by suiting ten of the reviewed beams, with a high packing grade, providing a mean packing grade of 83% relative to the way the beams are currently packed.
31

Ekvall, Staffan. „Robot Task Learning from Human Demonstration“. Doctoral thesis, Stockholm : School of Computer Science and Communication, Kungliga Tekniska högskolan (KTH), 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4279.

32

Nagy, Marek. „Synchronizace pohybu průmyslového robotu s pohybem pásového dopravníku“. Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2014. http://www.nusl.cz/ntk/nusl-231322.

Abstract:
This diploma thesis focuses on the synchronization of robot motion with a moving conveyor belt. It explains the basic principles and the possible uses of similar applications, describes the individual elements used in the application and their purpose and function, and provides an overview of the program code developed for the programmable logic controller, the smart camera and the robot. The result is a working demonstration application with a KUKA robot.
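The core of the synchronization described above is bookkeeping: the part position detected by the camera is propagated along the belt using the distance the conveyor has travelled since the camera trigger. The sketch below is a hypothetical illustration of that idea, not the actual PLC/KUKA implementation; the names, the pick window and the units are assumptions.

    from dataclasses import dataclass

    @dataclass
    class TrackedPart:
        x_at_trigger: float        # position along the belt when the camera saw it (mm)
        y: float                   # lateral position on the belt (mm)
        encoder_at_trigger: float  # belt encoder reading at the camera trigger (mm)

    def current_position(part: TrackedPart, encoder_now: float):
        """Where the part is now: camera position plus belt travel since the trigger."""
        travel = encoder_now - part.encoder_at_trigger
        return part.x_at_trigger + travel, part.y

    def in_pick_window(x: float, window=(800.0, 1100.0)) -> bool:
        """The robot is only commanded once the part enters its reachable window."""
        return window[0] <= x <= window[1]

    # Main-loop idea: the smart camera reports (x, y, encoder) for each new part, the
    # controller keeps a queue of TrackedPart objects, and the robot picks the oldest
    # part whose current_position falls inside the pick window.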
33

Loffreno, Michele. „Computer Vision and Machine Learning for a Spoon-feeding Robot : A prototype solution based on ABB YuMi and an Intel RealSense camera“. Thesis, Umeå universitet, Institutionen för tillämpad fysik och elektronik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-182503.

Abstract:
Many people worldwide are affected by limitations and disabilities that make even essential everyday tasks, such as eating, hard to perform. This thesis considers the impact of robotics on the lives of elderly people and people with any kind of impairment that makes everyday actions such as eating difficult. The aim is to study the implementation of a robotic system to achieve an automatic feeding process. Different kinds of robots and solutions were taken into account, for instance the Obi and the prototype realized at Washington University. The system considered uses an RGBD camera, an Intel RealSense D400 series camera, to detect pieces of cutlery and food on a table, and a robotic arm, an ABB YuMi, to pick up the identified objects. The spoon detection is based on the pre-trained convolutional neural network AlexNet provided by MATLAB. Two detectors were implemented: the first can detect up to four different objects (spoon, plate, fork and knife), the second only spoon and plate. Different algorithms based on morphology were tested in order to compute the pose of the detected objects. RobotStudio was used to establish a connection between MATLAB and the robot. The goal was to make the whole process as automated as possible. The neural network trained on two objects reached 100% accuracy during the training test. The detector based on it was tested on the real system; it was possible to detect the spoon and the plate and to draw a well-centred bounding box. The accuracy reached can be considered satisfactory, since it was possible to grasp a spoon using the YuMi based on a picture of the table. It was noticed that the lighting condition is the key factor in getting a satisfactory result or missing the detection of the spoon. The best result was achieved when the light is uniform and there are no reflections or shadows on the objects; the pictures that gave the best detection results were taken in an apartment. Despite the limitations of the interface between MATLAB and the controller of the YuMi, a good level of automation was reached. The influence of lighting conditions in this setting was discussed and some practical suggestions and considerations were made.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
34

山本, 聡史. „下側接近を特徴とする定置型イチゴ収穫ロボットの開発“. 京都大学, 2011. http://hdl.handle.net/2433/135408.

Der volle Inhalt der Quelle
Annotation:
This study explored the development of a stationary robotic strawberry harvester combined with a movable bench system, as part of the development of an industrial strawberry production system in a plant factory. First, the difficulty of approaching the target fruit was investigated using table-top plants cultured in a greenhouse, and the maximum force needed to separate the fruit from the peduncle was measured. Based on these results, an end-effector was designed with three unique functions: (1) the suction cup was vibrated to minimize the influence of the adjoining fruits at the time of approach; (2) compressed air was blown toward the adjoining fruits to push them away from the target fruit; (3) the peduncle was removed by tilting and pulling the target fruit. Next, an optical system was constructed to give the machine the ability to detect the position and coloration of strawberry fruit. The position of the fruit was detected from below with a stereo camera. The coloration measurement unit was set against the bed of the movable bench system at fruit level to capture images of the target fruit; considering the spectral reflectance characteristics of strawberry fruit, it was equipped with red, green and white LEDs. Finally, the stationary robot was tested in an experimental harvesting system in which it was combined with a movable bench unit, where it enabled a highly stable harvesting operation.
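The fruit position from below is obtained by stereo vision; for a rectified camera pair this reduces to depth from disparity, Z = f·B/d. A small numerical sketch of that computation (the focal length, baseline and pixel coordinates are invented values, not parameters from the thesis):

    import numpy as np

    def triangulate(u_left, u_right, v, f_px, baseline_mm, cx, cy):
        """Depth and 3D point from a rectified stereo pair.
        u_left/u_right: horizontal pixel coordinates of the fruit in each image,
        v: common vertical coordinate, f_px: focal length in pixels,
        baseline_mm: distance between camera centers, (cx, cy): principal point."""
        disparity = u_left - u_right
        Z = f_px * baseline_mm / disparity
        X = (u_left - cx) * Z / f_px
        Y = (v - cy) * Z / f_px
        return np.array([X, Y, Z])

    # Hypothetical numbers: f = 800 px, baseline = 60 mm, image center (320, 240).
    print(triangulate(u_left=350, u_right=310, v=200, f_px=800,
                      baseline_mm=60, cx=320, cy=240))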
Kyoto University (京都大学). Doctor of Agriculture (博士(農学)), dissertation no. 乙第12528号. Examination committee: Prof. 近藤 直, Prof. 清水 浩, Assoc. Prof. 飯田 訓久.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
35

Kira, Zsolt. „Communication and alignment of grounded symbolic knowledge among heterogeneous robots“. Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/33941.

Der volle Inhalt der Quelle
Annotation:
Experience forms the basis of learning. It is crucial in the development of human intelligence, and more broadly allows an agent to discover and learn about the world around it. Although experience is fundamental to learning, it is costly and time-consuming to obtain. In order to speed this process up, humans in particular have developed communication abilities so that ideas and knowledge can be shared without requiring first-hand experience. Consider the same need for knowledge sharing among robots. Based on the recent growth of the field, it is reasonable to assume that in the near future there will be a collection of robots learning to perform tasks and gaining their own experiences in the world. In order to speed this learning up, it would be beneficial for the various robots to share their knowledge with each other. In most cases, however, the communication of knowledge among humans relies on the existence of similar sensory and motor capabilities. Robots, on the other hand, vary widely in perceptual and motor apparatus, ranging from simple light sensors to sophisticated laser and vision sensing. This dissertation defines the problem of how heterogeneous robots with widely different capabilities can share experiences gained in the world in order to speed up learning. The work focuses specifically on differences in sensing and perception, which can be used both for perceptual categorization tasks and for determining actions based on environmental features. Motivating the problem, experiments first demonstrate that heterogeneity does indeed pose a problem during the transfer of object models from one robot to another. This is true even when using state-of-the-art object recognition algorithms that use SIFT features, designed to be unique and reproducible. It is then shown that the abstraction of raw sensory data into intermediate categories for multiple object features (such as color, texture, shape, etc.), represented as Gaussian Mixture Models, can alleviate some of these issues and facilitate effective knowledge transfer. Object representation, heterogeneity, and knowledge transfer are framed within Gärdenfors' conceptual spaces, or geometric spaces that utilize similarity measures as the basis of categorization. This representation is used to model object properties (e.g. color or texture) and concepts (object categories and specific objects). A framework is then proposed to allow heterogeneous robots to build models of their differences with respect to the intermediate representation using joint interaction in the environment. Confusion matrices are used to map property pairs between two heterogeneous robots, and an information-theoretic metric is proposed to model information loss when going from one robot's representation to another. We demonstrate that these metrics allow for cognizant failure, where the robots can ascertain whether concepts can or cannot be shared, given their respective capabilities. After this period of joint interaction, the learned models are used to facilitate communication and knowledge transfer in a manner that is sensitive to the robots' differences. It is shown that heterogeneous robots are able to learn accurate models of their similarities and differences, and to use these models to transfer learned concepts from one robot to another in order to bootstrap the learning of the receiving robot. In addition, several types of communication tasks are used in the experiments.
For example, how can a robot communicate a distinguishing property of an object to help another robot differentiate it from its surroundings? Throughout the dissertation, the claims will be validated through both simulation and real-robot experiments.
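The confusion-matrix mapping and the information-loss measure can be illustrated with a small sketch: two robots label the same objects with their own categories, the co-occurrence counts define the mapping, and a normalized mutual information score indicates how much category information survives the translation. The metric below is a generic choice used only for illustration, not necessarily the one defined in the dissertation, and the counts are invented:

    import numpy as np

    def confusion_matrix(labels_a, labels_b, n_a, n_b):
        """Count how robot A's categories co-occur with robot B's categories
        when both observe the same objects."""
        C = np.zeros((n_a, n_b))
        for a, b in zip(labels_a, labels_b):
            C[a, b] += 1
        return C

    def normalized_mutual_information(C):
        """0 means B's categories tell us nothing about A's; 1 means a lossless mapping."""
        P = C / C.sum()
        pa, pb = P.sum(axis=1), P.sum(axis=0)
        nz = P > 0
        mi = np.sum(P[nz] * np.log(P[nz] / np.outer(pa, pb)[nz]))
        ha = -np.sum(pa[pa > 0] * np.log(pa[pa > 0]))
        hb = -np.sum(pb[pb > 0] * np.log(pb[pb > 0]))
        return mi / max(min(ha, hb), 1e-12)

    # Two robots label the same 8 objects with their own color categories.
    a = [0, 0, 1, 1, 2, 2, 2, 0]
    b = [0, 0, 1, 1, 1, 2, 2, 0]
    C = confusion_matrix(a, b, n_a=3, n_b=3)
    print(C)
    print(normalized_mutual_information(C))  # fairly high: most category information is preserved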
APA, Harvard, Vancouver, ISO und andere Zitierweisen
36

Coupeté, Eva. „Reconnaissance de gestes et actions pour la collaboration homme-robot sur chaîne de montage“. Thesis, Paris Sciences et Lettres (ComUE), 2016. http://www.theses.fr/2016PSLEM062/document.

Der volle Inhalt der Quelle
Annotation:
Les robots collaboratifs sont de plus en plus présents dans nos vies quotidiennes. En milieu industriel, ils sont une solution privilégiée pour rendre les chaînes de montage plus flexibles, rentables et diminuer la pénibilité du travail des opérateurs. Pour permettre une collaboration fluide et efficace, les robots doivent être capables de comprendre leur environnement, en particulier les actions humaines. Dans cette optique, nous avons décidé d'étudier la reconnaissance de gestes techniques afin que le robot puisse se synchroniser avec l'opérateur, adapter son allure et comprendre si quelque chose d'inattendu survient. Pour cela, nous avons considéré deux cas d'étude, un cas de co-présence et un cas de collaboration, tous les deux inspirés de cas existant sur les chaînes de montage automobiles. Dans un premier temps, pour le cas de co-présence, nous avons étudié la faisabilité de la reconnaissance des gestes en utilisant des capteurs inertiels. Nos très bons résultats (96% de reconnaissances correctes de gestes isolés avec un opérateur) nous ont encouragés à poursuivre dans cette voie. Sur le cas de collaboration, nous avons privilégié l'utilisation de capteurs non-intrusifs pour minimiser la gêne des opérateurs, en l'occurrence une caméra de profondeur positionnée avec une vue de dessus pour limiter les possibles occultations. Nous proposons un algorithme de suivi des mains en calculant les distances géodésiques entre les points du haut du corps et le haut de la tête. Nous concevons également et évaluons un système de reconnaissance de gestes basé sur des Chaînes de Markov Cachées (HMM) discrètes et prenant en entrée les positions des mains. Nous présentons de plus une méthode pour adapter notre système de reconnaissance à un nouvel opérateur et nous utilisons des capteurs inertiels sur les outils pour affiner nos résultats. Nous obtenons le très bon résultat de 90% de reconnaissances correctes en temps réel pour 13 opérateurs. Finalement, nous formalisons et détaillons une méthodologie complète pour réaliser une reconnaissance de gestes techniques sur les chaînes de montage.
Collaborative robots are becoming more and more present in our everyday life. In particular, within the industrial environment, they emerge as one of the preferred solutions to make assembly lines in factories more flexible and cost-effective and to reduce the hardship of the operators' work. However, to enable a smooth and efficient collaboration, robots should be able to understand their environment and in particular the actions of the humans around them. With this aim in mind, we decided to study technical gesture recognition; specifically, we want the robot to be able to synchronize, adapt its speed and understand if something unexpected arises. We considered two use cases, one dealing with co-presence, the other with collaboration, both inspired by existing tasks on automotive assembly lines. First, for the co-presence use case, we evaluated the feasibility of technical gesture recognition using inertial sensors. We obtained a very good result (96% correct recognition with one operator), which encouraged us to follow this idea. For the collaborative use case, we decided to focus on non-intrusive sensors to minimize the disturbance for the operators, and we chose to use a depth camera. We filmed the operators from a top view to prevent most of the potential occlusions. We introduce an algorithm that tracks the operator's hands by calculating the geodesic distances between the points of the upper body and the top of the head. We also design and evaluate an approach based on discrete Hidden Markov Models (HMMs), taking the hand positions as input to recognize technical gestures. We propose a method to adapt our system to new operators, and we embedded inertial sensors on the tools to refine our results. We obtain the very good result of 90% correct recognition in real time for 13 operators. Finally, we formalize and detail a complete methodology for technical gesture recognition on assembly lines.
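Discrete HMMs require a discrete observation alphabet, so the tracked hand positions are typically quantized into symbols first; each gesture then has its own model, and the one with the highest forward likelihood wins. A compact sketch of that scoring step with toy parameters, not the trained models from the thesis:

    import numpy as np

    def forward_log_likelihood(obs, pi, A, B):
        """log P(obs | model) for a discrete HMM via the scaled forward algorithm.
        obs: sequence of observation symbols, pi: initial state probabilities,
        A: state transition matrix, B: emission matrix (states x symbols)."""
        alpha = pi * B[:, obs[0]]
        log_l = np.log(alpha.sum())
        alpha /= alpha.sum()
        for o in obs[1:]:
            alpha = (alpha @ A) * B[:, o]
            c = alpha.sum()
            log_l += np.log(c)
            alpha /= c              # rescale to avoid numerical underflow
        return log_l

    # Two toy 2-state gesture models over 3 quantized hand-position symbols.
    pi = np.array([0.9, 0.1])
    A1 = np.array([[0.8, 0.2], [0.2, 0.8]])   # "tighten" model (hypothetical)
    A2 = np.array([[0.5, 0.5], [0.5, 0.5]])   # "reach" model (hypothetical)
    B  = np.array([[0.7, 0.2, 0.1], [0.1, 0.2, 0.7]])
    obs = [0, 0, 1, 2, 2]
    scores = {name: forward_log_likelihood(obs, pi, A, B)
              for name, A in [("tighten", A1), ("reach", A2)]}
    print(max(scores, key=scores.get), scores)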
APA, Harvard, Vancouver, ISO und andere Zitierweisen
37

Edström, Jacob, und Pontus Mjöberg. „The Optimal Hardware Architecture for High Precision 3D Localization on the Edge. : A Study of Robot Guidance for Automated Bolt Tightening“. Thesis, KTH, Skolan för industriell teknik och management (ITM), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-263104.

Der volle Inhalt der Quelle
Annotation:
The industry is moving towards a higher degree of automation and connectivity, where previously manual operations are being adapted for interconnected industrial robots. This thesis focuses specifically on the automation of tightening applications with pre-tightened bolts and collaborative robots. The use of 3D computer vision is investigated for direct localization of bolts, to allow for flexible assembly solutions. A localization algorithm based on 3D data is developed with the intention of creating lightweight software to be run on edge devices. A restrictive use of deep learning classification is therefore included, to enable product flexibility while minimizing the computational load. The cloud-to-edge and cluster-to-edge trade-offs for the chosen application are investigated to identify smart offloading possibilities to cloud or cluster resources. To reduce operational delay, image partitioning into sub-images is also evaluated, to more quickly start the operation with a first coordinate and to enable processing in parallel with robot movement. Four different hardware architectures are tested, consisting of two different Single Board Computers (SBCs), a cluster of SBCs and a high-end computer as an emulated local cloud solution. All systems but the cluster are seen to perform without operational delay for the application. The optimal hardware architecture is therefore found to be a consumer-grade SBC, optimized for energy efficiency, cost and size. Provided that the variance in communication time can be minimized, the cluster shows potential to reduce the total calculation time without causing an operational delay. Smart offloading to deep-learning-optimized cloud resources or a cluster of interconnected robot stations is found to enable increasing complexity and robustness of the algorithm. The SBC is also found to be able to switch between an edge and a cluster setup, to optimize either the time to start the operation or the total calculation time. This offers high flexibility in industrial settings, where product changes can be handled without the need for a change in visual processing hardware, further enabling its integration in factory devices.
Industrin rör sig mot en högre grad av automatisering och uppkoppling, där tidigare manuella operationer anpassas för sammankopplade industriella robotar. Denna masteruppsats fokuserar specifikt på automatiseringen av åtdragningsapplikationer med förmonterade bultar och kollaborativa robotar. Användningen av 3D-datorseende undersöks för direkt lokalisering av bultar, för att möjliggöra flexibla monteringslösningar. En lokaliseringsalgoritm baserad på 3Ddata utvecklas med intentionen att skapa en lätt mjukvara för att köras på Edge-enheter. En restriktiv användning av djupinlärningsklassificering är därmed inkluderad, för att möjliggöra produktflexibilitet tillsammans med en minimering av den behövda beräkningskraften. Avvägningarna mellan edge- och moln- eller klusterberäkning för den valda applikationen undersöks för att identifiera smarta avlastningsmöjligheter till moln- eller klusterresurser. För att minska operationell fördröjning utvärderas även bildpartitionering, för att snabbare kunna starta operationen med en första koordinat och möjliggöra beräkningar parallellt med robotrörelser. Fyra olika hårdvaruarkitekturer testas, bestående av två olika enkortsdatorer, ett kluster av enkortsdatorer och en marknadsledande dator som en efterliknad lokal molnlösning. Alla system utom klustret visar sig prestera utan operationell fördröjning för applikationen. Den optimala hårdvaruarkitekturen visar sig därmed vara en konsumentklassad enkortsdator, optimerad på energieffektivitet, kostnad och storlek. Om endast variansen i kommunikationstid kan minskas visar klustret potential för att kunna reducera den totala beräkningstiden utan att skapa operationell fördröjning. Smart avlastning till djupinlärningsoptimerade molnresurser eller kluster av sammankopplade robotstationer visar sig möjliggöra ökad komplexitet och tillförlitlighet av algoritmen. Enkortsdatorn visar sig även kunna växla mellan en edge- och en klusterkonfiguration, för att antingen optimera för tiden att starta operationen eller för den totala beräkningstiden. Detta medför en hög flexibilitet i industriella sammanhang, där produktändringar kan hanteras utan behovet av hårdvaruförändringar för visuella beräkningar, vilket ytterligare möjliggör dess integrering i fabriksenheter.
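The image-partitioning idea amounts to processing tiles of the captured frame one at a time and handing the first bolt coordinate to the robot as soon as any tile yields one, instead of waiting for the whole frame. A rough Python sketch of that control flow; the brightness-threshold detector and the tile size are placeholders standing in for the thesis's localization algorithm:

    import numpy as np

    def tiles(image, tile=256):
        """Yield sub-images in a fixed scan order."""
        h, w = image.shape[:2]
        for y in range(0, h, tile):
            for x in range(0, w, tile):
                yield (x, y), image[y:y + tile, x:x + tile]

    def detect_bolt(sub_image):
        """Placeholder detector: returns (u, v) in sub-image coordinates or None."""
        ys, xs = np.nonzero(sub_image > 200)       # hypothetical brightness cue
        return (int(xs[0]), int(ys[0])) if len(xs) else None

    def first_coordinate(image):
        """Return the first detected bolt so the robot can start moving early;
        remaining tiles could be processed in parallel with the motion."""
        for (ox, oy), sub in tiles(image):
            hit = detect_bolt(sub)
            if hit is not None:
                return ox + hit[0], oy + hit[1]
        return None

    frame = np.zeros((1024, 1024), dtype=np.uint8)
    frame[700, 900] = 255                          # one synthetic "bolt"
    print(first_coordinate(frame))                 # -> (900, 700)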
APA, Harvard, Vancouver, ISO und andere Zitierweisen
38

Brèthes, Ludovic. „Suivi visuel par filtrage particulaire : application à l'interaction Homme-robot“. Toulouse 3, 2005. http://www.theses.fr/2005TOU30282.

Der volle Inhalt der Quelle
Annotation:
Cette thèse porte sur la détection et le suivi de personnes ainsi que la reconnaissance de gestes élémentaires à partir du flot vidéo d'une caméra couleur embarquée sur le robot. Le filtrage particulaire très adapté dans ce contexte permet de combiner/fusionner aisément différentes sources de mesures. Nous proposons ici différents schémas de filtrage, où l'information visuelle est prise en compte dans les fonctions d'importance et de vraisemblance au moyen de primitives forme, couleur et mouvement image. Nous évaluons alors quelles combinaisons de primitives visuelles et d'algorithmes de filtrage répondent au mieux aux modalités d'interaction envisagées pour notre robot "guide de musée". Notre dernière contribution porte sur la reconnaissance de gestes symboliques permettant de communiquer avec le robot. Une stratégie de filtrage particulaire efficace est proposée afin de suivre et reconnaître simultanément des configurations de la main et des dynamiques gestuelles dans le flot vidéo.
This thesis focuses on the detection and tracking of people and on the recognition of elementary gestures from the video stream of a color camera embedded on the robot. Particle filtering, well suited to this context, enables a straightforward combination/fusion of several measurement cues. We propose various filtering strategies where visual information such as shape, color and motion is taken into account in the importance function and the measurement model. We compare and evaluate these filtering strategies in order to show which combinations of visual cues and particle filtering algorithms are most suitable for the interaction modalities considered for our tour-guide robot. Our last contribution relates to the recognition of symbolic gestures that enable communication with the robot. An efficient particle filtering strategy is proposed in order to track the hand and simultaneously recognize its configuration and gesture dynamics in the video stream.
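The filtering strategies compared here are variants of the bootstrap particle filter: propagate particles with a motion model, weight them with an image likelihood built from shape, colour and motion cues, then resample. A generic sketch of one predict-update-resample cycle, with a stand-in Gaussian likelihood replacing the image cues:

    import numpy as np

    rng = np.random.default_rng(0)

    def particle_filter_step(particles, weights, measurement, motion_std=2.0, meas_std=3.0):
        """One predict-update-resample cycle of a bootstrap particle filter.
        particles: (N, 2) image positions, measurement: observed (x, y)."""
        # Predict: random-walk motion model.
        particles = particles + rng.normal(0, motion_std, particles.shape)
        # Update: weight by a stand-in Gaussian likelihood of the measurement.
        d2 = np.sum((particles - measurement) ** 2, axis=1)
        weights = weights * np.exp(-0.5 * d2 / meas_std ** 2)
        weights /= weights.sum()
        # Resample (multinomial) to avoid weight degeneracy.
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        return particles[idx], np.full(len(particles), 1.0 / len(particles))

    N = 500
    particles = rng.uniform(0, 100, (N, 2))
    weights = np.full(N, 1.0 / N)
    for z in [(50, 50), (52, 51), (54, 53)]:       # fake measurements of a tracked hand
        particles, weights = particle_filter_step(particles, weights, np.array(z))
    print(particles.mean(axis=0))                  # state estimate near the last measurement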
APA, Harvard, Vancouver, ISO und andere Zitierweisen
39

Colbert, Steven C. „Shape and Pose Recovery of Novel Objects Using Three Images from a Monocular Camera in an Eye-In-Hand Configuration“. Scholar Commons, 2010. http://scholarcommons.usf.edu/etd/3515.

Der volle Inhalt der Quelle
Annotation:
Knowing the shape and pose of objects of interest is critical information when planning robotic grasping and manipulation maneuvers. The ability to recover this information from objects for which the system has no prior knowledge is a valuable behavior for an autonomous or semiautonomous robot. This work develops and presents an algorithm for the shape and pose recovery of unknown objects using no a priori information. Using a monocular camera in an eye-in-hand configuration, three images of the object of interest are captured from three disparate viewing directions. Machine vision techniques are employed to process these images into silhouettes. The silhouettes are used to generate an approximation of the surface of the object in the form of a three-dimensional point cloud. The accuracy of this approximation is improved by fitting an eleven-parameter geometric shape to the points such that the fitted shape ignores disturbances from noise and perspective projection effects. The parametrized shape represents the model of the unknown object and can be utilized for planning robot grasping maneuvers or other object classification tasks. This work is implemented and tested in simulation and hardware. A simulator is developed to test the algorithm for various three-dimensional shapes and any possible imaging positions. Several shapes and viewing configurations are tested and the accuracy of the recoveries is reported and analyzed. After thorough testing of the algorithm in simulation, it is implemented on a six-axis industrial manipulator and tested on a range of real-world objects, both geometric and amorphous. It is shown that the hardware implementation performs exceedingly well and approaches the accuracy of the simulator, despite the additional sources of error and uncertainty present.
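The point-cloud approximation from three silhouettes is essentially a visual hull: a candidate 3D point is kept only if it projects inside the silhouette in every view. A toy sketch of that carving test using orthographic projections along the coordinate axes; the real system uses calibrated perspective cameras and then fits an eleven-parameter shape, which is omitted here:

    import numpy as np

    def inside_all_silhouettes(points, sil_xy, sil_xz, sil_yz, scale):
        """Keep points whose projections fall inside all three binary silhouettes.
        Each silhouette is a 2D boolean array; 'scale' maps metric to pixel units."""
        keep = np.ones(len(points), dtype=bool)
        for sil, axes in [(sil_xy, (0, 1)), (sil_xz, (0, 2)), (sil_yz, (1, 2))]:
            uv = np.clip((points[:, axes] * scale).astype(int), 0,
                         np.array(sil.shape)[::-1] - 1)
            keep &= sil[uv[:, 1], uv[:, 0]]
        return points[keep]

    # A square silhouette in every view -> the carved cloud approximates a cube.
    sil = np.zeros((64, 64), dtype=bool)
    sil[10:30, 10:30] = True
    samples = np.random.default_rng(1).uniform(0, 6.4, (5000, 3))   # candidate points
    cloud = inside_all_silhouettes(samples, sil, sil, sil, scale=10)
    print(len(cloud), cloud.min(axis=0).round(1), cloud.max(axis=0).round(1))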
APA, Harvard, Vancouver, ISO und andere Zitierweisen
40

Rutkowski, Adam J. „A BIOLOGICALLY-INSPIRED SENSOR FUSION APPROACH TO TRACKING A WIND-BORNE ODOR IN THREE DIMENSIONS“. Case Western Reserve University School of Graduate Studies / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=case1196447143.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
41

Melikian, Simon Haig. „Visual Search for Objects with Straight Lines“. Case Western Reserve University School of Graduate Studies / OhioLINK, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=case1134003738.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
42

Uskarci, Algan. „Human Arm Mimicking Using Visual Data“. Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/12605620/index.pdf.

Der volle Inhalt der Quelle
Annotation:
This thesis analyzes the concept of robot mimicking in the field of Human-Machine Interaction (HMI). Gestures are investigated for HMI applications, and preliminary work on mimicking a model joint with markers is presented. Finally, two separate systems are proposed that are capable of detecting a moving human arm in a video sequence and calculating its orientation. The computed orientation angle is passed to a robot arm in order to realize robot mimicking. The simulations show that human arm orientation can be determined either by using markers, by using initial background image information, or by tracking features.
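Once two markers (for example at the elbow and the wrist) have been located in the image, the arm orientation reduces to the angle of the line joining them. A trivial sketch of that computation; the marker coordinates are made up:

    import math

    def arm_angle(elbow, wrist):
        """Orientation of the forearm in image coordinates, in degrees."""
        dx, dy = wrist[0] - elbow[0], wrist[1] - elbow[1]
        return math.degrees(math.atan2(dy, dx))

    print(arm_angle(elbow=(100, 200), wrist=(180, 120)))   # -> -45.0 degrees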
APA, Harvard, Vancouver, ISO und andere Zitierweisen
43

Krutílek, Jan. „Systémy průmyslového vidění s roboty Kuka a jeho aplikace na rozpoznávání volně ložených prvků“. Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2010. http://www.nusl.cz/ntk/nusl-229174.

Der volle Inhalt der Quelle
Annotation:
The diploma thesis deals with robot vision and its application to the manipulation of randomly placed objects. It gives an overview of the operating principles of the vision systems most frequently used on the market and, with regard to the task to be solved, discusses various ways of using basic soft sensors for recognizing different objects. The objective of the thesis is also the programming and realization of a demonstration application, drawing on PLC programming, the KRL expert programming language (for KUKA robots), the design of smart-camera scripts in the Spectation software, and network communication among all the devices involved.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
44

Avvari, Ddanukash. „A Literature Review on Differences Between Robotic and Human In-Line Quality Inspection in Automotive Manufacturing Assembly Line“. Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-56038.

Der volle Inhalt der Quelle
Annotation:
The advent of the industrial revolution has brought a great number of changes to the functioning of various processes in manufacturing industries. The ways and means of working have evolved rapidly with the implementation of advanced technology. Moreover, customer demands have been varying dynamically due to changing requirements focused on individual customization. To cope with this dynamic demand, manufacturing industries have had to make sure their products are manufactured with higher quality and shorter lead times. The implementation and efficient use of technology has provided industries with the necessary tools to meet market demand and stay competitive through continuous growth. The transformation aims to reach the level of zero-defect manufacturing and to ensure a higher first-time-right yield with minimum utilization of available resources. However, technological advancements have not developed the quality inspection process of the manufacturing industry to the same level as other processes. Because of this, quality inspection is still human-dependent, requiring a highly skilled operator to perform inspection procedures using sensory abilities to detect deviations. Research suggests that human quality inspection is prone to errors due to fatigue, as the process is continuous, strenuous and tedious work. The efficiency of human inspection is around 80%, which becomes a chronic problem in safety-critical and high-value manufacturing environments. Moreover, with the increasing level of customization and technology, products are becoming more complex, with intricate shapes, and human inspection alone is not enough to meet customer requirements. Especially in automotive Body-in-White applications, human inspection alone of outer body panels and engine parts with tight tolerances does not suffice. Advancements in the field of metrology have led to the introduction of coordinate measuring machines (CMMs), which are classified as contact and non-contact measuring machines. The measurements are performed offline, away from the production line, using sampling. Contact measuring machines are equipped with touch-trigger probes that travel all over the part to build a virtual image of the product, which is time-consuming but accurate, whereas non-contact measuring machines are equipped with laser scanners or optical devices that scan the part and develop a virtual model, which is fast but has accuracy and repeatability issues due to external factors. Coordinate measuring machines have, however, proven to be bottlenecks, as they are not able to keep up with the production pace and cannot perform an inspection on all produced parts, which would help in collecting data; the gathered data can be used to analyse root causes and identify trends in defect detection. With the advancements in non-contact measuring systems, the automotive industry has also realized the potential of implementing inline measurement techniques for quality inspection. Such a non-contact measuring system consists of a robotic arm or setup equipped with a camera, sensors, and a complex algorithm to identify defects. This provides the robotic arm with machine vision, which works by taking a series of images of the product from various viewpoints and processing these images to detect deviations using digital image processing techniques.
Inline measurement has proven accurate, fast and repeatable enough to be implemented in synchronization with the production line. Further, the automotive industry is moving towards hybrid inspection systems that combine the measuring speed of the robot with the fast decision-making ability of human senses.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
45

Selingerová, Simona. „Systémy průmyslového vidění s roboty Kuka a jeho aplikace na synchronizaci pohybu robotu s pohybujícím se prvkem“. Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2010. http://www.nusl.cz/ntk/nusl-229178.

Der volle Inhalt der Quelle
Annotation:
This diploma thesis deals with a practical application employing a KUKA industrial robot and a vision system, a Siemens smart camera. The application is focused on synchronizing robot movements with objects moving on a conveyor belt. The introductory, theoretical part of the thesis surveys the machine vision systems currently available on the market. The practical part then focuses on the demonstration application: setting up the robotic cell, describing all the devices, and programming the robot and the vision system.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
46

Burger, Brice. „Fusion de données audio-visuelles pour l'interaction Homme-Robot“. Phd thesis, Université Paul Sabatier - Toulouse III, 2010. http://tel.archives-ouvertes.fr/tel-00494382.

Der volle Inhalt der Quelle
Annotation:
In the context of assistive robotics, this thesis aims to fuse two information channels (visual and auditory) available to a robot in order to complement and/or confirm the data that a single channel could have provided, with a view to advanced human-robot interaction. To this end, our work proposes a perceptual interface for multimodal interaction intended to jointly interpret speech and gesture, in particular for handling spatial references. We first describe the speech component of our work, which consists of an embedded system for recognizing and interpreting continuous speech. We then detail the vision part, composed of a multi-target visual tracker responsible for the 3D tracking of the head and both hands, as well as a second tracker responsible for tracking the orientation of the face. These trackers feed a DBN-based gesture recognition system described thereafter. We continue with the description of a module responsible for fusing the data from these information sources within a probabilistic framework. Finally, we demonstrate the interest and feasibility of such a multimodal interface through a number of demonstrations on the LAAS-CNRS robots. All of this work runs in near real time on these real robotic platforms.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
47

Manorathna, Prasad. „Intelligent 3D seam tracking and adaptable weld process control for robotic TIG welding“. Thesis, Loughborough University, 2015. https://dspace.lboro.ac.uk/2134/18794.

Der volle Inhalt der Quelle
Annotation:
Tungsten Inert Gas (TIG) welding is extensively used in aerospace applications, due to its unique ability to produce higher quality welds compared to other shielded arc welding types. However, most TIG welding is performed manually and has not achieved the levels of automation that other welding techniques have. This is mostly attributed to the lack of process knowledge and adaptability to complexities, such as mismatches due to part fit-up. Recent advances in automation have enabled the use of industrial robots for complex tasks that require intelligent decision making, predominantly through sensors. Applications such as TIG welding of aerospace components require tight tolerances and need intelligent decision making capability to accommodate any unexpected variation and to carry out welding of complex geometries. Such decision making procedures must be based on the feedback about the weld profile geometry. In this thesis, a real-time position based closed loop system was developed with a six axis industrial robot (KUKA KR 16) and a laser triangulation based sensor (Micro-Epsilon Scan control 2900-25).
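A position-based closed loop of this kind typically measures the lateral offset of the seam in the laser profile and feeds a small correction into the robot path every cycle. A schematic proportional-control sketch; the gain, profile resolution and sensor readings are illustrative, not the thesis's values:

    def seam_offset_from_profile(profile):
        """Return the lateral offset (mm) of the seam groove, taken here as the
        position of the minimum of the height profile relative to its centre."""
        i_min = min(range(len(profile)), key=profile.__getitem__)
        return (i_min - len(profile) // 2) * 0.1   # 0.1 mm per profile point (assumed)

    def correction(offset_mm, kp=0.5, max_step_mm=0.3):
        """Proportional correction, clamped so the torch never jumps abruptly."""
        step = kp * offset_mm
        return max(-max_step_mm, min(max_step_mm, step))

    # Simulated profiles drifting to one side: the controller steers the torch back.
    torch_y = 0.0
    for profile in ([5, 4, 3, 4, 5], [5, 4, 4, 3, 5], [5, 5, 4, 3, 4]):
        torch_y -= correction(seam_offset_from_profile(profile))
        print(round(torch_y, 2))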
APA, Harvard, Vancouver, ISO und andere Zitierweisen
48

Hasasneh, Ahmad. „Robot semantic place recognition based on deep belief networks and a direct use of tiny images“. Phd thesis, Université Paris Sud - Paris XI, 2012. http://tel.archives-ouvertes.fr/tel-00960289.

Der volle Inhalt der Quelle
Annotation:
Usually, human beings are able to quickly distinguish between different places, solely from their visual appearance. This is due to the fact that they can organize their space as composed of discrete units. These units, called "semantic places", are characterized by their spatial extent and their functional unity. Such a semantic category can thus be used as contextual information which fosters object detection and recognition. Recent works in semantic place recognition seek to endow the robot with similar capabilities. Contrary to classical localization and mapping works, this problem is usually addressed as a supervised learning problem. The question of semantic place recognition in robotics, the ability to recognize the semantic category of the place to which a scene belongs, is therefore a major requirement for the future of autonomous robotics. It is indeed required for an autonomous service robot to be able to recognize the environment in which it lives and to easily learn the organization of this environment in order to operate and interact successfully. To achieve that goal, different methods have already been proposed, some based on the identification of objects as a prerequisite to the recognition of the scenes, and some based on a direct description of the scene characteristics. If we make the hypothesis that objects are more easily recognized when the scene in which they appear is identified, the second approach seems more suitable. It is however strongly dependent on the nature of the image descriptors used, usually empirically derived from general considerations on image coding. Compared to these many proposals, another approach to image coding, based on a more theoretical point of view, has emerged in the last few years. Energy-based models of feature extraction, based on the principle of minimizing the energy of some function according to the quality of the reconstruction of the image, have led to Restricted Boltzmann Machines (RBMs), which are able to code an image as the superposition of a limited number of features taken from a larger alphabet. It has also been shown that this process can be repeated in a deep architecture, leading to a sparse and efficient representation of the initial data in the feature space. A complex classification problem in the input space is thus transformed into an easier one in the feature space. This approach has been successfully applied to the identification of tiny images from the 80-million-image database of MIT. In the present work, we demonstrate that semantic place recognition can be achieved on the basis of tiny images instead of conventional Bag-of-Words (BoW) methods, and on the use of Deep Belief Networks (DBNs) for image coding. We show that after appropriate coding, a softmax regression in the projection space is sufficient to achieve promising classification results. To our knowledge, this approach has not yet been investigated for scene recognition in autonomous robotics. We compare our methods with state-of-the-art algorithms using a standard robot localization database. We study the influence of system parameters and compare different conditions on the same dataset. These experiments show that our proposed model, while being very simple, leads to state-of-the-art results on a semantic place recognition task.
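The pipeline described above reduces to coding each tiny image with a (stacked) RBM and then classifying the resulting codes with softmax regression. The RBM training step can be sketched as one contrastive-divergence (CD-1) update; this is the generic textbook formulation on random dummy data, not the author's implementation or parameters:

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def cd1_step(v0, W, b_h, b_v, lr=0.05):
        """One contrastive-divergence (CD-1) update of an RBM on a batch of
        binary visible vectors v0 (batch x visible units)."""
        ph0 = sigmoid(v0 @ W + b_h)                    # P(h = 1 | v0)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)
        pv1 = sigmoid(h0 @ W.T + b_v)                  # reconstruction of the image
        ph1 = sigmoid(pv1 @ W + b_h)
        W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
        b_h += lr * (ph0 - ph1).mean(axis=0)
        b_v += lr * (v0 - pv1).mean(axis=0)
        return W, b_h, b_v, ph0                        # ph0 doubles as the feature code

    n_vis, n_hid = 32 * 32, 64                         # tiny 32x32 images, 64 hidden features
    W = rng.normal(0, 0.01, (n_vis, n_hid))
    b_h, b_v = np.zeros(n_hid), np.zeros(n_vis)
    batch = (rng.random((16, n_vis)) > 0.5).astype(float)
    W, b_h, b_v, codes = cd1_step(batch, W, b_h, b_v)
    print(codes.shape)        # (16, 64): features to feed a softmax place classifier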
APA, Harvard, Vancouver, ISO und andere Zitierweisen
49

Stránský, Václav. „Vizuální systém pro detekci obsazenosti parkoviště pomocí hlubokých neuronových sítí“. Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2017. http://www.nusl.cz/ntk/nusl-363868.

Der volle Inhalt der Quelle
Annotation:
The concept of smart cities is inherently connected with efficient parking solutions based on knowledge of the occupancy of individual parking spaces. The subject of this work is the design and implementation of a robust system for analyzing parking space occupancy from a multi-camera system with possible visual overlap between cameras. The system is designed and implemented in the Robot Operating System (ROS) and its core consists of two separate classifiers. The more successful, though slower, option is detection by a deep neural network; a quick response is provided by a less accurate motion classifier based on a background model. The system is capable of working in real time on a graphics card as well as on a CPU. Its success rate on a test data set from real operation exceeds 95 %.
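The faster of the two classifiers, the one based on a background model, can be approximated with OpenCV's MOG2 background subtractor: a parking space is reported as occupied when enough foreground pixels fall inside its region of interest. A sketch under that assumption; the ROI coordinates and the threshold are invented, and this is not the thesis's implementation:

    import cv2
    import numpy as np

    subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

    def space_occupied(frame, roi, fg_ratio_threshold=0.3):
        """roi = (x, y, w, h) of one parking space; returns True if the share of
        foreground pixels inside it exceeds the threshold."""
        mask = subtractor.apply(frame)
        x, y, w, h = roi
        patch = mask[y:y + h, x:x + w]
        return (patch > 0).mean() > fg_ratio_threshold

    # Feed a few synthetic frames: empty background, then a bright "car" appears.
    empty = np.zeros((480, 640, 3), dtype=np.uint8)
    for _ in range(50):
        space_occupied(empty, (100, 100, 80, 40))   # let the model learn the background
    car = empty.copy()
    car[100:140, 100:180] = 255
    print(space_occupied(car, (100, 100, 80, 40)))  # expected True for this synthetic case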
APA, Harvard, Vancouver, ISO und andere Zitierweisen
50

Lagarde, Matthieu, Philippe Gaussier und Pierre Andry. „Apprentissage de nouveaux comportements: vers le développement épigénétique d'un robot autonome“. Phd thesis, Université de Cergy Pontoise, 2010. http://tel.archives-ouvertes.fr/tel-00749761.

Der volle Inhalt der Quelle
Annotation:
Learning behaviors on an autonomous robot raises many questions related to motor control, behavior encoding, behavioral strategies and action selection. Using a developmental approach is of particular interest in the context of autonomous robotics. The robot's behavior relies on low-level mechanisms whose interactions allow more complex behaviors to emerge. The robot has no a priori information about its physical characteristics or about the environment; it must learn its own sensorimotor dynamics. I began my thesis with the study of a low-level imitation model. From a developmental point of view, imitation is present from birth and accompanies, in many forms, the development of the young child. It has a learning function, and thus turns out to be an asset in terms of the time needed to acquire behaviors, as well as a communication function that helps initiate and maintain natural, non-verbal interactions. Moreover, even when there is no real intention to imitate, observing another agent makes it possible to extract enough information to reproduce the task. My work therefore first consisted in applying and testing a developmental model that allows low-level imitation behaviors to emerge on an autonomous robot. This model is built as a homeostat that tends to balance its crude perceptual information (motion detection, color detection, information about the joint angles of a robot arm) through action. Thus, when a human moves a hand in the robot's visual field, the ambiguity of the robot's perception makes it confuse the human's hand with the end of its own arm. From the resulting error, a behavior of immediate imitation of the human's gestures emerges through the action of the homeostat. Of course, such a model implies that the robot is first able to associate the visual positions of its effector with the proprioceptive information of its motors. Thanks to the imitation behavior, the robot performs movements that it can then learn in order to build more complex behaviors. How, then, can one go from a simple movement to a more complex gesture that may involve an object or a place? I propose an architecture that allows a robot to learn a behavior in the form of complex temporal sequences (with repeated elements) of movements. Two different models for sequence learning were developed and tested. The first learns the timing of simple temporal sequences online. Since this model cannot learn complex sequences, the second model tested relies on the properties of a reservoir of dynamics and learns complex sequences online. Following this work, an architecture that learns the timing of a complex sequence was proposed. Tests in simulation and on the robot showed the need to add a resynchronization mechanism that retrieves the correct hidden states so that a complex sequence can be started from an intermediate state. In a third stage, my work consisted in studying how two sensorimotor strategies can coexist within a navigation task.
The first strategy encodes the behavior from spatial information, while the second uses temporal information. The two architectures were tested independently on the same task. These two strategies were then merged and executed in parallel. The fusion of the responses delivered by the two strategies was carried out using dynamic neural fields. A "chunking" mechanism representing the robot's instantaneous state (the current place together with the current action) makes it possible to resynchronize the dynamics of the temporal sequences. In parallel, a number of programming and design problems for the neural networks appeared: our networks can contain several hundred thousand neurons, and it then becomes difficult to run them on a single computing unit. How can neural architectures be designed under constraints of computation distribution, network communication and real time? Another part of my work consisted in providing tools for the modeling, communication and real-time execution of distributed architectures. Finally, within the European project Feelix Growing, I also took part in integrating my work with that of the LASA laboratory at EPFL for learning complex behaviors combining navigation, gesture and object. In conclusion, this thesis allowed me to develop new models for learning behaviors, in time and in space, new tools for managing very large neural networks, and to discuss, through the limitations of the current system, the elements that are important for an action selection system.
APA, Harvard, Vancouver, ISO und andere Zitierweisen