Dissertations on the topic „Machine vision for robot guidance“
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the top 50 dissertations for your research on the topic "Machine vision for robot guidance".
Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read its online abstract, provided the relevant parameters are available in the work's metadata.
Browse dissertations from a wide range of disciplines and compile your bibliography correctly.
Arthur, Richard B. „Vision-Based Human Directed Robot Guidance“. Diss., Brigham Young University, 2004. http://contentdm.lib.byu.edu/ETD/image/etd564.pdf.
Pearson, Christopher Mark. „Linear array cameras for mobile robot guidance“. Thesis, University of Oxford, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.318875.
Pretlove, John. „Stereoscopic eye-in-hand active machine vision for real-time adaptive robot arm guidance“. Thesis, University of Surrey, 1993. http://epubs.surrey.ac.uk/843230/.
Grepl, Pavel. „Strojové vidění pro navádění robotu“. Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2021. http://www.nusl.cz/ntk/nusl-443727.
Bohora, Anil R. „Visual robot guidance in time-varying environment using quadtree data structure and parallel processing“. Ohio : Ohio University, 1989. http://www.ohiolink.edu/etd/view.cgi?ohiou1182282896.
Gu, Lifang. „Visual guidance of robot motion“. University of Western Australia. Dept. of Computer Science, 1996. http://theses.library.uwa.edu.au/adt-WU2003.0004.
Sonmez, Ahmet Coskun. „Robot guidance using image features and fuzzy logic“. Thesis, University of Cambridge, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.259476.
Stark, Per. „Machine vision camera calibration and robot communication“. Thesis, University West, Department of Technology, Mathematics and Computer Science, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:hv:diva-1351.
Der volle Inhalt der QuelleThis thesis is a part of a larger project included in the European project, AFFIX. The reason for the project is to try to develop a new method to assemble an aircraft engine part so that the weight and manufacturing costs are reduced. The proposal is to weld sheet metal parts instead of using cast parts. A machine vision system is suggested to be used in order to detect the joints for the weld assembly operation of the sheet metal. The final system aims to locate a hidden curve on an object. The coordinates for the curve are calculated by the machine vision system and sent to a robot. The robot should create and follow a path by using the coordinates. The accuracy for locating the curve to perform an approved weld joint must be within +/- 0.5 mm. This report investigates the accuracy of the camera calibration and the positioning of the robot. It also brushes the importance of good lightning when obtaining images for a vision system and the development for a robot program that receives these coordinates and transform them into robot movements are included. The camera calibration is done in a toolbox for MatLab and it extracts the intrinsic camera parameters such as the distance between the centre of the lens and the optical detector in the camera: f, lens distortion parameters and principle point. It also returns the location of the camera and orientation at each obtained image during the calibration, the extrinsic parameters. The intrinsic parameters are used when translating between image coordinates and camera coordinates and the extrinsic parameters are used when translating between camera coordinates and world coordinates. The results of this project are a transformation matrix that translates the robots position into the cameras position. It also contains a robot program that can receive a large number of coordinates, store them and create a path to move along for the weld application.
Foster, D. J. „Pipelining : an approach for machine vision“. Thesis, University of Oxford, 1987. http://ora.ox.ac.uk/objects/uuid:1258e292-2603-4941-87db-d2a56b8856a2.
Leidenkrantz, Axel, und Erik Westbrandt. „Implementation of machine vision on a collaborative robot“. Thesis, Högskolan i Skövde, Institutionen för ingenjörsvetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-17039.
Brohan, Kevin Patrick. „Search and attention for machine vision“. Thesis, University of Manchester, 2012. https://www.research.manchester.ac.uk/portal/en/theses/search-and-attention-for-machine-vision(a4747c9b-ac13-46d1-8895-5f2d88523d80).html.
Dunn, Mark. „Applications of vision sensing in agriculture“. University of Southern Queensland, Faculty of Engineering and Surveying, 2007. http://eprints.usq.edu.au/archive/00004102/.
Subramanian, Vijay. „Autonomous vehicle guidance using machine vision and laser radar for agricultural applications“. [Gainesville, Fla.] : University of Florida, 2005. http://purl.fcla.edu/fcla/etd/UFE0011323.
Larsson, Mathias. „Machine vision for finding a joint to guide a welding robot“. Thesis, University West, Department of Engineering Science, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:hv:diva-1783.
Der volle Inhalt der QuelleThis report contains a description on how it is possible to guide a robot along an edge, by using a camera mounted on the robot. If stereo matching is used to calculate 3Dcoordinates of an object or an edge, it requires two images from different known positions and orientations to calculate where it is. In the image analysis in this project, the Canny edge filter has been used. The result from the filter is not useful directly, because it finds too many edges and it misses some pixels. The Canny edge result must be sorted and finally filled up before the final calculations can be started. This additional work with the image decreases unfortunately the accuracy in the calculations. The accuracy is estimated through comparison between measured coordinates of the edge using a coordinate measuring machine and the calculated coordinates. There is a deviation of up to three mm in the calculated edge. The camera calibration has been described in earlier thesis so it is not mentioned in this report, although it is a prerequisite of this project.
Watanabe, Yoko. „Stochastically optimized monocular vision-based navigation and guidance“. Diss., Atlanta, Ga. : Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/22545.
Der volle Inhalt der QuelleCommittee Chair: Johnson, Eric; Committee Co-Chair: Calise, Anthony; Committee Member: Prasad, J.V.R.; Committee Member: Tannenbaum, Allen; Committee Member: Tsiotras, Panagiotis.
Adeboye, Taiyelolu. „Robot Goalkeeper : A robotic goalkeeper based on machine vision and motor control“. Thesis, Högskolan i Gävle, Avdelningen för elektronik, matematik och naturvetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-27561.
Der volle Inhalt der QuelleTessier, Cédric. „Système de localisation basé sur une stratégie de perception cognitive appliqué à la navigation autonome d'un robot mobile“. Clermont-Ferrand 2, 2007. http://www.theses.fr/2007CLF21784.
Der volle Inhalt der QuelleHarper, Jason W. „Fast Template Matching For Vision-Based Localization“. Cleveland, Ohio : Case Western Reserve University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=case1238689057.
Der volle Inhalt der QuelleDepartment of Computer Engineering Abstract Title from OhioLINK abstract screen (viewed on 13 April 2009) Available online via the OhioLINK ETD Center
Mathavan, Senthan. „Trajectory solutions for a game-playing robot using nonprehensile manipulation methods and machine vision“. Thesis, Loughborough University, 2009. https://dspace.lboro.ac.uk/2134/34146.
Der volle Inhalt der QuelleSORIANO, PINTER JAUME. „Machine learning-based image processing for human-robot collaboration“. Thesis, KTH, Skolan för industriell teknik och management (ITM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-278899.
Der volle Inhalt der QuelleMänniska-robot samarbete, som ett nytt paradigm inom tillverkningsindustrin, har redan blivit ett omtalat ämne inom tillverkningsvetenskapen, produktforskningen, intelligent robotik och datavetenskapen. På grund av det senaste decenniets ökning av "deep learning" teknologier kan avancerade information-processerings teknologier bringa nya möjligheter för människarobot samarbete. Under tiden har även maskininlärnings-baserad bildklassificering med "convolutional neural network" blivit ett kraftfullt verktyg för att hantera problem så som måligenkänning och lokalisering. Dessa typer av teknologier har potential att implementeras nom robotiserad tillverkning och människa-robot samarbete. En utmaning är att implementera väldesignade "convolutional neural networks" kopplat till ett robot system som kan utföra arbete i samarbete med människan. Noggranhet och robusthet behöver också avvägas i utvecklingsarbetet. Detta examensarbete kommer att ta itu med denna utmaning. Detta examensarbete försöker att implementera en lösning baserad på maskininlärnings-metoder för bildigenkänning som tillåter oss att, med hjälp av en billig bild lösning (RGB enkel kamera), detektera och lokalisera tillverkningskomponenter att plocka upp och slutföra en montering, vilket hjälper den mänskliga medhjälparen, med en industriell robot. Detta förenklar också IT-uppgifterna för att köra den.
Krajcar, Milan. „Robotické vidění s průmyslovými roboty Kuka“. Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2009. http://www.nusl.cz/ntk/nusl-228676.
Der volle Inhalt der QuelleKlein, Joëlle. „Contribution à la commande orale d'un robot doté d'un système de vision“. Nancy 1, 1990. http://www.theses.fr/1990NAN10393.
Der volle Inhalt der QuelleCondom, Jean-Marie. „Un système de dialogue multimodal pour la communication avec un robot manipuleur“. Toulouse 3, 1992. http://www.theses.fr/1992TOU30155.
Der volle Inhalt der QuelleMiller, Michael E. „The development of an improved low cost machine vision system for robotic guidance and manipulation of randomly oriented, straight edged objects“. Ohio : Ohio University, 1989. http://www.ohiolink.edu/etd/view.cgi?ohiou1182445639.
Modi, Kalpesh Prakash. „Vision application of human robot interaction : development of a ping pong playing robotic arm“. Rochester Institute of Technology, 2005. https://ritdml.rit.edu/dspace/handle/1850/943.
Massé, Benoît. „Etude de la direction du regard dans le cadre d'interactions sociales incluant un robot“. Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM055/document.
Der volle Inhalt der QuelleRobots are more and more used in a social context. They are required notonly to share physical space with humans but also to interact with them. Inthis context, the robot is expected to understand some verbal and non-verbalambiguous cues, constantly used in a natural human interaction. In particular,knowing who or what people are looking at is a very valuable information tounderstand each individual mental state as well as the interaction dynamics. Itis called Visual Focus of Attention or VFOA. In this thesis, we are interestedin using the inputs from an active humanoid robot – participating in a socialinteraction – to estimate who is looking at whom or what.On the one hand, we want the robot to look at people, so it can extractmeaningful visual information from its video camera. We propose a novelreinforcement learning method for robotic gaze control. The model is basedon a recurrent neural network architecture. The robot autonomously learns astrategy for moving its head (and camera) using audio-visual inputs. It is ableto focus on groups of people in a changing environment.On the other hand, information from the video camera images are used toinfer the VFOAs of people along time. We estimate the 3D head poses (lo-cation and orientation) for each face, as it is highly correlated with the gazedirection. We use it in two tasks. First, we note that objects may be lookedat while not being visible from the robot point of view. Under the assump-tion that objects of interest are being looked at, we propose to estimate theirlocations relying solely on the gaze direction of visible people. We formulatean ad hoc spatial representation based on probability heat-maps. We designseveral convolutional neural network models and train them to perform a re-gression from the space of head poses to the space of object locations. Thisprovide a set of object locations from a sequence of head poses. Second, wesuppose that the location of objects of interest are known. In this context, weintroduce a Bayesian probabilistic model, inspired from psychophysics, thatdescribes the dependency between head poses, object locations, eye-gaze di-rections, and VFOAs, along time. The formulation is based on a switchingstate-space Markov model. A specific filtering procedure is detailed to inferthe VFOAs, as well as an adapted training algorithm.The proposed contributions use data-driven approaches, and are addressedwithin the context of machine learning. All methods have been tested on pub-licly available datasets. Some training procedures additionally require to sim-ulate synthetic scenarios; the generation process is then explicitly detailed
Chevalier, Pauline. „Impact of sensory preferences in individuals with autism spectrum disorderon their social interaction with a robot“. Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLY017/document.
Der volle Inhalt der QuelleThe goal of this thesis is to provide contributions that will help in the long term to enable personalized robot-based social interaction for individuals with Autism Spectrum Disorders (ASD). This work was done in collaboration with three care facilities for people suffering from ASD: IME MAIA (France) and IME Notre Ecole, medical and educative schools for children and teenagers with ASD, and FAM La Lendemaine (France), a medical house for adults with ASD.Inter-individual differences are present in ASD, and impact the behaviors of each individual in their lives, and in this study, during their interactions with a robot.The first step of our work was to propose an appropriate method to define the proprioceptive and visual profiles of each of our participants. We based our work on the hypothesis that the proprioceptive (the ability of an individual to determine body segment positions (i.e., joint position sense and to detect limb movements in space) and visual integration of cues of an individual with ASD is an indicator of their social and communication skills. We posit that a mitigated behavioral response (i.e., hyporeactivity) to visual motion and an overreliance on proprioceptive information are linked in individuals with ASD to their difficulties in integrating social cues and engaging in successful social interactions.We used two methods to define the proprioceptive and visual profile of our participant: a well-known questionnaire on sensory preferences and an experimental setup. With the setup, we were able to observe three different groups of postural behaviors in our participants. Thanks to these individual profiles, we could make assumptions on the behaviors that one can expect from each of our participants during interactions with the robot.We aimed to assess various social skills of our participants in regards to their profiles. We designed three single case studies: (1) emotion recognition with different embodiments (two robots, a virtual agent and a human); (2) a short greeting social task with the robot Nao; and (3) a game evaluating joint attention response to the robot Nao. We also conducted eight weeks-long sessions with an imitation task with Nao.Through these studies, we were able to observe that the participants that display an overreliance on proprioceptive cues and a hyporeactivity to visual cues had more difficulties to interact with the robot (less gaze towards the robot, less answers to joint attention initiation behaviors, more difficulties to recognize emotions and to imitate a partner) than the other participants.We were able to observe that the repeated sessions with the robot Nao were benefic for participants with ASD: after the sessions with the robot Nao, the participants showed an improvement in their social skills (gaze to the partner, imitations).Defining such individual profiles could provide promising strategies for designing successful and adapted Human-Robot Interaction for individuals with ASD
Marín, Urías Luis Felipe. „Reasoning about space for human-robot interaction“. Toulouse 3, 2009. http://thesesups.ups-tlse.fr/1195/.
Der volle Inhalt der QuelleHuman Robot Interaction is a research area that is growing exponentially in last years. This fact brings new challenges to the robot's geometric reasoning and space sharing abilities. The robot should not only reason on its own capacities but also consider the actual situation by looking from human's eyes, thus "putting itself into human's perspective". In humans, the "visual perspective taking" ability begins to appear by 24 months of age and is used to determine if another person can see an object or not. The implementation of this kind of social abilities will improve the robot's cognitive capabilities and will help the robot to perform a better interaction with human beings. In this work, we present a geometric spatial reasoning mechanism that employs psychological concepts of "perspective taking" and "mental rotation" in two general frameworks: - Motion planning for human-robot interaction: where the robot uses "egocentric perspective taking" to evaluate several configurations where the robot is able to perform different tasks of interaction. - A face-to-face human-robot interaction: where the robot uses perspective taking of the human as a geometric tool to understand the human attention and intention in order to perform cooperative tasks
Sattigeri, Ramachandra Jayant. „Adaptive Estimation and Control with Application to Vision-based Autonomous Formation Flight“. Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/16272.
Der volle Inhalt der QuelleEnvall, Zakarias. „Robot Racking : A Racking Solution for Autonomous Production“. Thesis, Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-70097.
Der volle Inhalt der QuelleEkvall, Staffan. „Robot Task Learning from Human Demonstration“. Doctoral thesis, Stockholm : School of Computer Science and Communication, Kungliga Tekniska högskolan (KTH), 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4279.
Der volle Inhalt der QuelleNagy, Marek. „Synchronizace pohybu průmyslového robotu s pohybem pásového dopravníku“. Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2014. http://www.nusl.cz/ntk/nusl-231322.
Der volle Inhalt der QuelleLoffreno, Michele. „Computer Vision and Machine Learning for a Spoon-feeding Robot : A prototype solution based on ABB YuMi and an Intel RealSense camera“. Thesis, Umeå universitet, Institutionen för tillämpad fysik och elektronik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-182503.
Der volle Inhalt der QuelleNo
山本, 聡史. „下側接近を特徴とする定置型イチゴ収穫ロボットの開発“. 京都大学, 2011. http://hdl.handle.net/2433/135408.
Der volle Inhalt der QuelleKyoto University (京都大学)
Doctor of Agricultural Science (thesis doctorate), Kyoto University. Examination committee: Professor 近藤 直 (chief examiner), Professor 清水 浩, Associate Professor 飯田 訓久.
Kira, Zsolt. „Communication and alignment of grounded symbolic knowledge among heterogeneous robots“. Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/33941.
Coupeté, Eva. „Reconnaissance de gestes et actions pour la collaboration homme-robot sur chaîne de montage“. Thesis, Paris Sciences et Lettres (ComUE), 2016. http://www.theses.fr/2016PSLEM062/document.
Der volle Inhalt der QuelleCollaborative robots are becoming more and more present in our everyday life. In particular, within the industrial environment, they emerge as one of the preferred solution to make assembly line in factories more flexible, cost-effective and to reduce the hardship of the operators’ work. However, to enable a smooth and efficient collaboration, robots should be able to understand their environment and in particular the actions of the humans around them.With this aim in mind, we decided to study technical gestures recognition. Specifically, we want the robot to be able to synchronize, adapt its speed and understand if something unexpected arises.We considered two use-cases, one dealing with copresence, the other with collaboration. They are both inspired by existing task on automotive assembly lines.First, for the co-presence use case, we evaluated the feasibility of technical gestures recognition using inertial sensors. We obtained a very good result (96% of correct recognition with one operator) which encouraged us to follow this idea.On the collaborative use-case, we decided to focus on non-intrusive sensors to minimize the disturbance for the operators and we chose to use a depth-camera. We filmed the operators with a top view to prevent most of the potential occultations.We introduce an algorithm that tracks the operator’s hands by calculating the geodesic distances between the points of the upper body and the top of the head.We also design and evaluate an approach based on discrete Hidden Markov Models (HMM) taking the hand positions as an input to recognize technical gestures. We propose a method to adapt our system to new operators and we embedded inertial sensors on tools to refine our results. We obtain the very good result of 90% of correct recognition in real time for 13 operators.Finally, we formalize and detail a complete methodology to realize technical gestures recognition on assembly lines
Edström, Jacob, und Pontus Mjöberg. „The Optimal Hardware Architecture for High Precision 3D Localization on the Edge. : A Study of Robot Guidance for Automated Bolt Tightening“. Thesis, KTH, Skolan för industriell teknik och management (ITM), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-263104.
Der volle Inhalt der QuelleIndustrin rör sig mot en högre grad av automatisering och uppkoppling, där tidigare manuella operationer anpassas för sammankopplade industriella robotar. Denna masteruppsats fokuserar specifikt på automatiseringen av åtdragningsapplikationer med förmonterade bultar och kollaborativa robotar. Användningen av 3D-datorseende undersöks för direkt lokalisering av bultar, för att möjliggöra flexibla monteringslösningar. En lokaliseringsalgoritm baserad på 3Ddata utvecklas med intentionen att skapa en lätt mjukvara för att köras på Edge-enheter. En restriktiv användning av djupinlärningsklassificering är därmed inkluderad, för att möjliggöra produktflexibilitet tillsammans med en minimering av den behövda beräkningskraften. Avvägningarna mellan edge- och moln- eller klusterberäkning för den valda applikationen undersöks för att identifiera smarta avlastningsmöjligheter till moln- eller klusterresurser. För att minska operationell fördröjning utvärderas även bildpartitionering, för att snabbare kunna starta operationen med en första koordinat och möjliggöra beräkningar parallellt med robotrörelser. Fyra olika hårdvaruarkitekturer testas, bestående av två olika enkortsdatorer, ett kluster av enkortsdatorer och en marknadsledande dator som en efterliknad lokal molnlösning. Alla system utom klustret visar sig prestera utan operationell fördröjning för applikationen. Den optimala hårdvaruarkitekturen visar sig därmed vara en konsumentklassad enkortsdator, optimerad på energieffektivitet, kostnad och storlek. Om endast variansen i kommunikationstid kan minskas visar klustret potential för att kunna reducera den totala beräkningstiden utan att skapa operationell fördröjning. Smart avlastning till djupinlärningsoptimerade molnresurser eller kluster av sammankopplade robotstationer visar sig möjliggöra ökad komplexitet och tillförlitlighet av algoritmen. Enkortsdatorn visar sig även kunna växla mellan en edge- och en klusterkonfiguration, för att antingen optimera för tiden att starta operationen eller för den totala beräkningstiden. Detta medför en hög flexibilitet i industriella sammanhang, där produktändringar kan hanteras utan behovet av hårdvaruförändringar för visuella beräkningar, vilket ytterligare möjliggör dess integrering i fabriksenheter.
Brèthes, Ludovic. „Suivi visuel par filtrage particulaire : application à l'interaction Homme-robot“. Toulouse 3, 2005. http://www.theses.fr/2005TOU30282.
Der volle Inhalt der QuelleThis thesis is focused on the detection and the tracking of people and also on the recognition of elementary gestures from video stream of a color camera embeded on the robot. Particle filter well suited to this context enables a straight combination/fusion of several measurement cues. We propose here various filtering strategies where visual information such as shape, color and motion are taken into account in the importance function and the measurement model. We compare and evaluate these filtering strategies in order to show which combination of visual cues and particle filter algorithm are more suitable to the interaction modalities that we consider for our tour-robot. Our last contribution relates to the recognition of symbolic gestures which enable to communicate with the robot. An efficient particle filter strategy is proposed in order to track the hand and to recognize at the same time its configuration and gesture dynamic in video stream
Colbert, Steven C. „Shape and Pose Recovery of Novel Objects Using Three Images from a Monocular Camera in an Eye-In-Hand Configuration“. Scholar Commons, 2010. http://scholarcommons.usf.edu/etd/3515.
Der volle Inhalt der QuelleRutkowski, Adam J. „A BIOLOGICALLY-INSPIRED SENSOR FUSION APPROACH TO TRACKING A WIND-BORNE ODOR IN THREE DIMENSIONS“. Case Western Reserve University School of Graduate Studies / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=case1196447143.
Der volle Inhalt der QuelleMelikian, Simon Haig. „Visual Search for Objects with Straight Lines“. Case Western Reserve University School of Graduate Studies / OhioLINK, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=case1134003738.
Der volle Inhalt der QuelleUskarci, Algan. „Human Arm Mimicking Using Visual Data“. Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/12605620/index.pdf.
Der volle Inhalt der QuelleKrutílek, Jan. „Systémy průmyslového vidění s roboty Kuka a jeho aplikace na rozpoznávání volně ložených prvků“. Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2010. http://www.nusl.cz/ntk/nusl-229174.
Der volle Inhalt der QuelleAvvari, Ddanukash. „A Literature Review on Differences Between Robotic and Human In-Line Quality Inspection in Automotive Manufacturing Assembly Line“. Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-56038.
Der volle Inhalt der QuelleSelingerová, Simona. „Systémy průmyslového vidění s roboty Kuka a jeho aplikace na synchronizaci pohybu robotu s pohybujícím se prvkem“. Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2010. http://www.nusl.cz/ntk/nusl-229178.
Der volle Inhalt der QuelleBurger, Brice. „Fusion de données audio-visuelles pour l'interaction Homme-Robot“. Phd thesis, Université Paul Sabatier - Toulouse III, 2010. http://tel.archives-ouvertes.fr/tel-00494382.
Der volle Inhalt der QuelleManorathna, Prasad. „Intelligent 3D seam tracking and adaptable weld process control for robotic TIG welding“. Thesis, Loughborough University, 2015. https://dspace.lboro.ac.uk/2134/18794.
Der volle Inhalt der QuelleHasasneh, Ahmad. „Robot semantic place recognition based on deep belief networks and a direct use of tiny images“. Phd thesis, Université Paris Sud - Paris XI, 2012. http://tel.archives-ouvertes.fr/tel-00960289.
Der volle Inhalt der QuelleStránský, Václav. „Vizuální systém pro detekci obsazenosti parkoviště pomocí hlubokých neuronových sítí“. Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2017. http://www.nusl.cz/ntk/nusl-363868.
Der volle Inhalt der QuelleLagarde, Matthieu, Philippe Gaussier und Pierre Andry. „Apprentissage de nouveaux comportements: vers le développement épigénétique d'un robot autonome“. Phd thesis, Université de Cergy Pontoise, 2010. http://tel.archives-ouvertes.fr/tel-00749761.