Dissertations on the topic "Camera guidance for robot"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the top 50 dissertations for research on the topic "Camera guidance for robot".
Next to every work in the list of references there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read its online abstract, whenever these are available in the record's metadata.
Browse dissertations from a wide variety of disciplines and compile a correct bibliography.
Pearson, Christopher Mark. „Linear array cameras for mobile robot guidance“. Thesis, University of Oxford, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.318875.
Grepl, Pavel. „Strojové vidění pro navádění robotu“. Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2021. http://www.nusl.cz/ntk/nusl-443727.
Macknojia, Rizwan. „Design and Calibration of a Network of RGB-D Sensors for Robotic Applications over Large Workspaces“. Thèse, Université d'Ottawa / University of Ottawa, 2013. http://hdl.handle.net/10393/23976.
Maier, Daniel [author], and Maren [academic supervisor] Bennewitz. „Camera-based humanoid robot navigation“. Freiburg : Universität, 2015. http://d-nb.info/1119452082/34.
Gu, Lifang. „Visual guidance of robot motion“. University of Western Australia. Dept. of Computer Science, 1996. http://theses.library.uwa.edu.au/adt-WU2003.0004.
Arthur, Richard B. „Vision-Based Human Directed Robot Guidance“. Diss., 2004. http://contentdm.lib.byu.edu/ETD/image/etd564.pdf.
Stark, Per. „Machine vision camera calibration and robot communication“. Thesis, University West, Department of Technology, Mathematics and Computer Science, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:hv:diva-1351.
This thesis is part of a larger effort within the European project AFFIX. The aim of the project is to develop a new method of assembling an aircraft engine part so that weight and manufacturing costs are reduced; the proposal is to weld sheet-metal parts instead of using cast parts. A machine vision system is suggested for detecting the joints in the weld assembly operation of the sheet metal. The final system aims to locate a hidden curve on an object: the coordinates of the curve are calculated by the machine vision system and sent to a robot, which should create and follow a path using those coordinates. The accuracy of locating the curve must be within +/- 0.5 mm for an approved weld joint. This report investigates the accuracy of the camera calibration and the positioning of the robot. It also touches on the importance of good lighting when obtaining images for a vision system, and includes the development of a robot program that receives the coordinates and transforms them into robot movements. The camera calibration is done with a MATLAB toolbox, which extracts the intrinsic camera parameters, such as the distance between the centre of the lens and the optical detector (the focal length f), the lens distortion parameters, and the principal point. It also returns the location and orientation of the camera at each image obtained during calibration, the extrinsic parameters. The intrinsic parameters are used when translating between image coordinates and camera coordinates, and the extrinsic parameters when translating between camera coordinates and world coordinates. The results of this project are a transformation matrix that translates the robot's position into the camera's position, and a robot program that can receive a large number of coordinates, store them, and create a path to move along for the weld application.
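The split into intrinsic and extrinsic parameters described in this abstract can be sketched as a minimal pinhole projection; the focal length, principal point, and pose below are hypothetical values, not figures from the thesis:

```python
import numpy as np

def project_points(world_pts, K, R, t):
    """Pinhole projection: world -> camera (extrinsics) -> image (intrinsics)."""
    cam = (R @ world_pts.T + t.reshape(3, 1)).T    # extrinsic transform
    img = (K @ cam.T).T                            # apply intrinsics
    return img[:, :2] / img[:, 2:3]                # perspective divide

# Hypothetical intrinsics: focal length 800 px, principal point (320, 240).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                        # camera aligned with the world frame
t = np.zeros(3)

pts = np.array([[0.0, 0.0, 2.0],     # point on the optical axis
                [0.1, 0.0, 2.0]])    # point 10 cm to the side
uv = project_points(pts, K, R, t)
```

The first point lands exactly on the principal point; the second is displaced by f·X/Z = 800·0.1/2 = 40 px, which is the translation between camera and image coordinates the abstract refers to.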
Snailum, Nicholas. „Mobile robot navigation using single camera vision“. Thesis, University of East London, 2001. http://roar.uel.ac.uk/3565/.
Quine, Ben. „Spacecraft guidance systems : attitude determination using star camera data“. Thesis, University of Oxford, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.360417.
Burman, Gustav, and Simon Erlandsson. „ACM 9000 : Automated Camera Man“. Thesis, KTH, Maskinkonstruktion (Inst.), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-230253.
In today's digital society, the way teaching is carried out is constantly changing. Education is being digitized through the use of online courses and digital lectures. This bachelor's thesis seeks an answer to the question of how a lecture can be filmed without a camera operator, using an automated camera man, for easier production of high-quality video material. Through a modularized design process, practical tests, and scientific studies, such a system was designed. The automatic camera tripod can be placed at the back of a lecture hall; a camera mounted on it can record or stream footage while the tripod tracks the lecturer's position by means of image processing.
Sonmez, Ahmet Coskun. „Robot guidance using image features and fuzzy logic“. Thesis, University of Cambridge, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.259476.
Steer, Barry. „Navigation for the guidance of a mobile robot“. Thesis, University of Warwick, 1985. http://wrap.warwick.ac.uk/97892/.
Marshall, Matthew Q. „Multi-camera uncalibrated visual servoing“. Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/49117.
Carrera, Mendoza Gerardo. „Robot SLAM and navigation with multi-camera computer vision“. Thesis, Imperial College London, 2012. http://hdl.handle.net/10044/1/9672.
Mohareri, Omid. „Image and haptic guidance for robot-assisted laparoscopic surgery“. Thesis, University of British Columbia, 2015. http://hdl.handle.net/2429/54953.
Faculty of Applied Science, Department of Electrical and Computer Engineering, Graduate.
Burman, Gustav, and Simon Erlandsson. „ACM 9000 : Automated Camera Man“. Thesis, KTH, Mekatronik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-233140.
In today's digital society, the way teaching is carried out is constantly changing. Education is being digitized through the use of online courses and digital lectures. This bachelor's thesis seeks an answer to the question of how a lecture can be filmed without a camera operator, using an automated camera man, for easier production of high-quality video material. Through a modularized design process, practical tests, and scientific studies, such a system was designed. The automatic camera tripod can be placed at the back of a lecture hall; a camera mounted on it can record or stream footage while the tripod tracks the lecturer's position by means of image processing.
Durusu, Deniz. „Camera Controlled Pick And Place Application With Puma 760 Robot“. Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/2/12606759/index.pdf.
Patel, Niravkumar Amrutlal. „Towards Closed-loop, Robot Assisted Percutaneous Interventions under MRI Guidance“. Digital WPI, 2017. https://digitalcommons.wpi.edu/etd-dissertations/130.
Stein, Procópio Silveira. „Framework for visual guidance of an autonomous robot using learning“. Master's thesis, Universidade de Aveiro, 2009. http://hdl.handle.net/10773/2495.
This work describes the research and development of a learning infrastructure for mobile robot driving. The method uses artificial neural networks to compute the steering direction that a robot should apply to stay inside a track.
The network "learns" to compute a direction based on examples from human drivers, replicating and sometimes even improving on human-like behaviors. A learning approach can overcome some shortcomings of the classical algorithms used for robot steering computation. Regarding processing speed, artificial neural networks are very fast, which makes them well suited for real-time navigation. They can also pick up information that went undetected by humans and therefore could not be coded into classical programs. The implementation of this new form of interaction between humans and robots, which are simultaneously "teaching" and "learning" from each other, will also be emphasized in this work. The platform used for this research is one of the robots of the Project Atlas, designed as an autonomous robot to participate in the Autonomous Driving Competition, held annually as part of the Portuguese Robotics Open. To render this robot able to serve as a robust test platform, several revisions and improvements were conducted at the mechanical, electronic, and software levels, with the latter being particularly important as it establishes a new framework for group and modular code development.
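As a rough illustration of the learning-from-demonstration idea described above (not the thesis's actual network or data), a tiny neural network can be trained to imitate steering examples; the demonstration data here is a synthetic stand-in in which steering is roughly proportional to the negative lateral offset from the lane centre:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "human demonstration" data (hypothetical): lateral offset
# from the lane centre -> steering command.
offsets = rng.uniform(-1.0, 1.0, size=(200, 1))
steer = -0.8 * offsets + 0.02 * rng.normal(size=(200, 1))

# One hidden layer, trained by full-batch gradient descent on the MSE.
W1 = rng.normal(scale=0.5, size=(1, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(3000):
    h = np.tanh(offsets @ W1 + b1)          # forward pass
    err = (h @ W2 + b2) - steer             # prediction error
    gW2 = h.T @ err / len(err)
    gb2 = err.mean(axis=0)
    gh = (err @ W2.T) * (1.0 - h ** 2)      # backprop through tanh
    gW1 = offsets.T @ gh / len(err)
    gb1 = gh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

def predict_steer(offset):
    """Steering command the trained net suggests for a lateral offset."""
    return float(np.tanh(np.array([[offset]]) @ W1 + b1) @ W2 + b2)
```

After training, the net steers left when the robot drifts right and vice versa, replicating the demonstrated behavior; inference is a handful of matrix products, which is what makes such networks attractive for real-time navigation.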
Ma, Mo. „Navigation using one camera in structured environment /“. View abstract or full-text, 2007. http://library.ust.hk/cgi/db/thesis.pl?ECED%202007%20MA.
Albrecht, Ladislav. „Realizace kamerového modulu pro mobilní robot jako nezávislého uzlu systému ROS - Robot Operating System“. Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2020. http://www.nusl.cz/ntk/nusl-417773.
Sasaki, Hironobu, Toshio Fukuda, Masashi Satomi and Naoyuki Kubota. „Growing neural gas for intelligent robot vision with range imaging camera“. IEEE, 2009. http://hdl.handle.net/2237/13913.
Ubaldi, Stefano. „Markerless robot programming by demostration using a time-of-flight camera“. Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019.
Mehrandezh, Mehran. „Navigation-guidance-based robot trajectory planning for interception of moving objects“. Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape8/PQDD_0005/NQ41242.pdf.
Meger, David Paul. „Planning, localization, and mapping for a mobile robot in a camera network“. Thesis, McGill University, 2007. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=101623.
Keepence, B. S. „Navigation of autonomous mobile robots“. Thesis, Cardiff University, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.304921.
Pannu, Rabindra. „Path Traversal Around Obstacles by a Robot using Terrain Marks for Guidance“. University of Cincinnati / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1312571550.
Teimoori Sangani, Hamid (Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW). „Topics in navigation and guidance of wheeled robots“. Publisher: University of New South Wales, Electrical Engineering & Telecommunications, 2009. http://handle.unsw.edu.au/1959.4/43709.
Nur, Kamruddin Md. „Synthetic view of retail spaces using camera and RFID sensors on a robot“. Doctoral thesis, Universitat Pompeu Fabra, 2016. http://hdl.handle.net/10803/352710.
In this thesis, two approaches to presenting indoor-view information are proposed, using radio-frequency identification (RFID) and camera sensors on a robot. The goal is to capture images of the indoor environment and build a 3D view, so that users can see, navigate, and locate a product in the view. RFID is an Auto-ID system capable of identifying a tagged object from a distance without a direct line of sight; an RFID system can also be configured to acquire approximate locations of RFID-tagged objects. The first approach presents the creation of a Google Street View-like indoor view onto which product information obtained via RFID is projected. The second projects the RFID-derived product information onto a 3D point-cloud view built with an inexpensive RGB-D (red, green, blue, and depth) camera sensor and RGB-D SLAM.
Motta, J. M. S. T. „Optimised robot calibration using a vision-based measurement system with a single camera“. Thesis, Cranfield University, 1999. http://dspace.lib.cranfield.ac.uk/handle/1826/11097.
Goroshin, Rostislav. „Obstacle detection using a monocular camera“. Thesis, Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/24697.
Bachnak, Rafic A. „Development of a stereo-based multi-camera system for 3-D vision“. Ohio : Ohio University, 1989. http://www.ohiolink.edu/etd/view.cgi?ohiou1172005477.
Nygaard, Andreas. „High-Level Control System for Remote Controlled Surgical Robots : Haptic Guidance of Surgical Robot“. Thesis, Norwegian University of Science and Technology, Department of Engineering Cybernetics, 2008. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-8864.
This report considers work to improve the autonomy of surgical teleoperated systems by introducing haptic guidance. The use of robots in surgical procedures has become more common in recent years, but is still in its infancy. Advantages of using robots include scalability of movements, reduced tremor, better visualization systems, and a greater range of motion than in conventional minimally invasive surgery. On the other hand, the lack of tactile feedback and the highly unstructured medical environment restrict the use of teleoperated robots to specific tasks within specific procedures. One way of improving the autonomy of a teleoperated system is to introduce predefined constraints in the surgical environment, creating a trajectory or a forbidden area in order to guide the movements of the surgeon; this is often called haptic guidance. This report introduces the basics of teleoperated systems, with control schemes, models, and analytical tools. Algorithms for haptic guidance have been developed, and the entire control and guidance system has been modified and adapted for implementation on a real teleoperated system. Theoretical analysis of the position-position (PP) control scheme reveals some general stability and performance characteristics, later used as a basis for tuning the real system parameters. The teleoperated system consists of a Phantom Omni device from SensAble Technologies as the master manipulator and an AESOP 3000DS from Computer Motion Inc. as the slave manipulator, with the control system implemented on a regular PC connecting the complete system. Tests reveal that the slave manipulator is not suited to this task due to a significant communication time delay, limited velocity, and inadequate control possibilities; as a consequence, force feedback based on the PP control scheme is impossible, and the performance of the entire teleoperated system is limited.
The guidance system is implemented in two variations, one based on slave positions and one based on master positions, motivated by the wish to compare performance under variations of position error/tracking between the two manipulators. Slave-based guidance appears to be stable only for limited gain values and thus generates no strict constraints; it can be used to guide the operator away from forbidden areas, but is not suitable for high-precision guiding. Master-based guidance is stable for very high gains and has the accuracy to improve the surgeon's precision during procedures; in the case of line guidance, it gives a deviation of up to 1.3 mm from the given trajectory. The work has shown the potential of haptic guidance to improve accuracy and precision in surgical procedures, but hardware limitations, among other factors, leave room for several improvements before a fully working teleoperated system can be developed.
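The kind of trajectory constraint described above can be sketched as a simple spring-like "virtual fixture": the further the master device strays from a predefined guidance line, the stronger the restoring force, while motion along the line is left free. The gain and geometry below are hypothetical, not parameters from the thesis:

```python
import numpy as np

def guidance_force(pos, line_point, line_dir, k_guide):
    """Restoring force pulling the master device toward a guidance line.

    The force is proportional to the perpendicular deviation from the
    predefined trajectory, so motion along the line is unconstrained.
    """
    p = np.asarray(pos, dtype=float)
    p0 = np.asarray(line_point, dtype=float)
    d = np.asarray(line_dir, dtype=float)
    d = d / np.linalg.norm(d)
    closest = p0 + ((p - p0) @ d) * d    # projection of pos onto the line
    return k_guide * (closest - p)

# Hypothetical numbers: device 2 cm off a line along x, gain 50 N/m.
force = guidance_force([0.0, 0.02, 0.0], [0.0, 0.0, 0.0], [1.0, 0.0, 0.0], 50.0)
```

Points already on the line feel no force, which is why such a fixture guides without strictly constraining; high gains make the line feel stiff, mirroring the master-based guidance results reported above.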
Bohora, Anil R. „Visual robot guidance in time-varying environment using quadtree data structure and parallel processing“. Ohio : Ohio University, 1989. http://www.ohiolink.edu/etd/view.cgi?ohiou1182282896.
Marx, Anja. „Utilization of an under-actuated robot for co-manipulated guidance for breast cancer detection“. Paris 6, 2013. http://www.theses.fr/2013PA066645.
This thesis belongs to the field of co-manipulation. In co-manipulated systems, the robot and the user perform a task collaboratively. Three types of co-manipulation exist. Orthotic co-manipulation is used in particular for limb rehabilitation, where the robot and the human are linked at more than one point. In serial co-manipulation, the robot sits between the user and the tool, which is controlled by the robot. This thesis is set in the context of parallel co-manipulation, in which the robot and the human handle the tool directly and simultaneously. This principle of parallel co-manipulation was applied in a medical context, specifically the diagnosis of breast cancer. Today, the standard procedure for these examinations is based on consecutive imaging of the breast, first by mammography (MX) and then by ultrasound (U/S). The 2D U/S image represents a slice of the object, while MX images can be stacked as "image layers" to obtain a 3D model of the breast. This raises the main difficulty of the imaging examination: during the ultrasound examination, the radiologist must locate a region of interest previously defined in the MX images using only the 2D slice of the breast. Moreover, the patient must adopt a different position for each examination: for mammography she stands with one breast compressed between a compression paddle and the system's detector, whereas for ultrasound she lies on her back. This difference in patient posture is the second difficulty of the breast examination. The system proposed in this thesis simplifies the combined examination procedure by keeping the same breast geometry; in addition, a robotic arm guiding the ultrasound probe is added to the existing mammography system.
A parallel co-manipulation system was thus implemented that allows simultaneous manipulation of the ultrasound probe by the robot and the user. Several parallel co-manipulation systems have already been presented in the medical field; all have at least as many actuated degrees of freedom (DOF) as the task to be performed, which implies a high cost for the whole system as well as possible bulk caused by the robotic structure. The interest of this work is to analyze alternative solutions that significantly improve the medical gesture while reducing the robot's bulk and cost. From a robotics point of view, the innovation consists in proposing tool guidance in an under-actuated way: the robot does not provide assistance covering all the DOF of the task, but partial help aimed at improving the radiologist's gestures. Measures such as distance to the target and examination time were chosen as performance indicators. The results of a first series of tests showed that fully actuated guidance improves user performance compared with no guidance. To qualify the improvements obtained with under-actuated guidance, different under-actuation modes were tested. The results show that even partial guidance significantly increases the quality of ultrasound examinations: accuracy was increased while the duration of the intervention decreased. Reducing the DOF nevertheless requires adapting the robot's control to the system architecture; it was observed in this thesis that a simple reduction of the DOF can induce instabilities related to the system architecture, which must therefore be adapted to the under-actuation of each case.
Pretlove, John. „Stereoscopic eye-in-hand active machine vision for real-time adaptive robot arm guidance“. Thesis, University of Surrey, 1993. http://epubs.surrey.ac.uk/843230/.
Dokur, Omkar. „Embedded System Design of a Real-time Parking Guidance System“. Scholar Commons, 2015. http://scholarcommons.usf.edu/etd/5939.
„Multiple camera pose estimation“. Thesis, 2008. http://library.cuhk.edu.hk/record=b6074556.
Furthermore, the process of estimating the rotation and translation parameters between a stereo pair from the essential matrix is investigated. This is an essential step for our multi-camera pose estimation method. We show that there are 16 solutions based on the singular value decomposition (not four or eight as previously thought). We also suggest a checking step to ascertain that the proposed algorithm comes up with accurate results: the checking step ensures the accuracy of the fundamental matrix calculated using the pose obtained. This provides a speedy way to calibrate a stereo rig. Our proposed theories are supported by the real and synthetic data experiments reported in this thesis.
In this thesis, we solve the pose estimation problem for robot motion by placing multiple cameras on the robot. In particular, we use four cameras arranged as two back-to-back stereo pairs combined with the Extended Kalman Filter (EKF). The EKF is used to provide a frame by frame recursive solution suitable for the real-time application at hand. The reason for using multiple cameras is that the pose estimation problem is more constrained for multiple cameras than for a single camera. Their use is further justified by the drop in price which is accompanied by the remarkable increase in accuracy. Back-to-back cameras are used since they are likely to have a larger field of view, provide more information, and capture more features. In this way, they are more likely to disambiguate the pose translation and rotation parameters. Stereo information is used in self-initialization and outlier rejection. Simple yet efficient methods have been proposed to tackle the problem of long-sequence drift. Our approaches have been compared, under different motion patterns, to other methods in the literature which use a single camera. Both the simulations and the real experiments show that our approaches are the most robust and accurate among them all as well as fast enough to realize the real-time requirement of robot navigation.
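The essential-matrix factorization discussed in this abstract builds on the textbook decomposition into rotation and translation candidates; a numpy sketch of that four-candidate baseline (the thesis argues that additional SVD sign ambiguities must also be screened) on a synthetic, hypothetical stereo pose is:

```python
import numpy as np

def skew(v):
    """Cross-product matrix [v]_x, so that skew(v) @ u == cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def decompose_essential(E):
    """Enumerate the four textbook (R, t) candidates of an essential matrix.

    Cheirality (points in front of both cameras) must still be checked by
    the caller to select the physically valid solution.
    """
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:     # enforce proper rotations
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0]])
    t = U[:, 2]                  # translation direction (up to sign/scale)
    return [(U @ W @ Vt, t), (U @ W @ Vt, -t),
            (U @ W.T @ Vt, t), (U @ W.T @ Vt, -t)]

# Synthetic stereo pose (hypothetical): 0.1 rad rotation about z,
# unit-norm translation along x.
theta = 0.1
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta), np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 0.0, 0.0])
candidates = decompose_essential(skew(t_true) @ R_true)
```

The true pose is always among the four candidates; the checking step mentioned in the abstract (re-deriving the fundamental matrix from the recovered pose) is one way to confirm the selected candidate.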
Mohammad Ehab Mohammad Ragab.
"April 2008."
Adviser: K. H. Wong.
Source: Dissertation Abstracts International, Volume: 70-03, Section: B, page: 1763.
Thesis (Ph.D.)--Chinese University of Hong Kong, 2008.
Includes bibliographical references (p. 138-148) and index.
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Abstracts in English and Chinese.
School code: 1307.
Huang, Yu-Hsun, and 黃又勳. „Applying Virtual Guidance for Robot Teleoperation“. Thesis, 2007. http://ndltd.ncl.edu.tw/handle/02392861738394931968.
National Chiao Tung University
Department of Electrical and Control Engineering
95
Rapid development in computer networks has greatly enhanced the capability of teleoperation systems; people nowadays can explore an unknown area using a remote-controlled robot. However, a telerobotic system still has some drawbacks. For instance, the operator cannot obtain remote information fast enough due to network time delay, and it is difficult to judge the relative distance between objects from a planar image. Some researchers have proposed using virtual environments and virtual guidance to tackle these problems. Virtual guidance aids the user in manipulating the robot by providing haptic and visual cues. In this thesis, we propose a novel virtual guidance scheme that is more effective for object grasping than those proposed previously. The proposed virtual guidance integrates the planar image and force reflection to provide guidance; with it, the operator can easily control the robot gripper for object grasping in the presence of positioning imprecision. In implementation, we developed a networked VR-based telerobotic system for the proposed virtual guidance, and experiments demonstrate the effectiveness of the proposed approach.
Hong, Hao-cian, and 洪豪謙. „Mobile Robot Localization via RGB-D Camera“. Thesis, 2013. http://ndltd.ncl.edu.tw/handle/63865054447658598133.
National University of Tainan
Master's Program, Department of Computer Science and Information Engineering
101
How to make a robot sense its environment and localize itself in the workspace is a very important issue in the field of autonomous mobile robots. Monte Carlo Localization is a well-established method for this problem: if the sensors provide correct information, it exhibits good stability and accuracy. In practical applications, however, Monte Carlo Localization is constrained by the sensors and the computing power of the robot. In much recent research, two kinds of sensors are applied to Monte Carlo Localization: cameras, which provide image information, and distance sensors, which provide range information (e.g., laser range finders, infrared, etc.). A camera yields more information, which speeds up convergence and gives higher accuracy, but requires more computation time. Using distance information, in contrast, requires less computation; but if the range features are not distinctive, convergence speed and localization accuracy suffer. In this paper, a novel approach that integrates distance and image information for Monte Carlo Localization is proposed: the distance information is used to locate regions of interest in which feature matching is performed. In this way, Monte Carlo Localization achieves higher localization accuracy and faster convergence. The RGB-D camera applied in our work is the Microsoft Kinect sensor; although its accuracy is not fully satisfactory, it is cheaper than other sensors, so the cost of a practical system can be reduced.
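The Monte Carlo Localization scheme this abstract builds on can be sketched as a one-dimensional particle filter with a single range measurement; the map, noise levels, and motion commands below are hypothetical, not values from the thesis:

```python
import numpy as np

rng = np.random.default_rng(1)

WALL = 10.0          # known map: a wall at x = 10 in a 1-D corridor
true_x = 3.0         # robot's (hidden) true position

N = 500
particles = rng.uniform(0.0, WALL, size=N)   # uniform initial belief

for _ in range(10):
    # Motion update: the robot commands a 0.5 m step to the right.
    true_x += 0.5
    particles += 0.5 + rng.normal(0.0, 0.05, N)
    # Measurement update: range to the wall, Gaussian likelihood.
    z = WALL - true_x
    weights = np.exp(-0.5 * ((WALL - particles - z) / 0.1) ** 2)
    weights /= weights.sum()
    # Resampling concentrates particles in high-likelihood regions.
    particles = particles[rng.choice(N, size=N, p=weights)]

estimate = particles.mean()   # localization estimate after 10 steps
```

The abstract's proposal corresponds to replacing the simple range likelihood here with a combined model: range readings narrow down regions of interest, and image-feature matching within those regions refines the particle weights.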
Tsai, Ming-Jin, and 蔡明瑾. „Camera Calibration of Home Robot Vision System“. Thesis, 2002. http://ndltd.ncl.edu.tw/handle/34205646219620471162.
National Chiao Tung University
Department of Computer and Information Science
90
Camera calibration is a crucial step in the reconstruction of a 3D model and has long been an important research topic in computer vision. Calibration techniques can be classified roughly into two categories: photogrammetric calibration and self-calibration. In this paper, we study different algorithms to calibrate a camera. The main method is based on images of a planar pattern obtained from different viewing angles, as proposed in [30]. Both synthetic data and real images have been tested, and results with satisfactory accuracy have been obtained. The method handles noise well but is time-consuming. To improve the efficiency of the calibration, a second method that uses a homography to quickly compute the focal length is adopted. Proper mapping between the results obtained by these two methods can then be used to derive the correct camera parameters.
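The homography-based focal-length shortcut mentioned above can be sketched under strong assumptions (zero skew, unit aspect ratio, principal point already shifted to the image origin), using the Zhang-style orthogonality constraint on the rotation columns; the pose and focal length below are synthetic, not values from the thesis:

```python
import numpy as np

def focal_from_homography(H):
    """Estimate the focal length from a single plane-to-image homography.

    Assumes zero skew, unit aspect ratio, and a centred principal point,
    so K = diag(f, f, 1).  With H = K [r1 r2 t], orthogonality of the
    rotation columns r1 and r2 yields one equation in 1/f^2.
    """
    h1, h2 = H[:, 0], H[:, 1]
    num = h1[0] * h2[0] + h1[1] * h2[1]
    den = h1[2] * h2[2]
    return float(np.sqrt(-num / den))

# Synthetic check with a hypothetical pose: f = 700 px, plane rotated
# about both the x and y axes so the constraint is well conditioned.
f_true = 700.0
a, b = 0.3, 0.4
Rx = np.array([[1, 0, 0],
               [0, np.cos(a), -np.sin(a)],
               [0, np.sin(a), np.cos(a)]])
Ry = np.array([[np.cos(b), 0, np.sin(b)],
               [0, 1, 0],
               [-np.sin(b), 0, np.cos(b)]])
R = Rx @ Ry
K = np.diag([f_true, f_true, 1.0])
H = K @ np.column_stack([R[:, 0], R[:, 1], [0.0, 0.3, 2.0]])
f_est = focal_from_homography(H)
```

A fronto-parallel plane makes the denominator vanish, which is why the quick method needs tilted views; combining it with the full planar-pattern calibration, as the abstract describes, recovers the remaining parameters.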
Beach, David Michael. „Multi-camera benchmark localization for mobile robot networks“. 2004. http://link.library.utoronto.ca/eir/EIRdetail.cfm?Resources__ID=362308&T=F.
Beach, David Michael. „Multi-camera benchmark localization for mobile robot networks“. 2005. http://link.library.utoronto.ca/eir/EIRdetail.cfm?Resources__ID=369735&T=F.
Li, Jian-De, and 李建德. „Guidepost understanding for robot guidance with single view“. Thesis, 1988. http://ndltd.ncl.edu.tw/handle/21943749085368651705.
Rao, Fang-Shiang, and 饒方翔. „Active Guidance for a Passive Robot Walking Helper“. Thesis, 2010. http://ndltd.ncl.edu.tw/handle/14406536318976111496.
National Chiao Tung University
Institute of Electrical and Control Engineering
99
Recently, the problem of an aging population has become more and more serious, and how to take good care of the elderly is now an important issue around the world. Along with progress in medical technology, several robot walking helpers have been developed. This motivated us to develop a robot walking helper, named i-go, in our laboratory to assist the elderly in daily life. In this thesis, based on previously proposed navigation techniques, we develop two guidance algorithms for a passive walking helper and realize them on the i-go: (1) a position-controlling algorithm and (2) a position- and orientation-controlling algorithm. The former guides the user to a desired position; the latter guides the user not only to a desired position but also to a desired orientation. The proposed guidance algorithms have been verified experimentally. In the future, we expect the i-go to guide the elderly in real environments. We will introduce the i-go to Alzheimer's patients, so that they can rely on it for movement despite memory decline and a poor sense of orientation.
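A guidance law of the second kind (driving both position and orientation error to zero) can be sketched with the classical polar-coordinate go-to-pose controller for a differential-drive platform; this is a generic textbook law with hypothetical gains, not the thesis's algorithm:

```python
import numpy as np

# Gains (hypothetical); k_rho > 0, k_alpha > k_rho, k_beta < 0 give
# convergence of the classical polar-coordinate go-to-pose law.
k_rho, k_alpha, k_beta = 0.5, 1.5, -0.3

def wrap(a):
    """Wrap an angle to [-pi, pi]."""
    return np.arctan2(np.sin(a), np.cos(a))

def step(x, y, th, gx, gy, gth, dt=0.02):
    rho = np.hypot(gx - x, gy - y)                 # distance to goal
    alpha = wrap(np.arctan2(gy - y, gx - x) - th)  # heading-to-goal error
    beta = wrap(gth - th - alpha)                  # final-orientation error
    v = k_rho * rho                                # drive toward the goal
    w = k_alpha * alpha + k_beta * beta            # steer toward goal pose
    return x + v * np.cos(th) * dt, y + v * np.sin(th) * dt, th + w * dt

x, y, th = 0.0, 0.0, 0.0
goal = (1.0, 1.0, np.pi / 2)       # desired position and orientation
for _ in range(1500):              # 30 s of simulated time
    x, y, th = step(x, y, th, *goal)
```

Setting k_beta to zero reduces this to a position-only controller, mirroring the two algorithm variants described in the abstract.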
Chen, Hong-Shiang, und 陳弘翔. „Robot Photographer: Achieving Camera Works with a PTZ Camcorder Mounted on a Mobile Robot“. Thesis, 2012. http://ndltd.ncl.edu.tw/handle/36171141488499843199.
National Chi Nan University
Department of Computer Science and Information Engineering
100
The robot photographer is a recent important development in the field of robotics. However, conventional robot photographer systems have mainly aimed at taking high-quality still images; very few studies focus on how to shoot high-quality films. The goal of this thesis is to develop an automatic camera robot that mimics a professional cameraman, selecting proper camera works and composition rules during filming. Furthermore, the camera robot assesses the quality of each shot to filter out low-quality video clips. In this study, the camera robot is built on a Pioneer-3DX mobile robot, with a notebook computer as the main controller. A Kinect sensor gathers skeleton information of the people in the camera's field of view for selecting camera works, and a face detection algorithm acquires the information needed for shot composition. The system also includes a pan-tilt unit and a camcorder controlled by the computer through the Local Application Control Bus System (LANC). The implemented system has been tested in a real environment. The results show that the camera robot can successfully select different camera works for automatic filming and delete low-quality video clips. A subjective evaluation has been conducted, and the results show that the videos acquired by the camera robot are visually appealing.
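As an illustration of the kind of composition rule such a system applies (the thesis's exact rules are not spelled out here), the following hypothetical helper nudges a detected face toward the nearest rule-of-thirds power point; the resulting pixel offset would be fed to the pan-tilt controller:

```python
def thirds_correction(face_cx, face_cy, frame_w, frame_h):
    """Pixel offset from the detected face centre to the nearest
    rule-of-thirds power point of the frame (illustrative, not the
    thesis's exact implementation)."""
    # The four intersections of the thirds lines.
    points = [(frame_w * i / 3, frame_h * j / 3)
              for i in (1, 2) for j in (1, 2)]
    tx, ty = min(points,
                 key=lambda p: (p[0] - face_cx)**2 + (p[1] - face_cy)**2)
    return tx - face_cx, ty - face_cy  # negative = move target left/up

dx, dy = thirds_correction(500.0, 300.0, 1280, 720)
print(dx, dy)  # small nudge toward the upper-left power point
```

In a full system this offset would be converted to pan/tilt angles through the camera's field of view before being sent to the pan-tilt unit.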
Rajagopalan, Ramesh. „Guidance control for automatic guided vehicles employing binary camera vision“. Thesis, 1991. http://spectrum.library.concordia.ca/3386/1/NN68738.pdf.
Lu, Yen-Te, und 呂彥德. „IR Camera Technology for Navigation System of Wheeled Robot“. Thesis, 2011. http://ndltd.ncl.edu.tw/handle/7jwxha.
National Taiwan University of Science and Technology
Department of Mechanical Engineering
99
Most research on indoor positioning systems is based on wireless communication technology, whose accuracy is unfortunately still off by dozens of centimeters. In this research, we design a positioning system based on infrared technology, using an IR camera together with an LED module. The LED module, comprising an IR LED and an RF communication module, serves as the infrared source. The coordinates of the LED module are recorded by an IR camera mounted on the wheeled robot. Moreover, the wheeled robot is equipped with an ultrasonic module to avoid obstacles at unknown positions, or to plan paths around obstacles whose positions are known. The user can command and monitor the wheeled robot from a PC-based user interface, with Bluetooth used for data transfer. Current results show high accuracy, with a positioning error in the 5-10 cm range, compared to the 75-100 cm range of wireless communication. This technique is proposed as a new positioning system offering a simpler method and higher accuracy for line-of-sight indoor environments.
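The abstract leaves the geometry implicit. Under the assumption of an upward-facing pinhole camera observing a ceiling-mounted IR LED of known position and height, the robot's floor position follows from a single pixel measurement; all parameter names below are hypothetical, chosen for illustration:

```python
import math

def robot_position_from_beacon(u, v, beacon_xy, height, f_px,
                               cx=512.0, cy=384.0, robot_heading=0.0):
    """Infer the robot's floor position from the pixel coordinates (u, v)
    of a ceiling IR LED beacon seen by an upward-facing camera.
    `beacon_xy` is the beacon's known floor-plane position (m), `height`
    the camera-to-ceiling distance (m), `f_px` the focal length in
    pixels, (cx, cy) the principal point."""
    # Pinhole model: a pixel offset maps to a lateral offset at the ceiling.
    off_x = (u - cx) * height / f_px
    off_y = (v - cy) * height / f_px
    # Rotate the offset from the camera frame into the world frame.
    c, s = math.cos(robot_heading), math.sin(robot_heading)
    wx = c * off_x - s * off_y
    wy = s * off_x + c * off_y
    # The beacon appears displaced by (wx, wy) from straight overhead.
    return beacon_xy[0] - wx, beacon_xy[1] - wy

x, y = robot_position_from_beacon(612.0, 384.0, beacon_xy=(2.0, 3.0),
                                  height=2.5, f_px=1000.0)
print(round(x, 3), round(y, 3))  # beacon sits 0.25 m ahead in +x
```

With the numbers above, a 100-pixel offset at 2.5 m ceiling height corresponds to 0.25 m on the floor, consistent with the centimeter-level accuracy the thesis reports for short, line-of-sight ranges.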
Su, Wei-Yu, und 蘇威宇. „Implementation of Wireless Network-Controlled Robot with Cloud Camera“. Thesis, 2016. http://ndltd.ncl.edu.tw/handle/y59d77.
I-Shou University
Department of Electrical Engineering
104
Wireless network control technology is now part of everyday life. People work hard and often cannot make proper use of their time; even well-planned schedules are difficult to keep. Through wireless network control, however, people can handle matters at the same time even from different locations. This thesis uses wireless network control and a remote video camera to provide assistance in daily life. Specifically, it employs the Internet of Things (IoT), an Arduino YUN development board, an Arduino UNO board, an Arduino expansion board, the Arduino IDE, a Parallax Standard Servo, an ultrasonic sensor (HC-SR04), and a video camera module, so that the machine can be remotely controlled over the Internet while the camera streams real-time images back to the user's phone and management page, enabling real-time monitoring at home and creating personal home automation. Home automation refers to control methods that improve quality of life and make home life more comfortable and safe. In the future, automation will be used not only in industry but also in daily life, letting people use their time effectively and achieve "smart home living" and "smart remote monitoring."
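The robot-side control logic is not listed in the abstract. A minimal Python stand-in shows its shape: a command string from the phone or web UI is mapped to motor settings, with the HC-SR04 ultrasonic reading used as a safety stop; the command names and the 20 cm threshold are illustrative assumptions:

```python
def handle_command(cmd, distance_cm):
    """Translate a remote command string into (left, right) motor
    settings, refusing to drive forward into a nearby obstacle
    reported by the ultrasonic sensor (illustrative sketch)."""
    actions = {
        "forward": (1.0, 1.0),
        "back":    (-1.0, -1.0),
        "left":    (-0.5, 0.5),
        "right":   (0.5, -0.5),
        "stop":    (0.0, 0.0),
    }
    # Safety stop: obstacle closer than 20 cm blocks forward motion.
    if cmd == "forward" and distance_cm < 20:
        cmd = "stop"
    return actions.get(cmd, actions["stop"])

print(handle_command("forward", 100.0))  # (1.0, 1.0)
print(handle_command("forward", 12.0))   # (0.0, 0.0): safety stop
```

On the actual hardware the returned pair would be scaled to servo or PWM values on the Arduino, with the network transport (HTTP or a socket on the YUN) delivering the command strings.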