
Dissertations on the topic "Camera guidance for robot"


Explore the top 50 dissertations for research on the topic "Camera guidance for robot".

Next to every entry in the bibliography you will find an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online abstract whenever the relevant parameters are available in the metadata.

Browse dissertations from a wide range of disciplines and compile a correctly formatted bibliography.

1

Pearson, Christopher Mark. „Linear array cameras for mobile robot guidance“. Thesis, University of Oxford, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.318875.

2

Grepl, Pavel. „Strojové vidění pro navádění robotu“. Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2021. http://www.nusl.cz/ntk/nusl-443727.

Annotation:
This master's thesis deals with the design, assembly, and testing of a camera system for the localization of randomly placed and oriented objects on a conveyor belt, with the purpose of guiding a robot to those objects. The theoretical part surveys the individual components that make up a camera system and the field of 2D and 3D object localization. The practical part covers two possible arrangements of the camera system, the solution of the chosen arrangement, the creation of test images, the programming of the image-processing algorithm, the creation of an HMI, and the testing of the complete system.
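The localization step such a system performs can be illustrated in a few lines of OpenCV. The sketch below is only a generic stand-in for the thesis' algorithm, with a placeholder image file and an arbitrary noise threshold: it thresholds the conveyor image, extracts blobs, and reports each part's position and orientation for a robot pick.

```python
import cv2

# Generic 2D part localization on a conveyor image (illustrative only):
# binarize, find external contours, and report each blob's oriented box.
img = cv2.imread("conveyor.png", cv2.IMREAD_GRAYSCALE)   # placeholder image
_, bw = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for c in contours:
    if cv2.contourArea(c) < 500:          # ignore small noise blobs (assumed)
        continue
    (cx, cy), (w, h), angle = cv2.minAreaRect(c)   # position + orientation
    print(f"part at ({cx:.1f}, {cy:.1f}) px, rotated {angle:.1f} deg")
```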
3

Macknojia, Rizwan. „Design and Calibration of a Network of RGB-D Sensors for Robotic Applications over Large Workspaces“. Thèse, Université d'Ottawa / University of Ottawa, 2013. http://hdl.handle.net/10393/23976.

Annotation:
This thesis presents an approach for configuring and calibrating a network of RGB-D sensors used to guide a robotic arm to interact with objects that get rapidly modeled in 3D. The system is based on Microsoft Kinect sensors for 3D data acquisition. The work presented here also details an analysis and experimental study of the Kinect’s depth sensor capabilities and performance. The study comprises examination of the resolution, quantization error, and random distribution of depth data. In addition, the effects of color and reflectance characteristics of an object are also analyzed. The study examines two versions of Kinect sensors, one dedicated to the Xbox 360 video game console and the more recent Microsoft Kinect for Windows version. The study of the Kinect sensor is extended to the design of a rapid acquisition system dedicated to large workspaces by linking multiple Kinect units to collect 3D data over a large object, such as an automotive vehicle. A customized calibration method for this large workspace is proposed which takes advantage of the rapid 3D measurement technology embedded in the Kinect sensor and provides registration accuracy between local sections of point clouds that is within the depth-measurement accuracy permitted by the Kinect technology. The method is developed to calibrate all Kinect units with respect to a reference Kinect. The internal calibration of the sensor between the color and depth measurements is also performed to optimize the alignment between the modalities. The calibration of the 3D vision system is also extended to formally estimate its configuration with respect to the base of a manipulator robot, therefore allowing seamless integration between the proposed vision platform and the kinematic control of the robot. The resulting vision-robotic system thus encompasses the comprehensive calibration of the reference Kinect with the robot. The latter can then be used to interact under visual guidance with large objects, such as vehicles, that are positioned within a significantly enlarged field of view created by the network of RGB-D sensors. The proposed design and calibration method is validated in a real-world scenario where five Kinect sensors operate collaboratively to rapidly and accurately reconstruct a 180-degree coverage of the surface shape of various types of vehicles from a set of individual acquisitions performed in a semi-controlled environment, namely an underground parking garage. The vehicle geometric properties generated from the acquired 3D data are compared with the original dimensions of the vehicle.
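The core of any such network calibration is composing homogeneous transforms through a commonly observed target: a Kinect i that measures the target pose can be expressed in the reference Kinect's frame as T_r_i = T_r_target · inv(T_i_target). A minimal numpy sketch, with placeholder identity poses standing in for measured ones:

```python
import numpy as np

# Chaining rigid-body transforms through a shared calibration target.
def inv_se3(T):
    """Invert a 4x4 rigid-body transform without a general matrix inverse."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

T_r_target = np.eye(4)                    # target pose seen by the reference Kinect
T_i_target = np.eye(4)                    # target pose seen by Kinect i (placeholder)
T_r_i = T_r_target @ inv_se3(T_i_target)  # Kinect i expressed in reference frame r

# A homogeneous point measured by Kinect i lands in the common frame as:
p_i = np.array([0.5, 0.2, 1.8, 1.0])
p_r = T_r_i @ p_i
print(p_r[:3])
```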
4

Maier, Daniel [author], and Maren [academic supervisor] Bennewitz. „Camera-based humanoid robot navigation“. Freiburg : Universität, 2015. http://d-nb.info/1119452082/34.

5

Gu, Lifang. „Visual guidance of robot motion“. University of Western Australia. Dept. of Computer Science, 1996. http://theses.library.uwa.edu.au/adt-WU2003.0004.

Annotation:
Future robots are expected to cooperate with humans in daily activities, and efficient cooperation requires new techniques for transferring human skills to robots. This thesis presents an approach by which a robot can extract and replicate a motion by observing how a human instructor performs it. In this way, the robot can be taught without any explicit instructions, and the human instructor does not need any expertise in robot programming. A system has been implemented which consists of two main parts. The first part is data acquisition and motion extraction. Vision is the most important sense with which a human interacts with the surrounding world, so two cameras are used to capture the image sequences of a moving rigid object. In order to compress the incoming images from the cameras and extract the 3D motion information of the rigid object, feature detection and tracking are applied to the images. Corners are chosen as the main features because they are more stable under perspective projection and during motion. A reliable corner detector is implemented, and a new corner tracking algorithm is proposed based on smooth motion constraints. With both spatial and temporal constraints, 3D trajectories of a set of points on the object can be obtained, and the 3D motion parameters of the object can be reliably calculated by the algorithm proposed in this thesis. Once the 3D motion parameters are available through the vision system, the robot should be programmed to replicate this motion. Since we are interested in smooth motion and in the similarity between two motions, the task of the second part of the system is to extract motion characteristics and transfer them to the robot. It can be proven that the characteristics of a parametric cubic B-spline curve are completely determined by its control points, which can be obtained by least-squares fitting given some data points on the curve. Therefore a parametric cubic B-spline curve is fitted to the motion data and its control points are calculated. Given the robot configuration, the obtained control points can be scaled, translated, and rotated so that a motion trajectory can be generated for the robot to replicate the given motion in its own workspace with the required smoothness and similarity, although the absolute motion trajectories of the robot and the instructor can differ. All the above modules have been integrated, and results of an experiment with the whole system show that the proposed approach can extract motion characteristics and transfer them to a robot. A robot arm has successfully replicated a human arm movement with similar shape characteristics using this approach. In conclusion, such a system collects human skill and intelligence through vision and transfers them to the robot; a robot equipped with it can interact with its environment and learn by observation.
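The trajectory-transfer step lends itself to a short illustration. The following sketch, with synthetic stand-in data rather than tracked points, fits a least-squares cubic B-spline to 3D motion samples with SciPy and resamples a smooth path; the spline's control points (inside tck) are what would then be scaled and rotated into the robot workspace.

```python
import numpy as np
from scipy import interpolate

# Fit a smoothing cubic B-spline to a 3D trajectory and resample it.
t = np.linspace(0, 1, 200)
xyz = np.stack([np.cos(2 * np.pi * t),    # stand-in for the tracked
                np.sin(2 * np.pi * t),    # 3D feature trajectory
                0.2 * t])

# Cubic (k=3) least-squares spline; tck holds knots and control points,
# which fully characterize the curve.
tck, u = interpolate.splprep(xyz, k=3, s=1e-3)
smooth = interpolate.splev(np.linspace(0, 1, 50), tck)   # resampled smooth path
```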
6

Arthur, Richard B. „Vision-Based Human Directed Robot Guidance“. Diss., 2004. http://contentdm.lib.byu.edu/ETD/image/etd564.pdf.

7

Stark, Per. „Machine vision camera calibration and robot communication“. Thesis, University West, Department of Technology, Mathematics and Computer Science, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:hv:diva-1351.

Annotation:

This thesis is part of a larger project within the European project AFFIX. The aim of the project is to develop a new method of assembling an aircraft engine part so that weight and manufacturing costs are reduced. The proposal is to weld sheet-metal parts instead of using cast parts. A machine vision system is suggested for detecting the joints for the weld assembly operation on the sheet metal. The final system aims to locate a hidden curve on an object. The coordinates of the curve are calculated by the machine vision system and sent to a robot, which should create and follow a path using these coordinates. The accuracy in locating the curve must be within +/- 0.5 mm to produce an approved weld joint. This report investigates the accuracy of the camera calibration and the positioning of the robot. It also touches on the importance of good lighting when obtaining images for a vision system, and includes the development of a robot program that receives the coordinates and transforms them into robot movements. The camera calibration is done in a MatLab toolbox and extracts the intrinsic camera parameters, such as the distance between the centre of the lens and the optical detector in the camera (f), the lens distortion parameters, and the principal point. It also returns the extrinsic parameters: the location and orientation of the camera at each image obtained during the calibration. The intrinsic parameters are used when translating between image coordinates and camera coordinates, and the extrinsic parameters are used when translating between camera coordinates and world coordinates. The results of this project are a transformation matrix that translates the robot's position into the camera's position, and a robot program that can receive a large number of coordinates, store them, and create a path to move along for the welding application.
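For a concrete reference point, the equivalent intrinsic/extrinsic calibration can be done with OpenCV instead of the MatLab toolbox. The sketch below assumes placeholder chessboard image files and a 25 mm square size; it returns the camera matrix (containing f and the principal point), the distortion coefficients, and per-image extrinsics.

```python
import cv2
import numpy as np

# Chessboard-based camera calibration (intrinsics + per-image extrinsics).
pattern = (9, 6)                      # inner corners of the board (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * 0.025

obj_pts, img_pts, size = [], [], None
for fname in ["calib_01.png", "calib_02.png", "calib_03.png"]:  # placeholders
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)
        size = gray.shape[::-1]

# K holds f and the principal point; rvecs/tvecs are the extrinsics.
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, size,
                                                 None, None)
print("intrinsics:\n", K, "\ndistortion:", dist.ravel())
```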

8

Snailum, Nicholas. „Mobile robot navigation using single camera vision“. Thesis, University of East London, 2001. http://roar.uel.ac.uk/3565/.

Annotation:
This thesis describes the research carried out in overcoming the problems encountered during the development of an autonomous mobile robot (AMR) which uses a single television camera for navigation in environments with visible edges, such as corridors and hallways. The objective was to determine the minimal sensing and signal processing requirements for a real AMR that could achieve self-steering, navigation and obstacle avoidance in real unmodified environments. A goal was to design algorithms that could meet the objective while being able to run on a laptop personal computer (PC). This constraint confined the research to computationally efficient algorithms and memory management techniques. The methods by which the objective was successfully achieved are described. A number of noise reduction and feature extraction algorithms have been tested to determine their suitability in this type of environment, and where possible these have been modified to improve their performance. The current methods of locating lines of perspective and vanishing points in images are described, and a novel algorithm has been devised for this application which is more efficient in both its memory usage and execution time. A novel obstacle avoidance mechanism is described which is shown to provide the low level piloting capability necessary to deal with unexpected situations. The difficulties of using a single camera are described, and it is shown that a second camera is required in order to provide robust performance. A prototype AMR was built and used to demonstrate reliable navigation and obstacle avoidance in real time in real corridors. Test results indicate that the prototype could be developed into a competitively priced commercial service robot.
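As background, the classical vanishing-point pipeline that work like this improves on can be condensed to a few lines: detect edges, fit line segments with the Hough transform, and intersect their support lines in a least-squares sense. This is an illustration of the standard method, not the thesis' novel algorithm; the image file is a placeholder.

```python
import cv2
import numpy as np

# Classical vanishing-point estimation from corridor edges.
gray = cv2.imread("corridor.png", cv2.IMREAD_GRAYSCALE)   # placeholder
edges = cv2.Canny(gray, 80, 160)
segs = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                       minLineLength=40, maxLineGap=5)

# Each segment (x1,y1,x2,y2) defines a line a*x + b*y = c; stack all lines
# and solve for the point minimizing the squared distance to every line.
A, c = [], []
for x1, y1, x2, y2 in segs[:, 0]:
    a, b = y2 - y1, x1 - x2            # unit normal components of the line
    n = np.hypot(a, b)
    A.append([a / n, b / n])
    c.append((a * x1 + b * y1) / n)

vp, *_ = np.linalg.lstsq(np.array(A), np.array(c), rcond=None)
print("vanishing point (px):", vp)
```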
9

Quine, Ben. „Spacecraft guidance systems : attitude determination using star camera data“. Thesis, University of Oxford, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.360417.

10

Burman, Gustav, and Simon Erlandsson. „ACM 9000 : Automated Camera Man“. Thesis, KTH, Maskinkonstruktion (Inst.), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-230253.

Annotation:
Today’s digital society is changing the way we learn and educate drastically. Education is being digitalized with the use of online courses and digital lectures. This bachelor thesis solves the problem of how to be able to record a lecture without a camera operator, an Automated Camera Man (ACM), for easier production of high quality education material. It was achieved with a modularized design process, practical testing and a scientific approach. The Automated Camera Man can be placed in the rear of the lecture hall to record or stream the content while it actively adjusts itself and its direction towards the lecturer using image processing and analysis.
11

Sonmez, Ahmet Coskun. „Robot guidance using image features and fuzzy logic“. Thesis, University of Cambridge, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.259476.

12

Steer, Barry. „Navigation for the guidance of a mobile robot“. Thesis, University of Warwick, 1985. http://wrap.warwick.ac.uk/97892/.

Annotation:
This thesis is about how a vehicle can, without human intervention, be navigated and guided in its movements through its environment. To move a real mobile robot so that it traces out a desired path, 'commands' need to be dispensed to the control systems of the actuators that drive the wheels that move the vehicle. Algorithms which issue such commands are called guidance algorithms. These can cause the vehicle to move about at the desired speed, in the desired direction, and can change the direction of motion, or can achieve some other 'complex' manoeuvre. As commands from guidance algorithms are physically realised, and become the sensible motion of the mobile robot, the desired 'intentions' embodied in them become corrupted. To combat this corruption the mobile robot needs to keep track of where it is in relation to some reference system. This is navigation. The mobile robot needs to navigate so that 'commands' to the actuation systems can then be reformulated in terms of its navigated 'location', given the task it is doing, and where it has been commanded to go to. In this thesis three navigational phases are distinguished for a wheeled 'robotic' vehicle. Their utility was tested and confirmed by experiment, using a 0.5 tonne mobile robot equipped with the relevant sensors, and actuation systems. The three phases of navigation are:- 1) Deduced reckoning based on the intrinsic motion of the vehicle to produce an initial estimate of the vehicle's position and heading. 2) The use of an absolute measurement of the vehicle's bearing to correct errors in the estimated heading. 3) The use of sonar range measurements to objects in the surroundings to correct errors in the estimated position. The positional coordinates, orientation, and extent of these objects being held in a 'world map'. Two guidance algorithms to control a mobile robot's movement are needed, correctly sequenced and coordinated, to enable it to perform a range of useful activities. This thesis has examined 1) Guidance to achieve motion with zero curvature, for a specified distance, and orientated relative to some specified direction in the environment. 2) Guidance to achieve the reorientation of a vehicle, that has to move in order to turn, so that it can move forward again with zero curvature in a different direction. Finally, a new technique that modulates the steering wheel angle with a time dependent Gaussian envelope is given. This technique is able to produce desirable changes in the position and heading of a path curvature limited vehicle, as it moves. Examples of manoeuvres possible with this technique are illustrated.
13

Marshall, Matthew Q. „Multi-camera uncalibrated visual servoing“. Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/49117.

Annotation:
Uncalibrated visual servoing (VS) can improve robot performance without needing camera and robot parameters. Multiple cameras improve uncalibrated VS precision, but no prior work uses more than two cameras simultaneously. The first data for uncalibrated VS simultaneously using more than two cameras are presented. VS performance is also compared for two different camera models, a high-cost camera and a low-cost camera, the differences being image noise magnitude and focal length. A Kalman filter based control law for uncalibrated VS is introduced and shown to be stable under the assumptions that robot joint-level servo control can reach commanded joint offsets and that the servoing path goes through at least one full-column-rank robot configuration. Adaptive filtering by a covariance-matching technique is applied to achieve automatic camera weighting, prioritizing the best available data. A decentralized sensor fusion architecture is utilized to assure continuous servoing under camera occlusion. The decentralized adaptive Kalman filter (DAKF) control law is compared to a classical method, Gauss-Newton, via simulation and experimentation. Numerical results show that DAKF can improve average tracking error for moving targets and convergence time to static targets. DAKF reduces system sensitivity to noise and poor camera placement, yielding smaller outliers than Gauss-Newton. The DAKF system improves visual servoing performance, simplicity, and reliability.
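The central idea, estimating the image Jacobian online with a Kalman filter, can be sketched compactly. The toy code below is not the thesis' exact DAKF formulation (no adaptive covariance matching or multi-camera fusion), just the single-filter core with assumed dimensions and noise levels.

```python
import numpy as np

# State x = vec(J); after a joint move delta_q, the observed feature change
# is delta_s = J @ delta_q, which is linear in x.
m, n = 2, 3                       # feature dimension, joint dimension (assumed)
x = np.zeros(m * n)               # vec(J), row-major
P = 100.0 * np.eye(m * n)         # state covariance
Q = 1e-3 * np.eye(m * n)          # process noise: J drifts as a random walk
R = 1e-1 * np.eye(m)              # image measurement noise

def jacobian_update(x, P, delta_q, delta_s):
    """One Kalman predict/update step from a (delta_q, delta_s) pair."""
    P_pred = P + Q
    H = np.kron(np.eye(m), delta_q.reshape(1, -1))    # so H @ x == J @ delta_q
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x = x + K @ (delta_s - H @ x)
    P = (np.eye(m * n) - K @ H) @ P_pred
    return x, P

def servo_step(x, s, s_star, gain=0.5):
    """Joint-space command driving features s toward the target s_star."""
    J = x.reshape(m, n)
    return -gain * np.linalg.pinv(J) @ (s - s_star)
```

The random-walk process model is the usual choice here: it lets the estimated Jacobian track slow changes as the robot moves through its workspace without assuming any kinematic model.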
14

Carrera, Mendoza Gerardo. „Robot SLAM and navigation with multi-camera computer vision“. Thesis, Imperial College London, 2012. http://hdl.handle.net/10044/1/9672.

Annotation:
In this thesis we focus on computer vision capabilities suitable for practical mass-market mobile robots, with an emphasis on techniques using rigs of multiple standard cameras rather than more specialised sensors. We analyse the state of the art of service robotics, and attempt to distill the vision capabilities which will be required of mobile robots over the mid- and long-term future to permit autonomous localisation, mapping and navigation while integrating with other task-based vision requirements. The first main novel contribution of the work is to consider how an ad-hoc multi-camera rig can be used as the basis for metric navigation competences such as feature-based Simultaneous Localisation and Mapping (SLAM). The key requirement for the use of such techniques with multiple cameras is accurate calibration of the locations of the cameras as mounted on the robot. This is a challenging problem, since we consider the general case where the cameras might be mounted all around the robot with arbitrary 3D locations and orientations, and may have fields of view which do not intersect. In the second main part of the thesis, we move away from the idea that all cameras should contribute in a uniform manner to a single consistent metric representation, inspired by recent work on SLAM systems which have demonstrated impressive performance by a combination of off-the-shelf or simple techniques which we generally categorise by the term ‘lightweight’. We develop a multi-camera mobile robot vision system which goes beyond pure localisation and SLAM to permit fully autonomous mapping and navigation within a cluttered room, requiring free-space mapping and obstacle-avoiding planning capabilities. In the last part of the work we investigate the trade-offs involved in defining a camera rig suitable for this type of vision system and perform some experiments on camera placement.
15

Mohareri, Omid. „Image and haptic guidance for robot-assisted laparoscopic surgery“. Thesis, University of British Columbia, 2015. http://hdl.handle.net/2429/54953.

Annotation:
Surgical removal of the prostate gland using the da Vinci surgical robot is the state of the art treatment option for organ confined prostate cancer. The da Vinci system provides excellent 3D visualization of the surgical site and improved dexterity, but it lacks haptic force feedback and subsurface tissue visualization. The overall objective of the work done in this thesis is to augment the existing visualization tools of the da Vinci with ones that can identify the prostate boundary, critical structures, and cancerous tissue so that prostate resection can be carried out with minimal damage to the adjacent critical structures, and therefore, with minimal complications. Towards this objective we designed and implemented a real-time image guidance system based on a robotic transrectal ultrasound (R-TRUS) platform that works in tandem with the da Vinci surgical system and tracks its surgical instruments. In addition to ultrasound as an intrinsic imaging modality, the system was first used to bring pre-operative magnetic resonance imaging (MRI) to the operating room by registering the pre-operative MRI to the intraoperative ultrasound and displaying the MRI image at the correct physical location based on the real-time ultrasound image. Second, a method of using the R-TRUS system for tissue palpation is proposed by expanding it to be used in conjunction with a real-time strain imaging technique. Third, another system based on the R-TRUS is described for detecting dominant prostate tumors, based on a combination of features extracted from a novel multi-parametric quantitative ultrasound elastography technique. We tested our systems in an animal study followed by human patient studies involving n = 49 patients undergoing da Vinci prostatectomy. The clinical studies were conducted to evaluate the feasibility of using these systems in real human procedures, and also to improve and optimize our imaging systems using patient data. Finally, a novel force feedback control framework is presented as a solution to the lack of haptic feedback in the current clinically used surgical robots. The framework has been implemented on the da Vinci surgical system using the da Vinci Research Kit controllers and its performance has been evaluated by conducting user studies.
16

Burman, Gustav, and Simon Erlandsson. „ACM 9000 : Automated Camera Man“. Thesis, KTH, Mekatronik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-233140.

Annotation:
Today’s digital society is changing the way we learn and educate drastically. Education is being digitalized with the use of online courses and digital lectures. This bachelor thesis solves the problem of how to be able to record a lecture without a camera operator, an Automated Camera Man (ACM), for easier production of high quality education material. It was achieved with a modularized design process, practical testing and a scientific approach. The Automated Camera Man can be placed in the rear of the lecture hall to record or stream the content while it actively adjusts itself and its direction towards the lecturer using image processing and analysis.
17

Durusu, Deniz. „Camera Controlled Pick And Place Application With Puma 760 Robot“. Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/2/12606759/index.pdf.

Annotation:
This thesis analyzes the kinematic structure of the Puma 760 arm and presents the implementation of an image-based pick-and-place application that accounts for obstacles in the environment. Forward and inverse kinematic solutions of the PUMA 760 are derived, and control software has been developed to compute both. The control program enables the user to perform both offline programming and real-time execution by transmitting VAL commands (Variable Assembly Language) to the control computer. Using the proposed inverse kinematics solutions, an interactive application is implemented on the PUMA 760 arm. A picture of the workspace is taken by a fixed camera mounted above the robot workspace. The captured image is then processed to find the position and distribution of all objects in the workspace. The target is differentiated from the obstacles by analyzing specific properties of the objects, e.g. roundness. After determining the configuration of the workspace, a clustering-based search algorithm is executed to find a path along which to pick the target object and place it at the desired location. The trajectory points, in pixel coordinates, are mapped into robot workspace coordinates using the camera calibration matrix obtained during the calibration of the robot arm with respect to the attached camera. The joint angles required to bring the end effector of the robot arm to the desired location are calculated using a Jacobian-type inverse kinematics algorithm. The VAL commands are generated and sent to the control computer of the PUMA 760 to pick the object and place it at a user-defined location.
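For a planar workspace, the pixel-to-workspace mapping the abstract mentions is commonly realized with a homography estimated from a few reference points. The sketch below is a hypothetical illustration of that step, not the thesis' actual calibration matrix; the point correspondences are made up.

```python
import numpy as np
import cv2

# Plane-to-plane mapping from image pixels to metric table coordinates.
# Four or more reference points measured in both frames during calibration:
px = np.array([[100, 80], [520, 90], [510, 400], [110, 395]], np.float32)
xy = np.array([[0.0, 0.0], [0.40, 0.0], [0.40, 0.30], [0.0, 0.30]], np.float32)

H, _ = cv2.findHomography(px, xy)     # 3x3 projective map, image -> table plane

def pixel_to_workspace(u, v):
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]               # metric (x, y) in the workspace plane

print(pixel_to_workspace(305, 240))   # e.g. the centre of a detected object
```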
18

Patel, Niravkumar Amrutlal. „Towards Closed-loop, Robot Assisted Percutaneous Interventions under MRI Guidance“. Digital WPI, 2017. https://digitalcommons.wpi.edu/etd-dissertations/130.

Annotation:
Image-guided therapy procedures under MRI guidance have been a focused research area over the past decade. Over the same period, various MRI-guided robotic devices have been developed and used clinically for percutaneous interventions such as prostate biopsy, brachytherapy, and tissue ablation. Though MRI provides better soft-tissue contrast than Computed Tomography and Ultrasound, it poses various challenges such as constrained space, less ergonomic patient access, and limited material choices due to its high magnetic field. Even with advancements in MRI-compatible actuation methods and the robotic devices using them, most MRI-guided interventions are still open-loop in nature and rely on preoperative or intraoperative images. In this thesis, an intraoperative MRI-guided robotic system for prostate biopsy is presented, comprising an MRI-compatible 4-DOF robotic manipulator, a robot controller, and a control application with a Clinical User Interface (CUI) and surgical planning applications (3DSlicer and RadVision). This system utilizes intraoperative images, acquired after each full or partial needle insertion, for needle tip localization. The presented system was approved by the Institutional Review Board at Brigham and Women's Hospital (BWH) and has been used in 30 patient trials. The successful translation of such a system utilizing intraoperative MR images motivated the development of a system architecture for closed-loop, real-time MRI-guided percutaneous interventions. Robot-assisted, closed-loop intervention could help in accurate positioning and localization of the therapy delivery instrument, improve physician and patient comfort, and allow real-time therapy monitoring. Utilizing real-time MR images could also allow correction of the surgical instrument trajectory and controlled therapy delivery. Two applications validating the presented architecture, closed-loop needle steering and MRI-guided brain tumor ablation, are demonstrated under real-time MRI guidance.
19

Stein, Procópio Silveira. „Framework for visual guidance of an autonomous robot using learning“. Master's thesis, Universidade de Aveiro, 2009. http://hdl.handle.net/10773/2495.

Annotation:
Master's in Industrial Automation Engineering
This work describes the research and development of a learning infrastructure for mobile robot driving. The method uses artificial neural networks to compute the steering direction a robot should apply to stay inside a track. The network "learns" to compute a direction from examples given by human drivers, replicating and sometimes even improving human-like behaviors. A learning approach can overcome some limitations of classical algorithms for computing robot steering. Regarding processing speed, artificial neural networks are very fast, which makes them well suited for real-time navigation. They can also pick up information that went undetected by humans and therefore could not be coded into classical programs. The implementation of this new form of interaction between humans and robots, which are simultaneously "teaching" and "learning" from each other, is also emphasized in this work. The platform used for this research is one of the robots of the Project Atlas, designed as an autonomous robot to participate in the Autonomous Driving Competition held annually as part of the Portuguese Robotics Open. To make this robot a robust test platform, several revisions and improvements were carried out at the mechanical, electronic, and software levels, the last being of particular importance as it establishes a new framework for group and modular code development.
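The learning scheme can be illustrated with a toy behavioral-cloning snippet: a small neural network is fitted to (image features, steering angle) pairs logged from a human driver. Everything below is a stand-in, with random data in place of logged demonstrations and an assumed 16-dimensional feature vector.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# "Teaching" phase: fit a small network mapping per-frame image features
# to the steering angle recorded from a human driver. Feature extraction
# is assumed to happen elsewhere; synthetic data stands in for logs.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 16))          # per-frame features (assumed given)
y = np.tanh(X[:, 0] - 0.5 * X[:, 3])     # stand-in for logged steering angles

net = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=500)
net.fit(X, y)

# "Driving" phase: the forward pass is cheap, suiting real-time navigation.
steer = net.predict(X[:1])
```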
20

Ma, Mo. „Navigation using one camera in structured environment“. 2007. http://library.ust.hk/cgi/db/thesis.pl?ECED%202007%20MA.

21

Albrecht, Ladislav. „Realizace kamerového modulu pro mobilní robot jako nezávislého uzlu systému ROS - Robot Operating System“. Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2020. http://www.nusl.cz/ntk/nusl-417773.

Annotation:
Stereo vision is one of the most popular elements in the field of mobile robots and contributes significantly to their autonomous behaviour. The aim of the diploma thesis was to design and implement a camera module as an independent hardware sensor input, with the possibility of supplementing the system with other cameras, and to create a depth map from a pair of cameras. The diploma thesis consists of a theoretical part and a practical part, concluding with the results. The theoretical part introduces the ROS framework, discusses methods of creating depth maps, and provides an overview of the most popular stereo cameras in robotics. The practical part describes in detail the preparation of the experiment and its implementation, as well as the camera calibration and the creation of the depth map. The last chapter contains an evaluation of the experiment.
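The depth-map step can be illustrated with OpenCV's semi-global block matching on a rectified stereo pair. File names and matcher parameters below are placeholders, not values from the thesis.

```python
import cv2

# Disparity from a rectified stereo pair with semi-global block matching.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=96, blockSize=7)
disparity = stereo.compute(left, right).astype(float) / 16.0  # fixed point -> px

# With calibration known, metric depth follows as Z = f * B / d
# (focal length f, baseline B, disparity d). Here we just visualise it.
vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("disparity.png", vis)
```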
22

Sasaki, Hironobu, Toshio Fukuda, Masashi Satomi, and Naoyuki Kubota. „Growing neural gas for intelligent robot vision with range imaging camera“. IEEE, 2009. http://hdl.handle.net/2237/13913.

23

Ubaldi, Stefano. „Markerless robot programming by demonstration using a time-of-flight camera“. Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019.

Annotation:
Robot programming by demonstration is a technique that consists in automatically converting a demonstration of a certain task into a robot program ready for use. In particular, the purpose of this thesis is to exploit a 3D time-of-flight camera, used to acquire manipulation tasks performed by a human operator, to plan the trajectories that the robot will replicate. To this purpose, a demonstration device has been developed and used as the target to be tracked by the camera. The prototype is equipped with a gripper similar to the end effector of an industrial robot so that, during the demonstration movement, the human operator approaches, picks and releases the objects to be manipulated just as the robot would. No passive or active markers are used and, since the 3D time-of-flight camera represents the instantaneous image of the tracked object as a point cloud, the device has been designed with a special external geometry that is easy to detect and thus provides its pose unambiguously in each acquired time frame. Subsequently, an Iterative Closest Point algorithm is applied to the acquired point clouds to calculate the homogeneous transformation matrices that compactly represent the relative displacement of the gripper between two consecutive poses. The overall set of these matrices, properly expressed in the robot reference frame, can then be used to describe the reference trajectory to be performed by the robot, starting from the initial pose of its end effector (which must be defined by the user). Finally, these data are converted into the specific programming language of the robot controller so that the machine replicates the movement performed by the human operator.
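The per-frame registration step has a direct counterpart in Open3D. The sketch below, with placeholder file names and an assumed 1 cm correspondence threshold, recovers the 4x4 homogeneous transform between two consecutive clouds of the tracked device.

```python
import numpy as np
import open3d as o3d

# Register two consecutive point clouds with ICP to recover the rigid
# displacement of the tracked gripper between frames.
source = o3d.io.read_point_cloud("frame_k.pcd")       # cloud at time k
target = o3d.io.read_point_cloud("frame_k1.pcd")      # cloud at time k+1

result = o3d.pipelines.registration.registration_icp(
    source, target,
    max_correspondence_distance=0.01,                 # metres (assumed)
    init=np.eye(4),
    estimation_method=o3d.pipelines.registration
        .TransformationEstimationPointToPoint())

T = result.transformation    # 4x4 transform between the two poses
print(T)
```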
24

Mehrandezh, Mehran. „Navigation-guidance-based robot trajectory planning for interception of moving objects“. Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape8/PQDD_0005/NQ41242.pdf.

25

Meger, David Paul. „Planning, localization, and mapping for a mobile robot in a camera network“. Thesis, McGill University, 2007. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=101623.

Annotation:
Networks of cameras such as building security systems can be a source of localization information for a mobile robot, assuming a map of camera locations and calibration information for each camera are available. This thesis describes an automated system to acquire such information. A fully automated camera calibration system uses fiducial markers and a mobile robot in order to drastically improve ease of use compared to standard techniques. A 6DOF EKF is used for mapping and is validated experimentally over a 50 m hallway environment. Motion planning strategies are considered both in front of a single camera, to maximize calibration accuracy, and globally between cameras, in order to facilitate accurate measurements. For global motion planning, an adaptive exploration strategy based on heuristic search allows a compromise between distance traveled and final map uncertainty, which gives the system a level of autonomy that could not be obtained with previous techniques.
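The fiducial-marker measurement underlying such a calibration can be sketched with OpenCV's aruco module (the pre-4.7 API is shown; newer versions use ArucoDetector). The camera intrinsics, marker dictionary, and 0.15 m marker size are assumptions for illustration.

```python
import cv2
import numpy as np

# Detect an ArUco tag and estimate its pose relative to a calibrated camera.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
dist = np.zeros(5)                       # assume negligible lens distortion

img = cv2.imread("camera_view.png")      # placeholder frame
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
corners, ids, _ = cv2.aruco.detectMarkers(img, dictionary)

if ids is not None:
    # Marker side length 0.15 m; returns camera->marker rotation/translation,
    # i.e. one relative-pose measurement for the mapping filter.
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(corners, 0.15, K, dist)
    print(ids[0], rvecs[0].ravel(), tvecs[0].ravel())
```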
26

Keepence, B. S. „Navigation of autonomous mobile robots“. Thesis, Cardiff University, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.304921.

27

Pannu, Rabindra. „Path Traversal Around Obstacles by a Robot using Terrain Marks for Guidance“. University of Cincinnati / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1312571550.

28

Teimoori Sangani, Hamid, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW. „Topics in navigation and guidance of wheeled robots“. Publisher: University of New South Wales. Electrical Engineering & Telecommunications, 2009. http://handle.unsw.edu.au/1959.4/43709.

Annotation:
Navigation and guidance of mobile robots towards steady or maneuvering objects (targets) is one of the most important areas of robotics and has attracted a lot of attention in recent decades. However, most existing methods assume that both the line-of-sight angle (bearing) and the relative distance (range) are available to the navigation and guidance algorithms. There is also a relatively large body of research on navigation and guidance with bearings-only measurements. In contrast, only a few results on navigation and guidance towards an unknown target using range-only measurements have been published. Various problems of navigation, guidance, location estimation and target tracking based on range-only measurements often arise in new wireless-network-related applications. Recent advances in these applications allow us to use inexpensive transponders and receivers for range-only measurements which provide information in dynamic and noisy environments without the necessity of line-of-sight. To take advantage of these sensors, algorithms must be developed for range-only navigation. The main part of this thesis is concerned with the problem of real-time navigation and guidance of Wheeled Mobile Robots (WMRs) towards an unknown stationary or moving target using range-only measurements. The range can be estimated using the signal strength and robust extended Kalman filtering. Several similar algorithms for navigation and guidance, termed Equiangular Navigation and Guidance (ENG) laws, are proposed, and mathematically rigorous proofs of convergence and stability of the proposed guidance laws are given. The experimental investigation into the use of range data for WMR navigation is documented, and results and discussion on the performance of the proposed guidance strategies are presented, in which a wheeled robot successfully approaches a stationary target or follows a maneuvering one. In order to safely navigate and reliably operate in populated environments, ENG is then modified into Augmented-ENG (AENG), which enables the robot to approach a stationary target or follow an unpredictable maneuvering object in an unknown environment, while keeping a safe distance from the target and simultaneously preserving a safety margin from the obstacles. Furthermore, we propose and experimentally investigate a new biologically inspired method for local obstacle avoidance and give a mathematically rigorous proof of the idea. For the robot to avoid collision and bypass en-route obstacles with this method, the angle between the instantaneous moving direction of the robot and a reference point on the surface of the obstacle is kept constant. The proposed idea is combined with the ENG law, which leads to reliable and fast long-range navigation. The performance of both the navigation strategy and the local obstacle avoidance technique is confirmed with computer simulations and several experiments with ActivMedia Pioneer 3-DX wheeled robots. The second part of the thesis investigates some challenging problems in the area of wheeled robot navigation. We first address the problem of bearing-only guidance of an autonomous vehicle following a moving target whose minimum turning radius is smaller than the follower's, and propose a simple and constructive navigation law.
In line with the increasing research on decentralized control laws for groups of mobile autonomous robots, we consider the problems of decentralized navigation of networks of WMRs with limited communication and decentralized stabilization of formations of WMRs. New control laws are presented, and simulation results are provided to illustrate the control laws and their applications.
29

Nur, Kamruddin Md. „Synthetic view of retail spaces using camera and RFID sensors on a robot“. Doctoral thesis, Universitat Pompeu Fabra, 2016. http://hdl.handle.net/10803/352710.

Annotation:
In this thesis, two approaches to presenting information on an indoor view are described, using Radio-Frequency Identification (RFID) and camera sensors on a robot. The goal is to capture images of the indoor environment and to create a 3D view so that users can view, navigate, and locate a product within it. RFID is an 'Auto-ID' system that can identify a tagged object from a distance without a direct line of sight. In the first approach, the creation of a Google Street View-like indoor view and the projection of RFID-obtained product information onto it are presented. In the second approach, we explore Simultaneous Localization and Mapping (SLAM), RGB-D mapping, and the projection of RFID-obtained product information onto a 3D point-cloud map.
30

Motta, J. M. S. T. „Optimised robot calibration using a vision-based measurement system with a single camera“. Thesis, Cranfield University, 1999. http://dspace.lib.cranfield.ac.uk/handle/1826/11097.

Annotation:
Robot calibration plays an increasingly important role in robot production as well as in robot operation and integration within computer integrated manufacturing or assembly systems. The production, implementation and operation of robots are issues where robot calibration results can lead to significant accuracy improvement and/or cost-savings. The thesis describes techniques for modelling, optimising and performing robot calibration processes using a 3-D vision-based measurement system for off-line programming. The identification of the nominal kinematic model is optimised using numerical methods to eliminate redundant geometric parameters in the model. Calibration based on the optimised model shows improvement in robot accuracy when compared to the non-optimised model. The basics of the measurement system consist of a single CCD camera mounted on the robot tool flange, image processing software, and algorithms specially developed to measure the end-effector pose relative to a world coordinate system. Geometric lens distortions are included in the analytical technique. The target consists of two identical clusters of calibration points printed on photographic paper, and mounted on the sides of a 90-degree angle plate. Experimental work was performed to assess the measurement system accuracy at different distances from the camera to the target. An average accuracy from 0.2 mm to 0.4 mm was obtained at distances between 650 mm and 950 mm. Tests were also performed on three different robots to assess the improvement in the overall robot accuracy. The robots tested were: PUMA-500, IRB-2400 and IRB-6400. The errors before calibration for the three robots were approximately in a range from 5 mm to 15 mm if measured in a large volume. The best average accuracy obtained after the calibration of the three robots was 0.35 mm, 0.60 mm and 0.45 mm respectively. This study shows that many different variables are involved in the calibration process. The influence of these variables was studied both experimentally and by means of simulation.
31

Goroshin, Rostislav. „Obstacle detection using a monocular camera“. Thesis, Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/24697.

32

Bachnak, Rafic A. „Development of a stereo-based multi-camera system for 3-D vision“. Ohio : Ohio University, 1989. http://www.ohiolink.edu/etd/view.cgi?ohiou1172005477.

33

Nygaard, Andreas. „High-Level Control System for Remote Controlled Surgical Robots : Haptic Guidance of Surgical Robot“. Thesis, Norwegian University of Science and Technology, Department of Engineering Cybernetics, 2008. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-8864.

Annotation:

This report considers work to improve the autonomy of surgical teleoperated systems by introducing haptic guidance. The use of surgical robots in surgical procedures has become more common in recent years, but it is still in its infancy. Some advantages of using robots are scalability of movements, reduced tremor, better visualisation systems and a greater range of motion than with conventional minimally invasive surgery. On the other hand, the lack of tactile feedback and the highly unstructured medical environment restrict the use of teleoperated robots to specific tasks within specific procedures. A way of improving the autonomy of a teleoperated system is to introduce predefined constraints in the surgical environment, creating a trajectory or a forbidden area in order to guide the movements of the surgeon. This is often called haptic guidance. This report introduces the basics of teleoperated systems, with control schemes, models and analytical tools. Algorithms for haptic guidance have been developed, and the entire control and guidance system has been modified and adapted for implementation on a real teleoperated system. Theoretical analysis of the position-position (PP) control scheme reveals some general stability and performance characteristics, later used as a basis for tuning the real system parameters. The teleoperated system consists of a Phantom Omni device from SensAble Technologies, used as the master manipulator, and an AESOP 3000DS from Computer Motion Inc. as the slave manipulator. The control system is implemented on a regular PC connecting the complete system. Tests reveal that the slave manipulator is not suited for this task due to a significant communication time delay, limited velocity and inadequate control possibilities. These limitations make force feedback based on the PP control scheme impossible and limit the performance of the entire teleoperated system. The guidance system is implemented in two variations, one based on slave positions and one based on master positions, motivated by the wish to compare performance under variations of position error/tracking between the two manipulators. Slave-based guidance appears to be stable only for limited values of the gains and thus generates no strict constraints; it can be used to guide the operator away from forbidden areas, but is not suitable for high-precision guiding. Master-based guidance is stable for very high gains, and the guidance has the accuracy to improve the surgeon's precision during procedures. In the case of line guidance, master-based guidance gives a deviation of up to 1.3 mm from the given trajectory. The work has shown the possibility of using haptic guidance to improve accuracy and precision in surgical procedures, but hardware limitations, among others, leave room for several improvements before a fully working teleoperated system can be developed.

34

Bohora, Anil R. „Visual robot guidance in time-varying environment using quadtree data structure and parallel processing“. Ohio : Ohio University, 1989. http://www.ohiolink.edu/etd/view.cgi?ohiou1182282896.

35

Marx, Anja. „Utilization of an under-actuated robot for co-manipulated guidance for breast cancer detection“. Paris 6, 2013. http://www.theses.fr/2013PA066645.

Annotation:
This research is situated in the emerging field of co-manipulation systems, where robot and user perform a task in a collaborative way. It is applied to the medical context of breast cancer diagnosis, where the standard procedure today is an initial mammography (MX) examination followed by a supplementary ultrasound (U/S) scan. The surgeon's task is to localize the target lesion defined in the MX images using 2D U/S. One difficulty of this procedure results from the fact that the breast geometry changes between the two examinations due to the different patient positions. A second difficulty is the mental correlation of two different image types: MX provides a 3D view, whereas U/S only displays a cross-section of the object. The proposed system facilitates this combined examination by keeping the breast geometry fixed and by adding a U/S probe guidance robot to the mammography system. A 6-DOF parallel co-manipulation system is set up in which the robot and the user act simultaneously on the probe. A robot control is developed for active task assistance. Its relevance was evaluated in vitro and showed a significant increase in examination quality when using robot guidance compared with the standard examination. The novel aspect treated in this thesis is under-actuated co-manipulation, where the robot has fewer DOF than the task requires. The initial idea is that, although the robot cannot perform the task autonomously, it may bring partial assistance that still improves the movement. This involves adaptation in terms of robot control and system architecture. It is shown in this work that, in the case of under-actuation, simply reducing a fully-actuated robot control to the remaining robot DOF is not sufficient to guarantee system stability; the system architecture needs to be adapted accordingly. To quantify examination improvements when using under-actuated guidance, different under-actuated robot controls were compared. The main outcome of this thesis is that even an under-actuated robot system increases task precision significantly while decreasing execution time.
This thesis falls within the field of co-manipulation. In co-manipulated systems, the robot and the user accomplish a task collaboratively. Three types of co-manipulation exist. Orthotic co-manipulation is used notably for limb rehabilitation, where the robot and the human are linked at more than one point. In a serial co-manipulation system, the robot sits between the user and the tool, which is controlled by the robot. This thesis is set in the context of parallel co-manipulation, a class in which the robot and the human handle the tool directly and at the same time. This principle of parallel co-manipulation has been applied in a medical context, more precisely to breast cancer diagnosis. Today, the standard procedure for these examinations is based on consecutive imaging of the breast, first by mammography (MX) and then by ultrasound (U/S). The 2D U/S image represents a cross section of the object, whereas the MX images can be superposed as "image layers" to obtain a 3D model of the breast. This gives rise to the main difficulty of the imaging examination: during the ultrasound examination, the radiologist must localize a region of interest previously defined in the MX images while seeing only a 2D cross section of the breast. Note also that the patient must adopt a different position for each examination: standing, with one breast compressed between a compression paddle and the detector of the system, for the mammography, but lying on her back for the ultrasound. This difference in the patient's posture is the second difficulty of the breast examination. The system proposed in this thesis facilitates the combined examination procedure by keeping the breast geometry unchanged. In addition, a robotic arm guiding the ultrasound probe is added to the existing mammography system, yielding a parallel co-manipulation system that allows the ultrasound probe to be manipulated simultaneously by the robot and the user. Several parallel co-manipulation systems have been presented in the medical field to date. They all have in common at least as many actuated degrees of freedom (DOF) as the task to be performed, which implies a high cost for the whole system as well as possible bulk caused by the robotic structure. The interest of this work is to analyze alternative solutions that significantly improve the medical gesture while reducing the bulk of the robot as well as its cost. From a robotics point of view, the innovation consists in providing tool guidance in an under-actuated manner: the robot does not provide assistance covering all the DOF of the task, but partial aid whose goal is to improve the radiologist's gestures. Measures such as the distance to the target and the examination time were chosen as performance indicators. The results of a first series of tests showed that fully actuated guidance improves users' performance compared to no guidance. To qualify the examination improvements obtained with under-actuated guidance, different modes of under-actuation were tested. The results show that even partial guidance significantly increases the quality of the ultrasound examinations: precision was increased while the duration of the intervention decreased.
Reducing the DOF nevertheless requires adapting the robot control to the system architecture. It was observed in this thesis that a simple reduction of the DOF can induce instabilities related to the system architecture, which must therefore be adapted to the under-actuation of each case.
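The recurring idea above, reducing a fully actuated guidance law to the robot's remaining DOF, is easy to state concretely. The following is a minimal sketch of the naive reduction that the thesis argues is insufficient on its own; all names (`Kp`, `actuated`, the pose parameterization) are illustrative assumptions, not the thesis's controller.

```python
import numpy as np

def full_guidance_wrench(x, x_target, Kp):
    """Proportional 6-DOF guidance wrench toward a target pose.

    x, x_target: 6-vectors (3 translations + 3 small-angle rotations).
    Kp: 6x6 positive-definite stiffness matrix.
    """
    return Kp @ (x_target - x)

def underactuated_wrench(x, x_target, Kp, actuated):
    """Naive reduction: zero out the wrench components the robot
    cannot actuate. The thesis shows that this simple projection
    alone does not guarantee stability; the system architecture
    must be adapted to the under-actuation as well."""
    S = np.diag([1.0 if a else 0.0 for a in actuated])  # selection matrix
    return S @ full_guidance_wrench(x, x_target, Kp)

# Example: a 3-DOF robot assisting a 6-DOF probe-positioning task,
# actuating only the translational directions.
x = np.array([0.10, -0.05, 0.20, 0.1, 0.0, 0.2])
x_target = np.zeros(6)
Kp = np.diag([200, 200, 200, 5, 5, 5])  # N/m and N*m/rad, illustrative
print(underactuated_wrench(x, x_target, Kp, [True] * 3 + [False] * 3))
```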
APA, Harvard, Vancouver, ISO und andere Zitierweisen
36

Pretlove, John. „Stereoscopic eye-in-hand active machine vision for real-time adaptive robot arm guidance“. Thesis, University of Surrey, 1993. http://epubs.surrey.ac.uk/843230/.

Der volle Inhalt der Quelle
Annotation:
This thesis describes the design, development and implementation of a robot mounted active stereo vision system for adaptive robot arm guidance. This provides a very flexible and intelligent system that is able to react to uncertainty in a manufacturing environment. It is capable of tracking and determining the 3D position of an object so that the robot can move towards, and intercept, it. Such a system has particular applications in remotely controlled robot arms, typically working in hostile environments. The stereo vision system is designed on mechatronic principles and is modular, light-weight and uses state-of-the-art dc servo-motor technology. Based on visual information, it controls camera vergence and focus independently while making use of the flexibility of the robot for positioning. Calibration and modelling techniques have been developed to determine the geometry of the stereo vision system so that the 3D position of objects can be estimated from the 2D camera information. 3D position estimates are obtained by stereo triangulation. A method for obtaining a quantitative measure of the confidence of the 3D position estimate is presented which is a useful built-in error checking mechanism to reject false or poor 3D matches. A predictive gaze controller has been incorporated into the stereo head control system. This anticipates the relative 3D motion of the object to alleviate the effect of computational delays and ensures a smooth trajectory. Validation experiments have been undertaken with a Puma 562 industrial robot to show the functional integration of the camera system with the robot controller. The vision system is capable of tracking moving objects and the information this provides is used to update command information to the controller. The vision system has been shown to be in full control of the robot during a tracking and intercept duty cycle.
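The 3D estimates described above come from stereo triangulation. As background (not Pretlove's implementation), here is a minimal sketch for the simplest rectified-pair case; the confidence measure mentioned in the abstract could, for instance, be tied to the reprojection residual of such an estimate. All parameter names are assumptions.

```python
import numpy as np

def triangulate_rectified(xl, xr, y, f, baseline):
    """Depth from disparity for a rectified stereo pair.

    xl, xr: horizontal image coordinates of the match (pixels,
            relative to each camera's principal point).
    y:      shared vertical coordinate (pixels).
    f:      focal length in pixels; baseline: camera separation (m).
    Returns the 3D point in the left camera frame.
    """
    disparity = xl - xr
    if disparity <= 0:
        raise ValueError("match must have positive disparity")
    Z = f * baseline / disparity   # depth falls as disparity grows
    X = xl * Z / f
    Y = y * Z / f
    return np.array([X, Y, Z])

# Example: a feature seen at columns 512 and 480 px with a 0.1 m baseline.
print(triangulate_rectified(512.0, 480.0, 20.0, 800.0, 0.1))
```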
APA, Harvard, Vancouver, ISO und andere Zitierweisen
37

Dokur, Omkar. „Embedded System Design of a Real-time Parking Guidance System“. Scholar Commons, 2015. http://scholarcommons.usf.edu/etd/5939.

Der volle Inhalt der Quelle
Annotation:
The primary objective of this work is to design a parking guidance system that reliably detects vehicles entering or exiting a parking garage in a cost-efficient manner. Existing solutions at shopping malls, universities, airports, etc. (inductive loops, RFID-based systems, and video image processors) are expensive due to high installation and maintenance costs, so there is a need for a parking guidance system that is reliable, accurate, and cost-effective. The proposed system is designed to optimize the use of parking spaces and to reduce wait times. Based on a literature review, the ultrasonic sensor was identified as suitable for detecting an entering or exiting vehicle. Initial experiments were performed to test the sensor using an Arduino-based embedded system. Detection logic was then developed to identify a car after analyzing the initial test results, and this logic was extended to trigger a camera to take an image of the vehicle for validation purposes. The system consists of an Arduino, an ultrasonic sensor, and a temperature sensor. It was installed and tested in the Richard Beard Garage at the University of South Florida for five days. The test results of each trial are provided, and the average error over all trials is calculated. The error cases are due to golf carts, cars straddling the entry and exit lanes, and people walking under the sensor. The average error of the system is 5.36% over five days (120 hrs). The estimated cost of one detector per lane is approximately $30.
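The abstract does not spell out the detection logic. A plausible minimal version, under assumptions of our own (a lane-overhead sensor, a fixed distance threshold, and a debounce over consecutive readings), is sketched below; the temperature sensor in the actual system would presumably correct the ultrasonic ranges for the temperature dependence of the speed of sound.

```python
CAR_THRESHOLD_CM = 200.0  # readings below this suggest a vehicle roof (assumed)
MIN_CONSECUTIVE = 3       # debounce: readings needed to confirm a car

def detect_vehicles(readings_cm):
    """Count vehicles from a stream of downward range readings.

    A vehicle is counted once per event: when MIN_CONSECUTIVE
    consecutive readings fall below CAR_THRESHOLD_CM; a reading
    back above the threshold closes the event.
    """
    count, below = 0, 0
    for r in readings_cm:
        if r < CAR_THRESHOLD_CM:
            below += 1
            if below == MIN_CONSECUTIVE:  # fires once per event
                count += 1
        else:
            below = 0
    return count

# Example: clear lane, one passing car, clear lane again.
print(detect_vehicles([298, 301, 150, 148, 152, 149, 299, 300]))  # -> 1
```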
APA, Harvard, Vancouver, ISO und andere Zitierweisen
38

Rizwan, Macknojia. „Design and Calibration of a Network of RGB-D Sensors for Robotic Applications over Large Workspaces“. Thèse, 2013. http://hdl.handle.net/10393/23976.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
39

„Multiple camera pose estimation“. Thesis, 2008. http://library.cuhk.edu.hk/record=b6074556.

Der volle Inhalt der Quelle
Annotation:
In this thesis, we solve the pose estimation problem for robot motion by placing multiple cameras on the robot. In particular, we use four cameras arranged as two back-to-back stereo pairs combined with the Extended Kalman Filter (EKF). The EKF provides a frame-by-frame recursive solution suitable for the real-time application at hand. The reason for using multiple cameras is that the pose estimation problem is more constrained for multiple cameras than for a single camera. Their use is further justified by the drop in price accompanied by the remarkable increase in accuracy. Back-to-back cameras are used since they are likely to have a larger field of view, provide more information, and capture more features; in this way, they are more likely to disambiguate the pose translation and rotation parameters. Stereo information is used in self-initialization and outlier rejection. Simple yet efficient methods are proposed to tackle the problem of long-sequence drift. Our approaches have been compared, under different motion patterns, to other methods in the literature that use a single camera. Both the simulations and the real experiments show that our approaches are the most robust and accurate among them all, as well as fast enough to meet the real-time requirement of robot navigation.
Additionally, we suggest a new formulation for the perspective camera projection matrix, in particular regarding how the 3 x 3 rotation matrix, R, of the camera should be incorporated into the 3 x 4 camera projection matrix, P. We show that the incorporated rotation should be neither the camera rotation R nor its transpose, but a reversed (left-handed) version of it. The fundamental matrix between a pair of stereo cameras is reformulated more accurately accordingly. This is extremely useful when we want to calculate the fundamental matrix accurately from the stereo camera matrices, especially when the feature correspondences are too few for robust methods, such as RANSAC, to operate. We expect this new model to have an impact on various applications.
Furthermore, the process of estimating the rotation and translation parameters between a stereo pair from the essential matrix is investigated. This is an essential step for our multi-camera pose estimation method. We show that there are 16 solutions based on the singular value decomposition (not four or eight as previously thought). We also suggest a checking step to ascertain that the proposed algorithm comes up with accurate results: it ensures the accuracy of the fundamental matrix calculated using the pose obtained. This provides a speedy way to calibrate a stereo rig. Our proposed theories are supported by the real and synthetic data experiments reported in this thesis.
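For orientation, the sketch below shows the textbook SVD factorization of an essential matrix, which yields the classical four (R, t) candidates; the thesis argues the solution set is in fact larger and adds a checking step, so this is background rather than the thesis's algorithm.

```python
import numpy as np

def decompose_essential(E):
    """Classical SVD-based factorization E -> (R, t) candidates.

    Returns the four textbook (R, t) pairs; a cheirality test
    (reconstructed points in front of both cameras) is normally
    used to pick the physically valid one.
    """
    U, _, Vt = np.linalg.svd(E)
    # Enforce proper rotations (determinant +1).
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    t = U[:, 2]  # translation known only up to sign and scale
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]
```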
Mohammad Ehab Mohammad Ragab.
"April 2008."
Adviser: K. H. Wong.
Source: Dissertation Abstracts International, Volume: 70-03, Section: B, page: 1763.
Thesis (Ph.D.)--Chinese University of Hong Kong, 2008.
Includes bibliographical references (p. 138-148) and index.
Abstracts in English and Chinese.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
40

Huang, Yu-Hsun, und 黃又勳. „Applying Virtual Guidance for Robot Teleoperation“. Thesis, 2007. http://ndltd.ncl.edu.tw/handle/02392861738394931968.

Der volle Inhalt der Quelle
Annotation:
Master's thesis
National Chiao Tung University
Department of Electrical and Control Engineering
95
Rapid development of computer networks has greatly enhanced the capability of teleoperation systems; people can now explore an unknown area using a remote-controlled robot. However, telerobotic systems still have drawbacks. For instance, the operator cannot obtain remote information quickly enough because of network time delay, and it is difficult to judge the relative distance between objects from a planar image. Some researchers have proposed using virtual environments and virtual guidance to tackle these problems. Virtual guidance aids the user in manipulating the robot by providing haptic and visual cues. In this thesis, we propose a novel virtual guidance scheme that is more effective for object grasping than those proposed previously. The proposed virtual guidance integrates the planar image and force reflection, allowing the operator to easily control the robot gripper for object grasping despite imprecise positioning. For the implementation, we developed a networked VR-based telerobotic system for the proposed virtual guidance, and experiments demonstrate the effectiveness of the proposed approach.
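As a toy illustration of force-reflected virtual guidance (not the particular scheme proposed in the thesis), the sketch below renders a spring-like force that pulls the commanded gripper position toward a virtual guidance line through the grasp target; the gain, force cap, and function names are assumptions.

```python
import numpy as np

def guidance_force(p, line_point, line_dir, k=50.0, f_max=5.0):
    """Virtual-fixture force pulling point p toward a guidance line.

    p:          commanded gripper position (3-vector, m).
    line_point: any point on the guidance line toward the object.
    line_dir:   unit direction of the line.
    Returns a force (N) perpendicular to the line, capped at f_max,
    which a haptic device would render back to the operator.
    """
    d = p - line_point
    # Remove the along-line component; what remains is the deviation.
    deviation = d - np.dot(d, line_dir) * line_dir
    f = -k * deviation
    n = np.linalg.norm(f)
    return f if n <= f_max else f * (f_max / n)

# Example: gripper 2 cm off a guidance line along the z axis.
print(guidance_force(np.array([0.02, 0.0, 0.3]),
                     np.zeros(3), np.array([0.0, 0.0, 1.0])))
```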
APA, Harvard, Vancouver, ISO und andere Zitierweisen
41

Hong, Hao-cian, und 洪豪謙. „Mobile Robot Localization via RGB-D Camera“. Thesis, 2013. http://ndltd.ncl.edu.tw/handle/63865054447658598133.

Der volle Inhalt der Quelle
Annotation:
Master's thesis
National University of Tainan
Master's Program, Department of Computer Science and Information Engineering
101
How to make a robot sense the environment and localize itself in the workspace is an important issue in the field of autonomous mobile robots. Monte Carlo Localization (MCL) is an effective solution: when the sensors provide reliable information, MCL exhibits good stability and accuracy. In practice, however, MCL places demands on the sensors and on the robot's computing power. In much recent research, two kinds of sensors are applied to MCL: cameras, which provide image information, and distance sensors (e.g., laser range finders and infrared sensors), which provide range information. A camera yields more information, so convergence can be sped up and higher accuracy achieved, but more computation time is required. Using distance information, in contrast, requires less computation, but if the range features are not prominent, convergence speed and localization accuracy suffer. In this thesis, a novel approach that integrates distance and image information in MCL is proposed: the distance information is used to locate the regions of interest in which feature matching is performed. In this way, MCL achieves higher localization accuracy and faster convergence. The RGB-D camera used in this work is the Microsoft Kinect sensor; although its accuracy is not fully satisfactory, it is cheaper than other sensors, so the cost of a practical system can be reduced.
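The fusion idea can be sketched in a few lines, under assumptions of our own about the likelihood models (the abstract does not give them): each particle weight is multiplied by a range likelihood and by an image-feature likelihood that is evaluated only inside regions of interest selected from the range data.

```python
import numpy as np

def update_weights(particles, weights, z_range, z_image,
                   range_model, image_model):
    """One MCL measurement update fusing range and image evidence.

    range_model(p, z_range) and image_model(p, z_image) return the
    likelihoods of the observations for a particle pose p; here the
    image likelihood is assumed to be evaluated only inside regions
    of interest selected from the range data, as in the thesis.
    """
    for i, p in enumerate(particles):
        weights[i] *= range_model(p, z_range) * image_model(p, z_image)
    s = weights.sum()
    return weights / s if s > 0 else np.full_like(weights, 1 / len(weights))

# Toy models: Gaussian range likelihood, constant image likelihood.
rm = lambda p, z: np.exp(-0.5 * ((z - p[0]) / 0.1) ** 2)
im = lambda p, z: 1.0
particles = [np.array([x, 0.0]) for x in (0.9, 1.0, 1.4)]
print(update_weights(particles, np.ones(3), 1.0, None, rm, im))
```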
APA, Harvard, Vancouver, ISO und andere Zitierweisen
42

Tsai, Ming-Jin, und 蔡明瑾. „Camera Calibration of Home Robot Vision System“. Thesis, 2002. http://ndltd.ncl.edu.tw/handle/34205646219620471162.

Der volle Inhalt der Quelle
Annotation:
Master's thesis
National Chiao Tung University
Department of Computer and Information Science
90
Camera calibration is a crucial step in the reconstruction of a 3D model and has been an important research topic in computer vision. Calibration techniques can be roughly classified into two categories: photogrammetric calibration and self-calibration. In this thesis, we study different algorithms for calibrating a camera. The main method is based on images of a planar pattern taken from different viewing angles, as proposed in [30]. Both synthetic data and real images have been tested, and results with satisfactory accuracy have been obtained. The method deals well with noise but is time-consuming. To improve the efficiency of the calibration, a second method, which uses a homography to quickly compute the focal length, is adopted. A proper mapping between the results obtained by these two methods can then be used to derive the correct camera parameters.
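In the spirit of the second method, a single homography can already pin down the focal length under strong simplifying assumptions (square pixels, zero skew, principal point at the shifted image origin); the sketch below illustrates that idea and is not the thesis's exact formulation. With H = [h1 h2 h3] = K[r1 r2 t] and omega = K^{-T} K^{-1} = diag(1/f^2, 1/f^2, 1), the orthogonality constraint h1^T omega h2 = 0 gives f in closed form.

```python
import numpy as np

def focal_from_homography(H):
    """Focal length from one plane-to-image homography H = [h1 h2 h3],
    assuming square pixels, zero skew, and image coordinates already
    shifted so the principal point is at the origin.

    From h1^T omega h2 = 0 with omega = diag(1/f^2, 1/f^2, 1):
        f^2 = -(h11*h12 + h21*h22) / (h31*h32)
    """
    h1, h2 = H[:, 0], H[:, 1]
    num = -(h1[0] * h2[0] + h1[1] * h2[1])
    den = h1[2] * h2[2]
    f2 = num / den
    if not np.isfinite(f2) or f2 <= 0:
        raise ValueError("degenerate view for this single-view formula")
    return float(np.sqrt(f2))

# Synthetic check: build H = K [r1 r2 t] for a known f and recover it.
f = 800.0
K = np.diag([f, f, 1.0])
ay, ax = 0.3, 0.4  # rotations about the y and x axes
Ry = np.array([[np.cos(ay), 0, np.sin(ay)],
               [0, 1, 0],
               [-np.sin(ay), 0, np.cos(ay)]])
Rx = np.array([[1, 0, 0],
               [0, np.cos(ax), -np.sin(ax)],
               [0, np.sin(ax), np.cos(ax)]])
R = Ry @ Rx
t = np.array([0.1, -0.2, 2.0])
H = K @ np.column_stack([R[:, 0], R[:, 1], t])
print(focal_from_homography(H))  # approximately 800.0
```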
APA, Harvard, Vancouver, ISO und andere Zitierweisen
43

Beach, David Michael. „Multi-camera benchmark localization for mobile robot networks“. 2004. http://link.library.utoronto.ca/eir/EIRdetail.cfm?Resources__ID=362308&T=F.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
44

Beach, David Michael. „Multi-camera benchmark localization for mobile robot networks“. 2005. http://link.library.utoronto.ca/eir/EIRdetail.cfm?Resources__ID=369735&T=F.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
45

LI, JIAN-DE, und 李建德. „Guidepost understanding for robot guidance with single view“. Thesis, 1988. http://ndltd.ncl.edu.tw/handle/21943749085368651705.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
46

Rao, Fang-Shiang, und 饒方翔. „Active Guidance for a Passive Robot Walking Helper“. Thesis, 2010. http://ndltd.ncl.edu.tw/handle/14406536318976111496.

Der volle Inhalt der Quelle
Annotation:
Master's thesis
National Chiao Tung University
Institute of Electrical and Control Engineering
99
Recently, population aging has become more and more serious, and how to take good care of the elderly is now an important issue around the world. Along with progress in medical technology, several robot walking helpers have been developed. This motivated us to develop a robot walking helper, named i-go, in our laboratory to assist the elderly in daily life. In this thesis, based on previously proposed navigation techniques, we develop two guidance algorithms for a passive walking helper and realize them on the i-go: (1) a position-controlling algorithm and (2) a position- and orientation-controlling algorithm. The former guides the user to a desired position; the latter guides the user not only to the desired position but also to a desired orientation. The proposed guidance algorithms have been verified experimentally. In the future, we expect the i-go to assist the elderly with guidance in real environments. We also plan to introduce it to Alzheimer's patients, so that they can rely on it for movement despite memory decline and a poor sense of orientation.
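For intuition, both guidance flavors fit the classical polar-coordinate go-to-pose law for differential-drive kinematics, sketched below with assumed gains; dropping the final-heading term gives position-only guidance. A passive helper such as the i-go would realize such commands through controlled braking or steering resistance rather than driven wheels.

```python
import numpy as np

def go_to_pose(x, y, theta, xg, yg, thetag,
               k_rho=0.6, k_alpha=1.5, k_beta=-0.4):
    """Classical kinematic go-to-pose law for a differential drive.

    Returns (v, w): forward and angular velocity commands steering
    the robot from pose (x, y, theta) to goal pose (xg, yg, thetag).
    Locally stable for k_rho > 0, k_alpha - k_rho > 0, k_beta < 0.
    """
    dx, dy = xg - x, yg - y
    rho = np.hypot(dx, dy)                      # distance to goal
    alpha = np.arctan2(dy, dx) - theta          # bearing error
    alpha = np.arctan2(np.sin(alpha), np.cos(alpha))
    beta = thetag - theta - alpha               # final-heading error
    beta = np.arctan2(np.sin(beta), np.cos(beta))
    v = k_rho * rho
    # For position-only guidance, drop the k_beta * beta term.
    w = k_alpha * alpha + k_beta * beta
    return v, w

print(go_to_pose(0.0, 0.0, 0.0, 1.0, 1.0, np.pi / 2))
```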
APA, Harvard, Vancouver, ISO und andere Zitierweisen
47

Chen, Hong-Shiang, und 陳弘翔. „Robot Photographer: Achieving Camera Works witha PTZ Camcorder Mounted on a Mobile Robot“. Thesis, 2012. http://ndltd.ncl.edu.tw/handle/36171141488499843199.

Der volle Inhalt der Quelle
Annotation:
Master's thesis
National Chi Nan University
Department of Computer Science and Information Engineering
100
The robot photographer is an important recent development in the field of robotics. However, conventional robot photographer systems have mainly aimed at taking high-quality still images; very few studies address how to shoot high-quality films. The goal of this thesis is to develop an automatic camera robot that mimics a professional cameraman, selecting proper camera works and composition rules during filming. Furthermore, the camera robot assesses the quality of each shot in order to filter out low-quality video clips. In this study, the camera robot is built on a Pioneer-3DX mobile robot with a notebook computer as the main controller. A Kinect sensor is adopted to gather skeleton information of the people in the camera's field of view for selecting camera works, and a face detection algorithm is introduced to acquire the information necessary for shot composition. The system also includes a pan-tilt unit and a camcorder controlled by the computer through the Local Application Control Bus System (LANC). The implemented system has been tested in a real environment; the results show that the camera robot can successfully select different camera works for automatic filming and delete low-quality video clips. A subjective evaluation was also conducted, and the results show that the videos acquired with the camera robot are visually appealing.
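The abstract mentions composition rules without detailing them; a common one is the rule of thirds, and the toy scoring function below (our own construction, not the thesis's) rates how close a detected face center lies to the nearest of the four one-third intersections of the frame.

```python
import numpy as np

def thirds_score(face_cx, face_cy, frame_w, frame_h):
    """Score in (0, 1]: how close the face center is to the nearest
    rule-of-thirds power point; 1.0 means exactly on a power point."""
    points = [(frame_w * i / 3, frame_h * j / 3)
              for i in (1, 2) for j in (1, 2)]
    d = min(np.hypot(face_cx - px, face_cy - py) for px, py in points)
    diag = np.hypot(frame_w, frame_h)
    return 1.0 - d / diag  # normalize by the frame diagonal

# Example: a face near the upper-left power point of a 640x480 frame.
print(thirds_score(215, 165, 640, 480))
```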
APA, Harvard, Vancouver, ISO und andere Zitierweisen
48

Rajagopalan, Ramesh. „Guidance control for automatic guided vehicles employing binary camera vision“. Thesis, 1991. http://spectrum.library.concordia.ca/3386/1/NN68738.pdf.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
49

Lu, Yen-Te, und 呂彥德. „IR Camera Technology for Navigation System of Wheeled Robot“. Thesis, 2011. http://ndltd.ncl.edu.tw/handle/7jwxha.

Der volle Inhalt der Quelle
Annotation:
Master's thesis
National Taiwan University of Science and Technology
Department of Mechanical Engineering
99
Most research on indoor positioning systems is based on wireless communication technology, whose accuracy is unfortunately still off by tens of centimeters. In this research, we design a positioning system based on infrared technology, using an IR camera together with LED modules. Each LED module includes an IR LED and an RF communication module and serves as an infrared source. The coordinates of the LED modules are recorded by an IR camera mounted on a wheeled robot. The robot is also equipped with an ultrasonic module to avoid obstacles at unknown positions, or to plan paths around obstacles whose positions are known. The user can command and monitor the wheeled robot from a PC-based user interface, with data transferred over Bluetooth. Current results show high accuracy, with positioning errors in the 5-10 cm range, compared to the 75-100 cm range of wireless communication approaches. This technique is proposed as a new positioning system offering a simpler method and higher accuracy for line-of-sight indoor environments.
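The abstract does not give the positioning geometry, but one plausible arrangement (our assumption: LEDs on the ceiling, the IR camera looking straight up) admits a closed-form pose estimate from two detected LEDs, sketched below.

```python
import numpy as np

def robot_pose_from_two_leds(uv1, uv2, p1, p2, f, h):
    """Planar robot pose from two ceiling-mounted IR LEDs seen by an
    upward-looking camera (an assumed geometry, not necessarily the
    thesis's exact setup).

    uv1, uv2: LED pixel offsets from the principal point (pixels).
    p1, p2:   known LED world positions (x, y) in metres.
    f: focal length in pixels; h: ceiling height above the camera (m).
    Returns ((x, y), theta): robot position and heading in the world.
    """
    # Pinhole model: a pixel offset scales to a metric offset on the
    # ceiling plane by h / f, expressed in the camera (robot) frame.
    c1 = np.asarray(uv1, float) * h / f
    c2 = np.asarray(uv2, float) * h / f
    # Heading from the angle of the LED baseline in both frames.
    dw = np.asarray(p2, float) - np.asarray(p1, float)
    dc = c2 - c1
    theta = np.arctan2(dw[1], dw[0]) - np.arctan2(dc[1], dc[0])
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    # LED world position = robot position + R @ (offset in robot frame).
    xy = np.asarray(p1, float) - R @ c1
    return xy, theta

# Example: two LEDs 1 m apart on a ceiling 2 m above the camera.
print(robot_pose_from_two_leds((100, 0), (-100, 0),
                               (0.0, 0.0), (1.0, 0.0), 400.0, 2.0))
```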
APA, Harvard, Vancouver, ISO und andere Zitierweisen
50

Su, Wei-Yu, und 蘇威宇. „Implementation of Wireless Network-Controlled Robot with Cloud Camera“. Thesis, 2016. http://ndltd.ncl.edu.tw/handle/y59d77.

Der volle Inhalt der Quelle
Annotation:
Master's thesis
I-Shou University
Department of Electrical Engineering
104
Wireless network control technology is now part of everyday life. People work hard and cannot always make proper use of their time; even carefully made plans are difficult to keep. Through wireless network control, however, people can handle matters at the same time even from different locations. This thesis uses wireless network control and a remote video camera to provide assistance in daily life. It employs the Internet of Things (IoT), an Arduino YUN development board, an Arduino UNO board, an Arduino expansion board, the Arduino IDE, a Parallax Standard Servo, an ultrasonic sensor (HC-SR04), and a video camera module, so that the machine can be remotely controlled over the Internet while the camera streams real-time images back to the user's phone and a management page, enabling real-time monitoring at home and creating personal home automation. Home automation refers to control methods that improve the quality of life and make home life more comfortable and safe. In the future, automation will be used not only in industry but also in daily life, letting people use their time effectively and achieving "smart home living" and "smart remote monitoring."
APA, Harvard, Vancouver, ISO und andere Zitierweisen