Dissertations / Theses on the topic 'Calibrage camera'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 43 dissertations / theses for your research on the topic 'Calibrage camera.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Dornaika, Fadi. "Contributions à l'intégration vision-robotique : calibrage, localisation et asservissement." Phd thesis, Grenoble INPG, 1995. http://www.theses.fr/1995INPG0097.
The integration of computer vision with robot control is the concern of this thesis. This integration has many advantages for the interaction of a robotic system with its environment. First, we are interested in the study of calibration methods. Two topics are treated: i) hand/eye calibration and ii) object pose. For the first, we developed a nonlinear method that proves very robust with respect to measurement errors; for the second, we developed an iterative para-perspective pose computation method that can be used in real-time applications. Next, we are interested in visual servo control and extend the well-known "image-based servoing" method to a camera that is not attached to the robot being servoed. When performing relative positioning, we show that the computation of the goal features does not depend on an explicit estimate of the camera intrinsic or extrinsic parameters. For a given task, the robot motions are computed in order to reduce a 2D error to zero. The central issue of any image-based servoing method is the estimation of the image Jacobian. We show the advantage of using an exact image Jacobian with respect to the dynamic behaviour of the servoing process. This control method is used for automatic object grasping with a 6-DOF robot manipulator. All the methods presented in this thesis are validated with real and simulated data.
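The "exact image Jacobian" discussed above has a standard closed form for a point feature at depth Z. The following numpy sketch (the gain, point layout and depth are illustrative assumptions, not values from the thesis) shows the classical image-based servoing loop v = -lambda * L⁺ e:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of a normalized image point (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0,      x / Z, x * y,      -(1 + x * x), y],
        [0.0,      -1.0 / Z, y / Z, 1 + y * y,  -x * y,       -x],
    ])

# toy simulation: four coplanar points at constant depth, Euler integration of s_dot = L v
s_star = np.array([[-0.1, -0.1], [0.1, -0.1], [0.1, 0.1], [-0.1, 0.1]])  # goal features
s = s_star + 0.05            # current features, shifted by a made-up offset
Z = 1.0                      # assumed constant depth
for _ in range(50):
    L = np.vstack([interaction_matrix(x, y, Z) for x, y in s])
    e = (s - s_star).ravel()
    v = -0.2 * np.linalg.pinv(L) @ e        # camera velocity screw, gain lambda = 0.2
    s = s + (L @ v).reshape(-1, 2)          # feature motion over one unit time step
```

Because the goal error here is a pure image-plane shift, achievable by camera translation alone, the feature error contracts geometrically at rate (1 - lambda) per step.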
Draréni, Jamil. "Exploitation de contraintes photométriques et géométriques en vision : application au suivi, au calibrage et à la reconstruction." Grenoble, 2010. http://www.theses.fr/2010GRENM061.
The topic of this thesis revolves around three fundamental problems in computer vision; namely, video tracking, camera calibration and shape recovery. The proposed methods are solely based on photometric and geometric constraints found in the images. Video tracking, usually performed on a video sequence, consists in tracking a region of interest, selected manually by an operator. We extend a successful tracking method by adding the ability to estimate the orientation of the tracked object. Furthermore, we consider another fundamental problem in computer vision: calibration. Here we tackle the problem of calibrating linear (a.k.a. pushbroom) cameras and video projectors. For the former we propose a convenient plane-based calibration algorithm, and for the latter, a calibration algorithm that does not require a physical grid and a planar auto-calibration algorithm. Finally, we pointed our third research direction toward shape reconstruction using coplanar shadows. This technique is known to suffer from a bas-relief ambiguity if no extra information on the scene or light source is provided. We propose a simple method to reduce this ambiguity from four parameters to a single one. We achieve this by taking into account the visibility of the light spots in the camera.
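Plane-based calibration methods like the one proposed above for pushbroom cameras and projectors generally build on homographies between a planar target and the image. As a hedged, generic illustration (the direct linear transform shown here is the textbook building block, not the thesis's own algorithm):

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography mapping src -> dst (Nx2 arrays, N >= 4) by DLT."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # the homography is the null vector of the stacked constraint matrix
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

# synthetic check: plane points mapped by a known (made-up) homography
H_true = np.array([[1.2, 0.1, 5.0], [-0.05, 0.9, -3.0], [1e-4, 2e-4, 1.0]])
src = np.array([[0, 0], [100, 0], [100, 80], [0, 80], [50, 40], [20, 60]], dtype=float)
h = (H_true @ np.c_[src, np.ones(len(src))].T).T
dst = h[:, :2] / h[:, 2:3]
H_est = homography_dlt(src, dst)
```

With noiseless correspondences the recovered homography matches the ground truth up to numerical precision; real calibration pipelines add point normalization and nonlinear refinement.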
Fang, Yong. "Road scene perception based on fisheye camera, LIDAR and GPS data combination." Thesis, Belfort-Montbéliard, 2015. http://www.theses.fr/2015BELF0265/document.
Road scene understanding is one of the key research topics of intelligent vehicles. This thesis focuses on detection and tracking of obstacles by multisensor data fusion and analysis. The considered system is composed of a lidar, a fisheye camera and a global positioning system (GPS). Several steps of the perception scheme are studied: extrinsic calibration between fisheye camera and lidar, road detection, and obstacle detection and tracking. Firstly, a new method for extrinsic calibration between fisheye camera and lidar is proposed. For intrinsic modelling of the fisheye camera, three models from the literature are studied and compared. For extrinsic calibration between the two sensors, the normal to the lidar plane is first estimated based on the determination of "known" points. The extrinsic parameters are then computed using a least-squares approach based on geometrical constraints, the lidar plane normal and the lidar measurements. The second part of this thesis is dedicated to road detection exploiting both fisheye camera and lidar data. The road is first coarsely detected using the illumination-invariant image. Then the normalised-histogram-based classification is validated using the lidar data. The road segmentation is finally refined exploiting two successive road detection results and a distance map computed in HSI colour space. The third step focuses on obstacle detection, especially in the case of motion blur. The proposed method combines previously detected road, map, GPS and lidar information. Regions of interest are extracted from the previous road detection. Then road central lines are extracted from the image and matched with a road shape model extracted from a 2D GIS map. Lidar measurements are used to validate the results. The final step is object tracking, still using the fisheye camera and lidar. The proposed method is based on previously detected obstacles and a region-growing approach.
All the methods proposed in this thesis are tested, evaluated and compared to state-of-the-art approaches using real data acquired with the IRTES-SET laboratory experimental platform.
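The lidar-plane-normal step described above is, at its core, a least-squares plane fit. A minimal sketch under synthetic data (the plane equation and point count are made up for illustration):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through Nx3 points: returns (unit normal, centroid)."""
    c = points.mean(axis=0)
    # the right singular vector for the smallest singular value of the centered
    # points is the direction of least variance, i.e. the plane normal
    _, _, Vt = np.linalg.svd(points - c)
    n = Vt[-1]
    return n / np.linalg.norm(n), c

# synthetic lidar-like scan of the (made-up) plane z = 0.2 x - 0.1 y + 3
rng = np.random.default_rng(0)
xy = rng.uniform(-5, 5, size=(200, 2))
z = 0.2 * xy[:, 0] - 0.1 * xy[:, 1] + 3.0
pts = np.c_[xy, z]
n, c = fit_plane(pts)
```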
Szczepanski, Michał. "Online stereo camera calibration on embedded systems." Thesis, Université Clermont Auvergne (2017-2020), 2019. http://www.theses.fr/2019CLFAC095.
This thesis describes an approach for online calibration of stereo cameras on embedded systems. It introduces a new functionality for cyber-physical systems by measuring the quality of service of the calibration. Thus, the manuscript proposes dynamic monitoring and calculation of the internal sensor parameters required for many computer vision tasks. The method improves both security and system efficiency using stereo cameras. It prolongs the life of the devices thanks to this self-repair capability, which increases autonomy. Systems such as mobile robots or smart glasses in particular can directly benefit from this technique. The stereo camera is a sensor capable of providing a wide spectrum of data. Beforehand, this sensor must be extrinsically calibrated, i.e. the relative positions of the two cameras must be determined. However, camera extrinsic calibration can change over time due to interactions with the external environment, for example (shocks, vibrations...). Thus, a recalibration operation allows correcting these effects. Indeed, misunderstood data can lead to errors and malfunction of applications. In order to counter such a scenario, the system must have an internal mechanism, a quality of service, to decide whether the current parameters are correct and/or to calculate new ones if necessary. The approach proposed in this thesis is a self-calibration method based on the use of data coming only from the observed scene, without controlled models. First of all, we consider calibration as a system process running in the background that has to run continuously in real time. This internal calibration is not the main task of the system, but the procedure on which high-level applications rely. For this reason, system constraints severely limit the algorithm in terms of complexity, memory and time. The proposed calibration method requires few resources and uses standard data from computer vision applications, so it is hidden within the application pipeline.
In this manuscript, we present many discussions of topics related to online stereo calibration on embedded systems, such as problems in the extraction of robust points of interest, the calculation of the scale factor, hardware implementation aspects, high-level applications requiring this approach, etc. Finally, this thesis describes and explains a methodology for building a new type of dataset to represent the change of the camera position, in order to validate the approach. The manuscript also explains the different work environments used in the realization of the datasets and the camera calibration procedure. In addition, it presents the first prototype of a smart helmet, on which the proposed self-calibration service is dynamically executed. Finally, this thesis characterizes the real-time calibration on an embedded ARM Cortex A7 processor.
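One simple quality-of-service signal for deciding whether stereo extrinsics are still valid is the epipolar residual of current correspondences. The sketch below is a generic illustration, not the manuscript's method; the threshold and the synthetic stereo geometry are assumptions:

```python
import numpy as np

def skew(t):
    return np.array([[0, -t[2], t[1]], [t[2], 0, -t[0]], [-t[1], t[0], 0]])

def epipolar_residuals(E, x1, x2):
    """|x2^T E x1| for homogeneous normalized correspondences (Nx3 arrays)."""
    return np.abs(np.einsum('ij,jk,ik->i', x2, E, x1))

def needs_recalibration(E, x1, x2, thresh=1e-3):
    """Toy quality-of-service check: flag when the median epipolar residual drifts."""
    return bool(np.median(epipolar_residuals(E, x1, x2)) > thresh)

# synthetic rigid stereo pair: parallel cameras with a 0.3 m baseline along x
rng = np.random.default_rng(1)
R = np.eye(3)
t = np.array([0.3, 0.0, 0.0])
E = skew(t) @ R                      # essential matrix of the calibrated pair
X1 = np.c_[rng.uniform(-1, 1, (50, 2)), rng.uniform(2, 5, 50)]
X2 = (R @ X1.T).T + t
x1 = np.c_[X1[:, 0] / X1[:, 2], X1[:, 1] / X1[:, 2], np.ones(50)]
x2 = np.c_[X2[:, 0] / X2[:, 2], X2[:, 1] / X2[:, 2], np.ones(50)]

x2_drifted = x2.copy()
x2_drifted[:, 1] += 0.01             # simulate a vertical decalibration drift
```

Running the check on the clean pair stays quiet, while the drifted pair trips the threshold, which is exactly the background decision an online recalibration service has to make.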
Rameau, François. "Système de vision hybride à fovéation pour la vidéo-surveillance et la navigation robotique." Thesis, Dijon, 2014. http://www.theses.fr/2014DIJOS031/document.
The primary goal of this thesis is to elaborate a binocular vision system using two different types of camera. The system studied here is composed of one omnidirectional camera coupled with a PTZ camera. This heterogeneous association of cameras having different characteristics is called a hybrid stereo-vision system. The couple composed of these two cameras combines the advantages given by both of them, that is to say a large field of view and an accurate view of a particular region of interest with an adjustable level of detail using the zoom. In this thesis, we present multiple contributions in visual tracking using omnidirectional sensors, PTZ camera self-calibration, hybrid vision system calibration, and structure from motion using a hybrid stereo-vision system.
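Pointing the PTZ camera at a target first seen in the omnidirectional image reduces, in the simplest central-camera approximation that ignores the baseline between the two sensors (a simplifying assumption, not the thesis's calibrated model), to converting a viewing ray into pan/tilt angles:

```python
import math

def pan_tilt_from_ray(x, y, z):
    """Pan/tilt angles (radians) pointing a PTZ camera along the ray (x, y, z),
    with x forward, y left, z up; purely geometric, baseline ignored."""
    pan = math.atan2(y, x)
    tilt = math.atan2(z, math.hypot(x, y))
    return pan, tilt
```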
Scandaroli, Glauco Garcia. "Fusion de données visuo-inertielles pour l'estimation de pose et l'autocalibrage." Phd thesis, Université Nice Sophia Antipolis, 2013. http://tel.archives-ouvertes.fr/tel-00861858.
Pessel, Nathalie. "Auto-calibrage d'une caméra en milieu sous-marin." Montpellier 2, 2003. http://www.theses.fr/2003MON20156.
Andersson, Elin. "Thermal Impact of a Calibrated Stereo Camera Rig." Thesis, Linköpings universitet, Datorseende, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-129636.
Zhou, Han, and 周晗. "Intelligent video surveillance in a calibrated multi-camera system." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2011. http://hub.hku.hk/bib/B45989217.
Jethwa, Manish 1976. "Efficient volumetric reconstruction from multiple calibrated cameras." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/30163.
Includes bibliographical references (p. 137-142).
The automatic reconstruction of large scale 3-D models from real images is of significant value to the field of computer vision in the understanding of images. As a consequence, many techniques have emerged to perform scene reconstruction from calibrated images where the position and orientation of the camera are known. Feature based methods using points and lines have enjoyed much success and have been shown to be robust against noise and changing illumination conditions. The models produced by these techniques however, can often appear crude when untextured due to the sparse set of points from which they are created. Other reconstruction methods, such as volumetric techniques, use image pixel intensities rather than features, reconstructing the scene as small volumetric units called voxels. The direct use of pixel values in the images has restricted current methods to operating on scenes with static illumination conditions. Creating a volumetric representation of the scene may also require millions of interdependent voxels which must be efficiently processed. This has limited most techniques to constrained camera locations and small indoor scenes. The primary goal of this thesis is to perform efficient voxel-based reconstruction of urban environments using a large set of pose-instrumented images. In addition to the 3-D scene reconstruction, the algorithm will also generate estimates of surface reflectance and illumination. Designing an algorithm that operates in a discretized 3-D scene space allows for the recovery of intrinsic scene color and for the integration of visibility constraints, while avoiding the pitfalls of image based feature correspondence.
The algorithm demonstrates how in principle it is possible to reduce computational effort over more naive methods. The algorithm is intended to perform the reconstruction of large scale 3-D models from controlled imagery without human intervention.
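Voxel processing of the kind described above rests on a photo-consistency test: a voxel is kept only if the views that see it agree on its color. A toy sketch of such a test (the per-channel standard-deviation criterion and the threshold are illustrative assumptions, not the thesis's reflectance-aware formulation):

```python
import numpy as np

def photo_consistent(colors, tau=10.0):
    """Keep a voxel when the RGB samples it projects to in its visible views
    agree (standard deviation per channel below tau); otherwise carve it away."""
    colors = np.asarray(colors, dtype=float)   # one RGB sample per visible view
    return bool(np.all(colors.std(axis=0) < tau))
```

A full reconstruction would sweep this test over millions of voxels while updating visibility, which is where the efficiency concerns discussed in the abstract come from.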
Li, Dong. "Thermal image analysis using calibrated video imaging." Diss., Columbia, Mo. : University of Missouri-Columbia, 2006. http://hdl.handle.net/10355/4455.
The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file (viewed on April 23, 2009). Includes bibliographical references.
Gaffney, Stephen Grant. "Markerless deformation capture of hoverfly wings using multiple calibrated cameras." Thesis, Heriot-Watt University, 2013. http://hdl.handle.net/10399/2694.
Dornaika, Fadi. "Contributions à l'intégration vision robotique : calibrage, localisation et asservissement." Phd thesis, Grenoble INPG, 1995. http://tel.archives-ouvertes.fr/tel-00005044.
i) hand/eye (camera/gripper) calibration and
ii) camera/object pose.
For the first, we propose a nonlinear calibration method that proves robust in the presence of measurement errors; for the second, we propose a very fast linear method well suited to real-time applications, since it is based on successive approximations using a para-perspective projection.
Secondly, we turn to visual control of robots. We adapt the "sensor-based control" method to a camera that is independent of the servoed robot. Moreover, in the case of relative positioning, we show that the computation of the reference position does not depend on an explicit estimate of the camera's intrinsic and extrinsic parameters. For a given task, the control problem can then be expressed as the regulation of an error in the image. We show that real-time camera/robot localization improves the dynamic behaviour of the servoing. This control method has been tested in grasping tasks with a six-degree-of-freedom robot manipulator. All the proposed methods are validated with real and simulated measurements.
Ouellet, Jean-Nicolas. "Ré-observabilité des points caractéristiques pour le calibrage et le positionnement d'un capteur multi-caméra." Thesis, Université Laval, 2011. http://www.theses.ulaval.ca/2011/27802/27802.pdf.
Pumrin, Suree. "A framework for dynamically measuring mean vehicle speed using un-calibrated cameras /." Thesis, Connect to this title online; UW restricted, 2002. http://hdl.handle.net/1773/6090.
Tollefsen, Cristina Dawn Spanu. "Comparison of video and CCD cameras in online portal imagers calibrated for dosimetry." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ57587.pdf.
Su, Po-Chang. "REAL-TIME CAPTURE AND RENDERING OF PHYSICAL SCENE WITH AN EFFICIENTLY CALIBRATED RGB-D CAMERA NETWORK." UKnowledge, 2017. https://uknowledge.uky.edu/ece_etds/110.
FERREIRA, MAURICIO AZEVEDO LAGE. "SURVEILLANCE AND MONITORING OF VEHICLES IN REAL TIME AT HIGHWAYS WITH NON-CALIBRATED CAMERAS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2008. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=12971@1.
Computerized vehicle surveillance systems have attracted great interest due to the demand for automating tasks recently carried out by computer vision, for which problems such as shadows, occlusion and lighting variation must be solved. The present work proposes real-time algorithms for low-cost machines, focused on tracking, classifying and determining the speed of each vehicle on a highway.
Champleboux, Guillaume. "Utilisation de fonctions splines pour la mise au point d'un capteur tridimensionnel sans contact : quelques applications médicales." Grenoble 1, 1991. http://www.theses.fr/1991GRE10075.
Cornou, Sébastien. "Modélisation interactive sous contraintes à partir d'images non-calibrées : Application à la reconstruction tridimensionnelle de bâtiments." Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2004. http://tel.archives-ouvertes.fr/tel-00008292.
Calvet, Lilian. "Méthodes de reconstruction tridimensionnelle intégrant des points cycliques : application au suivi d'une caméra." Phd thesis, Institut National Polytechnique de Toulouse - INPT, 2014. http://tel.archives-ouvertes.fr/tel-00981191.
Walker, Timothy A. "Testing camera trap density estimates from the spatial capture model and calibrated capture rate indices against kangaroo rat (Dipodomys spp.) live trapping data." Thesis, San Jose State University, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10169614.
Camera trapping studies often focus on estimating population density, which is critical for managing wild populations. Density estimators typically require unique markers such as stripe patterns to identify individuals, but most animals do not have such markings. The spatial capture model (SC model; Chandler & Royle, 2013) estimates density without individual identification but lacks sufficient field testing. Here, both the SC model and calibrated capture rate indices were compared against ten sessions of live trapping data on kangaroo rats (Dipodomys spp.). These camera and live trapping data were combined in a joint-likelihood model to further compare the two methods. From these comparisons, the factors governing the SC model's success were scrutinized. Additionally, a method for estimating missed captures was developed and tested here. Regressions comparing live trapping density to the SC model density and capture rate were significant only for the capture rate comparison. Missed image rate had a significant relationship with ambient nighttime temperatures but only marginally improved the capture rate index calibration. Results showed the SC model was highly sensitive to deviations from its movement model, producing potentially misleading results. The model may be effective only when movement assumptions hold. Several factors such as camera coverage area, microhabitat, and burrow locations could be incorporated into the SC model density estimation process to improve precision and inference.
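The calibrated capture-rate index compared above amounts to regressing live-trapping density on camera capture rate. A toy least-squares calibration with hypothetical session values (the numbers are invented for illustration, not the study's data):

```python
def calibrate_capture_rate(rates, densities):
    """Least-squares line density ~ a * rate + b, calibrating a photo capture-rate
    index against live-trapping density estimates."""
    n = len(rates)
    mx = sum(rates) / n
    my = sum(densities) / n
    sxx = sum((x - mx) ** 2 for x in rates)
    sxy = sum((x - mx) * (y - my) for x, y in zip(rates, densities))
    a = sxy / sxx
    b = my - a * mx
    return a, b

# hypothetical sessions: captures per 100 trap-nights vs live-trapping density (per ha)
rates = [2.0, 4.0, 6.0, 8.0]
dens = [5.1, 9.0, 13.1, 16.8]
a, b = calibrate_capture_rate(rates, dens)
```

Once fitted, the line converts a new session's capture rate into a density estimate, which is the sense in which the index is "calibrated" against live trapping.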
Laveau, Stéphane. "Géométrie d'un système de N caméras : théorie, estimation et applications." Phd thesis, Ecole Polytechnique X, 1996. http://tel.archives-ouvertes.fr/tel-00267257.
… may even involve different cameras. The only assumption is that the scene is rigid.
Salazar-Garibay, Adan. "Une approche directe pour l'auto-calibration des caméras catadioptriques omnidirectionnelles centrales." Phd thesis, École Nationale Supérieure des Mines de Paris, 2011. http://pastel.archives-ouvertes.fr/pastel-00645697.
Ea, Thomas. "Etude d'un capteur d'images stéréoscopique panoramique couleur : conception, réalisation, validation et intégration." Paris 6, 2001. http://www.theses.fr/2001PA066421.
Scandaroli, Glauco Garcia. "Fusion de données visuo-inertielles pour l'estimation de pose et l'autocalibrage." Phd thesis, Université Nice Sophia Antipolis, 2013. http://tel.archives-ouvertes.fr/tel-00849384.
Hernández, Esteban Carlos. "Modélisation d'objets 3D par fusion silhouettes-stéréo à partir de séquences d'images en rotation non calibrées." Phd thesis, Télécom ParisTech, 2004. http://pastel.archives-ouvertes.fr/pastel-00000862.
Славков, Віктор Миколайович. "Розробка цифрового фотографічного методу теплового контролю металів при високих температурах." Thesis, НТУ "ХПІ", 2015. http://repository.kpi.kharkov.ua/handle/KhPI-Press/17036.
Thesis for granting the Degree of Candidate of Technical Sciences in speciality 05.11.13 - devices and methods of testing and materials composition determination. - National Technical University "Kharkiv Politechnical Institute", Kharkiv, 2015. The dissertation is devoted to the development of a method for thermal testing of metals at temperatures above 600 °C, using a digital camera as the thermal radiation detector. On the basis of the method's theoretical foundations, software algorithms for digital image processing were developed that made it possible to: carry out brightness-temperature calibration of a digital camera in the range of 500...1800 °C and express the calibration curve in the form of mathematical equations; perform thermal testing of metal plates and bulk metallic samples and establish the presence of defects; solve additional thermal-testing tasks for metals, namely determining the specific heat capacity of a metal mass; simulate a uniform temperature field on the surface of metal plates; and determine the distribution of the thermal radiation coefficient over the surface of metal plates.
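The brightness-temperature calibration curve "in the form of mathematical equations" can be illustrated by fitting a low-order polynomial to brightness/temperature pairs; the data below are synthetic stand-ins, not the dissertation's measurements:

```python
import numpy as np

# hypothetical calibration pairs: mean pixel brightness (digital counts) vs
# brightness temperature in °C, generated here from a made-up quadratic law
brightness = np.array([20.0, 100.0, 200.0, 250.0])
temperature = 0.01 * brightness**2 + 3.0 * brightness + 450.0

# the calibration curve as an explicit polynomial equation fitted by least squares
coeffs = np.polyfit(brightness, temperature, deg=2)
calib = np.poly1d(coeffs)
```

With the curve in hand, each pixel's brightness maps directly to a temperature, which is what turns an ordinary digital camera into a thermal detector in this approach.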
Славков, Віктор Миколайович. "Розробка цифрового фотографічного методу теплового контролю металів при високих температурах." Thesis, НТУ "ХПІ", 2015. http://repository.kpi.kharkov.ua/handle/KhPI-Press/17002.
Nogueira, Marcelo Borges. "Posicionamento e movimentação de um robô humanóide utilizando imagens de uma câmera móvel externa." Universidade Federal do Rio Grande do Norte, 2005. http://repositorio.ufrn.br:8080/jspui/handle/123456789/15350.
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
This work proposes a method to localize a simple humanoid robot, without embedded sensors, using images taken from an external camera and image processing techniques. Once the robot is localized relative to the camera, and supposing we know the position of the camera relative to the world, we can compute the position of the robot relative to the world. To let the camera move in the workspace, we use another mobile robot with wheels, which has a precise localization system, and place the camera on it. Once the humanoid is localized in the workspace, we can take the necessary actions to move it. Simultaneously, we move the camera robot so that it keeps a good view of the humanoid. The main contributions of this work are: the idea of using another mobile robot to aid the navigation of a humanoid robot without advanced embedded electronics; the choice of intrinsic and extrinsic calibration methods appropriate to the task, especially in the real-time part; and the collaborative algorithm for simultaneous navigation of the two robots.
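Computing the humanoid's world pose from the camera pose, as described above, is a composition of homogeneous transforms; a minimal sketch with made-up poses (the numeric values are illustrative, not from the thesis):

```python
import numpy as np

def make_T(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def Rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# pose of the camera in the world (from the wheeled robot's localization) and of
# the humanoid in the camera frame (from image processing); values are made up
T_world_cam = make_T(Rz(np.pi / 2), [1.0, 2.0, 0.5])
T_cam_robot = make_T(Rz(0.0), [0.0, 0.0, 2.0])

# pose of the humanoid in the world is the composition of the two transforms
T_world_robot = T_world_cam @ T_cam_robot
```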
Santiago, Gutemberg Santos. "Navegação cooperativa de um robô humanóide e um robô com rodas usando informação visual." Universidade Federal do Rio Grande do Norte, 2008. http://repositorio.ufrn.br:8080/jspui/handle/123456789/15197.
This work presents a cooperative navigation system for a humanoid robot and a wheeled robot using visual information, aiming to navigate the non-instrumented humanoid robot using information obtained from the instrumented wheeled robot. Although the humanoid has no sensors for its navigation, it can be remotely controlled by infra-red signals. Thus, the wheeled robot can control the humanoid by positioning itself behind it and, through visual information, finding and navigating it. The location of the wheeled robot is obtained by merging information from odometers and from landmark detection, using the Extended Kalman Filter. The landmarks are visually detected, and their features are extracted by image processing. Parameters obtained by image processing are used directly in the Extended Kalman Filter. Thus, while the wheeled robot locates and navigates the humanoid, it also simultaneously calculates its own location and maps the environment (SLAM). The navigation is done through heuristic algorithms based on the errors between the actual and desired pose for each robot. The main contribution of this work is the implementation of a cooperative navigation system for two robots based on visual information, which can be extended to other robotic applications, given the ability to control robots without interfering with their hardware or attaching communication devices.
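The odometry-plus-landmark fusion described above follows the standard Extended Kalman Filter predict/update cycle. A compact range-bearing sketch (unicycle motion model, landmark with known position; all numbers and noise levels are illustrative assumptions, not the thesis's parameters):

```python
import numpy as np

def ekf_predict(x, P, v, w, dt, Q):
    """Odometry prediction for a planar pose state [px, py, theta] (unicycle model)."""
    px, py, th = x
    x_pred = np.array([px + v * dt * np.cos(th),
                       py + v * dt * np.sin(th),
                       th + w * dt])
    F = np.array([[1.0, 0.0, -v * dt * np.sin(th)],
                  [0.0, 1.0,  v * dt * np.cos(th)],
                  [0.0, 0.0, 1.0]])
    return x_pred, F @ P @ F.T + Q

def ekf_update(x, P, z, lm, R):
    """Range-bearing correction against a landmark with known position lm = (lx, ly)."""
    dx, dy = lm[0] - x[0], lm[1] - x[1]
    r = np.hypot(dx, dy)
    h = np.array([r, np.arctan2(dy, dx) - x[2]])         # predicted measurement
    H = np.array([[-dx / r,    -dy / r,    0.0],
                  [ dy / r**2, -dx / r**2, -1.0]])       # measurement Jacobian
    y = z - h
    y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi          # wrap bearing innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ y, (np.eye(3) - K @ H) @ P

# toy check: one exact landmark observation pulls a wrong pose estimate toward the truth
x_true = np.array([1.0, 1.0, 0.1])
lm = np.array([5.0, 4.0])
z = np.array([np.hypot(4.0, 3.0), np.arctan2(3.0, 4.0) - 0.1])  # measurement from true pose
x_est = np.array([1.3, 0.8, 0.0])
P = 0.2 * np.eye(3)
x_upd, P_upd = ekf_update(x_est, P, z, lm, R=1e-4 * np.eye(2))
```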
Li, You. "Stereo vision and LIDAR based Dynamic Occupancy Grid mapping : Application to scenes analysis for Intelligent Vehicles." Phd thesis, Université de Technologie de Belfort-Montbeliard, 2013. http://tel.archives-ouvertes.fr/tel-00982325.
Brousseau, Pierre-André. "Calibrage de caméra fisheye et estimation de la profondeur pour la navigation autonome." Thèse, 2019. http://hdl.handle.net/1866/23782.
Full textThis thesis focuses on the problems of calibrating wide-angle cameras and estimating depth from a single camera, stationary or in motion. The work carried out is at the intersection between traditional 3D vision and new deep learning methods in the field of autonomous navigation. They are designed to allow the detection of obstacles by a moving drone equipped with a single camera with a very wide field of view. First, a new calibration method is proposed for fisheye cameras with very large field of view by planar calibration with dense correspondences obtained by structured light that can be modelled by a set of central virtual generic cameras. We demonstrate that this approach allows direct modeling of axial cameras, and validate it on synthetic and real data. Then, a method is proposed to estimate the depth from a single image, using only the strong depth cues, the T-junctions. We demonstrate that deep learning methods are likely to learn from the biases of their data sets and have weaknesses to invariance. Finally, we propose a method to estimate the depth from a camera in free 6 DoF motion. This involves calibrating the fisheye camera on the drone, visual odometry and depth resolution. The proposed methods allow the detection of obstacles for a drone.
Draréni, Jamil. "Exploitation de contraintes photométriques et géométriques en vision : application au suivi, au calibrage et à la reconstruction." Thèse, 2010. http://hdl.handle.net/1866/4868.
The topic of this thesis revolves around three fundamental problems in computer vision; namely, video tracking, camera calibration and shape recovery. The proposed methods are solely based on photometric and geometric constraints found in the images. Video tracking, usually performed on a video sequence, consists in tracking a region of interest, selected manually by an operator. We extend a successful tracking method by adding the ability to estimate the orientation of the tracked object. Furthermore, we consider another fundamental problem in computer vision: calibration. Here we tackle the problem of calibrating linear (a.k.a. pushbroom) cameras and video projectors. For the former we propose a convenient plane-based calibration algorithm, and for the latter, a calibration algorithm that does not require a physical grid and a planar auto-calibration algorithm. Finally, we pointed our third research direction toward shape reconstruction using coplanar shadows. This technique is known to suffer from a bas-relief ambiguity if no extra information on the scene or light source is provided. We propose a simple method to reduce this ambiguity from four parameters to a single one. We achieve this by taking into account the visibility of the light spots in the camera.
This thesis was carried out under a joint supervision (cotutelle) agreement with the Institut National Polytechnique de Grenoble (France). The research was conducted in the 3D vision laboratory (DIRO, UdM) and the PERCEPTION-INRIA laboratory (Grenoble).
Liang, Chun-An, and 梁俊安. "Visual Odometry Algorithms Using Ground Image Sequence from Calibrated Camera and Cooperated Un-Calibrated Cameras." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/26484143101880697575.
Full textNational Taiwan University
Graduate Institute of Electrical Engineering
100
For mobile robots, ego-motion estimation and trajectory reconstruction are two important issues for localizing themselves in their operational environments. Many kinds of sensors and techniques are used for robot localization, such as wheel encoders, IMUs, GPS, laser range finders (LRF), and visual sensors. Compared to other sensors, visual sensors capture information-rich environment data at a usually low price, which makes them a good option for robot localization. This thesis proposes two visual odometry methods using ground image sequences. In the first method, the image sequence is captured by a well-calibrated monocular camera. Because the geometric relationship between the ground and the camera can be reconstructed from the calibration results, the image scenes can be back-projected onto the ground to obtain their real-world positions. The proposed visual odometry method with a calibrated camera consists of three main steps. First, positional correspondences between two consecutive images are established by feature extraction and matching. Then, the extracted features are projected onto the ground plane. Finally, the robot motion is estimated with a Gaussian kernel density voting scheme for outlier rejection. The second method uses two un-calibrated cameras mounted on the lateral sides of the robot. The intrinsic and extrinsic parameters of the cameras are assumed unknown, so the geometric relationship between image coordinates and world coordinates is hard to obtain. To overcome this problem, only a small part of the image frame is used to extract the motion quantities, which reduces the effect of radial distortion and simplifies the problem to an ordinary wheel odometry problem. The proposed method with un-calibrated cameras consists of four steps. First, multiple motion vectors are extracted by block matching. Then, based on the spatial and temporal distribution of the motion vectors, unreliable vectors are determined and deleted. Next, the remaining vectors are normalized to the desired form to fit the motion model. Finally, the motion in each frame is calculated and the trajectory is reconstructed. Both methods are tested in simulations and real-environment experiments.
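The final step of the first method, estimating robot motion from matched ground-plane points, can be sketched as a least-squares 2D rigid alignment. The thesis additionally uses Gaussian kernel density voting to reject outliers, which is omitted here; the SVD-based closed form below is a standard substitute, not the author's exact algorithm:

```python
import numpy as np

def estimate_rigid_2d(P, Q):
    """Least-squares 2D rotation R and translation t with Q ~ R @ p + t.

    P, Q: (N, 2) arrays of matched ground-plane points from two
    consecutive frames (outliers assumed already rejected)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)               # 2x2 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

Given noise-free correspondences generated by a known rotation and translation, the routine recovers them exactly; with noisy data it returns the least-squares optimum.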
Amjadi, Faezeh. "Comparing of radial and tangencial geometric for cylindric panorama." Thèse, 2016. http://hdl.handle.net/1866/18753.
Full textCameras generally have a field of view barely large enough to capture part of their environment. The goal of immersion is to virtually replace a large number of senses, so that the virtual environment is perceived as realistically as possible. A panoramic camera is used to capture an entire 360° view, also known as a panoramic image. Virtual reality makes use of such panoramic images to provide a more immersive experience than images on a 2D screen. This thesis, in the field of computer vision, addresses the design of a multi-camera geometry for generating a cylindrical panoramic image, and aims at an implementation using the cheapest possible cameras. The specific goal of this project is to propose a camera geometry that minimizes the parallax artifacts present in the panoramic image. We present a new approach for capturing cylindrical panoramic images from several cameras arranged uniformly around a circle. Instead of looking outward, which is the traditional "radial" configuration, we propose to make the optical axes tangent to the camera circle, a "tangential" configuration. Besides an analysis and comparison of the radial and tangential geometries, we provide an experimental setup with real panoramas obtained under realistic conditions
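The two camera layouts compared in this abstract can be sketched as follows, assuming an idealized planar ring of cameras (function and parameter names are ours, not the thesis's):

```python
import numpy as np

def camera_ring(n, radius, tangential=True):
    """Positions and optical-axis directions for n cameras on a circle.

    Radial: each optical axis points outward from the centre.
    Tangential: each axis is rotated 90 degrees to lie along the
    circle's tangent, as proposed in the thesis."""
    angles = 2 * np.pi * np.arange(n) / n
    centers = radius * np.stack([np.cos(angles), np.sin(angles)], axis=1)
    radial = np.stack([np.cos(angles), np.sin(angles)], axis=1)
    if not tangential:
        return centers, radial
    tangent = np.stack([-np.sin(angles), np.cos(angles)], axis=1)
    return centers, tangent
```

In the tangential configuration each axis is perpendicular to the camera's position vector, whereas in the radial configuration the two are parallel.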
邱裕翔. "Using Laser Level to Calibrate a Fish-eye Camera." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/48916188557184387141.
Full textNational Chiao Tung University
Institute of Multimedia Engineering
103
Recently, fish-eye cameras have been used in many applications such as around-vehicle monitoring systems and parking guidance. A fish-eye camera supports a very large field of view, up to 180 degrees or more, and is well suited to such applications covering large areas. However, an image captured by a fish-eye camera is impaired by severe radial distortion, and distorted images are unnatural for humans to perceive. Eliminating the radial distortion of a fish-eye image is therefore an important issue. In this thesis, a novel fish-eye camera calibration method based on an off-the-shelf laser level is proposed. It estimates the principal point and measures the data needed to correct radial distortion by exploiting the relationship between the laser lines projected by the laser level and the imaging geometry of the camera. Finally, the geometric relation between a fish-eye image and the perspective projection can be applied to de-warp the fish-eye image.
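De-warping a fish-eye image to a perspective view, as described above, amounts to building a remap table from target pixels back to source pixels. A hedged sketch under the simple equidistant model r = f·θ (the thesis calibrates this relationship with laser lines; here both focal lengths and the principal point are assumed known, and all names are ours):

```python
import numpy as np

def dewarp_map(w, h, f_persp, f_fish, cx, cy):
    """For each pixel of a w x h perspective view, compute the source
    pixel in the fish-eye image under the equidistant model r = f * theta."""
    u, v = np.meshgrid(np.arange(w) - w / 2, np.arange(h) - h / 2)
    r_p = np.hypot(u, v)                 # radius in the perspective view
    theta = np.arctan2(r_p, f_persp)     # viewing angle of each pixel
    r_f = f_fish * theta                 # equidistant fish-eye radius
    scale = np.divide(r_f, r_p, out=np.zeros_like(r_p), where=r_p > 0)
    return cx + scale * u, cy + scale * v
```

The two returned arrays can be fed to a generic image-remapping routine to resample the fish-eye image into the undistorted perspective view.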
Gouiaa, Rafik. "Reconnaissance de postures humaines par fusion de la silhouette et de l'ombre dans l'infrarouge." Thèse, 2017. http://hdl.handle.net/1866/19538.
Full textHuman posture recognition (HPR) from video sequences is one of the major active research areas in computer vision, and is one step in the global process of human activity recognition (HAR) for behaviour analysis. Many HPR systems have been developed for applications including video surveillance, human-machine interaction, and video retrieval. Generally, HPR applications follow one of two main approaches: single camera or multi-camera. Despite the interesting performance achieved by multi-camera systems, their complexity and the huge amount of information to be processed greatly limit their widespread use for HPR. The main goal of this thesis is to simplify the multi-camera system by replacing a camera with a light source. Indeed, a light source can be seen as a virtual camera that generates a cast-shadow image representing the silhouette of the person blocking the light. Our system consists of a single camera and one or more infrared light sources. Despite some technical difficulties in cast-shadow segmentation and in the deformation of cast shadows by walls and furniture, our system offers several advantages: it avoids the synchronization and calibration problems of multiple cameras, and it reduces both the cost of the system and the amount of data to process by replacing a camera with a light source. We introduce two approaches for automatically recognizing human postures. The first directly combines the person's silhouette with the cast-shadow information and uses a 2D silhouette descriptor to extract discriminative features useful for HPR. The second is inspired by the shape-from-silhouette technique: it reconstructs the visual hull of the posture from a set of cast-shadow silhouettes and extracts informative features through a 3D shape descriptor.
Using these approaches, our goal is to demonstrate the utility of combining the person's silhouette with cast-shadow information for recognizing elementary human postures (stand, bend, crouch, fall, ...). The proposed system can be used for video surveillance of uncluttered areas such as a corridor in a seniors' residence (for example, to detect falls) or in a company (for security). Its low cost may allow wider use of video surveillance for the benefit of society.
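The shape-from-silhouette step of the second approach can be illustrated by voxel carving: a voxel belongs to the visual hull if it projects inside every silhouette, and in this thesis cast-shadow silhouettes act as extra views. A minimal sketch with generic projection callbacks, not the author's implementation:

```python
import numpy as np

def carve_visual_hull(grid_pts, views):
    """Keep the voxels whose projection lies inside every silhouette.

    grid_pts: (N, 3) voxel centres.  views: list of (project, mask) pairs,
    where project maps (N, 3) points to (N, 2) pixel coordinates and
    mask is a boolean silhouette image indexed as mask[row, col]."""
    inside = np.ones(len(grid_pts), dtype=bool)
    for project, mask in views:
        uv = np.rint(project(grid_pts)).astype(int)
        h, w = mask.shape
        ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
        vis = np.zeros_like(inside)
        vis[ok] = mask[uv[ok, 1], uv[ok, 0]]  # silhouette test where in bounds
        inside &= vis                          # carve away voxels outside any view
    return inside
```

Each (project, mask) pair can come from a real camera or, as in the thesis, from a light source treated as a virtual camera whose "image" is the cast shadow.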
Mellor, J. P. "Automatically Recovering Geometry and Texture from Large Sets of Calibrated Images." 1999. http://hdl.handle.net/1721.1/6766.
Full textCheng, Chiao-Wen, and 鄭喬文. "Accurate Camera Pose Estimation from the Noisy Calibrated Stereo Images Based on LMedS Initial Results." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/11036736402283126661.
Full textDrareni, Jamil. "Exploitation de contraintes photométriques et géométriques en vision. Application au suivi, au calibrage et à la reconstruction." Phd thesis, 2010. http://tel.archives-ouvertes.fr/tel-00593514.
Full textMorat, Julien. "Vision stéréoscopique par ordinateur pour la détection et le suivi de cibles pour une application automobile." Phd thesis, 2008. http://tel.archives-ouvertes.fr/tel-00343675.
Full textAmong all the sensors capable of perceiving the complexity of an urban environment, stereo vision offers interesting performance, a very broad spectrum of applications (pedestrian detection, vehicle tracking, white-line detection, etc.), and a competitive price. For these reasons, Renault is working to identify and solve the problems involved in integrating such a system into a production vehicle, in particular for a vehicle-tracking application.
The first problem to master concerns the calibration of the stereoscopic
system. For the system to provide a measurement, its parameters must be correctly estimated, including under extreme conditions (high temperatures, shocks, vibrations, ...). We therefore present an evaluation methodology that answers questions about how the system's performance degrades as a function of calibration.
The second problem concerns obstacle detection. The method developed makes original use of the properties of image rectification. The result is a segmentation of the road and the obstacles.
The last problem concerns computing the velocity of obstacles. The vast majority of approaches in the literature approximate an obstacle's velocity from its successive positions. In this computation, the accumulated uncertainties make the estimate extremely noisy. Our approach efficiently combines the strengths of stereo vision and optical flow to directly obtain a robust and accurate 3-D velocity measurement.
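The combination of stereo vision and optical flow for direct 3-D velocity can be sketched as triangulating the same point in two frames, displaced by the optical flow and the disparity change. This is a simplified pinhole, rectified-stereo sketch with parameter names of our choosing, not the exact method of the thesis:

```python
import numpy as np

def stereo_point(u, v, d, f, B, cx, cy):
    """Triangulate a 3D point from pixel (u, v) and disparity d,
    given focal length f and stereo baseline B."""
    Z = f * B / d
    return np.array([(u - cx) * Z / f, (v - cy) * Z / f, Z])

def velocity_3d(u, v, d, du, dv, dd, f, B, cx, cy, dt):
    """3D velocity from optical flow (du, dv) and disparity change dd
    between two frames dt seconds apart."""
    P0 = stereo_point(u, v, d, f, B, cx, cy)
    P1 = stereo_point(u + du, v + dv, d + dd, f, B, cx, cy)
    return (P1 - P0) / dt
```

Because the flow and the disparity change are measured directly in the images, the velocity comes from a single differencing step rather than from a chain of independently triangulated positions, which is the intuition behind the thesis's combined approach.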
Siegmund, Bernward. "Untersuchung der Geschosswirkung in der sehr frühen Phase unter besonderer Berücksichtigung der Hochgeschwindigkeitsmunition." Doctoral thesis, 2006. http://hdl.handle.net/11858/00-1735-0000-0006-B33E-8.
Full text