Academic literature on the topic 'Calibrage camera'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Calibrage camera.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Calibrage camera"
Baek, Seung-Hae, Pathum Rathnayaka, and Soon-Yong Park. "Calibration of a Stereo Radiation Detection Camera Using Planar Homography." Journal of Sensors 2016 (2016): 1–11. http://dx.doi.org/10.1155/2016/8928096.
Zhou, Chenchen, Shaoqi Wang, Yi Cao, Shuang-Hua Yang, and Bin Bai. "Online Pyrometry Calibration for Industrial Combustion Process Monitoring." Processes 10, no. 9 (August 26, 2022): 1694. http://dx.doi.org/10.3390/pr10091694.
Simarro, Gonzalo, Daniel Calvete, and Paola Souto. "UCalib: Cameras Autocalibration on Coastal Video Monitoring Systems." Remote Sensing 13, no. 14 (July 16, 2021): 2795. http://dx.doi.org/10.3390/rs13142795.
Mokatren, Moayad, Tsvi Kuflik, and Ilan Shimshoni. "Calibration-Free Mobile Eye-Tracking Using Corneal Imaging." Sensors 24, no. 4 (February 15, 2024): 1237. http://dx.doi.org/10.3390/s24041237.
Dedei Tagoe, Naa, and S. Mantey. "Determination of the Interior Orientation Parameters of a Non-metric Digital Camera for Terrestrial Photogrammetric Applications." Ghana Mining Journal 19, no. 2 (December 22, 2019): 1–9. http://dx.doi.org/10.4314/gm.v19i2.1.
Liu, Zhe, Zhaozong Meng, Nan Gao, and Zonghua Zhang. "Calibration of the Relative Orientation between Multiple Depth Cameras Based on a Three-Dimensional Target." Sensors 19, no. 13 (July 8, 2019): 3008. http://dx.doi.org/10.3390/s19133008.
Park, Byung-Seo, Woosuk Kim, Jin-Kyum Kim, Eui Seok Hwang, Dong-Wook Kim, and Young-Ho Seo. "3D Static Point Cloud Registration by Estimating Temporal Human Pose at Multiview." Sensors 22, no. 3 (January 31, 2022): 1097. http://dx.doi.org/10.3390/s22031097.
Yin, Lei, Xiangjun Wang, Yubo Ni, Kai Zhou, and Jilong Zhang. "Extrinsic Parameters Calibration Method of Cameras with Non-Overlapping Fields of View in Airborne Remote Sensing." Remote Sensing 10, no. 8 (August 16, 2018): 1298. http://dx.doi.org/10.3390/rs10081298.
Du, Yuchuan, Cong Zhao, Feng Li, and Xuefeng Yang. "An Open Data Platform for Traffic Parameters Measurement via Multirotor Unmanned Aerial Vehicles Video." Journal of Advanced Transportation 2017 (2017): 1–12. http://dx.doi.org/10.1155/2017/8324301.
Teo, T. "VIDEO-BASED POINT CLOUD GENERATION USING MULTIPLE ACTION CAMERAS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-4/W5 (May 11, 2015): 55–60. http://dx.doi.org/10.5194/isprsarchives-xl-4-w5-55-2015.
Dissertations / Theses on the topic "Calibrage camera"
Dornaika, Fadi. "Contributions à l'intégration vision-robotique : calibrage, localisation et asservissement." PhD thesis, Grenoble INPG, 1995. http://www.theses.fr/1995INPG0097.
The integration of computer vision with robot control is the concern of this thesis. This integration has many advantages for the interaction of a robotic system with its environment. First, we are interested in the study of calibration methods. Two topics are treated: i) hand/eye calibration and ii) object pose. For the first, we developed a nonlinear method that proves very robust with respect to measurement errors; for the second, we developed an iterative para-perspective pose computation method that can be used in real-time applications. Next, we are interested in visual servo control and extend the well-known "image-based servoing" method to a camera that is not attached to the robot being servoed. When performing relative positioning, we show that the computation of the goal features does not depend on an explicit estimate of the camera's intrinsic or extrinsic parameters. For a given task, the robot motions are computed in order to reduce a 2D error to zero. The central issue of any image-based servoing method is the estimation of the image Jacobian. We show the advantage of using an exact image Jacobian with respect to the dynamic behaviour of the servoing process. This control method is used in automatic object grasping with a 6-DOF robot manipulator. All the methods presented in this thesis are validated with real and simulated data.
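The image-based servoing scheme this abstract summarizes turns on the image Jacobian (interaction matrix) of the tracked features. As a rough illustration only, here is a numpy sketch of the classical point-feature interaction matrix with assumed depths, not Dornaika's exact-Jacobian formulation; the point set, depths, and gain are invented:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction matrix of a normalized image point (x, y) at depth Z.

    Maps the camera velocity screw (vx, vy, vz, wx, wy, wz) to the
    image-plane velocity (x_dot, y_dot).
    """
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, goal, depths, gain=0.5):
    """Camera velocity screw driving current features toward goal features."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(goal)).ravel()
    # v = -gain * pinv(L) @ e : aims for exponential decrease of the 2D error
    return -gain * np.linalg.pinv(L) @ error

# Four points forming a square, goal slightly shifted in the image.
feats = [(0.1, 0.1), (-0.1, 0.1), (-0.1, -0.1), (0.1, -0.1)]
goal = [(x - 0.05, y) for x, y in feats]
v = ibvs_velocity(feats, goal, depths=[1.0] * 4)
```

One Euler step along the commanded velocity shrinks the 2D feature error, which is the exponential-decrease behaviour such control laws are designed for.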
Draréni, Jamil. "Exploitation de contraintes photométriques et géométriques en vision : application au suivi, au calibrage et à la reconstruction." Grenoble, 2010. http://www.theses.fr/2010GRENM061.
The topic of this thesis revolves around three fundamental problems in computer vision: video tracking, camera calibration and shape recovery. The proposed methods are based solely on photometric and geometric constraints found in the images. Video tracking, usually performed on a video sequence, consists in tracking a region of interest selected manually by an operator. We extend a successful tracking method by adding the ability to estimate the orientation of the tracked object. Furthermore, we consider another fundamental problem in computer vision: calibration. Here we tackle the problem of calibrating linear (a.k.a. pushbroom) cameras and video projectors. For the former we propose a convenient plane-based calibration algorithm, and for the latter a calibration algorithm that does not require a physical grid, as well as a planar auto-calibration algorithm. Finally, we direct our third research effort toward shape reconstruction using coplanar shadows. This technique is known to suffer from a bas-relief ambiguity if no extra information about the scene or the light source is provided. We propose a simple method to reduce this ambiguity from four parameters to a single one, by taking into account the visibility of the light spots in the camera.
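Plane-based calibration methods of the kind this abstract mentions start from homographies between the calibration plane and the image. A minimal DLT homography estimate on synthetic correspondences might look like the following; this is an illustrative sketch, not the thesis's pushbroom or projector algorithm, and `H_true` and the point set are made up:

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate H (up to scale) such that dst ~ H @ src, from >= 4 point pairs."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on H's entries.
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The solution is the null vector: the last right-singular vector of A.
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Synthetic check: project known plane points through a known homography.
H_true = np.array([[1.2, 0.1, 5.0],
                   [-0.05, 0.9, -3.0],
                   [1e-4, 2e-4, 1.0]])
src = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0), (0.5, 2.0)]
dst = []
for x, y in src:
    p = H_true @ np.array([x, y, 1.0])
    dst.append((p[0] / p[2], p[1] / p[2]))
H_est = homography_dlt(src, dst)
```

With exact correspondences, the SVD recovers the homography up to the normalization fixed by `H[2, 2] = 1`.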
Fang, Yong. "Road scene perception based on fisheye camera, LIDAR and GPS data combination." Thesis, Belfort-Montbéliard, 2015. http://www.theses.fr/2015BELF0265/document.
Road scene understanding is one of the key research topics in intelligent vehicles. This thesis focuses on the detection and tracking of obstacles by multi-sensor data fusion and analysis. The considered system is composed of a lidar, a fisheye camera and a global positioning system (GPS). Several steps of the perception scheme are studied: extrinsic calibration between the fisheye camera and the lidar, road detection, and obstacle detection and tracking. First, a new method for extrinsic calibration between a fisheye camera and a lidar is proposed. For intrinsic modeling of the fisheye camera, three models from the literature are studied and compared. For extrinsic calibration between the two sensors, the normal to the lidar plane is first estimated based on the determination of "known" points. The extrinsic parameters are then computed using a least-squares approach based on geometrical constraints, the lidar plane normal and the lidar measurements. The second part of this thesis is dedicated to road detection exploiting both fisheye camera and lidar data. The road is first coarsely detected using the illumination-invariant image. Then the normalised-histogram-based classification is validated using the lidar data. The road segmentation is finally refined exploiting two successive road detection results and a distance map computed in HSI color space. The third step focuses on obstacle detection, especially in the case of motion blur. The proposed method combines previously detected road, map, GPS and lidar information. Regions of interest are extracted from the previous road detection. Then road central lines are extracted from the image and matched with a road shape model extracted from a 2D GIS map. Lidar measurements are used to validate the results. The final step is object tracking, still using the fisheye camera and lidar. The proposed method is based on previously detected obstacles and a region-growing approach.
All the methods proposed in this thesis are tested, evaluated and compared to state-of-the-art approaches using real data acquired with the IRTES-SET laboratory experimental platform.
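The extrinsic-calibration step described above begins by estimating the normal of the calibration plane from lidar returns. A hedged sketch of that sub-step alone, an SVD plane fit on synthetic noisy points, where the plane equation and noise level are assumptions rather than the thesis's data:

```python
import numpy as np

def fit_plane(points):
    """Fit a plane to N x 3 points; return (unit normal, centroid)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # The normal is the direction of least variance:
    # the last right-singular vector of the centered points.
    _, _, vt = np.linalg.svd(pts - centroid)
    return vt[-1], centroid

# Synthetic lidar-like returns on the plane z = 0.5x + 0.25y + 2, plus noise.
rng = np.random.default_rng(0)
xy = rng.uniform(-2.0, 2.0, size=(200, 2))
z = 0.5 * xy[:, 0] + 0.25 * xy[:, 1] + 2.0 + rng.normal(0.0, 0.005, 200)
normal, centroid = fit_plane(np.column_stack([xy, z]))
normal = normal * np.sign(normal[2])      # resolve the sign ambiguity
expected = np.array([-0.5, -0.25, 1.0])
expected = expected / np.linalg.norm(expected)
```

The recovered unit normal can then feed a least-squares solve for the camera-lidar extrinsics, as the abstract describes.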
Szczepanski, Michał. "Online stereo camera calibration on embedded systems." Thesis, Université Clermont Auvergne (2017-2020), 2019. http://www.theses.fr/2019CLFAC095.
This thesis describes an approach for the online calibration of stereo cameras on embedded systems. It introduces a new functionality for cyber-physical systems by measuring the quality of service of the calibration. The manuscript thus proposes dynamic monitoring and calculation of the internal sensor parameters required for many computer vision tasks. The method improves both the security and the efficiency of systems using stereo cameras. It prolongs the life of the devices thanks to this self-repair capability, which increases autonomy. Systems such as mobile robots or smart glasses in particular can directly benefit from this technique. The stereo camera is a sensor capable of providing a wide spectrum of data. Beforehand, this sensor must be extrinsically calibrated, i.e. the relative positions of the two cameras must be determined. However, camera extrinsic calibration can change over time due to interactions with the external environment, for example shocks and vibrations. A recalibration operation thus allows these effects to be corrected. Indeed, misunderstood data can lead to errors and malfunction of applications. To counter such a scenario, the system must have an internal mechanism, a quality of service, to decide whether the current parameters are correct and/or to calculate new ones if necessary. The approach proposed in this thesis is a self-calibration method based solely on data coming from the observed scene, without controlled models. First of all, we consider calibration as a system process running in the background that must run continuously in real time. This internal calibration is not the main task of the system, but the procedure on which high-level applications rely. For this reason, system constraints severely limit the algorithm in terms of complexity, memory and time. The proposed calibration method requires few resources and uses standard data from computer vision applications, so it is hidden within the application pipeline.
In this manuscript, we discuss many topics related to online stereo calibration on embedded systems, such as the extraction of robust points of interest, the calculation of the scale factor, hardware implementation aspects, and the high-level applications requiring this approach. This thesis also describes and explains a methodology for building a new type of dataset representing changes in camera position, used to validate the approach, and presents the different work environments used in the realization of the datasets and the camera calibration procedure. In addition, it presents the first prototype of a smart helmet on which the proposed self-calibration service is dynamically executed. Finally, this thesis characterizes real-time calibration on an embedded ARM Cortex-A7 processor.
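The quality-of-service idea this abstract describes, deciding from scene data alone whether the stored stereo parameters are still valid, can be caricatured by monitoring epipolar residuals of tracked correspondences. A synthetic numpy sketch with identity intrinsics, a made-up baseline, drift, and threshold, not the thesis's actual decision rule:

```python
import numpy as np

def skew(t):
    """Cross-product matrix: skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def epipolar_residuals(F, pts1, pts2):
    """Algebraic residuals |x2^T F x1| for normalized correspondences."""
    res = []
    for p1, p2 in zip(pts1, pts2):
        x1 = np.array([p1[0], p1[1], 1.0])
        x2 = np.array([p2[0], p2[1], 1.0])
        res.append(abs(x2 @ F @ x1))
    return np.array(res)

def needs_recalibration(F, pts1, pts2, thresh=1e-3):
    """Flag the rig when the median epipolar residual exceeds the threshold."""
    return bool(np.median(epipolar_residuals(F, pts1, pts2)) > thresh)

# Synthetic rig with identity intrinsics: E = [t]x R doubles as the
# fundamental matrix for normalized coordinates.
R = np.eye(3)
t = np.array([0.2, 0.0, 0.0])                 # horizontal stereo baseline
F = skew(t) @ R
rng = np.random.default_rng(1)
P = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 8.0], size=(50, 3))
pts1 = P[:, :2] / P[:, 2:]                    # projections in camera 1
P2 = P @ R.T + t                              # points in the camera-2 frame
pts2 = P2[:, :2] / P2[:, 2:]
ok = needs_recalibration(F, pts1, pts2)       # calibrated rig
# Simulated mechanical drift of the baseline: residuals jump.
P2_bad = P @ R.T + t + np.array([0.0, 0.05, 0.0])
pts2_bad = P2_bad[:, :2] / P2_bad[:, 2:]
drifted = needs_recalibration(F, pts1, pts2_bad)
```

A background task running this check on routinely tracked features is one cheap way to realize the "parameters still correct?" decision without a calibration target.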
Rameau, François. "Système de vision hybride à fovéation pour la vidéo-surveillance et la navigation robotique." Thesis, Dijon, 2014. http://www.theses.fr/2014DIJOS031/document.
The primary goal of this thesis is to elaborate a binocular vision system using two different types of camera. The system studied here is composed of an omnidirectional camera coupled with a PTZ camera. This heterogeneous association of cameras with different characteristics is called a hybrid stereo-vision system. The pair combines the advantages of both cameras, that is to say, a large field of view and an accurate view of a particular region of interest with an adjustable level of detail using the zoom. In this thesis, we present multiple contributions in visual tracking using omnidirectional sensors, PTZ camera self-calibration, hybrid vision system calibration, and structure from motion using a hybrid stereo-vision system.
Scandaroli, Glauco Garcia. "Fusion de données visuo-inertielles pour l'estimation de pose et l'autocalibrage." PhD thesis, Université Nice Sophia Antipolis, 2013. http://tel.archives-ouvertes.fr/tel-00861858.
Pessel, Nathalie. "Auto-calibrage d'une caméra en milieu sous-marin." Montpellier 2, 2003. http://www.theses.fr/2003MON20156.
Andersson, Elin. "Thermal Impact of a Calibrated Stereo Camera Rig." Thesis, Linköpings universitet, Datorseende, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-129636.
Zhou, Han, and 周晗. "Intelligent video surveillance in a calibrated multi-camera system." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2011. http://hub.hku.hk/bib/B45989217.
Jethwa, Manish, 1976. "Efficient volumetric reconstruction from multiple calibrated cameras." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/30163.
Includes bibliographical references (p. 137-142).
The automatic reconstruction of large-scale 3-D models from real images is of significant value to the field of computer vision in the understanding of images. As a consequence, many techniques have emerged to perform scene reconstruction from calibrated images where the position and orientation of the camera are known. Feature-based methods using points and lines have enjoyed much success and have been shown to be robust against noise and changing illumination conditions. The models produced by these techniques, however, can often appear crude when untextured due to the sparse set of points from which they are created. Other reconstruction methods, such as volumetric techniques, use image pixel intensities rather than features, reconstructing the scene as small volumetric units called voxels. The direct use of pixel values in the images has restricted current methods to operating on scenes with static illumination conditions. Creating a volumetric representation of the scene may also require millions of interdependent voxels which must be efficiently processed. This has limited most techniques to constrained camera locations and small indoor scenes. The primary goal of this thesis is to perform efficient voxel-based reconstruction of urban environments using a large set of pose-instrumented images. In addition to the 3-D scene reconstruction, the algorithm will also generate estimates of surface reflectance and illumination. Designing an algorithm that operates in a discretized 3-D scene space allows for the recovery of intrinsic scene color and for the integration of visibility constraints, while avoiding the pitfalls of image-based feature correspondence.
The algorithm demonstrates how, in principle, it is possible to reduce computational effort relative to more naive methods. The algorithm is intended to perform the reconstruction of large-scale 3-D models from controlled imagery without human intervention.
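The voxel-based pipeline outlined in the abstract above can be illustrated at toy scale with silhouette carving: project candidate voxel centers into each calibrated view and keep only those consistent with every silhouette. The camera poses and the circular silhouette below are invented for the sketch; nothing here reproduces the thesis's reflectance-aware algorithm:

```python
import numpy as np

def project(K, R, t, X):
    """Pinhole projection of world point X into pixel coordinates."""
    x = K @ (R @ X + t)
    return x[:2] / x[2]

def carve(voxels, views, inside):
    """Keep voxels whose projection lies inside the silhouette in every view."""
    return np.array([X for X in voxels
                     if all(inside(project(K, R, t, X)) for K, R, t in views)])

K = np.array([[100.0, 0.0, 64.0],
              [0.0, 100.0, 64.0],
              [0.0, 0.0, 1.0]])
R_side = np.array([[0.0, 0.0, -1.0],      # 90-degree rotation about the y-axis
                   [0.0, 1.0, 0.0],
                   [1.0, 0.0, 0.0]])
views = [(K, np.eye(3), np.array([0.0, 0.0, 4.0])),   # frontal camera
         (K, R_side, np.array([0.0, 0.0, 4.0]))]      # side camera
grid = np.linspace(-1.0, 1.0, 9)
voxels = np.array([[x, y, z] for x in grid for y in grid for z in grid])
# Circular silhouette of radius 20 px around the principal point.
inside = lambda uv: float(np.linalg.norm(uv - 64.0)) < 20.0
kept = carve(voxels, views, inside)      # central voxels survive, corners do not
```

Real systems replace the hand-drawn silhouette test with photo-consistency and visibility reasoning over millions of voxels, which is where the efficiency concerns the abstract raises come in.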
Books on the topic "Calibrage camera"
Roth, Zvi S., ed. Camera-aided robot calibration. Boca Raton: CRC Press, 1996.
Dailey, Daniel J. The automated use of un-calibrated CCTV cameras as quantitative speed sensors, Phase 3. Olympia, WA: Washington State Dept. of Transportation, 2006.
Book chapters on the topic "Calibrage camera"
Benligiray, Burak, Halil Ibrahim Cakir, Cihan Topal, and Cuneyt Akinlar. "Counting Turkish Coins with a Calibrated Camera." In Image Analysis and Processing — ICIAP 2015, 216–26. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-23234-8_21.
Komorowski, Jacek, and Przemysław Rokita. "Camera Pose Estimation from Sequence of Calibrated Images." In Advances in Intelligent Systems and Computing, 101–9. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-32384-3_13.
Zhang, Tianlong, Xiaorong Shen, Quanfa Xiu, and Luodi Zhao. "Person Re-identification Based on Minimum Feature Using Calibrated Camera." In Lecture Notes in Electrical Engineering, 533–40. Singapore: Springer Singapore, 2017. http://dx.doi.org/10.1007/978-981-10-6499-9_51.
Svoboda, Tomáš, and Peter Sturm. "A badly calibrated camera in ego-motion estimation — propagation of uncertainty." In Computer Analysis of Images and Patterns, 183–90. Berlin, Heidelberg: Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/3-540-63460-6_116.
Choi, Wongun, and Silvio Savarese. "Multiple Target Tracking in World Coordinate with Single, Minimally Calibrated Camera." In Computer Vision – ECCV 2010, 553–67. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15561-1_40.
Jiang, Xiaoyan, Erik Rodner, and Joachim Denzler. "Multi-person Tracking-by-Detection Based on Calibrated Multi-camera Systems." In Computer Vision and Graphics, 743–51. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33564-8_89.
Zhang, Xiaoqiang, Yanning Zhang, Tao Yang, and Zhengxi Song. "Calibrate a Moving Camera on a Linear Translating Stage Using Virtual Plane + Parallax." In Intelligent Science and Intelligent Data Engineering, 48–55. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-36669-7_7.
Cheung, Kin-Wang, Jiansheng Chen, and Yiu-Sang Moon. "Synthesizing Frontal Faces on Calibrated Stereo Cameras for Face Recognition." In Advances in Biometrics, 347–56. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-01793-3_36.
Nishizaki, Takashi, Yoshinari Kameda, and Yuichi Ohta. "Visual Surveillance Using Less ROIs of Multiple Non-calibrated Cameras." In Computer Vision – ACCV 2006, 317–27. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11612032_33.
Kang, Sing Bing, and Richard Weiss. "Can We Calibrate a Camera Using an Image of a Flat, Textureless Lambertian Surface?" In Lecture Notes in Computer Science, 640–53. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/3-540-45053-x_41.
Full textConference papers on the topic "Calibrage camera"
Lai, H. W., C. K. Ma, S. L. Yang, and C. M. Tsui. "Calibration of the Period and Time Difference of Synchronized Flashing Lights." In NCSL International Workshop & Symposium. NCSL International, 2020. http://dx.doi.org/10.51843/wsproceedings.2020.10.
Vader, Anup M., Abhinav Chadda, Wenjuan Zhu, Ming C. Leu, Xiaoqing F. Liu, and Jonathan B. Vance. "An Integrated Calibration Technique for Multi-Camera Vision Systems." In ASME 2010 World Conference on Innovative Virtual Reality. ASMEDC, 2010. http://dx.doi.org/10.1115/winvr2010-3732.
Muglikar, Manasi, Mathias Gehrig, Daniel Gehrig, and Davide Scaramuzza. "How to Calibrate Your Event Camera." In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2021. http://dx.doi.org/10.1109/cvprw53098.2021.00155.
Slembrouck, N., J. Audenaert, and F. Leloup. "SNAPSHOT AND LINESCAN HYPERSPECTRAL IMAGING FOR VISUAL APPEARANCE MEASUREMENTS." In CIE 2023 Conference. International Commission on Illumination, CIE, 2023. http://dx.doi.org/10.25039/x50.2023.po139.
Trocoli, Tiago, and Luciano Oliveira. "Using the Scene to Calibrate the Camera." In 2016 29th SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI). IEEE, 2016. http://dx.doi.org/10.1109/sibgrapi.2016.069.
Mentens, A., G. H. Scheir, Y. Ghysel, F. Descamps, J. Lataire, and V. A. Jacobs. "OPTIMIZING CAMERA PLACEMENT FOR A LUMINANCE-BASED SHADING CONTROL SYSTEM." In CIE 2021 Conference. International Commission on Illumination, CIE, 2021. http://dx.doi.org/10.25039/x48.2021.po39.
Shih, Ping-Chang, Guillermo Gallego, Anthony Yezzi, and Francesco Fedele. "Improving 3-D Variational Stereo Reconstruction of Oceanic Sea States by Camera Calibration Refinement." In ASME 2013 32nd International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/omae2013-10550.
Xie, Yupeng, Sarah Fachada, Daniele Bonatto, Mehrdad Teratani, and Gauthier Lafruit. "View Synthesis: LiDAR Camera versus Depth Estimation." In WSCG'2021 - 29. International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision'2021. Západočeská univerzita v Plzni, 2021. http://dx.doi.org/10.24132/csrn.2021.3101.35.
Tsui, Darin, Capalina Melentyev, Ananya Rajan, Rohan Kumar, and Frank E. Talke. "An Optical Tracking Approach to Computer-Assisted Surgical Navigation via Stereoscopic Vision." In ASME 2023 32nd Conference on Information Storage and Processing Systems. American Society of Mechanical Engineers, 2023. http://dx.doi.org/10.1115/isps2023-111020.
Neal, Joseph, Tara Leipold, and Karla Petroskey. "The Effect of Image Stabilization on PhotoModeler Project Accuracy." In WCX SAE World Congress Experience. Warrendale, PA: SAE International, 2024. http://dx.doi.org/10.4271/2024-01-2474.
Full textReports on the topic "Calibrage camera"
Latz, Michael I. DURIP: A Low-Light Photon-Calibrated High-Resolution Digital Camera Imaging System. Fort Belvoir, VA: Defense Technical Information Center, September 2006. http://dx.doi.org/10.21236/ada612146.