Ready bibliography on the topic "Calibrage camera"
Create accurate references in APA, MLA, Chicago, Harvard, and many other citation styles
See the lists of current articles, books, theses, abstracts, and other scholarly sources on the topic "Calibrage camera".
Next to every work in the bibliography there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, whenever such details are provided in the work's metadata.
Journal articles on the topic "Calibrage camera"
Baek, Seung-Hae, Pathum Rathnayaka, and Soon-Yong Park. "Calibration of a Stereo Radiation Detection Camera Using Planar Homography". Journal of Sensors 2016 (2016): 1–11. http://dx.doi.org/10.1155/2016/8928096.
Zhou, Chenchen, Shaoqi Wang, Yi Cao, Shuang-Hua Yang, and Bin Bai. "Online Pyrometry Calibration for Industrial Combustion Process Monitoring". Processes 10, no. 9 (August 26, 2022): 1694. http://dx.doi.org/10.3390/pr10091694.
Simarro, Gonzalo, Daniel Calvete, and Paola Souto. "UCalib: Cameras Autocalibration on Coastal Video Monitoring Systems". Remote Sensing 13, no. 14 (July 16, 2021): 2795. http://dx.doi.org/10.3390/rs13142795.
Mokatren, Moayad, Tsvi Kuflik, and Ilan Shimshoni. "Calibration-Free Mobile Eye-Tracking Using Corneal Imaging". Sensors 24, no. 4 (February 15, 2024): 1237. http://dx.doi.org/10.3390/s24041237.
Dedei Tagoe, Naa, and S. Mantey. "Determination of the Interior Orientation Parameters of a Non-metric Digital Camera for Terrestrial Photogrammetric Applications". Ghana Mining Journal 19, no. 2 (December 22, 2019): 1–9. http://dx.doi.org/10.4314/gm.v19i2.1.
Liu, Zhe, Zhaozong Meng, Nan Gao, and Zonghua Zhang. "Calibration of the Relative Orientation between Multiple Depth Cameras Based on a Three-Dimensional Target". Sensors 19, no. 13 (July 8, 2019): 3008. http://dx.doi.org/10.3390/s19133008.
Park, Byung-Seo, Woosuk Kim, Jin-Kyum Kim, Eui Seok Hwang, Dong-Wook Kim, and Young-Ho Seo. "3D Static Point Cloud Registration by Estimating Temporal Human Pose at Multiview". Sensors 22, no. 3 (January 31, 2022): 1097. http://dx.doi.org/10.3390/s22031097.
Yin, Lei, Xiangjun Wang, Yubo Ni, Kai Zhou, and Jilong Zhang. "Extrinsic Parameters Calibration Method of Cameras with Non-Overlapping Fields of View in Airborne Remote Sensing". Remote Sensing 10, no. 8 (August 16, 2018): 1298. http://dx.doi.org/10.3390/rs10081298.
Du, Yuchuan, Cong Zhao, Feng Li, and Xuefeng Yang. "An Open Data Platform for Traffic Parameters Measurement via Multirotor Unmanned Aerial Vehicles Video". Journal of Advanced Transportation 2017 (2017): 1–12. http://dx.doi.org/10.1155/2017/8324301.
Teo, T. "VIDEO-BASED POINT CLOUD GENERATION USING MULTIPLE ACTION CAMERAS". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-4/W5 (May 11, 2015): 55–60. http://dx.doi.org/10.5194/isprsarchives-xl-4-w5-55-2015.
Doctoral dissertations on the topic "Calibrage camera"
Dornaika, Fadi. "Contributions à l'intégration vision-robotique : calibrage, localisation et asservissement". PhD thesis, Grenoble INPG, 1995. http://www.theses.fr/1995INPG0097.
The integration of computer vision with robot control is the concern of this thesis. This integration has many advantages for the interaction of a robotic system with its environment. First, we are interested in the study of calibration methods. Two topics are treated: (i) hand/eye calibration and (ii) object pose. For the first, we developed a nonlinear method that seems to be very robust with respect to measurement errors; for the second, we developed an iterative para-perspective pose computation method that can be used in real-time applications. Next we are interested in visual servo control and extend the well-known "image-based servoing" method to a camera that is not attached to the robot being servoed. When performing relative positioning, we show that the computation of the goal features does not depend on an explicit estimate of the camera intrinsic or extrinsic parameters. For a given task, the robot motions are computed in order to reduce a 2D error to zero. The central issue of any image-based servoing method is the estimation of the image Jacobian. We show the advantage of using an exact image Jacobian with respect to the dynamic behaviour of the servoing process. This control method is used in automatic object grasping with a 6-DOF robot manipulator. All the methods presented in this thesis are validated with real and simulated data.
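The "image-based servoing" scheme that the abstract extends drives a 2D feature error to zero through the pseudo-inverse of the image Jacobian. A minimal numpy sketch of that control law — the interaction matrix, gain, and error values below are illustrative toy numbers, not taken from the thesis:

```python
import numpy as np

def ibvs_step(J, e, gain=0.5):
    """One image-based servoing step: camera velocity v = -gain * J^+ e."""
    return -gain * np.linalg.pinv(J) @ e

# Toy 2x3 interaction matrix (2D feature error, 3-DOF camera velocity).
J = np.array([[1.0, 0.0,  0.3],
              [0.0, 1.0, -0.2]])
e = np.array([10.0, -4.0])   # pixel error to drive to zero

for _ in range(20):
    v = ibvs_step(J, e)
    e = e + J @ v            # first-order simulation of the feature motion

print(np.linalg.norm(e))     # error shrinks geometrically toward zero
```

With a full-row-rank Jacobian, each step halves the error (for gain 0.5), which is the exponential decrease the control law is designed to achieve.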
Draréni, Jamil. "Exploitation de contraintes photométriques et géométriques en vision : application au suivi, au calibrage et à la reconstruction". Grenoble, 2010. http://www.theses.fr/2010GRENM061.
The topic of this thesis revolves around three fundamental problems in computer vision: video tracking, camera calibration, and shape recovery. The proposed methods are solely based on photometric and geometric constraints found in the images. Video tracking, usually performed on a video sequence, consists in tracking a region of interest selected manually by an operator. We extend a successful tracking method by adding the ability to estimate the orientation of the tracked object. Furthermore, we consider another fundamental problem in computer vision: calibration. Here we tackle the problem of calibrating linear (a.k.a. pushbroom) cameras and video projectors. For the former we propose a convenient plane-based calibration algorithm, and for the latter, a calibration algorithm that does not require a physical grid as well as a planar auto-calibration algorithm. Finally, we pointed our third research direction toward shape reconstruction using coplanar shadows. This technique is known to suffer from a bas-relief ambiguity if no extra information on the scene or light source is provided. We propose a simple method to reduce this ambiguity from four parameters to a single one. We achieve this by taking into account the visibility of the light spots in the camera.
Fang, Yong. "Road scene perception based on fisheye camera, LIDAR and GPS data combination". Thesis, Belfort-Montbéliard, 2015. http://www.theses.fr/2015BELF0265/document.
Road scene understanding is one of the key research topics of intelligent vehicles. This thesis focuses on detection and tracking of obstacles by multisensor data fusion and analysis. The considered system is composed of a lidar, a fisheye camera, and a global positioning system (GPS). Several steps of the perception scheme are studied: extrinsic calibration between fisheye camera and lidar, road detection, and obstacles detection and tracking. Firstly, a new method for extrinsic calibration between fisheye camera and lidar is proposed. For intrinsic modeling of the fisheye camera, three models of the literature are studied and compared. For extrinsic calibration between the two sensors, the normal to the lidar plane is firstly estimated based on the determination of "known" points. The extrinsic parameters are then computed using a least-squares approach based on geometrical constraints, the lidar plane normal, and the lidar measurements. The second part of this thesis is dedicated to road detection exploiting both fisheye camera and lidar data. The road is firstly coarsely detected using the illumination-invariant image. Then the normalised-histogram-based classification is validated using the lidar data. The road segmentation is finally refined exploiting two successive road detection results and a distance map computed in HSI color space. The third step focuses on obstacles detection, especially in case of motion blur. The proposed method combines previously detected road, map, GPS, and lidar information. Regions of interest are extracted from the previous road detection. Then road central lines are extracted from the image and matched with a road shape model extracted from a 2D SIG map. Lidar measurements are used to validate the results. The final step is object tracking, still using fisheye camera and lidar. The proposed method is based on previously detected obstacles and a region-growth approach.
All the methods proposed in this thesis are tested, evaluated, and compared to state-of-the-art approaches using real data acquired with the IRTES-SET laboratory experimental platform.
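The lidar-plane-normal estimation step mentioned in the abstract is, at its core, a least-squares plane fit. A hedged numpy sketch of that idea — the synthetic plane and noise level are assumptions for illustration, not the thesis's data:

```python
import numpy as np

def plane_normal(points):
    """Least-squares normal of a plane through N >= 3 points: the right
    singular vector of the centered points with the smallest singular value."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered)
    return vt[-1]

# Noisy samples from the synthetic plane z = 0.1*x + 0.2*y + 1.
rng = np.random.default_rng(0)
xy = rng.uniform(-5, 5, size=(200, 2))
z = 0.1 * xy[:, 0] + 0.2 * xy[:, 1] + 1 + rng.normal(0, 0.01, 200)
pts = np.column_stack([xy, z])

n = plane_normal(pts)
n = n / np.linalg.norm(n)
print(n)  # close to +/- (0.1, 0.2, -1) normalized
```

The recovered normal is defined only up to sign, which is why a comparison against ground truth uses the absolute dot product.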
Szczepanski, Michał. "Online stereo camera calibration on embedded systems". Thesis, Université Clermont Auvergne (2017-2020), 2019. http://www.theses.fr/2019CLFAC095.
This thesis describes an approach for online calibration of stereo cameras on embedded systems. It introduces a new functionality for cyber-physical systems by measuring the quality of service of the calibration. Thus, the manuscript proposes dynamic monitoring and calculation of the internal sensor parameters required for many computer vision tasks. The method improves both security and system efficiency using stereo cameras. It prolongs the life of the devices thanks to this self-repair capability, which increases autonomy. Systems such as mobile robots or smart glasses in particular can directly benefit from this technique. The stereo camera is a sensor capable of providing a wide spectrum of data. Beforehand, this sensor must be extrinsically calibrated, i.e. the relative positions of the two cameras must be determined. However, camera extrinsic calibration can change over time due to interactions with the external environment (shocks, vibrations...). Thus, a recalibration operation allows correcting these effects. Indeed, misunderstood data can lead to errors and malfunction of applications. In order to counter such a scenario, the system must have an internal mechanism, a quality of service, to decide whether the current parameters are correct and/or calculate new ones, if necessary. The approach proposed in this thesis is a self-calibration method based on the use of data coming only from the observed scene, without controlled models. First of all, we consider calibration as a system process running in the background and having to run continuously in real time. This internal calibration is not the main task of the system, but the procedure on which high-level applications rely. For this reason, system constraints severely limit the algorithm in terms of complexity, memory, and time. The proposed calibration method requires few resources and uses standard data from computer vision applications, so it is hidden within the application pipeline.
In this manuscript, we present many discussions of topics related to online stereo calibration on embedded systems, such as problems in the extraction of robust points of interest, the calculation of the scale factor, hardware implementation aspects, high-level applications requiring this approach, etc. Furthermore, this thesis describes and explains a methodology for building a new type of dataset to represent the change of the camera position in order to validate the approach. The manuscript also explains the different work environments used in the realization of the datasets and the camera calibration procedure. In addition, it presents the first prototype of a smart helmet, on which the proposed self-calibration service is dynamically executed. Finally, this thesis characterizes the real-time calibration on an embedded ARM Cortex-A7 processor.
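The quality-of-service idea — deciding from scene data alone whether the current stereo parameters are still valid — can be illustrated with an epipolar-residual check. This is a sketch under simplifying assumptions (an ideal rectified pair and an arbitrary tolerance), not the thesis's actual metric:

```python
import numpy as np

def epipolar_residuals(F, pts_left, pts_right):
    """Absolute epipolar residual |x_r^T F x_l| per correspondence."""
    xl = np.column_stack([pts_left, np.ones(len(pts_left))])
    xr = np.column_stack([pts_right, np.ones(len(pts_right))])
    return np.abs(np.einsum('ij,jk,ik->i', xr, F, xl))

def needs_recalibration(F, pts_l, pts_r, tol=1e-3):
    """Quality-of-service decision: recalibrate when the median residual
    exceeds the tolerance."""
    return bool(np.median(epipolar_residuals(F, pts_l, pts_r)) > tol)

# Fundamental matrix of an ideal rectified pair: the residual is v_l - v_r.
F = np.array([[0., 0.,  0.],
              [0., 0., -1.],
              [0., 1.,  0.]])
rng = np.random.default_rng(1)
left = rng.uniform(0, 1, size=(50, 2))            # normalized coordinates
right_ok = left.copy()                            # still perfectly rectified
right_shifted = left + np.array([0., 0.01])       # vertical drift after a shock

print(needs_recalibration(F, left, right_ok))      # False
print(needs_recalibration(F, left, right_shifted)) # True
```

Because the check uses only point correspondences already produced by the application pipeline, it adds little overhead — consistent with the low-resource constraint described in the abstract.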
Rameau, François. "Système de vision hybride à fovéation pour la vidéo-surveillance et la navigation robotique". Thesis, Dijon, 2014. http://www.theses.fr/2014DIJOS031/document.
The primary goal of this thesis is to elaborate a binocular vision system using two different types of camera. The system studied here is composed of an omnidirectional camera coupled with a PTZ camera. This heterogeneous association of cameras with different characteristics is called a hybrid stereo-vision system. The pair formed by these two cameras combines the advantages of both, that is to say, a large field of view and an accurate view of a particular region of interest with an adjustable level of detail using the zoom. In this thesis, we present multiple contributions in visual tracking using omnidirectional sensors, PTZ camera self-calibration, hybrid vision system calibration, and structure from motion using a hybrid stereo-vision system.
Scandaroli, Glauco Garcia. "Fusion de données visuo-inertielles pour l'estimation de pose et l'autocalibrage". PhD thesis, Université Nice Sophia Antipolis, 2013. http://tel.archives-ouvertes.fr/tel-00861858.
Pessel, Nathalie. "Auto-calibrage d'une caméra en milieu sous-marin". Montpellier 2, 2003. http://www.theses.fr/2003MON20156.
Andersson, Elin. "Thermal Impact of a Calibrated Stereo Camera Rig". Thesis, Linköpings universitet, Datorseende, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-129636.
Zhou, Han, and 周晗. "Intelligent video surveillance in a calibrated multi-camera system". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2011. http://hub.hku.hk/bib/B45989217.
Jethwa, Manish, 1976-. "Efficient volumetric reconstruction from multiple calibrated cameras". Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/30163.
Pełny tekst źródłaIncludes bibliographical references (p. 137-142).
The automatic reconstruction of large-scale 3-D models from real images is of significant value to the field of computer vision in the understanding of images. As a consequence, many techniques have emerged to perform scene reconstruction from calibrated images where the position and orientation of the camera are known. Feature-based methods using points and lines have enjoyed much success and have been shown to be robust against noise and changing illumination conditions. The models produced by these techniques, however, can often appear crude when untextured due to the sparse set of points from which they are created. Other reconstruction methods, such as volumetric techniques, use image pixel intensities rather than features, reconstructing the scene as small volumetric units called voxels. The direct use of pixel values in the images has restricted current methods to operating on scenes with static illumination conditions. Creating a volumetric representation of the scene may also require millions of interdependent voxels which must be efficiently processed. This has limited most techniques to constrained camera locations and small indoor scenes. The primary goal of this thesis is to perform efficient voxel-based reconstruction of urban environments using a large set of pose-instrumented images. In addition to the 3-D scene reconstruction, the algorithm will also generate estimates of surface reflectance and illumination. Designing an algorithm that operates in a discretized 3-D scene space allows for the recovery of intrinsic scene color and for the integration of visibility constraints, while avoiding the pitfalls of image-based feature correspondence.
The algorithm demonstrates how in principle it is possible to reduce computational effort over more naive methods. The algorithm is intended to perform the reconstruction of large-scale 3-D models from controlled imagery without human intervention.
by Manish Jethwa.
Ph.D.
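The elementary operation behind the voxel-based reconstruction described above — projecting voxel centres through a known camera matrix and applying visibility constraints — can be sketched in a few lines. The intrinsics, voxel grid, and image size below are illustrative assumptions, not values from the thesis:

```python
import numpy as np

# A calibrated pinhole camera at the origin: P = K [I | 0].
K = np.array([[500.,   0., 320.],
              [  0., 500., 240.],
              [  0.,   0.,   1.]])
P = K @ np.hstack([np.eye(3), np.zeros((3, 1))])

# Voxel centres on a small grid in front of the camera (homogeneous coords).
xs = np.linspace(-1, 1, 5)
grid = np.array([[x, y, z, 1.0] for x in xs for y in xs for z in (2.0, 3.0)])

# Project and dehomogenize to pixel coordinates.
pix = grid @ P.T
pix = pix[:, :2] / pix[:, 2:3]

# Visibility constraint: keep voxels projecting inside a 640x480 image.
inside = (pix[:, 0] >= 0) & (pix[:, 0] < 640) & \
         (pix[:, 1] >= 0) & (pix[:, 1] < 480)
print(inside.sum(), "of", len(grid), "voxel centres project inside the image")
```

A full space-carving pipeline repeats this projection for every calibrated view and retains only voxels whose projections are photo-consistent across views.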
Books on the topic "Calibrage camera"
Roth, Zvi S., ed. Camera-Aided Robot Calibration. Boca Raton: CRC Press, 1996.
Dailey, Daniel J. The automated use of un-calibrated CCTV cameras as quantitative speed sensors, Phase 3. Olympia, WA: Washington State Dept. of Transportation, 2006.
Book chapters on the topic "Calibrage camera"
Benligiray, Burak, Halil Ibrahim Cakir, Cihan Topal, and Cuneyt Akinlar. "Counting Turkish Coins with a Calibrated Camera". In Image Analysis and Processing — ICIAP 2015, 216–26. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-23234-8_21.
Komorowski, Jacek, and Przemysław Rokita. "Camera Pose Estimation from Sequence of Calibrated Images". In Advances in Intelligent Systems and Computing, 101–9. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-32384-3_13.
Zhang, Tianlong, Xiaorong Shen, Quanfa Xiu, and Luodi Zhao. "Person Re-identification Based on Minimum Feature Using Calibrated Camera". In Lecture Notes in Electrical Engineering, 533–40. Singapore: Springer Singapore, 2017. http://dx.doi.org/10.1007/978-981-10-6499-9_51.
Svoboda, Tomáš, and Peter Sturm. "A badly calibrated camera in ego-motion estimation — propagation of uncertainty". In Computer Analysis of Images and Patterns, 183–90. Berlin, Heidelberg: Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/3-540-63460-6_116.
Choi, Wongun, and Silvio Savarese. "Multiple Target Tracking in World Coordinate with Single, Minimally Calibrated Camera". In Computer Vision – ECCV 2010, 553–67. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15561-1_40.
Jiang, Xiaoyan, Erik Rodner, and Joachim Denzler. "Multi-person Tracking-by-Detection Based on Calibrated Multi-camera Systems". In Computer Vision and Graphics, 743–51. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33564-8_89.
Zhang, Xiaoqiang, Yanning Zhang, Tao Yang, and Zhengxi Song. "Calibrate a Moving Camera on a Linear Translating Stage Using Virtual Plane + Parallax". In Intelligent Science and Intelligent Data Engineering, 48–55. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-36669-7_7.
Cheung, Kin-Wang, Jiansheng Chen, and Yiu-Sang Moon. "Synthesizing Frontal Faces on Calibrated Stereo Cameras for Face Recognition". In Advances in Biometrics, 347–56. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-01793-3_36.
Nishizaki, Takashi, Yoshinari Kameda, and Yuichi Ohta. "Visual Surveillance Using Less ROIs of Multiple Non-calibrated Cameras". In Computer Vision – ACCV 2006, 317–27. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11612032_33.
Kang, Sing Bing, and Richard Weiss. "Can We Calibrate a Camera Using an Image of a Flat, Textureless Lambertian Surface?" In Lecture Notes in Computer Science, 640–53. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/3-540-45053-x_41.
Pełny tekst źródłaStreszczenia konferencji na temat "Calibrage camera"
Lai, H. W., C. K. Ma, S. L. Yang, and C. M. Tsui. "Calibration of the Period and Time Difference of Synchronized Flashing Lights". In NCSL International Workshop & Symposium. NCSL International, 2020. http://dx.doi.org/10.51843/wsproceedings.2020.10.
Vader, Anup M., Abhinav Chadda, Wenjuan Zhu, Ming C. Leu, Xiaoqing F. Liu, and Jonathan B. Vance. "An Integrated Calibration Technique for Multi-Camera Vision Systems". In ASME 2010 World Conference on Innovative Virtual Reality. ASMEDC, 2010. http://dx.doi.org/10.1115/winvr2010-3732.
Muglikar, Manasi, Mathias Gehrig, Daniel Gehrig, and Davide Scaramuzza. "How to Calibrate Your Event Camera". In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2021. http://dx.doi.org/10.1109/cvprw53098.2021.00155.
Slembrouck, N., J. Audenaert, and F. Leloup. "SNAPSHOT AND LINESCAN HYPERSPECTRAL IMAGING FOR VISUAL APPEARANCE MEASUREMENTS". In CIE 2023 Conference. International Commission on Illumination, CIE, 2023. http://dx.doi.org/10.25039/x50.2023.po139.
Trocoli, Tiago, and Luciano Oliveira. "Using the Scene to Calibrate the Camera". In 2016 29th SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI). IEEE, 2016. http://dx.doi.org/10.1109/sibgrapi.2016.069.
Mentens, A., G. H. Scheir, Y. Ghysel, F. Descamps, J. Lataire, and V. A. Jacobs. "OPTIMIZING CAMERA PLACEMENT FOR A LUMINANCE-BASED SHADING CONTROL SYSTEM". In CIE 2021 Conference. International Commission on Illumination, CIE, 2021. http://dx.doi.org/10.25039/x48.2021.po39.
Shih, Ping-Chang, Guillermo Gallego, Anthony Yezzi, and Francesco Fedele. "Improving 3-D Variational Stereo Reconstruction of Oceanic Sea States by Camera Calibration Refinement". In ASME 2013 32nd International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/omae2013-10550.
Xie, Yupeng, Sarah Fachada, Daniele Bonatto, Mehrdad Teratani, and Gauthier Lafruit. "View Synthesis: LiDAR Camera versus Depth Estimation". In WSCG'2021 - 29. International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision'2021. Západočeská univerzita v Plzni, 2021. http://dx.doi.org/10.24132/csrn.2021.3101.35.
Tsui, Darin, Capalina Melentyev, Ananya Rajan, Rohan Kumar, and Frank E. Talke. "An Optical Tracking Approach to Computer-Assisted Surgical Navigation via Stereoscopic Vision". In ASME 2023 32nd Conference on Information Storage and Processing Systems. American Society of Mechanical Engineers, 2023. http://dx.doi.org/10.1115/isps2023-111020.
Neal, Joseph, Tara Leipold, and Karla Petroskey. "The Effect of Image Stabilization on PhotoModeler Project Accuracy". In WCX SAE World Congress Experience. 400 Commonwealth Drive, Warrendale, PA, United States: SAE International, 2024. http://dx.doi.org/10.4271/2024-01-2474.
Pełny tekst źródłaRaporty organizacyjne na temat "Calibrage camera"
Latz, Michael I. DURIP: A Low-Light Photon-Calibrated High-Resolution Digital Camera Imaging System. Fort Belvoir, VA: Defense Technical Information Center, wrzesień 2006. http://dx.doi.org/10.21236/ada612146.