Academic literature on the topic 'Calibrage camera' (camera calibration)

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Calibrage camera.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Calibrage camera"

1

Baek, Seung-Hae, Pathum Rathnayaka, and Soon-Yong Park. "Calibration of a Stereo Radiation Detection Camera Using Planar Homography." Journal of Sensors 2016 (2016): 1–11. http://dx.doi.org/10.1155/2016/8928096.

Abstract:
This paper proposes a calibration technique for a stereo gamma detection camera. Calibration of the internal and external parameters of a stereo vision camera is a well-known research problem in the computer vision community. However, stereo calibration has rarely been investigated in radiation measurement research. Since no visual information can be obtained from a stereo radiation camera, a general stereo calibration algorithm cannot be used directly. In this paper, we develop a hybrid-type stereo system which is equipped with both radiation and vision cameras. To calibrate the stereo radiation cameras, stereo images of a calibration pattern captured by the vision cameras are transformed into the view of the radiation cameras. The homography transformation is calibrated based on the geometric relationship between visual and radiation camera coordinates. The accuracy of the stereo parameters of the radiation camera is analyzed by distance measurements to both visible light and gamma sources. The experimental results show that the measurement error is about 3%.
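The planar-homography step at the core of this approach can be sketched in a few lines. A minimal illustration with OpenCV, assuming hypothetical pixel correspondences between the vision and radiation views of the calibration pattern (the paper's full geometric chain between the camera coordinate frames is more involved):

```python
import cv2
import numpy as np

# Hypothetical pixel coordinates of the same calibration-pattern corners
# as seen by the vision camera and by the radiation camera.
vision_pts = np.array([[120, 80], [480, 95], [470, 390], [130, 380]], dtype=np.float32)
radiation_pts = np.array([[30, 22], [118, 25], [115, 98], [33, 96]], dtype=np.float32)

# Estimate the 3x3 planar homography mapping vision-camera pixels
# to radiation-camera pixels (4 correspondences are the minimum).
H, _ = cv2.findHomography(vision_pts, radiation_pts, method=0)

# Transfer any point detected in the vision image into the radiation view.
pt = np.array([[[300.0, 200.0]]], dtype=np.float32)
pt_rad = cv2.perspectiveTransform(pt, H)
print(pt_rad)
```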
2

Zhou, Chenchen, Shaoqi Wang, Yi Cao, Shuang-Hua Yang, and Bin Bai. "Online Pyrometry Calibration for Industrial Combustion Process Monitoring." Processes 10, no. 9 (August 26, 2022): 1694. http://dx.doi.org/10.3390/pr10091694.

Abstract:
Temperature and its distribution are crucial for combustion monitoring and control. For this application, digital-camera-based pyrometers have become increasingly popular due to their relatively low cost. However, these pyrometers are not universally applicable because they depend on calibration. Unlike pyrometers, monitoring cameras exist in almost every combustion chamber. Although these cameras theoretically have the ability to measure temperature, for lack of calibration they are only used for visualization to support the decisions of operators. Almost all existing calibration methods are laboratory-based and hence cannot calibrate a camera in operation. This paper proposes an online calibration method. It uses a pre-calibrated camera as a standard pyrometer to calibrate another camera in operation. The calibration is based on a photo taken by the pyrometry camera at a position close to the camera in operation. Since the calibration does not affect the use of the camera in operation, it sharply reduces the cost and difficulty of pyrometer calibration. In this paper, a procedure for online calibration is proposed, and advice on how to set camera parameters is given. In addition, the ratio pyrometry is revised for a wider temperature range. The online calibration algorithm is developed based on two assumptions for images of the same flame taken in proximity: (1) there are common regions between the two images taken at close positions; (2) there are some constant characteristic temperatures between the two-dimensional temperature distributions of the same flame taken from different angles. These two assumptions are verified in a real industrial plant. Based on these two verified features, a temperature distribution matching algorithm is developed to calibrate pyrometers online. This method was tested and validated in an industrial-scale municipal solid waste incinerator. The accuracy of the calibrated pyrometer is sufficient for flame monitoring and control.
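As background to the ratio pyrometry mentioned above, a two-color temperature estimate can be derived from the Wien approximation of Planck's law. This is a generic sketch under a graybody assumption (equal emissivity at both wavelengths), not the revised formulation of the paper:

```python
import numpy as np

C2 = 1.4388e-2  # second radiation constant, m*K

def ratio_pyrometry_temp(i1, i2, lam1, lam2):
    """Two-color (ratio) pyrometry under the Wien approximation.

    i1, i2     : measured intensities at wavelengths lam1, lam2 (meters)
    Assumes a graybody (equal emissivity at both wavelengths).
    Returns the temperature in kelvin.
    """
    return C2 * (1.0 / lam2 - 1.0 / lam1) / (np.log(i1 / i2) - 5.0 * np.log(lam2 / lam1))

# e.g. red/green channels of an RGB camera, ~620 nm and ~540 nm (illustrative values)
print(ratio_pyrometry_temp(1.8, 1.0, 620e-9, 540e-9))
```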
3

Simarro, Gonzalo, Daniel Calvete, and Paola Souto. "UCalib: Cameras Autocalibration on Coastal Video Monitoring Systems." Remote Sensing 13, no. 14 (July 16, 2021): 2795. http://dx.doi.org/10.3390/rs13142795.

Abstract:
Following the path set out by the “Argus” project, video monitoring stations have become a very popular low-cost tool to continuously monitor beaches around the world. For these stations to be able to offer quantitative results, the cameras must be calibrated. Cameras are typically calibrated when installed, and, at best, extrinsic calibrations are performed from time to time. However, intra-day variations of camera calibration parameters due to thermal factors, or other kinds of uncontrolled movements, have been shown to introduce significant errors when transforming the pixels to real world coordinates. Starting from well-known feature detection and matching algorithms from computer vision, this paper presents a methodology to automatically calibrate cameras, on the intra-day time scale, from a small number of manually calibrated images. For the three cameras analyzed here, the proposed methodology allows for automatic calibration of >90% of the images in favorable conditions (images with many fixed features) and ∼40% for the worst-conditioned camera (almost featureless images). The results can be improved by increasing the number of manually calibrated images. Further, the procedure provides the user with two values that allow for the assessment of the expected quality of each automatic calibration. The proposed methodology, here applied to Argus-like stations, is applicable, e.g., to CoastSnap sites, where each image corresponds to a different camera.
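A minimal sketch of the feature-based recalibration idea, assuming OpenCV, a manually calibrated reference frame, and hypothetical ground-control pixel coordinates; the actual UCalib pipeline additionally produces the two quality-assessment values described above:

```python
import cv2
import numpy as np

def recalibrate_from_reference(ref_img, new_img, ref_gcp_pixels):
    """Sketch of intra-day recalibration: transfer ground-control-point
    pixels from a manually calibrated reference frame to a new frame by
    matching fixed scene features (assumes a quasi-static scene)."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(ref_img, None)
    k2, d2 = orb.detectAndCompute(new_img, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    # Updated GCP pixel positions; re-solving the extrinsic calibration
    # from them (e.g. with cv2.solvePnP) is the remaining step.
    gcps = ref_gcp_pixels.astype(np.float32).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(gcps, H), inliers
```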
4

Mokatren, Moayad, Tsvi Kuflik, and Ilan Shimshoni. "Calibration-Free Mobile Eye-Tracking Using Corneal Imaging." Sensors 24, no. 4 (February 15, 2024): 1237. http://dx.doi.org/10.3390/s24041237.

Abstract:
In this paper, we present and evaluate a calibration-free mobile eye-tracking system. The system’s mobile device consists of three cameras: an IR eye camera, an RGB eye camera, and a front-scene RGB camera. The three cameras build a reliable corneal imaging system that is used to estimate the user’s point of gaze continuously and reliably. The system auto-calibrates the device unobtrusively. Since the user is not required to follow any special instructions to calibrate the system, they can simply put on the eye tracker and start moving around using it. Deep learning algorithms together with 3D geometric computations were used to auto-calibrate the system per user. Once the model is built, a point-to-point transformation from the eye camera to the front camera is computed automatically by matching corneal and scene images, which allows the gaze point in the scene image to be estimated. The system was evaluated by users in real-life scenarios, indoors and outdoors. The average gaze error was 1.6° indoors and 1.69° outdoors, which is considered very good compared to state-of-the-art approaches.
5

Dedei Tagoe, Naa, and S. Mantey. "Determination of the Interior Orientation Parameters of a Non-metric Digital Camera for Terrestrial Photogrammetric Applications." Ghana Mining Journal 19, no. 2 (December 22, 2019): 1–9. http://dx.doi.org/10.4314/gm.v19i2.1.

Abstract:
The high cost of metric photogrammetric cameras has given rise to the utilisation of non-metric digital cameras to generate photogrammetric products in traditional close-range or terrestrial photogrammetric applications. For precision photogrammetric applications, the internal metric characteristics of the camera, customarily known as the Interior Orientation Parameters, need to be determined and analysed. The derivation of these parameters is usually achieved by implementing a bundle adjustment with self-calibration procedure. The stability of the Interior Orientation Parameters is an accuracy concern in digital cameras since they are not built with photogrammetric applications in mind. This study utilised two photogrammetric software packages (PhotoModeler and Australis) to calibrate a non-metric digital camera and determine its Interior Orientation Parameters. The camera parameters were obtained using the two packages and the Root Mean Square Errors (RMSE) calculated. It was observed that Australis gave an RMSE of 0.2435 and PhotoModeler gave 0.2335, implying that the calibrated non-metric digital camera is suitable for high-precision terrestrial photogrammetric projects. Keywords: Camera Calibration, Interior Orientation Parameters, Non-Metric Digital Camera
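For readers unfamiliar with interior orientation recovery, a standard chessboard-based calibration looks roughly as follows. A generic OpenCV sketch with hypothetical image names; PhotoModeler and Australis use bundle adjustment with self-calibration, but report a comparable reprojection RMSE:

```python
import glob
import cv2
import numpy as np

# 9x6 inner-corner chessboard; object points in an arbitrary planar frame.
objp = np.zeros((9 * 6, 3), np.float32)
objp[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2)

obj_pts, img_pts, size = [], [], None
for path in glob.glob("calib_*.jpg"):  # hypothetical image names
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, (9, 6))
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)
        size = gray.shape[::-1]

# rms is the reprojection RMSE (pixels); mtx holds the interior orientation
# (focal lengths, principal point) and dist the distortion coefficients.
rms, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
print(rms, mtx, dist)
```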
6

Liu, Zhe, Zhaozong Meng, Nan Gao, and Zonghua Zhang. "Calibration of the Relative Orientation between Multiple Depth Cameras Based on a Three-Dimensional Target." Sensors 19, no. 13 (July 8, 2019): 3008. http://dx.doi.org/10.3390/s19133008.

Abstract:
Depth cameras play a vital role in three-dimensional (3D) shape reconstruction, machine vision, augmented/virtual reality and other visual information-related fields. However, a single depth camera cannot obtain complete information about an object by itself due to the limitation of the camera’s field of view. Multiple depth cameras can solve this problem by acquiring depth information from different viewpoints. In order to do so, they need to be calibrated so that the complete 3D information can be obtained accurately. However, traditional chessboard-based planar targets are not well suited for calibrating the relative orientations between multiple depth cameras, because the coordinates of the different depth cameras need to be unified into a single coordinate system, and multi-camera systems arranged at specific angles may have very small overlapping fields of view. In this paper, we propose a 3D target-based multiple depth camera calibration method. Each plane of the 3D target is used to calibrate an independent depth camera. All planes of the 3D target are unified into a single coordinate system, which means the feature points on the calibration planes are also in one unified coordinate system. Using this 3D target, multiple depth cameras can be calibrated simultaneously. In this paper, a method of precise calibration using lidar is also proposed. This method is not only applicable to the 3D target designed for the purposes of this paper, but can also be applied to all 3D calibration objects consisting of planar chessboards. It can significantly reduce the calibration error compared with traditional camera calibration methods. In addition, in order to reduce the influence of the infrared transmitter of the depth camera and improve its calibration accuracy, the calibration process of the depth camera is optimized. A series of calibration experiments were carried out, and the experimental results demonstrated the reliability and effectiveness of the proposed method.
7

Park, Byung-Seo, Woosuk Kim, Jin-Kyum Kim, Eui Seok Hwang, Dong-Wook Kim, and Young-Ho Seo. "3D Static Point Cloud Registration by Estimating Temporal Human Pose at Multiview." Sensors 22, no. 3 (January 31, 2022): 1097. http://dx.doi.org/10.3390/s22031097.

Abstract:
This paper proposes a new technique for performing 3D static point cloud registration after calibrating a multi-view RGB-D camera using a 3D (three-dimensional) joint set. Consistent feature points are required to calibrate a multi-view camera, and accurate feature points are necessary to obtain high-accuracy calibration results. In general, a special tool, such as a chessboard, is used to calibrate a multi-view camera. This paper instead uses the joints of a human skeleton as feature points, so that calibration can be performed efficiently without special tools. We propose an RGB-D-based calibration algorithm that uses the joint coordinates of the 3D joint set obtained through pose estimation as feature points. Since the human-body information captured by each view of the multi-view camera may be incomplete, the joint sets predicted from the images may also be incomplete. After efficiently integrating multiple incomplete joint sets into one joint set, the multi-view cameras can be calibrated by using the combined joint set to obtain extrinsic matrices. To increase the accuracy of calibration, multiple joint sets are used for optimization through temporal iteration. We show through experiments that it is possible to calibrate a multi-view camera using a large number of incomplete joint sets.
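The step of computing an extrinsic transform from corresponding 3D joints can be illustrated with the classical Kabsch/Procrustes solution; a self-contained NumPy sketch (the paper additionally merges incomplete joint sets and iterates over time):

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src -> dst, where
    src/dst are Nx3 arrays of corresponding 3D joint positions seen by
    two depth cameras (Kabsch / Procrustes solution via SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard keeps R a proper rotation (det(R) = +1).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t
```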
8

Yin, Lei, Xiangjun Wang, Yubo Ni, Kai Zhou, and Jilong Zhang. "Extrinsic Parameters Calibration Method of Cameras with Non-Overlapping Fields of View in Airborne Remote Sensing." Remote Sensing 10, no. 8 (August 16, 2018): 1298. http://dx.doi.org/10.3390/rs10081298.

Abstract:
Multi-camera systems are widely used in the fields of airborne remote sensing and unmanned aerial vehicle imaging. The measurement precision of these systems depends on the accuracy of the extrinsic parameters. Therefore, it is important to accurately calibrate the extrinsic parameters between the onboard cameras. Unlike conventional multi-camera calibration methods with a common field of view (FOV), multi-camera calibration without overlapping FOVs presents certain difficulties. In this paper, we propose a calibration method for a multi-camera system without common FOVs, intended for aerial photogrammetry. First, the extrinsic parameters of any two cameras in the multi-camera system are calibrated, and the extrinsic matrix is optimized via the re-projection error. Then, the extrinsic parameters of each camera are unified to the system reference coordinate system using a global optimization method. A simulation experiment and a physical verification experiment were designed to validate the method. The experimental results show that the method is feasible. The rotation error angle of the cameras' extrinsic parameters is less than 0.001 rad and the translation error is less than 0.08 mm.
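The unification step (expressing every camera in one system reference frame) boils down to chaining homogeneous transforms; a small sketch with hypothetical transform names:

```python
import numpy as np

def to_homogeneous(R, t):
    """Pack a rotation matrix and translation vector into a 4x4 transform."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t.ravel()
    return T

# If T_a_ref maps reference-frame points into camera a, and T_b_ref into
# camera b, the pairwise extrinsic from camera a to camera b follows by chaining:
def pairwise_extrinsic(T_a_ref, T_b_ref):
    return T_b_ref @ np.linalg.inv(T_a_ref)
```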
9

Du, Yuchuan, Cong Zhao, Feng Li, and Xuefeng Yang. "An Open Data Platform for Traffic Parameters Measurement via Multirotor Unmanned Aerial Vehicles Video." Journal of Advanced Transportation 2017 (2017): 1–12. http://dx.doi.org/10.1155/2017/8324301.

Abstract:
Multirotor unmanned aerial vehicle video observation can obtain accurate information about traffic flow over large areas and extended time periods. This paper aims to construct an open data test platform for updated traffic data accumulation and traffic simulation model verification by analyzing real-time aerial video. Common calibration boards were used to calibrate the internal camera parameters, and image distortion correction was performed using a high-precision distortion model. To solve the external parameter calibration problem, an existing algorithm was improved by adding two sets of orthogonal equations, achieving higher accuracy with only four calibrated points. A simplified algorithm is proposed to calibrate cameras by calculating the relationship between pixel length and true length when the camera's optical axis is perpendicular to the road. Aerial video (160 min) from the Shanghai inner ring expressway was collected, and real-time traffic parameter values were obtained by analyzing and processing the aerial visual data containing spatial, time, velocity, and acceleration data. The results verify that the proposed platform provides a reasonable and objective approach to traffic simulation model verification and improvement. The proposed data platform also offers significant advantages over conventional methods that use historical and outdated data to run poorly calibrated traffic simulation models.
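The simplified nadir-view calibration mentioned above reduces to the pinhole relation between pixel length, flight altitude and focal length; a tiny sketch with illustrative numbers:

```python
def pixels_to_meters(pixel_len, altitude_m, focal_px):
    """Nadir-view pinhole model: a length of pixel_len pixels on the road
    plane corresponds to pixel_len * altitude / focal_length meters, with
    focal_px the focal length expressed in pixels."""
    return pixel_len * altitude_m / focal_px

# e.g. a 120 px lane marking seen from 100 m with a 2000 px focal length
print(pixels_to_meters(120, 100.0, 2000.0))  # -> 6.0 m
```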
10

Teo, T. "VIDEO-BASED POINT CLOUD GENERATION USING MULTIPLE ACTION CAMERAS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-4/W5 (May 11, 2015): 55–60. http://dx.doi.org/10.5194/isprsarchives-xl-4-w5-55-2015.

Abstract:
Due to the development of action cameras, the use of video technology for collecting geo-spatial data has become an important trend. The objective of this study is to compare the image mode and video mode of multiple action cameras for 3D point cloud generation. Frame images are acquired from discrete camera stations while videos are taken along continuous trajectories. The proposed method includes five major parts: (1) camera calibration, (2) video conversion and alignment, (3) orientation modelling, (4) dense matching, and (5) evaluation. As action cameras usually have a large FOV in wide viewing mode, camera calibration plays an important role in correcting the effect of lens distortion before image matching. Once the cameras had been calibrated, the authors used them to take video in an indoor environment. The videos were further converted into multiple frame images based on the frame rates. In order to overcome time synchronization issues between videos from different viewpoints, an additional timer app was used to determine the time shift factor between cameras for time alignment. A structure from motion (SfM) technique was utilized to obtain the image orientations. Then, the semi-global matching (SGM) algorithm was adopted to obtain dense 3D point clouds. The preliminary results indicate that the 3D points from 4K video are similar to those from 12 MP images, but data acquisition with 4K video is more efficient than with 12 MP digital images.
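The video-conversion-and-alignment step can be sketched with OpenCV; a minimal illustration where the inter-camera time shift (a hypothetical parameter here, measured in the paper with a timer app) is applied as a frame offset:

```python
import cv2

def extract_frames(video_path, fps_out, time_shift_s=0.0):
    """Convert a video into frame images at fps_out, skipping an initial
    offset that compensates the time shift between cameras."""
    cap = cv2.VideoCapture(video_path)
    fps_in = cap.get(cv2.CAP_PROP_FPS)
    step = max(1, round(fps_in / fps_out))
    start = int(round(time_shift_s * fps_in))
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx >= start and (idx - start) % step == 0:
            frames.append(frame)
        idx += 1
    cap.release()
    return frames
```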

Dissertations / Theses on the topic "Calibrage camera"

1

Dornaika, Fadi. "Contributions à l'intégration vision-robotique : calibrage, localisation et asservissement." Phd thesis, Grenoble INPG, 1995. http://www.theses.fr/1995INPG0097.

Abstract:
The integration of computer vision with robot control is the concern of this thesis. This integration has many advantages for the interaction of a robotic system with its environment. First, we are interested in the study of calibration methods. Two topics are treated: i) hand/eye calibration and ii) object pose. For the first, we developed a nonlinear method that proves very robust with respect to measurement errors; for the second, we developed an iterative para-perspective pose computation method that can be used in real-time applications. Next we are interested in visual servo control and extend the well-known "image-based servoing" method to a camera that is not attached to the robot being servoed. When performing relative positioning, we show that the computation of the goal features does not depend on an explicit estimate of the camera intrinsic or extrinsic parameters. For a given task, the robot motions are computed in order to reduce a 2D error to zero. The central issue of any image-based servoing method is the estimation of the image Jacobian. We show the advantage of using an exact image Jacobian with respect to the dynamic behaviour of the servoing process. This control method is used in automatic object grasping with a 6-DOF robot manipulator. All the methods presented in this thesis are validated with real and simulated data.
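The control law referred to here is the classical image-based servoing rule; a minimal NumPy sketch in which the image Jacobian J is assumed given (estimating it exactly is precisely the thesis's point):

```python
import numpy as np

def ibvs_velocity(J, error, gain=0.5):
    """Classical image-based servoing law: velocity screw
    v = -gain * pinv(J) @ e, driving the 2D image error e to zero.
    J is the image Jacobian (interaction matrix), shape (2N, 6)."""
    return -gain * np.linalg.pinv(J) @ error
```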
2

Draréni, Jamil. "Exploitation de contraintes photométriques et géométriques en vision : application au suivi, au calibrage et à la reconstruction." Grenoble, 2010. http://www.theses.fr/2010GRENM061.

Abstract:
The topic of this thesis revolves around three fundamental problems in computer vision: namely, video tracking, camera calibration and shape recovery. The proposed methods are solely based on photometric and geometric constraints found in the images. Video tracking, usually performed on a video sequence, consists in tracking a region of interest selected manually by an operator. We extend a successful tracking method by adding the ability to estimate the orientation of the tracked object. Furthermore, we consider another fundamental problem in computer vision: calibration. Here we tackle the problem of calibrating linear cameras (a.k.a. pushbroom cameras) and video projectors. For the former we propose a convenient plane-based calibration algorithm, and for the latter a calibration algorithm that does not require a physical grid, along with a planar auto-calibration algorithm. Finally, our third research direction concerns shape reconstruction using coplanar shadows. This technique is known to suffer from a bas-relief ambiguity if no extra information on the scene or light source is provided. We propose a simple method to reduce this ambiguity from four parameters to a single one. We achieve this by taking into account the visibility of the light spots in the camera.
3

Fang, Yong. "Road scene perception based on fisheye camera, LIDAR and GPS data combination." Thesis, Belfort-Montbéliard, 2015. http://www.theses.fr/2015BELF0265/document.

Abstract:
Road scene understanding is one of the key research topics in intelligent vehicles. This thesis focuses on the detection and tracking of obstacles by multi-sensor data fusion and analysis. The considered system is composed of a lidar, a fisheye camera and a global positioning system (GPS). Several steps of the perception scheme are studied: extrinsic calibration between the fisheye camera and the lidar, road detection, and obstacle detection and tracking. Firstly, a new method for extrinsic calibration between a fisheye camera and a lidar is proposed. For intrinsic modeling of the fisheye camera, three models from the literature are studied and compared. For extrinsic calibration between the two sensors, the normal to the lidar plane is first estimated based on the determination of "known" points. The extrinsic parameters are then computed using a least-squares approach based on geometrical constraints, the lidar plane normal and the lidar measurements. The second part of this thesis is dedicated to road detection exploiting both fisheye camera and lidar data. The road is first coarsely detected using the illumination-invariant image. The normalized-histogram-based classification is then validated using the lidar data. The road segmentation is finally refined exploiting two successive road detection results and a distance map computed in HSI color space. The third step focuses on obstacle detection, especially in the case of motion blur. The proposed method combines previously detected road, map, GPS and lidar information. Regions of interest are extracted from the previous road detection. Road central lines are then extracted from the image and matched with a road shape model extracted from a 2D GIS map. Lidar measurements are used to validate the results. The final step is object tracking, still using the fisheye camera and lidar. The proposed method is based on previously detected obstacles and a region-growing approach. All the methods proposed in this thesis are tested, evaluated and compared to state-of-the-art approaches using real data acquired with the IRTES-SET laboratory experimental platform.
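The RANSAC plane-normal step of the extrinsic calibration can be sketched as follows; a generic plane-fitting illustration in NumPy, not the thesis's exact implementation:

```python
import numpy as np

def ransac_plane_normal(points, n_iter=500, tol=0.02, seed=0):
    """Estimate the unit normal of the dominant plane in an Nx3 lidar
    point cloud: sample 3 points per iteration, keep the plane with the
    most inliers, then refine the normal on the inliers via SVD."""
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:
            continue  # degenerate (collinear) sample
        n /= np.linalg.norm(n)
        inliers = np.abs((points - p0) @ n) < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    pts = points[best_inliers]
    pts = pts - pts.mean(axis=0)
    # The refined normal is the singular vector of the smallest singular value.
    return np.linalg.svd(pts)[2][-1]
```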
4

Szczepanski, Michał. "Online stereo camera calibration on embedded systems." Thesis, Université Clermont Auvergne‎ (2017-2020), 2019. http://www.theses.fr/2019CLFAC095.

Abstract:
This thesis describes an approach for online calibration of stereo cameras on embedded systems. It introduces a new functionality for cyber-physical systems by measuring the quality of service of the calibration. Thus, the manuscript proposes dynamic monitoring and calculation of the internal sensor parameters required for many computer vision tasks. The method improves both the safety and the efficiency of systems using stereo cameras. It prolongs the life of the devices thanks to this self-repair capability, which increases autonomy. Systems such as mobile robots or smart glasses in particular can directly benefit from this technique. The stereo camera is a sensor capable of providing a wide spectrum of data. Beforehand, this sensor must be extrinsically calibrated, i.e. the relative positions of the two cameras must be determined. However, the extrinsic calibration can change over time due to interactions with the external environment, for example (shocks, vibrations...). Thus, a recalibration operation allows correcting these effects. Indeed, misinterpreted data can lead to errors and malfunction of applications. In order to counter such a scenario, the system must have an internal mechanism, a quality of service, to decide whether the current parameters are correct and/or to calculate new ones, if necessary. The approach proposed in this thesis is a self-calibration method based on the use of data coming only from the observed scene, without controlled models. First of all, we consider calibration as a system process running in the background that has to run continuously in real time. This internal calibration is not the main task of the system, but the procedure on which high-level applications rely. For this reason, system constraints severely limit the algorithm in terms of complexity, memory and time. The proposed calibration method requires few resources and uses standard data from computer vision applications, so it is hidden within the application pipeline. In this manuscript, many discussions are devoted to topics related to online stereo calibration on embedded systems, such as the extraction of robust points of interest, the calculation of the scale factor, hardware implementation aspects, high-level applications requiring this approach, etc. Finally, this thesis describes and explains a methodology for building a new type of dataset representing a change of camera position, used to validate the approach. The manuscript also explains the different work environments used in the realization of the datasets and the camera calibration procedure. In addition, it presents the first prototype of a smart helmet, on which the proposed self-calibration service is dynamically executed. Finally, the real-time calibration is characterized on an embedded ARM Cortex A7 processor.
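One way such a quality-of-service measure can be realized is by monitoring the epipolar consistency of feature matches under the current calibration. A hedged sketch, assuming OpenCV, float32 point arrays, and a fundamental matrix F derived from the current stereo parameters (not the thesis's actual metric):

```python
import cv2
import numpy as np

def epipolar_qos(pts_left, pts_right, F):
    """Quality-of-service style check for an online stereo calibration:
    mean distance (pixels) of matched right-image points to the epipolar
    lines implied by the current fundamental matrix F. A drift of this
    value above a threshold can trigger recalibration."""
    lines = cv2.computeCorrespondEpilines(pts_left.reshape(-1, 1, 2), 1, F)
    lines = lines.reshape(-1, 3)
    x, y = pts_right[:, 0], pts_right[:, 1]
    d = np.abs(lines[:, 0] * x + lines[:, 1] * y + lines[:, 2])
    d /= np.sqrt(lines[:, 0] ** 2 + lines[:, 1] ** 2)
    return d.mean()
```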
5

Rameau, François. "Système de vision hybride à fovéation pour la vidéo-surveillance et la navigation robotique." Thesis, Dijon, 2014. http://www.theses.fr/2014DIJOS031/document.

Abstract:
The primary goal of this thesis is to elaborate a binocular vision system using two different types of camera. The system studied here is composed of an omnidirectional camera coupled with a PTZ camera. This heterogeneous association of cameras having different characteristics is called a hybrid stereo-vision system. The pair combines the advantages of both cameras, that is to say a large field of view and an accurate view of a particular region of interest with an adjustable level of detail using the zoom. In this thesis, we present multiple contributions in visual tracking using omnidirectional sensors, PTZ camera self-calibration, hybrid vision system calibration, and structure from motion using a hybrid stereo-vision system.
6

Scandaroli, Glauco Garcia. "Fusion de données visuo-inertielles pour l'estimation de pose et l'autocalibrage." Phd thesis, Université Nice Sophia Antipolis, 2013. http://tel.archives-ouvertes.fr/tel-00861858.

Abstract:
Multi-sensor systems exploit the complementarity of different sensory sources. For example, a visual-inertial sensor allows pose to be estimated at high frequency and with great accuracy. Vision methods measure pose at low frequency but limit the drift caused by the integration of inertial data. Inertial measurement units measure displacement increments at high frequency, which makes it possible to initialize the vision and to compensate for its momentary loss. This thesis analyzes two aspects of the problem. First, we study direct visual methods for pose estimation, and propose a new technique based on the correlation between images and the weighting of regions and pixels, with an optimization inspired by the Newton method. Our technique estimates pose even in the presence of extreme illumination changes. Second, we study data fusion from the perspective of control theory. Our main results concern the development of observers for pose estimation, IMU bias estimation and self-calibration. We analyze the rotation dynamics from a nonlinear point of view, and provide observers that are stable on the group of rotation matrices. Furthermore, we analyze the translation dynamics as a linear time-varying system, and propose conditions for uniform observability. The observability analyses allow us to prove the uniform stability of the proposed observers. The visual method and the observers are tested and compared to classical methods using simulations and real visual-inertial data.
7

Pessel, Nathalie. "Auto-calibrage d'une caméra en milieu sous-marin." Montpellier 2, 2003. http://www.theses.fr/2003MON20156.

8

Andersson, Elin. "Thermal Impact of a Calibrated Stereo Camera Rig." Thesis, Linköpings universitet, Datorseende, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-129636.

Abstract:
Measurements from stereo reconstruction can be obtained with high accuracy using correctly calibrated cameras. A stereo camera rig mounted in an outdoor environment is exposed to temperature changes, which have an impact on the calibration of the cameras. The aim of this master thesis was to investigate the thermal impact on a calibrated stereo camera rig. This was performed by placing a stereo rig in a temperature chamber and collecting data of a calibration board at different temperatures. Data was collected with two different cameras and lenses and used for calibration of the stereo camera rig under different scenarios. The obtained parameters were plotted and analyzed. The result of the master thesis is that thermal variation has an impact on the accuracy of the calibrated stereo camera rig. A calibration obtained at one temperature cannot be used at a different temperature without a degradation of accuracy. The plotted parameters from the calibration had a high noise level due to problems with the calibration methods, and no visible trend from temperature changes could be seen.
9

Zhou, Han, and 周晗. "Intelligent video surveillance in a calibrated multi-camera system." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2011. http://hub.hku.hk/bib/B45989217.

10

Jethwa, Manish 1976. "Efficient volumetric reconstruction from multiple calibrated cameras." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/30163.

Abstract:
The automatic reconstruction of large scale 3-D models from real images is of significant value to the field of computer vision in the understanding of images. As a consequence, many techniques have emerged to perform scene reconstruction from calibrated images where the position and orientation of the camera are known. Feature based methods using points and lines have enjoyed much success and have been shown to be robust against noise and changing illumination conditions. The models produced by these techniques however, can often appear crude when untextured due to the sparse set of points from which they are created. Other reconstruction methods, such as volumetric techniques, use image pixel intensities rather than features, reconstructing the scene as small volumetric units called voxels. The direct use of pixel values in the images has restricted current methods to operating on scenes with static illumination conditions. Creating a volumetric representation of the scene may also require millions of interdependent voxels which must be efficiently processed. This has limited most techniques to constrained camera locations and small indoor scenes. The primary goal of this thesis is to perform efficient voxel-based reconstruction of urban environments using a large set of pose-instrumented images. In addition to the 3-D scene reconstruction, the algorithm will also generate estimates of surface reflectance and illumination. Designing an algorithm that operates in a discretized 3-D scene space allows for the recovery of intrinsic scene color and for the integration of visibility constraints, while avoiding the pitfalls of image based feature correspondence. The algorithm demonstrates how in principle it is possible to reduce computational effort over more naive methods. The algorithm is intended to perform the reconstruction of large scale 3-D models from controlled imagery without human intervention.

Books on the topic "Calibrage camera"

1

Roth, Zvi S., ed. Camera-aided robot calibration. Boca Raton: CRC Press, 1996.

2

Dailey, Daniel J. The automated use of un-calibrated CCTV cameras as quantitative speed sensors, Phase 3. Olympia, WA: Washington State Dept. of Transportation, 2006.


Book chapters on the topic "Calibrage camera"

1

Benligiray, Burak, Halil Ibrahim Cakir, Cihan Topal, and Cuneyt Akinlar. "Counting Turkish Coins with a Calibrated Camera." In Image Analysis and Processing — ICIAP 2015, 216–26. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-23234-8_21.

2

Komorowski, Jacek, and Przemysław Rokita. "Camera Pose Estimation from Sequence of Calibrated Images." In Advances in Intelligent Systems and Computing, 101–9. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-32384-3_13.

3

Zhang, Tianlong, Xiaorong Shen, Quanfa Xiu, and Luodi Zhao. "Person Re-identification Based on Minimum Feature Using Calibrated Camera." In Lecture Notes in Electrical Engineering, 533–40. Singapore: Springer Singapore, 2017. http://dx.doi.org/10.1007/978-981-10-6499-9_51.

4

Svoboda, Tomáš, and Peter Sturm. "A badly calibrated camera in ego-motion estimation — propagation of uncertainty." In Computer Analysis of Images and Patterns, 183–90. Berlin, Heidelberg: Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/3-540-63460-6_116.

5

Choi, Wongun, and Silvio Savarese. "Multiple Target Tracking in World Coordinate with Single, Minimally Calibrated Camera." In Computer Vision – ECCV 2010, 553–67. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15561-1_40.

6

Jiang, Xiaoyan, Erik Rodner, and Joachim Denzler. "Multi-person Tracking-by-Detection Based on Calibrated Multi-camera Systems." In Computer Vision and Graphics, 743–51. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33564-8_89.

7

Zhang, Xiaoqiang, Yanning Zhang, Tao Yang, and Zhengxi Song. "Calibrate a Moving Camera on a Linear Translating Stage Using Virtual Plane + Parallax." In Intelligent Science and Intelligent Data Engineering, 48–55. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-36669-7_7.

8

Cheung, Kin-Wang, Jiansheng Chen, and Yiu-Sang Moon. "Synthesizing Frontal Faces on Calibrated Stereo Cameras for Face Recognition." In Advances in Biometrics, 347–56. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-01793-3_36.

9

Nishizaki, Takashi, Yoshinari Kameda, and Yuichi Ohta. "Visual Surveillance Using Less ROIs of Multiple Non-calibrated Cameras." In Computer Vision – ACCV 2006, 317–27. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11612032_33.

10

Kang, Sing Bing, and Richard Weiss. "Can We Calibrate a Camera Using an Image of a Flat,Textureless Lambertian Surface?" In Lecture Notes in Computer Science, 640–53. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/3-540-45053-x_41.


Conference papers on the topic "Calibrage camera"

1

Lai, H. W., C. K. Ma, S. L. Yang, and C. M. Tsui. "Calibration of the Period and Time Difference of Synchronized Flashing Lights." In NCSL International Workshop & Symposium. NCSL International, 2020. http://dx.doi.org/10.51843/wsproceedings.2020.10.

Abstract:
Videos recorded by CCTVs or car cameras are often tendered as evidence in legal proceedings and traffic accident investigations. Important information such as vehicle speed may be estimated from the video data to support the case. In such applications, to satisfy the standard of proof in a court of law, the timing parameters of the cameras must be calibrated. A specially designed frame interval timer called the Lightboard is used in some jurisdictions to determine the frame interval of video recorded by a camera. The accuracy of the time base in this frame interval timer in turn needs to be calibrated. This paper proposes methods to calibrate the flashing period and the time difference between the two synchronized LED panels of a Lightboard system.
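The reason the frame interval must be calibrated is that it enters speed estimates directly; a small worked sketch of the arithmetic:

```python
def speed_kmh(distance_m, n_frames, frame_interval_s):
    """Vehicle speed from video evidence: a known ground distance divided
    by (frame count * calibrated frame interval). Note that an error in
    the frame interval propagates linearly into the speed estimate."""
    return distance_m / (n_frames * frame_interval_s) * 3.6

# 10 m covered in 12 frames at a calibrated 25 fps (0.04 s/frame)
print(speed_kmh(10.0, 12, 0.04))  # -> 75.0 km/h
```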
2

Vader, Anup M., Abhinav Chadda, Wenjuan Zhu, Ming C. Leu, Xiaoqing F. Liu, and Jonathan B. Vance. "An Integrated Calibration Technique for Multi-Camera Vision Systems." In ASME 2010 World Conference on Innovative Virtual Reality. ASMEDC, 2010. http://dx.doi.org/10.1115/winvr2010-3732.

Abstract:
This paper presents the integration and evaluation of two popular camera calibration techniques for developing multi-camera vision systems for motion capture. To demonstrate and evaluate the integrated calibration technique, multiple Wii Remotes (Wiimotes) from Nintendo were used to form a vision system performing 3D motion capture in real time. The integrated technique is a two-step process: it first calibrates the intrinsic parameters of each camera using Zhang’s algorithm [5] and then calibrates the extrinsic parameters of the cameras together using Svoboda’s algorithm [9]. Computer software has been developed to implement the integrated technique, and experiments carried out using it to perform motion capture with Wiimotes show a significant improvement in measurement accuracy over the existing calibration techniques.
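The two-step structure maps naturally onto OpenCV primitives; a rough sketch in which cv2.stereoCalibrate stands in for Svoboda's multi-camera step, and obj_pts, img_pts_a, img_pts_b and size are assumed to be chessboard detections prepared as in the earlier calibrateCamera sketch:

```python
import cv2

# Step 1 (per camera): intrinsic calibration, Zhang-style.
_, K_a, dist_a, _, _ = cv2.calibrateCamera(obj_pts, img_pts_a, size, None, None)
_, K_b, dist_b, _, _ = cv2.calibrateCamera(obj_pts, img_pts_b, size, None, None)

# Step 2 (jointly): extrinsics with the intrinsics held fixed. A two-camera
# stand-in for the multi-camera extrinsic step described in the paper.
ret, _, _, _, _, R, T, E, F = cv2.stereoCalibrate(
    obj_pts, img_pts_a, img_pts_b, K_a, dist_a, K_b, dist_b, size,
    flags=cv2.CALIB_FIX_INTRINSIC)
```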
3

Muglikar, Manasi, Mathias Gehrig, Daniel Gehrig, and Davide Scaramuzza. "How to Calibrate Your Event Camera." In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2021. http://dx.doi.org/10.1109/cvprw53098.2021.00155.

4

Slembrouck, N., J. Audenaert, and F. Leloup. "SNAPSHOT AND LINESCAN HYPERSPECTRAL IMAGING FOR VISUAL APPEARANCE MEASUREMENTS." In CIE 2023 Conference. International Commission on Illumination, CIE, 2023. http://dx.doi.org/10.25039/x50.2023.po139.

Abstract:
Hyperspectral imaging techniques offer interesting new methods of measuring visual appearance. In this paper, two different types of hyperspectral cameras (snapshot vs. linescan) are compared on a theoretical and practical basis, and it is tested whether they can be used for visual appearance measurements. The cameras are first characterized and calibrated. A GretagMacbeth ColorChecker® is then used to compare and evaluate the quality of the spectral reflectance measurements of the two cameras. While the data of the snapshot camera better matches the spectral measurements obtained with two spectrophotometers, its spatial performance limits the usability of the device. Based on the findings of this study, further research is discussed in order to improve the measurement accuracy of the snapshot hyperspectral camera.
5

Trocoli, Tiago, and Luciano Oliveira. "Using the Scene to Calibrate the Camera." In 2016 29th SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI). IEEE, 2016. http://dx.doi.org/10.1109/sibgrapi.2016.069.

6

Mentens, A., G. H. Scheir, Y. Ghysel, F. Descamps, J. Lataire, and V. A. Jacobs. "OPTIMIZING CAMERA PLACEMENT FOR A LUMINANCE-BASED SHADING CONTROL SYSTEM." In CIE 2021 Conference. International Commission on Illumination, CIE, 2021. http://dx.doi.org/10.25039/x48.2021.po39.

Abstract:
Shading control strategies are nowadays employed in office environments to improve the visual comfort of the user. These strategies are often solely illuminance-based, whereas comfort metrics such as the Daylight Glare Probability (DGP) also need luminance values. In previous studies, daylight glare has been assessed by calculating the DGP from luminance maps obtained via a luminance camera or from a High Dynamic Range (HDR) image obtained with a commercially available camera. These detectors are traditionally mounted close to the user and aligned with the viewing direction. In real office environments, this camera position is impractical, and simulations based on machine learning techniques have shown a relation between the DGP from an observer's viewpoint and the DGP calculated from a ceiling camera. This paper experimentally validates this method in a real office environment by using two different cameras and two different illuminance sensors, i.e., a low-cost illuminance sensor and a calibrated sensor. Both cameras render similar results, although one camera overestimates the DGP. Moreover, the shortcomings of the simulation results are pinpointed and the obstacles to a realistic application are addressed. Furthermore, when moving the cameras to different positions, the sun position proved to be an informative additional input for correlating the two DGP values. In future work, additional data will be analysed to determine the performance in other weather conditions and window orientations.
7

Shih, Ping-Chang, Guillermo Gallego, Anthony Yezzi, and Francesco Fedele. "Improving 3-D Variational Stereo Reconstruction of Oceanic Sea States by Camera Calibration Refinement." In ASME 2013 32nd International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/omae2013-10550.

Abstract:
Studies of wave climate, extreme ocean events, turbulence, and the energy dissipation of breaking and non-breaking waves are closely related to the measurements of the ocean surface. To gauge and analyze ocean waves on a computer, we reconstruct their 3-D model by utilizing the concepts of stereoscopic reconstruction and variational optimization. This technique requires a pair of calibrated cameras — cameras whose parameters are estimated for the mathematical projection model from space to an image plane — to take videos of the ocean surface as input. However, the accuracy of camera parameters, including the orientations and the positions of cameras as well as the internal specifications of optics elements, are subject to environmental factors and manual calibration errors. Because the errors of camera parameters magnify the errors of the 3-D reconstruction after projection, we propose a novel algorithm that refines camera parameters, thereby improving the accuracy of variational 3-D reconstruction. We design a multivariate error function that represents discrepancies between captured images and the reprojection of the reconstruction onto the images. As a result of the iteratively diminished error function, the camera parameters and the reconstruction of ocean waves evolve to optimal values. We demonstrate the success of our algorithm by comparing the reconstruction results with the refinement procedure to those without it and show improvements in the statistics and spectrum of the wave reconstruction after the refinement procedure.
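The refinement loop described here reduces to evaluating reprojection discrepancies; a generic OpenCV sketch of the error term (not the paper's variational implementation):

```python
import cv2
import numpy as np

def mean_reprojection_error(obj_pts, img_pts, rvec, tvec, K, dist):
    """Reprojection discrepancy used as the refinement criterion: project
    known 3D points with the current camera parameters and average the
    pixel distance to their measured image locations."""
    proj, _ = cv2.projectPoints(obj_pts, rvec, tvec, K, dist)
    diffs = proj.reshape(-1, 2) - img_pts.reshape(-1, 2)
    return np.linalg.norm(diffs, axis=1).mean()
```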
8

Xie, Yupeng, Sarah Fachada, Daniele Bonatto, Mehrdad Teratani, and Gauthier Lafruit. "View Synthesis: LiDAR Camera versus Depth Estimation." In WSCG'2021 - 29. International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision'2021. Západočeská univerzita v Plzni, 2021. http://dx.doi.org/10.24132/csrn.2021.3101.35.

Abstract:
Depth-Image-Based Rendering (DIBR) can synthesize a virtual view image from a set of multiview images and corresponding depth maps. However, this requires an accurate depth map estimation that incurs a high computational cost of several minutes per frame in DERS (MPEG-I’s Depth Estimation Reference Software), even on a high-class computer. LiDAR cameras can thus be an alternative solution to DERS in real-time DIBR applications. We compare the quality of a low-cost LiDAR camera, the Intel RealSense LiDAR L515, calibrated and configured adequately, with DERS using MPEG-I’s Reference View Synthesizer (RVS). In IV-PSNR, the LiDAR camera reaches 32.2dB view synthesis quality with a 15cm camera baseline and 40.3dB with a 2cm baseline. Though DERS outperforms the LiDAR camera by 4.2dB, the latter provides a better quality-performance trade-off. However, visual inspection demonstrates that LiDAR’s virtual views have even slightly higher quality than with DERS in most tested low-texture scene areas, except for object borders. Overall, we highly recommend using LiDAR cameras over advanced depth estimation methods (like DERS) in real-time DIBR applications. Nevertheless, this requires delicate calibration with multiple tools further exposed in the paper.
9

Tsui, Darin, Capalina Melentyev, Ananya Rajan, Rohan Kumar, and Frank E. Talke. "An Optical Tracking Approach to Computer-Assisted Surgical Navigation via Stereoscopic Vision." In ASME 2023 32nd Conference on Information Storage and Processing Systems. American Society of Mechanical Engineers, 2023. http://dx.doi.org/10.1115/isps2023-111020.

Abstract:
Computer-assisted navigation has become a popular solution in surgical procedures where a high amount of precision is required. Current state-of-the-art methods of surgical navigation involve tracking reflective 3D marker spheres using IR stereoscopic cameras. However, the cost of implementing such systems may not be affordable for smaller healthcare systems. In this paper, we propose that fully optical navigation has the potential to be a viable alternative to state-of-the-art reflective marker navigation. We use fiducial ArUco markers to facilitate real-time position tracking. Using two inexpensive cameras, we design and calibrate a stereoscopic camera to record the 3D position of an ArUco marker moving through space along a positioning platform. Additionally, we explore the possibility of using different color spaces and physical marker colors to improve the detection percentage and accuracy of markers. We identified that black-and-white ArUco markers using the Hue, Saturation, and Lightness (HSL) color space gave a positional mean error of 5.38 mm. Using the Red, Green, and Blue (RGB) color space gave the highest detection percentage for the same ArUco markers. In the future, the mean error can be reduced by increasing camera quality and by using a multi-stereoscopic camera setup.
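The detection-plus-triangulation core can be sketched with OpenCV's contrib ArUco module (legacy pre-4.7 API; newer versions use an ArucoDetector class) and hypothetical rectified images img_left/img_right with projection matrices P1/P2 from the stereo calibration:

```python
import cv2

aruco = cv2.aruco
params = aruco.DetectorParameters_create()
dictionary = aruco.Dictionary_get(aruco.DICT_4X4_50)

def marker_center(gray):
    """Return the pixel center of the first detected ArUco marker, or None."""
    corners, ids, _ = aruco.detectMarkers(gray, dictionary, parameters=params)
    return corners[0][0].mean(axis=0) if ids is not None else None

cL = marker_center(img_left)
cR = marker_center(img_right)
if cL is not None and cR is not None:
    # Triangulate the marker center from the two calibrated views.
    X = cv2.triangulatePoints(P1, P2, cL.reshape(2, 1), cR.reshape(2, 1))
    X = (X[:3] / X[3]).ravel()  # homogeneous -> 3D position
```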
10

Neal, Joseph, Tara Leipold, and Karla Petroskey. "The Effect of Image Stabilization on PhotoModeler Project Accuracy." In WCX SAE World Congress Experience. 400 Commonwealth Drive, Warrendale, PA, United States: SAE International, 2024. http://dx.doi.org/10.4271/2024-01-2474.

Abstract:
<div class="section abstract"><div class="htmlview paragraph">Optical Image Stabilization (OIS) is a technology used in cameras and camcorders to reduce blur and shaky images or videos caused by unintentional camera movements. The primary goal of OIS is to counteract motion and maintain the stability of the image being captured, resulting in clearer, sharper, and more stable photos and videos. PhotoModeler, a photogrammetry software, advises users to turn off OIS on their cameras. Since the iPhone 7, OIS has become standard on all iPhones and cannot be deactivated. When calibrating an iPhone camera for photogrammetry, the OIS affects the calibration project's marking residual. In photogrammetry and 3D modeling terminology, "marking residual" typically refers to the difference between the observed image points and the corresponding points predicted by the photogrammetric process and refers to pixels. In other words, it represents the error between the actual image measurements and the values calculated by the photogrammetric algorithm. Because of OIS, the marking residual for the calibration project for an OIS-equipped camera is often outside the range recommended by PhotoModeler. As camera phones are now ubiquitous, this study aims to understand the effect of the OIS in modern camera phones on the accuracy of a PhotoModeler project. PhotoModeler projects were done using photographs taken with iPhone 7, 8, XS, 11, 12, 13, and 14 Pro models, all equipped with OIS. The results of this study demonstrate that for OIS-equipped cameras, approximately 95 percent of points measured via a calibrated camera project were within 1 to 11 mm (0.04 to 0.42 inches) of their true position, approximately 95 percent of points measured via an exemplar camera project were within 1 to 12 mm (0.03 to 0.48 inches) of their true position, and approximately 95 percent of points measured via a targetless exemplar camera project were within 1 to 14 mm (0.04 to 0.54 inches) of their true position.</div></div>

Reports on the topic "Calibrage camera"

1

Latz, Michael I. DURIP: A Low-Light Photon-Calibrated High-Resolution Digital Camera Imaging System. Fort Belvoir, VA: Defense Technical Information Center, September 2006. http://dx.doi.org/10.21236/ada612146.
