Journal articles on the topic "Camera calibration"

To see the other types of publications on this topic, follow this link: Camera calibration.

Create an accurate reference in APA, MLA, Chicago, Harvard, and several other styles.

Consult the top 50 journal articles for your research on the topic "Camera calibration".

Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever this information is included in the metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Baek, Seung-Hae, Pathum Rathnayaka, and Soon-Yong Park. "Calibration of a Stereo Radiation Detection Camera Using Planar Homography." Journal of Sensors 2016 (2016): 1–11. http://dx.doi.org/10.1155/2016/8928096.

Abstract:
This paper proposes a calibration technique for a stereo gamma detection camera. Calibration of the internal and external parameters of a stereo vision camera is a well-known research problem in the computer vision community. However, little or no stereo calibration has been investigated in radiation measurement research. Since no visual information can be obtained from a stereo radiation camera, a general stereo calibration algorithm cannot be applied directly. In this paper, we develop a hybrid stereo system equipped with both radiation and vision cameras. To calibrate the stereo radiation cameras, stereo images of a calibration pattern captured by the vision cameras are transformed into the view of the radiation cameras. The homography transformation is calibrated based on the geometric relationship between the visual and radiation camera coordinates. The accuracy of the stereo parameters of the radiation camera is analyzed by distance measurements to both visible-light and gamma sources. The experimental results show that the measurement error is about 3%.
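As a generic, textbook-style illustration of the planar-homography estimation underlying the approach in entry 1 (a NumPy sketch, not the authors' implementation), a homography can be recovered from four or more point correspondences with the direct linear transform (DLT):

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate a 3x3 planar homography H (dst ~ H @ src) from four or
    more 2D point correspondences using the direct linear transform."""
    A = []
    for (x, y), (u, v) in zip(np.asarray(src, float), np.asarray(dst, float)):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # H is the null vector of A: the right singular vector associated
    # with the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the free scale so that H[2, 2] == 1

def apply_homography(H, pts):
    """Map 2D points through H using homogeneous coordinates."""
    pts_h = np.column_stack([pts, np.ones(len(pts))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]
```

In practice, point coordinates are usually normalized before the SVD for numerical stability, and a robust estimator such as RANSAC is wrapped around the DLT to handle outliers.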
2

Zhou, Chenchen, Shaoqi Wang, Yi Cao, Shuang-Hua Yang, and Bin Bai. "Online Pyrometry Calibration for Industrial Combustion Process Monitoring." Processes 10, no. 9 (26 August 2022): 1694. http://dx.doi.org/10.3390/pr10091694.

Abstract:
Temperature and its distribution are crucial for combustion monitoring and control. For this application, digital-camera-based pyrometers have become increasingly popular due to their relatively low cost. However, these pyrometers are not universally applicable because they depend on calibration. Unlike pyrometers, monitoring cameras exist in almost every combustion chamber. Although these cameras theoretically have the ability to measure temperature, for lack of calibration they are only used for visualization to support operators' decisions. Almost all existing calibration methods are laboratory-based and hence cannot calibrate a camera in operation. This paper proposes an online calibration method. It uses a pre-calibrated camera as a standard pyrometer to calibrate another camera in operation. The calibration is based on a photo taken by the pyrometry camera at a position close to the camera in operation. Since the calibration does not affect the use of the camera in operation, it sharply reduces the cost and difficulty of pyrometer calibration. In this paper, a procedure for online calibration is proposed, and advice on how to set the camera parameters is given. In addition, the ratio pyrometry is revised for a wider temperature range. The online calibration algorithm is developed based on two assumptions about images of the same flame taken in proximity: (1) there are common regions between two images taken at close positions; (2) there are constant characteristic temperatures between the two-dimensional temperature distributions of the same flame taken from different angles. These two assumptions were verified in a real industrial plant. Based on these two verified features, a temperature-distribution matching algorithm is developed to calibrate pyrometers online. This method was tested and validated in an industrial-scale municipal solid waste incinerator. The accuracy of the calibrated pyrometer is sufficient for flame monitoring and control.
3

Simarro, Gonzalo, Daniel Calvete, and Paola Souto. "UCalib: Cameras Autocalibration on Coastal Video Monitoring Systems." Remote Sensing 13, no. 14 (16 July 2021): 2795. http://dx.doi.org/10.3390/rs13142795.

Abstract:
Following the path set out by the "Argus" project, video monitoring stations have become a very popular low-cost tool for continuously monitoring beaches around the world. For these stations to offer quantitative results, the cameras must be calibrated. Cameras are typically calibrated when installed and, at best, extrinsic calibrations are performed from time to time. However, intra-day variations of the camera calibration parameters, due to thermal factors or other kinds of uncontrolled movements, have been shown to introduce significant errors when transforming pixels to real-world coordinates. Starting from well-known feature detection and matching algorithms from computer vision, this paper presents a methodology to automatically calibrate cameras, on the intra-day time scale, from a small number of manually calibrated images. For the three cameras analyzed here, the proposed methodology allows for automatic calibration of >90% of the images in favorable conditions (images with many fixed features) and ∼40% in the worst-conditioned camera (almost featureless images). The results can be improved by increasing the number of manually calibrated images. Further, the procedure provides the user with two values that allow the expected quality of each automatic calibration to be assessed. The proposed methodology, applied here to Argus-like stations, is also applicable, e.g., to CoastSnap sites, where each image corresponds to a different camera.
4

Mokatren, Moayad, Tsvi Kuflik, and Ilan Shimshoni. "Calibration-Free Mobile Eye-Tracking Using Corneal Imaging." Sensors 24, no. 4 (15 February 2024): 1237. http://dx.doi.org/10.3390/s24041237.

Abstract:
In this paper, we present and evaluate a calibration-free mobile eye-tracking system. The system's mobile device consists of three cameras: an IR eye camera, an RGB eye camera, and a front-scene RGB camera. The three cameras form a reliable corneal imaging system that is used to estimate the user's point of gaze continuously and reliably. The system auto-calibrates the device unobtrusively. Since the user is not required to follow any special instructions to calibrate the system, they can simply put on the eye tracker and start moving around while using it. Deep learning algorithms together with 3D geometric computations were used to auto-calibrate the system per user. Once the model is built, a point-to-point transformation from the eye camera to the front camera is computed automatically by matching corneal and scene images, which allows the gaze point in the scene image to be estimated. The system was evaluated by users in real-life scenarios, indoors and outdoors. The average gaze error was 1.6° indoors and 1.69° outdoors, which is very good compared with state-of-the-art approaches.
5

Dedei Tagoe, Naa, and S. Mantey. "Determination of the Interior Orientation Parameters of a Non-metric Digital Camera for Terrestrial Photogrammetric Applications." Ghana Mining Journal 19, no. 2 (22 December 2019): 1–9. http://dx.doi.org/10.4314/gm.v19i2.1.

Abstract:
The high cost of metric photogrammetric cameras has given rise to the use of non-metric digital cameras to generate photogrammetric products in traditional close-range or terrestrial photogrammetric applications. For precision photogrammetric applications, the internal metric characteristics of the camera, customarily known as the Interior Orientation Parameters, need to be determined and analysed. These parameters are usually derived by implementing a bundle adjustment with a self-calibration procedure. The stability of the Interior Orientation Parameters is an accuracy issue for digital cameras, since they are not built with photogrammetric applications in mind. This study used two photogrammetric software packages (Photo Modeler and Australis) to calibrate a non-metric digital camera and determine its Interior Orientation Parameters. The camera parameters were obtained using the two packages and the Root Mean Square Errors (RMSE) calculated. It was observed that Australis gave an RMSE of 0.2435 and Photo Modeler gave 0.2335, implying that the calibrated non-metric digital camera is suitable for high-precision terrestrial photogrammetric projects.
Keywords: Camera Calibration, Interior Orientation Parameters, Non-Metric Digital Camera
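For orientation, the interior orientation parameters discussed in entry 5 (principal distance, principal point, and lens distortion coefficients) enter the projection model roughly as follows. This is a generic pinhole-plus-radial-distortion sketch with illustrative parameter names, not the paper's software:

```python
import numpy as np

def project_point(X_cam, f, cx, cy, k1=0.0, k2=0.0):
    """Project a 3D point given in the camera frame to pixel coordinates
    with a pinhole model plus two radial distortion coefficients."""
    x, y = X_cam[0] / X_cam[2], X_cam[1] / X_cam[2]  # normalized image coords
    r2 = x * x + y * y
    d = 1.0 + k1 * r2 + k2 * r2 * r2                 # radial distortion factor
    return np.array([f * d * x + cx, f * d * y + cy])
```

Bundle adjustment with self-calibration estimates f, (cx, cy), and the distortion coefficients jointly with the exterior orientations by minimizing the reprojection error over many such projections.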
6

Liu, Zhe, Zhaozong Meng, Nan Gao, and Zonghua Zhang. "Calibration of the Relative Orientation between Multiple Depth Cameras Based on a Three-Dimensional Target." Sensors 19, no. 13 (8 July 2019): 3008. http://dx.doi.org/10.3390/s19133008.

Abstract:
Depth cameras play a vital role in three-dimensional (3D) shape reconstruction, machine vision, augmented/virtual reality, and other fields that rely on visual information. However, a single depth camera cannot obtain complete information about an object by itself because of the limitation of its field of view. Multiple depth cameras can solve this problem by acquiring depth information from different viewpoints, but to accurately obtain the complete 3D information they need to be calibrated. Traditional chessboard-based planar targets are not well suited for calibrating the relative orientations of multiple depth cameras, because the coordinates of the different depth cameras need to be unified into a single coordinate system, and multi-camera systems arranged at specific angles have a very small overlapping field of view. In this paper, we propose a 3D-target-based multiple-depth-camera calibration method. Each plane of the 3D target is used to calibrate an independent depth camera. All planes of the 3D target are unified into a single coordinate system, which means the feature points on the calibration planes are also in one unified coordinate system. Using this 3D target, multiple depth cameras can be calibrated simultaneously. A method of precise calibration using lidar is also proposed. This method is applicable not only to the 3D target designed for this paper but also to all 3D calibration objects consisting of planar chessboards, and it can significantly reduce the calibration error compared with traditional camera calibration methods. In addition, to reduce the influence of the depth camera's infrared transmitter and improve its calibration accuracy, the calibration process of the depth camera is optimized. A series of calibration experiments were carried out, and the results demonstrate the reliability and effectiveness of the proposed method.
7

Park, Byung-Seo, Woosuk Kim, Jin-Kyum Kim, Eui Seok Hwang, Dong-Wook Kim, and Young-Ho Seo. "3D Static Point Cloud Registration by Estimating Temporal Human Pose at Multiview." Sensors 22, no. 3 (31 January 2022): 1097. http://dx.doi.org/10.3390/s22031097.

Abstract:
This paper proposes a new technique for performing 3D static point cloud registration after calibrating a multi-view RGB-D camera using a 3D (three-dimensional) joint set. Consistent feature points are required to calibrate a multi-view camera, and accurate feature points are necessary to obtain high-accuracy calibration results. In general, a special tool, such as a chessboard, is used to calibrate a multi-view camera. This paper instead uses the joints of a human skeleton as feature points, so that calibration can be performed efficiently without special tools. We propose an RGB-D-based calibration algorithm that uses the joint coordinates of the 3D joint set obtained through pose estimation as feature points. Since the human body information captured by the multi-view camera may be incomplete, a joint set predicted from that image information may also be incomplete. After efficiently integrating a plurality of incomplete joint sets into one joint set, the multi-view cameras can be calibrated by using the combined joint set to obtain the extrinsic matrices. To increase the accuracy of calibration, multiple joint sets are used for optimization through temporal iteration. We show through experiments that it is possible to calibrate a multi-view camera using a large number of incomplete joint sets.
8

Yin, Lei, Xiangjun Wang, Yubo Ni, Kai Zhou, and Jilong Zhang. "Extrinsic Parameters Calibration Method of Cameras with Non-Overlapping Fields of View in Airborne Remote Sensing." Remote Sensing 10, no. 8 (16 August 2018): 1298. http://dx.doi.org/10.3390/rs10081298.

Abstract:
Multi-camera systems are widely used in airborne remote sensing and unmanned aerial vehicle imaging. The measurement precision of these systems depends on the accuracy of the extrinsic parameters, so it is important to accurately calibrate the extrinsic parameters between the onboard cameras. Unlike conventional multi-camera calibration methods that require a common field of view (FOV), multi-camera calibration without overlapping FOVs presents certain difficulties. In this paper, we propose a calibration method for a multi-camera system without common FOVs for use in aerial photogrammetry. First, the extrinsic parameters of any two cameras in the system are calibrated, and the extrinsic matrix is optimized using the re-projection error. Then, the extrinsic parameters of each camera are unified to the system reference coordinate system using a global optimization method. A simulation experiment and a physical verification experiment were designed to validate the proposed algorithm. The experimental results show that the method is practical: the rotation error of the cameras' extrinsic parameters is less than 0.001 rad and the translation error is less than 0.08 mm.
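Unifying each camera's extrinsics into a single system reference frame, as entry 8 describes, amounts to composing rigid-body transforms. A minimal sketch (a generic convention is assumed here: T_ab maps coordinates from frame a to frame b):

```python
import numpy as np

def make_T(R, t):
    """Pack a rotation R (3x3) and translation t (3,) into a 4x4
    homogeneous rigid-body transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def chain(T_ab, T_bc):
    """Compose transforms: coordinates in frame a -> frame c via frame b."""
    return T_bc @ T_ab
```

Chaining pairwise calibrations this way accumulates error, which is why a final global optimization over all extrinsics is used in the paper.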
9

Du, Yuchuan, Cong Zhao, Feng Li, and Xuefeng Yang. "An Open Data Platform for Traffic Parameters Measurement via Multirotor Unmanned Aerial Vehicles Video." Journal of Advanced Transportation 2017 (2017): 1–12. http://dx.doi.org/10.1155/2017/8324301.

Abstract:
Multirotor unmanned aerial vehicle video observation can obtain accurate information about traffic flow of large areas over extended times. This paper aims to construct an open data test platform for updated traffic data accumulation and traffic simulation model verification by analyzing real time aerial video. Common calibration boards were used to calibrate internal camera parameters and image distortion correction was performed using a high-precision distortion model. To solve external parameters calibration problems, an existing algorithm was improved by adding two sets of orthogonal equations, achieving higher accuracy with only four calibrated points. A simplified algorithm is proposed to calibrate cameras by calculating the relationship between pixel and true length under the camera optical axis perpendicular to road conditions. Aerial video (160 min) from the Shanghai inner ring expressway was collected and real time traffic parameter values were obtained from analyzing and processing the aerial visual data containing spatial, time, velocity, and acceleration data. The results verify that the proposed platform provides a reasonable and objective approach to traffic simulation model verification and improvement. The proposed data platform also offers significant advantages over conventional methods that use historical and outdated data to run poorly calibrated traffic simulation models.
10

Teo, T. "VIDEO-BASED POINT CLOUD GENERATION USING MULTIPLE ACTION CAMERAS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-4/W5 (11 May 2015): 55–60. http://dx.doi.org/10.5194/isprsarchives-xl-4-w5-55-2015.

Abstract:
Due to the development of action cameras, the use of video technology for collecting geo-spatial data has become an important trend. The objective of this study is to compare the image mode and video mode of multiple action cameras for 3D point cloud generation. Frame images are acquired from discrete camera stations, while videos are taken along continuous trajectories. The proposed method includes five major parts: (1) camera calibration, (2) video conversion and alignment, (3) orientation modelling, (4) dense matching, and (5) evaluation. As action cameras usually have a large FOV in wide viewing mode, camera calibration plays an important role in removing the effect of lens distortion before image matching. Once the cameras had been calibrated, the author used them to take video in an indoor environment. The videos were then converted into multiple frame images based on the frame rates. To overcome time-synchronization issues between videos from different viewpoints, an additional timer app was used to determine the time-shift factor between cameras during time alignment. A structure-from-motion (SfM) technique was utilized to obtain the image orientations, and the semi-global matching (SGM) algorithm was adopted to obtain dense 3D point clouds. The preliminary results indicate that the 3D points from 4K video are similar to those from 12 MP images, but the data acquisition performance of 4K video is more efficient than that of 12 MP digital images.
11

Tan, Jia Hai, Peng Yu Li, You Shan Qu, Ya Meng Han, Ya Li Yu, and Wei Wang. "Design of Calibration System for a Great Quantity of High Precision Scientific Grade CCD Cameras." Applied Mechanics and Materials 331 (July 2013): 326–30. http://dx.doi.org/10.4028/www.scientific.net/amm.331.326.

Abstract:
For the calibration of a large quantity of scientific-grade CCD cameras in a high-energy-physics system, a scientific-grade CCD camera calibration system with high precision and efficiency is designed. The calibration system consists of a 1053 nm nanosecond solid-state laser, a knife, a double integrating sphere, a laser power meter, a signal generator, and a computer with data-processing software. Key technical parameters of a scientific-grade CCD under 1053 nm optical pulses, namely modulation, contrast, defects, optical dynamic range, and non-linear response, can be calibrated by the designed system. A double integrating sphere with high uniformity and stability is designed as a uniform light source, which improves the calibration performance and accuracy. Experimental results show that the system designed in this paper can calibrate a large number of scientific-grade CCD cameras quickly and efficiently.
12

Zeller, N., F. Quint, and U. Stilla. "Calibration and accuracy analysis of a focused plenoptic camera." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences II-3 (7 August 2014): 205–12. http://dx.doi.org/10.5194/isprsannals-ii-3-205-2014.

Abstract:
In this article we introduce new methods for the calibration of depth images from focused plenoptic cameras and validate the results. We start with a brief description of the concept of a focused plenoptic camera and of how a depth map can be estimated from the recorded raw image. For this camera, an analytical expression of the depth accuracy is derived for the first time. In the main part of the paper, methods to calibrate a focused plenoptic camera are developed and evaluated. The optical imaging process is calibrated using a method already known from the calibration of traditional cameras. For the calibration of the depth map, two new model-based methods, which make use of the projection concept of the camera, are developed. These new methods are compared to a common curve-fitting approach based on a Taylor series approximation. Both model-based methods show significant advantages compared to the curve-fitting method: they need fewer reference points for calibration and, moreover, supply a function that is valid beyond the range of calibration. In addition, the depth-map accuracy of the plenoptic camera was experimentally investigated for different focal lengths of the main lens and compared to the analytical evaluation.
13

Choi, Kyoungtaek, Ho Jung, and Jae Suhr. "Automatic Calibration of an Around View Monitor System Exploiting Lane Markings." Sensors 18, no. 9 (5 September 2018): 2956. http://dx.doi.org/10.3390/s18092956.

Abstract:
This paper proposes a method that automatically calibrates four cameras of an around view monitor (AVM) system in a natural driving situation. The proposed method estimates orientation angles of four cameras composing the AVM system, and assumes that their locations and intrinsic parameters are known in advance. This method utilizes lane markings because they exist in almost all on-road situations and appear across images of adjacent cameras. It starts by detecting lane markings from images captured by four cameras of the AVM system in a cost-effective manner. False lane markings are rejected by analyzing the statistical properties of the detected lane markings. Once the correct lane markings are sufficiently gathered, this method first calibrates the front and rear cameras, and then calibrates the left and right cameras with the help of the calibration results of the front and rear cameras. This two-step approach is essential because side cameras cannot be fully calibrated by themselves, due to insufficient lane marking information. After this initial calibration, this method collects corresponding lane markings appearing across images of adjacent cameras and simultaneously refines the initial calibration results of four cameras to obtain seamless AVM images. In the case of a long image sequence, this method conducts the camera calibration multiple times, and then selects the medoid as the final result to reduce computational resources and dependency on a specific place. In the experiment, the proposed method was quantitatively and qualitatively evaluated in various real driving situations and showed promising results.
14

Jiang, Haiyang, Yuanyao Lu, and Jingxuan Wang. "A Data-Driven Miscalibration Detection Algorithm for a Vehicle-Mounted Camera." Mobile Information Systems 2022 (25 October 2022): 1–10. http://dx.doi.org/10.1155/2022/5058611.

Abstract:
LiDAR and cameras are two commonly used sensors in autonomous vehicles. In order to fuse the data collected by these two sensors to accurately perceive the 3D world, accurate calibration of the internal and external parameters of both sensors is necessary. However, during the long-term deployment and use of autonomous vehicles, factors such as equipment aging, transient changes in the external environment, and interference can cause the initially correct camera internal parameters to no longer fit the current environment, requiring recalibration. Since most current work focuses on perception algorithms and on the calibration of the various sensors, little research has addressed identifying when a sensor needs to be recalibrated. Consequently, this paper proposes a data-driven method for detecting miscalibrated internal parameters of RGB cameras. The specific procedure is to first add a random perturbation factor to the correctly calibrated camera internal parameters to generate incorrect internal parameters, and then to calibrate the raw image with the incorrect internal parameters to generate miscalibrated image data. The miscalibrated image data are used as the input to train a neural network model for detecting the miscalibration parameters. On the KITTI dataset, we conducted training and model deployment with the data collected from Cam2 and Cam3, respectively, and evaluated the two resulting models. The experimental results show that our proposed method has practical value in detecting errors in the calibration of a camera's internal parameters.
15

Svoboda, Tomáš, Daniel Martinec, and Tomáš Pajdla. "A Convenient Multicamera Self-Calibration for Virtual Environments." Presence: Teleoperators and Virtual Environments 14, no. 4 (August 2005): 407–22. http://dx.doi.org/10.1162/105474605774785325.

Abstract:
Virtual immersive environments or telepresence setups often consist of multiple cameras that have to be calibrated. We present a convenient method for doing this. The minimum is three cameras, but there is no upper limit. The method is fully automatic and a freely moving bright spot is the only calibration object. A set of virtual 3D points is made by waving the bright spot through the working volume. Its projections are found with subpixel precision and verified by a robust RANSAC analysis. The cameras do not have to see all points; only reasonable overlap between camera subgroups is necessary. Projective structures are computed via rank-4 factorization and the Euclidean stratification is done by imposing geometric constraints. This linear estimate initializes a postprocessing computation of nonlinear distortion, which is also fully automatic. We suggest a trick on how to use a very ordinary laser pointer as the calibration object. We show that it is possible to calibrate an immersive virtual environment with 16 cameras in less than 60 minutes reaching about 1/5 pixel reprojection error. The method has been successfully tested on numerous multicamera environments using varying numbers of cameras of varying quality.
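The rank-4 factorization step mentioned in entry 15 can be illustrated on noise-free synthetic data. The NumPy sketch below is generic, not the authors' code, and it omits the estimation of projective depths, which the exact rank-4 construction makes unnecessary here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic scene: m cameras (3x4 projection matrices) and n 3D points.
m, n = 4, 20
P = rng.standard_normal((m, 3, 4))            # stacked camera matrices
X = np.vstack([rng.standard_normal((3, n)),   # homogeneous 3D points
               np.ones((1, n))])

# Measurement matrix W (3m x n). Because every 3-row block is P_i @ X,
# W has rank at most 4 by construction (real data must first be rescaled
# by projective depths to reach this form).
W = np.vstack([P[i] @ X for i in range(m)])

# Rank-4 factorization via SVD: W = Phat @ Xhat, recovering cameras and
# points up to a common 4x4 projective transform.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
Phat = U[:, :4] * s[:4]   # projective motion (stacked cameras)
Xhat = Vt[:4]             # projective structure (points)
W_rec = Phat @ Xhat
```

In the real pipeline, the factorization is then upgraded from a projective to a Euclidean reconstruction by imposing geometric constraints, as the abstract describes.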
16

Chibunichev, A. G., A. V. Govorov, and V. E. Chernyshev. "RESEARCH OF THE CAMERA CALIBRATION USING SERIES OF IMAGES WITH COMMON CENTER OF PROJECTION." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W18 (29 November 2019): 19–22. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w18-19-2019.

Abstract:
The calibration of cameras equipped with long-focal-length lenses is researched in the present work. The basic idea is as follows. The camera to be calibrated is placed on a tripod with a panoramic head. The main condition of the panorama shooting is that the rotation center of the camera and the front nodal point of the lens coincide. The camera is calibrated from a series of images of a test object sharing a common center of projection. Special software has been created for this purpose. The results of experimental studies on simulated digital data and on a real Hasselblad H4D-60 camera are presented. These experiments show that using a common projection center increases the accuracy of the calibration process for long-focal-length cameras.
17

Liang, Mingpei, Xinyu Huang, Chung-Hao Chen, Gaolin Zheng, and Alade Tokuta. "Robust Calibration of Cameras with Telephoto Lens Using Regularized Least Squares." Mathematical Problems in Engineering 2014 (2014): 1–9. http://dx.doi.org/10.1155/2014/689429.

Abstract:
Cameras with telephoto lenses are usually used to recover details of an object that is either small or located far from the camera. However, the calibration of this kind of camera is not as accurate as that of the short-focal-length cameras commonly used in many vision applications. This paper makes two contributions. First, we present a first-order error analysis that shows the relation between focal length and the estimation uncertainties of the camera parameters. To our knowledge, this error analysis with respect to focal length has not been studied in the area of camera calibration. Second, we propose a robust algorithm to calibrate a camera with a long focal length without using additional devices. By adding a regularization term, our algorithm makes the estimation of the image of the absolute conic well posed. As a consequence, the covariance of the camera parameters can be greatly reduced. We further used simulations and real data to verify the proposed algorithm and obtained very stable results.
18

Steger, Carsten, and Markus Ulrich. "A Multi-view Camera Model for Line-Scan Cameras with Telecentric Lenses." Journal of Mathematical Imaging and Vision 64, no. 2 (13 October 2021): 105–30. http://dx.doi.org/10.1007/s10851-021-01055-x.

Abstract:
We propose a novel multi-view camera model for line-scan cameras with telecentric lenses. The camera model supports an arbitrary number of cameras and assumes a linear relative motion with constant velocity between the cameras and the object. We distinguish two motion configurations. In the first configuration, all cameras move with independent motion vectors. In the second, the cameras are mounted rigidly with respect to each other and therefore share a common motion vector. The camera model can represent arbitrary lens distortions by supporting arbitrary positions of the line sensor with respect to the optical axis. We propose an algorithm to calibrate a multi-view telecentric line-scan camera setup. To facilitate 3D reconstruction, we prove that an image pair acquired with two telecentric line-scan cameras can always be rectified to the epipolar standard configuration, in contrast to line-scan cameras with entocentric lenses, for which this is possible only under very restricted conditions. The rectification allows an arbitrary stereo algorithm to be used to calculate disparity images. We propose an efficient algorithm to compute 3D coordinates from these disparities. Experiments on real images show the validity of the proposed multi-view telecentric line-scan camera model.
19

Zhang, Zhe, Chunyu Wang, and Wenhu Qin. "Semantically Synchronizing Multiple-Camera Systems with Human Pose Estimation." Sensors 21, no. 7 (2 April 2021): 2464. http://dx.doi.org/10.3390/s21072464.

Texte intégral
Résumé :
Multiple-camera systems can expand coverage and mitigate occlusion problems. However, temporal synchronization remains a problem for budget cameras and capture devices. We propose an out-of-the-box framework to temporally synchronize multiple cameras using semantic human pose estimation from the videos. Human pose predictions are obtained with an out-of-the-shelf pose estimator for each camera. Our method firstly calibrates each pair of cameras by minimizing an energy function related to epipolar distances. We also propose a simple yet effective multiple-person association algorithm across cameras and a score-regularized energy function for improved performance. Secondly, we integrate the synchronized camera pairs into a graph and derive the optimal temporal displacement configuration for the multiple-camera system. We evaluate our method on four public benchmark datasets and demonstrate robust sub-frame synchronization accuracy on all of them.
Styles APA, Harvard, Vancouver, ISO, etc.
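The energy function mentioned in the abstract above is built from epipolar distances. A hedged sketch of the point-to-epipolar-line distance for a known fundamental matrix F (the example F corresponds to a pure horizontal translation, i.e., rectified-like geometry; it is not the paper's data):

```python
import numpy as np

def epipolar_distance(x1, x2, F):
    """Distance from homogeneous point x2 to the epipolar line F @ x1."""
    line = F @ x1                          # line (a, b, c): a*u + b*v + c = 0
    return abs(line @ x2) / np.hypot(line[0], line[1])

# Fundamental matrix of a pure translation along x (illustrative):
F = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0, 0.0]])

x1 = np.array([0.0, 0.0, 1.0])
d_sync = epipolar_distance(x1, np.array([0.5, 0.0, 1.0]), F)  # same scanline
d_off = epipolar_distance(x1, np.array([0.5, 0.2, 1.0]), F)   # 0.2 px off
```

A temporally mis-synchronized frame pair shows inflated distances for moving people, which is the signal such an energy function can exploit.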
20

Ramm, Roland, Pedro de Dios Cruz, Stefan Heist, Peter Kühmstedt, and Gunther Notni. "Fusion of Multimodal Imaging and 3D Digitization Using Photogrammetry." Sensors 24, no. 7 (April 3, 2024): 2290. http://dx.doi.org/10.3390/s24072290.

Full text
Abstract:
Multimodal sensors capture and integrate diverse characteristics of a scene to maximize information gain. In optics, this may involve capturing intensity in specific spectra or polarization states to determine factors such as material properties or an individual’s health conditions. Combining multimodal camera data with shape data from 3D sensors is a challenging issue. Multimodal cameras, e.g., hyperspectral cameras, or cameras outside the visible light spectrum, e.g., thermal cameras, lag far behind state-of-the-art photo cameras in resolution and image quality. In this article, a new method is demonstrated to superimpose multimodal image data onto a 3D model created by multi-view photogrammetry. While a high-resolution photo camera captures a set of images from varying view angles to reconstruct a detailed 3D model of the scene, low-resolution multimodal camera(s) simultaneously record the scene. All cameras are pre-calibrated and rigidly mounted on a rig, i.e., their imaging properties and relative positions are known. The method was realized in a laboratory setup consisting of a professional photo camera, a thermal camera, and a 12-channel multispectral camera. In our experiments, an accuracy better than one pixel was achieved for the data fusion using multimodal superimposition. Finally, application examples of multimodal 3D digitization are demonstrated, and further steps to system realization are discussed.
APA, Harvard, Vancouver, ISO, and other styles
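Superimposing multimodal data onto the 3D model described above amounts to projecting each model vertex into the pre-calibrated low-resolution camera. A minimal pinhole-projection sketch (the K, R, t values are illustrative, not the paper's calibration):

```python
import numpy as np

def project(K, R, t, X):
    """Project world point X into a camera with intrinsics K and pose (R, t)."""
    x_cam = R @ X + t        # world -> camera coordinates
    uvw = K @ x_cam          # camera -> homogeneous pixel coordinates
    return uvw[:2] / uvw[2]  # perspective division

# Example intrinsics: 500 px focal length, principal point (320, 240)
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
uv = project(K, np.eye(3), np.zeros(3), np.array([0.1, 0.0, 1.0]))
```

Each projected vertex then samples the multimodal image, transferring, e.g., temperature values onto the photogrammetric mesh.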
21

Du, Ye Fei, Ming Li Dong, Jun Wang, and Peng Sun. "A Method Based on Two Constraints for Camera Calibration Interior Parameters." Applied Mechanics and Materials 103 (September 2011): 181–86. http://dx.doi.org/10.4028/www.scientific.net/amm.103.181.

Full text
Abstract:
In a digital photogrammetry system, camera calibration is an essential part, and its accuracy directly affects system accuracy. Because the interior parameters of a non-metric digital camera remain stationary during the measurement process, they should be accurately calibrated in the laboratory. Given the shortcomings of the general DLT algorithm, we adopt a nonlinear camera model that accounts for distortion, and two additional constraints are used to accurately calibrate the parameters of the camera. The experimental results are analyzed and compared with the V-STARS system. The residuals of the interior-parameter calibration are less than 1 pixel. The test results meet the requirements of industrial digital photogrammetry.
APA, Harvard, Vancouver, ISO, and other styles
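The nonlinear camera model mentioned above typically augments the pinhole projection with radial distortion terms. A common two-coefficient form (the coefficients here are example values, not the paper's calibration results):

```python
import numpy as np

def radial_distort(xn, yn, k1=-0.1, k2=0.0):
    """Apply radial distortion to normalized image coordinates (xn, yn)."""
    r2 = xn * xn + yn * yn
    factor = 1.0 + k1 * r2 + k2 * r2 * r2  # radial scaling of the ideal point
    return xn * factor, yn * factor

xd, yd = radial_distort(0.5, 0.0)
```

Calibration estimates k1 and k2 together with the interior parameters by minimizing reprojection residuals; sub-pixel residuals like the <1 pixel reported above indicate the model fits the lens well.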
22

Zhou, Yang, Danqing Chen, Jun Wu, Mingyi Huang, and Yubin Weng. "Calibration of RGB-D Camera Using Depth Correction Model." Journal of Physics: Conference Series 2203, no. 1 (February 1, 2022): 012032. http://dx.doi.org/10.1088/1742-6596/2203/1/012032.

Full text
Abstract:
This paper proposes a calibration method for an RGB-D camera, especially its depth camera. First, calibration images are collected with a checkerboard calibration board under an auxiliary infrared light source. Then, the internal and external parameters of the depth camera are calculated by Zhang’s calibration method, which improves the accuracy of the internal parameters. Next, a depth correction model is proposed to directly calibrate the distortion of the depth image, which is more intuitive and faster than the disparity distortion correction model. This method is simple, highly precise, and suitable for most depth cameras.
APA, Harvard, Vancouver, ISO, and other styles
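A depth correction model of the kind described above can be illustrated as fitting a mapping from measured to reference depth. The sketch below uses a hypothetical linear per-camera model fitted by least squares, with invented depth values; the paper's actual model may differ:

```python
import numpy as np

# Hypothetical data: raw sensor depths vs. reference depths (mm), e.g. from
# the calibrated checkerboard poses. Values are illustrative only.
measured = np.array([500.0, 1000.0, 1500.0, 2000.0])
reference = np.array([495.0, 1002.0, 1509.0, 2016.0])

# Fit corrected = a * measured + b by linear least squares.
A = np.stack([measured, np.ones_like(measured)], axis=1)
(a, b), *_ = np.linalg.lstsq(A, reference, rcond=None)
corrected = a * measured + b
```

Applying the fitted correction removes the systematic depth bias while leaving the random noise untouched.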
23

Zhang, Huang, and Zhao. "A New Model of RGB-D Camera Calibration Based On 3D Control Field." Sensors 19, no. 23 (November 21, 2019): 5082. http://dx.doi.org/10.3390/s19235082.

Full text
Abstract:
With extensive application of RGB-D cameras in robotics, computer vision, and many other fields, accurate calibration becomes more and more critical to the sensors. However, most existing models for calibrating depth and the relative pose between a depth camera and an RGB camera are not universally applicable to many different kinds of RGB-D cameras. In this paper, by using the collinear equation and space resection of photogrammetry, we present a new model to correct the depth and calibrate the relative pose between depth and RGB cameras based on a 3D control field. We establish a rigorous relationship model between the two cameras; then, we optimize the relative parameters of two cameras by least-squares iteration. For depth correction, based on the extrinsic parameters related to object space, the reference depths are calculated by using a collinear equation. Then, we calibrate the depth measurements with consideration of the distortion of pixels in depth images. We apply Kinect-2 to verify the calibration parameters by registering depth and color images. We test the effect of depth correction based on 3D reconstruction. Compared to the registration results from a state-of-the-art calibration model, the registration results obtained with our calibration parameters improve dramatically. Likewise, the performances of 3D reconstruction demonstrate obvious improvements after depth correction.
APA, Harvard, Vancouver, ISO, and other styles
24

Liu, Xinhua, Jie Tian, Hailan Kuang, and Xiaolin Ma. "A Stereo Calibration Method of Multi-Camera Based on Circular Calibration Board." Electronics 11, no. 4 (February 17, 2022): 627. http://dx.doi.org/10.3390/electronics11040627.

Full text
Abstract:
In multi-camera 3D reconstruction, each camera must be calibrated individually and the cameras must also be stereo-calibrated together, and the calibration accuracy directly affects the quality of the system’s 3D reconstruction. Many researchers focus on optimizing the calibration algorithm and improving calibration accuracy after the calibration-plate pattern coordinates have been obtained, ignoring the impact of the pattern-coordinate extraction accuracy on the calibration itself. Therefore, this paper proposes a multi-camera stereo calibration method based on a circular calibration plate that focuses on the extraction of pattern features during the calibration process. The method performs subpixel edge extraction based on the Franklin matrix and circular feature extraction on the circular calibration-plate pattern captured by the camera, and then applies Zhang’s calibration method to calibrate the cameras. Experimental results show that, compared with the traditional calibration method, the method achieves better calibration accuracy, reducing the average reprojection error of the multi-camera system by more than 0.006 pixels.
APA, Harvard, Vancouver, ISO, and other styles
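The figure of merit quoted above is the average reprojection error; the computation itself is standard and can be sketched directly (the point coordinates below are invented):

```python
import numpy as np

def mean_reprojection_error(observed, reprojected):
    """Average Euclidean pixel distance between detected and reprojected points."""
    d = np.linalg.norm(np.asarray(observed) - np.asarray(reprojected), axis=1)
    return d.mean()

# Illustrative values: one point off by (0.3, 0.4) px, one exact.
err = mean_reprojection_error([[100.0, 50.0], [200.0, 80.0]],
                              [[100.3, 50.4], [200.0, 80.0]])
```

The reprojected points come from pushing the known 3D pattern coordinates through the estimated camera model, so a lower error means the calibrated model explains the detected circle centers better.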
25

Zhao, Zijian, and Ying Weng. "A flexible method combining camera calibration and hand–eye calibration." Robotica 31, no. 5 (February 1, 2013): 747–56. http://dx.doi.org/10.1017/s0263574713000040.

Full text
Abstract:
We consider the conventional techniques of vision robot system calibration where camera parameters and robot hand–eye parameters are computed separately, i.e., first performing camera calibration and then carrying out hand–eye calibration based on the calibrated parameters of the cameras. In this paper we propose a joint algorithm that combines camera calibration and hand–eye calibration. The proposed algorithm solves for the cameras’ parameters and the hand–eye parameters simultaneously using nonlinear optimization. Both simulations and real experiments show the superiority of our algorithm. We have also applied the algorithm in a real robot-assisted surgical system, with very good results.
APA, Harvard, Vancouver, ISO, and other styles
26

Yan, Yu, and Bing Wei He. "Single Camera Stereo with Planar Mirrors." Advanced Materials Research 684 (April 2013): 447–50. http://dx.doi.org/10.4028/www.scientific.net/amr.684.447.

Full text
Abstract:
This paper presents a new system for rapidly acquiring stereo images using a single camera and a pair of planar mirrors (catadioptric stereo). First, the camera used to capture images is calibrated with the MATLAB calibration toolbox. Second, the position and pose of the planar mirrors relative to the fixed, calibrated camera are estimated by calculating the symmetry plane between the real and reflected image corners of a chessboard. Third, the relative orientation of the two reflected virtual cameras is obtained. Finally, Gaussian noise is added to the image corners of the chessboard to verify the performance of the established stereo system. Experimental results show the effectiveness and robustness of our system.
APA, Harvard, Vancouver, ISO, and other styles
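Once a mirror plane is estimated (unit normal n and offset d, i.e., n·x = d), the virtual camera in the setup above is the real camera reflected about that plane. A sketch of the standard 4x4 reflection transform (plane and point values are illustrative):

```python
import numpy as np

def mirror_reflection(n, d):
    """4x4 reflection about the plane n . x = d (n must be unit length)."""
    n = np.asarray(n, dtype=float)
    T = np.eye(4)
    T[:3, :3] = np.eye(3) - 2.0 * np.outer(n, n)  # Householder reflection
    T[:3, 3] = 2.0 * d * n                        # translation part
    return T

# Reflect a point about the plane z = 1: (0, 0, 3) -> (0, 0, -1)
T = mirror_reflection([0.0, 0.0, 1.0], 1.0)
p = T @ np.array([0.0, 0.0, 3.0, 1.0])
```

Applying the transform twice returns the identity, as a reflection should, which is a quick sanity check when estimating mirror poses.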
27

Truong, Philips, Deligiannis, Abrahamyan, and Guan. "Automatic Multi-Camera Extrinsic Parameter Calibration Based on Pedestrian Torsors †." Sensors 19, no. 22 (November 15, 2019): 4989. http://dx.doi.org/10.3390/s19224989.

Full text
Abstract:
Extrinsic camera calibration is essential for any computer vision task in a camera network. Typically, researchers place a calibration object in the scene to calibrate all the cameras in a camera network. However, when installing cameras in the field, this approach can be costly and impractical, especially when recalibration is needed. This paper proposes a novel, accurate and fully automatic extrinsic calibration framework for camera networks with partially overlapping views. The proposed method considers the pedestrians in the observed scene as the calibration objects and analyzes the pedestrian tracks to obtain extrinsic parameters. Compared to the state of the art, the new method is fully automatic and robust in various environments. Our method detects human poses in the camera images and then models walking persons as vertical sticks. We apply a brute-force method to determine the correspondence between persons in multiple camera images. This information, along with the estimated 3D locations of the tops and bottoms of the pedestrians, is then used to compute the extrinsic calibration matrices. We also propose a novel method to calibrate the camera network using only the top and centerline of a person when the bottom of the person is not visible in heavily occluded scenes. We verified the robustness of the method in different camera setups and for both single and multiple walking people. The results show that a triangulation error of a few centimeters can be obtained. Typically, less than one minute of observing the walking people is required to reach this accuracy in controlled environments, and only a few minutes of data collection are needed in uncontrolled environments. Our proposed method performs well in various situations such as multiple persons, occlusions, or even real intersections on the street.
APA, Harvard, Vancouver, ISO, and other styles
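The triangulation error quoted above comes from intersecting rays from calibrated cameras. A standard linear (DLT) two-view triangulation sketch, using synthetic projection matrices rather than the paper's calibration:

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear two-view triangulation of one point from 3x4 projection matrices."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)      # null vector of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]

P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                   # camera at origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])   # camera at (1, 0, 0)
X = triangulate_dlt(P1, P2, (0.0, 0.0), (-0.2, 0.0))
```

With noisy pedestrian-top observations, the residual of this intersection is exactly the centimeter-level triangulation error the paper reports.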
28

Yin, Jiang Yan. "The Application Research of Robot Vision Target Positioning Based on Static Camera Calibration." Advanced Materials Research 712-715 (June 2013): 2378–84. http://dx.doi.org/10.4028/www.scientific.net/amr.712-715.2378.

Full text
Abstract:
Precise target positioning by a vision system is one of the key techniques in robot vision systems. In target positioning and selection with robot vision techniques, the camera lens distortion must be calibrated. In this paper, a calibration method based on segment slope is used to calibrate the camera, and the radial lens distortion coefficient is obtained. The distortion coefficient is used in calculating the target position coordinates, and the robot end-effector is guided by these coordinates to position the target. The experimental results show the effectiveness of the research work.
Keywords: robot vision; camera calibration; radial distortion; target positioning
APA, Harvard, Vancouver, ISO, and other styles
29

Pan, Jan Wei, Jin Quan Cheng, and Tomonari Furukawa. "Data Fusion of Probabilistic Full-Field Measurements for Material Characterization." Key Engineering Materials 462-463 (January 2011): 686–91. http://dx.doi.org/10.4028/www.scientific.net/kem.462-463.686.

Full text
Abstract:
This paper presents a data fusion technique to obtain more certain probabilistic full-field strain/displacement measurements for the stochastic energy-based characterization proposed by the authors. The proposed technique acquires the full-field measurements with multiple cameras, constructs a Gaussian probability density function (PDF) for each camera, fuses the PDFs, and develops the total PDF of the full-field measurements. Since the certainty of the measurements is magnified by the use of multiple cameras, multiple well-calibrated cameras can achieve an accuracy that no single camera could attain. The validity of the proposed energy-based characterization and its superiority to the original formulation were investigated using numerical analysis of an anisotropic material, and the proposed technique was found to improve the accuracy significantly with the addition of cameras.
APA, Harvard, Vancouver, ISO, and other styles
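Fusing independent Gaussian PDFs from several cameras reduces to precision-weighted averaging, which is why each added camera sharpens the estimate. A sketch with invented measurement values:

```python
import numpy as np

def fuse_gaussians(means, variances):
    """Precision-weighted fusion of independent Gaussian measurements."""
    w = 1.0 / np.asarray(variances, dtype=float)   # precisions
    var = 1.0 / w.sum()                            # fused variance
    mu = var * (w * np.asarray(means, dtype=float)).sum()
    return mu, var

# Two cameras measure the same displacement with equal uncertainty:
mu, var = fuse_gaussians([10.0, 10.4], [0.04, 0.04])
```

The fused variance is strictly smaller than any individual camera's variance, matching the abstract's claim that multiple calibrated cameras beat a single camera.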
30

Mustafah, Yasir Mohd, A. W. Azman, and M. H. Ani. "Object Distance and Size Measurement Using Stereo Vision System." Advanced Materials Research 622-623 (December 2012): 1373–77. http://dx.doi.org/10.4028/www.scientific.net/amr.622-623.1373.

Full text
Abstract:
Object size identification is very useful in building systems or applications, especially in autonomous system navigation. Many recent works have started to use multiple vision sensors or cameras for different types of applications, such as 3D image construction and occlusion detection. Multiple-camera systems have become more popular since cameras are now very cheap and easy to deploy and utilize. The proposed measurement system consists of object detection on the stereo images, blob extraction, distance and size calculation, and object identification. The system also employs a fast algorithm so that the measurement can be done in real time. Object measurement using a stereo camera is better than object detection using a single camera, as proposed in many previous research works: it is much easier to calibrate and can produce more accurate results.
APA, Harvard, Vancouver, ISO, and other styles
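The distance and size calculation from a rectified stereo pair follows the standard relations Z = fB/d and W = wZ/f. A sketch with illustrative camera values (not from the paper):

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Rectified-stereo depth: Z = f * B / d."""
    return f_px * baseline_m / disparity_px

def object_width(width_px, depth_m, f_px):
    """Metric size of a detected blob: W = w * Z / f."""
    return width_px * depth_m / f_px

# Example: 700 px focal length, 12 cm baseline, 14 px disparity, 70 px wide blob
z = depth_from_disparity(700.0, 0.12, 14.0)
w = object_width(70.0, z, 700.0)
```

Both formulas assume the pair has been calibrated and rectified, which is why the abstract stresses that the stereo setup is easier to calibrate reliably than a single-camera approach.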
31

Dong, Liang, Jingao Xu, Guoxuan Chi, Danyang Li, Xinglin Zhang, Jianbo Li, Qiang Ma, and Zheng Yang. "Enabling Surveillance Cameras to Navigate." ACM Transactions on Sensor Networks 17, no. 4 (November 30, 2021): 1–20. http://dx.doi.org/10.1145/3446633.

Full text
Abstract:
Smartphone localization is essential to a wide spectrum of applications in the era of mobile computing. The ubiquity of smartphone mobile cameras and ambient surveillance cameras holds promise for offering sub-meter-accuracy localization services thanks to the maturity of computer vision techniques. In general, ambient-camera-based solutions are able to localize pedestrians in video frames at fine granularity, but their tracking performance under dynamic environments remains unreliable. On the contrary, mobile-camera-based solutions are capable of continuously tracking pedestrians; however, they usually involve constructing a large image database, a labor-intensive overhead for practical deployment. We observe an opportunity to integrate these two most promising approaches to overcome the above limitations, and revisit the problem of smartphone localization from a fresh perspective. However, fusing mobile-camera-based and ambient-camera-based systems is non-trivial due to the disparity of the cameras in terms of perspectives and parameters, and the lack of correspondence between their localization results. In this article, we propose iMAC, an integrated mobile-camera and ambient-camera localization system that achieves sub-meter accuracy and enhanced robustness with zero human start-up effort. The key innovation of iMAC is a well-designed fusion framework that eliminates the disparity between cameras, including a projection-map function to automatically calibrate ambient cameras, an instant crowd-fingerprint model to describe user motion patterns, and a confidence-aware matching algorithm to associate results from the two sub-systems. We fully implement iMAC on commodity smartphones and validate its performance in five different scenarios. The results show that iMAC achieves a remarkable localization accuracy of 0.68 m, outperforming state-of-the-art systems by >75%.
APA, Harvard, Vancouver, ISO, and other styles
32

Ren, Zhengwei, Ming Fang, and Chunyi Chen. "Self-Calibration Spherical Video Stabilization Based on Gyroscope." Information 12, no. 8 (July 27, 2021): 299. http://dx.doi.org/10.3390/info12080299.

Full text
Abstract:
With the development of handheld video capturing devices, video stabilization becomes increasingly important. Gyroscope-based video stabilization methods show promising performance, since they can return more reliable three-dimensional (3D) camera rotation estimates, especially when there are many moving objects in the scene or there is serious motion blur or illumination change. However, gyroscope-based methods depend on the camera intrinsic parameters to perform video stabilization. Therefore, a self-calibrated spherical video stabilization method is proposed. It builds a virtual sphere, whose radius is calibrated automatically, and projects each frame of the video onto the sphere. By inversely rotating the spherical image according to the rotational jitter component, the dependence on the camera intrinsic parameters is relaxed. The experimental results show that the proposed method does not need to calibrate the camera and can suppress camera jitter with a gyroscope bound to the camera. Moreover, compared with other state-of-the-art methods, the proposed method improves the peak signal-to-noise ratio, the structural similarity metric, the cropping ratio, the distortion score, and the stability score.
APA, Harvard, Vancouver, ISO, and other styles
33

Welty, Ethan Z., Timothy C. Bartholomaus, Shad O’Neel, and W. Tad Pfeffer. "Cameras as clocks." Journal of Glaciology 59, no. 214 (2013): 275–86. http://dx.doi.org/10.3189/2013jog12j126.

Full text
Abstract:
Consumer-grade digital cameras have become ubiquitous accessories of science. Particularly in glaciology, the recognized importance of short-term variability has motivated their deployment for increasingly time-critical observations. However, such devices were never intended for precise timekeeping, and their use as such needs to be accompanied by appropriate management of systematic, rounding and random errors in reported image times. This study describes clock drift, subsecond reporting resolution and timestamp precision as the major obstacles to precise camera timekeeping, and documents the subsecond capability of camera models from 17 leading manufacturers. We present a complete and accessible methodology to calibrate cameras for absolute timing and provide a suite of supporting scripts. Two glaciological case studies serve to illustrate how the methods relate to contemporary investigations: (1) georeferencing aerial photogrammetric surveys with camera positions time-interpolated from GPS tracklogs; and (2) coupling videos of glacier-calving events to synchronous seismic waveforms.
APA, Harvard, Vancouver, ISO, and other styles
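Calibrating a camera for absolute timing, as described above, boils down to estimating the clock's offset and drift rate against a reference clock (e.g., GPS). A hypothetical two-point linear-drift sketch with invented values, not the paper's calibration procedure in full:

```python
# Hypothetical sketch: two checks of (camera_time, reference_time), one day
# apart, give a linear model true = rate * camera + offset.
cam_t = (0.0, 86400.0)     # camera timestamps (s)
ref_t = (12.0, 86414.5)    # true (e.g., GPS) times at those instants (s)

rate = (ref_t[1] - ref_t[0]) / (cam_t[1] - cam_t[0])  # drift: ~2.5 s/day here
offset = ref_t[0] - rate * cam_t[0]

def to_true_time(t_cam):
    """Correct a reported camera timestamp to reference time."""
    return rate * t_cam + offset
```

Subsecond reporting resolution and timestamp rounding, the other obstacles the paper names, set a floor on how well the two check points themselves can be measured.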
34

König, Sebastian, Berndt Gutschwager, Richard Dieter Taubert, and Jörg Hollandt. "Metrological characterization and calibration of thermographic cameras for quantitative temperature measurement." Journal of Sensors and Sensor Systems 9, no. 2 (December 18, 2020): 425–42. http://dx.doi.org/10.5194/jsss-9-425-2020.

Full text
Abstract:
We present the metrological characterization and calibration of three different types of thermographic cameras for quantitative temperature measurement traceable to the International Temperature Scale (ITS-90). Relevant technical specifications, i.e., the non-uniformity of the pixel-to-pixel responsivity, the inhomogeneity equivalent temperature difference (IETD), the noise equivalent temperature difference (NETD), and the size-of-source effect (SSE), are determined according to the requirements given in the series of Technical Directives VDI/VDE 5585. The measurements are performed with the camera calibration facility of the Physikalisch-Technische Bundesanstalt. The data reference method is applied for the determination and improvement of the non-uniformity, leading to an improved IETD for all three cameras. Finally, the cameras are calibrated according to the different procedures discussed in the VDI/VDE 5585 series. Results achieved with the different calibration procedures are compared for each type of camera and among the three cameras. An uncertainty budget for the calibration of each camera is given according to GUM (ISO, 1995) and VDI/VDE 5585.
APA, Harvard, Vancouver, ISO, and other styles
35

Lu, Zhenghai, Yaowen Lv, Zhiqing Ai, Ke Suo, Xuanrui Gong, and Yuxuan Wang. "Calibration of a Catadioptric System and 3D Reconstruction Based on Surface Structured Light." Sensors 22, no. 19 (September 28, 2022): 7385. http://dx.doi.org/10.3390/s22197385.

Full text
Abstract:
In response to the problem of the small field of view in 3D reconstruction, a 3D reconstruction system based on a catadioptric camera and a projector was built, introducing a traditional camera to calibrate the catadioptric camera and projector system. First, the intrinsic parameters of the catadioptric camera and the traditional camera are calibrated separately, and the calibration of the projection system is then accomplished with the traditional camera. Second, a common coordinate system is introduced, the positions of the catadioptric camera and the projector in this coordinate system are calculated, and the positional relationship between the coordinate systems of the catadioptric camera and the projector is obtained. Finally, the projector projects structured-light fringes to realize reconstruction with the catadioptric camera. The experimental results show a reconstruction error of 0.75 mm and a relative error of 0.0068 for a target of about 1 m. The calibration method and reconstruction method proposed in this paper can guarantee the ideal geometric reconstruction accuracy.
APA, Harvard, Vancouver, ISO, and other styles
36

Abramov, N. F., I. V. Polyanskii, S. A. Prokhorova, and Ya. D. El’yashev. "Ground Testing of the Landing Platform Television System of the Exomars-2022 Spacecraft." Астрономический вестник 57, no. 5 (September 1, 2023): 393–402. http://dx.doi.org/10.31857/s0320930x23040011.

Full text
Abstract:
Ground testing results are presented for the landing platform television system (TSPP) within the complex of scientific payloads onboard the ExoMars-2022 spacecraft. In the course of the ground testing, the different operation modes have been checked and the camera characteristics have been measured and calibrated. The Space Research Institute (IKI RAS) has obtained photographic material from each camera, with the cameras being installed on a stand that simulates in full scale the ExoMars-2022 landing platform. In addition, special measurements have been collected for the cameras’ most important characteristics: horizontal and vertical angular field of view, distortion, focal length, resolution, dynamic range, vignetting coefficient, and absolute sensitivity.
APA, Harvard, Vancouver, ISO, and other styles
37

BOUFAMA, BOUBAKEUR S. "ON THE RECOVERY OF MOTION AND STRUCTURE WHEN CAMERAS ARE NOT CALIBRATED." International Journal of Pattern Recognition and Artificial Intelligence 13, no. 05 (August 1999): 735–59. http://dx.doi.org/10.1142/s0218001499000422.

Full text
Abstract:
This paper addresses the problem of computing the camera motion and the Euclidean 3D structure of an observed scene using uncalibrated images. Given at least two images with pixel correspondences, the motion of the camera (translation and rotation) and the 3D structure of the scene are calculated simultaneously. We do not assume knowledge of the intrinsic parameters of the camera; however, an approximation of these parameters is required. Such an approximation is always available, either from the camera manufacturer’s data or from previous experiments. Classical methods based on the essential matrix are highly sensitive to image noise, and this sensitivity is amplified when the intrinsic parameters of the cameras contain errors. To overcome such instability, we propose a method in which a particular choice of 3D Euclidean coordinate system, with a different parameterization of the motion/structure problem, allows us to significantly reduce the total number of unknowns. In addition, the simultaneous calculation of the camera motion and the 3D structure makes the computation less sensitive to errors in the values of the intrinsic parameters of the camera. All steps of our method are linear; however, a final nonlinear optimization step may be added to improve the accuracy of the results and to take the orthogonality of the rotation matrix into account. Experiments with real images validated our method and showed that good-quality motion/structure can be recovered from a pair of uncalibrated images. Intensive experiments with simulated images have shown the relationship between the errors in the intrinsic parameters and the accuracy of the recovered 3D structure.
APA, Harvard, Vancouver, ISO, and other styles
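The essential-matrix methods the abstract compares against encode relative motion as E = [t]x R, with normalized correspondences satisfying the epipolar constraint x2ᵀ E x1 = 0. A sketch verifying the constraint on a synthetic motion (values are illustrative):

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]_x such that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def essential_from_motion(R, t):
    """E = [t]_x R for a camera pair related by X2 = R @ X1 + t."""
    return skew(t) @ R

R, t = np.eye(3), np.array([1.0, 0.0, 0.0])   # pure sideways translation
E = essential_from_motion(R, t)

# A 3D point seen by both cameras:
X1 = np.array([0.0, 0.0, 5.0])
X2 = R @ X1 + t
x1, x2 = X1 / X1[2], X2 / X2[2]               # normalized image coordinates
residual = x2 @ (E @ x1)                      # should be ~0
```

With noisy intrinsics, the normalized coordinates x1 and x2 are themselves wrong, which is exactly the error amplification the paper's reparameterization is designed to mitigate.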
38

Wisotzky, Eric L., Peter Eisert, and Anna Hilsmann. "3D Hyperspectral Light-Field Imaging: a first intraoperative implementation." Current Directions in Biomedical Engineering 9, no. 1 (September 1, 2023): 611–14. http://dx.doi.org/10.1515/cdbme-2023-1153.

Full text
Abstract:
Hyperspectral imaging is an emerging technology that has gained significant attention in the medical field due to its ability to provide precise and accurate imaging of biological tissues. Current methods of hyperspectral imaging, such as filter-wheel, snapshot, line-scanning, and push-broom cameras, have limitations such as low spatial and spectral resolution and slow acquisition times. New developments in the field of light-field cameras show the potential to overcome these limitations. In this paper, we use a novel hyperspectral light-field camera and combine the capabilities of hyperspectral and 3D analysis. For this purpose, we calibrate our system and test it during two ENT surgeries to show its potential for improving surgical outcomes. The micro-lenses of the camera map 66 spectral sub-images onto the sensor, allowing the spectral behavior of the captured scene to be reconstructed in the spectral range of 350–1000 nm. In addition, we use the sensor data to apply a 3D camera calibration pipeline that allows 3D surface reconstruction. We captured 26 calibration images and achieved calibration results in accordance with the stated company data. The best calibration showed a re-projection error of 0.55 px. Further, we tested the camera during a parotidectomy and a neck dissection. The extracted reflectance spectra of the selected venous and arterial regions correspond perfectly to the spectra of oxygenated and deoxygenated hemoglobin. For the first time, to our knowledge, a hyperspectral light-field camera has been used during a surgery. We were able to continuously capture images and analyze the reconstructed spectra of specific tissue types. Furthermore, we are able to use the sensor data of the micro-lens projections to calibrate the multi-lens camera system for later intraoperative measurement tasks.
APA, Harvard, Vancouver, ISO, and other styles
39

Jhan, Jyun-Ping, Jiann-Yeou Rau, and Chih-Ming Chou. "Underwater 3D Rigid Object Tracking and 6-DOF Estimation: A Case Study of Giant Steel Pipe Scale Model Underwater Installation." Remote Sensing 12, no. 16 (August 12, 2020): 2600. http://dx.doi.org/10.3390/rs12162600.

Full text
Abstract:
The Zengwen desilting tunnel project installed an Elephant Trunk Steel Pipe (ETSP) at the bottom of the reservoir, designed to connect the new bypass tunnel and reach down to the sediment surface. Since the ETSP is huge and its underwater installation is an unprecedented construction method, there are several uncertainties in its dynamic motion during installation. To assure construction safety, a 1:20 ETSP scale model was built to simulate the underwater installation procedure, and its six-degrees-of-freedom (6-DOF) motion parameters were monitored by offline underwater 3D rigid object tracking and photogrammetry. Three cameras were used to form a multicamera system, and several auxiliary devices, such as waterproof housings, tripods, and a waterproof LED, were adopted to protect the cameras and to obtain clear images in the underwater environment. However, since it is difficult for divers to position the cameras and ensure that their fields of view overlap, each camera can only observe the head, middle, or tail part of the ETSP, leading to a small overlap area among the images. Therefore, it is not possible to use the traditional method of forward intersection from multiple images, in which the cameras’ positions and orientations have to be calibrated and fixed in advance. Instead, by tracking the 3D coordinates of the ETSP and obtaining the camera orientation information via space resection, we propose a multicamera coordinate transformation and adopt a single-camera relative orientation transformation to calculate the 6-DOF motion parameters. The offline procedure first acquires the 3D coordinates of the ETSP by taking multi-position images with a pre-calibrated camera in air, and then uses these 3D coordinates as control points to perform space resection of the calibrated underwater cameras. Finally, we calculate the 6-DOF motion of the ETSP using the camera orientation information through both the multi- and single-camera approaches. In this study, we show the results of camera calibration in air and in the underwater environment, present the 6-DOF motion parameters of the ETSP underwater installation and the reconstructed 4D animation, and compare the differences between the multi- and single-camera approaches.
APA, Harvard, Vancouver, ISO, and other styles
40

Yusoff, A. R., M. F. M. Ariff, K. M. Idris, Z. Majid et A. K. Chong. « CAMERA CALIBRATION ACCURACY AT DIFFERENT UAV FLYING HEIGHTS ». ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W3 (23 février 2017) : 595–600. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w3-595-2017.

Texte intégral
Résumé :
Unmanned Aerial Vehicles (UAVs) can be used to acquire highly accurate data in deformation survey, whereby low-cost digital cameras are commonly used in the UAV mapping. Thus, camera calibration is considered important in obtaining high-accuracy UAV mapping using low-cost digital cameras. The main focus of this study was to calibrate the UAV camera at different camera distances and check the measurement accuracy. The scope of this study included camera calibration in the laboratory and on the field, and the UAV image mapping accuracy assessment used calibration parameters of different camera distances. The camera distances used for the image calibration acquisition and mapping accuracy assessment were 1.5 metres in the laboratory, and 15 and 25 metres on the field using a Sony NEX6 digital camera. A large calibration field and a portable calibration frame were used as the tools for the camera calibration and for checking the accuracy of the measurement at different camera distances. Bundle adjustment concept was applied in Australis software to perform the camera calibration and accuracy assessment. The results showed that the camera distance at 25 metres is the optimum object distance as this is the best accuracy obtained from the laboratory as well as outdoor mapping. In conclusion, the camera calibration at several camera distances should be applied to acquire better accuracy in mapping and the best camera parameter for the UAV image mapping should be selected for highly accurate mapping measurement.
41

Chen, C., B. S. Yang and S. Song. "LOW COST AND EFFICIENT 3D INDOOR MAPPING USING MULTIPLE CONSUMER RGB-D CAMERAS". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B1 (2 June 2016): 169–74. http://dx.doi.org/10.5194/isprsarchives-xli-b1-169-2016.

Full text
Abstract:
Driven by the miniaturization and light weight of positioning and remote sensing sensors, as well as the urgent need to fuse indoor and outdoor maps for next-generation navigation, 3D indoor mapping from mobile scanning is a hot research and application topic. Point clouds with auxiliary data such as colour and infrared images derived from a 3D indoor mobile mapping suite can be used in a variety of novel applications, including indoor scene visualization, automated floorplan generation, gaming, reverse engineering, navigation and simulation. State-of-the-art 3D indoor mapping systems equipped with multiple laser scanners produce accurate point clouds of building interiors containing billions of points. However, these laser-scanner-based systems are mostly expensive and not portable. Low-cost consumer RGB-D cameras provide an alternative way to address the core challenge of indoor mapping: capturing the detailed underlying geometry of building interiors. Nevertheless, RGB-D cameras have a very limited field of view, resulting in low efficiency at the data collection stage and incomplete datasets missing major building structures (e.g. ceilings, walls). Attempting to collect a complete scene without data gaps using a single RGB-D camera is not technically sound because of the large amount of human labour involved and the number of position parameters that need to be solved. To provide an efficient and low-cost solution to 3D indoor mapping, in this paper we present an indoor mapping suite prototype built upon a novel calibration method that calibrates the internal and external parameters of multiple RGB-D cameras. Three Kinect sensors are mounted on a rig with different view directions to form a large field of view.
The calibration procedure is threefold: (1) the internal parameters of the colour and infrared cameras inside each Kinect are calibrated using a chessboard pattern; (2) the external parameters between the colour and infrared cameras inside each Kinect are calibrated using a chessboard pattern; (3) the external parameters between the Kinects are first calculated using a pre-set calibration field and further refined by an iterative closest point algorithm. Experiments are carried out to validate the proposed method on RGB-D datasets collected by the indoor mapping suite prototype. The effectiveness and accuracy of the proposed method are evaluated by comparing the point clouds derived from the prototype with ground truth data collected by a commercial terrestrial laser scanner at ultra-high density. The overall analysis of the results shows that the proposed method achieves seamless integration of multiple point clouds from different RGB-D cameras collected at 30 frames per second.
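The refinement step of such a procedure, iterative closest point, alternates between matching points across two clouds and solving a closed-form rigid alignment (Kabsch/Umeyama). A minimal numpy sketch of that inner alignment step, with correspondences assumed already fixed — this is not the authors' implementation, and the two "Kinect" clouds below are synthetic:

```python
import numpy as np

def rigid_align(P, Q):
    """Closed-form least-squares rotation R and translation t such that
    R @ P[i] + t ~ Q[i] (Kabsch/Umeyama) -- the inner step of one ICP
    iteration once correspondences between two point clouds are fixed."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)           # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])          # guard against a reflection
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

# Check: recover a known rotation/translation between two synthetic clouds.
rng = np.random.default_rng(0)
P = rng.normal(size=(50, 3))
a = np.pi / 6
R_true = np.array([[np.cos(a), -np.sin(a), 0],
                   [np.sin(a),  np.cos(a), 0],
                   [0, 0, 1]])
t_true = np.array([0.2, -0.1, 0.5])
Q = P @ R_true.T + t_true
R, t = rigid_align(P, Q)
assert np.allclose(R, R_true) and np.allclose(t, t_true)
```

A full ICP loop would re-estimate nearest-neighbour correspondences after each call and iterate until the residual stops decreasing.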
43

Lin, Tzung-Han, Yu-Lun Liu, Chi-Cheng Lee and Hsuan-Kai Huang. "A camera array system based on DSLR cameras for autostereoscopic prints". Electronic Imaging 2020, no. 2 (26 January 2020): 155–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.2.sda-155.

Full text
Abstract:
In this paper, we present a multi-view camera-array system using commercial DSLR cameras. To produce quality autostereoscopic prints, we first calibrated the colour using an X-Rite ColorChecker and then automatically adjusted the alignment of all images with a single black-and-white checkerboard. In this system, we also used an external Arduino-based electronic trigger to synchronize all cameras and generate bullet-time-effect photos. Finally, we converted all photos into multiplexed images and printed them on a lenticular lens panel to produce an autostereoscopic photo frame.
44

Lussem, U., J. Hollberg, J. Menne, J. Schellberg and G. Bareth. "USING CALIBRATED RGB IMAGERY FROM LOW-COST UAVS FOR GRASSLAND MONITORING: CASE STUDY AT THE RENGEN GRASSLAND EXPERIMENT (RGE), GERMANY". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W6 (23 August 2017): 229–33. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w6-229-2017.

Full text
Abstract:
Monitoring the spectral response of intensively managed grassland throughout the growing season allows fertilizer inputs to be optimized by monitoring plant growth. For example, site-specific fertilizer application as part of precision agriculture (PA) management requires information within a short time, but this normally requires field-based measurements with hyper- or multispectral sensors, which may not be feasible in day-to-day farming practice. Exploiting the information in RGB images from consumer-grade cameras mounted on unmanned aerial vehicles (UAVs) can offer cost-efficient and near-real-time analysis of grasslands with high temporal and spatial resolution. The potential of RGB imagery-based vegetation indices (VIs) from consumer-grade cameras mounted on UAVs has recently been explored in several studies. However, for multitemporal analyses it is desirable to calibrate the digital numbers (DN) of RGB images to physical units. In this study, we explored the comparability of the RGBVI from a consumer-grade camera mounted on a low-cost UAV to well-established vegetation indices from hyperspectral field measurements for grassland applications. The study was conducted in 2014 on the Rengen Grassland Experiment (RGE) in Germany. Image DN values were calibrated into reflectance using the Empirical Line Method (Smith & Milton 1999). Depending on the sampling date and VI, the correlation between the UAV-based RGBVI and VIs such as the NDVI yielded R² values ranging from no correlation up to 0.9. These results indicate that calibrated RGB-based VIs have the potential to support or substitute hyperspectral field measurements to facilitate management decisions on grasslands.
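The Empirical Line Method referenced above fits a per-band linear gain/offset from image digital numbers (DN) to reflectance using reference targets of known reflectance, and the RGBVI is then computed from the calibrated bands. A hedged sketch — the two-target DN/reflectance pairs below are invented for illustration, not values from the study:

```python
import numpy as np

def empirical_line(dn_targets, reflectance_targets):
    """Fit the Empirical Line Method's per-band linear mapping from
    image DN of reference targets to known reflectance; (gain, offset)."""
    gain, offset = np.polyfit(dn_targets, reflectance_targets, 1)
    return gain, offset

def rgbvi(r, g, b):
    """RGB vegetation index: (G^2 - R*B) / (G^2 + R*B)."""
    return (g * g - r * b) / (g * g + r * b)

# Two reference targets (dark and bright) for one band, DN -> reflectance.
gain, offset = empirical_line(np.array([20.0, 200.0]), np.array([0.05, 0.50]))
assert abs((gain * 20.0 + offset) - 0.05) < 1e-9

# A green-dominated (vegetated) pixel yields a positive index value.
assert rgbvi(0.10, 0.40, 0.08) > 0
```

In practice each band gets its own gain/offset, and more than two targets spanning the dark-to-bright range make the fit more robust.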
45

Stępień, Grzegorz, Artur Kujawski, Arkadiusz Tomczak, Roman Hałaburda and Kamil Borczyk. "Method of Improving Incomplete Spatial-Temporal Data in Inland Navigation, on the Basis of Industrial Camera Images – West Oder River Case Study". Transport and Telecommunication Journal 23, no. 1 (1 February 2022): 48–61. http://dx.doi.org/10.2478/ttj-2022-0005.

Full text
Abstract:
The main aim of the paper is to use a single non-metric camera to support the determination of the position of inland waterway vessels. The authors propose to use the existing infrastructure of CCTV cameras mounted on bridges and wharves to determine the vessels' positions. The image from the cameras, giving the pixel coordinates of a moving object, is transformed into the geodetic data domain using a modified projective transformation method. The novel aspect is the use of a Sequential Projection Transformation (SPT), which additionally uses virtual reference points. The transformation coefficients calculated using the virtual points are used to determine the position of the vessels and are simultaneously used to calibrate the industrial camera. The method has been verified under real conditions, and the results obtained are on average 30% more accurate than the traditionally used projective transformation with a small number of real points.
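The underlying projective transformation maps pixel coordinates to planar geodetic coordinates via a 3×3 homography estimated from reference points. A minimal direct-linear-transform sketch — not the authors' SPT variant with virtual points, and the four point pairs below are synthetic:

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate the 3x3 projective transformation mapping pixels `src`
    to planar coordinates `dst` from >= 4 point pairs (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)      # null-space vector = homography entries
    return H / H[2, 2]

def apply_homography(H, pt):
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]           # dehomogenize

# Check on a known scale-and-shift mapping (a special projective case).
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(10, 20), (12, 20), (12, 22), (10, 22)]   # scale 2, shift (10, 20)
H = fit_homography(src, dst)
assert np.allclose(apply_homography(H, (0.5, 0.5)), (11, 21))
```

Four non-collinear correspondences determine the homography exactly; with more points the same SVD gives the least-squares solution, which is where extra (virtual) reference points help.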
46

Zhu, Ying, Long Ye, Jing Ling Wang and Qin Zhang. "Camera Calibration Based Multi-Objects Location in Binocular Stereo Vision". Applied Mechanics and Materials 719-720 (January 2015): 1217–22. http://dx.doi.org/10.4028/www.scientific.net/amm.719-720.1217.

Full text
Abstract:
To capture high-quality binocular stereo video, both the convergence and the interaxial distance must be manipulated to control the depth of objects within the 3D space. Scene understanding therefore becomes important, as it can increase the efficiency of parameter control. In this paper, a camera-calibration-based multi-object location method is introduced to supply prior information for adjusting the convergence and the interaxial distance during capture. First, we calibrate the two cameras to obtain the intrinsic and extrinsic parameters. We then select points of the object in the images taken by the left and right cameras to determine its locations in the two images. From the three-dimensional coordinates of the objects, the distance between each object and the camera baseline is calculated mathematically.
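Given the calibrated intrinsic and extrinsic parameters, the 3D coordinates of a point selected in both views follow from two-view triangulation. A standard linear (DLT) triangulation sketch, with synthetic projection matrices assumed for illustration rather than the paper's actual rig:

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation: recover the 3D point seen at pixel
    uv1 in camera P1 and uv2 in camera P2 (P are 3x4 projection matrices)."""
    A = np.stack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                    # homogeneous solution
    return X[:3] / X[3]

def proj(P, X):
    p = P @ np.append(X, 1.0)
    return p[:2] / p[2]

# Two rectified cameras sharing intrinsics K, 0.1 m baseline along x.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])
X_true = np.array([0.2, -0.1, 2.0])
X = triangulate(P1, P2, proj(P1, X_true), proj(P2, X_true))
assert np.allclose(X, X_true)
```

The recovered point's perpendicular distance to the baseline is then a straightforward point-to-line computation in the world frame.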
47

Appelt, T., J. van der Lucht, M. Bleier and A. Nüchter. "CALIBRATION AND VALIDATION OF THE INTEL T265 FOR VISUAL LOCALISATION AND TRACKING UNDERWATER". International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B2-2021 (28 June 2021): 635–41. http://dx.doi.org/10.5194/isprs-archives-xliii-b2-2021-635-2021.

Full text
Abstract:
Localization and navigation for autonomous underwater vehicles (AUVs) have always been a major challenge, and in many situations complex solutions have had to be devised. One of the main approaches is visual odometry using a stereo camera. In this study, the Intel T265 fisheye stereo camera has been calibrated and tested to determine its usability for localisation and navigation under water as an alternative to more complex systems. First, the camera was calibrated inside a water-filled container. This calibration, consisting of camera and distortion parameters, was programmed onto the T265 to take the differences between land and underwater usage into account. Following the calibration, the accuracy and precision of the camera were tested using a linear, a circular and finally a chaotic motion. This includes a review of the localisation and tracking of the camera's visual odometry compared to a ground truth provided by an OptiTrack V120:Trio, accounting for scaling, accuracy and precision. Experiments to determine usability under fast chaotic motions were also performed and analysed. Finally, a conclusion is drawn concerning the applicability of the Intel T265 fisheye stereo camera, the challenges of using this model, the possibilities for low-cost operations and the main challenges for future work.
48

Zhang, Zhuang, Rujin Zhao, Enhai Liu, Kun Yan and Yuebo Ma. "A Convenient Calibration Method for LRF-Camera Combination Systems Based on a Checkerboard". Sensors 19, no. 6 (15 March 2019): 1315. http://dx.doi.org/10.3390/s19061315.

Full text
Abstract:
In this paper, a simple, high-precision calibration method is proposed for the LRF-camera combined measurement systems that are widely used at present. The method can be applied not only to mainstream 2D and 3D LRF-camera systems, but also to calibrate newly developed 1D LRF-camera combined systems. It only requires a calibration board and at least three sets of recorded data. First, the camera parameters and distortion coefficients are decoupled via the distortion center. Then, the spatial coordinates of the laser spots are solved using line and plane constraints, and the LRF-camera extrinsic parameters are estimated. In addition, we establish a cost function for optimizing the system. Finally, the calibration accuracy and characteristics of the method are analyzed through simulation experiments, and the validity of the method is verified through the calibration of a real system.
49

Yao, Linshen, and Haibo Liu. "Design and Analysis of High-Accuracy Telecentric Surface Reconstruction System Based on Line Laser". Applied Sciences 11, no. 2 (6 January 2021): 488. http://dx.doi.org/10.3390/app11020488.

Full text
Abstract:
Non-contact measurement technology based on triangulation with cameras is extensively applied in the development of computer vision. However, the accuracy of the technology is generally not satisfactory. Applying telecentric lenses can significantly improve the accuracy, but their field of view is limited by their structure. To address these challenges, a telecentric surface reconstruction system is designed for surface detection, consisting of a single camera with a telecentric lens, a line laser generator and a one-dimensional displacement platform. The designed system can reconstruct surfaces with high accuracy, and the measured region is expanded through the use of the displacement platform. To achieve high-accuracy surface reconstruction, we propose a checkerboard-based method to calibrate the designed system, including the line laser plane and the motor direction of the displacement platform. With the calibrated system, the object under the line laser is measured, and the line results are assembled into the final surface reconstruction. The results show that the designed system can reconstruct a region of 20 × 40 mm² with micron-order accuracy.
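Line-laser triangulation of this kind recovers each surface point by intersecting the camera's viewing ray with the calibrated laser plane. A minimal sketch under assumed pinhole intrinsics and plane parameters — note the paper's system uses a telecentric (orthographic) lens, so this perspective model is only an illustration of the ray-plane principle:

```python
import numpy as np

def ray_plane_point(K, pixel, plane):
    """Back-project `pixel` through intrinsics K into a viewing ray from
    the camera centre, then intersect it with the calibrated laser plane
    (n, d), where n . X + d = 0; returns the 3D point on the laser line."""
    n, d = np.asarray(plane[0], float), float(plane[1])
    ray = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    s = -d / (n @ ray)            # ray parameter at the intersection
    return s * ray

# Assumed intrinsics and a laser plane 0.3 m ahead, tilted about x.
K = np.array([[1000.0, 0, 500], [0, 1000.0, 500], [0, 0, 1]])
plane = (np.array([0.0, 0.5, 1.0]), -0.3)

# The principal-point pixel back-projects along (0, 0, 1).
X = ray_plane_point(K, (500, 500), plane)
assert np.allclose(X, [0.0, 0.0, 0.3])
```

Sweeping the object under the laser and repeating this intersection per illuminated pixel, with the platform's motor direction stitching successive profiles together, yields the assembled surface.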
50

Dang, Chang Gwon, Seung Soo Lee, Mahboob Alam, Sang Min Lee, Mi Na Park, Ha-Seung Seong, Seungkyu Han et al. "Korean Cattle 3D Reconstruction from Multi-View 3D-Camera System in Real Environment". Sensors 24, no. 2 (10 January 2024): 427. http://dx.doi.org/10.3390/s24020427.

Full text
Abstract:
The rapid evolution of 3D technology in recent years has brought about significant change in the field of agriculture, including precision livestock management. From 3D geometry information, the weight and characteristics of body parts of Korean cattle can be analyzed to improve cow growth. In this paper, a camera system is built to synchronously capture 3D data and then reconstruct a 3D mesh representation. In general, to reconstruct non-rigid objects, a camera system is synchronized and calibrated, and the data of each camera are then transformed into global coordinates. However, when reconstructing cattle in a real environment, difficulties such as fences and camera vibration can cause the reconstruction process to fail. A new scheme is proposed that automatically removes environmental fences and noise. An optimization method is proposed that interleaves camera pose updates and adds the distances between each camera pose and its initial position to the objective function. The difference between the cameras' point clouds and the mesh output is reduced from 7.5 mm to 5.5 mm. The experimental results show that our scheme can automatically generate a high-quality mesh in a real environment. This scheme provides data that can be used for other research on Korean cattle.
