Journal articles on the topic "Catadioptric cameras"

Follow this link to see other types of publications on the topic: Catadioptric cameras.

Cite a source in APA, MLA, Chicago, Harvard, and many other styles.

See the top 50 journal articles for research on the topic "Catadioptric cameras".

Next to each source in the reference list there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf and read its abstract online, whenever it is available in the work's metadata.

Browse journal articles from a wide range of scientific disciplines and compile an accurate bibliography.

1

Benamar, F., S. Elfkihi, C. Demonceaux, E. Mouaddib, and D. Aboutajdine. "Visual contact with catadioptric cameras." Robotics and Autonomous Systems 64 (February 2015): 100–119. http://dx.doi.org/10.1016/j.robot.2014.09.036.

2

Rostkowska, Marta, and Piotr Skrzypczyński. "Optimizing Appearance-Based Localization with Catadioptric Cameras: Small-Footprint Models for Real-Time Inference on Edge Devices." Sensors 23, no. 14 (18 July 2023): 6485. http://dx.doi.org/10.3390/s23146485.

Abstract:
This paper considers the task of appearance-based localization: visual place recognition from omnidirectional images obtained from catadioptric cameras. The focus is on designing an efficient neural network architecture that accurately and reliably recognizes indoor scenes on distorted images from a catadioptric camera, even in self-similar environments with few discernible features. As the target application is the global localization of a low-cost service mobile robot, the proposed solutions are optimized toward being small-footprint models that provide real-time inference on edge devices, such as Nvidia Jetson. We compare several design choices for the neural network-based architecture of the localization system and then demonstrate that the best results are achieved with embeddings (global descriptors) yielded by exploiting transfer learning and fine tuning on a limited number of catadioptric images. We test our solutions on two small-scale datasets collected using different catadioptric cameras in the same office building. Next, we compare the performance of our system to state-of-the-art visual place recognition systems on the publicly available COLD Freiburg and Saarbrücken datasets that contain images collected under different lighting conditions. Our system compares favourably to the competitors both in terms of the accuracy of place recognition and the inference time, providing a cost- and energy-efficient means of appearance-based localization for an indoor service robot.
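As a rough, hypothetical illustration of the matching step this abstract describes (the embedding network, the similarity threshold, and all names below are assumptions, not the authors' code), a query descriptor of the current omnidirectional image can be compared against a database of reference-place descriptors by cosine similarity:

    import numpy as np

    def best_matching_place(query_emb, ref_embs, ref_places, min_score=0.7):
        """Return the reference place whose embedding is most similar to the query.

        query_emb:  (D,) global descriptor of the current omnidirectional image
        ref_embs:   (N, D) descriptors of the reference images (e.g. from a fine-tuned CNN)
        ref_places: list of N place labels
        """
        q = query_emb / np.linalg.norm(query_emb)
        r = ref_embs / np.linalg.norm(ref_embs, axis=1, keepdims=True)
        scores = r @ q                      # cosine similarities against all references
        best = int(np.argmax(scores))
        if scores[best] < min_score:
            return None, float(scores[best])
        return ref_places[best], float(scores[best])
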
3

Khurana, M., and C. Armenakis. "LOCALIZATION AND MAPPING USING A NON-CENTRAL CATADIOPTRIC CAMERA SYSTEM." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences IV-2 (28 May 2018): 145–52. http://dx.doi.org/10.5194/isprs-annals-iv-2-145-2018.

Abstract:
This work details the development of an indoor navigation and mapping system using a non-central catadioptric omnidirectional camera and its implementation for mobile applications. Omnidirectional catadioptric cameras find their use in navigation and mapping of robotic platforms, owing to their wide field of view. Having a wider field of view, or rather a potential 360° field of view, allows the system to “see and move” more freely in the navigation space. A catadioptric camera system is a low cost system which consists of a mirror and a camera. Any perspective camera can be used. A platform was constructed in order to combine the mirror and a camera to build a catadioptric system. A calibration method was developed in order to obtain the relative position and orientation between the two components so that they can be considered as one monolithic system. The mathematical model for localizing the system was determined using conditions based on the reflective properties of the mirror. The obtained platform positions were then used to map the environment using epipolar geometry. Experiments were performed to test the mathematical models and the achieved location and mapping accuracies of the system. An iterative process of positioning and mapping was applied to determine object coordinates of an indoor environment while navigating the mobile platform. Camera localization and 3D coordinates of object points obtained decimetre level accuracies.
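For readers unfamiliar with the "reflective properties of the mirror" invoked in this abstract, the basic building block is the standard vector form of the law of reflection (textbook optics, not a result specific to this paper): an incident ray with direction d that hits the mirror at a point with unit normal n is reflected along

    \mathbf{r} = \mathbf{d} - 2\,(\mathbf{d}\cdot\mathbf{n})\,\mathbf{n}, \qquad \lVert\mathbf{n}\rVert = 1 .

The non-central projection model is then obtained by chaining this reflection with the perspective camera that observes the mirror.
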
4

Zhao, Yue, and Xin Yang. "Calibration Method for Central Catadioptric Camera Using Multiple Groups of Parallel Lines and Their Properties." Journal of Sensors 2021 (2 July 2021): 1–13. http://dx.doi.org/10.1155/2021/6675110.

Abstract:
This paper presents an approach for calibrating omnidirectional single-viewpoint sensors using the central catadioptric projection properties of parallel lines. Single-viewpoint sensors are widely used in robot navigation and driverless cars; thus, a high degree of calibration accuracy is needed. In the unit viewing sphere model of central catadioptric cameras, a line in a three-dimensional space is projected to a great circle, resulting in the projections of a group of parallel lines intersecting only at the endpoints of the diameter of the great circle. Based on this property, when there are multiple groups of parallel lines, a group of orthogonal directions can be determined by a rectangle constructed by two groups of parallel lines in different directions. When there is a single group of parallel lines in space, the diameter and tangents at their endpoints determine a group of orthogonal directions for the plane containing the great circle. The intrinsic parameters of the camera can be obtained from the orthogonal vanishing points in the central catadioptric image plane. An optimization algorithm for line image fitting based on the properties of antipodal points is proposed. The performance of the algorithm is verified using simulated setups. Our calibration method was validated through simulations and real experiments with a catadioptric camera.
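The calibration step at the end of this abstract rests on a standard projective-geometry relation rather than anything particular to the paper: vanishing points of orthogonal space directions are conjugate with respect to the image of the absolute conic (IAC), so each orthogonal pair contributes one linear constraint on the IAC:

    v_1^{\top}\,\omega\,v_2 = 0, \qquad \omega = (K K^{\top})^{-1},

where v_1 and v_2 are the orthogonal vanishing points, \omega is the IAC, and K is the intrinsic matrix, recovered from the estimated \omega by Cholesky factorization.
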
5

Córdova-Esparza, Diana-Margarita, Juan Terven, Julio-Alejandro Romero-González, and Alfonso Ramírez-Pedraza. "Three-Dimensional Reconstruction of Indoor and Outdoor Environments Using a Stereo Catadioptric System." Applied Sciences 10, no. 24 (10 December 2020): 8851. http://dx.doi.org/10.3390/app10248851.

Abstract:
In this work, we present a panoramic 3D stereo reconstruction system composed of two catadioptric cameras. Each one consists of a CCD camera and a parabolic convex mirror that allows the acquisition of catadioptric images. We describe the calibration approach and propose the improvement of existing deep feature matching methods with epipolar constraints. We show that the improved matching algorithm covers more of the scene than classic feature detectors, yielding broader and denser reconstructions for outdoor environments. Our system can also generate accurate measurements in the wild without large amounts of data used in deep learning-based systems. We demonstrate the system’s feasibility and effectiveness as a practical stereo sensor with real experiments in indoor and outdoor environments.
6

Li, Yongle, and Jingtao Lou. "Omnigradient Based Total Variation Minimization for Enhanced Defocus Deblurring of Omnidirectional Images." International Journal of Optics 2014 (2014): 1–9. http://dx.doi.org/10.1155/2014/732937.

Abstract:
We propose a new method of image restoration for catadioptric defocus blur using omnitotal variation (Omni-TV) minimization based on omnigradient. Catadioptric omnidirectional imaging systems usually consist of conventional cameras and curved mirrors for capturing 360° field of view. The problem of catadioptric omnidirectional imaging defocus blur, which is caused by lens aperture and mirror curvature, becomes more severe when high resolution sensors and large apertures are used. In an omnidirectional image, two points near each other may not be close to one another in the 3D scene. Traditional gradient computation cannot be directly applied to omnidirectional image processing. Thus, omnigradient computing method combined with the characteristics of catadioptric omnidirectional imaging is proposed. Following this Omni-TV minimization is used as the constraint for deconvolution regularization, leading to the restoration of defocus blur in an omnidirectional image to obtain all sharp omnidirectional images. The proposed method is important for improving catadioptric omnidirectional imaging quality and promoting applications in related fields like omnidirectional video and image processing.
7

Zhang, Yu, Xiping Xu, Ning Zhang, and Yaowen Lv. "A Semantic SLAM System for Catadioptric Panoramic Cameras in Dynamic Environments." Sensors 21, no. 17 (1 September 2021): 5889. http://dx.doi.org/10.3390/s21175889.

Abstract:
When a traditional visual SLAM system works in a dynamic environment, it will be disturbed by dynamic objects and perform poorly. In order to overcome the interference of dynamic objects, we propose a semantic SLAM system for catadioptric panoramic cameras in dynamic environments. A real-time instance segmentation network is used to detect potential moving targets in the panoramic image. In order to find the real dynamic targets, potential moving targets are verified according to the sphere’s epipolar constraints. Then, when extracting feature points, the dynamic objects in the panoramic image are masked. Only static feature points are used to estimate the pose of the panoramic camera, so as to improve the accuracy of pose estimation. In order to verify the performance of our system, experiments were conducted on public data sets. The experiments showed that in a highly dynamic environment, the accuracy of our system is significantly better than traditional algorithms. By calculating the RMSE of the absolute trajectory error, we found that our system performed up to 96.3% better than traditional SLAM. Our catadioptric panoramic camera semantic SLAM system has higher accuracy and robustness in complex dynamic environments.
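A minimal sketch of the verification step this abstract refers to (not the authors' code; the residual form and the threshold value are assumptions): image points are lifted to unit bearing vectors on the viewing sphere, and a candidate point is treated as dynamic when it violates the epipolar constraint encoded by an essential matrix E estimated from static points.

    import numpy as np

    def epipolar_residual(E, x1, x2):
        """Residual of the epipolar constraint x2^T E x1 = 0 for unit bearing vectors."""
        return abs(float(x2 @ E @ x1))

    def is_dynamic(E, x1, x2, threshold=0.01):
        # A large residual means the point pair is inconsistent with the static-scene
        # geometry and is flagged as belonging to a moving object.
        return epipolar_residual(E, x1, x2) > threshold
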
8

Ilizirov, Grigory, and Sagi Filin. "POSE ESTIMATION AND MAPPING USING CATADIOPTRIC CAMERAS WITH SPHERICAL MIRRORS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B3 (9 June 2016): 43–47. http://dx.doi.org/10.5194/isprs-archives-xli-b3-43-2016.

Abstract:
Catadioptric cameras have the advantage of broadening the field of view and revealing otherwise occluded object parts. However, they differ geometrically from standard central perspective cameras because of light reflection from the mirror surface which alters the collinearity relation and introduces severe non-linear distortions of the imaged scene. Accommodating for these features, we present in this paper a novel modeling for pose estimation and reconstruction while imaging through spherical mirrors. We derive a closed-form equivalent to the collinearity principle via which we estimate the system’s parameters. Our model yields a resection-like solution which can be developed into a linear one. We show that accurate estimates can be derived with only a small set of control points. Analysis shows that control configuration in the orientation scheme is rather flexible and that high levels of accuracy can be reached in both pose estimation and mapping. Clearly, the ability to model objects which fall outside of the immediate camera field-of-view offers an appealing means to supplement 3-D reconstruction and modeling.
9

Ilizirov, Grigory, and Sagi Filin. "POSE ESTIMATION AND MAPPING USING CATADIOPTRIC CAMERAS WITH SPHERICAL MIRRORS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B3 (9 June 2016): 43–47. http://dx.doi.org/10.5194/isprsarchives-xli-b3-43-2016.

Abstract:
Catadioptric cameras have the advantage of broadening the field of view and revealing otherwise occluded object parts. However, they differ geometrically from standard central perspective cameras because of light reflection from the mirror surface which alters the collinearity relation and introduces severe non-linear distortions of the imaged scene. Accommodating for these features, we present in this paper a novel modeling for pose estimation and reconstruction while imaging through spherical mirrors. We derive a closed-form equivalent to the collinearity principle via which we estimate the system’s parameters. Our model yields a resection-like solution which can be developed into a linear one. We show that accurate estimates can be derived with only a small set of control points. Analysis shows that control configuration in the orientation scheme is rather flexible and that high levels of accuracy can be reached in both pose estimation and mapping. Clearly, the ability to model objects which fall outside of the immediate camera field-of-view offers an appealing means to supplement 3-D reconstruction and modeling.
10

Mariottini, Gian Luca, and Domenico Prattichizzo. "Image-based Visual Servoing with Central Catadioptric Cameras." International Journal of Robotics Research 27, no. 1 (January 2008): 41–56. http://dx.doi.org/10.1177/0278364907084320.

11

Deng, Xiao-Ming, Fu-Chao Wu, and Yi-Hong Wu. "An Easy Calibration Method for Central Catadioptric Cameras." Acta Automatica Sinica 33, no. 8 (August 2007): 801–8. http://dx.doi.org/10.1360/aas-007-0801.

12

Swaminathan, Rahul, Michael D. Grossberg, and Shree K. Nayar. "Non-Single Viewpoint Catadioptric Cameras: Geometry and Analysis." International Journal of Computer Vision 66, no. 3 (March 2006): 211–29. http://dx.doi.org/10.1007/s11263-005-3220-1.

13

Filin, Sagi, Grigory Ilizirov, and Bashar Elnashef. "Robust Pose Estimation and Calibration of Catadioptric Cameras With Spherical Mirrors." Photogrammetric Engineering & Remote Sensing 86, no. 1 (1 January 2020): 33–44. http://dx.doi.org/10.14358/pers.86.1.33.

Abstract:
Catadioptric cameras broaden the field of view and reveal otherwise occluded object parts. They differ geometrically from central-perspective cameras because of light reflection from the mirror surface. To handle these effects, we present new pose-estimation and reconstruction models for imaging through spherical mirrors. We derive a closed-form equivalent to the collinearity principle via which three methods are established to estimate the system parameters: a resection-based one, a trilateration-based one that introduces novel constraints that enhance accuracy, and a direct and linear transform-based one. The estimated system parameters exhibit improved accuracy compared to the state of the art, and analysis shows intrinsic robustness to the presence of a high fraction of outliers. We then show that 3D point reconstruction can be performed at accurate levels. Thus, we provide an in-depth look into the geometrical modeling of spherical catadioptric systems and practical enhancements of accuracies and requirements to reach them.
14

Yan, Yu, and Bing Wei He. "Single Camera Stereo with Planar Mirrors." Advanced Materials Research 684 (April 2013): 447–50. http://dx.doi.org/10.4028/www.scientific.net/amr.684.447.

Abstract:
This paper presents a new system for rapidly acquiring stereo images using a single camera and a pair of planar mirrors (catadioptric stereo). Firstly, the camera used to capture images is calibrated with matlab toolbox. Secondly, the position and pose of the planar mirrors relative to the fixed, calibrated camera is estimated, and this procedure is accomplished by calculating the symmetry plane of the real and reflected image corners of a chessboard. Thirdly, the relative orientation of two reflected virtual cameras is obtained. Finally, Gaussian noise is added to the image corners of the chessboard to verify the performance of the established stereo system. Experimental results show the effectiveness and robustness of our system.
15

Wu, Fuchao, Fuqing Duan, Zhanyi Hu, and Yihong Wu. "A new linear algorithm for calibrating central catadioptric cameras." Pattern Recognition 41, no. 10 (October 2008): 3166–72. http://dx.doi.org/10.1016/j.patcog.2008.03.010.

16

Deng, Xiaoming, Fuchao Wu, Yihong Wu, Fuqing Duan, Liang Chang, and Hongan Wang. "Self-calibration of hybrid central catadioptric and perspective cameras." Computer Vision and Image Understanding 116, no. 6 (June 2012): 715–29. http://dx.doi.org/10.1016/j.cviu.2012.02.003.

17

Liu, Guo, Feng, and Yang. "Accurate and Robust Monocular SLAM with Omnidirectional Cameras." Sensors 19, no. 20 (16 October 2019): 4494. http://dx.doi.org/10.3390/s19204494.

Abstract:
Simultaneous localization and mapping (SLAM) are fundamental elements for many emerging technologies, such as autonomous driving and augmented reality. For this paper, to get more information, we developed an improved monocular visual SLAM system by using omnidirectional cameras. Our method extends the ORB-SLAM framework with the enhanced unified camera model as a projection function, which can be applied to catadioptric systems and wide-angle fisheye cameras with 195 degrees field-of-view. The proposed system can use the full area of the images even with strong distortion. For omnidirectional cameras, a map initialization method is proposed. We analytically derive the Jacobian matrices of the reprojection errors with respect to the camera pose and 3D position of points. The proposed SLAM has been extensively tested in real-world datasets. The results show positioning error is less than 0.1% in a small indoor environment and is less than 1.5% in a large environment. The results demonstrate that our method is real-time, and increases its accuracy and robustness over the normal systems based on the pinhole model. We open source in https://github.com/lsyads/fisheye-ORB-SLAM.
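As background for the projection function mentioned in this abstract, the sketch below implements the classical unified (sphere) camera model; the "enhanced" variant the authors use generalizes it with an additional shape parameter, and the intrinsic values in the example are placeholders, not values from the paper.

    import numpy as np

    def unified_project(X, xi, fx, fy, cx, cy):
        """Project a 3D point with the classical unified (sphere) camera model.

        The point is first mapped onto the unit viewing sphere, then projected from a
        centre shifted by the mirror parameter xi along the optical axis.
        """
        x, y, z = X
        denom = z + xi * np.linalg.norm(X)
        return np.array([fx * x / denom + cx, fy * y / denom + cy])

    # Placeholder intrinsics, for illustration only.
    print(unified_project(np.array([0.5, -0.2, 2.0]), xi=0.9, fx=400.0, fy=400.0, cx=320.0, cy=240.0))
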
18

Phalak, Yogesh, Gaurav Charpe, and Kartik Paigwar. "Omnidirectional Visual Navigation System for TurtleBot Using Paraboloid Catadioptric Cameras." Procedia Computer Science 133 (2018): 190–96. http://dx.doi.org/10.1016/j.procs.2018.07.023.

19

Aliakbarpour, Hadi, Omar Tahri, and Helder Araujo. "Visual servoing of mobile robots using non-central catadioptric cameras." Robotics and Autonomous Systems 62, no. 11 (November 2014): 1613–22. http://dx.doi.org/10.1016/j.robot.2014.03.007.

20

Dias, Tiago, Pedro Miraldo, and Nuno Gonçalves. "A Framework for Augmented Reality using Non-Central Catadioptric Cameras." Journal of Intelligent & Robotic Systems 83, no. 3-4 (27 February 2016): 359–73. http://dx.doi.org/10.1007/s10846-016-0349-9.

21

Puig, Luis, Yalin Bastanlar, Peter Sturm, J. J. Guerrero, and João Barreto. "Calibration of Central Catadioptric Cameras Using a DLT-Like Approach." International Journal of Computer Vision 93, no. 1 (21 December 2010): 101–14. http://dx.doi.org/10.1007/s11263-010-0411-1.

22

Cinaroglu, Ibrahim, and Yalin Bastanlar. "A direct approach for object detection with catadioptric omnidirectional cameras." Signal, Image and Video Processing 10, no. 2 (8 April 2015): 413–20. http://dx.doi.org/10.1007/s11760-015-0768-2.

23

Li, Jing, Yalin Ding, Xueji Liu, Guoqin Yuan, and Yiming Cai. "Achromatic and Athermal Design of Aerial Catadioptric Optical Systems by Efficient Optimization of Materials." Sensors 23, no. 4 (4 February 2023): 1754. http://dx.doi.org/10.3390/s23041754.

Abstract:
The remote sensing imaging requirements of aerial cameras require their optical system to have wide temperature adaptability. Based on the optical passive athermal technology, the expression of thermal power offset of a single lens in the catadioptric optical system is first derived, and then a mathematical model for efficient optimization of materials is established; finally, the mechanical material combination (mirror and housing material) is optimized according to the comprehensive weight of offset with temperature change and the position change of the equivalent single lens, and achieve optimization of the lens material on an athermal map. In order to verify the effectiveness of the method, an example of a catadioptric aerial optical system with a focal length of 350 mm is designed. The results show that in the temperature range of −40 ℃ to 60 ℃, the diffraction-limited MTF of the designed optical system is 0.59 (at 68 lp/mm), the MTF of each field of view is greater than 0.39, and the thermal defocus is less than 0.004 mm, which is within one time of the focal depth, indicating that the imaging quality of the optical system basically does not change with temperature, meeting the stringent application requirements of the aerial camera.
24

Zeng, Jiyong, Xianyu Su, and Gofan Jin. "Incorporating lens distortion into the design of undistorted catadioptric omnidirectional cameras." Applied Optics 45, no. 30 (20 October 2006): 7778. http://dx.doi.org/10.1364/ao.45.007778.

25

Bermudez-Cameo, Jesus, Gonzalo Lopez-Nicolas, and Jose J. Guerrero. "Fitting line projections in non-central catadioptric cameras with revolution symmetry." Computer Vision and Image Understanding 167 (February 2018): 134–52. http://dx.doi.org/10.1016/j.cviu.2018.01.003.

26

Goncalves, Nuno, Ana Catarina Nogueira, and Andre Lages Miguel. "Forward projection model of non-central catadioptric cameras with spherical mirrors." Robotica 35, no. 6 (5 April 2016): 1378–96. http://dx.doi.org/10.1017/s026357471600014x.

Abstract:
Non-central catadioptric vision is widely used in robotics and vision but suffers from the lack of an explicit closed-form forward projection model (FPM) that relates a 3D point with its 2D image. The search for the reflection point where the scene ray is projected is extremely slow and impractical for real-time applications. Almost all methods thus rely on the assumption of a central projection model, even at the cost of an exact projection. Two recent methods are able to solve this FPM, presenting a quasi-closed form FPM. However, in the special case of spherical mirrors, further enhancements can be made. We compare these two methods for the computation of the FPM and discuss both approaches in terms of practicality and performance. We also derive new expressions for the FPM on spherical mirrors (extremely useful to robotics and graphics) which speed up its computation.
27

Hu, Shaopeng, Mingjun Jiang, Takeshi Takaki, and Idaku Ishii. "Real-Time Monocular Three-Dimensional Motion Tracking Using a Multithread Active Vision System." Journal of Robotics and Mechatronics 30, no. 3 (20 June 2018): 453–66. http://dx.doi.org/10.20965/jrm.2018.p0453.

Abstract:
In this study, we developed a monocular stereo tracking system to be used as a marker-based, three-dimensional (3-D) motion capture system. This system aims to localize dozens of markers on multiple moving objects in real time by switching five hundred different views in 1 s. The ultrafast mirror-drive active vision used in our catadioptric stereo tracking system can accelerate a series of operations for multithread gaze control with video shooting, computation, and actuation within 2 ms. By switching between five hundred different views in 1 s, with real-time video processing for marker extraction, our system can function as J virtual left and right pan-tilt tracking cameras, operating at 250/J fps to simultaneously capture and process J pairs of 512 × 512 stereo images with different views via the catadioptric mirror system. We conducted several real-time 3-D motion experiments to capture multiple fast-moving objects with markers. The results demonstrated the effectiveness of our monocular 3-D motion tracking system.
28

Benseddik, Houssem-Eddine, Fabio Morbidi, and Guillaume Caron. "PanoraMIS: An ultra-wide field of view image dataset for vision-based robot-motion estimation." International Journal of Robotics Research 39, no. 9 (9 June 2020): 1037–51. http://dx.doi.org/10.1177/0278364920915248.

Abstract:
This article presents a new dataset of ultra-wide field of view images with accurate ground truth, called PanoraMIS. The dataset covers a large spectrum of panoramic cameras (catadioptric, twin-fisheye), robotic platforms (wheeled, aerial, and industrial robots), and testing environments (indoors and outdoors), and it is well suited to rigorously validate novel image-based robot-motion estimation algorithms, including visual odometry, visual SLAM, and deep learning-based methods. PanoraMIS and the accompanying documentation is publicly available on the Internet for the entire research community.
29

Duan, Fuqing, Fuchao Wu, Mingquan Zhou, Xiaoming Deng, and Yun Tian. "Calibrating effective focal length for central catadioptric cameras using one space line." Pattern Recognition Letters 33, no. 5 (April 2012): 646–53. http://dx.doi.org/10.1016/j.patrec.2011.05.012.

30

Xiang, Zhiyu, Bo Sun, and Xing Dai. "The Camera Itself as a Calibration Pattern: A Novel Self-Calibration Method for Non-Central Catadioptric Cameras." Sensors 12, no. 6 (30 May 2012): 7299–317. http://dx.doi.org/10.3390/s120607299.

31

Berenguel-Baeta, Bruno, Jesus Bermudez-Cameo, and Jose J. Guerrero. "OmniSCV: An Omnidirectional Synthetic Image Generator for Computer Vision." Sensors 20, no. 7 (7 April 2020): 2066. http://dx.doi.org/10.3390/s20072066.

Abstract:
Omnidirectional and 360° images are becoming widespread in industry and in consumer society, causing omnidirectional computer vision to gain attention. Their wide field of view allows the gathering of a great amount of information about the environment from only an image. However, the distortion of these images requires the development of specific algorithms for their treatment and interpretation. Moreover, a high number of images is essential for the correct training of computer vision algorithms based on learning. In this paper, we present a tool for generating datasets of omnidirectional images with semantic and depth information. These images are synthesized from a set of captures that are acquired in a realistic virtual environment for Unreal Engine 4 through an interface plugin. We gather a variety of well-known projection models such as equirectangular and cylindrical panoramas, different fish-eye lenses, catadioptric systems, and empiric models. Furthermore, we include in our tool photorealistic non-central-projection systems as non-central panoramas and non-central catadioptric systems. As far as we know, this is the first reported tool for generating photorealistic non-central images in the literature. Moreover, since the omnidirectional images are made virtually, we provide pixel-wise information about semantics and depth as well as perfect knowledge of the calibration parameters of the cameras. This allows the creation of ground-truth information with pixel precision for training learning algorithms and testing 3D vision approaches. To validate the proposed tool, different computer vision algorithms are tested as line extractions from dioptric and catadioptric central images, 3D Layout recovery and SLAM using equirectangular panoramas, and 3D reconstruction from non-central panoramas.
32

Zhang, Hongwei, Weining Chen, Yalin Ding, Rui Qu, and Sansan Chang. "Optical System Design of Oblique Airborne-Mapping Camera with Focusing Function." Photonics 9, no. 8 (31 July 2022): 537. http://dx.doi.org/10.3390/photonics9080537.

Abstract:
The use of airborne-mapping technology plays a key role in the acquisition of large-scale basic geographic data information, especially in various important civil/military-mapping missions. However, most airborne-mapping cameras are limited by parameters, such as the flight altitude, working-environment temperature, and so on. To solve this problem, in this paper, we designed a panchromatic wide-spectrum optical system with a focusing function. Based on the catadioptric optical structure, the optical system approached a telecentric optical structure. Sharp images at different object distances could be acquired by micro-moving the focusing lens. At the same time, an optical passive compensation method was adopted to realize an athermalization design in the range of −40 °C~60 °C. According to the design parameters of the optical system, we analyzed the influence of system focusing on mapping accuracy during the focusing process of the airborne-mapping camera. In the laboratory, the camera calibration and imaging experiments were performed at different focusing positions. The results show that the experimental data are consistent with the analysis results. Due to the limited experiment conditions, only a single flight experiment was performed. The results show that the airborne-mapping camera can achieve 1:5000 scale-imaging accuracy. Flight experiments for different flight altitudes are being planned, and the relevant experimental data will be released in the future. In conclusion, the airborne-mapping camera is expected to be applied in various high-precision scale-mapping fields.
33

de Souza-Daw, Tony, Robert Ross, Truong Duy Nhan, Le Anh Hung, Nguyen Duc Quoc Trung, Le Hai Chau, Hoang Minh Phuong, Le Hoang Ngoc, and Mathews Nkhoma. "Design and evaluation of a low-cost street-level image capturing vehicle for south-east Asia." Journal of Engineering, Design and Technology 13, no. 4 (5 October 2015): 579–95. http://dx.doi.org/10.1108/jedt-09-2013-0062.

Abstract:
Purpose – The purpose of this paper is to present a low-cost, highly mobile system for performing street-level imaging. Street-level imaging and geo-location-based services are rapidly growing in both popularity and coverage. Google Street View and Bing StreetSide are two of the free, online services which allow users to search location-based information on interactive maps. In addition, these services also provide software developers and researchers a rich source of street-level images for different purposes – from identifying traffic routes to augmented reality applications. Currently, coverage for Street View and StreetSide is limited to more affluent Western countries with sparse coverage throughout south-east Asia and Africa. In this paper, we present a low-cost system to perform street-level imaging targeted towards the congested, motorcycle-dominant south-east Asian countries. The proposed system uses a catadioptric imaging system to capture 360-degree panoramic images which are geo-located using an on-board GPS. The system is mounted on the back of a motorcycle to provide maximum mobility and access to narrow roads. An innovative backwards remapping technique for flattening the images is discussed along with some results from the first 150 km which have been captured from Southern Vietnam. Design/methodology/approach – The design was a low-cost prototype design using low-cost off-the-shelf hardware with custom software and assembly to facilitate functionality. Findings – The system was shown to work well as a low-cost omnidirectional mapping solution targeted toward sea-of-motorbike road conditions. Research limitations/implications – Some of the pictures returned by the system were unclear. These could be improved by having artificial lighting (currently only ambient light is used), a gyroscope-stabilised imaging platform and a higher resolution camera. Originality/value – This paper discusses a design which facilitates low-cost, street-level imaging for a sea-of-motorcycle environment. The system uses a catadioptric imaging approach to give a wide field of view without excessive image storage requirements using dozens of cameras.
34

Bayro-Corrochano, Eduardo. "Editorial." Robotica 26, no. 4 (July 2008): 415–16. http://dx.doi.org/10.1017/s0263574708004785.

Abstract:
Robotic sensing is a relatively new field of activity compared with the design and control of robot mechanisms. In both areas the role of geometry is natural and necessary for the development of devices, their control and use in challenging environments. At the very beginning odometry, tactile and touch sensors dominated robot sensing. More recently, due to the fall in the price of laser devices, they have become more attractive to the community. On the other hand, progress in photogrammetry, particularly during the nineties as the n-view geometry in projective geometry matured, boot-strapped the use of computer vision as an extra powerful sensor technique for robot guidance. Cameras were used in monocular or stereoscopic fashion, catadioptric systems for omnidirectional vision, fish-eye cameras and camera networks made the use of computer vision even more diverse. Researchers started to combine sensors for 2D and 3D sensing by fusing sensor data in a projective framework. Thanks to the continuous progress in mechatronics, the low prices of fast computers and increasing accuracy of sensor systems, one can build a robot to perceive its surroundings, reconstruct, plan and ultimately act intelligently. In these perception-action systems there is, of course, the urgent need for a geometric stochastic framework to deal with uncertainty in the sensing, planning and action in a robust manner. Here geometry can play a central role for the representation and computing in higher dimensions using projective geometry and differential geometry on Lie group manifolds with a pseudo Euclidean metric. Let us review briefly the developments towards modern geometry that have been often overlooked by the robotic researchers and practitioners.
35

Gonçalves, Nuno. "Low-cost method for the estimation of the shape of quadric mirrors and calibration of catadioptric cameras." Optical Engineering 46, no. 7 (1 July 2007): 073001. http://dx.doi.org/10.1117/1.2752493.

36

Ababsa, Fakhreddine, Hicham Hadj-Abdelkader, and Marouane Boui. "3D Human Pose Estimation with a Catadioptric Sensor in Unconstrained Environments Using an Annealed Particle Filter." Sensors 20, no. 23 (7 December 2020): 6985. http://dx.doi.org/10.3390/s20236985.

Abstract:
The purpose of this paper is to investigate the problem of 3D human tracking in complex environments using a particle filter with images captured by a catadioptric vision system. This issue has been widely studied in the literature on RGB images acquired from conventional perspective cameras, while omnidirectional images have seldom been used and published research works in this field remains limited. In this study, the Riemannian varieties was considered in order to compute the gradient on spherical images and generate a robust descriptor used along with an SVM classifier for human detection. Original likelihood functions associated with the particle filter are proposed, using both geodesic distances and overlapping regions between the silhouette detected in the images and the projected 3D human model. Our approach was experimentally evaluated on real data and showed favorable results compared to machine learning based techniques about the 3D pose accuracy. Thus, the Root Mean Square Error (RMSE) was measured by comparing estimated 3D poses and truth data, resulting in a mean error of 0.065 m when walking action was applied.
37

Egorenko, M. P., and V. S. Efremov. "Choosing the optical materials for multichannel catadioptric systems with Mangin mirrors in the video cameras of miniature drones." Journal of Optical Technology 87, no. 12 (1 December 2020): 715. http://dx.doi.org/10.1364/jot.87.000715.

38

Li, Jing, Yalin Ding, Yiming Cai, Guoqin Yuan, and Mingqiang Zhang. "Optimization Method for Low Tilt Sensitivity of Secondary Mirror Based on the Nodal Aberration Theory." Applied Sciences 12, no. 13 (27 June 2022): 6514. http://dx.doi.org/10.3390/app12136514.

Abstract:
The optical system that combines imaging and image motion compensation is conducive to the miniaturization of aerial mapping cameras, but the movement of optical element for image motion compensation will cause a decrease in image quality. To solve this problem, reducing the sensitivity of moving optical element is one of the effective ways to ensure the imaging quality of aerial mapping cameras. Therefore, this paper proposes an optimization method for the low tilt sensitivity of the secondary mirror based on the Nodal aberration theory. In this method, the analytical expressions of the tilt sensitivity of the secondary mirror in different tilt directions are given in the form of zernike polynomial coefficients, and the influence of the field of view on the sensitivity is expressed in the mathematical model. The desensitization optimization function and desensitization optimization method are proposed. The catadioptric optical system with a focal length of 350 mm is used for desensitization optimization. The results show that the desensitization function proposed in this paper is linearly related to the decrease of sensitivity within a certain range, and the standard deviation of the system after desensitization is 0.020, which is 59% of the system without desensitization. Compared with the traditional method, the method in this paper widens the range of angle reduction sensitivity and has a better desensitization effect. The research results show that the optimization method for low tilt sensitivity of the secondary mirror based on the Nodal aberration theory proposed in this paper reduces the tilt sensitivity of the secondary mirror, revealing that the reduction of the sensitivity depends on the reduction of the aberration coefficient related to the misalignment in the field of view, which is critical for the development of an optical system for aerial mapping cameras that combines imaging and image motion compensation.
39

Lu, Zhenghai, Yaowen Lv, Zhiqing Ai, Ke Suo, Xuanrui Gong, and Yuxuan Wang. "Calibration of a Catadioptric System and 3D Reconstruction Based on Surface Structured Light." Sensors 22, no. 19 (28 September 2022): 7385. http://dx.doi.org/10.3390/s22197385.

Abstract:
In response to the problem of the small field of vision in 3D reconstruction, a 3D reconstruction system based on a catadioptric camera and projector was built by introducing a traditional camera to calibrate the catadioptric camera and projector system. Firstly, the intrinsic parameters of the camera and the traditional camera are calibrated separately. Then, the calibration of the projection system is accomplished by the traditional camera. Secondly, the coordinate system is introduced to calculate, respectively, the position of the catadioptric camera and projector in the coordinate system, and the position relationship between the coordinate systems of the catadioptric camera and the projector is obtained. Finally, the projector is used to project the structured light fringe to realize the reconstruction using a catadioptric camera. The experimental results show that the reconstruction error is 0.75 mm and the relative error is 0.0068 for a target of about 1 m. The calibration method and reconstruction method proposed in this paper can guarantee the ideal geometric reconstruction accuracy.
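The calibration chain described in this abstract amounts to composing rigid-body transforms that were each estimated with the help of the auxiliary (traditional) camera. A minimal sketch with 4×4 homogeneous matrices (the names and toy values below are made up for illustration, not taken from the paper):

    import numpy as np

    def relative_pose(T_ref_a, T_ref_b):
        """Given the poses of frames a and b in a common reference frame,
        return T_a_b, the pose of frame b expressed in frame a."""
        return np.linalg.inv(T_ref_a) @ T_ref_b

    # Toy poses of the catadioptric camera and the projector in the reference frame.
    T_ref_cata = np.eye(4)
    T_ref_proj = np.eye(4); T_ref_proj[:3, 3] = [0.20, 0.00, 0.10]
    T_cata_proj = relative_pose(T_ref_cata, T_ref_proj)  # projector pose in the catadioptric frame
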
40

Rasmana, Susijanto Tri, Harianto Harianto, Pauladie Susanto, Anan Pepe Abseno, and Zendi Zakaria Raga Permana. "Lokalisasi Mobile Robot berdasarkan Citra Kamera OMNI menggunakan Fitur Surf" [Mobile robot localization from omni camera images using SURF features]. Jurnal Teknologi Informasi dan Ilmu Komputer 7, no. 5 (8 October 2020): 1079. http://dx.doi.org/10.25126/jtiik.2020712539.

Abstract:
Self-location detection, or self-localization, is one of the capabilities a mobile robot must possess. It is used to determine the robot's position in an area and as a reference for choosing the direction of the next movement. In this research, robot localization is based on vision data captured by a catadioptric-type omnidirectional camera. The number of closest feature matches between the 360° image captured by the omni camera and a reference image is the basis for predicting the location. Image feature extraction uses the Speeded-Up Robust Features (SURF) method. The first contribution of this research is the optimization of detection accuracy by selecting the right Hessian threshold value and maximum feature distance. The second contribution is the optimization of detection time using the proposed method, which uses only the features of 3 reference images selected from the previous detection results; for trajectories with 28 reference images, this shortens the detection time by a factor of 8.72. The proposed method was tested using an omnidirectional mobile robot moving through an area and evaluated in terms of recall, precision, accuracy, F-measure, G-measure, and detection time. Location detection based on the SIFT method was also evaluated for comparison. Based on the tests, the proposed method performs better than SIFT, with a recall of 89.67%, accuracy of 99.59%, F-measure of 93.58%, G-measure of 93.87%, and a detection time of 0.365 seconds. The SIFT method is better only in precision, at 98.74%.
41

Phan, Tran Dang Khoa, and Cong Thang Pham. "CATADIOPTRIC IMAGE DENOISING: A SPATIALLY VARIANT APPROACH." Eurasian Journal of Mathematical and Computer Applications 11, no. 2 (2023): 82–98. http://dx.doi.org/10.32523/2306-6172-2023-11-2-82-98.

Abstract:
A catadioptric camera uses a conventional camera in conjunction with a quadratic mirror for capturing an omnidirectional field of view in real-time. The resolution of catadioptric images, however, is non-uniform due to the mirror curvature. A widely used approach to processing catadioptric images is to apply classical methods to them directly or via a transformed domain. The aim of this work is to demonstrate that for the task of image denoising, an appropriate approach is to modify classical methods so that they become spatially adaptive with the non-uniform resolution of catadioptric images. To this end, we modify the famous Rudin-Osher-Fatemi (ROF) denoising model by introducing a space-variant regularizer. The proposed model comprises a spatially varying total variation term, which adjusts the edge-preservation and the noise reduction abilities in the whole image domain. We carry out an empirical evaluation of the performance of the proposed model compared with the widely used methods for processing catadioptric images. The results reveal that, despite its simplicity, our model improves the performance of the original method in terms of both quantitative and qualitative aspects.
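One plausible way to write down the space-variant model outlined in this abstract (the precise weighting used in the paper may differ) is a weighted Rudin-Osher-Fatemi functional in which the regularization strength varies over the image domain:

    \min_{u}\; \int_{\Omega} \alpha(x)\,\lvert \nabla u(x) \rvert \, dx \;+\; \frac{\lambda}{2} \int_{\Omega} \bigl( u(x) - f(x) \bigr)^{2}\, dx,

where f is the noisy catadioptric image, u the denoised estimate, and \alpha(x) adapts the amount of smoothing to the spatially varying resolution induced by the mirror curvature; \alpha \equiv 1 recovers the classical ROF model.
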
42

Zhao, Yue, Yuyang Chen, and Liping Yang. "Calibration of Double-Plane-Mirror Catadioptric Camera Based on Coaxial Parallel Circles." Journal of Sensors 2022 (25 August 2022): 1–15. http://dx.doi.org/10.1155/2022/7145400.

Abstract:
A catadioptric camera with a double-mirror system, composed of a pinhole camera and two planar mirrors, can capture multiple catadioptric views of an object. The catadioptric point sets (CPSs) formed by the contour points on the object lie on circles, all of which are coaxial parallel. Based on the property of the polar line of the infinity point with respect to a circle, the infinity points in orthogonal directions can be obtained using any two CPSs, and a pole and polar pair with respect to the image of the absolute conic (IAC) can be obtained through inference of the Laguerre theorem. Thus, the camera intrinsic parameters can be solved. Furthermore, as the five points needed to fit the image of a circle are not easy to obtain accurately, only sets in which five points can be located can be obtained, whereas the points on the line of intersection between the two plane mirrors and the ground plane can easily be obtained accurately. An optimization method based on the analysis of neighboring point sets to compare the intersection points with an image of the center of multiple circle images fitted using the point sets is proposed. Bundle adjustment is then applied to further optimize the camera intrinsic parameters. The feasibility and validity of the proposed calibration methods and their optimization were confirmed through simulation and experiments. Two primary innovations were obtained from the results of this study: (1) by applying coaxial parallel circles to the double-plane-mirror catadioptric camera model, a variety of calibration methods were derived, and (2) we found that the overall model could be optimized by analyzing the features of the neighboring point set and bundle adjustment.
43

Ying, Xianghua, and Zhanyi Hu. "Catadioptric camera calibration using geometric invariants." IEEE Transactions on Pattern Analysis and Machine Intelligence 26, no. 10 (October 2004): 1260–71. http://dx.doi.org/10.1109/tpami.2004.79.

44

Mashita, T. "Calibration Method for Misaligned Catadioptric Camera." IEICE Transactions on Information and Systems E89-D, no. 7 (1 July 2006): 1984–93. http://dx.doi.org/10.1093/ietisy/e89-d.7.1984.

45

Lin, Huei-Yung, Yuan-Chi Chung, and Ming-Liang Wang. "Self-Localization of Mobile Robots Using a Single Catadioptric Camera with Line Feature Extraction." Sensors 21, no. 14 (9 July 2021): 4719. http://dx.doi.org/10.3390/s21144719.

Abstract:
This paper presents a novel self-localization technique for mobile robots using a central catadioptric camera. A unified sphere model for the image projection is derived by the catadioptric camera calibration. The geometric property of the camera projection model is utilized to obtain the intersections of the vertical lines and ground plane in the scene. Different from the conventional stereo vision techniques, the feature points are projected onto a known planar surface, and the plane equation is used for depth computation. The 3D coordinates of the base points on the ground are calculated using the consecutive image frames. The derivation of motion trajectory is then carried out based on the computation of rotation and translation between the robot positions. We develop an algorithm for feature correspondence matching based on the invariability of the structure in the 3D space. The experimental results obtained using the real scene images have demonstrated the feasibility of the proposed method for mobile robot localization applications.
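A minimal sketch of the depth computation this abstract describes (function and variable names are assumptions, not the authors' code): each detected base point is lifted to a unit bearing vector through the calibrated sphere model, and the ray from the camera centre is intersected with the known ground plane n · X = d.

    import numpy as np

    def intersect_ground_plane(bearing, n, d):
        """Intersect the ray t * bearing (t > 0) with the plane n . X = d.

        bearing: (3,) unit direction of the feature in the camera frame
        n, d:    plane normal and offset, known from the camera mounting
        Returns the 3D point in the camera frame, or None for near-parallel rays.
        """
        denom = float(n @ bearing)
        if abs(denom) < 1e-9:
            return None
        t = d / denom
        return t * bearing if t > 0 else None
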
46

Zhang, Yan, Lina Zhao, and Wanbao Hu. "A Survey of Catadioptric Omnidirectional Camera Calibration." International Journal of Information Technology and Computer Science 5, no. 3 (3 February 2013): 13–20. http://dx.doi.org/10.5815/ijitcs.2013.03.02.

47

Zhang, Lei, Xin Du, and Ji-lin Liu. "Using concurrent lines in central catadioptric camera calibration." Journal of Zhejiang University SCIENCE C 12, no. 3 (March 2011): 239–49. http://dx.doi.org/10.1631/jzus.c1000043.

48

LI Can, 李灿, 宋淑梅 SONG Shu-mei, 刘英 LIU Ying, 李淳 LI Chun, 李小虎 LI Xiao-hu, and 孙强 SUN Qiang. "Design of optical system for catadioptric fundus camera." Optics and Precision Engineering 20, no. 8 (2012): 1710–17. http://dx.doi.org/10.3788/ope.20122008.1710.

49

Owen, G., S. T. Yang, R. L. Hsieh, and R. F. W. Pease. "A catadioptric reduction camera for deep UV microlithography." Microelectronic Engineering 11, no. 1-4 (April 1990): 219–22. http://dx.doi.org/10.1016/0167-9317(90)90101-x.

50

Garbacz, Piotr, Piotr Czajka, and Bartłomiej Burski. "Method for Monitoring the Destruction Process of Materials Using a High-Speed Camera and a Catodioptric Stereo-Vision System." Solid State Phenomena 224 (November 2014): 145–50. http://dx.doi.org/10.4028/www.scientific.net/ssp.224.145.

Abstract:
The article presents a method for monitoring the destruction process of materials under mechanical load on a universal testing machine. Observations of the subjects are made using a high-speed camera and a catadioptric stereo-vision system. The camera allows for data acquisition with the capture speed more than a million frames per second (FPS) with reduced resolution. Catadioptric vision systems use mirrors and lenses in order to modify the observation path. In the proposed system four mirrors are required to divide the observation path into two separate paths. This enables monitoring the test specimens from different perspectives, which provides a number of advantages including information redundancy or stereovision. In order to verify the proposed method several metal specimens were put under mechanical load and monitored with the vision system. Test results are enclosed in the article. Selected registered images presenting the moment of the destruction are described in greater detail with data about the capture speed provided. The tests were conducted under front and back lighting in order to assess the best method of illumination.

Go to the bibliography