Selected scholarly literature on the topic "Catadioptric cameras"

Cite a source in APA, MLA, Chicago, Harvard, and many other citation styles


Consult the list of current articles, books, theses, conference papers, and other scholarly sources relevant to the topic "Catadioptric cameras".

Next to each source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read the abstract (summary) of the work online, if it is included in the metadata.

Journal articles on the topic "Catadioptric cameras":

1

Benamar, F., S. Elfkihi, C. Demonceaux, E. Mouaddib, and D. Aboutajdine. "Visual contact with catadioptric cameras". Robotics and Autonomous Systems 64 (February 2015): 100–119. http://dx.doi.org/10.1016/j.robot.2014.09.036.

2

Rostkowska, Marta, and Piotr Skrzypczyński. "Optimizing Appearance-Based Localization with Catadioptric Cameras: Small-Footprint Models for Real-Time Inference on Edge Devices". Sensors 23, no. 14 (18 July 2023): 6485. http://dx.doi.org/10.3390/s23146485.

Abstract:
This paper considers the task of appearance-based localization: visual place recognition from omnidirectional images obtained from catadioptric cameras. The focus is on designing an efficient neural network architecture that accurately and reliably recognizes indoor scenes on distorted images from a catadioptric camera, even in self-similar environments with few discernible features. As the target application is the global localization of a low-cost service mobile robot, the proposed solutions are optimized toward being small-footprint models that provide real-time inference on edge devices, such as Nvidia Jetson. We compare several design choices for the neural network-based architecture of the localization system and then demonstrate that the best results are achieved with embeddings (global descriptors) yielded by exploiting transfer learning and fine tuning on a limited number of catadioptric images. We test our solutions on two small-scale datasets collected using different catadioptric cameras in the same office building. Next, we compare the performance of our system to state-of-the-art visual place recognition systems on the publicly available COLD Freiburg and Saarbrücken datasets that contain images collected under different lighting conditions. Our system compares favourably to the competitors both in terms of the accuracy of place recognition and the inference time, providing a cost- and energy-efficient means of appearance-based localization for an indoor service robot.
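For context, appearance-based localization of this kind reduces at query time to nearest-neighbour retrieval over global descriptors. A minimal Python sketch of that retrieval step follows; the descriptor size, place labels, and random stand-in embeddings are placeholders, not the authors' actual network or pipeline.

import numpy as np

def localize(query_descriptor, database_descriptors, place_labels):
    """Return the place label of the most similar reference image (cosine similarity).
    The descriptors are assumed to be global embeddings produced by some CNN;
    the network used in the paper is not reproduced here."""
    q = query_descriptor / np.linalg.norm(query_descriptor)
    db = database_descriptors / np.linalg.norm(database_descriptors, axis=1, keepdims=True)
    scores = db @ q
    return place_labels[int(np.argmax(scores))]

# Toy usage with random stand-in descriptors (100 reference images, 256-D embeddings).
rng = np.random.default_rng(0)
db = rng.normal(size=(100, 256))
labels = [f"place_{i % 10}" for i in range(100)]
query = db[42] + 0.05 * rng.normal(size=256)   # a slightly perturbed view of image 42
print(localize(query, db, labels))             # most likely "place_2"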
3

Khurana, M., and C. Armenakis. "LOCALIZATION AND MAPPING USING A NON-CENTRAL CATADIOPTRIC CAMERA SYSTEM". ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences IV-2 (28 May 2018): 145–52. http://dx.doi.org/10.5194/isprs-annals-iv-2-145-2018.

Abstract:
This work details the development of an indoor navigation and mapping system using a non-central catadioptric omnidirectional camera and its implementation for mobile applications. Omnidirectional catadioptric cameras find their use in navigation and mapping of robotic platforms, owing to their wide field of view. Having a wider field of view, or rather a potential 360° field of view, allows the system to “see and move” more freely in the navigation space. A catadioptric camera system is a low-cost system consisting of a mirror and a camera; any perspective camera can be used. A platform was constructed to combine the mirror and a camera into a catadioptric system. A calibration method was developed to obtain the relative position and orientation between the two components so that they can be treated as one monolithic system. The mathematical model for localizing the system was derived from conditions based on the reflective properties of the mirror. The obtained platform positions were then used to map the environment using epipolar geometry. Experiments were performed to test the mathematical models and the achieved localization and mapping accuracies of the system. An iterative process of positioning and mapping was applied to determine object coordinates of an indoor environment while navigating the mobile platform. Camera localization and the 3D coordinates of object points achieved decimetre-level accuracies.
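The "reflective properties of the mirror" mentioned above come down to the specular reflection law for viewing rays. As background only (this is textbook geometry, not the paper's full non-central model), a direction d hitting the mirror with unit normal n leaves along r = d - 2(d·n)n:

import numpy as np

def reflect(d, n):
    """Reflect a ray direction d about the unit surface normal n: r = d - 2(d.n)n."""
    n = n / np.linalg.norm(n)
    return d - 2.0 * np.dot(d, n) * n

# A ray travelling along -z hits a 45-degree mirror whose normal lies in the x-z plane.
d = np.array([0.0, 0.0, -1.0])
n = np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0)
print(reflect(d, n))   # [1. 0. 0.]: the ray is folded into the +x direction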
4

Zhao, Yue, and Xin Yang. "Calibration Method for Central Catadioptric Camera Using Multiple Groups of Parallel Lines and Their Properties". Journal of Sensors 2021 (2 July 2021): 1–13. http://dx.doi.org/10.1155/2021/6675110.

Abstract:
This paper presents an approach for calibrating omnidirectional single-viewpoint sensors using the central catadioptric projection properties of parallel lines. Single-viewpoint sensors are widely used in robot navigation and driverless cars; thus, a high degree of calibration accuracy is needed. In the unit viewing sphere model of central catadioptric cameras, a line in a three-dimensional space is projected to a great circle, resulting in the projections of a group of parallel lines intersecting only at the endpoints of the diameter of the great circle. Based on this property, when there are multiple groups of parallel lines, a group of orthogonal directions can be determined by a rectangle constructed by two groups of parallel lines in different directions. When there is a single group of parallel lines in space, the diameter and tangents at their endpoints determine a group of orthogonal directions for the plane containing the great circle. The intrinsic parameters of the camera can be obtained from the orthogonal vanishing points in the central catadioptric image plane. An optimization algorithm for line image fitting based on the properties of antipodal points is proposed. The performance of the algorithm is verified using simulated setups. Our calibration method was validated through simulations and real experiments with a catadioptric camera.
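The unit viewing sphere model used here is the standard unified model for central catadioptric cameras: a 3-D point is first projected onto the unit sphere and then re-projected from a point at distance ξ above the sphere centre before the intrinsic matrix is applied. The following Python sketch illustrates this forward projection; the values of ξ and K are placeholders, not the parameters estimated in the paper.

import numpy as np

def project_unified(X, xi, K):
    """Forward projection of a 3-D point X (camera frame) under the unified sphere model."""
    Xs = X / np.linalg.norm(X)                  # projection onto the unit viewing sphere
    m = np.array([Xs[0] / (Xs[2] + xi),
                  Xs[1] / (Xs[2] + xi),
                  1.0])                         # normalized image point
    p = K @ m
    return p[:2] / p[2]

xi = 0.9                                        # mirror parameter (xi = 0 gives a perspective camera)
K = np.array([[400.0, 0.0, 320.0],
              [0.0, 400.0, 240.0],
              [0.0, 0.0, 1.0]])
print(project_unified(np.array([0.5, -0.2, 2.0]), xi, K))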
5

Córdova-Esparza, Diana-Margarita, Juan Terven, Julio-Alejandro Romero-González, and Alfonso Ramírez-Pedraza. "Three-Dimensional Reconstruction of Indoor and Outdoor Environments Using a Stereo Catadioptric System". Applied Sciences 10, no. 24 (10 December 2020): 8851. http://dx.doi.org/10.3390/app10248851.

Abstract:
In this work, we present a panoramic 3D stereo reconstruction system composed of two catadioptric cameras. Each one consists of a CCD camera and a parabolic convex mirror that allows the acquisition of catadioptric images. We describe the calibration approach and propose the improvement of existing deep feature matching methods with epipolar constraints. We show that the improved matching algorithm covers more of the scene than classic feature detectors, yielding broader and denser reconstructions for outdoor environments. Our system can also generate accurate measurements in the wild without large amounts of data used in deep learning-based systems. We demonstrate the system’s feasibility and effectiveness as a practical stereo sensor with real experiments in indoor and outdoor environments.
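The epipolar filtering of putative matches mentioned above can be illustrated, in its generic perspective form, with the standard Sampson-distance test against a fundamental matrix; the paper's adaptation of epipolar constraints to deep feature matches on catadioptric images is more involved and is not reproduced here.

import numpy as np

def sampson_distance(F, x1, x2):
    """Sampson distance for homogeneous correspondences x1, x2 of shape (N, 3)."""
    Fx1 = x1 @ F.T                     # rows are F @ x1_i
    Ftx2 = x2 @ F                      # rows are (F^T @ x2_i) transposed
    num = np.sum(x2 * Fx1, axis=1) ** 2
    den = Fx1[:, 0]**2 + Fx1[:, 1]**2 + Ftx2[:, 0]**2 + Ftx2[:, 1]**2
    return num / den

def filter_matches(F, x1, x2, threshold=1.0):
    """Keep only the correspondences whose Sampson distance falls below the threshold."""
    keep = sampson_distance(F, x1, x2) < threshold
    return x1[keep], x2[keep]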
6

Li, Yongle, and Jingtao Lou. "Omnigradient Based Total Variation Minimization for Enhanced Defocus Deblurring of Omnidirectional Images". International Journal of Optics 2014 (2014): 1–9. http://dx.doi.org/10.1155/2014/732937.

Abstract:
We propose a new method of image restoration for catadioptric defocus blur using omnitotal variation (Omni-TV) minimization based on the omnigradient. Catadioptric omnidirectional imaging systems usually consist of conventional cameras and curved mirrors for capturing a 360° field of view. The problem of catadioptric omnidirectional imaging defocus blur, which is caused by the lens aperture and mirror curvature, becomes more severe when high-resolution sensors and large apertures are used. In an omnidirectional image, two points near each other may not be close to one another in the 3D scene, so traditional gradient computation cannot be directly applied to omnidirectional image processing. Thus, an omnigradient computation method combined with the characteristics of catadioptric omnidirectional imaging is proposed. Following this, Omni-TV minimization is used as the constraint for deconvolution regularization, leading to the restoration of defocus blur in omnidirectional images and yielding fully sharp results. The proposed method is important for improving catadioptric omnidirectional imaging quality and promoting applications in related fields such as omnidirectional video and image processing.
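In its generic (non-omnidirectional) form, the optimization behind this kind of restoration is total-variation-regularized deconvolution; the paper's contribution is to replace the ordinary image gradient with an omnigradient suited to the catadioptric geometry. The standard objective, given here only as background and not as the paper's exact formulation, reads

\hat{u} = \arg\min_{u} \; \tfrac{1}{2}\,\lVert k * u - f \rVert_2^2 + \lambda \, \mathrm{TV}(u), \qquad \mathrm{TV}(u) = \int \lVert \nabla u \rVert \, dx,

where f is the observed blurred image, k the defocus kernel, u the latent sharp image, and λ the regularization weight; in Omni-TV the gradient ∇u is replaced by the omnigradient.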
7

Zhang, Yu, Xiping Xu, Ning Zhang, and Yaowen Lv. "A Semantic SLAM System for Catadioptric Panoramic Cameras in Dynamic Environments". Sensors 21, no. 17 (1 September 2021): 5889. http://dx.doi.org/10.3390/s21175889.

Abstract:
When a traditional visual SLAM system works in a dynamic environment, it will be disturbed by dynamic objects and perform poorly. In order to overcome the interference of dynamic objects, we propose a semantic SLAM system for catadioptric panoramic cameras in dynamic environments. A real-time instance segmentation network is used to detect potential moving targets in the panoramic image. In order to find the real dynamic targets, potential moving targets are verified according to the sphere’s epipolar constraints. Then, when extracting feature points, the dynamic objects in the panoramic image are masked. Only static feature points are used to estimate the pose of the panoramic camera, so as to improve the accuracy of pose estimation. In order to verify the performance of our system, experiments were conducted on public data sets. The experiments showed that in a highly dynamic environment, the accuracy of our system is significantly better than traditional algorithms. By calculating the RMSE of the absolute trajectory error, we found that our system performed up to 96.3% better than traditional SLAM. Our catadioptric panoramic camera semantic SLAM system has higher accuracy and robustness in complex dynamic environments.
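The verification of potential moving targets via the sphere's epipolar constraint amounts to checking the residual of the essential-matrix constraint between unit bearing vectors. A simplified sketch is given below; the essential matrix E and the threshold are assumed to be given, and the paper's actual verification procedure is more elaborate.

import numpy as np

def is_static(x1, x2, E, thresh=1e-2):
    """Epipolar check for unit bearing vectors x1, x2 of the same point in two views.
    For a static point the residual |x2^T E x1| is ideally zero; a large residual
    suggests the point moved between frames and should be masked out."""
    x1 = x1 / np.linalg.norm(x1)
    x2 = x2 / np.linalg.norm(x2)
    return abs(x2 @ E @ x1) < thresh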
8

Ilizirov, Grigory, and Sagi Filin. "POSE ESTIMATION AND MAPPING USING CATADIOPTRIC CAMERAS WITH SPHERICAL MIRRORS". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B3 (9 June 2016): 43–47. http://dx.doi.org/10.5194/isprs-archives-xli-b3-43-2016.

Abstract:
Catadioptric cameras have the advantage of broadening the field of view and revealing otherwise occluded object parts. However, they differ geometrically from standard central perspective cameras because of light reflection from the mirror surface which alters the collinearity relation and introduces severe non-linear distortions of the imaged scene. Accommodating for these features, we present in this paper a novel modeling for pose estimation and reconstruction while imaging through spherical mirrors. We derive a closed-form equivalent to the collinearity principle via which we estimate the system’s parameters. Our model yields a resection-like solution which can be developed into a linear one. We show that accurate estimates can be derived with only a small set of control points. Analysis shows that control configuration in the orientation scheme is rather flexible and that high levels of accuracy can be reached in both pose estimation and mapping. Clearly, the ability to model objects which fall outside of the immediate camera field-of-view offers an appealing means to supplement 3-D reconstruction and modeling.
9

Ilizirov, Grigory, and Sagi Filin. "POSE ESTIMATION AND MAPPING USING CATADIOPTRIC CAMERAS WITH SPHERICAL MIRRORS". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B3 (9 June 2016): 43–47. http://dx.doi.org/10.5194/isprsarchives-xli-b3-43-2016.

Abstract:
Catadioptric cameras have the advantage of broadening the field of view and revealing otherwise occluded object parts. However, they differ geometrically from standard central perspective cameras because of light reflection from the mirror surface which alters the collinearity relation and introduces severe non-linear distortions of the imaged scene. Accommodating for these features, we present in this paper a novel modeling for pose estimation and reconstruction while imaging through spherical mirrors. We derive a closed-form equivalent to the collinearity principle via which we estimate the system’s parameters. Our model yields a resection-like solution which can be developed into a linear one. We show that accurate estimates can be derived with only a small set of control points. Analysis shows that control configuration in the orientation scheme is rather flexible and that high levels of accuracy can be reached in both pose estimation and mapping. Clearly, the ability to model objects which fall outside of the immediate camera field-of-view offers an appealing means to supplement 3-D reconstruction and modeling.
10

Mariottini, Gian Luca, and Domenico Prattichizzo. "Image-based Visual Servoing with Central Catadioptric Cameras". International Journal of Robotics Research 27, no. 1 (January 2008): 41–56. http://dx.doi.org/10.1177/0278364907084320.


Theses on the topic "Catadioptric cameras":

1

Bastanlar, Yalin. "Parameter Extraction And Image Enhancement For Catadioptric Omnidirectional Cameras". Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12606034/index.pdf.

Abstract:
In this thesis, catadioptric omnidirectional imaging systems are analyzed in detail. Omnidirectional image (ODI) formation characteristics of different camera-mirror configurations are examined and geometrical relations for panoramic and perspective image generation with common mirror types are summarized. A method is developed to determine the unknown parameters of a hyperboloidal-mirrored system using the world coordinates of a set of points and their corresponding image points on the ODI. A linear relation between the parameters of the hyperboloidal mirror is determined as well. Conducted research and findings are instrumental for calibration of such imaging systems. The resolution problem due to the up-sampling while transferring the pixels from ODI to the panoramic image is defined. Enhancing effects of standard interpolation methods on the panoramic images are analyzed and edge detection-based techniques are developed for improving the resolutional quality of the panoramic images. Also, the projection surface alternatives of generating panoramic images are evaluated.
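The panoramic image generation discussed in this thesis is, at its core, a polar-to-rectangular resampling of the omnidirectional image (ODI), which is where the up-sampling and interpolation issues arise. A minimal sketch using OpenCV's remap is given below; the image centre, radii, and output size are placeholders, and the thesis's interpolation and edge-based enhancement steps are not reproduced.

import numpy as np
import cv2

def unwarp_panorama(odi, cx, cy, r_min, r_max, out_w=1024, out_h=256):
    """Unwarp a catadioptric ODI into a panoramic image by polar resampling."""
    theta = np.linspace(0.0, 2.0 * np.pi, out_w, endpoint=False)
    radius = np.linspace(r_max, r_min, out_h)           # top row of panorama = outer ring
    tt, rr = np.meshgrid(theta, radius)
    map_x = (cx + rr * np.cos(tt)).astype(np.float32)
    map_y = (cy + rr * np.sin(tt)).astype(np.float32)
    return cv2.remap(odi, map_x, map_y, interpolation=cv2.INTER_LINEAR)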
2

Bastanlar, Yalin. "Structure-from-motion For Systems With Perspective And Omnidirectional Cameras". Phd thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12610833/index.pdf.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
In this thesis, a pipeline for structure-from-motion with mixed camera types is described and methods for the steps of this pipeline to make it effective and automatic are proposed. These steps can be summarized as calibration, feature point matching, epipolar geometry and pose estimation, triangulation and bundle adjustment. We worked with catadioptric omnidirectional and perspective cameras and employed the sphere camera model, which encompasses single-viewpoint catadioptric systems as well as perspective cameras. For calibration of the sphere camera model, a new technique that has the advantage of linear and automatic parameter initialization is proposed. The projection of 3D points on a catadioptric image is represented linearly with a 6x10 projection matrix using lifted coordinates. This projection matrix is computed with an adequate number of 3D-2D correspondences and decomposed to obtain intrinsic and extrinsic parameters. Then, a non-linear optimization is performed to refine the parameters. For feature point matching between hybrid camera images, scale invariant feature transform (SIFT) is employed and a method is proposed to improve the SIFT matching output. With the proposed approach, omnidirectional-perspective matching performance significantly increases to enable automatic point matching. In addition, the use of virtual camera plane (VCP) images is evaluated, which are perspective images produced by unwarping the corresponding region in the omnidirectional image. The hybrid epipolar geometry is estimated using random sample consensus (RANSAC) and alternatives of pose estimation methods are evaluated. A weighting strategy for iterative linear triangulation which improves the structure estimation accuracy is proposed. Finally, multi-view structure-from-motion (SfM) is performed by employing the approach of adding views to the structure one by one. To refine the structure estimated with multiple views, sparse bundle adjustment method is employed with a modification to use the sphere camera model. Experiments on simulated and real images for the proposed approaches are conducted. Also, the results of hybrid multi-view SfM with real images are demonstrated, emphasizing the cases where it is advantageous to use omnidirectional cameras with perspective cameras.
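The 6x10 projection matrix mentioned in this abstract acts on "lifted" coordinates, i.e. on the degree-2 monomials of the homogeneous point coordinates. The sketch below only illustrates the lifting itself; the monomial ordering is a convention and may differ from the one used in the thesis.

import numpy as np

def lift_2d(p):
    """Lift a homogeneous image point (x, y, w) to its 6-vector of degree-2 monomials."""
    x, y, w = p
    return np.array([x*x, x*y, y*y, x*w, y*w, w*w])

def lift_3d(P):
    """Lift a homogeneous 3-D point (X, Y, Z, W) to its 10-vector of degree-2 monomials."""
    X, Y, Z, W = P
    return np.array([X*X, X*Y, X*Z, X*W, Y*Y, Y*Z, Y*W, Z*Z, Z*W, W*W])

# With these liftings, central catadioptric projection becomes linear:
# lift_2d(p) ~ A @ lift_3d(P), where A is the 6x10 matrix estimated from 3D-2D
# correspondences and then decomposed into intrinsic and extrinsic parameters.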
3

Mohtaram, Noureddine. "Reconstruction 3D dense d'objets sans recul par vision catadioptrique". Electronic Thesis or Diss., Amiens, 2019. http://www.theses.fr/2019AMIE0023.

Abstract:
This PhD work addresses the problem of complete dense 3D reconstruction of objects without recoil. We designed and developed a system for the three-dimensional reconstruction of real objects based on a camera and two planar mirrors, referred to as a Planar Catadioptric Stereo (PCS) system. We first model the PCS system as a network of virtual cameras in order to calibrate it. We then formulate the matching of feature points detected in the reflected images using a variant of the ASIFT method adapted to planar mirrors, which we call AMIFT. False correspondences are rejected with the Symmetric-RANSAC method proposed in this thesis. To reconstruct the object surface, and not just a sparse set of points, a dense matching step is subsequently required. We therefore propose a pixel-correspondence search that incorporates projective geometry through local homographies. This method estimates the geometric transformation linking the image of the object to one of its inter-reflections on the planar mirrors by minimizing a cost function with the Branch-and-Bound optimization technique. This allows us to adapt dense 3D reconstruction, which is fundamentally based on the triangulation of correspondences between images. Finally, we apply this 3D reconstruction pipeline to images of multiple reflections in order to verify the hypothesis of complete 3D reconstruction from a PCS system. The performance of the proposed system is validated by experiments on synthetic images, and the results obtained demonstrate the quality of the 3D reconstruction.
4

Ducrocq, Julien. "Vision catadioptrique pour favoriser la perception d'environnements hétérogènes". Electronic Thesis or Diss., Amiens, 2022. http://www.theses.fr/2022AMIE0067.

Abstract:
This thesis presents the design of two catadioptric cameras capable of capturing usable images of heterogeneous environments. These cameras belong to the field of adaptive vision, which gathers cameras whose optics or sensor have heterogeneous properties that can vary over time. Adaptive cameras are able, among other things, to capture heterogeneous environments whose physical properties or geometry vary across space. The thesis proposes a survey of the state of the art of adaptive cameras able to capture certain types of heterogeneous environments. First, we consider scenes characterized by a spatial variation of radiances, with a dynamic range on the order of 120 decibels. Such scenes challenge conventional cameras, whose images contain saturated pixels as well as pixels that are too dark because of their lower dynamic range. In both cases, these image regions carry no visual information about the scene and are not usable. In order to capture the radiances associated with these bright and dark areas, high dynamic range (HDR) cameras are used. Nevertheless, no panoramic HDR camera is currently available. The first contribution of this thesis is therefore the design of a panoramic HDR camera, aimed at improving robot navigation outdoors, using visual perception alone, in environments with widely varying radiances. Mounted on a mobile robot, this camera enlarges the convergence domain and improves the positioning accuracy of the robot outdoors by direct visual servoing. Second, we consider scenes whose level of detail is non-uniform across space: some scene elements are richer in visual information than others. To capture such heterogeneous environments, the second contribution of this thesis is an adaptive camera. It relies on a new deformable mirror whose local curvatures allow the number of pixels occupied by scene regions in the image to be increased or reduced. This camera, nicknamed Visadapt, captures multi-resolution images according to the scene content. From one scene to another, the shape of the mirror can be changed to optimize the resolution of the images it captures for the new scene. The mirror surface is made of mylar, a material that is both reflective and deformable, and is deformed by a grid of linear actuators placed beneath it. This mirror, planar in its initial state, can be deformed so that the scene regions are captured by Visadapt at the desired resolution. A simulation study was used to fix the characteristics of Visadapt, in particular the dimensions and materials of its components and the actuator spacing. A prototype was built from the parameters defined in simulation. Experiments showed that this prototype can magnify up to four scene regions at once. The thesis concludes with perspectives for improving the two camera prototypes, in order to enhance their performance and the variety of images they can capture, and proposes research directions to push these two camera concepts, and adaptive vision in general, further.
5

Kawahara, Ryo. "A Novel Catadioptric Ray-Pixel Camera Model and its Application to 3D Reconstruction". Kyoto University, 2019. http://hdl.handle.net/2433/242435.

6

Aziz, Fatima. "Approche géométrique couleur pour le traitement des images catadioptriques". Thesis, Limoges, 2018. http://www.theses.fr/2018LIMO0080/document.

Abstract:
This manuscript investigates omnidirectional catadioptric color images as Riemannian manifolds. This geometric representation offers interesting avenues for resolving problems related to the distortions introduced by the catadioptric system in the context of color perception for autonomous systems. Our work starts with an overview of omnidirectional vision, the different acquisition devices, and the geometric projection models. We then present the basic notions of Riemannian geometry and its use in image processing, which leads us to introduce the differential operators on Riemannian manifolds that are useful in this study. We then develop a method for constructing a hybrid metric tensor adapted to color catadioptric images. This tensor has the dual characteristic of depending on both the geometric position of the points in the image and their photometric coordinates. An important part of this thesis is the exploitation of the proposed metric tensor for various catadioptric image processing tasks. Indeed, the Gaussian function is at the core of several filters and operators for applications such as denoising or the extraction of low-level features from the Gaussian scale-space representation. We therefore build a new Gaussian kernel that depends on the Riemannian metric tensor. It has the advantage of being applicable directly on the catadioptric image plane, while being spatially variant and dependent on the local image information. In the final part of this thesis, we discuss robotic applications of the hybrid metric: we define the free space and distance transforms in the omnidirectional image and extract the geodesic medial axis, a relevant topological representation for autonomous navigation, which we use to develop an optimal trajectory planning method.
7

Hart, Charles. "A Low-cost Omni-directional Visual Bearing Only Localization System". Case Western Reserve University School of Graduate Studies / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=case1386695142.

8

Chen, Cheng-I., and 陳政毅. "Catadioptric Camera Calibration using Planar Calibration Objects". Thesis, 2006. http://ndltd.ncl.edu.tw/handle/76234730503256581655.

Abstract:
Master's thesis, National Chiao Tung University, Institute of Computer Science and Engineering, academic year 94 (ROC calendar).
Catadioptric cameras have been widely used in video surveillance and robot navigation applications owing to their large field of view. However, it is more difficult to calibrate this kind of camera than to calibrate perspective cameras, because the camera structure is more complicated and there are more camera parameters to be determined. A catadioptric camera can be either central or non-central, depending on whether it satisfies the single viewpoint constraint. A central catadioptric camera has a single center of projection, hence epipolar geometry can be applied to calibrate the camera parameters. Considering practical issues, such as the large size of central catadioptric cameras and the difficulty of precise alignment between the camera and the mirror, most off-the-shelf catadioptric cameras are non-central ones that do not obey the single viewpoint constraint. A non-central catadioptric camera can be calibrated by photogrammetric methods requiring correspondences between 3-D world coordinates and 2-D image coordinates. In this thesis, we propose novel calibration methods for determining the camera parameters of both central and non-central catadioptric cameras. Our methods utilize planar objects and can achieve very accurate results while keeping the calibration procedures simple. For a central catadioptric camera, the 2-D projection point in the image for a 3-D projection ray can be determined by the viewing sphere model. In the proposed calibration procedure, we place a planar calibration plate at several positions surrounding the camera and capture an image for each pose of the calibration plate. With the viewing sphere model and the associated parameters, we can unwarp the captured catadioptric image into the image captured by a virtual perspective camera with known intrinsic parameters as well as extrinsic parameters relative to the viewing sphere. We show that moving a calibration plate around the catadioptric camera is equivalent to placing the same calibration plate at different poses relative to a static, virtual, perspective camera. We can then use this set of unwarped perspective images to calculate the relative poses of the calibration plate as well as the projection error of the feature points on the calibration plate by using the homography method. The associated parameters of the viewing sphere model can be obtained by minimizing the projection error in a nonlinear optimization procedure. For a non-central catadioptric camera, the single viewpoint constraint does not hold and the viewing sphere model cannot be applied, so it is even more difficult to calibrate. In this work, we determine the projection model of the non-central catadioptric camera through a calibrated central catadioptric camera as an intermediate. First, we use a set of LCD panels with fixed positions to present feature patterns to the central catadioptric camera. The image coordinates of these feature patterns in the captured images are automatically determined, and the corresponding 3-D coordinates can be calculated since the camera is calibrated. The same set of LCD panels is then presented to the non-central catadioptric camera. For each feature point, we can obtain its 2-D coordinate in the image captured by the non-central catadioptric camera. Since the 3-D coordinates of the feature points are determined beforehand, the camera parameters of the non-central catadioptric camera can then be obtained photogrammetrically.
In the proposed method, we use Mashita's method to determine the initial values of the parameters in the reflected-ray model and then optimize these parameters by minimizing the projection error. Experiments with simulated and real data clearly demonstrate the robustness and accuracy of the proposed calibration methods. In the simulations, we add zero-mean Gaussian noise with standard deviation ranging from 0.0 to 2.0 to evaluate the performance of the proposed methods. We also implement a calibration method using geometric invariants for the purpose of comparison. The results show that our methods are indeed robust and accurate, and perform better than the one using geometric invariants. Moreover, we also present an integrated surveillance system in which the calibrated non-central catadioptric camera is used to navigate a mobile robot for patrolling.

Books on the topic "Catadioptric cameras":

1

Klette, Reinhard, and Kostas Daniilidis. Imaging Beyond the Pinhole Camera. Springer, 2008.

2

Klette, Reinhard, and Kostas Daniilidis. Imaging Beyond the Pinhole Camera. Kostas Daniilidis, 2010.

3

Klette, Reinhard, and Kostas Daniilidis. Imaging Beyond the Pinhole Camera. Springer, 2006.

4

Daniilidis, Kostas, and Reinhard Klette, eds. Imaging Beyond the Pinhole Camera (Computational Imaging and Vision). Springer, 2007.


Book chapters on the topic "Catadioptric cameras":

1

Nayar, S. K., and V. Peri. "Folded Catadioptric Cameras". In Panoramic Vision, 103–19. New York, NY: Springer New York, 2001. http://dx.doi.org/10.1007/978-1-4757-3482-9_6.

2

Baker, S., and S. K. Nayar. "Single Viewpoint Catadioptric Cameras". In Panoramic Vision, 39–71. New York, NY: Springer New York, 2001. http://dx.doi.org/10.1007/978-1-4757-3482-9_4.

3

Sturm, Peter, and João P. Barreto. "General Imaging Geometry for Central Catadioptric Cameras". In Lecture Notes in Computer Science, 609–22. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-88693-8_45.

4

Pajdla, T., T. Svoboda, and V. Hlaváč. "Epipolar Geometry of Central Panoramic Catadioptric Cameras". In Panoramic Vision, 73–102. New York, NY: Springer New York, 2001. http://dx.doi.org/10.1007/978-1-4757-3482-9_5.

5

Schönbein, Miriam, Holger Rapp, and Martin Lauer. "Panoramic 3D Reconstruction with Three Catadioptric Cameras". In Advances in Intelligent Systems and Computing, 345–53. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-33926-4_32.

6

Tolvanen, Antti, Christian Perwass, and Gerald Sommer. "Projective Model for Central Catadioptric Cameras Using Clifford Algebra". In Lecture Notes in Computer Science, 192–99. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11550518_24.

7

Ying, Xianghua, and Zhanyi Hu. "Can We Consider Central Catadioptric Cameras and Fisheye Cameras within a Unified Imaging Model". In Lecture Notes in Computer Science, 442–55. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-24670-1_34.

8

Agrawal, Amit, Yuichi Taguchi, and Srikumar Ramalingam. "Analytical Forward Projection for Axial Non-central Dioptric and Catadioptric Cameras". In Computer Vision – ECCV 2010, 129–43. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15558-1_10.

9

Bermudez-Cameo, Jesus, Gonzalo Lopez-Nicolas, and Jose J. Guerrero. "A Unified Framework for Line Extraction in Dioptric and Catadioptric Cameras". In Computer Vision – ACCV 2012, 627–39. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-37447-0_48.

10

Ramalingam, Srikumar. "Catadioptric Camera". In Computer Vision, 85–89. Boston, MA: Springer US, 2014. http://dx.doi.org/10.1007/978-0-387-31439-6_486.


Conference papers on the topic "Catadioptric cameras":

1

Endres, Felix, Christoph Sprunk, Rainer Kummerle, and Wolfram Burgard. "A catadioptric extension for RGB-D cameras". In 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2014). IEEE, 2014. http://dx.doi.org/10.1109/iros.2014.6942600.

2

Mei, Christopher, Selim Benhimane, Ezio Malis, and Patrick Rives. "Homography-based Tracking for Central Catadioptric Cameras". In 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2006. http://dx.doi.org/10.1109/iros.2006.282553.

3

Schonbein, Miriam, Tobias Straus, and Andreas Geiger. "Calibrating and centering quasi-central catadioptric cameras". In 2014 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2014. http://dx.doi.org/10.1109/icra.2014.6907507.

4

Gasparini, Simone, Peter Sturm, and Joao P. Barreto. "Plane-based calibration of central catadioptric cameras". In 2009 IEEE 12th International Conference on Computer Vision (ICCV). IEEE, 2009. http://dx.doi.org/10.1109/iccv.2009.5459336.

5

Bazin, J. C., I. Kweon, C. Demonceaux, and P. Vasseur. "Automatic calibration of catadioptric cameras in urban environment". In 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2008. http://dx.doi.org/10.1109/iros.2008.4650590.

6

Ying, Xianghua, and Zhanyi Hu. "Spherical objects based motion estimation for catadioptric cameras". In Proceedings of the 17th International Conference on Pattern Recognition, 2004. ICPR 2004. IEEE, 2004. http://dx.doi.org/10.1109/icpr.2004.1334510.

7

Mei, C., S. Benhimane, E. Malis, and P. Rives. "Constrained Multiple Planar Template Tracking for Central Catadioptric Cameras". In British Machine Vision Conference 2006. British Machine Vision Association, 2006. http://dx.doi.org/10.5244/c.20.64.

8

Duan, F. Q., R. Liu, and M. Q. Zhou. "A new easy calibration algorithm for para-catadioptric cameras". In 2010 25th International Conference of Image and Vision Computing New Zealand (IVCNZ). IEEE, 2010. http://dx.doi.org/10.1109/ivcnz.2010.6148805.

9

Wu Keyu, Gao He, Zhou Fuqiang, and Tan Haishu. "Simplified calibration of non-single viewpoint catadioptric omnidirectional cameras". In 2012 IEEE International Conference on Oxide Materials for Electronic Engineering (OMEE). IEEE, 2012. http://dx.doi.org/10.1109/omee.2012.6343532.

10

Dias, Tiago Jose Simoes, Pedro Daniel dos Santos Miraldo, and Nuno Miguel Mendonca da Silva Goncalves. "A Framework for Augmented Reality Using Non-Central Catadioptric Cameras". In 2015 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC). IEEE, 2015. http://dx.doi.org/10.1109/icarsc.2015.31.


Reports by organizations on the topic "Catadioptric cameras":

1

Fagan, Joseph, Eddy Tsui, Terence Ringwood, Mark Mellini, and Amir Morcos. Catadioptric Omni-Directional System for M1A2 Abrams (360-Degree Camera System). Fort Belvoir, VA: Defense Technical Information Center, November 2005. http://dx.doi.org/10.21236/ada440545.

