Ready-made bibliography on the topic "Catadioptric cameras"
Create accurate references in APA, MLA, Chicago, Harvard, and many other styles
Contents
Browse lists of current articles, books, theses, conference abstracts, and other scholarly sources on the topic "Catadioptric cameras".
An "Add to bibliography" button sits next to every work in the list. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the publication as a ".pdf" file and read its abstract online, whenever these are provided in the metadata.
Journal articles on the topic "Catadioptric cameras"
Benamar, F., S. Elfkihi, C. Demonceaux, E. Mouaddib, and D. Aboutajdine. "Visual contact with catadioptric cameras". Robotics and Autonomous Systems 64 (February 2015): 100–119. http://dx.doi.org/10.1016/j.robot.2014.09.036.
Rostkowska, Marta, and Piotr Skrzypczyński. "Optimizing Appearance-Based Localization with Catadioptric Cameras: Small-Footprint Models for Real-Time Inference on Edge Devices". Sensors 23, no. 14 (July 18, 2023): 6485. http://dx.doi.org/10.3390/s23146485.
Khurana, M., and C. Armenakis. "LOCALIZATION AND MAPPING USING A NON-CENTRAL CATADIOPTRIC CAMERA SYSTEM". ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences IV-2 (May 28, 2018): 145–52. http://dx.doi.org/10.5194/isprs-annals-iv-2-145-2018.
Zhao, Yue, and Xin Yang. "Calibration Method for Central Catadioptric Camera Using Multiple Groups of Parallel Lines and Their Properties". Journal of Sensors 2021 (July 2, 2021): 1–13. http://dx.doi.org/10.1155/2021/6675110.
Córdova-Esparza, Diana-Margarita, Juan Terven, Julio-Alejandro Romero-González, and Alfonso Ramírez-Pedraza. "Three-Dimensional Reconstruction of Indoor and Outdoor Environments Using a Stereo Catadioptric System". Applied Sciences 10, no. 24 (December 10, 2020): 8851. http://dx.doi.org/10.3390/app10248851.
Li, Yongle, and Jingtao Lou. "Omnigradient Based Total Variation Minimization for Enhanced Defocus Deblurring of Omnidirectional Images". International Journal of Optics 2014 (2014): 1–9. http://dx.doi.org/10.1155/2014/732937.
Zhang, Yu, Xiping Xu, Ning Zhang, and Yaowen Lv. "A Semantic SLAM System for Catadioptric Panoramic Cameras in Dynamic Environments". Sensors 21, no. 17 (September 1, 2021): 5889. http://dx.doi.org/10.3390/s21175889.
Ilizirov, Grigory, and Sagi Filin. "POSE ESTIMATION AND MAPPING USING CATADIOPTRIC CAMERAS WITH SPHERICAL MIRRORS". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B3 (June 9, 2016): 43–47. http://dx.doi.org/10.5194/isprs-archives-xli-b3-43-2016.
Mariottini, Gian Luca, and Domenico Prattichizzo. "Image-based Visual Servoing with Central Catadioptric Cameras". International Journal of Robotics Research 27, no. 1 (January 2008): 41–56. http://dx.doi.org/10.1177/0278364907084320.
Doctoral dissertations on the topic "Catadioptric cameras"
Bastanlar, Yalin. "Parameter Extraction And Image Enhancement For Catadioptric Omnidirectional Cameras". Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12606034/index.pdf.
Bastanlar, Yalin. "Structure-from-motion For Systems With Perspective And Omnidirectional Cameras". PhD thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12610833/index.pdf.
Mohtaram, Noureddine. "Reconstruction 3D dense d'objets sans recul par vision catadioptrique". Electronic Thesis or Diss., Amiens, 2019. http://www.theses.fr/2019AMIE0023.
This PhD work addresses the complete dense 3D reconstruction of objects when there is no room to step back from them. We have designed and built a three-dimensional reconstruction system for real objects based on a single camera and two planar mirrors, called a Planar Catadioptric Stereo (PCS) system. We first model the PCS system as a network of virtual cameras in order to calibrate it. We then match feature points detected in the reflected images using a variant of the ASIFT method adapted to the mirror planes, which we call AMIFT. Putative point correspondences are refined by rejecting outliers with the Symmetric RANSAC method proposed in this thesis. To reconstruct a proper dense object surface, rather than a sparse set of points, a dense correspondence technique is also required. This technique estimates the geometric transformation linking the imaged object to one of its inter-reflections in the mirror planes by minimizing a branch-and-bound cost function, which lets us adapt dense 3D reconstruction, fundamentally based on triangulating image correspondences. We apply this 3D reconstruction pipeline to multiple catadioptric images in order to verify the underlying hypotheses of the PCS system. The methodology is validated with experimental results on synthetic images that illustrate the quality of the 3D reconstruction.
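The final triangulation step the abstract mentions can be illustrated with generic midpoint triangulation of two viewing rays, one from the real camera and one from a mirror-reflected virtual camera. This is a sketch under our own assumptions, not the thesis's exact implementation; the function name and interface are hypothetical.

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint triangulation of two (possibly skew) rays.
    c1, c2: centers of the real and the mirror-reflected virtual camera.
    d1, d2: viewing directions of a matched point pair.
    Returns the 3D point halfway between the rays' closest points.
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Solve t1*d1 - t2*d2 = c2 - c1 in the least-squares sense
    A = np.stack([d1, -d2], axis=1)
    t, *_ = np.linalg.lstsq(A, c2 - c1, rcond=None)
    p1 = c1 + t[0] * d1   # closest point on ray 1
    p2 = c2 + t[1] * d2   # closest point on ray 2
    return 0.5 * (p1 + p2)
```

For intersecting rays the midpoint coincides with the intersection; for noisy, skew rays it is a reasonable compromise point.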
Ducrocq, Julien. "Vision catadioptrique pour favoriser la perception d'environnements hétérogènes". Electronic Thesis or Diss., Amiens, 2022. http://www.theses.fr/2022AMIE0067.
This thesis presents the design of two catadioptric cameras able to record usable images of heterogeneous environments. These cameras belong to the field of adaptive vision, which gathers cameras whose optics or sensor have heterogeneous properties that can vary over time. Among other abilities, adaptive cameras can capture heterogeneous environments whose physical or geometrical properties change across space. The thesis surveys the state of the art on adaptive cameras able to capture specific types of heterogeneous environments. On the one hand, we consider scenes characterized by a spatial variation of radiances, with a dynamic range of around 120 decibels. Such scenes put conventional cameras in difficulty: because of their low dynamic range, some pixels in their images are saturated and others too dark. In both cases these image regions carry no visual information about the scene and are unusable. High dynamic range (HDR) cameras are used to capture the radiances of these bright and dark areas; however, no HDR panoramic camera is available yet. The first contribution of this thesis is therefore the design of an HDR panoramic camera to improve robot navigation, using visual perception only, in outdoor scenes with varying illumination. Mounted on a mobile robot, this camera enlarges the convergence domain and the positioning accuracy of direct visual servoing outdoors. On the other hand, we consider scenes with a non-uniform level of detail across space: some scene elements carry more visual information than others. To capture such heterogeneous environments, the second contribution of this thesis is an adaptive camera based on a new deformable mirror with local curvature control, which enlarges or reduces the number of pixels a scene region occupies in the image.
This camera, nicknamed Visadapt, captures multi-resolution images that depend on the scene content. From one scene to another, the shape of the mirror can be changed to optimize the resolution of the images captured in the new scene. The surface of the mirror is made of Mylar, a material that is both reflective and deformable, and changes shape thanks to a grid of linear actuators placed underneath. The mirror, flat in its initial state, can deform to give the scene regions captured by Visadapt the desired resolution in the image. The characteristics of Visadapt, in particular its dimensions, the materials of its elements, and the actuator pitch, were defined through a simulation study, and a real prototype was built to the parameters defined by the simulation. Experiments showed that this prototype can magnify up to four scene regions at once. The thesis ends with a conclusion presenting future work on the two camera prototypes to enhance their performance and the diversity of images they can capture, and proposes research directions to push these two cameras, and adaptive vision in general, even further.
Kawahara, Ryo. "A Novel Catadioptric Ray-Pixel Camera Model and its Application to 3D Reconstruction". Kyoto University, 2019. http://hdl.handle.net/2433/242435.
Aziz, Fatima. "Approche géométrique couleur pour le traitement des images catadioptriques". Thesis, Limoges, 2018. http://www.theses.fr/2018LIMO0080/document.
This manuscript investigates omnidirectional catadioptric color images as Riemannian manifolds. This geometric representation offers insights into problems related to the distortions introduced by the catadioptric system, in the context of color perception for autonomous systems. The report starts with an overview of omnidirectional vision, the different systems used, and the geometric projection models. We then present the basic notions and tools of Riemannian geometry and their use in image processing, which leads us to introduce some useful differential operators on Riemannian manifolds. We develop a method for constructing a hybrid metric tensor adapted to color catadioptric images. This tensor has the dual characteristic of depending both on the geometric position of the image points and on their photometric coordinates. Most of this work deals with exploiting this hybrid metric tensor in catadioptric image processing. The Gaussian function is at the core of several filters and operators for various applications, such as noise reduction or the extraction of low-level features from the Gaussian scale-space representation. We therefore build a new Gaussian kernel that depends on the Riemannian metric tensor; it has the advantage of being applicable directly on the catadioptric image plane, while varying spatially according to the local image information. The thesis closes with possible robotic applications of the hybrid metric tensor: we propose to define free space and distance transforms in the omni-image, then to extract the geodesic medial axis, a topological representation relevant to autonomous navigation, which we use to define an optimal trajectory-planning method.
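The idea of a metric tensor mixing geometry and photometry can be illustrated with a Di Zenzo-style color structure tensor. This is only a generic sketch under our own assumptions; the thesis's actual tensor additionally encodes the catadioptric projection geometry, and the function name and `beta` weight are hypothetical.

```python
import numpy as np

def hybrid_metric_tensor(rgb, beta=1.0):
    """Per-pixel 2x2 metric tensor g = I + beta * sum_k grad(I_k) grad(I_k)^T.
    rgb  : H x W x 3 float image.
    beta : weight of the photometric part relative to the spatial part.
    """
    H, W, _ = rgb.shape
    g = np.tile(np.eye(2), (H, W, 1, 1))       # spatial (identity) part
    for k in range(3):
        gy, gx = np.gradient(rgb[..., k])      # per-channel image gradients
        g[..., 0, 0] += beta * gx * gx
        g[..., 0, 1] += beta * gx * gy
        g[..., 1, 0] += beta * gx * gy
        g[..., 1, 1] += beta * gy * gy
    return g
```

In flat (constant-color) regions the tensor reduces to the identity, so distances are purely spatial; near color edges the photometric term dominates, stretching the metric across the edge.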
Hart, Charles. "A Low-cost Omni-directional Visual Bearing Only Localization System". Case Western Reserve University School of Graduate Studies / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=case1386695142.
Chen, Cheng-I (陳政毅). "Catadioptric Camera Calibration using Planar Calibration Objects". Thesis, 2006. http://ndltd.ncl.edu.tw/handle/76234730503256581655.
Pełny tekst źródła國立交通大學
資訊科學與工程研究所
94
Catadioptric cameras are widely used in video surveillance and robot navigation thanks to their large field of view. However, they are harder to calibrate than perspective cameras: the camera structure is more complicated and there are more camera parameters to determine. A catadioptric camera can be either central or non-central, depending on whether it satisfies the single-viewpoint constraint. A central catadioptric camera has a single center of projection, so epipolar geometry can be applied to calibrate the camera parameters. For practical reasons, such as the large size of central catadioptric cameras and the difficulty of precisely aligning the camera with the mirror, most off-the-shelf catadioptric cameras are non-central and do not obey the single-viewpoint constraint. A non-central catadioptric camera can be calibrated by photogrammetric methods requiring correspondences between 3-D world coordinates and 2-D image coordinates. In this thesis, we propose novel calibration methods for determining the camera parameters of both central and non-central catadioptric cameras. Our methods use planar objects and achieve very accurate results while keeping the calibration procedure simple. For a central catadioptric camera, the 2-D image point of a 3-D projection ray can be determined by the viewing sphere model. In the proposed calibration procedure, we place a planar calibration plate at several positions around the camera and capture an image for each pose of the plate. With the viewing sphere model and its associated parameters, we can unwarp the captured catadioptric image into the image of a virtual perspective camera with known intrinsic parameters and known extrinsic parameters relative to the viewing sphere.
We show that moving a calibration plate around the catadioptric camera is equivalent to placing the same plate at different poses relative to a static virtual perspective camera. We can then use this set of unwarped perspective images to compute the relative poses of the calibration plate, as well as the projection error of the feature points on the plate, using the homography method. The parameters of the viewing sphere model are obtained by minimizing the projection error in a nonlinear optimization procedure. For a non-central catadioptric camera, the single-viewpoint constraint does not hold and the viewing sphere model cannot be applied, which makes calibration even more difficult. In this work, we determine the projection model of the non-central catadioptric camera by using a calibrated central catadioptric camera as an intermediary. First, we use a set of LCD panels at fixed positions to present feature patterns to the central catadioptric camera. The image coordinates of these feature patterns are determined automatically, and the corresponding 3-D coordinates can be computed since that camera is calibrated. The same set of LCD panels is then presented to the non-central catadioptric camera, giving, for each feature point, its 2-D coordinate in the image captured by the non-central camera. Since the 3-D coordinates of the feature points are already known, the parameters of the non-central catadioptric camera can be obtained photogrammetrically. We use Mashita's method to determine initial values for the parameters of the reflected-ray model and then refine them by minimizing the projection error. Experiments with simulated and real data clearly demonstrate the robustness and accuracy of the proposed calibration methods.
In the simulations, we add zero-mean Gaussian noise with standard deviation σ = 0.0–2.0 to evaluate the performance of the proposed methods. The results show that our methods are indeed robust and accurate. For comparison, we also implement a calibration method based on geometric invariants; our methods again prove robust and accurate, and outperform the geometric-invariant method. Moreover, we present an integrated surveillance system in which the calibrated non-central catadioptric camera is used to navigate a mobile robot on patrol.
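The viewing sphere model underlying the central-camera calibration above is the standard unified central catadioptric model (Geyer–Daniilidis/Mei): a 3D point is first projected onto a unit sphere, then reprojected to a virtual perspective camera from a point offset by the mirror parameter ξ. A minimal sketch, with an illustrative function name and assuming the point lies in front of the camera:

```python
import numpy as np

def unified_sphere_project(X, xi, K):
    """Project a 3D point with the unified viewing sphere model.
    X  : 3D point in the camera frame, shape (3,).
    xi : mirror parameter (0 = perspective, 1 = para-catadioptric).
    K  : 3x3 intrinsic matrix of the virtual perspective camera.
    """
    Xs = X / np.linalg.norm(X)                        # 1. project onto the unit sphere
    x, y, z = Xs
    m = np.array([x / (z + xi), y / (z + xi), 1.0])   # 2. reproject from the point (0, 0, -xi)
    return (K @ m)[:2]                                # 3. apply intrinsics, return pixel coords
```

With ξ = 0 the model degenerates to an ordinary pinhole projection, which is a useful sanity check; calibration then amounts to estimating ξ and K (plus distortion, omitted here) by minimizing reprojection error over the plate poses.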
Books on the topic "Catadioptric cameras"
Klette, Reinhard, and Kostas Daniilidis. Imaging Beyond the Pinhole Camera. Springer, 2008.
Klette, Reinhard, and Kostas Daniilidis. Imaging Beyond the Pinhole Camera. 2010.
Klette, Reinhard, and Kostas Daniilidis. Imaging Beyond the Pinhole Camera. Springer, 2006.
Daniilidis, Kostas, and Reinhard Klette, eds. Imaging Beyond the Pinhole Camera (Computational Imaging and Vision). Springer, 2007.
Book chapters on the topic "Catadioptric cameras"
Nayar, S. K., and V. Peri. "Folded Catadioptric Cameras". In Panoramic Vision, 103–19. New York, NY: Springer New York, 2001. http://dx.doi.org/10.1007/978-1-4757-3482-9_6.
Baker, S., and S. K. Nayar. "Single Viewpoint Catadioptric Cameras". In Panoramic Vision, 39–71. New York, NY: Springer New York, 2001. http://dx.doi.org/10.1007/978-1-4757-3482-9_4.
Sturm, Peter, and João P. Barreto. "General Imaging Geometry for Central Catadioptric Cameras". In Lecture Notes in Computer Science, 609–22. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-88693-8_45.
Pajdla, T., T. Svoboda, and V. Hlaváč. "Epipolar Geometry of Central Panoramic Catadioptric Cameras". In Panoramic Vision, 73–102. New York, NY: Springer New York, 2001. http://dx.doi.org/10.1007/978-1-4757-3482-9_5.
Schönbein, Miriam, Holger Rapp, and Martin Lauer. "Panoramic 3D Reconstruction with Three Catadioptric Cameras". In Advances in Intelligent Systems and Computing, 345–53. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-33926-4_32.
Tolvanen, Antti, Christian Perwass, and Gerald Sommer. "Projective Model for Central Catadioptric Cameras Using Clifford Algebra". In Lecture Notes in Computer Science, 192–99. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11550518_24.
Ying, Xianghua, and Zhanyi Hu. "Can We Consider Central Catadioptric Cameras and Fisheye Cameras within a Unified Imaging Model". In Lecture Notes in Computer Science, 442–55. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-24670-1_34.
Agrawal, Amit, Yuichi Taguchi, and Srikumar Ramalingam. "Analytical Forward Projection for Axial Non-central Dioptric and Catadioptric Cameras". In Computer Vision – ECCV 2010, 129–43. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15558-1_10.
Bermudez-Cameo, Jesus, Gonzalo Lopez-Nicolas, and Jose J. Guerrero. "A Unified Framework for Line Extraction in Dioptric and Catadioptric Cameras". In Computer Vision – ACCV 2012, 627–39. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-37447-0_48.
Ramalingam, Srikumar. "Catadioptric Camera". In Computer Vision, 85–89. Boston, MA: Springer US, 2014. http://dx.doi.org/10.1007/978-0-387-31439-6_486.
Pełny tekst źródłaStreszczenia konferencji na temat "Catadioptric cameras"
Endres, Felix, Christoph Sprunk, Rainer Kummerle, and Wolfram Burgard. "A catadioptric extension for RGB-D cameras". In 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2014). IEEE, 2014. http://dx.doi.org/10.1109/iros.2014.6942600.
Mei, Christopher, Selim Benhimane, Ezio Malis, and Patrick Rives. "Homography-based Tracking for Central Catadioptric Cameras". In 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2006. http://dx.doi.org/10.1109/iros.2006.282553.
Schonbein, Miriam, Tobias Straus, and Andreas Geiger. "Calibrating and centering quasi-central catadioptric cameras". In 2014 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2014. http://dx.doi.org/10.1109/icra.2014.6907507.
Gasparini, Simone, Peter Sturm, and Joao P. Barreto. "Plane-based calibration of central catadioptric cameras". In 2009 IEEE 12th International Conference on Computer Vision (ICCV). IEEE, 2009. http://dx.doi.org/10.1109/iccv.2009.5459336.
Bazin, J. C., I. Kweon, C. Demonceaux, and P. Vasseur. "Automatic calibration of catadioptric cameras in urban environment". In 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2008. http://dx.doi.org/10.1109/iros.2008.4650590.
Ying, Xianghua, and Zhanyi Hu. "Spherical objects based motion estimation for catadioptric cameras". In Proceedings of the 17th International Conference on Pattern Recognition, 2004. ICPR 2004. IEEE, 2004. http://dx.doi.org/10.1109/icpr.2004.1334510.
Mei, C., S. Benhimane, E. Malis, and P. Rives. "Constrained Multiple Planar Template Tracking for Central Catadioptric Cameras". In British Machine Vision Conference 2006. British Machine Vision Association, 2006. http://dx.doi.org/10.5244/c.20.64.
Duan, F. Q., R. Liu, and M. Q. Zhou. "A new easy calibration algorithm for para-catadioptric cameras". In 2010 25th International Conference of Image and Vision Computing New Zealand (IVCNZ). IEEE, 2010. http://dx.doi.org/10.1109/ivcnz.2010.6148805.
Wu Keyu, Gao He, Zhou Fuqiang, and Tan Haishu. "Simplified calibration of non-single viewpoint catadioptric omnidirectional cameras". In 2012 IEEE International Conference on Oxide Materials for Electronic Engineering (OMEE). IEEE, 2012. http://dx.doi.org/10.1109/omee.2012.6343532.
Dias, Tiago Jose Simoes, Pedro Daniel dos Santos Miraldo, and Nuno Miguel Mendonca da Silva Goncalves. "A Framework for Augmented Reality Using Non-Central Catadioptric Cameras". In 2015 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC). IEEE, 2015. http://dx.doi.org/10.1109/icarsc.2015.31.
Pełny tekst źródłaRaporty organizacyjne na temat "Catadioptric cameras"
Fagan, Joseph, Eddy Tsui, Terence Ringwood, Mark Mellini, and Amir Morcos. Catadioptric Omni-Directional System for M1A2 Abrams (360-Degree Camera System). Fort Belvoir, VA: Defense Technical Information Center, November 2005. http://dx.doi.org/10.21236/ada440545.