A selection of scientific literature on the topic "Catadioptric cameras"
Cite a source in APA, MLA, Chicago, Harvard, or any other citation style
Contents
Consult the lists of current articles, books, dissertations, reports, and other scientific sources on the topic "Catadioptric cameras".
Next to every entry in the bibliography you will find the "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scientific publication as a PDF and read its online annotation, whenever the relevant parameters are available in the metadata.
Journal articles on the topic "Catadioptric cameras"
Benamar, F., S. Elfkihi, C. Demonceaux, E. Mouaddib, and D. Aboutajdine. "Visual contact with catadioptric cameras". Robotics and Autonomous Systems 64 (February 2015): 100–119. http://dx.doi.org/10.1016/j.robot.2014.09.036.
Rostkowska, Marta, and Piotr Skrzypczyński. "Optimizing Appearance-Based Localization with Catadioptric Cameras: Small-Footprint Models for Real-Time Inference on Edge Devices". Sensors 23, no. 14 (July 18, 2023): 6485. http://dx.doi.org/10.3390/s23146485.
Khurana, M., and C. Armenakis. "LOCALIZATION AND MAPPING USING A NON-CENTRAL CATADIOPTRIC CAMERA SYSTEM". ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences IV-2 (May 28, 2018): 145–52. http://dx.doi.org/10.5194/isprs-annals-iv-2-145-2018.
Zhao, Yue, and Xin Yang. "Calibration Method for Central Catadioptric Camera Using Multiple Groups of Parallel Lines and Their Properties". Journal of Sensors 2021 (July 2, 2021): 1–13. http://dx.doi.org/10.1155/2021/6675110.
Córdova-Esparza, Diana-Margarita, Juan Terven, Julio-Alejandro Romero-González, and Alfonso Ramírez-Pedraza. "Three-Dimensional Reconstruction of Indoor and Outdoor Environments Using a Stereo Catadioptric System". Applied Sciences 10, no. 24 (December 10, 2020): 8851. http://dx.doi.org/10.3390/app10248851.
Li, Yongle, and Jingtao Lou. "Omnigradient Based Total Variation Minimization for Enhanced Defocus Deblurring of Omnidirectional Images". International Journal of Optics 2014 (2014): 1–9. http://dx.doi.org/10.1155/2014/732937.
Zhang, Yu, Xiping Xu, Ning Zhang, and Yaowen Lv. "A Semantic SLAM System for Catadioptric Panoramic Cameras in Dynamic Environments". Sensors 21, no. 17 (September 1, 2021): 5889. http://dx.doi.org/10.3390/s21175889.
Ilizirov, Grigory, and Sagi Filin. "POSE ESTIMATION AND MAPPING USING CATADIOPTRIC CAMERAS WITH SPHERICAL MIRRORS". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B3 (June 9, 2016): 43–47. http://dx.doi.org/10.5194/isprs-archives-xli-b3-43-2016.
Mariottini, Gian Luca, and Domenico Prattichizzo. "Image-based Visual Servoing with Central Catadioptric Cameras". International Journal of Robotics Research 27, no. 1 (January 2008): 41–56. http://dx.doi.org/10.1177/0278364907084320.
Dissertations on the topic "Catadioptric cameras"
Bastanlar, Yalin. "Parameter Extraction And Image Enhancement For Catadioptric Omnidirectional Cameras". Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12606034/index.pdf.
Bastanlar, Yalin. "Structure-from-motion For Systems With Perspective And Omnidirectional Cameras". PhD thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12610833/index.pdf.
Der volle Inhalt der QuelleMohtaram, Noureddine. „Reconstruction 3D dense d'objets sans recul par vision catadioptrique“. Electronic Thesis or Diss., Amiens, 2019. http://www.theses.fr/2019AMIE0023.
This PhD work addresses the problem of complete, dense 3D reconstruction of objects observed at close range, without stepping back. We have designed and developed a three-dimensional reconstruction system for real objects based on a single camera and two planar mirrors, called a Planar Catadioptric Stereo (PCS) system. We first model the PCS system as a network of virtual cameras in order to calibrate it. We then address the matching of feature points detected in the reflected images using a variant of the ASIFT method, adapted to the mirror planes, which we call AMIFT. Putative point correspondences are further refined by rejecting outliers with the Symmetric RANSAC method proposed in this thesis. To reconstruct a proper dense object surface, and not just a sparse set of points, a dense correspondence technique is also required: it estimates the geometric transformation linking the imaged object to one of its inter-reflections in the mirror planes by minimizing a cost function with a branch-and-bound scheme. This allows us to adapt dense 3D reconstruction, fundamentally based on the triangulation of image correspondences, to the PCS setting. We apply this 3D reconstruction pipeline to multiple catadioptric images in order to verify the underlying hypotheses of a PCS system. Our methodology is validated with experimental results on synthetic images that illustrate the quality of the 3D reconstruction.
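The planar-catadioptric-stereo idea in the abstract above — a real camera plus a mirror-generated virtual camera, followed by triangulation of correspondences — can be sketched in a few lines. This is a minimal illustration, not the thesis's pipeline: the mirror plane, intrinsics, and scene point are invented, and the AMIFT/Symmetric-RANSAC matching step is replaced by exact synthetic correspondences.

```python
import numpy as np

def reflection_matrix(n, d):
    """4x4 homogeneous reflection about the mirror plane {X : n.X = d}."""
    n = np.asarray(n, float) / np.linalg.norm(n)
    M = np.eye(4)
    M[:3, :3] -= 2.0 * np.outer(n, n)   # I - 2 n n^T  (det = -1)
    M[:3, 3] = 2.0 * d * n
    return M

def project(P, X):
    """Pinhole projection of a 3D point by a 3x4 camera matrix."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    X = np.linalg.svd(A)[2][-1]          # right null vector of A
    return X[:3] / X[3]

# illustrative intrinsics, mirror plane, and scene point
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P_real = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
M = reflection_matrix([1.0, 0.0, 0.2], d=1.0)
P_virt = P_real @ M                      # virtual camera "behind" the mirror

X = np.array([0.3, -0.1, 4.0])
x_direct = project(P_real, X)                             # direct view of X
x_mirror = project(P_real, (M @ np.append(X, 1.0))[:3])   # view of X's mirror image

X_hat = triangulate(P_real, P_virt, x_direct, x_mirror)
print(np.allclose(X_hat, X, atol=1e-6))  # True
```

Since the reflection has determinant −1, the virtual camera is mirror-handed: DLT triangulation is unaffected, but descriptors must be matched between flipped views, which is precisely what motivates mirror-adapted variants of ASIFT.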
Ducrocq, Julien. "Vision catadioptrique pour favoriser la perception d'environnements hétérogènes". Electronic Thesis or Dissertation, Amiens, 2022. http://www.theses.fr/2022AMIE0067.
This thesis presents the design of two catadioptric cameras capable of recording usable images of heterogeneous environments. These cameras belong to the field of adaptive vision, which gathers cameras whose optics or sensor have heterogeneous properties that can vary over time. The abilities of adaptive cameras include capturing heterogeneous environments whose physical or geometrical properties change across space. The thesis surveys the state of the art on adaptive cameras able to capture specific types of heterogeneous environments. On the one hand, we consider scenes characterized by a spatial variation of radiances, with a dynamic range of around 120 decibels. Such scenes put conventional cameras in difficulty: because of their low dynamic range, some pixels in their images are saturated and others too dark. In both cases these image regions carry no visual information about the scene and are unusable. In order to capture the radiances of these bright and dark areas, high dynamic range (HDR) cameras are used; however, no HDR panoramic camera is available yet. The first contribution of this thesis is therefore the design of an HDR panoramic camera intended to improve robot navigation, using visual perception only, in outdoor scenes with widely varying radiances. Mounted on a mobile robot, this camera enlarges the convergence domain and the positioning accuracy of direct visual servoing outdoors. On the other hand, we consider scenes with a non-uniform level of detail across space: some scene elements carry more visual information than others. To capture such heterogeneous environments, the second contribution of this thesis is an adaptive camera based on a new deformable mirror with local curvatures, allowing the number of pixels that scene regions occupy in the image to be enlarged or reduced.
This camera, nicknamed Visadapt, captures multi-resolution images that depend on the scene content. From one scene to another, the shape of the mirror may be changed to optimize the resolution of the images captured of the new scene. The surface of the mirror is made of a material that is both reflective and deformable, mylar, and changes shape thanks to a grid of linear actuators placed underneath. This mirror, planar in its initial state, can change shape to give the scene regions captured by Visadapt the desired resolution in the image. The characteristics of Visadapt — in particular the dimensions, the materials of its different elements, and the actuator pitch — were defined through a simulation study. A real prototype was built respecting the parameters defined by the simulation, and experiments showed that it can magnify up to four scene regions at once. The thesis ends with a conclusion presenting future work to upgrade the prototypes of the two cameras, in order to enhance their performance and the diversity of images they can capture, and proposing research directions to further improve these two cameras and adaptive vision in general.
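As a back-of-the-envelope illustration of the 120 dB figure above: dynamic range in decibels is 20·log10 of the ratio between the highest and lowest radiance to be captured, so a million-to-one scene spans 120 dB. (The 60–70 dB figure for a conventional sensor below is an assumed ballpark, not a number from the thesis.)

```python
import math

def dynamic_range_db(l_max, l_min):
    """Dynamic range of a scene (or sensor) in decibels."""
    return 20.0 * math.log10(l_max / l_min)

# a 10^6 : 1 radiance ratio (sunlit sky vs. deep shadow) spans 120 dB,
# roughly double what a conventional ~60-70 dB sensor can cover in one exposure
print(dynamic_range_db(1e6, 1.0))  # 120.0
```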
Kawahara, Ryo. "A Novel Catadioptric Ray-Pixel Camera Model and its Application to 3D Reconstruction". Kyoto University, 2019. http://hdl.handle.net/2433/242435.
Aziz, Fatima. "Approche géométrique couleur pour le traitement des images catadioptriques". Thesis, Limoges, 2018. http://www.theses.fr/2018LIMO0080/document.
This manuscript investigates omnidirectional catadioptric color images as Riemannian manifolds. This geometric representation offers insights into resolving problems related to the distortions introduced by the catadioptric system, in the context of color perception for autonomous systems. The report starts with an overview of omnidirectional vision, the different systems in use, and the geometric projection models. We then present the basic notions and tools of Riemannian geometry and its use in the image processing domain, which leads us to introduce some useful differential operators on Riemannian manifolds. We develop a method for constructing a hybrid metric tensor adapted to color catadioptric images; this tensor has the dual characteristic of depending both on the geometric position of the image points and on their photometric coordinates. Most of this work deals with exploiting the hybrid metric tensor in catadioptric image processing. Indeed, the Gaussian function is at the core of several filters and operators for various applications, such as noise reduction or the extraction of low-level features from the Gaussian scale-space representation. We therefore build a new Gaussian kernel that depends on the Riemannian metric tensor. It has the advantage of being applicable directly on the catadioptric image plane, while being spatially variant and dependent on the local image information. Finally, we discuss some possible robotic applications of the hybrid metric tensor. We propose to define the free space and distance transforms in the omni-image, and then to extract the geodesic medial axis. The latter is a relevant topological representation for autonomous navigation, which we use to define an optimal trajectory-planning method.
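A metric tensor that couples spatial and photometric coordinates, as described above, can be sketched with a generic Beltrami-style construction: g = g_spatial + β² Σ_k ∇I^k (∇I^k)ᵀ. This is a sketch under assumed conventions, not the thesis's exact tensor — in particular, the geometric part here defaults to the identity, whereas the thesis derives it from the catadioptric projection.

```python
import numpy as np

def hybrid_metric(rgb, beta=1.0, g_spatial=None):
    """Per-pixel 2x2 metric on the image manifold (Beltrami-style):
        g_ij = g_spatial_ij + beta^2 * sum_k dI^k/dx_i * dI^k/dx_j
    rgb: (H, W, 3) float image.  g_spatial: optional (H, W, 2, 2)
    geometric part; identity by default (a catadioptric model would
    supply the mirror-induced metric here instead)."""
    H, W, _ = rgb.shape
    if g_spatial is None:
        g = np.zeros((H, W, 2, 2))
        g[..., 0, 0] = g[..., 1, 1] = 1.0
    else:
        g = g_spatial.copy()
    for k in range(3):                      # accumulate the photometric part
        gy, gx = np.gradient(rgb[..., k])   # gradients along rows and columns
        g[..., 0, 0] += beta**2 * gx * gx
        g[..., 0, 1] += beta**2 * gx * gy
        g[..., 1, 0] += beta**2 * gx * gy
        g[..., 1, 1] += beta**2 * gy * gy
    return g

img = np.random.rand(32, 32, 3)
g = hybrid_metric(img, beta=0.5)
# the photometric part only adds positive semi-definite terms, so det(g) >= 1
print(np.all(np.linalg.det(g) >= 1.0 - 1e-9))  # True
```

A metric-dependent Gaussian kernel then follows by replacing the Euclidean distance in the usual kernel with the distance induced by g at each pixel.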
Hart, Charles. "A Low-cost Omni-directional Visual Bearing Only Localization System". Case Western Reserve University School of Graduate Studies / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=case1386695142.
Chen, Cheng-I (陳政毅). "Catadioptric Camera Calibration using Planar Calibration Objects". Thesis, 2006. http://ndltd.ncl.edu.tw/handle/76234730503256581655.
National Chiao Tung University (國立交通大學), Institute of Computer Science and Engineering (資訊科學與工程研究所), ROC academic year 94 (2005).
Catadioptric cameras have been widely used in applications such as video surveillance and robot navigation thanks to their large field of view. However, they are more difficult to calibrate than perspective cameras because their structure is more complicated and more camera parameters must be determined. A catadioptric camera can be either central or non-central, depending on whether it satisfies the single-viewpoint constraint. A central catadioptric camera has a single center of projection, so epipolar geometry can be applied to calibrate its parameters. For practical reasons, such as the large size of central catadioptric cameras and the difficulty of precisely aligning the camera with the mirror, most off-the-shelf catadioptric cameras are non-central and do not obey the single-viewpoint constraint. A non-central catadioptric camera can be calibrated by photogrammetric methods, which require correspondences between 3-D world coordinates and 2-D image coordinates. In this thesis, we propose novel calibration methods for determining the camera parameters of both central and non-central catadioptric cameras. Our methods use planar objects and achieve very accurate results while keeping the calibration procedure simple. For a central catadioptric camera, the 2-D image point of a 3-D projection ray can be determined by the viewing sphere model. In the proposed calibration procedure, we place a planar calibration plate at several poses around the camera and capture an image for each pose. With the viewing sphere model and its associated parameters, we can unwarp the captured catadioptric image into the image of a virtual perspective camera with known intrinsic parameters, as well as extrinsic parameters relative to the viewing sphere.
We show that moving a calibration plate around the catadioptric camera is equivalent to placing the same plate at different poses relative to a static virtual perspective camera. We can then use this set of unwarped perspective images to compute the relative poses of the calibration plate, as well as the projection error of the feature points on the plate, using the homography method. The parameters of the viewing sphere model are obtained by minimizing the projection error in a nonlinear optimization procedure. For a non-central catadioptric camera, the single-viewpoint constraint does not hold and the viewing sphere model cannot be applied, which makes calibration even more difficult. In this work we determine the projection model of the non-central catadioptric camera using a calibrated central catadioptric camera as an intermediary. First, we use a set of LCD panels at fixed positions to present feature patterns to the central catadioptric camera. The image coordinates of these patterns are determined automatically, and the corresponding 3-D coordinates can be computed since the camera is calibrated. The same set of LCD panels is then presented to the non-central catadioptric camera, and for each feature point we obtain its 2-D coordinate in the captured image. Since the 3-D coordinates of the feature points were determined beforehand, the parameters of the non-central catadioptric camera can then be obtained photogrammetrically. In the proposed method, we use Mashita's method to determine the initial values of the parameters of the reflected-ray model and then refine them by minimizing the projection error. Experiments with simulated and real data clearly demonstrate the robustness and accuracy of the proposed calibration methods.
In the simulations, we add zero-mean Gaussian noise with standard deviation σ ranging from 0.0 to 2.0 to evaluate the performance of the proposed methods. For comparison, we also implement a calibration method based on geometric invariants. The results show that our methods are robust and accurate, and that they outperform the geometric-invariant method. Moreover, we present an integrated surveillance system in which the calibrated non-central catadioptric camera is used to navigate a mobile robot for patrolling.
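The viewing sphere at the heart of the central-camera calibration above is the standard unified projection model, and the calibration itself boils down to minimizing a reprojection error over the model parameters. The sketch below (illustrative intrinsics, poses, and points; only the mirror parameter ξ is exposed, not the thesis's full parameter set) shows both pieces:

```python
import numpy as np

def unified_project(X, xi, K):
    """Unified viewing-sphere model (Geyer-Daniilidis / Barreto):
    xi = 0 gives a perspective camera, xi = 1 a para-catadioptric one."""
    Xs = X / np.linalg.norm(X)                # 1. central projection onto the unit sphere
    m = np.array([Xs[0], Xs[1], Xs[2] + xi])  # 2. shift the projection center by xi
    m = m / m[2]                              # 3. normalized perspective projection
    return (K @ m)[:2]                        # 4. apply the intrinsics

def reprojection_error(xi, K, R, t, points_3d, points_2d):
    """Sum of squared pixel errors: the quantity a calibration of this
    kind minimizes over the model parameters (here only xi, for brevity)."""
    err = 0.0
    for X, u in zip(points_3d, points_2d):
        Xc = R @ X + t                        # world -> camera frame
        err += np.sum((unified_project(Xc, xi, K) - u) ** 2)
    return err

# illustrative setup: identity pose, synthetic observations at xi = 0.8
K = np.array([[400.0, 0, 512], [0, 400.0, 384], [0, 0, 1]])
R, t = np.eye(3), np.zeros(3)
pts = [np.array([0.2, -0.1, 1.0]), np.array([-0.5, 0.3, 2.0])]
obs = [unified_project(p, 0.8, K) for p in pts]

print(reprojection_error(0.8, K, R, t, pts, obs) < 1e-12)  # True: exact model
```

In a real calibration the plate poses and intrinsics would be unknowns as well, and the error would be minimized by a nonlinear least-squares solver rather than evaluated at a known ξ.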
Books on the topic "Catadioptric cameras"
Klette, Reinhard, and Kostas Daniilidis. Imaging Beyond the Pinhole Camera. Springer, 2008.
Klette, Reinhard, and Kostas Daniilidis. Imaging Beyond the Pinhole Camera. Kostas Daniilidis, 2010.
Klette, Reinhard, and Kostas Daniilidis. Imaging Beyond the Pinhole Camera. Springer, 2006.
Daniilidis, Kostas, and Reinhard Klette, eds. Imaging Beyond the Pinhole Camera (Computational Imaging and Vision). Springer, 2007.
Book chapters on the topic "Catadioptric cameras"
Nayar, S. K., and V. Peri. "Folded Catadioptric Cameras". In Panoramic Vision, 103–19. New York, NY: Springer New York, 2001. http://dx.doi.org/10.1007/978-1-4757-3482-9_6.
Baker, S., and S. K. Nayar. "Single Viewpoint Catadioptric Cameras". In Panoramic Vision, 39–71. New York, NY: Springer New York, 2001. http://dx.doi.org/10.1007/978-1-4757-3482-9_4.
Sturm, Peter, and João P. Barreto. "General Imaging Geometry for Central Catadioptric Cameras". In Lecture Notes in Computer Science, 609–22. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-88693-8_45.
Pajdla, T., T. Svoboda, and V. Hlaváč. "Epipolar Geometry of Central Panoramic Catadioptric Cameras". In Panoramic Vision, 73–102. New York, NY: Springer New York, 2001. http://dx.doi.org/10.1007/978-1-4757-3482-9_5.
Schönbein, Miriam, Holger Rapp, and Martin Lauer. "Panoramic 3D Reconstruction with Three Catadioptric Cameras". In Advances in Intelligent Systems and Computing, 345–53. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-33926-4_32.
Tolvanen, Antti, Christian Perwass, and Gerald Sommer. "Projective Model for Central Catadioptric Cameras Using Clifford Algebra". In Lecture Notes in Computer Science, 192–99. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11550518_24.
Ying, Xianghua, and Zhanyi Hu. "Can We Consider Central Catadioptric Cameras and Fisheye Cameras within a Unified Imaging Model". In Lecture Notes in Computer Science, 442–55. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-24670-1_34.
Agrawal, Amit, Yuichi Taguchi, and Srikumar Ramalingam. "Analytical Forward Projection for Axial Non-central Dioptric and Catadioptric Cameras". In Computer Vision – ECCV 2010, 129–43. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15558-1_10.
Bermudez-Cameo, Jesus, Gonzalo Lopez-Nicolas, and Jose J. Guerrero. "A Unified Framework for Line Extraction in Dioptric and Catadioptric Cameras". In Computer Vision – ACCV 2012, 627–39. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-37447-0_48.
Ramalingam, Srikumar. "Catadioptric Camera". In Computer Vision, 85–89. Boston, MA: Springer US, 2014. http://dx.doi.org/10.1007/978-0-387-31439-6_486.
Conference papers on the topic "Catadioptric cameras"
Endres, Felix, Christoph Sprunk, Rainer Kummerle, and Wolfram Burgard. "A catadioptric extension for RGB-D cameras". In 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2014). IEEE, 2014. http://dx.doi.org/10.1109/iros.2014.6942600.
Mei, Christopher, Selim Benhimane, Ezio Malis, and Patrick Rives. "Homography-based Tracking for Central Catadioptric Cameras". In 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2006. http://dx.doi.org/10.1109/iros.2006.282553.
Schonbein, Miriam, Tobias Straus, and Andreas Geiger. "Calibrating and centering quasi-central catadioptric cameras". In 2014 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2014. http://dx.doi.org/10.1109/icra.2014.6907507.
Gasparini, Simone, Peter Sturm, and Joao P. Barreto. "Plane-based calibration of central catadioptric cameras". In 2009 IEEE 12th International Conference on Computer Vision (ICCV). IEEE, 2009. http://dx.doi.org/10.1109/iccv.2009.5459336.
Bazin, J. C., I. Kweon, C. Demonceaux, and P. Vasseur. "Automatic calibration of catadioptric cameras in urban environment". In 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2008. http://dx.doi.org/10.1109/iros.2008.4650590.
Ying, Xianghua, and Zhanyi Hu. "Spherical objects based motion estimation for catadioptric cameras". In Proceedings of the 17th International Conference on Pattern Recognition (ICPR 2004). IEEE, 2004. http://dx.doi.org/10.1109/icpr.2004.1334510.
Mei, C., S. Benhimane, E. Malis, and P. Rives. "Constrained Multiple Planar Template Tracking for Central Catadioptric Cameras". In British Machine Vision Conference 2006. British Machine Vision Association, 2006. http://dx.doi.org/10.5244/c.20.64.
Duan, F. Q., R. Liu, and M. Q. Zhou. "A new easy calibration algorithm for para-catadioptric cameras". In 2010 25th International Conference of Image and Vision Computing New Zealand (IVCNZ). IEEE, 2010. http://dx.doi.org/10.1109/ivcnz.2010.6148805.
Wu, Keyu, He Gao, Fuqiang Zhou, and Haishu Tan. "Simplified calibration of non-single viewpoint catadioptric omnidirectional cameras". In 2012 IEEE International Conference on Oxide Materials for Electronic Engineering (OMEE). IEEE, 2012. http://dx.doi.org/10.1109/omee.2012.6343532.
Dias, Tiago Jose Simoes, Pedro Daniel dos Santos Miraldo, and Nuno Miguel Mendonca da Silva Goncalves. "A Framework for Augmented Reality Using Non-Central Catadioptric Cameras". In 2015 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC). IEEE, 2015. http://dx.doi.org/10.1109/icarsc.2015.31.
Der volle Inhalt der QuelleBerichte der Organisationen zum Thema "Catadioptric cameras"
Fagan, Joseph, Eddy Tsui, Terence Ringwood, Mark Mellini, and Amir Morcos. Catadioptric Omni-Directional System for M1A2 Abrams (360-Degree Camera System). Fort Belvoir, VA: Defense Technical Information Center, November 2005. http://dx.doi.org/10.21236/ada440545.