Academic literature on the topic "Caméras de profondeur"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the topical lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Caméras de profondeur".
Journal articles on the topic "Caméras de profondeur"
Bouchafa, Samia. "Décision cumulative pour la vision dynamique des systèmes". Revue Française de Photogrammétrie et de Télédétection, no. 202 (April 16, 2014): 2–26. http://dx.doi.org/10.52638/rfpt.2013.48.
Gademer, Antoine, Loïca Avanthey, Laurent Beaudouin, Michel Roux, and Jean-Paul Rudant. "Micro-charges utiles dédiées à l'acquisition de données par drone pour l'étude des zones naturelles". Revue Française de Photogrammétrie et de Télédétection, no. 213 (March 31, 2017): 19–31. http://dx.doi.org/10.52638/rfpt.2017.192.
Dehouck, Aurélie, Virginie Lafon, Nadia Sénéchal, Jean-Marie Froidefond, Rafael Almar, Bruno Castelle, and Nadège Martiny. "Evolution morphodynamique interannuelle du littoral sud de la Gironde". Revue Française de Photogrammétrie et de Télédétection, no. 197 (April 22, 2014): 31–42. http://dx.doi.org/10.52638/rfpt.2012.82.
Sachs Collopy, Peter. "La vidéo et les origines de la photographie électronique". Transbordeur 3 (2019): 26–35. http://dx.doi.org/10.4000/12gya.
El Gharbie, Rana. "L’adaptation cinématographique d’oeuvres théâtrales chez Jean Cocteau". Études littéraires 45, no. 3 (July 22, 2015): 31–41. http://dx.doi.org/10.7202/1032443ar.
Cléry, Isabelle, and Marc Pierrot-Deseilligny. "Une interface ergonomique de calcul de modèles 3D par photogrammétrie". Revue Française de Photogrammétrie et de Télédétection, no. 196 (April 15, 2014): 40–51. http://dx.doi.org/10.52638/rfpt.2011.36.
Darnoux, Camille, and Laurine Leriche. "Qualification d’un procédé multi technique en alternative au ressuage". e-journal of nondestructive testing 28, no. 9 (September 2023). http://dx.doi.org/10.58286/28503.
Garçon, Lucie. "La trame cachée de Barry Lyndon". Mosaïque, no. 2 (January 1, 2010). http://dx.doi.org/10.54563/mosaique.724.
Michau, Nadine. "Dans les entrailles de la machine. Anthropologie filmique d’une usine de mécanique industrielle". Images du travail, travail des images 17 (2024). http://dx.doi.org/10.4000/12f9r.
Theses on the topic "Caméras de profondeur"
Noury, Charles-Antoine. "Etalonnage de caméra plénoptique et estimation de profondeur à partir des données brutes". Thesis, Université Clermont Auvergne (2017-2020), 2019. http://www.theses.fr/2019CLFAC063.
Unlike a standard camera, which records two dimensions of a light field, a plenoptic camera is designed to locally capture four of its dimensions. The richness of this sensor's data can be used in many applications: it is possible to render new points of view, to refocus on different planes of the scene, or to compute depth maps of a scene from a single acquisition and thus obtain 3D reconstructions of the environment. This passive sensor captures depth with a compact optical system, which makes it attractive for robotics applications. However, depth estimation with such a sensor requires its precise calibration. The camera is composed of a substantial number of elements, including a micro-lens array placed in front of the sensor, and its raw data is complex. Most state-of-the-art calibration approaches therefore formulate simplified projection models and exploit interpreted data such as synthesized images and associated depth maps. Hence, in our first contribution, carried out in collaboration with the TUM laboratory, we proposed a calibration method from a 3D test pattern using interpreted data. We then proposed a new calibration approach based on raw data: we formalized a physics-based model of the camera and estimated its parameters by a minimization expressed directly in the sensor data space. Finally, we proposed a new metric-scale depth estimation method using the camera projection model. This direct approach minimizes an error between the content of each micro-image and the texture reprojection of the micro-images that surround it. The performance of our algorithms was evaluated both on a simulator developed during this thesis and on real scenes. We showed that the calibration is robust to poor model initialization and that the depth estimation accuracy competes with the state of the art.
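The depth estimation described in this abstract ultimately rests on triangulation between neighbouring micro-images. The following toy sketch illustrates only that underlying principle; the focal length, baseline, and disparity values are hypothetical, and the thesis's actual method is a photometric minimization on raw sensor data, not this closed-form formula.

```python
# Toy illustration of triangulation between neighbouring micro-images:
# a scene point appears shifted (disparity) between two micro-images
# separated by a baseline; depth follows as z = f * b / d.
# All numbers below are hypothetical.

def depth_from_disparity(focal_px: float, baseline_px: float, disparity_px: float) -> float:
    """Classic stereo triangulation: depth = focal * baseline / disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_px / disparity_px

# Farther points shift less between adjacent micro-images, so smaller
# disparities map to larger depths.
near = depth_from_disparity(focal_px=500.0, baseline_px=10.0, disparity_px=5.0)
far = depth_from_disparity(focal_px=500.0, baseline_px=10.0, disparity_px=1.0)
```

Accurate metric depth from this relation is exactly why the precise calibration of the micro-lens geometry, the subject of the thesis, matters.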
Belhedi, Amira. "Modélisation du bruit et étalonnage de la mesure de profondeur des caméras Temps-de-Vol". Thesis, Clermont-Ferrand 1, 2013. http://www.theses.fr/2013CLF1MM08/document.
3D cameras open new possibilities in fields such as 3D reconstruction, augmented reality, and video surveillance, since they provide depth information at high frame rates. However, they have limitations that affect the accuracy of their measurements. For ToF cameras in particular, two types of error can be distinguished: stochastic camera noise and depth distortion. In the state of the art on ToF cameras, the noise is not well studied, and the depth distortion models are difficult to use and do not guarantee the accuracy required for some applications. The objective of this thesis is to study, model, and propose a calibration method for these two errors of ToF cameras that is both accurate and easy to set up. For the noise as for the depth distortion, two solutions are proposed, each addressing a different concern: the former aims at an accurate model, while the latter favors simplicity of set-up. Thus, for the noise, while the majority of proposed models are based only on amplitude information, we propose a first model that also integrates the pixel position in the image. For better accuracy, we propose a second model in which we replace the amplitude by the depth and the integration time. Regarding the depth distortion, we propose a first solution based on a non-parametric model, which guarantees better accuracy. Then, we use prior knowledge of the planar geometry of the observed scene to provide a solution that is easier to use than the previous one and than those of the literature.
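A noise model parameterized by depth and integration time, as the second model in this abstract is, might be sketched as follows. The functional form and the coefficients are purely illustrative assumptions, not the model fitted in the thesis.

```python
# Hypothetical ToF noise model of the kind described above: the standard
# deviation of the depth measurement grows with depth and shrinks as the
# integration time increases. Form and coefficients are illustrative only.

def tof_noise_std(depth_m: float, integration_time_ms: float,
                  a: float = 0.002, b: float = 0.0005) -> float:
    """sigma(d, t) = a + b * d**2 / t, in metres (assumed form)."""
    if integration_time_ms <= 0:
        raise ValueError("integration time must be positive")
    return a + b * depth_m ** 2 / integration_time_ms

# At a fixed depth, longer integration reduces the stochastic noise.
sigma_short = tof_noise_std(3.0, integration_time_ms=1.0)
sigma_long = tof_noise_std(3.0, integration_time_ms=4.0)
```

Such a parametric curve would be fit to the empirical spread of repeated depth measurements per pixel; the thesis additionally considers the pixel position in the image, which this sketch omits.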
Benyoucef, Rayane. "Contribution à la cinématique à partir du mouvement : approches basées sur des observateurs non linéaires". Electronic Thesis or Diss., université Paris-Saclay, 2021. http://www.theses.fr/2021UPASG091.
This manuscript outlines the scientific contributions of this thesis, whose objective is to solve the 3D reconstruction problem known as Structure from Motion (SfM) through the synthesis of dynamic models and observation algorithms for the reconstruction of unmeasured variables. Vision sensors are undoubtedly among the richest sources of information about the environment, and scene structure estimation is beginning to find professional uses. There has been significant interest in recovering the 3D structure of a scene from 2D pixels over a sequence of images, and several solutions have been proposed in the literature. The objective of the classic SfM problem is to estimate the Euclidean coordinates of tracked feature points attached to an object (i.e., its 3D structure), provided the relative motion between the camera and the object is known. It is thoroughly documented in the literature that the camera's translational motion has a strong effect on the performance of 3D structure estimation. For eye-in-hand cameras carried by robot manipulators fixed to the ground, it is straightforward to measure linear velocity; for cameras mounted on mobile robots, however, practical issues arise. In this context, particular attention has been devoted to estimating the camera's linear velocities. Hence, another objective of this thesis is to develop deterministic nonlinear observers that address the SfM problem and reliably estimate the unknown time-varying linear velocities used as input to the estimation scheme, an implicit requirement for most active vision. The techniques used are based on Takagi-Sugeno models and Lyapunov-based analysis.
In the first part, a solution to the SfM problem is presented, where the main purpose is to identify the 3D structure of a tracked feature point observed by a moving calibrated camera, assuming precise knowledge of the camera's linear and angular velocities. This work is then extended to depth reconstruction with a partially calibrated camera. In the second part, the thesis focuses on jointly identifying the 3D structure of a tracked feature point and the camera's linear velocity. The first strategy recovers the 3D information and a partial estimate of the camera's linear velocity. The thesis then introduces structure estimation together with full estimation of the camera's linear velocity, where the proposed observer only requires the camera's angular velocity and the corresponding linear and angular accelerations. The performance of each observer is demonstrated through a series of simulations with several scenarios and through experimental tests.
Ali-Bey, Mohamed. "Contribution à la spécification et à la calibration des caméras relief". Thesis, Reims, 2011. http://www.theses.fr/2011REIMS022/document.
The work proposed in this thesis is part of the ANR-Cam-Relief and CPER-CREATIS projects supported by the French National Research Agency, the Champagne-Ardenne region, and the FEDER. These studies were also carried out in collaboration with the 3DTV-Solutions company and two groups of the CReSTIC (AUTO and SIC). The objective of this project is, among others, to design, by analogy with today's popular 2D systems, 3D shooting systems able to display high-quality 3D images on auto-stereoscopic 3D screens viewable without glasses. Our interest has focused particularly on shooting systems with a parallel, decentered configuration. The research led in this thesis is motivated by the inability of static configurations of these shooting systems to correctly capture real dynamic scenes for correct auto-stereoscopic rendering. To overcome this drawback, an adaptation scheme for the geometrical configuration of the shooting system is proposed. To determine which parameters this adaptation should affect, the effect of holding each parameter constant on the rendering quality is studied. The repercussions of dynamic and mechanical constraints on the 3D rendering are then examined. Positioning accuracy of the structural parameters is approached through two proposed rendering-quality assessment methods, used to determine the positioning-error thresholds of the shooting system's structural parameters. Finally, the calibration problem is discussed: we propose an approach based on the DLT method, and perspectives are envisaged for the automatic control of these shooting systems by classical approaches or by visual servoing.
Auvinet, Edouard. "Analyse d’information tridimensionnelle issue de systèmes multi-caméras pour la détection de la chute et l’analyse de la marche". Thèse, Rennes 2, 2012. http://hdl.handle.net/1866/9770.
This thesis is concerned with defining new clinical investigation methods to assess the impact of ageing on motricity. In particular, it focuses on two main disturbances that can occur with ageing: falls and gait impairment. These two motricity disturbances remain poorly understood, and their clinical analysis presents real scientific and technological challenges. In this thesis, we propose novel measuring methods usable in everyday life or in the gait clinic, with a minimum of technical constraints. In the first part, we address the problem of fall detection at home, which has been widely discussed in recent years. In particular, we propose an approach that exploits the subject's volume, reconstructed from multiple calibrated cameras. Such methods are generally very sensitive to the occlusions that inevitably occur in the home, and we therefore propose an original approach that is much more robust to them. Its efficiency and real-time operation have been validated on more than two dozen videos of falls and lures, with results approaching 100% sensitivity and specificity when four or more cameras are used. In the second part, we go further in exploiting the reconstructed volumes of a person performing a particular motor task, treadmill walking, in a clinical diagnostic setting, and analyse gait quality more specifically. We develop the use of a depth camera for quantifying the spatial and temporal asymmetry of lower-limb movement during walking. After detecting each step in time, the method compares the surface of each leg with that of the corresponding symmetric leg in the opposite step. Validation performed on a cohort of 20 subjects showed the viability of the approach.
Prepared under joint supervision (cotutelle) with the M2S laboratory of Rennes 2.
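The left/right surface comparison described in this abstract suggests a simple asymmetry index. The sketch below is a minimal illustration of that idea; the per-step "surface" values (e.g. segmented leg areas from a depth camera) are hypothetical inputs, and the thesis's actual metric may be defined differently.

```python
# Minimal sketch of a spatial gait-asymmetry index: compare a surface
# measurement of one leg during a step with the corresponding measurement
# of the other leg in the opposite step. Inputs are hypothetical.

def asymmetry_index(left_surface: float, right_surface: float) -> float:
    """Relative difference between corresponding left/right step surfaces."""
    mean = (left_surface + right_surface) / 2.0
    if mean == 0:
        return 0.0
    return abs(left_surface - right_surface) / mean

# A perfectly symmetric gait yields 0; larger values flag an imbalance
# between the lower limbs.
symmetric = asymmetry_index(1000.0, 1000.0)
imbalanced = asymmetry_index(1000.0, 800.0)
```

In practice such an index would be computed per step pair and aggregated over a treadmill session, both spatially and temporally.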
Voisin, Yvon. "Détermination d'un critère pour la mise au point automatique des caméras pour des scènes à faible profondeur de champ : contribution à la mise au point des microscopes". Besançon, 1993. http://www.theses.fr/1993BESA2016.
Auvinet, Edouard. "Analyse d'information tridimensionnelle issue de systèmes multi-caméras pour la détection de la chute et l'analyse de la marche". PhD thesis, Université Rennes 2, 2012. http://tel.archives-ouvertes.fr/tel-00946188.
Argui, Imane. "A vision-based mixed-reality framework for testing autonomous driving systems". Electronic Thesis or Diss., Normandie, 2024. http://www.theses.fr/2024NORMIR37.
This thesis explores the development and validation of autonomous navigation systems within a mixed-reality (MR) framework, aiming to bridge the gap between virtual simulation and real-world testing. The research emphasizes the potential of MR environments for safely, efficiently, and cost-effectively testing autonomous systems. The thesis is structured around several chapters, beginning with a review of state-of-the-art technologies in autonomous navigation and mixed-reality applications. Through both rule-based and learning-based models, the research investigates the performance of autonomous robots within simulated, real, and MR environments. One of the core objectives is to reduce the "reality gap", the discrepancy between behaviors observed in simulation versus real-world operation, by integrating real-world elements with virtual components in MR environments. This approach allows for more accurate testing and validation of algorithms without the risks associated with physical trials. A significant part of the work is dedicated to implementing and testing an offline augmentation strategy aimed at enhancing the perception capabilities of autonomous systems using depth information. Furthermore, reinforcement learning (RL) is applied to evaluate its potential within MR environments. The thesis demonstrates that RL models can effectively learn to navigate and avoid obstacles in virtual simulations and perform similarly well when transferred to MR environments, highlighting the framework's flexibility across different autonomous system models. Through these experiments, the thesis establishes MR environments as a versatile and robust platform for advancing autonomous navigation technologies, offering a safer, more scalable approach to model validation before real-world deployment.
Sune, Jean-Luc. "Estimation de la profondeur d'une scène à partir d'une caméra en mouvement". Grenoble INPG, 1995. http://www.theses.fr/1995INPG0189.
Soontranon, Narut. "Appariement entre images de point de vue éloignés par utilisation de carte de profondeur". Amiens, 2013. http://www.theses.fr/2013AMIE0114.
The work presented in this thesis focuses on the extraction of homologous points (point matches) between images acquired from different viewpoints. This is an essential first step in many applications, particularly 3D reconstruction. Existing algorithms are very efficient for images with close viewpoints, but their performance can drop drastically for wide-baseline images. We are interested in the case where, in addition to radiometry, depth information is available for each pixel. This information can be obtained from different systems (LIDAR, RGB-D cameras, light-field cameras). Our approach uses it to compensate for the geometric distortion between viewpoints, which methods using radiometry alone do not handle efficiently. The first approach deals with planar-surface scenes (urban environments, modern buildings): each planar region is rectified to an orthogonal view. The second approach is applied to more complex scenes (museum collections, sculptures) and relies on a conformal mapping projection. For both schemes, our contributions substantially improve the detection of point matches, both in their quantity and in their distribution over the images, compared with the SIFT, MSER, and ASIFT algorithms. Comparative results show that the homologous points obtained by our approach are numerous and well distributed on the images.
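The planar rectification described in this abstract amounts to warping each planar region through a homography so that it appears fronto-parallel before descriptor matching. The sketch below shows only the point-warping step of that idea; the 3x3 matrices are hypothetical examples, not homographies estimated by the thesis's method.

```python
# Illustrative sketch of warping a point through a 3x3 homography, the
# basic operation behind rectifying a slanted planar region to a
# fronto-parallel view before matching. Matrices below are hypothetical.

def apply_homography(H, x: float, y: float):
    """Map (x, y) through homography H (row-major nested lists), with
    division by the projective coordinate w."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    u = (H[0][0] * x + H[0][1] * y + H[0][2]) / w
    v = (H[1][0] * x + H[1][1] * y + H[1][2]) / w
    return u, v

# The identity homography leaves points unchanged; a pure scaling doubles
# both coordinates.
IDENTITY = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
SCALE2 = [[2.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 1.0]]
```

In a full pipeline, the homography for each region would be derived from the per-pixel depth (plane normal and distance) before descriptors are computed on the rectified patch.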