Dissertations on the topic „Localisation et cartographie visuelles simultanées“
Browse the top 32 dissertations for research on the topic „Localisation et cartographie visuelles simultanées“.
Decrouez, Marion. „Localisation et cartographie visuelles simultanées en milieu intérieur et en temps réel“. Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENM010/document.
In this thesis, we explore the problem of modeling an unknown environment using monocular vision for localization applications. We focus on modeling dynamic indoor environments. Many objects in indoor environments are likely to be moved. These movements significantly affect the structure and appearance of the environment and disrupt existing methods of visual localization. We present in this work a new approach for modeling the environment and its evolution over time. We define the scene explicitly as a static structure and a set of dynamic objects. An object is defined as a rigid entity that a user can take and move, and that is visually detectable. First, we show how to automatically discover new objects in a dynamic environment. Existing methods of visual localization simply ignore the inconsistencies due to changes in the scene. We aim to analyze these changes to extract additional information. Without any prior knowledge, an object is a set of points with coherent motion relative to the static structure of the scene. We combine two methods of visual localization to compare explorations of the same environment taken at different times. The comparison makes it possible to detect objects that have moved between the two shots. For each object, a geometric model and an appearance model are learned. Moreover, we extend the scene model while updating the metric map and the topological map of the static structure of the environment. Object discovery using motion is based on a new algorithm for multiple-structure detection in an image pair. Given a set of correspondences between two views, this RANSAC-based method extracts the different structures corresponding to different model parameterizations seen in the data. The method is applied to homography estimation to detect planar structures, and to fundamental matrix estimation to detect structures that have shifted relative to one another.
Our approach to dynamic scene modeling is applied in a new formulation of place recognition that takes into account the presence of dynamic objects in the environment. The model of a place consists of an appearance model of the static structure observed in that place. An object database is learned from previous observations in the environment with the motion-based object discovery method. The place recognition we propose detects the dynamic objects seen in the place and rejects false detections caused by these objects. The different methods described in this dissertation are tested on synthetic and real data. Qualitative and quantitative results are presented throughout the dissertation.
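The multiple-structure detection this abstract describes can be illustrated with a minimal sequential-RANSAC sketch. This is an illustrative reconstruction, not Decrouez's implementation: homographies are fitted with a basic DLT, the inliers of the best model are removed, and the search repeats until no well-supported structure remains.

```python
import numpy as np

def fit_homography(src, dst):
    """Direct Linear Transform: homography from four or more point pairs."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def transfer_error(H, src, dst):
    """Forward transfer error of each correspondence under H (pixels)."""
    src_h = np.hstack([src, np.ones((len(src), 1))])
    proj = src_h @ H.T
    proj = proj[:, :2] / proj[:, 2:3]
    return np.linalg.norm(proj - dst, axis=1)

def sequential_ransac_homographies(src, dst, thresh=2.0, iters=200, min_inliers=8):
    """Greedy multi-structure detection: fit one homography with RANSAC,
    remove its inliers, and repeat while enough points remain."""
    rng = np.random.default_rng(0)
    remaining = np.arange(len(src))
    models = []
    while len(remaining) >= min_inliers:
        best_inliers = np.array([], dtype=int)
        for _ in range(iters):
            sample = rng.choice(remaining, 4, replace=False)
            H = fit_homography(src[sample], dst[sample])
            err = transfer_error(H, src[remaining], dst[remaining])
            inliers = remaining[err < thresh]
            if len(inliers) > len(best_inliers):
                best_inliers = inliers
        if len(best_inliers) < min_inliers:
            break
        # refit on all inliers, then remove them and look for the next structure
        models.append((fit_homography(src[best_inliers], dst[best_inliers]), best_inliers))
        remaining = np.setdiff1d(remaining, best_inliers)
    return models
```

On correspondences generated by two differently moving planes, the loop recovers one homography and one inlier set per plane; the thesis applies the same greedy scheme to fundamental-matrix estimation as well.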
Angeli, Adrien. „Détection visuelle de fermeture de boucle et applications à la localisation et cartographie simultanées“. Phd thesis, Université Pierre et Marie Curie - Paris VI, 2008. http://pastel.archives-ouvertes.fr/pastel-00004634.
Angeli, Adrien. „Détection visuelle de fermeture de boucle et applications à la localisation et cartographie simultanées“. Paris 6, 2008. http://www.theses.fr/2008PA066388.
Lemaire, Thomas. „Localisation et Cartographie Simultanées avec Vision Monoculaire“. Phd thesis, Ecole nationale superieure de l'aeronautique et de l'espace, 2006. http://tel.archives-ouvertes.fr/tel-00452478.
Vincke, Bastien. „Architectures pour des systèmes de localisation et de cartographie simultanées“. Phd thesis, Université Paris Sud - Paris XI, 2012. http://tel.archives-ouvertes.fr/tel-00770323.
Dujardin, Aymeric. „Détection d’obstacles par stéréovision en environnement non structuré“. Thesis, Normandie, 2018. http://www.theses.fr/2018NORMIR09.
Autonomous vehicles and robots represent the future of the transportation and production industries. The challenge ahead will come from the robustness of perception and flexibility in the face of unexpected situations and changing environments. Stereoscopic cameras are passive sensors that provide color images and depth information of the scene by correlating two images, much like human vision. In this work, we developed a localization system based on visual odometry that can efficiently determine the position of the sensor in space by exploiting the dense depth map. It is also combined with a SLAM system that enables localization robust to disturbances and potential drifts. Additionally, we developed several mapping and obstacle detection solutions, for both aerial and terrestrial vehicles. These algorithms are now partly integrated into commercial products.
Dine, Abdelhamid. „Localisation et cartographie simultanées par optimisation de graphe sur architectures hétérogènes pour l’embarqué“. Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLS303/document.
Simultaneous Localization And Mapping is the process that allows a robot to build a map of an unknown environment while at the same time determining its position on this map. In this work, we are interested in the graph-based SLAM method. This method uses a graph to represent and solve the SLAM problem. Graph optimization consists in finding a graph configuration (trajectory and map) that best matches the constraints introduced by the sensor measurements. Graph optimization is characterized by a high computational complexity that requires substantial computing and memory resources, particularly to explore large areas. This limits the use of graph-based SLAM in real-time embedded systems. This thesis contributes to reducing the computational complexity of graph-based SLAM. Our approach is based on two complementary axes: data representation in memory, and implementation on embedded heterogeneous architectures. On the first axis, we propose an incremental data structure to efficiently represent and then optimize the graph. On the second axis, we explore the use of recent heterogeneous architectures to speed up graph-based SLAM. We propose an efficient implementation model for embedded applications, and we highlight the advantages and disadvantages of the evaluated architectures, namely GPU-based and FPGA-based Systems-on-Chip.
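The graph optimization this abstract refers to can be made concrete with a deliberately tiny sketch (not Dine's method): a 1D pose graph in which each edge constrains the difference of two poses, solved as a weighted linear least-squares problem. Real SLAM graphs are nonlinear and are handled with sparse iterative solvers, but the structure of the problem is the same.

```python
import numpy as np

def optimize_pose_graph_1d(n_poses, edges, anchor=0.0):
    """Least-squares solution of a 1D pose graph. Each edge (i, j, z, w)
    encodes the constraint x_j - x_i ≈ z with weight w; pose 0 is anchored
    to remove the gauge freedom."""
    A = np.zeros((len(edges) + 1, n_poses))
    b = np.zeros(len(edges) + 1)
    for row, (i, j, z, w) in enumerate(edges):
        sw = np.sqrt(w)
        A[row, i] = -sw
        A[row, j] = sw
        b[row] = sw * z
    A[-1, 0] = 1.0          # anchor the first pose
    b[-1] = anchor
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Four poses linked by unit odometry steps, plus one loop-closure edge that
# says the total displacement was only 2.7: the inconsistency is spread
# evenly over the trajectory instead of being absorbed by the last pose.
edges = [(0, 1, 1.0, 1.0), (1, 2, 1.0, 1.0), (2, 3, 1.0, 1.0),
         (0, 3, 2.7, 1.0)]
poses = optimize_pose_graph_1d(4, edges)
```

The dense normal equations built here are exactly what becomes memory-hungry at scale, which is why the thesis studies incremental data structures and heterogeneous hardware for this step.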
El, Hamzaoui Oussama. „Localisation et cartographie simultanées pour un robot mobile équipé d'un laser à balayage : CoreSLAM“. Phd thesis, Ecole Nationale Supérieure des Mines de Paris, 2012. http://pastel.archives-ouvertes.fr/pastel-00935600.
Boucher, Maxime. „Quelques contributions en localisation et cartographie simultanées multi-capteurs : application à la réalité augmentée“. Thesis, Evry-Val d'Essonne, 2014. http://www.theses.fr/2014EVRY0055/document.
Gathering information from the images of a camera over time, in order to map the environment and localize the camera within it, is a task referred to as Simultaneous Localization and Mapping, or SLAM. Developed by both the robotics and computer vision scientific communities, its applications are many. Robots gain autonomy from this ability, and impressive results have recently been obtained in applications to autonomous transportation vehicles. Another field of application is augmented reality. The localization offered by SLAM enables us to display virtual objects in a way consistent with the user's movements. Thus cinema, video games and tourism applications can benefit from SLAM methods. Visual aid for workers performing complex or repetitive tasks is another interesting application of SLAM methods. During this PhD thesis, we took an interest in SLAM with realistic augmented reality applications in mind. Though the topic has been extensively explored and many impressive results obtained, the task is not completely solved. The problem is still open, regarding spatial facets (drift, loop closure) as well as temporal ones (processing time). As part of our monocular SLAM explorations, we mainly studied the drift issue. We then explored multisensor SLAM, both as a means to handle rotational movements that are problematic for the monocular setup and as a means to reduce the substantial processing times needed to solve the problem.
Weber, Michael. „Development of a method for practical testing of camera-based advanced driver assistance systems in automotive vehicles using augmented reality“. Electronic Thesis or Diss., Bourgogne Franche-Comté, 2023. http://www.theses.fr/2023UBFCA027.
Advanced Driver Assistance Systems (ADAS) support the driver, offer comfort, and take responsibility for increasing road safety. These complex systems undergo an extensive testing phase, which leaves optimization potential regarding quality, reproducibility, and costs. ADAS of the future will support ever-larger proportions of driving situations in increasingly complex scenarios and represent a key factor for Autonomous Driving (AD). Current testing methods for ADAS can be divided into simulation and reality. The core concept behind simulation is to benefit from reproducibility, flexibility, and cost reduction. However, simulation cannot yet completely replace real-world tests. Physical conditions, such as weather, road surface, and other variables, play a crucial role in evaluating ADAS road tests and cannot be fully replicated in a virtual environment. These test methods rely on real driving tests on special test sites as well as in real road traffic, and are very time-consuming and costly. Therefore, new and efficient test methods are required to pave the way for future ADAS. A newer approach, Vehicle in the Loop (VIL), which is already being used in industry today, combines the advantages of simulation and reality. The approach in this project is a new method alongside existing VIL solutions. Taking advantage of testing ADAS in both simulation and reality, this project presents a new approach that uses Augmented Reality (AR) to test camera-based ADAS in a reproducible, cost- and time-efficient way. High computing power is needed for complex automotive environmental conditions, such as high vehicle speed and fewer orientation points on a test track compared to AR applications inside a building. A three-dimensional model with accurate information about the test site is generated based on the combination of visual Simultaneous Localization and Mapping (vSLAM) and Semantic Segmentation. The use of a special augmentation process allows us to enrich reality with virtual road users and to present a proof of concept for future test procedures.
Servant, F. „Localisation et cartographie simultanées en vision monoculaire et en temps réel basé sur les structures planes“. Phd thesis, Université Rennes 1, 2009. http://tel.archives-ouvertes.fr/tel-00844909.
Servant, Fabien. „Localisation et cartographie simultanées en vision monoculaire et en temps réel basé sur les structures planes“. Rennes 1, 2009. ftp://ftp.irisa.fr/techreports/theses/2009/servant.pdf.
Our work deals with computer vision. The problem of augmented reality implies real-time estimation of the relative position between camera and scene. This thesis presents a complete pose tracking method that works with planar structures, which are abundant in indoor and outdoor urban environments. The pose tracking is done using a low-cost camera and an inertial sensor. Our approach is to use planes to make the pose estimation easier. Homographies computed by an image tracking algorithm presented in this document are used as measurements for our Simultaneous Localization And Mapping method. This SLAM method permits long-term and robust pose tracking by propagating the measurement uncertainties. Work on selecting the regions to track and on initializing their corresponding plane parameters is also described in this thesis. Numerical and image-based experiments show the validity of our approach.
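The reason homographies can serve as SLAM measurements, as in this abstract, is the standard plane-induced relation H = K (R + t nᵀ/d) K⁻¹ linking two views of a plane n·X = d. A minimal illustrative sketch (not Servant's code) builds this homography and checks that it transfers the projection of a plane point from one image to the other:

```python
import numpy as np

def plane_induced_homography(K, R, t, n, d):
    """Homography induced by the plane n·X = d (first camera frame) between
    two views related by X2 = R @ X1 + t:  H = K (R + t nᵀ / d) K⁻¹."""
    return K @ (R + np.outer(t, n) / d) @ np.linalg.inv(K)

def project(K, X):
    """Pinhole projection to pixel coordinates."""
    u = K @ X
    return u[:2] / u[2]
```

For any point on the plane, applying H to its pixel in the first image lands on its pixel in the second image; conversely, a homography measured by image tracking constrains (R, t, n, d) in the SLAM filter.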
Le, Bars Fabrice. „Analyse par intervalles pour la localisation et la cartographie simultanées : application à la robotique sous-marine“. Phd thesis, Université de Bretagne occidentale - Brest, 2011. http://tel.archives-ouvertes.fr/tel-00670495.
Gérossier, Franck. „Localisation et cartographie simultanées en environnement extérieur à partir de données issues d'un radar panoramique hyperfréquence“. Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2012. http://tel.archives-ouvertes.fr/tel-00864181.
Lothe, Pierre. „Localisation et cartographie simultanées par vision monoculaire contraintes par un SIG : application à la géolocalisation d'un véhicule“. Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2010. http://tel.archives-ouvertes.fr/tel-00625652.
Le, Corff Sylvain. „Estimations pour les modèles de Markov cachés et approximations particulaires : Application à la cartographie et à la localisation simultanées“. Phd thesis, Telecom ParisTech, 2012. http://tel.archives-ouvertes.fr/tel-00773405.
Le, Corff Sylvain. „Estimations pour les modèles de Markov cachés et approximations particulaires : Application à la cartographie et à la localisation simultanées“. Electronic Thesis or Diss., Paris, ENST, 2012. http://www.theses.fr/2012ENST0052.
This document is dedicated to inference problems in hidden Markov models. The first part is devoted to an online maximum likelihood estimation procedure which does not store the observations. We propose a new Expectation Maximization based method called the Block Online Expectation Maximization (BOEM) algorithm. This algorithm solves the online estimation problem for general hidden Markov models. In complex situations, it requires the introduction of Sequential Monte Carlo methods to approximate several expectations under the fixed-interval smoothing distributions. The convergence of the algorithm is shown under the assumption that the Lp mean error due to the Monte Carlo approximation can be controlled explicitly in the number of observations and in the number of particles. A second part of the document therefore establishes such controls for several Sequential Monte Carlo algorithms. The BOEM algorithm is then used to solve the simultaneous localization and mapping problem in different frameworks. Finally, the last part of this thesis is dedicated to nonparametric estimation in hidden Markov models. It is assumed that the Markov chain (Xk) is a random walk lying in a compact set, with an increment distribution known up to a scaling factor a. At each time step k, Yk is a noisy observation of f(Xk), where f is an unknown function. We establish the identifiability of the statistical model and we propose estimators of f and a based on the pairwise likelihood of the observations.
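The computational idea behind BOEM, re-estimating parameters from running sufficient statistics after blocks of increasing size without ever storing past observations, can be caricatured on the simplest possible model: an i.i.d. Gaussian mean, i.e. no hidden chain and no particle approximation at all. The real algorithm handles general hidden Markov models with SMC-approximated smoothing expectations; this sketch only shows the block-wise online update pattern.

```python
import numpy as np

def block_online_em_mean(stream, block_sizes):
    """Toy block-online estimation of a Gaussian mean: sufficient statistics
    are accumulated block by block and the parameter is refreshed after each
    block; past observations are never stored."""
    s_sum, s_n = 0.0, 0          # running sufficient statistics
    mu = 0.0
    it = iter(stream)
    for size in block_sizes:
        for _ in range(size):
            s_sum += next(it)    # one pass over the block, O(1) memory
            s_n += 1
        mu = s_sum / s_n         # parameter update after the block
    return mu
```

Growing block sizes (here 10, 20, 40, ...) mirror the thesis's setting, where longer blocks progressively reduce the variance of each update.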
Eudes, Alexandre. „Localisation et cartographie simultanées par ajustement de faisceaux local : propagation d'erreurs et réduction de la dérive à l'aide d'un odomètre“. Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2011. http://tel.archives-ouvertes.fr/tel-00662438.
Corrêa, Victorino Alessandro. „La commande référencée capteur : une approche robuste au problème de navigation, localisation et cartographie simultanées pour un robot d'intérieur“. Nice, 2002. http://www.theses.fr/2002NICE5748.
Chanier, François. „Localisation et cartographie simultanées de l'environnement à bord de véhicules autonomes : analyse de solutions fondées sur le filtrage de Kalman“. Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2010. http://tel.archives-ouvertes.fr/tel-00719341.
Chanier, François. „Localisation et cartographie simultanées de l'environnement à bord de véhicules autonomes : analyse de solutions fondées sur le filtrage de Kalman“. Phd thesis, Clermont-Ferrand 2, 2010. http://www.theses.fr/2010CLF22021.
Vivet, Damien. „Perception de l'environnement par radar hyperfréquence. Application à la localisation et la cartographie simultanées, à la détection et au suivi d'objets mobiles en milieu extérieur“. Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2011. http://tel.archives-ouvertes.fr/tel-00659270.
Larnaout, Dorra. „Localisation d'un véhicule à l'aide d'un SLAM visuel contraint“. Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2014. http://tel.archives-ouvertes.fr/tel-01038016.
Dia, Roxana. „Towards Environment Perception using Integer Arithmetic for Embedded Application“. Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALM038.
The main drawback of using grid-based representations for SLAM and for global localization is the exponential computational complexity required in terms of the grid size (of the map and the pose maps). The grid size required for modeling the environment surrounding a robot or a vehicle can be on the order of thousands of millions of cells. For instance, a 2D square space of size 100m × 100m with a cell size of 10cm is modelled with a grid of 1 million cells. If we include 2m of height to represent the third dimension, 20 million cells are required. Consequently, classical grid-based SLAM and global localization approaches require a parallel computing unit in order to meet the latency imposed by safety standards. Such computation is usually done on workstations embedding Graphical Processing Units (GPUs) and/or high-end CPUs. However, autonomous vehicles cannot handle such platforms, for cost reasons and certification issues. These platforms also have a high power consumption that cannot be satisfied by the limited energy source available on some robots. Embedded hardware platforms are commonly used as an alternative solution in automotive applications. These platforms meet the low-cost, low-power and small-space constraints. Moreover, some of them are automotive certified, following the ISO 26262 standard. However, most of them are not equipped with a floating-point unit, which limits their computational performance. The sigma-fusion project team in the LIALP laboratory at CEA-Leti has developed an integer-based perception method suitable for embedded devices. This method builds an occupancy grid via Bayesian fusion using integer arithmetic only, hence its "embeddability" on embedded computing platforms without a floating-point unit.
This constitutes the major contribution of the PhD thesis of Tiana Rakotovao [Rakotovao Andriamahefa 2017]. The objective of the present PhD thesis is to extend the integer perception framework to SLAM and global localization problems, thus offering solutions "embeddable" on embedded systems.
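One common way to obtain integer-only Bayesian occupancy fusion of the kind described above (an illustrative sketch; the CEA-Leti integer-fusion scheme differs in its details) is to store each cell as a fixed-point log-odds value, so that fusing a new sensor reading reduces to an integer addition with saturation:

```python
import math

# Illustrative fixed-point occupancy cell: log-odds scaled by SCALE and
# stored in one signed byte, so Bayesian fusion needs no floating point.
SCALE = 16                  # fixed-point scale applied to the log-odds
L_OCC = 14                  # ≈ round(SCALE * ln(0.7 / 0.3)), a sensor "hit"
L_FREE = -6                 # ≈ round(SCALE * ln(0.4 / 0.6)), a sensor "miss"
L_MIN, L_MAX = -128, 127    # saturation bounds of an int8 cell

def fuse(cell, update):
    """One Bayesian fusion step: integer addition with saturation."""
    return max(L_MIN, min(L_MAX, cell + update))

def probability(cell):
    """Decode a cell back to an occupancy probability (floats only here)."""
    return 1.0 / (1.0 + math.exp(-cell / SCALE))
```

Because log-odds make Bayes' rule additive, all per-cell arithmetic in the hot loop stays integer; the float decode is only needed when a probability must be displayed or thresholded symbolically.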
Aguilar-Gonzalez, Abiel. „Monocular-SLAM dense mapping algorithm and hardware architecture for FPGA acceleration“. Thesis, Université Clermont Auvergne (2017-2020), 2019. http://www.theses.fr/2019CLFAC055.
Simultaneous Localization and Mapping (SLAM) is the problem of constructing a 3D map while simultaneously keeping track of an agent's location within that map. In recent years, work has focused on systems that use a single moving camera as the only sensing mechanism (monocular SLAM). This choice is motivated by the fact that inexpensive commercial cameras, smaller and lighter than the sensors used previously, are now easy to find, and that they provide visual information about the environment which can be exploited to create complex 3D maps while camera poses are simultaneously estimated. Unfortunately, previous monocular SLAM systems are based on optimization techniques that limit their performance in real-time embedded applications. To solve this problem, we propose in this work a new monocular SLAM formulation based on the hypothesis that it is possible to reach high efficiency for embedded applications, increasing the density of the point cloud map (and therefore the 3D map density and the overall positioning and mapping) by reformulating the feature-tracking/feature-matching process to achieve high performance on embedded hardware architectures such as FPGA or CUDA. In order to increase the point cloud map density, we propose new feature-tracking/feature-matching and depth-from-motion algorithms that consist of extensions of the stereo matching problem. Then, two different hardware architectures (based on FPGA and CUDA, respectively), fully suitable for real-time embedded applications, are presented. Experimental results show that it is possible to obtain accurate camera pose estimations. Compared to previous monocular systems, we rank 5th in the KITTI benchmark suite, with a higher processing speed (ours is the fastest algorithm in the benchmark) and more than ten times the point cloud density of previous approaches.
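The stereo-matching primitive that this abstract extends can be sketched in its most basic form: a winner-takes-all Sum of Absolute Differences search along a rectified scanline (purely illustrative; the thesis generalizes this kind of search to feature tracking and depth-from-motion on FPGA/CUDA):

```python
import numpy as np

def sad_disparity(left_row, right_row, x, win, max_disp):
    """Winner-takes-all disparity for pixel x of a rectified left scanline:
    minimise the Sum of Absolute Differences against the right scanline."""
    patch_l = left_row[x - win : x + win + 1].astype(np.int32)
    best_d, best_cost = 0, np.inf
    for d in range(max_disp + 1):
        if x - d - win < 0:          # window would leave the image
            break
        patch_r = right_row[x - d - win : x - d + win + 1].astype(np.int32)
        cost = int(np.abs(patch_l - patch_r).sum())
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```

This per-pixel independence is what makes such matching attractive for hardware: every pixel's disparity search can run in parallel with integer arithmetic only.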
Melbouci, Kathia. „Contributions au RGBD-SLAM“. Thesis, Université Clermont Auvergne (2017-2020), 2017. http://www.theses.fr/2017CLFAC006/document.
To guarantee autonomous and safe navigation for a mobile robot, the processing achieved for its localization must be fast and accurate enough to enable the robot to perform high-level navigation and obstacle avoidance tasks. Authors of Simultaneous Localization And Mapping (SLAM) based works have been trying for years to ensure this speed/accuracy trade-off. Most existing work in monocular SLAM has largely centered around sparse feature-based representations of the environment. By tracking salient image points across many frames of video, both the positions of the features and the motion of the camera can be inferred live. Within the visual SLAM community, there has been a focus both on increasing the number of features that can be tracked across an image and on efficiently managing and adjusting this map of features in order to improve camera trajectory and feature location accuracy. However, visual SLAM suffers from some limitations. Indeed, with a single camera and without any assumptions or prior knowledge about the camera's environment, rotation can be retrieved, but the translation is only known up to scale. Furthermore, monocular visual SLAM is an incremental process prone to small drifts in both pose measurement and scale, which, when integrated over time, become increasingly significant over large distances. To cope with these limitations, we have centered our work around the following issue: integrating additional information into an existing monocular visual SLAM system in order to constrain the camera localization and the mapped points, provided that the high speed of the initial SLAM process is preserved and that the absence of these added constraints does not cause the process to fail. For these reasons, we have chosen to integrate the depth information provided by a 3D sensor (e.g. Microsoft Kinect) and geometric information about the scene structure.
The primary contribution of this work consists of modifying the SLAM algorithm proposed by Mouragnon et al. (2006b) to take into account the depth measurement provided by a 3D sensor. This consists of several rather straightforward changes, but also of a way to combine the depth and visual data in the bundle adjustment process. The second contribution is to propose a solution that uses, in addition to the depth and visual data, constraints on points belonging to the planes of the scene. The proposed solutions have been validated on synthetic as well as real sequences depicting various environments, and have been compared to state-of-the-art methods. The performances obtained demonstrate that the additional constraints significantly improve both the accuracy and the robustness of the SLAM localization. Furthermore, these solutions are easy to roll out and not very time-consuming.
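The way a depth measurement can enter bundle adjustment, as described above, amounts to appending a weighted depth term to the usual reprojection residual of each observation. A minimal illustrative residual follows (rotation omitted for brevity, and the weights `w_uv`, `w_d` are assumptions; this is not the authors' exact formulation):

```python
import numpy as np

def ba_residuals(point, cam_t, obs_uv, obs_depth, K, w_uv=1.0, w_d=10.0):
    """Residual vector for one observation in a depth-augmented bundle
    adjustment: 2D reprojection error plus a weighted depth error.
    (The camera pose is reduced to a translation for brevity.)"""
    Xc = point - cam_t                  # point in the camera frame
    uvw = K @ Xc
    r_uv = uvw[:2] / uvw[2] - obs_uv    # reprojection error (pixels)
    r_d = Xc[2] - obs_depth             # depth error (metres)
    return np.hstack([w_uv * r_uv, w_d * r_d])
```

Stacking such residuals over all observations and minimizing their squared norm jointly over poses and points is the bundle adjustment step; the extra depth row is what anchors the otherwise scale-ambiguous monocular reconstruction.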
Loesch, Angélique. „Localisation d'objets 3D industriels à l'aide d'un algorithme de SLAM contraint au modèle“. Thesis, Université Clermont Auvergne (2017-2020), 2017. http://www.theses.fr/2017CLFAC059/document.
In the industrial domain, applications such as quality control, automation of complex tasks or maintenance support with Augmented Reality (AR) could greatly benefit from visual tracking of 3D objects. However, this technology is under-exploited due to the difficulty of simultaneously providing ease of deployment, localization quality and genericity. Most existing solutions indeed involve a complex or expensive deployment of motion capture sensors, or require human supervision to simplify the 3D model. And finally, most tracking solutions are restricted to textured or polyhedral objects in order to achieve accurate camera pose estimation. Tracking any object is a challenging task due to the large variety of object forms and appearances. Industrial objects may indeed have sharp edges, or occluding contours that correspond to non-static, viewpoint-dependent edges. They may also be textured or textureless. Moreover, some applications require taking large-amplitude motions as well as object occlusions into account, tasks that are not always dealt with by common model-based tracking methods. These approaches indeed exploit 3D features extracted from a model, which are matched with 2D features in the images of a video stream; however, the accuracy and robustness of the camera localization depend on the visibility of the object as well as on the motion of the camera. To better constrain the localization when the object is static, recent solutions rely on environment features that are reconstructed online, in addition to those of the model. These approaches combine SLAM (Simultaneous Localization And Mapping) and model-based tracking by using constraints from the 3D model of the object of interest. Constraining SLAM algorithms with a 3D model results in drift-free localization. However, such approaches are not generic, since they are only adapted to textured or polyhedral objects.
Furthermore, using the 3D model to constrain the optimization process may generate high memory consumption and limit the optimization to a temporal window of a few cameras. In this thesis, we propose a solution that fulfills the requirements concerning ease of deployment, localization quality and genericity. This solution, based on a visual keyframe-based constrained SLAM, exploits only an RGB camera and a geometric CAD model of the static object of interest. An RGB camera is preferred over an RGBD sensor, since the latter imposes limits on the volume, the reflectiveness or absorptiveness of the object, and the lighting conditions. A geometric CAD model is also preferred over a textured model, since textures can hardly be considered stable over time (deterioration, marks, ...) and may vary for one manufactured object; furthermore, textured CAD models are currently not widely spread. Contrary to previous methods, the presented approach deals with polyhedral and curved objects by dynamically extracting 3D contour points from a model rendered on the GPU. This extraction is integrated as a structure constraint into the constrained bundle adjustment of a SLAM algorithm. Moreover, we propose different formalisms of this constraint to reduce the memory consumption of the optimization process. These formalisms correspond to hybrid structure/trajectory constraints that use the output camera poses of a model-based tracker: they take into account the structure information given by the 3D model while relying on the formalism of trajectory constraints. The proposed solution is real-time, accurate and robust to occlusion or sudden motion. It has been evaluated on synthetic and real sequences of different kinds of objects. The results show that the accuracy achieved on the camera trajectory is sufficient to ensure a solution perfectly adapted for high-quality Augmented Reality experiences for industry.
Abouzahir, Mohamed. „Algorithmes SLAM : Vers une implémentation embarquée“. Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS058/document.
Autonomous navigation is a main axis of research in the field of mobile robotics. In this context, the robot must have an algorithm that allows it to move autonomously in complex and unfamiliar environments. Mapping in advance by a human operator is a tedious and time-consuming task, and it is not always reliable, especially when the structure of the environment changes. SLAM algorithms allow a robot to map its environment while localizing itself in space. SLAM algorithms are becoming more efficient, but no full hardware or architectural implementation has yet taken place. Such an implementation must take into account energy consumption, embeddability and computing power. This scientific work aims to evaluate embedded systems implementing localization and scene reconstruction (SLAM). The methodology adopts an AAM (Algorithm Architecture Matching) approach to improve the efficiency of algorithm implementation, especially for systems with strong constraints. An embedded SLAM system must have an electronic and software architecture that ensures the production of relevant data from sensor information, while ensuring the localization of the robot in its environment. The objective is therefore to define, for a chosen algorithm, an architecture model that meets the constraints of embedded systems. The first work of this thesis was to explore the different algorithmic approaches for solving the SLAM problem, and to study them in depth. This allowed us to evaluate four different kinds of algorithms: FastSLAM2.0, ORB-SLAM, RatSLAM and linear SLAM. These algorithms were then evaluated on multiple embedded architectures to study their portability to low-power systems with limited resources. The comparison takes into account execution time and consistency of results.
After having deeply analyzed the temporal evaluations for each algorithm, the FastSLAM2.0 was finally chosen for its compromise performance-consistency of localization result and execution time, as a candidate for further study on an embedded heterogeneous architecture. The second part of this thesis is devoted to the study of an embedded implementing of the monocular FastSLAM2.0 which is dedicated to large scale environments. An algorithmic modification of the FastSLAM2.0 was necessary in order to better adapt it to the constraints imposed by the largescale environments. The resulting system is designed around a parallel multi-core architecture. Using an algorithm architecture matching approach, the FastSLAM2.0 was implemeted on a heterogeneous CPU-GPU architecture. Uisng an effective algorithme partitioning, an overall acceleration factor o about 22 was obtained on a recent dedicated architecture for embedded systems. The nature of the execution of FastSLAM2.0 algorithm could benefit from a highly parallel architecture. A second instance hardware based on programmable FPGA architecture is proposed. The implantation was performed using high-level synthesis tools to reduce development time. A comparison of the results of implementation on the hardware architecture compared to GPU-based architectures was realized. The gains obtained are promising, even compared to a high-end GPU that currently have a large number of cores. The resulting system can map a large environments while maintainingthe balance between the consistency of the localization results and real time performance. Using multiple calculators involves the use of a means of data exchange between them. This requires strong coupling (communication bus and shared memory). This thesis work has put forward the interests of parallel heterogeneous architectures (multicore, GPU) for embedding the SLAM algorithms. 
FPGA-based heterogeneous architectures can in particular become potential candidates for embedding complex algorithms dealing with massive data.
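FastSLAM2.0 maintains a set of weighted particles, and it is these per-particle operations that map naturally onto the parallel architectures discussed above. As a minimal, hypothetical illustration of the particle-filter core (not the thesis implementation), here is a low-variance (systematic) resampler in plain Python:

```python
def systematic_resample(weights, r):
    """Systematic (low-variance) resampling, the FastSLAM resampling step.

    weights: normalized particle weights (must sum to 1).
    r: a single random offset drawn uniformly in [0, 1/N).
    Returns the indices of the particles to keep (with repetition).
    """
    n = len(weights)
    positions = [r + k / n for k in range(n)]  # evenly spaced probes
    indices = []
    i, cumulative = 0, weights[0]
    for u in positions:
        while u > cumulative:        # advance to the particle covering u
            i += 1
            cumulative += weights[i]
        indices.append(i)
    return indices

# A dominant particle gets replicated; unlikely ones are dropped.
print(systematic_resample([0.7, 0.1, 0.1, 0.1], 0.1))  # -> [0, 0, 0, 2]
```

Because each particle carries its own map estimate, the propagation and weighting that precede this step are independent across particles, which is what makes GPU and FPGA offloading attractive.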
Zureiki, Ayman. „Fusion de données multi-capteurs pour la construction incrémentale du modèle tridimensionnel texturé d'un environnement intérieur par un robot mobile“. Toulouse 3, 2008. http://thesesups.ups-tlse.fr/319/.
This thesis examines the problem of 3D modelling of an indoor environment by a mobile robot. Our main contribution consists in constructing a heterogeneous geometrical model containing textured planar landmarks, 3D lines and interest points. For that, we must fuse geometrical and photometrical data. Hence, we began by improving the stereo vision algorithm and proposed a new approach to stereo matching by graph cuts. The most significant contribution is the construction of a reduced graph that accelerates the global method and provides better results than local methods. To perceive the environment, the robot is equipped with a 3D laser scanner and a camera. We propose an algorithmic chain for incrementally constructing a heterogeneous map, using a Simultaneous Localization and Mapping algorithm based on the extended Kalman filter (EKF-SLAM). Mapping the texture onto the planar landmarks makes the data-association phase more robust.
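Data association, deciding which observed feature corresponds to which landmark in the map, is the step that the texture information helps to secure. A common baseline for it, sketched here with hypothetical names and a simplified diagonal covariance (this is not the thesis code), is gated nearest-neighbour association using the Mahalanobis distance:

```python
def mahalanobis2(obs, landmark, var):
    """Squared Mahalanobis distance under a diagonal covariance."""
    return sum((o - l) ** 2 / v for o, l, v in zip(obs, landmark, var))

def associate(obs, landmarks, var, gate=9.21):
    """Index of the nearest landmark inside the chi-square gate
    (9.21 ~ 99% quantile for 2 degrees of freedom), or None if all
    candidates are rejected and the observation starts a new landmark."""
    best, best_d2 = None, gate
    for idx, lm in enumerate(landmarks):
        d2 = mahalanobis2(obs, lm, var)
        if d2 < best_d2:
            best, best_d2 = idx, d2
    return best

landmarks = [(0.0, 0.0), (5.0, 5.0)]
var = (0.5, 0.5)  # observation variance per axis
print(associate((0.3, -0.2), landmarks, var))  # -> 0
print(associate((2.5, 2.5), landmarks, var))   # -> None (outside the gate)
```

In an EKF-SLAM loop the covariance would come from the filter's innovation; appearance cues such as the landmark textures mentioned above can then break ties that geometry alone cannot.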
Tamaazousti, Mohamed. „L'ajustement de faisceaux contraint comme cadre d'unification des méthodes de localisation : application à la réalité augmentée sur des objets 3D“. Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2013. http://tel.archives-ouvertes.fr/tel-00881206.
Gohard, Philippe-Antoine. „De la réalité augmentée sans marqueur pour l'aménagement d'intérieur à la réalité diminuée sur plateforme mobile“. Thesis, Toulouse 3, 2018. http://www.theses.fr/2018TOU30344.
In computer vision, augmented reality refers to systems that allow the overlay of virtual objects onto a sequence of images in real time. Applications of this technology are numerous and affect more and more fields, but it is the massive spread of camera-equipped mobile phones that allowed the deployment of the first public services exploiting augmented reality. The context of this thesis is the use of these systems on mobile phones for the fitting of virtual furniture in an indoor environment. Augmented reality involves the localization of the camera as well as a partial reconstruction of the observed scene, in order to place the furniture in a manner that is physically consistent with the environment. Our contributions concern first visual odometry for a rolling-shutter camera, then simple methods for the 3D reconstruction of indoor environments.
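Placing a virtual piece of furniture consistently with the scene typically requires extracting a dominant plane (the floor) from the partial reconstruction. The sketch below is a hedged illustration of RANSAC-style plane fitting, with an exhaustive search over point triples instead of random sampling so it is deterministic and only suits small point sets; the names and thresholds are our own assumptions:

```python
import itertools

def plane_from_points(p, q, r):
    """Plane through three points: unit normal n and offset d with n.x = d."""
    u = [q[i] - p[i] for i in range(3)]
    v = [r[i] - p[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    norm = sum(c * c for c in n) ** 0.5
    if norm < 1e-12:
        return None  # degenerate (collinear) sample
    n = [c / norm for c in n]
    return n, sum(n[i] * p[i] for i in range(3))

def fit_dominant_plane(points, tol=0.02):
    """Exhaustive RANSAC-style search: the best plane has the most inliers."""
    best_inliers = []
    for p, q, r in itertools.combinations(points, 3):
        plane = plane_from_points(p, q, r)
        if plane is None:
            continue
        n, d = plane
        inliers = [pt for pt in points
                   if abs(sum(n[i] * pt[i] for i in range(3)) - d) < tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers

# Nine points on the floor z = 0, plus two off-plane outliers.
floor = [(x * 0.5, y * 0.5, 0.0) for x in range(3) for y in range(3)]
cloud = floor + [(0.2, 0.3, 0.8), (1.0, 0.1, 1.2)]
print(len(fit_dominant_plane(cloud)))  # -> 9
```

A real pipeline would draw random triples from thousands of reconstructed points and refine the winning plane by least squares, but the inlier-counting logic is the same.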
Joly, Cyril. „Contributions aux méthodes de localisation et cartographie simultanées par vision omnidirectionnelle“. Phd thesis, 2010. http://tel.archives-ouvertes.fr/tel-00766366.