Contents
A selection of scientific literature on the topic "Localisation et cartographie visuelles simultanées"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the lists of current articles, books, theses, reports, and other scientific sources on the topic "Localisation et cartographie visuelles simultanées".
Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scientific publication as a PDF and read its online abstract, provided the relevant parameters are available in the metadata.
Journal articles on the topic "Localisation et cartographie visuelles simultanées"
Monod, Marie-Odile, Roland Chapuis, Philippe Gosset, Raphaël Rouveure, Damien Vivet, Franck Gérossier, Patrick Faure et al. "Projet IMPALA. Radar panoramique hyperfréquence pour la localisation et la cartographie simultanées en environnement extérieur". Traitement du signal 29, no. 6 (December 28, 2012): 463–92. http://dx.doi.org/10.3166/ts.29.463-492.
Guyonneau, Rémy, and Franck Mercier. "IstiABot ou la Conception d’un Robot Libre pour l’Éducation et la Recherche". J3eA 20 (2021): 0002. http://dx.doi.org/10.1051/j3ea/20210002.
Lucidarme, Philippe, and Olivier Simonin. "Cartographie et localisation simultanées multirobots". Robotique, May 2015. http://dx.doi.org/10.51257/a-v1-s7738.
Filliat, David. "Cartographie et localisation simultanées en robotique mobile". Robotique, March 2014. http://dx.doi.org/10.51257/a-v1-s7785.
Theses on the topic "Localisation et cartographie visuelles simultanées"
Decrouez, Marion. "Localisation et cartographie visuelles simultanées en milieu intérieur et en temps réel". Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENM010/document.
In this thesis, we explore the problem of modeling an unknown environment using monocular vision for localization applications, focusing on dynamic indoor environments. Many objects in indoor environments are likely to be moved; these movements significantly affect the structure and appearance of the environment and disrupt existing visual localization methods. We present a new approach for modeling the environment and its evolution over time. We explicitly define the scene as a static structure plus a set of dynamic objects, where an object is a rigid entity that a user can pick up and move, and that is visually detectable. First, we show how to automatically discover new objects in a dynamic environment. Existing visual localization methods simply ignore the inconsistencies due to changes in the scene; we instead analyze these changes to extract additional information. Without any prior knowledge, an object is a set of points with coherent motion relative to the static structure of the scene. We combine two visual localization methods to compare explorations of the same environment taken at different times. The comparison detects objects that have moved between the two sessions, and for each object a geometric model and an appearance model are learned. Moreover, we extend the scene model while updating the metric map and the topological map of the static structure of the environment. Object discovery from motion is based on a new algorithm for detecting multiple structures in an image pair. Given a set of correspondences between two views, this RANSAC-based method extracts the different structures, corresponding to different model parameterizations, seen in the data. It is applied to homography estimation to detect planar structures, and to fundamental matrix estimation to detect structures that have moved relative to one another.
Our approach to dynamic scene modeling is applied in a new formulation of place recognition that takes into account the presence of dynamic objects in the environment. The model of a place consists of an appearance model of the static structure observed there. An object database is learned from previous observations of the environment using the motion-based object discovery method. The place recognition method we propose detects the dynamic objects seen in the place and rejects false detections caused by these objects. The different methods described in this dissertation are tested on synthetic and real data, and qualitative and quantitative results are presented throughout.
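The abstract above describes a multi-model RANSAC scheme. As a rough illustration of the idea, the following sketch runs sequential RANSAC on 2-D point correspondences: fit one motion model, peel off its inliers, and repeat until too few matches remain. It uses plain translations rather than the homographies and fundamental matrices of the thesis, and every name and threshold here is an illustrative assumption, not the thesis's actual algorithm.

```python
import numpy as np

def sequential_ransac(src, dst, thresh=2.0, min_inliers=10, iters=200, seed=0):
    """Greedy multi-structure detection: repeatedly fit a 2-D translation
    to the remaining correspondences via RANSAC, then remove its inliers.
    Returns a list of (translation, inlier_indices) pairs."""
    rng = np.random.default_rng(seed)
    remaining = np.arange(len(src))
    models = []
    while len(remaining) >= min_inliers:
        best_t, best_inliers = None, np.empty(0, dtype=int)
        for _ in range(iters):
            i = rng.choice(remaining)            # one match fixes a translation
            t = dst[i] - src[i]
            err = np.linalg.norm(dst[remaining] - (src[remaining] + t), axis=1)
            inliers = remaining[err < thresh]
            if len(inliers) > len(best_inliers):
                best_t, best_inliers = t, inliers
        if len(best_inliers) < min_inliers:      # nothing coherent left
            break
        models.append((best_t, best_inliers))
        remaining = np.setdiff1d(remaining, best_inliers)
    return models
```

Applied to matches between two explorations, the largest recovered structure would correspond to the static scene, and each remaining group of coherently moving points to a displaced object.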
Angeli, Adrien. "Détection visuelle de fermeture de boucle et applications à la localisation et cartographie simultanées". PhD thesis, Université Pierre et Marie Curie - Paris VI, 2008. http://pastel.archives-ouvertes.fr/pastel-00004634.
Angeli, Adrien. "Détection visuelle de fermeture de boucle et applications à la localisation et cartographie simultanées". Paris 6, 2008. http://www.theses.fr/2008PA066388.
Lemaire, Thomas. "Localisation et Cartographie Simultanées avec Vision Monoculaire". PhD thesis, Ecole nationale superieure de l'aeronautique et de l'espace, 2006. http://tel.archives-ouvertes.fr/tel-00452478.
Vincke, Bastien. "Architectures pour des systèmes de localisation et de cartographie simultanées". PhD thesis, Université Paris Sud - Paris XI, 2012. http://tel.archives-ouvertes.fr/tel-00770323.
Dujardin, Aymeric. "Détection d’obstacles par stéréovision en environnement non structuré". Thesis, Normandie, 2018. http://www.theses.fr/2018NORMIR09.
Autonomous vehicles and robots represent the future of the transportation and production industries. The challenge ahead lies in the robustness of perception and in flexibility in the face of unexpected situations and changing environments. Stereoscopic cameras are passive sensors that provide color images and depth information about the scene by correlating two images, much like human vision. In this work, we developed a localization system based on visual odometry that can efficiently determine the position of the sensor in space by exploiting the dense depth map. It is combined with a SLAM system that provides localization robust to disturbances and potential drifts. Additionally, we developed several mapping and obstacle detection solutions, for both aerial and terrestrial vehicles. These algorithms are now partly integrated into commercial products.
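For context on the stereo geometry mentioned above: after rectification, depth is recovered from the disparity between the two views as Z = f·B/d, where f is the focal length in pixels, B the baseline, and d the disparity. A minimal sketch follows; the focal length and baseline values are illustrative assumptions, not those of the sensor used in the thesis.

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Convert a disparity map (pixels) to metric depth via Z = f * B / d.
    Pixels with no stereo match (disparity <= 0) are mapped to infinity."""
    d = np.asarray(disparity_px, dtype=float)
    depth = np.full(d.shape, np.inf)
    valid = d > 0
    depth[valid] = focal_px * baseline_m / d[valid]
    return depth

# Hypothetical rig: 700 px focal length, 12 cm baseline.
depth = disparity_to_depth([[84.0, 0.0], [42.0, 8.4]], focal_px=700.0, baseline_m=0.12)
# 84 px of disparity -> 700 * 0.12 / 84, i.e. about 1 m
```

Note the reciprocal relation: depth resolution degrades quadratically with distance, which is why dense obstacle detection from stereo is hardest for far-away objects.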
Dine, Abdelhamid. "Localisation et cartographie simultanées par optimisation de graphe sur architectures hétérogènes pour l’embarqué". Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLS303/document.
Simultaneous Localization And Mapping (SLAM) is the process that allows a robot to build a map of an unknown environment while simultaneously determining its position on this map. In this work, we are interested in graph-based SLAM. This method uses a graph to represent and solve the SLAM problem: optimization consists in finding the graph configuration (trajectory and map) that best matches the constraints introduced by the sensor measurements. Graph optimization has a high computational complexity that requires substantial computational and memory resources, particularly when exploring large areas, which limits the use of graph-based SLAM in real-time embedded systems. This thesis contributes to reducing that complexity along two complementary axes: data representation in memory, and implementation on embedded heterogeneous architectures. On the first axis, we propose an incremental data structure to efficiently represent and then optimize the graph. On the second, we explore the use of recent heterogeneous architectures to speed up graph-based SLAM, and we propose an efficient implementation model for embedded applications. We highlight the advantages and disadvantages of the evaluated architectures, namely GPU-based and FPGA-based systems-on-chip.
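To make the graph formulation concrete, here is a toy example in one dimension: nodes are robot positions, edges are relative measurements, and optimization finds the configuration that best satisfies all constraints at once. This is only a linear, dense sketch of the idea (real pose graphs are nonlinear and sparse, which is precisely what makes the thesis's data-structure and hardware work necessary), and all numbers are illustrative assumptions.

```python
import numpy as np

def optimize_pose_graph_1d(n_poses, edges):
    """Least-squares solve of a 1-D pose graph.
    edges: (i, j, z) constraints meaning x[j] - x[i] should equal z.
    A prior row anchors pose 0 at the origin (removes the gauge freedom)."""
    A = np.zeros((len(edges) + 1, n_poses))
    b = np.zeros(len(edges) + 1)
    for row, (i, j, z) in enumerate(edges):
        A[row, i], A[row, j], b[row] = -1.0, 1.0, z
    A[-1, 0] = 1.0                      # prior: x[0] = 0
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Odometry claims each step moves +1.0, but a loop closure says pose 4
# is back at pose 0; least squares spreads the error over the loop.
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 4, 1.0), (4, 0, 0.0)]
x = optimize_pose_graph_1d(5, edges)
# x is approximately [0.0, 0.2, 0.4, 0.6, 0.8]
```

Each loop closure couples distant poses, so the system matrix grows and loses sparsity structure as the map grows; this is the computational bottleneck the thesis targets.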
El Hamzaoui, Oussama. "Localisation et cartographie simultanées pour un robot mobile équipé d'un laser à balayage : CoreSLAM". PhD thesis, Ecole Nationale Supérieure des Mines de Paris, 2012. http://pastel.archives-ouvertes.fr/pastel-00935600.
Boucher, Maxime. "Quelques contributions en localisation et cartographie simultanées multi-capteurs : application à la réalité augmentée". Thesis, Evry-Val d'Essonne, 2014. http://www.theses.fr/2014EVRY0055/document.
Gathering information from the images of a camera over time, in order to map the environment and localize the camera within it, is a task referred to as Simultaneous Localization and Mapping (SLAM). Developed by both the robotics and computer vision communities, it has many applications. Robots gain autonomy from this ability, and impressive results have recently been obtained in applications to autonomous transportation vehicles. Another field of application is augmented reality: the localization offered by SLAM makes it possible to display virtual objects consistently with the user's movements. Cinema, video games, and tourism applications can thus benefit from SLAM methods, as can visual aids for workers performing complex or repetitive tasks. During this PhD thesis, we took an interest in SLAM with realistic augmented reality applications in mind. Though the topic has been extensively explored and many impressive results obtained, the task is not completely solved. The problem remains open, regarding spatial aspects (drift, loop closure) as well as temporal ones (processing time). As part of our monocular SLAM work, we mainly studied the drift issue. We then explored multi-sensor SLAM, both as a means to handle rotational movements that are problematic for the monocular setup and as a means to reduce the substantial processing time needed to solve the problem.
Weber, Michael. "Development of a method for practical testing of camera-based advanced driver assistance systems in automotive vehicles using augmented reality". Electronic thesis or dissertation, Bourgogne Franche-Comté, 2023. http://www.theses.fr/2023UBFCA027.
Advanced Driver Assistance Systems (ADAS) support the driver, offer comfort, and help increase road safety. These complex systems undergo an extensive testing phase, which leaves optimization potential regarding quality, reproducibility, and costs. The ADAS of the future will cover ever-larger proportions of driving situations in increasingly complex scenarios and represent a key factor for Autonomous Driving (AD). Current testing methods for ADAS can be divided into simulation and reality. The core idea behind simulation is to benefit from reproducibility, flexibility, and cost reduction. However, simulation cannot yet completely replace real-world tests: physical conditions such as weather, road surface, and other variables play a crucial role in evaluating ADAS road tests and cannot be fully replicated in a virtual environment. Real-world methods rely on driving tests on special test sites as well as in real road traffic, and are very time-consuming and costly. Therefore, new and efficient test methods are required to pave the way for future ADAS. Vehicle in the Loop (VIL), an approach already used in industry today, combines the advantages of simulation and reality. The approach taken in this project is a new method alongside existing VIL solutions: taking advantage of testing ADAS in both simulation and reality, it uses Augmented Reality (AR) to test camera-based ADAS in a reproducible, cost- and time-efficient way. High computing power is needed for complex automotive environmental conditions, such as high vehicle speed and fewer orientation points on a test track compared to AR applications inside a building. A three-dimensional model with accurate information about the test site is generated by combining visual Simultaneous Localization and Mapping (vSLAM) and semantic segmentation.
The use of a special augmentation process allows reality to be enriched with virtual road users, presenting a proof of concept for future test procedures.
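One way the combination of vSLAM and semantic segmentation described above can be pictured is that the segmentation decides which image features belong to stable structure (road, buildings) and which belong to real or virtual road users. The following sketch filters keypoints through a per-pixel label map; the class IDs, names, and map are chosen purely for illustration and are not the thesis's actual pipeline.

```python
import numpy as np

# Illustrative class IDs for a per-pixel segmentation map.
ROAD, BUILDING, VEHICLE, PEDESTRIAN = 0, 1, 2, 3
STATIC_CLASSES = {ROAD, BUILDING}

def keep_static_keypoints(keypoints, label_map):
    """Keep only keypoints that fall on pixels labelled as static structure.
    keypoints: (N, 2) array of (row, col) pixel coordinates."""
    keypoints = np.asarray(keypoints, dtype=int)
    labels = label_map[keypoints[:, 0], keypoints[:, 1]]
    mask = np.isin(labels, list(STATIC_CLASSES))
    return keypoints[mask]

# A 4x4 toy label map: left half road, right half a parked vehicle.
label_map = np.array([[ROAD, ROAD, VEHICLE, VEHICLE]] * 4)
kps = [(0, 0), (1, 1), (2, 3), (3, 2)]
static_kps = keep_static_keypoints(kps, label_map)
```

A vSLAM front end could then track only the retained keypoints, so that augmented road users do not corrupt the localization they are rendered against.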