Dissertations / Theses on the topic 'SLAM mapping'

Consult the top 50 dissertations / theses for your research on the topic 'SLAM mapping.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Valencia, Carreño Rafael. "Mapping, planning and exploration with Pose SLAM." Doctoral thesis, Universitat Politècnica de Catalunya, 2013. http://hdl.handle.net/10803/117471.

Full text
Abstract:
This thesis reports research on mapping, path planning, and autonomous exploration. These are classical problems in robotics, typically studied independently, and here we link them by framing them within a common SLAM approach, adopting Pose SLAM as the basic state estimation machinery. The main contribution of this thesis is an approach that allows a mobile robot to plan a path using the map it builds with Pose SLAM and to select the appropriate actions to autonomously construct this map. Pose SLAM is the variant of SLAM where only the robot trajectory is estimated and where landmarks are only used to produce relative constraints between robot poses. In Pose SLAM, observations come in the form of relative-motion measurements between robot poses. With regard to extending the original Pose SLAM formulation, this thesis studies the computation of such measurements when they are obtained with stereo cameras and develops the appropriate noise propagation models for this case. Furthermore, the initial formulation of Pose SLAM assumes poses in SE(2), and in this thesis we extend this formulation to SE(3), parameterizing rotations either with Euler angles or with quaternions. We also introduce a loop closure test that exploits the information from the filter using an independent measure of information content between poses. In the application domain, we present a technique to process the 3D volumetric maps obtained with this SLAM methodology, but with laser range scanning as the sensor modality, to derive traversability maps. Aside from these extensions to Pose SLAM, the core contribution of the thesis is an approach for path planning that exploits the modeled uncertainties in Pose SLAM to search for the path in the pose graph with the lowest accumulated robot pose uncertainty, i.e., the path that allows the robot to navigate to a given goal with the least probability of becoming lost. An added advantage of the proposed path planning approach is that, since Pose SLAM is agnostic with respect to the sensor modalities used, it can be used in different environments and with different robots, and since the original pose graph may come from a previous mapping session, the paths stored in the map already satisfy constraints not easily modeled in the robot controller, such as the existence of restricted regions or the right of way along paths. The proposed path planning methodology has been extensively tested both in simulation and with a real outdoor robot. Our path planning approach is adequate for scenarios where a robot is initially guided during map construction, but autonomous during execution. For other scenarios in which more autonomy is required, the robot should be able to explore the environment without any supervision. The second core contribution of this thesis is an autonomous exploration method that complements the aforementioned path planning strategy. The method selects the appropriate actions to drive the robot so as to maximize coverage and at the same time minimize localization and map uncertainties. An occupancy grid is maintained for the sole purpose of guaranteeing coverage. A significant advantage of the method is that, since the grid is only computed to hypothesize entropy reduction of candidate map posteriors, it can be kept at a very coarse resolution, as it is not used to maintain either the robot localization estimate or the structure of the environment. Our technique evaluates two types of actions: exploratory actions and place revisiting actions. Action decisions are made based on entropy reduction estimates. By maintaining a Pose SLAM estimate at run time, the technique allows trajectories to be replanned online should a significant change in the Pose SLAM estimate be detected. The proposed exploration strategy was tested on a publicly available dataset and compares favorably against frontier-based exploration.
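As an illustration of the planning idea just described (searching the pose graph for the route with the lowest accumulated pose uncertainty), the sketch below runs a Dijkstra search over a toy pose graph whose edges carry an additive uncertainty cost. The graph, the cost values and the additive-cost assumption are illustrative and are not taken from the thesis.

```python
import heapq

def min_uncertainty_path(graph, start, goal):
    """Dijkstra search over a pose graph where each edge carries an additive
    'uncertainty cost' (e.g. entropy gained when traversing it).
    graph: dict {node: [(neighbor, uncertainty_cost), ...]}"""
    best = {start: 0.0}
    parent = {start: None}
    queue = [(0.0, start)]
    while queue:
        cost, node = heapq.heappop(queue)
        if node == goal:
            break
        if cost > best.get(node, float("inf")):
            continue
        for nbr, edge_cost in graph.get(node, []):
            new_cost = cost + edge_cost
            if new_cost < best.get(nbr, float("inf")):
                best[nbr] = new_cost
                parent[nbr] = node
                heapq.heappush(queue, (new_cost, nbr))
    # Reconstruct the lowest-uncertainty path from goal back to start.
    path, node = [], goal
    while node is not None:
        path.append(node)
        node = parent[node]
    return list(reversed(path)), best[goal]

# Toy pose graph: two routes to pose 3, the longer one is better localized.
pose_graph = {0: [(1, 0.9), (2, 0.2)], 1: [(3, 0.9)], 2: [(3, 0.3)]}
print(min_uncertainty_path(pose_graph, 0, 3))  # lowest-uncertainty route: [0, 2, 3]
```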
APA, Harvard, Vancouver, ISO, and other styles
2

Carlson, Justin. "Mapping Large, Urban Environments with GPS-Aided SLAM." Research Showcase @ CMU, 2010. http://repository.cmu.edu/dissertations/44.

Full text
Abstract:
Simultaneous Localization and Mapping (SLAM) has been an active area of research for several decades, and has become a foundation of indoor mobile robotics. However, although the scale and quality of results have improved markedly in that time period, no current technique can effectively handle city-sized urban areas. The Global Positioning System (GPS) is an extraordinarily useful source of localization information. Unfortunately, the noise characteristics of the system are complex, arising from a large number of sources, some of which have large autocorrelation. Incorporation of GPS signals into SLAM algorithms requires using low-level system information and explicit models of the underlying system to make appropriate use of the information. The potential benefits of combining GPS and SLAM include increased robustness, increased scalability, and improved accuracy of localization. This dissertation presents a theoretical background for GPS-SLAM fusion. The presented model balances ease of implementation with correct handling of the highly colored sources of noise in a GPS system. The utility of the theory is explored and validated in the framework of a simulated Extended Kalman Filter driven by real-world noise. The model is then extended to Smoothing and Mapping (SAM), which overcomes the linearization and algorithmic complexity limitations of the EKF formulation. This GPS-SAM model is used to generate a probabilistic landmark-based urban map covering an area an order of magnitude larger than previous work.
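One common way to handle autocorrelated ("colored") GPS errors inside a Kalman filter is to augment the state with a first-order Gauss-Markov bias, which is the flavour of idea the abstract alludes to. The 1D sketch below is a generic illustration of that technique with invented parameters; it is not the dissertation's model.

```python
import numpy as np

# Illustrative 1D EKF in which an autocorrelated GPS error is modelled as a
# first-order Gauss-Markov bias state rather than as white noise.
# All parameters below are made-up values, not those of the dissertation.
dt, tau = 1.0, 50.0              # time step [s], bias correlation time [s]
F = np.array([[1, dt, 0],        # state: [position, velocity, gps_bias]
              [0, 1,  0],
              [0, 0,  np.exp(-dt / tau)]])
Q = np.diag([0.01, 0.01, 0.5 * (1 - np.exp(-2 * dt / tau))])
H = np.array([[1.0, 0.0, 1.0]])  # GPS observes position plus its slow bias
R = np.array([[4.0]])

x, P = np.zeros(3), np.eye(3) * 10.0

def ekf_step(x, P, z):
    x, P = F @ x, F @ P @ F.T + Q                   # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                  # Kalman gain
    x = x + (K @ (z - H @ x)).ravel()               # correct with GPS fix
    P = (np.eye(3) - K @ H) @ P
    return x, P

for z in [1.0, 2.1, 2.9, 4.2]:                      # synthetic GPS fixes
    x, P = ekf_step(x, P, np.array([z]))
print(x)
```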
APA, Harvard, Vancouver, ISO, and other styles
3

Touchette, Sébastien. "Recovering Cholesky Factor in Smoothing and Mapping." Thesis, Université d'Ottawa / University of Ottawa, 2018. http://hdl.handle.net/10393/37935.

Full text
Abstract:
Autonomous vehicles, from self-driving cars to small-sized unmanned aircraft, form a hotly contested market experiencing significant growth. As a result, fundamental concepts of autonomous vehicle navigation, such as simultaneous localisation and mapping (SLAM), are very active fields of research garnering significant interest in the drive to improve effectiveness. Traditionally, SLAM has been performed by filtering methods, but several improvements have brought smoothing and mapping (SAM) based methods to the forefront of SLAM research. Although recent works have made such methods incremental, they retain some batch functionalities from their bundle-adjustment origins. More specifically, re-linearisation and column reordering still require the full re-computation of the solution. In this thesis, the problem of re-computation after column reordering is addressed. A novel method to reflect changes in ordering directly on the Cholesky factor, called Factor Recovery, is proposed. Under the assumption that changes to the ordering are small and localised, the proposed method can be executed faster than the re-computation of the Cholesky factor. To define each method's optimal region of operation, a function estimating the computational cost of Factor Recovery is derived and compared with the known cost of Cholesky factorisation obtained using experimental data. Combining Factor Recovery and traditional Cholesky decomposition, the Hybrid Cholesky decomposition algorithm is proposed. This novel algorithm attempts to select the most efficient algorithm to compute the Cholesky factor based on an estimation of the work required. To obtain experimental results, the Hybrid Cholesky decomposition algorithm was integrated in the SLAM++ software and executed on popular datasets from the literature. The proposed method yields an average reduction of 1.9 % in the total execution time, with reductions of up to 31 % obtained in certain situations. When considering only the time spent performing reordering and factorisation for batch steps, reductions of 18 % on average and up to 78 % in certain situations are observed.
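The motivation for Factor Recovery can be seen in a simple property of the Cholesky factorisation: if a new ordering permutes only the trailing variables, the leading block of the factor is unchanged and only the permuted Schur complement needs to be re-factorised. The dense sketch below illustrates that property; it is a simplification, not the thesis's sparse Factor Recovery algorithm.

```python
import numpy as np

def chol_after_trailing_permutation(A, L_old, k, perm_tail):
    """If a new ordering permutes only variables k..n-1, the leading k x k
    block of the Cholesky factor is unaffected; only the trailing part must
    be recomputed from the permuted Schur complement. Dense, simplified
    illustration of the idea (not the thesis's sparse Factor Recovery)."""
    p = np.concatenate([np.arange(k), k + np.asarray(perm_tail)])
    Ap = A[np.ix_(p, p)]
    L = np.zeros_like(Ap)
    L[:k, :k] = L_old[:k, :k]                      # reused block
    L[k:, :k] = Ap[k:, :k] @ np.linalg.inv(L[:k, :k]).T
    S = Ap[k:, k:] - L[k:, :k] @ L[k:, :k].T       # Schur complement
    L[k:, k:] = np.linalg.cholesky(S)              # only this part is recomputed
    return p, L

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
A = M @ M.T + 6 * np.eye(6)                        # SPD test matrix
L_old = np.linalg.cholesky(A)
p, L = chol_after_trailing_permutation(A, L_old, k=3, perm_tail=[2, 0, 1])
assert np.allclose(L @ L.T, A[np.ix_(p, p)])       # valid factor of the permuted system
```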
APA, Harvard, Vancouver, ISO, and other styles
4

Maddern, William Paul. "Continuous appearance-based localisation and mapping." Thesis, Queensland University of Technology, 2014. https://eprints.qut.edu.au/65841/2/William_Maddern_Thesis.pdf.

Full text
Abstract:
This thesis presents a novel approach to mobile robot navigation using visual information towards the goal of long-term autonomy. A novel concept of a continuous appearance-based trajectory is proposed in order to solve the limitations of previous robot navigation systems, and two new algorithms for mobile robots, CAT-SLAM and CAT-Graph, are presented and evaluated. These algorithms yield performance exceeding state-of-the-art methods on public benchmark datasets and large-scale real-world environments, and will help enable widespread use of mobile robots in everyday applications.
APA, Harvard, Vancouver, ISO, and other styles
5

Üzer, Ferit. "Hybrid mapping for large urban environments." Thesis, Clermont-Ferrand 2, 2016. http://www.theses.fr/2016CLF22675/document.

Full text
Abstract:
In this thesis, a novel vision-based hybrid mapping framework which exploits metric, topological and semantic information is presented. We aim to obtain better computational efficiency than pure metric mapping techniques, and better accuracy as well as usability for robot guidance compared to topological mapping. A crucial step of any mapping system is loop closure detection, which is the ability to know whether the robot is revisiting a previously mapped area. Therefore, we first propose a hierarchical loop closure detection framework which also constructs the global topological structure of our hybrid map. Using this loop closure detection module, a hybrid mapping framework is proposed in two steps. The first step can be understood as building a topo-metric map with nodes corresponding to certain regions in the environment. Each node in turn is made up of a set of images acquired in that region. These maps are further augmented with metric information at those nodes which correspond to image sub-sequences acquired while the robot is revisiting a previously mapped area. The second step augments this model by using road semantics. A Conditional Random Field based classification on the metric reconstruction is used to semantically label the local robot path (the road in our case) as straight, curved or junction. Metric information of regions with curved roads and junctions is retained while that of other regions is discarded in the final map. Loop closure is performed only at junctions, thereby increasing both the efficiency and the accuracy of the map. By incorporating all of these new algorithms, the hybrid framework presented can perform as a robust, scalable SLAM approach, or act as the main part of a navigation tool which could be used on a mobile robot or an autonomous car in outdoor urban environments. Experimental results obtained on public datasets acquired in challenging urban environments are provided to demonstrate our approach.
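For intuition, the labels used above can be related to a simple geometric cue: the local curvature of the robot path. The toy function below labels trajectory points as straight or curved from heading change per metre, with invented thresholds; the thesis instead trains a Conditional Random Field on the metric reconstruction and also handles junctions, which this sketch does not.

```python
import numpy as np

def label_path_curvature(xy, window=5, curved_thresh=0.05):
    """Label each 2D trajectory point 'straight' or 'curved' from the local
    heading change per metre. A toy geometric cue only, not the thesis's CRF."""
    xy = np.asarray(xy, dtype=float)
    headings = np.arctan2(np.diff(xy[:, 1]), np.diff(xy[:, 0]))
    labels = []
    for i in range(len(headings)):
        lo, hi = max(0, i - window), min(len(headings), i + window)
        local = np.unwrap(headings[lo:hi])
        turn = abs(local[-1] - local[0])
        dist = np.sum(np.linalg.norm(np.diff(xy[lo:hi + 1], axis=0), axis=1))
        labels.append("curved" if turn / max(dist, 1e-6) > curved_thresh else "straight")
    return labels

# Straight segment followed by a 90-degree bend.
path = [(x, 0.0) for x in range(10)] + [(9.0, y) for y in range(1, 10)]
print(label_path_curvature(path))
```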
APA, Harvard, Vancouver, ISO, and other styles
6

Frost, Duncan. "Long range monocular SLAM." Thesis, University of Oxford, 2017. https://ora.ox.ac.uk/objects/uuid:af38cfa6-fc0a-48ab-b919-63c440ae8774.

Full text
Abstract:
This thesis explores approaches to two problems in the frame-rate computation of a priori unknown 3D scene structure and camera pose using a single camera, or monocular simultaneous localisation and mapping. The thesis reflects two trends in vision in general and structure from motion in particular: (i) the move from directly recovered and towards learnt geometry; and (ii) the sparsification of otherwise dense direct methods. The first contributions mitigate scale drift. Beyond the inevitable accumulation of random error, monocular SLAM accumulates error via the depth/speed scaling ambiguity. Three solutions are investigated. The first detects objects of known class and size using fixed descriptors, and incorporates their measurements in the 3D map. Experiments using databases with ground truth show that metric accuracy can be restored over kilometre distances; and similar gains are made using a hand-held camera. Our second method avoids explicit feature choice, instead employing a deep convolutional neural network to yield depth priors. Relative depths are learnt well, but absolute depths less so, and recourse to database-wide scaling is investigated. The third approach uses a novel trained network to infer speed from imagery. The second part of the thesis develops sparsified direct methods for monocular SLAM. The first contribution is a novel camera tracker operating directly using affine image warping, but on patches around sparse corners. Camera pose is recovered with an accuracy at least equal to the state of the art, while requiring only half the computational time. The second introduces a least squares adjustment to sparsified direct map refinement, again using patches from sparse corners. The accuracy of its 3D structure estimation is compared with that from the widely used method of depth filtering. It is found empirically that the new method's accuracy is often higher than that of its filtering counterpart, but that the method is more troubled by occlusion.
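The first scale-recovery idea (detecting objects of known class and size) reduces, in its simplest form, to the pinhole relation depth = f * H / h, which can then be used to rescale the drifting monocular map. The numbers below are made up for illustration; the sketch is not the thesis's actual pipeline.

```python
def depth_from_known_height(focal_px, real_height_m, pixel_height):
    """Pinhole relation exploited (in simplified form) when an object of
    known metric size is detected: depth = f * H / h."""
    return focal_px * real_height_m / pixel_height

def rescale_map(map_points, est_depth, metric_depth):
    """Apply the recovered metric scale to a monocular map whose scale has
    drifted. Values are illustrative only."""
    s = metric_depth / est_depth
    return [(s * x, s * y, s * z) for (x, y, z) in map_points]

metric = depth_from_known_height(focal_px=720.0, real_height_m=1.5, pixel_height=90.0)
print(metric)                                        # 12.0 m to the detected object
print(rescale_map([(1.0, 0.2, 8.0)], est_depth=8.0, metric_depth=metric))
```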
APA, Harvard, Vancouver, ISO, and other styles
7

Pereira, Savio Joseph. "On the utilization of Simultaneous Localization and Mapping (SLAM) along with vehicle dynamics in Mobile Road Mapping Systems." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/94425.

Full text
Abstract:
Mobile Road Mapping Systems (MRMS) are the current solution to the growing demand for high-definition road surface maps in wide-ranging applications from pavement management to autonomous vehicle testing. The focus of this research work is to improve the accuracy of MRMS by using the principles of Simultaneous Localization and Mapping (SLAM). First, a framework for describing the sensor measurement models in MRMS is developed. Next, the problem of estimating the road surface from the set of sensor measurements is formulated as a SLAM problem and two approaches are proposed to solve the formulated problem. The first is an incremental solution wherein sensor measurements are processed in sequence using an Extended Kalman Filter (EKF). The second is a post-processing solution wherein the SLAM problem is formulated as an inference problem over a factor graph and existing factor graph SLAM techniques are used to solve the problem. For the mobile road mapping problem, the road surface being measured is one of the primary inputs to the dynamics of the MRMS. Hence, concurrent with the main objective, this work also investigates the use of the dynamics of the host vehicle of the system to improve the accuracy of the MRMS. Finally, a novel method that builds off the concepts of the popular model fitting algorithm Random Sample Consensus (RANSAC) is developed in order to identify outliers in road surface measurements and estimate the road elevations at grid nodes using these measurements. The developed methods are validated in a simulated environment and the results demonstrate a significant improvement in the accuracy of MRMS over current state-of-the-art methods.
Doctor of Philosophy
Mobile Road Mapping Systems (MRMS) are the current solution to the growing demand for high-definition road surface maps in wide-ranging applications from pavement management to autonomous vehicle testing. The objective of this research work is to improve the accuracy of MRMS by investigating methods to improve the sensor data fusion process. The main focus of this work is to apply the principles from the field of Simultaneous Localization and Mapping (SLAM) in order to improve the accuracy of MRMS. The concept of SLAM has been successfully applied to the field of mobile robot navigation and thus the motivation of this work is to investigate its application to the problem of mobile road mapping. For the mobile road mapping problem, the road surface being measured is one of the primary inputs to the dynamics of the MRMS. Hence, this work also investigates whether knowledge regarding the dynamics of the system can be used to improve the accuracy. Also developed as part of this work is a novel method for identifying outliers in road surface datasets and estimating elevations at road surface grid nodes. The developed methods are validated in a simulated environment and the results demonstrate a significant improvement in the accuracy of MRMS over current state-of-the-art methods.
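For intuition about the consensus idea that the final contribution builds on, the sketch below is plain RANSAC plane fitting applied to a synthetic road patch with gross outliers. It illustrates the underlying principle only and is not the novel method developed in the dissertation.

```python
import numpy as np

def ransac_plane(points, iters=200, inlier_tol=0.02, rng=None):
    """Plain RANSAC plane fit: repeatedly fit a plane to 3 random points and
    keep the model with the largest consensus set. Illustrates the consensus
    principle the dissertation builds on; it is not the method itself."""
    rng = np.random.default_rng(rng)
    pts = np.asarray(points, dtype=float)
    best_inliers = np.zeros(len(pts), dtype=bool)
    for _ in range(iters):
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue                                  # degenerate sample
        normal /= norm
        dist = np.abs((pts - sample[0]) @ normal)     # point-to-plane distance
        inliers = dist < inlier_tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# Synthetic road patch (z ~ 0) with a few gross outliers.
rng = np.random.default_rng(1)
road = np.c_[rng.uniform(0, 5, (100, 2)), rng.normal(0, 0.005, 100)]
outliers = np.array([[1.0, 1.0, 0.8], [3.0, 2.0, -0.6]])
mask = ransac_plane(np.vstack([road, outliers]))
print(mask.sum(), "inliers of", len(mask))            # the two outliers are rejected
```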
APA, Harvard, Vancouver, ISO, and other styles
8

Carranza, Jose Martinez. "Efficient monocular SLAM by using a structure-driven mapping." Thesis, University of Bristol, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.574263.

Full text
Abstract:
Important progress has been achieved in recent years with regard to the monocular SLAM problem, which consists of estimating the 6-D pose of a single camera whilst building a 3-D map representation of the scene structure observed by the camera. Nowadays, there exist various monocular SLAM systems capable of outputting camera and map estimates at camera frame rates over long trajectories and for indoor and outdoor scenarios. These systems are attractive due to their low cost - a consequence of using a conventional camera - and have been widely utilised in different applications such as augmented and virtual reality. However, the main utility of the built map has been reduced to working as an effective reference system for robust and fast camera localisation. In order to produce more useful maps, different works have proposed the use of higher-level structures such as lines, planes and even meshes. Planar structure is one of the most popular structures to be incorporated into the map, given that planes are abundant in man-made scenes and because a plane by itself provides implicit semantic cues about the scene structure. Nevertheless, very often planar structure detection is carried out by ad-hoc auxiliary methods delivering a delayed detection, and therefore a delayed mapping, which becomes a problem when rapid planar mapping is demanded. This thesis addresses the problem of planar structure detection and mapping by proposing a novel mapping mechanism called structure-driven mapping. This new approach aims at enabling a monocular SLAM system to perform planar or point mapping according to the scene structure observed by the camera. In order to achieve this, we propose to incorporate the plane detection task into the SLAM process. For this purpose, we have developed a novel framework that unifies planar and point mapping under a common parameterisation. This enables map components to evolve according to the incremental visual observations of the scene structure, thus providing undelayed planar mapping. Moreover, the plane detection task stops as soon as the camera explores a non-planar scenario, which avoids wasting unnecessary processing time, and starts again as soon as planar structure comes into view. We present a thorough evaluation of this novel approach through simulation experiments and results obtained with real data. We also present a visual odometry application which takes advantage of the efficient way in which the scene structure is mapped by the novel mapping mechanism presented in this work. The results therefore suggest the feasibility of performing simultaneous planar structure detection, localisation and mapping within the same coherent estimation framework.
APA, Harvard, Vancouver, ISO, and other styles
9

Pascoe, Geoffrey. "Robust lifelong visual navigation and mapping." Thesis, University of Oxford, 2017. http://ora.ox.ac.uk/objects/uuid:c0bfa5fb-fa0a-48ed-8d26-90fa167ef6cd.

Full text
Abstract:
The ability to precisely determine one's location within the world (localisation) is a key requirement for any robot wishing to navigate through the world. For long-term operation, such a localisation system must be robust to changes in the environment, both short term (e.g. traffic, weather) and long term (e.g. seasons). This thesis presents two methods for performing such localisation using cameras - small, cheap, lightweight sensors that are universally available. Whilst many image-based localisation systems have been proposed in the past, they generally rely either on feature matching, which fails under many degradations such as motion blur, or on photometric consistency, which fails under changing illumination. The methods we propose here directly align images with a dense prior map. The first method uses maps synthesised from a combination of LIDAR scanners to generate geometry and cameras to generate appearance, whilst the second uses vision for both mapping and localisation. Both make use of an information-theoretic metric, the Normalised Information Distance (NID), for image alignment, relaxing the appearance constancy assumption inherent in photometric methods. Our methods require significant computational resources, but through the use of commodity GPUs we are able to run them at a rate of 8-10 Hz. Our GPU implementations make use of low-level OpenGL, enabling compatibility across almost any GPU hardware. We also present a method for calibrating multi-sensor systems, enabling the joint use of cameras and LIDAR for mapping. Through experiments on both synthetic data and real-world data from over 100 km of driving outdoors, we demonstrate the robustness of our localisation system to large variations in appearance. Comparisons with state-of-the-art feature-based and direct methods show that ours is significantly more robust, whilst maintaining similar precision.
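The Normalised Information Distance used for image alignment can be written as NID(A, B) = (H(A, B) - I(A; B)) / H(A, B), computed from the joint intensity histogram of the two images. The sketch below is a minimal CPU version with an arbitrary bin count and synthetic images; it is not the thesis's GPU implementation.

```python
import numpy as np

def normalised_information_distance(img_a, img_b, bins=32):
    """NID(A, B) = (H(A, B) - I(A; B)) / H(A, B), in [0, 1]; 0 means the two
    images are informationally identical. Minimal CPU sketch of the metric."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(),
                                 bins=bins, range=[[0, 1], [0, 1]])
    p = joint / joint.sum()
    p_nz = p[p > 0]
    h_joint = -np.sum(p_nz * np.log2(p_nz))
    pa, pb = p.sum(axis=1), p.sum(axis=0)
    h_a = -np.sum(pa[pa > 0] * np.log2(pa[pa > 0]))
    h_b = -np.sum(pb[pb > 0] * np.log2(pb[pb > 0]))
    mutual_info = h_a + h_b - h_joint
    return (h_joint - mutual_info) / h_joint

rng = np.random.default_rng(0)
img = rng.random((64, 64))
print(normalised_information_distance(img, img))                   # 0.0
print(normalised_information_distance(img, 1.0 - img))             # ~0: robust to intensity remapping
print(normalised_information_distance(img, rng.random((64, 64))))  # close to 1 for unrelated images
```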
APA, Harvard, Vancouver, ISO, and other styles
10

Pretto, Alberto. "Visual-SLAM for Humanoid Robots." Doctoral thesis, Università degli studi di Padova, 2009. http://hdl.handle.net/11577/3426516.

Full text
Abstract:
In robotics, Simultaneous Localization and Mapping (SLAM) is the problem in which an autonomous robot acquires a map of the surrounding environment while at the same time localizing itself inside this map. In recent years many researchers have devoted great effort to developing new families of algorithms, using several sensors and robotic platforms. One of the most challenging fields of research in SLAM is the so-called Visual-SLAM problem, in which various types of cameras are used as the sensor for navigation. Cameras are inexpensive sensors that can provide rich information about the surrounding environment; on the other hand, the complexity of the computer vision tasks and the strong dependence of current approaches on the characteristics of the environment make Visual-SLAM far from being a closed problem. Most SLAM algorithms are usually tested on wheeled robots. These platforms have become robust and stable; on the other hand, research in robot design is moving toward a new family of robot platforms, humanoid robots. Just like humans, a humanoid robot can adapt itself to changes in the environment in order to efficiently reach its goals. Despite that, only a few roboticists have focused their research on stable implementations of SLAM and Visual-SLAM algorithms well suited to humanoid robots. Humanoid platforms raise issues which can compromise the stability of conventional navigation algorithms, especially for vision-based approaches. A humanoid robot can move in 3D without the usual planar motion assumption that constrains the movement to 2D, usually with quick and complex movements combined with unpredictable vibrations, compromising the reliability of the acquired sensor data, for example by introducing an undesired motion blur effect in the images grabbed by the camera. Due to strong balance constraints, a humanoid robot usually cannot be equipped with powerful but hefty computer boards: this limits the implementation of complex and computationally expensive algorithms. Moreover, unlike wheeled robots, its complex kinematics usually forbids a reliable reconstruction of the motion from the servo-motor encoders. In this thesis, we focus on studying and developing new techniques addressing the Visual-SLAM problem, with particular attention to the issues related to using small humanoid robots equipped with a single perspective camera as the experimental platform. The main efforts in the SLAM and Visual-SLAM research areas have been put into the estimation functionality. However, most of the functionalities involved in Visual-SLAM are perception processes. In this thesis we therefore focus on the improvement of the perceptual processes from a computer vision point of view. We faced issues related to small humanoid robots such as low computational capability, the low quality of the sensor data and the high number of degrees of freedom of the motion. We cope with the low computational resources by presenting a new similarity measure for images, based on a compact signature, to be used in the image-based topological SLAM problem. The motion blur problem is addressed by proposing a new feature detection and tracking scheme that is robust even to non-uniform motion blur. We develop a framework for visual odometry based on features robust to motion blur. We finally propose a homography-based approach to 3D visual SLAM, using the information provided by a single camera mounted on a humanoid robot, based on the assumption that the robot moves in a planar environment. All proposed methods have been validated through experiments and comparative evaluations using both standard datasets and images taken by cameras mounted on small walking humanoid robots.
APA, Harvard, Vancouver, ISO, and other styles
11

Ferrera, Maxime. "Monocular Visual-Inertial-Pressure fusion for Underwater localization and 3D mapping." Thesis, Montpellier, 2019. http://www.theses.fr/2019MONTS089.

Full text
Abstract:
This thesis addresses the problem of real-time 3D localization and mapping in underwater environments. In the underwater archaeology field, Remotely Operated Vehicles (ROVs) are used to conduct deep-sea surveys and excavations. Providing both accurate localization and mapping information in real-time is crucial for manual or automated piloting of the robots. While many localization solutions already exist for underwater robots, most of them rely on very accurate sensors, such as Doppler velocity logs or fiber-optic gyroscopes, which are very expensive and may be too bulky for small ROVs. Acoustic positioning systems are also commonly used for underwater positioning, but they provide low-frequency measurements with limited accuracy. In this thesis, we study the use of low-cost sensors for accurate underwater localization. Our study investigates the use of a monocular camera, a pressure sensor and a low-cost MEMS-IMU as the only means of performing localization and mapping in the context of underwater archaeology. We have conducted an evaluation of different feature tracking methods on images affected by typical disturbances met in an underwater context. From the results obtained with this evaluation, we have developed a monocular Visual SLAM (Simultaneous Localization and Mapping) method robust to the specific disturbances of underwater environments. Then, we propose an extension of this method to tightly integrate the measurements of a pressure sensor and an IMU in the SLAM algorithm. The final method provides a very accurate localization and runs in real-time. In addition, an online dense 3D reconstruction module, compliant with a monocular setup, is also proposed. Two lightweight and compact prototypes of this system have been designed and used to record datasets that have been publicly released. Furthermore, these prototypes have been successfully used to test and validate the proposed localization and mapping algorithms in real-case scenarios.
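One reason a pressure sensor is so useful underwater is that it fixes the vertical coordinate (and hence helps the scale) through the hydrostatic relation depth = (P - P_atm) / (rho * g). The sketch below shows that relation and a toy residual of the kind that could tie a pose estimate to the pressure-derived depth; the constants and the residual form are assumptions, not the thesis's exact factor.

```python
RHO_SEAWATER = 1025.0   # kg / m^3 (assumed constant)
G = 9.81                # m / s^2
P_ATM = 101_325.0       # Pa

def depth_from_pressure(pressure_pa):
    """Hydrostatic depth below the surface from an absolute pressure reading."""
    return (pressure_pa - P_ATM) / (RHO_SEAWATER * G)

def pressure_residual(est_z, pressure_pa, sigma_m=0.02):
    """Whitened residual that could tie the estimated vertical coordinate of a
    camera pose to the pressure-derived depth inside a SLAM optimisation.
    Illustrative form only, not the thesis's exact factor."""
    return (est_z - depth_from_pressure(pressure_pa)) / sigma_m

print(depth_from_pressure(201_325.0))                       # about 9.95 m
print(pressure_residual(est_z=10.1, pressure_pa=201_325.0))
```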
APA, Harvard, Vancouver, ISO, and other styles
12

Natarajan, Ramkumar. "Efficient Factor Graph Fusion for Multi-robot Mapping." Digital WPI, 2017. https://digitalcommons.wpi.edu/etd-theses/1201.

Full text
Abstract:
"This work presents a novel method to efficiently factorize the combination of multiple factor graphs having common variables of estimation. The fast-paced innovation in the algebraic graph theory has enabled new tools of state estimation like factor graphs. Recent factor graph formulation for Simultaneous Localization and Mapping (SLAM) like Incremental Smoothing and Mapping using the Bayes tree (ISAM2) has been very successful and garnered much attention. Variable ordering, a well-known technique in linear algebra is employed for solving the factor graph. Our primary contribution in this work is to reuse the variable ordering of the graphs being combined to find the ordering of the fused graph. In the case of mapping, multiple robots provide a great advantage over single robot by providing a faster map coverage and better estimation quality. This coupled with an inevitable increase in the number of robots around us produce a demand for faster algorithms. For example, a city full of self-driving cars could pool their observation measurements rapidly to plan a traffic free navigation. By reusing the variable ordering of the parent graphs we were able to produce an order-of-magnitude difference in the time required for solving the fused graph. We also provide a formal verification to show that the proposed strategy does not violate any of the relevant standards. A common problem in multi-robot SLAM is relative pose graph initialization to produce a globally consistent map. The other contribution addresses this by minimizing a specially formulated error function as a part of solving the factor graph. The performance is illustrated on a publicly available SuiteSparse dataset and the multi-robot AP Hill dataset."
APA, Harvard, Vancouver, ISO, and other styles
13

Cunningham, Alexander G. "Scalable online decentralized smoothing and mapping." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/51848.

Full text
Abstract:
Many applications for field robots can benefit from large numbers of robots, especially applications where the objective is for the robots to cover or explore a region. A key enabling technology for robust autonomy in these teams of small and cheap robots is the development of collaborative perception to account for the shortcomings of the small and cheap sensors on the robots. In this dissertation, I present DDF-SAM to address the decentralized data fusion (DDF) inference problem with a smoothing and mapping (SAM) approach to single-robot mapping that is online, scalable and consistent while supporting a variety of sensing modalities. The DDF-SAM approach performs fully decentralized simultaneous localization and mapping in which robots choose a relevant subset of variables from their local map to share with neighbors. Each robot summarizes its local map to yield a density on exactly this chosen set of variables, and then distributes this summarized map to neighboring robots, allowing map information to propagate throughout the network. Each robot fuses the summarized maps it receives to yield a map solution with an extended sensor horizon. I introduce two primary variations on DDF-SAM: one that uses a batch nonlinear constrained optimization procedure to combine maps, DDF-SAM 1.0, and one that uses an incremental solving approach for substantially faster performance, DDF-SAM 2.0. I validate these systems using a combination of real-world and simulated experiments. In addition, I evaluate design trade-offs for operations within DDF-SAM, with a focus on efficient approximate map summarization to minimize communication costs.
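For a Gaussian map kept in information form, summarising the local map onto a chosen subset of variables corresponds to marginalising out the remaining variables with a Schur complement, which is the kind of summary a robot could transmit to its neighbours. The dense sketch below illustrates that operation; it is not DDF-SAM's implementation.

```python
import numpy as np

def summarize_onto(Lambda, eta, keep):
    """Marginalise a Gaussian in information form (Lambda, eta) onto the
    variables in `keep` via the Schur complement. Dense sketch of the kind
    of map summary shared between robots; not DDF-SAM's implementation."""
    keep = np.asarray(keep)
    drop = np.setdiff1d(np.arange(Lambda.shape[0]), keep)
    Laa = Lambda[np.ix_(keep, keep)]
    Lab = Lambda[np.ix_(keep, drop)]
    Lbb_inv = np.linalg.inv(Lambda[np.ix_(drop, drop)])
    Lambda_marg = Laa - Lab @ Lbb_inv @ Lab.T
    eta_marg = eta[keep] - Lab @ Lbb_inv @ eta[drop]
    return Lambda_marg, eta_marg

# Toy 4-variable local map; share a summary over variables 0 and 2 only.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
Lambda = M @ M.T + 4 * np.eye(4)
eta = rng.standard_normal(4)
Ls, es = summarize_onto(Lambda, eta, keep=[0, 2])
# The summary encodes the same marginal covariance as the full map:
full_cov = np.linalg.inv(Lambda)
assert np.allclose(np.linalg.inv(Ls), full_cov[np.ix_([0, 2], [0, 2])])
```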
APA, Harvard, Vancouver, ISO, and other styles
14

Ihemadu, Okechukwu Clifford. "Robotic navigation in large environments using simultaneous localisation and mapping (SLAM)." Thesis, Queen's University Belfast, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.602546.

Full text
Abstract:
This thesis provides techniques to address some outstanding problems in robotic navigation in relatively large environments using simultaneous localisation and mapping (SLAM), resulting in improved competence and reliability of autonomous agents. Autonomous mobile agents are helpful in a diverse range of applications deemed dull, dirty or dangerous for human operators, including mining, defence and underwater exploration. In order to be fully autonomous, the robot is required to incrementally construct a map of its vicinity and simultaneously localise itself within the environment based only on its on-board sensory data. However, the robot is confronted with a number of serious challenges that impair large-scale navigation and could even render SLAM intractable in real time. This work provides efficient strategies for addressing such problems. The techniques presented include an effective method of reducing the computational burden of updating the covariance matrix at every step and cutting down the storage requirements. In addition, the usually complex and tedious task of transforming and fusing sub-maps into a single global map is simplified. Furthermore, an intelligent SLAM (I-SLAM) is introduced that enables accurate place recognition and minimises pose estimation errors. It also provides the means of adaptively adjusting to the nature of natural surface terrains for both indoor and outdoor environments. These techniques promise enormous potential for autonomous agents operating in unknown environments in terms of consistency, accuracy and efficiency. Simulation results demonstrate the reliability and effectiveness of the system as a means of addressing some of the outstanding challenges with regard to large-scale SLAM performance.
APA, Harvard, Vancouver, ISO, and other styles
15

Bao, Guanqun. "On Simultaneous Localization and Mapping inside the Human Body (Body-SLAM)." Digital WPI, 2014. https://digitalcommons.wpi.edu/etd-dissertations/206.

Full text
Abstract:
Wireless capsule endoscopy (WCE) offers a patient-friendly, non-invasive and painless investigation of the entire small intestine, where other conventional wired endoscopic instruments can barely reach. As a critical component of the capsule endoscopic examination, physicians need to know the precise position of the endoscopic capsule in order to identify the position of intestinal disease after it is detected by the video source. To define the position of the endoscopic capsule, we need a map of the inside of the human body. However, since the shape of the small intestine is extremely complex and the RF signal propagates differently in the non-homogeneous body tissues, accurate mapping and localization inside the small intestine is very challenging. In this dissertation, we present an in-body simultaneous localization and mapping technique (Body-SLAM) to enhance the positioning accuracy of the WCE inside the small intestine and reconstruct the trajectory the capsule has traveled. In this way, the positions of intestinal diseases can be accurately located on the map of the inside of the human body, thereby facilitating follow-up therapeutic operations. The proposed approach takes advantage of data fusion from two sources that come with the WCE: image sequences captured by the WCE's embedded camera and the RF signal emitted by the capsule. This approach estimates the speed and orientation of the endoscopic capsule by analyzing displacements of feature points between consecutive images. Then, it integrates this motion information with the RF measurements by employing a Kalman filter to smooth the localization results and generate the route that the WCE has traveled. The performance of the proposed motion tracking algorithm is validated using empirical data from patients, and this motion model is later imported into a virtual testbed to test the performance of alternative Body-SLAM algorithms. Experimental results show that the proposed Body-SLAM technique is able to provide accurate tracking of the WCE with an average error of less than 2.3 cm.
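The fusion step described above can be pictured as a small Kalman filter in which the image-derived speed drives the prediction and the RF-based position fix corrects it. The 1D toy below uses invented noise values and a simplified motion model; it is not the dissertation's filter.

```python
import numpy as np

# Toy 1D Kalman filter along the intestinal path: image-derived speed drives
# the prediction, RF-based position fixes correct it. All models and numbers
# here are illustrative assumptions, not the dissertation's.
dt = 1.0
x = np.array([0.0])          # position along the path [cm]
P = np.array([[1.0]])
Q = np.array([[0.25]])       # uncertainty of the vision-based motion estimate
R = np.array([[4.0]])        # RF localisation noise
H = np.array([[1.0]])

def fuse(x, P, visual_speed, rf_position):
    x = x + dt * visual_speed                        # predict from image displacement
    P = P + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
    x = x + K @ (np.array([rf_position]) - H @ x)    # correct with the RF fix
    P = (np.eye(1) - K @ H) @ P
    return x, P

for speed, rf in [(1.2, 1.0), (0.8, 2.1), (1.1, 3.0)]:
    x, P = fuse(x, P, speed, rf)
print(float(x[0]), float(P[0, 0]))
```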
APA, Harvard, Vancouver, ISO, and other styles
16

Aguilar-Gonzalez, Abiel. "Monocular-SLAM dense mapping algorithm and hardware architecture for FPGA acceleration." Thesis, Université Clermont Auvergne‎ (2017-2020), 2019. http://www.theses.fr/2019CLFAC055.

Full text
Abstract:
Simultaneous Localization and Mapping (SLAM) is the problem of constructing a 3D map while simultaneously keeping track of an agent's location within the map. In recent years, work has focused on systems that use a single moving camera as the only sensing mechanism (monocular SLAM). This choice is motivated by the fact that, nowadays, it is possible to find inexpensive commercial cameras, smaller and lighter than other sensors previously used, and they provide visual environmental information that can be exploited to create complex 3D maps while camera poses are simultaneously estimated. Unfortunately, previous monocular SLAM systems are based on optimization techniques that limit performance for real-time embedded applications. To solve this problem, in this work we propose a new monocular SLAM formulation based on the hypothesis that it is possible to reach high efficiency for embedded applications, increasing the density of the point cloud map (and therefore the 3D map density and the overall positioning and mapping) by reformulating the feature-tracking/feature-matching process to achieve high performance on embedded hardware architectures such as FPGA or CUDA. In order to increase the point cloud map density, we propose new feature-tracking/feature-matching and depth-from-motion algorithms that consist of extensions of the stereo matching problem. Then, two different hardware architectures (based on FPGA and CUDA, respectively), fully compliant with real-time embedded applications, are presented. Experimental results show that it is possible to obtain accurate camera pose estimations. Compared to previous monocular systems, we rank 5th in the KITTI benchmark suite, with a higher processing speed (we are the fastest algorithm in the benchmark) and more than ten times the point cloud density of previous approaches.
APA, Harvard, Vancouver, ISO, and other styles
17

Rogers, John Gilbert. "Life-long mapping of objects and places in domestic environments." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/47736.

Full text
Abstract:
In the future, robots will expand from industrial and research applications to the home. Domestic service robots will work in the home to perform useful tasks such as object retrieval, cleaning, organization, and security. The tireless support of these systems will not only enable able-bodied people to avoid mundane chores; they will also enable the elderly to remain independent from institutional care by providing service, safety, and companionship. Robots will need to understand the relationship between objects and their environments to perform some of these tasks. Structured indoor environments are organized according to architectural guidelines and convenience for their residents. Utilizing this information makes it possible to predict the location of objects. Conversely, one can also predict the function of a room from the detection of a few objects within a given space. This thesis introduces a framework for combining object permanence and context called the probabilistic cognitive model. This framework combines reasoning about the spatial extent of places with reasoning about the identity of objects and their relationships to one another and to the locations where they appear. This type of reasoning takes into account the context in which objects appear to determine their identity and purpose. The probabilistic cognitive model combines a mapping system called OmniMapper with a conditional random field probabilistic model for context representation. The conditional random field models the dependencies between location and identity in a real-world domestic environment. This model is used by mobile robot systems to predict the effects of their actions during autonomous object search tasks in unknown environments.
APA, Harvard, Vancouver, ISO, and other styles
18

Inostroza, Ferrari Felipe Ignacio. "The estimation of detection statistics in simultaneous localization and mapping." Tesis, Universidad de Chile, 2015. http://repositorio.uchile.cl/handle/2250/134725.

Full text
Abstract:
Magíster en Ciencias de la Ingeniería, Mención Ingeniería Eléctrica
Ingeniero Civil Eléctrico
The use of Random Finite Sets (RFS) has several advantages over traditional vector-based methods. Among them are the inclusion of the sensor's detection statistics and the elimination of heuristics both for data association and for the initialization and removal of objects in the map. To obtain the benefits of RFS-based estimators in the Simultaneous Localization and Mapping (SLAM) problem, the detection and false-alarm statistics of the feature extractor must be modeled and used in every map update. This thesis presents techniques to obtain these statistics in the case of semantic features extracted from laser measurements, concentrating on the extraction of cylindrical objects, such as pillars, trees and lamp posts, in outdoor environments. The detection statistics obtained are used within an RFS-based SLAM solution, known as Rao-Blackwellized (RB) probability hypothesis density (PHD) SLAM, and within the multiple hypothesis (MH) factored solution to SLAM (FastSLAM) algorithm, a vector-based SLAM solution. The performance of each algorithm using these statistics is compared with that obtained using constant statistics. The results show the advantages of modeling the detection statistics, particularly in the case of the RFS paradigm. In particular, the error in the map estimates, measured using the optimal sub-pattern assignment (OSPA) distance to an independently generated ground-truth map, decreases by 13% for MH-FastSLAM and by 13% for RB-PHD-SLAM when the detection statistics are modeled. Although no ground truth is available for the robot trajectory, the trajectories are evaluated visually and superior estimates are found for the proposed method. It is therefore concluded that modeling the detection statistics is of great importance when implementing a SLAM application.
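Characterising a feature extractor's detection statistics can be as simple as estimating an empirical probability of detection per range bin from labelled detection outcomes. The sketch below does exactly that on synthetic data; the decay model and numbers are invented and are not the thesis's.

```python
import numpy as np

def detection_probability_by_range(ranges, detected, bin_edges):
    """Empirical probability of detection per range bin, the kind of
    statistic a feature extractor can be characterised by before being fed
    to an RFS-based SLAM filter. Synthetic data; not the thesis's model."""
    ranges = np.asarray(ranges, dtype=float)
    detected = np.asarray(detected, dtype=bool)
    pd = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        mask = (ranges >= lo) & (ranges < hi)
        pd.append(detected[mask].mean() if mask.any() else np.nan)
    return np.array(pd)

rng = np.random.default_rng(0)
r = rng.uniform(1, 30, 2000)                            # ranges to pole-like objects [m]
det = rng.random(2000) < np.clip(1.1 - r / 30, 0, 1)    # detection rate decays with range
print(detection_probability_by_range(r, det, bin_edges=[1, 10, 20, 30]))
```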
APA, Harvard, Vancouver, ISO, and other styles
19

Bacca, Cortés Eval Bladimir. "Appearance-based mapping and localization using feature stability histograms for mobile robot navigation." Doctoral thesis, Universitat de Girona, 2012. http://hdl.handle.net/10803/83589.

Full text
Abstract:
This work proposes an appearance-based SLAM method whose main contribution is the Feature Stability Histogram (FSH). The FSH is built using a voting scheme: if a feature is re-observed, it is promoted; otherwise its corresponding FSH value progressively decreases. The FSH is based on the human memory model in order to deal with changing environments and long-term SLAM. This model introduces the concepts of Short-Term Memory (STM), which retains information long enough to use it, and Long-Term Memory (LTM), which retains information for longer periods of time. If the entries in the STM are rehearsed, they become part of the LTM (i.e. they become more stable). However, this work proposes a different memory model, allowing any input to become part of the STM or LTM depending on its strength. Only the most stable features are used for SLAM. This innovative feature management approach is able to cope with changing environments and long-term SLAM.
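The voting scheme described above can be sketched in a few lines: features that are re-observed are promoted, the rest decay, and entries whose score passes a threshold are treated as stable (long-term) and hence usable for SLAM. The constants below are invented for illustration and are not the thesis's.

```python
def update_fsh(fsh, observed_ids, reward=1.0, decay=0.9, ltm_threshold=3.0):
    """One update of a Feature Stability Histogram-style voting scheme:
    features seen again are promoted, the rest decay; entries whose score
    exceeds a threshold are treated as stable (long-term memory). The
    constants are invented for illustration and are not the thesis's."""
    for fid in list(fsh):
        fsh[fid] = fsh[fid] + reward if fid in observed_ids else fsh[fid] * decay
    for fid in observed_ids:
        fsh.setdefault(fid, reward)
    stable = {fid for fid, score in fsh.items() if score >= ltm_threshold}
    return fsh, stable

fsh = {}
frames = [{"a", "b"}, {"a", "c"}, {"a"}, {"a", "c"}]   # feature ids seen per frame
for obs in frames:
    fsh, stable = update_fsh(fsh, obs)
print(fsh)      # 'a' accumulates votes, 'b' decays
print(stable)   # only the repeatedly observed feature becomes stable
```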
APA, Harvard, Vancouver, ISO, and other styles
20

Skoglund, Martin. "Inertial Navigation and Mapping for Autonomous Vehicles." Doctoral thesis, Linköpings universitet, Reglerteknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-110373.

Full text
Abstract:
Navigation and mapping in unknown environments is an important building block for increased autonomy of unmanned vehicles, since external positioning systems can be susceptible to interference or simply inaccessible. Navigation and mapping require signal processing of vehicle sensor data to estimate motion relative to the surrounding environment and to simultaneously estimate various properties of the surrounding environment. Physical models of sensors, vehicle motion and external influences are used in conjunction with statistically motivated methods to solve these problems. This thesis mainly addresses three navigation and mapping problems which are described below. We study how a vessel with known magnetic signature and a sensor network with magnetometers can be used to determine the sensor positions and simultaneously determine the vessel's route in an extended Kalman filter (EKF). This is a so-called simultaneous localisation and mapping (SLAM) problem with a reversed measurement relationship. Previously determined hydrodynamic models for a remotely operated vehicle (ROV) are used together with the vessel's sensors to improve the navigation performance using an EKF. Data from sea trials is used to evaluate the system and the results show that especially the linear velocity relative to the water can be accurately determined. The third problem addressed is SLAM with inertial sensors, accelerometers and gyroscopes, and an optical camera contained in a single sensor unit. This problem spans three publications. We study how a SLAM estimate, consisting of a point cloud map, the sensor unit's three-dimensional trajectory and speed as well as its orientation, can be improved by solving a nonlinear least-squares (NLS) problem. NLS minimisation of the predicted motion error and the predicted point cloud coordinates given all camera measurements is initialised using EKF-SLAM. We show how NLS-SLAM can be initialised as a sequence of almost uncoupled problems with simple and often linear solutions. It also scales much better to larger data sets than EKF-SLAM. The results obtained using NLS-SLAM are significantly better using the proposed initialisation method than if started from arbitrary points. A SLAM formulation using the expectation maximisation (EM) algorithm is proposed. EM splits the original problem into two simpler problems and solves them iteratively. Here the platform motion is one problem and the landmark map is the other. The first problem is solved using an extended Rauch-Tung-Striebel smoother while the second problem is solved with a quasi-Newton method. The results using EM-SLAM are better than those of NLS-SLAM in terms of both accuracy and complexity.
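The nonlinear least-squares refinement mentioned in this abstract can be illustrated, in spirit only, with a deliberately tiny 1-D pose-chain example; the generic Gauss-Newton sketch below and its measurement values are invented for illustration and do not reproduce the thesis formulation.

import numpy as np

def gauss_newton(residual_fn, x0, iters=10):
    """Generic Gauss-Newton with a finite-difference Jacobian."""
    x = x0.astype(float).copy()
    for _ in range(iters):
        r = residual_fn(x)
        J = np.zeros((len(r), len(x)))
        eps = 1e-6
        for j in range(len(x)):
            dx = np.zeros_like(x)
            dx[j] = eps
            J[:, j] = (residual_fn(x + dx) - r) / eps
        x -= np.linalg.solve(J.T @ J + 1e-9 * np.eye(len(x)), J.T @ r)
    return x

# 1-D pose chain x0..x3 with x0 fixed at 0, odometry of ~1.0 per step,
# and a loop-closure style measurement between x3 and x0 of 2.7.
odometry = [1.0, 1.0, 1.0]
closure = 2.7

def residuals(x):
    poses = np.concatenate(([0.0], x))     # anchor the first pose
    res = [poses[i + 1] - poses[i] - odometry[i] for i in range(3)]
    res.append(poses[3] - poses[0] - closure)
    return np.array(res)

x_init = np.array([1.0, 2.0, 3.0])         # e.g. a filter-based initial guess
print(gauss_newton(residuals, x_init))     # estimate pulled towards the loop closure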
LINK-SIC
APA, Harvard, Vancouver, ISO, and other styles
21

Melbouci, Kathia. "Contributions au RGBD-SLAM." Thesis, Université Clermont Auvergne‎ (2017-2020), 2017. http://www.theses.fr/2017CLFAC006/document.

Full text
Abstract:
To guarantee autonomous and safe navigation for a mobile robot, the processing performed for its localization must be fast and accurate enough to let the robot carry out high-level tasks for navigation and obstacle avoidance. Authors of visual SLAM (Simultaneous Localization And Mapping) works have been trying for years to ensure the best speed/accuracy trade-off. Most existing monocular visual SLAM solutions are based on a sparse, feature-based representation of the environment. By tracking salient image points across many video frames, both the 3D positions of these features and the camera poses can be estimated. The visual SLAM community has focused on increasing the number of tracked features and on adjusting the 3D map in order to improve the estimates of the camera trajectory and of the 3D feature positions. However, visual SLAM localization often drifts due to error accumulation, and in monocular visual SLAM the camera position is only known up to a scale factor; this factor can be fixed initially but drifts over time. To cope with these limitations, this thesis addresses the following problem: integrating additional information into a monocular visual SLAM algorithm in order to better constrain the camera trajectory and the 3D reconstruction, without degrading the computational performance of the initial algorithm and without causing the algorithm to fail when this information is absent. For these reasons, we chose to integrate the depth information provided by a 3D sensor (e.g. Microsoft Kinect) and geometric information about the scene structure. The first contribution of this thesis is a modification of the monocular visual SLAM algorithm proposed by Mouragnon et al. (2006b) to take into account the depth measurements provided by a 3D sensor, in particular through a bundle adjustment that combines, in a simple way, visual and depth information. The second contribution is a new cost function for the same bundle adjustment that integrates, in addition to constraints on point depths, geometric constraints of membership to the planes of the scene. The proposed solutions have been validated on synthetic and real sequences depicting a variety of environments, and compared with recent state-of-the-art methods. The results show that the developed constraints significantly improve the accuracy of the SLAM localization. Moreover, the proposed solutions are easy to deploy and computationally inexpensive.
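A purely illustrative sketch of combining reprojection and depth terms in a single bundle-adjustment residual follows; the intrinsics, weights and notation are assumptions and not the thesis's actual cost function.

import numpy as np

K = np.array([[525.0, 0.0, 319.5],
              [0.0, 525.0, 239.5],
              [0.0, 0.0, 1.0]])          # assumed pinhole intrinsics

def residuals(R, t, X_w, uv_obs, depth_obs, w_uv=1.0, w_d=1.0):
    """Residual of one observation of 3D point X_w from a camera (R, t).

    Combines a 2D reprojection error with a depth error, as in an
    RGB-D-style bundle adjustment cost.
    """
    X_c = R @ X_w + t                      # point in the camera frame
    u, v = (K @ X_c)[:2] / X_c[2]          # projected pixel coordinates
    r_uv = w_uv * (np.array([u, v]) - uv_obs)
    r_d = w_d * (X_c[2] - depth_obs)       # predicted vs. measured depth
    return np.concatenate([r_uv, [r_d]])

# toy example with an identity pose and a hypothetical measurement
R, t = np.eye(3), np.zeros(3)
X_w = np.array([0.2, -0.1, 2.0])
print(residuals(R, t, X_w, uv_obs=np.array([372.0, 212.0]), depth_obs=1.95))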
APA, Harvard, Vancouver, ISO, and other styles
22

Huai, Jianzhu. "Collaborative SLAM with Crowdsourced Data." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1483669256597152.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Ni, Kai. "Tectonic smoothing and mapping." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/41072.

Full text
Abstract:
Large-scale mapping has become the key to numerous applications, e.g. simultaneous localization and mapping (SLAM) for autonomous robots. Despite the success of many SLAM projects, there are still some challenging scenarios in which most of the current algorithms are not able to deliver an exact solution fast enough. One of these challenges is the size of SLAM problems, which has increased by several orders of magnitude over the last decade. Another challenge for SLAM problems is the large amount of noise baked into the measurements, which often yields poor initializations and slows down or even causes the optimization to fail. Urban 3D reconstruction is another popular application for large-scale mapping and has received considerable attention recently from the computer vision community. High-quality 3D models are useful in various successful cartographic and architectural applications, such as Google Earth or Microsoft Live Local. At the heart of urban reconstruction problems is structure from motion (SfM). Due to the wide availability of cameras, especially on handheld devices, SfM is becoming an increasingly crucial technique for handling large amounts of images. In the thesis, I present a novel batch algorithm, namely Tectonic Smoothing and Mapping (TSAM). I will show that the original SLAM graph can be recursively partitioned into multiple-level submaps using the nested dissection algorithm, which leads to the cluster tree, a powerful graph representation. By employing the nested dissection algorithm, the method greatly minimizes the dependencies between the two subtrees, and the optimization of the original graph can be done using a bottom-up inference along the corresponding cluster tree. To speed up the computation, a base node is introduced for each submap and is used to represent the rigid transformation of the submap in the global coordinate frame. As a result, the optimization moves the base nodes rather than the actual submap variables. I will also show that TSAM can be successfully applied to the SfM problem as well, in which a hypergraph representation is employed to capture the pairwise constraints between cameras. The hierarchical partitioning based on the hypergraph not only yields a cluster tree as in the SLAM problem but also forces the resulting submaps to be nonsingular. I will demonstrate the TSAM algorithm using various simulated and real-world data sets.
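To make the recursive nested-dissection partitioning more concrete, here is a toy Python sketch on a chain-structured pose graph; the separator choice (the middle variable) and the resulting cluster tree are illustrative only and do not reproduce the TSAM implementation.

def nested_dissection(variables):
    """Recursively split a chain of variables into a cluster tree.

    Each node stores a separator; its two children hold the variables on
    either side, mimicking the submap hierarchy used by nested dissection.
    """
    if len(variables) <= 2:
        return {"separator": list(variables), "children": []}
    mid = len(variables) // 2
    left = nested_dissection(variables[:mid])
    right = nested_dissection(variables[mid + 1:])
    return {"separator": [variables[mid]], "children": [left, right]}

def print_tree(node, depth=0):
    print("  " * depth + f"separator: {node['separator']}")
    for child in node["children"]:
        print_tree(child, depth + 1)

# a chain-structured pose graph x0 - x1 - ... - x7
tree = nested_dissection([f"x{i}" for i in range(8)])
print_tree(tree)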
APA, Harvard, Vancouver, ISO, and other styles
24

Barkby, Stephen. "Efficient and Featureless Approaches to Bathymetric Simultaneous Localisation and Mapping." Thesis, The University of Sydney, 2011. http://hdl.handle.net/2123/7919.

Full text
Abstract:
This thesis investigates efficient forms of Simultaneous Localization and Mapping (SLAM) that do not require explicit identification, tracking or association of map features. The specific application considered here is subsea robotic bathymetric mapping. In this context, SLAM allows a GPS-denied robot operating near the sea floor to create a self-consistent bathymetric map. This is accomplished using a Rao-Blackwellized Particle Filter (RBPF), whereby each particle maintains a hypothesis of the current vehicle state and map, stored efficiently using Distributed Particle Mapping. Through particle weighting and resampling, successive observations of the seafloor structure are used to improve the estimated trajectory and resulting map by enforcing map self-consistency. The main contributions of this thesis are two novel map representations, either of which can be paired with the RBPF to perform SLAM. The first is a grid-based 2D depth map that is efficiently stored by exploiting redundancies between different maps. The second is a trajectory map representation that, instead of directly storing estimates of seabed depth, records the trajectory of each particle and synchronises it to a common log of bathymetric observations. Upon detecting a loop closure, each particle is weighted by matching new observations to its current predictions. For the grid map approach this is done by extracting the predictions stored in the observed cells. For the trajectory map approach, predictions are instead generated from a local reconstruction of the particle's map using Gaussian Process Regression. While the former allows faster map access, the latter requires less memory and fully exploits the spatial correlation in the environment, allowing predictions of seabed depth to be generated in areas that were not directly observed previously. In this case particle resampling therefore not only enforces self-consistency in overlapping sections of the map but additionally enforces self-consistency between neighboring map borders. Both approaches are validated using multibeam sonar data collected from several missions of varying scale by a variety of different Unmanned Underwater Vehicles. These trials demonstrate how the corrections provided by both approaches improve the trajectory and map when compared to dead reckoning fused with Ultra Short Baseline or Long Baseline observations. Furthermore, results are compared with a pre-existing state-of-the-art bathymetric SLAM technique, confirming that similar results can be achieved at a fraction of the computational cost. Lastly, the added capabilities of the trajectory map are validated using two different bathymetric datasets. These demonstrate how navigation and mapping corrections can still be achieved when only sparse bathymetry is available (e.g. from a four-beam Doppler Velocity Log sensor) or in missions where map overlap is minimal or even non-existent.
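A highly simplified sketch of the particle-weighting step: each particle's predicted seabed depths at the observed locations are compared with new observations and used to reweight and resample the particle set. The Gaussian noise model and multinomial resampling below are generic assumptions, not the thesis implementation.

import numpy as np

rng = np.random.default_rng(0)

def reweight_and_resample(weights, z_obs, z_pred_per_particle, sigma=0.3):
    """RBPF-style update: weight particles by how well their map predicts
    the new bathymetric observations, then resample."""
    log_w = np.log(weights)
    for i, z_pred in enumerate(z_pred_per_particle):
        log_w[i] += -0.5 * np.sum((z_obs - z_pred) ** 2) / sigma ** 2
    log_w -= log_w.max()                          # numerical stabilisation
    w = np.exp(log_w)
    w /= w.sum()
    idx = rng.choice(len(w), size=len(w), p=w)    # simple multinomial resampling
    return idx, w

# toy example: 3 particles predicting seabed depths at 4 beam locations
z_obs = np.array([10.2, 10.4, 10.1, 9.9])
z_pred = [z_obs + rng.normal(0, s, size=4) for s in (0.05, 0.5, 1.0)]
idx, w = reweight_and_resample(np.ones(3) / 3, z_obs, z_pred)
print(w, idx)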
APA, Harvard, Vancouver, ISO, and other styles
25

Li, X. "Visual navigation in unmanned air vehicles with simultaneous location and mapping (SLAM)." Thesis, Cranfield University, 2014. http://dspace.lib.cranfield.ac.uk/handle/1826/8644.

Full text
Abstract:
This thesis focuses on the theory and implementation of visual navigation techniques for autonomous air vehicles in outdoor environments. The target of this study is to fuse and cooperatively develop an incremental map for multiple air vehicles under the application of Simultaneous Location and Mapping (SLAM). Without loss of generality, two unmanned air vehicles (UAVs) are investigated for the generation of ground maps from current and a priori data. Each individual UAV is equipped with inertial navigation systems and external sensing elements, which can provide a mixture of visible and thermal infrared (IR) image sensors, with special emphasis on stereo digital cameras. The corresponding stereopsis is able to provide the crucial three-dimensional (3-D) measurements. Therefore, the visual aerial navigation problems tackled here are interpreted as stereo-vision-based SLAM (vSLAM) for both single- and multiple-UAV applications. To begin with, the investigation is devoted to feature extraction methodologies, since potential landmarks selected as distinctive points in the airborne camera images are the prerequisite for the rest. Feasible feature extraction algorithms have a large influence on feature matching/association in 3-D mapping. To this end, effective variants of scale-invariant feature transform (SIFT) algorithms are employed to conduct comprehensive experiments on feature extraction for both visible and infrared aerial images. As the UAV is quite often in an uncertain location within complex and cluttered environments, dense and blurred images are practically inevitable. Thus, it becomes a challenge to find feature correspondences, which involves feature matching between the first and second images of the same stereo frame, and data association between mapped landmarks and camera measurements. A number of tests with different techniques are conducted by incorporating ideas from graph theory and graph matching. The novel approaches, which can be described as classification-based and hypergraph transformation (HGTM) based respectively, are proposed to solve data association in stereo-vision-based navigation. These strategies are then utilised and investigated for UAV application within SLAM so as to achieve robust matching/association in highly cluttered environments. The unknown nonlinearities in the system model, including noise, introduce undesirable INS drift and errors. Therefore, the pros and cons of various potential data filtering algorithms to resolve this issue are appraised in order to meet the specific requirements of the applications. These filters are investigated within visual SLAM for data filtering and fusion in both single and cooperative navigation. Hence the updated information required for the construction and maintenance of a globally consistent map can be provided by a suitable algorithm, balancing the compromise between computational accuracy and the load imposed by the increasing map size. The research provides an overview of the feasible filters, such as the extended Kalman filter, extended information filter, unscented Kalman filter and unscented H-infinity filter. As visual intuition plays an important role in how humans recognise objects, research on textured 3-D mapping is conducted to serve both statistical and visual analysis for aerial navigation.
Various techniques are proposed to smooth texture and minimise mosaicing errors during the reconstruction of 3-D textured maps with vSLAM for UAVs. Finally, with covariance intersection (CI) techniques adopted across multiple sensors, various cooperative data fusion strategies are introduced for distributed and decentralised UAVs in Cooperative vSLAM (C-vSLAM). Despite the complex structure of the highly nonlinear system models residing in cooperative platforms, robust and accurate estimation in collaborative mapping and localisation is achieved through HGTM association and communication strategies. Data fusion among UAVs and estimation for visual navigation via SLAM are verified and validated on both simulated and real data sets.
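Since the cooperative part of this work relies on covariance intersection (CI) to fuse estimates from different UAVs, a minimal generic CI fusion sketch may be helpful; the trace-minimising weight search below is a common textbook choice and not necessarily the one used in the thesis.

import numpy as np

def covariance_intersection(x1, P1, x2, P2, n_grid=101):
    """Fuse two consistent estimates with unknown cross-correlation."""
    best = None
    for w in np.linspace(0.0, 1.0, n_grid):
        info = w * np.linalg.inv(P1) + (1.0 - w) * np.linalg.inv(P2)
        P = np.linalg.inv(info)
        if best is None or np.trace(P) < best[0]:
            x = P @ (w * np.linalg.inv(P1) @ x1 + (1.0 - w) * np.linalg.inv(P2) @ x2)
            best = (np.trace(P), x, P)
    return best[1], best[2]

# toy example: two landmark estimates coming from two UAVs
x1, P1 = np.array([10.0, 5.0]), np.diag([1.0, 4.0])
x2, P2 = np.array([10.5, 4.6]), np.diag([4.0, 1.0])
x, P = covariance_intersection(x1, P1, x2, P2)
print(x, np.diag(P))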
APA, Harvard, Vancouver, ISO, and other styles
26

Torresani, Alessandro. "A portable V-SLAM based solution for advanced visual 3D mobile mapping." Doctoral thesis, Università degli studi di Trento, 2022. https://hdl.handle.net/11572/362031.

Full text
Abstract:
The need for accurate 3D reconstructions of complex and large environments or structures has risen dramatically in recent years. In this context, devices known as portable mobile mapping systems have lately emerged as fast and accurate reconstruction solutions. While most research and commercial works have so far relied on laser scanners, solutions solely based on cameras and photogrammetry are attracting increasing interest owing to the lower cost, size and power consumption of cameras. This thesis presents a novel handheld mobile mapping system based on stereo vision and image-based 3D reconstruction techniques. The main novelty of the system is that it leverages Visual Simultaneous Localization And Mapping (V-SLAM) technology to support and control the acquisition of the images. The real-time estimates of the system trajectory and the 3D structure of the scene are used not only to enable live feedback of the mapped area, but also to optimize the saving of the images, provide geometric and radiometric quality measures of the imagery, and robustly control the acquisition parameters of the cameras. To the best of the authors' knowledge, the proposed system is the first handheld mobile mapping system to offer these features during the acquisition of the images, and the results support its advantages in enabling accurate and controlled visual mapping even in complex and challenging scenarios.
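The idea of using the real-time V-SLAM estimates to optimize the saving of images can be sketched generically as a keyframe-style decision rule; the motion thresholds below are assumed values and the actual criteria of the proposed system may differ.

import numpy as np

def should_save_frame(T_last_kf, T_now, min_baseline=0.25, min_angle_deg=10.0):
    """Decide whether to store the current stereo pair, based on the motion
    since the last saved keyframe as estimated in real time by V-SLAM."""
    dT = np.linalg.inv(T_last_kf) @ T_now                 # relative 4x4 pose
    baseline = np.linalg.norm(dT[:3, 3])                  # translation since last keyframe
    cos_angle = np.clip((np.trace(dT[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    angle_deg = np.degrees(np.arccos(cos_angle))          # rotation since last keyframe
    return baseline > min_baseline or angle_deg > min_angle_deg

T_kf = np.eye(4)
T_now = np.eye(4)
T_now[:3, 3] = [0.3, 0.0, 0.05]                           # moved about 30 cm
print(should_save_frame(T_kf, T_now))                     # True: worth saving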
APA, Harvard, Vancouver, ISO, and other styles
27

Papakis, Ioannis. "A Bayesian Framework for Multi-Stage Robot, Map and Target Localization." Thesis, Virginia Tech, 2019. http://hdl.handle.net/10919/93024.

Full text
Abstract:
This thesis presents a generalized Bayesian framework for a mobile robot to localize itself and a target while building a map of the environment. The proposed technique builds upon the Bayesian Simultaneous Robot Localization and Mapping (SLAM) method to allow the robot to localize itself and the environment using map features or landmarks in close proximity. The target feature is distinguished from the rest of the features since the robot has to navigate to its location and thus the target needs to be observed from a long distance. The contribution of the proposed approach is in enabling the robot to track a target object or region using a multi-stage technique. In the first stage, the target state is corrected sequentially, after the robot correction, in the recursive Bayesian estimation. In the second stage, with the target being closer, the target state is corrected simultaneously with the robot and the landmarks. The process allows the robot's state uncertainty to be propagated into the estimated target state, bridging the gap between tracking-only methods, where the target is estimated assuming a known observer state, and SLAM methods, where only landmarks are considered. When the robot is located far away, the sequential stage is efficient in tracking the target position while maintaining an accurate robot state using only nearby features. Also, the target belief is always maintained, in contrast to temporary tracking methods such as image tracking. When the robot is closer to the target and most of its field of view is covered by the target, it is shown that the simultaneous correction needs to be used in order to minimize the robot, target and map entropies in the absence of other landmarks.
M.S.
This thesis presents a generalized framework with the goal of allowing a robot to localize itself and a static target while building a map of the environment. This map is used, as in the Simultaneous Localization and Mapping (SLAM) framework, to enhance robot accuracy using close features. The target here is distinguished from the rest of the features since the robot has to navigate to its location and thus the target needs to be continuously observed from a long distance. The contribution of the proposed approach is in enabling the robot to track a target object or region using a multi-stage technique. In the first stage, the robot and close landmarks are estimated simultaneously and both are corrected. Using the uncertainty of the robot estimate, the target state is then estimated sequentially, considering a known robot state; this decouples the target estimation from the rest of the process. In the second stage, with the target being closer, the target, robot and landmarks are estimated simultaneously. When the robot is located far away, the sequential stage is efficient in tracking the target position while maintaining an accurate robot state using only nearby features. When the robot is closer to the target and most of its field of view is covered by the target, it is shown that the simultaneous correction needs to be used in order to minimize robot, target and map uncertainties in the absence of other landmarks.
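A generic sketch of the first-stage idea, propagating the robot's pose uncertainty into a target update while keeping the robot state fixed, is given below; the linear relative-position measurement and the specific matrices are assumptions for illustration.

import numpy as np

def sequential_target_update(t_est, P_t, robot_xy, P_robot_xy, z_rel, R):
    """Kalman update of a 2D target position from a relative-position
    measurement z_rel = target - robot, with the robot's positional
    uncertainty folded into the effective measurement noise."""
    H = np.eye(2)                               # measurement Jacobian w.r.t. target
    H_r = -np.eye(2)                            # Jacobian w.r.t. robot position
    R_eff = R + H_r @ P_robot_xy @ H_r.T        # inflate noise with robot uncertainty
    y = z_rel - (t_est - robot_xy)              # innovation
    S = H @ P_t @ H.T + R_eff
    K = P_t @ H.T @ np.linalg.inv(S)
    t_new = t_est + K @ y
    P_new = (np.eye(2) - K @ H) @ P_t
    return t_new, P_new

t, P = sequential_target_update(
    t_est=np.array([20.0, 12.0]), P_t=np.diag([9.0, 9.0]),
    robot_xy=np.array([2.0, 1.0]), P_robot_xy=np.diag([0.5, 0.5]),
    z_rel=np.array([18.4, 10.8]), R=np.diag([1.0, 1.0]))
print(t, np.diag(P))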
APA, Harvard, Vancouver, ISO, and other styles
28

Morrell, Benjamin. "Enhancing 3D Autonomous Navigation Through Obstacle Fields: Homogeneous Localisation and Mapping, with Obstacle-Aware Trajectory Optimisation." Thesis, The University of Sydney, 2018. http://hdl.handle.net/2123/19992.

Full text
Abstract:
Small flying robots have numerous potential applications, from quadrotors for search and rescue, infrastructure inspection and package delivery to free-flying satellites for assistance activities inside a space station. To enable these applications, a key challenge is autonomous navigation in 3D, near obstacles, on a power-, mass- and computation-constrained platform. This challenge requires a robot to perform localisation, mapping, dynamics-aware trajectory planning and control. The current state of the art uses separate algorithms for each component. Here, the aim is for a more homogeneous approach in the search for improved efficiencies and capabilities. First, an algorithm is described to perform Simultaneous Localisation And Mapping (SLAM) with a physical 3D map representation that can also be used to represent obstacles for trajectory planning: Non-Uniform Rational B-Spline (NURBS) surfaces. Termed NURBSLAM, this algorithm is shown to combine the typically separate tasks of localisation and obstacle mapping. Second, a trajectory optimisation algorithm is presented that produces dynamically optimal trajectories with direct consideration of obstacles, providing a middle ground between path planners and trajectory smoothers. Called the Admissible Subspace TRajectory Optimiser (ASTRO), the algorithm can produce trajectories that are easier to track than the state of the art for flight near obstacles, as shown in flight tests with quadrotors. For quadrotors to track trajectories, a critical component is the differential flatness transformation that links position and attitude controllers. Existing singularities in this transformation are analysed, and solutions are proposed and then demonstrated in flight tests. Finally, the combined system of NURBSLAM and ASTRO is brought together and tested against the state of the art in a novel simulation environment to prove the concept that a single 3D representation can be used for localisation, mapping and planning.
APA, Harvard, Vancouver, ISO, and other styles
29

Jama, Michal. "Monocular vision based localization and mapping." Diss., Kansas State University, 2011. http://hdl.handle.net/2097/8561.

Full text
Abstract:
Doctor of Philosophy
Department of Electrical and Computer Engineering
Balasubramaniam Natarajan
Dale E. Schinstock
In this dissertation, two applications related to vision-based localization and mapping are considered: (1) improving location estimates from a satellite navigation system by using on-board camera images, and (2) deriving position information from a video stream and using it to aid the autopilot of an unmanned aerial vehicle (UAV). In the first part of this dissertation, a method is presented for analyzing a minimization process called bundle adjustment (BA), used in stereo-imagery-based 3D terrain reconstruction to refine estimates of camera poses (positions and orientations). In particular, imagery obtained with pushbroom cameras is of interest. This work proposes a method to identify cases in which BA does not work as intended, i.e., cases in which the pose estimates returned by the BA are not more accurate than estimates provided by a satellite navigation system, due to the existence of degrees of freedom (DOF) in the BA. Use of inaccurate pose estimates causes warping and scaling effects in the reconstructed terrain and prevents the terrain from being used in scientific analysis. The main contributions of this part of the work include: 1) formulation of a method for detecting DOF in the BA; and 2) identifying that two camera geometries commonly used to obtain stereo imagery have DOF. This part also presents results demonstrating that avoidance of the DOF can give significant accuracy gains in aerial imagery. The second part of this dissertation proposes a vision-based system for UAV navigation. This is a monocular vision-based simultaneous localization and mapping (SLAM) system, which measures the position and orientation of the camera and builds a map of the environment using a video stream from a single camera. This is different from common SLAM solutions that use sensors that measure depth, like LIDAR, stereoscopic cameras or depth cameras. The SLAM solution was built by significantly modifying and extending a recent open-source SLAM solution that is fundamentally different from the traditional approach to solving the SLAM problem. The modifications made are those needed to provide the position measurements necessary for the navigation solution on a UAV while simultaneously building the map, all while maintaining control of the UAV. The main contributions of this part include: 1) extension of the map-building algorithm to enable it to be used realistically while controlling a UAV and simultaneously building the map; 2) improved performance of the SLAM algorithm for lower camera frame rates; and 3) the first known demonstration of a monocular SLAM algorithm successfully controlling a UAV while simultaneously building the map. This work demonstrates that a fully autonomous UAV that uses monocular vision for navigation is feasible, and can be effective in Global Positioning System denied environments.
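The DOF-detection idea, i.e. finding directions in which the bundle-adjustment cost is flat, can be illustrated generically by inspecting the small singular values of the Jacobian; the toy Jacobian below is invented and the thesis's actual detection method may differ.

import numpy as np

def count_gauge_freedoms(J, rel_tol=1e-8):
    """Number of (near-)zero singular values of a bundle-adjustment
    Jacobian, i.e. unconstrained directions (degrees of freedom)."""
    s = np.linalg.svd(J, compute_uv=False)
    return int(np.sum(s < rel_tol * s.max())), s

# toy Jacobian: the last column is a linear combination of the others,
# so one direction of the parameter space is unconstrained.
J = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0],
              [2.0, 0.0, 2.0]])
n_dof, singular_values = count_gauge_freedoms(J)
print(n_dof, singular_values)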
APA, Harvard, Vancouver, ISO, and other styles
30

Wang, Zhan. "Guaranteed Localization and Mapping for Autonomous Vehicles." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS395.

Full text
Abstract:
With the rapid development and wide application of robot technology, research on intelligent mobile robots has been included in the high-technology development plans of many countries. Autonomous navigation plays an increasingly important role in this research field, and localization and map building are the core problems a robot must solve to achieve it. Probabilistic techniques (such as the Extended Kalman Filter and the Particle Filter) have long been used to solve the robotic localization and mapping problem. Despite their good performance in practical applications, they can suffer from inconsistency in nonlinear, non-Gaussian scenarios. This thesis studies interval-analysis-based methods applied to the robotic localization and mapping problem. Instead of making hypotheses on the probability distributions, all sensor noises are assumed to be bounded within known limits. On this foundation, the thesis formulates localization and mapping as Interval Constraint Satisfaction Problems (ICSPs) and applies consistent interval techniques to solve them in a guaranteed way. To deal with the "uncorrected yaw" problem encountered by Interval Constraint Propagation (ICP) based localization approaches, the thesis proposes a new ICP algorithm for real-time vehicle localization, which employs a low-level consistency algorithm and is capable of correcting heading uncertainty. The thesis then presents an interval-analysis-based SLAM algorithm (IA-SLAM) dedicated to a monocular camera. A bounded-error parameterization and an undelayed initialization for natural landmarks are proposed. The SLAM problem is formulated as an ICSP and solved via interval constraint propagation techniques. A shaving method for contracting landmark uncertainty and an ICSP-graph-based optimization method are put forward to improve the obtained result. A theoretical analysis of mapping consistency is also provided to illustrate the strength of IA-SLAM. Moreover, based on the proposed IA-SLAM algorithm, the thesis presents a low-cost and consistent approach for outdoor vehicle localization. It works in a two-stage framework (visual teach and repeat) and is validated with a car-like vehicle equipped with dead-reckoning sensors and a monocular camera.
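A minimal flavour of interval constraint propagation for localization, not taken from the thesis: contracting a box around the vehicle position using a bounded range measurement to a known landmark. The interval arithmetic below is deliberately simplistic.

def add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def sub(a, b):
    return (a[0] - b[1], a[1] - b[0])

def sqr(a):
    lo, hi = a
    if lo >= 0.0:
        return (lo * lo, hi * hi)
    if hi <= 0.0:
        return (hi * hi, lo * lo)
    return (0.0, max(lo * lo, hi * hi))

def isect(a, b):
    return (max(a[0], b[0]), min(a[1], b[1]))

def contract_range(x, y, landmark, r2):
    """Contract the box [x] x [y] under (x-lx)^2 + (y-ly)^2 = r^2,
    where the squared range r2 is itself given as an interval."""
    lx, ly = landmark
    dx, dy = sub(x, (lx, lx)), sub(y, (ly, ly))
    dx2, dy2 = sqr(dx), sqr(dy)
    # forward: the sum of squares must agree with the measured squared range
    s = isect(add(dx2, dy2), r2)
    # backward: propagate the contracted sum back to each squared term
    dx2 = isect(dx2, sub(s, dy2))
    dy2 = isect(dy2, sub(s, dx2))
    # backward through the square: |dx| bounded by the square root of the upper bound
    bx, by = dx2[1] ** 0.5, dy2[1] ** 0.5
    dx = isect(dx, (-bx, bx))
    dy = isect(dy, (-by, by))
    return isect(x, add(dx, (lx, lx))), isect(y, add(dy, (ly, ly)))

# vehicle somewhere in [0,10] x [0,10], landmark at (0,0), range in [4.9, 5.1]
x, y = contract_range((0.0, 10.0), (0.0, 10.0), (0.0, 0.0), (4.9 ** 2, 5.1 ** 2))
print(x, y)   # box contracted to roughly [0, 5.1] x [0, 5.1]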
APA, Harvard, Vancouver, ISO, and other styles
31

Pietzsch, Tobias. "Towards Dense Visual SLAM." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2011. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-78943.

Full text
Abstract:
Visual Simultaneous Localisation and Mapping (SLAM) is concerned with simultaneously estimating the pose of a camera and a map of the environment from a sequence of images. Traditionally, sparse maps comprising isolated point features have been employed, which facilitate robust localisation but are not well suited to advanced applications. In this thesis, we present map representations that allow a more dense description of the environment. In one approach, planar features are used to represent textured planar surfaces in the scene. This model is applied within a visual SLAM framework based on the Extended Kalman Filter. We present solutions to several challenges which arise from this approach.
APA, Harvard, Vancouver, ISO, and other styles
32

Desrochers, Benoît. "Simultaneous localization and mapping in unstructured environments : a set-membership approach." Thesis, Brest, École nationale supérieure de techniques avancées Bretagne, 2018. http://www.theses.fr/2018ENTA0006/document.

Full text
Abstract:
This thesis deals with the simultaneous localization and mapping (SLAM) problem in unstructured environments, i.e. environments that cannot be described by geometric features or equations. This type of environment frequently occurs in an underwater context. Unlike classical approaches, the environment is not described by a collection of point features or landmarks, but directly by sets. These sets, called shapes, are associated with physical characteristics of the environment such as the relief, textures or, in a more symbolic way, the obstacle-free space that can be sensed around the vehicle. From a theoretical point of view, the shape-based SLAM problem is formalized as a hybrid constraint network whose variables are vectors of Rn and subsets of Rn. Just as an uncertain real number is enclosed in an interval of reals, an uncertain shape is enclosed in an interval of sets. The main contribution of this thesis is a new formalism, based on interval analysis, able to compute these domains. As an application, the developed algorithms are illustrated on a SLAM problem based on bathymetric data acquired by an autonomous underwater vehicle (AUV).
APA, Harvard, Vancouver, ISO, and other styles
33

Pomerleau, François. "Registration algorithm optimized for simultaneous localization and mapping." Mémoire, Université de Sherbrooke, 2008. http://savoirs.usherbrooke.ca/handle/11143/1465.

Full text
Abstract:
Building maps within an unknown environment while keeping track of the current position is a major step towards safe and autonomous robot navigation. Within the last 20 years, Simultaneous Localization And Mapping (SLAM) has become a topic of great interest in robotics. The basic idea of this technique is to combine proprioceptive robot motion information with external environmental information to minimize global positioning errors. Because the robot is moving in its environment, exteroceptive data comes from different points of view and must be expressed in the same coordinate system to be combined. The latter process is called registration. Iterative Closest Point (ICP) is a registration algorithm with very good performance in several 3D model reconstruction applications, and was recently applied to SLAM. However, SLAM has specific real-time and robustness needs compared with 3D model reconstruction, leaving room for optimizations specialized for robot mapping. After reviewing existing SLAM approaches, this thesis introduces a new registration variant called Kd-ICP. This registration technique iteratively decreases the error between misaligned point clouds without extracting specific environmental features. Results demonstrate that the new rejection technique used to achieve mapping registration is more robust to large initial positioning errors. Experiments with simulated and real environments suggest that Kd-ICP is more robust compared to other ICP variants. Moreover, Kd-ICP is fast enough for real-time applications and is able to deal with sensor occlusions and partially overlapping maps. Realizing fast and robust local map registrations opens the door to new opportunities in SLAM. It becomes feasible to minimize the accumulation of robot positioning errors, to fuse local environmental information, and to reduce memory usage when the robot revisits the same location. It is also possible to evaluate the network constraints needed to minimize global mapping errors.
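For context, a bare-bones point-to-point ICP iteration using a k-d tree for correspondence search and a simple distance-based rejection step is sketched below; this is a generic illustration, not the Kd-ICP variant developed in the thesis.

import numpy as np
from scipy.spatial import cKDTree

def icp_step(source, target, max_pair_dist=1.0):
    """One ICP iteration: match with a k-d tree, reject distant pairs,
    and compute the closed-form rigid alignment of the kept pairs."""
    tree = cKDTree(target)
    dist, idx = tree.query(source)
    keep = dist < max_pair_dist              # simple distance-based rejection
    src, dst = source[keep], target[idx[keep]]
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)        # cross-covariance (Kabsch alignment)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

rng = np.random.default_rng(1)
target = rng.uniform(0.0, 5.0, size=(500, 3))
source = target - np.array([0.10, -0.05, 0.08])   # misaligned copy of the scan
R, t = icp_step(source, target)
print(np.round(t, 3))   # roughly recovers the translation; iterate to refine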
APA, Harvard, Vancouver, ISO, and other styles
34

Skinner, John R. "Simulation for robot vision." Thesis, Queensland University of Technology, 2022. https://eprints.qut.edu.au/227404/1/John_Skinner_Thesis.pdf.

Full text
Abstract:
This thesis examined the effectiveness of using computer graphics technologies to create simulated data for vision-enabled robots. The research demonstrated that while the behaviour of a robot is greatly affected by what it is looking at, simulated scenes can produce position estimates similar to specific real locations. The findings show that robots need to be tested in a very wide range of different contexts to understand their performance, and simulation provides a cost-effective route to that evaluation.
APA, Harvard, Vancouver, ISO, and other styles
35

Vidiyala, Sai Krishna. "Simultaneous localization and mapping with radio signals." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/24138/.

Full text
Abstract:
Simultaneous localization and mapping is a concept that originated in the robotics area, where robots are traditionally equipped with a laser sensor and can navigate and map an unknown surrounding environment. High-definition distance estimates and very narrow steering beams are provided by high-accuracy SLAM technologies based on lasers and mechanical steering devices. Unfortunately, these technologies are prohibitively expensive and cumbersome, and they cannot be integrated into mobile devices. This thesis investigates millimeter wave (mmWave) radar technology to solve a SLAM problem. The technologies used are consistent with the concept of a personal mobile radar and can be considered a proof of concept for its further development.
APA, Harvard, Vancouver, ISO, and other styles
36

Lee, Chun-Fan Computer Science &amp Engineering Faculty of Engineering UNSW. "Towards topological mapping with vision-based simultaneous localization and map building." Awarded by:University of New South Wales. Computer Science & Engineering, 2008. http://handle.unsw.edu.au/1959.4/41551.

Full text
Abstract:
Although the theory of Simultaneous Localization and Map Building (SLAM) is well developed, there are many challenges to overcome when incorporating vision sensors into SLAM systems. Visual sensors have different properties when compared to range-finding sensors and therefore require different considerations. Existing vision-based SLAM algorithms extract point landmarks, which are required for SLAM algorithms such as the Kalman filter. Under this restriction, the types of image features that can be used are limited and the full advantages of vision are not realized. This thesis examines the theoretical formulation of the SLAM problem and the characteristics of visual information in the SLAM domain. It also examines different representations of uncertainty, features and environments. It identifies the necessity of developing a suitable framework for vision-based SLAM systems and proposes a framework called VisionSLAM, which utilizes an appearance-based landmark representation and a topological map structure to model metric relations between landmarks. A set of Haar feature filters is used to extract image structure statistics, which are robust against illumination changes, have good uniqueness properties and can be computed in real time. The algorithm is able to resolve and correct false data associations and is robust against random correlations resulting from perceptual aliasing. The algorithm has been tested extensively in a natural outdoor environment.
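To give an idea of Haar feature filters, here is a generic sketch of computing a two-rectangle Haar-like response with an integral image; the specific filter layout is an assumption and not necessarily the set used in VisionSLAM.

import numpy as np

def integral_image(img):
    """Summed-area table with a zero row/column prepended."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def box_sum(ii, r, c, h, w):
    """Sum of img[r:r+h, c:c+w] in O(1) using the integral image."""
    return ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c]

def haar_horizontal(ii, r, c, h, w):
    """Two-rectangle feature: left half minus right half."""
    half = w // 2
    return box_sum(ii, r, c, h, half) - box_sum(ii, r, c + half, h, half)

rng = np.random.default_rng(0)
img = rng.uniform(0, 1, size=(64, 64))
img[:, :32] += 0.5                        # make the left half brighter
ii = integral_image(img)
print(haar_horizontal(ii, r=16, c=16, h=32, w=32))   # strongly positive response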
APA, Harvard, Vancouver, ISO, and other styles
37

Gonzalez, Cadenillas Clayder Alejandro. "An improved feature extractor for the lidar odometry and mapping algorithm." Tesis, Universidad de Chile, 2019. http://repositorio.uchile.cl/handle/2250/171499.

Full text
Abstract:
Thesis submitted for the degree of Master of Engineering Sciences, Electrical Engineering
Feature extraction is a critical task in feature-based Simultaneous Localization and Mapping (SLAM), one of the most important problems in the robotics community. An algorithm that solves SLAM using LiDAR-based features is the LiDAR Odometry and Mapping (LOAM) algorithm. This algorithm is currently considered the best SLAM algorithm according to the KITTI Benchmark. LOAM solves the SLAM problem through a feature-matching approach, and its feature extraction algorithm classifies the points of a point cloud as planar or sharp. This classification results from an equation that defines a smoothness level for each point. However, this equation does not consider the range noise of the sensor. Therefore, if the LiDAR range noise is high, the LOAM feature extractor can confuse planar and sharp points, causing the feature-matching task to fail. This thesis proposes replacing the feature extraction algorithm of the original LOAM with the Curvature Scale Space (CSS) algorithm. This algorithm was chosen after studying several feature extractors in the literature. The CSS algorithm can potentially improve the feature extraction task in noisy environments thanks to its multiple levels of Gaussian smoothing. The replacement of LOAM's original feature extractor with the CSS algorithm was achieved by adapting CSS to the Velodyne VLP-16 3D LiDAR. The LOAM and CSS feature extractors were tested and compared on real and simulated data, including the KITTI dataset, using the Optimal Sub-Pattern Assignment (OSPA) and Absolute Trajectory Error (ATE) metrics. For all these datasets, the feature extraction performance of CSS was better than that of the LOAM algorithm in terms of both the OSPA and ATE metrics.
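For reference, a LOAM-style smoothness value can be sketched as follows; the window size and the toy scan are illustrative, and this is a simplified reading of the published LOAM criterion rather than code from the thesis.

import numpy as np

def smoothness(scan_line, k=5):
    """LOAM-style smoothness c_i for the points of one scan ring.

    c_i = || sum_{j in window, j != i} (p_j - p_i) || / (2k * ||p_i||);
    a small c indicates a planar (flat) point, a large c a sharp (edge) point.
    """
    n = len(scan_line)
    c = np.full(n, np.nan)
    for i in range(k, n - k):
        diff = (scan_line[i - k:i + k + 1] - scan_line[i]).sum(axis=0)
        c[i] = np.linalg.norm(diff) / (2 * k * np.linalg.norm(scan_line[i]))
    return c

# toy scan ring: a wall at constant range with one protruding point
angles = np.linspace(-0.3, 0.3, 61)
ranges = np.full_like(angles, 10.0)
ranges[30] = 8.0                                  # sharp feature
scan = np.stack([ranges * np.cos(angles), ranges * np.sin(angles)], axis=1)
c = smoothness(scan)
print(np.nanargmax(c))                            # index 30 stands out as sharp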
APA, Harvard, Vancouver, ISO, and other styles
38

Dahlin, Alfred. "Simultaneous Localization and Mapping for an Unmanned Aerial Vehicle Using Radar and Radio Transmitters." Thesis, Linköpings universitet, Reglerteknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-110645.

Full text
Abstract:
The Global Positioning System (GPS) is a cornerstone in Unmanned Aerial Vehicle (UAV) navigation and is by far the most common way to obtain the position of a UAV. However, since there are many scenarios in which GPS measurements might not be available, the possibility of estimating the UAV position without using GPS would greatly improve the overall robustness of the navigation. This thesis studies the possibility of instead using Simultaneous Localisation and Mapping (SLAM) to estimate the position of a UAV using an Inertial Measurement Unit (IMU) and the directions towards ground-based radio transmitters, without prior knowledge of their positions. Simulations using appropriately generated data provide a feasibility analysis which shows promising position errors for outdoor trajectories over large areas, albeit with some issues regarding an overall offset. The method seems to have potential, but further studies using measurements from a live flight are required in order to determine the true performance.
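A minimal sketch of a bearing measurement model towards a ground-based radio transmitter, of the kind such a SLAM formulation would linearize; the state layout and the absence of noise handling are simplifications for illustration.

import numpy as np

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def bearing_model(x, y, psi, tx, ty):
    """Predicted bearing to a transmitter at (tx, ty) as seen from a
    vehicle at (x, y) with heading psi, plus the Jacobian with respect
    to [x, y, psi, tx, ty]."""
    dx, dy = tx - x, ty - y
    q = dx * dx + dy * dy
    h = wrap(np.arctan2(dy, dx) - psi)
    H = np.array([dy / q, -dx / q, -1.0, -dy / q, dx / q])
    return h, H

h, H = bearing_model(x=0.0, y=0.0, psi=0.1, tx=100.0, ty=50.0)
print(h, H)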
APA, Harvard, Vancouver, ISO, and other styles
39

Contreras, Samamé Luis Federico. "SLAM collaboratif dans des environnements extérieurs." Thesis, Ecole centrale de Nantes, 2019. http://www.theses.fr/2019ECDN0012/document.

Full text
Abstract:
This thesis proposes large-scale mapping models of urban and rural environments using 3D data acquired by several robots. The work contributes to the research field of mapping in two main ways. The first contribution is the creation of a new framework, CoMapping, which allows 3D maps to be generated collaboratively. This framework targets outdoor environments and follows a decentralized approach. CoMapping works as follows: first, each robot builds a map of its environment in point-cloud format; the mapping system runs on computers dedicated to each vehicle, processing range measurements from a 3D LiDAR moving in six degrees of freedom (6-DOF). The robots then share their local maps and individually merge the point clouds in order to improve their local map estimates. The second key contribution is a group of metrics for analyzing the map merging and map sharing processes between robots. Experimental results are presented to validate the CoMapping framework and its metrics. All tests were carried out in urban outdoor environments on the campus of the École Centrale de Nantes as well as in rural areas.
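A toy sketch of the local map-merging step: given a relative transform between two robots' reference frames (assumed known here), one point-cloud map is transformed into the other's frame, concatenated and lightly downsampled; the voxel size is illustrative.

import numpy as np

def merge_maps(cloud_a, cloud_b, R_ab, t_ab, voxel=0.2):
    """Express cloud_b in the frame of cloud_a, concatenate, and
    voxel-downsample the merged point cloud."""
    cloud_b_in_a = cloud_b @ R_ab.T + t_ab
    merged = np.vstack([cloud_a, cloud_b_in_a])
    # keep one representative point per occupied voxel
    keys = np.floor(merged / voxel).astype(np.int64)
    _, first = np.unique(keys, axis=0, return_index=True)
    return merged[np.sort(first)]

rng = np.random.default_rng(2)
cloud_a = rng.uniform(0, 10, size=(1000, 3))
cloud_b = rng.uniform(0, 10, size=(1000, 3))
yaw = np.deg2rad(5.0)
R_ab = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                 [np.sin(yaw),  np.cos(yaw), 0.0],
                 [0.0, 0.0, 1.0]])
t_ab = np.array([2.0, -1.0, 0.0])
print(merge_maps(cloud_a, cloud_b, R_ab, t_ab).shape)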
APA, Harvard, Vancouver, ISO, and other styles
40

Sünderhauf, Niko. "Robust Optimization for Simultaneous Localization and Mapping." Doctoral thesis, Universitätsbibliothek Chemnitz, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-86443.

Full text
Abstract:
SLAM (Simultaneous Localization And Mapping) has been a very active and almost ubiquitous problem in the field of mobile and autonomous robotics for over two decades. For many years, filter-based methods have dominated the SLAM literature, but a change of paradigm could be observed recently. Current state-of-the-art solutions of the SLAM problem are based on efficient sparse least squares optimization techniques. However, it is commonly known that least squares methods are by default not robust against outliers. In SLAM, such outliers arise mostly from data association errors like false positive loop closures. Since the optimizers in current SLAM systems are not robust against outliers, they have to rely heavily on certain preprocessing steps to prevent or reject all data association errors. Especially false positive loop closures will lead to catastrophically wrong solutions with current solvers. The problem is commonly accepted in the literature, but no concise solution has been proposed so far. The main focus of this work is to develop a novel formulation of the optimization-based SLAM problem that is robust against such outliers. The developed approach allows the back-end part of the SLAM system to change parts of the topological structure of the problem's factor graph representation during the optimization process. The back-end can thereby discard individual constraints and converge towards correct solutions even in the presence of many false positive loop closures. This largely increases the overall robustness of the SLAM system and closes a gap between the sensor-driven front-end and the back-end optimizers. The approach is evaluated on both large-scale synthetic and real-world datasets. This work furthermore shows that the developed approach is versatile and can be applied beyond SLAM, in other domains where least squares optimization problems are solved and outliers have to be expected. This is successfully demonstrated in the domain of GPS-based vehicle localization in urban areas, where multipath satellite observations often impede high-precision position estimates.
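The robustification idea of letting the optimizer deactivate suspicious loop closures can be sketched with switch variables that scale each loop-closure residual, together with a prior that penalizes switching a constraint off; this generic SE(2) sketch follows the common switchable-constraints formulation and is not necessarily identical to the thesis's final approach.

import numpy as np

def wrap(a):
    return (a + np.pi) % (2 * np.pi) - np.pi

def relative_pose_error(xi, xj, z):
    """Error of an SE(2) edge: measured relative pose z vs. predicted."""
    dx, dy = xj[0] - xi[0], xj[1] - xi[1]
    c, s = np.cos(xi[2]), np.sin(xi[2])
    pred = np.array([c * dx + s * dy, -s * dx + c * dy, wrap(xj[2] - xi[2])])
    return np.array([pred[0] - z[0], pred[1] - z[1], wrap(pred[2] - z[2])])

def switchable_residual(xi, xj, z, s, sigma_switch=1.0):
    """Loop-closure residual scaled by a switch variable s in [0, 1],
    plus a prior pulling s towards 1 (constraint active)."""
    s = np.clip(s, 0.0, 1.0)
    return np.concatenate([s * relative_pose_error(xi, xj, z),
                           [(1.0 - s) / sigma_switch]])

# a loop closure that contradicts the current estimate: the optimizer can
# drive s towards 0 and pay only the small switch-prior penalty instead.
xi, xj = np.array([0.0, 0.0, 0.0]), np.array([5.0, 0.0, 0.0])
z_false = np.array([0.0, 0.0, 0.0])
print(switchable_residual(xi, xj, z_false, s=1.0))
print(switchable_residual(xi, xj, z_false, s=0.05))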
APA, Harvard, Vancouver, ISO, and other styles
41

Abouzahir, Mohamed. "Algorithmes SLAM : Vers une implémentation embarquée." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS058/document.

Full text
Abstract:
Autonomous navigation is a major research axis in mobile robotics. In this context, the robot must be equipped with algorithms that allow it to move autonomously in complex and unfamiliar environments. Mapping an environment in advance by a human operator is a tedious and time-consuming task, and it is not always reliable, especially when the structure of the environment changes. SLAM algorithms allow a robot to map its environment while localizing itself in space. SLAM algorithms are becoming increasingly efficient, yet no complete hardware or architectural implementation has been produced so far. Such an implementation must take into account energy consumption, embeddability and computing power. This work aims to evaluate embedded systems implementing localization and scene reconstruction (SLAM). The methodology adopts an Algorithm Architecture Matching (AAM) approach to improve the efficiency of algorithm implementations, especially for highly constrained systems. An embedded SLAM system must have an electronic and software architecture that ensures the production of relevant information from sensor data while localizing the robot in its environment. The objective is therefore to define, for a chosen algorithm, an architecture model that meets the constraints of embedded systems. The first part of this thesis explores and studies in depth the different algorithmic approaches for solving the SLAM problem. This allowed us to evaluate four algorithms of different natures: FastSLAM 2.0, ORB-SLAM, RatSLAM and Linear SLAM. These algorithms were then evaluated on several embedded architectures to study their portability to systems with low energy consumption and limited resources. The comparison takes into account execution time and the consistency of the results. After a thorough analysis of the timing evaluations of each algorithm, FastSLAM 2.0 was finally chosen, as a compromise between execution time and consistency of the localization result, as the candidate for further study on an embedded heterogeneous architecture. The second part of this thesis is devoted to the study of an embedded implementation of monocular FastSLAM 2.0 dedicated to large-scale environments. An algorithmic rewrite of FastSLAM 2.0 was necessary in order to adapt it to the constraints imposed by large-scale environments, and the resulting system is designed around a parallel multi-core architecture. Following the AAM approach, FastSLAM 2.0 was implemented on a heterogeneous CPU-GPU architecture. Using an effective algorithm partitioning, an overall speed-up factor of about 22 was obtained on a recent architecture dedicated to embedded systems. Since the nature of the processing in FastSLAM 2.0 can benefit from a highly parallel architecture, a second hardware instance based on a programmable FPGA architecture is proposed. The implementation was performed using high-level synthesis tools to reduce development time. The implementation results on this hardware architecture were compared with GPU-based architectures; the gains obtained are substantial, even compared to high-end GPUs with a large number of cores. The resulting system can map large environments while maintaining the balance between the consistency of the localization results and real-time performance.
Using multiple processing units requires means of exchanging data between them, which implies strong coupling (communication buses and shared memory). This thesis highlights the interest of parallel heterogeneous architectures (multi-core, GPU) for embedding SLAM algorithms. FPGA-based heterogeneous architectures in particular can become potential candidates for porting complex algorithms that process massive data.
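The speed-up reported above rests on the fact that FastSLAM factorises the posterior so that each particle carries its own pose hypothesis and its own set of independent landmark EKFs. The sketch below is only an illustration of that structure (it follows the simpler FastSLAM 1.0 update rather than the optimal-proposal sampling of FastSLAM 2.0, and uses placeholder motion and measurement models); it is not the thesis implementation, but it shows why the per-particle loop maps naturally onto GPU kernels or FPGA pipelines.

```python
# Minimal sketch of the per-particle structure that makes FastSLAM-style
# filters amenable to parallel (GPU/FPGA) execution. Models and noise
# parameters are illustrative placeholders, not the thesis implementation.
import numpy as np

class Particle:
    def __init__(self, pose):
        self.pose = np.asarray(pose, dtype=float)   # [x, y, theta]
        self.landmarks = {}                          # id -> (mean 2x1, cov 2x2)
        self.weight = 1.0

def predict(p, v, w, dt, rng):
    # Noisy unicycle motion model (placeholder).
    v_n = v + rng.normal(0, 0.05)
    w_n = w + rng.normal(0, 0.01)
    x, y, th = p.pose
    p.pose = np.array([x + v_n * dt * np.cos(th),
                       y + v_n * dt * np.sin(th),
                       th + w_n * dt])

def update_landmark(p, lm_id, z, R):
    # Per-particle EKF update of one landmark from a range-bearing measurement.
    if lm_id not in p.landmarks:
        x, y, th = p.pose
        mean = np.array([x + z[0] * np.cos(th + z[1]),
                         y + z[0] * np.sin(th + z[1])])
        p.landmarks[lm_id] = (mean, np.eye(2))
        return
    mean, cov = p.landmarks[lm_id]
    dx, dy = mean - p.pose[:2]
    q = dx * dx + dy * dy
    z_hat = np.array([np.sqrt(q), np.arctan2(dy, dx) - p.pose[2]])
    H = np.array([[dx / np.sqrt(q), dy / np.sqrt(q)],
                  [-dy / q,          dx / q]])
    S = H @ cov @ H.T + R
    K = cov @ H.T @ np.linalg.inv(S)
    innov = z - z_hat
    p.landmarks[lm_id] = (mean + K @ innov, (np.eye(2) - K @ H) @ cov)
    p.weight *= np.exp(-0.5 * innov @ np.linalg.inv(S) @ innov)  # unnormalised likelihood

rng = np.random.default_rng(0)
particles = [Particle([0.0, 0.0, 0.0]) for _ in range(128)]

# Each particle is updated independently: this loop is exactly what a GPU
# kernel or an FPGA pipeline parallelises in an embedded implementation.
for p in particles:
    predict(p, v=1.0, w=0.1, dt=0.1, rng=rng)
    update_landmark(p, lm_id=7, z=np.array([2.0, 0.3]), R=np.diag([0.1, 0.01]))
```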
APA, Harvard, Vancouver, ISO, and other styles
42

Williamson, Benjamin. "Developing a holonomic iROV as a tool for kelp bed mapping." Thesis, University of Bath, 2013. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.600219.

Full text
Abstract:
Kelp beds support a vast and diverse ecosystem including marine mammals, fish, invertebrates, other algae and epibiota, yet these kelp beds can be highly ephemeral. Mapping the density and distribution of kelp beds, and assessing change over yearly cycles, are important objectives for coastal oceanography. However, nearshore habitat mapping is challenging, affected by dynamic currents, tides, shallow depths, frequent non-uniform obstacles and often turbid water. Noisy and often incomplete sensor data compound the lack of landmarks available for navigation. The intelligent, position-aware holonomic ROV (iROV) SeaBiscuit was designed specifically for this nearshore habitat mapping application and represents a novel synthesis of techniques and innovative solutions to nearshore habitat mapping. The concept of an iROV combines the benefits of autonomous underwater navigation and mapping while maintaining the flexibility and security of remote high-level control and supervision required for operation in hostile, complex underwater environments. An onboard battery provides an energy buffer for high-powered thrust and security of energy supply. Onboard low-level autonomy provides robust autopilot features, including station-keeping or course-holding in a flow, allowing the operator to direct the survey and supervise mapping data in real time during acquisition. With the aim of providing high-usability maps on a budget feasible for small-scale field research groups, SeaBiscuit fuses the data from an orthogonal arrangement of a forward-facing multibeam sonar and a complementary 360° scanning sonar with a full navigation suite to explore and map the nearshore environment. Sensor fusion, coupled with the holonomic propulsion system, also allows optimal use of the information available from the limited-budget sensor suite. Robust and reliable localisation is achieved even with noisy and incomplete sensor data using a relatively basic Inertial Navigation System and sonar-aided SLAM in the absence of an expensive Doppler velocity log or baseline navigation system. Holonomic motion in the horizontal plane and an axisymmetric hull provide the manoeuvrability required to operate in this complex environment, while allowing 3D maps to be generated in transit. The navigation algorithms were tested mapping a piling dock and the habitat mapping sensors calibrated using an ‘artificial’ kelp bed of manually dimensioned kelp stipes transplanted to a sheltered but open-water real-world environment. Sea trials demonstrated mapping open ocean kelp beds, identifying clusters of stipes, converting this into a useful measure of biomass and generating a density surface across the kelp bed. This research provides field-proven techniques to improve the nearshore habitat mapping capabilities of underwater vehicles. Future work includes the transition to full-scale kelp bed mapping, and further development of the vehicle and sensor fusion algorithms to improve nearshore navigation.
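The final step described above, turning detected stipe clusters into a density surface, can be illustrated with a generic two-dimensional kernel density estimate. The snippet below is a hedged stand-in rather than the thesis's biomass model: the detection coordinates are synthetic and the bandwidth is left to SciPy's default rule.

```python
# Illustrative only: one generic way to turn georeferenced stipe detections
# into a density surface, using a 2D Gaussian kernel density estimate.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
# Hypothetical detections: (easting, northing) of identified kelp stipes [m].
stipes = rng.normal(loc=[[10.0], [25.0]], scale=3.0, size=(2, 200))

kde = gaussian_kde(stipes)                       # bandwidth via Scott's rule
xs, ys = np.mgrid[0:20:100j, 15:35:100j]         # evaluation grid
density = kde(np.vstack([xs.ravel(), ys.ravel()])).reshape(xs.shape)

# 'density' integrates to 1 over the plane; scaling by the detection count
# gives an approximate stipe density (stipes per square metre).
stipes_per_m2 = density * stipes.shape[1]
print("peak density (stipes/m^2):", stipes_per_m2.max())
```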
APA, Harvard, Vancouver, ISO, and other styles
43

Sinivaara, Kristian. "Simultaneous Localisation and Mapping using Autonomous Target Detection and Recognition." Thesis, Linköpings universitet, Reglerteknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-110410.

Full text
Abstract:
Simultaneous localisation and mapping (SLAM) is an often-used positioning approach in GPS-denied indoor environments. This thesis presents a novel method of combining SLAM with autonomous/aided target detection and recognition (ATD/R), which is beneficial for both methods. The method uses physical objects that are recognisable by ATR as unambiguous features in SLAM, while SLAM provides the ATR with better position estimates. The intended application is to improve the positioning of a first responder moving through an indoor environment, where the map offers localisation and simultaneously helps locate people, furniture and potentially dangerous objects like gas canisters. The developed algorithm, dubbed ATR-SLAM, uses existing methods from different fields such as EKF-SLAM and ATR based on rectangle estimation. Landmarks in the form of 3D point features based on NARF are used in conjunction with identified objects, and 3D object models replace landmarks when they carry the same information. This leads to a more compact map representation with fewer landmarks, which partly compensates for the introduced cost of the ATR. Experiments performed using point clouds generated from a time-of-flight laser scanner show that ATR-SLAM produces more consistent maps and more robust loop closures than EKF-SLAM using only NARF landmarks.
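The map-compaction idea, replacing several point landmarks with a single recognised-object landmark, amounts in an EKF to marginalising out the redundant state entries and appending the object estimate. The sketch below illustrates only that bookkeeping on a toy state vector; the function name, dimensions and the uncorrelated initialisation of the new landmark are assumptions made for the example, not the thesis's formulation.

```python
# Minimal sketch (not the thesis code) of EKF map compaction: drop the rows
# and columns of the point landmarks that an identified object explains, then
# append one object landmark in their place.
import numpy as np

def replace_landmarks(mean, cov, drop_idx, obj_mean, obj_cov):
    """Remove the state entries in drop_idx and append an object landmark.

    For a Gaussian/EKF state, marginalisation is simply dropping rows and
    columns. The new landmark is added uncorrelated with the remaining state,
    which is a conservative (hedged) initialisation choice.
    """
    keep = np.setdiff1d(np.arange(mean.size), drop_idx)
    mean_k = mean[keep]
    cov_k = cov[np.ix_(keep, keep)]
    new_mean = np.concatenate([mean_k, obj_mean])
    new_cov = np.zeros((new_mean.size, new_mean.size))
    new_cov[:mean_k.size, :mean_k.size] = cov_k
    new_cov[mean_k.size:, mean_k.size:] = obj_cov
    return new_mean, new_cov

# Toy usage: robot pose (3 states) + three 3D point landmarks (9 states);
# replace the last two points (indices 6..11) by one 3D object landmark.
mean = np.arange(12, dtype=float)
cov = np.eye(12)
mean2, cov2 = replace_landmarks(mean, cov, np.arange(6, 12),
                                obj_mean=np.array([4.0, 2.0, 0.5]),
                                obj_cov=0.2 * np.eye(3))
print(mean2.shape, cov2.shape)   # (9,) (9, 9): a more compact map state
```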
APA, Harvard, Vancouver, ISO, and other styles
44

Kondrath, Andrew Stephen. "Frequency Modulated Continuous Wave Radar and Video Fusion for Simultaneous Localization and Mapping." Wright State University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=wright1347715085.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Svensson, Depraetere Xavier. "Application of new particle-based solutions to the Simultaneous Localization and Mapping (SLAM) problem." Thesis, KTH, Matematisk statistik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-212999.

Full text
Abstract:
In this thesis, we explore novel solutions to the Simultaneous Localization and Mapping (SLAM) problem based on particle filtering and smoothing methods. In essence, the SLAM problem consists of two interdependent tasks: map building and tracking. Three solution methods utilizing different smoothing techniques are explored. The smoothing methods used are fixed-lag smoothing (FLS), forward-only forward-filtering backward-smoothing (forward-only FFBSm) and the particle-based, rapid incremental smoother (PaRIS). In conjunction with these smoothing techniques, the well-established Expectation-Maximization (EM) algorithm is used to produce maximum-likelihood estimates of the map. The three solution methods are then evaluated and compared in a simulated setting.
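The overall structure shared by these methods, an E-step that runs a particle smoother over the trajectory and an M-step that re-estimates the map by maximum likelihood, can be illustrated on a toy one-dimensional problem. The sketch below is hedged: it replaces FLS, forward-only FFBSm and PaRIS with the much cruder ancestral-path smoother of a bootstrap particle filter, and the single-landmark "map" admits a closed-form M-step.

```python
# Toy illustration of EM with particle smoothing for map estimation.
# E-step: smoothed trajectory (here via ancestral paths of a bootstrap filter).
# M-step: maximum-likelihood update of the map parameter (closed form here).
import numpy as np

rng = np.random.default_rng(2)
T, N = 50, 300
theta_true = 5.0                                  # "map": one landmark position
x_true = np.cumsum(rng.normal(0.1, 0.05, T))      # robot trajectory
z = theta_true - x_true + rng.normal(0, 0.2, T)   # range-like measurements

theta = 0.0                                       # initial map guess
for _ in range(20):                               # EM iterations
    # E-step: bootstrap particle filter, keeping full ancestral paths.
    paths = np.zeros((N, T))
    x = np.zeros(N)
    for t in range(T):
        x = x + rng.normal(0.1, 0.05, N)                     # propagate
        logw = -0.5 * ((z[t] - (theta - x)) / 0.2) ** 2      # weight
        w = np.exp(logw - logw.max()); w /= w.sum()
        idx = rng.choice(N, N, p=w)                          # resample
        paths = paths[idx]; x = x[idx]
        paths[:, t] = x
    # M-step: with Gaussian noise the ML map estimate has a closed form,
    # theta = average of (z_t + x_t) over the smoothed trajectory samples.
    theta = np.mean(z[None, :] + paths)

print("estimated landmark position:", theta)   # approaches theta_true
```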
APA, Harvard, Vancouver, ISO, and other styles
46

Klečka, Jan. "Pořízení a zpracování dat pro 2D a 3D SLAM úlohy robotické navigace." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2014. http://www.nusl.cz/ntk/nusl-220918.

Full text
Abstract:
This thesis describes the design and implementation of a SLAM algorithm for self-localization and mapping in an indoor environment using data from a laser scanner. The design focuses on the 2D variant of SLAM, but its components are deliberately realized so that they can also be used in 3D SLAM; this capability is demonstrated at the end of the thesis.
APA, Harvard, Vancouver, ISO, and other styles
47

Terzakis, George. "Visual odometry and mapping in natural environments for arbitrary camera motion models." Thesis, University of Plymouth, 2016. http://hdl.handle.net/10026.1/6686.

Full text
Abstract:
This is a thesis on outdoor monocular visual SLAM in natural environments. The techniques proposed herein aim at estimating camera pose and the 3D geometrical structure of the surrounding environment. This problem statement was motivated by the GPS-denied scenario for a sea-surface vehicle developed at Plymouth University named Springer. The algorithms proposed in this thesis are mainly adapted to Springer’s environmental conditions, so that the vehicle can navigate using a vision-based localization system when GPS is not available; such environments include estuarine areas, forests and occasional semi-urban territories. The research objectives are constrained versions of the ever-abiding problems in the fields of multiple view geometry and mobile robotics. The research proposes new techniques, or improves existing ones, for problems such as scene reconstruction, relative camera pose recovery and filtering, always in the context of the aforementioned landscapes (i.e., rivers, forests, etc.). Although visual tracking is paramount for the generation of data point correspondences, this thesis focuses primarily on the geometric aspect of the problem as well as the probabilistic framework in which the optimization of pose and structure estimates takes place. Besides algorithms, the deliverables of this research include the respective implementations and test data in the form of a software library and a dataset containing footage of estuarine regions taken from a boat, along with synchronized sensor logs. This thesis is not the final analysis on vision-based navigation. It merely proposes various solutions for the localization problem of a vehicle navigating in natural environments, either on land or on the surface of the water. Although these solutions can be used to provide position and orientation estimates when GPS is not available, they have limitations and there is still a vast new world of ideas to be explored.
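One of the building blocks mentioned above, relative camera pose recovery from monocular correspondences, is commonly solved via the essential matrix. The snippet below is an illustrative sketch using OpenCV on synthetic correspondences (the intrinsics, point cloud and relative motion are made up for the example); it is not taken from the thesis's software library.

```python
# Hedged sketch: relative pose recovery (up to scale) from monocular point
# correspondences via the essential matrix, using synthetic data so the
# snippet is self-contained.
import numpy as np
import cv2

rng = np.random.default_rng(3)
K = np.array([[600.0, 0, 320], [0, 600.0, 240], [0, 0, 1]])   # assumed intrinsics

# Random 3D points in front of the first camera.
X = rng.uniform([-2, -2, 4], [2, 2, 8], size=(200, 3))

# Second camera: small rotation about y plus a translation to the right,
# with x_cam2 = R_true @ x_cam1 + t_true.
angle = 0.1
R_true = np.array([[np.cos(angle), 0, np.sin(angle)],
                   [0, 1, 0],
                   [-np.sin(angle), 0, np.cos(angle)]])
t_true = np.array([0.5, 0.0, 0.0])

def project(X, R, t):
    Xc = X @ R.T + t
    uv = Xc @ K.T
    return uv[:, :2] / uv[:, 2:3]

p1 = project(X, np.eye(3), np.zeros(3)).astype(np.float64)
p2 = project(X, R_true, t_true).astype(np.float64)

E, _ = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC, threshold=1.0)
_, R_est, t_est, _ = cv2.recoverPose(E, p1, p2, K)

cos_err = np.clip((np.trace(R_est @ R_true.T) - 1) / 2, -1.0, 1.0)
print("rotation error (deg):", np.degrees(np.arccos(cos_err)))
# t_est is only recovered up to scale, as expected for monocular vision.
```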
APA, Harvard, Vancouver, ISO, and other styles
48

Trevor, Alexander J. B. "Semantic mapping for service robots: building and using maps for mobile manipulators in semi-structured environments." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/53583.

Full text
Abstract:
Although much progress has been made in the field of robotic mapping, many challenges remain, including efficient semantic segmentation using RGB-D sensors, map representations that include complex features (structures and objects), and interfaces for interactive annotation of maps. This thesis addresses how prior knowledge of semi-structured human environments can be leveraged to improve segmentation, mapping, and semantic annotation of maps. We present an organized connected component approach for segmenting RGB-D data into planes and clusters. These segments serve as input to our mapping approach, which utilizes them as planar landmarks and object landmarks for Simultaneous Localization and Mapping (SLAM), providing necessary information for service robot tasks and improving data association and loop closure. These features are meaningful to humans, enabling annotation of mapped features to establish common ground and simplifying tasking. A modular, open-source software framework, the OmniMapper, is also presented; it allows a number of different sensors and features to be combined into a unified map representation and enables easy addition of new feature types.
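As an illustration of what a planar landmark looks like as data, the sketch below extracts plane coefficients and inlier indices from a synthetic point cloud. Note the hedge: it uses Open3D's RANSAC plane fitting as an off-the-shelf stand-in, not the organized connected-component segmentation developed in the thesis.

```python
# Hedged sketch: extracting a planar landmark (plane coefficients + inliers)
# from a point cloud with plain RANSAC plane fitting via Open3D. The thesis
# instead segments organized RGB-D frames with connected components.
import numpy as np
import open3d as o3d

rng = np.random.default_rng(4)
# Synthetic scene: a noisy floor plane at z ~ 0 plus some clutter above it.
floor = np.c_[rng.uniform(-1, 1, (500, 2)), rng.normal(0, 0.005, 500)]
clutter = rng.uniform([-1, -1, 0.2], [1, 1, 1.0], (200, 3))

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(np.vstack([floor, clutter]))

# (a, b, c, d) with ax + by + cz + d = 0: the plane-coefficient representation
# a plane-based SLAM factor would constrain.
plane, inlier_idx = pcd.segment_plane(distance_threshold=0.02,
                                      ransac_n=3,
                                      num_iterations=500)
print("plane coefficients:", np.round(plane, 3))
print("inlier count:", len(inlier_idx))
```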
APA, Harvard, Vancouver, ISO, and other styles
49

Ireta, Munoz Fernando Israel. "Estimation de pose globale et suivi pour la localisation RGB-D et cartographie 3D." Thesis, Université Côte d'Azur (ComUE), 2018. http://www.theses.fr/2018AZUR4054/document.

Full text
Abstract:
This thesis presents a detailed account of novel techniques for pose estimation using both color and depth information from RGB-D sensors. Since pose estimation simultaneously requires an environment map, 3D scene reconstruction is also considered in this thesis. Localization and mapping have been extensively studied by the robotics and computer vision communities and are widely employed in mobile robotics and autonomous systems for tasks such as tracking, dense 3D mapping and robust localization. The central challenge of pose estimation lies in how to relate sensor measurements to the state of position and orientation. When a variety of sensors providing different information about the same data points is available, the challenge becomes how to best fuse information acquired at different times. In order to develop an effective algorithm to deal with these problems, a novel registration method named Point-to-hyperplane Iterative Closest Point is introduced, analysed, compared and applied to pose estimation and key-frame mapping. The proposed method allows different metric errors to be minimized jointly as a single n-dimensional measurement vector, without requiring a scaling factor to tune their relative importance during the minimization process. Within the Point-to-hyperplane framework, two main axes have been investigated. Firstly, the proposed method is employed for visual odometry and 3D mapping. Experiments show that it accurately estimates the pose locally while increasing the domain of convergence and speeding up the alignment. The invariance is mathematically proven and results in both simulated and real environments are provided. Secondly, a method is proposed for global localization, enabling place recognition and detection: the Point-to-hyperplane method is combined with a Branch-and-bound algorithm to estimate the pose globally. Since Branch-and-bound strategies obtain rough alignments regardless of the initial position between frames, Point-to-hyperplane can be used for refinement. It is demonstrated that the bounds are better constrained when more dimensions are considered. This approach is shown to be useful for solving mistracking problems and for obtaining globally consistent 3D maps. In the last part of the thesis, both visual SLAM and 3D mapping results are provided to demonstrate the proposed approaches and their performance.
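The core idea, stacking geometric and photometric channels into one n-dimensional point and measuring error along a locally fitted hyperplane normal, can be sketched as follows. This is an illustration of the residual only (normals from a local PCA, a single intensity channel, no pose optimisation loop), with synthetic data; it is not the thesis's registration code.

```python
# Hedged sketch of a point-to-hyperplane residual: 3D points are augmented
# with an intensity channel into 4D points, a 4D normal is estimated per
# reference point from its neighbourhood, and the error is the signed
# distance along that normal, so geometric and photometric errors are
# handled jointly without a hand-tuned scale factor.
import numpy as np
from scipy.spatial import cKDTree

def hyperplane_normals(P, k=10):
    """Per-point normal of the local k-neighbourhood in n dimensions (n = 4 here)."""
    tree = cKDTree(P)
    normals = np.zeros_like(P)
    for i, p in enumerate(P):
        _, idx = tree.query(p, k=k)
        C = np.cov(P[idx].T)
        w, V = np.linalg.eigh(C)
        normals[i] = V[:, 0]          # eigenvector of the smallest eigenvalue
    return normals, tree

def residuals(src_xyz, src_intensity, ref, ref_normals, ref_tree, R, t):
    """Point-to-hyperplane residuals for a candidate rigid pose (R, t)."""
    warped = np.c_[src_xyz @ R.T + t, src_intensity]     # augmented 4D points
    _, nn = ref_tree.query(warped)                       # nearest reference point
    return np.einsum('ij,ij->i', ref_normals[nn], warped - ref[nn])

# Toy data: a noisy plane with an intensity gradient.
rng = np.random.default_rng(5)
xyz = np.c_[rng.uniform(-1, 1, (400, 2)), rng.normal(0, 0.01, 400)]
intensity = 0.5 * xyz[:, 0] + rng.normal(0, 0.01, 400)
ref = np.c_[xyz, intensity]
normals, tree = hyperplane_normals(ref)

r_true = residuals(xyz, intensity, ref, normals, tree, np.eye(3), np.zeros(3))
r_off = residuals(xyz, intensity, ref, normals, tree,
                  np.eye(3), np.array([0.05, 0.0, 0.05]))
print("mean |residual| at true pose:", np.abs(r_true).mean(),
      "vs shifted pose:", np.abs(r_off).mean())
```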
APA, Harvard, Vancouver, ISO, and other styles
50

Sünderhauf, Niko. "Robust optimization for simultaneous localization and mapping." Thesis, Technischen Universitat Chemnitz, 2012. https://eprints.qut.edu.au/109667/1/109667.pdf.

Full text
Abstract:
SLAM (Simultaneous Localization And Mapping) has been a very active and almost ubiquitous problem in the field of mobile and autonomous robotics for over two decades. For many years, filter-based methods have dominated the SLAM literature, but a change of paradigms could be observed recently. Current state-of-the-art solutions of the SLAM problem are based on efficient sparse least squares optimization techniques. However, it is commonly known that least squares methods are by default not robust against outliers. In SLAM, such outliers arise mostly from data association errors like false positive loop closures. Since the optimizers in current SLAM systems are not robust against outliers, they have to rely heavily on certain preprocessing steps to prevent or reject all data association errors. Especially false positive loop closures will lead to catastrophically wrong solutions with current solvers. The problem is commonly accepted in the literature, but no concise solution has been proposed so far. The main focus of this work is to develop a novel formulation of the optimization-based SLAM problem that is robust against such outliers. The developed approach allows the back-end part of the SLAM system to change parts of the topological structure of the problem’s factor graph representation during the optimization process. The back-end can thereby discard individual constraints and converge towards correct solutions even in the presence of many false positive loop closures. This largely increases the overall robustness of the SLAM system and closes a gap between the sensor-driven front-end and the back-end optimizers. The approach is evaluated on both large scale synthetic and real-world datasets. This work furthermore shows that the developed approach is versatile and can be applied beyond SLAM, in other domains where least squares optimization problems are solved and outliers have to be expected. This is successfully demonstrated in the domain of GPS-based vehicle localization in urban areas where multipath satellite observations often impede high-precision position estimates.
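The ability to discard individual constraints during optimisation can be illustrated with a switch-variable (switchable-constraint style) formulation on a one-dimensional toy pose graph: each loop-closure residual is multiplied by a switch that is estimated together with the poses and pulled towards one by a prior, so a false positive closure can be switched off instead of corrupting the whole trajectory. The sketch below solves this toy with SciPy rather than a SLAM back-end, and omits the bounding and weighting details of the full formulation.

```python
# Minimal 1-D toy of robust loop-closure rejection via switch variables.
# The full formulation additionally bounds or reparameterises the switch
# (e.g. to [0, 1]); the toy stays in that range anyway.
import numpy as np
from scipy.optimize import least_squares

n = 6
odom = np.ones(n - 1)                 # measured 1-D steps between poses
loops = [(0, 5, 0.2)]                 # (i, j, measured offset): a FALSE closure
                                      # (the true offset along the chain is 5.0)

def residuals(v):
    x, s = v[:n], v[n:]
    r = [x[0]]                                                   # anchor first pose
    r += [(x[i + 1] - x[i]) - odom[i] for i in range(n - 1)]     # odometry factors
    for k, (i, j, z) in enumerate(loops):
        r.append(s[k] * ((x[j] - x[i]) - z))                     # switched loop factor
        r.append(1.0 - s[k])                                     # switch prior (s -> 1)
    return np.array(r)

x0 = np.concatenate([np.zeros(n), np.ones(len(loops))])
sol = least_squares(residuals, x0)
print("poses:", np.round(sol.x[:n], 2))      # stays close to [0, 1, 2, 3, 4, 5]
print("switch:", np.round(sol.x[n:], 2))     # driven towards 0: closure rejected
```

In this toy, the optimiser prefers paying the small switch-prior penalty over distorting the odometry chain, which is the behaviour the abstract describes for false positive loop closures.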
APA, Harvard, Vancouver, ISO, and other styles