Theses on the topic "SLAM mapping"
Consult the 50 best theses for your research on the topic "SLAM mapping".
Valencia, Carreño Rafael. "Mapping, planning and exploration with Pose SLAM". Doctoral thesis, Universitat Politècnica de Catalunya, 2013. http://hdl.handle.net/10803/117471.
Carlson, Justin. "Mapping Large, Urban Environments with GPS-Aided SLAM". Research Showcase @ CMU, 2010. http://repository.cmu.edu/dissertations/44.
Touchette, Sébastien. "Recovering Cholesky Factor in Smoothing and Mapping". Thesis, Université d'Ottawa / University of Ottawa, 2018. http://hdl.handle.net/10393/37935.
Maddern, William Paul. "Continuous appearance-based localisation and mapping". Thesis, Queensland University of Technology, 2014. https://eprints.qut.edu.au/65841/2/William_Maddern_Thesis.pdf.
Üzer, Ferit. "Hybrid mapping for large urban environments". Thesis, Clermont-Ferrand 2, 2016. http://www.theses.fr/2016CLF22675/document.
In this thesis, a novel vision-based hybrid mapping framework exploiting metric, topological and semantic information is presented. We aim to obtain better computational efficiency than pure metric mapping techniques, and better accuracy and usability for robot guidance than topological mapping. A crucial step of any mapping system is loop closure detection, i.e. the ability to recognize whether the robot is revisiting a previously mapped area. Therefore, we first propose a hierarchical loop closure detection framework which also constructs the global topological structure of our hybrid map. Using this loop closure detection module, a hybrid mapping framework is proposed in two steps. The first step can be understood as a topo-metric map with nodes corresponding to certain regions of the environment, each node in turn being made up of a set of images acquired in that region. These maps are further augmented with metric information at those nodes which correspond to image sub-sequences acquired while the robot is revisiting a previously mapped area. The second step augments this model by using road semantics. A Conditional Random Field based classification on the metric reconstruction is used to semantically label the local robot path (the road, in our case) as straight, curved or a junction. Metric information of regions with curved roads and junctions is retained, while that of other regions is discarded in the final map. Loop closure is performed only at junctions, thereby increasing both the efficiency and the accuracy of the map. By incorporating all of these new algorithms, the presented hybrid framework can perform as a robust, scalable SLAM approach, or act as the main part of a navigation tool for a mobile robot or an autonomous car in outdoor urban environments. Experimental results obtained on public datasets acquired in challenging urban environments are provided to demonstrate our approach.
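The appearance-based loop-closure idea described in this abstract can be sketched minimally: compare a global descriptor of the current image against the descriptors stored at each topological node, and declare a revisit when similarity exceeds a threshold. This is only an illustrative sketch; the descriptor type, threshold and map structure are assumptions, not the thesis's actual pipeline:

```python
import math

# Toy loop-closure detection over a topological map: each node stores global
# image descriptors; a new image closes a loop if it is similar enough to a
# previously mapped node. Threshold and descriptors are illustrative.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def detect_loop_closure(query_desc, nodes, threshold=0.9):
    """Return the id of the best-matching node, or None if no loop closure."""
    best_id, best_sim = None, threshold
    for node_id, descriptors in nodes.items():
        for d in descriptors:
            s = cosine_similarity(query_desc, d)
            if s > best_sim:
                best_id, best_sim = node_id, s
    return best_id
```

In a hierarchical scheme such as the one described above, a cheap global-descriptor pass like this would only shortlist candidate nodes before a more expensive geometric verification.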
Frost, Duncan. "Long range monocular SLAM". Thesis, University of Oxford, 2017. https://ora.ox.ac.uk/objects/uuid:af38cfa6-fc0a-48ab-b919-63c440ae8774.
Pereira, Savio Joseph. "On the utilization of Simultaneous Localization and Mapping (SLAM) along with vehicle dynamics in Mobile Road Mapping Systems". Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/94425.
Doctor of Philosophy
Mobile Road Mapping Systems (MRMS) are the current solution to the growing demand for high-definition road surface maps in wide-ranging applications, from pavement management to autonomous vehicle testing. The objective of this research is to improve the accuracy of MRMS by investigating methods to improve the sensor data fusion process. The main focus of this work is to apply principles from the field of Simultaneous Localization and Mapping (SLAM) in order to improve the accuracy of MRMS. The concept of SLAM has been successfully applied to the field of mobile robot navigation, and thus the motivation of this work is to investigate its application to the problem of mobile road mapping. In the mobile road mapping problem, the road surface being measured is one of the primary inputs to the dynamics of the MRMS. Hence this work also investigates whether knowledge of the system dynamics can be used to improve accuracy. Also developed as part of this work is a novel method for identifying outliers in road surface datasets and estimating elevations at road surface grid nodes. The developed methods are validated in a simulated environment, and the results demonstrate a significant improvement in the accuracy of MRMS over current state-of-the-art methods.
Carranza, Jose Martinez. "Efficient monocular SLAM by using a structure-driven mapping". Thesis, University of Bristol, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.574263.
Pascoe, Geoffrey. "Robust lifelong visual navigation and mapping". Thesis, University of Oxford, 2017. http://ora.ox.ac.uk/objects/uuid:c0bfa5fb-fa0a-48ed-8d26-90fa167ef6cd.
Pretto, Alberto. "Visual-SLAM for Humanoid Robots". Doctoral thesis, Università degli studi di Padova, 2009. http://hdl.handle.net/11577/3426516.
In robotics, Simultaneous Localization and Mapping (SLAM) is the process by which an autonomous robot builds a map of its surroundings while simultaneously localizing itself within that map. In recent years a considerable number of researchers have developed new families of SLAM algorithms, based on various sensors and robotic platforms. One of the most challenging areas of SLAM research is so-called Visual-SLAM, which uses cameras of various types as navigation sensors. Cameras are inexpensive sensors that gather rich information about the environment. On the other hand, the complexity of computer vision algorithms and the strong dependence of current approaches on the characteristics of the environment keep Visual-SLAM far from being a solved problem. Most SLAM algorithms are usually tested on wheeled robots. Although such platforms are by now robust and stable, research on the design of new robotic platforms is partly migrating towards humanoid robotics. Just like humans, humanoid robots can adapt to changes in the environment to achieve their goals effectively. Nevertheless, only a few researchers have focused their efforts on stable implementations of SLAM and Visual-SLAM algorithms suitable for humanoid robots. These platforms introduce new issues that can compromise the stability of conventional navigation algorithms, especially vision-based ones.
Humanoid robots have a high number of degrees of freedom and can perform complex movements quickly: these characteristics introduce non-deterministic vibrations that can compromise the reliability of the acquired sensor data, for example by introducing undesired effects such as motion blur into the video streams. Moreover, due to the constraints imposed by body balance, such robots cannot always carry high-performance processing units, which are often bulky and heavy: this limits the use of complex, computationally demanding algorithms. Finally, unlike wheeled robots, the complex kinematics of a humanoid robot prevent its motion from being reconstructed from the information provided by the motor encoders. This thesis focuses on the study and development of new methods to address the Visual-SLAM problem, with particular emphasis on the issues raised by using small humanoid robots equipped with a single camera as experimental platforms. Most research efforts in SLAM and Visual-SLAM have concentrated on the state estimation process, i.e. estimating the robot's position and the map of the environment. On the other hand, most of the difficulties encountered in Visual-SLAM research are related to the perception process, i.e. the interpretation of sensor data. This thesis therefore concentrates on improving the perception process from a computer vision point of view, addressing the problems that arise from using small humanoid robots as experimental platforms, such as low computational power, low quality of sensor data and the high number of degrees of freedom of movement.
The low computational power led to a new method for measuring image similarity, based on a compact image description, usable in topological SLAM applications. The motion blur problem is addressed by proposing a new visual feature detection technique, together with a new tracking scheme that is robust even to non-uniform motion blur. A framework for image-based odometry using the presented visual features has also been developed. Finally, a homography-based Visual-SLAM approach is proposed, exploiting the information obtained from a single camera mounted on a humanoid robot under the assumption that the robot moves on a planar surface. All the proposed methods have been validated through experiments and comparative studies, using both standard datasets and images acquired by the cameras installed on small humanoid robots.
Ferrera, Maxime. "Monocular Visual-Inertial-Pressure fusion for Underwater localization and 3D mapping". Thesis, Montpellier, 2019. http://www.theses.fr/2019MONTS089.
This thesis addresses the problem of real-time 3D localization and mapping in underwater environments. In the underwater archaeology field, Remotely Operated Vehicles (ROVs) are used to conduct deep-sea surveys and excavations. Providing both accurate localization and mapping information in real-time is crucial for manual or automated piloting of the robots. While many localization solutions already exist for underwater robots, most of them rely on very accurate sensors, such as Doppler velocity logs or fiber optic gyroscopes, which are very expensive and may be too bulky for small ROVs. Acoustic positioning systems are also commonly used for underwater positioning, but they provide low frequency measurements, with limited accuracy. In this thesis, we study the use of low-cost sensors for accurate underwater localization. Our study investigates the use of a monocular camera, a pressure sensor and a low-cost MEMS-IMU as the only means of performing localization and mapping in the context of underwater archaeology. We have conducted an evaluation of different feature tracking methods on images affected by typical disturbances met in an underwater context. From the results obtained with this evaluation, we have developed a monocular Visual SLAM (Simultaneous Localization and Mapping) method, robust to the specific disturbances of underwater environments. Then, we propose an extension of this method to tightly integrate the measurements of a pressure sensor and an IMU in the SLAM algorithm. The final method provides a very accurate localization and runs in real-time. In addition, an online dense 3D reconstruction module, compliant with a monocular setup, is also proposed. Two lightweight and compact prototypes of this system have been designed and used to record datasets that have been publicly released. Furthermore, these prototypes have been successfully used to test and validate the proposed localization and mapping algorithms in real-case scenarios.
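A concrete example of why a pressure sensor helps in such a fusion: under the standard hydrostatic relation, an absolute pressure reading converts directly to depth, which constrains the vertical component of the trajectory and helps fix the scale that a monocular camera alone cannot observe. A minimal sketch with illustrative constants (nominal seawater density, not any calibration from the thesis):

```python
# Hydrostatic depth from absolute pressure: depth = (P - P_atm) / (rho * g).
# The constants below are illustrative defaults, not thesis parameters.

RHO_SEAWATER = 1025.0  # kg/m^3, nominal seawater density
G = 9.81               # m/s^2, gravitational acceleration
P_ATM = 101325.0       # Pa, pressure at the surface

def depth_from_pressure(p_abs_pa: float) -> float:
    """Convert an absolute pressure reading (Pa) to depth below the surface (m)."""
    return (p_abs_pa - P_ATM) / (RHO_SEAWATER * G)
```

Each such depth measurement can then enter the SLAM estimator as a constraint on the camera's vertical position, which is one way scale drift of a purely monocular system gets anchored.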
Natarajan, Ramkumar. "Efficient Factor Graph Fusion for Multi-robot Mapping". Digital WPI, 2017. https://digitalcommons.wpi.edu/etd-theses/1201.
Cunningham, Alexander G. "Scalable online decentralized smoothing and mapping". Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/51848.
Ihemadu, Okechukwu Clifford. "Robotic navigation in large environments using simultaneous localisation and mapping (SLAM)". Thesis, Queen's University Belfast, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.602546.
Bao, Guanqun. "On Simultaneous Localization and Mapping inside the Human Body (Body-SLAM)". Digital WPI, 2014. https://digitalcommons.wpi.edu/etd-dissertations/206.
Aguilar-Gonzalez, Abiel. "Monocular-SLAM dense mapping algorithm and hardware architecture for FPGA acceleration". Thesis, Université Clermont Auvergne (2017-2020), 2019. http://www.theses.fr/2019CLFAC055.
Simultaneous Localization and Mapping (SLAM) is the problem of constructing a 3D map while simultaneously keeping track of an agent's location within the map. In recent years, work has focused on systems that use a single moving camera as the only sensing mechanism (monocular SLAM). This choice is motivated by the fact that inexpensive commercial cameras are now available, smaller and lighter than sensors previously used, and they provide visual information about the environment that can be exploited to create complex 3D maps while camera poses are simultaneously estimated. Unfortunately, previous monocular SLAM systems are based on optimization techniques that limit their performance in real-time embedded applications. To solve this problem, in this work we propose a new monocular SLAM formulation based on the hypothesis that it is possible to reach high efficiency for embedded applications by increasing the density of the point cloud map (and therefore the 3D map density and the overall positioning and mapping accuracy) and by reformulating the feature-tracking/feature-matching process to achieve high performance on embedded hardware architectures such as FPGA or CUDA. In order to increase the point cloud map density, we propose new feature-tracking/feature-matching and depth-from-motion algorithms that consist of extensions of the stereo matching problem. Then, two different hardware architectures (based on FPGA and CUDA, respectively), fully suitable for real-time embedded applications, are presented. Experimental results show that it is possible to obtain accurate camera pose estimations. Compared to previous monocular systems, our method ranks 5th in the KITTI benchmark suite, with a higher processing speed (it is the fastest algorithm in the benchmark) and more than 10 times the point cloud density of previous approaches.
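Since the proposed tracking/matching and depth-from-motion algorithms are described as extensions of the stereo matching problem, it may help to recall the underlying primitive: block matching along a rectified scan line. A minimal sum-of-absolute-differences sketch (window size and search range are illustrative, not values from the thesis):

```python
# 1D block matching along a rectified scan line using the sum of absolute
# differences (SAD). For a pixel x in the left line, search which disparity d
# best aligns a small window with the right line.

def best_disparity(left, right, x, half=1, max_disp=8):
    """Return the disparity whose right-image window best matches left[x]."""
    def sad(d):
        return sum(abs(left[x + k] - right[x - d + k])
                   for k in range(-half, half + 1))
    # Keep only disparities whose window stays inside the right line.
    candidates = [d for d in range(max_disp + 1)
                  if x - d - half >= 0 and x - d + half < len(right)]
    return min(candidates, key=sad)
```

Depth then follows from disparity as f·B/d for focal length f and baseline B; in a monocular depth-from-motion setting the "baseline" is the camera translation between frames.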
Rogers, John Gilbert. "Life-long mapping of objects and places in domestic environments". Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/47736.
Inostroza, Ferrari Felipe Ignacio. "The estimation of detection statistics in simultaneous localization and mapping". Tesis, Universidad de Chile, 2015. http://repositorio.uchile.cl/handle/2250/134725.
Texto completoIngeniero Civil Eléctrico
The use of Random Finite Sets (RFS) has several advantages over traditional vector-based methods. Among them are the inclusion of the sensor's detection statistics and the elimination of heuristics both for data association and for the initialization and removal of objects in the map. To obtain the benefits of RFS-based estimators in the Simultaneous Localization and Mapping (SLAM) problem, the detection and false-alarm statistics of the feature extractor must be modeled and used in every map update. This thesis presents techniques to obtain these statistics for semantic features extracted from laser measurements, focusing on the extraction of cylindrical objects, such as pillars, trees and lampposts, in outdoor environments. The detection statistics obtained are used within an RFS-based SLAM solution, known as Rao-Blackwellized (RB)-probability hypothesis density (PHD)-SLAM, and within the multiple hypothesis (MH)-factored solution to SLAM (FastSLAM) algorithm, a vector-based SLAM solution. The performance of each algorithm using these statistics is compared with that obtained using constant statistics. The results show the advantages of modeling the detection statistics, particularly under the RFS paradigm. In particular, the error of the map estimates, measured using the optimal sub-pattern assignment (OSPA) distance to an independently generated ground truth map, decreases by 13% for MH-FastSLAM and by 13% for RB-PHD-SLAM when the detection statistics are modeled. Although no ground truth is available for the robot trajectory, the trajectories were evaluated visually, and superior estimates were found for the proposed method.
It is therefore concluded that modeling the detection statistics is of great importance when implementing a SLAM application.
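The OSPA distance used in this evaluation penalizes both the localization error of matched landmarks and the cardinality mismatch between the estimated and ground-truth maps. A minimal brute-force sketch for small 2D maps (the cut-off c and order p are the metric's usual parameters; the values here are illustrative):

```python
import itertools
import math

# Minimal OSPA (Optimal Sub-Pattern Assignment) distance between two small
# 2D point sets. Brute-force over assignments is fine at this scale; real
# implementations use an optimal assignment solver instead.

def ospa(X, Y, c=2.0, p=2):
    if len(X) > len(Y):
        X, Y = Y, X                      # ensure |X| <= |Y|
    m, n = len(X), len(Y)
    if n == 0:
        return 0.0                       # both sets empty
    def d(a, b):
        return min(c, math.dist(a, b))   # cut-off distance
    best = min(
        sum(d(x, y) ** p for x, y in zip(X, perm))
        for perm in itertools.permutations(Y, m)
    )
    # Unassigned points each contribute the full cut-off penalty c.
    return ((best + (c ** p) * (n - m)) / n) ** (1 / p)
```

A missing or spurious landmark thus costs as much as a matched landmark displaced by the cut-off c, which is what makes the metric suitable for comparing whole maps rather than just matched pairs.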
Bacca, Cortés Eval Bladimir. "Appearance-based mapping and localization using feature stability histograms for mobile robot navigation". Doctoral thesis, Universitat de Girona, 2012. http://hdl.handle.net/10803/83589.
This work proposes an appearance-based SLAM method whose main contribution is the Feature Stability Histogram (FSH). The FSH is built through a voting scheme: if a feature is re-observed, it is promoted; otherwise its FSH value is progressively decreased. The FSH is based on the human memory model in order to cope with changing environments and long-term SLAM. This model introduces concepts such as short-term memory (STM) and long-term memory (LTM), which retain information for short and long periods of time, respectively. If an STM entry is rehearsed, it becomes part of the LTM (i.e. it is more stable). However, this work proposes a different memory model, allowing any entry to be part of the STM or the LTM depending on its strength. Only the most stable features are used in SLAM. This innovative feature management strategy is able to cope with changing environments and long-term SLAM.
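The voting scheme described above can be sketched in a few lines: each feature's histogram value grows when the feature is re-observed and decays otherwise, and only sufficiently stable features (the "long-term memory") are fed to SLAM. The reward, decay and threshold values below are illustrative assumptions, not those of the thesis:

```python
# Toy Feature Stability Histogram: promote re-observed features, decay the
# rest, and expose only the stable ones for localization.

def update_fsh(fsh, observed_ids, reward=1.0, decay=0.5):
    """One observation cycle: fsh maps feature id -> stability score."""
    for fid in list(fsh):
        if fid in observed_ids:
            fsh[fid] += reward                      # promotion on re-observation
        else:
            fsh[fid] = max(0.0, fsh[fid] - decay)   # progressive forgetting
    for fid in observed_ids:
        fsh.setdefault(fid, reward)                 # new features enter the STM
    return fsh

def stable_features(fsh, threshold=2.0):
    """Features stable enough (long-term memory) to be used in SLAM."""
    return {fid for fid, score in fsh.items() if score >= threshold}
```

Features caused by transient objects thus fade out of the map on their own, which is the mechanism that makes the approach suitable for changing environments.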
Skoglund, Martin. "Inertial Navigation and Mapping for Autonomous Vehicles". Doctoral thesis, Linköpings universitet, Reglerteknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-110373.
LINK-SIC
Melbouci, Kathia. "Contributions au RGBD-SLAM". Thesis, Université Clermont Auvergne (2017-2020), 2017. http://www.theses.fr/2017CLFAC006/document.
To guarantee autonomous and safe navigation for a mobile robot, the processing performed for its localization must be fast and accurate enough to enable the robot to perform high-level navigation and obstacle-avoidance tasks. Authors of work based on Simultaneous Localization And Mapping (SLAM) have been trying for years to ensure this speed/accuracy trade-off. Most existing work in the field of monocular SLAM has largely centered on sparse feature-based representations of the environment. By tracking salient image points across many frames of video, both the positions of the features and the motion of the camera can be inferred live. Within the visual SLAM community, there has been a focus both on increasing the number of features that can be tracked across an image and on efficiently managing and adjusting this map of features in order to improve camera trajectory and feature location accuracy. However, visual SLAM suffers from some limitations. Indeed, with a single camera and without any assumptions or prior knowledge about the camera's environment, rotation can be retrieved, but translation is only known up to scale. Furthermore, visual monocular SLAM is an incremental process prone to small drifts in both pose measurement and scale which, when integrated over time, become increasingly significant over large distances. To cope with these limitations, we have centered our work on the following issue: integrating additional information into an existing monocular visual SLAM system in order to constrain the camera localization and the mapped points, provided that the high speed of the initial SLAM process is preserved and that the added constraints do not cause the process to fail. For these reasons, we have chosen to integrate the depth information provided by a 3D sensor (e.g. Microsoft Kinect) and geometric information about the scene structure.
The primary contribution of this work consists of modifying the SLAM algorithm proposed by Mouragnon et al. (2006b) to take into account the depth measurements provided by a 3D sensor. This involves several rather straightforward changes, but also a way to combine the depth and visual data in the bundle adjustment process. The second contribution is a solution that uses, in addition to the depth and visual data, constraints on points belonging to the planes of the scene. The proposed solutions have been validated on synthetic as well as real sequences depicting various environments, and compared to state-of-the-art methods. The performance obtained demonstrates that the additional constraints significantly improve the accuracy and robustness of the SLAM localization. Furthermore, these solutions are easy to deploy and computationally inexpensive.
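The spirit of combining depth and visual data in bundle adjustment can be illustrated by the per-observation residual: a standard pinhole reprojection error extended with a weighted depth term. This is a generic sketch under common assumptions (pinhole model, a single depth weight), not the exact cost function of the thesis:

```python
# Combined reprojection + depth residual for one observation of a 3D point
# expressed in the camera frame. A bundle adjuster would minimize the sum of
# squared residuals of this form over all poses and points.

def rgbd_residual(point_cam, obs_uv, obs_depth, fx, fy, cx, cy, w_depth=1.0):
    """Return (du, dv, dz): reprojection error plus weighted depth error."""
    X, Y, Z = point_cam
    u = fx * X / Z + cx          # pinhole projection
    v = fy * Y / Z + cy
    return (u - obs_uv[0], v - obs_uv[1], w_depth * (Z - obs_depth))
```

The depth term directly anchors the metric scale that a purely monocular reprojection cost leaves unobservable, which is the key benefit the abstract describes.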
Huai, Jianzhu. "Collaborative SLAM with Crowdsourced Data". The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1483669256597152.
Ni, Kai. "Tectonic smoothing and mapping". Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/41072.
Barkby, Stephen. "Efficient and Featureless Approaches to Bathymetric Simultaneous Localisation and Mapping". Thesis, The University of Sydney, 2011. http://hdl.handle.net/2123/7919.
Li, X. "Visual navigation in unmanned air vehicles with simultaneous location and mapping (SLAM)". Thesis, Cranfield University, 2014. http://dspace.lib.cranfield.ac.uk/handle/1826/8644.
Torresani, Alessandro. "A portable V-SLAM based solution for advanced visual 3D mobile mapping". Doctoral thesis, Università degli studi di Trento, 2022. https://hdl.handle.net/11572/362031.
Papakis, Ioannis. "A Bayesian Framework for Multi-Stage Robot, Map and Target Localization". Thesis, Virginia Tech, 2019. http://hdl.handle.net/10919/93024.
M.S.
This thesis presents a generalized framework that allows a robot to localize itself and a static target while building a map of the environment. This map is used, as in the Simultaneous Localization and Mapping (SLAM) framework, to enhance the robot's accuracy with close features. The target is distinguished from the rest of the features because the robot has to navigate to its location and thus needs to observe it continuously from a long distance. The contribution of the proposed approach is to enable the robot to track a target object or region using a multi-stage technique. In the first stage, the robot and close landmarks are estimated simultaneously, and both are corrected. Using the robot's uncertainty in its estimate, the target state is then estimated sequentially, assuming a known robot state; this decouples the target estimation from the rest of the process. In the second stage, with the target closer, the target, robot and landmarks are estimated simultaneously. When the robot is far away, the sequential stage is efficient at tracking the target position while maintaining an accurate robot state using only close features. When the robot is closer to the target and most of its field of view is covered by the target, it is shown that simultaneous correction needs to be used in order to minimize robot, target and map uncertainties in the absence of other landmarks.
Morrell, Benjamin. "Enhancing 3D Autonomous Navigation Through Obstacle Fields: Homogeneous Localisation and Mapping, with Obstacle-Aware Trajectory Optimisation". Thesis, The University of Sydney, 2018. http://hdl.handle.net/2123/19992.
Jama, Michal. "Monocular vision based localization and mapping". Diss., Kansas State University, 2011. http://hdl.handle.net/2097/8561.
Department of Electrical and Computer Engineering
Balasubramaniam Natarajan
Dale E. Schinstock
In this dissertation, two applications related to vision-based localization and mapping are considered: (1) improving satellite navigation system based location estimates by using on-board camera images, and (2) deriving position information from a video stream and using it to aid the autopilot of an unmanned aerial vehicle (UAV). In the first part of this dissertation, a method is presented for analyzing a minimization process called bundle adjustment (BA), used in stereo-imagery-based 3D terrain reconstruction to refine estimates of camera poses (positions and orientations). In particular, imagery obtained with pushbroom cameras is of interest. This work proposes a method to identify cases in which BA does not work as intended, i.e., cases in which the pose estimates returned by the BA are no more accurate than estimates provided by a satellite navigation system, due to the existence of degrees of freedom (DOF) in the BA. Use of inaccurate pose estimates causes warping and scaling effects in the reconstructed terrain and prevents the terrain from being used in scientific analysis. The main contributions of this part of the work include: 1) formulation of a method for detecting DOF in the BA; and 2) identifying that two camera geometries commonly used to obtain stereo imagery have DOF. This part also presents results demonstrating that avoidance of the DOF can give significant accuracy gains in aerial imagery. The second part of this dissertation proposes a vision-based system for UAV navigation. This is a monocular vision-based simultaneous localization and mapping (SLAM) system, which measures the position and orientation of the camera and builds a map of the environment using a video stream from a single camera. This differs from common SLAM solutions that use sensors that measure depth, such as LIDAR, stereoscopic cameras or depth cameras.
The SLAM solution was built by significantly modifying and extending a recent open-source SLAM solution that is fundamentally different from the traditional approach to solving the SLAM problem. The modifications made are those needed to provide the position measurements necessary for the navigation solution on a UAV while simultaneously building the map, all while maintaining control of the UAV. The main contributions of this part include: 1) extension of the map-building algorithm so it can be used realistically while controlling a UAV and simultaneously building the map; 2) improved performance of the SLAM algorithm at lower camera frame rates; and 3) the first known demonstration of a monocular SLAM algorithm successfully controlling a UAV while simultaneously building the map. This work demonstrates that a fully autonomous UAV that uses monocular vision for navigation is feasible and can be effective in Global Positioning System-denied environments.
Wang, Zhan. "Guaranteed Localization and Mapping for Autonomous Vehicles". Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS395.
With the rapid development and extensive application of robot technology, research on intelligent mobile robots has been included in the high-technology development plans of many countries. Autonomous navigation plays an increasingly important role in the research field of intelligent mobile robots, and localization and map building are the core problems the robot must solve to achieve it. Probabilistic techniques (such as the Extended Kalman Filter and the Particle Filter) have long been used to solve the robotic localization and mapping problem. Despite their good performance in practical applications, they can suffer from inconsistency in nonlinear, non-Gaussian scenarios. This thesis focuses on interval analysis based methods applied to the robotic localization and mapping problem. Instead of making hypotheses on the probability distribution, all the sensor noises are assumed to be bounded within known limits. On this foundation, the thesis formulates the localization and mapping problem in the framework of the Interval Constraint Satisfaction Problem (ICSP) and applies consistent interval techniques to solve it in a guaranteed way. To deal with the "uncorrected yaw" problem encountered by Interval Constraint Propagation (ICP) based localization approaches, this thesis proposes a new ICP algorithm for real-time vehicle localization. The proposed algorithm employs a low-level consistency algorithm and is capable of correcting heading uncertainty. Afterwards, the thesis presents an interval analysis based SLAM algorithm (IA-SLAM) dedicated to monocular cameras. Bounded-error parameterization and undelayed initialization of natural landmarks are proposed. The SLAM problem is formulated as an ICSP and solved via interval constraint propagation techniques. A shaving method for landmark uncertainty contraction and an ICSP graph based optimization method are put forward to improve the obtained results.
A theoretical analysis of mapping consistency is also provided to illustrate the strength of IA-SLAM. Moreover, based on the proposed IA-SLAM algorithm, the thesis presents a low-cost and consistent approach for outdoor vehicle localization. It works in a two-stage framework (visual teach and repeat) and is validated with a car-like vehicle equipped with dead-reckoning sensors and a monocular camera.
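Interval constraint propagation, the core tool mentioned above, contracts the domains of variables linked by a constraint without ever discarding a feasible value. A minimal forward-backward contractor for the constraint z = x + y (a single pass; the real algorithms iterate such contractors to a fixed point over a whole network of constraints):

```python
# Forward-backward interval contractor for z = x + y.
# Intervals are (lo, hi) pairs; any value inconsistent with the constraint
# is removed, and no feasible value is ever lost (guaranteed contraction).

def contract_add(x, y, z):
    """Contract the intervals x, y, z subject to z = x + y."""
    zc = (max(z[0], x[0] + y[0]), min(z[1], x[1] + y[1]))   # forward: z in x + y
    xc = (max(x[0], z[0] - y[1]), min(x[1], z[1] - y[0]))   # backward: x in z - y
    yc = (max(y[0], z[0] - x[1]), min(y[1], z[1] - x[0]))   # backward: y in z - x
    return xc, yc, zc
```

In a localization setting, x and y might be an interval pose component and a bounded-error odometry increment, with z a bounded measurement; propagating many such constraints yields the guaranteed enclosures that the thesis builds on.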
Pietzsch, Tobias. "Towards Dense Visual SLAM". Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2011. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-78943.
Desrochers, Benoît. "Simultaneous localization and mapping in unstructured environments : a set-membership approach". Thesis, Brest, École nationale supérieure de techniques avancées Bretagne, 2018. http://www.theses.fr/2018ENTA0006/document.
This thesis deals with the simultaneous localization and mapping (SLAM) problem in unstructured environments, i.e. environments that cannot be described by geometrical features. This type of environment frequently occurs in an underwater context. Unlike classical approaches, the environment is not described by a collection of punctual features or landmarks, but directly by sets. These sets, called shapes, are associated with physical features such as the relief, some textures or, in a more symbolic way, the obstacle-free space that can be sensed around a robot. From a theoretical point of view, the SLAM problem is formalized as a hybrid constraint network where the variables are vectors and subsets of Rn. Whereas an uncertain real number is enclosed in an interval, an uncertain shape is enclosed in an interval of sets. The main contribution of this thesis is the introduction of a new formalism, based on interval analysis, able to deal with these domains. As an application, we illustrate our method on a SLAM problem based on bathymetric data acquired by an autonomous underwater vehicle (AUV).
Pomerleau, François. "Registration algorithm optimized for simultaneous localization and mapping". Mémoire, Université de Sherbrooke, 2008. http://savoirs.usherbrooke.ca/handle/11143/1465.
Texto completoSkinner, John R. "Simulation for robot vision". Thesis, Queensland University of Technology, 2022. https://eprints.qut.edu.au/227404/1/John_Skinner_Thesis.pdf.
Texto completoVidiyala, Sai Krishna. "Simultaneous localization and mapping with radio signals". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/24138/.
Texto completoLee, Chun-Fan Computer Science & Engineering Faculty of Engineering UNSW. "Towards topological mapping with vision-based simultaneous localization and map building". Awarded by:University of New South Wales. Computer Science & Engineering, 2008. http://handle.unsw.edu.au/1959.4/41551.
Texto completoGonzalez, Cadenillas Clayder Alejandro. "An improved feature extractor for the lidar odometry and mapping algorithm". Tesis, Universidad de Chile, 2019. http://repositorio.uchile.cl/handle/2250/171499.
Texto completoLa extracción de características es una tarea crítica en la localización y mapeo simultáneo o Simultaneous Localization and Mapping (SLAM) basado en características, que es uno de los problemas más importantes de la comunidad robótica. Un algoritmo que resuelve SLAM utilizando características basadas en LiDAR es el algoritmo LiDAR Odometry and Mapping (LOAM). Este algoritmo se considera actualmente como el mejor algoritmo SLAM según el Benchmark KITTI. El algoritmo LOAM resuelve el problema de SLAM a través de un enfoque de emparejamiento de características y su algoritmo de extracción de características detecta las características clasifican los puntos de una nube de puntos como planos o agudos. Esta clasificación resulta de una ecuación que define el nivel de suavidad para cada punto. Sin embargo, esta ecuación no considera el ruido de rango del sensor. Por lo tanto, si el ruido de rango del LiDAR es alto, el extractor de características de LOAM podría confundir los puntos planos y agudos, lo que provocaría que la tarea de emparejamiento de características falle. Esta tesis propone el reemplazo del algoritmo de extracción de características del LOAM original por el algoritmo Curvature Scale Space (CSS). La elección de este algoritmo se realizó después de estudiar varios extractores de características en la literatura. El algoritmo CSS puede mejorar potencialmente la tarea de extracción de características en entornos ruidosos debido a sus diversos niveles de suavizado Gaussiano. La sustitución del extractor de características original de LOAM por el algoritmo CSS se logró mediante la adaptación del algoritmo CSS al Velodyne VLP-16 3D LiDAR. El extractor de características de LOAM y el extractor de características de CSS se probaron y compararon con datos reales y simulados, incluido el dataset KITTI utilizando las métricas Optimal Sub-Pattern Assignment (OSPA) y Absolute Trajectory Error (ATE). 
Para todos estos datasets, el rendimiento de extracción de características de CSS fue mejor que el del algoritmo LOAM en términos de métricas OSPA y ATE.
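The smoothness equation this abstract refers to can be sketched as follows. Assuming the commonly cited LOAM form c = ||Σ_j (p_i - p_j)|| / (|S| · ||p_i||) over a window S of same-scan-line neighbours, a high score marks a sharp (edge) point and a low score a planar one; the threshold and window size below are illustrative, not the thesis values.

```python
# Hypothetical sketch of a LOAM-style smoothness score and classification.

import math

def smoothness(scan, i, half_window=5):
    """c = ||sum_j (p_i - p_j)|| / (|S| * ||p_i||) over a neighbour window."""
    n = len(scan)
    js = [j for j in range(i - half_window, i + half_window + 1)
          if j != i and 0 <= j < n]
    sx = sum(scan[i][0] - scan[j][0] for j in js)
    sy = sum(scan[i][1] - scan[j][1] for j in js)
    sz = sum(scan[i][2] - scan[j][2] for j in js)
    norm_pi = math.sqrt(sum(c * c for c in scan[i]))
    return math.sqrt(sx * sx + sy * sy + sz * sz) / (len(js) * norm_pi)

def label(scan, i, threshold=0.1):
    return "sharp" if smoothness(scan, i) > threshold else "planar"

# On a straight wall the differences cancel and the point is planar;
# at the corner of an L-shaped wall the score is large and the point sharp.
wall = [(float(k), 2.0, 0.0) for k in range(-5, 6)]
corner = [(float(k), 2.0, 0.0) for k in range(-5, 1)] + \
         [(0.0, 2.0 + float(k), 0.0) for k in range(1, 6)]
print(label(wall, 5), label(corner, 5))   # prints: planar sharp
```

The failure mode discussed in the thesis is visible here: range noise perturbs the summed differences, so on a noisy scan a flat point's score can cross the threshold and be mislabeled as sharp.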
Dahlin, Alfred. "Simultaneous Localization and Mapping for an Unmanned Aerial Vehicle Using Radar and Radio Transmitters". Thesis, Linköpings universitet, Reglerteknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-110645.
Texto completoContreras, Samamé Luis Federico. "SLAM collaboratif dans des environnements extérieurs". Thesis, Ecole centrale de Nantes, 2019. http://www.theses.fr/2019ECDN0012/document.
Texto completoThis thesis proposes a large-scale mapping model of urban and rural environments using 3D data acquired by several robots. The work contributes to the mapping research field in two main ways. The first contribution is the creation of a new framework, CoMapping, which allows 3D maps to be generated cooperatively. The framework applies to outdoor environments with a decentralized approach. CoMapping's functionality includes the following elements: first, each robot builds a map of its environment in point cloud format. To do this, the mapping system was set up on computers dedicated to each vehicle, processing distance measurements from a 3D LiDAR moving with six degrees of freedom (6-DOF). The robots then share their local maps and merge the point clouds individually to improve their local map estimates. The second key contribution is a group of metrics for analyzing the merging and map-sharing processes between robots. We present experimental results that validate the CoMapping framework with its respective metrics. All tests were carried out in outdoor urban environments around the campus of the École Centrale de Nantes as well as in rural areas
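The merging step described above can be sketched in a few lines: each robot's local cloud is moved into a common frame with a known relative pose, then concatenated with a simple voxel filter to collapse duplicated observations. This is a generic illustration under assumed 2D poses, not the CoMapping implementation.

```python
# Illustrative point cloud merging: transform each local map into a common
# frame with a known 2D pose (tx, ty, theta), then keep one point per voxel.

import math

def transform(cloud, tx, ty, theta):
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in cloud]

def merge(clouds_with_poses, voxel=0.5):
    seen, merged = set(), []
    for cloud, (tx, ty, theta) in clouds_with_poses:
        for p in transform(cloud, tx, ty, theta):
            key = (round(p[0] / voxel), round(p[1] / voxel))
            if key not in seen:          # deduplicate overlapping regions
                seen.add(key)
                merged.append(p)
    return merged

# Two robots observed the same wall from frames offset by 2 m along x:
# after merging, the duplicated points collapse to a single set of voxels.
map_a = [(float(k), 0.0) for k in range(5)]          # robot A, own frame
map_b = [(float(k) - 2.0, 0.0) for k in range(5)]    # robot B, shifted frame
merged = merge([(map_a, (0.0, 0.0, 0.0)), (map_b, (2.0, 0.0, 0.0))])
```

In the decentralized setting each robot runs this merge locally on the maps it receives, so no central server ever holds the global map.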
Sünderhauf, Niko. "Robust Optimization for Simultaneous Localization and Mapping". Doctoral thesis, Universitätsbibliothek Chemnitz, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-86443.
Texto completoAbouzahir, Mohamed. "Algorithmes SLAM : Vers une implémentation embarquée". Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS058/document.
Texto completoAutonomous navigation is a major research axis in the field of mobile robotics. In this context, the robot must have algorithms that allow it to move autonomously in complex and unfamiliar environments. Mapping in advance by a human operator is a tedious and time-consuming task, and it is not always reliable, especially when the structure of the environment changes. SLAM algorithms allow a robot to map its environment while localizing itself in space. SLAM algorithms are becoming more efficient, but no complete hardware or architectural implementation has yet been produced. Such an implementation must take into account energy consumption, embeddability and computing power. This work aims to evaluate embedded systems implementing localization and scene reconstruction (SLAM). The methodology adopts an AAM (Algorithm Architecture Matching) approach to improve the efficiency of algorithm implementation, especially for systems with strong constraints. An embedded SLAM system must have an electronic and software architecture that ensures the production of relevant data from sensor information, while ensuring the localization of the robot in its environment. The objective is therefore to define, for a chosen algorithm, an architecture model that meets the constraints of embedded systems. The first part of this thesis explores the different algorithmic approaches for solving the SLAM problem and studies four of them in depth: FastSLAM2.0, ORB-SLAM, RatSLAM and Linear SLAM. These algorithms were then evaluated on multiple embedded architectures to study their portability to systems with low energy consumption and limited resources. The comparison takes into account execution time and consistency of results. 
After a thorough analysis of the timing evaluations for each algorithm, FastSLAM2.0 was chosen, for its compromise between localization consistency and execution time, as the candidate for further study on an embedded heterogeneous architecture. The second part of this thesis is devoted to an embedded implementation of monocular FastSLAM2.0 dedicated to large-scale environments. An algorithmic modification of FastSLAM2.0 was necessary to better adapt it to the constraints imposed by large-scale environments. The resulting system is designed around a parallel multi-core architecture. Using an algorithm-architecture matching approach, FastSLAM2.0 was implemented on a heterogeneous CPU-GPU architecture. With an effective partitioning of the algorithm, an overall acceleration factor of about 22 was obtained on a recent architecture dedicated to embedded systems. The execution profile of the FastSLAM2.0 algorithm can benefit from a highly parallel architecture, so a second hardware instance based on a programmable FPGA architecture is proposed. This implementation was performed using high-level synthesis tools to reduce development time. The implementation results on the hardware architecture were compared with those of GPU-based architectures. The gains obtained are promising, even compared to a high-end GPU that currently has a large number of cores. The resulting system can map large environments while maintaining the balance between the consistency of the localization results and real-time performance. Using multiple calculators involves a means of data exchange between them, which requires strong coupling (communication bus and shared memory). This thesis has put forward the interest of parallel heterogeneous architectures (multi-core, GPU) for embedding SLAM algorithms. 
FPGA-based heterogeneous architectures in particular can become potential candidates for embedding complex algorithms dealing with massive data
Williamson, Benjamin. "Developing a holonomic iROV as a tool for kelp bed mapping". Thesis, University of Bath, 2013. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.600219.
Texto completoSinivaara, Kristian. "Simultaneous Localisation and Mapping using Autonomous Target Detection and Recognition". Thesis, Linköpings universitet, Reglerteknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-110410.
Texto completoKondrath, Andrew Stephen. "Frequency Modulated Continuous Wave Radar and Video Fusion for Simultaneous Localization and Mapping". Wright State University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=wright1347715085.
Texto completoSvensson, Depraetere Xavier. "Application of new particle-based solutions to the Simultaneous Localization and Mapping (SLAM) problem". Thesis, KTH, Matematisk statistik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-212999.
Texto completoThis thesis explores new solutions to the Simultaneous Localization and Mapping (SLAM) problem based on particle filtering and particle smoothing methods. At its core, the SLAM problem consists of two mutually dependent tasks: mapping and tracking. Three solution methods using different smoothing techniques are applied to solve these tasks: fixed-lag smoothing (FLS), forward-only forward-filtering backward-smoothing (forward-only FFBSm) and the particle-based, rapid incremental smoother (PaRIS). In combination with these smoothing techniques, the well-established Expectation-Maximization (EM) algorithm is used to produce maximum-likelihood estimates of the map. The three solution methods are then evaluated and compared in a simulated environment.
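The tracking half of such a particle-based pipeline can be sketched with a textbook bootstrap particle filter on a 1D toy model; the smoothing (FLS/FFBSm/PaRIS) and EM map-estimation layers are omitted. Everything below is a generic sketch, not the thesis implementation.

```python
# Bootstrap particle filter on a 1D random-walk model with direct
# position measurements: propagate, weight, normalize, resample.

import math, random

random.seed(1)

def step(particles, weights, control, measurement, sigma=0.5):
    # propagate: x_t = x_{t-1} + u + process noise
    particles = [x + control + random.gauss(0.0, sigma) for x in particles]
    # weight each particle by the Gaussian likelihood of the measurement
    weights = [w * math.exp(-0.5 * ((measurement - x) / sigma) ** 2)
               for x, w in zip(particles, weights)]
    total = sum(weights)
    weights = [w / total for w in weights]
    # multinomial resampling to fight weight degeneracy
    particles = random.choices(particles, weights=weights, k=len(particles))
    return particles, [1.0 / len(particles)] * len(particles)

n = 500
particles = [random.uniform(-10.0, 10.0) for _ in range(n)]
weights = [1.0 / n] * n
true_x = 0.0
for _ in range(10):
    true_x += 1.0                       # the robot moves +1 per step
    particles, weights = step(particles, weights, 1.0, true_x)
estimate = sum(particles) / n           # posterior mean, close to true_x = 10
```

The smoothers named in the abstract refine exactly these particle trajectories backwards in time, which is what makes the EM map update well-behaved.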
Klečka, Jan. "Pořízení a zpracování dat pro 2D a 3D SLAM úlohy robotické navigace". Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2014. http://www.nusl.cz/ntk/nusl-220918.
Texto completoTerzakis, George. "Visual odometry and mapping in natural environments for arbitrary camera motion models". Thesis, University of Plymouth, 2016. http://hdl.handle.net/10026.1/6686.
Texto completoTrevor, Alexander J. B. "Semantic mapping for service robots: building and using maps for mobile manipulators in semi-structured environments". Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/53583.
Texto completoIreta, Munoz Fernando Israel. "Estimation de pose globale et suivi pour la localisation RGB-D et cartographie 3D". Thesis, Université Côte d'Azur (ComUE), 2018. http://www.theses.fr/2018AZUR4054/document.
Texto completoThis thesis presents a detailed account of novel techniques for pose estimation using both color and depth information from RGB-D sensors. Since pose estimation simultaneously requires an environment map, 3D scene reconstruction is also considered in this thesis. Localization and mapping have been extensively studied by the robotics and computer vision communities and are widely employed in mobile robotics and autonomous systems for tasks such as tracking, dense 3D mapping and robust localization. The central challenge of pose estimation lies in how to relate sensor measurements to the state of position and orientation. When a variety of sensors providing different information about the same data points are available, the challenge becomes how to best fuse the information acquired at different times. To address these problems, a novel registration method named Point-to-hyperplane Iterative Closest Point is introduced, analysed, compared and applied to pose estimation and key-frame mapping. The proposed method jointly minimizes different metric errors as a single n-dimensional measurement vector, without requiring a scaling factor to tune their relative importance during the minimization. Within the Point-to-hyperplane framework, two main axes have been investigated. First, the proposed method is employed for visual odometry and 3D mapping. Experiments show that it accurately estimates the pose locally, enlarging the domain of convergence and speeding up the alignment. The invariance is mathematically proven, and results in both simulated and real environments are provided. Second, a method is proposed for global localization, enabling place recognition and detection. 
This method combines the Point-to-hyperplane measure with a Branch-and-bound architecture to estimate the pose globally. Since Branch-and-bound strategies obtain rough alignments regardless of the initial position between frames, the Point-to-hyperplane method can then be used for refinement. It is demonstrated that the bounds are better constrained when more dimensions are considered. This last approach proves useful for solving mistracking problems and for obtaining globally consistent 3D maps. In the last part of the thesis, both visual SLAM and 3D mapping results are provided to demonstrate the proposed approaches and their performance
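The core idea of the joint n-dimensional error can be sketched numerically: instead of weighting a geometric and a photometric residual by a hand-tuned scale, each matched point contributes one higher-dimensional residual vector (here 4D: x, y, z, intensity) and a single norm is minimized. The toy data below is a hypothetical illustration of that cost structure, not the thesis implementation.

```python
# Stacked geometric + photometric residuals as one joint cost,
# with no per-modality scaling factor.

import math

def stacked_residual(p_src, p_dst):
    """4D residual between matched measurements (x, y, z, intensity)."""
    return [a - b for a, b in zip(p_src, p_dst)]

def total_error(src, dst):
    # one joint norm over all dimensions of all matched pairs
    return math.sqrt(sum(sum(r * r for r in stacked_residual(a, b))
                         for a, b in zip(src, dst)))

# First pair differs geometrically, second pair also photometrically;
# both enter the same cost on an equal footing.
src = [(0.0, 0.0, 0.0, 0.5), (1.0, 0.0, 0.0, 0.6)]
dst = [(0.1, 0.0, 0.0, 0.5), (1.0, 0.1, 0.0, 0.7)]
err = total_error(src, dst)   # sqrt(0.01 + 0.01 + 0.01), about 0.173
```

An optimizer iterating the pose to drive this single norm down is what removes the need to tune the relative importance of depth versus color.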
Sünderhauf, Niko. "Robust optimization for simultaneous localization and mapping". Thesis, Technischen Universitat Chemnitz, 2012. https://eprints.qut.edu.au/109667/1/109667.pdf.
Texto completo