A selection of scholarly literature on the topic "SLAM mapping"


Browse lists of current articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "SLAM mapping".


Journal articles on the topic "SLAM mapping"

1

Saat, Shahrizal, AN MF Airini, Muhammad Salihin Saealal, A. R. Wan Norhisyam, and M. S. Farees Ezwan. "Hector SLAM 2D Mapping for Simultaneous Localization and Mapping (SLAM)." Journal of Engineering and Applied Sciences 14, no. 16 (November 10, 2019): 5610–15. http://dx.doi.org/10.36478/jeasci.2019.5610.5615.

2

Lu, Xiaoyun, Hu Wang, Shuming Tang, Huimin Huang, and Chuang Li. "DM-SLAM: Monocular SLAM in Dynamic Environments." Applied Sciences 10, no. 12 (June 21, 2020): 4252. http://dx.doi.org/10.3390/app10124252.

Abstract:
Many classic monocular visual SLAM (simultaneous localization and mapping) systems have been developed over the past decades, yet most of them fail when dynamic scenarios dominate. DM-SLAM, built on ORB-SLAM2, is proposed for handling dynamic objects in the environment. This article concentrates on two aspects. First, we propose a distribution- and local-based RANSAC (Random Sample Consensus) algorithm (DLRSAC) that extracts static features from dynamic scenes by exploiting the intrinsic difference between moving and static points; it is integrated into the initialization of DM-SLAM. Second, we design a candidate map-point selection mechanism based on neighborhood mutual exclusion to balance camera-pose tracking accuracy and system robustness in dynamic scenes. Finally, we conduct experiments on a public dataset and compare DM-SLAM with ORB-SLAM2. The experiments corroborate the superiority of DM-SLAM.
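The DLRSAC algorithm itself is not reproduced in this listing; as a toy illustration of the general idea the abstract describes, separating static from dynamic features with RANSAC, the sketch below labels a match "static" when it agrees with the dominant inter-frame motion, assumed here (purely for illustration) to be a 2D translation.

```python
import numpy as np

def ransac_static_features(pts_a, pts_b, iters=200, tol=2.0, seed=0):
    """Toy RANSAC: label matches as 'static' if they agree with the dominant
    inter-frame translation (a stand-in for the background motion).
    pts_a, pts_b: (N, 2) arrays of matched keypoint coordinates."""
    rng = np.random.default_rng(seed)
    flows = pts_b - pts_a                      # per-match motion vectors
    best_mask = np.zeros(len(flows), dtype=bool)
    for _ in range(iters):
        t = flows[rng.integers(len(flows))]    # hypothesis from one match
        mask = np.linalg.norm(flows - t, axis=1) < tol
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return best_mask                           # True = consistent with background

# synthetic check: 80 background points move by (5, 0); 20 dynamic points differ
rng = np.random.default_rng(1)
a = rng.uniform(0, 100, (100, 2))
b = a + np.array([5.0, 0.0])
b[80:] += rng.uniform(10, 20, (20, 2))         # dynamic outliers
static = ransac_static_features(a, b)
print(static[:80].all(), static[80:].any())    # True False
```

In the paper the motion model is richer and the sampling is distribution-aware; the sketch only shows the inlier/outlier separation principle.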
3

Boyu, Kuang, Chen Yuheng, and Rana Zeeshan A. "OG-SLAM: A real-time and high-accurate monocular visual SLAM framework." Trends in Computer Science and Information Technology 7, no. 2 (July 26, 2022): 047–54. http://dx.doi.org/10.17352/tcsit.000050.

Abstract:
The challenge of improving the accuracy of monocular Simultaneous Localization and Mapping (SLAM) is considered, a problem that arises widely in computer vision, autonomous robotics, and remote sensing. A new framework, ORB-GMS-SLAM (OG-SLAM), is proposed, which introduces region-based motion smoothness into a typical visual SLAM (V-SLAM) system. The region-based motion smoothness is implemented by integrating Oriented FAST and Rotated BRIEF (ORB) features and the Grid-based Motion Statistics (GMS) algorithm into the feature-matching process. OG-SLAM significantly reduces the absolute trajectory error (ATE) of the key-frame trajectory estimate without compromising real-time performance. This study compares the proposed OG-SLAM with an advanced V-SLAM system (ORB-SLAM2). The results indicate an accuracy improvement of almost 75% on a typical RGB-D SLAM benchmark. Compared with the best other ORB-SLAM2 setting (1800 key points), OG-SLAM improves accuracy by around 20% without losing real-time performance. The OG-SLAM framework is significantly more robust than ORB-SLAM2 in scenarios with rotation, no loops, and long ground-truth lengths. Furthermore, as far as the authors are aware, this framework is the first attempt to integrate the GMS algorithm into V-SLAM.
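OpenCV's contrib module ships a GMS matcher (`cv2.xfeatures2d.matchGMS`); to stay self-contained, the following is a pure-NumPy toy of the grid statistic itself, under the simplifying assumption that support is counted per pair of grid cells rather than over cell neighbourhoods as in the full algorithm.

```python
import numpy as np

def gms_filter(pts_a, pts_b, img_size=100, grid=5, min_support=4):
    """Toy grid-based motion statistics: keep a match when the pair of grid
    cells it connects is supported by several other matches (true matches
    show locally consistent motion; spurious ones rarely do)."""
    cell = img_size / grid
    ca = (pts_a // cell).astype(int)           # source-image cell coordinates
    cb = (pts_b // cell).astype(int)           # target-image cell coordinates
    # encode each (source cell, target cell) pair as a single integer key
    keys = ((ca[:, 0] * grid + ca[:, 1]) * grid * grid
            + cb[:, 0] * grid + cb[:, 1])
    uniq, counts = np.unique(keys, return_counts=True)
    support = counts[np.searchsorted(uniq, keys)]
    return support >= min_support

# dense coherent matches on a lattice, all shifted by (12, 0), plus 10 random
xs = np.arange(0, 80, 4.0)
a = np.array([(x, y) for x in xs for y in xs])      # 400 coherent points
b = a + np.array([12.0, 0.0])
rng = np.random.default_rng(0)
a = np.vstack([a, rng.uniform(0, 80, (10, 2))])
b = np.vstack([b, rng.uniform(0, 100, (10, 2))])    # spurious targets
kept = gms_filter(a, b)
```

Coherent matches land in well-populated cell pairs and survive; an isolated spurious match only survives if it happens to fall into a populated pair.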
4

Peng, Tao, Dingnan Zhang, Don Lahiru Nirmal Hettiarachchi, and John Loomis. "An Evaluation of Embedded GPU Systems for Visual SLAM Algorithms." Electronic Imaging 2020, no. 6 (January 26, 2020): 325–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.6.iriacv-074.

Abstract:
Simultaneous Localization and Mapping (SLAM) solves the computational problem of estimating the location of a robot and a map of its environment. SLAM is widely used in navigation, odometry, and mobile robot mapping. However, the performance and efficiency of small industrial mobile robots and unmanned aerial vehicles (UAVs) are highly constrained by battery capacity. Therefore, a mobile robot, especially a UAV, requires low power consumption while maintaining high performance. This paper presents holistic, quantitative performance evaluations of embedded computing devices on the Nvidia Jetson platform, based on the execution of two state-of-the-art visual SLAM algorithms, ORB-SLAM2 and OpenVSLAM, on the Nvidia Jetson Nano, Jetson TX2, and Jetson Xavier.
5

Sun, Liuxin, Junyu Wei, Shaojing Su, and Peng Wu. "SOLO-SLAM: A Parallel Semantic SLAM Algorithm for Dynamic Scenes." Sensors 22, no. 18 (September 15, 2022): 6977. http://dx.doi.org/10.3390/s22186977.

Abstract:
Simultaneous localization and mapping (SLAM) is a core technology for mobile robots working in unknown environments. Most existing SLAM techniques achieve good localization accuracy in static scenes, as they are designed on the assumption that the unknown scene is rigid. However, real-world environments are dynamic, which degrades the performance of such algorithms. To address this, we propose a new parallel processing system, SOLO-SLAM, based on the existing ORB-SLAM3 algorithm. By improving the semantic thread and designing a new dynamic-point filtering strategy, SOLO-SLAM runs the semantic and SLAM threads in parallel, effectively improving the real-time performance of the system. Additionally, we enhance the filtering of dynamic points using a combination of regional dynamic degree and geometric constraints. The system adds a new semantic constraint based on the semantic attributes of map points, which partly compensates for the reduced number of optimization constraints caused by dynamic-information filtering. On the publicly available TUM dataset, SOLO-SLAM is compared with other state-of-the-art schemes: it outperforms ORB-SLAM3 in accuracy (maximum improvement of 97.16%) and achieves better time efficiency than Dyna-SLAM (maximum improvement of 90.07%).
6

Skrzypczyński, Piotr. "Simultaneous localization and mapping: A feature-based probabilistic approach." International Journal of Applied Mathematics and Computer Science 19, no. 4 (December 1, 2009): 575–88. http://dx.doi.org/10.2478/v10006-009-0045-z.

Abstract:
This article provides an introduction to Simultaneous Localization And Mapping (SLAM), focusing on probabilistic SLAM with a feature-based description of the environment. A probabilistic formulation of the SLAM problem is introduced, and a solution based on the Extended Kalman Filter (EKF-SLAM) is presented. Important issues of convergence, consistency, observability, data association, and scaling in EKF-SLAM are discussed from both theoretical and practical points of view. Major extensions to the basic EKF-SLAM method and some recent advances in SLAM are also presented.
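The EKF-SLAM solution outlined in the abstract can be sketched in a few lines for the simplest possible case: a linear-Gaussian model with one landmark, where the EKF update is exact. This is a minimal illustration of the predict/update cycle over a joint robot-landmark state, not the paper's formulation; all numbers are illustrative.

```python
import numpy as np

# Minimal linear-Gaussian instance of EKF-SLAM: state = [robot_xy, landmark_xy].
# The robot moves with known control u plus process noise; it measures the
# landmark's position relative to itself.
def ekf_slam_step(x, P, u, z, Q, R):
    F = np.eye(4)
    B = np.vstack([np.eye(2), np.zeros((2, 2))])  # motion only moves the robot
    x = F @ x + B @ u                              # predict
    P = F @ P @ F.T + Q
    H = np.hstack([-np.eye(2), np.eye(2)])         # z = landmark - robot + noise
    y = z - H @ x                                  # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    x = x + K @ y                                  # update
    P = (np.eye(4) - K @ H) @ P
    return x, P

x = np.array([0.0, 0.0, 5.0, 5.0])      # initial guess: landmark poorly known
P = np.diag([0.0, 0.0, 100.0, 100.0])
Q = np.diag([0.1, 0.1, 0.0, 0.0])
R = 0.01 * np.eye(2)
true_landmark = np.array([4.0, 6.0])
robot = np.array([0.0, 0.0])
rng = np.random.default_rng(0)
for _ in range(20):
    u = np.array([0.5, 0.2])
    robot = robot + u
    z = true_landmark - robot + rng.normal(0, 0.1, 2)
    x, P = ekf_slam_step(x, P, u, z, Q, R)
# after 20 steps x[2:] converges near the true landmark position
```

The convergence and consistency issues the article discusses show up exactly here: the landmark covariance block of P shrinks monotonically as relative measurements accumulate.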
7

Song, Jooeun, and Joongjin Kook. "Visual SLAM Based Spatial Recognition and Visualization Method for Mobile AR Systems." Applied System Innovation 5, no. 1 (January 5, 2022): 11. http://dx.doi.org/10.3390/asi5010011.

Abstract:
The simultaneous localization and mapping (SLAM) market is growing rapidly with advances in Machine Learning, Drones, and Augmented Reality (AR) technologies. However, due to the absence of an open source-based SLAM library for developing AR content, most SLAM researchers are required to conduct their own research and development to customize SLAM. In this paper, we propose an open source-based Mobile Markerless AR System by building our own pipeline based on Visual SLAM. To implement the Mobile AR System of this paper, we use ORB-SLAM3 and Unity Engine and experiment with running our system in a real environment and confirming it in the Unity Engine’s Mobile Viewer. Through this experimentation, we can verify that the Unity Engine and the SLAM System are tightly integrated and communicate smoothly. In addition, we expect to accelerate the growth of SLAM technology through this research.
8

Zhang, Haoyang. "Deep Learning Applications in Simultaneous Localization and Mapping." Journal of Physics: Conference Series 2181, no. 1 (January 1, 2022): 012012. http://dx.doi.org/10.1088/1742-6596/2181/1/012012.

Abstract:
Simultaneous Localization and Mapping (SLAM) has been a research hotspot in the field of intelligent robots in recent years; its processing object is the visual image. Deep learning has achieved great success in computer vision, which makes the combination of deep learning and SLAM a feasible approach. This paper summarizes applications of deep learning in SLAM and introduces the latest research results. The advantages and disadvantages of deep-learning-based SLAM are compared with those of traditional SLAM. Finally, future directions for combining SLAM with deep learning are discussed.
9

Zhang, Zijie, and Jing Zeng. "A Survey on Visual Simultaneously Localization and Mapping." Frontiers in Computing and Intelligent Systems 1, no. 1 (August 2, 2022): 18–21. http://dx.doi.org/10.54097/fcis.v1i1.1089.

Abstract:
Visual simultaneous localization and mapping (VSLAM) is an important branch of intelligent robotics in which cameras are used as the only external sensors to achieve self-localization in unfamiliar environments while building environmental maps. The map constructed by SLAM is the basis for subsequent autonomous positioning, path planning, and obstacle avoidance. This paper reviews the development of visual SLAM worldwide, its basic methods, and its key problems, and discusses the main development trends and research hotspots of visual SLAM.
10

Luo, Kaiqing, Manling Lin, Pengcheng Wang, Siwei Zhou, Dan Yin, and Haolan Zhang. "Improved ORB-SLAM2 Algorithm Based on Information Entropy and Image Sharpening Adjustment." Mathematical Problems in Engineering 2020 (September 23, 2020): 1–13. http://dx.doi.org/10.1155/2020/4724310.

Abstract:
Simultaneous Localization and Mapping (SLAM) has become a research hotspot in robotics in recent years. However, most visual SLAM systems rely on a static-world assumption that ignores motion effects. If image sequences lack texture information or the camera rotates through a large angle, the SLAM system fails to localize and map. To solve these problems, this paper proposes an improved ORB-SLAM2 algorithm based on information entropy and sharpening. The information entropy of each segmented image block is calculated, an entropy threshold is determined by an adaptive thresholding algorithm, and blocks whose entropy falls below the threshold are sharpened. Experimental results show that, compared with the ORB-SLAM2 system, the relative trajectory error decreases by 36.1% and the absolute trajectory error by 45.1%, while the processing time increases only slightly. To some extent, the algorithm solves localization and mapping failures caused by large camera rotations and insufficient image texture.
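A minimal sketch of the pipeline the abstract describes: block entropy, an adaptive threshold, and selective sharpening. Here the mean block entropy stands in for the paper's adaptive threshold and a 3x3 Laplacian kernel stands in for its sharpening step; both are simplifying assumptions.

```python
import numpy as np

def block_entropy(block, bins=16):
    """Shannon entropy (bits) of a grayscale block's intensity histogram."""
    hist, _ = np.histogram(block, bins=bins, range=(0, 256))
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

def sharpen(block):
    """3x3 Laplacian-style sharpening; border pixels are left unchanged."""
    k = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], float)
    h, w = block.shape
    out = block.astype(float).copy()
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i, j] = (k * block[i - 1:i + 2, j - 1:j + 2]).sum()
    return np.clip(out, 0, 255)

def entropy_adaptive_sharpen(img, bs=8):
    """Sharpen only blocks whose entropy falls below the mean block entropy
    (a simple stand-in for the paper's adaptive threshold)."""
    blocks = [(i, j) for i in range(0, img.shape[0], bs)
                     for j in range(0, img.shape[1], bs)]
    ents = {b: block_entropy(img[b[0]:b[0] + bs, b[1]:b[1] + bs]) for b in blocks}
    thresh = np.mean(list(ents.values()))
    out = img.astype(float).copy()
    for (i, j), e in ents.items():
        if e < thresh:
            out[i:i + bs, j:j + bs] = sharpen(img[i:i + bs, j:j + bs])
    return out

# left half: low-entropy repeating pattern (gets sharpened);
# right half: high-entropy noise texture (left untouched)
img = np.zeros((64, 64))
img[:, :32] = (np.arange(32) % 4) * 3
img[:, 32:] = np.random.default_rng(0).integers(0, 256, (64, 32))
out = entropy_adaptive_sharpen(img)
```

Only the weakly textured blocks are modified, which mirrors the paper's goal of boosting feature extraction exactly where texture is insufficient.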

Dissertations on the topic "SLAM mapping"

1

Valencia Carreño, Rafael. "Mapping, planning and exploration with Pose SLAM." Doctoral thesis, Universitat Politècnica de Catalunya, 2013. http://hdl.handle.net/10803/117471.

Abstract:
This thesis reports research on mapping, path planning, and autonomous exploration. These are classical problems in robotics, typically studied independently; here we link them by framing them within a common SLAM approach, adopting Pose SLAM as the basic state-estimation machinery. The main contribution of this thesis is an approach that allows a mobile robot to plan a path using the map it builds with Pose SLAM and to select the appropriate actions to autonomously construct this map. Pose SLAM is the variant of SLAM in which only the robot trajectory is estimated and landmarks are used only to produce relative constraints between robot poses, so observations come in the form of relative-motion measurements between poses. Extending the original Pose SLAM formulation, this thesis studies the computation of such measurements when they are obtained with stereo cameras and develops the appropriate noise-propagation models for that case. Furthermore, the initial formulation of Pose SLAM assumes poses in SE(2); we extend it to SE(3), parameterizing rotations with either Euler angles or quaternions. We also introduce a loop-closure test that exploits the information in the filter using an independent measure of information content between poses. In the application domain, we present a technique to process the 3D volumetric maps obtained with this SLAM methodology, with laser range scanning as the sensor modality, to derive traversability maps. Aside from these extensions to Pose SLAM, the core contribution of the thesis is a path-planning approach that exploits the modeled uncertainties in Pose SLAM to search for the path in the pose graph with the lowest accumulated robot-pose uncertainty, i.e., the path that allows the robot to navigate to a given goal with the least probability of becoming lost.
An added advantage of the proposed path-planning approach is that, since Pose SLAM is agnostic with respect to the sensor modalities used, it can be used in different environments and with different robots; and since the original pose graph may come from a previous mapping session, the paths stored in the map already satisfy constraints that are not easily modeled in the robot controller, such as restricted regions or rights of way along paths. The proposed path-planning methodology has been extensively tested both in simulation and with a real outdoor robot. It is adequate for scenarios where a robot is initially guided during map construction but autonomous during execution. For scenarios in which more autonomy is required, the robot should be able to explore the environment without supervision. The second core contribution of this thesis is an autonomous exploration method that complements the path-planning strategy. The method selects the appropriate actions to drive the robot so as to maximize coverage while minimizing localization and map uncertainties. An occupancy grid is maintained for the sole purpose of guaranteeing coverage. A significant advantage of the method is that, since the grid is computed only to hypothesize entropy reduction of candidate map posteriors, it can be kept at a very coarse resolution: it is used to maintain neither the robot localization estimate nor the structure of the environment. Our technique evaluates two types of actions, exploratory actions and place-revisiting actions, with decisions made on the basis of entropy-reduction estimates. By maintaining a Pose SLAM estimate at run time, the technique allows trajectories to be replanned online should a significant change in the Pose SLAM estimate be detected. The proposed exploration strategy was tested on a common publicly available dataset, comparing favorably against frontier-based exploration.
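The path-planning idea, searching the pose graph for the route with the lowest accumulated uncertainty, reduces to a shortest-path problem once each edge carries a scalar uncertainty cost. A minimal Dijkstra sketch follows; the scalar edge costs are illustrative stand-ins for, e.g., log-determinants of relative-pose covariances.

```python
import heapq

def min_uncertainty_path(graph, start, goal):
    """Dijkstra over a pose graph where each edge weight stands in for the
    pose-uncertainty increase along that traversal. Returns (cost, path)."""
    pq = [(0.0, start, [start])]
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

# toy pose graph: the direct route A-B-D is "riskier" (higher accumulated
# uncertainty) than the longer but well-constrained route A-C-D
graph = {
    "A": [("B", 5.0), ("C", 1.0)],
    "B": [("D", 5.0)],
    "C": [("D", 1.5)],
}
cost, path = min_uncertainty_path(graph, "A", "D")
print(cost, path)   # 2.5 ['A', 'C', 'D']
```

The planner therefore prefers well-localized corridors of the map over geometrically shorter but uncertain ones, which is exactly the behaviour the abstract describes.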
2

Carlson, Justin. "Mapping Large, Urban Environments with GPS-Aided SLAM." Research Showcase @ CMU, 2010. http://repository.cmu.edu/dissertations/44.

Abstract:
Simultaneous Localization and Mapping (SLAM) has been an active area of research for several decades and has become a foundation of indoor mobile robotics. However, although the scale and quality of results have improved markedly in that time, no current technique can effectively handle city-sized urban areas. The Global Positioning System (GPS) is an extraordinarily useful source of localization information, but its noise characteristics are complex, arising from a large number of sources, some of which have large autocorrelation. Incorporating GPS signals into SLAM algorithms requires using low-level system information and explicit models of the underlying system to make appropriate use of the information. The potential benefits of combining GPS and SLAM include increased robustness, increased scalability, and improved localization accuracy. This dissertation presents a theoretical background for GPS-SLAM fusion. The presented model balances ease of implementation with correct handling of the highly colored sources of noise in a GPS system. The utility of the theory is explored and validated in the framework of a simulated Extended Kalman Filter driven by real-world noise. The model is then extended to Smoothing and Mapping (SAM), which overcomes the linearization and algorithmic-complexity limitations of the EKF formulation. This GPS-SAM model is used to generate a probabilistic landmark-based urban map covering an area an order of magnitude larger than previous work.
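One standard, textbook way to fold autocorrelated ("colored") GPS error into a Kalman filter, not necessarily the dissertation's exact model, is to augment the state with a first-order Gauss-Markov bias term so the filter itself tracks the slowly varying error:

```python
import numpy as np

dt, tau = 1.0, 30.0                    # bias correlation time (assumed)
phi = np.exp(-dt / tau)
F = np.array([[1.0, 0.0],              # state = [position, gps_bias]
              [0.0, phi]])             # bias decays as a Gauss-Markov process
Q = np.diag([0.05, 0.2 * (1 - phi**2)])
H = np.array([[1.0, 1.0]])             # GPS measures position + bias
R = np.array([[0.25]])

x = np.zeros(2)
P = np.diag([1.0, 0.2])
rng = np.random.default_rng(0)
true_pos, true_bias = 0.0, 1.0
for _ in range(200):
    true_pos += 1.0                    # constant velocity, known to the filter
    true_bias = phi * true_bias + rng.normal(0, np.sqrt(0.2 * (1 - phi**2)))
    x = F @ x + np.array([1.0, 0.0])   # predict (control input = +1 m/step)
    P = F @ P @ F.T + Q
    z = true_pos + true_bias + rng.normal(0, 0.5)
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R
    K = (P @ H.T) / S                  # gain (scalar innovation)
    x = x + (K * y).ravel()
    P = (np.eye(2) - K @ H) @ P
# x[0] tracks the true position despite the correlated GPS bias
```

The correlation time, noise variances, and motion model here are all illustrative; the point is the augmented-state construction, which keeps the white-noise assumption of the filter valid.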
3

Touchette, Sébastien. "Recovering Cholesky Factor in Smoothing and Mapping." Thesis, Université d'Ottawa / University of Ottawa, 2018. http://hdl.handle.net/10393/37935.

Abstract:
Autonomous vehicles, from self-driving cars to small unmanned aircraft, form a hotly contested market experiencing significant growth. As a result, fundamental concepts of autonomous vehicle navigation, such as simultaneous localisation and mapping (SLAM), are very active fields of research garnering significant interest in the drive to improve effectiveness. Traditionally, SLAM has been performed by filtering methods, but several improvements have brought smoothing and mapping (SAM) based methods to the forefront of SLAM research. Although recent works have made such methods incremental, they retain some batch functionality from their bundle-adjustment origins. More specifically, re-linearisation and column reordering still require full re-computation of the solution. In this thesis, the problem of re-computation after column reordering is addressed. A novel method to reflect changes in ordering directly on the Cholesky factor, called Factor Recovery, is proposed. Under the assumption that changes to the ordering are small and localised, the proposed method can be executed faster than re-computation of the Cholesky factor. To define each method's optimal region of operation, a function estimating the computational cost of Factor Recovery is derived and compared with the known cost of Cholesky factorisation obtained from experimental data. Combining Factor Recovery and traditional Cholesky decomposition, the Hybrid Cholesky decomposition algorithm is proposed; it attempts to select the most efficient way to compute the Cholesky factor based on an estimate of the work required. To obtain experimental results, the Hybrid Cholesky decomposition algorithm was integrated into the SLAM++ software and executed on popular datasets from the literature. The proposed method yields an average reduction of 1.9% in total execution time, with reductions of up to 31% in certain situations. When considering only the time spent performing reordering and factorisation in batch steps, reductions of 18% on average, and up to 78% in certain situations, are observed.
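A much-simplified illustration of why the problem is tractable: the Cholesky factor of a reordered system P A P^T must in general be recomputed, but a permutation confined to trailing variables leaves the leading block of the factor unchanged, so only the affected trailing part needs work. This sketch only demonstrates that observation; it is not the thesis's Factor Recovery algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(6, 6))
A = M @ M.T + 6 * np.eye(6)            # SPD system matrix

L = np.linalg.cholesky(A)

perm = [0, 1, 2, 3, 5, 4]              # reorder only the last two variables
Pm = np.eye(6)[perm]
Lp = np.linalg.cholesky(Pm @ A @ Pm.T)

# the leading 4x4 block of the factor is untouched by the trailing swap,
# while the trailing rows do change and would need recomputation
print(np.allclose(L[:4, :4], Lp[:4, :4]))   # True
```

Because the leading block depends only on the leading principal submatrix of A, a small localised reordering invalidates only a bounded portion of the factor, which is what makes a recovery scheme cheaper than full refactorisation.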
4

Maddern, William Paul. "Continuous appearance-based localisation and mapping." Thesis, Queensland University of Technology, 2014. https://eprints.qut.edu.au/65841/2/William_Maddern_Thesis.pdf.

Abstract:
This thesis presents a novel approach to mobile robot navigation using visual information towards the goal of long-term autonomy. A novel concept of a continuous appearance-based trajectory is proposed in order to solve the limitations of previous robot navigation systems, and two new algorithms for mobile robots, CAT-SLAM and CAT-Graph, are presented and evaluated. These algorithms yield performance exceeding state-of-the-art methods on public benchmark datasets and large-scale real-world environments, and will help enable widespread use of mobile robots in everyday applications.
5

Üzer, Ferit. "Hybrid mapping for large urban environments." Thesis, Clermont-Ferrand 2, 2016. http://www.theses.fr/2016CLF22675/document.

Abstract:
In this thesis, a novel vision-based hybrid mapping framework exploiting metric, topological, and semantic information is presented. We aim for better computational efficiency than pure metric mapping techniques, and for better accuracy and usability for robot guidance than topological mapping. A crucial step of any mapping system is loop-closure detection, the ability to know whether the robot is revisiting a previously mapped area. We therefore first propose a hierarchical loop-closure detection framework that also constructs the global topological structure of our hybrid map. Using this loop-closure detection module, a hybrid mapping framework is proposed in two steps. The first step can be understood as a topo-metric map with nodes corresponding to certain regions of the environment, each node consisting of a set of images acquired in that region. These maps are further augmented with metric information at those nodes which correspond to image sub-sequences acquired while the robot revisits a previously mapped area. The second step augments this model with road semantics: a Conditional Random Field based classification on the metric reconstruction is used to semantically label the local robot path (the road, in our case) as straight, curved, or a junction. Metric information for regions with curved roads and junctions is retained, while that of other regions is discarded in the final map. Loop closure is performed only at junctions, thereby increasing both the efficiency and the accuracy of the map. Incorporating all of these new algorithms, the presented hybrid framework can perform as a robust, scalable SLAM approach, or serve as the main part of a navigation tool for a mobile robot or an autonomous car in outdoor urban environments. Experimental results obtained on public datasets acquired in challenging urban environments demonstrate our approach.
6

Frost, Duncan. "Long range monocular SLAM." Thesis, University of Oxford, 2017. https://ora.ox.ac.uk/objects/uuid:af38cfa6-fc0a-48ab-b919-63c440ae8774.

Abstract:
This thesis explores approaches to two problems in the frame-rate computation of a priori unknown 3D scene structure and camera pose using a single camera, or monocular simultaneous localisation and mapping. The thesis reflects two trends in vision in general and structure from motion in particular: (i) the move from directly recovered and towards learnt geometry; and (ii) the sparsification of otherwise dense direct methods. The first contributions mitigate scale drift. Beyond the inevitable accumulation of random error, monocular SLAM accumulates error via the depth/speed scaling ambiguity. Three solutions are investigated. The first detects objects of known class and size using fixed descriptors, and incorporates their measurements in the 3D map. Experiments using databases with ground truth show that metric accuracy can be restored over kilometre distances; and similar gains are made using a hand-held camera. Our second method avoids explicit feature choice, instead employing a deep convolutional neural network to yield depth priors. Relative depths are learnt well, but absolute depths less so, and recourse to database-wide scaling is investigated. The third approach uses a novel trained network to infer speed from imagery. The second part of the thesis develops sparsified direct methods for monocular SLAM. The first contribution is a novel camera tracker operating directly using affine image warping, but on patches around sparse corners. Camera pose is recovered with an accuracy at least equal to the state of the art, while requiring only half the computational time. The second introduces a least-squares adjustment to sparsified direct map refinement, again using patches from sparse corners. The accuracy of its 3D structure estimation is compared with that from the widely used method of depth filtering. It is found empirically that the new method's accuracy is often higher than that of its filtering counterpart, but that the method is more troubled by occlusion.
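The first scale-drift remedy, anchoring scale with objects of known class and size, rests on the pinhole relation depth = f * H / h. A sketch with illustrative numbers (the focal length, object class, and all sizes below are assumptions, not values from the thesis):

```python
# Scale correction from a known-size object under a pinhole camera model:
# an object of known physical height H observed at pixel height h lies at
# metric depth f * H / h, and the ratio between that depth and the (drifted)
# monocular-SLAM depth of the same object gives a map-scale correction.
f_px = 700.0            # focal length in pixels (assumed calibration)
real_height_m = 1.5     # known object height, e.g. a traffic-sign class
pixel_height = 70.0     # measured height of its detection in the image

metric_depth = f_px * real_height_m / pixel_height   # 15.0 m
slam_depth = 12.0       # same object's depth in the drifted monocular map
scale = metric_depth / slam_depth                    # correction factor

# apply the correction to a drifted 3D map point
corrected_point = [p * scale for p in (4.0, -2.0, 12.0)]
print(round(scale, 3), corrected_point)   # 1.25 [5.0, -2.5, 15.0]
```

In the thesis the measurements are fused into the 3D map rather than applied as a single global rescale; this sketch only shows the geometric constraint that makes metric recovery possible.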
7

Pereira, Savio Joseph. "On the utilization of Simultaneous Localization and Mapping (SLAM) along with vehicle dynamics in Mobile Road Mapping Systems." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/94425.

Abstract:
Mobile Road Mapping Systems (MRMS) are the current solution to the growing demand for high-definition road surface maps in applications ranging from pavement management to autonomous vehicle testing. The focus of this research is to improve the accuracy of MRMS using the principles of Simultaneous Localization and Mapping (SLAM). First, a framework for describing the sensor measurement models in MRMS is developed. Next, the problem of estimating the road surface from the set of sensor measurements is formulated as a SLAM problem, and two approaches are proposed to solve it. The first is an incremental solution in which sensor measurements are processed in sequence using an Extended Kalman Filter (EKF). The second is a post-processing solution in which the SLAM problem is formulated as an inference problem over a factor graph and existing factor-graph SLAM techniques are used to solve it. In mobile road mapping, the road surface being measured is one of the primary inputs to the dynamics of the MRMS; hence, concurrently with the main objective, this work also investigates using the dynamics of the system's host vehicle to improve accuracy. Finally, a novel method building on the popular model-fitting algorithm Random Sample Consensus (RANSAC) is developed to identify outliers in road surface measurements and to estimate road elevations at grid nodes. The developed methods are validated in a simulated environment, and the results demonstrate a significant improvement in the accuracy of MRMS over current state-of-the-art methods.
Doctor of Philosophy
Mobile Road Mapping Systems (MRMS) are the current solution to the growing demand for high-definition road surface maps in applications ranging from pavement management to autonomous vehicle testing. The objective of this research is to improve the accuracy of MRMS by investigating methods to improve the sensor data fusion process, chiefly by applying principles from the field of Simultaneous Localization and Mapping (SLAM). SLAM has been successfully applied to mobile robot navigation, and the motivation of this work is to investigate its application to mobile road mapping. Because the road surface being measured is one of the primary inputs to the dynamics of the MRMS, this work also investigates whether knowledge of the system's dynamics can be used to improve accuracy. Also developed as part of this work is a novel method for identifying outliers in road surface datasets and estimating elevations at road surface grid nodes. The developed methods are validated in a simulated environment, and the results demonstrate a significant improvement in the accuracy of MRMS over current state-of-the-art methods.
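The RANSAC-style outlier step can be illustrated with a standard RANSAC plane fit; this generic version is a stand-in for the dissertation's novel variant, and the data below are synthetic.

```python
import numpy as np

def ransac_plane(points, iters=100, tol=0.05, seed=0):
    """Standard RANSAC plane fit: repeatedly fit a plane to 3 random points
    and keep the hypothesis with the most inliers (points within `tol`
    of the plane). Returns a boolean inlier mask."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)          # plane normal
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                            # degenerate (collinear) sample
        n = n / norm
        dist = np.abs((points - p0) @ n)        # point-to-plane distances
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# synthetic road patch: 200 points on the z = 0 plane plus 20 gross outliers
rng = np.random.default_rng(1)
road = np.column_stack([rng.uniform(0, 10, (200, 2)), np.zeros(200)])
junk = np.column_stack([rng.uniform(0, 10, (20, 2)), rng.uniform(0.5, 2, 20)])
pts = np.vstack([road, junk])
inliers = ransac_plane(pts)
print(inliers[:200].all(), inliers[200:].any())   # True False
```

A real road surface is not a single plane, so the dissertation's method operates locally per grid region; the sketch shows only the consensus idea.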
8

Carranza, Jose Martinez. "Efficient monocular SLAM by using a structure-driven mapping." Thesis, University of Bristol, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.574263.

Abstract:
Important progress has been achieved in recent years with regards to the monocular SLAM problem, which consists of estimating the 6-D pose of a single camera whilst building a 3-D map representation of the scene structure observed by the camera. Nowadays, there exist various monocular SLAM systems capable of outputting camera and map estimates at camera frame rates over long trajectories and for indoor and outdoor scenarios. These systems are attractive due to their low cost - a consequence of using a conventional camera - and have been widely utilised in different applications such as augmented and virtual reality. However, the main utility of the built map has been reduced to working as an effective reference system for robust and fast camera localisation. In order to produce more useful maps, different works have proposed the use of higher-level structures such as lines, planes and even meshes. Planar structure is one of the most popular structures to be incorporated into the map, given that planes are abundant in man-made scenes, and because a plane by itself provides implicit semantic cues about the scene structure. Nevertheless, very often planar structure detection is carried out by ad-hoc auxiliary methods delivering a delayed detection and therefore a delayed mapping, which becomes a problem when rapid planar mapping is demanded. This thesis addresses the problem of planar structure detection and mapping by proposing a novel mapping mechanism called structure-driven mapping. This new approach aims at enabling a monocular SLAM system to perform planar or point mapping according to the scene structure observed by the camera. In order to achieve this, we propose to incorporate the plane detection task into the SLAM process. For this purpose, we have developed a novel framework that unifies planar and point mapping under a common parameterisation.
This enables map components to evolve according to the incremental visual observations of the scene structure, thus providing undelayed planar mapping. Moreover, the plane detection task stops as soon as the camera explores a non-planar structure scenario, which avoids wasting unnecessary processing time, starting again as soon as planar structure comes into view. We present a thorough evaluation of this novel approach through simulation experiments and results obtained with real data. We also present a visual odometry application which takes advantage of the efficient way in which the scene structure is mapped by the novel mapping mechanism presented in this work. The results suggest the feasibility of performing simultaneous planar structure detection, localisation and mapping within the same coherent estimation framework.
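As a rough illustration of what a unified point/plane parameterisation might look like (a sketch only; the thesis's actual parameterisation and detection test differ), each map element can be stored as a point that is later promoted to a planar patch once neighbouring observations support a plane:

```python
from dataclasses import dataclass
from typing import Optional

import numpy as np


@dataclass
class MapFeature:
    """A map element that starts life as a 3-D point and is promoted to
    a planar patch once neighbouring observations support a plane."""
    position: np.ndarray                    # 3-D anchor point
    normal: Optional[np.ndarray] = None     # set once a plane is detected

    @property
    def is_planar(self) -> bool:
        return self.normal is not None

    def promote_to_plane(self, neighbours: np.ndarray, tol: float = 0.01) -> None:
        """Fit a plane to neighbouring 3-D points by PCA and promote the
        feature if the out-of-plane residual is below tol."""
        centred = neighbours - neighbours.mean(axis=0)
        _, s, vt = np.linalg.svd(centred, full_matrices=False)
        if s[-1] / np.sqrt(len(neighbours)) < tol:
            self.normal = vt[-1]            # direction of least variance
```

The point of such a representation is that the same map entry can serve both roles, so mapping need not be delayed while an external plane detector runs.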
9

Pascoe, Geoffrey. "Robust lifelong visual navigation and mapping." Thesis, University of Oxford, 2017. http://ora.ox.ac.uk/objects/uuid:c0bfa5fb-fa0a-48ed-8d26-90fa167ef6cd.

Abstract:
The ability to precisely determine one's location within the world (localisation) is a key requirement for any robot wishing to navigate through the world. For long-term operation, such a localisation system must be robust to changes in the environment, both short-term (e.g. traffic, weather) and long-term (e.g. seasons). This thesis presents two methods for performing such localisation using cameras - small, cheap, lightweight sensors that are universally available. Whilst many image-based localisation systems have been proposed in the past, they generally rely on either feature matching, which fails under many degradations such as motion blur, or on photometric consistency, which fails under changing illumination. The methods we propose here directly align images with a dense prior map. The first method uses maps synthesised from a combination of LIDAR scanners to generate geometry and cameras to generate appearance, whilst the second uses vision for both mapping and localisation. Both make use of an information-theoretic metric, Normalised Information Distance (NID), for image alignment, relaxing the appearance-constancy assumption inherent in photometric methods. Our methods require significant computational resources, but through the use of commodity GPUs we are able to run them at a rate of 8-10 Hz. Our GPU implementations make use of low-level OpenGL, enabling compatibility across almost any GPU hardware. We also present a method for calibrating multi-sensor systems, enabling the joint use of cameras and LIDAR for mapping. Through experiments on both synthetic data and real-world data from over 100 km of driving outdoors, we demonstrate the robustness of our localisation system to large variations in appearance. Comparisons with state-of-the-art feature-based and direct methods show that ours is significantly more robust, whilst maintaining similar precision.
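The NID metric used above can be written as NID(A,B) = (H(A,B) - I(A;B)) / H(A,B), computed from a joint intensity histogram. A minimal CPU sketch (the thesis describes a GPU implementation against a dense map; the bin count here is an arbitrary choice):

```python
import numpy as np


def nid(img_a: np.ndarray, img_b: np.ndarray, bins: int = 32) -> float:
    """Normalised Information Distance between two equal-size grayscale
    images: NID = (H(A,B) - I(A;B)) / H(A,B), in [0, 1], where 0 means
    identical intensity statistics and 1 means statistical independence."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(),
                                 bins=bins, range=[[0, 256], [0, 256]])
    p_ab = joint / joint.sum()          # joint intensity distribution
    p_a = p_ab.sum(axis=1)              # marginal of image A
    p_b = p_ab.sum(axis=0)              # marginal of image B

    def entropy(p):
        p = p[p > 0]                    # convention: 0 log 0 = 0
        return float(-np.sum(p * np.log2(p)))

    h_ab = entropy(p_ab)
    mutual_info = entropy(p_a) + entropy(p_b) - h_ab
    return (h_ab - mutual_info) / h_ab
```

Because NID depends only on the co-occurrence statistics of intensities, it tolerates appearance changes (e.g. different illumination) that break direct photometric error.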
10

Pretto, Alberto. "Visual-SLAM for Humanoid Robots." Doctoral thesis, Università degli studi di Padova, 2009. http://hdl.handle.net/11577/3426516.

Abstract:
In robotics, Simultaneous Localization and Mapping (SLAM) is the problem in which an autonomous robot acquires a map of the surrounding environment while at the same time localizing itself inside this map. In recent years many researchers have spent great effort developing new families of algorithms, using several sensors and robotic platforms. One of the most challenging fields of research in SLAM is the so-called Visual-SLAM problem, in which various types of cameras are used as the sensor for navigation. Cameras are inexpensive sensors and can provide rich information about the surrounding environment; on the other hand, the complexity of the computer vision tasks and the strong dependence of current approaches on the characteristics of the environment make Visual-SLAM far from being a closed problem. Most SLAM algorithms are usually tested on wheeled robots. These platforms have become robust and stable; meanwhile, research in robot design is moving toward a new family of robot platforms, humanoid robots. Just like humans, a humanoid robot can adapt itself to changes in the environment in order to efficiently reach its goals. Despite that, only a few roboticists have focused their research on stable implementations of SLAM and Visual-SLAM algorithms well suited for humanoid robots. Humanoid platforms raise issues which can compromise the stability of conventional navigation algorithms, especially for vision-based approaches. A humanoid robot can move in 3D without the usual planar motion assumption that constrains movement to 2D, usually with quick and complex movements combined with unpredictable vibrations, compromising the reliability of the acquired sensor data, for example by introducing an undesired motion blur effect into the images grabbed by the camera.
Due to strong balance constraints, a humanoid robot usually cannot be equipped with powerful but hefty computer boards: this limits the implementation of complex and computationally expensive algorithms. Moreover, unlike wheeled robots, its complex kinematics usually forbids a reliable reconstruction of the motion from the servo-motor encoders. In this thesis, we focus on studying and developing new techniques addressing the Visual-SLAM problem, with particular attention to the issues related to using small humanoid robots equipped with a single perspective camera as the experimental platform. The main efforts in the SLAM and Visual-SLAM research areas have been put into the estimation functionality. However, most of the functionalities involved in Visual-SLAM are perception processes. In this thesis we therefore focus on the improvement of the perceptual processes from a computer vision point of view. We faced issues specific to small humanoid robots such as low computational capability, low quality of the sensor data and the high number of degrees of freedom of the motion. We cope with the low computational resources by presenting a new similarity measure for images, based on a compact signature, to be used in the image-based topological SLAM problem. The motion blur problem is faced by proposing a new feature detection and tracking scheme that is robust even to non-uniform motion blur. We develop a framework for visual odometry based on features robust to motion blur. We finally propose a homography-based approach to 3D visual SLAM, using the information provided by a single camera mounted on a humanoid robot, based on the assumption that the robot moves in a planar environment. All proposed methods have been validated with experiments and comparative validation using both standard datasets and images taken by the cameras mounted on walking small humanoid robots.
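The planar-scene assumption in the last contribution means two views are related by a 3x3 homography. A minimal sketch of estimating one from point correspondences via the Direct Linear Transform (no coordinate normalisation or outlier rejection, both of which a real system would add):

```python
import numpy as np


def homography_dlt(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Estimate the 3x3 homography H with dst ~ H @ src from >= 4 point
    correspondences (Nx2 arrays) using the Direct Linear Transform.
    The solution is the null vector of the stacked constraint matrix."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on h.
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    h = vt[-1].reshape(3, 3)            # smallest singular vector
    return h / h[2, 2]                  # fix the projective scale
```

Given the homography, the camera rotation and (scaled) translation relative to the ground plane can then be recovered by a standard homography decomposition.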

Books on the topic "SLAM mapping"

1

Valencia, Rafael, and Juan Andrade-Cetto. Mapping, Planning and Exploration with Pose SLAM. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-60603-3.

2

Mullane, John, Ba-Ngu Vo, Martin Adams, and Ba-Tuong Vo. Random Finite Sets for Robot Mapping and SLAM. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-21390-8.

3

Mullane, John. Random Finite Sets for Robot Mapping and SLAM: New Concepts in Autonomous Robotic Map Representations. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011.

4

Amen, Alan E. Soil Landscape Analysis Project (SLAP) methods in soil surveys. Springfield, VA: Denver, CO, 1987.

5

Amen, Alan E. Soil Landscape Analysis Project (SLAP) methods in soil surveys. Denver, CO: U.S. Dept. of the Interior, Bureau of Land Management, 1987.

6

Andrade-Cetto, Juan, and Rafael Valencia. Mapping, Planning and Exploration with Pose SLAM. Springer, 2018.

7

Andrade-Cetto, Juan, and Rafael Valencia. Mapping, Planning and Exploration with Pose SLAM. Springer, 2017.

8

Burguera, Antoni, and Francisco Bonin-Font, eds. Localization, Mapping and SLAM in Marine and Underwater Environments. MDPI, 2022. http://dx.doi.org/10.3390/books978-3-0365-5498-3.

9

Vo, Ba-Ngu, Martin David Adams, and John Stephen Mullane. Random Finite Sets for Robot Mapping & SLAM: New Concepts in Autonomous Robotic Map Representations. Springer, 2011.

10

Vo, Ba-Ngu, Martin David Adams, John Stephen Mullane, and Ba-Tuong Vo. Random Finite Sets for Robot Mapping & SLAM: New Concepts in Autonomous Robotic Map Representations. Springer, 2013.


Book chapters on the topic "SLAM mapping"

1

Berns, Karsten, and Ewald von Puttkamer. "Simultaneous localization and mapping (SLAM)." In Autonomous Land Vehicles, 146–72. Wiesbaden: Vieweg+Teubner, 2009. http://dx.doi.org/10.1007/978-3-8348-9334-5_6.

2

Perera, Samunda, Nick Barnes, and Alexander Zelinsky. "Exploration: Simultaneous Localization and Mapping (SLAM)." In Computer Vision, 268–75. Boston, MA: Springer US, 2014. http://dx.doi.org/10.1007/978-0-387-31439-6_280.

3

Perera, Samunda, Nick Barnes, and Alexander Zelinsky. "Exploration: Simultaneous Localization and Mapping (SLAM)." In Computer Vision, 412–20. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-63416-2_280.

4

Ellery, Alex. "Autonomous Navigation—Self-localization and Mapping (SLAM)." In Planetary Rovers, 331–74. Berlin, Heidelberg: Springer Berlin Heidelberg, 2015. http://dx.doi.org/10.1007/978-3-642-03259-2_9.

5

Joukhadar, Abdulkader, Dalia Kass Hanna, Andreas Müller, and Christoph Stöger. "UKF-Assisted SLAM for 4WDDMR Localization and Mapping." In Mechanism, Machine, Robotics and Mechatronics Sciences, 259–70. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-89911-4_19.

6

Chatterjee, Amitava, Anjan Rakshit, and N. Nirmal Singh. "Simultaneous Localization and Mapping (SLAM) in Mobile Robots." In Vision Based Autonomous Robot Navigation, 167–206. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-33965-3_7.

7

Kung, Da-Wei, Chen-Chien Hsu, Wei-Yen Wang, and Jacky Baltes. "Adaptive Computation Algorithm for Simultaneous Localization and Mapping (SLAM)." In Advances in Intelligent Systems and Computing, 75–83. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-31293-4_7.

8

Fernández-Moral, Eduardo, Vicente Arévalo, and Javier González-Jiménez. "Hybrid Metric-topological Mapping for Large Scale Monocular SLAM." In Informatics in Control, Automation and Robotics, 217–32. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-10891-9_12.

9

Brooks, Alex, and Tim Bailey. "HybridSLAM: Combining FastSLAM and EKF-SLAM for Reliable Mapping." In Springer Tracts in Advanced Robotics, 647–61. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-00312-7_40.

10

Zhang, He, Zifeng Hou, Nanjun Li, and Shuang Song. "A Graph-Based Hierarchical SLAM Framework for Large-Scale Mapping." In Intelligent Robotics and Applications, 439–48. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33515-0_44.


Conference papers on the topic "SLAM mapping"

1

Aguilar-Gonzalez, Abiel, and Miguel Arias-Estrada. "Dense mapping for monocular-SLAM." In 2016 International Conference on Indoor Positioning and Indoor Navigation (IPIN). IEEE, 2016. http://dx.doi.org/10.1109/ipin.2016.7743671.

2

Yang, Zhuoyue, and Dianxi Shi. "Mapping Technology in Visual SLAM." In the 2018 2nd International Conference. New York, New York, USA: ACM Press, 2018. http://dx.doi.org/10.1145/3297156.3297163.

3

Xue, Wuyang, Rendong Ying, Zheng Gong, Ruihang Miao, Fei Wen, and Peilin Liu. "SLAM Based Topological Mapping and Navigation." In 2020 IEEE/ION Position, Location and Navigation Symposium (PLANS). IEEE, 2020. http://dx.doi.org/10.1109/plans46316.2020.9110190.

4

Li, Ping, and Zhongming Ke. "Feature-based SLAM for Dense Mapping." In 2019 International Conference on Advanced Mechatronic Systems (ICAMechS). IEEE, 2019. http://dx.doi.org/10.1109/icamechs.2019.8861671.

5

Tong, Chi Hay, Timothy D. Barfoot, and Erick Dupuis. "3D SLAM for planetary worksite mapping." In 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2011). IEEE, 2011. http://dx.doi.org/10.1109/iros.2011.6048242.

6

Tong, Chi Hay, T. D. Barfoot, and E. Dupuis. "3D SLAM for planetary worksite mapping." In 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2011). IEEE, 2011. http://dx.doi.org/10.1109/iros.2011.6094577.

7

Choudhary, Siddharth, Alexander J. B. Trevor, Henrik I. Christensen, and Frank Dellaert. "SLAM with object discovery, modeling and mapping." In 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2014). IEEE, 2014. http://dx.doi.org/10.1109/iros.2014.6942683.

8

Khairuddin, Alif Ridzuan, Mohamad Shukor Talib, and Habibollah Haron. "Review on simultaneous localization and mapping (SLAM)." In 2015 IEEE International Conference on Control System, Computing and Engineering (ICCSCE). IEEE, 2015. http://dx.doi.org/10.1109/iccsce.2015.7482163.

9

Chan, Teng Hooi, Henrik Hesse, and Song Guang Ho. "LiDAR-Based 3D SLAM for Indoor Mapping." In 2021 7th International Conference on Control, Automation and Robotics (ICCAR). IEEE, 2021. http://dx.doi.org/10.1109/iccar52225.2021.9463503.

10

Chen, Yuwei, Changhui Jiang, Lingli Zhu, Harri Kaartinen, Juha Hyyppa, Jian Tan, Hannu Hyyppa, Hui Zhou, Ruizhi Chen, and Ling Pei. "SLAM Based Indoor Mapping Comparison: Mobile or Terrestrial?" In 2018 Ubiquitous Positioning, Indoor Navigation and Location-Based Services (UPINLBS). IEEE, 2018. http://dx.doi.org/10.1109/upinlbs.2018.8559707.


Reports of organizations on the topic "SLAM mapping"

1

Kelley, Troy D. Using a Cognitive Architecture to Solve Simultaneous Localization and Mapping (SLAM) Problems. Fort Belvoir, VA: Defense Technical Information Center, April 2006. http://dx.doi.org/10.21236/ad1016045.

2

Kelley, Troy D. Using a Cognitive Architecture to Solve Simultaneous Localization and Mapping (SLAM) Problems. Fort Belvoir, VA: Defense Technical Information Center, April 2006. http://dx.doi.org/10.21236/ada636872.

3

Christie, Benjamin, Osama Ennasr, and Garry Glaspell. Autonomous navigation and mapping in a simulated environment. Engineer Research and Development Center (U.S.), September 2021. http://dx.doi.org/10.21079/11681/42006.

Abstract:
Unknown Environment Exploration (UEE) with an Unmanned Ground Vehicle (UGV) is extremely challenging. This report investigates a frontier exploration approach, in simulation, that leverages Simultaneous Localization And Mapping (SLAM) to efficiently explore unknown areas by finding navigable routes. The solution utilizes a diverse sensor payload that includes wheel encoders, three-dimensional (3-D) LIDAR, and Red, Green, Blue and Depth (RGBD) cameras. The main goal of this effort is to leverage frontier-based exploration with a UGV to produce a 3-D map (up to 10 cm resolution). The solution provided leverages the Robot Operating System (ROS).
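A minimal sketch of the frontier-detection step at the heart of this approach (grid values follow the ROS `nav_msgs/OccupancyGrid` convention; the actual exploration package used in the report may differ):

```python
import numpy as np

# Assumed occupancy grid encoding, matching ROS nav_msgs/OccupancyGrid:
# -1 = unknown, 0 = free, 100 = occupied.
UNKNOWN, FREE = -1, 0


def frontier_cells(grid: np.ndarray):
    """Return (row, col) of free cells that border at least one unknown
    cell. These 'frontiers' are the candidate goals of frontier-based
    exploration: driving to them grows the mapped area."""
    frontiers = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != FREE:
                continue
            # 8-connected neighbourhood, clipped at the grid border.
            neighbours = grid[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
            if (neighbours == UNKNOWN).any():
                frontiers.append((r, c))
    return frontiers
```

In a full system these cells would be clustered into frontier regions, and the planner would pick the nearest (or most informative) region as the next navigation goal.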
4

Christie, Benjamin, Osama Ennasr, and Garry Glaspell. ROS integrated object detection for SLAM in unknown, low-visibility environments. Engineer Research and Development Center (U.S.), November 2021. http://dx.doi.org/10.21079/11681/42385.

Abstract:
Integrating thermal (or infrared) imagery on a robotics platform allows Unmanned Ground Vehicles (UGV) to function in low-visibility environments, such as pure darkness or low-density smoke. To maximize the effectiveness of this approach we discuss the modifications required to integrate our low-visibility object detection model on a Robot Operating System (ROS). Furthermore, we introduce a method for reporting detected objects while performing Simultaneous Localization and Mapping (SLAM) by generating bounding boxes and their respective transforms in visually challenging environments.
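Reporting a detection in the map while SLAM runs amounts to chaining rigid-body transforms; a minimal sketch (the frame names `map`, `base`, `cam` are illustrative of the usual ROS tf chain, not the report's exact setup):

```python
import numpy as np


def make_transform(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a
    3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T


def camera_to_map(p_cam, T_map_base, T_base_cam):
    """Express a detection's 3-D position (camera frame) in the map
    frame by chaining the SLAM pose (map <- base) with the camera
    mounting transform (base <- cam)."""
    p = np.append(np.asarray(p_cam, dtype=float), 1.0)  # homogeneous
    return (T_map_base @ T_base_cam @ p)[:3]
```

The same chaining applies to each bounding-box corner, which is how a detection made in a single camera frame becomes a persistent, map-anchored object.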
5

Lee, W. S., Victor Alchanatis, and Asher Levi. Innovative yield mapping system using hyperspectral and thermal imaging for precision tree crop management. United States Department of Agriculture, January 2014. http://dx.doi.org/10.32747/2014.7598158.bard.

Abstract:
Original objectives and revisions – The original overall objective was to develop, test and validate a prototype yield mapping system for unit area to increase yield and profit for tree crops. Specific objectives were: (1) to develop a yield mapping system for a static situation, using hyperspectral and thermal imaging independently, (2) to integrate hyperspectral and thermal imaging for improved yield estimation by combining thermal images with hyperspectral images to improve fruit detection, and (3) to expand the system to a mobile platform for a stop-measure-and-go situation. There were no major revisions in the overall objective; however, several revisions were made to the specific objectives. The revised specific objectives were: (1) to develop a yield mapping system for a static situation, using color and thermal imaging independently, (2) to integrate color and thermal imaging for improved yield estimation by combining thermal images with color images to improve fruit detection, and (3) to expand the system to an autonomous mobile platform for a continuous-measure situation. Background, major conclusions, solutions and achievements – Yield mapping is considered an initial step in applying precision agriculture technologies. Although many yield mapping systems have been developed for agronomic crops, yield mapping remains a difficult task for tree crops. In this project, an autonomous immature fruit yield mapping system was developed. The system could detect and count the number of fruit at early growth stages of citrus fruit so that farmers could apply site-specific management based on the maps. There were two sub-systems, a navigation system and an imaging system. Robot Operating System (ROS) was the backbone for developing the navigation system using an unmanned ground vehicle (UGV).
An inertial measurement unit (IMU), wheel encoders and a GPS were integrated using an extended Kalman filter to provide reliable and accurate localization information. A LiDAR was added to support simultaneous localization and mapping (SLAM) algorithms. The color camera on a Microsoft Kinect was used to detect citrus trees and a new machine vision algorithm was developed to enable autonomous navigations in the citrus grove. A multimodal imaging system, which consisted of two color cameras and a thermal camera, was carried by the vehicle for video acquisitions. A novel image registration method was developed for combining color and thermal images and matching fruit in both images which achieved pixel-level accuracy. A new Color- Thermal Combined Probability (CTCP) algorithm was created to effectively fuse information from the color and thermal images to classify potential image regions into fruit and non-fruit classes. Algorithms were also developed to integrate image registration, information fusion and fruit classification and detection into a single step for real-time processing. The imaging system achieved a precision rate of 95.5% and a recall rate of 90.4% on immature green citrus fruit detection which was a great improvement compared to previous studies. Implications – The development of the immature green fruit yield mapping system will help farmers make early decisions for planning operations and marketing so high yield and profit can be achieved.
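The extended Kalman filter fusion of odometry and GPS described above can be sketched in a few lines (the state, motion model and noise values here are illustrative, not the project's actual filter):

```python
import numpy as np


class SimpleEKF:
    """Minimal planar EKF: wheel odometry drives the prediction step,
    a GPS position fix drives the correction. State: [x, y, heading]."""

    def __init__(self):
        self.x = np.zeros(3)              # [x, y, theta]
        self.P = np.eye(3) * 0.1          # state covariance

    def predict(self, v, w, dt, q=0.01):
        """Propagate with a unicycle model given speed v and turn rate w."""
        x, y, th = self.x
        self.x = np.array([x + v * dt * np.cos(th),
                           y + v * dt * np.sin(th),
                           th + w * dt])
        # Jacobian of the motion model with respect to the state.
        F = np.array([[1, 0, -v * dt * np.sin(th)],
                      [0, 1,  v * dt * np.cos(th)],
                      [0, 0, 1]])
        self.P = F @ self.P @ F.T + np.eye(3) * q

    def update_gps(self, z, r=1.0):
        """Correct with a GPS fix z = [x, y]."""
        H = np.array([[1.0, 0, 0], [0, 1.0, 0]])   # GPS observes x, y
        S = H @ self.P @ H.T + np.eye(2) * r       # innovation covariance
        K = self.P @ H.T @ np.linalg.inv(S)        # Kalman gain
        self.x = self.x + K @ (np.asarray(z) - H @ self.x)
        self.P = (np.eye(3) - K @ H) @ self.P
```

An IMU would typically enter either as an extra prediction input (gyro turn rate) or as a heading measurement with its own update step.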
6

Maksud, A. K. M., Khandaker Reaz Hossain, and Amit Arulanantham. Mapping of Slums and Identifying Children Engaged in Worst Forms of Child Labour Living in Slums and Working in Neighbourhood Areas. Institute of Development Studies, May 2022. http://dx.doi.org/10.19088/clarissa.2022.002.

Abstract:
Dhaka has a population of about 19 million and many think it is a city of fortune. People come from all over the country to settle in Dhaka, and many low-cost settlements (known as slums) have emerged since the country became independent. Findings of national survey reports suggest there is a high concentration of child labour in the slums of Dhaka, linked with the global supply chain of products. In order to understand the drivers of child labour in the slum areas of Dhaka, a research team formed of the Grambangla Unnayan Committee (GUC) with ChildHope UK designed and conducted a mapping and listing exercise, in consultation with CLARISSA consortium colleagues. The overall objective of the mapping and listing process was to identify and map children engaged in the worst forms of child labour (WFCL) living in eight slum areas in Dhaka.