Journal articles on the topic '3D real-time localization'

To see the other types of publications on this topic, follow the link: 3D real-time localization.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic '3D real-time localization.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Liu, M. H., Sheng Zhang, and Yi Pan. "UWB-based Real-Time 3D High Precision Localization System." Journal of Physics: Conference Series 2290, no. 1 (June 1, 2022): 012082. http://dx.doi.org/10.1088/1742-6596/2290/1/012082.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The Ultra-Wideband (UWB) positioning technique, featuring high accuracy and low cost, has been drawing increasing interest. Angle of arrival (AOA), time difference of arrival (TDOA), and time of arrival (TOA) are common positioning methods. With DecaWave DW1000 ranging systems, which apply the TOA method, sub-meter accuracy can be achieved easily. However, the traditional DW1000 UWB localization system is unscalable, and its ranging stability leaves room for improvement. In this paper, a new Scalable Multi-Base Multi-Tag (SMBMT) scheme based on the traditional DW1000 UWB localization system is proposed to improve the system's efficiency and scalability. An effective data-processing algorithm, the Data Kalman Filter (Data KF), which uses ranging information only, is then presented to reduce the ranging error in the system. By changing the system equation, the Data KF is also able to optimize the dynamic trajectory. Finally, the proposed system is tested and the algorithm is analysed. The results show that applying the SMBMT scheme not only improves the scalability and ranging efficiency of the UWB localization system but also increases positioning accuracy and stability.
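The abstract above applies a Kalman filter directly to UWB ranging data. As generic background (this is a textbook scalar filter, NOT the paper's Data KF formulation; the noise parameters q and r are made-up illustrative values), smoothing a stream of range measurements can be sketched as:

```python
# Generic scalar Kalman filter for smoothing noisy range readings.
# Illustrative only: q (process noise) and r (measurement noise) are
# made-up values, and this is NOT the paper's Data KF formulation.
def kalman_smooth(ranges, q=1e-3, r=0.05):
    """Return the filtered estimate after each range measurement (meters)."""
    x = ranges[0]  # state estimate: the (assumed slowly varying) range
    p = 1.0        # estimate variance
    out = []
    for z in ranges:
        p = p + q                # predict: constant-range motion model
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)      # correct toward the measurement
        p = (1.0 - k) * p
        out.append(x)
    return out
```

Feeding it a jittery sequence such as [3.02, 2.95, 3.10, 2.98, 3.05] yields estimates that settle near 3 m; the recomputed gain k is what pulls noisy readings toward the running estimate.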
2

Lynen, Simon, Bernhard Zeisl, Dror Aiger, Michael Bosse, Joel Hesch, Marc Pollefeys, Roland Siegwart, and Torsten Sattler. "Large-scale, real-time visual–inertial localization revisited." International Journal of Robotics Research 39, no. 9 (July 7, 2020): 1061–84. http://dx.doi.org/10.1177/0278364920931151.

Abstract:
The overarching goals in image-based localization are scale, robustness, and speed. In recent years, approaches based on local features and sparse 3D point-cloud models have both dominated the benchmarks and seen successful real-world deployment. They enable applications ranging from robot navigation, autonomous driving, virtual and augmented reality to device geo-localization. Recently, end-to-end learned localization approaches have been proposed which show promising results on small-scale datasets. However, the positioning accuracy, scalability, latency, and compute and storage requirements of these approaches remain open challenges. We aim to deploy localization at a global scale where one thus relies on methods using local features and sparse 3D models. Our approach spans from offline model building to real-time client-side pose fusion. The system compresses the appearance and geometry of the scene for efficient model storage and lookup leading to scalability beyond what has been demonstrated previously. It allows for low-latency localization queries and efficient fusion to be run in real-time on mobile platforms by combining server-side localization with real-time visual–inertial-based camera pose tracking. In order to further improve efficiency, we leverage a combination of priors, nearest-neighbor search, geometric match culling, and a cascaded pose candidate refinement step. This combination outperforms previous approaches when working with large-scale models and allows deployment at unprecedented scale. We demonstrate the effectiveness of our approach on a proof-of-concept system localizing 2.5 million images against models from four cities in different regions of the world achieving query latencies in the 200 ms range.
3

LI, Wei, Yi WU, Chunlin SHEN, and Huajun GONG. "Robust 3D Surface Reconstruction in Real-Time with Localization Sensor." IEICE Transactions on Information and Systems E101.D, no. 8 (August 1, 2018): 2168–72. http://dx.doi.org/10.1587/transinf.2018edl8056.

4

Mair, Elmar, Klaus H. Strobl, Tim Bodenmüller, Michael Suppa, and Darius Burschka. "Real-time Image-based Localization for Hand-held 3D-modeling." KI - Künstliche Intelligenz 24, no. 3 (May 26, 2010): 207–14. http://dx.doi.org/10.1007/s13218-010-0037-z.

5

Będkowski, Janusz, Andrzej Masłowski, and Geert De Cubber. "Real time 3D localization and mapping for USAR robotic application." Industrial Robot: An International Journal 39, no. 5 (August 17, 2012): 464–74. http://dx.doi.org/10.1108/01439911211249751.

6

Baeck, P. J., N. Lewyckyj, B. Beusen, W. Horsten, and K. Pauly. "DRONE BASED NEAR REAL-TIME HUMAN DETECTION WITH GEOGRAPHIC LOCALIZATION." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-3/W8 (August 20, 2019): 49–53. http://dx.doi.org/10.5194/isprs-archives-xlii-3-w8-49-2019.

Abstract:
Detection of humans, e.g. for search and rescue operations, has been enabled by the availability of compact, easy-to-use cameras and drones. On the other hand, aerial photogrammetry techniques for inspection applications allow for precise geographic localization and the generation of an overview orthomosaic and 3D terrain model. The proposed solution is based on nadir drone imagery and combines both deep learning and photogrammetric algorithms to detect people and position them with geographical coordinates on an overview orthomosaic and 3D terrain map. The drone image processing chain is fully automated and near real-time, and therefore allows search and rescue teams to operate more efficiently in difficult-to-reach areas.
7

Hauser, Fabian, and Jaroslaw Jacak. "Real-time 3D single-molecule localization microscopy analysis using lookup tables." Biomedical Optics Express 12, no. 8 (July 16, 2021): 4955. http://dx.doi.org/10.1364/boe.424016.

8

Li, Yiming, Markus Mund, Philipp Hoess, Joran Deschamps, Ulf Matti, Bianca Nijmeijer, Vilma Jimenez Sabinina, Jan Ellenberg, Ingmar Schoen, and Jonas Ries. "Real-time 3D single-molecule localization using experimental point spread functions." Nature Methods 15, no. 5 (April 9, 2018): 367–69. http://dx.doi.org/10.1038/nmeth.4661.

9

Zhu, Wenjun, Peng Wang, Rui Li, and Xiangli Nie. "Real-time 3D work-piece tracking with monocular camera based on static and dynamic model libraries." Assembly Automation 37, no. 2 (April 3, 2017): 219–29. http://dx.doi.org/10.1108/aa-02-2017-018.

Abstract:
Purpose – This paper aims to propose a novel real-time three-dimensional (3D) model-based work-piece tracking method with a monocular camera for high-precision assembly. Tracking of 3D work-pieces at real-time speed is becoming more and more important for industrial tasks such as work-piece grasping and assembly, especially in complex environments.

Design/methodology/approach – A three-step process is provided: the offline static global library generation process, the online dynamic local library updating and selection process, and the 3D work-piece localization process. In the offline static global library generation process, the computer-aided design models of the work-piece are used to generate a set of discrete two-dimensional (2D) hierarchical-view matching libraries. In the online dynamic local library updating and selection process, the previous 3D location of the work-piece is used to predict the following location range, and a discrete matching library with a small number of 2D hierarchical views is selected from the dynamic local library for localization. The work-piece is then localized with high precision and at real-time speed in the 3D work-piece localization process.

Findings – The method is suitable for texture-less work-pieces in industrial applications.

Originality/value – The small size of the library enables real-time matching. Experimental results demonstrate the high accuracy and high efficiency of the proposed method.
10

Feng, Sheng, Chengdong Wu, Yunzhou Zhang, and Shigen Shen. "Collaboration calibration and three-dimensional localization in multi-view system." International Journal of Advanced Robotic Systems 15, no. 6 (November 1, 2018): 172988141881377. http://dx.doi.org/10.1177/1729881418813778.

Abstract:
In this research, the authors address the collaboration calibration and real-time three-dimensional (3D) localization problem in a multi-view system. The 3D localization method is proposed to fuse the two-dimensional image coordinates from multiple views and provide the 3D space location in real time. It is a fundamental solution for obtaining the 3D location of a moving object in the research field of computer vision. An improved common perpendicular centroid algorithm is presented to reduce the side effect of shadow detection and to improve localization accuracy. The collaboration calibration is used to generate the intrinsic and extrinsic parameters of the multi-view cameras synchronously. The experimental results show that the algorithm can achieve accurate positioning in indoor multi-view monitoring while reducing complexity.
11

Song, Yan, and Bo He. "Feature-Based Real-Time Visual SLAM Using Kinect." Advanced Materials Research 989-994 (July 2014): 2651–54. http://dx.doi.org/10.4028/www.scientific.net/amr.989-994.2651.

Abstract:
In this paper, a novel feature-based real-time visual simultaneous localization and mapping (SLAM) system is proposed. This system generates colored 3D reconstruction models and a 3D estimated trajectory using a Kinect-style camera. Microsoft Kinect, a low-priced 3D camera, is the only sensor we use in our experiment. Kinect-style sensors give RGB-D (red-green-blue depth) data, which contains a 2D image and per-pixel depth information. ORB (Oriented FAST and Rotated BRIEF) is the algorithm used to extract image features to speed up the whole system. Our system can be used to generate detailed 3D reconstruction models. Furthermore, an estimated 3D trajectory of the sensor is given in this paper. The results of the experiments demonstrate that our system performs robustly and effectively in both producing detailed 3D models and mapping the camera trajectory.
12

Xiong, Juntao, Junhao Liang, Yanyun Zhuang, Dan Hong, Zhenhui Zheng, Shisheng Liao, Wenxin Hu, and Zhengang Yang. "Real-time localization and 3D semantic map reconstruction for unstructured citrus orchards." Computers and Electronics in Agriculture 213 (October 2023): 108217. http://dx.doi.org/10.1016/j.compag.2023.108217.

13

Li, Wei, Hao Pu, Hai Feng Zhao, and Yi Chang Cai. "Data Compression Method for Road 3D Scene Real-Time Network Transmission." Advanced Materials Research 779-780 (September 2013): 1817–21. http://dx.doi.org/10.4028/www.scientific.net/amr.779-780.1817.

Abstract:
In view of the conflict between the massive data volume of 3D road scenes and the bandwidth of current network transmission, a real-time data compression method for massive 3D road scenes is proposed. Firstly, three-dimensional coordinates are reduced to two dimensions by localization, and the bit rate is also significantly reduced by relative encoding. Secondly, an inter-frame prediction method applicable to road-scene rendering is proposed in order to reduce the data redundancy of adjacent stations. Thirdly, a lossless and highly compact data format is defined to store the coordinates for road-scene network transmission. Finally, the bit stream is further compressed using adaptive arithmetic coding, and thus a higher compression ratio is obtained. Applications show that this compression method meets the requirements of real-time network transmission of road scenes.
14

Zhao, Yue, Adeline Bernard, Christian Cachard, and Hervé Liebgott. "Biopsy Needle Localization and Tracking Using ROI-RK Method." Abstract and Applied Analysis 2014 (2014): 1–7. http://dx.doi.org/10.1155/2014/973147.

Abstract:
The ROI-RK method is a biopsy needle localization and tracking method. Previous research has shown that it performs robustly on different series of simulated 3D US volumes. Unfortunately, in real situations, because of the strong speckle noise of the ultrasound image and the different echogenic properties of the tissues, real 3D US volumes have a more complex background than the simulated images used previously. In this paper, to adapt the ROI-RK method to real 3D US volumes, a line-filter enhancement calculation restricted to the ROI is added to increase the contrast between the needle and the background tissue, reducing the apparent expansion of the biopsy needle caused by reverberation of ultrasound in the needle. To make the ROI-RK method more stable, a self-correction system is also implemented. Real data have been acquired on an ex vivo lamb heart. The results show that the ROI-RK method is capable of localizing and tracking the biopsy needle in real situations and satisfies the demands of real-time application.
15

Sánchez, Carlos, Pierluigi Taddei, Simone Ceriani, Erik Wolfart, and Vítor Sequeira. "Localization and tracking in known large environments using portable real-time 3D sensors." Computer Vision and Image Understanding 149 (August 2016): 197–208. http://dx.doi.org/10.1016/j.cviu.2015.11.012.

16

Monica, Stefania, and Gianluigi Ferrari. "A swarm-based approach to real-time 3D indoor localization: Experimental performance analysis." Applied Soft Computing 43 (June 2016): 489–97. http://dx.doi.org/10.1016/j.asoc.2016.02.020.

17

Tong, Guofeng, Yong Li, Yuanyuan Li, Fan Gao, and Lihao Cao. "Lidar-based system for high-precision localization and real-time 3D map construction." Journal of Applied Remote Sensing 14, no. 02 (May 13, 2020): 1. http://dx.doi.org/10.1117/1.jrs.14.020501.

18

Mat Daud, Marizuana, Zulaikha Kadim, Fazlina Mohd Ali, and Kit Chaw Jun. "REAL-TIME 3D MAPPING AND LOCALIZATION OF PALM OIL TREE FOR HARVEST DATA MANAGEMENT SYSTEM." Journal of Information System and Technology Management 8, no. 33 (December 7, 2023): 91–101. http://dx.doi.org/10.35631/jistm.833008.

Abstract:
The palm oil industry, particularly vital in Asian nations such as Malaysia and Indonesia, plays a pivotal role in their economies. Its efficiency hinges on effective palm oil harvest management. The industry is poised for substantial growth, with global palm oil demand projected to surge from 51 million tons to an estimated 120 to 156 million tons over the next three decades. However, it faces significant challenges, notably its heavy reliance on foreign labor, particularly for the labor-intensive tasks of harvesting and assessing fruit bunch maturity, a situation further exacerbated by recent labor shortages due to the COVID-19 pandemic. Therefore, this study proposes a comprehensive approach to 3D mapping and tagging, centered on the detection and localization of palm oil trees. The key methodology involves employing object detection technology to extract the coordinates of these trees. Subsequently, RTAB-Map is used to precisely localize the identified palm oil trees within 3D space. The results of this research demonstrate a robust and efficient system for palm oil tree detection and 3D localization. By utilizing object detection and RTAB-Map, we achieve high accuracy in tree identification and localization, laying the foundation for improved management of palm oil plantations. This technology not only facilitates the monitoring and assessment of palm oil tree health but also enhances overall plantation management, resource allocation, and sustainability efforts. These findings represent a significant step towards optimizing palm oil cultivation practices and, consequently, fostering environmental conservation and sustainable agricultural practices within the industry.
19

Xie, Xing, Lin Bai, and Xinming Huang. "Real-Time LiDAR Point Cloud Semantic Segmentation for Autonomous Driving." Electronics 11, no. 1 (December 22, 2021): 11. http://dx.doi.org/10.3390/electronics11010011.

Abstract:
LiDAR has been widely used in autonomous driving systems to provide high-precision 3D geometric information about the vehicle's surroundings for perception, localization, and path planning. LiDAR-based point cloud semantic segmentation is an important task with a critical real-time requirement. However, most existing convolutional neural network (CNN) models for 3D point cloud semantic segmentation are very complex and can hardly be processed in real time on an embedded platform. In this study, a lightweight CNN structure is proposed for projection-based LiDAR point cloud semantic segmentation with only 1.9 M parameters, an 87% reduction compared to state-of-the-art networks. When evaluated on a GPU, the processing time was 38.5 ms per frame, and it achieved a 47.9% mIoU score on the Semantic-KITTI dataset. In addition, the proposed CNN was implemented on an FPGA using an NVDLA architecture, which results in a 2.74× speedup over the GPU implementation and a 46× improvement in power efficiency.
20

Ren, Zhuli, and Liguan Wang. "Accurate Real-Time Localization Estimation in Underground Mine Environments Based on a Distance-Weight Map (DWM)." Sensors 22, no. 4 (February 14, 2022): 1463. http://dx.doi.org/10.3390/s22041463.

Abstract:
Precise localization in an underground mine environment is key to achieving unmanned, intelligent underground mining. However, in an underground environment GPS is unavailable, lighting conditions are variable and often poor, long tunnels exhibit visual aliasing, and airborne dust and water occur, presenting great difficulty for localization. We demonstrate a high-precision, real-time, infrastructure-free underground localization method based on 3D LiDAR. The underground mine environment map was constructed based on GICP-SLAM, and inverse distance weighting (IDW) is proposed for the first time to implement error correction based on point cloud mapping, yielding what we call a distance-weight map (DWM). The map is used for localization in an underground mine environment for the first time. The approach combines point-cloud frame matching and DWM matching in an unscented Kalman filter fusion process. Finally, the localization method was tested in four underground scenes, where a spatial localization error of 4 cm and a processing time of 60 ms per frame were obtained. We also analyze the impact of the initial pose and of point cloud segmentation on localization accuracy. The results show that this new algorithm can achieve low-drift, real-time localization in an underground mine environment.
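The abstract above names inverse distance weighting (IDW) as its error-correction interpolator. As a hedged illustration of the generic IDW formula only (not the paper's DWM construction; the sample layout in the usage note is invented), a minimal sketch:

```python
import math

# Minimal inverse distance weighting (IDW): estimate a value at a query
# point as a weighted average of scattered samples, with weight
# 1 / distance**power. Generic illustration of the interpolator the
# abstract names, not the paper's DWM pipeline; sample data is invented.
def idw(samples, query, power=2.0):
    """samples: list of ((x, y), value) pairs; query: (x, y)."""
    num = den = 0.0
    for (x, y), value in samples:
        d = math.hypot(x - query[0], y - query[1])
        if d == 0.0:
            return value  # query coincides with a sample point
        w = 1.0 / d ** power
        num += w * value
        den += w
    return num / den
```

A query midway between samples of value 1.0 and 3.0 returns 2.0, since the two weights are equal; closer samples dominate as `power` grows.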
21

Meng, Yu, Kwei-Jay Lin, Bo-Lung Tsai, Ching-Chi Chuang, Yuheng Cao, and Bin Zhang. "Visual-Based Localization Using Pictorial Planar Objects in Indoor Environment." Applied Sciences 10, no. 23 (November 30, 2020): 8583. http://dx.doi.org/10.3390/app10238583.

Abstract:
Localization is an important technology for smart services like autonomous surveillance, disinfection, or delivery robots in future distributed indoor IoT applications. Visual-based localization (VBL) is a promising self-localization approach that identifies a robot's location in an indoor or underground 3D space by using its camera to scan and match the robot's surrounding objects and scenes. In this study, we present a pictorial-planar-surface-based 3D object localization framework. We have designed two object detection methods for localization, ArPico and PicPose. ArPico detects and recognizes framed pictures by converting them into binary marker codes for matching with known codes in the library. It then uses the corner points on a picture's border to identify the camera's pose in 3D space. PicPose detects the pictorial planar surface of an object in a camera view and produces the pose output by matching the feature points in the view with those in the original picture, producing the homography that maps the object to its actual location in the 3D real-world map. We have built an autonomous mobile robot that can localize itself using its on-board camera and the PicPose technology. The experimental study shows that our localization methods are practical, have very good accuracy, and can be used for real-time robot navigation.
22

Einizinab, Sajjad, Kourosh Khoshelham, Stephan Winter, and Philip Christopher. "Global localization for Mixed Reality visualization using wireframe extraction from images." ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences X-4/W5-2024 (June 27, 2024): 119–26. http://dx.doi.org/10.5194/isprs-annals-x-4-w5-2024-119-2024.

Abstract:
Mixed Reality (MR) global localization involves precisely tracking the device's position and orientation within a digital representation, such as a Building Information Model (BIM). Existing model-based MR global localization approaches have difficulty addressing environmental changes between the BIM and the real world, particularly on dynamic construction sites. Additionally, a significant challenge in MR systems arises from localization drift, where the gradual accumulation of positional errors over time can lead to inaccuracies in determining the device's position and orientation within the virtual model. We develop a method that extracts structural elements of the building, referred to as a wireframe, which are less likely to change due to their inherent permanence. The extraction of these features is computationally inexpensive enough to be performed on the MR device, ensuring reliable and continuous global localization over time and thereby overcoming issues associated with localization drift. The method incorporates a deep convolutional neural network (CNN) to extract 2D wireframes from images. The reconstruction of the 3D wireframe is achieved by utilizing the extracted 2D wireframe along with its depth information. The simplified 3D wireframe is subsequently aligned with the BIM. Real-world experiments demonstrate the method's effectiveness in 3D wireframe extraction and alignment with the BIM, successfully mitigating drift by 4 cm in prolonged corridor scans.
23

Mittet, M. A., T. Landes, and P. Grussenmeyer. "Localization using RGB-D cameras orthoimages." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-5 (June 6, 2014): 425–32. http://dx.doi.org/10.5194/isprsarchives-xl-5-425-2014.

Abstract:
3D cameras are a new generation of sensors increasingly used in geomatics. The main advantages of 3D cameras are their handiness, their price, and their ability to produce range images or point clouds in real time. They are used in many areas, and the use of this kind of sensor has grown especially since the Kinect (Microsoft) arrived on the market. This paper presents a new localization system based exclusively on the combination of several 3D cameras on a mobile platform. It is planned that the platform will move on sidewalks, acquire the environment, and enable the determination of the most appropriate routes for disabled persons. The paper presents the key features of our approach as well as promising solutions for the challenging task of localization based on 3D cameras. We give examples of mobile trajectories estimated exclusively from 3D camera acquisitions. We evaluate the accuracy of the calculated trajectory against a reference trajectory obtained with a total station.
24

Deng, Qi, Hao Sun, Fupeng Chen, Yuhao Shu, Hui Wang, and Yajun Ha. "An Optimized FPGA-Based Real-Time NDT for 3D-LiDAR Localization in Smart Vehicles." IEEE Transactions on Circuits and Systems II: Express Briefs 68, no. 9 (September 2021): 3167–71. http://dx.doi.org/10.1109/tcsii.2021.3095764.

25

Yoon, Sukjune, Seungyong Hyung, Minhyung Lee, Kyung Shik Roh, SungHwan Ahn, Andrew Gee, Pished Bunnun, Andrew Calway, and Walterio W. Mayol-Cuevas. "Real-time 3D simultaneous localization and map-building for a dynamic walking humanoid robot." Advanced Robotics 27, no. 10 (July 2013): 759–72. http://dx.doi.org/10.1080/01691864.2013.785379.

26

Li, Ruijiang, John H. Lewis, Xun Jia, Xuejun Gu, Michael Folkerts, Chunhua Men, William Y. Song, and Steve B. Jiang. "3D tumor localization through real-time volumetric x-ray imaging for lung cancer radiotherapy." Medical Physics 38, no. 5 (May 9, 2011): 2783–94. http://dx.doi.org/10.1118/1.3582693.

27

Chen, Xinbao, Xiaodong Zhu, and Chang Liu. "Real-Time 3D Reconstruction of UAV Acquisition System for the Urban Pipe Based on RTAB-Map." Applied Sciences 13, no. 24 (December 12, 2023): 13182. http://dx.doi.org/10.3390/app132413182.

Abstract:
In urban underground projects, such as urban drainage systems, the real-time acquisition and generation of 3D models of pipes can provide an important foundation for pipe safety inspection and maintenance. The simultaneous localization and mapping (SLAM) technique, compared to the traditional structure-from-motion (SfM) reconstruction technique, offers high real-time performance and improves the efficiency of 3D object reconstruction. Underground pipes are situated in complex, unattended environments and often lack natural lighting. To address this, this paper presents a real-time and cost-effective 3D perception and reconstruction system that uses an unmanned aerial vehicle (UAV) equipped with Intel RealSense D435 depth cameras and an artificial light-supplementation device. This system carries out real-time 3D reconstruction of underground pipes using RTAB-Map (real-time appearance-based mapping), a graph-based visual SLAM method that combines loop-closure detection and graph optimization algorithms. The unique memory-management mechanism of RTAB-Map enables synchronous mapping across multiple sessions during UAV flight. Experimental results demonstrate that the proposed system based on RTAB-Map exhibits robustness and feasibility for textured 3D reconstruction of underground pipes.
28

Zhang, Dongxiang, Ryo Kurazume, Yumi Iwashita, and Tsutomu Hasegawa. "Robust Global Localization Using Laser Reflectivity." Journal of Robotics and Mechatronics 25, no. 1 (February 20, 2013): 38–52. http://dx.doi.org/10.20965/jrm.2013.p0038.

Abstract:
Global localization, which determines an accurate global position without prior knowledge, is a fundamental requirement for a mobile robot. Map-based global localization gives a precise position by comparing a provided geometric map with current sensory data. Although 3D range data is preferable for 6D global localization in terms of accuracy and reliability, comparison with large 3D datasets is quite time-consuming. On the other hand, appearance-based global localization, which determines the global position by comparing a captured image with recorded ones, is simple and suitable for real-time processing. However, this technique does not work in the dark or in environments where lighting conditions change remarkably. We herein propose a two-step strategy that combines map-based and appearance-based global localization. Instead of camera images, which are used for appearance-based global localization, we use reflectance images, which are captured by a laser range finder as a byproduct of range sensing. The effectiveness of the proposed technique is demonstrated through experiments in real environments.
29

Schraml, S., T. Hinterhofer, M. Pfennigbauer, and M. Hofstätter. "PRECISE RADIONUCLIDE LOCALIZATION USING UAV-BASED LIDAR AND GAMMA PROBE WITH REAL-TIME PROCESSING." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-3/W8 (August 23, 2019): 503–8. http://dx.doi.org/10.5194/isprs-archives-xlii-3-w8-503-2019.

Abstract:
In this work we propose an effective radiation source localization device employing a RIEGL VUX-1UAV laser scanner and a highly sensitive Hotzone Technologies gamma radiation probe mounted on a RiCOPTER UAV, combined with real-time data processing. The on-board processing and radio communication system integrated within the UAV enables instant and continuously updated access to georeferenced 3D lidar point clouds and gamma radiation intensities. Further processing is done fully automatically on the ground. We present a novel combination of both the 3D laser data and the gamma readings within an optimization algorithm that can locate the radioactive source in real time. Furthermore, this technique can be used to estimate on-ground radiation intensity, which also considers the actual topography as well as radiation barriers like vegetation or buildings. Results from field tests with real radioactive sources show that single sources can be located precisely, even if the source is largely covered. Outcomes are displayed to the person in charge in an intuitive and user-friendly way, e.g. on a tablet. The whole system is designed to operate in real time while the UAV is in the air, resulting in a highly flexible and possibly life-saving asset for first responders in time-critical scenarios.
30

Abu-Qasmieh, Isam, and Ali Mohammad Alqudah. "Triad system for object's 3D localization using low-resolution 2D ultrasonic sensor array." International Review of Applied Sciences and Engineering 11, no. 2 (August 2020): 115–22. http://dx.doi.org/10.1556/1848.2020.20010.

Abstract:
In recently published research in the field of object localization, 3D object localization takes the largest share due to its importance in daily life. 3D object localization has many applications, such as collision avoidance, robotic guidance and vision, and object surface topography modeling. This study presents a novel localization algorithm and system design using a low-resolution 2D ultrasonic sensor array for real-time 3D object localization. A novel localization algorithm is developed and applied to the acquired data using the three sensors with the minimum calculated distances at each acquired sample; the algorithm was tested on objects at different locations in 3D space and validated with an acceptable level of precision and accuracy. The Polytope Faces Pursuit (PFP) algorithm was used to find an approximate sparse solution for the object location from the three measured minimum distances. The proposed system successfully localizes objects at different positions with average errors of ±1.4 mm, ±1.8 mm, and ±3.7 mm in the x-, y-, and z-directions, respectively, which are considered low error rates.
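The system above localizes an object from the three smallest sensor distances. As background, textbook trilateration recovers a 3D point from three known anchor positions and measured ranges (this is the standard closed-form method, not the paper's PFP sparse-solution approach; the anchor coordinates in the usage note are illustrative):

```python
import math

# Textbook trilateration: recover a 3D point from three known anchor
# positions p1, p2, p3 and measured ranges r1, r2, r3. Of the two mirror
# solutions, the one above the anchor plane is returned.
def trilaterate(p1, p2, p3, r1, r2, r3):
    sub = lambda a, b: tuple(x - y for x, y in zip(a, b))
    add = lambda a, b: tuple(x + y for x, y in zip(a, b))
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    mul = lambda a, s: tuple(x * s for x in a)
    cross = lambda a, b: (a[1] * b[2] - a[2] * b[1],
                          a[2] * b[0] - a[0] * b[2],
                          a[0] * b[1] - a[1] * b[0])
    # Build an orthonormal frame with p1 at the origin and p2 on the x-axis.
    ex = sub(p2, p1)
    d = math.sqrt(dot(ex, ex))
    ex = mul(ex, 1.0 / d)
    t = sub(p3, p1)
    i = dot(ex, t)
    ey = sub(t, mul(ex, i))
    j = math.sqrt(dot(ey, ey))
    ey = mul(ey, 1.0 / j)
    ez = cross(ex, ey)
    # Closed-form solution in the local frame.
    x = (r1 * r1 - r2 * r2 + d * d) / (2.0 * d)
    y = (r1 * r1 - r3 * r3 + i * i + j * j) / (2.0 * j) - (i / j) * x
    z = math.sqrt(max(r1 * r1 - x * x - y * y, 0.0))
    return add(p1, add(mul(ex, x), add(mul(ey, y), mul(ez, z))))
```

For anchors at (0,0,0), (4,0,0), and (0,4,0) with ranges measured to the point (1,2,3), the function recovers (1.0, 2.0, 3.0).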
31

Mauri, Antoine, Redouane Khemmar, Benoit Decoux, Madjid Haddad, and Rémi Boutteau. "Real-Time 3D Multi-Object Detection and Localization Based on Deep Learning for Road and Railway Smart Mobility." Journal of Imaging 7, no. 8 (August 12, 2021): 145. http://dx.doi.org/10.3390/jimaging7080145.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
For smart mobility, autonomous vehicles, and advanced driver-assistance systems (ADASs), perception of the environment is an important task in scene analysis and understanding. Better perception of the environment allows for enhanced decision making, which, in turn, enables very high-precision actions. To this end, we introduce in this work a new real-time deep learning approach for 3D multi-object detection for smart mobility not only on roads, but also on railways. To obtain the 3D bounding boxes of the objects, we modified a proven real-time 2D detector, YOLOv3, to predict 3D object localization, object dimensions, and object orientation. Our method has been evaluated on KITTI’s road dataset as well as on our own hybrid virtual road/rail dataset acquired from the video game Grand Theft Auto (GTA) V. The evaluation of our method on these two datasets shows good accuracy, but more importantly that it can be used in real-time conditions, in road and rail traffic environments. Through our experimental results, we also show the importance of the accuracy of prediction of the regions of interest (RoIs) used in the estimation of 3D bounding box parameters.
32

Dubé, Renaud, Andrei Cramariuc, Daniel Dugas, Hannes Sommer, Marcin Dymczyk, Juan Nieto, Roland Siegwart, and Cesar Cadena. "SegMap: Segment-based mapping and localization using data-driven descriptors." International Journal of Robotics Research 39, no. 2-3 (July 10, 2019): 339–55. http://dx.doi.org/10.1177/0278364919863090.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Precisely estimating a robot’s pose in a prior, global map is a fundamental capability for mobile robotics, e.g., autonomous driving or exploration in disaster zones. This task, however, remains challenging in unstructured, dynamic environments, where local features are not discriminative enough and global scene descriptors only provide coarse information. We therefore present SegMap: a map representation solution for localization and mapping based on the extraction of segments in 3D point clouds. Working at the level of segments offers increased invariance to viewpoint and local structural changes, and facilitates real-time processing of large-scale 3D data. SegMap exploits a single compact data-driven descriptor for performing multiple tasks: global localization, 3D dense map reconstruction, and semantic information extraction. The performance of SegMap is evaluated in multiple urban driving and search-and-rescue experiments. We show that the learned SegMap descriptor has superior segment retrieval capabilities compared with state-of-the-art handcrafted descriptors; as a consequence, we achieve higher localization accuracy and a 6% increase in recall. These segment-based localizations allow us to reduce the open-loop odometry drift by up to 50%. SegMap is available as open source, along with easy-to-run demonstrations.
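The retrieval idea behind such segment-based localization, matching a compact per-segment descriptor against a database of map segments, can be sketched as a nearest-neighbour search. The descriptors below are random stand-ins, not the learned descriptors from the paper; a kd-tree would replace the brute-force search at scale.

```python
import numpy as np

def retrieve_candidates(query_desc, map_descs, k=2):
    """Return indices of the k map segments closest to the query
    descriptor in Euclidean distance (brute-force for clarity)."""
    d = np.linalg.norm(map_descs - query_desc, axis=1)
    return np.argsort(d)[:k]

rng = np.random.default_rng(0)
map_descs = rng.normal(size=(100, 64))              # 100 segments, 64-D descriptors
query = map_descs[42] + 0.01 * rng.normal(size=64)  # noisy revisit of segment 42
print(retrieve_candidates(query, map_descs)[0])     # → 42
```

The retrieved candidates would then be geometrically verified before being accepted as a localization match.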
33

Dong, Zhijie, Shuangliang Li, Chengwu Huang, Matthew R. Lowerison, Dongliang Yan, Yike Wang, Shigao Chen, Jun Zou, and Pengfei Song. "Real-time 3D ultrasound imaging with a clip-on device attached to common 1D array transducers." Journal of the Acoustical Society of America 155, no. 3_Supplement (March 1, 2024): A102. http://dx.doi.org/10.1121/10.0026955.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Performing 3D ultrasound imaging at a real-time volume rate (e.g., >20 Hz) is a challenging task. While 2D array transducers remain the most practical approach for real-time 3D imaging, the large number of transducer elements (e.g., several thousand) that are necessary to cover an effective 3D field-of-view impose a fundamental constraint on imaging speed. Although solutions such as multiplexing and specialized transducers, including sparse arrays and row-column-addressing arrays, have been developed to address this limitation, they inevitably compromise imaging quality (e.g., SNR, resolution) in favor of speed. Coupled with the high equipment cost of 2D arrays, these compromises hinder the widespread adoption of 3D ultrasound imaging technologies in clinical settings. In this presentation, we introduce an innovative transducer clip-on device comprising a water-immersible, fast-tilting electromechanical acoustic reflector and a redirecting reflector to enable real-time 3D ultrasound imaging using common 1D array transducers. We will first introduce the principles underlying our novel technique, followed by validation studies incorporating simulation and experimental data. We will also demonstrate the feasibility of using the clip-on device to achieve a high 3D imaging volume rate that is suitable for advanced imaging modes such as shear wave elastography, blood flow imaging, and super-resolution ultrasound localization microscopy.
34

Perdices, Eduardo, and José Cañas. "SDVL: Efficient and Accurate Semi-Direct Visual Localization." Sensors 19, no. 2 (January 14, 2019): 302. http://dx.doi.org/10.3390/s19020302.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Visual Simultaneous Localization and Mapping (SLAM) approaches have achieved a major breakthrough in recent years. This paper presents a new monocular visual odometry algorithm able to localize a robot or a camera in 3D inside an unknown environment in real time, even on slow processors such as those used in unmanned aerial vehicles (UAVs) or cell phones. The so-called semi-direct visual localization (SDVL) approach focuses on localization accuracy and uses semi-direct methods to increase feature-matching efficiency. It uses inverse-depth 3D point parameterization. The tracking thread includes a motion model, direct image alignment, and optimized feature matching. Additionally, an outlier rejection mechanism (ORM) has been implemented to rule out misplaced features, improving accuracy especially in partially dynamic environments. A relocalization module is also included while preserving real-time operation. The mapping thread performs automatic map initialization with a homography, sampled integration of new points, and selective map optimization. The proposed algorithm was experimentally tested on international datasets and compared to state-of-the-art algorithms.
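The inverse-depth 3D point parameterization mentioned in the abstract stores a feature as an anchor position, a viewing direction, and an inverse depth, which keeps far-away points numerically well behaved. A minimal conversion sketch follows; the angle convention is an assumption for illustration, not taken from the paper.

```python
import numpy as np

def inverse_depth_to_xyz(anchor, theta, phi, rho):
    """Convert an inverse-depth point to Euclidean coordinates.

    anchor: camera position when the feature was first observed
    theta, phi: azimuth and elevation of the observation ray
    rho: inverse depth (1 / distance); small rho = far-away point
    """
    m = np.array([np.cos(phi) * np.sin(theta),   # unit ray direction
                  -np.sin(phi),
                  np.cos(phi) * np.cos(theta)])
    return np.asarray(anchor) + m / rho

p = inverse_depth_to_xyz([0.0, 0.0, 0.0], 0.0, 0.0, 0.5)
print(p)  # a point 2 m straight ahead along the optical axis
```

The benefit is that rho = 0 (a point at infinity) is a perfectly valid parameter value, whereas a plain (x, y, z) representation would overflow.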
35

Bedkowski, Janusz Marian, and Timo Röhling. "Online 3D LIDAR Monte Carlo localization with GPU acceleration." Industrial Robot: An International Journal 44, no. 4 (June 19, 2017): 442–56. http://dx.doi.org/10.1108/ir-11-2016-0309.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Purpose: This paper focuses on real-world mobile systems and thus contributes to the special issue on "Real-world mobile robot systems". The work on 3D laser semantic mobile mapping and particle filter localization, aimed at robots patrolling urban sites, emphasizes parallel computing for semantic mapping and particle filter localization. The goal is the real robotic application of patrolling urban sites, and it is shown that the crucial robotic components have reached a high Technology Readiness Level (TRL). Design/methodology/approach: Three robotic platforms equipped with different 3D laser measurement systems were compared. Each system provides different data in terms of measured distance, point density and noise, so the influence of the data on the final semantic maps was compared. The realistic problem is to use these semantic maps for robot localization, so the influence of the different maps on particle filter localization was examined. A new approach to particle filter localization based on 3D semantic information is proposed, and the behavior of the particle filter under different realistic conditions is analyzed. The process of using the proposed robotic components for patrolling an urban site, such as the robot checking for geometric changes in the environment, is detailed. Findings: A focus on real-world mobile systems requires a different scientific point of view. This study targets robust and reliable solutions that can be integrated with real applications; thus, a new parallel computing approach for semantic mapping and particle filter localization is proposed. Based on the literature, semantic 3D particle filter localization had not yet been elaborated, so innovative solutions to this problem are proposed. The work builds on a previously published semantic mapping framework; for this reason, the authors claim that their applied studies during real-world trials with that mapping system are an added value relevant to this special issue.
Research limitations/implications: The main problem is the compromise between computing power and the energy consumed by heavy calculations, so the main focus is on modern GPGPUs with the NVIDIA Pascal parallel processor architecture. Recent advances in GPGPUs show great potential for mobile robotic applications, so this study focuses on increasing mapping and localization capabilities by improving the algorithms. The current limitation is the number of particles a single processor can handle: 500 particles in real time. The implication is that multi-GPU architectures could increase the number of processed particles; further studies are required. Practical implications: Because the research concerns real-world mobile systems, practical aspects of the work are crucial. The main practical application is semantic mapping, which can serve many robotic applications. The authors claim that their particle filter localization is ready to be integrated with real robotic platforms using a modern 3D laser measurement system, and that their system can therefore improve existing autonomous robotic platforms. The proposed components can be used to detect geometric changes in the scene, enabling many practical functions such as detecting cars or detecting opened/closed gates. […] These functionalities are crucial elements of the safety and security domain. Social implications: Improving the safety and security domain is a crucial aspect of modern society.
Protecting critical infrastructure plays an important role, so introducing autonomous mobile platforms capable of supporting the human operators of safety and security systems could have a positive impact from many points of view. Originality/value: This study elaborates a novel approach to particle filter localization based on 3D data and semantic mapping. This original work could have a great impact on the mobile robotics domain; many algorithmic and implementation issues were solved under real-task experiments. The originality of this work is reinforced by the use of modern, advanced robotic systems, a relevant set of technologies for properly evaluating the proposed approach. Such a combination of experimental hardware, original algorithms and implementation is a definite added value.
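The particle filter localization loop that this and several other entries describe follows a predict/weight/resample cycle. The sketch below is a minimal range-only toy with hypothetical motion and measurement models, not the semantic 3D variant of the paper; it shows the structure that GPU implementations parallelize across particles.

```python
import numpy as np

rng = np.random.default_rng(1)

def pf_step(particles, weights, motion, meas, meas_fn, sigma):
    """One predict/update/resample cycle of a localization particle filter.

    particles: (N, 3) hypothesised poses (x, y, yaw)
    motion:    odometry increment applied to every particle (with noise)
    meas:      observed range to a known landmark
    meas_fn:   maps a pose to a predicted measurement
    """
    # Predict: apply a noisy motion model.
    particles = particles + motion + rng.normal(0, 0.02, particles.shape)
    # Update: weight each particle by its Gaussian measurement likelihood.
    pred = np.apply_along_axis(meas_fn, 1, particles)
    weights = weights * np.exp(-0.5 * ((meas - pred) / sigma) ** 2)
    weights /= weights.sum()
    # Resample proportionally to weight (systematic resampling
    # would reduce variance further).
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

landmark = np.array([5.0, 0.0])
meas_fn = lambda p: np.linalg.norm(p[:2] - landmark)
particles = rng.uniform(-1, 1, (500, 3))
weights = np.full(500, 1.0 / 500)
true_range = meas_fn(np.zeros(3))   # the robot actually sits at the origin
for _ in range(10):
    particles, weights = pf_step(particles, weights, np.zeros(3),
                                 true_range, meas_fn, sigma=0.1)
print(particles[:, :2].mean(axis=0))  # cloud collapses toward the true range
```

Each particle's likelihood evaluation is independent, which is exactly why the 500-particle real-time budget mentioned in the abstract scales with the number of GPU cores.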
36

Önol, F. F., H. Palayapalayam Ganapathi, T. Rogers, S. Roof, and V. Patel. "Anatomical 3D image guidance for real-time lymph node localization during robot-assisted salvage lymphadenectomy." European Urology Supplements 17, no. 2 (March 2018): e1983. http://dx.doi.org/10.1016/s1569-9056(18)32385-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Ipsen, S., O. Blanck, N. J. Lowther, G. P. Liney, R. Rai, F. Bode, J. Dunst, A. Schweikard, and P. J. Keall. "Towards real-time MRI-guided 3D localization of deforming targets for non-invasive cardiac radiosurgery." Physics in Medicine and Biology 61, no. 22 (October 25, 2016): 7848–63. http://dx.doi.org/10.1088/0031-9155/61/22/7848.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Islam, Md Asiful, and John L. Volakis. "Real-Time Detection and 3D Localization of Coronary Atherosclerosis Using a Microwave Imaging Technique: A Simulation Study." Sensors 22, no. 22 (November 15, 2022): 8822. http://dx.doi.org/10.3390/s22228822.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Obtaining the exact position of accumulated calcium on the inner walls of coronary arteries is critical for successful angioplasty procedures. For the first time to our knowledge, in this work we present high-accuracy imaging of the inner coronary artery using microwaves for precise calcium identification. Specifically, a cylindrical catheter radiating microwave signals is designed. The catheter has multiple dipole-like antennas placed around it to enable a 360° field of view around the catheter. In addition, to resolve image ambiguity, a metallic rod is inserted along the axis of the plastic catheter. The reconstructed images, using data obtained from simulations, show successful detection and 3D localization of the accumulated calcium on the inner walls of the coronary artery in the presence of blood flow. Considering the space and shape limitations and the highly lossy biological tissue environment, the presented imaging approach is promising and offers a potential solution for accurate localization of coronary atherosclerosis during angioplasty or other related procedures.
39

Qin, Hailong, Yingcai Bi, Lin Feng, Y. F. Zhang, and Ben M. Chen. "A 3D Rotating Laser-Based Navigation Solution for Micro Aerial Vehicles in Dynamic Environments." Unmanned Systems 06, no. 04 (October 2018): 297–305. http://dx.doi.org/10.1142/s2301385018500103.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this paper, we present a 3D rotating laser-based navigation framework for micro aerial vehicles (MAVs) to fly autonomously in dynamic environments. It consists of a 6-degree-of-freedom (DoF) localization module and a 3D dynamic mapping module. A self-designed rotating laser scanner generates dense point clouds in which 3D features are extracted and aligned. The localization module solves the scan-distortion issue while estimating the 6-DoF pose of the MAV. At the same time, the dynamic mapping module eliminates dynamic trails so that a clean dense 3D map is reconstructed. Dynamic targets are detected based on spatial constraints, without the need for dense point cloud clustering. By filtering out the detected dynamic obstacles, the localization approach is robust to dynamic environment variations. To verify the robustness and effectiveness of the proposed framework, we tested the system in both a real indoor environment with dynamic obstacles and an outdoor foliage setting, using a customized MAV platform.
40

Zhang, Peng, and Wenfen Liu. "DLALoc: Deep-Learning Accelerated Visual Localization Based on Mesh Representation." Applied Sciences 13, no. 2 (January 13, 2023): 1076. http://dx.doi.org/10.3390/app13021076.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Visual localization, i.e., camera pose localization within a known three-dimensional (3D) model, is a basic component of numerous applications such as autonomous driving cars and augmented reality systems. The most widely used methods from the literature are based on local feature matching between a query image that needs to be localized and database images with known camera poses and local features. However, this approach still struggles with different illumination conditions and seasonal changes. Additionally, the scene is normally represented by a sparse structure-from-motion point cloud with corresponding local features to match. This scene representation depends heavily on the chosen local feature type, and changing the local feature type requires an expensive feature-matching step to regenerate the 3D model. Moreover, state-of-the-art matching strategies are too resource intensive for some real-time applications. Therefore, in this paper, we introduce a novel framework called deep-learning accelerated visual localization (DLALoc) based on mesh representation. In detail, we employ a dense 3D model, i.e., a mesh, to represent the scene, which can provide more robust 2D-3D matches than 3D point clouds and database images; the corresponding 3D points are obtained from the depth map rendered from the mesh. Under this scene representation, we use a pretrained multilayer perceptron combined with homotopy continuation to calculate the relative pose of the query and database images. We also use the scale consistency of 2D-3D matches to perform efficient random sample consensus and find the best 2D inlier set for the subsequent perspective-n-point localization step. Furthermore, we evaluate the proposed visual localization pipeline experimentally on the Aachen Day-Night v1.1 and RobotCar Seasons datasets. The results show that the proposed approach achieves state-of-the-art accuracy and shortens localization time by about a factor of five.
41

Sirmacek, B., R. Rashad, and P. Radl. "AUTONOMOUS UAV-BASED 3D-RECONSTRUCTION OF STRUCTURES FOR AERIAL PHYSICAL INTERACTION." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W13 (June 4, 2019): 601–5. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w13-601-2019.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Abstract. We introduce a fully automated online path planning approach for drones. This novel method relies on a stereo camera mounted at the bottom of a hexagonal drone for real-time point cloud reconstruction and localization. The real-time point cloud is analyzed in a software loop in which the entropy of the point cloud and the surface normals are calculated. The low-entropy positions (which indicate 3D areas with lower point density and less information) and the surface normals are used to calculate the next inspection point the drone should target in order to best improve the point cloud. Path planning to these automatically selected target points is performed automatically during the flight, in near real time. Initial experiments were performed in the Gazebo simulation environment within ROS, using realistic parameters of our real drone and real stereo camera.
42

Gutnik, Yevgeni, and Morel Groper. "Terminal Phase Navigation for AUV Docking: An Innovative Electromagnetic Approach." Journal of Marine Science and Engineering 12, no. 1 (January 21, 2024): 192. http://dx.doi.org/10.3390/jmse12010192.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This study introduces a groundbreaking approach for real-time 3D localization, specifically focusing on achieving seamless and precise localization during the terminal guidance phase of an autonomous underwater vehicle (AUV) as it approaches an omnidirectional docking component in an automated deployable launch and recovery system (LARS). Using the AUV’s magnetometer, an economical electromagnetic beacon embedded in the docking component, and an advanced signal processing algorithm, this novel approach ensures the accurate localization of the docking component in three dimensions without the need for direct line-of-sight contact. The method’s real-time capabilities were rigorously evaluated via simulations, prototype experiments in a controlled lab setting, and extensive full-scale pool experiments. These assessments consistently demonstrated an exceptional average positioning accuracy of under 3 cm, marking a significant advancement in AUV guidance systems.
43

Hensel, Stefan, Marin B. Marinov, and Markus Obert. "3D LiDAR Based SLAM System Evaluation with Low-Cost Real-Time Kinematics GPS Solution." Computation 10, no. 9 (September 4, 2022): 154. http://dx.doi.org/10.3390/computation10090154.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Positioning mobile systems with high accuracy is a prerequisite for intelligent autonomous behavior, both in industrial environments and in field robotics. This paper describes the setup of a robotic platform and its use for the evaluation of simultaneous localization and mapping (SLAM) algorithms. The setup uses a Husky A200 mobile robot and a LiDAR (light detection and ranging) sensor. To verify the proposed setup, different scan matching methods for odometry determination in indoor and outdoor environments are tested. An assessment of the accuracy of the baseline 3D-SLAM system and the selected evaluation system is presented by comparing different scenarios and test situations. It is shown that hdl_graph_slam, in combination with the LiDAR OS1 and the scan matching algorithms FAST_GICP and FAST_VGICP, achieves good mapping results with accuracies of up to 2 cm.
44

Szabó, István Adorján, Ildikó Kocsis, Zoltán Fogarasi, Boglárka Belényi, and Attila Frigy. "Transthoracic 3D Echocardiographic Imaging of Type A Aortic Dissection – Case Presentation." Acta Medica Marisiensis 63, no. 3 (September 1, 2017): 152–54. http://dx.doi.org/10.1515/amma-2017-0027.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In type A aortic dissection (AoD), an early and accurate diagnosis is essential to improve survival by enabling urgent surgical repair. 3D transthoracic echocardiography (3D-TTE), an advanced noninvasive imaging technique, can offer a comprehensive evaluation of the ascending aorta and aortic arch in this regard. Both modalities of real-time 3D imaging, live 3D and full-volume acquisition, proved useful in evaluating the localization and extent of AoD. Our case illustrates the utility of 3D-TTE in the complex assessment of AoD. By providing the proper anatomical dataset, 3D-TTE can considerably facilitate the diagnosis of type A AoD.
45

Li, Zhiteng, Jiannan Zhao, Xiang Zhou, Shengxian Wei, Pei Li, and Feng Shuang. "RTSDM: A Real-Time Semantic Dense Mapping System for UAVs." Machines 10, no. 4 (April 18, 2022): 285. http://dx.doi.org/10.3390/machines10040285.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Intelligent drones or flying robots play a significant role in serving our society in applications such as rescue, inspection, and agriculture. Understanding the surrounding scene is an essential capability for further autonomous tasks; intuitively, knowing the self-location of the UAV and creating a semantic 3D map are significant for fully autonomous operation. However, integrating simultaneous localization, 3D reconstruction, and semantic segmentation is a huge challenge for power-limited systems such as UAVs. To address this, we propose a real-time semantic mapping system that helps a power-limited UAV system understand its location and surroundings. The proposed approach includes a modified visual SLAM with the direct method to accelerate the computationally intensive feature-matching process, and a real-time semantic segmentation module at the back end. The semantic module runs a lightweight network, BiSeNetV2, and performs segmentation only on key frames from the front-end SLAM task. Considering fast navigation and the on-board memory resources, we provide a real-time dense-map-building module to generate an OctoMap with the segmented semantic map. The proposed system is verified in real-time experiments on a UAV platform with a Jetson TX2 as the computation unit. A frame rate of around 12 Hz, together with a semantic segmentation accuracy of around 89%, demonstrates that our proposed system is computationally efficient while providing sufficient information for fully autonomous tasks such as rescue and inspection.
46

Li, Jiang, and Zhang Lei. "3D Localization Algorithm Based on Linear Regression and Least Squares in NLOS Environments." Computer and Information Science 11, no. 4 (September 11, 2018): 1. http://dx.doi.org/10.5539/cis.v11n4p1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Based on the positive-bias property of the time-of-arrival (TOA) measurement error caused by non-line-of-sight (NLOS) propagation, a simple and effective three-dimensional (3D) geometric localization algorithm is proposed. The algorithm needs no prior knowledge of the TOA time-delay distribution; linear regression alone is used to estimate the parameters of the relationship between the NLOS distance error and the true distance, from which the approximate true distance between the mobile terminal (MT) and each base station (BS) is recovered. The 3D geometric position of the mobile terminal is then computed by the least-squares method. The experimental results show the effectiveness of the algorithm, and the positioning accuracy far exceeds the accuracy required by E-911 in NLOS environments.
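The two stages of such an algorithm, a linear-regression correction of the NLOS range bias followed by a linear least-squares position solve, can be sketched as follows. The base-station layout and the bias coefficients (slope 1.1, offset 2 m) are assumptions for illustration, not values from the paper.

```python
import numpy as np

def lsq_locate(bs, d):
    """Linear least-squares 3D position from >= 4 base stations.

    Subtracting the first range equation from the rest removes the
    quadratic terms: 2 (bs_i - bs_0) . x = d0^2 - di^2 + |bs_i|^2 - |bs_0|^2.
    """
    A = 2.0 * (bs[1:] - bs[0])
    b = (d[0]**2 - d[1:]**2
         + np.sum(bs[1:]**2, axis=1) - np.sum(bs[0]**2))
    return np.linalg.lstsq(A, b, rcond=None)[0]

bs = np.array([[0, 0, 0], [100, 0, 0], [0, 100, 0],
               [0, 0, 30], [100, 100, 30]], dtype=float)
mt = np.array([40.0, 60.0, 5.0])
true_d = np.linalg.norm(bs - mt, axis=1)
# NLOS bias modelled as a linear function of the true distance
# (assumed coefficients: slope 1.1, offset 2 m):
meas_d = 1.1 * true_d + 2.0
# The regression-estimated coefficients invert the bias:
corr_d = (meas_d - 2.0) / 1.1
print(lsq_locate(bs, corr_d))  # recovers the true position
```

In practice the slope and offset would come from the paper's regression fit rather than being known exactly, so the recovered position carries the residual regression error.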
47

Adurthi, Nagavenkat. "Scan Matching-Based Particle Filter for LIDAR-Only Localization." Sensors 23, no. 8 (April 15, 2023): 4010. http://dx.doi.org/10.3390/s23084010.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This paper deals with the development of a localization methodology for autonomous vehicles using only a 3D LIDAR sensor. In the context of this paper, localizing a vehicle in a known 3D global map of the environment is equivalent to finding the vehicle’s global 3D pose (position and orientation), in addition to other vehicle states, within this map. Once localized, the problem of tracking uses the sequential LIDAR scans to continuously estimate the states of the vehicle. While the proposed scan matching-based particle filters can be used for both localization and tracking, in this paper, we emphasize only the localization problem. Particle filters are a well-known solution for robot/vehicle localization, but they become computationally prohibitive as the states and the number of particles increases. Further, computing the likelihood of a LIDAR scan for each particle is in itself a computationally expensive task, thus limiting the number of particles that can be used for real-time performance. To this end, a hybrid approach is proposed that combines the advantages of a particle filter with a global-local scan matching method to better inform the resampling stage of the particle filter. We also use a pre-computed likelihood grid to speed up the computation of LIDAR scan likelihoods. Using simulation data of real-world LIDAR scans from the KITTI datasets, we show the efficacy of the proposed approach.
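The pre-computed likelihood grid the abstract mentions trades memory for speed: scoring a scan becomes an O(1) lookup per point instead of a nearest-neighbour search against the map. A minimal 2D sketch with hypothetical map points and grid parameters:

```python
import numpy as np

# Precomputed likelihood grid: each cell stores the likelihood of a
# LIDAR return landing there, based on distance to the nearest map point.
RES = 0.5                                   # metres per cell
map_pts = np.array([[2.0, 2.0], [4.0, 2.0], [6.0, 2.0]])
grid = np.zeros((20, 20))
for ix in range(20):
    for iy in range(20):
        cell = (np.array([ix, iy]) + 0.5) * RES
        d = np.min(np.linalg.norm(map_pts - cell, axis=1))
        grid[ix, iy] = np.exp(-0.5 * (d / 0.5) ** 2)

def scan_log_likelihood(scan_pts):
    """Score a scan with constant-time grid lookups, the speed-up a
    pre-computed likelihood grid provides over per-point searches."""
    idx = np.clip((scan_pts / RES).astype(int), 0, 19)
    return np.sum(np.log(grid[idx[:, 0], idx[:, 1]] + 1e-9))

aligned = map_pts + 0.05   # scan from a particle near the true pose
shifted = map_pts + 3.0    # scan from a badly mislocalized particle
assert scan_log_likelihood(aligned) > scan_log_likelihood(shifted)
```

Building the grid is done once, offline; at run time every particle reuses it, which is what makes evaluating many particles per scan tractable.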
48

ZHANG, Chi, Zhong YANG, Hao XU, Luwei LIAO, Tang ZHU, Guotao LI, Xin YANG, and Qiuyan ZHANG. "RRVPE: A Robust and Real-Time Visual-Inertial-GNSS Pose Estimator for Aerial Robot Navigation." Wuhan University Journal of Natural Sciences 28, no. 1 (February 2023): 20–28. http://dx.doi.org/10.1051/wujns/2023281020.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Self-localization and orientation estimation are the essential capabilities for mobile robot navigation. In this article, a robust and real-time visual-inertial-GNSS(Global Navigation Satellite System) tightly coupled pose estimation (RRVPE) method for aerial robot navigation is presented. The aerial robot carries a front-facing stereo camera for self-localization and an RGB-D camera to generate 3D voxel map. Ulteriorly, a GNSS receiver is used to continuously provide pseudorange, Doppler frequency shift and universal time coordinated (UTC) pulse signals to the pose estimator. The proposed system leverages the Kanade Lucas algorithm to track Shi-Tomasi features in each video frame, and the local factor graph solution process is bounded in a circumscribed container, which can immensely abandon the computational complexity in nonlinear optimization procedure. The proposed robot pose estimator can achieve camera-rate (30 Hz) performance on the aerial robot companion computer. We thoroughly experimented the RRVPE system in both simulated and practical circumstances, and the results demonstrate dramatic advantages over the state-of-the-art robot pose estimators.
49

Sualeh, Muhammad, and Gon-Woo Kim. "Semantics Aware Dynamic SLAM Based on 3D MODT." Sensors 21, no. 19 (September 23, 2021): 6355. http://dx.doi.org/10.3390/s21196355.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The idea of SLAM (Simultaneous Localization and Mapping) being a solved problem revolves around the static world assumption, even though autonomous systems are gaining environmental perception capabilities by exploiting the advances in computer vision and data-driven approaches. The computational demands and time complexities remain the main impediment in the effective fusion of the paradigms. In this paper, a framework to solve the dynamic SLAM problem is proposed. The dynamic regions of the scene are handled by making use of Visual-LiDAR based MODT (Multiple Object Detection and Tracking). Furthermore, minimal computational demands and real-time performance are ensured. The framework is tested on the KITTI Datasets and evaluated against the publicly available evaluation tools for a fair comparison with state-of-the-art SLAM algorithms. The results suggest that the proposed dynamic SLAM framework can perform in real-time with budgeted computational resources. In addition, the fused MODT provides rich semantic information that can be readily integrated into SLAM.
50

Aghili, Farhad. "3D simultaneous localization and mapping using IMU and its observability analysis." Robotica 29, no. 6 (December 9, 2010): 805–14. http://dx.doi.org/10.1017/s0263574710000809.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Summary: This paper investigates 3-dimensional (3D) Simultaneous Localization and Mapping (SLAM) and the corresponding observability analysis by fusing data from landmark sensors and a strap-down Inertial Measurement Unit (IMU) in an adaptive Kalman filter (KF). In addition to the vehicle's states and landmark positions, the self-tuning filter estimates the IMU calibration parameters as well as the covariance of the measurement noise. The discrete-time covariance matrix of the process noise, the state transition matrix and the observation sensitivity matrix are derived in closed form, making the filter suitable for real-time implementation. Examination of the observability of the 3D SLAM system leads to the conclusion that the system remains observable, provided that at least three known landmarks, which are not placed in a straight line, are observed.
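The predict/update cycle at the heart of a Kalman filter like the one used here can be sketched in its simplest linear form. This is a constant-velocity toy state with a position-only measurement, not the paper's full SLAM state with IMU calibration parameters and adaptive noise estimation.

```python
import numpy as np

F = np.array([[1.0, 0.1], [0.0, 1.0]])   # state transition, dt = 0.1 s
H = np.array([[1.0, 0.0]])               # we observe position only
Q = 1e-3 * np.eye(2)                     # process noise covariance
R = np.array([[0.05]])                   # measurement noise covariance

def kf_step(x, P, z):
    """One predict/update cycle of a linear Kalman filter."""
    # Predict: propagate the state and its covariance.
    x, P = F @ x, F @ P @ F.T + Q
    # Update: innovation, Kalman gain, corrected state and covariance.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.zeros(2), np.eye(2)
for t in range(50):                      # target moves at 1 m/s
    z = np.array([0.1 * (t + 1)])
    x, P = kf_step(x, P, z)
print(x)                                 # estimate near position 5 m, velocity 1 m/s
```

The paper's filter follows the same structure, but with a much larger state vector (vehicle pose, landmark positions, IMU biases) and with the closed-form matrices mentioned in the abstract taking the place of the constant F, H, Q here.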