Academic literature on the topic '3D real-time localization'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic '3D real-time localization.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "3D real-time localization":

1

Liu, M. H., Sheng Zhang, and Yi Pan. "UWB-based Real-Time 3D High Precision Localization System." Journal of Physics: Conference Series 2290, no. 1 (June 1, 2022): 012082. http://dx.doi.org/10.1088/1742-6596/2290/1/012082.

Abstract:
Ultra-wideband (UWB) positioning, featuring high accuracy and low cost, has been drawing increasing interest. Angle of arrival (AOA), time difference of arrival (TDOA), and time of arrival (TOA) are common positioning methods. With DecaWave DW1000 ranging systems, which apply the TOA method, sub-meter accuracy can be achieved easily. However, the traditional DW1000 UWB localization system is unscalable, and its ranging stability leaves room for improvement. In this paper, a new Scalable Multi-Base Multi-Tag (SMBMT) scheme based on the traditional DW1000 UWB localization system is proposed to improve the system's efficiency and scalability. An effective data-processing algorithm, the Data Kalman Filter, is then introduced to diminish ranging error using ranging information only. By changing the system equation, the Data KF is also able to optimize the dynamic trajectory. Finally, the proposed system is tested and the algorithm is analysed. The results show that applying the SMBMT scheme not only improves the scalability and ranging efficiency of the UWB localization system but also increases positioning accuracy and stability.
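The idea of smoothing raw UWB range measurements with a Kalman filter before positioning can be illustrated with a scalar filter. This is a minimal sketch under assumed noise parameters; the function name and the `q`/`r` values are illustrative, not the paper's exact Data KF.

```python
def kalman_smooth_ranges(ranges, q=1e-3, r=0.05):
    """Smooth a stream of noisy UWB range measurements with a scalar
    Kalman filter.

    q: process-noise variance (how fast the true range may drift);
    r: measurement-noise variance of the ranging hardware.
    Both values are illustrative, not taken from the paper.
    """
    x = ranges[0]          # state estimate: filtered range (metres)
    p = 1.0                # variance of the estimate
    smoothed = []
    for z in ranges:
        p += q                     # predict: the true range may have drifted
        k = p / (p + r)            # Kalman gain
        x += k * (z - x)           # correct with the new measurement
        p *= 1.0 - k
        smoothed.append(x)
    return smoothed
```

For a static tag the filter strongly attenuates ranging noise; raising `q` makes it follow a moving tag more closely at the cost of less smoothing.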
2

Lynen, Simon, Bernhard Zeisl, Dror Aiger, Michael Bosse, Joel Hesch, Marc Pollefeys, Roland Siegwart, and Torsten Sattler. "Large-scale, real-time visual–inertial localization revisited." International Journal of Robotics Research 39, no. 9 (July 7, 2020): 1061–84. http://dx.doi.org/10.1177/0278364920931151.

Abstract:
The overarching goals in image-based localization are scale, robustness, and speed. In recent years, approaches based on local features and sparse 3D point-cloud models have both dominated the benchmarks and seen successful real-world deployment. They enable applications ranging from robot navigation, autonomous driving, virtual and augmented reality to device geo-localization. Recently, end-to-end learned localization approaches have been proposed which show promising results on small-scale datasets. However, the positioning accuracy, scalability, latency, and compute and storage requirements of these approaches remain open challenges. We aim to deploy localization at a global scale where one thus relies on methods using local features and sparse 3D models. Our approach spans from offline model building to real-time client-side pose fusion. The system compresses the appearance and geometry of the scene for efficient model storage and lookup leading to scalability beyond what has been demonstrated previously. It allows for low-latency localization queries and efficient fusion to be run in real-time on mobile platforms by combining server-side localization with real-time visual–inertial-based camera pose tracking. In order to further improve efficiency, we leverage a combination of priors, nearest-neighbor search, geometric match culling, and a cascaded pose candidate refinement step. This combination outperforms previous approaches when working with large-scale models and allows deployment at unprecedented scale. We demonstrate the effectiveness of our approach on a proof-of-concept system localizing 2.5 million images against models from four cities in different regions of the world achieving query latencies in the 200 ms range.
3

Li, Wei, Yi Wu, Chunlin Shen, and Huajun Gong. "Robust 3D Surface Reconstruction in Real-Time with Localization Sensor." IEICE Transactions on Information and Systems E101.D, no. 8 (August 1, 2018): 2168–72. http://dx.doi.org/10.1587/transinf.2018edl8056.

4

Mair, Elmar, Klaus H. Strobl, Tim Bodenmüller, Michael Suppa, and Darius Burschka. "Real-time Image-based Localization for Hand-held 3D-modeling." KI - Künstliche Intelligenz 24, no. 3 (May 26, 2010): 207–14. http://dx.doi.org/10.1007/s13218-010-0037-z.

5

Będkowski, Janusz, Andrzej Masłowski, and Geert De Cubber. "Real time 3D localization and mapping for USAR robotic application." Industrial Robot: An International Journal 39, no. 5 (August 17, 2012): 464–74. http://dx.doi.org/10.1108/01439911211249751.

6

Baeck, P. J., N. Lewyckyj, B. Beusen, W. Horsten, and K. Pauly. "Drone Based Near Real-Time Human Detection with Geographic Localization." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-3/W8 (August 20, 2019): 49–53. http://dx.doi.org/10.5194/isprs-archives-xlii-3-w8-49-2019.

Abstract:
Detection of humans, e.g. for search and rescue operations, has been enabled by the availability of compact, easy-to-use cameras and drones. On the other hand, aerial photogrammetry techniques for inspection applications allow for precise geographic localization and the generation of an overview orthomosaic and 3D terrain model. The proposed solution is based on nadir drone imagery and combines both deep learning and photogrammetric algorithms to detect people and position them with geographical coordinates on an overview orthomosaic and 3D terrain map. The drone image processing chain is fully automated and near real-time and therefore allows search and rescue teams to operate more efficiently in difficult-to-reach areas.
7

Hauser, Fabian, and Jaroslaw Jacak. "Real-time 3D single-molecule localization microscopy analysis using lookup tables." Biomedical Optics Express 12, no. 8 (July 16, 2021): 4955. http://dx.doi.org/10.1364/boe.424016.

8

Li, Yiming, Markus Mund, Philipp Hoess, Joran Deschamps, Ulf Matti, Bianca Nijmeijer, Vilma Jimenez Sabinina, Jan Ellenberg, Ingmar Schoen, and Jonas Ries. "Real-time 3D single-molecule localization using experimental point spread functions." Nature Methods 15, no. 5 (April 9, 2018): 367–69. http://dx.doi.org/10.1038/nmeth.4661.

9

Zhu, Wenjun, Peng Wang, Rui Li, and Xiangli Nie. "Real-time 3D work-piece tracking with monocular camera based on static and dynamic model libraries." Assembly Automation 37, no. 2 (April 3, 2017): 219–29. http://dx.doi.org/10.1108/aa-02-2017-018.

Abstract:
Purpose: This paper aims to propose a novel real-time three-dimensional (3D) model-based work-piece tracking method with a monocular camera for high-precision assembly. Tracking 3D work-pieces at real-time speed is becoming more and more important for industrial tasks such as work-piece grasping and assembly, especially in complex environments. Design/methodology/approach: A three-step process is provided, i.e. the offline static global library generation process, the online dynamic local library updating and selection process, and the 3D work-piece localization process. In the offline static global library generation process, the computer-aided design models of the work-piece are used to generate a set of discrete two-dimensional (2D) hierarchical-view matching libraries. In the online dynamic library updating and selection process, the previous 3D location of the work-piece is used to predict the following location range, and a discrete matching library with a small number of 2D hierarchical views is selected from the dynamic local library for localization. The work-piece is then localized with high precision at real-time speed in the 3D work-piece localization process. Findings: The method is suitable for texture-less work-pieces in industrial applications. Originality/value: The small range of the library enables real-time matching. Experimental results demonstrate the high accuracy and high efficiency of the proposed method.
10

Feng, Sheng, Chengdong Wu, Yunzhou Zhang, and Shigen Shen. "Collaboration calibration and three-dimensional localization in multi-view system." International Journal of Advanced Robotic Systems 15, no. 6 (November 1, 2018): 172988141881377. http://dx.doi.org/10.1177/1729881418813778.

Abstract:
In this research, the authors address collaboration calibration and the real-time three-dimensional (3D) localization problem in a multi-view system. The 3D localization method is proposed to fuse the two-dimensional image coordinates from multiple views and provide the 3D space location in real time. It is a fundamental solution for obtaining the 3D location of a moving object in the field of computer vision. An improved common-perpendicular centroid algorithm is presented to reduce the side effect of shadow detection and improve localization accuracy. The collaboration calibration is used to generate the intrinsic and extrinsic parameters of the multi-view cameras synchronously. The experimental results show that the algorithm achieves accurate positioning in indoor multi-view monitoring while reducing complexity.
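The geometric core of common-perpendicular methods can be sketched as follows: given the viewing rays of the same object from two calibrated cameras, the midpoint of the shortest segment joining the two rays estimates the object's 3D position. This is a generic sketch of that idea, not the paper's improved centroid algorithm.

```python
import numpy as np

def common_perpendicular_midpoint(o1, d1, o2, d2):
    """Midpoint of the common perpendicular between two viewing rays.

    o1, o2: camera centres; d1, d2: ray directions. With noisy rays the
    two closest points differ, and their midpoint is a simple estimate of
    the observed 3D point.
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b            # zero when the rays are parallel
    s = (b * e - c * d) / denom      # parameter along ray 1
    t = (a * e - b * d) / denom      # parameter along ray 2
    p1 = o1 + s * d1                 # closest point on ray 1
    p2 = o2 + t * d2                 # closest point on ray 2
    return (p1 + p2) / 2.0
```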

Dissertations / Theses on the topic "3D real-time localization":

1

Lee, Young Jin. "Real-Time Object Motion and 3D Localization from Geometry." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1408443773.

2

Uhercik, Marian. "Surgical tools localization in 3D ultrasound images." PhD thesis, INSA de Lyon, 2011. http://tel.archives-ouvertes.fr/tel-00735702.

Abstract:
This thesis deals with automatic localization of thin surgical tools, such as needles or electrodes, in 3D ultrasound images. Precise and reliable localization is important for medical interventions such as needle biopsy or electrode insertion into tissue. The reader is introduced to the basics of medical ultrasound (US) imaging. State-of-the-art localization methods are reviewed in the work. Many methods, such as the Hough transform (HT) or Parallel Integral Projection (PIP), are based on projections. As the existing PIP implementations are relatively slow, we suggest an acceleration using a multiresolution approach. We propose a model-fitting approach that uses random sample consensus (RANSAC) and local optimization. It is a fast method suitable for real-time use, and it is robust to the presence of other high-intensity structures in the background. We propose two new shape and appearance models of the tool in 3D US images. Tool localization can be improved by exploiting its tubularity. We propose a tool model which uses line filtering and incorporate it into the model-fitting scheme. The robustness of this localization algorithm is improved at the expense of additional pre-processing time. Real-time localization using the shape model is demonstrated by an implementation on the 3D US scanner Ultrasonix RP. All proposed methods were tested on simulated data, phantom US data (a tissue substitute), and real US data of breast tissue with a biopsy needle. The proposed methods had comparable accuracy and fewer failures than the state-of-the-art projection-based methods.
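The RANSAC model-fitting step can be sketched as a 3D line fit: repeatedly hypothesize the tool axis from two random points and keep the hypothesis with the most inliers. A minimal sketch only; the thesis additionally uses local optimization and richer shape and appearance models.

```python
import random
import numpy as np

def ransac_line_3d(points, iters=200, tol=0.5, seed=0):
    """Fit a 3D line (e.g. a needle axis) to a point cloud with RANSAC.

    Returns (inlier_count, (origin, unit_direction)) of the best
    candidate. `tol` is the inlier distance threshold (illustrative).
    """
    rng = random.Random(seed)
    best = (0, None)
    for _ in range(iters):
        i, j = rng.sample(range(len(points)), 2)
        o = points[i]
        d = points[j] - points[i]
        n = np.linalg.norm(d)
        if n < 1e-9:
            continue                 # degenerate sample
        d = d / n
        # point-to-line distance: || (x - o) - ((x - o)@d) d ||
        v = points - o
        dist = np.linalg.norm(v - np.outer(v @ d, d), axis=1)
        inliers = int((dist < tol).sum())
        if inliers > best[0]:
            best = (inliers, (o, d))
    return best
```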
3

Picard, Quentin. "Proposition de mécanismes d'optimisation des données pour la perception temps-réel dans un système embarqué hétérogène" [Proposal of data-optimization mechanisms for real-time perception in a heterogeneous embedded system]. Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG039.

Abstract:
The development of autonomous systems creates ever higher needs for environmental perception in embedded electronic systems. Autonomous cars, drones, and mixed reality headsets have limited form factors and a restricted power budget for real-time performance. For instance, those use cases have budgets in the range of 300W-10W, 15W-10W and 10W-10mW respectively. This thesis focuses on autonomous and mobile systems with a budget of 10mW to 15W that use image sensors and an inertial measurement unit (IMU). Simultaneous Localization And Mapping (SLAM) provides autonomous and mobile systems with accurate and robust perception of the environment in real time, without prior knowledge. The thesis aims at real-time execution of the whole SLAM system, composed of advanced perception functions ranging from localization to 3D reconstruction, on hardware with restricted resources. In this context, two main questions are raised to address the challenges in the literature. How can the resource requirements of these advanced perception functions be reduced? What is the right partitioning of the SLAM pipeline for a heterogeneous system integrating several computing units, from the chip embedded in the imager, to near-sensor processing (FPGA), to the embedded platform (ARM, embedded GPU)? The first issue addressed in the thesis is the need to reduce the hardware resources used by the SLAM pipeline, from the sensor output to the 3D reconstruction. In this regard, the work described in the manuscript provides two main contributions. The first concerns processing in the chip embedded in the imager, affecting the image characteristics by reducing the pixel dynamic range. The second affects the management of the image flow injected into the SLAM pipeline through near-sensor processing.
The first contribution aims at reducing the memory footprint of SLAM algorithms by evaluating the impact of pixel dynamic-range reduction on the accuracy and robustness of real-time localization and 3D reconstruction. The experiments show that the input data can be reduced by up to 75%, corresponding to less than 2 bits per pixel, while maintaining accuracy similar to the 8-bits-per-pixel baseline. These results were obtained by evaluating the accuracy and robustness of four SLAM algorithms on two databases. The second contribution aims at reducing the amount of data injected into SLAM with a decimation strategy, called adaptive filtering, that controls the input frame rate. Images are initially injected at a constant rate (20 frames per second). This consumes energy, memory, and bandwidth, and increases computational complexity. Can this amount of data be reduced? In SLAM, the accuracy and the number of operations depend strongly on the movement of the system. Using the linear and angular accelerations provided by the IMU, images are injected according to the system's motion. These key images are selected by the adaptive filtering (AF) approach. Although the results depend on the difficulty of the chosen databases, the experiments show that AF allows the decimation of up to 80% of the images while maintaining a localization error that is low and similar to the baseline. The study of the memory impact in the embedded context shows that peak memory consumption is reduced by up to 92%.
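The adaptive-filtering idea, injecting a frame only when the IMU reports enough motion, can be sketched as follows. The thresholds, the max-gap fallback, and the function name are illustrative assumptions, not the thesis's exact decimation rule.

```python
def adaptive_filter(frames, imu, accel_thresh=0.3, gyro_thresh=0.2, max_gap=10):
    """Motion-driven frame decimation for a SLAM front end.

    frames: frame ids arriving at the sensor's constant rate (e.g. 20 fps);
    imu: per-frame (linear_accel, angular_rate) magnitudes.
    A frame is kept when the platform moves enough, or when `max_gap`
    frames in a row were dropped, so the SLAM back end never starves.
    """
    kept, since_last = [], 0
    for frame, (acc, gyr) in zip(frames, imu):
        since_last += 1
        moving = acc > accel_thresh or gyr > gyro_thresh
        if moving or since_last >= max_gap:
            kept.append(frame)
            since_last = 0
    return kept
```

On a mostly static sequence this drops the large majority of frames while still sampling every burst of motion, which is the behaviour behind the reported 80% decimation.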
4

Zarader, Pierre. "Transcranial ultrasound tracking of a neurosurgical microrobot." Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS054.

Abstract:
With the aim of treating brain tumors that are difficult to access with current surgical tools, Robeauté is developing an innovative microrobot to navigate deep brain areas with minimal invasiveness. The aim of this thesis was to develop and validate a transcranial ultrasound-based tracking system for the microrobot, in order to enable robotic control and thus guarantee both the safety and the effectiveness of the intervention. The proposed approach consists of positioning three ultrasound emitters on the patient's head and embedding an ultrasound receiver on the microrobot. Knowing the speed of sound in biological tissue and the skull thickness crossed, it is possible to estimate the distances from the emitters to the receiver by time-of-flight measurements, and to deduce the receiver's 3D position by trilateration. A proof of concept was first carried out using a skull phantom of constant thickness, demonstrating submillimeter localization accuracy. To approach a clinical setting, the system was then evaluated using a calvaria phantom whose thickness and speed of sound in front of each emitter were deduced from a CT scan. The system demonstrated a mean localization accuracy of 1.5 mm, i.e. a degradation in accuracy of 1 mm compared with tracking through the constant-thickness skull phantom, explained by the uncertainty introduced by the heterogeneous thickness of the calvaria.
Finally, three preclinical tests, without the possibility of assessing localization error, were carried out: (i) a post-mortem test on a human, (ii) a post-mortem test on a ewe, and (iii) an in vivo test on a ewe. Further improvements to the tracking system have been proposed, such as (i) the use of CT-based transcranial ultrasound propagation simulation to account for skull heterogeneities, (ii) the miniaturization of the ultrasound sensor embedded in the microrobot, and (iii) the integration of ultrasound imaging to visualize the local vascularization around the microrobot, thereby reducing the risk of lesions and allowing the detection of possible pathological angiogenesis.
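The trilateration step, recovering a 3D position from emitter-receiver distances, can be sketched with a small Gauss-Newton least-squares solver. A generic sketch only: the thesis additionally corrects each time of flight for skull thickness and local speed of sound, which is not modelled here.

```python
import numpy as np

def trilaterate(emitters, distances, x0, iters=20):
    """Estimate a 3D position from distances to known emitters by
    Gauss-Newton least squares.

    With only three emitters there are two mirror solutions about their
    plane, so the initial guess x0 must lie on the correct side (below
    the skull, in the thesis's setting).
    """
    x = np.asarray(x0, dtype=float)
    emitters = np.asarray(emitters, dtype=float)
    for _ in range(iters):
        diff = x - emitters                  # (n, 3)
        r = np.linalg.norm(diff, axis=1)     # predicted distances
        J = diff / r[:, None]                # Jacobian of ||x - e_i||
        residual = r - distances
        x -= np.linalg.solve(J.T @ J, J.T @ residual)
    return x
```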
5

Huang, Liang-zheng (黃良正). "3D Feature-based Localization for Mobile Robot in Real-time Environment." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/56540472759084742787.

Abstract:
Master's thesis
National Yunlin University of Science and Technology
Master's Program, Department of Computer Science and Information Engineering
ROC academic year 101 (2012-2013)
In this thesis, a 3D feature-based localization method for mobile robots in real-time environments is proposed. First, a 3D environment representation is obtained from the point-cloud images captured by the LIDAR mounted on a robot. The proposed method then identifies predetermined 3D objects using the Fast Point Feature Histogram (FPFH) algorithm. Since the locations of the objects are known, the current pose of the robot can be computed from the geometric relation between the captured point-cloud image and the 3D objects. The proposed method is more precise than methods based on 2D images because 3D images carry more information. Furthermore, the proposed method also reduces image deformation and position displacement.
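Once FPFH matching has paired scanned points with points on a known object, the robot pose follows from a least-squares rigid alignment. A sketch using the standard Kabsch/Procrustes solution; the thesis's full pipeline is assumed to be more involved.

```python
import numpy as np

def rigid_pose(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst.

    src: matched points in the robot's scan frame;
    dst: the corresponding points in the map frame.
    Returns a proper rotation R (det = +1) and translation t.
    """
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # correct an improper (reflection) solution if it occurs
    S = np.diag([1.0, 1.0, float(np.sign(np.linalg.det(Vt.T @ U.T)))])
    R = Vt.T @ S @ U.T
    t = cd - R @ cs
    return R, t
```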
6

Zhao, Shi. "3D real-time stockpile mapping and modelling with accurate quality calculation using voxels." Thesis, 2016. http://hdl.handle.net/2440/103494.

Abstract:
Stockpile blending is widely accepted as an effective method to reduce short-term quality variations and optimise the homogeneity of bulk materials such as iron ore. Currently, both industry practice and academic research focus on planning, scheduling and optimisation algorithms to stack a stockpile that meets predefined quality requirements; that is, 'selective stacking' algorithms are used to optimise the quality of a stockpile and improve operational efficiency. However, it has been identified that stockpiled products are currently being reclaimed at approximately 50% of their potential engineering productive rates after applying such 'selective stacking' methods at most iron ore loading ports in Australia. There is an evident lack of solutions to this issue in the literature. This study focuses on stockpile modelling techniques to estimate the quality of a stockpile in both stacking and reclaiming operations for consistent and efficient product quality planning and control. The main objective of this work is to build an up-to-date geometric model of a stockpile using laser scanning data and apply this model to quality calculations throughout the stacking and reclaiming operations. The significant elements of the proposed research are to: (1) upgrade a stockyard machine used to stack or reclaim the stockpile (i.e. a Bucket Wheel Reclaimer) into a mobile scanning device using Kalman filtering to measure the stockpile surface continuously; (2) build a 3D stockpile model from the measurement data in real time using polynomial and B-spline surface modelling techniques, and use this model to calculate the quality of a stockpile with a high degree of accuracy when the quality composition is available; (3) associate the 3D model with the reclaiming-machine model to achieve autonomous operation and predict the quality of the reclaimed material through voxelization techniques.
In order to validate the developed techniques, several experimental tests were conducted using simulated and real scenarios. It was verified that the proposed 3D stockpile modelling algorithms are adequate to represent the real geometric shape with great accuracy; the percentage error in volume is below 0.2%. The combination of the stockpile and BWR (Bucket Wheel Reclaimer) models therefore enables reclaiming to be conducted automatically. To the best of the author's knowledge, this is the first time that a stockpile has been modelled automatically in real time, and the integration of the stockpile and BWR models yields a novel stockpile management model that allows true reclaiming automation. Thus, the quality of the material composition after every stacking/reclaiming operation is calculated from the geometric shape/volume, density, and quality assay results. Through this project, the quality of a stockpile and its distribution inside the stockpile can be tracked continuously, and the stacking/reclaiming trajectory of the machine can be controlled precisely. With such information available, it then becomes possible to develop proactive stacking or reclaiming strategies with more accurate product quality grade planning and control. Therefore, the workload of current selective stacking and reactive reclaiming algorithms can be relieved, and production rates can be improved with good output product quality control.
Thesis (Ph.D.) -- University of Adelaide, School of Mechanical Engineering, 2016.
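The voxel-based quality bookkeeping described above can be sketched as follows: fill each grid column up to the scanned surface height and volume-weight the assay grades of the filled voxels. An illustrative sketch only, not the thesis's exact model.

```python
import numpy as np

def voxel_volume_and_grade(height, grades, cell=1.0, dz=1.0):
    """Voxelize a stockpile from a surface height map and per-voxel grades.

    height: 2D array of surface heights on a regular (cell x cell) grid;
    grades: 3D array where grades[i, j, k] is the assay value of voxel k
    in column (i, j). Returns (total_volume, volume-weighted mean grade).
    """
    filled = 0
    weighted = 0.0
    nz = grades.shape[2]
    for (i, j), h in np.ndenumerate(height):
        k = min(int(h / dz), nz)         # voxels filled in this column
        filled += k
        weighted += grades[i, j, :k].sum()
    volume = filled * cell * cell * dz
    mean_grade = weighted / filled if filled else 0.0
    return volume, mean_grade
```

After each stacking or reclaiming pass, re-running this over the updated height map tracks both the remaining volume and the blended grade of what is left.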
7

Tefera, Yonas Teodros. "Perception-driven approaches to real-time remote immersive visualization." Doctoral thesis, 2022. http://hdl.handle.net/11562/1070226.

Abstract:
In remote immersive visualization systems, real-time 3D perception through RGB-D cameras, combined with modern Virtual Reality (VR) interfaces, enhances the user's sense of presence in a remote scene through 3D reconstruction rendered in an immersive display. This is particularly valuable when there is a need to visualize, explore, and perform tasks in environments that are inaccessible, hazardous, or distant. However, a remote visualization system requires that the entire pipeline, from 3D data acquisition to VR rendering, satisfy demands on speed, throughput, and visual realism. Especially when using point clouds, there is a fundamental quality difference between the acquired data of the physical world and the displayed data, because network latency and throughput limitations negatively impact the sense of presence and provoke cybersickness. This thesis presents state-of-the-art research addressing these problems by taking the human visual system as inspiration, from sensor data acquisition to VR rendering. The human visual system does not have uniform vision across the field of view: it has the sharpest visual acuity at the center of the field of view, and acuity falls off towards the periphery. Peripheral vision provides lower resolution to guide eye movements so that central vision visits all the crucial parts of the scene. As a first contribution, the thesis developed remote visualization strategies that utilize the acuity fall-off to facilitate the processing, transmission, buffering, and rendering in VR of 3D reconstructed scenes while simultaneously reducing throughput requirements and latency. As a second contribution, the thesis looked into attentional mechanisms to select and draw user engagement to specific information from the dynamic spatio-temporal environment.
It proposed a strategy to analyze the remote scene with respect to its 3D structure, its layout, and the spatial, functional, and semantic relationships between objects in the scene. The strategy primarily focuses on analyzing the scene with models of human visual perception. It allocates a larger proportion of computational resources to objects of interest and creates a more realistic visualization. As a supplementary contribution, a new volumetric point-cloud density-based Peak Signal-to-Noise Ratio (PSNR) metric is proposed to evaluate the introduced techniques. An in-depth evaluation of the presented systems, a comparative examination of the proposed point-cloud metric, user studies, and experiments demonstrated that the methods introduced in this thesis are visually superior while significantly reducing latency and throughput.
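The acuity fall-off strategy can be sketched as eccentricity-dependent point-cloud subsampling: points near the gaze direction are kept at full density, and peripheral points are kept with a probability that decays with eccentricity. A toy sketch of the idea; the fovea size, the decay law, and the density floor are assumptions, and the thesis's pipeline also covers transmission and rendering.

```python
import math
import random

def foveated_subsample(points, gaze_dir, fovea_deg=10.0, floor=0.05, seed=0):
    """Keep points with probability depending on angular distance from gaze.

    points: iterable of 3D points (viewer at the origin);
    gaze_dir: 3D gaze direction. Inside the fovea every point is kept;
    outside, the keep probability decays as fovea_deg / eccentricity,
    never dropping below `floor`.
    """
    rng = random.Random(seed)
    gn = math.sqrt(sum(c * c for c in gaze_dir))
    kept = []
    for p in points:
        pn = math.sqrt(sum(c * c for c in p)) or 1e-12
        cosang = sum(a * b for a, b in zip(p, gaze_dir)) / (pn * gn)
        ecc = math.degrees(math.acos(max(-1.0, min(1.0, cosang))))
        keep_p = 1.0 if ecc <= fovea_deg else max(floor, fovea_deg / ecc)
        if rng.random() < keep_p:
            kept.append(p)
    return kept
```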

Book chapters on the topic "3D real-time localization":

1

Nagy, Balázs, and Csaba Benedek. "Real-Time Point Cloud Alignment for Vehicle Localization in a High Resolution 3D Map." In Lecture Notes in Computer Science, 226–39. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-11009-3_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Szklarski, Jacek, Cezary Ziemiecki, Jacek Szałtys, and Marian Ostrowski. "Real-Time 3D Mapping with Visual-Inertial Odometry Pose Coupled with Localization in an Occupancy Map." In Advances in Intelligent Systems and Computing, 388–97. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-13273-6_37.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Li, Ruijiang, Xun Jia, John H. Lewis, Xuejun Gu, Michael Folkerts, Chunhua Men, and Steve B. Jiang. "Single-Projection Based Volumetric Image Reconstruction and 3D Tumor Localization in Real Time for Lung Cancer Radiotherapy." In Medical Image Computing and Computer-Assisted Intervention – MICCAI 2010, 449–56. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15711-0_56.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Chen, Alvin I., Max L. Balter, Timothy J. Maguire, and Martin L. Yarmush. "3D Near Infrared and Ultrasound Imaging of Peripheral Blood Vessels for Real-Time Localization and Needle Guidance." In Lecture Notes in Computer Science, 388–96. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-46726-9_45.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Florido, Alberto Martín, Francisco Rivas Montero, and Jose María Cañas Plaza. "Robust 3D Visual Localization Based on RTABmaps." In Advancements in Computer Vision and Image Processing, 1–17. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-5628-2.ch001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Visual localization is a key capability in robotics and in augmented reality applications. It estimates the 3D position of a camera in real time just by analyzing the image stream. This chapter presents a robust map-based 3D visual localization system. It relies on maps of the scenarios built with the well-known tool RTABmap. It consists of three steps in a continuous loop: feature point computation on the input frame, matching with feature points on the map keyframes (using kNN and outlier rejection), and final 3D estimation using PnP geometry and optimization. The system has been experimentally validated in several scenarios. In addition, an empirical study of the effect of three matching outlier rejection mechanisms (ratio test, fundamental matrix, and homography matrix) on the quality of the estimated 3D localization has been performed. The outlier rejection mechanisms, combined or alone, reduce the number of matched feature points but increase their quality, and thus the accuracy of the 3D estimation. The combination of ratio test and homography matrix provides the best results.
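The matching-with-outlier-rejection step described in the abstract above can be sketched in a few lines. This is not the chapter's implementation; it is a minimal pure-Python illustration of k=2 kNN matching followed by Lowe's ratio test (in practice one would use OpenCV's `cv2.BFMatcher.knnMatch` and feed the surviving correspondences to `cv2.solvePnPRansac`). The function name and the 0.75 threshold are illustrative assumptions.

```python
def ratio_test_matches(query_desc, map_desc, ratio=0.75):
    """For each query descriptor, find its two nearest map descriptors (k=2 kNN)
    and keep the match only if the best distance is clearly smaller than the
    second best (Lowe's ratio test), rejecting ambiguous correspondences."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    matches = []
    for qi, q in enumerate(query_desc):
        ranked = sorted((dist(q, m), mi) for mi, m in enumerate(map_desc))
        best, second = ranked[0], ranked[1]
        if best[0] < ratio * second[0]:
            matches.append((qi, best[1]))  # (query index, map index)
    return matches
```

Descriptors whose best and second-best matches are similar in distance are dropped, which is exactly the trade-off the chapter reports: fewer matches, but of higher quality.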
6

Lu, Qinghua, Guanhong Zhang, Xueqian Mao, and Xia Xu. "Research on High-Altitude Estimation Method Based on Fusion of Semantic Information with ElasticFusion." In Advances in Transdisciplinary Engineering. IOS Press, 2024. http://dx.doi.org/10.3233/atde240285.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In the context of autonomous exploration and real-time 3D reconstruction in complex indoor environments for quadruped robots, the accurate extraction of elevation information across different terrains is a crucial technology. Currently, in elevation estimation solutions that utilize visual sensors, directly processing dense point cloud map data results in a massive computational load, which is not suitable for robot navigation applications. On the other hand, 2D occupancy grid maps, while smaller in data scale, struggle to accurately represent complex terrain features and limit the robot’s autonomy when exploring unknown environments. In this article, addressing the elevation estimation problem in complex indoor terrain environments within buildings, we employ the ElasticFusion SLAM (Simultaneous Localization and Mapping) approach for real-time, high-quality 3D reconstruction and localization. We then process and analyze the reconstructed point cloud map to identify terrain features such as steps, stairs, and obstacles. Furthermore, by processing the point cloud information, we extract accurate semantic information. This information serves as the basis for obtaining local elevation data, enabling quadruped robots to create dense maps in a global terrain context and make judgments by integrating semantic information with real-time local elevation data. This facilitates backend decision-making and allows robots to autonomously perform tasks such as crossing thresholds, crawling, climbing stairs, and maneuvering around obstacles. These capabilities provide essential data support for quadruped robots to autonomously navigate and explore dynamic, unfamiliar indoor environments in the future.
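The article's ElasticFusion-plus-semantics pipeline is not reproduced here; as a hedged sketch of the underlying idea only, the snippet below shows the basic 2.5D elevation-grid reduction that makes a dense cloud tractable for navigation: bin points by (x, y) cell and keep the highest z per cell. The function name and cell size are illustrative assumptions.

```python
def elevation_map(points, cell=0.5):
    """Collapse a 3D point cloud into a 2.5D elevation grid: for each (x, y)
    cell keep the highest z value, a compact stand-in for the dense cloud
    that is cheap enough for real-time legged-robot navigation."""
    grid = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        if key not in grid or z > grid[key]:
            grid[key] = z
    return grid
```

Terrain features such as steps then show up as abrupt elevation changes between neighbouring cells, which downstream semantic labelling can classify.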
7

Yang, Yingyi, Hao Wu, Fan Yang, Xiaoming Mai, and Hui Chen. "Design and Implementation of Substation Operation Safety Monitoring and Management System Based on Three-Dimensional Reconstruction." In Machine Learning and Artificial Intelligence. IOS Press, 2020. http://dx.doi.org/10.3233/faia200793.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In order to reduce operational risks and improve the level of risk management and control in substations, a substation operation safety monitoring and management system (3D2S2M) has been built based on three-dimensional (3D) laser modeling technology. In this paper, we introduce how to build such a system and describe its implementation details. A 3D lidar scanning technology is used to perform a holographic scan of the whole internal area of a substation to obtain color point cloud data of buildings and all equipment. Then, a novel 3D visualization safety monitoring and management system, named 3D2S2M, is developed by performing a 3D reconstruction of the point cloud data. Based on the real 3D scene model of 3D2S2M, 3D distance measurement replaces manual on-site investigation to improve operation and maintenance efficiency. In addition, a real-time high-accuracy localization method is proposed to identify and analyze the positioning and behavior of personnel and the movement trajectories of equipment. By combining positioning information with the electronic fence used in 3D2S2M, the risk levels of personnel (or equipment) are evaluated and corresponding alarms are issued to prevent dangerous behavior, thereby reducing operational risk in the substation.
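The paper does not detail the fence geometry; purely as a hedged illustration of the electronic-fence idea described above, the sketch below checks whether a tracked 3D position lies inside an axis-aligned restricted zone, optionally expanded by a safety margin. The function name, box assumption, and margin parameter are all illustrative.

```python
def fence_alarm(position, fence_min, fence_max, margin=0.0):
    """Return True when a tracked 3D position falls inside an axis-aligned
    'electronic fence' (restricted zone) expanded by a safety margin,
    i.e. when an alarm should be raised."""
    return all(lo - margin <= p <= hi + margin
               for p, lo, hi in zip(position, fence_min, fence_max))
```

In a system like 3D2S2M, the real-time localization stream would feed each personnel or equipment position through such a check, with the margin graded by risk level.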
8

Lou, E., A. Chan, B. Coutts, E. Parent, and J. Mahood. "Accuracy of pedicle localization using a 3D ultrasound navigator on vertebral phantoms for posterior spinal surgery." In Studies in Health Technology and Informatics. IOS Press, 2021. http://dx.doi.org/10.3233/shti210443.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Severe adolescent idiopathic scoliosis (AIS) requires surgery to halt curve progression, and accurate insertion of pedicle screws is important. This study reports a newly developed 3D ultrasound (3DUS) system to localize pedicles intraoperatively and to register a pre-op 3D vertebral model to the surface displayed for navigation. The objective was to determine the speed of the custom 3DUS navigator and the accuracy of pedicle probe placement. The developed 3DUS navigator integrated an ultrasound scanner with motion capture cameras. Two adolescent 3D-printed spine models, T2-T8 and T7-T11, were modified to include pedicle holes with known trajectories and mounted on a high-precision LEGO pegboard in a water bath for imaging. Calibration of the motion cameras and the 3DUS was conducted prior to the study. A total of 27 scans (3 individual scans each of the T3 to T11 vertebrae) were performed to validate repeatability. Three accuracy tests were completed, varying a) vertebral orientation, b) position, and c) a combination of position and orientation. Based on all experiments, the acquisition-to-display time was 18.9±3.1 s. The repeatability of the trajectory error and positional error was 0.5±0.2° and 0.3±0.1 mm, respectively. The trajectory and positional errors were a) 1.4±0.9° and 0.5±0.4 mm for orientation, b) 1.4±0.8° and 0.3±0.3 mm for position, and c) 2.0±0.8° and 0.5±0.5 mm for combined orientation/position. These results demonstrated that a high-precision real-time 3DUS navigator for screw placement in scoliosis surgery is feasible. The next step will be to study the effect of surrounding soft tissues on navigation accuracy.
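The abstract above does not specify how trajectory and positional errors were computed; a plausible reading, sketched here purely for illustration, is the angle between the planned and navigated pedicle axes plus the Euclidean distance between entry points. The function name and signature are assumptions, not the study's code.

```python
import math

def trajectory_errors(planned_dir, measured_dir, planned_pt, measured_pt):
    """Angular error (degrees) between planned and navigated probe axes,
    and Euclidean positional error between the corresponding entry points."""
    dot = sum(a * b for a, b in zip(planned_dir, measured_dir))
    na = math.sqrt(sum(a * a for a in planned_dir))
    nb = math.sqrt(sum(b * b for b in measured_dir))
    # Clamp to [-1, 1] to guard acos against floating-point drift.
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / (na * nb)))))
    pos = math.sqrt(sum((a - b) ** 2 for a, b in zip(planned_pt, measured_pt)))
    return angle, pos
```

Averaging these two numbers over repeated scans would yield repeatability figures of the kind the study reports (e.g., 0.5±0.2° and 0.3±0.1 mm).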
9

Qize Yuan, Evan, and Calvin Sze Hang Ng. "Role of Hybrid Operating Room: Present and Future." In Immunosuppression. IntechOpen, 2020. http://dx.doi.org/10.5772/intechopen.91187.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
With the dramatic progress of medical imaging modalities and growing needs for high-resolution intraoperative imaging in minimally invasive surgery, the hybrid operating room (OR) has been developed as a powerful tool for different surgical scenarios. Under the guidance of high-definition cone beam CT (CBCT), electromagnetic navigation bronchoscopy (ENB)-based marker implantation and subsequent localization of pulmonary nodules can be implemented within a hybrid OR. Furthermore, the unparalleled real-time imaging capabilities and the ability to perform multiple tasks within the hybrid OR can facilitate image-guided single-port video-assisted thoracic surgery (iSPVATS), increasing the precision and improving the outcomes of the procedure. With the help of a hybrid theatre, catheter-based thermal ablation can provide a safer and less invasive treatment option for select patient groups with early-stage non-small cell lung carcinomas (NSCLC) or metastases. In the future, the combination of the hybrid operating room with other inspiring innovative techniques, such as robotic bronchoscopy, 3D printing, and natural orifice transluminal endoscopic surgery (NOTES) lung surgery, could lead to a paradigm shift in the way thoracic surgery is conducted.

Conference papers on the topic "3D real-time localization":

1

Jaworski, Wojciech, Pawel Wilk, Pawel Zborowski, Witold Chmielowiec, Andrew YongGwon Lee, and Abhishek Kumar. "Real-time 3D indoor localization." In 2017 International Conference on Indoor Positioning and Indoor Navigation (IPIN). IEEE, 2017. http://dx.doi.org/10.1109/ipin.2017.8115874.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Mouragnon, E., M. Lhuillier, M. Dhome, F. Dekeyser, and P. Sayd. "Real Time Localization and 3D Reconstruction." In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06). IEEE, 2006. http://dx.doi.org/10.1109/cvpr.2006.236.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Zhu, Yilong, Bohuan Xue, Linwei Zheng, Huaiyang Huang, Ming Liu, and Rui Fan. "Real-Time, Environmentally-Robust 3D LiDAR Localization." In 2019 IEEE International Conference on Imaging Systems and Techniques (IST). IEEE, 2019. http://dx.doi.org/10.1109/ist48021.2019.9010305.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Li, Xuehong, and Shuhua Yang. "The indoor real-time 3D localization algorithm using UWB." In 2015 International Conference on Advanced Mechatronic Systems (ICAMechS). IEEE, 2015. http://dx.doi.org/10.1109/icamechs.2015.7287085.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Ruiz, Luis, and Zhidong Wang. "Real time multi robot 3D localization system using trilateration." In 2016 IEEE International Conference on Robotics and Biomimetics (ROBIO). IEEE, 2016. http://dx.doi.org/10.1109/robio.2016.7866541.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Liu, Ruixu, Tao Peng, Vijayan K. Asari, and John S. Loomis. "Real-time 3D scene reconstruction and localization with surface optimization." In NAECON 2018 - IEEE National Aerospace and Electronics Conference. IEEE, 2018. http://dx.doi.org/10.1109/naecon.2018.8556661.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Hirosue, Kazuki, Shohei Ukawa, Yuichi Itoh, Takao Onoye, and Masanori Hashimoto. "GPGPU-based Highly Parallelized 3D Node Localization for Real-Time 3D Model Reproduction." In IUI'17: 22nd International Conference on Intelligent User Interfaces. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3025171.3025183.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Qiu, Jian, David Chu, Xiangying Meng, and Thomas Moscibroda. "On the feasibility of real-time phone-to-phone 3D localization." In the 9th ACM Conference. New York, New York, USA: ACM Press, 2011. http://dx.doi.org/10.1145/2070942.2070962.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Kulkarni, Ashutosh P., Glen P. Abousleman, and Jennie Si. "Real-time 3D target tracking and localization for arbitrary camera geometries." In Defense and Security Symposium, edited by Ivan Kadar. SPIE, 2007. http://dx.doi.org/10.1117/12.720064.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Zhu, Yipeng, Tao Wang, and Shiqiang Zhu. "Real-time Monocular 3D People Localization and Tracking on Embedded System." In 2021 6th IEEE International Conference on Advanced Robotics and Mechatronics (ICARM). IEEE, 2021. http://dx.doi.org/10.1109/icarm52023.2021.9536118.

Full text
APA, Harvard, Vancouver, ISO, and other styles
