A ready-made bibliography on the topic "Sensors fusion for localisation"

Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles


Browse lists of up-to-date articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Sensors fusion for localisation".

An "Add to bibliography" button is available next to every work in the bibliography. Use it, and we will automatically create the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, if the relevant parameters are provided in the metadata.

Journal articles on the topic "Sensors fusion for localisation"

1

Ashokaraj, Immanuel, Antonios Tsourdos, Peter Silson, and Brian White. "Sensor Based Robot Localisation and Navigation: Using Interval Analysis and Nonlinear Kalman Filters." Transactions of the Canadian Society for Mechanical Engineering 29, no. 2 (June 2005): 211–27. http://dx.doi.org/10.1139/tcsme-2005-0014.

Abstract:
Multiple sensor fusion for robot localisation and navigation has attracted a lot of interest in recent years. This paper describes a sensor-based navigation and localisation approach for an autonomous mobile robot using an interval analysis (IA) based adaptive mechanism for a non-linear Kalman filter, namely the extended Kalman filter (EKF). The map used for this study is two-dimensional and assumed to be known a priori. The robot is equipped with inertial sensors (INS), encoders and ultrasonic sensors. A non-linear Kalman filter is used to estimate the robot's position using the inertial sensors and encoders. The ultrasonic sensors use an interval analysis algorithm for guaranteed robot localisation. Since the Kalman filter estimates may be affected by bias, drift, etc., we propose an adaptive mechanism using IA to correct these defects in the estimates. In the presence of landmarks, the complementary interval robot-position information from the IA algorithm (with a uniform distribution), obtained using the ultrasonic sensors, is used to estimate and bound the errors in the non-linear Kalman filter's robot position estimate (with a Gaussian distribution).
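To make the adaptive mechanism described above concrete, here is a minimal 1-D sketch of a Kalman filter whose estimate is corrected against a guaranteed interval, loosely in the spirit of the paper; the motion model, noise levels and interval width are invented for illustration and are not taken from the paper.

```python
import numpy as np

def kf_predict(x, P, F, Q):
    """Standard Kalman prediction step."""
    return F @ x, F @ P @ F.T + Q

def kf_update(x, P, z, H, R):
    """Standard Kalman measurement update."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ (z - H @ x), (np.eye(len(x)) - K @ H) @ P

def interval_correct(x, P, lo, hi):
    """If the position estimate drifts outside the guaranteed interval
    [lo, hi] (standing in for the ultrasonic IA result), pull it back
    inside and bound the position variance by the interval half-width."""
    x = x.copy(); P = P.copy()
    x[0] = np.clip(x[0], lo, hi)
    P[0, 0] = min(P[0, 0], ((hi - lo) / 2.0) ** 2)
    return x, P

# Constant-velocity model: state = [position, velocity].
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])
Q = np.diag([1e-4, 1e-3])
H = np.array([[1.0, 0.0]])            # encoders observe position only
R = np.array([[0.05 ** 2]])

x, P = np.array([0.0, 1.0]), np.eye(2) * 0.1
for k in range(50):
    x, P = kf_predict(x, P, F, Q)
    true_pos = (k + 1) * dt                       # ground truth for the demo
    z = np.array([true_pos + 0.02 + np.random.randn() * 0.05])  # biased encoder
    x, P = kf_update(x, P, z, H, R)
    x, P = interval_correct(x, P, lo=true_pos - 0.1, hi=true_pos + 0.1)
print("state:", x, "variances:", np.diag(P))
```

The clamp plays the role of the paper's IA-based adaptive mechanism: whenever bias or drift pushes the Gaussian estimate outside the guaranteed interval, the estimate and its variance are pulled back within it.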
2

Meng, Lijun, Zhengang Guo, and Chenglong Ma. "Research on multiple damage localisation based on fusion of the Lamb wave ellipse algorithm and RAPID algorithm." Insight - Non-Destructive Testing and Condition Monitoring 66, no. 1 (January 1, 2024): 34–40. http://dx.doi.org/10.1784/insi.2024.66.1.34.

Abstract:
Current damage localisation methods often require many sensors and complex signal processing methods. This paper proposes a fusion algorithm based on elliptical localisation and the reconstruction algorithm for probabilistic inspection of damage (RAPID) to locate and image multiple damages. Experimental verification of the damage algorithm was conducted. An ultrasonic probe was used to excite Lamb signals on an aluminium alloy plate, the ultrasonic response signals at different positions within the plate under multiple damages were measured and the constructed algorithm was employed to image the damage location. In the experiment, this method improved localisation efficiency by excluding invalid sensing paths in the sensor network, saving 31.32% of computational time. When some sensors in the sensor network were damaged, this algorithm ensured a positioning accuracy with a positioning error of 5.83 mm. Finally, the algorithm was used to locate multiple damages in the sensor network and the results showed the good robustness of the algorithm.
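The RAPID half of the fused algorithm has a well-known closed form: each transmit-receive path spreads its damage index over an ellipse whose foci are the two transducers, and the per-path images are summed. A small sketch follows, with the sensor layout and damage indices invented for the demo; the paper's exclusion of invalid sensing paths would amount to dropping entries from `paths` before imaging.

```python
import numpy as np

def rapid_image(grid_x, grid_y, paths, damage_index, beta=1.05):
    """Reconstruction Algorithm for Probabilistic Inspection of Damage:
    sum elliptical probability contributions of all sensing paths."""
    img = np.zeros((len(grid_y), len(grid_x)))
    X, Y = np.meshgrid(grid_x, grid_y)
    for (tx, rx), di in zip(paths, damage_index):
        d_path = np.hypot(rx[0] - tx[0], rx[1] - tx[1])
        # Ratio of (distance to tx + distance to rx) over the direct path.
        R = (np.hypot(X - tx[0], Y - tx[1]) +
             np.hypot(X - rx[0], Y - rx[1])) / d_path
        img += di * np.clip((beta - R) / (beta - 1.0), 0.0, 1.0)
    return img

# Four transducers on a square plate; damage indices invented for the demo.
sensors = [(0, 0), (0.5, 0), (0.5, 0.5), (0, 0.5)]
paths = [(sensors[i], sensors[j]) for i in range(4) for j in range(i + 1, 4)]
di = [0.1, 0.8, 0.1, 0.2, 0.7, 0.1]   # high values: paths crossing damage
gx = gy = np.linspace(0, 0.5, 101)
img = rapid_image(gx, gy, paths, di)
iy, ix = np.unravel_index(img.argmax(), img.shape)
print(f"estimated damage location: x={gx[ix]:.3f} m, y={gy[iy]:.3f} m")
```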
3

Nikitenko, Agris, Aleksis Liekna, Martins Ekmanis, Guntis Kulikovskis, and Ilze Andersone. "Single Robot Localisation Approach for Indoor Robotic Systems through Integration of Odometry and Artificial Landmarks." Applied Computer Systems 14, no. 1 (June 1, 2013): 50–58. http://dx.doi.org/10.2478/acss-2013-0006.

Abstract:
We present an integrated approach for robot localisation that integrates artificial-landmark localisation data with odometric sensors and signal transfer function data, to provide means for different practical application scenarios. The sensor data fusion deals with asynchronous sensor data using the inverse Laplace transform. We demonstrate a simulation software system that ensures smooth integration of the odometry-based and signal-transfer-based localisation into one approach.
4

Tibebu, Haileleol, Varuna De-Silva, Corentin Artaud, Rafael Pina, and Xiyu Shi. "Towards Interpretable Camera and LiDAR Data Fusion for Autonomous Ground Vehicles Localisation." Sensors 22, no. 20 (October 20, 2022): 8021. http://dx.doi.org/10.3390/s22208021.

Abstract:
Recent deep learning frameworks have drawn strong research interest for ego-motion estimation, as they demonstrate results superior to geometric approaches. However, due to the lack of multimodal datasets, most of these studies have primarily focused on single-sensor-based estimation. To overcome this challenge, we collect a unique multimodal dataset named LboroAV2 using multiple sensors, including camera, light detection and ranging (LiDAR), ultrasound, e-compass and rotary encoder. We also propose an end-to-end deep learning architecture for the fusion of RGB images and LiDAR laser scan data for odometry applications. The proposed method contains a convolutional encoder, a compressed representation and a recurrent neural network. Besides feature extraction and outlier rejection, the convolutional encoder produces a compressed representation, which is used to visualise the network's learning process and to pass useful sequential information. The recurrent neural network uses this compressed sequential data to learn the relationship between consecutive time steps. We use the Loughborough autonomous vehicle (LboroAV2) and the Karlsruhe Institute of Technology and Toyota Institute (KITTI) Visual Odometry (VO) datasets to experiment with and evaluate our results. In addition to visualising the network's learning process, our approach provides superior results compared to other similar methods. The code for the proposed architecture is released on GitHub and publicly accessible.
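As a rough sketch of the kind of architecture the abstract outlines (convolutional encoders producing a compressed representation, fused and fed to a recurrent network that regresses relative pose), here is a toy PyTorch model; all layer sizes and the LiDAR-scan-as-image encoding are assumptions made for illustration, not the authors' design.

```python
import torch
import torch.nn as nn

class FusionOdometry(nn.Module):
    """Toy camera+LiDAR odometry net: two conv encoders, a fused
    compressed representation, an LSTM over time, a 6-DoF pose head."""
    def __init__(self, feat=128):
        super().__init__()
        def encoder(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 16, 5, stride=2, padding=2), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4), nn.Flatten(),
                nn.Linear(32 * 16, feat))
        self.rgb_enc = encoder(3)      # RGB image branch
        self.lidar_enc = encoder(1)    # LiDAR scan rasterised to an image
        self.rnn = nn.LSTM(2 * feat, 256, batch_first=True)
        self.pose = nn.Linear(256, 6)  # translation + rotation increments

    def forward(self, rgb_seq, lidar_seq):
        B, T = rgb_seq.shape[:2]
        f_rgb = self.rgb_enc(rgb_seq.flatten(0, 1))
        f_lid = self.lidar_enc(lidar_seq.flatten(0, 1))
        fused = torch.cat([f_rgb, f_lid], dim=-1).view(B, T, -1)
        h, _ = self.rnn(fused)         # sequential compressed representation
        return self.pose(h)            # per-step relative pose

net = FusionOdometry()
rgb = torch.randn(2, 5, 3, 64, 64)     # batch of 2, sequences of 5 frames
lidar = torch.randn(2, 5, 1, 64, 64)
print(net(rgb, lidar).shape)           # -> torch.Size([2, 5, 6])
```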
5

Moretti, Michele, Federico Bianchi, and Nicola Senin. "Towards the development of a smart fused filament fabrication system using multi-sensor data fusion for in-process monitoring." Rapid Prototyping Journal 26, no. 7 (June 26, 2020): 1249–61. http://dx.doi.org/10.1108/rpj-06-2019-0167.

Abstract:
Purpose: This paper aims to illustrate the integration of multiple heterogeneous sensors into a fused filament fabrication (FFF) system and the implementation of multi-sensor data fusion technologies to support the development of a "smart" machine capable of monitoring the manufacturing process and part quality as it is being built.
Design/methodology/approach: Starting from off-the-shelf FFF components, the paper discusses the issues related to how the machine architecture and the FFF process itself must be redesigned to accommodate heterogeneous sensors and how data from such sensors can be integrated. The usefulness of the approach is discussed through illustration of detectable example defects.
Findings: Through aggregation of heterogeneous in-process data, a smart FFF system developed upon the architectural choices discussed in this work has the potential to recognise a number of process-related issues leading to defective parts.
Research limitations/implications: Although the implementation is specific to a type of FFF hardware and type of processed material, the conclusions are of general validity for material extrusion processes of polymers.
Practical implications: Effective in-process sensing enables timely detection of process or part quality issues, thus allowing for early process termination or application of corrective actions, leading to significant savings for high-value-added parts.
Originality/value: While most current literature on FFF process monitoring has focused on monitoring selected process variables, in this work a wider perspective is gained by aggregation of heterogeneous sensors, with particular focus on achieving co-localisation in space and time of the sensor data acquired within the same fabrication process. This allows for the detection of issues that no sensor alone could reliably detect.
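The co-localisation in time of heterogeneous sensor streams that the abstract emphasises can be illustrated by resampling each stream onto a shared timebase; a minimal sketch with invented FFF signals (nozzle temperature, vibration, motor current), not the paper's actual data:

```python
import numpy as np

def align_streams(streams, rate_hz):
    """Resample asynchronously sampled sensor streams onto one shared
    timebase by linear interpolation so samples can be fused row-wise.
    `streams` maps a sensor name to (timestamps [s], values)."""
    t0 = max(t[0] for t, _ in streams.values())
    t1 = min(t[-1] for t, _ in streams.values())
    t_common = np.arange(t0, t1, 1.0 / rate_hz)
    return t_common, {name: np.interp(t_common, t, v)
                      for name, (t, v) in streams.items()}

rng = np.random.default_rng(0)
streams = {  # each sensor has its own rate and jittered clock
    "nozzle_temp": (np.sort(rng.uniform(0, 10, 80)), rng.normal(210, 2, 80)),
    "vibration":   (np.sort(rng.uniform(0, 10, 500)), rng.normal(0, 1, 500)),
    "current":     (np.sort(rng.uniform(0, 10, 50)), rng.normal(0.8, 0.05, 50)),
}
t, fused = align_streams(streams, rate_hz=20)
table = np.column_stack([fused[k] for k in sorted(fused)])
print(t.shape, table.shape)   # one fused row of all sensors per time step
```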
6

Donati, Cesare, Martina Mammarella, Lorenzo Comba, Alessandro Biglia, Paolo Gay, and Fabrizio Dabbene. "3D Distance Filter for the Autonomous Navigation of UAVs in Agricultural Scenarios." Remote Sensing 14, no. 6 (March 11, 2022): 1374. http://dx.doi.org/10.3390/rs14061374.

Abstract:
In precision agriculture, remote sensing is an essential phase in assessing crop status and variability when considering both the spatial and the temporal dimensions. To this aim, the use of unmanned aerial vehicles (UAVs) is growing in popularity, allowing for the autonomous performance of a variety of in-field tasks which are not limited to scouting or monitoring. To enable autonomous navigation, however, a crucial capability lies in accurately locating the vehicle within the surrounding environment. This task becomes challenging in agricultural scenarios where the crops and/or the adopted trellis systems can negatively affect GPS signal reception and localisation reliability. A viable solution to this problem can be the exploitation of high-accuracy 3D maps, which provide important data regarding crop morphology, as an additional input of the UAVs’ localisation system. However, the management of such big data may be difficult in real-time applications. In this paper, an innovative 3D sensor fusion approach is proposed, which combines the data provided by onboard proprioceptive (i.e., GPS and IMU) and exteroceptive (i.e., ultrasound) sensors with the information provided by a georeferenced 3D low-complexity map. In particular, the parallel-cuts ellipsoid method is used to merge the data from the distance sensors and the 3D map. Then, the improved estimation of the UAV location is fused with the data provided by the GPS and IMU sensors, using a Kalman-based filtering scheme. The simulation results prove the efficacy of the proposed navigation approach when applied to a quadrotor that autonomously navigates between vine rows.
7

Kozłowski, Michał, Raúl Santos-Rodríguez, and Robert Piechocki. "Sensor Modalities and Fusion for Robust Indoor Localisation." ICST Transactions on Ambient Systems 6, no. 18 (December 12, 2019): 162670. http://dx.doi.org/10.4108/eai.12-12-2019.162670.

8

Merfels, Christian, and Cyrill Stachniss. "Sensor Fusion for Self-Localisation of Automated Vehicles." PFG – Journal of Photogrammetry, Remote Sensing and Geoinformation Science 85, no. 2 (March 7, 2017): 113–26. http://dx.doi.org/10.1007/s41064-017-0008-1.

9

Ciuffreda, Ilaria, Sara Casaccia, and Gian Marco Revel. "A Multi-Sensor Fusion Approach Based on PIR and Ultrasonic Sensors Installed on a Robot to Localise People in Indoor Environments." Sensors 23, no. 15 (August 5, 2023): 6963. http://dx.doi.org/10.3390/s23156963.

Abstract:
This work illustrates an innovative localisation sensor network that uses multiple PIR and ultrasonic sensors installed on a mobile social robot to localise occupants in indoor environments. The system presented aims to measure movement direction and distance to reconstruct the movement of a person in an indoor environment by using sensor activation strategies and data processing techniques. The data collected are then analysed using both a supervised (Decision Tree) and an unsupervised (K-Means) machine learning algorithm to extract the direction and distance of occupant movement from the measurement system, respectively. Tests in a controlled environment have been conducted to assess the accuracy of the methodology when multiple PIR and ultrasonic sensor systems are used. In addition, a qualitative evaluation of the system’s ability to reconstruct the movement of the occupant has been performed. The system proposed can reconstruct the direction of an occupant with an accuracy of 70.7% and uncertainty in distance measurement of 6.7%.
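A minimal sketch of the two learning stages described above, using scikit-learn: a Decision Tree classifies movement direction from PIR activation features, and K-Means clusters ultrasonic distance readings. The feature layout and all numbers are invented stand-ins, not the paper's data.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Invented features: the signed activation-time gap between two PIR
# sensors encodes the direction of movement; a second noisy feature
# stands in for any other activation statistic.
n = 300
direction = rng.integers(0, 2, n)               # 0 = left-to-right, 1 = reverse
gap = np.where(direction == 0, 1, -1) * rng.uniform(0.1, 0.5, n)
pir_features = np.column_stack([gap, rng.normal(0, 0.05, n)])

clf = DecisionTreeClassifier(max_depth=3).fit(pir_features, direction)
print("direction accuracy:", clf.score(pir_features, direction))

# Ultrasonic time-of-flight readings cluster around occupant distances.
tof_m = np.concatenate([rng.normal(1.0, 0.07, 100),
                        rng.normal(2.0, 0.07, 100),
                        rng.normal(3.0, 0.07, 100)]).reshape(-1, 1)
km = KMeans(n_clusters=3, n_init=10).fit(tof_m)
print("distance cluster centres (m):", np.sort(km.cluster_centers_.ravel()))
```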
10

Neuland, Renata, Mathias Mantelli, Bernardo Hummes, Luc Jaulin, Renan Maffei, Edson Prestes, and Mariana Kolberg. "Robust Hybrid Interval-Probabilistic Approach for the Kidnapped Robot Problem." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 29, no. 02 (April 2021): 313–31. http://dx.doi.org/10.1142/s0218488521500141.

Abstract:
For a mobile robot to operate in its environment it is crucial to determine its position with respect to an external reference frame using noisy sensor readings. A scenario in which the robot is moved to another position during its operation without being told, known as the kidnapped robot problem, complicates global localisation. In addition to that, sensor malfunction and external influences of the environment can cause unexpected errors, called outliers, that negatively affect the localisation process. This paper proposes a method based on the fusion of a particle filter with bounded-error localisation, which is able to deal with outliers in the measurement data. The application of our algorithm to solve the kidnapped robot problem using simulated data shows an improvement over conventional probabilistic filtering methods.
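One way to picture the hybrid interval-probabilistic idea is a particle filter step in which a bounded-error box prunes infeasible particles and a gate rejects outlier ranges before weighting; a sketch with invented beacons and noise levels, not the authors' exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)

def robust_pf_step(particles, weights, u, z_list, box, sigma=0.3):
    """One particle-filter step robust to outlier range readings:
    particles outside the bounded-error box (from interval localisation)
    are discarded, and each measurement is gated before weighting."""
    particles = particles + u + rng.normal(0, 0.1, particles.shape)  # motion
    inside = np.all((particles >= box[0]) & (particles <= box[1]), axis=1)
    weights = weights * inside                     # enforce interval bounds
    for beacon, z in z_list:
        pred = np.linalg.norm(particles - beacon, axis=1)
        innov = np.abs(pred - z)
        if np.median(innov) > 3 * sigma:           # gate: likely an outlier
            continue
        weights = weights * np.exp(-0.5 * (innov / sigma) ** 2)
    weights = weights / weights.sum()
    idx = rng.choice(len(particles), len(particles), p=weights)  # resample
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

N = 1000
particles = rng.uniform([0, 0], [10, 10], (N, 2))
weights = np.full(N, 1.0 / N)
box = (np.array([4.0, 4.0]), np.array([7.0, 7.0]))   # from the interval method
meas = [(np.array([0.0, 0.0]), 7.2), (np.array([10.0, 0.0]), 6.4),
        (np.array([5.0, 5.0]), 42.0)]                # last reading is an outlier
particles, weights = robust_pf_step(particles, weights, np.zeros(2), meas, box)
print("estimate:", particles.mean(axis=0))
```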

Doctoral dissertations on the topic "Sensors fusion for localisation"

1

Millikin, R. L. "Sensor fusion for the localisation of birds in flight." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2002. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/NQ65871.pdf.

2

Welte, Anthony. "Spatio-temporal data fusion for intelligent vehicle localization." Thesis, Compiègne, 2020. http://bibliotheque.utc.fr/EXPLOITATION/doc/IFD/2020COMP2572.

Abstract:
Localization is an essential basic capability for vehicles to be able to navigate autonomously on the road. This can be achieved through already available sensors and new technologies (lidars, smart cameras). These sensors combined with highly accurate maps result in greater accuracy. In this work, the benefits of storing and reusing information in memory (in data buffers) are explored. Localization systems need to perform high-frequency estimation, map matching, calibration and error detection. A framework composed of several processing layers is proposed and studied. A main filtering layer estimates the vehicle pose while other layers address the more complex problems. High-frequency state estimation relies on proprioceptive measurements combined with GNSS observations. Calibration is essential to obtain an accurate pose. By keeping state estimates and observations in a buffer, the observation models of these sensors can be calibrated. This is achieved using smoothed estimates in place of a ground truth. Lidars and smart cameras provide measurements that can be used for localization but raise matching issues with map features. In this work, the matching problem is addressed over a spatio-temporal window, resulting in a more detailed picture of the environment. The state buffer is adjusted using the observations and all possible matches. Although using mapped features for localization enables greater accuracy, this is only true if the map can be trusted. An approach using the post-smoothing residuals has been developed to detect changes and either mitigate or reject the affected features.
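The thesis's idea of calibrating observation models from buffered, smoothed estimates used in place of ground truth can be shown in a few lines; here the bias of simulated GNSS fixes is estimated as the mean residual against smoothed positions (all signals are invented for the demo):

```python
import numpy as np

rng = np.random.default_rng(3)

# Buffered data: smoothed pose estimates stand in for ground truth.
t = np.arange(0.0, 20.0, 0.5)
true_pos = np.column_stack([t * 1.2, np.sin(t)])           # simulated trajectory
smoothed = true_pos + rng.normal(0, 0.05, true_pos.shape)  # smoother output
bias = np.array([0.8, -0.3])                               # unknown GNSS bias
gnss = true_pos + bias + rng.normal(0, 0.5, true_pos.shape)

# Calibrate the GNSS observation model: its bias is the mean residual
# between the raw fixes and the smoothed estimates kept in the buffer.
bias_hat = (gnss - smoothed).mean(axis=0)
print("estimated bias:", bias_hat, "(true:", bias, ")")

# Corrected fixes can then be fed back into the high-frequency filter.
gnss_corrected = gnss - bias_hat
print("RMS error before/after:",
      np.sqrt(((gnss - true_pos) ** 2).mean()),
      np.sqrt(((gnss_corrected - true_pos) ** 2).mean()))
```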
3

Lilja, Robin. "A Localisation and Navigation System for an Autonomous Wheel Loader." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-12157.

Abstract:
Autonomous vehicles are an emerging trend in robotics, seen in a vast range of applications and environments. Consequently, Volvo Construction Equipment endeavours to apply the concept of autonomous vehicles to one of its main products. In the company's Autonomous Machine project, an autonomous wheel loader is being developed. As an objective given by the company, a demonstration proving the possibility of conducting a fully autonomous load and haul cycle should be performed. Conducting such a cycle requires the vehicle to be able to localise itself in its task space and navigate accordingly. In this Master's thesis, methods of solving those requirements are proposed and evaluated on a real wheel loader. The approach taken regarding localisation is to apply sensor fusion, by extended Kalman filtering, to the available sensors mounted on the vehicle, including odometric sensors, a Global Positioning System receiver and an Inertial Measurement Unit. Navigational control is provided through a developed interface that allows high-level software to command the vehicle by specifying drive paths. A path-following controller is implemented and evaluated. The main objective was successfully accomplished by integrating the developed localisation and navigation system with the system existing prior to this thesis. A discussion of how to continue the development concludes the report; the addition of continuous vision feedback is proposed as the next logical advancement.
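The abstract does not say which path-following controller was implemented; as an illustration of the general idea of commanding a vehicle along a specified drive path, here is a standard pure-pursuit steering law (the wheelbase and look-ahead values are arbitrary, not the wheel loader's):

```python
import numpy as np

def pure_pursuit_steer(pose, path, lookahead=1.5, wheelbase=2.8):
    """Generic pure-pursuit steering: aim at the first path point at
    least `lookahead` metres away and return the bicycle-model steering
    angle for the arc passing through it. pose = (x, y, yaw)."""
    x, y, yaw = pose
    d = np.hypot(path[:, 0] - x, path[:, 1] - y)
    ahead = np.nonzero(d >= lookahead)[0]
    i = ahead[0] if len(ahead) else len(path) - 1  # fall back to last point
    alpha = np.arctan2(path[i, 1] - y, path[i, 0] - x) - yaw  # target bearing
    return np.arctan2(2.0 * wheelbase * np.sin(alpha), d[i])

# A gently curving reference path and a vehicle slightly off the path.
s = np.linspace(0, 20, 200)
path = np.column_stack([s, np.sin(s / 4.0)])
print("steering command (rad):", pure_pursuit_steer((0.0, -0.5, 0.0), path))
```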
4

Matsumoto, Takeshi. "Real-Time Multi-Sensor Localisation and Mapping Algorithms for Mobile Robots." Flinders University. Computer Science, Engineering and Mathematics, 2010. http://catalogue.flinders.edu.au./local/adt/public/adt-SFU20100302.131127.

Abstract:
A mobile robot system provides a grounded platform for a wide variety of interactive systems to be developed and deployed. The mobility provided by the robot presents unique challenges, as it must observe the state of the surroundings while observing its own state with respect to the environment. The scope of the discipline includes the mechanical and hardware issues, which limit and direct the capabilities of the software considerations. The systems integrated into the mobile robot platform include both specific task-oriented and fundamental modules that define the core behaviour of the robot. While the former can sometimes be developed separately and integrated at a later stage, the core modules are often custom designed early on to suit the individual robot system, depending on the configuration of the mechanical components. This thesis covers the issues encountered and the resolutions implemented during the development of a low-cost mobile robot platform using off-the-shelf sensors, with a particular focus on the algorithmic side of the system. The incrementally developed modules target the localisation and mapping aspects by incorporating a number of different sensors to gather information about the surroundings from different perspectives, simultaneously or sequentially combining the measurements to disambiguate and support each other. Although there is a heavy focus on image processing techniques, the integration with the other sensors and the characteristics of the platform itself are included in the designs and analyses of the core and interactive modules. A visual odometry technique is implemented for the localisation module, which includes calibration processes, feature tracking, synchronisation between multiple sensors, as well as short- and long-term landmark identification to calculate the relative pose of the robot in real time. The mapping module considers the interpretation and representation of sensor readings to simplify and hasten the interactions between multiple sensors, while selecting the appropriate attributes and characteristics to construct a multi-attributed model of the environment. The modules developed are applied to realistic indoor scenarios, which are taken into consideration in some of the algorithms to enhance performance through known constraints. As the performance of the algorithms depends significantly on the hardware, the environment, and the number of concurrently running sensors and modules, comparisons are made against various implementations developed throughout the project.
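Feature tracking of the kind used in the visual odometry module can be sketched with OpenCV's corner detector and pyramidal Lucas-Kanade tracker; the synthetic shifted frame below stands in for consecutive robot camera images:

```python
import numpy as np
import cv2

# Two synthetic frames: the second is the first shifted right by 3 px.
rng = np.random.default_rng(4)
frame0 = (rng.random((240, 320)) * 255).astype(np.uint8)
frame0 = cv2.GaussianBlur(frame0, (5, 5), 0)
frame1 = np.roll(frame0, 3, axis=1)

# Detect corners in the first frame, track them into the second.
pts0 = cv2.goodFeaturesToTrack(frame0, maxCorners=200,
                               qualityLevel=0.01, minDistance=7)
pts1, status, _ = cv2.calcOpticalFlowPyrLK(frame0, frame1, pts0, None)

good0 = pts0[status.ravel() == 1].reshape(-1, 2)
good1 = pts1[status.ravel() == 1].reshape(-1, 2)
flow = good1 - good0
print("median image motion (px):", np.median(flow, axis=0))
# In a full pipeline this 2-D motion, combined with calibration and the
# other sensors, feeds the relative pose estimate of the robot.
```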
5

Khairallah, Mahmoud. "Flow-Based Visual-Inertial Odometry for Neuromorphic Vision Sensors." Electronic Thesis or Diss., université Paris-Saclay, 2022. http://www.theses.fr/2022UPAST117.

Abstract:
Rather than generating images constantly and synchronously, neuromorphic vision sensors, also known as event-based cameras, permit each pixel to provide information independently and asynchronously whenever a brightness change is detected. Consequently, neuromorphic vision sensors do not encounter the problems of conventional frame-based cameras such as image artifacts and motion blur. Furthermore, they can provide lossless data compression, higher temporal resolution and higher dynamic range. Hence, event-based cameras conveniently replace frame-based cameras in robotic applications requiring high maneuverability and varying environmental conditions. In this thesis, we address the problem of visual-inertial odometry using event-based cameras and an inertial measurement unit. Exploiting the consistency of event-based cameras with the brightness-constancy conditions, we discuss the possibility of building a visual odometry system based on optical flow estimation. We develop our approach based on the assumption that event-based cameras provide edge-like information about the objects in the scene and apply a line detection algorithm for data reduction. Line tracking allows us to gain more time for computations and provides a better representation of the environment than feature points. In this thesis, we show not only an approach for event-based visual-inertial odometry but also event-based algorithms that can be used stand-alone or integrated into other approaches if needed.
6

Salehi, Achkan. "Localisation précise d'un véhicule par couplage vision/capteurs embarqués/systèmes d'informations géographiques." Thesis, Université Clermont Auvergne (2017-2020), 2018. http://www.theses.fr/2018CLFAC064/document.

Abstract:
The fusion between sensors and databases whose errors are independent is the most reliable and therefore most widespread solution to the localization problem. Current autonomous and semi-autonomous vehicles, as well as augmented reality applications targeting industrial contexts, exploit large sensor and database graphs that are difficult and expensive to synchronize and calibrate. Thus, the democratization of these technologies requires exploring the possibility of exploiting low-cost and easily accessible sensors and databases. These information sources are naturally tainted by higher uncertainty levels, and many obstacles to their effective and efficient practical usage persist. Moreover, the recent but dazzling successes of deep neural networks in various tasks seem to indicate that they could be a viable and low-cost alternative to some components of current SLAM systems. In this thesis, we focused on the large-scale localization of a vehicle in a georeferenced coordinate frame from a low-cost system, which is based on the fusion between a monocular video stream, 3d non-textured but georeferenced building models, terrain elevation models and data either from a low-cost GPS or from vehicle odometry. Our work targets the resolution of two problems. The first one is related to the fusion, via barrier term optimization, of VSLAM and positioning measurements provided by a low-cost GPS. This method is, to the best of our knowledge, the most robust against GPS uncertainties, but it is more demanding in terms of computational resources. We propose an algorithmic optimization of that approach based on the definition of a novel barrier term. The second problem is the data association problem between the primitives that represent the geometry of the scene (e.g. 3d points) and the 3d building models. Previous works in that area use simple geometric criteria and are therefore very sensitive to occlusions in urban environments. We exploit deep convolutional neural networks in order to identify and associate elements from the map that correspond to 3d building model façades. Although our contributions are for the most part independent from the underlying SLAM system, we based our experiments on constrained key-frame based bundle adjustment. The solutions that we propose are evaluated on synthetic sequences as well as on real urban datasets. These experiments show important performance gains for VSLAM/GPS fusion, and considerable improvements in the robustness of building constraints to occlusions.
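The barrier-term fusion can be pictured on a toy 2-D problem: a data term pulls the pose towards landmark-based fixes while a log-barrier keeps it inside the GPS confidence ellipse. This is a generic barrier formulation for illustration, not the novel term proposed in the thesis; all numbers are invented.

```python
import numpy as np
from scipy.optimize import minimize

# Landmark-based position fixes (the "visual" side) and a GPS fix with
# its uncertainty; values are placeholders for the demo.
landmark_fixes = np.array([[2.1, 1.0], [2.4, 0.8], [2.2, 1.3]])
gps_center = np.array([3.0, 1.0])
gps_cov = np.diag([1.0, 0.5])            # low-cost GPS uncertainty
gps_info = np.linalg.inv(gps_cov)
chi2_bound = 5.99                        # 95% bound for 2 degrees of freedom

def cost(p, mu=0.1):
    visual = np.sum((landmark_fixes - p) ** 2)           # data term
    m2 = (p - gps_center) @ gps_info @ (p - gps_center)  # Mahalanobis^2
    if m2 >= chi2_bound:
        return np.inf                                    # outside the ellipse
    return visual - mu * np.log(chi2_bound - m2)         # log barrier

res = minimize(cost, x0=gps_center.copy(), method="Nelder-Mead")
print("fused pose:", res.x)
```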
7

Héry, Elwan. "Localisation coopérative de véhicules autonomes communicants." Thesis, Compiègne, 2019. http://www.theses.fr/2019COMP2516.

Abstract:
To be able to navigate autonomously, a vehicle must be accurately localized relative to all obstacles, such as the roadside for lane keeping, and vehicles and pedestrians to avoid causing accidents. This PhD thesis deals with the interest of cooperation in improving the localization of cooperative vehicles that exchange information. Autonomous navigation on the road is often based on coordinates provided in a Cartesian frame. In order to better represent the pose of a vehicle with respect to the lane in which it travels, we study curvilinear coordinates with respect to a path stored in a map. These coordinates generalize the curvilinear abscissa by adding a signed lateral deviation from the center of the lane and an orientation relative to the center of the lane, taking into account the direction of travel. These coordinates are studied with different track models and using different projections for map-matching. A first cooperative localization approach is based on these coordinates. The lateral deviation and the orientation relative to the lane can be known precisely from a perception of the lane borders, but for autonomous driving with other vehicles it is important to maintain good longitudinal accuracy. A one-dimensional data fusion method makes it possible to show the interest of cooperative localization in the simplified case where the lateral deviation, the curvilinear orientation and the relative positioning between two vehicles are accurately known. This case study shows that, in some cases, lateral accuracy can be propagated to other vehicles to improve their longitudinal accuracy. The correlation issues of the errors are taken into account with a covariance intersection filter. An ICP (Iterative Closest Point) minimization algorithm is then used to determine the relative pose between the vehicles from LiDAR points and a 2D polygonal model representing the shape of the vehicle. Several correspondences of the LiDAR points with the model and different minimization approaches are compared. The propagation of absolute vehicle poses using relative poses with their uncertainties is done through non-linear equations that can have a strong impact on consistency. The different dynamic elements surrounding the ego-vehicle are estimated in a Local Dynamic Map (LDM) to enhance the static high-definition map describing the center of the lane and its borders. In our case, the agents are only communicating vehicles. The LDM is composed of the state of each vehicle. The states are merged using an asynchronous algorithm, fusing available data at variable times. The algorithm is decentralized, each vehicle computing its own LDM and sharing it. As the position errors of the GNSS receivers are biased, a marking detection is introduced to obtain the lateral deviation from the center of the lane in order to estimate these biases. LiDAR observations with the ICP method further enrich the fusion with constraints between the vehicles. Experimental results of this fusion show that the vehicles are more accurately localized with respect to each other while maintaining consistent poses.
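The curvilinear coordinates studied in the thesis (curvilinear abscissa, signed lateral deviation, heading relative to the lane) can be computed from a Cartesian pose by projecting onto a polyline path; a self-contained sketch follows, with sign conventions assumed for the demo and not necessarily matching the thesis:

```python
import numpy as np

def to_curvilinear(pose, path):
    """Convert a Cartesian pose (x, y, heading) into curvilinear
    coordinates along a polyline lane centre: abscissa s, signed
    lateral deviation (positive on the left of the path) and
    heading relative to the local tangent."""
    p = np.asarray(pose[:2])
    seg = np.diff(path, axis=0)
    seg_len = np.linalg.norm(seg, axis=1)
    # Project the position onto every segment, keep the closest one.
    t = np.clip(np.einsum('ij,ij->i', p - path[:-1], seg) / seg_len**2, 0, 1)
    proj = path[:-1] + t[:, None] * seg
    i = np.argmin(np.linalg.norm(p - proj, axis=1))
    tangent = seg[i] / seg_len[i]
    normal = np.array([-tangent[1], tangent[0]])     # left of the path
    s = np.sum(seg_len[:i]) + t[i] * seg_len[i]
    lateral = (p - proj[i]) @ normal
    heading_rel = (pose[2] - np.arctan2(tangent[1], tangent[0])
                   + np.pi) % (2 * np.pi) - np.pi    # wrap to [-pi, pi)
    return s, lateral, heading_rel

u = np.linspace(0, 50, 200)
path = np.column_stack([u, np.sin(u / 10) * 5])      # lane centre polyline
print(to_curvilinear((10.0, 5.2, 0.3), path))
```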
8

Jacobson, Adam. "Bio-inspired multi-sensor fusion and calibration for robot place learning and recognition." Thesis, Queensland University of Technology, 2018. https://eprints.qut.edu.au/116179/1/Adam%20Jacobson%20Thesis.pdf.

Abstract:
Determining an agent's location in the world is vital for robotic navigation, path planning and co-operative behaviours. This thesis focuses on the translation of biological insights to the robotics domain to improve topological SLAM with an aim to enable robot navigation and localisation without human intervention. The primary contributions presented within this thesis are SLAM localisation techniques which are robust to environmental changes, require minimal or no human intervention for setup within a new environment and are robust to sensor failures.
9

Ericsson, John-Eric, and Daniel Eriksson. "Indoor Positioning and Localisation System with Sensor Fusion: An Implementation on an Indoor Autonomous Robot at ÅF." Thesis, KTH, Maskinkonstruktion (Inst.), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-168841.

Abstract:
This thesis presents guidelines for how to select sensors and algorithms for indoor positioning and localisation systems with sensor fusion. These guidelines are based on extensive theory and state-of-the-art research. Different scenarios are presented to give examples of proposed sensors and algorithms for certain applications. There are of course no right or wrong sensor combinations, but some factors are good to bear in mind when a system is designed. To give an example of the proposed guidelines, a Simultaneous Localisation and Mapping (SLAM) system as well as an Indoor Positioning System (IPS) has been designed and implemented on an embedded robot platform. The implemented SLAM system was based on a FastSLAM2 algorithm with ultrasonic range sensors, and the implemented IPS was based on a WiFi RSS profiling method using a Weibull distribution. The methods, sensors and infrastructure were chosen based on requirements derived from the stakeholder's wishes as well as knowledge from the theory and state-of-the-art research. A combination of SLAM and IPS, chosen to be called WiFi SLAM, is proposed in order to reduce errors from both of the methods. Unfortunately, due to unexpected issues with the platform, no combination has been implemented and tested. The systems were simulated independently before being implemented on the embedded platform. Results from these simulations indicated that the requirements could be fulfilled, and gave an indication of the minimum set-up needed for the implementation. Both implemented systems were shown to have the expected accuracies during testing, and with more time better tuning could have been performed, probably yielding better results. From the results, a conclusion could be drawn that a combined WiFi SLAM solution would have improved the result in a larger testing area than the one used: IPS would have increased its precision and SLAM would have gained robustness. The thesis has shown that there is no exact way of finding a perfect sensor and method solution. Most important, however, is the weighting between time, cost and quality. Other important factors are the environment in which a system will perform its tasks and whether it is a safety-critical system. It has also been shown that fused sensor data will outperform the result of just one sensor and that there is no upper limit on the number of fused sensors. However, this requires the sensor fusion algorithm to be well tuned, otherwise the opposite might happen.
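A minimal sketch of the WiFi RSS profiling idea with a Weibull distribution: fit one distribution per access point and location from training fingerprints, then pick the location with the highest joint log-likelihood for a new reading. All RSS values are invented, and the shift to positive values is an implementation convenience, not from the thesis.

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(5)

def shift(rss):
    """Shift dBm readings to positive values so a Weibull fits them."""
    return np.asarray(rss) + 100.0

# Training RSS fingerprints (dBm) for two locations and two access points.
train = {
    "kitchen": {"AP1": shift(rng.normal(-45, 3, 100)),
                "AP2": shift(rng.normal(-70, 4, 100))},
    "office":  {"AP1": shift(rng.normal(-65, 3, 100)),
                "AP2": shift(rng.normal(-50, 4, 100))},
}

# Fit a Weibull distribution per (location, access point) pair.
models = {loc: {ap: weibull_min.fit(x, floc=0) for ap, x in aps.items()}
          for loc, aps in train.items()}

def localise(reading):
    """Pick the location whose Weibull models give the observed RSS
    vector the highest joint log-likelihood."""
    scores = {loc: sum(weibull_min.logpdf(shift(reading[ap]), *params)
                       for ap, params in aps.items())
              for loc, aps in models.items()}
    return max(scores, key=scores.get)

print(localise({"AP1": -46.0, "AP2": -68.0}))   # -> 'kitchen'
```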
10

Ladhari, Maroua. "Architecture générique de fusion par approche Top-Down : application à la localisation d'un robot mobile." Thesis, Université Clermont Auvergne (2017-2020), 2020. http://www.theses.fr/2020CLFAC052.

Abstract:
The issue addressed in this thesis is the localization of a mobile robot. Equipped with low-cost sensors, the robot aims to exploit the maximum possible amount of information to meet an objective set beforehand. A data fusion problem is treated in such a way that in each situation the robot selects which information to use to locate itself in a continuous way. The data we process are of different types. In our work, two properties of localization are desired: accuracy and confidence. In order to be controlled, the robot must know its position in a precise and reliable way. Indeed, accuracy refers to the degree of uncertainty related to the estimated position; it is returned by a fusion filter. If, in addition, the degree of certainty of being in this uncertainty zone is high, we have a good confidence contribution and the estimate is considered reliable. These two properties are generally related, which is why they are often represented together to characterize the returned estimate of the robot position. In this work, our objective is to simultaneously optimize these two properties. To take advantage of the different existing techniques for an optimal estimation of the robot position, we propose a top-down approach based on the exploitation of an environmental map defined in an absolute reference frame. This approach uses an a priori selection of the best informative measurements among all possible measurement sources. The selection is made according to a given objective (of accuracy and confidence), the current robot state and the informational contribution of the data. As the data are noisy and imprecise, and may also be ambiguous and unreliable, these limitations must be taken into account in order to provide the most accurate and reliable robot position estimate. For this, spatial focusing and a Bayesian network are used to reduce the risk of misdetection. However, in case of ambiguities, misdetections may still occur. A backward process has been developed in order to react efficiently to these situations and thus achieve the set objectives. The main contributions of this work are, on the one hand, the development of a high-level, generic and modular multi-sensory localization architecture with a top-down process. We use the concept of a perceptual triplet, the set of a landmark, a sensor and a detector, to designate each perceptual module. At each time step, a prediction and an update step are performed. For the update step, the system selects the most relevant triplet (in terms of accuracy and confidence) according to an informational criterion. In order to ensure accurate and reliable localization, our algorithm is written in such a way that ambiguity can be managed. On the other hand, the developed algorithm makes it possible to locate a robot in an environment map. For this purpose, the possibility of bad detections due to ambiguity phenomena is taken into account in the backward process. Indeed, this process allows, on the one hand, correcting a bad detection and, on the other hand, improving the returned position estimate to meet a desired objective.
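The informational criterion for picking a perceptual triplet can be sketched as choosing the (landmark, sensor, detector) whose Kalman update would most reduce the trace of the pose covariance; the models and numbers below are invented placeholders, not the thesis's actual triplets:

```python
import numpy as np

def expected_trace_reduction(P, H, R):
    """Expected drop in trace(P) if a measurement with observation
    model H and noise R were used in a Kalman update: trace(K H P)."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return np.trace(K @ H @ P)

# Current pose covariance (x, y, heading).
P = np.diag([0.5, 0.5, 0.1])

# Candidate perceptual triplets (landmark, sensor, detector), each with
# an observation model and a noise level; all values are invented.
triplets = {
    "building-corner / lidar / edge-detector":
        (np.array([[1.0, 0, 0], [0, 1.0, 0]]), np.diag([0.05, 0.05])),
    "lane-marking / camera / marking-detector":
        (np.array([[0, 1.0, 0]]), np.diag([0.02])),
    "pole / lidar / cluster-detector":
        (np.array([[1.0, 0, 0]]), np.diag([0.2])),
}

best = max(triplets, key=lambda k: expected_trace_reduction(P, *triplets[k]))
for name, (H, R) in triplets.items():
    print(f"{name:45s} gain = {expected_trace_reduction(P, H, R):.3f}")
print("selected triplet:", best)
```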

Books on the topic "Sensors fusion for localisation"

1

Hucks, John A. Fusion of ground-based sensors for optimal tracking of military targets. Monterey, Calif.: Naval Postgraduate School, 1989.

2

Buser, Rudolph G., Frank B. Warren, Society of Photo-Optical Instrumentation Engineers, and University of Alabama in Huntsville Center for Applied Optics, eds. Infrared sensors and sensor fusion: 19-21 May, 1987, Orlando, Florida. Bellingham, Wash., USA: SPIE--the International Society for Optical Engineering, 1987.

3

Multi-sensor data fusion with MATLAB. Boca Raton: Taylor & Francis, 2010.

4

Loffeld, Otmar, Centre national de la recherche scientifique (France), and Society of Photo-Optical Instrumentation Engineers, eds. Vision systems--sensors, sensor systems, and components: 10-12 June 1996, Besançon, France. Bellingham, Wash.: SPIE--the International Society for Optical Engineering, 1996.

5

Networked multisensor decision and estimation fusion: Based on advanced mathematical methods. Boca Raton, FL: Taylor & Francis, 2012.

6

Loffeld, Otmar, Society of Photo-Optical Instrumentation Engineers, European Optical Society, and Commission of the European Communities Directorate-General for Science, Research, and Development, eds. Sensors, sensor systems, and sensor data processing: June 16-17 1997, Munich, FRG. Bellingham, Wash., USA: SPIE, 1997.

7

Intelligent Sensors, Sensor Networks & Information Processing Conference (2nd 2005 Melbourne, Vic.). Proceedings of the 2005 Intelligent Sensors, Sensor Networks & Information Processing Conference: 5-8 December, 2005, Melbourne, Australia. [Piscataway, N.J.]: IEEE, 2006.

8

IEEE/AESS Dayton Chapter Symposium (15th 1998 Fairborn, OH). Sensing the world: Analog sensors & systems across the spectrum : the 15th Annual AESS/IEEE Dayton Section Symposium, Fairborn, OH, 14-15 May 1998. Piscataway, N.J: Institute of Electrical and Electronics Engineers, 1998.

9

Green, Milford B. Mergers and acquisitions: Geographical and spatial perspectives. London: Routledge, 1990.


Book chapters on the topic "Sensors fusion for localisation"

1

Espinosa, Jose, Mihalis Tsiakkas, Dehao Wu, Simon Watson, Joaquin Carrasco, Peter R. Green, and Barry Lennox. "A Hybrid Underwater Acoustic and RF Localisation System for Enclosed Environments Using Sensor Fusion." In Towards Autonomous Robotic Systems, 369–80. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-96728-8_31.

2

Mitchell, H. B. "Image Sensors." In Image Fusion, 9–17. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-11216-4_2.

3

Kiaer, Jieun. "Fusion, localisation, and hybridity." In Delicious Words, 54–70. New York: Routledge, 2020. http://dx.doi.org/10.4324/9780429321801-4.

4

Majumder, Bansari Deb, and Joyanta Kumar Roy. "Multifunction Data Fusion." In Multifunctional Sensors, 49–54. Boca Raton: CRC Press, 2023. http://dx.doi.org/10.1201/9781003350484-4.

5

Mitchell, H. B. "Sensors." In Data Fusion: Concepts and Ideas, 15–30. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-27222-6_2.

6

Koch, Wolfgang. "Characterizing Objects and Sensors." In Tracking and Sensor Data Fusion, 31–52. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-39271-9_2.

7

Subramanian, Rajesh. "Additional Sensors and Sensor Fusion." In Build Autonomous Mobile Robot from Scratch using ROS, 457–96. Berkeley, CA: Apress, 2023. http://dx.doi.org/10.1007/978-1-4842-9645-5_9.

8

Suciu, George, Andrei Scheianu, Cristina Mihaela Bălăceanu, Ioana Petre, Mihaela Dragu, Marius Vochin, and Alexandru Vulpe. "Sensors Fusion Approach Using UAVs and Body Sensors." In Advances in Intelligent Systems and Computing, 146–53. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-77700-9_15.

9

Věchet, S., and J. Krejsa. "Sensors Data Fusion via Bayesian Network." In Recent Advances in Mechatronics, 221–26. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-05022-0_38.

10

Wagner, Jakub, Paweł Mazurek, and Roman Z. Morawski. "Fusion of Data from Impulse-Radar Sensors and Depth Sensors." In Health Information Science, 205–24. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-96009-4_7.


Conference papers on the topic "Sensors fusion for localisation"

1

Jarvis, R. A. "Autonomous Robot Localisation By Sensor Fusion". In IEEE International Workshop on Emerging Technologies and Factory Automation. IEEE, 1992. http://dx.doi.org/10.1109/etfa.1992.683295.

2

Izri, Sonia, and Eric Brassart. "Uncertainties quantification criteria for multi-sensors fusion: Application to vehicles localisation". In Automation (MED 2008). IEEE, 2008. http://dx.doi.org/10.1109/med.2008.4602171.

3

Redzic, Milan, Conor Brennan, and Noel E. O'Connor. "Dual-sensor fusion for indoor user localisation". In the 19th ACM international conference. New York, New York, USA: ACM Press, 2011. http://dx.doi.org/10.1145/2072298.2071948.

4

Franken, Dietrich. "An approximate maximum-likelihood estimator for localisation using bistatic measurements". In 2018 Sensor Data Fusion: Trends, Solutions, Applications (SDF). IEEE, 2018. http://dx.doi.org/10.1109/sdf.2018.8547074.

5

Alvarado, Biel Piero, Fernando Matia, and Ramon Galan. "Improving indoor robots localisation by fusing different sensors". In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018. http://dx.doi.org/10.1109/iros.2018.8593667.

6

Ristic, Branko, Mark Morelande, Alfonso Farina, and S. Dulman. "On Proximity-Based Range-Free Node Localisation in Wireless Sensor Networks". In 2006 9th International Conference on Information Fusion. IEEE, 2006. http://dx.doi.org/10.1109/icif.2006.301734.

7

Corral-Plaza, David, Olaf Reich, Erik Hübner, Matthias Wagner, and Inmaculada Medina-Bulo. "A Sensor Fusion System Identifying Complex Events for Localisation Estimation". In International Conference on Applied Computing 2019. IADIS Press, 2019. http://dx.doi.org/10.33965/ac2019_201912c033.

8

Khoder, Makkawi, Ait-Tmazirte Nourdine, El Badaoui El Najjar Maan, and Moubayed Nazih. "Fault Tolerant multi-sensor Data Fusion for vehicle localisation using Maximum Correntropy Unscented Information Filter and α-Rényi Divergence". In 2020 IEEE 23rd International Conference on Information Fusion (FUSION). IEEE, 2020. http://dx.doi.org/10.23919/fusion45008.2020.9190407.

9

Pagnottelli, S., S. Taraglio, P. Valigi, and A. Zanela. "Visual and laser sensory data fusion for outdoor robot localisation and navigation". In 2005 12th International Conference on Advanced Robotics. IEEE, 2005. http://dx.doi.org/10.1109/icar.2005.1507409.

10

Uney, Murat, Bernard Mulgrew, and Daniel Clark. "Cooperative sensor localisation in distributed fusion networks by exploiting non-cooperative targets". In 2014 IEEE Statistical Signal Processing Workshop (SSP). IEEE, 2014. http://dx.doi.org/10.1109/ssp.2014.6884689.


Reports on the topic "Sensors fusion for localisation"

1

Cadwallader, L. C. Reliability estimates for selected sensors in fusion applications. Office of Scientific and Technical Information (OSTI), September 1996. http://dx.doi.org/10.2172/425367.

2

Lane, Brandon, Lars Jacquemetton, Martin Piltch, and Darren Beckett. Thermal calibration of commercial melt pool monitoring sensors on a laser powder bed fusion system. Gaithersburg, MD: National Institute of Standards and Technology, July 2020. http://dx.doi.org/10.6028/nist.ams.100-35.

3

Beiker, Sven. Next-generation Sensors for Automated Road Vehicles. 400 Commonwealth Drive, Warrendale, PA, United States: SAE International, February 2023. http://dx.doi.org/10.4271/epr2023003.

Abstract:
This follow-up report to the inaugural SAE EDGE Research Report on "Unsettled Topics Concerning Sensors for Automated Road Vehicles" reviews the progress made in automated vehicle (AV) sensors over the past four to five years. Additionally, it addresses persistent disagreement and confusion regarding certain terms for describing sensors, the different strengths and shortcomings of particular sensors, and procedures regarding how to specify and evaluate them. Next-gen Automated Road Vehicle Sensors summarizes current trends and debates (e.g., sensor fusion, embedded AI, simulation) as well as future directions and needs.
4

Mobley, Curtis D. Determining the Scattering Properties of Vertically-Structured Nepheloid Layers from the Fusion of Active and Passive Optical Sensors. Fort Belvoir, VA: Defense Technical Information Center, September 2006. http://dx.doi.org/10.21236/ada630921.

5

Kulhandjian, Hovannes. Detecting Driver Drowsiness with Multi-Sensor Data Fusion Combined with Machine Learning. Mineta Transportation Institute, September 2021. http://dx.doi.org/10.31979/mti.2021.2015.

Abstract:
In this research work, we develop a drowsy driver detection system through the application of visual and radar sensors combined with machine learning. The system concept was derived from the desire to achieve a high level of driver safety through the prevention of potentially fatal accidents involving drowsy drivers. According to the National Highway Traffic Safety Administration, drowsy driving resulted in 50,000 injuries across 91,000 police-reported accidents and a death toll of nearly 800 in 2017. The objective of this research work is to provide a working prototype of an Advanced Driver Assistance System that can be installed in present-day vehicles. By integrating two modes of surveillance to examine biometric expressions of drowsiness, a camera and a micro-Doppler radar sensor, our system achieves over 95% accuracy in its drowsy driver detection capabilities. The camera is used to monitor the driver's eyes, mouth, and head movement and to recognize discrepancies in the driver's blinking pattern, yawning incidence, and/or head drop, thereby signaling that the driver may be experiencing fatigue or drowsiness. The micro-Doppler sensor allows the driver's head movement to be captured both during the day and at night. Through data fusion and deep learning, the system can quickly analyze and classify a driver's behavior under varying lighting, pose, and facial-expression conditions in real time.
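The fusion-then-threshold logic this abstract describes can be sketched in a few lines. The following is a minimal, hypothetical illustration of decision-level fusion, assuming per-frame drowsiness probabilities are already produced upstream by a camera pipeline and a micro-Doppler classifier; the class name, weights, and threshold are illustrative assumptions, not details taken from the report.

```python
from collections import deque

# Hypothetical decision-level fusion of two drowsiness cues.
# camera_score and radar_score are assumed probabilities in [0, 1]
# produced upstream by a vision pipeline (blink/yawn/head-drop cues)
# and a micro-Doppler radar classifier (head-movement cues).

class DrowsinessFusion:
    def __init__(self, window_size=30, camera_weight=0.6,
                 radar_weight=0.4, threshold=0.7):
        self.scores = deque(maxlen=window_size)  # sliding window of fused scores
        self.camera_weight = camera_weight
        self.radar_weight = radar_weight
        self.threshold = threshold

    def update(self, camera_score, radar_score):
        """Fuse one pair of per-frame scores; return True when an alert should fire."""
        fused = (self.camera_weight * camera_score
                 + self.radar_weight * radar_score)
        self.scores.append(fused)
        # Require a full window whose mean exceeds the threshold, so a single
        # noisy frame cannot trigger an alert on its own.
        window_full = len(self.scores) == self.scores.maxlen
        mean_score = sum(self.scores) / len(self.scores)
        return window_full and mean_score > self.threshold


fusion = DrowsinessFusion()
alert = fusion.update(camera_score=0.8, radar_score=0.9)  # False until the window fills
```

Averaging over a sliding window is one simple way to suppress single-frame false positives before raising an alert; a deployed system would more likely feed both streams into a learned classifier, as the report's deep-learning approach suggests.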
6

Kulhandjian, Hovannes. AI-based Pedestrian Detection and Avoidance at Night using an IR Camera, Radar, and a Video Camera. Mineta Transportation Institute, November 2022. http://dx.doi.org/10.31979/mti.2022.2127.

Abstract:
In 2019, the United States recorded more than 6,500 pedestrian fatalities involving motor vehicles, with nighttime pedestrian fatalities rising 67% against only a 10% rise in daytime pedestrian fatalities. In an effort to reduce fatalities, this research developed a pedestrian detection and alert system through the application of a visual camera, an infrared camera, and radar sensors combined with machine learning. The research team designed the system concept to achieve a high level of accuracy in pedestrian detection and avoidance both during the day and at night, so as to avoid potentially fatal accidents involving pedestrians crossing a street. The working prototype of pedestrian detection and collision avoidance can be installed in present-day vehicles, with the visible camera used to detect pedestrians during the day and the infrared camera used to detect pedestrians primarily at night, as well as under high sun glare during the day. The radar sensor is also used to detect the presence of a pedestrian and calculate their range and direction of motion relative to the vehicle. Through data fusion and deep learning, the ability to quickly analyze and classify a pedestrian's presence at all times in a real-time monitoring system is achieved. The system can also be extended to cyclist and animal detection and avoidance, and could be deployed in an autonomous vehicle to assist in automatic braking systems (ABS).
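The complementary roles of the three sensors in this abstract (visible camera by day, IR camera by night and in glare, radar for range and closing speed) lend themselves to a late-fusion alert rule. Below is a minimal sketch under stated assumptions: the data class, the two-of-three voting rule, and all thresholds are illustrative choices, not the fusion scheme used in the report.

```python
from dataclasses import dataclass

# Hypothetical late-fusion alert rule combining a visible-light detector,
# an IR detector, and a radar range/velocity track.

@dataclass
class SensorReadings:
    visible_conf: float       # pedestrian confidence from the visible camera
    ir_conf: float            # pedestrian confidence from the IR camera
    radar_range_m: float      # range to the nearest radar return, in metres
    radar_closing_mps: float  # closing speed toward the vehicle, in m/s


def pedestrian_alert(r: SensorReadings,
                     conf_thresh: float = 0.5,
                     danger_range_m: float = 30.0) -> bool:
    """Alert when at least two cues agree and the object is inside the danger zone."""
    votes = sum([
        r.visible_conf > conf_thresh,    # daytime-oriented cue
        r.ir_conf > conf_thresh,         # night/glare-oriented cue
        r.radar_closing_mps > 0.0,       # object closing on the vehicle
    ])
    return votes >= 2 and r.radar_range_m < danger_range_m


# Example: weak visible detection at night, strong IR cue, closing radar track.
alert = pedestrian_alert(SensorReadings(0.2, 0.8, 18.0, 1.4))  # True
```

Voting across modalities lets any two sensors compensate for the third failing in its weak regime (e.g., the visible camera at night), which mirrors the day/night complementarity the report describes.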