Academic literature on the topic 'Error localisation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Error localisation.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Error localisation"

1

Wang, Mei, Nan Duan, Zou Zhou, Fei Zheng, Hongbing Qiu, Xiaopeng Li, and Guoli Zhang. "Indoor PDR Positioning Assisted by Acoustic Source Localization, and Pedestrian Movement Behavior Recognition, Using a Dual-Microphone Smartphone." Wireless Communications and Mobile Computing 2021 (July 8, 2021): 1–16. http://dx.doi.org/10.1155/2021/9981802.

Abstract:
In recent years, the public’s demand for location services has increased significantly. As outdoor positioning has matured, indoor positioning has become a focus area for researchers, and various indoor positioning methods have emerged. Pedestrian dead reckoning (PDR) has become a research hotspot since it does not require a positioning infrastructure. Because PDR positioning integrates motion estimates over time, errors accumulate during long-term operation. To eliminate the accumulated errors in PDR localisation, this paper proposes a PDR localisation system for complex scenarios with multiple buildings and large areas. The system is based on the pedestrian movement behavior recognition algorithm proposed in this paper, which recognises the behavior of pedestrians for each gait and improves the stride length estimation for PDR localisation based on the recognition results, reducing the error accumulated by the PDR localisation algorithm itself. At the same time, the system uses self-developed hardware to modify the audio equipment used for broadcasting within the indoor environment, locates the acoustic source through a Hamming distance-based localisation algorithm, and corrects the estimated acoustic source location against the known source location in order to eliminate the accumulated error in PDR localisation. Analysis and experimental verification show that the recognition accuracy of the proposed pedestrian movement behavior recognition reaches 95%, and the acoustic source localisation accuracy reaches 0.32 m during movement, producing an excellent effect on eliminating the cumulative error of PDR localisation.
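The PDR recursion this abstract builds on is simply a per-stride position update; the behavior-recognition step refines the stride length fed into it. A minimal sketch (function names and the fixed stride are illustrative assumptions, not the paper's implementation):

```python
import math

def pdr_step(x, y, heading_rad, stride_m):
    """One pedestrian dead-reckoning update: advance the previous
    position by the estimated stride length along the heading.
    Errors in stride_m accumulate step after step, which is why the
    paper corrects the track against known acoustic source positions."""
    return (x + stride_m * math.cos(heading_rad),
            y + stride_m * math.sin(heading_rad))

# Three 0.7 m strides due east from the origin.
pos = (0.0, 0.0)
for _ in range(3):
    pos = pdr_step(pos[0], pos[1], 0.0, 0.7)
print(round(pos[0], 6), round(pos[1], 6))  # → 2.1 0.0
```

Any bias in the stride estimate multiplies by the number of steps, which is the cumulative-error problem the paper's acoustic corrections address.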
2

Buehner, Mark, and Anna Shlyaeva. "Scale-dependent background-error covariance localisation." Tellus A: Dynamic Meteorology and Oceanography 67, no. 1 (December 2015): 28027. http://dx.doi.org/10.3402/tellusa.v67.28027.

3

Górak, Rafał, and Marcin Luckner. "Automatic Detection of Missing Access Points in Indoor Positioning System †." Sensors 18, no. 11 (October 23, 2018): 3595. http://dx.doi.org/10.3390/s18113595.

Abstract:
The paper presents a Wi-Fi-based indoor localisation system. It consists of two main parts: the localisation model and an Access Point (AP) detection module. The system uses received signal strength (RSS) gathered by multiple mobile terminals to detect which APs should be included in the localisation model and whether the model needs to be updated (rebuilt). Rebuilding the localisation model prevents a significant loss of accuracy in the localisation system. The proposed automatic detection of missing APs has a universal character and can be applied to any Wi-Fi localisation model created with the fingerprinting method. The paper considers a localisation model based on the Random Forest algorithm. The system was tested on data collected inside a multi-floor academic building. The proposed implementation reduced the mean horizontal error by 5.5 m and the classification error for floor prediction by 0.26 in the case of a serious malfunction of the Wi-Fi infrastructure. Several simulations were performed, taking into account different occupancy scenarios as well as different numbers of missing APs. The simulations proved that the system correctly detects missing and present APs in the Wi-Fi infrastructure.
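The detection module's core decision, which APs from the fingerprint model are no longer heard in recent scans, can be caricatured in a few lines. The data layout below is an assumption for illustration only and stands in for, rather than reproduces, the paper's Random Forest pipeline:

```python
def missing_aps(model_aps, scans):
    """Return model APs absent from every recent RSS scan.
    scans: list of {ap_id: rss_dbm} dicts gathered by mobile terminals.
    A non-empty result would trigger a rebuild of the localisation model."""
    heard = set()
    for scan in scans:
        heard.update(scan)           # collect every AP id seen in any scan
    return sorted(set(model_aps) - heard)

print(missing_aps(["ap1", "ap2", "ap3"],
                  [{"ap1": -48}, {"ap2": -63, "ap1": -51}]))  # → ['ap3']
```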
4

LINDERHOLT, A., and T. ABRAHAMSSON. "PARAMETER IDENTIFIABILITY IN FINITE ELEMENT MODEL ERROR LOCALISATION." Mechanical Systems and Signal Processing 17, no. 3 (May 2003): 579–88. http://dx.doi.org/10.1006/mssp.2002.1522.

5

Neuland, Renata, Mathias Mantelli, Bernardo Hummes, Luc Jaulin, Renan Maffei, Edson Prestes, and Mariana Kolberg. "Robust Hybrid Interval-Probabilistic Approach for the Kidnapped Robot Problem." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 29, no. 02 (April 2021): 313–31. http://dx.doi.org/10.1142/s0218488521500141.

Abstract:
For a mobile robot to operate in its environment it is crucial to determine its position with respect to an external reference frame using noisy sensor readings. A scenario in which the robot is moved to another position during its operation without being told, known as the kidnapped robot problem, complicates global localisation. In addition to that, sensor malfunction and external influences of the environment can cause unexpected errors, called outliers, that negatively affect the localisation process. This paper proposes a method based on the fusion of a particle filter with bounded-error localisation, which is able to deal with outliers in the measurement data. The application of our algorithm to solve the kidnapped robot problem using simulated data shows an improvement over conventional probabilistic filtering methods.
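The fusion idea, hard interval constraints from bounded-error sensing pruning the particle cloud before the usual probabilistic reweighting, can be sketched in one dimension. All names, the interval source, and the Gaussian likelihood are illustrative assumptions, not the paper's algorithm:

```python
import math

def hybrid_update(particles, weights, interval, z, sigma):
    """Zero out particles outside the bounded-error interval, then
    reweight the survivors by a Gaussian measurement likelihood."""
    lo, hi = interval
    new = [0.0 if not (lo <= x <= hi)
           else w * math.exp(-0.5 * ((x - z) / sigma) ** 2)
           for x, w in zip(particles, weights)]
    total = sum(new)
    # If every particle is pruned (e.g. the robot was kidnapped),
    # keep the old weights so the filter can recover by resampling.
    return [w / total for w in new] if total > 0 else weights

w = hybrid_update([0.0, 1.0, 2.0, 3.0], [0.25] * 4, (0.5, 2.5), 1.0, 1.0)
```

The interval acts as an outlier gate: measurements consistent with the bounded-error model shape the posterior, while particles the interval rules out contribute nothing.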
6

Yang, Jingdong, Jinghui Yang, and Zesu Cai. "An efficient approach to pose tracking based on odometric error modelling for mobile robots." Robotica 33, no. 6 (April 1, 2014): 1231–49. http://dx.doi.org/10.1017/s0263574714000654.

Abstract:
Odometric error modelling for mobile robots is the basis of pose tracking. Without bounds, the accumulated odometric error degrades localisation precision after long-range movement and often cannot be compensated for in real time. Therefore, an efficient approach to odometric error modelling is proposed for mobile robots with different drive types. The method rests on the hypothesis that the motion path approximates a circular arc. Approximate functional expressions between the control input of odometry and the non-systematic and systematic errors are derived from the odometric error propagation law. Further, an efficient pose-tracking algorithm is proposed for mobile robots, which compensates for the non-systematic and systematic errors in real time. Experiments show that the odometric error model efficiently reduces the accumulated odometry error and significantly improves localisation during autonomous navigation.
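The circular-arc hypothesis in the abstract yields a closed-form pose update between odometry samples; a sketch under assumed variable names (arc length d and heading change dtheta as the odometric inputs):

```python
import math

def arc_update(x, y, theta, d, dtheta):
    """Pose update assuming the path between two odometry readings
    is a circular arc of length d with heading change dtheta."""
    if abs(dtheta) < 1e-12:              # straight-line limit
        return x + d * math.cos(theta), y + d * math.sin(theta), theta
    r = d / dtheta                       # signed arc radius
    return (x + r * (math.sin(theta + dtheta) - math.sin(theta)),
            y - r * (math.cos(theta + dtheta) - math.cos(theta)),
            theta + dtheta)

# Quarter circle of radius 1: the robot ends up near (1, 1) facing north.
print(arc_update(0.0, 0.0, 0.0, math.pi / 2, math.pi / 2))
```

Compared with the naive straight-segment update, the arc model removes a systematic error that grows with the heading change per sample.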
7

De Vos, Maarten, Lieven De Lathauwer, Bart Vanrumste, Sabine Van Huffel, and W. Van Paesschen. "Canonical Decomposition of Ictal Scalp EEG and Accurate Source Localisation: Principles and Simulation Study." Computational Intelligence and Neuroscience 2007 (2007): 1–10. http://dx.doi.org/10.1155/2007/58253.

Abstract:
Long-term electroencephalographic (EEG) recordings are important in the presurgical evaluation of refractory partial epilepsy for the delineation of the ictal onset zones. In this paper, we introduce a new concept for an automatic, fast, and objective localisation of the ictal onset zone in ictal EEG recordings. Canonical decomposition of ictal EEG decomposes the EEG in atoms. One or more atoms are related to the seizure activity. A single dipole was then fitted to model the potential distribution of each epileptic atom. In this study, we performed a simulation study in order to estimate the dipole localisation error. Ictal dipole localisation was very accurate, even at low signal-to-noise ratios, was not affected by seizure activity frequency or frequency changes, and was minimally affected by the waveform and depth of the ictal onset zone location. Ictal dipole localisation error using 21 electrodes was around 10.0 mm and improved more than tenfold in the range of 0.5–1.0 mm using 148 channels. In conclusion, our simulation study of canonical decomposition of ictal scalp EEG allowed a robust and accurate localisation of the ictal onset zone.
8

Shi, Hongchi, Xiaoli Li, Yi Shang, and Dianfu Ma. "Error analysis of quantised RSSI based sensor network localisation." International Journal of Wireless and Mobile Computing 4, no. 1 (2010): 31. http://dx.doi.org/10.1504/ijwmc.2010.030973.

9

Sabra, Adham, and Wai-Keung Fung. "A Fuzzy Cooperative Localisation Framework for Underwater Robotic Swarms." Sensors 20, no. 19 (September 25, 2020): 5496. http://dx.doi.org/10.3390/s20195496.

Abstract:
This article proposes a holistic localisation framework for underwater robotic swarms that dynamically fuses multiple position estimates of an autonomous underwater vehicle using a fuzzy decision support system. A number of underwater localisation methods have been proposed in the literature for wireless sensor networks. The proposed navigation framework harnesses these established localisation methods to provide navigation aids in the absence of an acoustic exteroceptive navigation aid (i.e., ultra-short baseline), and it can be extended to accommodate newly developed localisation methods by expanding the fuzzy rule base. Simplicity, flexibility, and scalability are the three main advantages of the proposed localisation framework compared to other traditional and commonly adopted underwater localisation methods, such as the Extended Kalman Filter. A physics-based simulation platform that considers the environment's hydrodynamics, an industrial-grade inertial measurement unit, and underwater acoustic communication characteristics is implemented to validate the proposed localisation framework on a swarm of 150 autonomous underwater vehicles. The proposed fuzzy-based localisation algorithm improves the swarm's mean localisation error and standard deviation by 16.53% and 35.17%, respectively, compared to Extended Kalman Filter based localisation with round-robin scheduling.
10

Friebel, Björn, Michael Schweins, Nils Dreyer, and Thomas Kürner. "Simulation of GPS localisation based on ray tracing." Advances in Radio Science 19 (December 17, 2021): 85–92. http://dx.doi.org/10.5194/ars-19-85-2021.

Abstract:
In recent years, many simulation tools have emerged to model the communication of connected vehicles. The focus has been on channel modelling, applications, or protocols, while localisation by satellite navigation systems was treated as perfect; the effect of inaccurate positioning has so far been neglected. This paper presents an approach to extend an existing simulation framework for radio networks to estimate the localisation accuracy of navigation systems such as GPS, GLONASS, or Galileo. The error due to multipath components is calculated by ray-optical path loss predictions (ray tracing) using 3D building data, together with a well-established model for the ionospheric error.

Dissertations / Theses on the topic "Error localisation"

1

Zaman, Munir uz. "Mobile robot localisation : error modelling, data synchronisation and vision techniques." Thesis, University of Surrey, 2006. http://epubs.surrey.ac.uk/844082/.

Abstract:
Mobile robot localisation has been, and continues to be, a very active research area. Estimating the position of a mobile robot is fundamental for its navigation and map-building. This thesis addresses some of the problems associated with mobile robot localisation. The three distinct items of research presented in this thesis are (i) a systematic odometry error model for a synchronous drive robot; (ii) a novel method to synchronise two independent sensor data streams; and (iii) a proposal for an exteroceptive, truly odometric sensor system: 'Visiodometry'. Cyclops is a synchronous drive mobile robot. Its kinematics cause the path of the robot to curve, with the degree of curvature affected by the orientation of the wheels. A systematic odometry error model is proposed to correct for this. The proposed model is supported both experimentally and theoretically, from modelling the kinematics. Combining data from different sensor streams is commonly done to improve the accuracy of estimated variables. However, in some cases the sensors are not networked, making it impossible to synchronise the data streams. The second item of research proposes a novel method to estimate the time difference between the local clocks of discrete sensor data from their time-stamps alone. A proposed enhancement to the method improves both the rate of convergence and the precision of the estimate. Results show that the method is more accurate and robust than one based on known methods, including those based on Gaussian assumptions. Wheel odometry is a common method for mobile robot localisation, but it is unreliable if there is wheel slip. In such environments visual odometry has been used; however, that method does not work well on planar surfaces or surfaces with fine texture, and it is unable to accurately detect small motions of less than a few centimetres.
The third area of research proposes an exteroceptive odometric sensor called 'visiodometry' which is independent of the kinematics and therefore robust to wheel odometry errors. Two methods are proposed: (i) a dual-camera 'shift vector' method and (ii) a monocular 'roto-translation' method. The results demonstrate that the proposed system can provide odometric localisation data in planar environments to a high precision. The method is based upon extracting global motion estimates from affine-transformed images of the ground using the phase correlation method. Experimental results demonstrate that, as a proof of concept, this type of sensor input is an alternative, genuinely odometric input which has the potential to be comparable in accuracy and precision to wheel odometry in environments where wheel odometry and visual odometry methods are unreliable.
2

Landsberg, David. "Methods and measures for statistical fault localisation." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:cf737e06-9f12-44fa-94d2-a8d247ad808e.

Abstract:
Fault localisation is the process of finding the causes of a given error, and is one of the most costly elements of software development. One of the most efficient approaches to fault localisation appeals to statistical methods. These methods are characterised by their ability to estimate how faulty a program artefact is as a function of statistical information about a given program and test suite. However, the major problem facing statistical approaches is their effectiveness -- particularly with respect to finding single (or multiple) faults in large programs typical of the real world. A solution to this problem hinges on discovering new formal properties of faulty programs and developing scalable statistical techniques which exploit them. In this thesis I address this by identifying new properties of faulty programs, developing formal frameworks and methods which are formally proven to exploit them, and demonstrating that many of our new techniques substantially and statistically significantly outperform competing algorithms at given fault localisation tasks (using p = 0.01) on what (to our knowledge) is one of the largest-scale sets of experiments in fault localisation to date. This research is thus designed to corroborate the following thesis statement: that the new algorithms presented in this thesis are effective and efficient at software fault localisation and outperform state-of-the-art statistical techniques at a range of fault localisation tasks. In more detail, the major thesis contributions are as follows:
1. We perform a thorough investigation into the existing framework of spectrum-based fault localisation (sbfl), which currently stands at the cutting edge of statistical fault localisation. To improve on the effectiveness of sbfl, our first contribution is to introduce and motivate many new statistical measures which can be used within this framework. First, we show that many are well motivated for the task of sbfl. Second, we formally prove equivalence properties of large classes of measures. Third, we show that many of the measures perform competitively with the existing measures in experimentation -- in particular, our new measure m9185 outperforms all existing measures on average in terms of effectiveness and, along with Kulczynski2, is in a class of measures which statistically significantly outperforms all other measures at finding a single fault in a program (p = 0.01).
2. Having investigated sbfl, our second contribution is to motivate, introduce, and formally develop a new formal framework which we call probabilistic fault localisation (pfl). pfl is similar to sbfl insofar as it can leverage any suspiciousness measure, and is designed to directly estimate the probability that a given program artefact is faulty. First, we formally prove that pfl is theoretically superior to sbfl insofar as it satisfies and exploits a number of desirable formal properties which sbfl does not. Second, we experimentally show that pfl methods (namely, our measure pfl-ppv) substantially and statistically significantly outperform the best-performing sbfl measures at finding a fault in large multiple-fault programs (p = 0.01). Furthermore, we show that for many of our benchmarks it is theoretically impossible to design strictly rational sbfl measures which outperform given pfl techniques.
3. Having addressed the problem of localising a single fault in a program, we address the problem of localising multiple faults. Accordingly, our third major contribution is the introduction and motivation of a new algorithm MOpt(g) which optimises any ranking-based method g (such as pfl/sbfl/Barinel) for the task of multiple fault localisation. First, we prove that MOpt(g) formally satisfies and exploits a newly identified formal property of multiple fault optimality. Secondly, we experimentally show that there are values for g such that MOpt(g) substantially and statistically significantly outperforms given ranking-based fault localisation methods at the task of finding multiple faults (p = 0.01).
4. Having developed methods for localising faults as a function of a given test suite, we finally address the problem of optimising test suites for the purposes of fault localisation. Accordingly, we first present an algorithm which leverages model checkers to improve a given test suite by making it satisfy a property of single-bug optimality. Second, we experimentally show that on small benchmarks single-bug-optimal test suites can be generated (from scratch) efficiently when the algorithm is used in conjunction with the cbmc model checker, and that the generated test suites can be used effectively for fault localisation.
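The thesis's own measures (m9185, pfl-ppv) are defined in the work itself; as a generic illustration of the sbfl scheme they compete with, here is the classic Ochiai measure ranking statements from per-test coverage. The data layout is assumed for illustration only:

```python
import math

def ochiai(ef, nf, ep):
    """Ochiai suspiciousness: ef / sqrt((ef + nf) * (ef + ep)),
    where ef/ep count failing/passing tests covering the statement
    and nf counts failing tests that do not cover it."""
    denom = math.sqrt((ef + nf) * (ef + ep))
    return ef / denom if denom else 0.0

def rank(coverage, failed):
    """coverage[stmt] = set of test ids covering stmt;
    failed = set of failing test ids. Most suspicious first."""
    scores = {}
    for stmt, tests in coverage.items():
        ef = len(tests & failed)
        nf = len(failed - tests)
        ep = len(tests - failed)
        scores[stmt] = ochiai(ef, nf, ep)
    return sorted(scores, key=scores.get, reverse=True)

cov = {"s1": {1, 2, 3}, "s2": {3}, "s3": {1, 2}}
print(rank(cov, failed={3}))  # → ['s2', 's1', 's3']
```

Statement s2 ranks highest because it is covered only by the failing test; a pfl-style method would instead turn such counts into an explicit probability of faultiness.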
3

Prévost, Raoul. "Décodage et localisation AIS par satellite." Thesis, Toulouse, INPT, 2012. http://www.theses.fr/2012INPT0121/document.

Abstract:
The automatic identification system (AIS) allows ships and coast stations to exchange information by VHF radio, including the identifier, status, location, direction, and speed of the emitter. The aim of this thesis is to allow the reception of AIS messages by low-Earth-orbit satellites without modifying existing shipboard equipment. With this system, it becomes possible to know the position of all ships over the Earth. As a consequence, several new services become available, such as global traffic monitoring or, for ship-owners, constant knowledge of their boats' locations. Satellite reception of AIS signals is subject to a much higher noise level than ground-level reception, which makes classical demodulation and decoding methods unusable. A first contribution of this thesis is the development of new demodulators using error correction methods. These demodulators take advantage of the presence of a cyclic redundancy check (CRC) block in the messages, as well as known information about the structure of the messages and data. Generalizations of the proposed receiver have also been studied in order to take into account the phase noise of the received signals and the possible collision of messages sent simultaneously by several vessels. The last part of this thesis is devoted to localization methods for ships that do not transmit their location in their AIS messages. This localization takes advantage of parameters of the received messages, such as the propagation delay and the carrier frequency shift due to the Doppler effect, together with a ship movement model.
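The decoders described above lean on the AIS message's CRC block to validate candidate error corrections. As a hedged sketch, here is a generic MSB-first bitwise CRC-16 with the CCITT polynomial; the actual AIS variant differs in bit ordering and final XOR, so treat this as an assumption-laden illustration rather than the thesis's receiver:

```python
def crc16_ccitt(data: bytes, poly=0x1021, init=0xFFFF):
    """MSB-first bitwise CRC-16 over `data` with the CCITT polynomial.
    A receiver like the one described above re-runs such a check over
    each candidate correction and keeps the ones that pass."""
    crc = init
    for byte in data:
        crc ^= byte << 8                 # fold the next byte into the register
        for _ in range(8):
            crc = ((crc << 1) ^ poly) if crc & 0x8000 else crc << 1
            crc &= 0xFFFF                # keep the register at 16 bits
    return crc

print(hex(crc16_ccitt(b"123456789")))    # → 0x29b1 (standard check value)
```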
4

Thomas, Robin Rajan. "Optimisation of adaptive localisation techniques for cognitive radio." Diss., University of Pretoria, 2012. http://hdl.handle.net/2263/27076.

Abstract:
Spectrum, environment and location awareness are key characteristics of cognitive radio (CR). Knowledge of a user’s location as well as the surrounding environment type may enhance various CR tasks, such as spectrum sensing, dynamic channel allocation and interference management. This dissertation deals with the optimisation of adaptive localisation techniques for CR. The first part entails the development and evaluation of an efficient bandwidth determination (BD) model, which is a key component of the cognitive positioning system. This bandwidth efficiency is achieved using the Cramer-Rao lower bound derivations for a single-input-multiple-output (SIMO) antenna scheme. The performances of the single-input-single-output (SISO) and SIMO BD models are compared using three different generalised environmental models, viz. rural, urban and suburban areas. In all three scenarios, the results reveal a marked improvement in bandwidth efficiency for a SIMO antenna positioning scheme, especially for the 1×3 urban case, where a 62% root mean square error (RMSE) improvement over the SISO system is observed. The second part of the dissertation presents a multiband time-of-arrival (TOA) positioning technique for CR. The RMSE positional accuracy is evaluated using a fixed and a dynamic bandwidth availability model. In the case of the fixed bandwidth availability model, the multiband TOA positioning model is initially evaluated using the two-step maximum-likelihood (TSML) location estimation algorithm for a scenario where line-of-sight represents the dominant signal path. Thereafter, a more realistic dynamic bandwidth availability model is proposed, based on data obtained from an ultra-high-frequency spectrum occupancy measurement campaign. The RMSE performance is then verified using the non-linear least squares, linear least squares and TSML location estimation techniques, using five different bandwidths.
The proposed multiband positioning model performs well in poor signal-to-noise ratio conditions (-10 dB to 0 dB) when compared to a single band TOA system. These results indicate the advantage of opportunistic TOA location estimation in a CR environment.
Dissertation (MEng)--University of Pretoria, 2012.
Electrical, Electronic and Computer Engineering
5

Pečenka, Ondřej. "Výzkum vlivu rozložení vstupní chyby na průběh lokalizačního procesu WSN." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2010. http://www.nusl.cz/ntk/nusl-218263.

Abstract:
The diploma thesis focuses on two localization algorithms, an iterative algorithm and a linked algorithm, simulated in MATLAB. It further investigates the influence of input errors on the localization errors of sensor nodes for the examined algorithms and explores possible relationships between the input errors and the localization errors. Possible optimization approaches and their results are then presented.
6

Bekkouche, Mohammed. "Combinaison des techniques de Bounded Model Checking et de programmation par contraintes pour l'aide à la localisation d'erreurs : exploration des capacités des CSP pour la localisation d'erreurs." Thesis, Nice, 2015. http://www.theses.fr/2015NICE4096/document.

Abstract:
A model checker can produce a counter-example trace for an erroneous program, which is often difficult to exploit to locate errors in the source code. In my thesis, we proposed a counter-example-based error localization algorithm, named LocFaults, combining Bounded Model Checking (BMC) approaches with a constraint satisfaction problem (CSP). This algorithm analyzes the paths of the CFG (Control Flow Graph) of the erroneous program to compute the subsets of suspicious instructions whose correction would fix the program. Indeed, we generate a system of constraints for the paths of the control flow graph on which at most k conditional statements may be wrong. We then compute the MCSs (Minimal Correction Sets) of bounded size on each of these paths. Removing one of these sets of constraints yields a maximal satisfiable subset, in other words, a maximal subset of constraints satisfying the postcondition. To compute the MCSs, we extend the generic algorithm proposed by Liffiton and Sakallah in order to handle programs with numerical instructions more efficiently. This approach has been experimentally evaluated on a set of academic and realistic programs.
7

Lu, Wenjie. "Contributions to Lane Marking Based Localization for Intelligent Vehicles." Thesis, Paris 11, 2015. http://www.theses.fr/2015PA112017/document.

Abstract:
Autonomous Vehicle (AV) applications and Advanced Driving Assistance Systems (ADAS) rely on scene understanding processes that allow high-level systems to carry out decision making. For such systems, the localization of a vehicle evolving in a structured dynamic environment constitutes a complex problem of crucial importance. Our research addresses scene structure detection, localization and error modeling. Taking into account the large functional spectrum of vision systems, the accessibility of Open Geographical Information Systems (GIS) and the wide presence of Global Positioning Systems (GPS) on board vehicles, we study the performance and the reliability of a vehicle localization method combining these information sources. Monocular vision-based lane marking detection provides key information about the scene structure. Using an enhanced multi-kernel framework with hierarchical weights, the proposed parametric method performs, in real time, the detection and tracking of the ego-lane marking. A self-assessment indicator quantifies the confidence of this information source. We conduct our investigations within a localization system that tightly couples GPS, GIS and lane markings in the probabilistic framework of a Particle Filter (PF). To this end, we propose the use of lane markings not only during the map-matching process but also to model the expected ego-vehicle motion. The reliability of the localization system, in the presence of unusual errors from the different information sources, is enhanced by taking into account different confidence indicators. This mechanism is later employed to identify error sources. The research concludes with an experimental validation of the proposed methods in real driving situations; their performance was quantified using an experimental vehicle and publicly available datasets.
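As a rough illustration of the particle-filter fusion described in this abstract, the sketch below weights particles by a coarse GPS-like measurement and a precise lane-marking-based one for a single hypothetical state, the vehicle's lateral offset in its lane. All numbers, noise levels and variable names are invented for the example and are not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D example: estimate the vehicle's lateral offset in its lane
# by fusing a noisy GPS-derived fix with a lane-marking-based measurement.
true_offset = 0.4          # metres from lane centre (ground truth, demo only)
n_particles = 2000

# Initialise particles from a broad prior over plausible offsets.
particles = rng.uniform(-2.0, 2.0, n_particles)
weights = np.full(n_particles, 1.0 / n_particles)

# Two measurements of the same quantity with different (assumed) noise levels.
z_gps = true_offset + rng.normal(0.0, 0.8)      # coarse GPS-derived offset
z_lane = true_offset + rng.normal(0.0, 0.1)     # precise lane-marking offset

def gauss_like(z, x, sigma):
    """Gaussian measurement likelihood p(z | x), up to a constant factor."""
    return np.exp(-0.5 * ((z - x) / sigma) ** 2)

# Importance weighting: multiply the likelihoods of both information sources.
weights *= gauss_like(z_gps, particles, 0.8)
weights *= gauss_like(z_lane, particles, 0.1)
weights /= weights.sum()

# Resample according to the weights; the posterior mean is the fused estimate.
idx = rng.choice(n_particles, n_particles, p=weights)
particles = particles[idx]
estimate = particles.mean()
```

The precise lane-marking measurement dominates the posterior, which is the intuition behind tightly coupling lane markings with GPS in the thesis.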
APA, Harvard, Vancouver, ISO, and other styles
8

Viandier, Nicolas. "Modélisation et utilisation des erreurs de pseudodistances GNSS en environnement transport pour l’amélioration des performances de localisation." Thesis, Ecole centrale de Lille, 2011. http://www.theses.fr/2011ECLI0006/document.

Full text
Abstract:
Today, GNSS are widely used in the transport field. Currently, the scientific community aims to develop transport applications with high accuracy, availability and integrity. These systems offer a continuous positioning service. Performance is defined by the system parameters but also by the signal propagation environment. The propagation characteristics of the atmosphere are well known. However, it is more difficult to anticipate and analyse the impact of the propagation environment close to the antenna, which can be composed, for instance, of urban obstacles or vegetation. For several years, the research axes of LEOST and LAGIS have been driven by the understanding of the propagation environment and its use as supplementary information to make the GNSS receiver more pertinent. This approach aims to reduce the number of sensors in the localisation system, and consequently its complexity and cost. The work performed in this thesis is devoted to providing more realistic pseudorange error models and reception channel models. After a step of observation error characterisation, several pseudorange error models are proposed: the finite Gaussian mixture model and the Dirichlet process mixture. The model parameters are then estimated jointly with the state vector containing the position, using an adapted filtering solution such as the Rao-Blackwellised particle filter. The evolution of the noise model allows adaptation to an urban environment and consequently provides a more accurate position. Each step of this work has been tested and evaluated on simulated and real data.
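The finite Gaussian mixture mentioned above can be illustrated with a plain EM fit to synthetic pseudorange errors. The two components here, an unbiased "LOS" one and a positively biased "NLOS" one, and all parameter values are illustrative assumptions, not the thesis's models.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic pseudorange errors: a two-component mixture with an unbiased
# line-of-sight (LOS) component and a biased multipath/NLOS component.
n = 5000
is_nlos = rng.random(n) < 0.3
errors = np.where(is_nlos,
                  rng.normal(15.0, 5.0, n),   # NLOS: biased, wide
                  rng.normal(0.0, 2.0, n))    # LOS: unbiased, narrow

# Plain EM for a 2-component 1-D Gaussian mixture.
w = np.array([0.5, 0.5])            # mixture weights
mu = np.array([-1.0, 10.0])         # initial means
var = np.array([4.0, 4.0])          # initial variances

def normal_pdf(x, m, v):
    return np.exp(-0.5 * (x - m) ** 2 / v) / np.sqrt(2 * np.pi * v)

for _ in range(50):
    # E-step: responsibility of each component for each sample.
    dens = w * normal_pdf(errors[:, None], mu, var)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means and variances.
    nk = resp.sum(axis=0)
    w = nk / n
    mu = (resp * errors[:, None]).sum(axis=0) / nk
    var = (resp * (errors[:, None] - mu) ** 2).sum(axis=0) / nk

los, nlos = (0, 1) if mu[0] < mu[1] else (1, 0)
```

A filter that carries such mixture parameters alongside the position state can down-weight NLOS-contaminated pseudoranges, which is the spirit of the thesis's jointly estimated noise model.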
APA, Harvard, Vancouver, ISO, and other styles
9

Ménétrier, Benjamin. "Utilisation d'une assimilation d'ensemble pour modéliser des covariances d'erreur d'ébauche dépendantes de la situation météorologique à échelle convective." Thesis, Toulouse, INPT, 2014. http://www.theses.fr/2014INPT0052/document.

Full text
Abstract:
Data assimilation aims at providing an initial state as accurate as possible for numerical weather prediction models, using two main sources of information: observations and a recent forecast called the "background". Both are affected by systematic and random errors. The precise estimation of the distribution of these errors is crucial for the performance of data assimilation. In particular, background error covariances can be estimated by Monte-Carlo methods, which sample them from an ensemble of perturbed forecasts. Because of computational costs, the ensemble size is much smaller than the dimension of the error covariances, and statistics estimated in this way are contaminated by sampling noise. Filtering is necessary before any further use. This thesis proposes methods to filter the sampling noise of forecast error covariances. The final goal is to improve the background error covariances of the convective-scale model AROME of Météo-France. The first goal is to document the structure of background error covariances for AROME. A large ensemble data assimilation is set up for this purpose. It allows a fine characterisation of the highly heterogeneous and anisotropic nature of the covariances. These covariances are strongly influenced by the topography, by the density of assimilated observations, by the influence of the coupling model, and also by the atmospheric dynamics. The comparison of the covariances estimated from two independent ensembles of very different sizes gives a description and quantification of the sampling noise. To damp this sampling noise, two methods have historically been developed separately in the community: spatial filtering of variances and localization of covariances. We show in this thesis that these methods can be understood as two direct applications of the theory of linear filtering of covariances. The existence of specific optimality criteria for the linear filtering of covariances is demonstrated in the second part of this work. These criteria have the advantage of involving quantities that can be robustly estimated from the ensemble only. They are fully general, and the ergodicity assumption necessary to their estimation is required in the last step only. They allow the variance filtering and the covariance localization to be objectively determined. These new methods are first illustrated in an idealized framework. They are then evaluated with various metrics, thanks to the large ensemble of AROME forecasts. It is shown that the optimality criteria for the homogeneous filtering of variances yield very good results, particularly the criterion taking the non-Gaussianity of the ensemble into account. The transposition of these criteria to a heterogeneous filtering slightly improves performance, yet at a higher computational cost. An extension of the method is proposed for the components of the local correlation Hessian tensor. Finally, horizontal and vertical localization functions are diagnosed from the ensemble itself. They show consistent variations depending on the considered variable and level, and on the ensemble size. Lastly, the influence of using heterogeneous variances in the background error covariance model of AROME is evaluated. We focus first on the description of the modelled covariances using these variances and then on forecast scores. The lack of realism of the modelled covariances and the negative impact on scores raise questions about such an approach. However, the filtering methods developed in this thesis are general. They are likely to lead to other fruitful applications within the framework of hybrid EnVar approaches, a promising direction as available computing power grows.
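The covariance localization discussed above is, in its simplest form, a Schur (element-wise) product of the sample covariance with a compactly supported taper. The toy 1-D sketch below uses the standard Gaspari-Cohn function as the taper; the domain, length scales and ensemble size are invented for illustration, and the thesis itself derives objective, ensemble-diagnosed localization rather than this fixed choice.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy setup: a 1-D domain whose true background error covariance decays
# smoothly with distance; a small ensemble yields a noisy sample covariance.
n = 60
dist = np.abs(np.arange(n)[:, None] - np.arange(n))
true_cov = np.exp(-(dist / 5.0) ** 2)          # smooth, distance-decaying truth

# Draw a 20-member ensemble from the true covariance (jitter for Cholesky).
chol = np.linalg.cholesky(true_cov + 1e-6 * np.eye(n))
ens = chol @ rng.normal(size=(n, 20))
sample_cov = np.cov(ens)

def gaspari_cohn(r):
    """Gaspari-Cohn 5th-order piecewise-rational taper, support r <= 1."""
    r = np.abs(r)
    f = np.zeros_like(r, dtype=float)
    m1 = r <= 0.5
    m2 = (r > 0.5) & (r <= 1.0)
    x = 2 * r[m1]
    f[m1] = -x**5 / 4 + x**4 / 2 + 5 * x**3 / 8 - 5 * x**2 / 3 + 1
    x = 2 * r[m2]
    f[m2] = x**5 / 12 - x**4 / 2 + 5 * x**3 / 8 + 5 * x**2 / 3 - 5 * x + 4 - 2 / (3 * x)
    return f

loc = gaspari_cohn(dist / 15.0)                # zero beyond 15 grid points
localised_cov = sample_cov * loc               # Schur (element-wise) product

raw_err = np.abs(sample_cov - true_cov).mean()
loc_err = np.abs(localised_cov - true_cov).mean()
```

Tapering removes the spurious long-range covariances caused by the small ensemble, so the localised estimate is closer to the truth on average, while the taper value of 1 at zero separation leaves the variances untouched.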
APA, Harvard, Vancouver, ISO, and other styles
10

Vu, Dinh Thang. "Outils statistiques pour le positionnement optimal de capteurs dans le contexte de la localisation de sources." Phd thesis, Université Paris Sud - Paris XI, 2011. http://tel.archives-ouvertes.fr/tel-00638778.

Full text
Abstract:
This thesis studies the optimal placement of sensor arrays for source localisation. Two approaches are investigated: one based on estimation performance in terms of mean square error (MSE), and one based on the statistical resolution limit (SRL). For the first approach, we consider lower bounds on the mean square error, which are commonly used to assess estimation performance independently of the particular estimator. Two bounds are studied: the Cramér-Rao bound (CRB) for models where the parameters are assumed deterministic, and the Weiss-Weinstein bound (WWB) for models where the parameters are assumed random. We derive closed-form expressions of these bounds in order to build statistical tools for optimising the geometry of sensor arrays. Unlike the CRB, the WWB can capture the threshold behaviour of the estimators' MSE in the non-asymptotic region. Moreover, closed-form expressions of the WWB are given explicitly for a general Gaussian model with parameterised mean or parameterised covariance matrix. Based on these expressions, we study the impact of array geometry on estimation performance using 3-D and 2-D arrays, for two observation models of the source signals: (i) the deterministic model and (ii) the stochastic model. We then deduce conditions on the isotropy and decoupling properties. For the second approach, we consider the statistical resolution limit, which characterises the minimal separation between two sources. In this thesis, we study the SRL in the Bayesian context, which is less covered in the literature. We introduce a linearised observation model based on the minimum-probability-of-error criterion. We then present two Bayesian approaches to the SRL, one based on information theory and the other on detection theory. These approaches can be used to improve the resolution capability of such systems.
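The Cramér-Rao bound central to the first approach can be made concrete for range-based 2-D source localisation. The uniform circular array, noise level and source position below are invented for illustration; for this symmetric geometry the Fisher information matrix is a scaled identity, which is exactly the kind of isotropy property the thesis characterises.

```python
import numpy as np

# Cramér-Rao bound for 2-D source localisation from range (time-of-arrival)
# measurements with i.i.d. Gaussian noise. For r_i = ||p - s_i|| + noise,
#   FIM = (1/sigma^2) * sum_i u_i u_i^T,  u_i = (p - s_i) / ||p - s_i||.
sigma = 0.5                                   # range noise std (metres)
n_sensors = 8
angles = 2 * np.pi * np.arange(n_sensors) / n_sensors
sensors = 10.0 * np.column_stack((np.cos(angles), np.sin(angles)))
source = np.array([0.0, 0.0])                 # source at the array centre

diffs = source - sensors
units = diffs / np.linalg.norm(diffs, axis=1, keepdims=True)
fim = units.T @ units / sigma**2
crb = np.linalg.inv(fim)                      # lower bound on estimator covariance

# Uniform circular array, source at the centre: FIM = (N / (2 sigma^2)) I,
# so the bound is isotropic (same in every direction) and decoupled.
```

The diagonal of `crb` gives the smallest achievable variance of any unbiased position estimate for this geometry, so comparing this quantity across candidate array layouts is one way to optimise sensor placement.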
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Error localisation"

1

Boer, A. de. Correlation, error-localisation and updating of the second problem defined in GARTEUR AG11. Amsterdam: National Aerospace Laboratory, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Error localisation"

1

Lecomte, Christophe, J. J. Forster, B. R. Mace, and N. S. Ferguson. "Bayesian Damage Localisation at Higher Frequencies with Gaussian Process Error." In Topics in Model Validation and Uncertainty Quantification, Volume 4, 39–48. New York, NY: Springer New York, 2012. http://dx.doi.org/10.1007/978-1-4614-2431-4_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Tholen, Christoph, Lars Nolle, and Jens Werner. "On the Influence of Localisation and Communication Error on the Behaviour of a Swarm of Autonomous Underwater Vehicles." In Recent Advances in Soft Computing, 68–79. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-97888-8_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Error localisation"

1

Collignon, Philippe, and Jean-Claude Golinval. "Reliable Mode Shape Expansion Method for Error Localisation and Model Updating." In ASME 1997 Design Engineering Technical Conferences. American Society of Mechanical Engineers, 1997. http://dx.doi.org/10.1115/detc97/vib-4145.

Full text
Abstract:
Abstract Failure detection and model updating using a structural model are based on the comparison of an appropriate indicator of the discrepancy between experimental and analytical results. The reliability of the expansion of measured mode shapes is very important for the process of error localisation and model updating. Two mode shape expansion techniques are examined in this paper: the well-known dynamic expansion (DE) method and a method based on the minimisation of errors on constitutive equations (MECE). A new expansion method based on some improvements of the previous techniques is proposed to obtain results that are more reliable for error localisation and for model updating. The relative performance of the different expansion methods is demonstrated on the example of a cantilever beam.
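The general idea of mode shape expansion can be sketched with a simple least-squares expansion over an analytical modal basis (a SEREP-style scheme; the paper's DE and MECE methods differ in their details, so this is only an illustrative stand-in). The mode shapes, sensor locations and noise level below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic problem: expand a mode shape measured at a few DOFs to all DOFs
# using an analytical modal basis Phi (columns = analytical mode shapes).
n_full, n_meas, n_modes = 30, 8, 4
phi = rng.normal(size=(n_full, n_modes))        # analytical mode shapes
measured_dofs = np.arange(0, n_full, 4)[:n_meas]

# "Experimental" shape: a combination of analytical modes plus sensor noise.
q_true = np.array([1.0, -0.5, 0.2, 0.1])
full_true = phi @ q_true
measured = full_true[measured_dofs] + rng.normal(0.0, 0.001, n_meas)

# Least-squares fit of the modal coordinates on the measured partition,
# then expansion of the shape to the full DOF set.
q, *_ = np.linalg.lstsq(phi[measured_dofs], measured, rcond=None)
expanded = phi @ q
```

The quality of `expanded` at the unmeasured DOFs is exactly what governs the reliability of the subsequent error localisation, which motivates the paper's comparison of expansion methods.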
APA, Harvard, Vancouver, ISO, and other styles
2

Alarcón, Daniel, and Rajeev Goré. "Efficient error localisation and imputation for real-world census data using SMT." In ACSW '16: Australasian Computer Science Week. New York, NY, USA: ACM, 2016. http://dx.doi.org/10.1145/2843043.2843052.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Hall, Matthew L., Catherine E. Towers, and David P. Towers. "Correction of 3D localisation error of multiple objects in close-proximity in digital holography." In 3D Image Acquisition and Display: Technology, Perception and Applications. Washington, D.C.: OSA, 2020. http://dx.doi.org/10.1364/3d.2020.jw2a.14.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Ali, Rheeda L., Chris D. Cantwell, Norman A. Qureshi, Caroline H. Roney, Phang Boon Lim, Spencer J. Sherwin, Jennifer H. Siggers, and Nicholas S. Peters. "Automated fiducial point selection for reducing registration error in the co-localisation of left atrium electroanatomic and imaging data." In 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE, 2015. http://dx.doi.org/10.1109/embc.2015.7318775.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Danieli, Guido A., G. Fragomeni, and E. Giuzio. "A Device for Driving a Wire or a Drill Precisely in a Given Direction Through a Point Under Closed Sky Conditions." In ASME 2001 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2001. http://dx.doi.org/10.1115/imece2001/bed-23079.

Full text
Abstract:
Abstract The paper presents a new device which allows the precise positioning of a guide through a point and in a given direction, or which permits reaching a given bone section. Other devices exist, mainly addressed to the insertion of a screw in the correct position on intramedullary nails, but these are usually based on knowledge of the geometry of the nail and are therefore made by the maker of the nail. In other instances, doctors often use trial drilling under fluoroscopy to place a Kirschner wire in the required position. The present device is instead based only on an aiming methodology resting on geometrical considerations. Previous knowledge of the position of such a point and direction, which can be given by a hole belonging to a plate or to an intramedullary nail or can be part of the patient's bone structure, is not necessary. This is due to the use of fluoroscopy coupled with the localisation of two aiming planes, at whose intersection lies the line of interest. A first instrument developed on this principle found limited clinical success due to the complexity of the first kinematic chain used in the aiming process and to human error. A new version has been designed and is at present under construction, which solves the kinematic problem through a careful choice of a series of constraints enabling the aiming process to be divided into sub-processes, thus gradually controlling the final result.
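The geometric core of the aiming method, recovering a line as the intersection of two planes, can be sketched in a few lines. The plane parameters below are made up; each fluoroscopic view would supply one such plane in practice.

```python
import numpy as np

# Each aiming plane is written as n . x = d (unit normal n, offset d).
# The drilling line is the intersection of the two planes.
n1, d1 = np.array([1.0, 0.0, 0.0]), 2.0       # hypothetical plane 1
n2, d2 = np.array([0.0, 1.0, 0.0]), -1.0      # hypothetical plane 2

direction = np.cross(n1, n2)                   # line direction vector
direction /= np.linalg.norm(direction)

# One particular point on the line: minimum-norm solution of the
# under-determined 2x3 system stacking both plane equations.
A = np.vstack((n1, n2))
point, *_ = np.linalg.lstsq(A, np.array([d1, d2]), rcond=None)
```

Any target axis can then be written as `point + t * direction`, which is the line the guide must realise mechanically.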
APA, Harvard, Vancouver, ISO, and other styles
6

Dempster, Andrew G., Binghao Li, and Ishrat Quader. "Errors in determinstic wireless fingerprinting systems for localisation." In 2008 3rd International Symposium on Wireless Pervasive Computing (ISWPC). IEEE, 2008. http://dx.doi.org/10.1109/iswpc.2008.4556177.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

El-Mihoub, Tarek A., Lars Nolle, and Christoph Tholen. "On Localisation Errors and Cooperative Search for a Swarm of AUVs." In Global Oceans 2020: Singapore - U.S. Gulf Coast. IEEE, 2020. http://dx.doi.org/10.1109/ieeeconf38699.2020.9389227.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Zinoune, Clement, Philippe Bonnifait, and Javier Ibanez-Guzman. "A sequential test for autonomous localisation of map errors for driving assistance systems." In 2012 15th International IEEE Conference on Intelligent Transportation Systems - (ITSC 2012). IEEE, 2012. http://dx.doi.org/10.1109/itsc.2012.6338833.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Rabaoui, Asma, Nicolas Viandier, Juliette Marais, and Emmanuel Duflos. "On the use of Dirichlet process mixtures for the modelling of pseudorange errors in multi-constellation based localisation." In 2009 9th International Conference on ITS Telecommunications (ITST). IEEE, 2009. http://dx.doi.org/10.1109/itst.2009.5399308.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Müller, Simone, and Dieter Kranzlmüller. "Dynamic Sensor Matching for Parallel Point Cloud Data Acquisition." In WSCG'2021 - 29. International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision'2021. Západočeská univerzita, 2021. http://dx.doi.org/10.24132/csrn.2021.3002.3.

Full text
Abstract:
Based on depth perception of individual stereo cameras, spatial structures can be derived as point clouds. The quality of such three-dimensional data is technically restricted by sensor limitations, latency of recording, and insufficient object reconstructions caused by surface illustration. Additionally, external physical effects like lighting conditions, material properties, and reflections can lead to deviations between real and virtual object perception. Such physical influences can be seen in rendered point clouds as geometrical imaging errors on surfaces and edges. We propose the simultaneous use of multiple and dynamically arranged cameras. The increased information density leads to more details in surrounding detection and object illustration. During a pre-processing phase the collected data are merged and prepared. Subsequently, a logical analysis part examines and allocates the captured images to three-dimensional space. For this purpose, it is necessary to create a new metadata set consisting of image and localisation data. The post-processing reworks and matches the locally assigned images. As a result, the dynamic moving images become comparable so that a more accurate point cloud can be generated. For evaluation and better comparability we decided to use synthetically generated data sets. Our approach builds the foundation for dynamic and real-time based generation of digital twins with the aid of real sensor data.
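The basic merging step behind such multi-camera acquisition is expressing each sensor's cloud in a common world frame before comparison. A minimal sketch with made-up extrinsics (the paper's pipeline adds metadata matching and post-processing on top of this):

```python
import numpy as np

# Points observed by one camera, in that camera's coordinate frame.
cloud_cam = np.array([[0.0, 0.0, 2.0],
                      [0.1, 0.0, 2.0],
                      [0.0, 0.1, 2.1]])

# Known (hypothetical) camera pose: 90-degree rotation about z + translation.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([1.0, 2.0, 0.0])

# Rigid transform of every point into the shared world frame.
cloud_world = cloud_cam @ R.T + t

# A second sensor's cloud, already in world coordinates, merges by stacking.
cloud_other = np.array([[1.5, 2.5, 2.0]])
merged = np.vstack((cloud_world, cloud_other))
```

Once all clouds share one frame, overlapping regions from different sensors become directly comparable, which enables the denser and more accurate reconstruction the abstract describes.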
APA, Harvard, Vancouver, ISO, and other styles
