Dissertations / Theses on the topic 'Error localisation'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 20 dissertations / theses for your research on the topic 'Error localisation.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Zaman, Munir uz. "Mobile robot localisation : error modelling, data synchronisation and vision techniques." Thesis, University of Surrey, 2006. http://epubs.surrey.ac.uk/844082/.
Full text
Landsberg, David. "Methods and measures for statistical fault localisation." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:cf737e06-9f12-44fa-94d2-a8d247ad808e.
Full text
Prévost, Raoul. "Décodage et localisation AIS par satellite." Thesis, Toulouse, INPT, 2012. http://www.theses.fr/2012INPT0121/document.
Full text
The automatic identification system (AIS) is a system that allows ships and coast stations to exchange information by VHF radio. This information includes the identifier, status, location, direction and speed of the emitter. The aim of this thesis is to allow the reception of AIS messages by low Earth orbit satellites without modifying the existing ship equipment. With this system, it becomes possible to know the position of all ships over the Earth. As a consequence, several new services become available, such as global traffic monitoring or determining boat location (for ship-owners). Satellite reception of AIS signals is subject to a higher noise level than ground-level reception. This noise makes classical demodulation and decoding methods unusable. A first contribution of this thesis is the development of new demodulators using error correction methods. These demodulators take advantage of the presence of a cyclic redundancy check (CRC) block in the messages, as well as known information about the structure of messages and data. Generalizations of the proposed receiver have also been studied in order to take into account the phase noise of the received signals and the possible collision of messages sent simultaneously by several vessels. The last part of this thesis is devoted to the study of localization methods for ships that do not transmit their location in AIS messages. This localization takes advantage of information contained in the received messages, such as the propagation delay and the carrier frequency shift due to the Doppler effect, together with a ship movement model.
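The CRC-as-error-corrector idea at the heart of this contribution can be sketched in a few lines. The sketch below is an illustrative toy (a short payload and the CRC-16-CCITT polynomial, not the actual AIS receiver of the thesis): when the checksum fails, flipping each bit in turn and re-testing the CRC recovers the position of a single bit error.

```python
def crc16(bits, poly=0x1021, init=0xFFFF):
    # Bitwise CRC-16-CCITT over a list of 0/1 values.
    reg = init
    for b in bits:
        msb = (reg >> 15) & 1
        reg = (reg << 1) & 0xFFFF
        if msb ^ b:
            reg ^= poly
    return reg

def encode(payload):
    # Append the 16 CRC bits (MSB first) to the payload.
    c = crc16(payload)
    return payload + [(c >> i) & 1 for i in range(15, -1, -1)]

def frame_ok(frame):
    # A frame checks out if the payload CRC matches the appended CRC field.
    value = 0
    for b in frame[-16:]:
        value = (value << 1) | b
    return crc16(frame[:-16]) == value

def correct_single_bit(frame):
    # Use the CRC as an oracle: flip each bit until the check passes.
    if frame_ok(frame):
        return frame
    for i in range(len(frame)):
        cand = list(frame)
        cand[i] ^= 1
        if frame_ok(cand):
            return cand
    return None
```

Because CRC-16-CCITT detects all one- and two-bit errors at these frame lengths, flipping the wrong bit never yields a valid frame, so the search is unambiguous for a single error. The thesis integrates the CRC far more deeply, inside the demodulation itself, rather than as a post-hoc scan like this.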
Thomas, Robin Rajan. "Optimisation of adaptive localisation techniques for cognitive radio." Diss., University of Pretoria, 2012. http://hdl.handle.net/2263/27076.
Full text
Dissertation (MEng)--University of Pretoria, 2012.
Electrical, Electronic and Computer Engineering
unrestricted
Pečenka, Ondřej. "Výzkum vlivu rozložení vstupní chyby na průběh lokalizačního procesu WSN." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2010. http://www.nusl.cz/ntk/nusl-218263.
Full text
Bekkouche, Mohammed. "Combinaison des techniques de Bounded Model Checking et de programmation par contraintes pour l'aide à la localisation d'erreurs : exploration des capacités des CSP pour la localisation d'erreurs." Thesis, Nice, 2015. http://www.theses.fr/2015NICE4096/document.
Full text
A model checker can produce a counter-example trace for an erroneous program, which is often difficult to exploit to locate errors in the source code. In this thesis, we propose an error localization algorithm based on counter-examples, named LocFaults, combining Bounded Model Checking (BMC) with the constraint satisfaction problem (CSP) framework. This algorithm analyzes the paths of the CFG (Control Flow Graph) of the erroneous program to compute subsets of suspicious instructions for correcting the program. Specifically, we generate a system of constraints for the paths of the control flow graph in which at most k conditional statements may be faulty. We then compute the MCSs (Minimal Correction Sets) of bounded size on each of these paths. Removing one of these sets of constraints yields a maximal satisfiable subset, in other words a maximal subset of constraints satisfying the postcondition. To compute the MCSs, we extend the generic algorithm proposed by Liffiton and Sakallah in order to deal with programs with numerical instructions more efficiently. This approach has been experimentally evaluated on a set of academic and realistic programs.
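The core notion of a Minimal Correction Set can be illustrated with a brute-force enumeration over a toy constraint system (hypothetical constraints over two small integer variables; the thesis uses the far more efficient Liffiton-Sakallah-style algorithm, extended to numerical instructions):

```python
from itertools import combinations, product

def satisfiable(constraints, domain):
    # A constraint set is satisfiable if some assignment meets every constraint.
    return any(all(c(x, y) for c in constraints)
               for x, y in product(domain, repeat=2))

def minimal_correction_sets(constraints, domain, max_size=2):
    # Enumerate minimal subsets whose removal restores satisfiability,
    # smallest cardinality first, so any superset of a found MCS is skipped.
    found = []
    for k in range(1, max_size + 1):
        for idx in combinations(range(len(constraints)), k):
            if any(set(m) <= set(idx) for m in found):
                continue
            rest = [c for i, c in enumerate(constraints) if i not in idx]
            if satisfiable(rest, domain):
                found.append(idx)
    return found

# Constraints 0 and 1 conflict: removing either one repairs the system.
mcs = minimal_correction_sets(
    [lambda x, y: x > 3, lambda x, y: x < 2, lambda x, y: y == x],
    range(5))
```

Removing an MCS leaves a maximal satisfiable subset, which is exactly the "maximal subset of constraints satisfying the postcondition" mentioned above.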
Lu, Wenjie. "Contributions to Lane Marking Based Localization for Intelligent Vehicles." Thesis, Paris 11, 2015. http://www.theses.fr/2015PA112017/document.
Full text
Autonomous Vehicle (AV) applications and Advanced Driving Assistance Systems (ADAS) rely on scene understanding processes that allow high-level systems to carry out decision making. For such systems, the localization of a vehicle evolving in a structured dynamic environment constitutes a complex problem of crucial importance. Our research addresses scene structure detection, localization and error modeling. Taking into account the large functional spectrum of vision systems, the accessibility of Open Geographical Information Systems (GIS) and the wide presence of Global Positioning Systems (GPS) onboard vehicles, we study the performance and the reliability of a vehicle localization method combining these information sources. Monocular vision-based lane marking detection provides key information about the scene structure. Using an enhanced multi-kernel framework with hierarchical weights, the proposed parametric method performs, in real time, the detection and tracking of the ego-lane marking. A self-assessment indicator quantifies the confidence of this information source. We conduct our investigations within a localization system that tightly couples GPS, GIS and lane markings in the probabilistic framework of a Particle Filter (PF). To this end, we propose using lane markings not only during the map-matching process but also to model the expected ego-vehicle motion. The reliability of the localization system, in the presence of unusual errors from the different information sources, is enhanced by taking into account different confidence indicators. Such a mechanism is later employed to identify error sources. This research concludes with an experimental validation of the proposed methods in real driving situations. They were tested, and their performance was quantified, using an experimental vehicle and publicly available datasets.
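The benefit of tightly coupling a precise lane-marking observation with a coarse GPS fix can be sketched as a single particle-weighting step. All numbers, the 1-D lateral-position setup and the noise levels below are illustrative, not the system of the thesis:

```python
import math

def gauss(z, mean, sigma):
    # Unnormalized Gaussian likelihood of measurement z.
    return math.exp(-0.5 * ((z - mean) / sigma) ** 2)

def fuse_lateral(particles, gps_meas, lane_dist, lane_pos,
                 sigma_gps=2.0, sigma_lane=0.2):
    """One particle-filter measurement update for the lateral position:
    each particle p is weighted by the product of the GPS likelihood and
    the likelihood of the camera-measured distance to the lane marking."""
    w = [gauss(gps_meas, p, sigma_gps) * gauss(lane_dist, lane_pos - p, sigma_lane)
         for p in particles]
    total = sum(w)
    return sum(wi * p for wi, p in zip(w, particles)) / total

# A grid of candidate lateral positions (metres from the road edge).
particles = [i * 0.05 for i in range(121)]          # 0.0 .. 6.0 m
estimate = fuse_lateral(particles,
                        gps_meas=3.4,               # biased GPS fix
                        lane_dist=1.75,             # camera: 1.75 m to marking
                        lane_pos=4.75)              # marking position from the GIS map
```

With these made-up noise levels the 0.2 m lane observation dominates the 2 m GPS noise, and the estimate lands near lane_pos − lane_dist = 3.0 m rather than at the biased 3.4 m fix — the lateral correction that map-matched lane markings provide.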
Viandier, Nicolas. "Modélisation et utilisation des erreurs de pseudodistances GNSS en environnement transport pour l’amélioration des performances de localisation." Thesis, Ecole centrale de Lille, 2011. http://www.theses.fr/2011ECLI0006/document.
Full text
Today, GNSS are largely present in the transport field. Currently, the scientific community aims to develop transport applications with high accuracy, availability and integrity. These systems offer a continuous positioning service. Performance is determined by the system parameters but also by the signal propagation environment. The atmospheric propagation characteristics are well known. However, it is more difficult to anticipate and analyze the impact of the propagation environment close to the antenna, which can be composed, for instance, of urban obstacles or vegetation. For several years, the research axes of LEOST and LAGIS have been driven by the understanding of the propagation environment and its use as supplementary information to help the GNSS receiver to be more pertinent. This approach aims to reduce the number of sensors in the localisation system, and consequently its complexity and cost. The work performed in this thesis is devoted to providing more realistic pseudorange error models and a reception channel model. After a step of observation error characterization, several pseudorange error models have been proposed. These models are the finite Gaussian mixture model and the Dirichlet process mixture. The model parameters are then estimated jointly with the state vector containing the position, using an adapted filtering solution such as the Rao-Blackwellized particle filter. The evolution of the noise model allows it to adapt to an urban environment and consequently to provide a more accurate position. Each step of this work has been tested and evaluated on simulated and real data.
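The finite Gaussian mixture idea for pseudorange errors can be sketched with a small EM fit. The data below are synthetic stand-ins for an open-sky error population plus a wide, shifted multipath population; the thesis estimates such parameters jointly with the position, which this sketch does not attempt:

```python
import math
import random

def em_gmm2(data, iters=60):
    # EM for a two-component 1-D Gaussian mixture: weights pi, means m, spreads s.
    m = [min(data), max(data)]          # crude initialization at the extremes
    s = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        # (the common 1/sqrt(2*pi) factor cancels in the normalization).
        resp = []
        for x in data:
            dens = [pi[k] / s[k] * math.exp(-0.5 * ((x - m[k]) / s[k]) ** 2)
                    for k in range(2)]
            t = sum(dens)
            resp.append([d / t for d in dens])
        # M-step: re-estimate weights, means and standard deviations.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            m[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            s[k] = math.sqrt(sum(r[k] * (x - m[k]) ** 2
                                 for r, x in zip(resp, data)) / nk) + 1e-9
    return pi, m, s

# Synthetic pseudorange errors: a narrow nominal mode plus a multipath mode.
random.seed(1)
errors = [random.gauss(0.0, 1.0) for _ in range(300)] + \
         [random.gauss(15.0, 3.0) for _ in range(100)]
pi, m, s = em_gmm2(errors)
```

The fit recovers one component near 0 m and one near 15 m with roughly 3:1 weights, mirroring how a mixture model can separate nominal noise from multipath-corrupted measurements.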
Ménétrier, Benjamin. "Utilisation d'une assimilation d'ensemble pour modéliser des covariances d'erreur d'ébauche dépendantes de la situation météorologique à échelle convective." Thesis, Toulouse, INPT, 2014. http://www.theses.fr/2014INPT0052/document.
Full text
Data assimilation aims at providing an initial state as accurate as possible for numerical weather prediction models, using two main sources of information: observations and a recent forecast called the "background". Both are affected by systematic and random errors. The precise estimation of the distribution of these errors is crucial for the performance of data assimilation. In particular, background error covariances can be estimated by Monte Carlo methods, which sample from an ensemble of perturbed forecasts. Because of computational costs, the ensemble size is much smaller than the dimension of the error covariances, and statistics estimated in this way are spoiled with sampling noise. Filtering is necessary before any further use. This thesis proposes methods to filter the sampling noise of forecast error covariances. The final goal is to improve the background error covariances of the convective-scale model AROME of Météo-France. The first goal is to document the structure of background error covariances for AROME. A large ensemble data assimilation is set up for this purpose. It makes it possible to finely characterize the highly heterogeneous and anisotropic nature of the covariances. These covariances are strongly influenced by the topography, by the density of assimilated observations, by the influence of the coupling model, and also by the atmospheric dynamics. The comparison of the covariances estimated from two independent ensembles of very different sizes gives a description and quantification of the sampling noise. To damp this sampling noise, two methods have historically been developed in the community: spatial filtering of variances and localization of covariances. We show in this thesis that these methods can be understood as two direct applications of the theory of linear filtering of covariances. The existence of specific optimality criteria for the linear filtering of covariances is demonstrated in the second part of this work.
These criteria have the advantage of involving quantities that can be robustly estimated from the ensemble only. They are fully general, and the ergodicity assumption necessary for their estimation is required in the last step only. They allow the variance filtering and the covariance localization to be objectively determined. These new methods are first illustrated in an idealized framework. They are then evaluated with various metrics, thanks to the large ensemble of AROME forecasts. It is shown that the optimality criteria for the homogeneous filtering of variances yield very good results, particularly with the criterion taking the non-Gaussianity of the ensemble into account. The transposition of these criteria to a heterogeneous filtering slightly improves performance, yet at a higher computational cost. An extension of the method is proposed for the components of the local correlation Hessian tensor. Finally, horizontal and vertical localization functions are diagnosed from the ensemble itself. They show consistent variations depending on the considered variable and level, and on the ensemble size. Lastly, the influence of using heterogeneous variances in the background error covariance model of AROME is evaluated. We focus first on the description of the modelled covariances using these variances, and then on forecast scores. The lack of realism of the modelled covariances and the negative impact on scores raise questions about such an approach. However, the filtering methods developed in this thesis are general. They are likely to lead to other fruitful applications within the framework of hybrid approaches, which are a promising way forward in a context of growing computational resources.
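The covariance localization operation discussed here is easy to state concretely: a Schur (element-wise) product of the sample covariance with a compactly supported correlation function, which leaves the diagonal untouched and zeroes out spurious long-range covariances. The sketch below is illustrative, with a simple triangular taper standing in for the Gaspari-Cohn function usually used:

```python
import random

def sample_cov(ensemble):
    # Unbiased sample covariance of an ensemble of state vectors.
    n, d = len(ensemble), len(ensemble[0])
    mean = [sum(m[j] for m in ensemble) / n for j in range(d)]
    return [[sum((m[i] - mean[i]) * (m[j] - mean[j]) for m in ensemble) / (n - 1)
             for j in range(d)] for i in range(d)]

def localize(cov, cutoff):
    # Schur product with a compactly supported taper: entries further apart
    # than `cutoff` grid points are set exactly to zero.
    d = len(cov)
    taper = lambda dist: max(0.0, 1.0 - dist / cutoff)
    return [[cov[i][j] * taper(abs(i - j)) for j in range(d)] for i in range(d)]

# A tiny ensemble (5 members, 20 grid points) of independent noise:
# the true covariance is diagonal, so every off-diagonal entry of the
# sample covariance is pure sampling noise.
random.seed(0)
ensemble = [[random.gauss(0.0, 1.0) for _ in range(20)] for _ in range(5)]
cov = sample_cov(ensemble)
loc = localize(cov, cutoff=5)
```

With only five members the raw sample covariance between distant points is visibly nonzero even though the true value is zero; after localization those entries vanish while the variances on the diagonal are preserved, which is exactly the damping behaviour the optimality criteria are designed to tune.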
Vu, Dinh Thang. "Outils statistiques pour le positionnement optimal de capteurs dans le contexte de la localisation de sources." Phd thesis, Université Paris Sud - Paris XI, 2011. http://tel.archives-ouvertes.fr/tel-00638778.
Full text
Barbarella, Elena. "Towards the localization and characterization of defects based on the Modified error in Constitutive Relation : focus on the buckling test and comparison with other type of experiments." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLN066/document.
Full text
Composite materials are nowadays extending their operational field to industrial applications other than aeronautics. New potential markets, such as automotive, imply the need to comply with different constraints; reduced cost and production time become more binding, taking the lead over the complete absence of defects. The drawback of fast automatized procedures is the higher defectiveness of the components produced; a deeper control of the part is therefore needed. Non-destructive techniques are expensive both in terms of cost and time, and therefore the main question we tried to answer in this thesis is: is it possible to detect and estimate the effect of defects without resorting to complex and time-consuming NDT techniques? An acceptable answer may potentially lead to lower precision, but should guarantee sufficient quantitative information for these applications. The thesis aims at exploring possibilities to use classical mechanical tests, combined with Digital Image Correlation and an inverse procedure, to localize and characterize possible (large) defects. Buckling tests were chosen first owing to their supposed sensitivity to defects. Among the possible inverse techniques, we have chosen to extend the so-called Modified Error in Constitutive Relation (MCRE) to the case of buckling because, in the case of vibration tests performed at several frequencies, the MCRE proved to have very good localization properties. The dedicated formulation of the MCRE for linearized buckling requires a post-processing of the non-linear experimental results. The Southwell plot is employed here to reconstruct the eigenvalue (the critical load) of the equivalent eigenvalue problem (i.e., the solution of the problem with a material defect and no geometrical ones), and Stereo Digital Image Correlation (StereoDIC) is exploited to reconstruct the deformed shape of the specimen during the test, used as an approximation of the buckling mode.
The interests and limits of the methodology are discussed, notably through the comparison of numerical results using the MCRE in the case of traction, flexion or vibration tests. It is shown that the linearized-buckling-based MCRE technique performs well for pseudo-experimental measurements, at least for moderate geometrical imperfections. In addition, first experiments have been performed; the defects are characterized from real experimental specimens, both for a nominally perfect specimen and for a defective one in which a zone of fibre waviness is induced. While no defects are detected on the first one, on the flawed specimen the localized area is in reasonable agreement with the area affected by fibre undulations.
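The Southwell construction used here amounts to a one-line least-squares fit: for an imperfect column, the deflection data (P, δ) satisfy δ/P = δ/P_cr + δ₀/P_cr, so plotting δ/P against δ gives a line whose slope is 1/P_cr. A minimal sketch on synthetic data (illustrative values, not the experiments of the thesis):

```python
def southwell_critical_load(loads, deflections):
    # Least-squares slope of delta/P versus delta; the critical load is 1/slope.
    xs = deflections
    ys = [d / p for d, p in zip(deflections, loads)]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return 1.0 / slope

# Synthetic imperfect-column data: delta = delta0 * P / (P_cr - P),
# with P_cr = 100.0 and initial imperfection delta0 = 0.5 (made-up units).
loads = [20.0, 40.0, 60.0, 80.0]
deflections = [0.5 * p / (100.0 - p) for p in loads]
p_cr = southwell_critical_load(loads, deflections)
```

The estimate recovers P_cr = 100 without ever loading the specimen near collapse, which is why the plot is a convenient way to feed a linearized eigenvalue formulation with experimental data.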
Boskovitz, Agnes. "Data Editing and Logic: The covering set method from the perspective of logic." The Australian National University. Research School of Information Sciences and Engineering, 2008. http://thesis.anu.edu.au./public/adt-ANU20080314.163155.
Full text
Balakrishnan, Arjun. "Integrity Analysis of Data Sources in Multimodal Localization System." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASG060.
Full text
Intelligent vehicles are a key component in humanity's vision for safer, more efficient, and accessible transportation systems across the world. Due to the multitude of data sources and processes associated with intelligent vehicles, the reliability of the total system is greatly dependent on the possibility of errors or poor performance in its components. In our work, we focus on the critical task of localization of intelligent vehicles and address the challenges in monitoring the integrity of the data sources used in localization. The primary contribution of our research is the proposition of a novel integrity protocol that combines integrity concepts from information systems with the existing integrity concepts in the field of Intelligent Transport Systems (ITS). An integrity monitoring framework based on this protocol, capable of handling multimodal localization problems, is formalized. As the first step, a proof of concept for this framework is developed based on cross-consistency estimation of data sources using polynomial models. Based on the observations from the first step, a 'Feature Grid' data representation is proposed in the second step and a generalized prototype for the framework is implemented. The framework is tested on highways as well as in complex urban scenarios to demonstrate that it is capable of providing continuous integrity estimates of the multimodal data sources used in intelligent vehicle localization.
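The first-step proof of concept, cross-consistency estimation with polynomial models, can be sketched as follows. This is a hypothetical 1-D setup with a degree-1 polynomial; the function names, readings and the threshold are all illustrative, not the prototype of the thesis:

```python
def fit_line(ts, xs):
    # Least-squares line through (t, x) samples.
    n = len(ts)
    tbar, xbar = sum(ts) / n, sum(xs) / n
    slope = (sum((t - tbar) * (x - xbar) for t, x in zip(ts, xs))
             / sum((t - tbar) ** 2 for t in ts))
    return slope, xbar - slope * tbar

def cross_consistent(ts, ref_readings, t_new, other_reading, threshold=1.0):
    """Fit a short-horizon polynomial motion model to the reference source,
    then flag the other source if its new reading strays from the prediction."""
    slope, intercept = fit_line(ts, ref_readings)
    predicted = slope * t_new + intercept
    return abs(other_reading - predicted) <= threshold

# GPS positions over four timestamps; is the position reported by a second
# source at t = 4 consistent with the trend?
ts, gps = [0.0, 1.0, 2.0, 3.0], [0.0, 1.1, 1.9, 3.0]
ok = cross_consistent(ts, gps, 4.0, 4.2)    # close to the extrapolation
bad = cross_consistent(ts, gps, 4.0, 7.5)   # a fault in the other source
```

A mutual check of this kind, run pairwise across sources and over sliding windows, yields the continuous integrity estimates the framework is built around.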
Urban, Daniel. "Lokalizace mobilního robota v prostředí." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2018. http://www.nusl.cz/ntk/nusl-385923.
Full text
Kinnaert, Xavier. "Data processing of induced seismicity : estimation of errors and of their impact on geothermal reservoir models." Thesis, Strasbourg, 2016. http://www.theses.fr/2016STRAH013/document.
Full text
Induced seismicity locations and focal mechanisms are commonly used, among other tasks, to image the sub-surface design of reservoirs. In this Ph.D., the inaccuracies and uncertainties in earthquake locations and focal mechanisms are quantified using a three-step method. The technique is applied to the geothermal sites of Soultz and Rittershoffen to investigate the effect of several criteria on the earthquake location. A good azimuthal seismic coverage and the use of seismic down-hole sensors markedly decrease the location uncertainty. On the contrary, velocity model uncertainties, represented by a 5% Gaussian distribution of the velocity model around the reference model, multiply location uncertainties by a factor of 2 to 3. An incorrect knowledge of the sub-surface, or the simplifications performed before the earthquake location, can lead to biases of 10% of the vertical distance separating the source and the stations, with a non-isotropic spatial distribution. Hence the sub-surface design may be distorted in the interpretations. To prevent this, the calibration shot method proved to be efficient. The study of focal mechanism errors leads to different conclusions. Obviously, the angular bias may be increased by neglecting the fault in the velocity model. But it may also be the same as, or even smaller than, the bias calculated for the case simulating a perfect knowledge of the propagation medium. Furthermore, a better seismic coverage always leads to smaller angular biases. Hence, it is advisable to use more than earthquake location alone in order to image a reservoir. Other geothermal sites and reservoirs may benefit from the method developed here.
The correct localization of induced seismicity and the associated focal mechanisms are very important parameters. For example, the distribution of the earthquakes and the orientation of their focal mechanisms are used to localize and image reservoirs at depth. In this doctoral thesis, a technique is proposed to quantify the methodological errors. With this method, the different error sources, the uncertainties and the errors in the model are separated. The technique is applied to the geothermal fields at Soultz and Rittershoffen to determine the influence of various parameters (assumptions) on the localization of the induced seismicity. It was found that borehole seismometers and a good azimuthal distribution of the seismic stations reduce the uncertainties. Velocity uncertainties, represented by a Gaussian distribution with a 5% error, multiply the localization inaccuracies by a factor of 2 to 3. An imprecise knowledge of the subsurface, or the simplified representation of the velocity structure that is used (necessary to carry out the synthetic computations), leads to anisotropic deviations and errors in the source depth of up to 10%. These can significantly distort interpretations of the subsurface. A "calibration shot" can correct these errors. Unfortunately, the errors in the focal mechanisms cannot be corrected in the same way. It therefore appears unwise to characterize a reservoir through earthquake localization alone; a combination of several seismic methods seems indicated. The method discussed here can serve as a basis for the exploration of other (geothermal)
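The 5% velocity-model uncertainty quoted above translates almost directly into location scatter, which a tiny Monte Carlo sketch can reproduce. This is a 1-D vertical travel-time inversion with made-up numbers, not the three-step method of the thesis:

```python
import random
import statistics

def relocated_depths(true_depth=3000.0, v_ref=5000.0,
                     rel_sigma=0.05, n=2000, seed=0):
    """Observe the travel time with the true velocity, then relocate with
    velocities drawn from a 5% Gaussian around the reference model."""
    rng = random.Random(seed)
    t_obs = true_depth / v_ref          # noise-free observed travel time (s)
    return [t_obs * rng.gauss(v_ref, rel_sigma * v_ref) for _ in range(n)]

depths = relocated_depths()
spread = statistics.stdev(depths) / 3000.0   # relative depth scatter
```

In this toy the relative depth scatter comes out at about 5%, mirroring the velocity uncertainty; in 3-D, with realistic network geometry, the thesis finds the uncertainty amplified by a factor of 2 to 3.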
Routledge, Andrew James. "The internal structure of consciousness." Thesis, University of Manchester, 2015. https://www.research.manchester.ac.uk/portal/en/theses/the-internal-structure-of-consciousness(4e91f257-fa9f-4a53-81be-e30cdb0002a5).html.
Full text
Hartig, Michael. "Loi de van der Waals-London pour les systèmes d'atomes et de molécules relativistes." Thesis, Toulon, 2019. http://www.theses.fr/2019TOUL0009/document.
Full text
We consider a multiatomic system where the nuclei are assumed to be point charges at fixed positions. Particles interact via the Coulomb potential, and electrons have the relativistic kinetic energy (p² + m²)^(1/2) − m. We prove the van der Waals-London law, which states that the interaction energy between neutral atoms decays as the sixth power of the distance |D| between the atoms. We rigorously compute all terms in the binding energy up to order |D|⁻⁹, with an error term of order O(|D|⁻¹⁰). As intermediate steps, we prove the exponential decay of eigenfunctions of multiparticle Schrödinger operators with the permutation symmetry imposed by the Pauli principle, and new estimates of the localization error. In addition, we prove the van der Waals-London law for the projected Dirac operator, known as the Brown-Ravenhall operator. In this case we do not calculate the coefficients explicitly, and we obtain an error term of order O(|D|⁻⁷).
Abu-Shaban, Zohair M. "Towards the Next Generation of Location-Aware Communications." Phd thesis, 2017. http://hdl.handle.net/1885/143226.
Full text
Boskovitz, Agnes. "Data Editing and Logic: The covering set method from the perspective of logic." Phd thesis, 2008. http://hdl.handle.net/1885/49318.
Full text
Benmessaoud, Sirine. "Metalinguistic knowledge of second language pre-service teachers and the quality of their written corrective feedback : what relations?" Thesis, 2020. http://hdl.handle.net/1866/24553.
Full text
The present quantitative study seeks to 1) measure pre-service teachers' metalinguistic knowledge, 2) describe the quality of French as a second language (FSL) pre-service teachers' written corrective feedback (WCF), and 3) examine the relationship between pre-service teachers' metalinguistic knowledge and the quality of their written corrective feedback (i.e., teachers' metalinguistic awareness). A group of 18 FSL pre-service teachers following the initial teacher training program in Montreal participated in the study. Participants were assigned 1) a task of analytical abilities to measure their metalinguistic knowledge, and 2) a task of written corrective feedback provision to evaluate the quality of their WCF in terms of error location and the metalinguistic explanation provided. Descriptive analyses were undertaken to answer the first two research questions, and correlation analyses were performed to examine whether there are any relations between pre-service teachers' metalinguistic knowledge and the quality of their WCF. Among other things, results indicated that 1) the error location in the WCF provided was precise, 2) the metalinguistic explanation provided by the participants was not accurate, and 3) there is a relationship between pre-service teachers' metalinguistic knowledge and the quality of their written corrective feedback.