Dissertations / Theses on the topic 'Data fusion algorithms'

Consult the top 50 dissertations / theses for your research on the topic 'Data fusion algorithms.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Aziz, Ashraf Mamdouh Abdel. "New data fusion algorithms for distributed multi-sensor multi-target environments." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1999. http://handle.dtic.mil/100.2/ADA369780.

Full text
Abstract:
Dissertation (Ph.D. in Electrical Engineering)--Naval Postgraduate School, September 1999.
"September 1999." Dissertation supervisor(s): Robert Cristi, Murali Tummala. Includes bibliographical references (p. 199-214). Also available online.
2

Rivera Velázquez, Josué. "Analysis and development of algorithms for data fusion in sensor arrays." Thesis, Montpellier, 2020. http://www.theses.fr/2020MONTS038.

Full text
Abstract:
Currently, most sensors are "smart" in nature, meaning that the sensing elements and the associated electronics are integrated on the same chip. Among this new generation of sensors, Micro-Electro-Mechanical Systems (MEMS) use microelectronics technologies for the batch manufacturing of small-footprint sensors in unprecedented volumes and at low prices. While these off-the-shelf components are satisfactory for many consumer and low- to medium-end applications, they still cannot fully meet the performance needs of many high-end applications. However, thanks to their decreasing price, small footprint and low power consumption, it is now feasible to implement systems with tens or even hundreds of sensors. Such systems offer a possible answer to the lack of performance of individual sensors and can also improve the dependability and robustness of sensing. Sensor arrays are one such redundant-measurement approach that arises in response to the aforementioned problems. The development of data fusion algorithms for sensor array systems is a research topic frequently studied in the literature; even so, a great deal of work remains to be done in this increasingly important area. The emergence of new applications with increasingly complex needs raises the demand for new algorithms offering ease of integration, adaptability, dependability, low computational cost and genericity, among other features. In this thesis we present a new algorithm for sensor array systems that provides a viable solution to the constraints mentioned above. The proposal is an on-line method based on the MInimum Norm QUadratic Unbiased Estimation (MINQUE) that is able to compute the sensors' variances without knowledge of the inputs. The algorithm can track changes in the sensors' variances caused mainly by low-frequency noise effects, and can detect and flag sensors affected by permanent or transient errors. The approach is generic, meaning that it can be implemented for different types of sensor array systems, and it can also be applied to sensor networks. Two further contributions of this thesis can be listed. The first is a generic sensor model for system-level sensor simulations. This tool, created in the Matlab Simulink environment, allows the analysis of data fusion algorithm implementations in multi-sensor systems. Unlike models previously available in the literature, it is generic, includes low-frequency noises, and can be parameterized from spectral-analysis plots (power spectral density) and time-domain stability plots (Allan deviation). The second is a study comparing the performance and implementation feasibility of different data fusion algorithms for sensor array systems, covering computational complexity, memory requirements and estimation error. The algorithms analyzed are the method of least squares, an artificial neural network, the Kalman filter and random weighting.
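The thesis itself relies on a MINQUE-based on-line variance estimator; as a much simpler illustration of why per-sensor variance estimates matter in a sensor array, the sketch below fuses redundant readings by inverse-variance weighting. This is a minimal, hypothetical Python example (the function name and the numbers are invented), not the author's algorithm.

```python
import numpy as np

def inverse_variance_fusion(readings, variances):
    """Fuse redundant measurements of the same quantity.

    readings  : array of shape (n_sensors,) with one sample per sensor
    variances : array of shape (n_sensors,) with each sensor's noise variance
    Returns the fused estimate and its variance.
    """
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / variances            # weight each sensor by 1 / sigma^2
    w /= w.sum()                   # normalize the weights
    fused = float(np.dot(w, readings))
    fused_var = 1.0 / np.sum(1.0 / variances)
    return fused, fused_var

# Example: three sensors observing the same input, the last one much noisier.
readings = np.array([1.02, 0.98, 1.30])
variances = np.array([0.01, 0.01, 0.25])
print(inverse_variance_fusion(readings, variances))   # fused value close to 1.0
```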
3

Baravdish, Ninos. "Information Fusion of Data-Driven Engine Fault Classification from Multiple Algorithms." Thesis, Linköpings universitet, Fordonssystem, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-176508.

Full text
Abstract:
As the automotive industry constantly makes technological progress, higher demands are placed on safety, environmental friendliness and durability. Modern vehicles are headed towards increasingly complex systems, in terms of both hardware and software, making it important to detect faults in any of the components. Monitoring the engine's health has traditionally been done using expert knowledge and model-based techniques, where derived models of the system's nominal state are used to detect any deviations. However, due to the increased complexity of the system, this approach faces limitations regarding the time and knowledge needed to describe the engine's states. An alternative approach is therefore data-driven methods, which instead are based on historical data measured at different operating points and used to draw conclusions about the engine's present state. In this thesis a diagnostic framework is proposed, consisting of a systematic approach for fault classification of known and unknown faults along with fault size estimation. The basis for this lies in using principal component analysis to find the fault vector for each fault class and decouple one fault at a time, thus creating different subspaces. Importantly, this work investigates, from a performance perspective, the efficiency of taking multiple classifiers into account in the decision making. Aggregating multiple classifiers is done by solving a quadratic optimization problem. To evaluate the performance, a comparison with a random forest classifier has been made. Evaluation on challenging test data shows promising results, where the algorithm compares well with the performance of the random forest classifier.
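The thesis aggregates several classifiers by solving a quadratic optimization problem; the exact formulation is not given in the abstract, so the sketch below shows one common variant, purely for illustration: class-probability outputs from multiple classifiers are combined with non-negative weights that sum to one, chosen to minimize the squared error against validation labels. All names and numbers here are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def aggregate_classifiers(probas, y_onehot):
    """Find convex combination weights for K classifiers.

    probas   : list of K arrays, each (n_samples, n_classes) of predicted probabilities
    y_onehot : (n_samples, n_classes) one-hot validation labels
    Returns a weight vector of length K (non-negative, summing to one).
    """
    K = len(probas)
    P = np.stack(probas)                       # (K, n_samples, n_classes)

    def objective(w):
        fused = np.tensordot(w, P, axes=1)     # weighted sum of probability maps
        return np.sum((fused - y_onehot) ** 2)

    w0 = np.full(K, 1.0 / K)
    cons = ({"type": "eq", "fun": lambda w: np.sum(w) - 1.0},)
    bounds = [(0.0, 1.0)] * K
    res = minimize(objective, w0, bounds=bounds, constraints=cons, method="SLSQP")
    return res.x

# Toy example: two classifiers, three samples, two classes.
p1 = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]])
p2 = np.array([[0.7, 0.3], [0.4, 0.6], [0.1, 0.9]])
y  = np.array([[1, 0], [0, 1], [0, 1]])
print(aggregate_classifiers([p1, p2], y))
```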
4

Li, Lingjie Luo Zhi-Quan. "Data fusion and filtering for target tracking and identification /." *McMaster only, 2003.

Find full text
5

Ayodeji, Akiwowo. "Developing integrated data fusion algorithms for a portable cargo screening detection system." Thesis, Loughborough University, 2012. https://dspace.lboro.ac.uk/2134/9901.

Full text
Abstract:
Towards a one-size-fits-all solution to cocaine detection at borders, this thesis proposes a systematic cocaine detection methodology that can use raw data output from a fibre optic sensor to produce a set of unique features whose decisions can be combined to give a reliable output. This multidisciplinary research makes use of real data sourced from a cocaine-analyte-detecting fibre optic sensor developed by one of the collaborators, City University London. The research advocates a two-step approach. In the first step, the raw sensor data are collected and stored; level-one fusion, i.e. analysis, pre-processing and feature extraction, is performed at this stage. In the second step, using experimentally pre-determined thresholds, each feature decides on the detection of cocaine or otherwise, with a corresponding posterior probability. High-level sensor fusion is then performed locally on this output to combine these decisions and their probabilities at time intervals. The output from every time interval is stored in the database and used as prior data for the next time interval. The final output is a decision on the detection of cocaine. The key contributions of this thesis include investigating the use of data fusion techniques as a solution for overcoming challenges in the real-time detection of cocaine using fibre optic sensor technology, together with an innovative user interface design. A generalizable sensor fusion architecture is suggested and implemented using the Bayesian and Dempster-Shafer techniques. The results from the implemented experiments show great promise for this architecture, especially in overcoming sensor limitations. A 5-fold cross-validation system using a 12-13-1 neural network was used to validate the feature selection process. This validation step yielded true positive and false alarm rates of 89.5% and 10.5% respectively, with a correlation coefficient of 0.8. Using the Bayesian technique, it is possible to achieve 100% detection, whilst the Dempster-Shafer technique achieves 95% detection using the same features as inputs to the data fusion system.
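The abstract mentions combining per-feature decisions with Dempster-Shafer theory; the snippet below is a minimal, generic implementation of Dempster's rule of combination for two mass functions over the frame {cocaine, clear}. The feature names and mass values are invented for illustration and are not taken from the thesis.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions whose focal elements are frozensets."""
    combined, conflict = {}, 0.0
    for (a, w1), (b, w2) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2                 # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("Total conflict: sources cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

COCAINE, CLEAR = frozenset({"cocaine"}), frozenset({"clear"})
EITHER = COCAINE | CLEAR                        # ignorance (whole frame)

# Hypothetical evidence produced by two features of the fibre optic response.
m_feature1 = {COCAINE: 0.6, EITHER: 0.4}
m_feature2 = {COCAINE: 0.5, CLEAR: 0.2, EITHER: 0.3}
print(dempster_combine(m_feature1, m_feature2))
```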
6

Bougiouklis, Theodoros C. "Traffic management algorithms in wireless sensor networks." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2006. http://library.nps.navy.mil/uhtbin/hyperion/06Sep%5FBougiouklis.pdf.

Full text
Abstract:
Thesis (M.S. in Electrical Engineering)--Naval Postgraduate School, September 2006.
Thesis Advisor(s): Weilian Su. "September 2006." Includes bibliographical references (p. 79-80). Also available in print.
7

Elbakary, Mohamed Ibrahim. "Novel Pixel-Level and Subpixel-Level Registration Algorithms for Multi-Modal Imagery Data." Diss., Tucson, Arizona : University of Arizona, 2005. http://etd.library.arizona.edu/etd/GetFileServlet?file=file:///data1/pdf/etd/azu%5Fetd%5F1293%5F1%5Fm.pdf&type=application/pdf.

Full text
8

Trailović, Lidija. "Ranking and optimization of target tracking algorithms." Online access (full text) from Digital Dissertation Consortium, 2002. http://libweb.cityu.edu.hk/cgi-bin/er/db/ddcdiss.pl?3074810.

Full text
9

Gnanapandithan, Nithya. "Data detection and fusion in decentralized sensor networks." Thesis, Manhattan, Kan. : Kansas State University, 2005. http://hdl.handle.net/2097/132.

Full text
10

Ho, Peter. "Organization in decentralized sensing." Thesis, University of Oxford, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.306873.

Full text
11

Draper, Stark Christiaan. "Successive structuring of source coding algorithms for data fusion, buffering, and distribution in networks." Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/29239.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2002.
Includes bibliographical references (p. 159-165).
Numerous opportunities to improve network performance present themselves when we make communication networks aware of the characteristics of the data content they are handling. In this thesis, we design such content-aware algorithms that span traditional network layers and are successively structured, focusing on problems of data fusion, buffering, and distribution. The successive structuring of these algorithms provides the flexibility needed to deal with the distributed processing, the heterogeneous sources of information, and the uncertain operating conditions that typify many networks. We investigate the broad interactions between estimation and communication in the context of data fusion in tree-structured sensor networks. We show how to decompose any general tree into serial (pipeline) and parallel (hub-and-spoke) networks. We develop successive coding strategies for these prototype sensor networks based on generalized Wyner-Ziv coding. We extend Wyner-Ziv source coding with side information to "noisy" encoder observations and develop the associated rate-distortion function. We show how to approach the serial and parallel network configurations as cascades of noisy Wyner-Ziv stages. This approach leads to convenient iterative (achievable) distortion-rate expressions for quadratic-Gaussian scenarios. Under a sum-rate constraint, the parallel network is equivalent to what is referred to as the CEO problem. We connect our work to those earlier results. We further develop channel coding strategies for certain classes of relay channels.
We also explore the interactions between source coding and queue management in problems of buffering and distributing distortion-tolerant data. We formulate a general queuing model relevant to numerous communication scenarios, and develop a bound on the performance of any algorithm. We design an adaptive buffer-control algorithm for use in dynamic environments and under finite memory limitations; its performance closely approximates the bound. Our design uses multiresolution source codes that exploit the data's distortion-tolerance in minimizing end-to-end distortion. Compared to traditional approaches, the performance gains of the adaptive algorithm are significant - improving distortion, delay, and overall system robustness.
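For context only (this expression is the classical result the abstract builds on, not a statement taken from the thesis), the quadratic-Gaussian Wyner-Ziv rate-distortion function underlying such successive coding strategies can be written as:

```latex
% Rate needed to describe X within mean-squared distortion D when the decoder
% (but not the encoder) observes side information Y, for jointly Gaussian (X, Y)
% with conditional variance \sigma^2_{X|Y}:
R_{\mathrm{WZ}}(D) \;=\; \tfrac{1}{2}\,\log_2^{+}\!\left(\frac{\sigma^2_{X|Y}}{D}\right),
\qquad \log_2^{+}(x) \triangleq \max\{\log_2 x,\, 0\}.
```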
12

Dong, Shaoqiang Agrawal Prathima. "Node placement, routing and localization algorithms for heterogeneous wireless sensor networks." Auburn, Ala, 2008. http://repo.lib.auburn.edu/EtdRoot/2008/SPRING/Electrical_and_Computer_Engineering/Thesis/Dong_Shaoqiang_40.pdf.

Full text
13

Julier, Simon J. "Process models for the navigation of high speed land vehicles." Thesis, University of Oxford, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.362011.

Full text
14

Auephanwiriyakul, Sansanee. "A study of linguistic pattern recognition and sensor fusion /." free to MU campus, to others for purchase, 2000. http://wwwlib.umi.com/cr/mo/fullcit?p9999270.

Full text
15

Jones, Malachi Gabriel. "Design and implementation of a multi-agent systems laboratory." Thesis, Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/29617.

Full text
Abstract:
Thesis (M. S.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2009.
Committee Chair: Jeff Shamma; Committee Member: Eric Feron; Committee Member: Magnus Egerstedt. Part of the SMARTech Electronic Thesis and Dissertation Collection.
16

Lu, Yang. "Unified Bias Analysis of Subspace-Based DOA Estimation Algorithms." PDXScholar, 1993. https://pdxscholar.library.pdx.edu/open_access_etds/4613.

Full text
Abstract:
This thesis presents a unified bias analysis of subspace-based DOA estimation algorithms in terms of physical parameters such as source separation, signal coherence, and the number of sensors and snapshots. The analysis reveals the direct relationship between the performance of the DOA algorithms and the signal measurement conditions, and provides insights into the different algorithms. Building upon previous first-order subspace perturbations, second-order subspace perturbations are developed, which provide the basis for the bias analysis and its unification. Simulations verifying the theoretical bias analysis are presented.
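The abstract does not list the individual algorithms, but MUSIC is the prototypical subspace-based DOA estimator; the sketch below is a generic textbook implementation (not code from the thesis) that computes a MUSIC pseudospectrum for a uniform linear array, with all array and signal parameters invented for the example.

```python
import numpy as np

def music_spectrum(X, n_sources, d=0.5, angles=np.linspace(-90, 90, 361)):
    """MUSIC pseudospectrum for a uniform linear array.

    X         : (n_antennas, n_snapshots) complex baseband snapshots
    n_sources : assumed number of impinging signals
    d         : element spacing in wavelengths
    """
    M, N = X.shape
    R = X @ X.conj().T / N                          # sample covariance matrix
    eigval, eigvec = np.linalg.eigh(R)              # eigenvalues in ascending order
    En = eigvec[:, : M - n_sources]                 # noise subspace
    spectrum = []
    for theta in np.deg2rad(angles):
        a = np.exp(-2j * np.pi * d * np.arange(M) * np.sin(theta))  # steering vector
        spectrum.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return angles, np.array(spectrum)

# Hypothetical scene: two sources at -20 and 30 degrees, 8-element array, 200 snapshots.
rng = np.random.default_rng(0)
M, N, doas = 8, 200, np.deg2rad([-20, 30])
A = np.exp(-2j * np.pi * 0.5 * np.outer(np.arange(M), np.sin(doas)))
S = (rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
angles, P = music_spectrum(A @ S + noise, n_sources=2)
print(angles[np.argmax(P)])   # strongest peak; peaks appear near -20 and 30 degrees
```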
17

Zarrouati-Vissière, Nadège. "La réalité augmentée : fusion de vision et navigation." Phd thesis, Ecole Nationale Supérieure des Mines de Paris, 2013. http://pastel.archives-ouvertes.fr/pastel-00961962.

Full text
Abstract:
This thesis studies algorithms for visually augmented reality applications. Such applications have several requirements, which are addressed while taking into account the indistinguishability of depth and linear motion when monocular systems are used. To insert virtual objects realistically and in real time into images acquired in an arbitrary, unknown environment, it is necessary not only to have a 3D perception of that environment at every instant, but also to localize the camera precisely within it. For the first requirement the camera dynamics are assumed to be known; for the second the depth is assumed to be given as an input: both assumptions are achievable in practice. Both problems are posed in the context of a spherical camera model, which yields rotation-invariant motion equations for the light intensity as well as for the depth. The theoretical observability of these problems is studied using tools from differential geometry on the Riemannian unit sphere. A practical implementation is presented: the experimental results show that it is possible to localize a camera in an unknown environment while accurately mapping that environment.
18

Kallumadi, Surya Teja. "Data aggregation in sensor networks." Thesis, Manhattan, Kan. : Kansas State University, 2010. http://hdl.handle.net/2097/2387.

Full text
19

Agarwalla, Bikash Kumar. "Resource management for data streaming applications." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/34836.

Full text
Abstract:
This dissertation investigates novel middleware mechanisms for building streaming applications. Developing streaming applications is a challenging task because (i) they are continuous in nature; (ii) they require fusion of data coming from multiple sources to derive higher level information; (iii) they require efficient transport of data from/to distributed sources and sinks; (iv) they need access to heterogeneous resources spanning sensor networks and high performance computing; and (v) they are time critical in nature. My thesis is that an intuitive programming abstraction will make it easier to build dynamic, distributed, and ubiquitous data streaming applications. Moreover, such an abstraction will enable an efficient allocation of shared and heterogeneous computational resources thereby making it easier for domain experts to build these applications. In support of the thesis, I present a novel programming abstraction, called DFuse, that makes it easier to develop these applications. A domain expert only needs to specify the input and output connections to fusion channels, and the fusion functions. The subsystems developed in this dissertation take care of instantiating the application, allocating resources for the application (via the scheduling heuristic developed in this dissertation) and dynamically managing the resources (via the dynamic scheduling algorithm presented in this dissertation). Through extensive performance evaluation, I demonstrate that the resources are allocated efficiently to optimize the throughput and latency constraints of an application.
20

Borkar, Milind. "A distributed Monte Carlo method for initializing state vector distributions in heterogeneous smart sensor networks." Diss., Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/22680.

Full text
Abstract:
The objective of this research is to demonstrate how an underlying system's state vector distribution can be determined in a distributed heterogeneous sensor network with reduced subspace observability at the individual nodes. We show how the network, as a whole, is capable of observing the target state vector even if the individual nodes are not capable of observing it locally. The initialization algorithm presented in this work can generate the initial state vector distribution for networks with a variety of sensor types as long as the measurements at the individual nodes are known functions of the target state vector. Initialization is accomplished through a novel distributed implementation of the particle filter that involves serial particle proposal and weighting strategies, which can be accomplished without sharing raw data between individual nodes in the network. The algorithm is capable of handling missed detections and clutter as well as compensating for delays introduced by processing, communication and finite signal propagation velocities. If multiple events of interest occur, their individual states can be initialized simultaneously without requiring explicit data association across nodes. The resulting distributions can be used to initialize a variety of distributed joint tracking algorithms. In such applications, the initialization algorithm can initialize additional target tracks as targets come and go during the operation of the system with multiple targets under track.
21

Arezki, Yassir. "Algorithmes de références 'robustes' pour la métrologie dimensionnelle des surfaces asphériques et des surfaces complexes en optique." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLN058.

Full text
Abstract:
Aspheres and freeform surfaces are a very challenging class of optical elements. Their application has grown considerably in the last few years in imaging systems, astronomy, lithography, etc. The metrology of these parts is very challenging because of the high dynamic range of the acquired information and the traceability to the SI unit meter. Metrology should make use of the infinity norm (Minimum Zone, or Min-Max, method) to calculate the envelope enclosing the points in the dataset by minimizing the difference between the maximum and minimum deviations between the surface and the dataset. This method grows in complexity as the number of points in the dataset increases, and the algorithms involved are non-deterministic. Although the method works for simple geometries (lines, planes, circles, cylinders, cones and spheres), it remains a major challenge when used on complex geometries (asphere and freeform surfaces). The main objective of the thesis is therefore to develop Min-Max fitting algorithms, as well as least-squares fitting algorithms, for both aspherical and freeform surfaces, in order to provide robust reference algorithms for the large community involved in this domain. The reference algorithms to be developed should be evaluated and validated on several reference data sets (softgauges) generated using reference data generators.
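As a toy illustration of the Min-Max (minimum zone) criterion described above, the sketch below fits a reference plane to 3D points by minimizing the spread between the largest and smallest residuals. It is a generic, hypothetical example using a general-purpose optimizer, not one of the thesis reference algorithms, and a plane rather than an asphere or freeform surface.

```python
import numpy as np
from scipy.optimize import minimize

def minimum_zone_plane(points):
    """Fit z = a*x + b*y + c by minimizing (max residual - min residual).

    points : (n, 3) array of measured surface points
    Returns the plane parameters and the minimum zone width.
    """
    x, y, z = points.T

    def zone_width(params):
        a, b, c = params
        r = z - (a * x + b * y + c)           # signed residuals
        return r.max() - r.min()              # width of the enclosing zone

    # Least-squares solution as a starting point for the non-smooth objective.
    A = np.column_stack([x, y, np.ones_like(x)])
    p0, *_ = np.linalg.lstsq(A, z, rcond=None)
    res = minimize(zone_width, p0, method="Nelder-Mead")
    return res.x, res.fun

# Hypothetical data: a noisy tilted plane.
rng = np.random.default_rng(1)
pts = rng.uniform(-1, 1, size=(200, 2))
z = 0.3 * pts[:, 0] - 0.1 * pts[:, 1] + 2.0 + rng.normal(0, 0.01, 200)
params, width = minimum_zone_plane(np.column_stack([pts, z]))
print(params, width)
```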
22

Malik, Zohaib Mansoor. "Design and implementation of temporal filtering and other data fusion algorithms to enhance the accuracy of a real time radio location tracking system." Thesis, Högskolan i Gävle, Avdelningen för elektronik, matematik och naturvetenskap, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-13261.

Full text
Abstract:
A general automotive navigation system is a satellite navigation system designed for use in automobiles. It typically uses GPS to acquire position data to locate the user on a road in the unit's map database. However, recent improvements in the performance of small and lightweight micro-machined electromechanical systems (MEMS) inertial sensors have made it possible to apply inertial techniques to such problems, which has resulted in increased interest in the topic of inertial navigation. In a location tracking system, sensors are used either individually or in combination, as in data fusion. They nevertheless remain noisy, so there is a need to gather as much data as possible and then build an efficient system that can remove the noise from the data and provide a better estimate. The task of this thesis work was to take data from two sensors and use an estimation technique to provide an accurate estimate of the true location. The proposed sensors were an accelerometer and a GPS device. This thesis, however, deals with the accelerometer sensor and with the Kalman filter as the estimation scheme. The report presents an insight into both of the proposed sensors and into different estimation techniques. Within the scope of the work, the task was performed using the Matlab simulation software, and the Kalman filter's efficiency was examined under different noise levels.
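As a minimal illustration of the estimation scheme discussed above, the sketch below runs a one-dimensional Kalman filter in which an accelerometer drives the prediction step and a GPS position fix drives the update step. These are generic textbook equations with invented noise levels, not the thesis implementation (which was done in Matlab).

```python
import numpy as np

def kalman_1d(accel, gps, dt=0.1, q=0.5, r=4.0):
    """Track [position, velocity] from accelerometer input and GPS fixes.

    accel : acceleration samples used as control input (m/s^2)
    gps   : noisy position measurements (m), same length as accel
    q, r  : assumed process and measurement noise variances
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])          # constant-velocity model
    B = np.array([0.5 * dt**2, dt])                # how acceleration enters the state
    H = np.array([[1.0, 0.0]])                     # GPS observes position only
    Q = q * np.outer(B, B)                         # process noise from accel noise
    R = np.array([[r]])

    x, P = np.zeros(2), np.eye(2) * 10.0
    estimates = []
    for a, z in zip(accel, gps):
        # Predict with the accelerometer as control input.
        x = F @ x + B * a
        P = F @ P @ F.T + Q
        # Update with the GPS position fix.
        y = z - H @ x                              # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x.copy())
    return np.array(estimates)

# Hypothetical run: constant 0.2 m/s^2 acceleration with 2 m GPS noise.
rng = np.random.default_rng(2)
t = np.arange(0, 20, 0.1)
true_pos = 0.5 * 0.2 * t**2
est = kalman_1d(np.full_like(t, 0.2), true_pos + rng.normal(0, 2.0, t.size))
print(est[-1])        # close to the true [position, velocity] of about [39.6, 4.0]
```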
23

Narasimhan, Ramakrishnan Akshra. "Design and Evaluation of Perception System Algorithms for Semi-Autonomous Vehicles." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1595256912692618.

Full text
24

Elkin, Colin P. "Development of Adaptive Computational Algorithms for Manned and Unmanned Flight Safety." University of Toledo / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1544640516618623.

Full text
25

Lian, Chunfeng. "Information fusion and decision-making using belief functions : application to therapeutic monitoring of cancer." Thesis, Compiègne, 2017. http://www.theses.fr/2017COMP2333/document.

Full text
Abstract:
Radiation therapy is one of the principal options used in the treatment of malignant tumors. To enhance its effectiveness, two critical issues should be carefully dealt with: reliably predicting therapy outcomes, so as to adapt the ongoing treatment planning for individual patients, and accurately segmenting tumor volumes, so as to maximize radiation delivery in tumor tissue while minimizing side effects in adjacent organs at risk. Positron emission tomography with the radioactive tracer fluorine-18 fluorodeoxyglucose (FDG-PET) can noninvasively provide significant information on the functional activity of tumor cells. The goal of this thesis is twofold: 1) to propose a reliable therapy outcome prediction system using primarily features extracted from FDG-PET images; 2) to propose automatic and accurate algorithms for tumor segmentation in PET and PET-CT images. The theory of belief functions is adopted in our study to model and reason with the uncertain and imprecise knowledge quantified from noisy and blurred PET images. In the framework of belief functions, a sparse feature selection method and a low-rank metric learning method are proposed to improve the classification accuracy of the evidential K-nearest-neighbor classifier learnt from high-dimensional data that contain unreliable features. Based on these two theoretical studies, a robust prediction system is then proposed, in which the small-sized and imbalanced nature of clinical data is effectively tackled. To automatically delineate tumors in PET images, an unsupervised 3-D segmentation based on evidential clustering and spatial information is proposed. This mono-modality segmentation method is then extended to co-segment tumors in PET-CT images, considering that these two distinct modalities contain complementary information that further improves the accuracy. All the proposed methods have been evaluated on clinical data, giving better results than the state-of-the-art methods.
26

Lassoued, Khaoula. "Localisation de robots mobiles en coopération mutuelle par observation d'état distribuée." Thesis, Compiègne, 2016. http://www.theses.fr/2016COMP2289/document.

Full text
Abstract:
In this work, we study cooperative localization issues for mobile robotic systems that interact with each other without using relative measurements (e.g. bearings or relative distances). The localization technologies considered are based on beacons or satellites that provide radio-navigation measurements. Such systems often lead to offsets between the real and observed positions. These systematic offsets (i.e. biases) are often due to inaccurate beacon positions, or to differences between the real electromagnetic wave propagation and the observation models. The impact of these biases on robot localization should not be neglected. Cooperation and data exchange (estimates of biases, estimates of positions and proprioceptive measurements) significantly reduce systematic errors. However, cooperative localization based on sharing estimates is subject to data incest problems (i.e. the reuse of identical information in the fusion process) that often lead to over-convergence. When position information is used in a safety-critical context (e.g. close navigation of autonomous robots), one should check the consistency of the localization estimates. In this context, we aim at characterizing reliable confidence domains that contain the robot positions with high reliability. Set-membership methods are therefore considered as an efficient solution: this kind of approach merges the information adequately even when it is reused several times, provides reliable domains, and handles non-linear models without any linearization. The modeling of a cooperative system of nr robots with biased beacon measurements is first presented. We then perform an observability study; two cases regarding the localization technology are considered, and observability conditions are identified and demonstrated. We then propose a set-membership method for cooperative localization, in which cooperation is performed by sharing estimated positions, estimated biases and proprioceptive measurements. Sharing bias estimates reduces the estimation error and the uncertainty of the robot positions. The feasibility of the algorithm is validated through simulation, with several robots and beacon distance measurements as observations; cooperation provides better performance than a non-cooperative method. Afterwards, the cooperative set-membership algorithm is tested on real data with two experimental vehicles. Finally, we compare the performance of the interval method with a sequential Bayesian approach based on covariance intersection. Experimental results indicate that the interval approach provides more accurate vehicle positions with smaller confidence domains that remain reliable. The comparison is performed in terms of accuracy and uncertainty.
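One of the fusion schemes compared above is covariance intersection, which fuses two estimates whose cross-correlation is unknown. The sketch below is a generic implementation for illustration only, using a simple grid search for the weight rather than the thesis's exact formulation; the example estimates are invented.

```python
import numpy as np

def covariance_intersection(xa, Pa, xb, Pb, n_grid=101):
    """Fuse two estimates with unknown cross-correlation.

    xa, Pa : first estimate (mean vector, covariance matrix)
    xb, Pb : second estimate
    The weight omega is chosen by a grid search minimizing the fused trace.
    """
    Pa_inv, Pb_inv = np.linalg.inv(Pa), np.linalg.inv(Pb)
    best = None
    for w in np.linspace(0.0, 1.0, n_grid):
        P_inv = w * Pa_inv + (1.0 - w) * Pb_inv
        P = np.linalg.inv(P_inv)
        if best is None or np.trace(P) < best[0]:
            x = P @ (w * Pa_inv @ xa + (1.0 - w) * Pb_inv @ xb)
            best = (np.trace(P), x, P)
    return best[1], best[2]

# Hypothetical 2D position estimates from two vehicles.
xa, Pa = np.array([10.0, 5.0]), np.diag([4.0, 1.0])
xb, Pb = np.array([10.5, 4.8]), np.diag([1.0, 4.0])
x, P = covariance_intersection(xa, Pa, xb, Pb)
print(x, np.trace(P))
```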
27

Ribas, Afonso Degmar. "Classificação distribuída de anuros usando rede de sensores sem fio." Universidade Federal do Amazonas, 2013. http://tede.ufam.edu.br/handle/tede/2922.

Full text
Abstract:
Wireless Sensor Networks (WSNs) can be used in environmental conservation applications and studies thanks to their wireless communication, sensing, and monitoring capabilities. In the context of ecology, amphibians are used as bioindicators of ecosystemic changes in a region and can give early indications of environmental problems. Biologists therefore monitor the anuran (frog and toad) population in order to establish environmental conservation strategies. Anurans were chosen because the sounds they emit allow species classification using microphones and signal processing. In this work we propose and evaluate distributed algorithms for classifying anurans in their habitat based on their calls (vocalizations) using WSNs. This approach is attractive because it is not intrusive and allows remote monitoring. Our solution builds clusters of nodes whose collected acoustic measurements are correlated. The measurements of the nodes in the same group are combined to generate local classification decisions, and these decisions are then combined to generate a global decision. We use the k-means algorithm, which groups instances by similarity, to cluster nodes with correlated measurements. Experiments show that, in comparison with other algorithms from the literature, the error rate of our solution was up to 26 pp (percentage points) lower.
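The clustering step above relies on plain k-means; the sketch below shows one simple way this could look, with a hypothetical feature construction (each node described by the correlation of its acoustic energy time series with every other node) grouped using scikit-learn's KMeans. The data and group structure are invented for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical data: acoustic energy time series from 12 sensor nodes.
rng = np.random.default_rng(3)
base_a = rng.normal(size=200)        # calls heard by one group of nodes
base_b = rng.normal(size=200)        # calls heard by another group
energy = np.vstack(
    [base_a + 0.2 * rng.normal(size=200) for _ in range(6)]
    + [base_b + 0.2 * rng.normal(size=200) for _ in range(6)]
)

# Describe each node by its correlation with every other node, then cluster.
features = np.corrcoef(energy)                      # (12, 12) correlation matrix
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(groups)                                       # first six nodes vs. last six
```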
28

Jing, Hongyuan. "Landmine detection algorithm design based on data fusion technology." Thesis, University of Leicester, 2018. http://hdl.handle.net/2381/43099.

Full text
Abstract:
This research has focused on close-in landmine detection, which aims to identify landmines in a particular landmine area. Close-range landmine detection requires both sub-surface sensors, such as metal detectors and ground penetrating radar (GPR), and surface sensors, such as optical cameras. A new multi-focus image fusion algorithm is proposed which outperforms the existing intensity-hue-saturation (IHS) and principal components analysis (PCA) algorithms in both visual and fusion-parameter analysis. In addition, the proposed algorithm saves 30.9% of the running time compared with the IHS algorithm, which puts it on the same level as the existing PCA algorithm. A novel single-GPR-sensor landmine detection algorithm, an entropy-based region selection algorithm, is proposed; it uses the entropy value of a region as the feature and continuous layers instead of a hard threshold. Two A-scan-based statistical algorithms and a detection algorithm based on the GPR signal oscillation feature are also proposed. The results show that the proposed entropy-based algorithm outperforms the existing region selection algorithm in both detection accuracy and running time. The proposed statistical algorithms and the GPR feature-based algorithm outperform the edge histogram descriptor and edge energy algorithms in detection accuracy, running time and memory usage. In addition, the GPR feature-based algorithm reduces the false alarm rate (FAR) by 22% for all targets at a 90% probability of detection. With regard to data fusion system design, this research overcomes the limitations of the existing Bayesian fusion approach. A new Kalman-Bayes based fusion system is developed which reduces the system uncertainty and improves the fusion process. The experimental results show that the proposed Kalman-Bayes fusion system and the enhanced fuzzy fusion system reach a 7.8% FAR at a 91.1% detection rate and a 6.30% FAR at a 92.4% detection rate, respectively, outperforming the existing Bayes and fuzzy fusion systems in terms of detection ability.
29

Shi, Hongxiang. "Hierarchical Statistical Models for Large Spatial Data in Uncertainty Quantification and Data Fusion." University of Cincinnati / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1504802515691938.

Full text
30

Vestin, Albin, and Gustav Strandberg. "Evaluation of Target Tracking Using Multiple Sensors and Non-Causal Algorithms." Thesis, Linköpings universitet, Reglerteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-160020.

Full text
Abstract:
Today, the main research field for the automotive industry is to find solutions for active safety. In order to perceive the surrounding environment, tracking nearby traffic objects plays an important role. Validation of the tracking performance is often done in staged traffic scenarios, where additional sensors, mounted on the vehicles, are used to obtain their true positions and velocities. The difficulty of evaluating the tracking performance complicates its development. An alternative approach, studied in this thesis, is to record sequences and use non-causal algorithms, such as smoothing, instead of filtering to estimate the true target states. With this method, validation data for online, causal, target tracking algorithms can be obtained for all traffic scenarios without the need for extra sensors. We investigate how non-causal algorithms affect the target tracking performance when using multiple sensors and dynamic models of different complexity. This is done to evaluate real-time methods against estimates obtained from non-causal filtering. Two different measurement units, a monocular camera and a LIDAR sensor, and two dynamic models are evaluated and compared using both causal and non-causal methods. The system is tested in two single-object scenarios where ground truth is available and in three multi-object scenarios without ground truth. Results from the two single-object scenarios show that tracking using only a monocular camera performs poorly, since it is unable to measure the distance to objects; here, a complementary LIDAR sensor improves the tracking performance significantly. The dynamic models are shown to have a small impact on the tracking performance, while the non-causal application gives a distinct improvement when tracking objects at large distances. Since the sequence can be reversed, the non-causal estimates are propagated from more certain states when the target is closer to the ego vehicle. For multiple object tracking, we find that correct associations between measurements and tracks are crucial for improving the tracking performance with non-causal algorithms.
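The non-causal estimation referred to above is typically fixed-interval smoothing; the sketch below adds a Rauch-Tung-Striebel backward pass on top of a stored forward Kalman filter for a one-dimensional constant-velocity target. The model and noise settings are invented and generic, not the thesis setup.

```python
import numpy as np

def rts_smoother(zs, dt=0.1, q=0.1, r=1.0):
    """Forward Kalman filter + backward RTS pass on position measurements zs."""
    F = np.array([[1.0, dt], [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    R = np.array([[r]])

    x, P = np.zeros(2), np.eye(2) * 10.0
    xs_f, Ps_f, xs_p, Ps_p = [], [], [], []
    for z in zs:                                    # forward (causal) pass
        x_pred, P_pred = F @ x, F @ P @ F.T + Q
        xs_p.append(x_pred); Ps_p.append(P_pred)
        K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
        x = x_pred + K @ (z - H @ x_pred)
        P = (np.eye(2) - K @ H) @ P_pred
        xs_f.append(x); Ps_f.append(P)

    xs_s, Ps_s = [xs_f[-1]], [Ps_f[-1]]             # backward (non-causal) pass
    for k in range(len(zs) - 2, -1, -1):
        C = Ps_f[k] @ F.T @ np.linalg.inv(Ps_p[k + 1])
        xs_s.insert(0, xs_f[k] + C @ (xs_s[0] - xs_p[k + 1]))
        Ps_s.insert(0, Ps_f[k] + C @ (Ps_s[0] - Ps_p[k + 1]) @ C.T)
    return np.array(xs_f), np.array(xs_s)

# Hypothetical target moving at 2 m/s, observed through noisy position data.
rng = np.random.default_rng(4)
t = np.arange(0, 10, 0.1)
zs = 2.0 * t + rng.normal(0, 1.0, t.size)
filtered, smoothed = rts_smoother(zs)
print(filtered[50], smoothed[50])      # the smoothed estimate is the less noisy one
```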
31

Héry, Elwan. "Localisation coopérative de véhicules autonomes communicants." Thesis, Compiègne, 2019. http://www.theses.fr/2019COMP2516.

Full text
Abstract:
To be able to navigate autonomously, a vehicle must be accurately localized: relative to the lane borders to stay within the lane, and relative to other vehicles and pedestrians to avoid causing accidents. This PhD thesis deals with the benefit of cooperation for improving the localization of cooperative vehicles that exchange information. Autonomous navigation on the road is often based on coordinates provided in a Cartesian frame. In order to better represent the pose of a vehicle with respect to the lane in which it travels, we study curvilinear coordinates with respect to a path stored in a map. These coordinates generalize the curvilinear abscissa by adding a signed lateral deviation from the center of the lane and an orientation relative to the center of the lane that takes the direction of travel into account. These coordinates are studied with different track models and with different projections for map-matching. A first cooperative localization approach is based on these coordinates. The lateral deviation and the orientation relative to the lane can be known precisely from a perception of the lane borders, but for autonomous driving among other vehicles it is also important to maintain good longitudinal accuracy. A one-dimensional data fusion method shows the benefit of cooperative localization in this simplified case, where the lateral deviation, the curvilinear orientation and the relative positioning between two vehicles are accurately known. This case study shows that, in some cases, lateral accuracy can be propagated to other vehicles to improve their longitudinal accuracy. The error correlation issues are taken into account with a covariance intersection filter. An ICP (Iterative Closest Point) minimization algorithm is then used to determine the relative pose between the vehicles from LiDAR points and a 2D polygonal model representing the shape of the vehicle; several correspondences of the LiDAR points with the model and different minimization approaches are compared. The propagation of absolute vehicle poses using relative poses with their uncertainties is done through non-linear equations that can have a strong impact on consistency. The different dynamic elements surrounding the ego-vehicle are estimated in a Local Dynamic Map (LDM) to enhance the static high-definition map describing the center of the lane and its borders. In our case, the agents are only communicating vehicles. The LDM is composed of the state of each vehicle. The states are merged using an asynchronous algorithm, fusing the available data at variable times. The algorithm is decentralized, each vehicle computing its own LDM and sharing it. As the position errors of the GNSS receivers are biased, a lane-marking detection is introduced to obtain the lateral deviation from the center of the lane in order to estimate these biases. LiDAR observations with the ICP method also enrich the fusion with constraints between the vehicles. Experimental results show that, with this fusion, the vehicles are more accurately localized with respect to each other while maintaining consistent poses.
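The curvilinear coordinates described above (curvilinear abscissa, signed lateral deviation, relative heading with respect to a stored path) can be illustrated with a small projection routine. The sketch below is a generic polyline-based version with invented path and pose values; the thesis itself also considers different track models and projections, which are not reproduced here.

```python
import numpy as np

def to_curvilinear(path, pose):
    """Convert a Cartesian pose (x, y, heading) to (s, lateral deviation, relative heading).

    path : (n, 2) array of points along the lane center, ordered in the driving direction
    pose : (x, y, heading) with the heading in radians
    """
    p = np.asarray(pose[:2], dtype=float)
    best, s_start = None, 0.0
    for a, b in zip(path[:-1], path[1:]):
        seg = b - a
        seg_len = float(np.linalg.norm(seg))
        t = float(np.clip(np.dot(p - a, seg) / seg_len**2, 0.0, 1.0))  # projection ratio
        foot = a + t * seg                     # closest point on this segment
        d = p - foot
        dist = float(np.linalg.norm(d))
        if best is None or dist < best[0]:
            s = s_start + t * seg_len          # curvilinear abscissa along the path
            lateral = (seg[0] * d[1] - seg[1] * d[0]) / seg_len   # signed, left positive
            theta = (pose[2] - np.arctan2(seg[1], seg[0]) + np.pi) % (2 * np.pi) - np.pi
            best = (dist, s, lateral, theta)
        s_start += seg_len
    return best[1:]

# Hypothetical path and a vehicle slightly to the left of it, heading 0.1 rad.
path = np.array([[0.0, 0.0], [10.0, 0.0], [20.0, 2.0], [30.0, 6.0]])
print(to_curvilinear(path, (12.0, 1.0, 0.1)))
```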
32

Ruthenberg, Thomas M. "Data fusion algorithm for the Vessel Traffic Services system : a fuzzy associative system approach /." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1995. http://handle.dtic.mil/100.2/ADA300458.

Full text
33

De Gregorio, Ludovica. "Development of new data fusion techniques for improving snow parameters estimation." Doctoral thesis, Università degli studi di Trento, 2019. http://hdl.handle.net/11572/245392.

Full text
Abstract:
Water stored in snow is a critical contribution to the world's available freshwater supply and is fundamental to the sustenance of natural ecosystems, agriculture and human societies. The importance of snow for the natural environment and for many socio-economic sectors in several mid- to high-latitude mountain regions around the world leads scientists to continuously develop new approaches to monitor and study snow and its properties. The need for new monitoring methods arises from the limitations of in situ measurements, which are pointwise, only possible in accessible and safe locations, and do not allow continuous monitoring of the evolution of the snowpack and its characteristics. These limitations have been overcome by the increasingly used methods of remote monitoring with space-borne sensors, which allow the wide spatial and temporal variability of the snowpack to be monitored. Snow models, based on modeling the physical processes that occur in the snowpack, are an alternative to remote sensing for studying snow characteristics. However, from the literature it is evident that both remote sensing and snow models suffer from limitations as well as having significant strengths that are worth exploiting jointly to achieve improved snow products. Accordingly, the main objective of this thesis is the development of novel methods for the estimation of snow parameters by exploiting the different properties of remote sensing and snow model data. In particular, the following specific novel contributions are presented in this thesis:
i. A novel data fusion technique for improving snow cover mapping. The proposed method exploits the snow cover maps derived from the AMUNDSEN snow model and the MODIS product, together with their quality layers, in a decision-level fusion approach by means of a machine learning technique, namely the Support Vector Machine (SVM).
ii. A new approach for improving the snow water equivalent (SWE) product obtained from AMUNDSEN model simulations. The proposed method exploits auxiliary information from optical remote sensing and from topographic characteristics of the study area in an approach that differs from classical data assimilation and is based on estimating the AMUNDSEN error with respect to ground data through a k-NN algorithm. The new product has been validated with ground measurement data and by comparison with MODIS snow cover maps. In a second step, the contribution of information derived from X-band SAR imagery acquired by the COSMO-SkyMed constellation has been evaluated, exploiting simulations from a theoretical model to enlarge the dataset.
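Contribution (i) above performs decision-level fusion with an SVM; the sketch below illustrates the general idea on invented data (a model snow flag, a satellite snow flag and a quality score as input features, with ground-truth snow presence as the label), using scikit-learn rather than the author's actual pipeline.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Hypothetical per-pixel samples: [model_snow_flag, satellite_snow_flag, quality_score]
rng = np.random.default_rng(5)
n = 1000
model_flag = rng.integers(0, 2, n)
sat_flag = rng.integers(0, 2, n)
quality = rng.uniform(0, 1, n)
# Invented ground truth: trust the satellite when its quality is high, else the model.
truth = np.where(quality > 0.5, sat_flag, model_flag)

X = np.column_stack([model_flag, sat_flag, quality])
X_train, X_test, y_train, y_test = train_test_split(X, truth, random_state=0)

# Decision-level fusion: the SVM learns when to believe which source.
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_train, y_train)
print("fused snow-cover accuracy:", clf.score(X_test, y_test))
```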
APA, Harvard, Vancouver, ISO, and other styles
34

Midwood, Sean A. "A computationally efficient and cost effective multisensor data fusion algorithm for the United States Coast Guard Vessel Traffic Services system." Thesis, Monterey, Calif. : Naval Postgraduate School, 1997. http://handle.dtic.mil/100.2/ADA333476.

Full text
Abstract:
Thesis (M.S. in Electrical Engineering)--Naval Postgraduate School, September 1997.
Thesis Advisor(s): Murali Tummala. "September 1997." Includes bibliographical references (p. 61-62). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
35

Haj, Chhadé Hiba. "Data fusion and collaborative state estimation in wireless sensor networks." Thesis, Compiègne, 2015. http://www.theses.fr/2015COMP2207/document.

Full text
Abstract:
The aim of the thesis is to develop fusion algorithms for data collected from a wireless sensor network in order to locate multiple sources emitting a chemical or biological agent into the air. The sensors detect the concentration of the emitted substance, transported by advection and diffusion, at their positions and communicate this information to a treatment center. The information collected in a collaborative manner is used first to locate the randomly deployed sensors and second to locate the sources. Applications include, among others, environmental monitoring and surveillance of sensitive sites, as well as security applications in the case of an accidental or intentional release of a toxic agent. The application considered in the thesis, however, is landmine detection and localization: the landmines are treated as sources emitting explosive chemicals. The thesis includes a theoretical contribution in which we extend the Belief Propagation algorithm, a well-known data fusion algorithm widely used for collaborative state estimation in sensor networks, to the bounded-error framework. The novel algorithm is tested on the self-localization problem in static sensor networks as well as on the application of tracking a mobile object using a network of range sensors. Other contributions include the use of a Bayesian probabilistic approach, along with data analysis techniques, to locate an unknown number of vapor-emitting sources.
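The bounded-error (set-membership) flavour of the estimation can be illustrated with a generic range-based localization sketch; this grid-based set inversion is only a stand-in for the interval Belief Propagation developed in the thesis, and the anchors, ranges and error bound are invented (numpy only):

```python
import numpy as np

# Bounded-error localization: keep every candidate position whose distances to
# the anchors fall inside the measured ranges' error bounds.
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])   # known sensor positions
ranges = np.array([7.1, 7.2, 7.0])                           # measured distances (m)
err = 0.3                                                    # bounded measurement error

xs, ys = np.meshgrid(np.linspace(0, 10, 201), np.linspace(0, 10, 201))
feasible = np.ones_like(xs, dtype=bool)
for (ax, ay), r in zip(anchors, ranges):
    d = np.hypot(xs - ax, ys - ay)
    feasible &= (d >= r - err) & (d <= r + err)

# The solution set is the cloud of feasible cells; its bounding box is a crude estimate.
pts = np.column_stack([xs[feasible], ys[feasible]])
box = (pts.min(axis=0), pts.max(axis=0)) if len(pts) else None
```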
APA, Harvard, Vancouver, ISO, and other styles
36

Vincke, Bastien. "Architectures pour des systèmes de localisation et de cartographie simultanées." Phd thesis, Université Paris Sud - Paris XI, 2012. http://tel.archives-ouvertes.fr/tel-00770323.

Full text
Abstract:
Mobile robotics is a rapidly growing field. One research area aims to enable a robot to map its environment while localizing itself in space. Commonly used SLAM (Simultaneous Localization And Mapping) techniques generally remain costly in terms of computing power, and the current trend toward miniaturization imposes restrictions on embedded resources. These observations led us to integrate SLAM algorithms on suitable dedicated embedded architectures. The first part of this work consisted in defining an architecture allowing a mobile robot to localize itself, under constraints of real-time operation, small dimensions and low power consumption. The optimized implementation of an algorithm (EKF-SLAM), making the best use of the architectural specificities of the system (processor capabilities, multi-core implementation, vector computation or parallelization on heterogeneous architectures), demonstrated the possibility of designing embedded systems for SLAM applications in an algorithm-architecture co-design context. A second approach was explored with the objective of defining a system based on a reconfigurable (FPGA-based) architecture, allowing the design of a highly parallel architecture dedicated to SLAM; the resulting architecture was evaluated using a HIL (Hardware in the Loop) methodology. The main SLAM algorithms are built on probability theory and provide no guarantee on the localization results, so a SLAM algorithm based on set-membership theory was defined that guarantees the results obtained, and several algorithmic improvements were then proposed. A comparison with probabilistic algorithms highlighted the robustness of the set-membership approach. This thesis work puts forward two main contributions. The first is to affirm the importance of algorithm-architecture co-design for solving the SLAM problem. The second is the definition of a set-membership method that guarantees the localization and mapping results.
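As a reminder of the computational core that motivates the algorithm-architecture co-design, here is a generic EKF measurement update in Python (numpy; the toy measurement model is illustrative and far smaller than a real EKF-SLAM state):

```python
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """Generic EKF measurement update: the matrix-heavy step whose cost grows
    with the state size and drives embedded hardware/software trade-offs."""
    y = z - h(x)                        # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Toy example: a 2D robot position observed directly with a noisy fix.
x = np.array([1.0, 2.0]); P = np.eye(2) * 0.5
z = np.array([1.2, 1.9]); R = np.eye(2) * 0.1
H = np.eye(2); h = lambda s: s
x, P = ekf_update(x, P, z, h, H, R)
```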
APA, Harvard, Vancouver, ISO, and other styles
37

Bader, Kaci. "Tolérance aux fautes pour la perception multi-capteurs : application à la localisation d'un véhicule intelligent." Thesis, Compiègne, 2014. http://www.theses.fr/2014COMP2161/document.

Full text
Abstract:
Perception is a fundamental input for robotic systems, particularly for localization, navigation and interaction with the environment. However, the data perceived by these systems are often complex and subject to significant imprecision. To overcome these problems, the multi-sensor approach uses either multiple sensors of the same type, to exploit their redundancy, or sensors of different types, to exploit their complementarity and reduce sensor inaccuracies and uncertainties. Validating this data fusion approach raises two major problems. First, the behavior of fusion algorithms is difficult to predict, which makes them hard to verify with formal approaches. In addition, the open environment of robotic systems generates a very large execution context, which makes testing difficult and costly. The purpose of this work is to propose an alternative to validation by developing fault tolerance mechanisms: since it is difficult to eliminate all faults from the perception system, we seek to limit their impact on its operation. We studied the fault tolerance intrinsically provided by data fusion by formally analyzing data fusion algorithms, and we proposed detection and recovery mechanisms suited to multi-sensor perception. We then implemented the proposed mechanisms for a vehicle localization application using Kalman filter data fusion. Finally, we evaluated the proposed mechanisms using real data replay and fault injection, and demonstrated their effectiveness against hardware and software faults.
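One classic detection mechanism in Kalman-filter-based fusion, in the spirit of the mechanisms discussed here though not necessarily the exact ones of the thesis, is innovation (chi-square) gating; a minimal sketch assuming numpy and scipy:

```python
import numpy as np
from scipy.stats import chi2

def innovation_gate(z, z_pred, S, prob=0.99):
    """Flag a measurement as suspect when its normalised innovation squared
    exceeds a chi-square threshold for the measurement dimension."""
    nu = z - z_pred
    nis = float(nu.T @ np.linalg.inv(S) @ nu)
    threshold = chi2.ppf(prob, df=len(z))
    return nis > threshold, nis

# Example: a GNSS position fix checked against the filter prediction.
faulty, nis = innovation_gate(np.array([5.0, 3.2]),
                              np.array([4.9, 3.1]),
                              S=np.eye(2) * 0.04)
```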
APA, Harvard, Vancouver, ISO, and other styles
38

Seba, Ali. "Fusion de données capteurs visuels et inertiels pour l'estimation de la pose d'un corps rigide." Thesis, Versailles-St Quentin en Yvelines, 2015. http://www.theses.fr/2015VERS020V/document.

Full text
Abstract:
This thesis addresses the problem of estimating the pose (relative position and orientation) of a rigid body moving in 3D space by fusing data from inertial and visual sensors. The inertial measurements are provided by an IMU (Inertial Measurement Unit) composed of three-axis gyroscopes and three-axis accelerometers. The visual data come from a camera mounted on the moving rigid body, which provides images representative of the perceived visual field. The implicit measurements of the directions of lines assumed fixed in the scene, projected onto the image plane, are used in the attitude estimation algorithm. The approach first addresses the problem of visual measurement over a long sequence using image characteristics: a line tracking algorithm is proposed, based on the optical flow of points extracted from the lines to be tracked and on a matching approach that minimizes the Euclidean distance. An observer designed in SO(3) is then proposed to estimate the relative orientation of the rigid body in the 3D scene by fusing the output of the line tracking algorithm with gyroscope data. The observer gain is computed using a Multiplicative Extended Kalman Filter (MEKF), and the sign ambiguity caused by the implicit measurement of line directions is taken into account in the observer design. Finally, the estimation of the relative position and absolute velocity of the rigid body is treated. Two observers are proposed: the first is a cascaded observer that decouples attitude estimation from position estimation, in which the attitude estimate feeds a nonlinear observer using accelerometer measurements to provide estimates of the relative position and absolute velocity; the second, designed directly in SE(3), uses an MEKF to estimate the pose by fusing inertial data (accelerometers, gyroscopes) and visual data. The performance of the proposed methods is illustrated and validated by various simulation results.
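The gyro-driven prediction that such vision-aided attitude observers correct can be sketched as a first-order quaternion integration (numpy, scalar-first convention; the angular rates are illustrative, and this is only the propagation step, not the MEKF observer itself):

```python
import numpy as np

def quat_propagate(q, omega, dt):
    """Propagate a unit quaternion with body angular rate omega (rad/s) over dt.
    This is the prediction that a vision-aided observer later corrects."""
    wx, wy, wz = omega
    Omega = np.array([[0.0, -wx, -wy, -wz],
                      [wx,  0.0,  wz, -wy],
                      [wy, -wz,  0.0,  wx],
                      [wz,  wy, -wx,  0.0]])
    q = q + 0.5 * dt * Omega @ q          # first-order integration
    return q / np.linalg.norm(q)          # re-normalise to stay a unit quaternion

q = np.array([1.0, 0.0, 0.0, 0.0])        # identity attitude, scalar first
q = quat_propagate(q, omega=np.array([0.01, 0.02, -0.005]), dt=0.01)
```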
APA, Harvard, Vancouver, ISO, and other styles
39

May, Michael. "Data analytics and methods for improved feature selection and matching." Thesis, University of Manchester, 2012. https://www.research.manchester.ac.uk/portal/en/theses/data-analytics-and-methods-for-improved-feature-selection-and-matching(965ded10-e3a0-4ed5-8145-2af7a8b5e35d).html.

Full text
Abstract:
This work focuses on analysing and improving feature detection and matching. After creating an initial framework of study, four main areas of work are researched. These areas make up the main chapters within this thesis and focus on using the Scale Invariant Feature Transform (SIFT). The preliminary analysis of the SIFT investigates how this algorithm functions. Included is an analysis of the SIFT feature descriptor space and an investigation into the noise properties of the SIFT. It introduces a novel use of the a contrario methodology and shows the success of this method as a way of discriminating images which are likely to contain corresponding regions from those which do not. Parameter analysis of the SIFT uses both parameter sweeps and genetic algorithms as an intelligent means of setting the SIFT parameters for different image types, utilising a GPGPU implementation of SIFT. The results demonstrate which parameters are more important when optimising the algorithm and which areas of the parameter space to focus on when tuning the values. A multi-exposure, High Dynamic Range (HDR) feature-fusion process has been developed in which SIFT features are matched within high-contrast scenes. Bracketed exposure images are analysed, and features are extracted and combined from different images to create a set of features which describe a larger dynamic range. These are shown to reduce the effects of noise and artefacts introduced when extracting features from HDR images directly, and to give superior image matching performance. The final area is the development of a novel, 3D-based SIFT weighting technique which utilises the 3D data from a pair of stereo images to cluster and class matched SIFT features. Weightings are applied to the matches based on the 3D properties of the features and how they cluster, in order to discriminate between correct and incorrect matches using the a contrario methodology. The results show that the technique provides a method for discriminating between correct and incorrect matches, and that the a contrario methodology has potential for future investigation as a method for correct feature match prediction.
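The SIFT matching step that underlies this analysis can be sketched with OpenCV, assuming a build where SIFT is available (cv2.SIFT_create in OpenCV >= 4.4); the image file names are placeholders:

```python
import cv2

# Illustrative file names; any pair of overlapping grayscale images will do.
img1 = cv2.imread("scene_a.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("scene_b.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Lowe's ratio test: keep a match only if its best distance is clearly smaller
# than the second-best, which rejects ambiguous descriptors.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]
```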
APA, Harvard, Vancouver, ISO, and other styles
40

Pálenská, Markéta. "Návrh algoritmu pro fúzi dat navigačních systémů GPS a INS." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2013. http://www.nusl.cz/ntk/nusl-230495.

Full text
Abstract:
This master's thesis deals with the design of an extended Kalman filter algorithm that integrates data from an inertial navigation system (INS) and the Global Positioning System (GPS). The algorithm also includes the INS mechanization itself, which determines the aircraft's velocity, geographic position and attitude angles from accelerometer and gyroscope data. Because INS errors grow rapidly, the output is corrected with velocity and position values obtained from GPS. The resulting algorithm is implemented in the Simulink environment. The thesis also includes the derivation of the individual state matrices of the extended Kalman filter.
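A minimal one-dimensional illustration of the loosely coupled GPS/INS correction described here (numpy; the accelerations, noise levels and 1 Hz GPS cadence are assumptions for the sketch, not values from the thesis):

```python
import numpy as np

# 1D sketch: integrate the accelerometer (INS mechanization), then correct the
# drifting position/velocity with a GPS position fix via a Kalman update.
dt = 0.01
x = np.zeros(2)                      # [position m, velocity m/s]
P = np.eye(2)
F = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([0.5 * dt**2, dt])
Q = np.diag([1e-6, 1e-4])
H = np.array([[1.0, 0.0]])           # GPS observes position only
R = np.array([[4.0]])                # ~2 m GPS standard deviation

rng = np.random.default_rng(1)
for k in range(1000):
    accel = 0.2 + rng.normal(scale=0.05)          # noisy accelerometer sample
    x = F @ x + B * accel                         # INS propagation
    P = F @ P @ F.T + Q
    if (k + 1) % 100 == 0:                        # 1 Hz GPS fix (stand-in value;
        z = np.array([x[0] + rng.normal(scale=2.0)])  # a real sim would use truth)
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ (z - H @ x)
        P = (np.eye(2) - K @ H) @ P
```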
APA, Harvard, Vancouver, ISO, and other styles
41

Plachkov, Alex. "Soft Data-Augmented Risk Assessment and Automated Course of Action Generation for Maritime Situational Awareness." Thesis, Université d'Ottawa / University of Ottawa, 2016. http://hdl.handle.net/10393/35336.

Full text
Abstract:
This thesis presents a framework capable of integrating hard (physics-based) and soft (people-generated) data for the purpose of achieving improved situational assessment (SA) and effective course of action (CoA) generation upon risk identification. The proposed methodology is realized through the extension of an existing Risk Management Framework (RMF). In this work, the RMF's SA capabilities are augmented via the injection of soft data features into its risk modeling; the performance of these capabilities is evaluated via a newly proposed risk-centric information fusion effectiveness metric. The framework's CoA generation capabilities are also extended through the inclusion of people-generated data, capturing important subject matter expertise and providing mission-specific requirements. Furthermore, this work introduces a variety of CoA-related performance measures, used to assess the fitness of each individual potential CoA, as well as to quantify the overall improvement in the chance of mission success brought about by the inclusion of soft data. This conceptualization is validated via experimental analysis performed on a combination of real-world and synthetically generated maritime scenarios. It is envisioned that the capabilities put forth herein will form part of a greater system, capable of ingesting and seamlessly integrating vast amounts of heterogeneous data, with the intent of providing accurate and timely situational updates, as well as assisting in operational decision making.
APA, Harvard, Vancouver, ISO, and other styles
42

Reche, Jérôme. "Nouvelle méthodologie hybride pour la mesure de rugosités sub-nanométriques." Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAT050.

Full text
Abstract:
Determining sub-nanometric roughness on the sidewalls of patterns whose critical dimensions are below 10 nm is becoming an essential step, yet no single metrology technique is currently robust enough to guarantee an accurate and precise result. One path currently being explored for dimensional measurement is to hybridize different metrology techniques, using data fusion algorithms to process information from multiple metrology tools; the goal here is to apply the same kind of method to line roughness measurement. This thesis first presents improvements in line roughness measurement methodology through frequency decomposition and the associated models. The techniques used for line roughness measurement are reviewed, with an important novelty: the development and use of SAXS (Small Angle X-ray Scattering) for this type of measurement, a technique with high potential for characterizing sub-nanometric patterns. Line roughness reference samples are fabricated, following the state of the art with periodic roughness, but also with more complex roughness generated from the statistical model normally used for measurement. Finally, the work focuses on hybridization methods, and more particularly on the use of neural networks: the construction of a neural network is detailed through the many parameters that must be set, and training the network on simulations requires the ability to generate the different metrologies involved.
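The frequency-decomposition step on which the roughness models rest can be sketched as a periodogram-style power spectral density of a line-edge profile (numpy; the synthetic profile, pixel size and normalization are illustrative):

```python
import numpy as np

def roughness_psd(edge_positions, pixel_size_nm):
    """Periodogram-style PSD estimate of a line-edge profile, the quantity on
    which frequency-based roughness models and 3-sigma LER figures are built."""
    x = edge_positions - edge_positions.mean()      # remove the mean edge position
    n = len(x)
    spectrum = np.fft.rfft(x)
    psd = (np.abs(spectrum) ** 2) * pixel_size_nm / n
    freqs = np.fft.rfftfreq(n, d=pixel_size_nm)     # spatial frequencies in 1/nm
    return freqs, psd

# Illustrative correlated profile sampled every 2 nm.
profile = np.cumsum(np.random.randn(512)) * 0.1
freqs, psd = roughness_psd(profile, pixel_size_nm=2.0)
sigma_ler = 3.0 * profile.std()                     # the usual 3-sigma roughness figure
```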
APA, Harvard, Vancouver, ISO, and other styles
43

Kenyeres, Martin. "Analýza a zefektivnění distribuovaných systémů." Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2018. http://www.nusl.cz/ntk/nusl-390292.

Full text
Abstract:
Significant progress in the evolution of computer systems and their interconnection over the past 70 years has allowed frequently used centralized architectures to be replaced by highly distributed ones, formed by independent entities that fulfil specific functions while appearing to the user as a single unit. This has resulted in intense scientific interest in distributed algorithms and their frequent implementation in real systems. In particular, distributed algorithms for multi-sensor data fusion, which enhance the QoS of executed applications, are widely used. This doctoral thesis addresses the optimization and analysis of distributed systems, namely distributed consensus-based algorithms for aggregate function estimation (primarily mean estimation). The first section covers the theoretical background of distributed systems, their evolution, their architectures, and a comparison with centralized systems (i.e. their advantages and disadvantages). The second chapter deals with multi-sensor data fusion, its applications, the classification of distributed estimation techniques, their mathematical modeling, and frequently cited algorithms for distributed averaging (e.g. the Push-Sum protocol, Metropolis-Hastings weights, Best Constant weights, etc.). The practical part focuses on mechanisms for optimizing distributed systems, the proposal of novel algorithms and complements for distributed systems, their analysis, and comparative studies in terms of, for example, convergence rate, estimation precision, robustness, and applicability to real systems.
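A minimal sketch of distributed averaging with Metropolis-Hastings weights, one of the algorithms cited here (numpy; the four-node graph and initial values are invented for the example):

```python
import numpy as np

# Average consensus with Metropolis-Hastings weights on an undirected graph:
# every node repeatedly mixes its value with its neighbours' and all values
# converge to the network-wide mean.
adjacency = {0: [1], 1: [0, 2, 3], 2: [1, 3], 3: [1, 2]}
x = np.array([4.0, 8.0, 1.0, 7.0])            # initial local measurements
degree = {i: len(n) for i, n in adjacency.items()}

n = len(x)
W = np.zeros((n, n))
for i, neigh in adjacency.items():
    for j in neigh:
        W[i, j] = 1.0 / (1.0 + max(degree[i], degree[j]))
    W[i, i] = 1.0 - W[i].sum()

for _ in range(50):
    x = W @ x                                 # one synchronous consensus iteration

# x is now close to the true average (5.0) at every node.
```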
APA, Harvard, Vancouver, ISO, and other styles
44

Jiao, Lianmeng. "Classification of uncertain data in the framework of belief functions : nearest-neighbor-based and rule-based approaches." Thesis, Compiègne, 2015. http://www.theses.fr/2015COMP2222/document.

Full text
Abstract:
In many classification problems, data are inherently uncertain: the available training data may be imprecise, incomplete, even unreliable, and partial expert knowledge characterizing the classification problem may also be available. These different types of uncertainty pose great challenges for classifier design. The theory of belief functions provides a well-founded and elegant framework to represent and combine a large variety of uncertain information. In this thesis, we use this theory to address uncertain data classification problems based on two popular approaches, namely the k-nearest neighbor (kNN) rule and rule-based classification systems. For the kNN rule, one concern is that imprecise training data in class-overlapping regions may greatly affect its performance; an evidential editing version of the kNN rule is therefore developed in the framework of belief functions to model the imprecise information carried by samples in overlapping regions. Another consideration is that sometimes only an incomplete training data set is available, in which case the performance of the kNN rule degrades dramatically; motivated by this problem, we design an evidential fusion scheme for combining a group of pairwise kNN classifiers built on locally learned pairwise distance metrics. For rule-based classification systems, in order to improve their performance in complex applications, we extend the traditional fuzzy rule-based classification system in the framework of belief functions and develop a belief rule-based classification system to handle uncertain information in complex classification problems. Further, considering that in some applications partial expert knowledge can be available in addition to training data collected by sensors, a hybrid belief rule-based classification system is developed to exploit these two types of information jointly for classification.
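The evidential kNN idea can be sketched as follows: each neighbour contributes a simple mass function over {its class, ignorance} and the masses are pooled with Dempster's rule, so distant or ambiguous neighbours mostly add ignorance instead of votes. This is a compact, Denoeux-style illustration assuming numpy, not the evidential editing or pairwise fusion schemes of the thesis:

```python
import numpy as np

def dempster(m1, m2):
    """Dempster's rule for mass functions whose focal sets are singletons or Omega."""
    classes = [c for c in m1 if c != "Omega"]
    out = {c: 0.0 for c in classes}
    out["Omega"] = m1["Omega"] * m2["Omega"]
    conflict = 0.0
    for a in classes:
        out[a] = m1[a] * m2[a] + m1[a] * m2["Omega"] + m1["Omega"] * m2[a]
        for b in classes:
            if b != a:
                conflict += m1[a] * m2[b]
    return {key: val / (1.0 - conflict) for key, val in out.items()}

def evidential_knn(x, X_train, y_train, k=3, alpha=0.95, gamma=0.5):
    """Each of the k nearest neighbours yields a mass on {its class, Omega}."""
    classes = [int(c) for c in np.unique(y_train)]
    d = np.linalg.norm(X_train - x, axis=1)
    m = {c: 0.0 for c in classes}
    m["Omega"] = 1.0                          # start from total ignorance
    for i in np.argsort(d)[:k]:
        support = alpha * np.exp(-gamma * d[i] ** 2)
        mi = {c: 0.0 for c in classes}
        mi["Omega"] = 1.0 - support
        mi[int(y_train[i])] = support
        m = dempster(m, mi)
    return max(classes, key=lambda c: m[c]), m

# Tiny two-class example with a query point near class 0.
X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.1], [0.9, 1.0]])
y = np.array([0, 0, 1, 1])
label, masses = evidential_knn(np.array([0.2, 0.1]), X, y, k=3)
```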
APA, Harvard, Vancouver, ISO, and other styles
45

Avilès, Cruz Carlos. "Analyse de texture par statistiques d'ordre superieur : caracterisation et performances." Grenoble INPG, 1997. http://www.theses.fr/1997INPG0001.

Full text
Abstract:
In a statistical framework, classifying and segmenting fine-grained textures often relies on first- and second-order statistics, but these are sometimes insufficient. An alternative is to use higher-order statistics, in particular third- and fourth-order moments. This work establishes an experimental methodology for classifying and segmenting micro-textures using these third- and fourth-order statistics. Two methods for micro-texture classification and segmentation are developed: a supervised method and an unsupervised one, the latter based on the EM algorithm. Because of the strong redundancy of the information underlying the statistical moments, dimension-reduction and feature-selection methods are tested. For dimension reduction, principal component analysis (PCA) and curvilinear component analysis (CCA) are used; for feature selection, the branch-and-bound method is applied. These methods are explored on the statistical attributes in order to retain the most discriminating ones, either in the original space or in a projection space. Since no single family of parameters taken in isolation (first-, second-, third- or fourth-order statistics) is sufficient to discriminate a wide range of textures, they must be made to cooperate through data fusion, which provides an efficient recognition solution.
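A hedged sketch of statistical texture descriptors up to order four (mean, variance, skewness, kurtosis) for an image patch, assuming numpy and scipy; the synthetic patches are only for illustration:

```python
import numpy as np
from scipy.stats import skew, kurtosis

def hos_features(patch):
    """Descriptors of a texture patch: mean/variance (orders 1-2) plus
    skewness and kurtosis (orders 3-4)."""
    p = patch.ravel().astype(float)
    return np.array([p.mean(), p.var(), skew(p), kurtosis(p)])

# Illustrative use: describe two micro-texture patches and compare them.
rng = np.random.default_rng(0)
patch_a = rng.normal(size=(32, 32))          # symmetric texture
patch_b = rng.exponential(size=(32, 32))     # skewed texture
fa, fb = hos_features(patch_a), hos_features(patch_b)
```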
APA, Harvard, Vancouver, ISO, and other styles
46

Negri, Lucas Hermann. "Algoritmos de inteligência computacional em instrumentação: uso de fusão de dados na avaliação de amostras biológicas e químicas." Universidade do Estado de Santa Catarina, 2012. http://tede.udesc.br/handle/handle/2072.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
This work presents computational methods for processing data from electrical impedance spectroscopy and fiber Bragg grating interrogation in order to characterize the evaluated samples. Estimation and classification systems were developed using the signals either in isolation or jointly. A new method is proposed to adjust the parameters of the functions that describe electrical impedance spectra using particle swarm optimization; the method is also extended to correct distorted spectra. A benchmark of peak detection algorithms for fiber Bragg grating interrogation was performed, covering the algorithms currently reported in the literature and evaluating accuracy, precision, and computational performance. This comparative study used both simulated and experimental data. No algorithm is optimal in all aspects at once, but a suitable one can be chosen once the application requirements are known. A novel peak detection algorithm based on an artificial neural network is proposed, recommended when the analyzed spectra are distorted or asymmetric. Artificial neural networks and support vector machines were employed together with the data processing algorithms to classify samples or estimate their characteristics in experiments with bovine meat, milk, and automotive fuel. The results show that the proposed data processing methods are useful for extracting the main information from the data and that the data fusion schemes employed met their initial classification and estimation objectives.
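One of the classical peak-detection baselines benchmarked in this kind of work, a Gaussian fit of the FBG reflection spectrum, can be sketched as follows (numpy/scipy; the synthetic spectrum around 1550 nm is illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(wl, amp, center, width, offset):
    return amp * np.exp(-((wl - center) ** 2) / (2.0 * width ** 2)) + offset

def fbg_peak(wavelengths, reflection):
    """Estimate the Bragg wavelength by fitting a Gaussian to the spectrum,
    one of the classical peak-detection methods used as a baseline."""
    p0 = [reflection.max() - reflection.min(),
          wavelengths[np.argmax(reflection)],
          0.1, reflection.min()]
    popt, _ = curve_fit(gaussian, wavelengths, reflection, p0=p0)
    return popt[1]                               # fitted peak centre (nm)

# Synthetic noisy spectrum around 1550 nm.
wl = np.linspace(1549.0, 1551.0, 400)
spec = gaussian(wl, 1.0, 1550.05, 0.08, 0.02) + np.random.randn(wl.size) * 0.01
bragg = fbg_peak(wl, spec)
```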
APA, Harvard, Vancouver, ISO, and other styles
47

Patrix, Jérémy. "Détection de comportements à travers des modèles multi-agents collaboratifs, appliquée à l'évaluation de la situation, notamment en environnement asymétrique avec des données imprécises et incertaines." Phd thesis, Université de Caen, 2013. http://tel.archives-ouvertes.fr/tel-00991091.

Full text
Abstract:
This thesis manuscript presents an innovative, patented method for detecting collective behaviors. Using fusion processes on data from a multi-sensor network, recent surveillance systems obtain observation sequences of the monitored persons. This low level of situation assessment has proved insufficient to help security forces during crowd events. In order to reach a higher level of situation assessment in these asymmetric environments, we propose a multi-agent approach that reduces the complexity of the problem with agents at three levels of observation: macro, meso and micro. We introduce a new relative state into state-of-the-art approaches to enable real-time detection of groups, their behaviors, objectives and intentions. Within European projects, we used a serious game simulating a crowd in asymmetric scenarios. The results show better agreement with theoretical predictions and a significant improvement over previous work. The work presented here could be used in future studies of multi-agent behavior detection and could one day help solve problems related to catastrophic events involving uncontrollable crowds.
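Purely as a generic illustration of detecting groups from tracked positions and velocities (not the patented multi-level method itself), one can cluster a joint position/velocity state, here with scikit-learn's DBSCAN and invented coordinates:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Cluster on a joint position/velocity state so that people who are close AND
# move alike end up in the same group.
positions = np.array([[0.0, 0.0], [1.0, 0.5], [0.5, 1.0], [10.0, 10.0], [10.5, 9.5]])
velocities = np.array([[1.0, 0.0], [1.0, 0.1], [0.9, 0.0], [-1.0, 0.0], [-1.0, 0.1]])

state = np.hstack([positions, 2.0 * velocities])     # weight velocity agreement
labels = DBSCAN(eps=1.8, min_samples=2).fit_predict(state)
# labels -> e.g. [0, 0, 0, 1, 1]: two groups moving in opposite directions
```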
APA, Harvard, Vancouver, ISO, and other styles
48

Osman, Ousama. "Méthodes de diagnostic en ligne, embarqué et distribué dans les réseaux filaires complexes." Thesis, Université Clermont Auvergne‎ (2017-2020), 2020. http://www.theses.fr/2020CLFAC038.

Full text
Abstract:
The research conducted in this thesis focuses on the diagnosis of complex wired networks using distributed reflectometry. It aims to develop new online, embedded and distributed diagnosis techniques for complex networks that allow data fusion as well as communication between reflectometers, in order to detect, locate and characterize electrical faults (both hard and soft). This collaboration between reflectometers resolves the ambiguity in fault location and improves the quality of the diagnosis. The first contribution is a graph-theory-based method for combining data from the distributed reflectometers, which facilitates fault location; the amplitude of the reflected signal is then used to identify the fault type and estimate its impedance, by regenerating the signal to compensate for the degradation the diagnosis signal undergoes while propagating through the network. The second contribution enables data fusion between distributed reflectometers in complex networks affected by multiple faults. To achieve this, two methods are proposed and developed: the first based on genetic algorithms (GA) and the second on neural networks (NN). Combined with distributed reflectometry, these tools allow automatic detection, location, and characterization of several faults in wired networks of different types and topologies. The third contribution integrates communication between reflectometers through an information-carrying diagnosis signal: the phases of the MCTDR multi-carrier signal are used to transmit data. This communication ensures the exchange of useful information between reflectometers about the state of the cables, enabling data fusion and unambiguous fault location. Interference problems that arise when reflectometers inject their test signals into the network simultaneously are also addressed. This work shows the effectiveness of the proposed methods in improving the performance of current wired diagnosis systems, in particular for faults that are still difficult to detect today, and in ensuring the operational safety of electrical systems.
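The elementary reflectometry step behind all of this, correlating the injected test signal with the line response and converting the echo lag into a distance, can be sketched as follows (numpy; the sampling rate, propagation velocity and synthetic echo are assumptions, and this is plain correlation TDR rather than the MCTDR scheme):

```python
import numpy as np

def locate_reflection(injected, measured, fs, v_propagation):
    """Correlate the injected test signal with the line response and turn the lag
    of the strongest echo into a distance (round trip divided by two)."""
    corr = np.correlate(measured, injected, mode="full")
    lags = np.arange(-len(injected) + 1, len(measured))
    later = lags > 5                       # skip the direct (zero-lag) coupling
    lag = lags[later][np.argmax(np.abs(corr[later]))]
    return 0.5 * lag / fs * v_propagation  # metres to the impedance change

# Illustrative numbers: 200 MS/s sampling, ~2e8 m/s propagation velocity.
fs, v = 200e6, 2.0e8
probe = np.random.choice([-1.0, 1.0], size=128)     # pseudo-random test sequence
response = np.zeros(1024)
response[:128] += probe                             # direct coupling
response[300:428] += 0.4 * probe                    # echo from a fault ~150 m away
distance = locate_reflection(probe, response, fs, v)
```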
APA, Harvard, Vancouver, ISO, and other styles
49

Papež, Milan. "Optimální odhad stavu modelu navigačního systému." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2013. http://www.nusl.cz/ntk/nusl-220149.

Full text
Abstract:
This thesis investigates the possibility of using fixed-point arithmetic in inertial navigation systems that use local-level navigation frame mechanization equations. Two square-root filtering methods, Potter's square-root Kalman filter and the UD factorized Kalman filter, are compared against the conventional Kalman filter and its Joseph stabilized form. The effect of rounding errors on Kalman filter optimality and on the conditioning of the covariance matrix or its factors is evaluated for various lengths of the fractional part of the fixed-point computational word. The main contribution of this research is an evaluation of the minimal fixed-point word length for the phi-angle error model with noise statistics corresponding to tactical-grade inertial measurement units.
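To make the comparison concrete, here is a sketch of the conventional covariance update next to Joseph's stabilized form (numpy, double precision; the fixed-point quantization studied in the thesis is not reproduced here):

```python
import numpy as np

def joseph_update(P, K, H, R):
    """Joseph's stabilised covariance update: algebraically equal to (I-KH)P but
    it keeps P symmetric positive semi-definite under rounding, which is why it
    (and square-root/UD forms) matter for short fixed-point word lengths."""
    A = np.eye(P.shape[0]) - K @ H
    return A @ P @ A.T + K @ R @ K.T

def conventional_update(P, K, H):
    return (np.eye(P.shape[0]) - K @ H) @ P

# Toy comparison: both agree in double precision; the Joseph form is the one
# that stays well conditioned when the arithmetic is quantised.
P = np.diag([1e4, 1e-4]); H = np.array([[1.0, 0.0]]); R = np.array([[0.01]])
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
P_joseph = joseph_update(P, K, H, R)
P_conv = conventional_update(P, K, H)
```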
APA, Harvard, Vancouver, ISO, and other styles
50

Chiang, Kuan-Chen, and 江冠臻. "Performance Analysis of Integrated Weights Data Fusion Algorithms." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/69080295994095178349.

Full text
Abstract:
Master's thesis, 育達商業技術學院 (Yu Da College of Business), 資訊管理所 (Graduate Institute of Information Management), academic year 93 (ROC calendar).
The desired improvements in multi-sensor network tracking systems rely on more accurate state estimates and lower computational load. An algorithm is devised for distributed multi-sensor track-to-track data fusion. For the sensor-level tracks, the method is based on a decoupling technique in which the Kalman filter gains are computed recursively, in order to reduce the computational load involved in a physical implementation. At the local processor, the state vector of the data fusion algorithm is constructed by combining pairs of sensor tracks in the network. At the global processor, an approach called the integrated weights algorithm is used to compute the global state estimate from the track data transmitted by the local processors. Performance results for the proposed algorithm are compared with those of the local processors, using computer simulations of typical target maneuvering scenarios.
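The simplest weighted track-to-track combination, with inverse-covariance weights and cross-covariances neglected, can be sketched as follows (numpy; this is a generic baseline, not the integrated-weights algorithm itself):

```python
import numpy as np

def fuse_tracks(states, covariances):
    """Combine local track estimates with inverse-covariance (information) weights,
    the simplest track-to-track fusion rule when cross-covariances are neglected."""
    infos = [np.linalg.inv(P) for P in covariances]
    P_fused = np.linalg.inv(sum(infos))
    x_fused = P_fused @ sum(I @ x for I, x in zip(infos, states))
    return x_fused, P_fused

# Two sensor-level tracks of the same target (position, velocity).
x1, P1 = np.array([100.0, 9.5]), np.diag([25.0, 1.0])
x2, P2 = np.array([104.0, 10.2]), np.diag([9.0, 0.5])
x_g, P_g = fuse_tracks([x1, x2], [P1, P2])
```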
APA, Harvard, Vancouver, ISO, and other styles
