Dissertations / Theses on the topic 'Distributed Data Fusion (DDF)'

To see the other types of publications on this topic, follow the link: Distributed Data Fusion (DDF).

Consult the top 37 dissertations / theses for your research on the topic 'Distributed Data Fusion (DDF).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Aziz, Ashraf Mamdouh Abdel. "New data fusion algorithms for distributed multi-sensor multi-target environments." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1999. http://handle.dtic.mil/100.2/ADA369780.

Full text
Abstract:
Dissertation (Ph.D. in Electrical Engineering), Naval Postgraduate School, September 1999.
Dissertation supervisor(s): Robert Cristi, Murali Tummala. Includes bibliographical references (p. 199-214). Also available online.
2

Wallace, Christopher John. "Distributed data fusion for condition monitoring of graphite nuclear reactor cores." Thesis, University of Strathclyde, 2013. http://oleg.lib.strath.ac.uk:80/R/?func=dbin-jump-full&object_id=20607.

Full text
Abstract:
Nuclear power stations worldwide are exceeding their originally specified design lives and, with only limited construction of new generation underway, there is a desire to continue the operation of existing stations to ensure electricity supply. Continued operation of nuclear power stations with degrading and life-limiting components necessitates increased monitoring and inspection, particularly of the reactor cores, to ensure they are safe to operate. The monitoring of a large number of components and their related data sources is a distributed and time-consuming process for the engineer given the lack of infrastructure available for collecting, managing and analysing monitoring data. This thesis describes the issues associated with nuclear Condition Monitoring (CM) and investigates the suitability of a distributed framework utilising intelligent software agents to collect, manage and analyse data autonomously. The application of data fusion techniques is examined to estimate unrecorded parameters, to provide contextualisation for anomalies in order to quickly distinguish true faults from explainable anomalies, and to extract more detail from existing CM data. A generalised framework is described for nuclear CM of any type of reactor, specifying the required components and capabilities based on the design of a suitable Multi-Agent System, including the interaction of the framework with existing CM systems and human users. A high-level ontology for nuclear CM is proposed and is emphasised as a crucial aspect of the data management and extensibility of the framework to incorporate further data sources and analyses. A prototype system, based on the generalised framework, is developed for the case of the Advanced Gas-cooled Reactor, with new and existing CM analyses formalised within intelligent agents. Using real station data and simulated fault data, the prototype system was shown to be capable of performing the existing monitoring tasks considerably faster than a human user while retaining all data and analyses for justification and traceability of decisions based on the analyses.
3

Vilsmaier, Christian. "Contextualized access to distributed and heterogeneous multimedia data sources." Thesis, Lyon, INSA, 2014. http://www.theses.fr/2014ISAL0094/document.

Full text
Abstract:
Making multimedia data available online becomes less expensive and more convenient on a daily basis. This development promotes web phenomena such as Facebook, Twitter, and Flickr. These phenomena and their increased acceptance in society in turn lead to a multiplication of the amount of images available online. This vast amount of frequently public, and therefore searchable, images already exceeds the zettabyte bound. Executing a similarity search on the magnitude of images that are publicly available and receiving a top quality result is a challenge that the scientific community has recently attempted to rise to. One approach to cope with this problem assumes the use of distributed heterogeneous Content Based Image Retrieval systems (CBIRs). Following from this anticipation, the problems that emerge from a distributed query scenario must be dealt with: for example, the involved CBIRs' usage of distinct metadata formats for describing their content, as well as their unequal technical and structural information. An additional issue is the individual metrics that are used by the CBIRs to calculate the similarity between pictures, as well as their specific way of being combined. Overall, receiving good results in this environment is a very labor-intensive task which has been scientifically but not yet comprehensively explored. The problem primarily addressed in this work is the collection of pictures from CBIRs that are similar to a given picture, as a response to a distributed multimedia query. The main contribution of this thesis is the construction of a network of Content Based Image Retrieval systems that are able to extract and exploit the information about an input image's semantic concept. This so-called semantic CBIRn is mainly composed of CBIRs that are configured by the semantic CBIRn itself. Complementarily, there is the possibility of integrating specialized external sources. The semantic CBIRn is able to collect and merge results from all of these attached CBIRs. In order to be able to integrate external sources that are willing to join the network, but are not willing to disclose their configuration, an algorithm was developed that approximates these configurations. By categorizing existing as well as external CBIRs and analyzing incoming queries, image queries are exclusively forwarded to the most suitable CBIRs. In this way, images that are not of any use for the user can be omitted beforehand. The returned images are then rendered comparable in order to merge them into one single result list of images that are similar to the input image. The feasibility of the approach and the improvement of the search process obtained in this way are demonstrated by a prototypical implementation. Using this prototypical implementation, an increase in the number of returned images that are of the same semantic concept as the input image is achieved by a factor of 4.75 with respect to a predefined non-semantic CBIRn.
4

Lin, Erwei Kam Moshe. "Detection in distributed sensor networks /." Philadelphia, Pa. : Drexel University, 2005. http://hdl.handle.net/1860/1303.

Full text
5

Gnanapandithan, Nithya. "Data detection and fusion in decentralized sensor networks." Thesis, Manhattan, Kan. : Kansas State University, 2005. http://hdl.handle.net/2097/132.

Full text
6

Palaniappan, Ravishankar. "A SELF-ORGANIZING HYBRID SENSOR SYSTEM WITH DISTRIBUTED DATA FUSION FOR INTRUDER TRACKING AND SURVEILLANCE." Doctoral diss., University of Central Florida, 2010. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2407.

Full text
Abstract:
A wireless sensor network is a network of distributed nodes each equipped with its own sensors, computational resources and transceivers. These sensors are designed to be able to sense specific phenomena over a large geographic area and communicate this information to the user. Most sensor networks are designed to be stand-alone systems that can operate without user intervention for long periods of time. While the use of wireless sensor networks has been demonstrated in various military and commercial applications, their full potential has not been realized, primarily due to the lack of efficient methods to self-organize and cover the entire area of interest. Techniques currently available focus solely on homogeneous wireless sensor networks, either static or mobile, and suffer from device-specific inadequacies such as lack of coverage, power and fault tolerance. Failing nodes result in coverage loss and breakage in communication connectivity, and hence there is a pressing need for a fault-tolerant system that allows the failed nodes to be replaced. In this dissertation, a unique hybrid sensor network is demonstrated that includes a host of mobile sensor platforms. It is shown that the coverage area of the static sensor network can be improved by self-organizing the mobile sensor platforms to allow interaction with the static sensor nodes and thereby increase the coverage area. The performance of the hybrid sensor network is analyzed for a set of N mobile sensors to determine and optimize parameters such as the position of the mobile nodes for maximum coverage of the sensing area without loss of signal between the mobile sensors, static nodes and the central control station. A novel approach to tracking dynamic targets is also presented. Unlike other tracking approaches that rely on computationally complex methods, the strategy adopted in this work is based on a computationally simple but effective technique of received signal strength indicator measurements. The algorithms developed in this dissertation are based on a number of reasonable assumptions that are easily verified in a densely distributed sensor network and require simple computations that efficiently track the target in the sensor field. False alarm rate, probability of detection and latency are computed and compared with other published techniques. The performance analysis of the tracking system is done on an experimental testbed and also through simulation, and the improvement in accuracy over other methods is demonstrated.
Ph.D.
School of Electrical Engineering and Computer Science
Engineering and Computer Science
Modeling and Simulation PhD
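The abstract above describes tracking from received signal strength indicator (RSSI) measurements but does not give the formulation. As a rough illustration of the general idea only, the sketch below inverts a log-distance path-loss model and forms an RSSI-weighted centroid estimate; the path-loss parameters, node layout and weighting scheme are hypothetical and are not taken from the dissertation.

```python
import numpy as np

# Hypothetical log-distance path-loss parameters (not taken from the dissertation).
P0 = -40.0      # RSSI in dBm at the 1 m reference distance
N_EXP = 2.5     # path-loss exponent

def rssi_to_range(rssi_dbm):
    """Invert the log-distance model rssi = P0 - 10*n*log10(d)."""
    return 10.0 ** ((P0 - np.asarray(rssi_dbm)) / (10.0 * N_EXP))

def weighted_centroid(node_xy, rssi_dbm):
    """Estimate the target position as an RSSI-weighted centroid of node positions."""
    weights = 1.0 / np.maximum(rssi_to_range(rssi_dbm), 1e-3)   # closer nodes weigh more
    weights /= weights.sum()
    return weights @ np.asarray(node_xy, dtype=float)

# Toy scenario: four static nodes observe a target near (3, 4).
nodes = [(0, 0), (10, 0), (0, 10), (10, 10)]
true_target = np.array([3.0, 4.0])
dists = np.linalg.norm(np.asarray(nodes, dtype=float) - true_target, axis=1)
rssi = P0 - 10 * N_EXP * np.log10(dists) + np.random.normal(0, 1.0, len(nodes))
print("estimated target position:", weighted_centroid(nodes, rssi))
```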
7

Cadell, Philip. "BabelFuse data fusion unit with precision wireless clock synchronisation." Thesis, Queensland University of Technology, 2012. https://eprints.qut.edu.au/55225/1/Philip_Cadell_Thesis.pdf.

Full text
Abstract:
A significant issue encountered when fusing data received from multiple sensors is the accuracy of the timestamp associated with each piece of data. This is particularly important in applications such as Simultaneous Localisation and Mapping (SLAM) where vehicle velocity forms an important part of the mapping algorithms; on fast-moving vehicles, even millisecond inconsistencies in data timestamping can produce errors which need to be compensated for. The timestamping problem is compounded in a robot swarm environment due to the use of non-deterministic readily-available hardware (such as 802.11-based wireless) and inaccurate clock synchronisation protocols (such as Network Time Protocol (NTP)). As a result, the synchronisation of the clocks between robots can be out by tens-to-hundreds of milliseconds, making correlation of data difficult and preventing the possibility of the units performing synchronised actions such as triggering cameras or intricate swarm manoeuvres. In this thesis, a complete data fusion unit is designed, implemented and tested. The unit, named BabelFuse, is able to accept sensor data from a number of low-speed communication buses (such as RS232, RS485 and CAN Bus) and also timestamp events that occur on General Purpose Input/Output (GPIO) pins referencing a submillisecond-accurate wirelessly-distributed "global" clock signal. In addition to its timestamping capabilities, it can also be used to trigger an attached camera at a predefined start time and frame rate. This functionality enables the creation of a wirelessly-synchronised distributed image acquisition system over a large geographic area; a real world application for this functionality is the creation of a platform to facilitate wirelessly-distributed 3D stereoscopic vision. A 'best-practice' design methodology is adopted within the project to ensure the final system operates according to its requirements. Initially, requirements are generated from which a high-level architecture is distilled. This architecture is then converted into a hardware specification and low-level design, which is then manufactured. The manufactured hardware is then verified to ensure it operates as designed, and firmware and Linux Operating System (OS) drivers are written to provide the features and connectivity required of the system. Finally, integration testing is performed to ensure the unit functions as per its requirements. The BabelFuse System comprises a single Grand Master unit which is responsible for maintaining the absolute value of the "global" clock. Slave nodes then determine their local clock offset from that of the Grand Master via synchronisation events which occur multiple times per second. The mechanism used for synchronising the clocks between the boards wirelessly makes use of specific hardware and a firmware protocol based on elements of the IEEE-1588 Precision Time Protocol (PTP). With the key requirement of the system being submillisecond-accurate clock synchronisation (as a basis for timestamping and camera triggering), automated testing is carried out to monitor the offsets between each Slave and the Grand Master over time. A common strobe pulse is also sent to each unit for timestamping; the correlation between the timestamps of the different units is used to validate the clock offset results.
Analysis of the automated test results shows that the BabelFuse units are almost three orders of magnitude more accurate than their requirement; clocks of the Slave and Grand Master units do not differ by more than three microseconds over a running time of six hours and the mean clock offset of Slaves to the Grand Master is less than one microsecond. The common strobe pulse used to verify the clock offset data yields a positive result with a maximum variation between units of less than two microseconds and a mean value of less than one microsecond. The camera triggering functionality is verified by connecting the trigger pulse output of each board to a four-channel digital oscilloscope and setting each unit to output a 100 Hz periodic pulse with a common start time. The resulting waveform shows a maximum variation between the rising edges of the pulses of approximately 39 µs, well below its target of 1 ms.
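The synchronisation protocol above is described as being based on elements of IEEE-1588 PTP. The sketch below shows only the textbook two-way offset/delay computation that such protocols rely on, assuming a symmetric path delay; it is not the BabelFuse firmware protocol, and the timestamps are invented.

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """
    Classic IEEE-1588-style two-way exchange:
      t1: master sends Sync            (master clock)
      t2: slave receives Sync          (slave clock)
      t3: slave sends Delay_Req        (slave clock)
      t4: master receives Delay_Req    (master clock)
    Assuming a symmetric path delay, the slave's clock offset and the one-way
    delay follow directly from the two timestamp differences.
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2.0
    delay = ((t2 - t1) + (t4 - t3)) / 2.0
    return offset, delay

# Toy numbers (seconds): slave clock runs 250 microseconds ahead, 1 ms path delay.
true_offset, path_delay = 250e-6, 1e-3
t1 = 10.000000
t2 = t1 + path_delay + true_offset
t3 = t2 + 0.002                      # slave responds 2 ms later
t4 = t3 + path_delay - true_offset
off, d = ptp_offset_and_delay(t1, t2, t3, t4)
print(f"estimated offset = {off * 1e6:.1f} us, delay = {d * 1e3:.3f} ms")
```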
8

Gallagher, Jonathan G. "Likelihood as a Method of Multi Sensor Data Fusion for Target Tracking." The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1244041862.

Full text
9

Borkar, Milind. "A distributed Monte Carlo method for initializing state vector distributions in heterogeneous smart sensor networks." Diss., Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/22680.

Full text
Abstract:
The objective of this research is to demonstrate how an underlying system's state vector distribution can be determined in a distributed heterogeneous sensor network with reduced subspace observability at the individual nodes. We show how the network, as a whole, is capable of observing the target state vector even if the individual nodes are not capable of observing it locally. The initialization algorithm presented in this work can generate the initial state vector distribution for networks with a variety of sensor types as long as the measurements at the individual nodes are known functions of the target state vector. Initialization is accomplished through a novel distributed implementation of the particle filter that involves serial particle proposal and weighting strategies, which can be accomplished without sharing raw data between individual nodes in the network. The algorithm is capable of handling missed detections and clutter as well as compensating for delays introduced by processing, communication and finite signal propagation velocities. If multiple events of interest occur, their individual states can be initialized simultaneously without requiring explicit data association across nodes. The resulting distributions can be used to initialize a variety of distributed joint tracking algorithms. In such applications, the initialization algorithm can initialize additional target tracks as targets come and go during the operation of the system with multiple targets under track.
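As a minimal sketch of the serial proposal-and-weighting idea described above (each node multiplies the particle weights by its own local likelihood and forwards only particles and weights, never raw measurements), here is a toy two-node example; the state, measurement models and noise levels are invented for illustration and are not the dissertation's.

```python
import numpy as np

rng = np.random.default_rng(0)
true_state = np.array([4.0, -2.0])            # 2-D target position to be initialized

def gaussian_likelihood(z, predicted, sigma):
    return np.exp(-0.5 * ((z - predicted) / sigma) ** 2)

# Node-local measurement models: each node observes a different known function of
# the state (node A a range from the origin, node B a bearing) -- both invented here.
nodes = [
    {"h": lambda s: np.linalg.norm(s, axis=-1), "sigma": 0.2},
    {"h": lambda s: np.arctan2(s[..., 1], s[..., 0]), "sigma": 0.05},
]

# The first node proposes particles from its prior; weights start uniform.
particles = rng.uniform(-10, 10, size=(5000, 2))
weights = np.ones(len(particles)) / len(particles)

# Serial pass: each node multiplies the weights by its local likelihood only,
# then forwards particles and weights to the next node (no raw data is shared).
for node in nodes:
    z = node["h"](true_state) + rng.normal(0, node["sigma"])
    weights *= gaussian_likelihood(z, node["h"](particles), node["sigma"])
    weights /= weights.sum()

print("fused initial state estimate:", weights @ particles)   # roughly recovers (4, -2)
```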
10

Jones, Malachi Gabriel. "Design and implementation of a multi-agent systems laboratory." Thesis, Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/29617.

Full text
Abstract:
Thesis (M. S.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2009.
Committee Chair: Jeff Shamma; Committee Member: Eric Feron; Committee Member: Magnus Egerstedt. Part of the SMARTech Electronic Thesis and Dissertation Collection.
11

Holtzhausen, David Schalk. "Development of distributed control system for SSL soccer robots." Thesis, Stellenbosch : Stellenbosch University, 2013. http://hdl.handle.net/10019.1/80221.

Full text
Abstract:
Thesis (MScEng)--Stellenbosch University, 2013.
ENGLISH ABSTRACT: This thesis describes the development of a distributed control system for SSL soccer robots. The project continues work done to develop a robotics research platform at Stellenbosch University. The wireless communication system is implemented using Player middleware. This enables high-level programming of the robot drivers and communication clients, resulting in an easily modifiable system. The system is developed to be used as either a centralised or decentralised control system. The software of the robot's motor controller unit is updated to ensure optimal movement. Slippage of the robot's wheels restricts the robot's movement capabilities. Trajectory tracking software is developed to ensure that the robot follows the desired trajectory while operating within its physical limits. The distributed control architecture reduces the robot's dependency on the wireless network and the off-field computer. The robots are given some autonomy by integrating the navigation and control on the robot itself. Kalman filters are designed to estimate the robot's translational and rotational velocities. The Kalman filters fuse vision data from an overhead vision system with inertial measurements from an on-board IMU. This ensures reliable and accurate position, orientation and velocity information on the robot. Test results show an improvement in the controller performance as a result of the proposed system.
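A minimal sketch of the kind of fusion described above, assuming a 1-D double-integrator model in which IMU acceleration drives the Kalman filter prediction and overhead-vision position drives the update; the thesis filter estimates planar translational and rotational velocities, and all noise values here are made up.

```python
import numpy as np

dt = 0.01                                # 100 Hz IMU prediction rate (assumed)
F = np.array([[1, dt], [0, 1]])          # state: [position, velocity]
B = np.array([[0.5 * dt**2], [dt]])      # acceleration input matrix
H = np.array([[1.0, 0.0]])               # vision measures position only
Q = 1e-4 * np.eye(2)                     # process noise (assumed)
R = np.array([[4e-4]])                   # vision measurement noise (assumed)

x, P = np.zeros((2, 1)), np.eye(2)

def predict(x, P, accel):
    x = F @ x + B * accel
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z):
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    return x + K @ y, (np.eye(2) - K @ H) @ P

# Simulate: constant 0.5 m/s^2 acceleration, a vision frame every 5 IMU samples.
rng = np.random.default_rng(1)
true_pos, true_vel = 0.0, 0.0
for k in range(200):
    accel = 0.5
    true_vel += accel * dt
    true_pos += true_vel * dt
    x, P = predict(x, P, accel + rng.normal(0, 0.05))
    if k % 5 == 0:
        z = np.array([[true_pos + rng.normal(0, 0.02)]])
        x, P = update(x, P, z)
print("estimated [pos, vel]:", x.ravel(), " true:", [true_pos, true_vel])
```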
12

Nachabe, Ismail Lina. "Automatic sensor discovery and management to implement effective mechanism for data fusion and data aggregation." Thesis, Evry, Institut national des télécommunications, 2015. http://www.theses.fr/2015TELE0021/document.

Full text
Abstract:
The constant evolution of technology in terms of inexpensive and embedded wireless interfaces and powerful chipsets has led to the massive usage and development of wireless sensor networks (WSNs). This potentially affects all aspects of our lives, ranging from home automation (e.g. Smart Buildings), through e-Health applications, environmental observation and broadcasting, food sustainability, energy management and Smart Grids, and military services, to many other applications. WSNs are formed of an increasing number of sensor/actuator/relay/sink devices, generally self-organized in clusters and domain-dedicated, that are provided by an increasing number of manufacturers, which leads to interoperability problems (e.g., heterogeneous interfaces and/or grounding, heterogeneous descriptions, profiles, models …). Moreover, these networks are generally implemented as vertical solutions not able to interoperate with each other. The data provided by these WSNs are also very heterogeneous because they come from sensing nodes with various abilities (e.g., different sensing ranges, formats, coding schemes …). To tackle these heterogeneity and interoperability problems, the WSNs' nodes, as well as the data sensed and/or transmitted, need to be consistently and formally represented and managed through suitable abstraction techniques and generic information models. Therefore, an explicit semantics should be assigned to every term, and an open data model dedicated to WSNs should be introduced. SensorML, proposed by OGC in 2010, has been considered an essential step toward data modeling specification in WSNs. Nevertheless, it is based on an XML schema that only permits a basic hierarchical description of the data, hence neglecting any semantic representation. Furthermore, most of the research that has used semantic techniques for developing data models has focused merely on modeling sensors and actuators (this is, e.g., the case of SSN-XG). Other works dealt with data provided by WSNs, but without modelling the data type, quality and states (e.g. OntoSensor). That is why the main aim of this thesis is to specify and formalize an open data model for WSNs in order to mask the aforementioned heterogeneity and interoperability problems between different systems and applications. This model will also facilitate data fusion and aggregation through an open management architecture, for example a service-oriented one. This thesis can thus be split into two main objectives: 1) To formalize a semantic open data model for generically describing a WSN, sensors/actuators and their corresponding data. This model should be light enough to respect the low-power and thus low-energy limitations of such networks, generic enough to enable the description of the wide variety of WSNs, and extensible in a way that it can be modified and adapted based on the application. 2) To propose an upper service model and standardized enablers for enhancing sensor/actuator discovery, data fusion, data aggregation and WSN control and management. These service-layer enablers will be used for improving data collection in a large-scale network and will facilitate the implementation of more efficient routing protocols, as well as decision-making mechanisms in WSNs.
13

Ko, Ming Hsiao. "Using dynamic time warping for multi-sensor fusion." Thesis, Curtin University, 2009. http://hdl.handle.net/20.500.11937/384.

Full text
Abstract:
Fusion is a fundamental human process that occurs in some form at all levels of sense organs such as visual and sound information received from eyes and ears respectively, to the highest levels of decision making such as our brain fuses visual and sound information to make decisions. Multi-sensor data fusion is concerned with gaining information from multiple sensors by fusing across raw data, features or decisions. The traditional frameworks for multi-sensor data fusion only concern fusion at specific points in time. However, many real world situations change over time. When the multi-sensor system is used for situation awareness, it is useful not only to know the state or event of the situation at a point in time, but also more importantly, to understand the causalities of those states or events changing over time. Hence, we proposed a multi-agent framework for temporal fusion, which emphasises the time dimension of the fusion process, that is, fusion of the multi-sensor data or events derived over a period of time. The proposed multi-agent framework has three major layers: hardware, agents, and users. There are three different fusion architectures: centralized, hierarchical, and distributed, for organising the group of agents. The temporal fusion process of the proposed framework is elaborated by using the information graph. Finally, the core of the proposed temporal fusion framework – Dynamic Time Warping (DTW) temporal fusion agent is described in detail. Fusing multisensory data over a period of time is a challenging task, since the data to be fused consists of complex sequences that are multi–dimensional, multimodal, interacting, and time–varying in nature. Additionally, performing temporal fusion efficiently in real–time is another challenge due to the large amount of data to be fused. To address these issues, we proposed the DTW temporal fusion agent that includes four major modules: data pre-processing, DTW recogniser, class templates, and decision making. The DTW recogniser is extended in various ways to deal with the variability of multimodal sequences acquired from multiple heterogeneous sensors, the problems of unknown start and end points, multimodal sequences of the same class that hence have different lengths locally and/or globally, and the challenges of online temporal fusion. We evaluate the performance of the proposed DTW temporal fusion agent on two real world datasets: 1) accelerometer data acquired from performing two hand gestures, and 2) a benchmark dataset acquired from carrying a mobile device and performing pre-defined user scenarios. Performance results of the DTW based system are compared with those of a Hidden Markov Model (HMM) based system. The experimental results from both datasets demonstrate that the proposed DTW temporal fusion agent outperforms HMM based systems, and has the capability to perform online temporal fusion efficiently and accurately in real–time.
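The thesis extends the DTW recogniser in several directions (unknown start/end points, multimodal sequences, online operation), none of which are reproduced here. As background only, the sketch below shows the plain dynamic-programming DTW distance and the nearest-template classification it builds on; the gesture templates are invented.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-programming DTW distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]

def classify(query, templates):
    """Nearest-template classification, as in a template-based DTW recogniser."""
    return min(templates, key=lambda label: dtw_distance(query, templates[label]))

# Toy class templates (e.g. two hand-gesture acceleration profiles, invented here).
templates = {
    "gesture_A": np.sin(np.linspace(0, np.pi, 40)),
    "gesture_B": np.linspace(0, 1, 40),
}
query = np.sin(np.linspace(0, np.pi, 55)) + np.random.normal(0, 0.05, 55)  # time-warped A
print("classified as:", classify(query, templates))
```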
14

Ko, Ming Hsiao. "Using dynamic time warping for multi-sensor fusion." Curtin University of Technology, Department of Computing, 2009. http://espace.library.curtin.edu.au:80/R/?func=dbin-jump-full&object_id=129032.

Full text
Abstract:
Fusion is a fundamental human process that occurs in some form at all levels of sense organs such as visual and sound information received from eyes and ears respectively, to the highest levels of decision making such as our brain fuses visual and sound information to make decisions. Multi-sensor data fusion is concerned with gaining information from multiple sensors by fusing across raw data, features or decisions. The traditional frameworks for multi-sensor data fusion only concern fusion at specific points in time. However, many real world situations change over time. When the multi-sensor system is used for situation awareness, it is useful not only to know the state or event of the situation at a point in time, but also more importantly, to understand the causalities of those states or events changing over time.
Hence, we proposed a multi-agent framework for temporal fusion, which emphasises the time dimension of the fusion process, that is, fusion of the multi-sensor data or events derived over a period of time. The proposed multi-agent framework has three major layers: hardware, agents, and users. There are three different fusion architectures: centralized, hierarchical, and distributed, for organising the group of agents. The temporal fusion process of the proposed framework is elaborated by using the information graph. Finally, the core of the proposed temporal fusion framework – Dynamic Time Warping (DTW) temporal fusion agent is described in detail.
Fusing multisensory data over a period of time is a challenging task, since the data to be fused consists of complex sequences that are multi–dimensional, multimodal, interacting, and time–varying in nature. Additionally, performing temporal fusion efficiently in real–time is another challenge due to the large amount of data to be fused. To address these issues, we proposed the DTW temporal fusion agent that includes four major modules: data pre-processing, DTW recogniser, class templates, and decision making. The DTW recogniser is extended in various ways to deal with the variability of multimodal sequences acquired from multiple heterogeneous sensors, the problems of unknown start and end points, multimodal sequences of the same class that hence have different lengths locally and/or globally, and the challenges of online temporal fusion.
We evaluate the performance of the proposed DTW temporal fusion agent on two real world datasets: 1) accelerometer data acquired from performing two hand gestures, and 2) a benchmark dataset acquired from carrying a mobile device and performing pre-defined user scenarios. Performance results of the DTW based system are compared with those of a Hidden Markov Model (HMM) based system. The experimental results from both datasets demonstrate that the proposed DTW temporal fusion agent outperforms HMM based systems, and has the capability to perform online temporal fusion efficiently and accurately in real–time.
15

KALLAS, KASSEM. "A Game-Theoretic Approach for Adversarial Information Fusion in Distributed Sensor Networks." Doctoral thesis, Università di Siena, 2017. http://hdl.handle.net/11365/1005735.

Full text
Abstract:
Every day we share our personal information through digital systems which are constantly exposed to threats. For this reason, security-oriented disciplines of signal processing have received increasing attention in the last decades: multimedia forensics, digital watermarking, biometrics, network monitoring, steganography and steganalysis are just a few examples. Even though each of these fields has its own peculiarities, they all have to deal with a common problem: the presence of one or more adversaries aiming at making the system fail. Adversarial Signal Processing lays the basis of a general theory that takes into account the impact that the presence of an adversary has on the design of effective signal processing tools. By focusing on the application side of Adversarial Signal Processing, namely adversarial information fusion in distributed sensor networks, and adopting a game-theoretic approach, this thesis contributes to the above mission by addressing four issues. First, we address decision fusion in distributed sensor networks by developing a novel soft isolation defense scheme that protects the network from adversaries, specifically, Byzantines. Second, we develop an optimum decision fusion strategy in the presence of Byzantines. In the next step, we propose a technique to reduce the complexity of the optimum fusion by relying on a novel nearly-optimum message passing algorithm based on factor graphs. Finally, we introduce a defense mechanism to protect decentralized networks running consensus algorithm against data falsification attacks.
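The thesis derives game-theoretic, optimum and message-passing fusion rules; none of those are reproduced here. Purely to illustrate the underlying Byzantine decision-fusion problem, the toy sketch below uses a reputation-weighted majority vote in which reports that repeatedly disagree with the fused decision lose weight (a loose analogue of soft isolation); the sensor qualities, attacker fraction and attack model are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_nodes, n_rounds = 20, 200
byzantine = np.zeros(n_nodes, dtype=bool)
byzantine[:6] = True                           # 6 of 20 nodes are malicious (assumed)
p_detect, p_false = 0.9, 0.1                   # honest local sensor quality (assumed)

reputation = np.ones(n_nodes)                  # soft weights, updated every round
correct = 0
for _ in range(n_rounds):
    truth = int(rng.integers(0, 2))            # binary hypothesis: 0 = H0, 1 = H1
    honest = (rng.random(n_nodes) < (p_detect if truth else p_false)).astype(int)
    reports = np.where(byzantine, 1 - truth, honest)   # Byzantines always flip
    # Reputation-weighted majority vote at the fusion center.
    decision = int(reputation @ reports > 0.5 * reputation.sum())
    correct += int(decision == truth)
    # Soft isolation: decay the weight of nodes that disagreed with the fused decision.
    reputation *= np.where(reports == decision, 1.0, 0.9)

print(f"accuracy over {n_rounds} rounds: {correct / n_rounds:.2f}")
print("mean weight  honest: %.2f   byzantine: %.2f"
      % (reputation[~byzantine].mean(), reputation[byzantine].mean()))
```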
16

Agarwalla, Bikash Kumar. "Resource management for data streaming applications." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/34836.

Full text
Abstract:
This dissertation investigates novel middleware mechanisms for building streaming applications. Developing streaming applications is a challenging task because (i) they are continuous in nature; (ii) they require fusion of data coming from multiple sources to derive higher level information; (iii) they require efficient transport of data from/to distributed sources and sinks; (iv) they need access to heterogeneous resources spanning sensor networks and high performance computing; and (v) they are time critical in nature. My thesis is that an intuitive programming abstraction will make it easier to build dynamic, distributed, and ubiquitous data streaming applications. Moreover, such an abstraction will enable an efficient allocation of shared and heterogeneous computational resources thereby making it easier for domain experts to build these applications. In support of the thesis, I present a novel programming abstraction, called DFuse, that makes it easier to develop these applications. A domain expert only needs to specify the input and output connections to fusion channels, and the fusion functions. The subsystems developed in this dissertation take care of instantiating the application, allocating resources for the application (via the scheduling heuristic developed in this dissertation) and dynamically managing the resources (via the dynamic scheduling algorithm presented in this dissertation). Through extensive performance evaluation, I demonstrate that the resources are allocated efficiently to optimize the throughput and latency constraints of an application.
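The abstract describes the DFuse abstraction only at a high level: a fusion channel with declared input connections, an output and a fusion function. The class below is a guess at what such an abstraction might look like, not the actual DFuse API; all names and semantics are illustrative.

```python
from typing import Callable, Dict, List

class FusionChannel:
    """A named channel that applies a fusion function to the latest value of each input."""
    def __init__(self, name: str, inputs: List[str],
                 fuse: Callable[[Dict[str, float]], float]):
        self.name, self.inputs, self.fuse = name, inputs, fuse
        self.latest: Dict[str, float] = {}

    def push(self, source: str, value: float) -> None:
        # Accept data only from declared input connections.
        if source in self.inputs:
            self.latest[source] = value

    def output(self) -> float:
        # Fire only once every declared input has produced data.
        if set(self.latest) == set(self.inputs):
            return self.fuse(self.latest)
        raise RuntimeError("channel not ready")

# A channel averaging two temperature sensors (names are hypothetical).
avg_temp = FusionChannel("avg_temp", ["sensor_a", "sensor_b"],
                         fuse=lambda d: sum(d.values()) / len(d))
avg_temp.push("sensor_a", 21.5)
avg_temp.push("sensor_b", 22.1)
print(avg_temp.output())   # 21.8
```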
17

Otto, Carola [Verfasser]. "Fusion of Data from Heterogeneous Sensors with Distributed Fields of View and Situation Evaluation for Advanced Driver Assistance Systems / Carola Otto." Karlsruhe : KIT Scientific Publishing, 2013. http://www.ksp.kit.edu.

Full text
18

Kenyeres, Martin. "Analýza a zefektivnění distribuovaných systémů." Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2018. http://www.nusl.cz/ntk/nusl-390292.

Full text
Abstract:
Significant progress in the evolution of computer systems and their interconnection over the past 70 years has allowed the frequently used centralized architectures to be replaced with highly distributed ones, formed by independent entities fulfilling specific functionalities as one user-intransparent unit. This has resulted in an intense scientific interest in distributed algorithms and their frequent implementation into real systems. In particular, distributed algorithms for multi-sensor data fusion, ensuring an enhanced QoS of executed applications, find wide usage. This doctoral thesis addresses the optimization and analysis of distributed systems, namely distributed consensus-based algorithms for aggregate function estimation (primarily, my attention is focused on mean estimation). The first section is concerned with the theoretical background of distributed systems, their evolution, their architectures, and a comparison with centralized systems (i.e. their advantages/disadvantages). The second chapter deals with multi-sensor data fusion, its applications, the classification of distributed estimation techniques, their mathematical modeling, and frequently quoted algorithms for distributed averaging (e.g. the Push-Sum protocol, Metropolis-Hastings weights, Best Constant weights, etc.). The practical part is focused on mechanisms for optimizing distributed systems, the proposal of novel algorithms and complements for distributed systems, their analysis, and comparative studies in terms of, for example, the convergence rate, the estimation precision, the robustness, and the applicability to real systems.
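As a minimal sketch of the distributed averaging algorithms mentioned in the abstract, the snippet below iterates average consensus with Metropolis-Hastings weights on a small fixed graph; the topology and initial values are toy choices, and none of the thesis's proposed modifications are included.

```python
import numpy as np

# Undirected toy topology: node -> set of neighbours.
neighbours = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1}, 3: {1, 4}, 4: {3}}
x = np.array([10.0, 20.0, 30.0, 40.0, 50.0])   # initial local measurements
degree = {i: len(n) for i, n in neighbours.items()}

def metropolis_hastings_step(x):
    """One synchronous consensus iteration with Metropolis-Hastings weights."""
    x_new = x.copy()
    for i, nbrs in neighbours.items():
        for j in nbrs:
            w_ij = 1.0 / (1 + max(degree[i], degree[j]))   # MH weight for edge (i, j)
            x_new[i] += w_ij * (x[j] - x[i])
    return x_new

for _ in range(100):
    x = metropolis_hastings_step(x)

print("consensus values:", np.round(x, 3), " true mean:", 30.0)
```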
19

Estrada, Camilo Ernesto Restrepo. "Use of social media data in flood monitoring." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/18/18138/tde-19032019-143847/.

Full text
Abstract:
Floods are one of the most devastating types of worldwide disasters in terms of human, economic, and social losses. If authoritative data is scarce, or unavailable for some periods, other sources of information are required to improve streamflow estimation and early flood warnings. Georeferenced social media messages are increasingly being regarded as an alternative source of information for coping with flood risks. However, existing studies have mostly concentrated on the links between geo-social media activity and flooded areas. This thesis presents a novel methodology that helps to close the research gap regarding the use of social networks as a proxy for precipitation-runoff and flood forecast estimates. To address this, it is proposed to use a transformation function that creates a proxy variable for rainfall by analysing messages from geo-social media and precipitation measurements from authoritative sources, which are then incorporated into a hydrological model for the flow estimation. Then the proxy and authoritative rainfall data are merged to be used in a data assimilation scheme using the Ensemble Kalman Filter (EnKF). It is found that the combined use of authoritative rainfall values with the social media proxy variable as input to the Probability Distributed Model (PDM) improves flow simulations for flood monitoring. In addition, it is found that when these models are used within a fusion-assimilation scheme, the results improve even more, becoming a tool that can help in the monitoring of "ungauged" or "poorly gauged" catchments. The main contribution of this thesis is the creation of a completely original source of rain monitoring, which had not been explored in the literature in a quantitative way. It also shows how the joint use of this source and data assimilation methodologies aids the detection of flood events.
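A minimal sketch of a stochastic Ensemble Kalman Filter analysis step for a scalar streamflow-like state, to illustrate the assimilation component mentioned above; the coupling to the PDM rainfall-runoff model and the social-media rainfall proxy are not reproduced, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

def enkf_update(ensemble, observation, obs_std):
    """Stochastic EnKF analysis step for a scalar state (perturbed observations)."""
    n = len(ensemble)
    perturbed_obs = observation + rng.normal(0, obs_std, n)
    p_xx = ensemble.var(ddof=1)                  # forecast error variance
    kalman_gain = p_xx / (p_xx + obs_std ** 2)
    return ensemble + kalman_gain * (perturbed_obs - ensemble)

# Forecast ensemble of streamflow (m^3/s), e.g. from a rainfall-runoff model driven
# by proxy + authoritative rainfall (values invented for illustration).
forecast = rng.normal(120.0, 15.0, size=50)
gauged_flow = 135.0                              # authoritative streamflow observation
analysis = enkf_update(forecast, gauged_flow, obs_std=5.0)
print(f"forecast mean {forecast.mean():.1f} -> analysis mean {analysis.mean():.1f}")
```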
20

Matta, Natalie. "Vers une gestion décentralisée des données des réseaux de capteurs dans le contexte des smart grids." Thesis, Troyes, 2014. http://www.theses.fr/2014TROY0010/document.

Full text
Abstract:
This thesis focuses on the decentralized management of data collected by wireless sensor networks which are deployed in a smart grid, i.e. the evolved new generation electricity network. It proposes a decentralized architecture based on multi-agent systems for both data and energy management in the smart grid. In particular, our works deal with data management of sensor networks which are deployed in the distribution electric subsystem of a smart grid. They aim at answering two key challenges: (1) detection and identification of failure and disturbances requiring swift reporting and appropriate reactions; (2) efficient management of the growing volume of data caused by the proliferation of sensors and other sensing entities such as smart meters. The management of this data can call upon several methods, including the aggregation of data packets on which we focus in this thesis. To this end, we propose to aggregate (PriBaCC) and/or to correlate (CoDA) the contents of these data packets in a decentralized manner. Data processing will thus be done faster, consequently leading to rapid and efficient decision-making concerning energy management. The validation of our contributions by means of simulation has shown that they meet the identified challenges. It has also put forward their enhancements with respect to other existing approaches, particularly in terms of reducing data volume as well as transmission delay of high priority data
21

Ribas, Afonso Degmar. "Classificação distribuída de anuros usando rede de sensores sem fio." Universidade Federal do Amazonas, 2013. http://tede.ufam.edu.br/handle/tede/2922.

Full text
Abstract:
Wireless Sensor Networks (WSNs) can be used in environmental conservation applications and studies due to their wireless communication, sensing, and monitoring capabilities. In the context of ecology, amphibians are used as bioindicators of ecosystemic changes in a region and can provide early indication of environmental problems. Thus, biologists monitor the anuran (frogs and toads) population in order to establish environmental conservation strategies. Anurans were chosen because the sounds they emit allow these species to be identified using microphones and signal processing. In this work we propose and evaluate some distributed algorithms for anuran classification based on their calls (vocalizations) in their habitat using WSNs. This method is interesting because it is not intrusive and it allows remote monitoring. Our solution builds clusters of nodes whose collected acoustic measurements are correlated. The data from nodes of the same group are combined to generate local classification decisions. These decisions are then combined to generate a global decision. We use the k-means algorithm, which groups instances by similarity, for clustering nodes with correlated measurements. Experiments show that, in comparison with other algorithms from the literature, the error rate of our solution was up to 26 pp (percentage points) lower.
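As an illustration of the pipeline sketched above (cluster nodes with correlated measurements using k-means, then fuse local decisions per cluster and globally by majority vote), here is a toy example; the acoustic features, local classifier outputs and species labels are invented and do not come from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(4)

def kmeans(points, k, iters=50):
    """Plain k-means on node feature vectors; returns a cluster label per node."""
    centers = points[rng.choice(len(points), k, replace=False)]
    labels = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(points[:, None] - centers[None], axis=2), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = points[labels == c].mean(axis=0)
    return labels

def majority(votes):
    values, counts = np.unique(votes, return_counts=True)
    return values[np.argmax(counts)]

# Toy data: 12 nodes, each with a 2-D acoustic feature and a local species decision.
features = rng.normal(0, 1, (12, 2)) + np.repeat([[0, 0], [5, 5], [0, 5]], 4, axis=0)
local_decisions = np.array(["sp_A"] * 4 + ["sp_B"] * 4 + ["sp_B"] * 4)

clusters = kmeans(features, k=3)
cluster_decisions = [majority(local_decisions[clusters == c]) for c in np.unique(clusters)]
print("per-cluster decisions:", cluster_decisions,
      "-> global decision:", majority(np.array(cluster_decisions)))
```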
22

Euawatana, Teerapong. "Implementation business-to-business electronic commercial website using ColdFusion 4.5." CSUSB ScholarWorks, 2001. https://scholarworks.lib.csusb.edu/etd-project/1917.

Full text
Abstract:
This project was created using ColdFusion 4.5 to build and implement a commercial web site to present a real picture of electronic commerce. This project is intended to provide enough information for other students who are looking for a guideline for further study and to improve their skills in business from an information management aspect.
23

Tabella, Gianluca. "Subsea Oil Spill Risk Management based on Sensor Networks." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019.

Find full text
Abstract:
This thesis consists of the evaluation of sensor-based risk management against oil spills using an underwater distributed sensor network. The work starts by highlighting the importance of having a well-performing leak detection system from an environmental, safety and economic point of view. The case study is the Goliat FPSO in the Barents Sea, which has to meet requirements dictated by Norwegian authorities to prevent oil spills. The modeled network is made of passive acoustic sensors monitoring the subsea manifolds. These sensors send their local 1-bit decisions to a Fusion Center which takes a global decision on whether a leak is occurring. This work evaluates how the choice of adapted Fusion Rules (Counting Rule and Weighted Fusion Rule) can affect the performance of the leak detection system in its current geometry. It is also discussed how different thresholds, selected for a specific FR or sensor test, can change the system performance. The detection methods are based on statistical signal processing adapted to fit this application within the Oil & Gas field. The work also proposes some new leak localization methods developed so that they can be coupled with the proposed leak detection methods, giving a coherent set of operations that the sensors and the FC must perform. The performance of the detection techniques is assessed by balancing the need for high values of True Positive Rate and Precision and low values of False Positive Rate, using indexes based both on the ROC curve (such as Youden's Index) and on the PR curve (the F-scores). The performance of the localization techniques, in turn, is assessed on their ability to localize the spill in the shortest time; if this is not possible, parameters such as the difference between the estimated and the real leak position are considered. Finally, some tests are carried out applying the different sets of proposed methods.
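A toy version of the counting rule with a simple Youden-index threshold choice, to illustrate the kind of fusion rule evaluated above; the per-sensor detection and false-alarm probabilities and the network size are assumptions, not the Goliat configuration, and the Weighted Fusion Rule is not shown.

```python
import numpy as np

rng = np.random.default_rng(5)
n_sensors, n_trials = 8, 20000
pd_local, pfa_local = 0.6, 0.05        # assumed per-sensor detection / false-alarm rates

# Simulate 1-bit local decisions under H1 (leak present) and H0 (no leak).
bits_h1 = rng.random((n_trials, n_sensors)) < pd_local
bits_h0 = rng.random((n_trials, n_sensors)) < pfa_local
counts_h1, counts_h0 = bits_h1.sum(axis=1), bits_h0.sum(axis=1)

# Counting rule: the Fusion Center declares a leak when at least K sensors say so.
best = None
for K in range(1, n_sensors + 1):
    tpr = np.mean(counts_h1 >= K)              # true positive rate
    fpr = np.mean(counts_h0 >= K)              # false positive rate
    youden = tpr - fpr                         # Youden's index J = TPR - FPR
    print(f"K={K}: TPR={tpr:.3f}  FPR={fpr:.3f}  Youden={youden:.3f}")
    if best is None or youden > best[1]:
        best = (K, youden)
print("threshold K maximising the Youden index:", best[0])
```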
24

Osman, Ousama. "Méthodes de diagnostic en ligne, embarqué et distribué dans les réseaux filaires complexes." Thesis, Université Clermont Auvergne‎ (2017-2020), 2020. http://www.theses.fr/2020CLFAC038.

Full text
Abstract:
The research conducted in this thesis focuses on the diagnosis of complex wired networks using distributed reflectometry. It aims to develop new distributed diagnostic techniques for complex networks that allow data fusion as well as communication between reflectometers to detect, locate and characterize electrical faults (soft and hard faults). This collaboration between reflectometers resolves the ambiguity in fault location and improves the quality of the diagnosis. The first contribution is a graph-theory-based method for combining data from the distributed reflectometers, which facilitates fault location. The amplitude of the reflected signal is then used to identify the type of fault and estimate its impedance; this estimation is based on regenerating the signal by compensating for the degradation the diagnosis signal suffers as it propagates through the network. The second contribution enables data fusion between distributed reflectometers in complex networks affected by multiple faults. To achieve this, two methods were proposed and developed: the first based on genetic algorithms (GA) and the second on neural networks (NN). Combined with distributed reflectometry, these tools allow automatic detection, location and characterization of several faults in wired networks of different types and topologies. The third contribution proposes using an information-carrying diagnosis signal to integrate communication between distributed reflectometers. It uses the phases of the MCTDR multi-carrier signal to transmit data. This communication ensures the exchange of useful information (such as fault location and amplitude) between reflectometers on the state of the cables, thus enabling data fusion and unambiguous fault location. Interference problems that arise when the reflectometers inject their test signals into the network simultaneously are also addressed. These studies illustrate the efficiency and applicability of the proposed methods and demonstrate their potential to improve the performance of current wired diagnosis systems, addressing the fault detection and location problems that manufacturers and users face today and improving the operational safety of electrical systems.
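The graph-based combination of distributed reflectometer data can be illustrated with a small sketch: each reflectometer reports a distance-to-fault, and the candidate location that best matches all reported distances on the cable-network graph is retained. The toy Y-shaped network, the `length` edge attribute and the restriction of candidates to graph nodes are assumptions made for brevity, not the thesis's actual formulation.

```python
import networkx as nx

def locate_fault(graph, measurements):
    """Return the node whose graph distances best match every reflectometer's
    measured distance-to-fault (sum of absolute mismatches)."""
    best_node, best_cost = None, float("inf")
    for node in graph.nodes:
        cost = sum(abs(nx.shortest_path_length(graph, src, node, weight="length") - d)
                   for src, d in measurements.items())
        if cost < best_cost:
            best_node, best_cost = node, cost
    return best_node, best_cost

# Toy Y-shaped harness: junction J, reflectometers R1 and R2, load end L.
g = nx.Graph()
g.add_edge("R1", "J", length=10.0)
g.add_edge("R2", "J", length=15.0)
g.add_edge("J", "L", length=20.0)
# R1 sees the fault 30 m away and R2 sees it 35 m away: only L is consistent with both.
print(locate_fault(g, {"R1": 30.0, "R2": 35.0}))    # -> ('L', 0.0)
```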
APA, Harvard, Vancouver, ISO, and other styles
25

Zhou, Shuting. "Navigation of a quad-rotor to access the interior of a building." Thesis, Compiègne, 2015. http://www.theses.fr/2015COMP2237.

Full text
Abstract:
Ce travail de recherche est dédié à l’élaboration d’une stratégie de navigation autonome qui comprend la génération d’une trajectoire optimale en évitant des obstacles, la détection de l’objet d’intérêt spécifique (i.e. une fenêtre) et puis l’exécution de la manoeuvre postérieure à approcher la fenêtre et enfin accéder à l’intérieur du bâtiment. Le véhicule est navigué par un système de vision et une combinaison de capteurs inertiels et d’altitude, ce qui réalise une localisation relative du quadri-rotor par rapport à son environment. Une méthode de planification de trajectoire basée sur Model Predictive Control (MPC), qui utilise les informations fournies par le GPS et le capteur visuel, a été conçue pour générer une trajectoire optimale en temps réel avec des capacités d’évitement de collision, qui commence à partir d’un point initial donné par l’utilisateur et guide le véhicule pour atteindre le point final à l’extérieur du bâtiment de la cible. Dans le but de détecter et de localiser l’objet d’intérêt, deux stratégies de détection d’objet basées sur la vision sont proposées et sont respectivement appliquées dans le système de stéréo vision et le système de vision en utilisant la Kinect. Après l’estimation du modèle de la fenêtre cible, un cadre d’estimation de mouvement est conçu pour estimer ego-mouvement du véhicule à partir des images fournies par le capteur visuel. Il y a eu deux versions des cadres d’estimation de mouvement pour les deux systèmes de vision. Une plate-forme expérimentale de quad-rotor est développée. Pour l’estimation de la dynamique de translation du véhicule, un filtre de Kalman est mis en œuvre pour combiner les capteurs d’imagerie, inertiels et d’altitude. Un système de détection et de contrôle hiérarchique est conçu pour effectuer la navigation et le contrôle de l’hélicoptère quadri-rotor, ce qui permet au véhicule d’estimer l’état sans marques artificielles ou d’autres systèmes de positionnement externes
This research work is dedicated to the development of an autonomous navigation strategy that includes generating an optimal trajectory with obstacle-avoidance capabilities, detecting a specific object of interest (i.e. a window) and then conducting the subsequent maneuver to approach the window and finally enter the building. The vehicle is navigated by a vision system and a combination of inertial and altitude sensors, which achieve a relative localization of the quad-rotor with respect to its surrounding environment. An MPC-based path planning method using the information provided by the GPS and the visual sensor has been developed to generate an optimal real-time trajectory with collision-avoidance capabilities, which starts from an initial point given by the user and guides the vehicle to the final point outside the target building. With the aim of detecting and locating the object of interest, two vision-based object detection strategies are proposed, applied respectively in the stereo vision system and in the vision system using the Kinect. After estimating the target window model, a motion estimation framework is developed to estimate the vehicle's ego-motion from the images provided by the visual sensor; a version of this framework is developed for each of the two vision systems. A quad-rotor experimental platform is developed. For estimating the translational dynamics of the vehicle, a Kalman filter is implemented to combine the imaging, inertial and altitude sensors. A hierarchical sensing and control system is designed to perform the navigation and control of the quad-rotor helicopter, which allows the vehicle to estimate its state without artificial markers or other external positioning systems.
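The sensor-combination step can be pictured with a generic linear Kalman filter on one translational axis; the constant-velocity model, the sampling period and the noise covariances below are placeholder assumptions, not the tuning used in the thesis.

```python
import numpy as np

def kf_predict(x, P, F, Q):
    return F @ x, F @ P @ F.T + Q

def kf_update(x, P, z, H, R):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ (z - H @ x), (np.eye(len(x)) - K @ H) @ P

# Vertical channel only: state = [altitude, vertical velocity].
dt = 0.02
F = np.array([[1.0, dt], [0.0, 1.0]])
Q = 1e-3 * np.eye(2)
H_alt = np.array([[1.0, 0.0]])          # an altitude sensor observes position only
R_alt = np.array([[0.05]])

x, P = np.zeros(2), np.eye(2)
for z in [1.02, 1.05, 1.04, 1.08]:      # synthetic altitude readings
    x, P = kf_predict(x, P, F, Q)
    x, P = kf_update(x, P, np.array([z]), H_alt, R_alt)
print(x)
```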
APA, Harvard, Vancouver, ISO, and other styles
26

Peñarrocha, Alós Ignacio. "Sensores virtuales para procesos con medidas escasas y retardos temporales." Doctoral thesis, Universitat Politècnica de València, 2008. http://hdl.handle.net/10251/3882.

Full text
Abstract:
This work addresses the problem of controlling a process whose output is sampled irregularly. It proposes using a predictor that estimates the process outputs at regular time instants, together with a conventional controller that computes the control action from the predictor's estimates (a technique known as inferential control). The prediction consists of estimating the output variables to be controlled from the measurements taken by several sensors, using a mathematical model of the process. The Kalman filter performs the prediction optimally when the disturbances are zero-mean Gaussian, but it has the drawback of a high computational cost when several sensors with time-varying delays are used. This work proposes an alternative prediction strategy with a low computational cost, whose design is based on knowledge of the availability of measurements, of the delays (in the process, the measurement system or the data transmission system) and of the nature of the disturbances. The proposed predictors minimize the prediction error in the presence of random sampling with varying delays, disturbances, measurement noise, modeling error, delays in the control action and uncertainty in the measurement times. The different design strategies proposed are classified according to the type of information available about the disturbances and the required computational cost. Designs are presented for single-variable, multivariable, linear and nonlinear systems. A more efficient way of including scarce, delayed measurements in the Kalman filter is also developed, with the aim of reducing the computational cost of the prediction. This work shows that inferential control systems using the proposed predictors satisfy the separation principle
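A minimal sketch of the prediction idea with scarce, irregular samples: the time update runs at every regular period, while the measurement update is applied only when an output sample actually arrives. The scalar first-order model and noise values are illustrative assumptions, and measurement delays are omitted for brevity.

```python
import numpy as np

def predictor_step(x, P, F, Q, z=None, H=None, R=None):
    """Time update every period; measurement update only when a sample is available."""
    x, P = F @ x, F @ P @ F.T + Q
    if z is not None:
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
    return x, P

F, Q = np.array([[0.95]]), np.array([[0.01]])
H, R = np.array([[1.0]]), np.array([[0.1]])
x, P = np.array([0.0]), np.array([[1.0]])
samples = [1.0, None, None, 0.8, None, 0.7]       # None = no output sample this period
for z in samples:
    x, P = predictor_step(x, P, F, Q,
                          z=None if z is None else np.array([z]), H=H, R=R)
print(x)
```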
Peñarrocha Alós, I. (2006). Sensores virtuales para procesos con medidas escasas y retardos temporales [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/3882
APA, Harvard, Vancouver, ISO, and other styles
27

Trivedi, Neeta. "Robust, Energy‐efficient Distributed Inference in Wireless Sensor Networks With Applications to Multitarget Tracking." Thesis, 2014. https://etd.iisc.ac.in/handle/2005/4569.

Full text
Abstract:
The Joint Directors of Laboratories (JDL) data fusion model is a functional and comprehensive model of the data fusion and inference process and serves as a common frame of reference for fusion technologies and algorithms. However, in distributed data fusion (DDF), since a node fuses the data locally available to it with the data arriving at it from the network, the framework by which the inputs arrive at a node must be part of the DDF problem, all the more so when the network becomes an overwhelming part of the inference process, as in wireless sensor networks (WSN). The current state of the art is the result of parallel efforts in the constituent technology areas of the network or architecture domain and the application or fusion domain; each of these disciplines is an evolving area requiring concentrated effort, but the most serious gap lies in the linkages within and across the two domains. The goal of this thesis is to investigate how architectural issues can be crucial to maintaining provably correct solutions for distributed inference in WSN, to examine the networking-structure requirements for multitarget tracking in WSN as the boundaries are pushed in terms of target signature separation, sensor location uncertainty, reporting structure changes and energy scarcity, and to propose robust and energy-efficient solutions for multitarget tracking in WSN. The findings point to an architecture that is achievable with today's technology, and the thesis shows the feasibility of using this architecture for efficient, integrated execution of the architecture-domain and fusion-domain functionality. Specific contributions in the architecture domain include an optimal lower bound on the energy required for broadcast to a set of nodes, a QoS- and resource-aware broadcast algorithm, and a fusion-aware convergecast algorithm. The contributions in the fusion domain include the following. An extension to the JDL model is proposed that accounts for DDF. Probabilistic graphical models are introduced with the motivation of balancing computation load and communication overhead among sensor nodes; under the assumption that evidence originates at sensor nodes and a large part of the inference must be drawn locally, the model allows inference responsibilities to be mapped to sensor nodes in a distributed manner. An algorithm that formulates the maximum a posteriori state estimate from a general multimodal posterior as a constrained nonlinear optimization problem is proposed, together with an error estimate indicating actionable confidence in that state. A DBN-based framework, iMerge, is proposed that models the overlap of signal energies from closely spaced targets to add robustness to data association. iConsensus, a lightweight approach to network management and distributed tracking, and iMultitile, a method to trade off the cost of managing and propagating particles against desired accuracy limits, are also proposed. iSLAT, a distributed, lightweight smoothing algorithm for simultaneous localization and multitarget tracking, is discussed; iSLAT uses the well-known RANSAC algorithm to approximate the joint posterior densities.
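The MAP-from-a-multimodal-posterior idea can be sketched as a bounded nonlinear optimization restarted from several initial guesses; the two-component Gaussian mixture and the bounds below are toy assumptions standing in for a real sensor-derived posterior.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_posterior(x):
    """Toy bimodal posterior over a scalar target state (mixture of two Gaussians)."""
    p = 0.6 * np.exp(-0.5 * ((x[0] - 1.0) / 0.3) ** 2) \
        + 0.4 * np.exp(-0.5 * ((x[0] + 2.0) / 0.5) ** 2)
    return -np.log(p + 1e-300)

# Constrained (bounded) optimization from several starts to cope with multimodality.
starts = [np.array([-3.0]), np.array([0.0]), np.array([3.0])]
best = min((minimize(neg_log_posterior, s, bounds=[(-5.0, 5.0)]) for s in starts),
           key=lambda r: r.fun)
print(best.x)      # close to 1.0, the dominant mode
```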
APA, Harvard, Vancouver, ISO, and other styles
28

Khaleghi, Bahador. "Distributed Random Set Theoretic Soft/Hard Data Fusion." Thesis, 2012. http://hdl.handle.net/10012/6842.

Full text
Abstract:
Research on multisensor data fusion aims at providing the enabling technology to combine information from several sources in order to form a unified picture. The literature work on fusion of conventional data provided by non-human (hard) sensors is vast and well-established. In comparison to conventional fusion systems where input data are generated by calibrated electronic sensor systems with well-defined characteristics, research on soft data fusion considers combining human-based data expressed preferably in unconstrained natural language form. Fusion of soft and hard data is even more challenging, yet necessary in some applications, and has received little attention in the past. Due to being a rather new area of research, soft/hard data fusion is still in a fledgling stage with even its challenging problems yet to be adequately defined and explored. This dissertation develops a framework to enable fusion of both soft and hard data with the Random Set (RS) theory as the underlying mathematical foundation. Random set theory is an emerging theory within the data fusion community that, due to its powerful representational and computational capabilities, is gaining more and more attention among the data fusion researchers. Motivated by the unique characteristics of the random set theory and the main challenge of soft/hard data fusion systems, i.e. the need for a unifying framework capable of processing both unconventional soft data and conventional hard data, this dissertation argues in favor of a random set theoretic approach as the first step towards realizing a soft/hard data fusion framework. Several challenging problems related to soft/hard fusion systems are addressed in the proposed framework. First, an extension of the well-known Kalman filter within random set theory, called Kalman evidential filter (KEF), is adopted as a common data processing framework for both soft and hard data. Second, a novel ontology (syntax+semantics) is developed to allow for modeling soft (human-generated) data assuming target tracking as the application. Third, as soft/hard data fusion is mostly aimed at large networks of information processing, a new approach is proposed to enable distributed estimation of soft, as well as hard data, addressing the scalability requirement of such fusion systems. Fourth, a method for modeling trust in the human agents is developed, which enables the fusion system to protect itself from erroneous/misleading soft data through discounting such data on-the-fly. Fifth, leveraging the recent developments in the RS theoretic data fusion literature a novel soft data association algorithm is developed and deployed to extend the proposed target tracking framework into multi-target tracking case. Finally, the multi-target tracking framework is complemented by introducing a distributed classification approach applicable to target classes described with soft human-generated data. In addition, this dissertation presents a novel data-centric taxonomy of data fusion methodologies. In particular, several categories of fusion algorithms have been identified and discussed based on the data-related challenging aspect(s) addressed. It is intended to provide the reader with a generic and comprehensive view of the contemporary data fusion literature, which could also serve as a reference for data fusion practitioners by providing them with conducive design guidelines, in terms of algorithm choice, regarding the specific data-related challenges expected in a given application.
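The trust-discounting idea can be illustrated with a very small sketch that softens a human report before fusion; redistributing the discounted mass uniformly is an assumption made here for simplicity and stands in for the random-set-theoretic discounting actually developed in the dissertation.

```python
def discount(report, trust):
    """Keep `trust` of the reported probability mass and spread the rest uniformly,
    so low-trust soft data cannot dominate the fused picture."""
    n = len(report)
    return {hypothesis: trust * mass + (1.0 - trust) / n
            for hypothesis, mass in report.items()}

# A human observer reports the target is 'almost certainly a truck'.
report = {"truck": 0.9, "car": 0.08, "pedestrian": 0.02}
print(discount(report, trust=0.5))    # the claim is softened before fusion
```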
APA, Harvard, Vancouver, ISO, and other styles
29

yu-jie, chang, and 張栯榤. "A Research on Nonlinear Distributed Data Fusion System." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/65001580899063562919.

Full text
Abstract:
Master's thesis
Dayeh University
Master's Program, Department of Electrical Engineering
Academic year 93 (ROC calendar)
Multi-target tracking plays a very important role in radar research. If the system incorporates a multiple-sensor fusion algorithm, it can use data from additional sensors for mutual correction, so the tracking results are more accurate. This research investigates a distributed multiple-sensor data fusion method. In this system, each sensor maintains its own tracking data; in addition, the sensors' tracking data are communicated mutually and used to revise the tracking data of the distributed sensors. Moreover, a data association technique is used to resolve the mutual coincidence of targets, and an adaptive procedure reduces the tracking error when targets undergo sudden acceleration.
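One standard way to let local tracks from different sensors correct each other when their errors may be correlated is covariance intersection, shown below as a stand-in for the mutual-revision step described in this abstract (the thesis's own update rule may differ); the track values are illustrative.

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, w):
    """Fuse two track estimates with unknown cross-correlation; w in (0, 1)."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(w * I1 + (1.0 - w) * I2)
    x = P @ (w * I1 @ x1 + (1.0 - w) * I2 @ x2)
    return x, P

# Two sensors' local position tracks for the same target.
xa, Pa = np.array([10.2, 5.1]), np.diag([0.4, 0.4])
xb, Pb = np.array([10.0, 5.3]), np.diag([0.2, 0.6])
print(covariance_intersection(xa, Pa, xb, Pb, w=0.5)[0])
```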
APA, Harvard, Vancouver, ISO, and other styles
30

Garimella, Bhavani Jayaweera Sudharman K. "Distributed detection and data fusion in resource constrained wireless sensor networks." Diss., 2005. http://il.proquest.com/products_umi/dissertations.

Full text
Abstract:
Thesis (M.S.)--Wichita State University, College of Engineering, Dept. of Electrical Engineering.
"December 2005." Title from PDF title page (viewed on April 19, 2007). Thesis adviser: Sudharman K. Jayaweera. UMI Number: AAT 1436557 Includes bibliographic references (leaves 39-45).
APA, Harvard, Vancouver, ISO, and other styles
31

Garimella, Bhavani. "Distributed detection and data fusion in resource constrained wireless sensor networks." Thesis, 2005. http://hdl.handle.net/10057/740.

Full text
Abstract:
Wireless sensor networks have received immense attention in recent years due to their possible applications in various fields such as battlefield surveillance and disaster recovery. Since these networks are mostly resource-constrained, there is a need for efficient algorithms that make the most of the network resources. In this thesis, energy- and bandwidth-efficient detection and fusion algorithms for such resource-constrained wireless sensor systems are developed. A Sequential Probability Ratio Test (SPRT) based detection algorithm for an energy-constrained sensor network is proposed. Performance is evaluated in terms of the number of nodes required to achieve a given probability of detection. Simulation results show that a network implementing the SPRT-based model outperforms a network with a parallel fusion detector. To implement distributed detection and fusion in energy- and bandwidth-constrained networks, non-orthogonal communication is considered one of the possible solutions. An optimal Bayesian data fusion receiver for a DS-CDMA based distributed wireless sensor network with a parallel architecture is proposed. It is shown that the optimal Bayesian receiver outperforms the partitioned receivers in terms of probability of error, but the complexity of this optimal receiver is exponential in the number of nodes. To reduce the complexity, partitioned receivers that perform detection and fusion in two stages are proposed. Several well-known multi-user detectors, namely the JML, matched filter, decorrelator and linear MMSE detectors, are considered for the first-stage detection, and performance is evaluated in terms of the probability of error at the fusion center. The conventional-detector-based fusion receiver performs close to the optimal fusion receiver with considerably less complexity under specific channel conditions. Performance and complexity trade-offs should be considered when designing the network.
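A minimal sketch of the SPRT idea: the fusion center accumulates per-node log-likelihood ratios and stops as soon as one of Wald's two thresholds is crossed, so the number of nodes consulted adapts to the data. The error probabilities and the streamed values are illustrative.

```python
import math

def sprt(llrs, alpha=0.01, beta=0.01):
    """Wald's sequential probability ratio test over a stream of log-likelihood ratios."""
    upper = math.log((1.0 - beta) / alpha)      # cross it -> accept H1 (target present)
    lower = math.log(beta / (1.0 - alpha))      # cross it -> accept H0 (target absent)
    s = 0.0
    for n, llr in enumerate(llrs, start=1):
        s += llr
        if s >= upper:
            return "H1", n
        if s <= lower:
            return "H0", n
    return "undecided", len(llrs)

# Per-node log-likelihood ratios reported to the fusion center.
print(sprt([0.8, 1.1, 0.9, 1.3, 0.7, 1.0]))     # decides H1 after a handful of nodes
```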
Thesis (M.S.)--Wichita State University, College of Engineering, Dept. of Electrical Engineering.
"December 2005."
APA, Harvard, Vancouver, ISO, and other styles
32

Wu, Bin-shin, and 吳秉勳. "Distributed Extended Kalman Filter-based Multisensor Data Fusion for Mobile robot Systems." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/43000423413934733969.

Full text
Abstract:
Master's thesis
I-Shou University
Master's Program, Department of Mechanical and Automation Engineering
Academic year 94 (ROC calendar)
With the prevalence and development of mobile robots, their applications have extended from industrial manufacturing to the home, and their development has attracted much attention in recent years. Given the widespread use of mobile robots, one of the most important requirements is position-tracking. With the increasing demand for precise position-tracking, researchers have investigated algorithms to increase tracking accuracy. The Kalman Filter is the most extensively used algorithm for position-tracking, but its accuracy is limited in some nonlinear cases. This research aims to develop a Distributed Extended Kalman Filter for multisensor data fusion. As the multisensor data are communicated and fused between distributed platforms, the precision of the position-tracking of mobile robots improves accordingly. This work studies the use of ranging parameters and sensor measurement errors with the Extended Kalman Filter and the Distributed Extended Kalman Filter. Numerical simulations are conducted to investigate the tracking accuracy for various mobile robot systems.
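An illustrative fragment of the kind of nonlinear update involved: an EKF measurement update for a single range (distance-to-beacon) reading, linearized at the current estimate. The state layout, beacon positions and noise values are assumptions for the sketch, not the thesis's configuration.

```python
import numpy as np

def ekf_range_update(x, P, z, sensor_pos, R):
    """EKF update for one range measurement h(x) = ||p - p_sensor||."""
    dx, dy = x[0] - sensor_pos[0], x[1] - sensor_pos[1]
    rng = np.hypot(dx, dy)
    H = np.array([[dx / rng, dy / rng, 0.0, 0.0]])     # Jacobian of the range model
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ np.array([z - rng])).ravel()
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# State = [px, py, vx, vy]; two distributed ranging sources refine the same estimate.
x, P = np.array([0.0, 0.0, 0.5, 0.5]), np.eye(4)
for beacon, z in [((10.0, 0.0), 9.6), ((0.0, 10.0), 10.3)]:
    x, P = ekf_range_update(x, P, z, beacon, R=np.array([[0.05]]))
print(x[:2])
```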
APA, Harvard, Vancouver, ISO, and other styles
33

Hsueh, Chin-sheng, and 薛智升. "Distributed TDOA/AOA Location and Data Fusion Methods with NLOS Mitigation in UWB Systems." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/10927576908621977395.

Full text
Abstract:
Master's thesis
National Sun Yat-sen University
Department of Electrical Engineering
Academic year 94 (ROC calendar)
Ultra Wideband (UWB) signals can offer an accurate location service in wireless sensor networks because of their high range resolution. Target tracking by multiple sensors can provide better performance, but centralized algorithms are not suitable for wireless sensor networks. In addition, non-line-of-sight (NLOS) propagation errors lead to severe degradation of the accuracy of location systems. In this thesis, an NLOS identification and mitigation technique utilizing a modified biased Kalman filter (KF) is proposed to reduce NLOS time-of-arrival (TOA) errors in UWB environments. We combine the modified biased Kalman filter with a sliding window to identify and mitigate different degrees of NLOS error immediately. To deal with the influence of inaccurate NLOS angle-of-arrival (AOA) measurements, we also discuss AOA selection and fusion methods. In the distributed location structure, we use the extended information filter (EIF) to process the formulated time-difference-of-arrival (TDOA) and AOA measurements for target positioning and tracking. Unlike the extended Kalman filter, the extended information filter can assimilate selected AOA measurements easily without changing dimensions. The sensors are divided into different groups for distributed TDOA/AOA location to reduce computation, and each group can then easily assimilate information from other groups to maintain precise location. The simulation results show that the proposed architecture can mitigate NLOS errors effectively and improve the accuracy of target positioning and tracking through distributed location and data fusion in wireless sensor networks.
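A crude sketch of the NLOS mitigation idea: the variance of recent range innovations over a sliding window flags likely NLOS conditions, and flagged samples are de-weighted by inflating the measurement noise. The window length, threshold and inflation factor are placeholder assumptions standing in for the modified biased Kalman filter of the thesis.

```python
import numpy as np

def nlos_aware_update(x, P, z, H, R_los, window, nlos_var_threshold, inflate=25.0):
    """Scalar TOA-range update; inflate R when the innovation window looks NLOS-corrupted."""
    innovation = z - (H @ x).item()
    window.append(innovation)
    if len(window) > 10:                    # sliding window of recent innovations
        window.pop(0)
    R = R_los * (inflate if np.var(window) > nlos_var_threshold else 1.0)
    S = (H @ P @ H.T).item() + R
    K = (P @ H.T) / S
    x = x + (K * innovation).ravel()
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P, window

# Constant-position range state; a burst of positively biased (NLOS) ranges arrives.
x, P, H, win = np.array([50.0]), np.eye(1) * 4.0, np.array([[1.0]]), []
for z in [50.5, 49.8, 50.2, 58.0, 57.5, 50.1]:
    x, P, win = nlos_aware_update(x, P, z, H, R_los=1.0, window=win,
                                  nlos_var_threshold=4.0)
print(x)
```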
APA, Harvard, Vancouver, ISO, and other styles
34

Chen, Chien-Wen, and 陳建文. "Distributed TDOA/AOA Wireless Location for Multi-sensor Data Fusion System with Correlated Measurement Noises." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/3dz989.

Full text
Abstract:
Master's thesis
National Sun Yat-sen University
Department of Electrical Engineering
Academic year 95 (ROC calendar)
In a multi-sensor data fusion target tracking system, information filtering can implement distributed location when the measurement noises are uncorrelated, but the measurement noises of different sensors are often correlated. When the measurement noises are correlated, the covariance matrix of the measurement noise is not diagonal, and information filtering cannot be used directly for distributed location. Using matrix theory, the covariance matrix of the measurement noise can be transformed into a diagonal matrix: the observation models are transformed into new observation models, and the multi-sensor measurements with correlated noise are transformed into equivalent pseudo-measurements with uncorrelated noise. Among the available matrix methods, this thesis uses Cholesky factorization, which derives from Gaussian elimination and offers several computational advantages. However, the observation models must be transformed into new observation models, and the measurement data for this approach must be separated and recombined; for this, every sensor would have to communicate with every other sensor, whereas in practice a sensor communicates directly only with its immediate neighbors. By formulating the Cholesky factorization process, we present architectures that apply it to wireless distributed location. Distributed architectures with clustered nodes are proposed to achieve measurement exchange and information sharing for wireless location and target tracking. With a limited number of data exchanges between clustered nodes, the correlated noise components in the measurements are transformed into uncorrelated ones through the Cholesky process, and the resulting information can be directly shared and processed by the derived extended information filters at the nodes of the distributed system. Hybrid TDOA/AOA wireless location systems with NLOS error effects are used as examples in investigating the distributed information architecture. Simulation results show that the proposed distributed information processing and data fusion architecture effectively achieves improved location and tracking accuracy.
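The decorrelation step lends itself to a short sketch: with R = L L^T, pre-multiplying the observation model by L^-1 whitens the measurement noise so that standard information-filter fusion applies. The covariance matrix, observation model and measurement below are random placeholders.

```python
import numpy as np

# Correlated measurement-noise covariance shared by three sensors.
R = np.array([[1.0, 0.4, 0.2],
              [0.4, 1.5, 0.3],
              [0.2, 0.3, 2.0]])
L = np.linalg.cholesky(R)            # R = L @ L.T
T = np.linalg.inv(L)                 # whitening transform

rng = np.random.default_rng(0)
H = rng.standard_normal((3, 4))      # original observation model: z = H x + v, Cov(v) = R
z = rng.standard_normal(3)

# Transformed model: z' = H' x + v' with Cov(v') = T R T.T = I (uncorrelated),
# so the measurements can be fused sensor by sensor with an information filter.
H_new, z_new = T @ H, T @ z
print(np.round(T @ R @ T.T, 10))     # identity up to round-off
```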
APA, Harvard, Vancouver, ISO, and other styles
35

Ballal, Prasanna M. "Decision and control in distributed cooperative systems." 2008. http://hdl.handle.net/10106/1053.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Rybarczyk, Ryan Thomas. "e-DTS 2.0: A Next-Generation of a Distributed Tracking System." 2012. http://hdl.handle.net/1805/2782.

Full text
Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
A key component in tracking is identifying relevant data and combining the data to provide an accurate estimate of both the location and the orientation of an object marker as it moves through an environment. This thesis proposes an enhancement to an existing tracking system, the enhanced distributed tracking system (e-DTS), in the form of e-DTS 2.0, provides an empirical analysis of these enhancements and offers suggestions on future improvements. When a camera identifies an object within its frame of view, it communicates with a JINI-based service to expose this information to any client who wishes to consume it. This communication utilizes the JINI Multicast Lookup Protocol to provide dynamic discovery of sensors as they are added to or removed from the environment during the tracking process. The client can then retrieve this information from the service and perform a fusion technique to provide an estimate of the marker's current location with respect to a given coordinate system. The coordinate system handoff and transformation is a key component of the e-DTS 2.0 tracking process, as it improves the agility of the system.
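The coordinate-system handoff can be pictured as applying a homogeneous transform that maps a marker estimate from a camera's frame into the shared world frame; the 4x4 pose below is an invented example and not part of e-DTS 2.0 itself.

```python
import numpy as np

def to_world(T_world_cam, p_cam):
    """Map a marker position from a camera frame to the world frame (homogeneous transform)."""
    return (T_world_cam @ np.append(p_cam, 1.0))[:3]

# Camera mounted 2 m above the floor and rotated 90 degrees about the z axis.
c, s = np.cos(np.pi / 2), np.sin(np.pi / 2)
T = np.array([[c, -s, 0.0, 0.0],
              [s,  c, 0.0, 0.0],
              [0.0, 0.0, 1.0, 2.0],
              [0.0, 0.0, 0.0, 1.0]])
print(to_world(T, np.array([1.0, 0.0, 0.5])))   # marker seen 1 m ahead of the camera
```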
APA, Harvard, Vancouver, ISO, and other styles
37

Phadke, Aboli Manas. "Designing and experimenting with e-DTS 3.0." Thesis, 2014. http://hdl.handle.net/1805/4932.

Full text
Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
With the advances in embedded technology and the omnipresence of smartphones, tracking systems no longer need to be confined to a specific tracking environment. By introducing mobile devices into a tracking system, we can leverage their mobility and the availability of multiple sensors such as the camera, Wi-Fi, Bluetooth and inertial sensors. This thesis proposes to improve the existing tracking systems, the enhanced Distributed Tracking System (e-DTS 2.0) [19] and the enhanced Distributed Object Tracking System (eDOTS) [26], in the form of e-DTS 3.0, and provides an empirical analysis of these improvements. The proposed enhancements are to introduce Android-based mobile devices into the tracking system, to use multiple sensors on the mobile devices (the camera, Wi-Fi, Bluetooth and inertial sensors) and to utilize resources that may be available in the environment to make the tracking opportunistic. This thesis empirically validates the proposed enhancements through experiments carried out on a prototype of e-DTS 3.0.
APA, Harvard, Vancouver, ISO, and other styles
