
Dissertations / Theses on the topic 'Information fusion'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Information fusion.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Xu, Philippe. "Information fusion for scene understanding." Thesis, Compiègne, 2014. http://www.theses.fr/2014COMP2153/document.

Full text
Abstract:
Image understanding is a key issue in modern robotics, computer vision and machine learning. In particular, driving scene understanding is very important in the context of advanced driver assistance systems for intelligent vehicles. In order to recognize the large number of objects that may be found on the road, several sensors and decision algorithms are necessary. To make the most of existing state-of-the-art methods, we address the issue of scene understanding from an information fusion point of view. The combination of many diverse detection modules, which may deal with distinct classes of objects and different data representations, is handled by reasoning in the image space. We consider image understanding at two levels: object detection and semantic segmentation. The theory of belief functions is used to model and combine the outputs of these detection modules. We emphasize the need for a fusion framework flexible enough to easily include new classes, new sensors and new object detection algorithms. In this thesis, we propose a general method to model the outputs of classical machine learning techniques as belief functions. Next, we apply our framework to the combination of pedestrian detectors using the Caltech Pedestrian Detection Benchmark. The KITTI Vision Benchmark Suite is then used to validate our approach in a semantic segmentation context using multi-modal information.
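The combination step is not spelled out in the abstract, but a minimal sketch of Dempster's rule, the basic conjunctive combination of the theory of belief functions the thesis builds on, may help; the two-class frame and the detector masses below are illustrative assumptions, not values from the thesis.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset -> mass) with
    Dempster's rule: intersect focal sets, then renormalise by 1 - conflict."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("totally conflicting sources cannot be combined")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Frame of discernment: a detection is either a pedestrian (P) or background (B).
P, B = frozenset({"P"}), frozenset({"B"})
PB = P | B  # mass on the whole frame expresses ignorance

# Hypothetical outputs of two detection modules, expressed as belief masses.
m_camera = {P: 0.6, B: 0.1, PB: 0.3}
m_lidar = {P: 0.4, B: 0.3, PB: 0.3}

fused = dempster_combine(m_camera, m_lidar)
for focal_set, mass in sorted(fused.items(), key=lambda kv: -kv[1]):
    print(sorted(focal_set), round(mass, 3))
```

In this toy example the combined mass concentrates on the hypothesis both modules lean towards, while conflicting evidence is discarded through the renormalisation step.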
APA, Harvard, Vancouver, ISO, and other styles
2

Kaupp, Tobias. "Probabilistic Human-Robot Information Fusion." Thesis, The University of Sydney, 2008. http://hdl.handle.net/2123/2554.

Full text
Abstract:
This thesis is concerned with combining the perceptual abilities of mobile robots and human operators to execute tasks cooperatively. It is generally agreed that a synergy of human and robotic skills offers an opportunity to enhance the capabilities of today’s robotic systems, while also increasing their robustness and reliability. Systems which incorporate both human and robotic information sources have the potential to build complex world models, essential for both automated and human decision making. In this work, humans and robots are regarded as equal team members who interact and communicate on a peer-to-peer basis. Human-robot communication is addressed using probabilistic representations common in robotics. While communication can in general be bidirectional, this work focuses primarily on human-to-robot information flow. More specifically, the approach advocated in this thesis is to let robots fuse their sensor observations with observations obtained from human operators. While robotic perception is well-suited for lower level world descriptions such as geometric properties, humans are able to contribute perceptual information on higher abstraction levels. Human input is translated into the machine representation via Human Sensor Models. A common mathematical framework for humans and robots reinforces the notion of true peer-to-peer interaction. Human-robot information fusion is demonstrated in two application domains: (1) scalable information gathering, and (2) cooperative decision making. Scalable information gathering is experimentally demonstrated on a system comprised of a ground vehicle, an unmanned air vehicle, and two human operators in a natural environment. Information from humans and robots was fused in a fully decentralised manner to build a shared environment representation on multiple abstraction levels. Results are presented in the form of information exchange patterns, qualitatively demonstrating the benefits of human-robot information fusion. The second application domain adds decision making to the human-robot task. Rational decisions are made based on the robots’ current beliefs which are generated by fusing human and robotic observations. Since humans are considered a valuable resource in this context, operators are only queried for input when the expected benefit of an observation exceeds the cost of obtaining it. The system can be seen as adjusting its autonomy at run-time based on the uncertainty in the robots’ beliefs. A navigation task is used to demonstrate the adjustable autonomy system experimentally. Results from two experiments are reported: a quantitative evaluation of human-robot team effectiveness, and a user study to compare the system to classical teleoperation. Results show the superiority of the system with respect to performance, operator workload, and usability.
APA, Harvard, Vancouver, ISO, and other styles
3

Kaupp, Tobias. "Probabilistic Human-Robot Information Fusion." University of Sydney, 2008. http://hdl.handle.net/2123/2554.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Johansson, Ronnie. "Large-Scale Information Acquisition for Data and Information Fusion." Doctoral thesis, Stockholm, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3890.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Johansson, Ronnie. "Information Acquisition in Data Fusion Systems." Licentiate thesis, KTH, Numerical Analysis and Computer Science, NADA, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-1673.

Full text
Abstract:

By purposefully utilising sensors, for instance by a data fusion system, the state of some system-relevant environment might be adequately assessed to support decision-making. The ever increasing access to sensors offers great opportunities, but also incurs grave challenges. As a result of managing multiple sensors one can, e.g., expect to achieve a more comprehensive, resolved, certain and more frequently updated assessment of the environment than would be possible otherwise. Challenges include data association, treatment of conflicting information and strategies for sensor coordination.

We use the term information acquisition to denote the skill of a data fusion system to actively acquire information. The aim of this thesis is to instructively situate that skill in a general context, explore and classify related research, and highlight key issues and possible future work. It is our hope that this thesis will facilitate communication, understanding and future efforts in information acquisition.

The previously mentioned trend towards utilisation of large sets of sensors makes us especially interested in large-scale information acquisition, i.e., acquisition using many, and possibly spatially distributed and heterogeneous, sensors.

Information acquisition is a general concept that emerges in many different fields of research. In this thesis, we survey literature from, e.g., agent theory, robotics and sensor management. We furthermore suggest a taxonomy of the literature that highlights relevant aspects of information acquisition.

We describe a function, perception management (akin to sensor management), which realizes information acquisition in the data fusion process, and pertinent properties of its external stimuli, sensing resources, and system environment.

An example of perception management is also presented. The task is that of managing a set of mobile sensors that jointly track some mobile targets. The game-theoretic algorithm suggested for distributing the targets among the sensors proves to be more robust to sensor failure than a measurement-accuracy-optimal reference algorithm.

Keywords: information acquisition, sensor management, resource management, information fusion, data fusion, perception management, game theory, target tracking

APA, Harvard, Vancouver, ISO, and other styles
6

Nouranian, Saman. "Information fusion for prostate brachytherapy planning." Thesis, University of British Columbia, 2016. http://hdl.handle.net/2429/58305.

Full text
Abstract:
Low-dose-rate prostate brachytherapy is a minimally invasive treatment approach for localized prostate cancer. It takes place in one session by permanent implantation of several small radioactive seeds inside and adjacent to the prostate. The current procedure at the majority of institutions requires planning of seed locations prior to implantation from transrectal ultrasound (TRUS) images acquired weeks in advance. The planning is based on a set of contours representing the clinical target volume (CTV). Seeds are manually placed with respect to a planning target volume (PTV), which is an anisotropic dilation of the CTV, followed by dosimetry analysis. The main objective of the plan is to meet clinical guidelines in terms of recommended dosimetry by covering the entire PTV with the placement of seeds. The current planning process is manual, hence highly subjective, and can potentially contribute to the rate and type of treatment-related morbidity. The goal of this thesis is to reduce subjectivity in prostate brachytherapy planning. To this end, we developed and evaluated several frameworks to automate various components of the current prostate brachytherapy planning process. This involved development of techniques with which target volume labels can be automatically delineated from TRUS images. A seed arrangement planning approach was developed by distributing seeds with respect to priors and optimizing the arrangement according to the clinical guidelines. The design of the proposed frameworks involved the introduction and assessment of data fusion techniques that aim to extract joint information in retrospective clinical plans, containing the TRUS volume, the CTV, the PTV and the seed arrangement. We evaluated the proposed techniques using data obtained in a cohort of 590 brachytherapy treatment cases from the Vancouver Cancer Centre, and compared the automation results with the clinical gold standards and previously delivered plans. Our results demonstrate that data fusion techniques have the potential to enable automatic planning of prostate brachytherapy.
Faculty of Applied Science
Department of Electrical and Computer Engineering
Graduate
APA, Harvard, Vancouver, ISO, and other styles
7

Dalmas, Tiphaine. "Information fusion for automated question answering." Thesis, University of Edinburgh, 2007. http://hdl.handle.net/1842/27860.

Full text
Abstract:
Until recently, research efforts in automated Question Answering (QA) have mainly focused on getting a good understanding of questions to retrieve correct answers. I focus on the analysis of the relationships between answer candidates as provided in open domain QA on multiple documents. I argue that such candidates have intrinsic properties, partly regardless of the question, and those properties can be exploited to provide better quality and more user-oriented answers in QA. Information fusion refers to the technique of merging pieces of information from different sources. While frequency has proved to be a significant characteristic of a correct answer, I evaluate the value of other relationships characterizing answer variability and redundancy. Partially inspired by recent developments in multi-document summarization, I redefine the concept of “answer” within an engineering approach to QA based on the Model-View-Controller (MVC) pattern of user interface design. An “answer model” is a directed graph in which nodes correspond to entities projected from extractions and edges convey relationships between such nodes. I describe shallow techniques to compare entities and enrich the model by discovering four broad categories of relationships between entities in the model: equivalence, inclusion, aggregation and alternative. Quantitatively, answer candidate modelling improves answer extraction accuracy. It also proves to be more robust to incorrect answer candidates than traditional techniques. Qualitatively, models provide meta-information encoded by relationships that allow shallow reasoning to help organize and generate the final output. Coupling this fusion-based reasoning with the MVC approach, I report experiments on mixed-media answering involving generation of illustrated summaries, and discuss the application of web-based answer modelling to improve non-web QA tasks. Finally, I discuss issues related to the computation of answer models (candidate selection for fusion, relationship transitivity), and address the difficulty of assessing fusion-based answers with the current evaluation methods in QA.
APA, Harvard, Vancouver, ISO, and other styles
8

Oreshkin, Boris. "Distributed information fusion in sensor networks." Thesis, McGill University, 2010. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=86916.

Full text
Abstract:
This thesis addresses the problem of design and analysis of distributed in-network signal processing algorithms for efficient aggregation and fusion of information in wireless sensor networks. The distributed in-network signal processing algorithms alleviate a number of drawbacks of the centralized fusion approach. The single point of failure, complex routing protocols, uneven power consumption in sensor nodes, inefficient wireless channel utilization, and poor scalability are among these drawbacks. These drawbacks of the centralized approach lead to reduced network lifetime, poor robustness to node failures, and reduced network capacity. The distributed algorithms alleviate these issues by using simple pairwise message exchange protocols and localized in-network processing. However, for such algorithms the accuracy losses and/or time required to complete a particular fusion task may be significant. The design and analysis of fast and accurate distributed algorithms with guaranteed performance characteristics is thus important. In this thesis two specific problems associated with the analysis and design of such distributed algorithms are addressed.
For the distributed average consensus algorithm a memory-based acceleration methodology is proposed. The convergence of the proposed methodology is investigated. For two important settings of this methodology, optimal values of the system parameters are determined and the improvement with respect to the standard distributed average consensus algorithm is theoretically characterized. The theoretical improvement characterization matches well with the results of numerical experiments, revealing a significant and well-scaling gain. A practical distributed on-line initialization scheme is devised. Numerical experiments reveal the feasibility of the proposed initialization scheme and the superior performance of the proposed methodology with respect to several existing acceleration approaches.
For the collaborative signal and information processing methodology a number of theoretical performance guarantees are obtained. The collaborative signal and information processing framework consists in activating only a cluster of wireless sensors to perform the target tracking task in the cluster head using a particle filter. The optimal cluster is determined at every time instant and cluster-head hand-off is performed if necessary. To reduce communication costs only an approximation of the filtering distribution is sent during hand-off, resulting in additional approximation errors. Time-uniform performance guarantees accounting for the additional errors are obtained in two settings: subsample approximation and parametric mixture approximation hand-off.
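The memory-based acceleration itself is not reproduced here, but a minimal sketch of the standard distributed average consensus iteration it builds on may be useful; the ring topology, step size and sensor readings below are illustrative assumptions.

```python
import numpy as np

def consensus_step(x, neighbours, epsilon):
    """One synchronous step of distributed average consensus: every node nudges
    its value towards those of its neighbours using only local communication."""
    x_new = x.copy()
    for i, nbrs in neighbours.items():
        x_new[i] = x[i] + epsilon * sum(x[j] - x[i] for j in nbrs)
    return x_new

# Hypothetical network: 6 sensor nodes on a ring, each with a noisy local reading.
rng = np.random.default_rng(0)
readings = 20.0 + rng.normal(0.0, 2.0, size=6)
neighbours = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
epsilon = 0.3  # step size below 1 / (max node degree) for convergence

x = readings.copy()
for _ in range(50):
    x = consensus_step(x, neighbours, epsilon)

print("true average:", round(readings.mean(), 3))
print("node values :", np.round(x, 3))  # every node converges to the network average
```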
APA, Harvard, Vancouver, ISO, and other styles
9

Peacock, Andrew M. "Information fusion for improved motion estimation." Thesis, University of Edinburgh, 2001. http://hdl.handle.net/1842/428.

Full text
Abstract:
Motion Estimation is an important research field with many commercial applications including surveillance, navigation, robotics, and image compression. As a result, the field has received a great deal of attention and there exist a wide variety of Motion Estimation techniques which are often specialised for particular problems. The relative performance of these techniques, in terms of both accuracy and of computational requirements, is often found to be data dependent, and no single technique is known to outperform all others for all applications under all conditions. Information Fusion strategies seek to combine the results of different classifiers or sensors to give results of a better quality for a given problem than can be achieved by any single technique alone. Information Fusion has been shown to be of benefit to a number of applications including remote sensing, personal identity recognition, target detection, forecasting, and medical diagnosis. This thesis proposes and demonstrates that Information Fusion strategies may also be applied to combine the results of different Motion Estimation techniques in order to give more robust, more accurate and more timely motion estimates than are provided by any of the individual techniques alone. Information Fusion strategies for combining motion estimates are investigated and developed. Their usefulness is first demonstrated by combining scalar motion estimates of the frequency of rotation of spinning biological cells. Then the strategies are used to combine the results from three popular 2D Motion Estimation techniques, chosen to be representative of the main approaches in the field. Results are presented, from both real and synthetic test image sequences, which illustrate the potential benefits of Information Fusion to Motion Estimation applications. There is often a trade-off between accuracy of Motion Estimation techniques and their computational requirements. An architecture for Information Fusion that allows faster, less accurate techniques to be effectively combined with slower, more accurate techniques is described. This thesis describes a number of novel techniques for both Information Fusion and Motion Estimation which have potential scope beyond that examined here. The investigations presented in this thesis have also been reported in a number of workshop, conference and journal papers, which are listed at the end of the document.
APA, Harvard, Vancouver, ISO, and other styles
10

Cavanaugh, Andrew F. "Bayesian Information Fusion for Precision Indoor Location." Digital WPI, 2011. https://digitalcommons.wpi.edu/etd-theses/157.

Full text
Abstract:
This thesis documents work which is part of the ongoing effort by the Worcester Polytechnic Institute (WPI) Precision Personnel Locator (PPL) project to track and locate first responders in urban/indoor settings. Specifically, the project intends to produce a system which can accurately determine the floor that a person is on, as well as where on the floor that person is, with sub-meter accuracy. The system must be portable, rugged, fast to set up, and require no pre-installed infrastructure. Several recent advances have enabled us to get closer to meeting these goals: the development of the Transactional Array Reconciliation Tomography (TART) algorithm and corresponding locator hardware, as well as the integration of barometric sensors and a new antenna deployment scheme. To fully utilize these new capabilities, a Bayesian Fusion algorithm has been designed. The goal of this thesis is to present the necessary methods for incorporating diverse sources of information, in a constructive manner, to improve the performance of the PPL system. While the conceptual methods presented within are meant to be general, the experimental results will focus on the fusion of barometric height estimates and RF data. These information sources will be processed with our existing Singular Value Array Reconciliation Tomography (σART) and the new TART algorithm, using a Bayesian Fusion algorithm to more accurately estimate indoor locations.
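The abstract does not give the fusion equations, but for two independent height estimates modelled as Gaussians the Bayesian combination has a simple closed form; the sketch below is a toy illustration of that idea only (the sensor values and variances are invented), not the σART/TART processing itself.

```python
def fuse_gaussian(mu1, var1, mu2, var2):
    """Bayesian fusion of two independent Gaussian estimates of the same quantity:
    the posterior mean is the inverse-variance-weighted average."""
    var = 1.0 / (1.0 / var1 + 1.0 / var2)
    mu = var * (mu1 / var1 + mu2 / var2)
    return mu, var

# Hypothetical height estimates for a responder (metres above a reference level).
baro_mu, baro_var = 7.2, 1.5 ** 2  # barometric estimate: coarse but independent
rf_mu, rf_var = 6.1, 0.8 ** 2      # RF-based estimate: tighter in this example

mu, var = fuse_gaussian(baro_mu, baro_var, rf_mu, rf_var)
print(f"fused height = {mu:.2f} m, std = {var ** 0.5:.2f} m")
```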
APA, Harvard, Vancouver, ISO, and other styles
11

Bellenger, Amandine. "Semantic Decision Support for Information Fusion Applications." Phd thesis, INSA de Rouen, 2013. http://tel.archives-ouvertes.fr/tel-00845918.

Full text
Abstract:
This thesis is part of the knowledge representation domain and the modeling of uncertainty in a context of information fusion. The main idea is to use semantic tools, and more specifically ontologies, not only to represent the general domain knowledge and observations, but also to represent the uncertainty that sources may introduce in their own observations. We propose to represent these uncertainties and semantic imprecision through a meta-ontology (called DS-Ontology) based on the theory of belief functions. The contribution of this work focuses first on the definition of semantic inclusion and intersection operators for ontologies, on which the implementation of the theory of belief functions relies, and secondly on the development of a tool called FusionLab for merging semantic information within ontologies based on the preceding theoretical development. This work has been applied within a European maritime surveillance project.
APA, Harvard, Vancouver, ISO, and other styles
12

Bellenger, Amandine. "Semantic decision support for information fusion applications." Electronic Thesis or Diss., Rouen, INSA, 2013. http://www.theses.fr/2013ISAM0013.

Full text
Abstract:
This thesis is part of the knowledge representation domain and the modeling of uncertainty in a context of information fusion. The main idea is to use semantic tools, and more specifically ontologies, not only to represent the general domain knowledge and observations, but also to represent the uncertainty that sources may introduce in their own observations. We propose to represent these uncertainties and semantic imprecision through a meta-ontology (called DS-Ontology) based on the theory of belief functions. The contribution of this work focuses first on the definition of semantic inclusion and intersection operators for ontologies, on which the implementation of the theory of belief functions relies, and secondly on the development of a tool called FusionLab for merging semantic information within ontologies based on the preceding theoretical development. This work has been applied within a European maritime surveillance project.
APA, Harvard, Vancouver, ISO, and other styles
13

Lu, Jingting. "E-services based information fusion: a user-level information integration framework." [Gainesville, Fla.]: University of Florida, 2002. http://purl.fcla.edu/fcla/etd/UFE0000566.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Lundquist, Christian. "Automotive Sensor Fusion for Situation Awareness." Licentiate thesis, Linköping University, Linköping University, Automatic Control, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-51226.

Full text
Abstract:

The use of radar and camera for situation awareness is gaining popularity in automotive safety applications. In this thesis situation awareness consists of accurate estimates of the ego vehicle's motion, the position of the other vehicles and the road geometry. By fusing information from different types of sensors, such as radar, camera and inertial sensor, the accuracy and robustness of those estimates can be increased.

Sensor fusion is the process of using information from several different sensors to compute an estimate of the state of a dynamic system, that in some sense is better than it would be if the sensors were used individually. Furthermore, the resulting estimate is in some cases only obtainable through the use of data from different types of sensors. A systematic approach to handle sensor fusion problems is provided by model based state estimation theory. The systems discussed in this thesis are primarily dynamic and they are modeled using state space models. A measurement model is used to describe the relation between the state variables and the measurements from the different sensors. Within the state estimation framework a process model is used to describe how the state variables propagate in time. These two models are of major importance for the resulting state estimate and are therefore given much attention in this thesis. One example of a process model is the single track vehicle model, which is used to model the ego vehicle's motion. In this thesis it is shown how the estimate of the road geometry obtained directly from the camera information can be improved by fusing it with the estimates of the other vehicles' positions on the road and the estimate of the radius of the ego vehicle's currently driven path.

The positions of stationary objects, such as guardrails, lampposts and delineators, are measured by the radar. These measurements can be used to estimate the border of the road. Three conceptually different methods to represent and derive the road borders are presented in this thesis. Occupancy grid mapping discretizes the map surrounding the ego vehicle and the probability of occupancy is estimated for each grid cell. The second method applies a constrained quadratic program in order to estimate the road borders, which are represented by two polynomials. The third method associates the radar measurements to extended stationary objects and tracks them as extended targets.

The approaches presented in this thesis have all been evaluated on real data from both freeways and rural roads in Sweden.


IVSS - SEFS
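A minimal Kalman filter sketch shows where the process model and the measurement model described above enter model-based state estimation; the constant-velocity model, single position sensor and noise levels are illustrative assumptions rather than the single-track vehicle model used in the thesis.

```python
import numpy as np

# Process model: constant-velocity motion, state x = [position, velocity].
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition matrix
Q = np.diag([0.01, 0.1])               # process noise covariance

# Measurement model: a sensor observing position only.
H = np.array([[1.0, 0.0]])
R = np.array([[0.5]])                  # measurement noise covariance

def kalman_step(x, P, z):
    """One predict/update cycle: the process model propagates the state,
    the measurement model folds in the new observation z."""
    # Predict with the process model
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the measurement model
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.array([0.0, 0.0]), np.eye(2)
for z in [np.array([0.9]), np.array([2.1]), np.array([2.9])]:  # noisy position readings
    x, P = kalman_step(x, P, z)
print("estimated position, velocity:", np.round(x, 2))
```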
APA, Harvard, Vancouver, ISO, and other styles
15

Andersson, Lars. "Multi-robot Information Fusion : Considering spatial uncertainty models." Doctoral thesis, Linköpings universitet, Fluid och mekanisk systemteknik, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-15327.

Full text
Abstract:
The work presented in this thesis covers the topic of deployment for mobile robot teams. By connecting robots in teams they can perform a better job than each individual is capable of. It also gives redundancy, increases robustness, provides scalability, and increases efficiency. Multi-robot Information Fusion also results in a broader perspective for decision making. This thesis focuses on methods for estimating formation and trajectories and how these can be used for deployment of a robot team. The problems covered discuss what impact trajectories and formation have on the total uncertainty when exploring unknown areas. The deployment problem is approached using a centralized Kalman filter, for investigation of how team formation affects error propagation. Trajectory estimation is done using a smoother, where all information is used not only to estimate the trajectory of each robot, but also to align trajectories from different robots. Both simulation and experimental results are presented in the appended papers. It is shown that sensor placements can substantially affect uncertainty during deployment. When deploying a robot team the formation can be used as a tool for balancing error propagation among the robot states. A robust algorithm for associating rendezvous observations to align robot trajectories is also presented. Trajectory alignment is used as an efficient and cost-effective method for joining mapping information within robot teams. When working with robot teams, sensor placement and formation should be considered to obtain the maximum from the system. It is also of great value to mix robots with different characteristics since it is shown that using sensor fusion the robots can inherit each other’s characteristics if sensors are used correctly. Information sharing requires modularity and general models, which consume computational resources. Over time computer resources will become cheaper, allowing for distribution, and each robot will become more self-contained. Together with increased wireless bandwidth this will enable larger numbers of robots to cooperate.
APA, Harvard, Vancouver, ISO, and other styles
16

Aziz, Tariq. "Impact of information fusion in complex decision making." Thesis, Högskolan i Skövde, Institutionen för kommunikation och information, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-5234.

Full text
Abstract:
In the military battlefield domain, decision making plays a very important part because safety and protection depend upon the accurate decisions made by commanders in complex situations. In military and defense applications, there is a need for technology that helps leaders take good decisions in critical situations with information overload. With the help of multi-sensor information fusion, the amount of information, as well as the uncertainty in that information, can be reduced when identifying and tracking targets in the military area. Information fusion refers to the process of getting information from different sources and fusing this information to supply enhanced decision support. Decision making is the very core and a vital part of the field of information fusion, and better decisions can be obtained by understanding how situation awareness can be enhanced. Situation awareness is about understanding the elements of the situation, i.e. the circumstances of the surrounding environment, their relations and their future impacts, for better decision making. Efficient situation awareness can be achieved with the effective use of sensors. Sensors play a very useful role in multi-sensor fusion technology to collect data about, for instance, the enemy's movements across the border and to find relationships between different objects in the battlefield that help decision makers enhance situation awareness. The purpose of this thesis is to understand and analyze the critical issue of uncertainty that results in information overload in the military battlefield domain, and the benefits of using multi-sensor information fusion technology to reduce uncertainty, by comparing the uncertainty management methods of Bayesian and Dempster-Shafer theories to enhance decision making and situation awareness for identifying targets in the battlefield domain.
APA, Harvard, Vancouver, ISO, and other styles
17

Jilkine, Petr. "Application of information fusion methods to biomedical data." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq23615.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Andersson, Lars A. A. "Multi-robot information fusion : considering spatial uncertainty models /." Linköping : Department of Management and Engineering, Linköping University, 2008. http://www.bibl.liu.se/liupubl/disp/disp2008/tek1209s.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Elmo, Benedetto. "Metodi innovativi di Information Fusion per la crittografia." Doctoral thesis, Universita degli studi di Salerno, 2014. http://hdl.handle.net/10556/1421.

Full text
Abstract:
2011 - 2012
In this work we present a system for realizing a cryptographic code that ensures a high level of secrecy using Information Fusion (IF) techniques. In particular, we merge two codes: one generated by a public-key cryptography algorithm and one generated from fractal relations. This IF method was previously presented with a different use: in the earlier work an identification access code was created, whereas here a highly random cryptographic key is generated for use in encryption. The choice of fractals to generate numbers to be fused with public-key cryptography codes is due to the randomness of these structures. The idea is to use these features for cryptographic applications such as the One-Time Pad. The modified fusion technique is called F&NIF (Fractal & Numerical Information Fusion). [edited by author]
XI n.s.
APA, Harvard, Vancouver, ISO, and other styles
20

Sanchez, Gomez Edgar Gerardo. "Fusion Network Performance: An Integrated Packet/Circuit Hybrid Optical Network." Thesis, KTH, Kommunikationssystem, CoS, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-142652.

Full text
Abstract:
IP traffic increase has resulted in a demand for greater capacity of the underlying Ethernet network. As a consequence, not only Internet Service Providers (ISPs) but also telecom operators have migrated their mobile back-haul networks from legacy SONET/SDH circuit-switched equipment to packet-based networks. This inevitable shift brings higher throughput efficiency and lower costs; however, the guaranteed QoS and minimal delay and packet delay variation (PDV) that can only be offered by circuit-switched technologies such as SONET/SDH are still essential and are becoming more vital for transport and metro networks, as well as for mobile back-haul networks, as the range and demands of applications increase. A fusion network offers "both an Ethernet wavelength transport and the ability to exploit vacant wavelength capacity using statistical multiplexing without interfering with the performance of the wavelength transport" [RVH] by dividing the traffic into two service classes while still using the capacity of the same wavelength in a wavelength routed optical network (WRON) [SBS06]: 1. A Guaranteed Service Transport (GST) service class supporting QoS demands such as no packet loss and fixed low delay for the circuit-switched traffic. 2. A statistical multiplexing (SM) service class offering high bandwidth efficiency for the best-effort packet-switched traffic. Experimentation was carried out using two TransPacket H1 nodes and the Spirent TestCenter as a packet generator/analyzer, with the objective of demonstrating that the fusion technology, using TransPacket's H1 muxponders, allows transporting GST traffic with circuit QoS; that is, with no packet loss, no PDV and minimal delay, independent of the insertion of statistically multiplexed traffic. Results indicated that the GST traffic performance is completely independent of the added SM traffic and its load. GST was always given absolute priority and remained at a constant average end-to-end delay of 21.47 μs, with no packet loss and a minimum PDV of 50 ns while the SM traffic load increased, raising the overall 10GE lightpath utilization up to 99.5%.
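The absolute priority of GST traffic over SM traffic can be illustrated with a minimal slot-based scheduling sketch; the queues and slot model below are illustrative assumptions, not TransPacket's implementation.

```python
from collections import deque

def serve_slot(gst_queue, sm_queue):
    """One transmission slot on the shared wavelength: GST (guaranteed) packets
    are always served first; SM (statistically multiplexed) packets only use
    slots left vacant by GST, so they can never delay or displace GST traffic."""
    if gst_queue:
        return "GST", gst_queue.popleft()
    if sm_queue:
        return "SM", sm_queue.popleft()
    return "idle", None

# Hypothetical queue contents: a steady GST flow plus bursty best-effort SM traffic.
gst = deque(f"gst-{i}" for i in range(5))
sm = deque(f"sm-{i}" for i in range(8))

for slot in range(15):
    kind, packet = serve_slot(gst, sm)
    print(f"slot {slot:2d}: {kind:4s} {packet or ''}")
```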
APA, Harvard, Vancouver, ISO, and other styles
21

Poznic, Dominic. "Statistical and Information Analysis of Plasma Diagnostics." Thesis, The University of Sydney, 2019. http://hdl.handle.net/2123/20189.

Full text
Abstract:
Model comparison using Bayesian evidence, an analysis technique not previously used in plasma physics, is demonstrated on data from two experimental fusion devices. In the first device, a Polywell at the University of Sydney, Langmuir probe data is analysed to compare models describing its electron velocity distribution function (EVDF). The best performing model of the EVDF across all datasets is found, and its posterior distribution is used to give statistical distributions of plasma parameters, such as the plasma potential and density. Compared across the different data sets, these parameters indicate the successful formation of an electric potential well, crucial to the operation of the Polywell. Intensity profiles collected by an infrared camera viewing the divertor strike plate of MAST, previously run at the Culham Centre for Fusion Energy, are analysed to compare models describing the scrape-off layer (SOL). One of the previously existing models in the literature, the Eich function, performs best overall according to the Bayesian evidence, but one of the new convection-diffusion models matches its performance in several of the frames, indicating that improvement is possible. A second analysis technique, based on the Fisher information, is developed to aid the design of discharge plasma experiments with spectroscopic diagnostics. The mathematical basis of this technique is completely novel, having been derived specifically for use in this problem. The analysis technique quantifies the sensitivity of a spectral line to changes in the electron energy distribution function (EEDF). This technique is recommended for those designing plasma discharge experiments where a particular deviation in the EEDF is to be determined from spectral data. It allows all lines in a potential spectrum to be compared and a small subset chosen that will still strongly indicate the deviation. A demonstration is given using a collisional-radiative model of an argon plasma.
APA, Harvard, Vancouver, ISO, and other styles
22

Smith, Walter E. "Developing a model fusion center to enhance information sharing." Monterey, California. Naval Postgraduate School, 2011. http://hdl.handle.net/10945/10697.

Full text
Abstract:
CHDS State/Local
Fusion centers are in a unique position to provide the necessary collaborative space to bring the federal intelligence community together with state, local and tribal initiatives to support homeland security efforts at the grass-roots level. Fusion centers are described as a collaborative effort of two or more agencies to share, or more importantly, fuse information or data from multiple sources. Although fusion centers have developed at different intervals, the U.S. Department of Homeland Security has provided guiding documents to support fusion center maturation. This research examines these documents and proposed strategies incorporated into four proficient fusion centers in the Northeast Region of the United States to identify best or smart practices, success stories and areas for improvement. There has been a plethora of literature written concerning fusion centers since the tragedies of September 11, 2001. Categories of this literature include official documents, guidelines and lessons learned for intelligence input, civil liberties safeguards and protections, and literature dealing with the intelligence cycle and information sharing. The focus of this thesis is to examine the correlation between the implementation of the Fusion Center Guidelines suggested by the U.S. Department of Homeland Security and the U.S. Department of Justice, and the employment of these guidelines in the successful development of a model fusion center.
APA, Harvard, Vancouver, ISO, and other styles
23

Zhao, Zipeng S. M. Massachusetts Institute of Technology. "Fusion of correlated information in multifidelity aircraft design optimization." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/98813.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 117-119).
Models used in engineering design often face trade-offs between computational cost and prediction uncertainty. To ameliorate this problem, correlated models of varying fidelities are used together under different fidelity management strategies to produce accurate predictions while avoiding typically expensive costs. However, existing strategies either account for model correlation and operate under the assumption of a strict fidelity hierarchy, or do not consider model correlation but allow model fidelities to vary across the design space. In this thesis, we present a surrogate-based multifidelity framework that simultaneously accounts for model correlation and accommodates non-hierarchical fidelity specifications. The development of our multifidelity framework can be classified into three stages. The first stage involves the construction of three separate wing weight estimation models that simplify different aspects of the wing sizing problem, thereby creating a scenario where model fidelities are not confined to a rigid hierarchy. The second stage involves the establishment of a formal definition of model correlation, and an extension that allows model correlations to vary across the design space. The third stage involves the incorporation of model correlation in surrogate-based information fusion. To illustrate the application of our framework, we set up a wing weight estimation problem using wing span as design variable. In a later chapter, the problem is extended to two dimensions for increased complexity using body weight and aspect ratio as design variables. Results from both wing weight estimation problems indicate a combination of variance reduction and inflation at different positions in the design space when model correlation is considered, in comparison to the case where model correlation is ignored.
by Zipeng Zhao.
S.M.
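The interplay of variance reduction and inflation when model correlation is accounted for can be illustrated with the minimum-variance fusion of two correlated scalar estimates; the formula is the standard result for two unbiased estimators, and the wing-weight numbers below are invented for illustration, not values from the thesis.

```python
def fuse_correlated(x1, v1, x2, v2, rho):
    """Minimum-variance linear fusion of two unbiased, correlated estimates of
    the same quantity; rho is the correlation between their errors."""
    c = rho * (v1 * v2) ** 0.5
    denom = v1 + v2 - 2.0 * c
    w = (v2 - c) / denom
    fused = w * x1 + (1.0 - w) * x2
    fused_var = (v1 * v2 - c * c) / denom
    return fused, fused_var

# Hypothetical wing-weight estimates (kg) from two models of different fidelity.
x_lo, v_lo = 950.0, 80.0 ** 2   # cheap, low-fidelity model
x_hi, v_hi = 1020.0, 40.0 ** 2  # expensive, higher-fidelity model

for rho in (0.0, 0.6):
    x, v = fuse_correlated(x_lo, v_lo, x_hi, v_hi, rho)
    print(f"rho = {rho}: fused = {x:.1f} kg, std = {v ** 0.5:.1f} kg")
```

With these numbers the fused variance is smaller when the models are treated as independent and larger (though still below the best single model) once a positive correlation is assumed, mirroring the reduction/inflation behaviour reported in the abstract.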
APA, Harvard, Vancouver, ISO, and other styles
24

Chaw, Poh C. "Multimodal biometrics score level fusion using non-confidence information." Thesis, Nottingham Trent University, 2011. http://irep.ntu.ac.uk/id/eprint/361/.

Full text
Abstract:
Multimodal biometrics refers to automatic authentication methods that depend on multiple modalities of measurable physical characteristics. It alleviates most of the restrictions of single biometrics. To combine the multimodal biometrics scores, three different categories of fusion approaches are available: rule based, classification based and density based. When choosing an approach, one has to consider not only the fusion performance, but also system requirements and other circumstances. In the context of verification, classification errors arise from samples in the overlapping region (or non-confidence region) between genuine users and impostors. In score space, a further separation of the samples outside the non-confidence region does not result in further verification improvements. Therefore, information contained in the non-confidence region might be useful for improving the fusion process. Up to this point, no attempts are reported in the literature that try to enhance the fusion process using this additional information. In this work, the use of this information is explored in the rule based and density based approaches mentioned above. The first approach proposes to use the non-confidence region width as a weighting parameter for the Weighted Sum fusion rule. By doing so, the non-confidence region of the multimodal biometrics score space can be minimised. This effectively leads to a better generalisation performance than commonly used Weighted Sum rules. Furthermore, it achieves fusion performances comparable to the more complicated training based approaches. These performances are achieved not only in a wide range of bimodal biometrics experiments, but also in higher dimensional multibiometrics fusion. This method also eliminates the need for score normalization, which is required by other rule based fusion methods. The second approach proposes a new Gaussian Mixture Model based likelihood ratio fusion method. This approach suggests applying this density based fusion to the non-confidence region only and directly rejecting or accepting the samples in the confidence region. By applying the Gaussian Mixture Model to the non-confidence region, a smaller and more informative region, the impact of an inaccurately chosen component number on the fusion performance can be reduced. Without tuning or using any component searching algorithm, this proposed approach achieves performance comparable to one using a specific component number searching algorithm. This successful demonstration means that fewer resources are required while comparable performance is achieved, and processing time is also significantly reduced.
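As a rough illustration of the first approach, the sketch below weights a sum-rule fusion by the inverse of each modality's estimated non-confidence region width; the exact definition of the region and of the weights in the thesis may differ, and all scores here are synthetic.

```python
import numpy as np

def non_confidence_width(genuine, impostor):
    """Width of the score interval where genuine and impostor training scores
    overlap (the non-confidence region), assuming genuine scores run higher."""
    lo, hi = genuine.min(), impostor.max()
    return max(hi - lo, 1e-6)

def weighted_sum_fusion(scores, widths):
    """Weighted-sum fusion with weights inversely proportional to each modality's
    non-confidence region width: a better-separated modality gets more weight."""
    w = 1.0 / np.asarray(widths, dtype=float)
    w /= w.sum()
    return float(np.dot(w, scores))

# Synthetic training scores for two modalities (e.g. face and fingerprint).
rng = np.random.default_rng(1)
gen_face, imp_face = rng.normal(0.75, 0.12, 500), rng.normal(0.45, 0.12, 500)
gen_fp, imp_fp = rng.normal(0.85, 0.08, 500), rng.normal(0.40, 0.08, 500)

widths = [non_confidence_width(gen_face, imp_face),
          non_confidence_width(gen_fp, imp_fp)]
print("non-confidence widths:", np.round(widths, 3))
print("fused score for a probe scoring (0.66, 0.62):",
      round(weighted_sum_fusion([0.66, 0.62], widths), 3))
```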
APA, Harvard, Vancouver, ISO, and other styles
25

Gunturk, Bahadir K. "Multi-frame information fusion for image and video enhancement." Diss., Available online, Georgia Institute of Technology, 2004:, 2003. http://etd.gatech.edu/theses/available/etd-04072004-180015/unrestricted/gunturk%5Fbahadir%5Fk%5F200312%5Fphd.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Shiel, Michael P. "Multi-level information fusion for environment aware robotic navigation." Thesis, Queensland University of Technology, 2013. https://eprints.qut.edu.au/61955/1/Michael_Shiel_Thesis.pdf.

Full text
Abstract:
This thesis develops the hardware and software framework for an integrated navigation system. Dynamic data fusion algorithms are used to develop a system with a high level of resistance to the typical problems that affect standard navigation systems.
APA, Harvard, Vancouver, ISO, and other styles
27

Mustaniemi, J. (Janne). "Image fusion algorithm for a multi-aperture camera." Master's thesis, University of Oulu, 2016. http://urn.fi/URN:NBN:fi:oulu-201609172801.

Full text
Abstract:
Portable devices such as mobile phones have become thinner and smaller over the years. This development sets new challenges for the camera industry. Consumers are looking for high quality cameras with versatile features. Modern manufacturing technology and powerful signal processors make it possible to produce a small-sized multi-aperture camera with good image quality. Such a camera is a worthy alternative to the traditional Bayer matrix camera. In this master's thesis, an image processing algorithm is designed and implemented for a four-aperture camera. The camera consists of four separate camera units, each having dedicated optics and a color filter. Each camera unit has a slightly different viewpoint, which causes parallax error between the captured images. This error has to be corrected before the individual images are combined into a single RGB image. In practice, corresponding pixels are searched for in each image using the graph cuts method and the mutual information similarity measure. The implemented algorithm also utilizes a trifocal tensor, which allows images to be processed together, instead of matching each image pair independently. Matching of corresponding pixels produces a disparity map (depth map) that is used to modify the input images. Moreover, the depth map was used for synthetic refocusing, which aims to change the image focus after capture. The algorithm was evaluated by visually inspecting the quality of the output images. Images were also compared against reference images captured by the same test camera system. The results show that the overall quality of the fused images is near that of the reference images. Closer inspection reveals small color errors, typically found near object borders. Most of the errors are caused by the fact that some of the pixels are not visible in all images. Promising results were obtained when the depth map was used for post-capture refocusing. Since the quality of the effect highly depends on the depth map, the above-mentioned visibility problem causes small errors in the refocused image. Future improvements, such as occlusion handling and sub-pixel accuracy, would significantly increase the quality of both the fused and the refocused images.
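A small sketch of mutual information as a patch-matching cost, estimated from a joint grey-level histogram, may clarify why it suits cameras with different colour filters; the patches below are synthetic and the implementation is only a toy version of the measure used with graph cuts in the thesis.

```python
import numpy as np

def mutual_information(patch_a, patch_b, bins=16):
    """Mutual information between two image patches, estimated from their joint
    grey-level histogram. It rewards statistical dependence rather than equal
    intensities, which suits cameras seeing the scene through different filters."""
    joint, _, _ = np.histogram2d(patch_a.ravel(), patch_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Synthetic patches: patch_b is a non-linear remapping of patch_a (as between two
# colour channels of the same scene), patch_c is unrelated.
rng = np.random.default_rng(2)
patch_a = rng.random((16, 16))
patch_b = np.sqrt(patch_a) + 0.05 * rng.random((16, 16))
patch_c = rng.random((16, 16))

print("MI(a, b) =", round(mutual_information(patch_a, patch_b), 3))  # high: true match
print("MI(a, c) =", round(mutual_information(patch_a, patch_c), 3))  # low: mismatch
```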
APA, Harvard, Vancouver, ISO, and other styles
28

Xu, Yunjie. "Data fusion with multiple queries in single information retrieval scheme." Related electronic resource: Current Research at SU : database of SU dissertations, recent titles available full text, 2002. http://wwwlib.umi.com/cr/syr/main.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Manyika, James. "An information-theoretic approach to data fusion and sensor management." Thesis, University of Oxford, 1993. http://ora.ox.ac.uk/objects/uuid:6e6dd2a8-1ec0-4d39-8f8b-083289756a70.

Full text
Abstract:
The use of multi-sensor systems entails a Data Fusion and Sensor Management requirement in order to optimize the use of resources and allow the synergistic operation of sensors. To date, data fusion and sensor management have largely been dealt with separately and primarily for centralized and hierarchical systems. Although work has recently been done in distributed and decentralized data fusion, very little of it has addressed sensor management. In decentralized systems, a consistent and coherent approach is essential and the ad hoc methods used in other systems become unsatisfactory. This thesis concerns the development of a unified approach to data fusion and sensor management in multi-sensor systems in general and decentralized systems in particular, within a single consistent information-theoretic framework. Our approach is based on considering information and its gain as the main goal of multi-sensor systems. We develop a probabilistic information update paradigm from which we derive directly architectures and algorithms for decentralized data fusion and, most importantly, address sensor management. Presented with several alternatives, the question of how to make decisions leading to the best sensing configuration or actions defines the management problem. We discuss the issues in decentralized decision making and present a normative method for decentralized sensor management based on information as expected utility. We discuss several ways of realizing the solution culminating in an iterative method akin to bargaining for a general decentralized system. Underlying this is the need for a good sensor model detailing a sensor's physical operation and the phenomenological nature of measurements vis-a-vis the probabilistic information the sensor provides. Also, implicit in a sensor management problem is the existence of several sensing alternatives such as those provided by agile or multi-mode sensors. With our application in mind, we detail such a sensor model for a novel Tracking Sonar with precisely these capabilities, making it ideal for managed data fusion. As an application, we consider vehicle navigation, specifically localization and map-building. Implementation is on the OxNav vehicle (JTR) which we are currently developing. The results show, firstly, how with managed data fusion, localization is greatly speeded up compared to previously published work and, secondly, how synergistic operation such as sensor-feature assignments, hand-off and cueing can be realised decentrally. This implementation provides new ways of addressing vehicle navigation, while the theoretical results are applicable to a variety of multi-sensing problems.
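The idea of information gain as the expected utility that drives sensor management can be made concrete with a small discrete example. The sketch below is a simplified illustration rather than the formulation in this thesis: it scores candidate sensing actions by the expected reduction in Shannon entropy of a target-location belief, and the prior, the likelihood matrices and the action names are invented for the example.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def expected_info_gain(prior, likelihood):
    """Expected entropy reduction after one observation.

    likelihood[z, x] = P(z | x) for discrete state x and measurement z.
    """
    h_prior = entropy(prior)
    p_z = likelihood @ prior                    # predictive distribution over measurements
    h_post = 0.0
    for z, pz in enumerate(p_z):
        if pz == 0:
            continue
        posterior = likelihood[z] * prior / pz  # Bayes update for this outcome
        h_post += pz * entropy(posterior)
    return h_prior - h_post

prior = np.array([0.25, 0.25, 0.25, 0.25])      # belief over 4 target cells
sharp_sensor = np.array([[0.85, 0.05, 0.05, 0.05],
                         [0.05, 0.85, 0.05, 0.05],
                         [0.05, 0.05, 0.85, 0.05],
                         [0.05, 0.05, 0.05, 0.85]])
blurry_sensor = np.full((4, 4), 0.25)           # uninformative sensing action
for name, lik in [("sharp", sharp_sensor), ("blurry", blurry_sensor)]:
    print(name, expected_info_gain(prior, lik))  # choose the action with the larger gain
```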
APA, Harvard, Vancouver, ISO, and other styles
30

Noonan, Colin Anthony. "Measures of effectiveness for data fusion based on information entropy." Thesis, Durham University, 2000. http://etheses.dur.ac.uk/4416/.

Full text
Abstract:
This thesis is concerned with measuring and predicting the performance and effectiveness of a data fusion process. Its central proposition is that information entropy may be used to quantify concisely the effectiveness of the process. The personal and original contribution to that subject which is contained in this thesis is summarised as follows: The mixture of performance behaviours that occurs in a data fusion system is described and modelled as the states of an ergodic Markov process. A new analytic approach to combining the entropy of discrete and continuous information is defined. A new simple and accurate model of data association performance is proposed. A new model is proposed for the propagation of information entropy in a minimum mean square combination of track estimates. A new model is proposed for the propagation of the information entropy of object classification belief as new observations are incorporated in a recursive Bayesian classifier. A new model to quantify the information entropy of the penalty of ignorance is proposed. New formulations of the steady state solution of the matrix Riccati equation to model tracker performance are proposed.
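Two of the ingredients listed above, a steady-state solution of the matrix Riccati equation for tracker performance and an entropy summary of the resulting estimate, can be sketched generically. The thesis' own closed-form formulations are not reproduced here; instead SciPy's discrete algebraic Riccati solver stands in, and the constant-velocity model, noise levels and sample interval are placeholder assumptions.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

dt = 1.0
F = np.array([[1.0, dt],
              [0.0, 1.0]])              # constant-velocity motion model
H = np.array([[1.0, 0.0]])              # position-only measurement
Q = 0.01 * np.array([[dt**3 / 3, dt**2 / 2],
                     [dt**2 / 2, dt]])  # process noise covariance
R = np.array([[4.0]])                   # measurement noise variance

# Steady-state one-step prediction covariance from the discrete Riccati equation.
P_pred = solve_discrete_are(F.T, H.T, Q, R)

# Corresponding steady-state updated covariance after one Kalman update.
S = H @ P_pred @ H.T + R
K = P_pred @ H.T @ np.linalg.inv(S)
P_upd = (np.eye(2) - K @ H) @ P_pred

# Differential entropy of the Gaussian track estimate (in nats).
n = P_upd.shape[0]
track_entropy = 0.5 * np.log((2 * np.pi * np.e) ** n * np.linalg.det(P_upd))
print(P_upd, track_entropy)
```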
APA, Harvard, Vancouver, ISO, and other styles
31

Qi, Guilin. "Fusion of uncertain information in the framework of possibilistic logic." Thesis, Queen's University Belfast, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.437544.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Thiam, Patrick [Verfasser]. "Information fusion mechanisms for multi-modal affect recognition / Patrick Thiam." Ulm : Universität Ulm, 2021. http://d-nb.info/1231916508/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Refors, Michael. "Information filter based sensor fusion for estimation of vehicle velocity." Thesis, KTH, Maskinkonstruktion (Inst.), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-192157.

Full text
Abstract:
In this thesis, the possibility of estimating the velocity of a heavy-duty vehicle (HDV) based on the Global Positioning System (GPS), an Inertial Measurement Unit (IMU) and the propeller shaft tachometer is investigated. The thesis was performed at Scania CV AB. The objective was to find an alternative to the wheel encoders that are currently used for velocity estimation. Three different sensor configurations were tested: the first (SC1) was based on GPS and an accelerometer, the second (SC2) was based on GPS, an accelerometer and a gyroscope, and the third (SC3) was based on GPS, an accelerometer and the propeller shaft tachometer. An experimental sensor architecture for collection of measurement data was built. The sensor configurations were evaluated in simulations based on measurement data collected from a test vehicle at Scania's test track in Södertälje. An Information filter (IF) was used for decentralized fusion of sensor measurements. The sensor configurations were evaluated against the wheel encoders and a high-quality GPS/IMU reference system using the Root Mean Squared Error (RMSE), Mean Signed Deviation (MSD) and maximum error. It was concluded that the sensor configurations based solely on GPS and IMU are not robust enough during GPS outages because of the IMU's drift. An alternative source to GPS for correction of the IMU errors was thus necessary. The propeller shaft tachometer was used for this. The RMSE for this sensor configuration (SC3) was reduced by 37% and the MSD was reduced by 60% in comparison to the wheel-encoder-based velocity in the most extreme test performed, in which the wheels slip and the GPS signal is erroneous on two occasions. SC3 is thus proposed for further development. This work lays the basis for a real-time implementation of the proposed sensor configuration and shows the feasibility of using the IF for decentralized multi-sensor fusion. It is also suggested to use the IF for integration of multiple sensors to create a refined and redundant velocity estimate.
In this thesis the possibility of estimating the velocity of a heavy-duty vehicle based on GPS, an IMU and the propeller shaft tachometer is investigated. The project was carried out at Scania CV AB. The objective was to find an alternative to the wheel speed sensors that are currently used for velocity estimation. Three different sensor configurations were tested. The first (SC1) was based on GPS and a longitudinal accelerometer, the second (SC2) on GPS, a longitudinal accelerometer and a gyroscope measuring inclination. The third (SC3) was based on GPS, a longitudinal accelerometer and the propeller shaft tachometer. An experimental sensor architecture was built for collecting measurement data. The sensor configurations were evaluated with simulations based on measurement data from a test vehicle collected at Scania's test track in Södertälje. An information filter (IF) was used for decentralized fusion of sensor data. The sensor configurations were evaluated against the wheel speed sensors and a high-quality GPS/IMU reference system using the statistical measures Root Mean Square Error (RMSE), Mean Signed Deviation (MSD) and maximum error. The results showed that the sensor configurations based only on GPS and IMU were not robust enough when the GPS signal was lost, because of the IMU's tendency to drift. An alternative source to GPS for correcting the IMU errors was therefore necessary. The propeller shaft tachometer was used for this. This sensor configuration (SC3) showed an RMSE improved by 37% and an MSD improved by 60% compared with the wheel speed sensors in the most extreme test carried out, in which the wheels spin and the GPS signal is erroneous on two occasions. SC3 is therefore proposed for further development. This work lays the foundation for continued development of a real-time implementation of the proposed sensor configuration, and demonstrates the feasibility of using an IF for decentralized multi-sensor fusion. It is also suggested to use the IF for the integration of several sensors to create a refined and redundant velocity estimate.
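The attraction of the information filter for decentralized fusion, as used in this thesis, is that each sensor's contribution enters additively in information (inverse-covariance) form. The bare-bones sketch below shows that additive measurement update; the two-state velocity model, the measurement matrices and all numbers are invented for illustration and are not the thesis' configuration.

```python
import numpy as np

def information_update(y, Y, z, H, R):
    """Add one sensor's contribution in information form.

    y : information vector (Y @ x_hat), Y : information matrix (inverse covariance).
    """
    R_inv = np.linalg.inv(R)
    i = H.T @ R_inv @ z        # information contribution of the measurement
    I = H.T @ R_inv @ H        # contribution to the information matrix
    return y + i, Y + I

# Prior belief about [velocity, acceleration] converted to information form.
x_prior = np.array([20.0, 0.0])
P_prior = np.diag([4.0, 1.0])
Y = np.linalg.inv(P_prior)
y = Y @ x_prior

# Hypothetical speed measurements: GPS-derived and propeller-shaft-derived.
H_gps  = np.array([[1.0, 0.0]]); R_gps  = np.array([[0.5]]); z_gps  = np.array([20.6])
H_prop = np.array([[1.0, 0.0]]); R_prop = np.array([[0.2]]); z_prop = np.array([20.3])

for z, H, R in [(z_gps, H_gps, R_gps), (z_prop, H_prop, R_prop)]:
    y, Y = information_update(y, Y, z, H, R)

P_post = np.linalg.inv(Y)
x_post = P_post @ y
print(x_post, P_post)
```

Because the updates simply add, contributions computed at different nodes can be fused in any order, which is what makes the decentralized architecture straightforward.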
APA, Harvard, Vancouver, ISO, and other styles
34

Higgins, Jonathan E. "A study of information fusion applied to subband speaker recognition." Thesis, University of Southampton, 2002. https://eprints.soton.ac.uk/257274/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Johansson, Ola, and Franzén Sofie Madsen. "Sensor Fusion and Information Sharing for Automated Vehicles in Intersections." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-293837.

Full text
Abstract:
One of the biggest challenges in the development of autonomous vehicles is to anticipate the behavior of other road users. Autonomous vehicles rely on data obtained by on-board sensors and make decisions accordingly, but this becomes difficult if the sensors are occluded or have limited range. In this report we propose an algorithm for connected vehicles in an intersection to fuse and share sensor data and gain a better estimation of the surrounding environment. The method used for sensor fusion was a Kalman filter and a tracking algorithm, where time delay from external sensors was considered. Parameters for the Kalman filter were decided through measurement of the sensors' variances as well as tuning. It was concluded that the variances are dependent on the objects' movements, which means that constant parameters for the Kalman filter would not be enough to make it efficient. However, the tracking and the sensor sharing made a significant difference in the vehicle's detection rate which could ultimately increase safety in intersections.
One of the biggest challenges in the development of autonomous vehicles is anticipating the behavior of other road users. Autonomous vehicles rely on data from on-board sensors and make decisions in accordance with the information from them. This becomes particularly difficult if the sensors are occluded or have limited range. In this report we propose an algorithm for sharing and fusing sensor data for autonomous vehicles in an intersection, in order to give the vehicle as good a perception of its surroundings as possible. The method used for sensor fusion was a Kalman filter together with a tracking algorithm, where the time delay of data from external sensors was taken into account. The parameters of the Kalman filter were chosen by measuring the sensor variances and by tuning. It was concluded that the variances depend on the objects' motion patterns, which means that constant parameters for the Kalman filter would not be sufficient to make it functional. The tracking and the sharing of sensor data did, however, make a significant difference in the proportion of detected objects, which could be used to increase safety in intersections.
Bachelor's degree project in electrical engineering, 2020, KTH, Stockholm
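The filter core behind the fusion described in this abstract can be illustrated with a minimal constant-velocity Kalman filter that is updated first with the vehicle's own detection and then with a detection shared by another road user. The model matrices, noise levels and measurements are illustrative assumptions, and the delay compensation mentioned in the abstract is deliberately omitted to keep the sketch short.

```python
import numpy as np

dt = 0.1
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)   # constant-velocity model (x, y, vx, vy)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)    # both sources report a position
Q = 0.05 * np.eye(4)
R_own = 0.5 * np.eye(2)                      # on-board sensor noise
R_shared = 1.5 * np.eye(2)                   # shared detection, assumed noisier

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, R):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ (z - H @ x), (np.eye(4) - K @ H) @ P

x, P = np.array([0.0, 0.0, 5.0, 0.0]), np.eye(4)
x, P = predict(x, P)
x, P = update(x, P, np.array([0.52, 0.03]), R_own)      # own detection
x, P = update(x, P, np.array([0.48, -0.02]), R_shared)  # detection shared by another vehicle
print(x)
```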
APA, Harvard, Vancouver, ISO, and other styles
36

Ma, Jinhua. "Dependency modeling for information fusion with applications in visual recognition." HKBU Institutional Repository, 2013. https://repository.hkbu.edu.hk/etd_ra/1522.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Ren, Jie. "Robust moving object detection by information fusion from multiple cameras." Thesis, University of Liverpool, 2014. http://livrepository.liverpool.ac.uk/19013/.

Full text
Abstract:
Moving object detection is an essential process before tracking and event recognition in video surveillance can take place. To monitor a wider field of view and avoid occlusions in pedestrian tracking, multiple cameras are usually used and homography can be employed to associate multiple camera views. Foreground regions detected from each of the multiple camera views are projected into a virtual top view according to the homography for a plane. The intersection regions of the foreground projections indicate the locations of moving objects on that plane. The homography mapping for a set of parallel planes at different heights can increase the robustness of the detection. However, homography mapping is very time-consuming and the intersections of non-corresponding foreground regions can cause false-positive detections. In this thesis, a real-time moving object detection algorithm using multiple cameras is proposed. Unlike the pixelwise homography mapping, which projects binary foreground images, the approach used in this research was to approximate the contour of each foreground region with a polygon and only transmit and project the polygon vertices. The foreground projections are rebuilt from the projected polygons in the reference view. The experimental results have shown that this method can be run in real time and generates results similar to those using foreground images. To identify the false-positive detections, both geometrical information and colour cues are utilized. The former is a height matching algorithm based on the geometry between the camera views. The latter is a colour matching algorithm based on the Mahalanobis distance of the colour distributions of two foreground regions. Since the height matching is uncertain in scenarios with adjacent pedestrians and colour matching cannot handle occluded pedestrians, the two algorithms are combined to improve the robustness of the foreground intersection classification. The robustness of the proposed algorithm is demonstrated in real-world image sequences.
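The efficiency trick highlighted in this abstract, projecting only the polygon vertices of each foreground contour through the plane homography instead of warping whole binary masks, is easy to sketch. In the snippet below the 3x3 homography and the vertex list are made-up values, not calibration data from the thesis.

```python
import numpy as np

def project_polygon(vertices, H):
    """Map polygon vertices (N x 2 array) through a 3x3 homography."""
    pts = np.hstack([vertices, np.ones((len(vertices), 1))])  # homogeneous coordinates
    mapped = (H @ pts.T).T
    return mapped[:, :2] / mapped[:, 2:3]                     # back to Cartesian

# Hypothetical ground-plane homography from one camera view to the virtual top view.
H_ground = np.array([[ 0.9, 0.05, 12.0],
                     [-0.1, 1.10,  3.0],
                     [ 0.0, 0.001, 1.0]])

foreground_polygon = np.array([[100, 220], [140, 220], [142, 300], [98, 305]], dtype=float)
top_view_polygon = project_polygon(foreground_polygon, H_ground)
print(top_view_polygon)
```

Intersecting such projected polygons from different views then plays the role that intersecting warped foreground masks plays in the pixelwise approach, at a fraction of the cost.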
APA, Harvard, Vancouver, ISO, and other styles
38

Mohamed, Abdul Cader Akmal Jahan. "Finger biometric system using bispectral invariants and information fusion techniques." Thesis, Queensland University of Technology, 2019. https://eprints.qut.edu.au/134464/1/Akmal%20Jahan_Mohamed%20Abdul%20Cader_Thesis.pdf.

Full text
Abstract:
Contactless hand biometric systems are better accepted than contact prints as they are hygienic and accelerate data acquisition. This research is one of the few investigating contactless biometrics of the full hand, proposing a novel algorithm based on ridge orientation information along lines connecting key points, higher-order spectral features, and fusion. It was investigated with contactless finger images acquired from 81 users, and found to be robust to hand orientation and image size and to provide acceptable performance when two fingers are combined with fusion. The algorithm has potential for use in high-throughput applications where contact sensing may be slow.
APA, Harvard, Vancouver, ISO, and other styles
39

Martinsson, Håkan. "An evaluation of subjective logic for trust modelling in information fusion." Thesis, University of Skövde, School of Humanities and Informatics, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-977.

Full text
Abstract:

Information fusion is the combination of information from a variety of sources, or sensors. When the sources are uncertain or contradictory, conflict can arise. To deal with such uncertainty and conflict, a trust model can be used. The most common ones in information fusion are currently Bayesian theory and Dempster-Shafer theory. Bayesian theory does not explicitly handle ignorance, and thus predetermined values have to be hard-coded into the system. This is solved in Dempster-Shafer theory by the introduction of ignorance. Even though Dempster-Shafer theory is widely used in information fusion when ignorance needs to be modelled, serious critique has been presented against the theory. This work therefore aims at examining the utility of another trust model in information fusion, namely subjective logic. The examination is carried out by studying subjective logic in two scenarios from the literature. The results from the scenarios point to subjective logic being a reasonable approach for modelling trust in information fusion.
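Subjective logic represents a source's trust as an opinion (belief, disbelief, uncertainty, plus a base rate), and opinions from independent sources are combined with a fusion operator. The sketch below implements the cumulative (consensus) fusion rule for binomial opinions as it is commonly stated in the literature; it is not necessarily the exact operator or scenario used in this dissertation, the base rates are assumed equal and left unchanged, and the two example opinions are invented.

```python
def cumulative_fusion(op_a, op_b):
    """Cumulative (consensus) fusion of two binomial opinions (b, d, u)."""
    b1, d1, u1 = op_a
    b2, d2, u2 = op_b
    k = u1 + u2 - u1 * u2
    if k == 0:                                   # both opinions dogmatic (u = 0)
        return ((b1 + b2) / 2, (d1 + d2) / 2, 0.0)
    return ((b1 * u2 + b2 * u1) / k,
            (d1 * u2 + d2 * u1) / k,
            (u1 * u2) / k)

sensor_1 = (0.7, 0.1, 0.2)   # fairly confident the source is reliable
sensor_2 = (0.2, 0.3, 0.5)   # uncertain, mildly negative
print(cumulative_fusion(sensor_1, sensor_2))    # fused opinion still sums to 1
```

The uncertainty component is what distinguishes this representation from a plain probability: a source that has seen little evidence keeps a large u, and its belief mass carries correspondingly less weight in the fusion.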

APA, Harvard, Vancouver, ISO, and other styles
40

Barua, Shaibal. "Multi-sensor Information Fusion for Classification of Driver's Physiological Sensor Data." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-18880.

Full text
Abstract:
Physiological sensor signal analysis is common practice in the medical domain for the diagnosis and classification of various physiological conditions. Clinicians frequently use physiological sensor signals to assess an individual's psychophysiological parameters, i.e., stress, tiredness, and fatigue. However, parameters obtained from physiological sensors can vary because of an individual's age, gender, physical condition, etc., and analyzing data from a single sensor could mislead the diagnosis result. Today, one proposition is that sensor signal fusion can provide a more reliable and efficient outcome than using data from a single sensor, and it is also becoming significant in numerous diagnosis fields, including medical diagnosis and classification. Case-Based Reasoning (CBR) is another well-established and recognized method in the health sciences. Here, an entropy-based algorithm, Multivariate Multiscale Entropy analysis, has been selected to fuse multiple sensor signals. Other physiological sensor signal measurements are also taken into consideration for system evaluation. A CBR system is proposed to classify 'healthy' and 'stressed' persons using both fused features and other physiological features, i.e., Heart Rate Variability (HRV), Respiratory Sinus Arrhythmia (RSA), and Finger Temperature (FT). The evaluation and performance analysis of the system have been carried out, and the results of the classification based on data fusion and physiological measurements are presented in this thesis.
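Multiscale entropy analysis, on which the fusion algorithm named above builds, rests on two ingredients: coarse-graining a signal at several time scales and computing sample entropy at each scale. The sketch below shows a univariate, simplified version of those two steps; the multivariate extension used in the thesis embeds all channels jointly, the edge handling here is not exact, and the parameters m and r are conventional defaults rather than the thesis settings.

```python
import numpy as np

def coarse_grain(x, scale):
    """Average non-overlapping windows of length `scale`."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def sample_entropy(x, m=2, r=None):
    """SampEn(m, r) of a 1-D signal: -ln(A / B)."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()

    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= r)
        return count

    b = count_matches(m)      # template pairs of length m within tolerance r
    a = count_matches(m + 1)  # template pairs of length m + 1 within tolerance r
    return np.inf if a == 0 or b == 0 else -np.log(a / b)

rng = np.random.default_rng(1)
signal = np.sin(0.1 * np.arange(2000)) + 0.3 * rng.standard_normal(2000)
for scale in (1, 2, 5):
    print(scale, sample_entropy(coarse_grain(signal, scale)))
```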
APA, Harvard, Vancouver, ISO, and other styles
41

Jarrell, Jason A. "Employ sensor fusion techniques for determining aircraft attitude and position information." Morgantown, W. Va. : [West Virginia University Libraries], 2008. https://eidr.wvu.edu/etd/documentdata.eTD?documentid=5894.

Full text
Abstract:
Thesis (M.S.)--West Virginia University, 2008.
Title from document title page. Document formatted into pages; contains xii, 108, [9] p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 104-108).
APA, Harvard, Vancouver, ISO, and other styles
42

Hsueh, Pei-Yun. "Meeting decision detection : multimodal information fusion for multi-party dialogue understanding." Thesis, University of Edinburgh, 2009. http://hdl.handle.net/1842/3971.

Full text
Abstract:
Modern advances in multimedia and storage technologies have led to huge archives of human conversations in widely ranging areas. These archives offer a wealth of information in organizational contexts. However, retrieving and managing information in these archives is a time-consuming and labor-intensive task. Previous research applied keyword and computer vision-based methods to do this. However, spontaneous conversations, complex in the use of multimodal cues and intricate in the interactions between multiple speakers, have posed new challenges to these methods. We need new techniques that can leverage the information hidden in multiple communication modalities – including not just “what” the speakers say but also “how” they express themselves and interact with others. In responding to this need, the thesis inquires into the multimodal nature of meeting dialogues and computational means to retrieve and manage the recorded meeting information. In particular, this thesis develops the Meeting Decision Detector (MDD) to detect and track decisions, one of the most important outcomes of the meetings. The MDD involves not only the generation of extractive summaries pertaining to the decisions (“decision detection”), but also the organization of a continuous stream of meeting speech into locally coherent segments (“discourse segmentation”). This inquiry starts with a corpus analysis which constitutes a comprehensive empirical study of the decision-indicative and segment-signalling cues in the meeting corpora. These cues are uncovered from a variety of communication modalities, including the words spoken, gesture and head movements, pitch and energy level, rate of speech, pauses, and use of subjective terms. While some of the cues match the previous findings of speech segmentation, some others have not been studied before. The analysis also provides empirical grounding for computing features and integrating them into a computational model. To handle the high-dimensional multimodal feature space in the meeting domain, this thesis compares empirically feature discriminability and feature pattern finding criteria. As the different knowledge sources are expected to capture different types of features, the thesis also experiments with methods that can harness synergy between the multiple knowledge sources. The problem formalization and the modeling algorithm so far correspond to an optimal setting: an off-line, post-meeting analysis scenario. However, ultimately the MDD is expected to be operated online – right after a meeting, or when a meeting is still in progress. Thus this thesis also explores techniques that help relax the optimal setting, especially those using only features that can be generated with a higher degree of automation. Empirically motivated experiments are designed to handle the corresponding performance degradation. Finally, with the users in mind, this thesis evaluates the use of query-focused summaries in a decision debriefing task, which is common in the organizational context. The decision-focused extracts (which represent compressions of 1%) are compared against the general-purpose extractive summaries (which represent compressions of 10-40%). To examine the effect of model automation on the debriefing task, this evaluation experiments with three versions of decision-focused extracts, each relaxing one manual annotation constraint. Task performance is measured in actual task effectiveness, user-generated report quality, and user-perceived success. The users' clicking behaviors are also recorded and analyzed to understand how the users leverage the different versions of extractive summaries to produce abstractive summaries. The analysis framework and computational means developed in this work are expected to be useful for the creation of other dialogue understanding applications, especially those that require uncovering the implicit semantics of meeting dialogues.
APA, Harvard, Vancouver, ISO, and other styles
43

Ahmad, Muhammad Imran. "Feature extraction and information fusion in face and palmprint multimodal biometrics." Thesis, University of Newcastle upon Tyne, 2013. http://hdl.handle.net/10443/2128.

Full text
Abstract:
Multimodal biometric systems that integrate the biometric traits from several modalities are able to overcome the limitations of single-modal biometrics. Fusing the information at an earlier level by consolidating the features given by different traits can give a better result due to the richness of information at this stage. In this thesis, three novel methods are derived and implemented on face and palmprint modalities, taking advantage of multimodal biometric fusion at the feature level. The benefits of the proposed method are the enhanced capability to discriminate information in the fused features and to capture all of the information required to improve the classification performance. The multimodal biometric system proposed here consists of several stages such as feature extraction, fusion, recognition and classification. Feature extraction gathers all important information from the raw images. A new local feature extraction method has been designed to extract information from the face and palmprint images in the form of sub-block windows. Multiresolution analysis using the Gabor transform and DCT is computed for each sub-block window to produce compact local features for the face and palmprint images. Multiresolution Gabor analysis captures important information in the texture of the images, while the DCT represents the information in different frequency components. Important features with high discrimination power are then preserved by selecting several low-frequency coefficients in order to estimate the model parameters. The local features extracted are fused in a new matrix interleaved method. The new fused feature vector is higher in dimensionality compared to the original feature vectors from both modalities, thus it carries high discriminating power and contains rich statistical information. The fused feature vector also has more data points in the feature space, which is advantageous for the training process using statistical methods. The underlying statistical information in the fused feature vectors is captured using a GMM, where the model parameters are estimated from the distribution of the fused feature vector. A maximum likelihood score is used to measure the degree of certainty for recognition, while maximum likelihood score normalization is used for the classification process. The use of likelihood score normalization is found to be able to suppress an imposter likelihood score when the background model parameters are estimated from a pool of users that includes statistical information of imposters. The present method achieved the highest recognition accuracies of 97% and 99.7% when tested on the FERET-PolyU and ORL-PolyU datasets, respectively.
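Two concrete steps from this abstract, interleaving the face and palmprint feature vectors and scoring the fused vector with a Gaussian mixture model, can be sketched as follows. The feature dimensions, the number of mixture components and the use of scikit-learn's GaussianMixture are illustrative choices, not the thesis' exact configuration, and the random features stand in for real Gabor/DCT coefficients.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def interleave(face_feat, palm_feat):
    """Fuse two equal-length feature vectors by interleaving their elements."""
    fused = np.empty(face_feat.size + palm_feat.size)
    fused[0::2] = face_feat
    fused[1::2] = palm_feat
    return fused

rng = np.random.default_rng(0)
# Hypothetical training features for one enrolled user (20 samples, 64-D per modality).
face_train = rng.normal(0.0, 1.0, size=(20, 64))
palm_train = rng.normal(0.5, 1.0, size=(20, 64))
fused_train = np.array([interleave(f, p) for f, p in zip(face_train, palm_train)])

user_model = GaussianMixture(n_components=2, covariance_type="diag", random_state=0)
user_model.fit(fused_train)

# Score a genuine-looking probe and an imposter-looking probe.
genuine = interleave(rng.normal(0.0, 1.0, 64), rng.normal(0.5, 1.0, 64))
imposter = interleave(rng.normal(2.0, 1.0, 64), rng.normal(-1.5, 1.0, 64))
print(user_model.score(genuine.reshape(1, -1)))   # higher log-likelihood expected
print(user_model.score(imposter.reshape(1, -1)))  # lower log-likelihood expected
```

The score normalization mentioned in the abstract would, in a setup like this, subtract a background-model log-likelihood (fitted on a pool of users) from the user-model score before thresholding.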
APA, Harvard, Vancouver, ISO, and other styles
44

Yeh, T. C. Jim, Jirka Simunek, and Genuchten Martinus Th Van. "Stochastic fusion of information for characterizing and monitoring the vadose zone." Department of Hydrology and Water Resources, University of Arizona (Tucson, AZ), 2002. http://hdl.handle.net/10150/615767.

Full text
Abstract:
Inverse problems for vadose zone hydrological processes are often perceived as ill-posed and intractable. Consequently, solutions to inverse problems are often subject to skepticism. In this paper, using examples, we elucidate difficulties associated with inverse problems and the prerequisites for such problems to be well-posed so that a unique solution exists. We subsequently explain the need for a stochastic conceptualization of the inverse problem and, in turn, the conditional-effective-parameter concept. This concept aims to resolve the ill-posed nature of inverse problems for the vadose zone, for which generally only sparse data are available. Next, the development of inverse methods for the vadose zone, based on a conditional-effective-parameter concept, is explored, including cokriging, the use of a successive linear estimator, and a sequential estimator. Their applications to vadose zone inverse problems are subsequently examined, which include hydraulic/pneumatic and electrical resistivity tomography surveys, and hydraulic conductivity estimation using observed pressure heads, concentrations, and arrival times. Finally, a stochastic information fusion technology is presented that assimilates information from unsaturated hydraulic tomography and electrical resistivity tomography. This technology offers great promise to effectively characterize heterogeneity, to monitor processes in the vadose zone, and to quantify uncertainty associated with vadose zone characterization and monitoring.
APA, Harvard, Vancouver, ISO, and other styles
45

Rajagopalan, Vidya. "Increasing DBM Reliability using Distribution Independent Tests and Information Fusion Techniques." Diss., Virginia Tech, 2009. http://hdl.handle.net/10919/30158.

Full text
Abstract:
In deformation based morphometry (DBM) group-wise differences in brain structure are measured using deformable registration and some form of statistical test. However, it is known that DBM results are sensitive to both the registration method and the statistical test used. Given the lack of an objective model of group variation, it has been difficult to determine the extent of the influence of registration implementation or constraints on DBM analysis. In this thesis, we use registration methods with varying levels of theoretic similarity to study the influence of registration mechanics on DBM results. We show that because of the extent of the influence of registration mechanics on DBM results, analysis of changes should always be made with a thorough understanding of the registration method used. We also show that minor variations in registration methods can lead to large changes in DBM results. When using DBM, it would be imprudent to use only one registration method to draw any conclusions about the variations being studied. In order to provide a more complete representation of inter-group changes, we propose a method for combining multiple registration methods using Dempster-Shafer evidence theory to produce belief maps of categorical changes between groups. We show that the Dempster-Shafer combination produces a unique and easy to interpret belief map of regional changes between and within groups without the complications associated with hypothesis testing. Another, often confounding, element of DBM is the parametric hypothesis test used to specify voxels undergoing significant change between the two groups. The accuracy and reliability of these tests are contingent on a number of fundamental assumptions made about the distribution of the data used in the tests. Many DBM studies overlook these assumptions and fail to verify their validity for the data being tested. This raises many doubts about the credibility of the results from such tests. In this thesis, we propose to perform statistical analysis on DBM data using nonparametric, distribution-independent hypothesis tests. With no data distributional assumptions, these tests provide both increased flexibility and reliability of DBM statistical analysis.
Ph. D.
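The evidence combination underlying the belief maps described above is Dempster's rule. The sketch below implements the classic rule for two mass functions over a small frame of discernment ('growth', 'shrinkage', 'none'); the example masses are invented, and in a DBM setting a combination like this would be applied voxel-wise to the outputs of the different registration methods.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule for two mass functions with frozenset focal elements."""
    combined, conflict = {}, 0.0
    for (a, w1), (b, w2) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2                      # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("Totally conflicting evidence")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

theta = frozenset({"growth", "shrinkage", "none"})   # frame of discernment
# Masses from two hypothetical registration methods at one voxel.
m_reg1 = {frozenset({"growth"}): 0.6, theta: 0.4}
m_reg2 = {frozenset({"growth"}): 0.5, frozenset({"shrinkage"}): 0.2, theta: 0.3}
print(dempster_combine(m_reg1, m_reg2))
```

Mass left on the whole frame theta expresses ignorance, which is what lets the combined belief map distinguish "no evidence" from "evidence of no change".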
APA, Harvard, Vancouver, ISO, and other styles
46

Baravdish, Ninos. "Information Fusion of Data-Driven Engine Fault Classification from Multiple Algorithms." Thesis, Linköpings universitet, Fordonssystem, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-176508.

Full text
Abstract:
As the automotive industry constantly makes technological progress, higher demands are placed on safety, environmental friendliness and durability. Modern vehicles are headed towards increasingly complex systems, in terms of both hardware and software, making it important to detect faults in any of the components. Monitoring the engine's health has traditionally been done using expert knowledge and model-based techniques, where derived models of the system's nominal state are used to detect any deviations. However, due to the increased complexity of the system, this approach faces limitations in the time and knowledge needed to describe the engine's states. An alternative approach is therefore data-driven methods, which instead are based on historical data measured at different operating points and used to draw conclusions about the engine's present state. In this thesis a diagnostic framework is proposed, consisting of a systematic approach for fault classification of known and unknown faults along with fault size estimation. The basis for this lies in using principal component analysis to find the fault vector for each fault class and decouple one fault at a time, thus creating different subspaces. Importantly, this work investigates, from a performance perspective, the benefit of taking multiple classifiers into account in the decision making. Aggregating multiple classifiers is done by solving a quadratic optimization problem. To evaluate the performance, a comparison with a random forest classifier has been made. Evaluation with challenging test data shows promising results, where the algorithm compares well with the performance of the random forest classifier.
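The first step described above, using principal component analysis to extract a characteristic fault vector per fault class, can be sketched as follows. The residual data are synthetic, and the nearest-direction classification at the end is a simplification for illustration only; it does not reproduce the subspace decoupling or the quadratic-programming aggregation of classifiers discussed in the thesis.

```python
import numpy as np

def principal_fault_direction(residuals):
    """First principal component of a set of residual vectors (one per row)."""
    centered = residuals - residuals.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[0]                                   # unit direction of largest variance

rng = np.random.default_rng(0)
# Synthetic residuals (3 residual signals) for two known fault classes.
fault_a = rng.normal(0, 0.1, (200, 3)) + np.outer(rng.normal(2, 0.5, 200), [1.0, 0.0, 0.5])
fault_b = rng.normal(0, 0.1, (200, 3)) + np.outer(rng.normal(2, 0.5, 200), [0.0, 1.0, -0.5])

directions = {"fault_a": principal_fault_direction(fault_a),
              "fault_b": principal_fault_direction(fault_b)}

def classify(residual):
    """Assign the fault class whose direction best aligns with the residual."""
    scores = {name: abs(residual @ d) / np.linalg.norm(residual)
              for name, d in directions.items()}
    return max(scores, key=scores.get)

print(classify(np.array([1.1, 0.05, 0.48])))       # expected: fault_a
```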
APA, Harvard, Vancouver, ISO, and other styles
47

Janez, Fabrice. "Fusion of information sources defined on different non-exhaustive reference sets /." Paris : ONERA, 1998. http://catalogue.bnf.fr/ark:/12148/cb367050865.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Lee, Hojun. "ONTOLOGY-BASED DATA FUSION WITHIN A NET-CENTRIC INFORMATION EXCHANGE FRAMEWORK." Diss., The University of Arizona, 2009. http://hdl.handle.net/10150/193779.

Full text
Abstract:
With the advent of Network-Centric Warfare (NCW) concepts, Command and Control (C2) systems need efficient methods for communicating between heterogeneous systems. To extract or exchange various levels of information within the networks requires interoperability between human and machine as well as between machine and machine. This dissertation explores the Information Exchange Framework (IEF) concept for distributed data fusion sensor networks in network-centric environments. It is used to synthesize integrative battlefield pictures by combining the Battle Management Language (BML) and the System Entity Structure (SES) ontology framework for C2 systems. The SES is an ontology framework that can facilitate information exchange in a network environment. From the perspective of the SES framework, BML serves to express pragmatic frames, since it can specify the information desired by a consumer in an unambiguous way. This thesis formulates information exchange in the SES ontology via BML and defines novel pruning and transformation processes of the SES to extract and fuse data into higher-level representations. This supports the interoperability between human users and other sensor systems. The efficacy of such data fusion and exchange is illustrated with several battlefield scenario examples. A second intercommunication issue between sensor systems is how to ensure efficient and effective message passing. This is studied by using Cursor-on-Target (CoT), an effort to standardize a battlefield data exchange format. CoT regulates only a few essential data types as standard and has a simple and efficient structure to hold a wide range of message formats used in dissimilar military enterprises. This thesis adopts this common message type in radar sensor networks to manage the target tracking problem in distributed sensor networks. To demonstrate the effectiveness of the proposed Information Exchange Framework for data fusion systems, we illustrate the approach in an air defense operation scenario using DEVS modeling and simulation. The examples depict basic air defense operation procedures. The demonstration shows that the information requested by a commander is delivered in the right way at the right time so that it can support agile decision making against threats.
APA, Harvard, Vancouver, ISO, and other styles
49

Ehsanibenafati, Aida. "Visualization Tool for Sensor Data Fusion." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-5677.

Full text
Abstract:
In recent years researchers have focused on the development of techniques for multi-sensor data fusion systems. Data fusion systems process data from multiple sensors to develop improved estimates of the position, velocity, attributes and identity of entities such as targets or entities of interest. Visualizing sensor data, from fused data down to the raw data from each sensor, helps analysts interpret the data and assess the sensor data fusion platform, an evolving situation or threats. Immersive visualization has emerged as an ideal solution for the exploration of sensor data and provides opportunities for improvement in multi-sensor data fusion. The thesis aims to investigate the possibilities of applying information visualization to the sensor data fusion platform at Volvo. A visualization prototype is also developed to enable multiple users to interactively visualize the sensor data fusion platform in real time, mainly in order to demonstrate, evaluate and analyze the platform functionality. In this industrial study two research methodologies were used: a case study and an experiment for evaluating the results. First, a case study was conducted in order to find the best visualization technique for visualizing the sensor data fusion platform. Second, an experiment was conducted to evaluate the usability of the prototype that had been developed and to make sure the user requirements were met. The visualization tool enabled us to study the effectiveness and efficiency of the visualization techniques used. The results confirm that the visualization method used is effective and efficient for visualizing the sensor data fusion platform.
APA, Harvard, Vancouver, ISO, and other styles
50

Wahlström, Johan. "Sensor Fusion for Smartphone-based Vehicle Telematics." Doctoral thesis, KTH, Teknisk informationsvetenskap, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-218071.

Full text
Abstract:
The fields of navigation and motion inference have rapidly been transformed by advances in computing, connectivity, and sensor design. As a result, unprecedented amounts of data are today being collected by cheap and small navigation sensors residing in our surroundings. Often, these sensors will be embedded into personal mobile devices such as smartphones and tablets. To transform the collected data into valuable information, one must typically formulate and solve a statistical inference problem. This thesis is concerned with inference problems that arise when trying to use smartphone sensors to extract information on driving behavior and traffic conditions. One of the fundamental differences between smartphone-based driver behavior profiling and traditional analysis based on vehicle-fixed sensors is that the former is based on measurements from sensors that are mobile with respect to the vehicle. Thus, the utility of data from smartphone-embedded sensors is diminished by not knowing the relative orientation and position of the smartphone and the vehicle. The problem of estimating the relative smartphone-to-vehicle orientation is solved by extending the state-space model of a global navigation satellite system-aided inertial navigation system. Specifically, the state vector is augmented to include the relative orientation, and the measurement vector is augmented with pseudo observations describing well-known characteristics of car dynamics. To estimate the relative positions of multiple smartphones, we exploit the kinematic relation between the accelerometer measurements from different smartphones. The characteristics of the estimation problem are examined using the Cramér-Rao bound, and the positioning method is evaluated in a field study using concurrent measurements from seven smartphones. The characteristics of smartphone data vary with the smartphone's placement in the vehicle. To investigate this, a large set of vehicle trip segments are clustered based on measurements from smartphone-embedded sensors and vehicle-fixed accelerometers. The clusters are interpreted as representing the smartphone being rigidly mounted on a cradle, placed on the passenger seat, held by hand, etc. Finally, the problem of fusing speed measurements from the on-board diagnostics system and a global navigation satellite system receiver is considered. Estimators of the vehicle’s speed and the scale factor of the wheel speed sensors are derived under the assumptions of synchronous and asynchronous samples.

QC 20171123
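The thesis estimates the relative smartphone-to-vehicle orientation inside a GNSS-aided inertial navigation filter with pseudo-observations of car dynamics. As a much simpler stand-in to convey the underlying geometry, the sketch below aligns acceleration vectors expressed in the smartphone frame with the same accelerations expressed in the vehicle frame by solving the orthogonal Procrustes (Kabsch) problem; the sample vectors, noise levels and the assumption that vehicle-frame accelerations are available are all illustrative, and this is not the method of the thesis.

```python
import numpy as np

def best_fit_rotation(a_phone, a_vehicle):
    """Least-squares rotation R such that R @ a_phone[i] ~ a_vehicle[i] (Kabsch)."""
    H = a_phone.T @ a_vehicle                     # 3x3 cross-covariance of the two sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

rng = np.random.default_rng(0)
# Ground-truth relative orientation: 30 degrees of yaw between phone and vehicle frames.
yaw = np.radians(30)
R_true = np.array([[np.cos(yaw), -np.sin(yaw), 0],
                   [np.sin(yaw),  np.cos(yaw), 0],
                   [0,            0,           1]])

# Synthetic specific-force samples in the vehicle frame (braking, turning, gravity).
a_vehicle = rng.normal(0, 2, (200, 3)) + np.array([0, 0, 9.81])
a_phone = (R_true.T @ a_vehicle.T).T + rng.normal(0, 0.05, (200, 3))  # phone-frame readings

R_est = best_fit_rotation(a_phone, a_vehicle)
print(np.round(R_est, 3))                         # should be close to R_true
```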

APA, Harvard, Vancouver, ISO, and other styles