
Dissertations / Theses on the topic 'MULTISOURCE DATA'


Consult the top 16 dissertations / theses for your research on the topic 'MULTISOURCE DATA.'


1

Fiskio-Lasseter, John Howard Eli. "Specification and solution of multisource data flow problems /." view abstract or download file of text, 2006. http://proquest.umi.com/pqdweb?did=1280151111&sid=1&Fmt=2&clientId=11238&RQT=309&VName=PQD.

Full text
Abstract:
Thesis (Ph. D.)--University of Oregon, 2006.
Typescript. Includes vita and abstract. Includes bibliographical references (leaves 150-162). Also available for download via the World Wide Web; free to University of Oregon users.
2

Filiberti, Daniel Paul. "Combined Spatial-Spectral Processing of Multisource Data Using Thematic Content." Diss., Tucson, Arizona : University of Arizona, 2005. http://etd.library.arizona.edu/etd/GetFileServlet?file=file:///data1/pdf/etd/azu%5Fetd%5F1066%5F1%5Fm.pdf&type=application/pdf.

Full text
3

Kayani, Amina Josetta. "Critical determinants influencing employee reactions to multisource feedback systems." Thesis, Curtin University, 2009. http://hdl.handle.net/20.500.11937/150.

Full text
Abstract:
The current study examines the Multisource Feedback (MSF) system by investigating the impact several MSF design and implementation factors have on employees’ reaction towards the system. The fundamental goal of the research was to advance the understanding of what is currently known about effectively implementing multisource feedback systems to maximize employee favorable reaction, acceptance and perceptions of usefulness.Of the many management feedback trends that have swept organizations in the past decade, few have had the outstanding impact of MSF. Despite the numerous studies on MSF, perusal of empirical literature lacks overall cohesion in identifying critical factors influencing employees’ reactions to MSF. The constructs examined were delimited to those found to have inherent paradoxes, insufficient coverage, be inconclusive and/or have contradictory findings in the extant literature.A series of main research questions, underscoring the main goal of the study, were developed from the gaps identified in literature to establish which predictors were predominant in influencing the employees’ reactions, acceptance and perceptions of usefulness towards the MSF system. These research questions were formed into hypotheses for testing. The relationships to be tested were integrated into a hypothetical model which encompassed four sub-models to be tested. The models, named the Climate, Reaction, Reaction-Acceptance, Reaction-Perceptions of Usefulness and Acceptance-Perceptions of Usefulness Models were tested in parts using a combination of exploratory factor analysis, correlation analysis and multiple regressions. Further, key informants from each organization and HR managers in three large organizations provided post-survey feedback and information to assist with the elucidation of quantitative findings; this represented the pluralist approach taken in the study.Survey items were derived from extant literature as well as developed specifically for the study. Further, the items were refined using expert reviewers and a pilot study. A cross-sectional web-based survey was administered to employees from a range of managerial levels in three large Malaysian multinational organizations. A total of 420 useable surveys were received, representing a response rate of 47%.Self-report data was used to measure the constructs which were perceptions of the various facets of the MSF. An empirical methodology was used to test the hypotheses to enable the research questions to be answered and to suggest a final model of Critical Determinants Influencing Employee Reaction to MSF Systems.The study was conducted in six phases. In the first phase, a literature map was drawn highlighting the gaps in empirical research. In the second stage, a hypothetical model of employees’ reaction to MSF was developed from past empirical research and literature on MSF. The third phase involved drafting a survey questionnaire on the basis of available literature, with input from academics and practitioners alike. The fourth stage entailed pilot testing the survey instrument using both the ‘paper and pencil’ and web-based methods. The surveys were administered with the assistance of the key informants of the participant organizations in the fifth stage of the study; data received were analysed using a range of statistical tools within SPSS version 15. Content analysis was utilized to categorize themes that emerged from an open-ended question. 
In the sixth and final stage, empirical results from the quantitative analysis were presented to HR managers to glean first-hand understanding of the patterns that emerged. Exploratory factor analysis and reliability analysis indicated that the survey instrument was sound in terms of validity and reliability. In the Climate model, it was found that all the hypothesized predictors, feedback-seeking environment, control over organizational processes, understanding over organizational events, operational support and political awareness, were positively associated with psychological climate for MSF implementation. In terms of predictive power, control over organizational processes failed to attain significance at the 5% level. In the Reaction model, it was found that perceived purpose, perceived anonymity, complexity and rater assignment processes had significant associations with employee reaction to MSF, but perceived anonymity indicated poor predictive power from the regression results. As hypothesized, employee reaction was found to be related to MSF acceptance and perceptions of usefulness, and results indicated that the two latter outcome constructs were related, but statistically distinct. The two-tier pluralist technique of collecting and examining data was a salient feature of the current study. Indeed, such a holistic approach to investigating the determinants of employee reaction to MSF allowed for better integration of its theory and practice. The study is believed to make a modest, but unique contribution to knowledge, advancing the body of knowledge towards a better understanding of MSF design and implementation issues. The results have implications for calibrating MSF systems and evaluating the need for, and likely effectiveness of, what has been hailed as one of the powerful new models for management feedback in the past two decades. Suggestions were made about how the results could benefit academia and practitioners alike. Since most organizational and management research has a western ethnocentric bias, the current study encompassed eastern evidence, using cases in Malaysia.
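For readers who want a concrete picture of the quantitative stages described above (exploratory factor analysis followed by multiple regression), a minimal Python sketch is given below. The file name, item columns, outcome variable and number of factors are hypothetical placeholders, not the study's actual survey instrument or SPSS workflow.

```python
# A minimal sketch of an EFA + multiple-regression pipeline; all names below
# (msf_survey.csv, item_* columns, reaction_score) are hypothetical.
import pandas as pd
import statsmodels.api as sm
from sklearn.decomposition import FactorAnalysis

survey = pd.read_csv("msf_survey.csv")          # hypothetical survey export
items = survey.filter(like="item_")             # Likert-scale item columns

# Exploratory factor analysis: reduce correlated items to latent factors.
fa = FactorAnalysis(n_components=5, random_state=0)
factors = pd.DataFrame(fa.fit_transform(items),
                       columns=[f"factor_{i}" for i in range(5)])

# Multiple regression: predict employee reaction from the extracted factors.
X = sm.add_constant(factors)
model = sm.OLS(survey["reaction_score"], X).fit()
print(model.summary())                          # coefficients, p-values, R^2
```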
4

Peterson, Dwight M. "The Merging of Multisource Telemetry Data to Support Over the Horizon Missile Testing." International Foundation for Telemetering, 1995. http://hdl.handle.net/10150/608414.

Full text
Abstract:
International Telemetering Conference Proceedings / October 30-November 02, 1995 / Riviera Hotel, Las Vegas, Nevada
The testing of instrumented missile systems with extended range capabilities presents many challenges to existing T&E and training ranges. Providing over-the-horizon (OTH) telemetry data collection and displaying portions of this data in real time for range safety purposes are just a few of the many factors required for successful instrumented range support. Techniques typically used for OTH telemetry data collection are to use fixed or portable antennas installed at strategic down-range locations, instrumented relay pods installed on chase aircraft, and instrumented high-flying relay aircraft. Multiple data sources from these various locations typically arrive at a central site within a telemetry ground station and must be merged together to determine the best data source for real-time and post-processing purposes. Before multiple telemetered sources can be merged, the time skews caused by the relay of down-range land-based and airborne sources must be taken into account. The time skews are fixed for land-based sources, but vary with airborne sources. Various techniques have been used to remove the time skews associated with multiple telemetered sources. These techniques, which involve both hardware and software applications, have been effective, but are expensive and application- and range-dependent. This paper describes the use of a personal computer (PC) based workstation, configured with independent Pulse Code Modulation (PCM) decommutators/bit synchronizers, Inter-Range Instrumentation Group (IRIG) timing, and resident data-merging software, to perform the data merging task. Current technology now permits multiple PCM decommutators, each built as a separate VMEbus (VME) card, to be installed within a PC-based workstation. Each land-based or airborne source is connected to a dedicated VME-based PCM decommutator/bit synchronizer within the workstation. After the exercise has been completed, data-merging software resident within the workstation is run, which reads the digitized data from each of the disk files and aligns the data on a bit-by-bit basis to determine the optimum merged result. Both time-based and event-based alignment are performed when merging the multiple sources. This technique has application for current TOMAHAWK exercises performed at the Air Force Development Test Center, Eglin Air Force Base (AFB), Florida and the Naval Air Warfare Center/Weapons Division (NAWC/WD), Point Mugu, California, and for future TOMAHAWK Baseline Improvement Program (TBIP) testing.
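The two core steps described above, removing the relative time skew between recorded streams and merging them bit by bit, can be illustrated with a rough sketch under simplifying assumptions (ideal bit streams already extracted, a bounded skew, simple majority voting). Real PCM decommutation, frame synchronization and IRIG time tagging are far more involved.

```python
# A simplified sketch of skew estimation and bit-by-bit merging of telemetry
# streams; not the paper's workstation software, just the core alignment idea.
import numpy as np

def estimate_skew(reference, delayed, max_skew):
    """Estimate the bit offset of `delayed` relative to `reference`."""
    ref = reference[:len(reference) - max_skew].astype(float) * 2 - 1
    best, best_score = 0, -np.inf
    for lag in range(max_skew + 1):
        seg = delayed[lag:lag + len(ref)].astype(float) * 2 - 1
        score = np.dot(ref, seg)                 # correlation at this lag
        if score > best_score:
            best, best_score = lag, score
    return best

def merge_streams(streams, max_skew=64):
    """Align each stream to the first one, then majority-vote per bit."""
    ref = streams[0]
    aligned = [ref[:len(ref) - max_skew]]
    for s in streams[1:]:
        lag = estimate_skew(ref, s, max_skew)
        aligned.append(s[lag:lag + len(aligned[0])])
    stack = np.vstack(aligned)
    return (stack.sum(axis=0) * 2 >= stack.shape[0]).astype(np.uint8)

# Example: three skewed copies of the same bit stream.
rng = np.random.default_rng(0)
truth = rng.integers(0, 2, 4096).astype(np.uint8)
copies = [np.concatenate([rng.integers(0, 2, d), truth])[:4096] for d in (0, 7, 21)]
merged = merge_streams(copies)
```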
5

Papadopoulos, Georgios. "Towards a 3D building reconstruction using spatial multisource data and computational intelligence techniques." Thesis, Limoges, 2019. http://www.theses.fr/2019LIMO0084/document.

Full text
Abstract:
Building reconstruction from aerial photographs and other multi-source urban spatial data is a task tackled using a plethora of automated and semi-automated methods, ranging from point processes and classic image processing to laser scanning. In this thesis, an iterative relaxation system is developed based on the examination of the local context of each edge according to multiple spatial input sources (optical, elevation, shadow and foliage masks, as well as other pre-processed data, as elaborated in Chapter 6). All these multisource and multiresolution data are fused so that probable line segments or edges are extracted that correspond to prominent building boundaries. Two novel sub-systems have also been developed in this thesis. They were designed with the purpose of providing additional, more reliable information regarding building contours in a future version of the proposed relaxation system. The first is a deep convolutional neural network (CNN) method for the detection of building borders. In particular, the network is based on the state-of-the-art super-resolution model SRCNN (Dong C. L., 2015). It accepts aerial photographs depicting densely populated urban areas as well as their corresponding digital elevation maps (DEM). Training is performed using three variations of this urban data set and aims at detecting building contours through a novel super-resolved heteroassociative mapping. Another innovation of this approach is the design of a modified custom loss layer named Top-N. In this variation, the mean square error (MSE) between the reconstructed output image and the provided ground truth (GT) image of building contours is computed on the 2N image pixels with the highest values. Assuming that most of the N contour pixels of the GT image are also in the top 2N pixels of the reconstruction, this modification balances the two pixel categories and improves the generalization behavior of the CNN model. The experiments show that the Top-N cost function offers performance gains in comparison to a standard MSE. Further improvement in the generalization ability of the network is achieved by using dropout.
The second sub-system is a super-resolution deep convolutional network, which performs an enhanced-input associative mapping between input low-resolution and high-resolution images. This network has been trained with low-resolution elevation data and the corresponding high-resolution optical urban photographs. Such a resolution discrepancy between optical aerial/satellite images and elevation data is often the case in real-world applications. More specifically, low-resolution elevation data augmented by high-resolution optical aerial photographs are used with the aim of augmenting the resolution of the elevation data. This is a unique super-resolution problem, for which it was found that many of the proposed general-image SR methods do not perform as well. The network, aptly named building super-resolution CNN (BSRCNN), is trained using patches extracted from the aforementioned data. Results show that, in comparison with a classic bicubic upscaling of the elevation data, the proposed implementation offers an important improvement, as attested by modified PSNR and SSIM metrics. In comparison, other proposed general-image SR methods performed worse than a standard bicubic up-scaler. Finally, the relaxation system fuses together all these multisource inputs, comprising pre-processed optical data, elevation data, foliage masks, shadow masks and other pre-processed layers, in an attempt to assign confidence values to each pixel belonging to a building contour. Confidence is augmented or decremented iteratively until the MSE falls below a specified threshold or a maximum number of iterations has been executed. The confidence matrix can then be used to extract the true building contours via thresholding.
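The Top-N cost described above is concrete enough to sketch. The minimal NumPy illustration below assumes the reconstruction and ground-truth contour maps are given as arrays; it illustrates the cost function only, not the thesis's actual training layer.

```python
# A minimal sketch of the Top-N idea: the squared error is computed only over
# the 2N reconstruction pixels with the highest values, where N approximates
# the number of contour pixels in the ground truth.
import numpy as np

def top_n_mse(reconstruction, ground_truth, n_contour_pixels):
    pred = reconstruction.ravel()
    gt = ground_truth.ravel()
    k = min(2 * n_contour_pixels, pred.size)
    top = np.argpartition(pred, -k)[-k:]        # indices of the 2N largest outputs
    return float(np.mean((pred[top] - gt[top]) ** 2))

# Toy usage: a 64x64 contour map with a few hundred contour pixels.
rng = np.random.default_rng(0)
gt = (rng.random((64, 64)) > 0.95).astype(float)
pred = gt * 0.9 + rng.random((64, 64)) * 0.1
loss = top_n_mse(pred, gt, n_contour_pixels=int(gt.sum()))
```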
6

Bascol, Kevin. "Adaptation de domaine multisource sur données déséquilibrées : application à l'amélioration de la sécurité des télésièges." Thesis, Lyon, 2019. http://www.theses.fr/2019LYSES062.

Full text
Abstract:
Bluecime has designed a camera-based system to monitor the boarding stations of chairlifts in ski resorts, which aims at increasing the safety of all passengers. This already successful system does not use any machine learning component and requires an expensive configuration step. Machine learning is a subfield of artificial intelligence which deals with studying and designing algorithms that can learn and acquire knowledge from examples for a given task. Such a task could be classifying safe or unsafe situations on chairlifts from examples of images already labeled with these two categories, called the training examples. The machine learning algorithm learns a model able to predict one of these two categories on unseen cases. Since 2012, it has been shown that deep learning models are the machine learning models best suited to image classification problems when large amounts of training data are available. In this context, this PhD thesis, funded by Bluecime, aims at improving both the cost and the effectiveness of Bluecime's current system using deep learning.
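As a rough illustration of the kind of supervised image classifier the abstract describes (not Bluecime's actual system, nor the thesis's domain-adaptation methods), a pretrained CNN can be fine-tuned on images labeled safe or unsafe. The directory layout and hyperparameters below are hypothetical.

```python
# A minimal transfer-learning sketch for a binary safe/unsafe image classifier.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train_set = datasets.ImageFolder("chairlift/train", tfm)   # safe/ and unsafe/ subfolders
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)               # two classes: safe, unsafe

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
for images, labels in loader:                                # one epoch shown for brevity
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```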
7

Ben, Hassine Soumaya. "Évaluation et requêtage de données multisources : une approche guidée par la préférence et la qualité des données : application aux campagnes marketing B2B dans les bases de données de prospection." Thesis, Lyon 2, 2014. http://www.theses.fr/2014LYO22012/document.

Full text
Abstract:
In Business-to-Business (B-to-B) marketing campaigns, generating "the highest volume of sales at the lowest cost" and achieving the best return on investment (ROI) score is a significant challenge. ROI performance depends on a set of subjective and objective factors such as dialogue strategy, invested budget, marketing technology and organisation, and above all data and, particularly, data quality. However, data issues in marketing databases are overwhelming, leading to insufficient target knowledge that handicaps B-to-B salespersons when interacting with prospects. B-to-B prospection data is indeed mainly structured through a set of independent, heterogeneous, separate and sometimes overlapping files that form a messy multisource prospect selection environment. Data quality thus appears as a crucial issue when dealing with prospection databases. Moreover, beyond data quality, the ROI metric mainly depends on campaign costs. Given the vagueness of (direct and indirect) cost definition, we limit our focus to price considerations. Price and quality thus define the fundamental constraints data marketers consider when designing a marketing campaign file, as they typically look for the "best-qualified selection at the lowest price". However, this goal is not always reachable and compromises often have to be defined. Compromise must first be modelled and formalized, and then deployed for multisource selection issues. In this thesis, we propose a preference-driven selection approach for multisource environments that aims at: 1) modelling and quantifying decision makers' preferences, and 2) defining and optimizing a selection routine based on these preferences. Concretely, we first deal with the data marketer's quality preference modelling by appraising multisource data using robust evaluation criteria (quality dimensions) that are rigorously summarized into a global quality score. Based on this global quality score and data price, in a second step we exploit a preference-based selection algorithm to return "the best-qualified records bearing the lowest possible price". An optimisation algorithm, BrokerACO, is finally run to generate the best selection result.
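The Choquet-integral aggregation mentioned in the second contribution can be sketched as follows. The quality criteria and capacity values below are hypothetical; the thesis elicits them from data-marketer preferences.

```python
# A sketch of the discrete Choquet integral used to aggregate per-criterion
# quality scores under a (possibly non-additive) capacity.
def choquet_integral(scores, capacity):
    """scores: {criterion: value in [0,1]}; capacity: {frozenset: weight}."""
    total, previous = 0.0, 0.0
    remaining = set(scores)
    for criterion, value in sorted(scores.items(), key=lambda kv: kv[1]):
        total += (value - previous) * capacity[frozenset(remaining)]
        previous = value
        remaining.discard(criterion)
    return total

# Hypothetical quality dimensions and capacity (monotone, normalised to 1).
quality = {"completeness": 0.8, "freshness": 0.6, "accuracy": 0.9}
capacity = {
    frozenset(): 0.0,
    frozenset({"completeness"}): 0.3, frozenset({"freshness"}): 0.2,
    frozenset({"accuracy"}): 0.4,
    frozenset({"completeness", "freshness"}): 0.5,
    frozenset({"completeness", "accuracy"}): 0.8,
    frozenset({"freshness", "accuracy"}): 0.6,
    frozenset({"completeness", "freshness", "accuracy"}): 1.0,
}
global_quality = choquet_integral(quality, capacity)   # single aggregated score
```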
8

Mondésir, Jacques Philémon. "Apports de la texture multibande dans la classification orientée-objets d'images multisources (optique et radar)." Mémoire, Université de Sherbrooke, 2016. http://hdl.handle.net/11143/9706.

Full text
Abstract:
Texture has good discriminating power that complements the radiometric parameters in the image classification process. The multiband Compact Texture Unit (CTU) index, recently developed by Safia and He (2014), allows texture to be extracted from several bands at a time, thereby taking advantage of extra information not previously considered in traditional textural analysis: the interdependence between bands. However, this new tool has not yet been tested on multisource images, a use that could be of great interest considering, for example, all the textural richness that radar can provide in addition to optics by combining data. This study therefore completes the validation initiated by Safia (2014) by applying the CTU to an optical-radar image pair. The textural analysis of this multisource dataset produced a "colour texture" image. These newly created textural bands were again combined with the initial optical bands before being used in a land-cover classification process in eCognition. The same classification process (but without CTU) was applied respectively to the optical data, the radar data, and the optical-radar combination. In addition, the CTU generated from the optical data alone (monosource) was compared to the CTU derived from the optical-radar pair (multisource). The analysis of the separating power of these different bands (radiometric and textural) using histograms, together with the confusion matrix tool, allows the performance of these different scenarios and classification parameters to be compared. These comparisons show the CTU, and in particular the multisource CTU, to be the most discriminating criterion; its presence adds variability to the image, allowing a clearer (homogeneous and non-redundant) segmentation and a classification that is both more detailed and more accurate. Indeed, the accuracy increases from 0.5 with the optical image to 0.74 for the CTU image, while confusion decreases from 0.30 (optics) to 0.02 (CTU).
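For intuition only, the sketch below computes a texture-unit-style code jointly over two co-registered bands. It is not the Compact Texture Unit of Safia and He (2014), whose compact multiband encoding is defined in the cited work; it only illustrates the idea of texture that depends on more than one band.

```python
# A heavily simplified, texture-unit-style multiband code: each pixel's eight
# neighbours are compared to the centre, and the optical and radar patterns
# are combined into one joint code per pixel.
import numpy as np

OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]

def texture_code(band):
    """Per-pixel 8-bit code: neighbour >= centre -> 1, else 0."""
    h, w = band.shape
    code = np.zeros((h - 2, w - 2), dtype=np.uint16)
    centre = band[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(OFFSETS):
        neigh = band[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (neigh >= centre).astype(np.uint16) << bit
    return code

def multiband_texture(optical, radar):
    """Joint code over two bands, capturing inter-band texture dependence."""
    return texture_code(optical).astype(np.uint32) * 256 + texture_code(radar)

rng = np.random.default_rng(0)
opt = rng.random((128, 128))
sar = rng.random((128, 128))
ctu_like = multiband_texture(opt, sar)     # extra feature band for classification
```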
9

Zamite, João Miguel Quintino de Morais 1985. "Multisource epidemic data collector." Master's thesis, 2010. http://hdl.handle.net/10451/2346.

Full text
Abstract:
Master's thesis. Biology (Bioinformatics and Computational Biology). Universidade de Lisboa, Faculdade de Ciências, 2010
Epidemic surveillance has recently been subject to the development of Web-based information retrieval systems. The majority of these systems extract information directly from users, official epidemic reports or news sources. Others extract epidemic data from Internet-based social network services. The currently existing epidemic surveillance systems are mostly monolithic, not being designed for knowledge sharing or integration with other applications such as epidemic forecasting tools. In this dissertation, an approach is presented to the creation of a data collection system which enables the integration of data from diverse sources. Based on the principles of interoperability and modularity, this system not only addresses the current needs for data integration but is also extensible, enabling data extraction from future epidemic data sources. This system was developed as a module for the "Epidemic Marketplace" under the EPIWORK project, with the objective of becoming a valuable data source for epidemic modeling and forecasting tools. This document describes the requirements and development stages for this epidemic surveillance system and its evaluation.
European Commission - EPIWORK project under the Seventh Framework Programme (Grant # 231807), the EPIWORK project partners, CMU-Portugal partnership and FCT (Portuguese research funding agency) for its LaSIGE Multi-annual support
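The modular, interoperable design described in this abstract can be pictured as a plug-in interface for data sources feeding a single aggregator. The class and field names below are illustrative only, not the Epidemic Marketplace API.

```python
# An architectural sketch: each epidemic data source implements a common
# interface, and the collector aggregates their normalised records.
from abc import ABC, abstractmethod
from dataclasses import dataclass
from datetime import date
from typing import Iterable, List

@dataclass
class EpidemicRecord:
    source: str
    disease: str
    location: str
    observed_on: date
    count: int

class DataSource(ABC):
    @abstractmethod
    def fetch(self) -> Iterable[EpidemicRecord]:
        """Retrieve and normalise records from one source (news, social media, reports)."""

class Collector:
    def __init__(self, sources: List[DataSource]):
        self.sources = sources            # new sources plug in without changing this class

    def collect(self) -> List[EpidemicRecord]:
        records: List[EpidemicRecord] = []
        for source in self.sources:
            records.extend(source.fetch())
        return records
```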
10

You, Min-Ruei, and 游旻叡. "Deriving deep sea seafood tracability maps using multisource data aggregation." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/remqvk.

Full text
Abstract:
Master's thesis. National Taiwan Ocean University, Department of Computer Science and Engineering, ROC academic year 106 (2017-2018).
Taiwan is an island country surrounded by the ocean and rich in marine resources. According to the Taiwan Fisheries Agency's statistical data, the fishery industry is very important, producing an estimated value of 86 billion NTD per year. However, marine resources are not unlimited and may be depleted if not managed sustainably. Illegal, unreported, and unregulated (IUU) fishing is a major cause of unsustainable fisheries and has aroused concern across different sectors of the community. To strengthen the management of deep sea fisheries, the Fisheries Agency, Council of Agriculture, Executive Yuan established the Fisheries Management Center. Since July 1, 2016, the Fisheries Agency has implemented a landing declaration system and begun to use a new-generation eLogbook system, hoping that these strategies can make monitoring and management more complete. International regulations attach great importance to the traceability of seafood, which is a key to combating IUU fishing. In Taiwan, the Agricultural Traceability System has already been established; however, there is no such system yet in the deep sea fishery sector. From the landing declarations and the eLogbook system developed by the Fisheries Agency, we can construct a traceability map of deep sea seafood. We apply data aggregation techniques to multisource data and use the Closest Point of Approach (CPA) algorithm to estimate the position where a fishing vessel transships its catch to another vessel. This seafood traceability map system can map catch and transshipment information for major fish products. It provides a web-based visualization interface which can show either the region of catch or the travel map of products. Authorities and users can use this system to understand the source and validity of fish catches.
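The geometric core of the Closest Point of Approach (CPA) step can be sketched as below, assuming each vessel segment is reduced to a position and a constant velocity in local planar coordinates. The thesis's handling of raw eLogbook and vessel-track data is more involved; this only illustrates the computation.

```python
# CPA between two constant-velocity tracks: the midpoint at the time of
# closest approach approximates a possible transshipment location.
import numpy as np

def closest_point_of_approach(p1, v1, p2, v2):
    """Return (t_cpa, distance, midpoint) for two constant-velocity tracks."""
    dp = np.asarray(p1, float) - np.asarray(p2, float)
    dv = np.asarray(v1, float) - np.asarray(v2, float)
    dv2 = float(np.dot(dv, dv))
    t = 0.0 if dv2 == 0.0 else -float(np.dot(dp, dv)) / dv2   # time of closest approach
    a = np.asarray(p1, float) + t * np.asarray(v1, float)
    b = np.asarray(p2, float) + t * np.asarray(v2, float)
    return t, float(np.linalg.norm(a - b)), (a + b) / 2.0

# Two vessels converging (illustrative coordinates in km and km/h).
t, dist, spot = closest_point_of_approach((0, 0), (10, 5), (40, -10), (-5, 10))
```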
11

Chen, Lu-Yang, and 陳履洋. "GPU-Acceleration of Multisource Data Fusion for Image Classification Using Nearest Feature Space Approach." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/4598s5.

Full text
Abstract:
Master's thesis. National Taipei University of Technology, Department of Electrical Engineering, ROC academic year 101 (2012-2013).
Disaster damage investigations and scale estimates can provide critical information for follow-up disaster relief responses and interactions. Recently, with the advance of satellite and image processing technology, remote sensing imaging has become a mature and feasible technique for disaster interpretation. In this thesis a nearest feature space (NFS) approach is proposed for the purpose of landslide hazard assessment using multisource images. In the past, NFS was applied to hyperspectral image classification. The NFS algorithm keeps the original structure of the training samples and calculates the nearest distance between test samples and training samples in the feature space. However, when the number of training samples is large, the computational load of NFS is high. A parallel version of the NFS algorithm based on graphics processing units (GPUs) is proposed to overcome this drawback. The proposed method, based on the Compute Unified Device Architecture (CUDA) programming model, is used to implement high-performance computing of NFS. The experimental results demonstrate that the NFS approach is effective for land cover classification in the field of Earth remote sensing.
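One common formulation of the nearest feature space rule projects a test sample onto the subspace spanned by each class's training samples and picks the class with the smallest residual. The serial sketch below illustrates that logic only; it is an assumption-laden stand-in, not the thesis's CUDA implementation.

```python
# A compact NumPy sketch of a nearest-feature-space (NFS) style classifier.
import numpy as np

def nfs_classify(test_samples, train_samples, train_labels):
    classes = np.unique(train_labels)
    predictions = []
    for x in test_samples:
        residuals = []
        for c in classes:
            basis = train_samples[train_labels == c].T        # features x samples
            coeffs, *_ = np.linalg.lstsq(basis, x, rcond=None)
            residuals.append(np.linalg.norm(x - basis @ coeffs))
        predictions.append(classes[int(np.argmin(residuals))])
    return np.array(predictions)

# Toy usage: two classes in a 20-dimensional feature space.
rng = np.random.default_rng(0)
Xtr = np.vstack([rng.normal(0, 1, (8, 20)), rng.normal(3, 1, (8, 20))])
ytr = np.array([0] * 8 + [1] * 8)
Xte = rng.normal(3, 1, (5, 20))
print(nfs_classify(Xte, Xtr, ytr))
```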
12

Wang, Yi Chun (Benny), and 王怡鈞. "Multisource Data Fusion and Fisher Criterion Based Nearest Feature Space Approach to Landslide Classification." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/rtw5uf.

Full text
Abstract:
Doctoral dissertation. National Taipei University of Technology, Ph.D. Program, Department of Electrical Engineering, ROC academic year 103 (2014-2015).
In this dissertation, a novel technique known as the Fisher criterion based nearest feature space (FCNFS) approach is proposed for supervised classification of multisource images for the purpose of landslide hazard assessment. The method is developed for land cover classification based upon the fusion of remotely sensed images of the same scene collected from multiple sources. This dissertation presents a framework for data fusion of multisource remotely sensed images, consisting of two approaches: the band generation process (BGP) and the FCNFS classifier. The multiple adaptive BGP is introduced to create an additional set of bands that are specifically accommodated to the landslide class and are extracted from the original multisource images. In comparison to the original nearest feature space (NFS) method, the proposed FCNFS classifier uses the Fisher criterion of between-class and within-class discrimination to enhance the classifier. In the training phase, the labeled samples are discriminated by the Fisher criterion, which can be treated as a pre-processing step of the NFS method. After completion of the training, the classification results can be obtained from the NFS algorithm. Experimental results show that the proposed BGP/FCNFS framework is suitable for land cover classification in Earth remote sensing and improves the classification accuracy compared to conventional classifiers.
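The Fisher criterion of between-class and within-class discrimination mentioned above can be sketched as a per-feature score; how exactly the dissertation couples it with the NFS classifier and the band generation process is not reproduced here.

```python
# A sketch of per-feature Fisher scores: between-class scatter divided by
# within-class scatter, usable for ranking or weighting generated bands.
import numpy as np

def fisher_scores(samples, labels):
    """F_j = sum_c n_c (mu_cj - mu_j)^2 / sum_c n_c var_cj, per feature j."""
    overall_mean = samples.mean(axis=0)
    between = np.zeros(samples.shape[1])
    within = np.zeros(samples.shape[1])
    for c in np.unique(labels):
        group = samples[labels == c]
        between += len(group) * (group.mean(axis=0) - overall_mean) ** 2
        within += len(group) * group.var(axis=0)
    return between / np.maximum(within, 1e-12)

# Bands with high Fisher scores separate landslide from non-landslide pixels best.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 6)), rng.normal((2, 0, 0, 0, 1, 0), 1, (50, 6))])
y = np.array([0] * 50 + [1] * 50)
weights = fisher_scores(X, y)
```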
13

Corongiu, Manuela. "Modelling Railways in the Context of Interoperable Geospatial Data." Doctoral thesis, 2021. http://hdl.handle.net/2158/1240386.

Full text
Abstract:
In geospatial information, the term interoperability must be defined at different levels to fully consider the design of complex spatial infrastructures: semantic, schematic, syntactic and, above all, the processes and steps required to be shared in a common framework. The interoperability issue is the keystone of the research topic and is analysed through different aspects and points of view, with a focus on three relevant aspects. First of all, 3D information: from the cartographic point of view (2.5D) to fully 3D models. Then, the link between reference geoinformation and geospatial thematic applications in the context of railway infrastructures. Finally, multi-source information in an integrated spatial database, analysed in terms of management, validation, and updating over time. The proposed approach starts from reference data based on 3D geotopographic information. The research aims to devise a prototype process for a 3D data model able to describe, firstly, geospatial databases derived from cartographic maps and, then, a spatial model shareable among different territorial applications and analyses. The connection between 3D city models and Building Information Models (BIM) has been considered. The case study refers to railway infrastructure contents. Consequently, the research objectives touch on the following aspects: the evolution of base cartography toward spatial databases, the connection between a 3D geospatial database and 3D city modelling, the connection between 3D city modelling and BIM, the connection between geo-reference and geo-thematic applications in the context of railways, the role of point cloud data within spatial databases, and the management of multi-source geospatial information. To summarise, the thesis focuses on outlining a road map for maintaining interoperability using geographical standards and formal steps. Each step acts as a liaison point between different spatial data applications. Independence from technological platforms or application formats has been one of the mandatory requirements.
14

Huang, Zhi. "Individual and combined AI models for complicated predictive forest type mapping using multisource GIS data." PhD thesis, 2004. http://hdl.handle.net/1885/148517.

Full text
15

RUBEENA. "OBJECT-BASED CLASSIFICATION USING MULTI-RESOLUTION IMAGE FUSION." Thesis, 2019. http://dspace.dtu.ac.in:8080/jspui/handle/repository/18082.

Full text
Abstract:
Multisource remote sensing data has recently gained much greater attention from researchers for urban land-cover classification, because it is increasingly being realized that the complementary characteristics of different types of remote sensing data can significantly enhance and aid the identification of natural and man-made objects in an urban landscape. However, while on one hand it improves classification accuracy, on the other it also increases the data volume, including noise, redundant information and uncertainty between the datasets. Therefore, it is essential to extract selected input features and combine them from the multisource data to achieve the highest possible classification accuracy. The other challenging tasks while dealing with multisource data are the development of data processing and classification techniques that exploit the advantages of multisource data sets. The objective of this thesis is to improve the urban land-cover classification process by using multisource remote sensing data and exploring various spectral and spatial features with recent image processing and classification algorithms and techniques for improvement in the classification accuracy of urban land-cover objects. This work proposes a methodological approach for classifying natural (i.e., vegetation, trees and soil) and man-made (i.e., buildings and roads) objects using multisource datasets (i.e., Long Wave Infrared (LWIR) hyperspectral and high spatial resolution visible RGB data). The spatial, spectral and spatial-structural features, such as textural features (i.e., contrast and homogeneity), the Normalized Difference Vegetation Index (NDVI) and the Morphological Building Index (MBI), are extracted and incorporated on the connected components of each class under the categories of natural and man-made objects. The feature knowledge set formed, consisting of different domains of spectral and spatial attributes, is trained and tested using a one-against-one Support Vector Machine (SVM) classifier network. The decisions of the classifier network are finalized using a fusion technique, i.e., the majority vote contributed by each feature set. The results obtained from the classification approach using decision-level fusion show that the majority vote by the NDVI feature in the SVM classifier network has contributed to achieving good confidence measures for the classes building (100%), vegetation (91.6%), soil (91%) and road (79.3%). From these results, it may be inferred that the NDVI feature gives better results for almost all classes. Since NDVI is calculated using mean spectral information from the thermal bands of the hyperspectral data and the red bands of the VIS RGB data, the mean spectral information from the thermal hyperspectral bands can be explored for studying the natural and man-made objects in urban land cover. Also, the feature-level fusion of multisource datasets, i.e., the fusion of long-wave thermal bands from hyperspectral data and red bands from very high resolution visible RGB data, has resulted in achieving good classification accuracy of objects in an urban area. A decision-level strategy has also been derived using the combination of textural, NDVI and MBI features with a weighted majority voting fusion technique to enhance the classification results of natural and man-made objects. The comparative analysis of the decision-level fusion of each feature set and the combination of all features reveals that the overall confidence measure of the classified objects has improved from 57.1% to 98.2% (for all classes except building).
Hence, it is seen that the proposed weighted majority voting strategy of combined spectral and spatial features derived from multisensor data improves the classification accuracy of urban land-cover objects. The capabilities of non-parametric classifiers, such as the Artificial Neural Network (ANN) and the Support Vector Machine (SVM), have been investigated using LWIR and visible RGB data with the methodological approach derived using spectral and spatial features and the decision-level fusion strategy. The results have shown that the SVM classifier network gives better generalisation results than ANN for the identification of natural and man-made objects. The statistics obtained show that, in comparison to neural networks, SVM requires less training data for a decent performance. This research also explores the benefit of using wide-range hyperspectral data, i.e., the Airborne Visible/Infrared Imaging Spectrometer-Next Generation (AVIRIS-NG). The spectral channels of hyperspectral data ranging from 325 nm to 2500 nm have been investigated for extracting the useful wavelengths for forming the spectral indices, which in turn can be used for identifying the components/constituents of natural and man-made objects of an urban area. The Optimum Index Factor (OIF) is computed by evaluating different standard deviation and correlation values, and the wavelength values for which the best OIF is obtained are selected to form the spectral indices. Various spectral indices, namely the Normalised Difference Vegetation Index (NDVI), the Infrared Percentage Vegetation Index (IPVI), the Difference Vegetation Index (DVI) (wavelengths, in nanometers: 1077.65, 666.94), the Normalised Difference Building Index (NDBI) (wavelengths: 1788.88, 892.33), the Normalised Difference Soil Index (NDSI) (wavelengths: 1763.84, 396.47) and the Clay Mineral Index (wavelengths: 2164.54, 1713.75), have been formulated by using the significant wavelengths derived from the OIF. Two indices, namely the Clay Mineral Index and NDBI, have resulted in significantly improving the classification of components/constituents in soil (producers' accuracy of arid soil 92.13% and non-arid soil 93.84%) and man-built (producers' accuracy of concrete 83.41% and asphalt 93.09%) structures. Besides, the research also focuses on improving the classification accuracy of linearly structured man-made objects, i.e., roads, in an urban area by exploring spatial shape attributes using very high spatial resolution VIS RGB data. A new approach has been proposed for calculating the spatial shape features. The results have shown that it is beneficial to study the pixel shape features, which can be extended to the object level in order to improve classification accuracy. The classification accuracy achieved through this study for the class "road" is 97.3%, which is enhanced in comparison to the results achieved so far with other experiments in this research work. Several important conclusions have been drawn from this study regarding improvement in the classification accuracy of urban objects, and certain new approaches for the same have been recommended. The study also gives suggestions for undertaking future research in certain areas, such as exploring a greater number of spatial-structural features for identifying man-made features, and the use of a combination of spectral and spatial features in the identification of subclasses or different levels of classification of natural and man-made objects in an urban environment.
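The Optimum Index Factor used above for selecting wavelength triplets can be sketched as follows, assuming the bands are flattened to pixel vectors; the formula is the standard sum of the bands' standard deviations divided by the sum of their absolute pairwise correlations.

```python
# A sketch of OIF-based band-triplet selection over a hyperspectral cube.
from itertools import combinations
import numpy as np

def best_oif_triplet(bands):
    """bands: array of shape (n_bands, n_pixels); returns (indices, oif)."""
    best, best_oif = None, -np.inf
    stds = bands.std(axis=1)
    corr = np.corrcoef(bands)
    for i, j, k in combinations(range(bands.shape[0]), 3):
        denom = abs(corr[i, j]) + abs(corr[i, k]) + abs(corr[j, k])
        oif = (stds[i] + stds[j] + stds[k]) / max(denom, 1e-12)
        if oif > best_oif:
            best, best_oif = (i, j, k), oif
    return best, best_oif

# Toy usage: 10 hypothetical bands flattened to pixel vectors.
rng = np.random.default_rng(0)
cube = rng.random((10, 5000))
triplet, score = best_oif_triplet(cube)
```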
16

Trivedi, Neeta. "Robust, Energy‐efficient Distributed Inference in Wireless Sensor Networks With Applications to Multitarget Tracking." Thesis, 2014. https://etd.iisc.ac.in/handle/2005/4569.

Full text
Abstract:
The Joint Directors of Laboratories (JDL) data fusion model is a functional and comprehensive model for the data fusion and inference process and serves as a common frame of reference for fusion technologies and algorithms. However, in distributed data fusion (DDF), since a node fuses the data locally available to it and the data arriving at it from the network, the framework by which the inputs arrive at a node must be part of the DDF problem, more so when the network starts becoming an overwhelming part of the inference process, as in wireless sensor networks (WSN). The current state of the art is the advancement resulting from parallel efforts in the constituent technology areas relating to the network or architecture domain and the application or fusion domain. Each of these disciplines is an evolving area requiring concentrated efforts to reach the Holy Grail. However, the most serious gap exists in the linkages within and across the two domains. The goal of this thesis is to investigate how the architectural issues can be crucial to maintaining provably correct solutions for distributed inference in WSN, to examine the requirements of the networking structure for multitarget tracking in WSN as the boundaries get pushed in terms of target signature separation, sensor location uncertainties, reporting structure changes, and energy scarcity, and to propose robust and energy-efficient solutions for multitarget tracking in WSN. The findings point to an architecture that is achievable given today's technology. This thesis shows the feasibility of using this architecture for efficient integrated execution of the architecture-domain and fusion-domain functionality. Specific contributions in the architecture domain include an optimal lower bound on the energy required for broadcast to a set of nodes, a QoS- and resource-aware broadcast algorithm, and a fusion-aware convergecast algorithm. The contributions in the fusion domain include the following. An extension to the JDL model is proposed that accounts for DDF. Probabilistic graphical models are introduced with the motivation of balancing computation load and communication overheads among sensor nodes. Under the assumption that evidence originates from sensor nodes and a large part of inference must be drawn locally, the model allows inference responsibilities to be mapped to sensor nodes in a distributed manner. An algorithm formulating the problem of maximum a posteriori state estimation from a general multimodal posterior as a constrained nonlinear optimization problem, and an error estimate for indicating actionable confidence in this state, are proposed. A DBN-based framework, iMerge, is proposed that models the overlap of signal energies from closely spaced targets to add robustness to data association. iConsensus, a lightweight approach to network management and distributed tracking, and iMultitile, a method to trade off the cost of managing and propagating the particles against desired accuracy limits, are also proposed. iSLAT, a distributed, lightweight smoothing algorithm for simultaneous localization and multitarget tracking, is discussed. iSLAT uses the well-known RANSAC algorithm for approximation of the joint posterior densities.
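The formulation of the MAP state estimate as a constrained nonlinear optimisation over a multimodal posterior can be illustrated with a toy example. The two-component Gaussian-mixture posterior and the box constraint below are purely illustrative, not the thesis's actual model.

```python
# A toy MAP estimate from a multimodal posterior via constrained optimisation.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

modes = [multivariate_normal(mean=[0.0, 0.0], cov=np.eye(2)),
         multivariate_normal(mean=[4.0, 1.0], cov=0.5 * np.eye(2))]
weights = [0.4, 0.6]

def negative_log_posterior(x):
    density = sum(w * m.pdf(x) for w, m in zip(weights, modes))
    return -np.log(density + 1e-300)

# Constrain the state to a plausible region (e.g. the sensor field boundaries).
result = minimize(negative_log_posterior, x0=[1.0, 1.0], method="SLSQP",
                  bounds=[(-10.0, 10.0), (-10.0, 10.0)])
map_estimate = result.x   # local optimum; multiple starts are needed to find the global mode
```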