Doctoral dissertations on the topic "MULTISOURCE DATA"
Browse the 16 best doctoral dissertations on the topic "MULTISOURCE DATA".
Fiskio-Lasseter, John Howard Eli. "Specification and solution of multisource data flow problems". View abstract or download full text, 2006. http://proquest.umi.com/pqdweb?did=1280151111&sid=1&Fmt=2&clientId=11238&RQT=309&VName=PQD.
Typescript. Includes vita and abstract. Includes bibliographical references (leaves 150-162). Also available for download via the World Wide Web; free to University of Oregon users.
Filiberti, Daniel Paul. "Combined Spatial-Spectral Processing of Multisource Data Using Thematic Content". Diss., Tucson, Arizona : University of Arizona, 2005. http://etd.library.arizona.edu/etd/GetFileServlet?file=file:///data1/pdf/etd/azu%5Fetd%5F1066%5F1%5Fm.pdf&type=application/pdf.
Kayani, Amina Josetta. "Critical determinants influencing employee reactions to multisource feedback systems". Thesis, Curtin University, 2009. http://hdl.handle.net/20.500.11937/150.
Peterson, Dwight M. "The Merging of Multisource Telemetry Data to Support Over the Horizon Missile Testing". International Foundation for Telemetering, 1995. http://hdl.handle.net/10150/608414.
Pełny tekst źródłaThe testing of instrumented missile systems with extended range capabilities present many challenges to existing T&E and training ranges. Providing over-the-horizon (OTH) telemetry data collection and displaying portions of this data in real time for range safety purposes are just a few of many factors required for successful instrumented range support. Techniques typically used for OTH telemetry data collection are to use fixed or portable antennas installed at strategic down-range locations, instrumented relay pods installed on chase aircraft, and instrumented high flying relay aircraft. Multiple data sources from these various locations typically arrive at a central site within a telemetry ground station and must be merged together to determine the best data source for real time and post processing purposes. Before multiple telemetered sources can be merged, the time skews caused by the relay of down-range land and airborne based sources must be taken into account. The time skews are fixed for land based sources, but vary with airborne sources. Various techniques have been used to remove the time skews associated with multiple telemetered sources. These techniques, which involve both hardware and software applications, have been effective, but are expensive and application and range dependent. This paper describes the use of a personal computer (PC) based workstation, configured with independent Pulse Code Modulation (PCM) decommutators/bit synchronizers, Inner-Range Instrumentation Group (IRIG) timing, and data merging resident software to perform the data merging task. Current technology now permits multiple PCM decommutators, each built as a separate virtual memory expansion (VME) card, to be installed within a PC based workstation. Each land based or airborne source is connected to a dedicated VME based PCM decommutator/bit synchronizer within the workstation. 
After the exercise has been completed, data-merging software resident within the workstation is run; it reads the digitized data from each of the disk files and aligns the data on a bit-by-bit basis to determine the optimum merged result. Both time-based and event-based alignment are performed when merging the multiple sources. This technique has application for current TOMAHAWK exercises performed at the Air Force Development Test Center, Eglin Air Force Base (AFB), Florida and the Naval Air Warfare Center/Weapons Division (NAWC/WD), Point Mugu, California, and for future TOMAHAWK Baseline Improvement Program (TBIP) testing.
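The fixed-skew alignment described in this abstract can be illustrated with a small sketch (not the paper's implementation; the bit stream and skew values are invented): a relayed stream is slid against a reference stream, and the shift with the highest bit-level agreement is taken as the relay's time skew before merging.

```python
# Sketch: estimating a fixed relay time skew between two telemetry bit
# streams by maximizing bit-level agreement. Data values are made up.

def estimate_skew(reference, delayed, max_skew):
    """Return the shift (in samples) that best aligns `delayed` to `reference`."""
    best_shift, best_score = 0, -1
    for shift in range(max_skew + 1):
        overlap = min(len(reference), len(delayed) - shift)
        # Count agreeing bits at this candidate shift.
        score = sum(1 for i in range(overlap) if reference[i] == delayed[i + shift])
        if score > best_score:
            best_shift, best_score = shift, score
    return best_shift

# A bit stream relayed through a ground station arrives 3 samples late.
frame = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0]
relayed = [0, 1, 1] + frame  # 3-sample fixed skew from the relay path
print(estimate_skew(frame, relayed, max_skew=5))  # -> 3
```

A land-based relay would use one fixed shift like this; airborne relays, whose skew varies, would need the estimate refreshed over time.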
Papadopoulos, Georgios. "Towards a 3D building reconstruction using spatial multisource data and computational intelligence techniques". Thesis, Limoges, 2019. http://www.theses.fr/2019LIMO0084/document.
Building reconstruction from aerial photographs and other multi-source urban spatial data is a task undertaken using a plethora of automated and semi-automated methods ranging from point processes and classic image processing to laser scanning. In this thesis, an iterative relaxation system is developed based on the examination of the local context of each edge according to multiple spatial input sources (optical, elevation, shadow and foliage masks, as well as other pre-processed data as elaborated in Chapter 6). All these multisource and multiresolution data are fused so that probable line segments or edges are extracted that correspond to prominent building boundaries. Two novel sub-systems have also been developed in this thesis. They were designed to provide additional, more reliable information regarding building contours in a future version of the proposed relaxation system. The first is a deep convolutional neural network (CNN) method for the detection of building borders. In particular, the network is based on the state-of-the-art super-resolution model SRCNN (Dong C. L., 2015). It accepts aerial photographs depicting densely populated urban areas as well as their corresponding digital elevation maps (DEM). Training is performed using three variations of this urban data set and aims at detecting building contours through a novel super-resolved heteroassociative mapping. Another innovation of this approach is the design of a modified custom loss layer named Top-N. In this variation, the mean square error (MSE) between the reconstructed output image and the provided ground truth (GT) image of building contours is computed on the 2N image pixels with the highest values. Assuming that most of the N contour pixels of the GT image are also among the top 2N pixels of the reconstruction, this modification balances the two pixel categories and improves the generalization behavior of the CNN model.
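One reading of the Top-N loss described above can be sketched as follows. The exact pixel-selection rule is an assumption on my part (here the 2N pixels are chosen by reconstructed value), and the arrays are illustrative, not the thesis data:

```python
import numpy as np

def top_n_mse(pred, gt, n):
    """MSE restricted to the 2N pixels where `pred` is largest --
    one possible reading of the Top-N loss."""
    flat_pred, flat_gt = pred.ravel(), gt.ravel()
    idx = np.argsort(flat_pred)[-2 * n:]   # indices of the 2N highest predictions
    diff = flat_pred[idx] - flat_gt[idx]
    return float(np.mean(diff ** 2))

pred = np.array([[0.9, 0.1], [0.8, 0.2]])
gt = np.array([[1.0, 0.0], [1.0, 0.0]])
# With n=1, only the 2 highest predicted pixels (0.9 and 0.8) contribute,
# so the loss is the mean of 0.01 and 0.04.
print(round(top_n_mse(pred, gt, n=1), 6))  # -> 0.025
```

Restricting the loss this way keeps the overwhelmingly numerous background pixels from drowning out the few contour pixels, which is the balancing effect the abstract claims.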
The experiments show that the Top-N cost function offers performance gains in comparison to the standard MSE. Further improvement in the generalization ability of the network is achieved by using dropout. The second sub-system is a super-resolution deep convolutional network, which performs an enhanced-input associative mapping between input low-resolution and high-resolution images. This network has been trained with low-resolution elevation data and the corresponding high-resolution optical urban photographs. Such a resolution discrepancy between optical aerial/satellite images and elevation data is often the case in real-world applications. More specifically, low-resolution elevation data augmented by high-resolution optical aerial photographs are used with the aim of increasing the resolution of the elevation data. This is a unique super-resolution problem, for which it was found that many of the proposed general-image SR methods do not perform as well. The network, aptly named building super-resolution CNN (BSRCNN), is trained using patches extracted from the aforementioned data. Results show that, in comparison with a classic bicubic upscale of the elevation data, the proposed implementation offers an important improvement, as attested by modified PSNR and SSIM metrics. In comparison, other proposed general-image SR methods performed worse than a standard bicubic up-scaler. Finally, the relaxation system fuses together all these multisource data, comprising pre-processed optical data, elevation data, foliage masks, shadow masks and other pre-processed data, in an attempt to assign confidence values to each pixel belonging to a building contour. Confidence is augmented or decremented iteratively until the MSE falls below a specified threshold or a maximum number of iterations has been executed. The confidence matrix can then be used to extract the true building contours via thresholding.
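The iterate-until-convergence loop over per-pixel confidence can be caricatured in a few lines (a schematic sketch, not the thesis system: the update rule, rate, threshold and evidence values are all invented):

```python
def relax(confidence, evidence, rate=0.5, threshold=1e-4, max_iter=100):
    """Nudge per-pixel confidence toward fused multisource evidence until the
    mean squared update falls below `threshold` (schematic relaxation loop)."""
    for it in range(max_iter):
        mse = 0.0
        for i, e in enumerate(evidence):
            delta = rate * (e - confidence[i])  # move toward the evidence
            confidence[i] += delta
            mse += delta * delta
        if mse / len(confidence) < threshold:   # converged: updates are tiny
            return confidence, it + 1
    return confidence, max_iter

conf, iters = relax([0.5, 0.5, 0.5], [1.0, 0.0, 0.9])
contours = [i for i, c in enumerate(conf) if c > 0.5]  # final thresholding step
print(contours)  # -> [0, 2]
```

The final list comprehension plays the role of the thresholding the abstract mentions: pixels whose converged confidence clears the cut are kept as building contour.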
Bascol, Kevin. "Adaptation de domaine multisource sur données déséquilibrées : application à l'amélioration de la sécurité des télésièges". Thesis, Lyon, 2019. http://www.theses.fr/2019LYSES062.
Bluecime has designed a camera-based system to monitor the boarding station of chairlifts in ski resorts, which aims at increasing the safety of all passengers. This already successful system does not use any machine learning component and requires an expensive configuration step. Machine learning is a subfield of artificial intelligence concerned with studying and designing algorithms that can learn and acquire knowledge from examples for a given task. Such a task could be classifying safe or unsafe situations on chairlifts from examples of images already labeled with these two categories, called the training examples. The machine learning algorithm learns a model able to predict one of these two categories on unseen cases. Since 2012, it has been shown that deep learning models are the machine learning models best suited to image classification problems when large amounts of training data are available. In this context, this PhD thesis, funded by Bluecime, aims at improving both the cost and the effectiveness of Bluecime's current system using deep learning.
Ben, Hassine Soumaya. "Évaluation et requêtage de données multisources : une approche guidée par la préférence et la qualité des données : application aux campagnes marketing B2B dans les bases de données de prospection". Thesis, Lyon 2, 2014. http://www.theses.fr/2014LYO22012/document.
In Business-to-Business (B-to-B) marketing campaigns, generating "the highest volume of sales at the lowest cost" and achieving the best return on investment (ROI) is a significant challenge. ROI performance depends on a set of subjective and objective factors such as dialogue strategy, invested budget, marketing technology and organisation, and above all data and, particularly, data quality. However, data issues in marketing databases are overwhelming, leading to insufficient target knowledge that handicaps B-to-B salespersons when interacting with prospects. B-to-B prospection data is indeed mainly structured through a set of independent, heterogeneous, separate and sometimes overlapping files that form a messy multisource prospect selection environment. Data quality thus appears as a crucial issue when dealing with prospection databases. Moreover, beyond data quality, the ROI metric mainly depends on campaign costs. Given the vagueness of (direct and indirect) cost definition, we limit our focus to price considerations. Price and quality thus define the fundamental constraints data marketers consider when designing a marketing campaign file, as they typically look for the "best-qualified selection at the lowest price". However, this goal is not always reachable and compromises often have to be made. A compromise must first be modelled and formalized, and then deployed for multisource selection issues. In this thesis, we propose a preference-driven selection approach for multisource environments that aims at: 1) modelling and quantifying decision makers' preferences, and 2) defining and optimizing a selection routine based on these preferences. Concretely, we first deal with the data marketer's quality-preference modelling by appraising multisource data using robust evaluation criteria (quality dimensions) that are rigorously summarized into a global quality score.
Based on this global quality score and data price, we exploit in a second step a preference-based selection algorithm to return "the best qualified records bearing the lowest possible price". An optimisation algorithm, BrokerACO, is finally run to generate the best selection result.
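The "best-qualified selection at the lowest price" idea can be sketched under simple assumptions. The records, quality dimensions, weights and the aggregation into a global score are all invented for illustration; BrokerACO itself is not reproduced here:

```python
# Hypothetical multisource records: each carries a price and two invented
# quality dimensions that get summarized into one global quality score.
records = [
    {"source": "A", "price": 0.10, "completeness": 0.9, "freshness": 0.6},
    {"source": "B", "price": 0.05, "completeness": 0.7, "freshness": 0.8},
    {"source": "C", "price": 0.05, "completeness": 0.9, "freshness": 0.9},
]
weights = {"completeness": 0.6, "freshness": 0.4}  # decision maker's preference

def global_quality(rec):
    """Weighted summary of the quality dimensions into one score."""
    return sum(w * rec[dim] for dim, w in weights.items())

def best_selection(recs, min_quality):
    """Cheapest source whose global quality score meets the preference."""
    qualified = [r for r in recs if global_quality(r) >= min_quality]
    return min(qualified, key=lambda r: r["price"])["source"] if qualified else None

print(best_selection(records, min_quality=0.8))  # -> C
```

When no record clears the quality bar, the sketch returns `None`; the thesis's contribution is precisely how to model and optimize the compromise instead of failing in that case.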
Mondésir, Jacques Philémon. "Apports de la texture multibande dans la classification orientée-objets d'images multisources (optique et radar)". Master's thesis, Université de Sherbrooke, 2016. http://hdl.handle.net/11143/9706.
Abstract: Texture has a good discriminating power that complements the radiometric parameters in the image classification process. The multiband Compact Texture Unit (CTU) index, recently developed by Safia and He (2014), makes it possible to extract texture from several bands at a time, thus taking advantage of extra information not considered in traditional textural analysis: the interdependence between bands. However, this new tool had not yet been tested on multisource images, a use that could be an interesting added value considering, for example, all the textural richness that radar can provide in addition to optics when the data are combined. This study completes the validation initiated by Safia (2014) by applying the CTU to an optical-radar dataset. The textural analysis of this multisource data allowed the production of a "colour texture" image. These newly created textural bands were then combined with the initial optical bands before their use in a land-cover classification process in eCognition. The same classification process (but without CTU) was applied respectively to the optical data, then the radar data, and finally to the optical-radar combination. In addition, the CTU generated from the optical data alone (monosource) was compared to the CTU derived from the optical-radar pair (multisource). Analysis of the separating power of these different bands (radiometric and textural) with histograms, together with the confusion matrix, allowed comparison of the performance of these different scenarios and classification parameters. These comparators show the CTU, and in particular the multisource CTU, as the most discriminating criterion; its presence adds variability to the image, thus allowing a clearer segmentation (homogeneous and non-redundant) and a classification that is both more detailed and more efficient. Indeed, the accuracy rises from 0.5 with the optical image to 0.74 for the CTU image, while confusion decreases from 0.30 (optics) to 0.02 (CTU).
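For readers unfamiliar with texture units, the classic single-band 3x3 texture unit code, which the compact multiband CTU builds on, can be sketched as follows. This is a textbook-style illustration, not Safia and He's CTU implementation:

```python
def texture_unit(patch):
    """Classic 3x3 texture unit code: each of the 8 neighbours is coded
    0/1/2 against the centre pixel, giving a base-3 number in 0..6560.
    (Single-band sketch; the multiband CTU compacts such codes across bands.)"""
    c = patch[1][1]
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for v in neighbours:
        digit = 0 if v < c else (1 if v == c else 2)  # below / equal / above
        code = code * 3 + digit
    return code

# A perfectly flat patch codes every neighbour as "equal" (digit 1).
print(texture_unit([[5, 5, 5], [5, 5, 5], [5, 5, 5]]))  # -> 3280
```

Histogramming these codes over an image yields a texture spectrum; the multiband variant additionally encodes how a pixel relates to its neighbours across bands, which is the inter-band interdependence the abstract highlights.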
Zamite, João Miguel Quintino de Morais. "Multisource epidemic data collector". Master's thesis, 2010. http://hdl.handle.net/10451/2346.
Epidemic surveillance has recently been the subject of development of Web-based information retrieval systems. The majority of these systems extract information directly from users, official epidemic reports or news sources. Others extract epidemic data from Internet-based social network services. The currently existing epidemic surveillance systems are mostly monolithic, not being designed for knowledge sharing or for integration with other applications such as epidemic forecasting tools. In this dissertation, an approach is presented for the creation of a data collection system that enables the integration of data from diverse sources. Based on the principles of interoperability and modularity, this system not only addresses the current needs for data integration but is also extensible, enabling data extraction from future epidemic data sources. This system was developed as a module for the "Epidemic Marketplace" under the EPIWORK project, with the objective of becoming a valuable data source for epidemic modeling and forecasting tools. This document describes the requirements and development stages of this epidemic surveillance system and its evaluation.
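The interoperability-and-modularity principle can be illustrated with a schematic plug-in interface: each source implements one fetch contract, and the collector integrates whatever sources are registered. Class names, methods and records are invented; the actual Epidemic Marketplace module is not reproduced:

```python
from abc import ABC, abstractmethod

class EpidemicSource(ABC):
    """Contract every data source plug-in must satisfy."""
    @abstractmethod
    def fetch(self):
        """Return raw records from one source (news feed, social network, ...)."""

class NewsSource(EpidemicSource):
    def fetch(self):
        return [{"disease": "influenza", "cases": 12, "origin": "news"}]

class SocialSource(EpidemicSource):
    def fetch(self):
        return [{"disease": "influenza", "cases": 3, "origin": "social"}]

def collect(sources):
    """Integrate records from every registered source into one dataset."""
    return [record for src in sources for record in src.fetch()]

data = collect([NewsSource(), SocialSource()])
print(len(data), sorted(r["origin"] for r in data))  # -> 2 ['news', 'social']
```

Adding a future source then means writing one new subclass, with no change to the collector, which is the extensibility claim made above.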
European Commission - EPIWORK project under the Seventh Framework Programme (Grant # 231807), the EPIWORK project partners, CMU-Portugal partnership and FCT (Portuguese research funding agency) for its LaSIGE Multi-annual support
You, Min-Ruei, and 游旻叡. "Deriving deep sea seafood traceability maps using multisource data aggregation". Thesis, 2018. http://ndltd.ncl.edu.tw/handle/remqvk.
National Taiwan Ocean University
Department of Computer Science and Engineering
Academic year 106
Taiwan is an island country surrounded by the ocean and rich in marine resources. According to the Taiwan Fisheries Agency's statistical data, the fishery industry is very important, producing an estimated value of 86 billion NTD per year. However, marine resources are not unlimited and may be depleted if not managed sustainably. Illegal, unreported, and unregulated (IUU) fishing is a major cause of unsustainable fishing and has aroused concern among different sectors of the community. To strengthen the management of deep sea fisheries, the Fisheries Agency, Council of Agriculture, Executive Yuan established the Fisheries Management Center. Since July 1, 2016, the Fisheries Agency has implemented a declaration system for landing and begun to use a new-generation eLogbook system, hoping that these strategies can make monitoring and management more complete. International regulation attaches great importance to the traceability of seafood, which is a key to battling IUU fishing. In Taiwan, the Agricultural Traceability System has already been established; however, there is no such system yet in the deep sea fishery sector. From the landing declarations and the eLogbook system developed by the Fisheries Agency, we can construct a traceability map of deep sea seafood. We apply data aggregation techniques to multisource data and use the Closest Point of Approach (CPA) algorithm to estimate the position where a fishing vessel transships its catch to another vessel. This seafood traceability map system can map catch and transshipment information for major fish products. It provides a web-based visualization interface that can show either the region of catch or the travel map of products. Authorities and users can use this system to understand the source and validity of fish catches.
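For straight-line vessel tracks, the Closest Point of Approach reduces to a closed-form time estimate, after which the transshipment position can be read off the track. A minimal flat-plane sketch (the tracks are made up, and the thesis's aggregation pipeline is not reproduced):

```python
def cpa_time(p1, v1, p2, v2):
    """Time of closest point of approach for two vessels on straight tracks.
    p* and v* are (x, y) position and velocity; a flat-sea approximation."""
    dx, dy = p1[0] - p2[0], p1[1] - p2[1]
    dvx, dvy = v1[0] - v2[0], v1[1] - v2[1]
    dv2 = dvx * dvx + dvy * dvy
    if dv2 == 0:                      # identical velocities: distance is constant
        return 0.0
    return max(0.0, -(dx * dvx + dy * dvy) / dv2)

# Fishing vessel heading east meets a carrier heading north (made-up tracks).
t = cpa_time((0.0, 0.0), (10.0, 0.0), (50.0, -30.0), (0.0, 10.0))
meet_x = 0.0 + 10.0 * t               # estimated transshipment x-coordinate
print(t, meet_x)  # -> 4.0 40.0
```

In practice the positions would come from vessel monitoring reports, so the estimate would be run piecewise between consecutive fixes rather than over one infinite straight line.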
Chen, Lu-Yang, and 陳履洋. "GPU-Acceleration of Multisource Data Fusion for Image Classification Using Nearest Feature Space Approach". Thesis, 2013. http://ndltd.ncl.edu.tw/handle/4598s5.
National Taipei University of Technology
Department of Electrical Engineering
Academic year 101
Disaster damage investigations and scale estimates can provide critical information for follow-up disaster relief responses and interactions. Recently, with the advance of satellite and image processing technology, remote sensing imaging has become a mature and feasible technique for disaster interpretation. In this thesis a nearest feature space (NFS) approach is proposed for landslide hazard assessment using multisource images. In the past, NFS was applied to hyperspectral image classification. The NFS algorithm keeps the original structure of the training samples and calculates the nearest distance between test samples and training samples in the feature space. However, when the number of training samples is large, the computational load of NFS is high. A parallel version of the NFS algorithm based on graphics processing units (GPUs) is proposed to overcome this drawback. The proposed method, based on the Compute Unified Device Architecture (CUDA) programming model, is applied to implement high-performance computing of NFS. The experimental results demonstrate that the NFS approach is effective for land cover classification in the field of Earth remote sensing.
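The distance computation at the heart of the NFS family can be sketched as a least-squares projection onto the space spanned by each class's training samples; the class with the smallest residual wins. This is a serial illustration with toy features; the GPU/CUDA parallelization is the thesis's contribution and is not shown:

```python
import numpy as np

def nfs_distance(x, class_samples):
    """Distance from sample x to the feature space spanned by one class's
    training samples, via least-squares projection."""
    A = np.asarray(class_samples, dtype=float).T   # columns span the space
    coef, *_ = np.linalg.lstsq(A, x, rcond=None)
    return float(np.linalg.norm(x - A @ coef))     # residual = distance

def classify(x, classes):
    """Assign x to the class whose feature space lies nearest."""
    return min(classes, key=lambda c: nfs_distance(x, classes[c]))

classes = {
    "landslide": [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]],  # spans the x-y plane
    "forest":    [[0.0, 0.0, 1.0]],                   # spans the z axis
}
print(classify(np.array([0.7, 0.7, 0.1]), classes))  # -> landslide
```

The per-class projections are independent of each other, which is exactly what makes the method a good fit for the massively parallel GPU implementation the abstract describes.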
Wang, Yi Chun (Benny), and 王怡鈞. "Multisource Data Fusion and Fisher Criterion Based Nearest Feature Space Approach to Landslide Classification". Thesis, 2014. http://ndltd.ncl.edu.tw/handle/rtw5uf.
National Taipei University of Technology
Doctoral Program, Department of Electrical Engineering
Academic year 103
In this dissertation, a novel technique known as the Fisher criterion based nearest feature space (FCNFS) approach is proposed for supervised classification of multisource images for the purpose of landslide hazard assessment. The method is developed for land cover classification based upon the fusion of remotely sensed images of the same scene collected from multiple sources. This dissertation presents a framework for data fusion of multisource remotely sensed images, consisting of two approaches: the band generation process (BGP) and the FCNFS classifier. The multiple adaptive BGP is introduced to create an additional set of bands that are specifically accommodated to the landslide class and are extracted from the original multisource images. In comparison to the original nearest feature space (NFS) method, the proposed FCNFS classifier uses the Fisher criterion of between-class and within-class discrimination to enhance the classifier. In the training phase, the labeled samples are discriminated by the Fisher criterion, which can be treated as a pre-processing step of the NFS method. After completion of the training, the classification results can be obtained from the NFS algorithm. Experimental results show that the proposed BGP/FCNFS framework is suitable for land cover classification in Earth remote sensing and improves the classification accuracy compared to conventional classifiers.
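The Fisher criterion used to pre-screen the labeled samples can be illustrated per feature as a ratio of between-class to within-class scatter. This is a simplified sketch with invented feature values; FCNFS's actual integration of the criterion with the NFS classifier is not reproduced:

```python
import numpy as np

def fisher_score(class_a, class_b):
    """Per-feature Fisher ratio: between-class separation over
    within-class scatter. Higher means more discriminative."""
    a, b = np.asarray(class_a, float), np.asarray(class_b, float)
    between = (a.mean(axis=0) - b.mean(axis=0)) ** 2
    within = a.var(axis=0) + b.var(axis=0)
    return between / within

# Two toy classes described by two features; only feature 0 separates them.
a = [[1.0, 5.0], [2.0, 6.0], [1.5, 5.5]]
b = [[8.0, 5.2], [9.0, 6.1], [8.5, 5.4]]
scores = fisher_score(a, b)
print(scores[0] > scores[1])  # -> True
```

In the FCNFS setting, such a discrimination measure applied to the training samples acts as the pre-processing step the abstract describes, before the NFS distance computation takes over.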
Corongiu, Manuela. "Modelling Railways in the Context of Interoperable Geospatial Data". Doctoral thesis, 2021. http://hdl.handle.net/2158/1240386.
Huang, Zhi. "Individual and combined AI models for complicated predictive forest type mapping using multisource GIS data". PhD thesis, 2004. http://hdl.handle.net/1885/148517.
RUBEENA. "OBJECT-BASED CLASSIFICATION USING MULTI-RESOLUTION IMAGE FUSION". Thesis, 2019. http://dspace.dtu.ac.in:8080/jspui/handle/repository/18082.
Trivedi, Neeta. "Robust, Energy‐efficient Distributed Inference in Wireless Sensor Networks With Applications to Multitarget Tracking". Thesis, 2014. https://etd.iisc.ac.in/handle/2005/4569.