Academic literature on the topic 'Multisources data'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Multisources data.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Multisources data"

1

Cui, Chen, Shuang Wu, Zhenyong Wang, Qing Guo, and Wei Xiang. "A Polar Codes-Based Distributed UEP Scheme for the Internet of Things." Wireless Communications and Mobile Computing 2021 (December 16, 2021): 1–10. http://dx.doi.org/10.1155/2021/5875797.

Full text
Abstract:
The Internet of Things (IoT), which is expected to support a massive number of devices, is a promising communication scenario. Usually, the data of different devices has different reliability requirements. Channel codes with the unequal error protection (UEP) property are rather appealing for such applications. Due to the power-constrained characteristic of IoT services, most of the data is carried in short packets; therefore, channel codes are of short lengths. Consequently, how to transmit such nonuniform data from multisources efficiently and reliably becomes an issue to be solved urgently. To address this issue, this paper proposes a distributed coding scheme based on polar codes which can provide the UEP property. The distributed polar codes are realized by a groundbreaking method of combining noisy coded bits. With the proposed coding scheme, the various data from multisources can be recovered with a single common decoder. Various reliability levels can be achieved; thus, UEP is provided. Finally, the simulation results show that the proposed coding scheme is viable.
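The UEP mechanism this abstract relies on comes from channel polarization: some synthetic channels become more reliable than others, and higher-priority data is mapped to the better positions. A minimal stdlib sketch over a binary erasure channel (the BEC recursion and the two-class split are generic illustrations, not the paper's distributed scheme):

```python
import math

def bec_bhattacharyya(n_levels, erasure_prob):
    """Bhattacharyya parameters of the 2**n_levels polarized channels
    of a polar code over a BEC, via the standard recursion
    z- = 2z - z^2 (worse channel), z+ = z^2 (better channel)."""
    z = [erasure_prob]
    for _ in range(n_levels):
        z = [v for pair in ((2 * x - x * x, x * x) for x in z) for v in pair]
    return z

def uep_index_sets(z, n_high, n_low):
    """Assign the most reliable positions to high-priority data and
    the next most reliable to low-priority data (smaller z = better)."""
    order = sorted(range(len(z)), key=lambda i: z[i])
    return set(order[:n_high]), set(order[n_high:n_high + n_low])

z = bec_bhattacharyya(4, 0.5)          # 16 synthetic channels
high, low = uep_index_sets(z, 4, 4)
avg = lambda s: sum(z[i] for i in s) / len(s)
print(avg(high) < avg(low))            # True: high class is more reliable
```

The two index sets give two error-protection classes with one code; a real construction would also fix frozen positions and use a density-evolution-based reliability estimate rather than the BEC recursion.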
APA, Harvard, Vancouver, ISO, and other styles
2

Yan, Puchen, Qisheng Han, Yangming Feng, and Shaozhong Kang. "Estimating LAI for Cotton Using Multisource UAV Data and a Modified Universal Model." Remote Sensing 14, no. 17 (August 30, 2022): 4272. http://dx.doi.org/10.3390/rs14174272.

Abstract:
Leaf area index (LAI) is an important indicator of crop growth and water status. With the continuous development of precision agriculture, estimating LAI using unmanned aerial vehicle (UAV) remote sensing has received extensive attention due to its low cost, high throughput, and accuracy. In this study, multispectral and light detection and ranging (LiDAR) sensors carried by a UAV were used to obtain multisource data of a cotton field. Methods to accurately relate ground-measured data to UAV data were built using empirical statistical regression models and machine learning models (RFR, SVR, and ANN). In addition to the traditional spectral parameters, it is also feasible to estimate LAI using structural parameters obtained from UAV LiDAR. Machine learning models, especially the RFR model (R2 = 0.950, RMSE = 0.332), can estimate cotton LAI more accurately than empirical statistical regression models. Cotton datasets from different plots and years were used to test model robustness and generality; although the accuracy of the machine learning models decreased overall, the estimation accuracy based on structural and multisource parameters was still acceptable. However, selecting appropriate input parameters for different canopy opening and closing statuses can alleviate the degradation of accuracy: multisource parameters are selected before canopy closure, while structural parameters are selected after canopy closure. Finally, we propose a gap fraction model based on an LAImax threshold at various periods of cotton growth that can estimate cotton LAI with high accuracy, particularly when the calculation grid is 20 cm (R2 = 0.952, NRMSE = 12.6%). This method does not require much data modeling and has strong universality. It can be widely used in cotton LAI prediction in a variety of environments.
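The gap fraction model mentioned at the end of the abstract can be illustrated with the standard Beer-Lambert inversion; the extinction coefficient k = 0.5 and the LAImax cap of 6.0 below are assumed placeholders, not values from the paper:

```python
import math

def lai_from_gap_fraction(gap_fraction, k=0.5, lai_max=6.0):
    """Invert the Beer-Lambert law, LAI = -ln(P0)/k, where P0 is the
    canopy gap fraction measured on a grid cell (e.g. from the share of
    UAV LiDAR returns reaching the ground) and k is an assumed
    extinction coefficient. The result is capped at an LAImax
    threshold, analogous to the paper's per-period threshold."""
    if not 0.0 < gap_fraction <= 1.0:
        raise ValueError("gap fraction must be in (0, 1]")
    return min(-math.log(gap_fraction) / k, lai_max)

print(round(lai_from_gap_fraction(0.30), 3))   # 2.408
```

In practice k varies with leaf angle distribution and growth stage, which is why the paper recalibrates by period and grid size.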
3

Hamze, Ouazene, Chebbo, and Maatouk. "Multisources of Energy Contracting Strategy with an Ecofriendly Factor and Demand Uncertainties." Energies 12, no. 20 (October 16, 2019): 3928. http://dx.doi.org/10.3390/en12203928.

Abstract:
This study presents a mathematical formulation to optimize contracting capacity strategies of multisources of energy for institutional or industrial consumers considering demand uncertainties. The objective consists of minimizing the total costs composed of the different types of energy contract capacity costs, penalty price, and an ecofriendly factor. The penalty price is charged on the demand of energy exceeding the total contract capacities. The ecofriendly factor encourages the use of renewable energy and reduces the traditional energy used in the optimal mix of energy sources. The proposed model is tested based on demand of energy inspired from real data. These numerical experiments are analyzed to illustrate the impact of encouraging the use of renewable energy sources by introducing the ecofriendly factor and the influence of penalty price and uncertainty in the demand of energy. The results show that in the presence of low penalty price or low uncertainty, a large amount of ecofriendly support is needed for using more renewable energy sources in the optimal contract capacity combination.
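The cost structure described above (contract costs plus a penalty on demand exceeding total capacity, offset by an ecofriendly credit) can be sketched with a brute-force search over a small capacity grid; all coefficients below are illustrative assumptions, not the paper's data:

```python
from itertools import product

def expected_cost(capacities, prices, demands, penalty, eco_bonus, renewable_idx):
    """Contract cost + expected penalty on demand exceeding total
    capacity, minus an ecofriendly credit on renewable capacity.
    (Illustrative cost structure with equiprobable demand scenarios.)"""
    contract = sum(c * p for c, p in zip(capacities, prices))
    total = sum(capacities)
    exp_penalty = sum(penalty * max(0.0, d - total) for d in demands) / len(demands)
    eco = eco_bonus * sum(capacities[i] for i in renewable_idx)
    return contract + exp_penalty - eco

prices = [3.0, 5.0]                  # conventional, renewable ($/unit)
demands = [8, 10, 12]                # equiprobable demand scenarios
grid = range(0, 15)
best = min(product(grid, repeat=2),
           key=lambda c: expected_cost(c, prices, demands,
                                       penalty=20.0, eco_bonus=3.0,
                                       renewable_idx=[1]))
print(best)   # (0, 12): the eco credit makes renewables the cheaper option
```

With this parameterization the credit drops the effective renewable price below the conventional one, so the optimum covers the worst-case demand entirely with renewable capacity; shrinking the penalty or the credit shifts the mix back, which is the trade-off the paper's experiments explore.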
4

Xi, Jianjun, and Wenben Li. "2.5D Inversion Algorithm of Frequency-Domain Airborne Electromagnetics with Topography." Mathematical Problems in Engineering 2016 (2016): 1–10. http://dx.doi.org/10.1155/2016/1468514.

Abstract:
We present a 2.5D inversion algorithm with topography for frequency-domain airborne electromagnetic data. The forward modeling is based on the edge finite element method and uses irregular hexahedra to adapt to the topography. The electric and magnetic fields are split into primary (background) and secondary (scattered) fields to eliminate the source singularity. For the multisources of the frequency-domain airborne electromagnetic method, we use the large-scale sparse-matrix parallel shared-memory direct solver PARDISO to solve the linear system of equations efficiently. The inversion algorithm is based on the Gauss-Newton method, which has an efficient convergence rate. The Jacobian matrix is calculated efficiently by "adjoint forward modelling". The synthetic inversion examples indicate that our proposed method is correct and effective. Furthermore, ignoring the topography effect can lead to incorrect results and interpretations.
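The Gauss-Newton iteration at the heart of such an inversion can be illustrated on a one-parameter toy problem; this is a generic sketch of the method, not the paper's electromagnetic forward model:

```python
import math

def gauss_newton_decay(xs, ys, m0=0.1, iters=20):
    """Fit y = exp(-m*x) by Gauss-Newton: with residuals
    r_i = y_i - exp(-m*x_i) and Jacobian J_i = dr_i/dm, apply the
    update m <- m - (J^T J)^{-1} J^T r at each iteration."""
    m = m0
    for _ in range(iters):
        r = [y - math.exp(-m * x) for x, y in zip(xs, ys)]
        j = [x * math.exp(-m * x) for x in xs]   # dr_i/dm
        m -= sum(ji * ri for ji, ri in zip(j, r)) / sum(ji * ji for ji in j)
    return m

xs = [0.5, 1.0, 2.0, 4.0]
ys = [math.exp(-0.7 * x) for x in xs]   # noiseless synthetic data
print(round(gauss_newton_decay(xs, ys), 6))   # 0.7
```

In the multiparameter case J^T J becomes a (regularized) normal-equation matrix, and the adjoint method the abstract mentions computes J^T r without forming J explicitly.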
5

Korenromp, Eline L., Keith Sabin, John Stover, Tim Brown, Leigh F. Johnson, Rowan Martin-Hughes, Debra ten Brink, et al. "New HIV Infections Among Key Populations and Their Partners in 2010 and 2022, by World Region: A Multisources Estimation." JAIDS Journal of Acquired Immune Deficiency Syndromes 95, no. 1S (January 1, 2024): e34-e45. http://dx.doi.org/10.1097/qai.0000000000003340.

Abstract:
Background: Previously, The Joint United Nations Programme on HIV/AIDS estimated proportions of adult new HIV infections among key populations (KPs) in the last calendar year, globally and in 8 regions. We refined and updated these, for 2010 and 2022, using country-level trend models informed by national data. Methods: Infections among 15–49 year olds were estimated for sex workers (SWs), male clients of female SW, men who have sex with men (MSM), people who inject drugs (PWID), transgender women (TGW), and non-KP sex partners of these groups. Transmission models used were Goals (71 countries), AIDS Epidemic Model (13 Asian countries), Optima (9 European and Central Asian countries), and Thembisa (South Africa). Statistical Estimation and Projection Package fits were used for 15 countries. For 40 countries, new infections in 1 or more KPs were approximated from first-time diagnoses by the mode of transmission. Infection proportions among nonclient partners came from Goals, Optima, AIDS Epidemic Model, and Thembisa. For remaining countries and groups not represented in models, median proportions by KP were extrapolated from countries modeled within the same region. Results: Across 172 countries, estimated proportions of new adult infections in 2010 and 2022 were both 7.7% for SW, 11% and 20% for MSM, 0.72% and 1.1% for TGW, 6.8% and 8.0% for PWID, 12% and 10% for clients, and 5.3% and 8.2% for nonclient partners. In sub-Saharan Africa, proportions of new HIV infections decreased among SW, clients, and non-KP partners but increased for PWID; elsewhere these groups' 2010-to-2022 differences were opposite. For MSM and TGW, the proportions increased across all regions. Conclusions: KPs continue to have disproportionately high HIV incidence.
6

Tolosa, Mr Mihiretu Wakwoya. "Action Research on Exploring the Effectiveness of Continuous Assessment on English Common Course in a Case of Plant Science Year I Students Aksum University Shire Campus." IJOHMN (International Journal online of Humanities) 5, no. 4 (August 5, 2019): 1–14. http://dx.doi.org/10.24113/ijohmn.v5i4.112.

Abstract:
This action research aimed mainly to investigate the effectiveness of continuous assessment (CA) in the English common course for first-year plant science students at the Shire campus. The study involved 55 students (M = 15, F = 40) and 1 male English common course instructor as participants. It employed three data-gathering tools: questionnaire, interview, and document analysis. Data obtained from these multisources were analyzed quantitatively and qualitatively, using percentages and verbal description respectively. The findings indicate that there were no pronounced theoretical or practical gaps in the implementation of CA among instructors and students. However, many impeding factors were found to hinder the implementation of CA, among them the large number of students in a section, instructors’ workload, students’ attitudes toward CA, and the lack of specific criteria for checking subjective forms of students’ assignments and project work. Generally, the study proposes actions to tackle the problem, such as lessening teachers’ workload, reducing the number of students in one section in accordance with MEO policy, and proposing clear-cut criteria for checking and giving feedback on subjective assignments. Moreover, instructors need to motivate students to engage in CA and commit themselves to implementing it effectively, thereby contributing to the quality of education. Key words: continuous assessment, effectiveness, exploring.
7

Radwan, Ahmed E. "Integrated reservoir, geology, and production data for reservoir damage analysis: A case study of the Miocene sandstone reservoir, Gulf of Suez, Egypt." Interpretation 9, no. 4 (August 4, 2021): SH27–SH37. http://dx.doi.org/10.1190/int-2021-0039.1.

Abstract:
Reservoir damage is considered one of the major challenges in the oil and gas industry. Many studies have been conducted to understand formation damage mechanisms in borehole wells, but few studies have been conducted to analyze the data to detect the source, causes, and mitigations for each well where damage has occurred. I have investigated and quantified the reasons for and mitigation of reservoir damage problems in the middle Miocene reservoir within the El Morgan oil field at the southern central Gulf of Suez, Egypt. I used integrated production, reservoir, and geologic data sets and their history during different operations to assess the reservoir damage in the El Morgan-XX well. The collected data include the reservoir rock type, fluid, production, core analysis, rock mineralogy, geology, water chemistry, drilling fluids, perforation depth intervals, workover operations, and stimulation history. The integration of different sets of data gave a robust analysis of reservoir damage causes and helped to suggest suitable remediation. Based on these results, I conclude the following: (1) workover fluid has been confirmed as the primary damage source; (2) the reservoir damage mechanisms could be generated by multisources including solids and filtrate invasions, fluid/rock interaction (deflocculation of kaolinite clay), water blockage, salinity shock, and the high sulfate content of the invaded fluid; and (3) multidata integration leads to appropriate reservoir damage analysis and effective design of the stimulation treatment. Furthermore, minimizing fluid invasion into the reservoir section by managing the overbalance during drilling and workover operations could be very helpful. Fluid types and solids should be considered when designing the stimulation treatment, and compatibility tests should be performed.
Leaving completion fluid in boreholes for long periods is not recommended, particularly if the completion fluid pressure and reservoir pressure are out of balance or sensitive formation minerals are present.
8

Li, Chaofeng. "Data Mining-Based Tracking Method for Multisource Target Data of Heterogeneous Networks." Wireless Communications and Mobile Computing 2022 (August 22, 2022): 1–8. http://dx.doi.org/10.1155/2022/1642925.

Abstract:
In order to solve the problem that the target is easily lost during multisource target data fusion tracking, a multisource target data fusion tracking method based on data mining is proposed. Multisource target data fusion tracking belongs to location-level fusion. Firstly, a hybrid heterogeneous network fusion model is established; then, data features are extracted, and a fusion-source big data acquisition algorithm based on compressed sensing is designed to complete data preprocessing and reduce the amount of data acquired. Data-mining association of the multisource fusion targets yields the relationship between each measurement and target, and a multisource target data fusion tracking model is built to keep the fusion results stable. Simulation results show that the proposed method saves tracking time and improves tracking accuracy compared with methods based on NNDA and PDA, making it more suitable for real-time tracking of multisource targets.
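The NNDA baseline the simulations compare against can be sketched in a few lines; the greedy gating logic below is a generic illustration of nearest-neighbour data association, not the paper's fusion model:

```python
import math

def nn_associate(tracks, measurements, gate=5.0):
    """Greedy nearest-neighbour data association: each track grabs the
    closest not-yet-used measurement within a gating distance; tracks
    with no candidate inside the gate stay unassociated."""
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    pairs, used = {}, set()
    for t_id, t_pos in tracks.items():
        cands = [(dist(t_pos, m), j) for j, m in enumerate(measurements)
                 if j not in used and dist(t_pos, m) <= gate]
        if cands:
            _, j = min(cands)
            pairs[t_id] = j
            used.add(j)
    return pairs

tracks = {"A": (0.0, 0.0), "B": (10.0, 0.0)}
meas = [(9.5, 0.4), (0.3, -0.2), (30.0, 30.0)]
print(nn_associate(tracks, meas))   # {'A': 1, 'B': 0}; (30, 30) is clutter
```

NNDA's weakness, which motivates probabilistic (PDA) and mining-based alternatives, is that a single wrong greedy assignment in clutter can lose the track.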
9

Guo, Hongyan, and Xintao Li. "Multisource Target Data Fusion Tracking Method for Heterogeneous Network Based on Data Mining." Wireless Communications and Mobile Computing 2022 (June 10, 2022): 1–10. http://dx.doi.org/10.1155/2022/9291319.

Abstract:
This research addresses a heterogeneous network fusion method for multisource target data based on data mining. Firstly, a distributed storage structure model for heterogeneous network multisource target data is built. Then, using the phase space reconstruction method, a grid distribution structure model for data fusion tracking is constructed, realizing visual scheduling and automatic monitoring of multisource target data. Finally, according to the feature extraction results, the statistical characteristics of multisource target data in heterogeneous networks are analyzed; combined with the fuzzy tomographic analysis method, multilevel fusion and adaptive mining of the multisource target data extract the associated feature quantities and realize fusion tracking of the data. The simulation results show that, in relatively simple heterogeneous networks, the feature mining error of the proposed method is nearly 2.11% lower than that of the two traditional methods, and in relatively complex heterogeneous networks it is nearly 6.48% lower. This method thus adapts better to fusion tracking of heterogeneous network multisource target data, has strong anti-interference ability, and improves tracking accuracy in the data fusion tracking process.
10

Dai Song, 戴嵩, 孙喜明 Sun Ximing, 张精明 Zhang Jingming, 朱永山 Zhu Yongshan, 王斌 Wang Bin, and 宋冬梅 Song Dongmei. "基于多尺度卷积神经网络的多源数据融合岩性分类方法." Laser & Optoelectronics Progress 61, no. 14 (2024): 1437005. http://dx.doi.org/10.3788/lop232491.


Dissertations / Theses on the topic "Multisources data"

1

Berka, Anas. "Smart farming : Système d’aide à la décision basé sur la fusion de données multi-sources." Electronic Thesis or Diss., Bourges, INSA Centre Val de Loire, 2024. http://www.theses.fr/2024ISAB0012.

Abstract:
Le travail présenté dans ce manuscrit traite de l'intégration de données multi-sources, dans le cadre applicatif de l'agriculture de précision, en mettant l'accent sur la fusion de données issues de diverses modalités telles que les images satellitaires, aériennes et de proximité. La fusion de ces sources variées vise à exploiter leurs complémentarités pour améliorer la gestion des cultures, notamment dans le cadre de la détection des maladies. Plusieurs approches basées sur l'apprentissage profond ont ainsi été proposées, notamment les architectures Vision Transformers (ViT) et DeepLab, adaptées à la classification et à la segmentation sémantique. L'une des principales contributions de notre travail de recherche est l'architecture DIFD (Dual-Input Fusion network based on DeepLabV3+), qui permet de combiner les données satellitaires et aériennes pour générer des cartes précises de la couverture végétale. En combinant cette approche avec la détection de proximité à l'aide de capteurs de vision au sol, il est possible de générer des cartes précises de localisation des anomalies. Un cas d'étude exploré dans cette thèse concerne la détection de l'infestation de la cochenille sur le cactus. Une base de données d'images de proximité de cactus a été créée, permettant d'entraîner des modèles de classification pour le diagnostic de l'état de santé du cactus. Dans ce cadre, nous avons développé une application mobile, CactiViT, basée sur un modèle ViT, dans le but d'offrir aux agriculteurs un outil pratique de surveillance de l'état sanitaire du cactus à partir des images acquises avec leurs smartphones, sans nécessiter de connexion Internet. Ce travail met en avant les avantages de l'utilisation de l'intelligence artificielle (IA) et de la vision par ordinateur dans l'agriculture, tout en soulignant les défis techniques liés à l'adaptation des modèles d'IA à des environnements variés.
Les résultats obtenus montrent l'efficacité des approches proposées pour la fusion de données multi-sources et la classification de l'état sanitaire des cactus. Des perspectives d'amélioration ont également été identifiées, notamment l'intégration de nouvelles données, en vue de proposer des solutions plus robustes et généralisables à d'autres types de cultures et d'environnements.
The work presented in this manuscript addresses the integration of multi-source data within the framework of precision agriculture, with a focus on the fusion of data from various modalities, such as satellite, aerial, and proximal images. The fusion of these diverse sources aims to exploit their complementarity to improve crop management, particularly in the context of disease detection. Several approaches based on deep learning have been proposed, notably Vision Transformers (ViT) and DeepLab architectures, tailored for classification and semantic segmentation tasks. One of the main contributions of our research is the DIFD (Dual-Input Fusion network based on DeepLabV3+) architecture, which combines satellite and aerial data to generate precise vegetation cover maps. By integrating this approach with proximity detection using ground-based vision sensors, it becomes possible to produce accurate anomaly location maps. A case study explored in this thesis concerns the detection of cochineal infestation on cacti. A database of proximal cactus images was created, enabling the training of classification models for diagnosing the health status of the cacti. In this context, we developed the mobile application CactiViT, based on a ViT model, aimed at providing farmers with a practical diagnostic tool using images captured with their smartphones, without requiring an Internet connection. This work highlights the advantages of using artificial intelligence (AI) and computer vision in agriculture, while also addressing the technical challenges related to improving AI models for diverse environments. The results demonstrate the effectiveness of the proposed approaches for multi-source data fusion and cactus health status classification. Opportunities for improvement have also been identified, particularly through the integration of new data, with the goal of providing more robust and generalizable solutions for other types of crops and environments.
2

Mondésir, Jacques Philémon. "Apports de la texture multibande dans la classification orientée-objets d'images multisources (optique et radar)." Mémoire, Université de Sherbrooke, 2016. http://hdl.handle.net/11143/9706.

Abstract:
Résumé : La texture dispose d’un bon potentiel discriminant qui complète celui des paramètres radiométriques dans le processus de classification d’image. L’indice Compact Texture Unit (CTU) multibande, récemment mis au point par Safia et He (2014), permet d’extraire la texture sur plusieurs bandes à la fois, donc de tirer parti d’un surcroît d’informations ignorées jusqu’ici dans les analyses texturales traditionnelles : l’interdépendance entre les bandes. Toutefois, ce nouvel outil n’a pas encore été testé sur des images multisources, usage qui peut se révéler d’un grand intérêt quand on considère par exemple toute la richesse texturale que le radar peut apporter en supplément à l’optique, par combinaison de données. Cette étude permet donc de compléter la validation initiée par Safia (2014) en appliquant le CTU sur un couple d’images optique-radar. L’analyse texturale de ce jeu de données a permis de générer une image en « texture couleur ». Ces bandes texturales créées sont à nouveau combinées avec les bandes initiales de l’optique, avant d’être intégrées dans un processus de classification de l’occupation du sol sous eCognition. Le même procédé de classification (mais sans CTU) est appliqué respectivement sur : la donnée Optique, puis le Radar, et enfin la combinaison Optique-Radar. Par ailleurs le CTU généré sur l’Optique uniquement (monosource) est comparé à celui dérivant du couple Optique-Radar (multisources). L’analyse du pouvoir séparateur de ces différentes bandes à partir d’histogrammes, ainsi que l’outil matrice de confusion, permet de confronter la performance de ces différents cas de figure et paramètres utilisés. Ces éléments de comparaison présentent le CTU, et notamment le CTU multisources, comme le critère le plus discriminant ; sa présence rajoute de la variabilité dans l’image permettant ainsi une segmentation plus nette, une classification à la fois plus détaillée et plus performante. 
En effet, la précision passe de 0.5 avec l’image Optique à 0.74 pour l’image CTU, alors que la confusion diminue en passant de 0.30 (dans l’Optique) à 0.02 (dans le CTU).
Abstract: Texture has good discriminating power which complements the radiometric parameters in the image classification process. The multiband Compact Texture Unit (CTU) index, recently developed by Safia and He (2014), allows texture to be extracted from several bands at a time, thus taking advantage of extra information not previously considered in traditional textural analysis: the interdependence between bands. However, this new tool had not yet been tested on multi-source images, a use that could be an interesting added value considering, for example, all the textural richness radar can provide in addition to optics by combining data. This study completes the validation initiated by Safia (2014) by applying the CTU to an optics-radar dataset. The textural analysis of this multisource data produced a "color texture" image. These newly created textural bands were again combined with the initial optical bands before their use in a land cover classification process in eCognition. The same classification process (but without CTU) was applied respectively to: the optics data, then the radar, and finally the optics-radar combination. In addition, the CTU generated on the optics alone (monosource) was compared to the CTU arising from the optics-radar couple (multisource). Analysis of the separating power of these different bands (radiometric and textural) with histograms, together with the confusion matrix tool, allows comparison of the performance of these different scenarios and classification parameters. These comparators show the CTU, including the multisource CTU, as the most discriminating criterion; its presence adds variability to the image, allowing a clearer segmentation (homogeneous and non-redundant) and a classification that is both more detailed and more efficient. Indeed, the accuracy rises from 0.5 with the optics image to 0.74 for the CTU image, while confusion decreases from 0.30 (in optics) to 0.02 (in the CTU).
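The texture-unit idea underlying the CTU can be illustrated with the classical single-band texture unit (the multiband CTU compacts such codes across bands); the row-major neighbour ordering below is an assumption of this sketch:

```python
def texture_unit_number(window):
    """Classical texture unit for one band: compare the 8 neighbours of
    a 3x3 window to its centre pixel and encode the result (0 below,
    1 equal, 2 above) as a base-3 number in [0, 3**8 - 1]. Neighbours
    are taken in row-major order here; published variants fix a
    clockwise ordering instead."""
    flat = [v for row in window for v in row]
    center = flat[4]
    neighbours = flat[:4] + flat[5:]
    code = 0
    for v in neighbours:
        digit = 0 if v < center else (1 if v == center else 2)
        code = code * 3 + digit
    return code

w = [[10, 12, 10],
     [ 9, 10, 11],
     [10, 10,  8]]
print(texture_unit_number(w))   # 3954
```

A histogram of these codes over a region gives a texture spectrum; the multiband extension additionally encodes how the comparison pattern agrees or disagrees across spectral bands.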
3

Ben, Hassine Soumaya. "Évaluation et requêtage de données multisources : une approche guidée par la préférence et la qualité des données : application aux campagnes marketing B2B dans les bases de données de prospection." Thesis, Lyon 2, 2014. http://www.theses.fr/2014LYO22012/document.

Abstract:
Avec l’avènement du traitement distribué et l’utilisation accrue des services web inter et intra organisationnels alimentée par la disponibilité des connexions réseaux à faibles coûts, les données multisources partagées ont de plus en plus envahi les systèmes d’informations. Ceci a induit, dans un premier temps, le changement de leurs architectures du centralisé au distribué en passant par le coopératif et le fédéré ; et dans un deuxième temps, une panoplie de problèmes d’exploitation allant du traitement des incohérences des données doubles à la synchronisation des données distribuées. C’est le cas des bases de prospection marketing où les données sont enrichies par des fichiers provenant de différents fournisseurs. Nous nous intéressons au cadre particulier de construction de fichiers de prospection pour la réalisation de campagnes marketing B-to-B, tâche traitée manuellement par les experts métier. Nous visons alors à modéliser le raisonnement de brokers humains, afin d’optimiser et d’automatiser la sélection du « plan fichier » à partir d’un ensemble de données d’enrichissement multisources. L’optimisation en question s’exprimera en termes de gain (coût, qualité) des données sélectionnées, le coût se limitant à l’unique considération du prix d’utilisation de ces données. Ce mémoire présente une triple contribution quant à la gestion des bases de données multisources. La première contribution concerne l’évaluation rigoureuse de la qualité des données multisources. La deuxième contribution porte sur la modélisation et l’agrégation préférentielle des critères d’évaluation qualité par l’intégrale de Choquet.
La troisième contribution concerne BrokerACO, un prototype d’automatisation et d’optimisation du brokering multisources basé sur l’algorithme heuristique d’optimisation par les colonies de fourmis (ACO) et dont la Pareto-optimalité de la solution est assurée par l’utilisation de la fonction d’agrégation des préférences des utilisateurs définie dans la deuxième contribution. L’efficacité du prototype est montrée par l’analyse de campagnes marketing tests effectuées sur des données réelles de prospection.
In Business-to-Business (B-to-B) marketing campaigns, manufacturing “the highest volume of sales at the lowest cost” and achieving the best return on investment (ROI) score is a significant challenge. ROI performance depends on a set of subjective and objective factors such as dialogue strategy, invested budget, marketing technology and organisation, and above all data and, particularly, data quality. However, data issues in marketing databases are overwhelming, leading to insufficient target knowledge that handicaps B-to-B salespersons when interacting with prospects. B-to-B prospection data is indeed mainly structured through a set of independent, heterogeneous, separate and sometimes overlapping files that form a messy multisource prospect selection environment. Data quality thus appears as a crucial issue when dealing with prospection databases. Moreover, beyond data quality, the ROI metric mainly depends on campaigns costs. Given the vagueness of (direct and indirect) cost definition, we limit our focus to price considerations.Price and quality thus define the fundamental constraints data marketers consider when designing a marketing campaign file, as they typically look for the "best-qualified selection at the lowest price". However, this goal is not always reachable and compromises often have to be defined. Compromise must first be modelled and formalized, and then deployed for multisource selection issues. In this thesis, we propose a preference-driven selection approach for multisource environments that aims at: 1) modelling and quantifying decision makers’ preferences, and 2) defining and optimizing a selection routine based on these preferences. Concretely, we first deal with the data marketer’s quality preference modelling by appraising multisource data using robust evaluation criteria (quality dimensions) that are rigorously summarized into a global quality score. 
Based on this global quality score and data price, we exploit in a second step a preference-based selection algorithm to return "the best qualified records bearing the lowest possible price". An optimisation algorithm, BrokerACO, is finally run to generate the best selection result.
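The preferential aggregation by the Choquet integral mentioned in the abstract can be sketched directly from its discrete definition; the capacity values below are illustrative assumptions, not those elicited in the thesis:

```python
def choquet(scores, mu):
    """Discrete Choquet integral: sort criteria by ascending score and
    weight each score increment by the capacity mu of the coalition of
    criteria whose score reaches that level. mu maps frozensets of
    criteria to [0, 1], with mu(empty) = 0 and mu(all) = 1."""
    order = sorted(scores, key=scores.get)
    total, prev = 0.0, 0.0
    for i, c in enumerate(order):
        coalition = frozenset(order[i:])   # criteria scoring >= scores[c]
        total += (scores[c] - prev) * mu[coalition]
        prev = scores[c]
    return total

# Toy capacity on {quality, price}: the pair is worth more than the sum
# of its parts, modelling a positive interaction between the criteria.
mu = {frozenset(): 0.0,
      frozenset({"quality"}): 0.3,
      frozenset({"price"}): 0.4,
      frozenset({"quality", "price"}): 1.0}
scores = {"quality": 0.9, "price": 0.6}
print(round(choquet(scores, mu), 2))   # 0.69
```

Unlike a weighted mean, the capacity can reward (or penalize) criteria jointly, which is what lets the aggregation encode a decision maker's preference interactions before the ACO search ranks candidate selections.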
4

Fiskio-Lasseter, John Howard Eli. "Specification and solution of multisource data flow problems /." view abstract or download file of text, 2006. http://proquest.umi.com/pqdweb?did=1280151111&sid=1&Fmt=2&clientId=11238&RQT=309&VName=PQD.

Abstract:
Thesis (Ph. D.)--University of Oregon, 2006.
Typescript. Includes vita and abstract. Includes bibliographical references (leaves 150-162). Also available for download via the World Wide Web; free to University of Oregon users.
5

Filiberti, Daniel Paul. "Combined Spatial-Spectral Processing of Multisource Data Using Thematic Content." Diss., Tucson, Arizona : University of Arizona, 2005. http://etd.library.arizona.edu/etd/GetFileServlet?file=file:///data1/pdf/etd/azu%5Fetd%5F1066%5F1%5Fm.pdf&type=application/pdf.

6

Kayani, Amina Josetta. "Critical determinants influencing employee reactions to multisource feedback systems." Thesis, Curtin University, 2009. http://hdl.handle.net/20.500.11937/150.

Abstract:
The current study examines the Multisource Feedback (MSF) system by investigating the impact several MSF design and implementation factors have on employees’ reactions towards the system. The fundamental goal of the research was to advance understanding of what is currently known about effectively implementing multisource feedback systems to maximize favorable employee reactions, acceptance, and perceptions of usefulness. Of the many management feedback trends that have swept organizations in the past decade, few have had the outstanding impact of MSF. Despite the numerous studies on MSF, the empirical literature lacks overall cohesion in identifying critical factors influencing employees’ reactions to MSF. The constructs examined were delimited to those found to have inherent paradoxes, insufficient coverage, or inconclusive or contradictory findings in the extant literature. A series of main research questions, underscoring the main goal of the study, were developed from the gaps identified in the literature to establish which predictors were predominant in influencing employees’ reactions, acceptance, and perceptions of usefulness towards the MSF system. These research questions were formed into hypotheses for testing. The relationships to be tested were integrated into a hypothetical model encompassing several sub-models: the Climate, Reaction, Reaction-Acceptance, Reaction-Perceptions of Usefulness, and Acceptance-Perceptions of Usefulness models, tested in parts using a combination of exploratory factor analysis, correlation analysis, and multiple regression. Further, key informants from each organization and HR managers in three large organizations provided post-survey feedback and information to assist with the elucidation of quantitative findings; this represented the pluralist approach taken in the study. Survey items were derived from extant literature as well as developed specifically for the study.
Further, the items were refined using expert reviewers and a pilot study. A cross-sectional web-based survey was administered to employees from a range of managerial levels in three large Malaysian multinational organizations. A total of 420 useable surveys were received, representing a response rate of 47%.

Self-report data were used to measure the constructs, which were perceptions of the various facets of the MSF. An empirical methodology was used to test the hypotheses, enabling the research questions to be answered and suggesting a final model of Critical Determinants Influencing Employee Reaction to MSF Systems.

The study was conducted in six phases. In the first phase, a literature map was drawn highlighting the gaps in empirical research. In the second phase, a hypothetical model of employees' reaction to MSF was developed from past empirical research and literature on MSF. The third phase involved drafting a survey questionnaire on the basis of the available literature, with input from academics and practitioners alike. The fourth phase entailed pilot testing the survey instrument using both 'paper and pencil' and web-based methods. In the fifth phase, the surveys were administered with the assistance of the key informants of the participating organizations; the data received were analysed using a range of statistical tools within SPSS version 15, and content analysis was utilized to categorize the themes that emerged from an open-ended question. In the sixth and final phase, the empirical results from the quantitative analysis were presented to HR managers to gain a first-hand understanding of the patterns that emerged.

Exploratory factor analysis and reliability analysis indicated that the survey instrument was sound in terms of validity and reliability.
In the Climate model, it was found that all the hypothesized predictors (feedback-seeking environment, control over organizational processes, understanding of organizational events, operational support and political awareness) were positively associated with the psychological climate for MSF implementation; in terms of predictive power, however, control over organizational processes failed to attain significance at the 5% level. In the Reaction model, perceived purpose, perceived anonymity, complexity and rater assignment processes had significant associations with employee reaction to MSF, but perceived anonymity showed poor predictive power in the regression results. As hypothesized, employee reaction was found to be related to MSF acceptance and perceptions of usefulness, and the results indicated that the two latter outcome constructs were related but statistically distinct.

The two-tier pluralist technique of collecting and examining data was a salient feature of the current study. Such a holistic approach to investigating the determinants of employee reaction to MSF allowed for better integration of its theory and practice. The study is believed to make a modest but unique contribution to knowledge, advancing the body of knowledge towards a better understanding of MSF design and implementation issues.

The results have implications for calibrating MSF systems and evaluating the need for, and likely effectiveness of, what has been hailed as one of the powerful new models for management feedback of the past two decades. Suggestions were made about how the results could benefit academia and practitioners alike. Since most organizational and management research has a western ethnocentric bias, the current study contributes eastern evidence, using cases in Malaysia.
APA, Harvard, Vancouver, ISO, and other styles
7

Peterson, Dwight M. "The Merging of Multisource Telemetry Data to Support Over the Horizon Missile Testing." International Foundation for Telemetering, 1995. http://hdl.handle.net/10150/608414.

Full text
Abstract:
International Telemetering Conference Proceedings / October 30-November 02, 1995 / Riviera Hotel, Las Vegas, Nevada
The testing of instrumented missile systems with extended range capabilities presents many challenges to existing T&E and training ranges. Providing over-the-horizon (OTH) telemetry data collection and displaying portions of these data in real time for range safety purposes are just two of the many factors required for successful instrumented range support. Techniques typically used for OTH telemetry data collection include fixed or portable antennas installed at strategic down-range locations, instrumented relay pods installed on chase aircraft, and instrumented high-flying relay aircraft. Multiple data sources from these various locations typically arrive at a central site within a telemetry ground station and must be merged together to determine the best data source for real-time and post-processing purposes. Before multiple telemetered sources can be merged, the time skews caused by the relay of down-range land-based and airborne sources must be taken into account. The time skews are fixed for land-based sources but vary with airborne sources. Various techniques have been used to remove the time skews associated with multiple telemetered sources. These techniques, which involve both hardware and software applications, have been effective, but are expensive as well as application and range dependent. This paper describes the use of a personal computer (PC) based workstation, configured with independent Pulse Code Modulation (PCM) decommutators/bit synchronizers, Inter-Range Instrumentation Group (IRIG) timing, and resident data-merging software, to perform the data merging task. Current technology permits multiple PCM decommutators, each built as a separate VME card, to be installed within a PC-based workstation. Each land-based or airborne source is connected to a dedicated VME-based PCM decommutator/bit synchronizer within the workstation.
After the exercise has been completed, data-merging software resident within the workstation reads the digitized data from each of the disk files and aligns the data on a bit-by-bit basis to determine the optimum merged result. Both time-based and event-based alignment are performed when merging the multiple sources. This technique has application for current TOMAHAWK exercises performed at the Air Force Development Test Center, Eglin Air Force Base (AFB), Florida, and the Naval Air Warfare Center/Weapons Division (NAWC/WD), Point Mugu, California, as well as future TOMAHAWK Baseline Improvement Program (TBIP) testing.
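The bit-by-bit alignment described in the abstract can be illustrated with a short sketch. This is a hypothetical simplification, not the workstation software the paper describes: the skew of a relayed stream is estimated by scanning candidate offsets for the maximum bit agreement, and the aligned streams are merged by majority vote. The function names and the voting rule are illustrative assumptions.

```python
def estimate_skew(reference, relayed, max_skew):
    """Return the bit offset of `relayed` relative to `reference` that
    maximizes the number of matching bits within +/- max_skew positions."""
    best_offset, best_score = 0, -1
    n = min(len(reference), len(relayed))
    for offset in range(-max_skew, max_skew + 1):
        score = 0
        for i in range(n):
            j = i + offset
            if 0 <= j < len(relayed) and reference[i] == relayed[j]:
                score += 1
        if score > best_score:
            best_offset, best_score = offset, score
    return best_offset

def merge_bits(streams, offsets):
    """Majority-vote merge of bit streams after applying per-stream offsets
    (ties resolve to 1); a toy stand-in for the optimum-merge step."""
    n = min(len(s) - max(o, 0) for s, o in zip(streams, offsets))
    merged = []
    for i in range(n):
        votes = [s[i + o] for s, o in zip(streams, offsets) if 0 <= i + o < len(s)]
        merged.append(1 if sum(votes) * 2 >= len(votes) else 0)
    return merged
```

A relayed source delayed by five bit periods would then be detected with `estimate_skew(ref, relayed, 10)` and compensated before the merge.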
APA, Harvard, Vancouver, ISO, and other styles
8

Papadopoulos, Georgios. "Towards a 3D building reconstruction using spatial multisource data and computational intelligence techniques." Thesis, Limoges, 2019. http://www.theses.fr/2019LIMO0084/document.

Full text
Abstract:
Building reconstruction from aerial photographs and other multi-source urban spatial data is a task endeavored using a plethora of automated and semi-automated methods, ranging from point processes and classic image processing to laser scanning. In this thesis, an iterative relaxation system is developed based on the examination of the local context of each edge according to multiple spatial input sources (optical, elevation, shadow and foliage masks, as well as other pre-processed data, as elaborated in Chapter 6). All these multisource and multiresolution data are fused so that probable line segments, or edges, corresponding to prominent building boundaries are extracted.

Two novel sub-systems have also been developed in this thesis. They were designed to provide additional, more reliable information regarding building contours in a future version of the proposed relaxation system. The first is a deep convolutional neural network (CNN) method for the detection of building borders. In particular, the network is based on the state-of-the-art super-resolution model SRCNN (Dong C. L., 2015). It accepts aerial photographs depicting densely populated urban areas as well as their corresponding digital elevation maps (DEM). Training is performed using three variations of this urban data set and aims at detecting building contours through a novel super-resolved heteroassociative mapping. Another innovation of this approach is the design of a modified custom loss layer named Top-N. In this variation, the mean square error (MSE) between the reconstructed output image and the provided ground truth (GT) image of building contours is computed on the 2N image pixels with the highest values. Assuming that most of the N contour pixels of the GT image are also among the top 2N pixels of the reconstruction, this modification balances the two pixel categories and improves the generalization behavior of the CNN model.

The experiments show that the Top-N cost function offers performance gains in comparison to the standard MSE. A further improvement in the generalization ability of the network is achieved by using dropout.

The second sub-system is a super-resolution deep convolutional network, which performs an enhanced-input associative mapping between low-resolution and high-resolution input images. This network has been trained with low-resolution elevation data and the corresponding high-resolution optical urban photographs. Such a resolution discrepancy between optical aerial/satellite images and elevation data is often the case in real-world applications. More specifically, low-resolution elevation data augmented by high-resolution optical aerial photographs are used with the aim of increasing the resolution of the elevation data. This is a unique super-resolution problem, for which many of the proposed general-image SR methods were found not to perform as well. The network, aptly named building super-resolution CNN (BSRCNN), is trained using patches extracted from the aforementioned data. Results show that, in comparison with a classic bicubic upscaling of the elevation data, the proposed implementation offers an important improvement, as attested by modified PSNR and SSIM metrics. In comparison, other proposed general-image SR methods performed worse than a standard bicubic upscaler.

Finally, the relaxation system fuses together all these multisource data, comprising pre-processed optical data, elevation data, foliage masks, shadow masks and other pre-processed data, in an attempt to assign confidence values to each pixel belonging to a building contour. Confidence is augmented or decremented iteratively until the MSE falls below a specified threshold or a maximum number of iterations has been executed. The confidence matrix can then be used to extract the true building contours via thresholding.
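The Top-N idea described in the abstract can be sketched as follows. This is one reading of the description, restricting the MSE to the 2N highest-valued pixels of the reconstruction; the function name is hypothetical and the actual thesis layer may differ in detail (e.g. in how the pixel subset is selected during backpropagation).

```python
import numpy as np

def top_n_mse(output, ground_truth, n):
    """MSE restricted to the 2N pixels of the reconstructed output with the
    highest values -- a sketch of the Top-N loss idea, not the thesis layer."""
    flat_out = output.ravel()
    flat_gt = ground_truth.ravel()
    top = np.argsort(flat_out)[-2 * n:]      # indices of the 2N largest output pixels
    diff = flat_out[top] - flat_gt[top]
    return float(np.mean(diff ** 2))
```

Because GT contour pixels are a small minority of the image, evaluating the error only on the top-scoring pixels keeps the loss from being dominated by the easy background class.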
APA, Harvard, Vancouver, ISO, and other styles
9

Bascol, Kevin. "Adaptation de domaine multisource sur données déséquilibrées : application à l'amélioration de la sécurité des télésièges." Thesis, Lyon, 2019. http://www.theses.fr/2019LYSES062.

Full text
Abstract:
Bluecime has designed a camera-based system to monitor the boarding stations of chairlifts in ski resorts, which aims at increasing the safety of all passengers. This already successful system does not use any machine learning component and requires an expensive configuration step. Machine learning is a subfield of artificial intelligence concerned with studying and designing algorithms that can learn and acquire knowledge from examples for a given task. Such a task could be classifying safe or unsafe situations on chairlifts from examples of images already labeled with these two categories, called the training examples. The machine learning algorithm learns a model able to predict one of these two categories on unseen cases. Since 2012, it has been shown that deep learning models are the machine learning models best suited to image classification problems when abundant training data are available. In this context, this PhD thesis, funded by Bluecime, aims at improving both the cost and the effectiveness of Bluecime's current system using deep learning.
APA, Harvard, Vancouver, ISO, and other styles
10

Lahssini, Kamel. "Potentiel et défis du LiDAR spatial pour la cartographie multisource de la hauteur de la canopée dans les forêts tropicales." Electronic Thesis or Diss., Paris, AgroParisTech, 2024. http://www.theses.fr/2024AGPT0010.

Full text
Abstract:
The REDD+ program aims at reducing carbon emissions resulting from forest degradation and deforestation. To meet its objectives, a better understanding and characterization of forests worldwide, especially tropical forests, is required. This doctoral thesis explores the potential and challenges of spaceborne LiDAR for multi-source canopy height mapping through the integration of complementary remote sensing technologies to enhance the estimation of canopy height in tropical forests. Canopy height is a critical parameter for quantifying the biomass and carbon stocks of forests. This research is structured in four studies, each corresponding to a journal article, which collectively address the use of spaceborne LiDAR and other remote sensing data for canopy height mapping. The first study investigates the accuracy of canopy height estimates derived from Global Ecosystem Dynamics Investigation (GEDI) LiDAR data over tropical forests in French Guiana (South America) and Gabon (Africa). The results reveal that regression models incorporating multiple GEDI metrics significantly improve the estimation accuracy compared to using a single GEDI height metric. Physical signal parameters such as beam type and sensitivity, which affect laser penetration, are identified as major influences on the quality of GEDI data. The second study extends the analysis of GEDI-derived canopy height estimates to the tropical forests of Mayotte Island, characterized by moderate canopy heights and steep terrain. Terrain slope proves to significantly impact the measurements retrieved by GEDI and needs to be accounted for when dealing with data acquired over steep areas. Accounting for slope can be done through the integration of terrain information in the regression models or through the direct correction of slope effects in the GEDI waveforms.

Moreover, the LiDAR beam penetration capability appears to be strongly dependent on forest characteristics, as the signal penetration depth differs across forest types. The third study presents a comprehensive canopy height map of French Guiana at a 10 m spatial resolution, produced through an operational data fusion approach that integrates optical, radar, and ancillary environmental data sources. The study employs a U-Net neural network model trained and validated using GEDI data as reference canopy heights. The integration of hydrological and geomorphological descriptors, such as the height above the nearest drainage and forest landscape types, significantly enhances the model's accuracy. Moreover, addressing the uncertainties of the GEDI database, through the filtering of relevant data and the correction of geolocation errors, further improves the performance of the U-Net canopy height estimation. In the fourth and final study, the spatial correlation of this canopy height map is analyzed, and geostatistical approaches are implemented to improve the canopy height predictions. This analysis shows that the characteristics of the GEDI sensor, particularly the laser beam type and the ground sampling pattern, introduce sensor-induced anisotropies in the measurements that are not representative of the actual spatial variability of the canopy height. The residual kriging spatial interpolation technique addresses the spatial autocorrelation of canopy heights and improves the accuracy of the estimates.
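The residual kriging step mentioned in the abstract can be sketched as follows. This is a minimal illustration assuming a simple exponential covariance with hand-picked parameters (`sill`, `rng`) and simple kriging of the residuals; the thesis fits a variogram to the actual canopy height residuals, and all names here are illustrative.

```python
import numpy as np

def exp_cov(h, sill=1.0, rng=1000.0):
    """Exponential covariance model (assumed parameters, for illustration)."""
    return sill * np.exp(-h / rng)

def simple_krige_residuals(xy_obs, residuals, xy_new, sill=1.0, rng=1000.0):
    """Simple kriging of model residuals at new locations."""
    d_obs = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
    K = exp_cov(d_obs, sill, rng) + 1e-9 * np.eye(len(xy_obs))  # jitter for stability
    d_new = np.linalg.norm(xy_new[:, None, :] - xy_obs[None, :, :], axis=-1)
    k = exp_cov(d_new, sill, rng)
    weights = np.linalg.solve(K, k.T)          # one weight column per new location
    return weights.T @ residuals

def residual_kriging(predict, xy_obs, y_obs, xy_new, **cov_kw):
    """Regression prediction plus kriged residual at the new locations."""
    resid = y_obs - predict(xy_obs)
    return predict(xy_new) + simple_krige_residuals(xy_obs, resid, xy_new, **cov_kw)
```

Near the GEDI footprints the kriged residual pulls the map back towards the reference heights, while far from any observation the correction decays to zero and the regression prediction is returned unchanged.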
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Multisources data"

1

Wright, Bruce E., and Geological Survey (U.S.), National Mapping Division, eds. Integrating multisource land use and land cover data. [Reston, Va.]: U.S. Dept. of the Interior, U.S. Geological Survey, National Mapping Division, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Mahler, Ronald P. S. Advances in statistical multisource-multitarget information fusion. Boston: Artech House, 2014.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Dasarathy, Belur V., and Society of Photo-optical Instrumentation Engineers, eds. Multisensor, multisource information fusion: Architectures, algorithms, and applications 2006: 19-20 April 2006, Kissimmee, Florida, USA. Bellingham, Wash.: SPIE, 2006.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Dasarathy, Belur V., Society of Photo-optical Instrumentation Engineers, and Ball Aerospace & Technologies Corporation (USA), eds. Multisensor, multisource information fusion: Architectures, algorithms, and applications 2005: 30-31 March 2005, Orlando, Florida, USA. Bellingham, Wash.: SPIE, 2005.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Dasarathy, Belur V., and Society of Photo-optical Instrumentation Engineers, eds. Multisensor, multisource information fusion: Architectures, algorithms, and applications 2007: 11-12 April 2007, Orlando, Florida, USA. Bellingham, Wash.: SPIE, 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Dasarathy, Belur V., and Society of Photo-optical Instrumentation Engineers, eds. Multisensor, multisource information fusion: Architectures, algorithms, and applications 2003: 23-25 April 2003, Orlando, Florida, USA. Bellingham, Wash.: SPIE, 2003.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Braun, Jerome J. Multisensor, multisource information fusion: Architectures, algorithms, and applications 2011 : 27-28 April 2011, Orlando, Florida, United States. Bellingham, Wash: SPIE, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Braun, Jerome J. Multisensor, multisource information fusion: Architectures, algorithms, and applications 2010 : 7-8 April 2010, Orlando, Florida, United States. Bellingham, Wash: SPIE, 2010.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Kim, Hakil. A method of classification for multisource data in remote sensing based on interval-valued probabilities. West Lafayette, Indiana: Laboratory for Applications of Remote Sensing and School of Electrical Engineering, Purdue University, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Kim, Hakil. A method of classification for multisource data in remote sensing based on interval-valued probabilities. West Lafayette, Ind: Laboratory for Applications of Remote Sensing, Purdue University, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Multisources data"

1

Zamite, João, Fabrício A. B. Silva, Francisco Couto, and Mário J. Silva. "MEDCollector: Multisource Epidemic Data Collector." In Transactions on Large-Scale Data- and Knowledge-Centered Systems IV, 40–72. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-23740-9_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Zamite, João, Fabrício A. B. Silva, Francisco Couto, and Mário J. Silva. "MEDCollector: Multisource Epidemic Data Collector." In Information Technology in Bio- and Medical Informatics, ITBAM 2010, 16–30. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15020-3_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Kavzoglu, Taskin, Brandt Tso, and Paul M. Mather. "Multisource Image Fusion and Classification." In Classification Methods for Remotely Sensed Data, 124–64. 3rd ed. Boca Raton: CRC Press, 2024. http://dx.doi.org/10.1201/9781003439172-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Gao, Xueyuan, and Fuyuan Xiao. "A Generalized $$\chi ^2$$ Divergence for Multisource Information Fusion." In Data Mining and Big Data, 175–84. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-7502-7_20.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Waske, Björn, and Jón Atli Benediktsson. "Decision Fusion, Classification of Multisource Data." In Encyclopedia of Remote Sensing, 140–44. New York, NY: Springer New York, 2014. http://dx.doi.org/10.1007/978-0-387-36699-9_34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Ardagna, Danilo, Cinzia Cappiello, Chiara Francalanci, and Annalisa Groppi. "Brokering Multisource Data with Quality Constraints." In On the Move to Meaningful Internet Systems 2006: CoopIS, DOA, GADA, and ODBASE, 807–17. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11914853_49.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Rhodes, Philip J., R. Daniel Bergeron, and Ted M. Sparr. "A Data Model for Distributed Multiresolution Multisource Scientific Data." In Mathematics and Visualization, 297–317. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-642-55787-3_18.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Liu, Yan. "Traps in Multisource Heterogeneous Big Data Processing." In Artificial Intelligence on Fashion and Textiles, 229–35. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-99695-0_28.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Rhodes, Philip J., R. Daniel Bergeron, and Ted M. Sparr. "Database Support for Multisource Multiresolution Scientific Data." In SOFSEM 2002: Theory and Practice of Informatics, 94–114. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-36137-5_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Ignaciuk, Przemysław, and Andrzej Bartoszewicz. "Flow Control in a Multisource Discrete-Time System." In Congestion Control in Data Transmission Networks, 197–288. London: Springer London, 2012. http://dx.doi.org/10.1007/978-1-4471-4147-1_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Multisources data"

1

Abdurakhmonova, Nilufar, Adkham Zokhirov Mohirdev, Mukhammadali Salokhiddinov, Anvar Narzullayev, and Ayrat Gatiatullin. "NLLB-Based Uzbek NMT: Leveraging Multisource Data." In 2024 9th International Conference on Computer Science and Engineering (UBMK), 1–5. IEEE, 2024. https://doi.org/10.1109/ubmk63289.2024.10773423.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Feio, Maria João, Georgios Koutalieris, Symeon Symeonidis, Xenofon Karagiannis, Sónia RQ Serra, Ana Raquel Calapez, and Janine P. Silva. "Integrating Multisource Data for The Assessment of Urban Aquatic Ecosystems Health." In IGARSS 2024 - 2024 IEEE International Geoscience and Remote Sensing Symposium, 972–74. IEEE, 2024. http://dx.doi.org/10.1109/igarss53475.2024.10642458.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Shouwen, Zheng, Zhou Taiqi, Tao Yingzhi, Chen Junru, Wang Ruofei, and Liu Siyuan. "Enhancing Suicide Risk Detection with a Multisource Data Filtering and Fusion Optimization Framework (MDF-FOF)." In 2024 IEEE International Conference on Big Data (BigData), 8581–90. IEEE, 2024. https://doi.org/10.1109/bigdata62323.2024.10825363.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Mulachela, Sayid Rayhan, Erwin Budi Setiawan, and Gamma Kosala. "Semantic Segmentation of Land Cover in Multisource Aerial Imagery Using U-Net." In 2025 International Conference on Advancement in Data Science, E-learning and Information System (ICADEIS), 1–6. IEEE, 2025. https://doi.org/10.1109/icadeis65852.2025.10933466.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Hu, Bo, Yongxing Wang, and Qiuyue Sai. "Data-driven estimation of driving distance for battery electric bus with multisource real-world data." In Eighth International Conference on Traffic Engineering and Transportation System (ICTETS 2024), edited by Xiantao Xiao and Jia Yao, 51. SPIE, 2024. https://doi.org/10.1117/12.3054528.

6

Xing, Yaxuan, Huiping Lin, Jingwen Zhu, Feng Wang, Feng Xu, and Wen Jiang. "Multisource Data Integration of Sentinel-1 and Sentinel-2 for Above Ground Biomass Inversion." In 2024 IEEE International Conference on Signal, Information and Data Processing (ICSIDP), 1–4. IEEE, 2024. https://doi.org/10.1109/icsidp62679.2024.10868703.

7

Putty, Aarabhi, B. Annappa, R. Prajwal, and Sankar Pariserum Perumal. "Semantic Segmentation of Remotely Sensed Images using Multisource Data: An Experimental Analysis." In 2024 15th International Conference on Computing Communication and Networking Technologies (ICCCNT), 1–6. IEEE, 2024. http://dx.doi.org/10.1109/icccnt61001.2024.10725213.

8

Zheng, Yuanmao, Mingzhe Fu, Qiuhua He, Yuanrong He, Jianwei Huang, Xueyu Liang, and Xinyue Liu. "Spatial-temporal evolution analysis of Dongting Lake using multisource remote sensing data." In Sixth International Conference on Geoscience and Remote Sensing Mapping (GRSM 2024), edited by Zhiliang Qin, Jun Chen, and Huaichun Wu, 88. SPIE, 2025. https://doi.org/10.1117/12.3057597.

9

Zeng, Deqi, Yufeng Zhi, Wensheng Wei, Siyan Ye, and Zhiheng Zhu. "Performance evaluation of multisource data detection system for highway tunnel traffic incidents." In 9th International Conference on Electromechanical Control Technology and Transportation (ICECTT 2024), edited by Jinsong Wu and Azanizawati Ma'aram, 218. SPIE, 2024. http://dx.doi.org/10.1117/12.3039901.

10

Peytavin, Laurent, F. Dansaert, and C. Rhin. "Multisources classification: application to temporal refinement of forest cover using SPOT and ERS/SAR data." In Satellite Remote Sensing II, edited by Jacky Desachy. SPIE, 1995. http://dx.doi.org/10.1117/12.226858.


Reports on the topic "Multisources data"

1

Toutin, Th. Multisource Data Integration: Comparison of Geometric and Radiometric Methods. Natural Resources Canada/ESS/Scientific and Technical Publishing Services, 1995. http://dx.doi.org/10.4095/219858.

2

Toutin, Th. Multisource Data Fusion with an Integrated and Unified Geometric Modelling. Natural Resources Canada/ESS/Scientific and Technical Publishing Services, 1995. http://dx.doi.org/10.4095/218015.

3

Wilson, D., Matthew Kamrath, Caitlin Haedrich, Daniel Breton, and Carl Hart. Urban noise distributions and the influence of geometric spreading on skewness. Engineer Research and Development Center (U.S.), November 2021. http://dx.doi.org/10.21079/11681/42483.

Abstract:
Statistical distributions of urban noise levels are influenced by many complex phenomena, including spatial and temporal variations in the source level, multisource mixtures, propagation losses, and random fading from multipath reflections. This article provides a broad perspective on the varying impacts of these phenomena. Distributions incorporating random fading and averaging (e.g., gamma and noncentral Erlang) tend to be negatively skewed on logarithmic (decibel) axes but can be positively skewed if the fading process is strongly modulated by source power variations (e.g., compound gamma). In contrast, distributions incorporating randomly positioned sources and explicit geometric spreading [e.g., exponentially modified Gaussian (EMG)] tend to be positively skewed with exponential tails on logarithmic axes. To evaluate the suitability of the various distributions, one-third octave band sound-level data were measured at 37 locations in the North End of Boston, MA. Based on the Kullback-Leibler divergence as calculated across all of the locations and frequencies, the EMG provides the most consistently good agreement with the data, which were generally positively skewed. The compound gamma also fits the data well and even outperforms the EMG for the small minority of cases exhibiting negative skew. The lognormal provides a suitable fit in cases in which particular non-traffic noise sources dominate.
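The abstract's evaluation approach (fit candidate level distributions, then compare them against the empirical data with the Kullback-Leibler divergence) can be sketched in a few lines. This is an illustrative sketch only: the measured one-third octave band data from Boston are not reproduced here, so synthetic decibel samples stand in for them, and the parameter values, bin count, and the choice of just two candidate families (SciPy's `exponnorm`, its name for the exponentially modified Gaussian, and `lognorm`) are assumptions for demonstration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Placeholder data: synthetic sound levels in dB drawn from an
# exponentially modified Gaussian (EMG), standing in for the
# measured one-third octave band levels.
levels = stats.exponnorm.rvs(K=2.0, loc=55.0, scale=3.0,
                             size=5000, random_state=rng)

# Two of the candidate families compared in the report.
candidates = {"EMG": stats.exponnorm, "lognormal": stats.lognorm}

# Empirical density on decibel axes.
counts, edges = np.histogram(levels, bins=60, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
widths = np.diff(edges)

kl_results = {}
for name, dist in candidates.items():
    params = dist.fit(levels)      # maximum-likelihood fit
    model = dist.pdf(centers, *params)
    # Discrete Kullback-Leibler divergence between the empirical
    # and fitted densities, over bins where both are positive.
    mask = (counts > 0) & (model > 0)
    kl_results[name] = float(np.sum(
        counts[mask] * widths[mask] * np.log(counts[mask] / model[mask])))

for name, kl in sorted(kl_results.items(), key=lambda kv: kv[1]):
    print(f"{name}: KL divergence = {kl:.4f}")
```

A lower divergence indicates better agreement with the empirical distribution; since the synthetic data are EMG-generated here, the EMG fit should score well, mirroring the report's finding that the EMG gave the most consistently good agreement across locations and frequencies.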
