Journal articles on the topic "DATA INTEGRATION APPROACH"

To browse other types of publications on this topic, follow the link: DATA INTEGRATION APPROACH.

Format your source in APA, MLA, Chicago, Harvard, and other citation styles

Consult the top 50 journal articles for your research on the topic "DATA INTEGRATION APPROACH".

Next to each work in the list of references you will find an "Add to bibliography" button. Click it, and we will automatically compose a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online, provided that these details are available in the metadata.

Browse journal articles across disciplines and compile your bibliography correctly.

1

Todorova, Violeta, Veska Gancheva, and Valeri Mladenov. "COVID-19 Medical Data Integration Approach." MOLECULAR SCIENCES AND APPLICATIONS 2 (July 18, 2022): 102–6. http://dx.doi.org/10.37394/232023.2022.2.11.

Abstract:
The need for automated methods of extracting knowledge from data arises from the accumulation of large volumes of data. This paper presents a conceptual model for integrating and processing medical data in three layers, comprising a total of six phases: a model for integrating, filtering, sorting, and aggregating Covid-19 data. A medical data integration workflow was designed, including steps for data integration, filtering, and sorting, and was applied to Covid-19 medical data from the clinical records of 20,400 potential patients.
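The abstract describes a layered workflow of integration, filtering, sorting, and aggregation. Purely as a rough illustration of what such a phase sequence can look like on tabular clinical records, here is a minimal pandas sketch; the column names and example values are hypothetical and are not taken from the paper.

```python
import pandas as pd

# Hypothetical patient records from two sources; column names are illustrative only.
source_a = pd.DataFrame({"patient_id": [1, 2], "age": [54, 61], "pcr_positive": [True, False]})
source_b = pd.DataFrame({"patient_id": [3], "age": [47], "pcr_positive": [True]})

# Integration phase: concatenate the sources into one table.
records = pd.concat([source_a, source_b], ignore_index=True)

# Filtering phase: keep only confirmed cases.
confirmed = records[records["pcr_positive"]]

# Sorting phase: order the confirmed cases by age.
confirmed = confirmed.sort_values("age")

# Aggregation phase: simple summary statistics over the filtered records.
summary = confirmed["age"].agg(["count", "mean"])
print(summary)
```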
2

Aftab, Shoohira, Hammad Afzal, and Amna Khalid. "An Approach for Secure Semantic Data Integration at Data as a Service (DaaS) Layer." International Journal of Information and Education Technology 5, no. 2 (2015): 124–30. http://dx.doi.org/10.7763/ijiet.2015.v5.488.

3

Soriano, Lorna T. "An ETL-Driven Approach Data Integration for State Universities and Colleges." Journal of Advanced Research in Dynamical and Control Systems 12, no. 01-Special Issue (February 13, 2020): 234–42. http://dx.doi.org/10.5373/jardcs/v12sp1/20201068.

4

Genesereth, Michael. "Data Integration: The Relational Logic Approach." Synthesis Lectures on Artificial Intelligence and Machine Learning 4, no. 1 (January 2010): 1–97. http://dx.doi.org/10.2200/s00226ed1v01y200911aim008.

5

Fusco, Giuseppe, and Lerina Aversano. "An approach for semantic integration of heterogeneous data sources." PeerJ Computer Science 6 (March 2, 2020): e254. http://dx.doi.org/10.7717/peerj-cs.254.

Abstract:
Integrating data from multiple heterogeneous data sources entails dealing with data distributed among heterogeneous information sources, which can be structured, semi-structured or unstructured, and providing the user with a unified view of these data. Thus, in general, gathering information is challenging, and one of the main reasons is that data sources are designed to support specific applications. Very often their structure is unknown to the large part of users. Moreover, the stored data is often redundant, mixed with information only needed to support enterprise processes, and incomplete with respect to the business domain. Collecting, integrating, reconciling and efficiently extracting information from heterogeneous and autonomous data sources is regarded as a major challenge. In this paper, we present an approach for the semantic integration of heterogeneous data sources, DIF (Data Integration Framework), and a software prototype to support all aspects of a complex data integration process. The proposed approach is an ontology-based generalization of both Global-as-View and Local-as-View approaches. In particular, to overcome problems due to semantic heterogeneity and to support interoperability with external systems, ontologies are used as a conceptual schema to represent both data sources to be integrated and the global view.
6

CALVANESE, DIEGO, GIUSEPPE DE GIACOMO, MAURIZIO LENZERINI, DANIELE NARDI, and RICCARDO ROSATI. "DATA INTEGRATION IN DATA WAREHOUSING." International Journal of Cooperative Information Systems 10, no. 03 (September 2001): 237–71. http://dx.doi.org/10.1142/s0218843001000345.

Abstract:
Information integration is one of the most important aspects of a Data Warehouse. When data passes from the sources of the application-oriented operational environment to the Data Warehouse, possible inconsistencies and redundancies should be resolved, so that the warehouse is able to provide an integrated and reconciled view of data of the organization. We describe a novel approach to data integration in Data Warehousing. Our approach is based on a conceptual representation of the Data Warehouse application domain, and follows the so-called local-as-view paradigm: both source and Data Warehouse relations are defined as views over the conceptual model. We propose a technique for declaratively specifying suitable reconciliation correspondences to be used in order to solve conflicts among data in different sources. The main goal of the method is to support the design of mediators that materialize the data in the Data Warehouse relations. Starting from the specification of one such relation as a query over the conceptual model, a rewriting algorithm reformulates the query in terms of both the source relations and the reconciliation correspondences, thus obtaining a correct specification of how to load the data in the materialized view.
7

Hasani, S., A. Sadeghi-Niaraki, and M. Jelokhani-Niaraki. "SPATIAL DATA INTEGRATION USING ONTOLOGY-BASED APPROACH." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-1-W5 (December 11, 2015): 293–96. http://dx.doi.org/10.5194/isprsarchives-xl-1-w5-293-2015.

Abstract:
In today's world, spatial data has become so crucial for various organizations that many of them have begun to produce it themselves. In some circumstances, the need to obtain real-time integrated data requires a sustainable mechanism for real-time integration; a case in point is disaster management, which requires obtaining real-time data from various sources of information. One of the problematic challenges in such situations is the high degree of heterogeneity between the data of different organizations. To solve this issue, we introduce an ontology-based method that provides sharing and integration capabilities for existing databases. In addition to resolving semantic heterogeneity, our proposed method also provides better access to information. Our approach consists of three steps. In the first step, the objects in a relational database are identified, the semantic relationships between them are modelled, and subsequently the ontology of each database is created. In the second step, the corresponding ontology is inserted into the database and the relationship of each ontology class is inserted into a newly created column in the database tables. The last step consists of a platform based on a service-oriented architecture, which allows data to be integrated using the concept of ontology mapping. The proposed approach, in addition to being fast and low cost, makes the process of data integration easy while the data remains unchanged, thus taking advantage of the existing legacy applications.
8

Rohn, Eli. "CAS-Based Approach for Automatic Data Integration." American Journal of Operations Research 03, no. 01 (2013): 181–86. http://dx.doi.org/10.4236/ajor.2013.31a017.

9

Sanderson, David, Jack C. Chaplin, and Svetan Ratchev. "Affordable Data Integration Approach for Production Enterprises." Procedia CIRP 93 (2020): 616–21. http://dx.doi.org/10.1016/j.procir.2020.04.124.

10

GRANT, JOHN, and JACK MINKER. "A logic-based approach to data integration." Theory and Practice of Logic Programming 2, no. 03 (April 23, 2002): 323–68. http://dx.doi.org/10.1017/s1471068401001375.

11

Tattersall, Andrew. "Planning for integration — a data oriented approach." Computer Integrated Manufacturing Systems 1, no. 3 (August 1988): 161–68. http://dx.doi.org/10.1016/0951-5240(88)90073-0.

12

Curcin, V., A. Barton, M. M. McGilchrist, H. Bastiaens, A. Andreasson, J. Rossiter, L. Zhao, et al. "Clinical Data Integration Model." Methods of Information in Medicine 54, no. 01 (2015): 16–23. http://dx.doi.org/10.3414/me13-02-0024.

Abstract:
Introduction: This article is part of the Focus Theme of Methods of Information in Medicine on "Managing Interoperability and Complexity in Health Systems". Background: Primary care data is the single richest source of routine health care data. However, its use, both in research and clinical work, often requires data from multiple clinical sites, clinical trials databases and registries. Data integration and interoperability are therefore of utmost importance. Objectives: TRANSFoRm's general approach relies on a unified interoperability framework, described in a previous paper. We developed a core ontology for an interoperability framework based on data mediation. This article presents how such an ontology, the Clinical Data Integration Model (CDIM), can be designed to support, in conjunction with appropriate terminologies, biomedical data federation within TRANSFoRm, an EU FP7 project that aims to develop the digital infrastructure for a learning healthcare system in European Primary Care. Methods: TRANSFoRm utilizes a unified structural/terminological interoperability framework, based on the local-as-view mediation paradigm. Such an approach mandates the global information model to describe the domain of interest independently of the data sources to be explored. Following a requirement analysis process, no ontology focusing on primary care research was identified, and thus we designed a realist ontology based on Basic Formal Ontology to support our framework in collaboration with various terminologies used in primary care. Results: The resulting ontology has 549 classes and 82 object properties and is used to support data integration for TRANSFoRm's use cases. Concepts identified by researchers were successfully expressed in queries using CDIM and pertinent terminologies. As an example, we illustrate how, in TRANSFoRm, the Query Formulation Workbench can capture eligibility criteria in a computable representation, which is based on CDIM. Conclusion: A unified mediation approach to semantic interoperability provides a flexible and extensible framework for all types of interaction between health record systems and research systems. CDIM, as the core ontology of such an approach, enables simplicity and consistency of design across the heterogeneous software landscape and can support the specific needs of EHR-driven phenotyping research using primary care data.
13

Vargas-Vera, Maria. "Data Integration Framework." International Journal of Knowledge Society Research 7, no. 1 (January 2016): 99–112. http://dx.doi.org/10.4018/ijksr.2016010107.

Abstract:
This paper presents a proposal for a data integration framework. The purpose of the framework is to automatically locate records of participants from the ALSPAC database (Avon Longitudinal Study of Parents and Children) within its counterpart, the GPRD database (General Practice Research Database). The ALSPAC database is a collection of data on children and parents from before birth to late puberty. This collection contains several variables of interest for clinical researchers, but we concentrate on asthma, as a gold standard for the evaluation of asthma has been produced by a clinical researcher. The main component of the framework is a module called Mapper, which locates similar records and performs record linkage. The Mapper contains a library of similarity measures such as Jaccard, Jaro-Winkler, Monge-Elkan, MatchScore, Levenshtein and TF-IDF similarity. Finally, the author evaluates the approach on the quality of the mappings.
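The Mapper described above relies on string-similarity measures such as Jaccard and Levenshtein to link records across the two databases. As a rough, self-contained sketch of how two such measures can be combined into a match score, here is some illustrative Python; the field name and the decision threshold are assumptions for the example, not values from the paper.

```python
def jaccard(a: str, b: str) -> float:
    """Token-level Jaccard similarity between two strings."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

def levenshtein_ratio(a: str, b: str) -> float:
    """Normalised Levenshtein similarity (1.0 means identical strings)."""
    a, b = a.lower(), b.lower()
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return 1.0 - prev[-1] / max(len(a), len(b), 1)

def match_score(rec_a: dict, rec_b: dict) -> float:
    """Average of two similarity measures over a shared field (field name is hypothetical)."""
    return (jaccard(rec_a["name"], rec_b["name"]) + levenshtein_ratio(rec_a["name"], rec_b["name"])) / 2

# Two records are linked if the score exceeds a chosen threshold, e.g. 0.8.
print(match_score({"name": "Jon A. Smith"}, {"name": "John Smith"}))
```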
14

Mekala, A. "An Ontology Approach to Data Integration using Mapping Method." International Journal for Modern Trends in Science and Technology 6, no. 12 (December 4, 2020): 28–32. http://dx.doi.org/10.46501/ijmtst061206.

Abstract:
Text mining is a technique for discovering meaningful patterns in available text documents. Pattern discovery in text and the association of documents are well-known problems in data mining, and analysing text content and categorizing documents are composite data mining tasks that can be performed in either a supervised or an unsupervised manner. The term "Federated Databases" refers to the integration of distributed, autonomous and heterogeneous databases; a federation can, however, also include information systems, not only databases. When integrating data, several issues must be addressed. Here, we focus on the problem of heterogeneity, more specifically on semantic heterogeneity, that is, problems related to semantically equivalent concepts or semantically related/unrelated concepts. To address this problem, we apply the idea of ontologies as a tool for data integration. In this paper, we clarify this concept and briefly explain a technique for constructing an ontology using a hybrid ontology approach.
15

Bartalesi, Valentina, Carlo Meghini, and Costantino Thanos. "A data model-independent approach to big research data integration." International Journal of Metadata, Semantics and Ontologies 13, no. 4 (2019): 330. http://dx.doi.org/10.1504/ijmso.2019.10024347.

16

Bartalesi, Valentina, Carlo Meghini, and Costantino Thanos. "A data model-independent approach to big research data integration." International Journal of Metadata, Semantics and Ontologies 13, no. 4 (2019): 330. http://dx.doi.org/10.1504/ijmso.2019.102680.

17

FALOMIR, ZOE, VICENT CASTELLÓ, M. TERESA ESCRIG, and JUAN CARLOS PERIS. "FUZZY DISTANCE SENSOR DATA INTEGRATION AND INTERPRETATION." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 19, no. 03 (June 2011): 499–528. http://dx.doi.org/10.1142/s0218488511007106.

Abstract:
An approach to distance sensor data integration that obtains a robust interpretation of the robot environment is presented in this paper. This approach consists in obtaining patterns of fuzzy distance zones from sensor readings; comparing these patterns in order to detect non-working sensors; and integrating the patterns obtained by each kind of sensor in order to obtain a final pattern that detects obstacles of any sort. A dissimilarity measure between fuzzy sets has been defined and applied to this approach. Moreover, an algorithm to classify orientation reference systems (built by corners detected in the robot world) as open or closed is also presented. The final pattern of fuzzy distances, resulting from the integration process, is used to extract the important reference systems when a glass wall is included in the robot environment. Finally, our approach has been tested in an ActivMedia Pioneer 2 dx mobile robot using the Player/Stage as the control interface and promising results have been obtained.
18

Subramanian, Indhupriya, Srikant Verma, Shiva Kumar, Abhay Jere, and Krishanpal Anamika. "Multi-omics Data Integration, Interpretation, and Its Application." Bioinformatics and Biology Insights 14 (January 2020): 117793221989905. http://dx.doi.org/10.1177/1177932219899051.

Abstract:
To study complex biological processes holistically, it is imperative to take an integrative approach that combines multi-omics data to highlight the interrelationships of the involved biomolecules and their functions. With the advent of high-throughput techniques and availability of multi-omics data generated from a large set of samples, several promising tools and methods have been developed for data integration and interpretation. In this review, we collected the tools and methods that adopt integrative approach to analyze multiple omics data and summarized their ability to address applications such as disease subtyping, biomarker prediction, and deriving insights into the data. We provide the methodology, use-cases, and limitations of these tools; brief account of multi-omics data repositories and visualization portals; and challenges associated with multi-omics data integration.
19

Colleoni Couto, Julia, Olimar Teixeira Borges, and Duncan Dubugras Ruiz. "Data integration in a Hadoop-based data lake: A bioinformatics case." International Journal of Data Mining & Knowledge Management Process 12, no. 4 (July 31, 2022): 1–24. http://dx.doi.org/10.5121/ijdkp.2022.12401.

Abstract:
When we work in a data lake, data integration is not easy, mainly because the data is usually stored in raw format. Manually performing data integration is a time-consuming task that requires the supervision of a specialist, who can make mistakes or may fail to see the optimal point of data integration between two or more datasets. This paper presents a model for performing heterogeneous in-memory data integration in a Hadoop-based data lake, based on a top-k set similarity approach. Our main contribution is the process of ingesting, storing, processing, integrating, and visualizing the data integration points. The algorithm for data integration is based on the Overlap coefficient, since it presented better results when compared with the set similarity metrics Jaccard, Sørensen-Dice, and the Tversky index. We tested our model by applying it to eight bioinformatics-domain datasets. Our model presents better results when compared to the analysis of a specialist, and we expect that it can be reused for other domains of datasets.
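The model above ranks candidate integration points with the Overlap coefficient after comparing it against Jaccard, Sørensen-Dice, and the Tversky index. For readers unfamiliar with these set-similarity metrics, here is a minimal sketch of their definitions together with a naive top-k pairing of columns; the example value sets and the value of k are illustrative assumptions, not data from the paper.

```python
from itertools import product

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 1.0

def dice(a: set, b: set) -> float:
    return 2 * len(a & b) / (len(a) + len(b)) if a or b else 1.0

def overlap(a: set, b: set) -> float:
    return len(a & b) / min(len(a), len(b)) if a and b else 0.0

def tversky(a: set, b: set, alpha: float = 0.5, beta: float = 0.5) -> float:
    inter = len(a & b)
    return inter / (inter + alpha * len(a - b) + beta * len(b - a)) if a or b else 1.0

# Hypothetical column value sets extracted from two datasets in the lake.
columns_x = {"x.gene_id": {"BRCA1", "TP53", "EGFR"}, "x.sample": {"s1", "s2"}}
columns_y = {"y.gene": {"TP53", "EGFR", "KRAS"}, "y.patient": {"p1", "s2"}}

# Score every column pair with the Overlap coefficient and keep the top-k candidates.
k = 2
pairs = sorted(product(columns_x, columns_y),
               key=lambda p: overlap(columns_x[p[0]], columns_y[p[1]]),
               reverse=True)
print(pairs[:k])
```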
20

Daraio, Cinzia, Simone Di Leo, and Monica Scannapieco. "Accounting for quality in data integration systems: a completeness-aware integration approach." Scientometrics 127, no. 3 (January 27, 2022): 1465–90. http://dx.doi.org/10.1007/s11192-022-04266-0.

Abstract:
Ensuring the quality of integrated data is undoubtedly one of the main problems of integrated data systems. When focusing on multi-national and historical data integration systems, where the "space" and "time" dimensions play a relevant role, it is very important to build the integration layer in such a way that the final user accesses a layer that is "by design" as complete as possible. In this paper, we propose a method for accessing data in multipurpose data infrastructures, such as data integration systems, which has the properties of (i) relieving the final user of the need to access single data sources while, at the same time, (ii) maximizing the amount of information available to the user at the integration layer. Our approach is based on completeness-aware integration, which allows the user to have readily available the maximum information that can be obtained from the integrated data system without having to carry out a preliminary data quality analysis on each of the databases included in the system. Our proposal of providing data quality information at the integrated level thus extends the functions of the individual data sources, opening the data infrastructure to additional uses. This may be a first step in moving from data infrastructures towards knowledge infrastructures. A case study on the research infrastructure for science and innovation studies shows the usefulness of the proposed approach.
21

Chromiak, Michal, and Marcin Grabowiecki. "Heterogeneous Data Integration Architecture-Challenging Integration Issues." Annales Universitatis Mariae Curie-Sklodowska, sectio AI – Informatica 15, no. 1 (January 1, 2015): 7. http://dx.doi.org/10.17951/ai.2015.15.1.7-11.

Abstract:
As of today, most data processing systems have to deal with a large amount of data originating from numerous sources. Data sources almost always differ in their purpose, so their models, data processing engines and technologies differ considerably. Due to the current trend towards systems fusion, there is a growing demand for data to be presented in a common way regardless of its legacy. Many systems have been devised in response to such integration needs. However, present data integration systems are mostly dedicated solutions that bring constraints and issues when considered in general. In this paper we focus on the present solutions for data integration and the flaws originating from their architecture or design concepts, and we present an abstract and general approach that could be introduced as a response to the existing issues. System integration is considered out of scope for this paper; we focus particularly on efficient data integration.
22

Truman, Gregory E. "A Discrepancy-Based Measurement Approach for Data Integration." Journal of Organizational Computing and Electronic Commerce 8, no. 3 (September 1998): 169–93. http://dx.doi.org/10.1207/s15327744joce0803_1.

23

Fayaz Ahmed, P. Mohammad, and Gunthati Prathap. "An Integration Approach for Data Extraction and Coalition." International Journal of Computer Trends and Technology 13, no. 3 (July 25, 2014): 128–31. http://dx.doi.org/10.14445/22312803/ijctt-v13p127.

24

Trinh, Tuan-Dat, Peter Wetz, Ba-Lam Do, Elmar Kiesling, and A. Min Tjoa. "Distributed mashups: a collaborative approach to data integration." International Journal of Web Information Systems 11, no. 3 (August 17, 2015): 370–96. http://dx.doi.org/10.1108/ijwis-04-2015-0018.

Abstract:
Purpose – This paper aims to present a collaborative mashup platform for dynamic integration of heterogeneous data sources. The platform encourages sharing and connects data publishers, integrators, developers and end users. Design/methodology/approach – This approach is based on a visual programming paradigm and follows three fundamental principles: openness, connectedness and reusability. The platform is based on semantic Web technologies and the concept of linked widgets, i.e. semantic modules that allow users to access, integrate and visualize data in a creative and collaborative manner. Findings – The platform can effectively tackle data integration challenges by allowing users to explore relevant data sources for different contexts, tackling the data heterogeneity problem and facilitating automatic data integration, easing data integration via simple operations and fostering reusability of data processing tasks. Research limitations/implications – This research has focused exclusively on conceptual and technical aspects so far; a comprehensive user study, extensive performance and scalability testing is left for future work. Originality/value – A key contribution of this paper is the concept of distributed mashups. These ad hoc data integration applications allow users to perform data processing tasks in a collaborative and distributed manner simultaneously on multiple devices. This approach requires no server infrastructure to upload data, but rather allows each user to keep control over their data and expose only relevant subsets. Distributed mashups can run persistently in the background and are hence ideal for real-time data monitoring or data streaming use cases. Furthermore, we introduce automatic mashup composition as an innovative approach based on an explicit semantic widget model.
25

Zhu, Fujun, Mark Turner, Ioannis Kotsiopoulos, Keith Bennett, Michelle Russell, David Budgen, Pearl Brereton, et al. "Dynamic data integration: a service-based broker approach." International Journal of Business Process Integration and Management 1, no. 3 (2006): 175. http://dx.doi.org/10.1504/ijbpim.2006.010903.

26

Amghar, Souad, Safae Cherdal, and Salma Mouline. "A Schema Integration Approach for Big Data Analysis." Ingénierie des systèmes d information 28, no. 2 (April 30, 2023): 315–25. http://dx.doi.org/10.18280/isi.280207.

Abstract:
A huge volume of data is analyzed by organizations to understand their clients and improve their services. In many cases, these data are stored separately in different database systems and need to be integrated before being used in analysis tools or prediction applications. One of the main tasks of data integration process is the definition of the global schema. Defining a global schema in the context of NoSQL systems is a demanding task since it necessitates dealing with a variety of issues, including the lack of local schemas, data model heterogeneity, and semantic heterogeneity. To address these challenges, this work aims to automatically define the global schema of a set of databases stored in heterogeneous NoSQL systems. The main contributions of this work are presented in three phases: (1) Schema extraction where we define the local schemas using a unified representation. (2) Schema matching in which we propose a hybrid approach to find matching attributes between the local schemas. (3) Schema integration where we define the global schema using the schema matching results. A Covid-19 use case as well as other benchmarks are presented in this paper to evaluate the results of the proposed approach and illustrate its effectiveness.
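The three phases described above (schema extraction, schema matching, schema integration) can be pictured with a small sketch. The code below extracts a unified local schema from JSON-like documents and matches attributes with a crude name-similarity heuristic; it illustrates the general idea only, not the hybrid matcher proposed in the paper, and all collection names, attribute names, and the 0.3 threshold are hypothetical.

```python
def extract_schema(documents: list[dict]) -> set[str]:
    """Schema extraction: union of attribute names seen across schemaless documents."""
    schema = set()
    for doc in documents:
        schema.update(doc.keys())
    return schema

def name_similarity(a: str, b: str) -> float:
    """Very crude matcher: character-trigram overlap between attribute names."""
    grams = lambda s: {s[i:i + 3] for i in range(max(len(s) - 2, 1))}
    ga, gb = grams(a.lower()), grams(b.lower())
    return len(ga & gb) / len(ga | gb)

# Local schemas extracted from two hypothetical NoSQL collections.
patients_mongo = extract_schema([{"patient_id": 1, "birth_date": "1980-01-01"}])
cases_cassandra = extract_schema([{"patientId": 1, "test_result": "positive"}])

# Schema matching: keep attribute pairs whose names are similar enough.
matches = [(a, b) for a in patients_mongo for b in cases_cassandra if name_similarity(a, b) > 0.3]

# Schema integration: matched attributes collapse to one global attribute, the rest are kept as-is.
matched_names = {x for m in matches for x in m}
global_schema = {min(a, b) for a, b in matches} | ((patients_mongo | cases_cassandra) - matched_names)
print(sorted(global_schema))
```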
27

Vinasco-Alvarez, D., J. Samuel, S. Servigne, and G. Gesquière. "TOWARDS LIMITING SEMANTIC DATA LOSS IN 4D URBAN DATA SEMANTIC GRAPH GENERATION." ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences VIII-4/W2-2021 (October 7, 2021): 37–44. http://dx.doi.org/10.5194/isprs-annals-viii-4-w2-2021-37-2021.

Abstract:
To enrich urban digital twins and better understand city evolution, the integration of heterogeneous, spatio-temporal data has become a large area of research in the enrichment of 3D and 4D (3D + Time) semantic city models. These models, which can represent the 3D geospatial data of a city and their evolving semantic relations, may require data-driven integration approaches to provide temporal and concurrent views of the urban landscape. However, data integration often requires the transformation or conversion of data into a single shared data format, which can be prone to semantic data loss. To combat this, this paper proposes a model-centric ontology-based data integration approach towards limiting semantic data loss in 4D semantic urban data transformations to semantic graph formats. By integrating the underlying conceptual models of urban data standards, a unified spatio-temporal data model can be created as a network of ontologies. Transformation tools can use this model to map datasets to interoperable semantic graph formats of 4D city models. This paper will firstly illustrate how this approach facilitates the integration of rich 3D geospatial, spatio-temporal urban data and semantic web standards with a focus on limiting semantic data loss. Secondly, this paper will demonstrate how semantic graphs based on these models can be implemented for spatial and temporal queries toward 4D semantic city model enrichment.
28

Usery, E. Lynn, Michael P. Finn, and Michael Starbuck. "Data Layer Integration for The National Map of the United States." Cartographic Perspectives, no. 62 (March 1, 2009): 28–41. http://dx.doi.org/10.14714/cp62.183.

Abstract:
The integration of geographic data layers in multiple raster and vector formats, from many different organizations and at a variety of resolutions and scales, is a significant problem for The National Map of the United States being developed by the U.S. Geological Survey. Our research has examined data integration from a layer-based approach for five of The National Map data layers: digital orthoimages, elevation, land cover, hydrography, and transportation. An empirical approach has included visual assessment by a set of respondents with statistical analysis to establish the meaning of various types of integration. A separate theoretical approach with established hypotheses tested against actual data sets has resulted in an automated procedure for integration of specific layers and is being tested. The empirical analysis has established resolution bounds on meanings of integration with raster datasets and distance bounds for vector data. The theoretical approach has used a combination of theories on cartographic transformation and generalization, such as Töpfer’s radical law, and additional research concerning optimum viewing scales for digital images to establish a set of guiding principles for integrating data of different resolutions.
29

Pineda, Silvia, Daniel G. Bunis, Idit Kosti, and Marina Sirota. "Data Integration for Immunology." Annual Review of Biomedical Data Science 3, no. 1 (July 20, 2020): 113–36. http://dx.doi.org/10.1146/annurev-biodatasci-012420-122454.

Abstract:
Over the last several years, next-generation sequencing and its recent push toward single-cell resolution have transformed the landscape of immunology research by revealing novel complexities about all components of the immune system. With the vast amounts of diverse data currently being generated, and with the methods of analyzing and combining diverse data improving as well, integrative systems approaches are becoming more powerful. Previous integrative approaches have combined multiple data types and revealed ways that the immune system, both as a whole and as individual parts, is affected by genetics, the microbiome, and other factors. In this review, we explore the data types that are available for studying immunology with an integrative systems approach, as well as the current strategies and challenges for conducting such analyses.
30

Chan, Mei Lan. "An Explicit Pragmatic Approach to Integrative Data Analysis Strategies for Mixed Methods Research." International Journal of Linguistics 9, no. 3 (June 26, 2017): 166. http://dx.doi.org/10.5296/ijl.v9i3.11246.

Abstract:
Mixed methods research is becoming an important methodology for the investigation of various topics in applied linguistics. However, data integration remains a challenge for mixed methods researchers and thus needs further development. This study discusses the integrative data analysis strategies used in an embedded mixed methods study in applied linguistics, illustrated through two phases of the study, and the way in which the adoption of a pragmatic approach explicitly aids data integration by abductive reflection on the knowledge acquired. This study investigated the language learning strategies used by English as a Foreign Language nursing students in higher education in Macao, and the effectiveness of the students’ learning outcomes as a result of strategy instruction. Six integrative data analysis strategies are discussed, and the explicit pragmatic approach that guided the exploratory sequential design sheds further light on the integrative data analysis.
31

Aggoune, Aicha. "Intelligent data integration from heterogeneous relational databases containing incomplete and uncertain information." Intelligent Data Analysis 26, no. 1 (January 14, 2022): 75–99. http://dx.doi.org/10.3233/ida-205535.

Abstract:
The integration of incomplete and uncertain information has emerged as a crucial issue in many application domains, including data warehousing, data mining, data analysis, and artificial intelligence. This paper proposes a novel approach of mediation-based integration for integrating these types of information from heterogeneous relational databases. We present in detail the different processes in the layered architecture of the proposed flexible mediator system. The integration process of our mediator is based on the use of fuzzy logic and semantic similarity measures for more effective integration of incomplete and uncertain information. We also define fuzzy views over the mediator’s global fuzzy schema to express incomplete and uncertain databases and specify the mappings between this global schema and these sources. Moreover, our approach provides intelligent data integration, enabling efficient generation of cooperative answers from similar ones, retrieved by queried flexible wrappers. These answers contain information that is more detailed and complete than the information contained in the initial answers. A thorough experiment verifies our approach improves the performance of data integration under various configurations.
32

Hands, Africa S. "Integrating quantitative and qualitative data in mixed methods research: An illustration." Canadian Journal of Information and Library Science 45, no. 1 (March 14, 2022): 1–20. http://dx.doi.org/10.5206/cjilsrcsib.v45i1.10645.

Abstract:
Employing a mixed methods approach to research is meant to deliver a comprehensive examination of the phenomenon under study. An integral step in mixed methods research is integrating qualitative and quantitative data. However, published reports rarely detail the process of mixing data from both approaches. Presented here is an illustration of integrating qualitative and quantitative data sets using a convergence table. A review of mixed methods research in LIS is presented, and a reflection on the challenges of integration is shared. As the mixed methods approach increases in LIS research, the example offered here aims to make integration more transparent.
33

Niemi, Timo, Turkka Näppilä, and Kalervo Järvelin. "A relational data harmonization approach to XML." Journal of Information Science 35, no. 5 (June 29, 2009): 571–601. http://dx.doi.org/10.1177/0165551509104231.

Abstract:
There are numerous approaches for integrating data from heterogeneous data sources. A common background assumption is that the data sources remain quite stable and are known in advance. Hence an integration system can be built to manipulate them. In practice there is, however, often a demand for supporting ad hoc information needs concerning unexpected autonomous data sources containing volatile data. A different approach is therefore needed. We propose that semantically similar data are harmonized when extracting data from XML-based data sources. We introduce a constructor algebra, which is a powerful tool in the harmonization of XML data. This algebra is able to form for any XML data source a unique relational representation, called an XML relation. We demonstrate that the XML relation representation supports grouping and aggregation of data needed, for example, in OLAP (online analytical processing) -style applications.
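The idea of harmonizing XML sources into a single relational representation can be pictured in a few lines of code. The sketch below flattens a small XML fragment into tuples and then groups and aggregates them, roughly in the spirit of the "XML relation" described above; the element names and the naive flattening are illustrative assumptions, not the constructor algebra defined in the paper.

```python
import xml.etree.ElementTree as ET
from collections import defaultdict

xml_source = """
<orders>
  <order region="EU"><amount>120</amount></order>
  <order region="EU"><amount>80</amount></order>
  <order region="US"><amount>200</amount></order>
</orders>
"""

# Flatten the XML into a relational representation: one (region, amount) tuple per order.
root = ET.fromstring(xml_source)
xml_relation = [(o.get("region"), float(o.findtext("amount"))) for o in root.findall("order")]

# Grouping and aggregation over the flattened relation, OLAP-style.
totals = defaultdict(float)
for region, amount in xml_relation:
    totals[region] += amount
print(dict(totals))  # {'EU': 200.0, 'US': 200.0}
```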
34

Khanal, Ram Chandra. "Concerns and Challenges of Data Integration from Objective Post-Positivist Approach and a Subjective Non-Positivist Interpretive Approach and Their Validity/Credibility Issues." Journal of the Institute of Engineering 9, no. 1 (June 30, 2014): 115–29. http://dx.doi.org/10.3126/jie.v9i1.10677.

Abstract:
Integration of data derived from an objective post-positivist approach and an interpretive non-positivist approach through mixed methods research has gained increasing attention in the recent past. At the same time, concerns have been raised about the process of integrating data and, hence, about enhancing the validity/credibility of a research study. This article analyses some concerns and challenges related to these aspects and provides some processes to address them. Various peer-reviewed journals and other grey literature focusing on data integration within mixed methods research were reviewed. The paper presents some theoretical and methodological concerns and challenges of data integration and reviews two validity/credibility frameworks. Based on this review, the paper outlines a strategy for data integration. The strategy includes the selection of an appropriate research methodology and data conversion processes based on the research need. The paper provides a four-step process for data conversion adopting a quantitizing approach, which includes creating focus questions, response coding, thematic categorizing, and employing a qualitative data analysis process.
35

Coover, E. "Voice - data integration in the office: A PBX approach." IEEE Communications Magazine 24, no. 7 (July 1986): 24–29. http://dx.doi.org/10.1109/mcom.1986.1093129.

36

Nguyen, Tin, Rebecca Tagett, Diana Diaz, and Sorin Draghici. "A novel approach for data integration and disease subtyping." Genome Research 27, no. 12 (October 24, 2017): 2025–39. http://dx.doi.org/10.1101/gr.215129.116.

37

Batanov, D. N., and A. K. Lekova. "Data and knowledge integration through the feature-based approach." Artificial Intelligence in Engineering 8, no. 1 (January 1993): 77–83. http://dx.doi.org/10.1016/0954-1810(93)90033-c.

38

May, Wolfgang. "Logic-based XML data integration: a semi-materializing approach." Journal of Applied Logic 3, no. 2 (June 2005): 271–307. http://dx.doi.org/10.1016/j.jal.2004.07.020.

39

Klauza, Marcin, Piotr Czekalski, and Krzysztof Tokarz. "Air Traffic Data Integration using the Semantic Web Approach." Athens Journal of Technology & Engineering 2, no. 2 (May 31, 2015): 115–28. http://dx.doi.org/10.30958/ajte.2-2-4.

40

Wu, Jue-Bo, and Zong-Ling Wu. "Comprehensive approach to semantic similarity for rapid data integration." International Journal of Control, Automation and Systems 12, no. 3 (May 10, 2014): 680–87. http://dx.doi.org/10.1007/s12555-012-0291-y.

41

Meddah, Fatiha Guerroudji, Yousra Ayouani, and Ishak H. A. Meddah. "An Integrated Approach to Geovisualize Epidemiological Data." International Journal of Applied Geospatial Research 13, no. 1 (January 2022): 1–12. http://dx.doi.org/10.4018/ijagr.298296.

Abstract:
Today, geovisualization is frequently and effectively used to communicate and present geographic information. By using dynamic and interactive tools, geovisualization makes it possible to catalyse the transition from raw data to informative data transmitted to the user via a graphic representation, such as a map or a 3D visualization. In this paper we present an integration system based on a methodological approach dedicated to the geovisualization of epidemiological data, integrating GIS and anamorphic maps (cartograms). The main objective is to explore raw data, structure it, and translate it into interpretable information. This work is part of an approach to assist in the analysis and exploration of data on tuberculosis in the city of Oran; the objective is to produce epidemiological maps in a form adapted to the perceived reality. This deformation of space is constructed by a mathematical model based on the Gastner-Newman algorithm and Bertin's graphic semiology.
42

Horak, Tibor, Peter Strelec, Michal Kebisek, Pavol Tanuska, and Andrea Vaclavova. "Data Integration from Heterogeneous Control Levels for the Purposes of Analysis within Industry 4.0 Concept." Sensors 22, no. 24 (December 15, 2022): 9860. http://dx.doi.org/10.3390/s22249860.

Abstract:
Small- and medium-sized manufacturing companies must adapt their production processes ever more quickly. The speed with which enterprises can apply a change in the context of data integration and historicization affects their business. This article presents the possibilities of implementing the integration of control processes using modern technologies that will enable the adaptation of production lines. Integration using an object-oriented approach is suitable for complex tasks. Another approach is data integration using the entity referred to as a tag (TAG). Tagging is essential for fast adaptation and modification of the production process. Its advantages are identification, easier modification, and generation of data structures whose basic entities include attributes, topics, personalization, locale, and APIs. This research proposes a model for integrating manufacturing enterprise data from heterogeneous levels of management. As a result, the model and the design procedure for integrating production-line data can efficiently accommodate production changes.
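The abstract lists the fields of the TAG entity used for integration: attributes, topics, personalization, locale, and APIs. A minimal sketch of such a structure, purely as an assumption about how those fields might be grouped (the paper's actual data model may well differ), could look like this:

```python
from dataclasses import dataclass, field

@dataclass
class Tag:
    """Illustrative TAG entity; fields follow the list given in the abstract."""
    name: str
    attributes: dict[str, str] = field(default_factory=dict)  # e.g. unit, data type
    topics: list[str] = field(default_factory=list)           # message topics the tag is published on
    personalization: dict[str, str] = field(default_factory=dict)
    locale: str = "en"
    api: str | None = None                                     # endpoint exposing the tag value

# Hypothetical tag describing one sensor on a production line.
temperature = Tag(
    name="line1.oven.temperature",
    attributes={"unit": "C", "type": "float"},
    topics=["factory/line1/oven"],
    api="/api/v1/tags/line1.oven.temperature",
)
print(temperature)
```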
43

Moura, Ana Maria de Carvalho, Fabio Porto, Vania Vidal, Regis Pires Magalhães, Macedo Maia, Maira Poltosi, and Daniele Palazzi. "A semantic integration approach to publish and retrieve ecological data." International Journal of Web Information Systems 11, no. 1 (April 20, 2015): 87–119. http://dx.doi.org/10.1108/ijwis-08-2014-0028.

Abstract:
Purpose – The purpose of this paper is to present a four-level architecture that aims at integrating, publishing and retrieving ecological data making use of linked data (LD). It allows scientists to explore taxonomical, spatial and temporal ecological information, access trophic chain relations between species and complement this information with other data sets published on the Web of data. The development of ecological information repositories is a crucial step to organize and catalog natural reserves. However, they present some challenges regarding their effectiveness to provide a shared and global view of biodiversity data, such as data heterogeneity, lack of metadata standardization and data interoperability. LD rose as an interesting technology to solve some of these challenges. Design/methodology/approach – Ecological data, which is produced and collected from different media resources, is stored in distinct relational databases and published as RDF triples, using a relational-Resource Description Format mapping language. An application ontology reflects a global view of these datasets and share with them the same vocabulary. Scientists specify their data views by selecting their objects of interest in a friendly way. A data view is internally represented as an algebraic scientific workflow that applies data transformation operations to integrate data sources. Findings – Despite of years of investment, data integration continues offering scientists challenges in obtaining consolidated data views of a large number of heterogeneous scientific data sources. The semantic integration approach presented in this paper simplifies this process both in terms of mappings and query answering through data views. Social implications – This work provides knowledge about the Guanabara Bay ecosystem, as well as to be a source of answers to the anthropic and climatic impacts on the bay ecosystem. Additionally, this work will enable evaluating the adequacy of actions that are being taken to clean up Guanabara Bay, regarding the marine ecology. Originality/value – Mapping complexity is traded by the process of generating the exported ontology. The approach reduces the problem of integration to that of mappings between homogeneous ontologies. As a byproduct, data views are easily rewritten into queries over data sources. The architecture is general and although applied to the ecological context, it can be extended to other domains.
44

Chung, Yeounoh, Tim Kraska, Neoklis Polyzotis, Ki Hyun Tae, and Steven Euijong Whang. "Automated Data Slicing for Model Validation: A Big Data - AI Integration Approach." IEEE Transactions on Knowledge and Data Engineering 32, no. 12 (December 1, 2020): 2284–96. http://dx.doi.org/10.1109/tkde.2019.2916074.

45

Zhe, Zhang, and Huang Pei. "Approach to conceptual data integration for multidimensional data analysis in e-commerce." Journal of Systems Engineering and Electronics 17, no. 3 (September 2006): 635–41. http://dx.doi.org/10.1016/s1004-4132(06)60109-6.

46

Muntean, Mihaela, Claudiu Brândaş, and Tanita Cîrstea. "Framework for a Symmetric Integration Approach." Symmetry 11, no. 2 (February 14, 2019): 224. http://dx.doi.org/10.3390/sym11020224.

Abstract:
An Application-to-Application integration framework in the cloud environment is proposed. The methodological demarche is developed using a data symmetry approach. Implementation aspects of integration considered the Open Data Protocol (OData) service as an integrator. An important issue in the cloud environment is to integrate and ensure the quality of transferred and processed data. An efficient way of ensuring the completeness and integrity of data transferred between different applications and systems is the symmetry of data integration. With these considerations, the integration of SAP Hybris Cloud for Customer with S/4 HANA Cloud was implemented.
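Since the integration in the abstract above is realized through an OData service, a small client-side sketch may help picture it. The endpoint URL, entity set, and field names below are hypothetical placeholders, not the actual SAP Hybris Cloud for Customer or S/4HANA Cloud services; only the standard OData query options ($select, $filter, $top) and response shapes are assumed.

```python
import requests

# Hypothetical OData endpoint exposed by the integrated system.
BASE_URL = "https://example.invalid/odata/v2/Customers"

params = {
    "$select": "CustomerID,Name,Country",  # project only the fields needed downstream
    "$filter": "Country eq 'RO'",          # server-side filtering
    "$top": "10",                          # limit the page size
}

response = requests.get(BASE_URL, params=params,
                        headers={"Accept": "application/json"}, timeout=30)
response.raise_for_status()

# OData v2 wraps results in {"d": {"results": [...]}}; v4 uses {"value": [...]}.
payload = response.json()
rows = payload.get("d", {}).get("results", payload.get("value", []))
for row in rows:
    print(row.get("CustomerID"), row.get("Name"))
```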
47

Duan, Ran, Lin Gao, Yong Gao, Yuxuan Hu, Han Xu, Mingfeng Huang, Kuo Song, et al. "Evaluation and comparison of multi-omics data integration methods for cancer subtyping." PLOS Computational Biology 17, no. 8 (August 12, 2021): e1009224. http://dx.doi.org/10.1371/journal.pcbi.1009224.

Abstract:
Computational integrative analysis has become a significant approach in the data-driven exploration of biological problems. Many integration methods for cancer subtyping have been proposed, but evaluating these methods has become a complicated problem due to the lack of gold standards. Moreover, questions of practical importance remain to be addressed regarding the impact of selecting appropriate data types and combinations on the performance of integrative studies. Here, we constructed three classes of benchmarking datasets of nine cancers in TCGA by considering all the eleven combinations of four multi-omics data types. Using these datasets, we conducted a comprehensive evaluation of ten representative integration methods for cancer subtyping in terms of accuracy measured by combining both clustering accuracy and clinical significance, robustness, and computational efficiency. We subsequently investigated the influence of different omics data on cancer subtyping and the effectiveness of their combinations. Refuting the widely held intuition that incorporating more types of omics data always produces better results, our analyses showed that there are situations where integrating more omics data negatively impacts the performance of integration methods. Our analyses also suggested several effective combinations for most cancers under our studies, which may be of particular interest to researchers in omics data analysis.
48

Farghaly, Karim, F. H. Abanda, Christos Vidalakis, and Graham Wood. "BIM-linked data integration for asset management." Built Environment Project and Asset Management 9, no. 4 (September 9, 2019): 489–502. http://dx.doi.org/10.1108/bepam-11-2018-0136.

Abstract:
Purpose The purpose of this paper is to investigate the transfer of information from the building information modelling (BIM) models to either conventional or advanced asset management platforms using Linked Data. To achieve this aim, a process for generating Linked Data in the asset management context and its integration with BIM data is presented. Design/methodology/approach The research design employs a participatory action research (PAR) approach. The PAR approach utilized two qualitative data collection methods, namely; focus group and interviews to identify and evaluate the required standards for the mapping of different domains. Also prototyping which is an approach of Software Development Methodology is utilized to develop the ontologies and Linked Data. Findings The proposed process offers a comprehensive description of the required standards and classifications in construction domain, related vocabularies and object-oriented links to ensure the effective data integration between different domains. Also the proposed process demonstrates the different stages, tools, best practices and guidelines to develop Linked Data, armed with a comprehensive use case Linked Data generation about building assets that consume energy. Originality/value The Linked Data generation and publications in the domain of AECO is still in its infancy and it also needs methodological guidelines to support its evolution towards maturity in its processes and applications. This research concentrates on the Linked Data applications with BIM to link across domains where few studies have been conducted.
49

Nurhendratno, Slamet Sudaryanto, and Sudaryanto Sudaryanto. "DATA INTEGRATION MODEL DESIGN FOR SUPPORTING DATA CENTER PATIENT SERVICES DISTRIBUTED INSURANCE PURCHASE WITH VIEW BASED DATA INTEGRATION." Computer Engineering, Science and System Journal 3, no. 2 (August 1, 2018): 162. http://dx.doi.org/10.24114/cess.v3i2.8895.

Abstract:
Data integration is an important step in combining information from multiple sources. The problem is how to find and optimally combine data from scattered sources that are heterogeneous and have semantically meaningful interconnections. The heterogeneity of data sources results from a number of factors, including databases stored in different formats, different software and hardware used for database storage systems, and designs based on different semantic data models (Katsis & Papakonstantiou, 2009; Ziegler & Dittrich, 2004). There are currently two approaches to data integration, Global as View (GAV) and Local as View (LAV); each has different advantages and limitations, so proper analysis is needed before applying either. Among the major factors to be considered for efficient and effective integration of heterogeneous data sources are an understanding of the type and structure of the source data (the source schema) and the type of view that results from the integration (the target schema). The result of the integration can be presented as a single global view or as a variety of other views, and the approach for integrating structured sources differs from the approach for unstructured or semi-structured sources. A schema mapping is a declarative specification that describes the relationship between the source schema and the target schema; it is expressed in logical formulas that support data interoperability, data exchange and data integration. In this paper, establishing a patient referral data center requires the integration of data originating from a number of different health facilities, and hence the design of a schema mapping system (to support optimization). The data center, as the target schema, draws on various referral service units as source schemas whose data is structured and independent, so that the structured data sources can be integrated into a unified view (the data center) with equivalent query rewriting. The data center as a global schema requires a "mediator" that maintains the global schema and the mappings between the global and local schemas. Since the data center under Global as View (GAV) tends to be a single, unified view, an integration facility is needed for it to be effective across the various source schemas. This "Pemadu" (mediator) facility is a declarative mapping language that allows each of the various source schemas to be linked specifically to the data center, so that equivalent query rewriting can be applied in the context of query optimization and the maintenance of physical data independence. Keywords: Global as View (GAV), Local as View (LAV), source schema, schema mapping
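The abstract contrasts the Global-as-View (GAV) and Local-as-View (LAV) styles of schema mapping. As a compact reminder of the difference, here is an illustrative sketch in which the data-center schema is a single patient-referral view; the relation and attribute names are hypothetical and the mappings are deliberately oversimplified, so this is a conceptual illustration rather than the system described in the paper.

```python
# Two source schemas from different health facilities (hypothetical).
clinic_a = [("A-17", "Maria", "cardiology")]           # (local_id, name, referred_to)
hospital_b = [("Maria", "2024-03-01", "cardiology")]   # (patient_name, referral_date, department)

# GAV: the global relation referral(name, department) is DEFINED as a query over the sources,
# so answering a global query is just unfolding that definition.
def referral_gav():
    rows = [(name, dept) for _, name, dept in clinic_a]
    rows += [(name, dept) for name, _, dept in hospital_b]
    return rows

# LAV: each source is instead described as a view over the global schema; answering a global
# query then requires rewriting it in terms of the available views (sketched here as strings only).
lav_views = {
    "clinic_a": "SELECT name, department FROM referral",    # what clinic_a can contribute
    "hospital_b": "SELECT name, department FROM referral",
}

print(referral_gav())  # unified data-center view under the GAV reading
```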
50

El Yamani, Siham, Rafika Hajji, and Roland Billen. "IFC-CityGML Data Integration for 3D Property Valuation." ISPRS International Journal of Geo-Information 12, no. 9 (August 25, 2023): 351. http://dx.doi.org/10.3390/ijgi12090351.

Abstract:
The accurate assessment of property value in complex and increasingly high-rise urban environments is a significant challenge. Previous research has identified property value as a composite of indoor elements, such as volume and height, and 3D simulations of the outdoor environment, including variables such as view, noise, and pollution. These simulations have so far been performed mainly in a taxation context; no previous work has addressed simulation for property valuation. In this paper, we propose an IFC-CityGML data integration approach for property valuation and develop a workflow based on IFC-CityGML 3.0 to simulate and model 3D property variables at the Level of Information Need. We evaluate this approach by testing it on two indoor variables, indoor daylight and property unit cost. Our proposed approach aims to improve the accuracy of property valuation by integrating data from indoor and outdoor environments and by providing a standardized and efficient workflow for property valuation modelling using IFC and CityGML. Our approach represents a solid basis for future work toward a 3D property valuation extension.