Journal articles on the topic "Data integration"

Follow this link to see other types of publications on the topic: Data integration.

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles

Choose a source type:

Consult the top 50 journal articles for your research on the topic "Data integration".

Next to every source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Arputhamary, B., and L. Arockiam. "Data Integration in Big Data Environment". Bonfring International Journal of Data Mining 5, no. 1 (February 10, 2015): 01–05. http://dx.doi.org/10.9756/bijdm.8001.

Full text
2

Vaishnawi, Chittamuru, and Bhuvana J. "Renewable Energy Integration in Cloud Data Centers". International Journal of Research Publication and Reviews 5, no. 3 (March 9, 2024): 2346–54. http://dx.doi.org/10.55248/gengpi.5.0324.0737.

Full text
3

Olmsted, Aspen. "Heterogeneous system integration data integration guarantees". Journal of Computational Methods in Sciences and Engineering 17 (January 19, 2017): S85–S94. http://dx.doi.org/10.3233/jcm-160682.

Full text
4

CALVANESE, DIEGO, GIUSEPPE DE GIACOMO, MAURIZIO LENZERINI, DANIELE NARDI, and RICCARDO ROSATI. "DATA INTEGRATION IN DATA WAREHOUSING". International Journal of Cooperative Information Systems 10, no. 03 (September 2001): 237–71. http://dx.doi.org/10.1142/s0218843001000345.

Full text
Abstract
Information integration is one of the most important aspects of a Data Warehouse. When data passes from the sources of the application-oriented operational environment to the Data Warehouse, possible inconsistencies and redundancies should be resolved, so that the warehouse is able to provide an integrated and reconciled view of data of the organization. We describe a novel approach to data integration in Data Warehousing. Our approach is based on a conceptual representation of the Data Warehouse application domain, and follows the so-called local-as-view paradigm: both source and Data Warehouse relations are defined as views over the conceptual model. We propose a technique for declaratively specifying suitable reconciliation correspondences to be used in order to solve conflicts among data in different sources. The main goal of the method is to support the design of mediators that materialize the data in the Data Warehouse relations. Starting from the specification of one such relation as a query over the conceptual model, a rewriting algorithm reformulates the query in terms of both the source relations and the reconciliation correspondences, thus obtaining a correct specification of how to load the data in the materialized view.
5

NASSIRI, Hassana. "Data Model Integration". International Journal of New Computer Architectures and their Applications 7, no. 2 (2017): 45–49. http://dx.doi.org/10.17781/p002327.

Full text
6

Miller, Renée J. "Open data integration". Proceedings of the VLDB Endowment 11, no. 12 (August 2018): 2130–39. http://dx.doi.org/10.14778/3229863.3240491.

Full text
7

Dong, Xin Luna, and Divesh Srivastava. "Big data integration". Proceedings of the VLDB Endowment 6, no. 11 (August 27, 2013): 1188–89. http://dx.doi.org/10.14778/2536222.2536253.

Full text
8

Dong, Xin Luna, and Divesh Srivastava. "Big Data Integration". Synthesis Lectures on Data Management 7, no. 1 (February 15, 2015): 1–198. http://dx.doi.org/10.2200/s00578ed1v01y201404dtm040.

Full text
9

Vargas-Vera, Maria. "Data Integration Framework". International Journal of Knowledge Society Research 7, no. 1 (January 2016): 99–112. http://dx.doi.org/10.4018/ijksr.2016010107.

Full text
Abstract
This paper presents a proposal for a data integration framework. The purpose of the framework is to automatically locate records of participants from the ALSPAC database (Avon Longitudinal Study of Parents and Children) within its counterpart GPRD database (General Practice Research Database). The ALSPAC database is a collection of data on children and parents from before birth to late puberty. This collection contains several variables of interest for clinical researchers, but the focus here is on asthma, as a gold standard for the evaluation of asthma has been produced by a clinical researcher. The main component of the framework is a module called Mapper, which locates similar records and performs record linkage. The Mapper contains a library of similarity measures such as Jaccard, Jaro-Winkler, Monge-Elkan, MatchScore, Levenshtein, and TF-IDF similarity. Finally, the author evaluates the approach on the quality of the mappings.
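To make the record-linkage idea concrete, here is a minimal sketch in Python using two hypothetical participant records with invented fields; it covers only two of the measures named above (a token-set Jaccard and an edit-distance score) and is an illustration, not the Mapper described in the paper.

# Toy record-linkage scorer; field names, records, and the averaging rule
# are assumptions for illustration only.

def jaccard(a: str, b: str) -> float:
    """Jaccard similarity over whitespace tokens."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta and not tb:
        return 1.0
    return len(ta & tb) / len(ta | tb)

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    if len(a) < len(b):
        a, b = b, a
    previous = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        current = [i]
        for j, cb in enumerate(b, start=1):
            current.append(min(previous[j] + 1,                 # deletion
                               current[j - 1] + 1,              # insertion
                               previous[j - 1] + (ca != cb)))   # substitution
        previous = current
    return previous[-1]

def similarity(rec_a: dict, rec_b: dict) -> float:
    """Average a token-set measure on names and an edit-based measure on dates."""
    name_sim = jaccard(rec_a["name"], rec_b["name"])
    d1, d2 = rec_a["birth_date"], rec_b["birth_date"]
    date_sim = 1 - levenshtein(d1, d2) / max(len(d1), len(d2), 1)
    return (name_sim + date_sim) / 2

if __name__ == "__main__":
    a = {"name": "Jane A. Smith", "birth_date": "1994-03-07"}
    b = {"name": "Smith Jane", "birth_date": "1994-03-17"}
    print(f"match score = {similarity(a, b):.2f}")  # ~0.78; link if above a chosen threshold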
10

Tang, Lin. "Genomics data integration". Nature Methods 20, no. 1 (January 2023): 34. http://dx.doi.org/10.1038/s41592-022-01736-4.

Full text
11

Slater, Ted, Christopher Bouton, and Enoch S. Huang. "Beyond data integration". Drug Discovery Today 13, no. 13-14 (July 2008): 584–89. http://dx.doi.org/10.1016/j.drudis.2008.01.008.

Full text
12

Youngmann, Brit, Michael Cafarella, Babak Salimi, and Anna Zeng. "Causal Data Integration". Proceedings of the VLDB Endowment 16, no. 10 (June 2023): 2659–65. http://dx.doi.org/10.14778/3603581.3603602.

Full text
Abstract
Causal inference is fundamental to empirical scientific discoveries in natural and social sciences; however, in the process of conducting causal inference, data management problems can lead to false discoveries. Two such problems are (i) not having all attributes required for analysis, and (ii) misidentifying which attributes are to be included in the analysis. Analysts often only have access to partial data, and they critically rely on (often unavailable or incomplete) domain knowledge to identify attributes to include for analysis, which is often given in the form of a causal DAG. We argue that data management techniques can surmount both of these challenges. In this work, we introduce the Causal Data Integration (CDI) problem, in which unobserved attributes are mined from external sources and a corresponding causal DAG is automatically built. We identify key challenges and research opportunities in designing a CDI system, and present a system architecture for solving the CDI problem. Our preliminary experimental results demonstrate that solving CDI is achievable and pave the way for future research.
13

Bakshi, Waseem Jeelani, Rana Hashmy, Majid Zaman, and Muheet Ahmed Butt. "Logical Data Integration Model for the Integration of Data Repositories". International Journal of Database Theory and Application 11, no. 1 (March 31, 2018): 21–28. http://dx.doi.org/10.14257/ijdta.2018.11.1.03.

Full text
14

López de Maturana, Evangelina, Lola Alonso, Pablo Alarcón, Isabel Adoración Martín-Antoniano, Silvia Pineda, Lucas Piorno, M. Luz Calle, and Núria Malats. "Challenges in the Integration of Omics and Non-Omics Data". Genes 10, no. 3 (March 20, 2019): 238. http://dx.doi.org/10.3390/genes10030238.

Full text
Abstract
Omics data integration is already a reality. However, few omics-based algorithms show enough predictive ability to be implemented into clinics or public health domains. Clinical/epidemiological data tend to explain most of the variation of health-related traits, and its joint modeling with omics data is crucial to increase the algorithm’s predictive ability. Only a small number of published studies performed a “real” integration of omics and non-omics (OnO) data, mainly to predict cancer outcomes. Challenges in OnO data integration regard the nature and heterogeneity of non-omics data, the possibility of integrating large-scale non-omics data with high-throughput omics data, the relationship between OnO data (i.e., ascertainment bias), the presence of interactions, the fairness of the models, and the presence of subphenotypes. These challenges demand the development and application of new analysis strategies to integrate OnO data. In this contribution we discuss different attempts of OnO data integration in clinical and epidemiological studies. Most of the reviewed papers considered only one type of omics data set, mainly RNA expression data. All selected papers incorporated non-omics data in a low-dimensionality fashion. The integrative strategies used in the identified papers adopted three modeling methods: Independent, conditional, and joint modeling. This review presents, discusses, and proposes integrative analytical strategies towards OnO data integration.
15

Wu, Cen, Fei Zhou, Jie Ren, Xiaoxi Li, Yu Jiang, and Shuangge Ma. "A Selective Review of Multi-Level Omics Data Integration Using Variable Selection". High-Throughput 8, no. 1 (January 18, 2019): 4. http://dx.doi.org/10.3390/ht8010004.

Full text
Abstract
High-throughput technologies have been used to generate a large amount of omics data. In the past, single-level analysis has been extensively conducted where the omics measurements at different levels, including mRNA, microRNA, CNV and DNA methylation, are analyzed separately. As the molecular complexity of disease etiology exists at all different levels, integrative analysis offers an effective way to borrow strength across multi-level omics data and can be more powerful than single level analysis. In this article, we focus on reviewing existing multi-omics integration studies by paying special attention to variable selection methods. We first summarize published reviews on integrating multi-level omics data. Next, after a brief overview on variable selection methods, we review existing supervised, semi-supervised and unsupervised integrative analyses within parallel and hierarchical integration studies, respectively. The strength and limitations of the methods are discussed in detail. No existing integration method can dominate the rest. The computation aspects are also investigated. The review concludes with possible limitations and future directions for multi-level omics data integration.
16

Todorova, Violeta, Veska Gancheva, and Valeri Mladenov. "COVID-19 Medical Data Integration Approach". MOLECULAR SCIENCES AND APPLICATIONS 2 (July 18, 2022): 102–6. http://dx.doi.org/10.37394/232023.2022.2.11.

Full text
Abstract
The need for automated methods of extracting knowledge from data arises from the accumulation of large amounts of data. This paper presents a conceptual model for integrating and processing medical data in three layers comprising a total of six phases: a model for integrating, filtering, sorting, and aggregating Covid-19 data. A medical data integration workflow was designed, including steps for data integration, filtering, and sorting, and was applied to Covid-19 medical data from the clinical records of 20,400 potential patients.
17

Colleoni Couto, Julia, Olimar Teixeira Borges, and Duncan Dubugras Ruiz. "Data integration in a Hadoop-based data lake: A bioinformatics case". International Journal of Data Mining & Knowledge Management Process 12, no. 4 (July 31, 2022): 1–24. http://dx.doi.org/10.5121/ijdkp.2022.12401.

Full text
Abstract
When we work in a data lake, data integration is not easy, mainly because the data is usually stored in raw format. Manually performing data integration is a time-consuming task that requires the supervision of a specialist, who can make mistakes or fail to see the optimal point for data integration between two or more datasets. This paper presents a model for heterogeneous in-memory data integration in a Hadoop-based data lake based on a top-k set similarity approach. Our main contribution is the process of ingesting, storing, processing, integrating, and visualizing the data integration points. The algorithm for data integration is based on the overlap coefficient, since it presented better results when compared with the set similarity metrics Jaccard, Sørensen-Dice, and the Tversky index. We tested our model by applying it to eight bioinformatics-domain datasets. Our model presents better results than an analysis by a specialist, and we expect it can be reused for datasets from other domains.
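As an illustration of the comparison described above, the following minimal Python sketch computes the four set-similarity measures on two invented column-value sets; it is not the authors' code, and the gene names are placeholders.

# Set-similarity measures on the distinct values of two hypothetical columns.

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b)

def sorensen_dice(a: set, b: set) -> float:
    return 2 * len(a & b) / (len(a) + len(b))

def overlap(a: set, b: set) -> float:
    # Overlap coefficient: intersection over the smaller set; high when one
    # column's domain is contained in the other's.
    return len(a & b) / min(len(a), len(b))

def tversky(a: set, b: set, alpha: float = 0.5, beta: float = 0.5) -> float:
    inter = len(a & b)
    return inter / (inter + alpha * len(a - b) + beta * len(b - a))

if __name__ == "__main__":
    genes_a = {"BRCA1", "TP53", "EGFR", "MYC"}
    genes_b = {"TP53", "EGFR"}  # subset of genes_a
    for name, fn in [("jaccard", jaccard), ("dice", sorensen_dice),
                     ("overlap", overlap), ("tversky", tversky)]:
        print(f"{name:8s} {fn(genes_a, genes_b):.2f}")
    # overlap = 1.00 while jaccard = 0.50, which is why a containment-style
    # measure can surface join candidates that Jaccard under-ranks.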
18

Ruíz-Ceniceros, Juan Antonio, José Alfonso Aguilar-Calderón, Carolina Tripp-Barba, and Aníbal Zaldívar-Colado. "Dynamic Canonical Data Model: An Architecture Proposal for the External and Data Loose Coupling for the Integration of Software Units". Applied Sciences 13, no. 19 (October 7, 2023): 11040. http://dx.doi.org/10.3390/app131911040.

Full text
Abstract
Integrating third-party and legacy systems has become a critical necessity for companies, driven by the need to exchange information with various entities such as banks, suppliers, customers, and partners. Ensuring data integrity, keeping integrations up-to-date, reducing transaction risks, and preventing data loss are all vital aspects of this complex task. Achieving success in this endeavor, which involves both technological and business challenges, necessitates the implementation of a well-suited architecture. This article introduces an architecture known as the Dynamic Canonical Data Model through Agnostic Messages. The proposal addresses the integration of loosely coupled software units, mainly when dealing with internal and external data integration. To illustrate the architecture’s components, a case study from the Mexican Logistics Company Paquetexpress is presented. This organization manages integrations across several platforms, including SalesForce and Oracle ERP, with clients like Amazon, Mercado Libre, Grainger, and Afull. Each of these incurs costs ranging from USD 30,000 to USD 36,000, with consultants from firms such as Quanam, K&F, TSOL, and TekSi playing a crucial role in their execution. This consumes much time, making maintenance costs considerably high when clients request data transmission or type changes, particularly when utilizing tools like Oracle Integration Cloud (OIC) or Oracle Service Bus (OSB). The article provides insights into the architecture’s design and implementation in a real-world scenario within the delivery company. The proposed architecture significantly reduces integration and maintenance times and costs while maximizing scalability and encouraging the reuse of components. The source code for this implementation has been registered in the National Registry of Copyrights in Mexico.
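As a rough illustration of the canonical-data-model idea (not the paper's actual Dynamic Canonical Data Model, nor Paquetexpress's schemas), the Python sketch below maps source-specific records from two invented systems into one shared message shape, so consumers only ever see the canonical form.

from dataclasses import dataclass
from typing import Any, Dict

@dataclass
class CanonicalShipment:
    # Shared message shape every consumer reads; field names are invented.
    order_id: str
    customer: str
    weight_kg: float
    source_system: str

def from_crm(record: Dict[str, Any]) -> CanonicalShipment:
    # Adapter for a hypothetical CRM export.
    return CanonicalShipment(order_id=record["OrderNo"],
                             customer=record["AccountName"],
                             weight_kg=float(record["WeightKg"]),
                             source_system="crm")

def from_erp(record: Dict[str, Any]) -> CanonicalShipment:
    # Adapter for a hypothetical ERP export with different names and units.
    return CanonicalShipment(order_id=str(record["id"]),
                             customer=record["client"],
                             weight_kg=record["weight_g"] / 1000.0,
                             source_system="erp")

if __name__ == "__main__":
    crm_row = {"OrderNo": "A-1001", "AccountName": "Acme", "WeightKg": "2.5"}
    erp_row = {"id": 77, "client": "Acme", "weight_g": 2500}
    for shipment in (from_crm(crm_row), from_erp(erp_row)):
        print(shipment)
    # Consumers depend only on CanonicalShipment, so adding a new source
    # means writing one more adapter instead of N point-to-point mappings.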
19

Saloni Kumari. "Data integration: “Seamless data harmony: The art and science of effective data integration”". International Journal of Engineering & Technology 12, no. 2 (October 4, 2023): 26–30. http://dx.doi.org/10.14419/ijet.v12i2.32335.

Full text
Abstract
The idea of data integration has emerged as a key strategy in today's data-driven environment, where data is supplied from diverse and heterogeneous sources. This article explores the relevance, methodology, difficulties, and transformative possibilities of data integration, delving into its multidimensional world. Data integration serves as the cornerstone for well-informed decision-making by connecting heterogeneous datasets and fostering unified insights. The article gives readers a preview of an in-depth investigation into data integration, illuminating its technical complexities and strategic ramifications for companies and organizations looking to maximize the value of their data assets.
20

Chromiak, Michal, and Marcin Grabowiecki. "Heterogeneous Data Integration Architecture-Challenging Integration Issues". Annales Universitatis Mariae Curie-Sklodowska, sectio AI – Informatica 15, no. 1 (January 1, 2015): 7. http://dx.doi.org/10.17951/ai.2015.15.1.7-11.

Full text
Abstract
As of today, most data processing systems have to deal with large amounts of data originating from numerous sources. Data sources almost always differ in their purpose of existence; thus the model, data processing engine, and technology differ considerably. Due to the current trend toward systems fusion, there is a growing demand for data to be presented in a common way regardless of its legacy. Many systems have been devised in response to such integration needs. However, present data integration systems are mostly dedicated solutions that bring constraints and issues when considered in general. In this paper we focus on present solutions for data integration and their flaws originating from their architecture or design concepts, and we present an abstract and general approach that could be introduced in response to the existing issues. System integration is considered out of scope for this paper; we focus particularly on efficient data integration.
21

Kasyanova, Nataliia, Serhii Koverha, and Vladyslav Okhrimenko. "УПРАВЛІННЯ ТА ІНТЕГРАЦІЯ ДАНИХ В УМОВАХ ЦИФРОВІЗАЦІЇ ЕКОНОМІЧНИХ ПРОЦЕСІВ: ВИКЛИКИ ТА ПЕРСПЕКТИВИ". Economical 1, no. 27 (2023): 71–87. http://dx.doi.org/10.31474/1680-0044-2023-1(27)-71-87.

Full text
Abstract
Objective. The purpose of the article is to clarify the theoretical and methodological aspects, analyze data management methods in the context of digitalization of economic processes, and choose the priority method of integrating corporate information systems depending on the tasks to be solved in each case. Methods. The paper uses a set of data integration methods: application integration method (EAI), method of extracting data from external sources, transforming them in the appropriate structure and forming data warehouses (ETL); method of real-time integration of incomparable data types from different sources (EI). Results. The paper proves that data management includes the formation and analysis of data architecture, integration of the database management system; data security, identification, segregation and storage of data sources. Data integration refers to the process of combining data from different sources into a single, holistic system and aims to provide access to a complete, updated and easy-to-analyze data set. Data integration is especially important in the areas of e-commerce, logistics and supply chains, where it is necessary to combine data from different sources to optimize processes, in the field of business intelligence, where processing large amounts of data and combining them allows you to identify useful information and certain patterns. Integration of enterprise information systems is the process of combining several IS and individual applications into a single, holistic system that works together to achieve a common goal, aimed at increasing the efficiency of the company, reducing duplication of efforts and streamlining processes. The main functional components of a corporate information system are identified: Business Process Automation IS, Financial Management IS, Customer Relationship Management IS, Supply Chain Management IS, Human Resources Management IS, Business Intelligence IS, Communication IS, and Data Security and Protection IS. Within a corporate information system, several narrowly focused software products operate simultaneously, capable of successfully solving a certain range of tasks. At the same time, some of them may not involve interaction with other information systems. The main approaches to data integration include universal access to data and data warehouses. Universal access technologies allow for equal access to data from different information systems, including on the basis of the concept of data warehouses - a database containing data collected from databases of different information subsystems for further analysis and use. It is proved that the most holistic approach to the integration of information systems is integration at the level of business processes. As part of the integration of business processes, there is an integration of applications, data integration, and integration of people involved in this business process. The article substantiates the feasibility of using three methods of big data management and integration: integration of corporate applications, integration of corporate information, and software for obtaining, transforming, and downloading data. As a result of comparing integration methods and building a generalized scheme for integrating heterogeneous IS, a number of situations have been identified in which the use of a specific integration method is preferable or the only possible one. The scientific novelty of the study is to identify the problems of integrating big data and corporate information systems. 
Approaches to choosing a method for integrating data and applications based on a generalized scheme for integrating heterogeneous information systems are proposed. Practical significance. The results of the analysis allow optimizing the methods of data integration within a corporate information system. The principles of integration inherent in the considered methods are used to solve a wide range of tasks: from real-time integration to batch integration and application integration. Implementation of the proposed methods of big data integration will make information more transparent; obtain additional detailed information about the efficiency of production and technological equipment, which stimulates innovation and improves the quality of the final product; use more efficient, accurate analytics to minimize risks and identify problems in advance before catastrophic consequences; more effectively manage supply chains, forecast demand, carry out comprehensive business planning, organize cooperation
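For readers unfamiliar with the ETL method named in the abstract, the sketch below shows a minimal extract-transform-load pass in Python; the file name, table layout, and cleaning rules are invented for illustration and are not taken from the article.

# Minimal ETL sketch: extract from a CSV export, transform, load into SQLite.
import csv
import sqlite3

def extract(path: str) -> list[dict]:
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

def transform(rows: list[dict]) -> list[tuple]:
    cleaned = []
    for row in rows:
        # Normalise fields coming from a heterogeneous source.
        customer = row.get("customer_id", "").strip()
        amount = float(row.get("amount", "0") or 0)
        if customer:                      # drop rows without a business key
            cleaned.append((customer, amount))
    return cleaned

def load(records: list[tuple], db_path: str = "warehouse.db") -> None:
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS sales (customer_id TEXT, amount REAL)")
    con.executemany("INSERT INTO sales VALUES (?, ?)", records)
    con.commit()
    con.close()

if __name__ == "__main__":
    # "sales_export.csv" is a hypothetical source file.
    load(transform(extract("sales_export.csv")))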
22

Nurhendratno, Slamet Sudaryanto, and Sudaryanto Sudaryanto. "DATA INTEGRATION MODEL DESIGN FOR SUPPORTING DATA CENTER PATIENT SERVICES DISTRIBUTED INSURANCE PURCHASE WITH VIEW BASED DATA INTEGRATION". Computer Engineering, Science and System Journal 3, no. 2 (August 1, 2018): 162. http://dx.doi.org/10.24114/cess.v3i2.8895.

Full text
Abstract
Data integration is an important step in integrating information from multiple sources. The problem is how to find and optimally combine data from scattered, heterogeneous data sources that are semantically interconnected. The heterogeneity of data sources results from a number of factors, including databases stored in different formats, different software and hardware for database storage systems, and designs based on different semantic data models (Katsis & Papakonstantinou, 2009; Ziegler & Dittrich, 2004). There are currently two approaches to data integration, Global as View (GAV) and Local as View (LAV), but each has different advantages and limitations, so proper analysis is needed before applying either one. Major factors to consider for efficient and effective integration of heterogeneous data sources are an understanding of the type and structure of the source data (source schema) and of the view produced by the integration (target schema). The result of the integration can be presented as a single global view or as a variety of other views, so the approach for integrating structured sources differs from the approach for unstructured or semi-structured sources. Schema mapping is a specific declaration that describes the relationship between the source schema and the target schema; it is expressed in logical formulas that support data interoperability, data exchange, and data integration. In this paper, establishing a patient referral data center requires integrating data derived from a number of different health facilities, so a schema mapping system must be designed (to support optimization). The data center, as the target schema, draws on the various referral service units as source schemas whose data is structured and independent, so the structured data sources can be integrated into a unified view (the data center) through equivalent query rewriting. The data center as a global schema serving as the target schema requires a "mediator" that maintains the global schema and the mappings between the global and local schemas. Since the data center follows the Global as View (GAV) approach, it tends to be a single, unified view, so an integration facility ("Pemadu") is needed for it to be effective across the various source schemas. This facility is a declarative mapping language that makes it possible to link each of the various source schemas to the data center. Equivalent query rewriting is therefore well suited to query optimization and to maintaining physical data independence. Keywords: Global as View (GAV), Local as View (LAV), source schema, mapping schema
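The GAV/LAV distinction discussed above can be illustrated with a small sketch; the schemas and SQL strings below are invented examples, not the paper's actual data-center design.

# GAV (global-as-view): each global relation is *defined* as a query over the
# sources, so answering a global query is simple view unfolding.
GAV_MAPPING = {
    "patient_visits": """
        SELECT patient_id, visit_date, diagnosis FROM clinic_a.visits
        UNION ALL
        SELECT pid AS patient_id, date AS visit_date, icd10 AS diagnosis
        FROM clinic_b.encounters
    """
}

# LAV (local-as-view): each *source* is described as a view over the global
# schema; answering a query requires rewriting it in terms of these views.
LAV_MAPPING = {
    "clinic_a.visits":     "SELECT patient_id, visit_date, diagnosis FROM patient_visits",
    "clinic_b.encounters": "SELECT patient_id, visit_date, diagnosis FROM patient_visits "
                           "WHERE visit_date >= '2015-01-01'",
}

def answer_with_gav(global_table: str) -> str:
    # With GAV, the mediator simply substitutes the stored definition.
    return GAV_MAPPING[global_table]

if __name__ == "__main__":
    print(answer_with_gav("patient_visits"))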
23

Bernasconi, Anna. "Data quality-aware genomic data integration". Computer Methods and Programs in Biomedicine Update 1 (2021): 100009. http://dx.doi.org/10.1016/j.cmpbup.2021.100009.

Full text
24

Salinas, Sonia Ordonez, and Alba Consuelo Nieto Lemus. "Data Warehouse and Big Data Integration". International Journal of Computer Science and Information Technology 9, no. 2 (April 30, 2017): 01–17. http://dx.doi.org/10.5121/ijcsit.2017.9201.

Full text
25

Bernstein, Philip A. "Data Integration for Data-Intensive Science". OMICS: A Journal of Integrative Biology 15, no. 4 (April 2011): 241. http://dx.doi.org/10.1089/omi.2011.0020.

Full text
26

Lu, James J. "A Data Model for Data Integration". Electronic Notes in Theoretical Computer Science 150, no. 2 (March 2006): 3–19. http://dx.doi.org/10.1016/j.entcs.2005.11.031.

Full text
27

Tsiliki, Georgia, Dimitrios Vlachakis, and Sophia Kossida. "On integrating multi-experiment microarray data". Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 372, no. 2016 (May 28, 2014): 20130136. http://dx.doi.org/10.1098/rsta.2013.0136.

Full text
Abstract
With the extensive use of microarray technology as a potential prognostic and diagnostic tool, the comparison and reproducibility of results obtained from the use of different platforms is of interest. The integration of those datasets can yield more informative results corresponding to numerous datasets and microarray platforms. We developed a novel integration technique for microarray gene-expression data derived by different studies for the purpose of a two-way Bayesian partition modelling which estimates co-expression profiles under subsets of genes and between biological samples or experimental conditions. The suggested methodology transforms disparate gene-expression data on a common probability scale to obtain inter-study-validated gene signatures. We evaluated the performance of our model using artificial data. Finally, we applied our model to six publicly available cancer gene-expression datasets and compared our results with well-known integrative microarray data methods. Our study shows that the suggested framework can relieve the limited sample size problem while reporting high accuracies by integrating multi-experiment data.
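One generic way to place expression values from different studies on a common probability scale is a per-study empirical rank transform, sketched below in Python; this only illustrates the general idea with invented numbers and is not the two-way Bayesian partition model used in the paper.

from typing import Dict, List

def to_probability_scale(values: List[float]) -> List[float]:
    """Map each value to its empirical quantile rank in (0, 1) within its study."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    probs = [0.0] * n
    for rank, idx in enumerate(order, start=1):
        probs[idx] = rank / (n + 1)
    return probs

def integrate(studies: Dict[str, List[float]]) -> Dict[str, List[float]]:
    """Transform every study separately so values become comparable across platforms."""
    return {name: to_probability_scale(vals) for name, vals in studies.items()}

if __name__ == "__main__":
    studies = {
        "platform_a": [5.1, 7.8, 6.2, 9.0],          # log2 intensities (invented)
        "platform_b": [120.0, 85.0, 300.0, 150.0],   # raw counts (invented)
    }
    print(integrate(studies))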
28

Loice Tsinale, Harriet, Samuel Mbugua, and Anthony Luvanda. "ARCHITECTURAL HEALTH DATA STANDARDS AND SEMANTIC INTEROPERABILITY: A COMPREHENSIVE REVIEW IN THE CONTEXT OF INTEGRATING MEDICAL DATA INTO BIG DATA ANALYTICS." International Journal of Engineering Applied Sciences and Technology 8, no. 4 (August 1, 2023): 17–30. http://dx.doi.org/10.33564/ijeast.2023.v08i04.002.

Full text
Abstract
The integration of medical data into Big Data analytics holds significant potential for advancing healthcare practices and research. However, achieving semantic interoperability, wherein data is exchanged and interpreted accurately among diverse systems, is a critical challenge. This study explores the impact of existing architectures on semantic interoperability in the context of integrating medical data into Big Data analytics. The study highlights the complexities involved in integrating medical data from various sources, each using different formats, data models, and vocabularies. Without a strong emphasis on semantic interoperability, data integration efforts can result in misinterpretations, inconsistencies, and errors, adversely affecting patient care and research outcomes. The significance of data standards and ontologies in establishing a common vocabulary and structure for medical data integration is underscored. Additionally, the importance of data mapping and transformation is discussed, as data discrepancies can lead to data loss and incorrect analysis results. The success of integrating medical data into Big Data analytics is heavily reliant on existing architectures that prioritize semantic interoperability. A well-designed architecture addresses data heterogeneity, promotes semantic consistency, and supports data standardization, unlocking the transformative capabilities of medical data analysis for improved healthcare outcomes.
29

Curcin, V., A. Barton, M. M. McGilchrist, H. Bastiaens, A. Andreasson, J. Rossiter, L. Zhao, et al. "Clinical Data Integration Model". Methods of Information in Medicine 54, no. 01 (2015): 16–23. http://dx.doi.org/10.3414/me13-02-0024.

Full text
Abstract
Summary. Introduction: This article is part of the Focus Theme of Methods of Information in Medicine on “Managing Interoperability and Complexity in Health Systems”. Background: Primary care data is the single richest source of routine health care data. However its use, both in research and clinical work, often requires data from multiple clinical sites, clinical trials databases and registries. Data integration and interoperability are therefore of utmost importance. Objectives: TRANSFoRm’s general approach relies on a unified interoperability framework, described in a previous paper. We developed a core ontology for an interoperability framework based on data mediation. This article presents how such an ontology, the Clinical Data Integration Model (CDIM), can be designed to support, in conjunction with appropriate terminologies, biomedical data federation within TRANSFoRm, an EU FP7 project that aims to develop the digital infrastructure for a learning healthcare system in European Primary Care. Methods: TRANSFoRm utilizes a unified structural/terminological interoperability framework, based on the local-as-view mediation paradigm. Such an approach mandates that the global information model describe the domain of interest independently of the data sources to be explored. Following a requirement analysis process, no ontology focusing on primary care research was identified, and thus we designed a realist ontology based on Basic Formal Ontology to support our framework in collaboration with various terminologies used in primary care. Results: The resulting ontology has 549 classes and 82 object properties and is used to support data integration for TRANSFoRm’s use cases. Concepts identified by researchers were successfully expressed in queries using CDIM and pertinent terminologies. As an example, we illustrate how, in TRANSFoRm, the Query Formulation Workbench can capture eligibility criteria in a computable representation based on CDIM. Conclusion: A unified mediation approach to semantic interoperability provides a flexible and extensible framework for all types of interaction between health record systems and research systems. CDIM, as the core ontology of such an approach, enables simplicity and consistency of design across the heterogeneous software landscape and can support the specific needs of EHR-driven phenotyping research using primary care data.
30

Neang, Andrew B., Will Sutherland, Michael W. Beach, and Charlotte P. Lee. "Data Integration as Coordination". Proceedings of the ACM on Human-Computer Interaction 4, CSCW3 (January 5, 2021): 1–25. http://dx.doi.org/10.1145/3432955.

Full text
31

Bertino, E., and E. Ferrari. "XML and data integration". IEEE Internet Computing 5, no. 6 (2001): 75–76. http://dx.doi.org/10.1109/4236.968835.

Full text
32

Di Lorenzo, Giusy, Hakim Hacid, Hye-young Paik, and Boualem Benatallah. "Data integration in mashups". ACM SIGMOD Record 38, no. 1 (June 24, 2009): 59–66. http://dx.doi.org/10.1145/1558334.1558343.

Full text
33

Pineda, Silvia, Daniel G. Bunis, Idit Kosti, and Marina Sirota. "Data Integration for Immunology". Annual Review of Biomedical Data Science 3, no. 1 (July 20, 2020): 113–36. http://dx.doi.org/10.1146/annurev-biodatasci-012420-122454.

Full text
Abstract
Over the last several years, next-generation sequencing and its recent push toward single-cell resolution have transformed the landscape of immunology research by revealing novel complexities about all components of the immune system. With the vast amounts of diverse data currently being generated, and with the methods of analyzing and combining diverse data improving as well, integrative systems approaches are becoming more powerful. Previous integrative approaches have combined multiple data types and revealed ways that the immune system, both as a whole and as individual parts, is affected by genetics, the microbiome, and other factors. In this review, we explore the data types that are available for studying immunology with an integrative systems approach, as well as the current strategies and challenges for conducting such analyses.
34

Kaufman, G. "Pragmatic ECAD Data Integration". ACM SIGDA Newsletter 20, no. 1 (June 1990): 60–81. http://dx.doi.org/10.1145/378886.1062259.

Full text
35

Svensson, A., and J. Holst. "Integration of Navigation Data". Journal of Navigation 48, no. 1 (January 1995): 114–35. http://dx.doi.org/10.1017/s0373463300012558.

Full text
Abstract
This article treats integration of navigation data from a variety of sensors in a submarine using extended Kalman filtering in order to improve the accuracy of position, velocity and heading estimates. The problem has been restricted to planar motion. The measurement system consists of an inertial navigation system, a gyro compass, a passive log, an active log and a satellite navigation system. These subsystems are briefly described and models for the measurement errors are given. Four different extended Kalman filters have been tested by computer simulations. The simulations distinctly show that the passive subsystems alone are insufficient to improve the estimate of the position obtained from the inertial navigation system. A log measuring the velocity relative to the ground or a position-determining system is needed. The improvement depends on the accuracy of the measuring instruments, the extent of time the instrument can be used and which filter is being used. The most complex filter, which contains fourteen states, eight to describe the motion of the submarine and six to describe the measurement system, including a model of the inertial navigation system, works very well.
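The paper's filters are extended Kalman filters with up to fourteen states; as a much simpler illustration of the same predict/update cycle, the sketch below runs a linear Kalman filter with a one-dimensional constant-velocity model over invented position fixes. It assumes NumPy and is not the authors' implementation.

import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition: [position, velocity]
H = np.array([[1.0, 0.0]])                 # we observe position only
Q = 0.01 * np.eye(2)                       # process noise (invented)
R = np.array([[4.0]])                      # measurement noise, e.g. a satellite fix (invented)

x = np.array([[0.0], [1.0]])               # initial state estimate
P = np.eye(2)                              # initial covariance

def kalman_step(x, P, z):
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with measurement z
    y = z - H @ x_pred                     # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

if __name__ == "__main__":
    for z in [1.2, 2.1, 2.9, 4.2, 5.0]:    # noisy position fixes (invented)
        x, P = kalman_step(x, P, np.array([[z]]))
    print("estimated position %.2f, velocity %.2f" % (x[0, 0], x[1, 0]))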
36

Brazhnik, Olga, and John F. Jones. "Anatomy of data integration". Journal of Biomedical Informatics 40, no. 3 (June 2007): 252–69. http://dx.doi.org/10.1016/j.jbi.2006.09.001.

Full text
37

Powell, V. J. H., and A. Acharya. "Disease Prevention: Data Integration". Science 338, no. 6112 (December 6, 2012): 1285–86. http://dx.doi.org/10.1126/science.338.6112.1285-b.

Full text
38

Riedemann, Catharina, and Christian Timm. "Services for data integration". Data Science Journal 2 (2003): 90–99. http://dx.doi.org/10.2481/dsj.2.90.

Full text
39

Kezunovic, M. "Integration of Substation Data". IFAC Proceedings Volumes 44, no. 1 (January 2011): 12861–66. http://dx.doi.org/10.3182/20110828-6-it-1002.02654.

Full text
40

Resnick, Richard J. "Data Integration in Genomics". Biotech Software & Internet Report 1, no. 1-2 (April 2000): 40–43. http://dx.doi.org/10.1089/152791600319268.

Full text
41

Larsen, N., R. Overbeek, S. Pramanik, T. M. Schmidt, E. E. Selkov, O. Strunk, J. M. Tiedje, and J. W. Urbance. "Towards microbial data integration". Journal of Industrial Microbiology and Biotechnology 18, no. 1 (January 1, 1997): 68–72. http://dx.doi.org/10.1038/sj.jim.2900366.

Full text
42

Almeida, Jonas S., Chuming Chen, Robert Gorlitsky, Romesh Stanislaus, Marta Aires-de-Sousa, Pedro Eleutério, João Carriço, et al. "Data integration gets 'Sloppy'". Nature Biotechnology 24, no. 9 (September 1, 2006): 1070–71. http://dx.doi.org/10.1038/nbt0906-1070.

Full text
43

Dong, Xin Luna, Alon Halevy, and Cong Yu. "Data integration with uncertainty". VLDB Journal 18, no. 2 (November 14, 2008): 469–500. http://dx.doi.org/10.1007/s00778-008-0119-9.

Full text
44

Sivertsen, Gunnar. "Data integration in Scandinavia". Scientometrics 106, no. 2 (December 22, 2015): 849–55. http://dx.doi.org/10.1007/s11192-015-1817-x.

Full text
45

Muppa, Naveen. "Enterprise Data Integration architecture". Journal of Artificial Intelligence, Machine Learning and Data Science 2, no. 1 (February 28, 2024): 234–37. http://dx.doi.org/10.51219/jaimld/naveen-muppa/75.

Full text
46

Meyer, Ingo. "Data matters - no service integration without data integration: a transnational learning exercise". International Journal of Integrated Care 21, S1 (September 1, 2021): 28. http://dx.doi.org/10.5334/ijic.icic20545.

Full text
47

Rao, Rohini R. "The role of Domain Ontology in Semantic Data Integration". Indian Journal of Applied Research 3, no. 4 (October 1, 2011): 88–89. http://dx.doi.org/10.15373/2249555x/apr2013/29.

Full text
48

JAMES, Daniel, Raymond LEADBETTER, James LEE, Brendan BURKETT, and David THIEL. "B23 Integration of multiple data sources for swimming biomechanics". Proceedings of the Symposium on sports and human dynamics 2011 (2011): 364–66. http://dx.doi.org/10.1299/jsmeshd.2011.364.

Full text
49

Satish Dingre, Sneha. "Data Integration: Exploring Challenges and Emerging Technologies for Automation". International Journal of Science and Research (IJSR) 12, no. 12 (December 5, 2023): 1395–97. http://dx.doi.org/10.21275/sr231218073311.

Full text
50

Li, Meng Juan, Lian Yin Jia, Jin Guo You, Jia Man Ding, and Hai He Zhou. "Deep Web Data Integration with near Duplicate Free". Advanced Materials Research 756-759 (September 2013): 1855–59. http://dx.doi.org/10.4028/www.scientific.net/amr.756-759.1855.

Full text
Abstract
Deep web data integration has become the center of many research efforts in recent years. Near-duplicate detection is very important for a deep web integration system, yet few studies focus on combining deep web integration and near-duplicate detection. In this paper, we develop an integration system, DWI-ndfree, to solve this problem. The wrapper of DWI-ndfree consists of four parts: the form filler, the navigator, the extractor, and the near-duplicate detector. To find near-duplicate records, we propose the efficient algorithm CheckNearDuplicate. DWI-ndfree can integrate deep web data free of near duplicates and has been used to execute several web extraction and integration tasks efficiently.