Follow this link to see other types of publications on this topic: Data integration.

Journal articles on the topic "Data integration"

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic "Data integration".

Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read its abstract online, if it is available in the metadata.

Browse journal articles from a wide variety of scientific fields and compile an accurate bibliography.

1

Arputhamary, B., and L. Arockiam. "Data Integration in Big Data Environment". Bonfring International Journal of Data Mining 5, no. 1 (February 10, 2015): 1–5. http://dx.doi.org/10.9756/bijdm.8001.

2

Samrat Medavarapu, Sachin. "XML-Based Data Integration". International Journal of Science and Research (IJSR) 13, no. 8 (August 5, 2024): 1984–86. http://dx.doi.org/10.21275/sr24810074326.

3

Vaishnawi, Chittamuru, and Bhuvana J. "Renewable Energy Integration in Cloud Data Centers". International Journal of Research Publication and Reviews 5, no. 3 (March 9, 2024): 2346–54. http://dx.doi.org/10.55248/gengpi.5.0324.0737.

4

Olmsted, Aspen. "Heterogeneous system integration data integration guarantees". Journal of Computational Methods in Sciences and Engineering 17 (January 19, 2017): S85–S94. http://dx.doi.org/10.3233/jcm-160682.

5

Calvanese, Diego, Giuseppe De Giacomo, Maurizio Lenzerini, Daniele Nardi, and Riccardo Rosati. "Data Integration in Data Warehousing". International Journal of Cooperative Information Systems 10, no. 3 (September 2001): 237–71. http://dx.doi.org/10.1142/s0218843001000345.

Abstract:
Information integration is one of the most important aspects of a Data Warehouse. When data passes from the sources of the application-oriented operational environment to the Data Warehouse, possible inconsistencies and redundancies should be resolved, so that the warehouse is able to provide an integrated and reconciled view of data of the organization. We describe a novel approach to data integration in Data Warehousing. Our approach is based on a conceptual representation of the Data Warehouse application domain, and follows the so-called local-as-view paradigm: both source and Data Warehouse relations are defined as views over the conceptual model. We propose a technique for declaratively specifying suitable reconciliation correspondences to be used in order to solve conflicts among data in different sources. The main goal of the method is to support the design of mediators that materialize the data in the Data Warehouse relations. Starting from the specification of one such relation as a query over the conceptual model, a rewriting algorithm reformulates the query in terms of both the source relations and the reconciliation correspondences, thus obtaining a correct specification of how to load the data in the materialized view.
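The local-as-view paradigm described in this abstract can be illustrated with a toy sketch. The relation names, sources, and tuples below are invented, and no containment-based rewriting is performed; this only shows the core idea of sources declared as views over a conceptual relation.

```python
# Toy illustration of the local-as-view (LAV) idea: each source is
# declared as a view (a predicate) over a conceptual relation, and a
# query over the conceptual model is answered from the source tuples.
# Relation, source names, and data are invented for illustration.

# Conceptual relation: Customer(name, country)
SOURCES = {
    "crm_eu": {"view": lambda t: t["country"] in {"DE", "FR", "IT"},
               "tuples": [{"name": "Anna", "country": "DE"},
                          {"name": "Luc",  "country": "FR"}]},
    "crm_us": {"view": lambda t: t["country"] == "US",
               "tuples": [{"name": "Joe", "country": "US"}]},
}

def answer(query_pred):
    """Answer a query over the conceptual relation by unioning source
    tuples that satisfy the query predicate. Conflicts across sources
    would need reconciliation correspondences, as in the paper; here
    we simply deduplicate identical tuples."""
    seen, result = set(), []
    for src in SOURCES.values():
        for t in src["tuples"]:
            key = (t["name"], t["country"])
            if query_pred(t) and key not in seen:
                seen.add(key)
                result.append(t)
    return result

eu_customers = answer(lambda t: t["country"] in {"DE", "FR", "IT"})
```

A real LAV system would reformulate the query in terms of the view definitions before touching any data; this sketch only shows the declarative relationship between sources and the conceptual model.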
6

Nassiri, Hassana. "Data Model Integration". International Journal of New Computer Architectures and their Applications 7, no. 2 (2017): 45–49. http://dx.doi.org/10.17781/p002327.

7

Miller, Renée J. "Open data integration". Proceedings of the VLDB Endowment 11, no. 12 (August 2018): 2130–39. http://dx.doi.org/10.14778/3229863.3240491.

8

Dong, Xin Luna, and Divesh Srivastava. "Big data integration". Proceedings of the VLDB Endowment 6, no. 11 (August 27, 2013): 1188–89. http://dx.doi.org/10.14778/2536222.2536253.

9

Dong, Xin Luna, and Divesh Srivastava. "Big Data Integration". Synthesis Lectures on Data Management 7, no. 1 (February 15, 2015): 1–198. http://dx.doi.org/10.2200/s00578ed1v01y201404dtm040.

10

Vargas-Vera, Maria. "Data Integration Framework". International Journal of Knowledge Society Research 7, no. 1 (January 2016): 99–112. http://dx.doi.org/10.4018/ijksr.2016010107.

Abstract:
This paper presents a proposal for a data integration framework. The purpose of the framework is to automatically locate records of participants from the ALSPAC database (Avon Longitudinal Study of Parents and Children) within its counterpart, the GPRD database (General Practice Research Database). The ALSPAC database is a collection of data from children and parents from before birth to late puberty. This collection contains several variables of interest for clinical researchers, but we concentrate on asthma, as a gold standard for the evaluation of asthma has been produced by a clinical researcher. The main component of the framework is a module called Mapper, which locates similar records and performs record linkage. The Mapper contains a library of similarity measures such as Jaccard, Jaro-Winkler, Monge-Elkan, MatchScore, Levenshtein, and TF-IDF similarity. Finally, the author evaluates the approach on the quality of the mappings.
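The Mapper's similarity library can be sketched in minimal form. The two measures below (token-set Jaccard and a Levenshtein-based similarity) follow their standard definitions, while the record layout, score averaging, and threshold are invented for illustration and are not the paper's actual design.

```python
# Minimal versions of two similarity measures named in the abstract,
# combined into a toy record-linkage check. Field names and the
# matching threshold are illustrative, not ALSPAC's.

def jaccard(a: str, b: str) -> float:
    """Jaccard similarity of the token sets of two strings."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def lev_sim(a: str, b: str) -> float:
    """Edit distance normalised into a 0..1 similarity."""
    m = max(len(a), len(b))
    return 1.0 - levenshtein(a, b) / m if m else 1.0

def match(rec_a: dict, rec_b: dict, threshold: float = 0.75) -> bool:
    """Link two records if the averaged name similarity clears the threshold."""
    score = (jaccard(rec_a["name"], rec_b["name"])
             + lev_sim(rec_a["name"], rec_b["name"])) / 2
    return score >= threshold
```

A production linker would blend more measures (Jaro-Winkler, TF-IDF) and calibrate the threshold against labelled pairs, as the paper's evaluation does.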
11

Tang, Lin. "Genomics data integration". Nature Methods 20, no. 1 (January 2023): 34. http://dx.doi.org/10.1038/s41592-022-01736-4.

12

Slater, Ted, Christopher Bouton, and Enoch S. Huang. "Beyond data integration". Drug Discovery Today 13, no. 13-14 (July 2008): 584–89. http://dx.doi.org/10.1016/j.drudis.2008.01.008.

13

Youngmann, Brit, Michael Cafarella, Babak Salimi, and Anna Zeng. "Causal Data Integration". Proceedings of the VLDB Endowment 16, no. 10 (June 2023): 2659–65. http://dx.doi.org/10.14778/3603581.3603602.

Abstract:
Causal inference is fundamental to empirical scientific discoveries in natural and social sciences; however, in the process of conducting causal inference, data management problems can lead to false discoveries. Two such problems are (i) not having all attributes required for analysis, and (ii) misidentifying which attributes are to be included in the analysis. Analysts often only have access to partial data, and they critically rely on (often unavailable or incomplete) domain knowledge to identify attributes to include for analysis, which is often given in the form of a causal DAG. We argue that data management techniques can surmount both of these challenges. In this work, we introduce the Causal Data Integration (CDI) problem, in which unobserved attributes are mined from external sources and a corresponding causal DAG is automatically built. We identify key challenges and research opportunities in designing a CDI system, and present a system architecture for solving the CDI problem. Our preliminary experimental results demonstrate that solving CDI is achievable and pave the way for future research.
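The kind of correction CDI enables — joining an unobserved confounder in from an external source and then conditioning on it — can be sketched with invented numbers. This is plain backdoor adjustment by stratification, not the paper's system; all ids, columns, and values are made up.

```python
# Toy backdoor adjustment: the observed table lacks the confounder Z,
# which is joined in from an "external source" keyed on an id column.
# Data and column names are invented for illustration.

observed = [  # (id, treatment t, outcome y)
    {"id": 1, "t": 1, "y": 1}, {"id": 2, "t": 1, "y": 1},
    {"id": 3, "t": 1, "y": 0}, {"id": 4, "t": 0, "y": 1},
    {"id": 5, "t": 0, "y": 0}, {"id": 6, "t": 0, "y": 0},
]
external = {1: 1, 2: 1, 3: 0, 4: 1, 5: 0, 6: 0}  # id -> confounder z

def adjusted_effect(rows, z_by_id):
    """E[Y|do(T=1)] - E[Y|do(T=0)] via stratification on Z, assuming
    Z blocks all backdoor paths between T and Y."""
    data = [dict(r, z=z_by_id[r["id"]]) for r in rows]  # the join step
    effect = 0.0
    for z in {r["z"] for r in data}:
        stratum = [r for r in data if r["z"] == z]
        w = len(stratum) / len(data)              # P(Z = z)
        for t, sign in ((1, +1), (0, -1)):
            cell = [r["y"] for r in stratum if r["t"] == t]
            if cell:                              # E[Y | T = t, Z = z]
                effect += sign * w * (sum(cell) / len(cell))
    return effect
```

On these toy numbers the naive treated-vs-control difference is 1/3, while the Z-adjusted effect is 0: exactly the kind of false discovery the abstract says a missing attribute can produce.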
14

Bakshi, Waseem Jeelani, Rana Hashmy, Majid Zaman, and Muheet Ahmed Butt. "Logical Data Integration Model for the Integration of Data Repositories". International Journal of Database Theory and Application 11, no. 1 (March 31, 2018): 21–28. http://dx.doi.org/10.14257/ijdta.2018.11.1.03.

15

Todorova, Violeta, Veska Gancheva, and Valeri Mladenov. "COVID-19 Medical Data Integration Approach". Molecular Sciences and Applications 2 (July 18, 2022): 102–6. http://dx.doi.org/10.37394/232023.2022.2.11.

Abstract:
The need to create automated methods for extracting knowledge from data arises from the accumulation of large amounts of data. This paper presents a conceptual model for integrating and processing medical data in three layers, comprising a total of six phases: a model for integrating, filtering, sorting, and aggregating Covid-19 data. A medical data integration workflow was designed, including data integration, filtering, and sorting steps, and was applied to Covid-19 medical data from the clinical records of 20,400 potential patients.
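The integrate–filter–sort–aggregate flow described above can be sketched as a small pipeline. The record shapes and field names below are invented for illustration; the paper's actual schema is not reproduced here.

```python
# Sketch of an integrate -> filter -> sort -> aggregate workflow over
# records from two hypothetical sources; all fields are invented.
from collections import Counter

source_a = [{"patient": "p1", "test": "PCR", "result": "positive"},
            {"patient": "p2", "test": "PCR", "result": "negative"}]
source_b = [{"patient": "p3", "test": "antigen", "result": "positive"},
            {"patient": "p1", "test": "PCR", "result": "positive"}]  # duplicate

def integrate(*sources):
    """Union the sources, dropping exact duplicate records."""
    seen, merged = set(), []
    for src in sources:
        for rec in src:
            key = tuple(sorted(rec.items()))
            if key not in seen:
                seen.add(key)
                merged.append(rec)
    return merged

def run_pipeline(*sources):
    records = integrate(*sources)
    records = [r for r in records if r["result"] == "positive"]   # filter
    records.sort(key=lambda r: r["patient"])                      # sort
    return Counter(r["test"] for r in records)                    # aggregate

counts = run_pipeline(source_a, source_b)
```

Each stage maps onto one phase of the paper's layered model; a real deployment would add validation and provenance tracking between the layers.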
16

Chinta, Umababu, Akshun Chhapola, and Shalu Jain. "Integration of Salesforce with External Systems: Best Practices for Seamless Data Flow". Journal of Quantum Science and Technology 1, no. 3 (August 29, 2024): 25–41. http://dx.doi.org/10.36676/jqst.v1.i3.25.

Abstract:
The integration of Salesforce with external systems is a critical aspect of modern enterprise architecture, enabling seamless data flow and ensuring that businesses can leverage the full potential of their technology ecosystems. As organizations increasingly rely on diverse platforms and applications, the need for efficient and reliable integration strategies becomes paramount. This paper explores best practices for integrating Salesforce with external systems, focusing on achieving seamless data flow while addressing the complexities and challenges associated with such integrations.

To begin with, the importance of understanding the unique requirements and constraints of both Salesforce and the external systems is emphasized. Integration strategies must be tailored to the specific use cases, whether they involve real-time data synchronization, batch processing, or event-driven architectures. A thorough analysis of the data types, formats, and structures is essential to ensure compatibility and to avoid data loss or corruption during the integration process.

One of the key best practices highlighted in this paper is the use of middleware and integration platform as a service (iPaaS) solutions. These tools provide a robust framework for managing data flows between Salesforce and external systems, offering features like data transformation, error handling, and process automation. The paper discusses the advantages of using middleware, such as reducing the complexity of integration projects, improving scalability, and enhancing the flexibility to adapt to changing business requirements.

Another critical aspect covered is the importance of data governance and security in Salesforce integrations. As data moves between systems, ensuring its integrity, confidentiality, and compliance with regulatory requirements is vital. The paper explores strategies for implementing robust data governance policies, including the use of encryption, access controls, and audit trails to protect sensitive information. Additionally, the role of Salesforce's native security features, such as Shield and Event Monitoring, in safeguarding data during integration processes is discussed.

The paper also delves into the challenges of integrating Salesforce with legacy systems, which often require custom solutions due to their outdated technologies and lack of standard integration capabilities. Strategies for overcoming these challenges, such as leveraging APIs, custom connectors, and data mapping tools, are examined. The importance of rigorous testing and validation processes to ensure that integrations meet performance and reliability standards is underscored.

Furthermore, the paper emphasizes the need for continuous monitoring and maintenance of Salesforce integrations. As business needs evolve and systems are updated, integration workflows must be regularly reviewed and optimized to prevent disruptions and ensure ongoing efficiency. The use of monitoring tools and automated alerts is recommended to quickly identify and address any issues that arise.

Finally, the paper presents several real-world case studies demonstrating successful Salesforce integrations with various external systems, including ERP platforms, marketing automation tools, and e-commerce solutions. These case studies provide practical insights into the application of best practices and highlight the benefits of seamless data flow, such as improved customer experiences, enhanced decision-making capabilities, and increased operational efficiency.

In conclusion, integrating Salesforce with external systems requires a strategic approach that considers the unique characteristics of the systems involved, the importance of data governance and security, and the need for continuous monitoring and adaptation. By following the best practices outlined in this paper, organizations can achieve seamless data flow, enabling them to fully harness the power of Salesforce and their broader technology ecosystem.
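One middleware pattern the abstract emphasises — transforming records between systems while isolating failures — can be sketched generically. No real Salesforce API is invoked here; the field mapping and record shapes are invented for illustration.

```python
# Generic middleware-style sync step: map source records to a target
# schema, validating as we go; records that fail validation are routed
# to a dead-letter list instead of aborting the whole batch.
# Field names and the mapping are invented for illustration.

FIELD_MAP = {"AccountName": "name", "AnnualRevenue": "revenue"}

def transform(record: dict) -> dict:
    """Rename fields per FIELD_MAP and coerce revenue to float."""
    out = {target: record[source] for source, target in FIELD_MAP.items()}
    out["revenue"] = float(out["revenue"])
    return out

def sync_batch(records):
    """Transform a batch; collect failures for later inspection/replay."""
    loaded, dead_letter = [], []
    for rec in records:
        try:
            loaded.append(transform(rec))
        except (KeyError, TypeError, ValueError) as exc:
            dead_letter.append({"record": rec, "error": repr(exc)})
    return loaded, dead_letter

ok, failed = sync_batch([
    {"AccountName": "Acme", "AnnualRevenue": "1200000"},
    {"AccountName": "NoRevenue Inc"},        # missing field -> dead letter
])
```

The dead-letter list is what makes the flow "seamless" in practice: one malformed record is quarantined for replay rather than failing the whole synchronization.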
17

Ruíz-Ceniceros, Juan Antonio, José Alfonso Aguilar-Calderón, Carolina Tripp-Barba, and Aníbal Zaldívar-Colado. "Dynamic Canonical Data Model: An Architecture Proposal for the External and Data Loose Coupling for the Integration of Software Units". Applied Sciences 13, no. 19 (October 7, 2023): 11040. http://dx.doi.org/10.3390/app131911040.

Abstract:
Integrating third-party and legacy systems has become a critical necessity for companies, driven by the need to exchange information with various entities such as banks, suppliers, customers, and partners. Ensuring data integrity, keeping integrations up-to-date, reducing transaction risks, and preventing data loss are all vital aspects of this complex task. Achieving success in this endeavor, which involves both technological and business challenges, necessitates the implementation of a well-suited architecture. This article introduces an architecture known as the Dynamic Canonical Data Model through Agnostic Messages. The proposal addresses the integration of loosely coupled software units, mainly when dealing with internal and external data integration. To illustrate the architecture’s components, a case study from the Mexican Logistics Company Paquetexpress is presented. This organization manages integrations across several platforms, including SalesForce and Oracle ERP, with clients like Amazon, Mercado Libre, Grainger, and Afull. Each of these incurs costs ranging from USD 30,000 to USD 36,000, with consultants from firms such as Quanam, K&F, TSOL, and TekSi playing a crucial role in their execution. This consumes much time, making maintenance costs considerably high when clients request data transmission or type changes, particularly when utilizing tools like Oracle Integration Cloud (OIC) or Oracle Service Bus (OSB). The article provides insights into the architecture’s design and implementation in a real-world scenario within the delivery company. The proposed architecture significantly reduces integration and maintenance times and costs while maximizing scalability and encouraging the reuse of components. The source code for this implementation has been registered in the National Registry of Copyrights in Mexico.
18

Colleoni Couto, Julia, Olimar Teixeira Borges, and Duncan Dubugras Ruiz. "Data integration in a Hadoop-based data lake: A bioinformatics case". International Journal of Data Mining & Knowledge Management Process 12, no. 4 (July 31, 2022): 1–24. http://dx.doi.org/10.5121/ijdkp.2022.12401.

Abstract:
When we work in a data lake, data integration is not easy, mainly because the data is usually stored in raw format. Manually performing data integration is a time-consuming task that requires the supervision of a specialist, who can make mistakes or fail to spot the optimal data integration point between two or more datasets. This paper presents a model to perform heterogeneous in-memory data integration in a Hadoop-based data lake based on a top-k set similarity approach. Our main contribution is the process of ingesting, storing, processing, integrating, and visualizing the data integration points. The algorithm for data integration is based on the Overlap coefficient, since it presented better results when compared with the set similarity metrics Jaccard, Sørensen-Dice, and the Tversky index. We tested our model by applying it to eight bioinformatics-domain datasets. Our model presents better results when compared to the analysis of a specialist, and we expect it can be reused for other dataset domains.
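The set-similarity metrics the paper compares have compact standard definitions, sketched below together with a toy top-k ranking over candidate columns. The data is invented, and these minimal versions assume non-empty sets.

```python
# The set-similarity metrics compared in the paper, in minimal form,
# plus a top-k ranking over candidate column pairs (toy data, not the
# paper's datasets). All metrics here assume non-empty input sets.
from itertools import combinations

def jaccard(a, b):  return len(a & b) / len(a | b)
def overlap(a, b):  return len(a & b) / min(len(a), len(b))
def dice(a, b):     return 2 * len(a & b) / (len(a) + len(b))
def tversky(a, b, alpha=0.5, beta=0.5):
    i = len(a & b)
    return i / (i + alpha * len(a - b) + beta * len(b - a))

def top_k_pairs(columns, k, metric=overlap):
    """Rank all column pairs by the given set-similarity metric."""
    scored = [(metric(columns[x], columns[y]), x, y)
              for x, y in combinations(sorted(columns), 2)]
    return sorted(scored, reverse=True)[:k]

cols = {"genes_a": {"tp53", "brca1", "egfr"},
        "genes_b": {"tp53", "brca1"},
        "samples": {"s1", "s2", "s3"}}
best = top_k_pairs(cols, k=1)
```

Note why the paper's choice can matter: when one column's value set is contained in another's, the Overlap coefficient scores the pair 1.0 while Jaccard penalises the size difference, so containment-style integration points surface higher in the top-k list.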
19

Saloni Kumari. "Data integration: "Seamless data harmony: The art and science of effective data integration"". International Journal of Engineering & Technology 12, no. 2 (October 4, 2023): 26–30. http://dx.doi.org/10.14419/ijet.v12i2.32335.

Abstract:
The idea of data integration has emerged as a key strategy in today's data-driven environment, where data is supplied from various and heterogeneous sources. This article explores the relevance, methodology, difficulties, and transformative possibilities of data integration, delving into its multidimensional world. Data integration serves as the cornerstone for well-informed decision-making by connecting heterogeneous datasets and fostering unified insights. This article gives readers a preview of an in-depth investigation into data integration, illuminating its technical complexities and strategic ramifications for companies and organizations looking to maximize the value of their data assets.
20

Chromiak, Michal, and Marcin Grabowiecki. "Heterogeneous Data Integration Architecture: Challenging Integration Issues". Annales Universitatis Mariae Curie-Sklodowska, sectio AI – Informatica 15, no. 1 (January 1, 2015): 7. http://dx.doi.org/10.17951/ai.2015.15.1.7-11.

Abstract:
As of today, most data processing systems have to deal with large amounts of data originating from numerous sources. Data sources almost always differ in their purpose of existence; thus their models, data processing engines, and technologies differ markedly. Due to the current trend toward systems fusion, there is a growing demand for data to be presented in a common way regardless of its legacy. Many systems have been devised in response to such integration needs. However, present data integration systems are mostly dedicated solutions that bring constraints and issues when considered in general. In this paper we focus on present solutions for data integration and their flaws originating from their architecture or design concepts, and we present an abstract, general approach that could be introduced in response to the existing issues. System integration is considered out of scope for this paper; we focus particularly on efficient data integration.
21

Kasyanova, Nataliia, Serhii Koverha, and Vladyslav Okhrimenko. "УПРАВЛІННЯ ТА ІНТЕГРАЦІЯ ДАНИХ В УМОВАХ ЦИФРОВІЗАЦІЇ ЕКОНОМІЧНИХ ПРОЦЕСІВ: ВИКЛИКИ ТА ПЕРСПЕКТИВИ" [Data management and integration under the digitalization of economic processes: challenges and prospects]. Economical 1, no. 27 (2023): 71–87. http://dx.doi.org/10.31474/1680-0044-2023-1(27)-71-87.

Abstract:
Objective. The purpose of the article is to clarify the theoretical and methodological aspects, analyze data management methods in the context of the digitalization of economic processes, and choose the priority method of integrating corporate information systems depending on the tasks to be solved in each case. Methods. The paper uses a set of data integration methods: the application integration method (EAI); the method of extracting data from external sources, transforming them into the appropriate structure, and forming data warehouses (ETL); and the method of real-time integration of incompatible data types from different sources (EI). Results. The paper shows that data management includes the formation and analysis of data architecture, integration of the database management system, data security, and the identification, segregation, and storage of data sources. Data integration refers to the process of combining data from different sources into a single, holistic system and aims to provide access to a complete, up-to-date, and easy-to-analyze data set. Data integration is especially important in e-commerce, logistics, and supply chains, where data from different sources must be combined to optimize processes, and in business intelligence, where processing and combining large amounts of data makes it possible to identify useful information and patterns. Integration of enterprise information systems is the process of combining several information systems and individual applications into a single, holistic system that works toward a common goal; it aims to increase the efficiency of the company, reduce duplication of effort, and streamline processes. The main functional components of a corporate information system are identified: Business Process Automation IS, Financial Management IS, Customer Relationship Management IS, Supply Chain Management IS, Human Resources Management IS, Business Intelligence IS, Communication IS, and Data Security and Protection IS.
Within a corporate information system, several narrowly focused software products operate simultaneously, each capable of successfully solving a certain range of tasks. At the same time, some of them may not involve interaction with other information systems. The main approaches to data integration include universal access to data and data warehouses. Universal access technologies allow for equal access to data from different information systems, including on the basis of the data warehouse concept: a database containing data collected from the databases of different information subsystems for further analysis and use. It is shown that the most holistic approach to the integration of information systems is integration at the level of business processes. As part of the integration of business processes, there is an integration of applications, data integration, and integration of the people involved in the business process. The article substantiates the feasibility of using three methods of big data management and integration: integration of corporate applications, integration of corporate information, and software for obtaining, transforming, and loading data. As a result of comparing integration methods and building a generalized scheme for integrating heterogeneous information systems, a number of situations have been identified in which the use of a specific integration method is preferable or the only possible one. The scientific novelty of the study lies in identifying the problems of integrating big data and corporate information systems. Approaches to choosing a method for integrating data and applications based on a generalized scheme for integrating heterogeneous information systems are proposed. Practical significance. The results of the analysis allow optimizing the methods of data integration within a corporate information system.
The principles of integration inherent in the considered methods are used to solve a wide range of tasks: from real-time integration to batch integration and application integration. Implementation of the proposed methods of big data integration will make information more transparent; provide additional detailed information about the efficiency of production and technological equipment, which stimulates innovation and improves the quality of the final product; enable more efficient, accurate analytics to minimize risks and identify problems in advance, before catastrophic consequences; and allow more effective supply chain management, demand forecasting, comprehensive business planning, and organized cooperation.
22

Nurhendratno, Slamet Sudaryanto, and Sudaryanto Sudaryanto. "Data Integration Model Design for Supporting Data Center Patient Services Distributed Insurance Purchase with View Based Data Integration". Computer Engineering, Science and System Journal 3, no. 2 (August 1, 2018): 162. http://dx.doi.org/10.24114/cess.v3i2.8895.

Abstract:
Data integration is an important step in integrating information from multiple sources. The problem is how to optimally find and combine data from scattered, heterogeneous data sources that have semantically related interconnections. The heterogeneity of data sources is the result of a number of factors, including storing databases in different formats, using different software and hardware for database storage systems, and designing with different semantic data models (Katsis & Papakonstantinou, 2009; Ziegler & Dittrich, 2004). Nowadays there are two approaches to data integration, Global as View (GAV) and Local as View (LAV), but each has different advantages and limitations, so proper analysis is needed when applying them. Among the major factors to consider for efficient and effective integration of heterogeneous data sources is an understanding of the type and structure of the source data (source schema). Another factor to consider is the view type of the integration result (target schema). The results of the integration can be displayed as one global view or as a variety of other views. Thus the approach to integrating structured data sources will differ from the integration of unstructured or semi-structured data sources. Schema mapping is a specific declaration that describes the relationship between the source schema and the target schema; it is expressed in logical formulas that can help applications with data interoperability, data exchange, and data integration. In this paper, for the case of establishing a patient referral data center, integration is required of data originating from a number of different health facilities, so a schema mapping system needs to be designed (to support optimization).
The data center, as the target schema, draws on various referral service units as source schemas whose data are structured and independent, so that the structured data sources can be integrated into a unified view (the data center) with equivalent query rewriting. The data center as a global schema requires a "mediator" that serves as a guide to maintain the global schema and the mapping between the global and local schemas. Under Global as View (GAV), the data center tends to be a single, unified view, so an "integrator" facility is needed to make its integration with the various source schemas effective. The integrator facility is a declarative mapping language that makes it possible to specifically link each of the various schema sources to the data center, so that equivalent query rewriting is suitable in the context of query optimization and the maintenance of physical data independence.
Keywords: Global as View (GAV), Local as View (LAV), source schema, mapping schema
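The global-as-view side of the paper's comparison — the data center defined directly as a view over the sources, so global queries unfold into source accesses — can be sketched as follows. The clinic names, record shapes, and fields are invented for illustration.

```python
# Toy global-as-view (GAV) mediator: the global relation is *defined*
# as a union of per-source transformations, so a query against the
# global schema "unfolds" into reads of the sources. Source names and
# fields are invented.

clinic_a = [("p1", "Asthma"), ("p2", "Diabetes")]        # (patient, dx) tuples
clinic_b = [{"pid": "p3", "diagnosis": "Asthma"}]        # different shape

def global_referrals():
    """GAV definition: Referral(patient, diagnosis) as a view over sources."""
    for patient, dx in clinic_a:
        yield {"patient": patient, "diagnosis": dx}
    for rec in clinic_b:
        yield {"patient": rec["pid"], "diagnosis": rec["diagnosis"]}

def query(diagnosis):
    """A global query is answered by unfolding the view definition."""
    return [r["patient"] for r in global_referrals()
            if r["diagnosis"] == diagnosis]
```

This is the GAV trade-off the abstract alludes to: query answering is simple unfolding, but adding a new referral unit means editing the global view definition, whereas in LAV only the new source's own view declaration is added.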
23

Bernasconi, Anna. "Data quality-aware genomic data integration". Computer Methods and Programs in Biomedicine Update 1 (2021): 100009. http://dx.doi.org/10.1016/j.cmpbup.2021.100009.

24

Salinas, Sonia Ordonez, and Alba Consuelo Nieto Lemus. "Data Warehouse and Big Data Integration". International Journal of Computer Science and Information Technology 9, no. 2 (April 30, 2017): 1–17. http://dx.doi.org/10.5121/ijcsit.2017.9201.

25

Bernstein, Philip A. "Data Integration for Data-Intensive Science". OMICS: A Journal of Integrative Biology 15, no. 4 (April 2011): 241. http://dx.doi.org/10.1089/omi.2011.0020.

26

Lu, James J. "A Data Model for Data Integration". Electronic Notes in Theoretical Computer Science 150, no. 2 (March 2006): 3–19. http://dx.doi.org/10.1016/j.entcs.2005.11.031.

27

Mandala, Vishwanadham. "Data Integration and Data Engineering Techniques". International Journal of Scientific Research and Management (IJSRM) 5, no. 5 (July 13, 2024): 5354–59. http://dx.doi.org/10.18535/ijsrm/v5i5.13.

Abstract:
Data integration and data engineering techniques play a crucial role in the modern data landscape, facilitating the seamless amalgamation of diverse data sources to derive meaningful insights. As organizations increasingly rely on big data analytics, the need for efficient and robust data integration methodologies becomes paramount. This paper explores various techniques for data integration, including Extract, Transform, Load (ETL), data virtualization, and data federation, emphasizing their applicability across different domains. Additionally, we discuss data engineering practices that ensure the quality, scalability, and accessibility of integrated data, such as data modeling, pipeline architecture, and real-time data processing. By examining case studies and emerging trends, this work highlights the significance of these techniques in enabling organizations to harness the full potential of their data, ultimately driving informed decision-making and fostering innovation in an increasingly data-driven world.
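Of the techniques this abstract lists, data federation is the simplest to sketch: the data stays in its systems, and a federated query visits each live system at query time rather than materialising a copy (as ETL would). The system names and schemas below are invented.

```python
# Minimal data-federation sketch: each "system" exposes a scan()
# callable; a federated query pulls from all of them lazily at query
# time, with no materialised copy. Systems and fields are invented.

def erp_scan():
    yield {"sku": "A1", "qty": 5}
    yield {"sku": "B2", "qty": 0}

def webshop_scan():
    yield {"sku": "C3", "qty": 7}

SYSTEMS = [erp_scan, webshop_scan]

def federated_query(predicate):
    """Evaluate the predicate against every system's live scan."""
    for scan in SYSTEMS:
        for row in scan():
            if predicate(row):
                yield row

in_stock = [r["sku"] for r in federated_query(lambda r: r["qty"] > 0)]
```

Contrast with ETL, where the same rows would be extracted on a schedule and loaded into a warehouse first: federation trades freshness for query-time latency and source load.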
28

Loice Tsinale, Harriet, Samuel Mbugua, and Anthony Luvanda. "Architectural Health Data Standards and Semantic Interoperability: A Comprehensive Review in the Context of Integrating Medical Data into Big Data Analytics". International Journal of Engineering Applied Sciences and Technology 8, no. 4 (August 1, 2023): 17–30. http://dx.doi.org/10.33564/ijeast.2023.v08i04.002.

Abstract:
The integration of medical data into Big Data analytics holds significant potential for advancing healthcare practices and research. However, achieving semantic interoperability, wherein data is exchanged and interpreted accurately among diverse systems, is a critical challenge. This study explores the impact of existing architectures on semantic interoperability in the context of integrating medical data into Big Data analytics. The study highlights the complexities involved in integrating medical data from various sources, each using different formats, data models, and vocabularies. Without a strong emphasis on semantic interoperability, data integration efforts can result in misinterpretations, inconsistencies, and errors, adversely affecting patient care and research outcomes. The significance of data standards and ontologies in establishing a common vocabulary and structure for medical data integration is underscored. Additionally, the importance of data mapping and transformation is discussed, as data discrepancies can lead to data loss and incorrect analysis results. The success of integrating medical data into Big Data analytics is heavily reliant on existing architectures that prioritize semantic interoperability. A well-designed architecture addresses data heterogeneity, promotes semantic consistency, and supports data standardization, unlocking the transformative capabilities of medical data analysis for improved healthcare outcomes.
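The data-mapping step the review highlights — normalising source vocabularies into a common one, and flagging unmapped codes instead of guessing — can be sketched with an invented mini-vocabulary. The codes below are made up and are not real ICD or SNOMED codes.

```python
# Toy terminology mapping: two sources code the same condition
# differently; a mapping table normalises both to one target
# vocabulary, and unmapped codes are flagged rather than guessed.
# All codes are invented for illustration.

TO_COMMON = {
    ("src_a", "ASTH-1"): "COND:ASTHMA",
    ("src_b", "J-45x"):  "COND:ASTHMA",
    ("src_b", "E-11x"):  "COND:DIABETES",
}

def harmonise(records):
    """Map (source, code) pairs to the common vocabulary; keep the
    unmappable pairs aside for curator review instead of dropping them."""
    mapped, unmapped = [], []
    for src, code in records:
        common = TO_COMMON.get((src, code))
        (mapped if common else unmapped).append(common or (src, code))
    return mapped, unmapped

mapped, unmapped = harmonise([("src_a", "ASTH-1"),
                              ("src_b", "J-45x"),
                              ("src_b", "???")])
```

Routing unmapped codes to a review queue, rather than silently discarding or guessing them, is exactly the data-loss failure mode the review warns about.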
29

Curcin, V., A. Barton, M. M. McGilchrist, H. Bastiaens, A. Andreasson, J. Rossiter, L. Zhao et al. "Clinical Data Integration Model". Methods of Information in Medicine 54, no. 1 (2015): 16–23. http://dx.doi.org/10.3414/me13-02-0024.

Abstract:
Introduction: This article is part of the Focus Theme of Methods of Information in Medicine on “Managing Interoperability and Complexity in Health Systems”. Background: Primary care data is the single richest source of routine health care data. However its use, both in research and clinical work, often requires data from multiple clinical sites, clinical trials databases and registries. Data integration and interoperability are therefore of utmost importance. Objectives: TRANSFoRm’s general approach relies on a unified interoperability framework, described in a previous paper. We developed a core ontology for an interoperability framework based on data mediation. This article presents how such an ontology, the Clinical Data Integration Model (CDIM), can be designed to support, in conjunction with appropriate terminologies, biomedical data federation within TRANSFoRm, an EU FP7 project that aims to develop the digital infrastructure for a learning healthcare system in European Primary Care. Methods: TRANSFoRm utilizes a unified structural/terminological interoperability framework, based on the local-as-view mediation paradigm. Such an approach mandates the global information model to describe the domain of interest independently of the data sources to be explored. Following a requirement analysis process, no ontology focusing on primary care research was identified, and thus we designed a realist ontology based on Basic Formal Ontology to support our framework in collaboration with various terminologies used in primary care. Results: The resulting ontology has 549 classes and 82 object properties and is used to support data integration for TRANSFoRm’s use cases. Concepts identified by researchers were successfully expressed in queries using CDIM and pertinent terminologies. As an example, we illustrate how, in TRANSFoRm, the Query Formulation Workbench can capture eligibility criteria in a computable representation based on CDIM. Conclusion: A unified mediation approach to semantic interoperability provides a flexible and extensible framework for all types of interaction between health record systems and research systems. CDIM, as the core ontology of such an approach, enables simplicity and consistency of design across the heterogeneous software landscape and can support the specific needs of EHR-driven phenotyping research using primary care data.
30

Neang, Andrew B., Will Sutherland, Michael W. Beach and Charlotte P. Lee. "Data Integration as Coordination". Proceedings of the ACM on Human-Computer Interaction 4, CSCW3 (January 5, 2021): 1–25. http://dx.doi.org/10.1145/3432955.

31

Bertino, E., and E. Ferrari. "XML and data integration". IEEE Internet Computing 5, no. 6 (2001): 75–76. http://dx.doi.org/10.1109/4236.968835.

32

Di Lorenzo, Giusy, Hakim Hacid, Hye-young Paik and Boualem Benatallah. "Data integration in mashups". ACM SIGMOD Record 38, no. 1 (June 24, 2009): 59–66. http://dx.doi.org/10.1145/1558334.1558343.

33

Pineda, Silvia, Daniel G. Bunis, Idit Kosti and Marina Sirota. "Data Integration for Immunology". Annual Review of Biomedical Data Science 3, no. 1 (July 20, 2020): 113–36. http://dx.doi.org/10.1146/annurev-biodatasci-012420-122454.

Abstract:
Over the last several years, next-generation sequencing and its recent push toward single-cell resolution have transformed the landscape of immunology research by revealing novel complexities about all components of the immune system. With the vast amounts of diverse data currently being generated, and with the methods of analyzing and combining diverse data improving as well, integrative systems approaches are becoming more powerful. Previous integrative approaches have combined multiple data types and revealed ways that the immune system, both as a whole and as individual parts, is affected by genetics, the microbiome, and other factors. In this review, we explore the data types that are available for studying immunology with an integrative systems approach, as well as the current strategies and challenges for conducting such analyses.
34

Kaufman, G. "Pragmatic ECAD Data Integration". ACM SIGDA Newsletter 20, no. 1 (June 1990): 60–81. http://dx.doi.org/10.1145/378886.1062259.

35

Svensson, A., and J. Holst. "Integration of Navigation Data". Journal of Navigation 48, no. 1 (January 1995): 114–35. http://dx.doi.org/10.1017/s0373463300012558.

Abstract:
This article treats integration of navigation data from a variety of sensors in a submarine using extended Kalman filtering in order to improve the accuracy of position, velocity and heading estimates. The problem has been restricted to planar motion. The measurement system consists of an inertial navigation system, a gyro compass, a passive log, an active log and a satellite navigation system. These subsystems are briefly described and models for the measurement errors are given. Four different extended Kalman filters have been tested by computer simulations. The simulations distinctly show that the passive subsystems alone are insufficient to improve the estimate of the position obtained from the inertial navigation system. A log measuring the velocity relative to the ground or a position determining system are needed. The improvement depends on the accuracy of the measuring instruments, the extent of time the instrument can be used and which filter is being used. The most complex filter, which contains fourteen states, eight to describe the motion of the submarine and six to describe the measurement system, including a model of the inertial navigation system, works very well.
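The predict/update fusion cycle the abstract describes can be sketched with a minimal linear Kalman filter for planar position-velocity estimation. This is an illustrative sketch only: the four-dimensional state, the noise levels, and the position-only measurement model are assumptions for demonstration, not the paper's fourteen-state extended filter.

```python
import numpy as np

def kalman_step(x, P, z, dt, q=0.01, r_pos=25.0):
    """One predict/update cycle for state x = [px, py, vx, vy]
    with a position-only measurement z = [px, py] (e.g. a satnav fix)."""
    F = np.eye(4)
    F[0, 2] = F[1, 3] = dt            # position integrates velocity
    Q = q * np.eye(4)                 # process noise (model uncertainty)
    H = np.array([[1., 0., 0., 0.],
                  [0., 1., 0., 0.]])  # only position is observed
    R = r_pos * np.eye(2)             # measurement noise

    # Predict: dead reckoning from the motion model
    x = F @ x
    P = F @ P @ F.T + Q

    # Update: correct with the position fix
    y = z - H @ x                     # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P
```

Because position and velocity become correlated through the prediction step, position fixes also correct the velocity estimate, which is the mechanism the paper exploits when combining logs and position-determining systems.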
36

Brazhnik, Olga, and John F. Jones. "Anatomy of data integration". Journal of Biomedical Informatics 40, no. 3 (June 2007): 252–69. http://dx.doi.org/10.1016/j.jbi.2006.09.001.

37

Powell, V. J. H., and A. Acharya. "Disease Prevention: Data Integration". Science 338, no. 6112 (December 6, 2012): 1285–86. http://dx.doi.org/10.1126/science.338.6112.1285-b.

38

Riedemann, Catharina, and Christian Timm. "Services for data integration". Data Science Journal 2 (2003): 90–99. http://dx.doi.org/10.2481/dsj.2.90.

39

Kezunovic, M. "Integration of Substation Data". IFAC Proceedings Volumes 44, no. 1 (January 2011): 12861–66. http://dx.doi.org/10.3182/20110828-6-it-1002.02654.

40

Resnick, Richard J. "Data Integration in Genomics". Biotech Software & Internet Report 1, no. 1-2 (April 2000): 40–43. http://dx.doi.org/10.1089/152791600319268.

41

Larsen, N., R. Overbeek, S. Pramanik, T. M. Schmidt, E. E. Selkov, O. Strunk, J. M. Tiedje and J. W. Urbance. "Towards microbial data integration". Journal of Industrial Microbiology and Biotechnology 18, no. 1 (January 1, 1997): 68–72. http://dx.doi.org/10.1038/sj.jim.2900366.

42

Almeida, Jonas S., Chuming Chen, Robert Gorlitsky, Romesh Stanislaus, Marta Aires-de-Sousa, Pedro Eleutério, João Carriço et al. "Data integration gets 'Sloppy'". Nature Biotechnology 24, no. 9 (September 1, 2006): 1070–71. http://dx.doi.org/10.1038/nbt0906-1070.

43

Dong, Xin Luna, Alon Halevy and Cong Yu. "Data integration with uncertainty". VLDB Journal 18, no. 2 (November 14, 2008): 469–500. http://dx.doi.org/10.1007/s00778-008-0119-9.

44

Sivertsen, Gunnar. "Data integration in Scandinavia". Scientometrics 106, no. 2 (December 22, 2015): 849–55. http://dx.doi.org/10.1007/s11192-015-1817-x.

45

Muppa, Naveen. "Enterprise Data Integration architecture". Journal of Artificial Intelligence, Machine Learning and Data Science 2, no. 1 (February 28, 2024): 234–37. http://dx.doi.org/10.51219/jaimld/naveen-muppa/75.

46

Meyer, Ingo. "Data matters - no service integration without data integration: a transnational learning exercise". International Journal of Integrated Care 21, S1 (September 1, 2021): 28. http://dx.doi.org/10.5334/ijic.icic20545.

47

Rao, Rohini R. "The role of Domain Ontology in Semantic Data Integration". Indian Journal of Applied Research 3, no. 4 (October 1, 2011): 88–89. http://dx.doi.org/10.15373/2249555x/apr2013/29.

48

JAMES, Daniel, Raymond LEADBETTER, James LEE, Brendan BURKETT and David THIEL. "B23 Integration of multiple data sources for swimming biomechanics". Proceedings of the Symposium on sports and human dynamics 2011 (2011): 364–66. http://dx.doi.org/10.1299/jsmeshd.2011.364.

49

Satish Dingre, Sneha. "Data Integration: Exploring Challenges and Emerging Technologies for Automation". International Journal of Science and Research (IJSR) 12, no. 12 (December 5, 2023): 1395–97. http://dx.doi.org/10.21275/sr231218073311.

50

Li, Meng Juan, Lian Yin Jia, Jin Guo You, Jia Man Ding and Hai He Zhou. "Deep Web Data Integration with near Duplicate Free". Advanced Materials Research 756-759 (September 2013): 1855–59. http://dx.doi.org/10.4028/www.scientific.net/amr.756-759.1855.

Abstract:
Deep web data integration has become the focus of many research efforts in recent years. Near-duplicate detection is very important for a deep web integration system, yet few studies address deep web integration and near-duplicate detection together. In this paper, we develop an integration system, DWI-ndfree, to solve this problem. The wrapper of DWI-ndfree consists of four parts: the form filler, the navigator, the extractor and the near-duplicate detector. To find near-duplicate records, we propose an efficient algorithm, CheckNearDuplicate. DWI-ndfree can integrate deep web data free of near duplicates and has been used to execute several web extraction and integration tasks efficiently.
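To illustrate the kind of near-duplicate check such an integration system needs, the sketch below flags record pairs whose token-level Jaccard similarity exceeds a threshold. This is a generic stand-in, not the paper's CheckNearDuplicate algorithm; the tokenization and the 0.8 threshold are assumptions for demonstration.

```python
def near_duplicates(records, threshold=0.8):
    """Return index pairs (i, j) of records whose concatenated field
    values have token-level Jaccard similarity >= threshold."""
    def tokens(rec):
        # Flatten all field values into a lowercase token set
        return set(" ".join(str(v).lower() for v in rec.values()).split())

    toks = [tokens(r) for r in records]
    dupes = []
    for i in range(len(records)):
        for j in range(i + 1, len(records)):
            inter = len(toks[i] & toks[j])
            union = len(toks[i] | toks[j]) or 1   # avoid division by zero
            if inter / union >= threshold:
                dupes.append((i, j))
    return dupes
```

A production system would typically avoid the quadratic pairwise loop, e.g. by blocking on a key field or hashing shingles, but the similarity test itself is the core of any near-duplicate detector.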