A ready-made bibliography on the topic "DATA INTEGRATION APPROACH"

Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles.


Browse lists of current articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "DATA INTEGRATION APPROACH".

An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically create a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a scholarly publication in .pdf format and read its abstract online, provided the relevant details are available in the metadata.

Journal articles on the topic "DATA INTEGRATION APPROACH"

1

Todorova, Violeta, Veska Gancheva i Valeri Mladenov. "COVID-19 Medical Data Integration Approach". MOLECULAR SCIENCES AND APPLICATIONS 2 (18.07.2022): 102–6. http://dx.doi.org/10.37394/232023.2022.2.11.

Abstract:
The need for automated methods of extracting knowledge from data arises from the accumulation of large amounts of data. This paper presents a conceptual model for integrating and processing medical data in three layers, comprising a total of six phases: a model for integrating, filtering, sorting and aggregating Covid-19 data. A medical data integration workflow was designed, including steps for data integration, filtering and sorting. The workflow was applied to Covid-19 medical data from the clinical records of 20,400 potential patients. (A minimal illustrative sketch of such a layered workflow follows this entry.)
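The three-layer integrate, filter, sort, and aggregate workflow summarized above can be pictured with a minimal sketch. The record fields (patient_id, test_date, result) and the one-function-per-phase structure are assumptions made purely for illustration; they are not taken from the paper.

```python
# Minimal sketch of a layered integrate -> filter -> sort -> aggregate workflow.
# Field names (patient_id, test_date, result) are hypothetical.
from collections import Counter
from typing import Dict, Iterable, List

def integrate(sources: Iterable[List[Dict]]) -> List[Dict]:
    """Integration layer: merge records from multiple sources into one collection."""
    merged: List[Dict] = []
    for source in sources:
        merged.extend(source)
    return merged

def filter_valid(records: List[Dict]) -> List[Dict]:
    """Filtering phase: drop records missing mandatory fields."""
    return [r for r in records if r.get("patient_id") and r.get("test_date")]

def sort_records(records: List[Dict]) -> List[Dict]:
    """Sorting phase: order records per patient and date."""
    return sorted(records, key=lambda r: (r["patient_id"], r["test_date"]))

def aggregate(records: List[Dict]) -> Counter:
    """Aggregation phase: count records per outcome as a simple summary."""
    return Counter(r.get("result", "unknown") for r in records)

if __name__ == "__main__":
    clinic_a = [{"patient_id": "P1", "test_date": "2021-01-05", "result": "positive"}]
    clinic_b = [{"patient_id": "P2", "test_date": "2021-01-06", "result": "negative"},
                {"patient_id": "", "test_date": "2021-01-07"}]  # incomplete record
    summary = aggregate(sort_records(filter_valid(integrate([clinic_a, clinic_b]))))
    print(summary)  # Counter({'positive': 1, 'negative': 1})
```

In a real deployment each function would correspond to a service in the respective layer; here they are simply chained in memory to make the phase ordering concrete.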
2

Aftab, Shoohira, Hammad Afzal i Amna Khalid. "An Approach for Secure Semantic Data Integration at Data as a Service (DaaS) Layer". International Journal of Information and Education Technology 5, nr 2 (2015): 124–30. http://dx.doi.org/10.7763/ijiet.2015.v5.488.

3

Soriano, Lorna T. "An ETL-Driven Approach Data Integration for State Universities and Colleges". Journal of Advanced Research in Dynamical and Control Systems 12, nr 01-Special Issue (13.02.2020): 234–42. http://dx.doi.org/10.5373/jardcs/v12sp1/20201068.

4

Genesereth, Michael. "Data Integration: The Relational Logic Approach". Synthesis Lectures on Artificial Intelligence and Machine Learning 4, nr 1 (styczeń 2010): 1–97. http://dx.doi.org/10.2200/s00226ed1v01y200911aim008.

5

Fusco, Giuseppe, i Lerina Aversano. "An approach for semantic integration of heterogeneous data sources". PeerJ Computer Science 6 (2.03.2020): e254. http://dx.doi.org/10.7717/peerj-cs.254.

Abstract:
Integrating data from multiple heterogeneous data sources entails dealing with data distributed among heterogeneous information sources, which can be structured, semi-structured or unstructured, and providing the user with a unified view of these data. Thus, in general, gathering information is challenging, and one of the main reasons is that data sources are designed to support specific applications. Very often their structure is unknown to most users. Moreover, the stored data is often redundant, mixed with information only needed to support enterprise processes, and incomplete with respect to the business domain. Collecting, integrating, reconciling and efficiently extracting information from heterogeneous and autonomous data sources is regarded as a major challenge. In this paper, we present an approach for the semantic integration of heterogeneous data sources, DIF (Data Integration Framework), and a software prototype to support all aspects of a complex data integration process. The proposed approach is an ontology-based generalization of both Global-as-View and Local-as-View approaches. In particular, to overcome problems due to semantic heterogeneity and to support interoperability with external systems, ontologies are used as a conceptual schema to represent both the data sources to be integrated and the global view. (A minimal sketch contrasting the Global-as-View and Local-as-View mapping styles follows this entry.)
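The abstract contrasts Global-as-View and Local-as-View mappings, which DIF generalizes through a shared ontology. The sketch below illustrates the two mapping directions on a hypothetical Person concept and two hypothetical sources; the relation names and the pseudo-SQL strings are illustrative assumptions, not the DIF implementation.

```python
# Hypothetical global concept Person(name, affiliation) over two local sources.

# Global-as-View (GAV): the global concept is defined as a query over the sources,
# so answering a global query amounts to unfolding this definition.
def gav_person(source_hr, source_crm):
    people = [{"name": r["full_name"], "affiliation": r["dept"]} for r in source_hr]
    people += [{"name": r["contact"], "affiliation": r["company"]} for r in source_crm]
    return people

# Local-as-View (LAV): each source is described as a view over the global concept,
# and a global query must be rewritten in terms of the sources before execution.
LAV_MAPPINGS = {
    "source_hr":  "SELECT full_name AS name, dept AS affiliation FROM Person",
    "source_crm": "SELECT contact AS name, company AS affiliation FROM Person",
}

if __name__ == "__main__":
    hr = [{"full_name": "A. Rossi", "dept": "Sales"}]
    crm = [{"contact": "B. Chen", "company": "Acme"}]
    print(gav_person(hr, crm))
    print(LAV_MAPPINGS["source_hr"])
```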
6

CALVANESE, DIEGO, GIUSEPPE DE GIACOMO, MAURIZIO LENZERINI, DANIELE NARDI i RICCARDO ROSATI. "DATA INTEGRATION IN DATA WAREHOUSING". International Journal of Cooperative Information Systems 10, nr 03 (wrzesień 2001): 237–71. http://dx.doi.org/10.1142/s0218843001000345.

Abstract:
Information integration is one of the most important aspects of a Data Warehouse. When data passes from the sources of the application-oriented operational environment to the Data Warehouse, possible inconsistencies and redundancies should be resolved, so that the warehouse is able to provide an integrated and reconciled view of the organization's data. We describe a novel approach to data integration in Data Warehousing. Our approach is based on a conceptual representation of the Data Warehouse application domain, and follows the so-called local-as-view paradigm: both source and Data Warehouse relations are defined as views over the conceptual model. We propose a technique for declaratively specifying suitable reconciliation correspondences to be used in order to solve conflicts among data in different sources. The main goal of the method is to support the design of mediators that materialize the data in the Data Warehouse relations. Starting from the specification of one such relation as a query over the conceptual model, a rewriting algorithm reformulates the query in terms of both the source relations and the reconciliation correspondences, thus obtaining a correct specification of how to load the data into the materialized view.
7

Hasani, S., A. Sadeghi-Niaraki i M. Jelokhani-Niaraki. "SPATIAL DATA INTEGRATION USING ONTOLOGY-BASED APPROACH". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-1-W5 (11.12.2015): 293–96. http://dx.doi.org/10.5194/isprsarchives-xl-1-w5-293-2015.

Abstract:
Spatial data has become so crucial for many organizations that they have begun to produce it themselves. In some circumstances, the need to obtain real-time integrated data requires a sustainable mechanism for real-time integration; disaster management, for example, requires obtaining real-time data from various sources of information. One of the main challenges in this situation is the high degree of heterogeneity between different organizations' data. To solve this issue, we introduce an ontology-based method that provides sharing and integration capabilities for existing databases. In addition to resolving semantic heterogeneity, our proposed method also provides better access to information. Our approach consists of three steps: in the first step, the objects in a relational database are identified, the semantic relationships between them are modelled and, subsequently, the ontology of each database is created. In the second step, the resulting ontology is inserted into the database and the relationship of each ontology class is inserted into a newly created column in the database tables. The last step consists of a platform based on a service-oriented architecture, which allows data to be integrated using the concept of ontology mapping. The proposed approach, in addition to being fast and low cost, makes the data integration process easy while leaving the data unchanged, and thus takes advantage of the existing legacy applications. (A minimal sketch of the ontology-mapping idea follows this entry.)
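The ontology-mapping step described in the abstract (aligning classes derived from two relational databases so that records and queries can be translated) can be sketched as follows; all class, attribute, and mapping names are hypothetical and chosen only for illustration.

```python
# Each database exposes its tables as ontology classes; a mapping aligns
# semantically equivalent classes and attributes so records and queries can be
# translated between the systems. All names are hypothetical.

ontology_db1 = {"Road": ["id", "name", "surface"], "Bridge": ["id", "span_m"]}
ontology_db2 = {"Street": ["sid", "label", "pavement"], "Overpass": ["sid", "length"]}

class_map = {"Street": "Road", "Overpass": "Bridge"}          # class-level mapping
attr_map = {"Street": {"sid": "id", "label": "name", "pavement": "surface"},
            "Overpass": {"sid": "id", "length": "span_m"}}    # attribute-level mapping

def translate_record(source_class, record):
    """Rewrite a record from database 2 into database 1's vocabulary."""
    target_class = class_map[source_class]
    translated = {attr_map[source_class][key]: value for key, value in record.items()}
    return target_class, translated

if __name__ == "__main__":
    print(translate_record("Street", {"sid": 7, "label": "Main St", "pavement": "asphalt"}))
    # -> ('Road', {'id': 7, 'name': 'Main St', 'surface': 'asphalt'})
```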
8

Rohn, Eli. "CAS-Based Approach for Automatic Data Integration". American Journal of Operations Research 03, nr 01 (2013): 181–86. http://dx.doi.org/10.4236/ajor.2013.31a017.

9

Sanderson, David, Jack C. Chaplin i Svetan Ratchev. "Affordable Data Integration Approach for Production Enterprises". Procedia CIRP 93 (2020): 616–21. http://dx.doi.org/10.1016/j.procir.2020.04.124.

10

GRANT, JOHN, i JACK MINKER. "A logic-based approach to data integration". Theory and Practice of Logic Programming 2, nr 03 (23.04.2002): 323–68. http://dx.doi.org/10.1017/s1471068401001375.


Doctoral dissertations on the topic "DATA INTEGRATION APPROACH"

1

Fan, Hao. "Investigating a heterogeneous data integration approach for data warehousing". Thesis, Birkbeck (University of London), 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.424299.

2

Ziegler, Patrick. "The SIRUP approach to personal semantic data integration /". [S.l. : s.n.], 2007. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=016357341&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.

3

Criollo, Manjarrez Rotman A. "An approach for hydrogeological data management, integration and analysis". Doctoral thesis, Universitat Politècnica de Catalunya, 2019. http://hdl.handle.net/10803/666507.

Abstract:
The conceptualisation of a groundwater system involves continuous monitoring and evaluation of a large number of parameters (e.g., hydraulic parameters). Regarding the hydraulic properties of aquifers, their quantification is one of the most common problems in groundwater resources, and it is recognised that all methods to obtain them have their limitations and are scale dependent. Therefore, it is necessary to have methods and tools to estimate them within a spatial context and to validate their uncertainty when they are applied at a larger scale. The datasets collected and generated to build a groundwater conceptual model are often stored at different scales and in different formats (e.g., maps, spreadsheets or databases). This continuously growing volume of data calls for further improvement in how it is compiled, stored and integrated for analysis. This thesis contributes to: (i) providing dynamic and scalable methodologies for migrating and integrating multiple data infrastructures (data warehouses, spatial data infrastructures, ICT tools); (ii) gaining higher performance in their analysis within their spatial context; (iii) providing specific tools to analyse hydrogeological processes and to obtain hydraulic parameters that have a key role in groundwater studies; and (iv) sharing open-source and user-friendly software that allows standardisation, management, analysis, interpretation and sharing of hydrogeological data with a numerical model within a single geographical platform (GIS platform). A dynamic and scalable methodology has been designed to harmonise and standardise multiple datasets and third-party databases from different origins, or to connect them with ICT tools. This methodology can be widely applied in any kind of data migration and integration (DMI) process, to develop data warehouses or spatial data infrastructures, or to implement ICT tools on existing data infrastructures for further analyses, improving data governance. Higher performance in obtaining the hydraulic parameters of the aquifer has been addressed through the development of a GIS-based tool. The interpretation of pumping tests within their spatial context can reduce the uncertainty of their analysis, given accurate knowledge of the aquifer geometry and boundaries. This software, designed to collect, manage, visualise and analyse pumping tests in a GIS environment, supports the hydraulic parameterization of groundwater flow and transport models. To enhance the quantification of hydraulic parameters, a compilation, revision and analysis of hydraulic conductivity estimates based on grain-size methodologies has been performed. Afterwards, the uncertainty of applying these methods at a larger scale has been addressed and discussed by comparing the upscaling results with pumping tests. Finally, an open-source, user-friendly GIS-based tool for sharing data is presented. This new generation of GIS-based tools aims at simplifying the characterisation of groundwater bodies for the purpose of building rigorous, data-based environmental conceptual models. It allows users to standardise, manage, analyse and interpret hydrogeological and hydrochemical data. Due to its free and open-source architecture, it can be updated and extended for tailored applications. (An illustrative grain-size conductivity calculation is sketched after this entry.)
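One of the grain-size methods the thesis compiles and compares against pumping tests can be illustrated with Hazen's empirical relation, K ≈ C · d10², where d10 is the grain diameter (in mm) at which 10% of the sample is finer. The coefficient C = 100 (giving K in cm/s) and the sieve values below are common textbook assumptions used purely for illustration, not values from the thesis.

```python
# Hazen's empirical grain-size relation: K ~ C * d10**2, with d10 in mm and
# C ~ 100 giving K in cm/s -- a convention typical of clean sands, assumed here
# purely for illustration.

def hazen_conductivity(d10_mm: float, c: float = 100.0) -> float:
    """Return an estimated hydraulic conductivity in cm/s."""
    if d10_mm <= 0:
        raise ValueError("d10 must be positive")
    return c * d10_mm ** 2

if __name__ == "__main__":
    for d10 in (0.1, 0.2, 0.5):  # hypothetical sieve-analysis results in mm
        print(f"d10 = {d10} mm -> K ~ {hazen_conductivity(d10):.3f} cm/s")
```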
4

Faulstich, Lukas C. "The HyperView approach to the integration of semistructured data". [S.l. : s.n.], 2000. http://www.diss.fu-berlin.de/2000/33/index.html.

5

Kittivoravitkul, Sasivimol. "A bi-directional transformation approach for semistructured data integration". Thesis, Imperial College London, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.444093.

6

GHEZZI, ANNALISA. "A new approach to data integration in Archaeological geophysics". Doctoral thesis, Università degli Studi di Camerino, 2020. http://hdl.handle.net/11581/447386.

7

Lewis, Richard. "A semantic approach to railway data integration and decision support". Thesis, University of Birmingham, 2015. http://etheses.bham.ac.uk//id/eprint/5959/.

Abstract:
The work presented in this thesis was motivated by the desire of the railway industry to capitalise on new technology developments promising seamless integration of distributed data. This includes systems that generate, consume and transmit data for asset decision support. The primary aim of the research was to investigate the limitations of previous syntactic data integration exercises, creating a foundation for semantic system development. The objective was to create a modelling process enabling domain experts to provide the information concepts and the semantic relationships between those concepts. The resulting model caters for the heterogeneity between systems supplying data, which previous syntactic approaches failed to achieve, and integrates data from multiple systems such that the context of the data is not lost when it is centralised in a repository. The essence of this work is founded on two characteristics of distributed data management: the first is that current Web tools, like XML, are not effective for all aspects of technical interoperability because they do not capture the context of the data; the second is that there is little relationship between conventional database management systems and the data structures that are utilised in Web-based data exchange, which means that a different set of architecture components is required.
8

Mireku, Kwakye Michael. "A Practical Approach to Merging Multidimensional Data Models". Thesis, Université d'Ottawa / University of Ottawa, 2011. http://hdl.handle.net/10393/20457.

Abstract:
Schema merging is the process of incorporating data models into an integrated, consistent schema from which query solutions satisfying all incorporated models can be derived. The efficiency of such a process relies on the effective semantic representation of the chosen data models, as well as the mapping relationships between the elements of the source data models. Consider a scenario where, as a result of company mergers or acquisitions, a number of related, but possibly disparate, data marts need to be integrated into a global data warehouse. The ability to retrieve data across these disparate, but related, data marts poses an important challenge. Intuitively, forming an all-inclusive data warehouse includes the tedious tasks of identifying related fact and dimension table attributes, as well as the design of a schema merge algorithm for the integration. Additionally, the evaluation of the combined set of correct answers to queries, likely to be independently posed to such data marts, becomes difficult to achieve. Model management refers to a high-level, abstract programming language designed to efficiently manipulate schemas and mappings. In particular, model management operations such as match, compose mappings, apply functions and merge offer a way to handle the above-mentioned data integration problem within the domain of data warehousing. In this research, we introduce a methodology for the integration of star-schema source data marts into a single consolidated data warehouse based on model management. In our methodology, we discuss the development of three main streamlined steps to facilitate the generation of a global data warehouse. That is, we adopt techniques for deriving attribute correspondences and for schema mapping discovery. Finally, we formulate and design a merge algorithm based on multidimensional star schemas, which is the core contribution of this research. Our approach focuses on delivering a polynomial-time solution needed for the expected volume of data and its associated large-scale query processing. The experimental evaluation shows that an integrated schema, alongside instance data, can be derived based on the type of mappings adopted in the mapping discovery step. The adoption of Global-And-Local-As-View (GLAV) mapping models delivered a maximally-contained or exact representation of all fact and dimensional instance data tuples needed in query processing on the integrated data warehouse. Additionally, different forms of conflicts, such as semantic conflicts for related or unrelated dimension entities and descriptive conflicts for differing attribute data types, were encountered and resolved in the developed solution. Finally, this research has highlighted some critical and inherent issues regarding functional dependencies in mapping models, integrity constraints at the source data marts, and multi-valued dimension attributes. These issues were encountered during the integration of the source data marts, as was the case when evaluating queries processed on the merged data warehouse against those processed on the independent data marts. (A minimal sketch of a name-based star-schema merge follows this entry.)
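The core steps summarized above (deriving attribute correspondences between star-schema data marts and merging them) can be reduced to a toy sketch. The schemas and the trivial name-based matching below are hypothetical stand-ins for the thesis's mapping-discovery and merge algorithm, not a reproduction of it.

```python
# Hypothetical star schemas from two data marts; the merge below unions the
# attribute sets of tables matched by name -- a stand-in for the correspondence
# and merge steps described in the thesis.

mart_a = {
    "fact_sales": {"date_key", "product_key", "store_key", "amount"},
    "dim_product": {"product_key", "name", "category"},
}
mart_b = {
    "fact_sales": {"date_key", "product_key", "region_key", "revenue"},
    "dim_product": {"product_key", "name", "brand"},
}

def merge_star_schemas(a: dict, b: dict) -> dict:
    """Union attributes of same-named tables; keep unmatched tables as-is."""
    merged = {}
    for table in a.keys() | b.keys():
        merged[table] = a.get(table, set()) | b.get(table, set())
    return merged

if __name__ == "__main__":
    for table, attrs in merge_star_schemas(mart_a, mart_b).items():
        print(table, sorted(attrs))
```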
9

Mukviboonchai, Suvimol. "The mediated data integration (MeDInt) : An approach to the integration of database and legacy systems". Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2003. https://ro.ecu.edu.au/theses/1308.

Abstract:
The information required for decision making by executives in organizations is normally scattered across disparate data sources including databases and legacy systems. To gain a competitive advantage, it is extremely important for executives to be able to obtain one unique view of information in an accurate and timely manner. To do this, it is necessary to interoperate multiple data sources, which differ structurally and semantically. Particular problems occur when applying traditional integration approaches; for example, the global schema needs to be recreated whenever a component schema is modified. This research investigates the following heterogeneities between heterogeneous data sources: Data Model Heterogeneities, Schematic Heterogeneities and Semantic Heterogeneities. The problems of existing integration approaches are reviewed and solved by introducing and designing a new integration approach to logically interoperate heterogeneous data sources and to resolve the three previously classified heterogeneities. The research attempts to reduce the complexity of the integration process by maximising the degree of automation. Mediation and wrapping techniques are employed in this research. The Mediated Data Integration (MeDint) architecture has been introduced to integrate heterogeneous data sources. Three major elements, the MeDint Mediator, wrappers, and the Mediated Data Model (MDM), play important roles in the integration of heterogeneous data sources. The MeDint Mediator acts as an intermediate layer transforming queries to sub-queries, resolving conflicts, and consolidating conflict-resolved results. Wrappers serve as translators between the MeDint Mediator and data sources. Both the mediator and the wrappers are well supported by the MDM, a semantically rich data model which can describe or represent heterogeneous data schematically and semantically. Some organisational information systems have been tested and evaluated using the MeDint architecture. The results have addressed all the research questions regarding the interoperability of heterogeneous data sources. In addition, the results also confirm that the MeDint architecture is able to provide integration that is transparent to users and that schema evolution does not affect the integration. (A minimal mediator/wrapper sketch follows this entry.)
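The mediator/wrapper pattern on which the MeDint architecture is built can be sketched in a few lines; the wrapper classes, the single-predicate query form, and the field mapping below are simplifying assumptions for illustration, not the MeDint design itself.

```python
# Wrappers translate a simple field = value query into source-specific lookups;
# the mediator sends the sub-query to every wrapper and consolidates the results.

class SqlWrapper:
    def __init__(self, rows):
        self.rows = rows                                  # stand-in for a database

    def answer(self, field, value):
        return [r for r in self.rows if r.get(field) == value]

class LegacyFileWrapper:
    FIELD_MAP = {"customer": "cust_name"}                 # resolves schematic heterogeneity

    def __init__(self, rows):
        self.rows = rows                                  # stand-in for a legacy file

    def answer(self, field, value):
        local_field = self.FIELD_MAP.get(field, field)
        return [r for r in self.rows if r.get(local_field) == value]

class Mediator:
    def __init__(self, wrappers):
        self.wrappers = wrappers

    def query(self, field, value):
        results = []
        for wrapper in self.wrappers:                     # decompose into sub-queries
            results.extend(wrapper.answer(field, value))
        return results                                    # consolidation step (union)

if __name__ == "__main__":
    db = SqlWrapper([{"customer": "Acme", "total": 120}])
    legacy = LegacyFileWrapper([{"cust_name": "Acme", "region": "EU"}])
    print(Mediator([db, legacy]).query("customer", "Acme"))
```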
10

Engström, Henrik. "Selection of maintenance policies for a data warehousing environment : a cost based approach to meeting quality of service requirements". Thesis, University of Exeter, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.269667.


Books on the topic "DATA INTEGRATION APPROACH"

1

Hsu, Cheng. Enterprise Integration and Modeling: The Metadatabase Approach. Boston, MA: Springer US, 1996.

2

Kacani, Jolta. A Data-Centric Approach to Breaking the FDI Trap Through Integration in Global Value Chains. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-43189-1.

3

An artificial intelligence approach to VLSI routing. Boston: Kluwer Academic Publishers, 1986.

4

The bounding approach to VLSI circuit simulation. Boston: Kluwer Academic Publishers, 1986.

5

Yu, Meng-Lin. Automatic random logic layout synthesis: A module generator approach. Urbana, Ill: Dept. of Computer Science, University of Illinois at Urbana-Champaign, 1986.

6

An artificial intelligence approach to test generation. Boston: Kluwer Academic Publishers, 1987.

7

PSPICE and MATLAB for electronics: An integrated approach. Wyd. 2. Boca Raton: CRC Press, 2010.

8

Ware, Robert. Object definition in neuropathology: An assessment of an object-orientated approach to the integration of simple data structures and image information, in order to facilitate neuropathological diagnosis and research. [s.l: The author], 1994.

9

Gray, Peter M. D., 1940-, red. The Functional approach to data management: Modeling, analyzing, and integrating heterogeneous data. Berlin: Springer, 2004.

10

Taniar, David, i Li Chen. Integrations of data warehousing, data mining and database technologies: Innovative approaches. Hershey, PA: Information Science Reference, 2011.


Book chapters on the topic "DATA INTEGRATION APPROACH"

1

Raheem, Nasir. "Big Data Ingestion, Integration, and Management". W Big Data: A Tutorial-Based Approach, 49–57. First edition. | Boca Raton, FL : Taylor & Francis Group, [2019]: Chapman and Hall/CRC, 2019. http://dx.doi.org/10.1201/9780429060939-5.

2

Boudjlida, N., i O. Perrin. "An Open Approach for Data Integration". W OOIS’ 95, 94–98. London: Springer London, 1996. http://dx.doi.org/10.1007/978-1-4471-1009-5_9.

3

Benadjaoud, G. N., i B. T. David. "Data Integration: a Federated Approach with Data Exchanges". W IFIP Advances in Information and Communication Technology, 107–14. Boston, MA: Springer US, 1996. http://dx.doi.org/10.1007/978-0-387-35065-3_10.

4

Jarke, Matthias, i Christoph Quix. "Federated Data Integration in Data Spaces". W Designing Data Spaces, 181–94. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-93975-5_11.

Abstract:
Data Spaces form a network for sovereign data sharing. In this chapter, we explore the implications that the IDS reference architecture will have on typical scenarios of federated data integration and question answering processes. After a classification of data integration scenarios and their special requirements, we first present a workflow-based solution for integrated data materialization that has been used in several IDS use cases. We then discuss some limitations of such approaches and propose an additional approach based on logic formalisms and machine learning methods that promise to reduce data traffic, security, and privacy risks while helping users to select more meaningful data sources.
5

Davidson, Susan B., i Limsoon Wong. "The Kleisli Approach to Data Transformation and Integration". W The Functional Approach to Data Management, 135–65. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-662-05372-0_6.

6

Risch, Tore, Vanja Josifovski i Timour Katchaounov. "Functional Data Integration in a Distributed Mediator System". W The Functional Approach to Data Management, 211–38. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-662-05372-0_9.

7

Peim, Martin, Norman W. Paton i Enrico Franconi. "Applying Functional Languages in Knowledge-Based Information Integration Systems". W The Functional Approach to Data Management, 239–61. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-662-05372-0_10.

8

Tria, Francesco Di, Ezio Lefons i Filippo Tangorra. "Ontological Approach to Data Warehouse Source Integration". W Information Sciences and Systems 2013, 251–59. Cham: Springer International Publishing, 2013. http://dx.doi.org/10.1007/978-3-319-01604-7_25.

9

Ji, Qiu, Zhiqiang Gao i Zhisheng Huang. "Integration of Pattern-Based Debugging Approach into RaDON". W Linked Data and Knowledge Graph, 243–46. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-54025-7_23.

10

Gray, Peter M. D., i Graham J. L. Kemp. "An Expressive Functional Data Model and Query Language for Bioinformatics Data Integration". W The Functional Approach to Data Management, 166–88. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-662-05372-0_7.


Conference abstracts on the topic "DATA INTEGRATION APPROACH"

1

Liu, Ting, Sharon Small, James Kubricht, Peter Tu, Harry Shen, Lydia Cartwright i Samuil Orlioglu. "A Data Driven Approach for Language Acquisition". W Intelligent Human Systems Integration (IHSI 2023) Integrating People and Intelligent Systems. AHFE International, 2023. http://dx.doi.org/10.54941/ahfe1002842.

Abstract:
Automatic Language Acquisition focuses on teaching an agent to acquire knowledge to understand the surrounding environment and to adapt to new environments. Traditional language understanding models fall into three main categories: supervised, semi-supervised, and unsupervised. A supervised approach is usually accurate but requires a large training dataset, whose building process is expensive and time consuming. In addition, the trained model is difficult to shift to other domains. On the other hand, building an unsupervised model is cheap and flexible, but its performance is usually significantly lower than that of a supervised one. With a relatively small set of guidance at the beginning, a semi-supervised approach can teach itself through the unlabeled dataset to achieve performance comparable to a supervised model. However, building the guidance is not a trivial task, since the learning process will not be effective if the relationship between the labeled and unlabeled data is weak. Unlike these traditional models, children do not require large amounts of training data when they learn. Instead, they can accurately generalize their knowledge from one object to other objects. In addition, communication with their parents, teachers and peers helps to fix wrong claims arising from this generalization. In this paper, we present a multimodal system that simulates the children's learning process to acquire knowledge of entities by studying three types of attributes: descriptive (the outlook of an entity), defining (the components of an entity), and affordance (how an entity can be used). We first utilize an unsupervised Emergent Language (EL) approach to generate symbolic language (EL codes) to interpret the given images of the entities (10 images per entity). K-Means clustering is then used to group entity images that share similar EL codes. Then we employ a data-driven approach to teach the agent the attributes of the entities in the clusters. We first calculate the tf-idf scores of the words from the text pieces (extracted from the Corpus of Contemporary American English, a balanced corpus for American English) containing an entity. From the top-ranked words, with a few sample text pieces, the human expert tells which words are attributes and identifies the attribute type. The human expert also marks the text pieces containing the attributes. For example, red is a descriptive attribute of cup in "the red cup" but not in "a cup of red wine". The learned knowledge is sent to a bootstrapping model to find not only new attributes but also new entities. Our results show that the data-driven approach spent much less time but learned more attributes compared with the baseline of our system, which teaches the agent the defining attributes of the entities using a carefully designed curriculum. (A minimal sketch of the clustering and tf-idf steps follows this entry.)
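Two of the data-driven steps described above (grouping entity images by their emergent-language codes and ranking candidate attribute words by tf-idf) can be approximated with standard tooling. The toy EL-code vectors, the toy corpus, and the use of scikit-learn are assumptions made for illustration; they are not the authors' implementation.

```python
# Sketch of two data-driven steps: K-Means over stand-in EL code vectors, and
# tf-idf ranking of candidate attribute words for an entity.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Stand-in for emergent-language codes of entity images (one vector per image).
el_codes = np.array([[0, 1, 1], [0, 1, 0], [1, 0, 1], [1, 0, 0]])
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(el_codes)
print("image clusters:", clusters)

# Toy text pieces mentioning the entity "cup"; tf-idf surfaces candidate attributes.
texts = ["the red cup on the table", "a cup of red wine", "a ceramic cup with a handle"]
vectorizer = TfidfVectorizer(stop_words="english")
scores = vectorizer.fit_transform(texts).toarray().sum(axis=0)
ranked = sorted(zip(vectorizer.get_feature_names_out(), scores), key=lambda x: -x[1])
print("candidate attribute words:", ranked[:5])
```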
2

Tsumoto, Shusaku, Shoji Hirano i Yuko Tsumoto. "Information reuse in hospital information systems: A data mining approach". W Integration (IRI). IEEE, 2011. http://dx.doi.org/10.1109/iri.2011.6009541.

3

Murugesan, K., Md Rukunuddin Ghalib, J. Gitanjali, J. Indumathi i D. Manjula. "A pioneering Cryptic Random Projection based approach for privacy preserving data mining". W Integration (IRI). IEEE, 2009. http://dx.doi.org/10.1109/iri.2009.5211646.

4

Minguez, Jorge, Mihaly Jakob, Uwe Heinkel i Bernhard Mitschang. "A SOA-based approach for the integration of a data propagation system". W Integration (IRI). IEEE, 2009. http://dx.doi.org/10.1109/iri.2009.5211609.

5

Bouchra, Boulkroun, Benchikha Fouzia i Bachtarzi Chahinez. "Data Sources Integration Using Viewpoint-Based Approach". W IPAC '15: International Conference on Intelligent Information Processing, Security and Advanced Communication. New York, NY, USA: ACM, 2015. http://dx.doi.org/10.1145/2816839.2816877.

6

Abdel-Moneim, Mohamed Samir, Ali Hamed El-Bastawissy i Mohamed Hamed Kholief. "Quality Driven Approach for Data Integration Systems". W The 7th International Conference on Information Technology. Al-Zaytoonah University of Jordan, 2015. http://dx.doi.org/10.15849/icit.2015.0078.

7

Kettouch, Mohamed Salah, Cristina Luca, Mike Hobbs i Arooj Fatima. "Data integration approach for semi-structured and structured data (Linked Data)". W 2015 IEEE 13th International Conference on Industrial Informatics (INDIN). IEEE, 2015. http://dx.doi.org/10.1109/indin.2015.7281842.

8

Thomas, Matthew L., Gavin Shaddick, David Topping, Karyn Morrissey, Thomas J. Brannan, Mike Diessner, Ruth C. E. Bowyer i in. "A Data Integration Approach to Estimating Personal Exposures to Air Pollution". W 2022 IEEE International Conference on Big Data (Big Data). IEEE, 2022. http://dx.doi.org/10.1109/bigdata55660.2022.10020701.

9

Bukhari, Syed Ahmad Chan, Jeff Mandell, Steven H. Kleinstein i Kei-Hoi Cheung. "A linked data graph approach to integration of immunological data". W 2019 IEEE International Conference on Bioinformatics and Biomedicine (BIBM). IEEE, 2019. http://dx.doi.org/10.1109/bibm47256.2019.8982986.

10

Zhang, Laomo, Ying Ma i Guodong Wang. "An Extended Hybrid Ontology Approach to Data Integration". W 2009 2nd International Conference on Biomedical Engineering and Informatics. IEEE, 2009. http://dx.doi.org/10.1109/bmei.2009.5304734.


Organizational reports on the topic "DATA INTEGRATION APPROACH"

1

Widom, Jennifer. A Warehousing Approach to Data and Knowledge Integration. Fort Belvoir, VA: Defense Technical Information Center, sierpień 1997. http://dx.doi.org/10.21236/ada329436.

2

Whitney, Paul D., Christian Posse i Xingye C. Lei. Towards a Unified Approach to Information Integration - A review paper on data/information fusion. Office of Scientific and Technical Information (OSTI), październik 2005. http://dx.doi.org/10.2172/881949.

3

RUGGERI, Paolo, Erwan GLOAGUEN, James IRVING i Klaus HOLLIGER. Integration of Local-Scale Hydrological and Regional-Scale Geophysical Data Based on a non-Linear Bayesian Equential Simulation Approach. Cogeo@oeaw-giscience, wrzesień 2011. http://dx.doi.org/10.5242/iamg.2011.0250.

4

Juden, Matthew, Tichaona Mapuwei, Till Tietz, Rachel Sarguta, Lily Medina, Audrey Prost, Macartan Humphreys i in. Process Outcome Integration with Theory (POInT): academic report. Centre for Excellence and Development Impact and Learning (CEDIL), marzec 2023. http://dx.doi.org/10.51744/crpp5.

Abstract:
This paper describes the development and testing of a novel approach to evaluating development interventions – the POInT approach. The authors used Bayesian causal modelling to integrate process and outcome data and to generate insights about all aspects of the theory of change, including outcomes, mechanisms, mediators and moderators. They partnered with two teams who had evaluated or were evaluating complex development interventions: the UPAVAN team had evaluated a nutrition-sensitive agriculture intervention in Odisha, India, and the DIG team was in the process of evaluating a disability-inclusive poverty graduation intervention in Uganda. The partner teams' theories of change were adapted into a formal causal model, depicted as a directed acyclic graph (DAG). The DAG was specified in the statistical software R using the CausalQueries package, which was extended to handle large models. Using a novel prior elicitation strategy covering many more parameters than has previously been possible, the partner teams' beliefs about the nature and strength of causal links in the causal model (priors) were elicited and combined into a single set of shared prior beliefs. The model was updated on data alone, as well as on data plus priors, to generate posterior models under different assumptions. Finally, the prior and posterior models were queried to learn about estimates of interest and about the relative roles of prior beliefs and data in the combined analysis. (A rough illustrative sketch of the prior-plus-data updating logic follows this entry.)
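The report's workflow (elicit priors, update a causal model on data, then query it) relies on the R package CausalQueries over a full DAG. As a rough, hypothetical illustration of the same prior-plus-data logic in Python, the sketch below performs conjugate Beta-Binomial updating for a single binary treatment and outcome and queries the posterior average treatment effect; the prior parameters and trial counts are invented, and this is not the POInT model itself.

```python
# Toy prior + data -> posterior -> query workflow for one binary treatment X and
# outcome Y, using conjugate Beta-Binomial updating (hypothetical numbers).
from scipy import stats

# Elicited priors (hypothetical): Beta(a, b) over P(Y=1 | arm) in each arm.
priors = {"treated": (3, 2), "control": (2, 3)}
# Hypothetical outcome data: (successes, trials) in each arm.
data = {"treated": (40, 60), "control": (25, 60)}

posteriors = {}
for arm, (a, b) in priors.items():
    successes, trials = data[arm]
    posteriors[arm] = stats.beta(a + successes, b + trials - successes)  # conjugate update

# Query of interest: posterior mean of the average treatment effect.
ate = posteriors["treated"].mean() - posteriors["control"].mean()
print(f"posterior mean ATE ~ {ate:.3f}")
```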
5

Borgida, Alex, i Ralf Küsters. What's not in a name? Initial Explorations of a Structural Approach to Integrating Large Concept Knowledge-Bases. Aachen University of Technology, 1999. http://dx.doi.org/10.25368/2022.101.

Abstract:
From the introduction: Given two ontologies/terminologies, i.e., collections of terms and their 'meanings' as used in some universe of discourse (UofD), our general task is to integrate them into a single ontology which captures the meanings of the original terms and their inter-relationships. This problem is motivated by several application scenarios: • First, such ontologies have been and are being developed independently by multiple groups for knowledge-based and other applications. Among others, medicine is an area in which such ontologies already abound [RZStGC, CCHJ94, SCC97]. • Second, a traditional step in database design has been so-called 'view integration': taking the descriptions of the database needs of different parts of an organization (called 'external views') and coming up with a unified central schema (called the 'logical schema') for the database [BLN86]. Although the database views might be expressed in some low-level formalism, such as the relational data model, one can express the semantics (meta-data) in a more expressive notation, which can be thought of as an ontology. The integration of the ontologies can then guide the integration of the views. • Finally, databases and semistructured data on the internet provide many examples where there are multiple, existing heterogeneous information sources for which uniform access is desired. To achieve this goal, it is necessary to relate the contents of the various information sources. The approach of choice has been the development of a single, integrated ontology, starting from separate ontologies capturing the semantics of the heterogeneous sources [Kas97, CDGL+98]. Of course, we could just take the union of the two ontologies and return the result as the integration. However, except for the case when the ontologies have absolutely nothing to do with each other, this seems inappropriate. Therefore, part of our task will be to explore what it means to 'integrate' two ontologies. To help in this, we will in fact assume here that the ontologies describe exactly the same aspects of the universe of discourse (UofD), leaving the issue of dealing with partially overlapping ontologies for a separate paper.
6

Dempsey, Terri L. Handling the Qualitative Side of Mixed Methods Research: A Multisite, Team-Based High School Education Evaluation Study. RTI Press, wrzesień 2018. http://dx.doi.org/10.3768/rtipress.2018.mr.0039.1809.

Abstract:
Attention to mixed methods research has increased in recent years, particularly among funding agencies that increasingly require a mixed methods approach for program evaluation. At the same time, researchers operating within large-scale, rapid-turnaround research projects are faced with the reality that collection and analysis of large amounts of qualitative data typically require an intense amount of project resources and time. However, practical examples of efficiently collecting and handling high-quality qualitative data within these studies are limited. More examples are also needed of procedures for integrating the qualitative and quantitative strands of a study from design to interpretation in ways that can facilitate efficiencies. This paper provides a detailed description of the strategies used to collect and analyze qualitative data in what the research team believed to be an efficient, high-quality way within a team-based mixed methods evaluation study of science, technology, engineering, and math (STEM) high-school education. The research team employed an iterative approach to qualitative data analysis that combined matrix analyses with Microsoft Excel and the qualitative data analysis software program ATLAS.ti. This approach yielded a number of practical benefits. Selected preliminary results illustrate how this approach can simplify analysis and facilitate data integration.
7

Balali, Vahid. System-of-Systems Integration for Civil Infrastructures Resiliency Toward MultiHazard Events. Mineta Transportation Institute, sierpień 2023. http://dx.doi.org/10.31979/mti.2023.2245.

Abstract:
Civil infrastructure systems—facilities that supply principal services, such as electricity, water, transportation, etc., to a community—are the backbone of modern society. These systems are frequently subject to multi-hazard events, such as earthquakes. The poor resiliency of these infrastructures results in many human casualties and significant economic losses every year. A holistic view of how different civil infrastructure systems operate independently, and how they interact and communicate with each other, is required for a resilient infrastructure system. More specifically, a systems engineering approach is required to enable infrastructure to remain resilient in the case of extreme events, including natural disasters. To address these challenges, this research builds on the proposal that infrastructure systems be equipped with state-of-the-art sensor networks that continuously record the condition and performance of the infrastructure. The sensor data from each infrastructure are then transferred to a data analysis system component that employs artificial intelligence techniques to constantly analyze the infrastructure's resiliency and energy efficiency performance. This research models the resilient infrastructure problem as a System of Systems (SoS) comprised of the above-mentioned components. It explores system integration and operability challenges and proposes solutions to meet the requirements of the SoS. An integration ontology, as well as a data-centric architecture, is developed to enable infrastructure resiliency toward multi-hazard events. The Federal Emergency Management Agency (FEMA) and infrastructure managers, such as Departments of Transportation (DOTs) and the Federal Highway Administration (FHWA), can learn from and integrate these solutions to make civil infrastructure systems more resilient for all.
8

Frazer, Sarah, Anna Wetterberg i Eric Johnson. The Value of Integrating Governance and Sector Programs: Evidence from Senegal. RTI Press, wrzesień 2021. http://dx.doi.org/10.3768/rtipress.2021.rb.0028.2109.

Abstract:
As the global community works toward the Sustainable Development Goals, closer integration between governance and sectoral interventions offers a promising yet unproven avenue for improving health service delivery. We interrogate what value an integrated governance approach, intentionally combining governance and sectoral investments in strategic collaboration, adds to health service readiness and delivery, using data from a study in Senegal. Our quasi-experimental research design compared treatment and control communes to determine the value added of an integrated governance approach in Senegal compared to health interventions alone. Our analysis shows that integrated governance is associated with improvements in some health service delivery dimensions, specifically in aspects of health facility access and quality. These findings—that health facilities are more open, with higher-quality infrastructure and staff more frequently following correct procedures after integrated governance treatment—suggest a higher level of service readiness. We suggest that capacity building of governance structures and an emphasis on social accountability could explain the added value of integrating governance and health programming. These elements may help overcome a critical bottleneck between citizens and local government often seen with narrower sector-only or governance-only approaches. We discuss implications for health services in Senegal, international development program design, and further research.
9

Komba, Aneth, i Richard Shukia. An Analysis of the Basic Education Curriculum in Tanzania: The Integration, Scope, and Sequence of 21st Century Skills. Research on Improving Systems of Education (RISE), luty 2023. http://dx.doi.org/10.35489/bsg-rise-wp_2023/129.

Abstract:
This study generated evidence on whether or not the basic education curriculum is geared towards developing problem-solving, collaboration, creativity, and critical thinking skills among those who graduate from the basic education system. It was informed by a mixed-methodology research approach. The data were collected using interviews and documentary review. The findings reveal that the intention to promote 21st century skills through the basic education system in Tanzania is clear, as it is stated in various policy documents, including the Education for Self-Reliance philosophy, the 2014 Education and Training Policy and the National Curriculum Framework for Basic and Teacher Education. Furthermore, these skills are clearly reflected in every curriculum and syllabus document, yet those who graduate from the basic and advanced secondary levels are claimed to lack these skills. This suggests a variation between the enacted and the intended curriculum. We conclude that certain system elements are weak and hence threaten the effective implementation of the curriculum. These weak system elements include limited finance, a teacher shortage, and the lack of a teacher continuous professional development programme. This research suggests that due consideration should be given to the provision of the resources required for the successful implementation of the curriculum. These include the allocation of sufficient funds, the employment of more teachers and the provision of regular continuous professional development for teachers as a way to strengthen the system elements that we identified.
10

Vogtsberger. L52138 Resolution Capabilities of High Resolution Axial Flux Leakage Casing Inspection Tools. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), styczeń 2008. http://dx.doi.org/10.55274/r0011164.

Abstract:
Objective: Design and construction of a 5.5-inch prototype downhole inspection tool and electronics module; implementation of a preliminary software acquisition and display module; integration of the hardware and software systems; and conduct of an operational field test (OFT). Technical Approach: Continuing Phase 1 of this research, the lessons learned in Phase 1 (Bench Test Prototype) were incorporated into the design and construction of an operational field test instrument. Implementation of preliminary software for data acquisition and display, together with integration into existing wireline systems operations, fulfilled the overall system requirements. Finally, operational field tests on PRCI committee member wells provide verification that all components of the system operate and deliver the desired results.
