Academic literature on the topic 'Data integration method'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Data integration method.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Data integration method"

1

Kim, Jee-Hyun, and Young-Im Cho. "Research on Big Data Integration Method." Journal of the Korea Society of Computer and Information 22, no. 1 (January 31, 2017): 49–56. http://dx.doi.org/10.9708/jksci.2017.22.01.049.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Wu, Cen, Fei Zhou, Jie Ren, Xiaoxi Li, Yu Jiang, and Shuangge Ma. "A Selective Review of Multi-Level Omics Data Integration Using Variable Selection." High-Throughput 8, no. 1 (January 18, 2019): 4. http://dx.doi.org/10.3390/ht8010004.

Full text
Abstract:
High-throughput technologies have been used to generate a large amount of omics data. In the past, single-level analysis has been extensively conducted where the omics measurements at different levels, including mRNA, microRNA, CNV and DNA methylation, are analyzed separately. As the molecular complexity of disease etiology exists at all different levels, integrative analysis offers an effective way to borrow strength across multi-level omics data and can be more powerful than single level analysis. In this article, we focus on reviewing existing multi-omics integration studies by paying special attention to variable selection methods. We first summarize published reviews on integrating multi-level omics data. Next, after a brief overview on variable selection methods, we review existing supervised, semi-supervised and unsupervised integrative analyses within parallel and hierarchical integration studies, respectively. The strength and limitations of the methods are discussed in detail. No existing integration method can dominate the rest. The computation aspects are also investigated. The review concludes with possible limitations and future directions for multi-level omics data integration.
APA, Harvard, Vancouver, ISO, and other styles
3

Maleszka, Marcin, and Ngoc Thanh Nguyen. "A METHOD FOR COMPLEX HIERARCHICAL DATA INTEGRATION." Cybernetics and Systems 42, no. 5 (June 2011): 358–78. http://dx.doi.org/10.1080/01969722.2011.595341.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Peng, Lihui, Bowen Zhang, Huichao Zhao, Sonh Aubin Stephane, Hiroaki Ishikawa, and Kazuyoshi Shimizu. "Data Integration Method for Multipath Ultrasonic Flowmeter." IEEE Sensors Journal 12, no. 9 (September 2012): 2866–74. http://dx.doi.org/10.1109/jsen.2012.2204738.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Golpîra, Hêriş. "A Hybrid Clustering Method Using Balanced Scorecard and Data Envelopment Analysis." International Journal of Innovation and Economic Development 1, no. 7 (2015): 15–25. http://dx.doi.org/10.18775/ijied.1849-7551-7020.2015.17.2002.

Full text
Abstract:
This paper introduces a new hybrid clustering method using Data Envelopment Analysis (DEA) and Balanced Scorecard (BSC) methods. A major weakness of DEA is that it cannot identify its inputs and outputs by itself; in the proposed method, this gap is resolved by integrating DEA with BSC. The major drawback of this integration is that the DEA method requires a set of decision-making units (DMUs) together with corresponding inputs and outputs. To deal with this disadvantage, the proposed method selects the most important strategic factors obtained from the BSC method. These data are considered the input data for the DEA method to calculate the relative closeness (RC) of each DMU to the ideal one. Plotting the screen diagram with respect to the RC index leads to the final clustering. Finally, computational results show the applicability and usefulness of the method.
APA, Harvard, Vancouver, ISO, and other styles
6

Wang, Xinming, Haoxiang Tan, Kaijun Chen, Hua Tang, Gansen Zhao, Yong Tang, and Ruihua Nie. "A policy integration method based on multilevel security for data integration." Wuhan University Journal of Natural Sciences 20, no. 6 (November 7, 2015): 483–89. http://dx.doi.org/10.1007/s11859-015-1123-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Liu, Guo Ling, and Chun Hua Yang. "A Method of Data Integration Based on Cloud." Applied Mechanics and Materials 433-435 (October 2013): 1876–79. http://dx.doi.org/10.4028/www.scientific.net/amm.433-435.1876.

Full text
Abstract:
This work studied the current situation of data integration. To better address its complexity and diversity, we propose a data integration service platform based on SaaS. The architecture of the data integration model and its working principles are introduced. In the platform, source data and result data are exposed through two interfaces in the form of services. Within the platform, data are filtered and combined according to users’ requirements, and users call the SaaS service to obtain the data they want. Finally, we analyze the model’s characteristics. This platform model is more intelligent, efficient, and personalized in solving complicated data integration tasks.
APA, Harvard, Vancouver, ISO, and other styles
8

Mekala, A. "An Ontology Approach to Data Integration using Mapping Method." International Journal for Modern Trends in Science and Technology 6, no. 12 (December 4, 2020): 28–32. http://dx.doi.org/10.46501/ijmtst061206.

Full text
Abstract:
Text mining is a technique for discovering meaningful patterns in available text documents. Pattern discovery in text and the association of documents is a well-known problem in data mining, and analyzing text content and categorizing documents is a complex task, approached through both supervised and unsupervised methods of document compilation. The term “federated databases” refers to the integration of distributed, autonomous, and heterogeneous databases; a federation can also include information systems, not only databases. When integrating data, several issues must be addressed. Here, we focus on the problem of heterogeneity, more specifically on semantic heterogeneity – that is, problems related to semantically equivalent concepts or semantically related/unrelated concepts. To address this problem, we apply the idea of ontologies as a tool for data integration. In this paper, we clarify this concept and briefly explain a technique for constructing an ontology using a hybrid ontology approach.
APA, Harvard, Vancouver, ISO, and other styles
9

Jiao, Ming Lian. "GPS-InSAR Data Integration Method and its Application." Applied Mechanics and Materials 170-173 (May 2012): 2799–802. http://dx.doi.org/10.4028/www.scientific.net/amm.170-173.2799.

Full text
Abstract:
The powerful tool of GPS-InSAR integration is drawing more and more attention in deformation monitoring. This paper first introduces methods for correcting atmospheric effects and orbit errors in InSAR images using GPS data. Then, the scheme of GPS-InSAR data integration is expounded. Finally, analysis and examples prove that GPS and InSAR technologies are highly complementary: on the one hand, GPS provides a good approach for resolving InSAR's sensitivity to atmospheric parameter changes and for correcting orbit errors; on the other hand, GPS technology can be used to raise the InSAR spatial resolution, enabling surface deformation monitoring at millimeter-level precision. Therefore, integrated GPS-InSAR technology breaks through the technical limitations of any single application. The two techniques each play to their respective advantages, greatly improving resolving capacity in the space and time domains, so as to provide better services for monitoring surface deformation.
APA, Harvard, Vancouver, ISO, and other styles
10

Rice, Gary K., Greg King, and Jay Henson. "Integration of geology, seismic, and geochemical data — Theory and practice in Cheeseburger Field, Stonewall County, Texas, USA." Interpretation 4, no. 2 (May 1, 2016): T215–T225. http://dx.doi.org/10.1190/int-2015-0132.1.

Full text
Abstract:
Difficult-to-find petroleum resources and expensive drilling drive the need for improved exploration methods. Although improvement can be made by technically advancing individual methods, greater improvement comes from integrating existing independent exploration methods to dramatically improve drilling success. Exploration integration is often discussed, but it is less often carried out; one reason may be the lack of clearly defined integration methods. In this study, we looked at the integration of independent exploration methods; we studied its fundamental principles, how it works, and why it is effective. Derived from basic probability theory, a simple map overlay of independent exploration data can be an effective integration method. Probability calculations determine the probability of a successful well from the known probabilities of the integrated independent techniques. A successful integration of data from Cheeseburger Field, Eastern Shelf of the Midland Basin, Stonewall County, Texas, illustrates how the integration of 3D seismic, subsurface geologic, and surface geochemical data improves drilling results beyond those achieved by any single method used alone. In Cheeseburger Field, 3D seismic and subsurface geology resulted in [Formula: see text] successful wells. After integrating geochemical exploration data, results improved to [Formula: see text]. Modern petroleum exploration is a multitool, integrated information science. Probability theory provides a means for predicting the outcome of integrating independent exploration methods. Enhanced exploration success can be achieved by combining independent and complementary exploration methods in this integration process.
APA, Harvard, Vancouver, ISO, and other styles
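The probability reasoning described in the Rice, King, and Henson abstract, deriving a combined success probability from the known probabilities of independent techniques, can be illustrated with a minimal odds-product sketch. This is one standard independence-based combination, not the paper's exact derivation; the prior and per-method probabilities below are assumed example inputs.

```python
def combined_success(method_probs, prior):
    """Combine per-method success probabilities under a conditional
    independence assumption (odds-product form of naive Bayes).

    method_probs: success probability each method achieves on its own.
    prior: baseline success probability with no method applied.
    """
    prior_odds = prior / (1 - prior)
    odds = prior_odds
    for p in method_probs:
        # Each independent method contributes its likelihood ratio
        # relative to the prior.
        odds *= (p / (1 - p)) / prior_odds
    return odds / (1 + odds)

# Two independent methods that each achieve 60% success on a 50% prior
# jointly imply about 69% success (9/13) under this assumption.
```

Under this sketch, agreement between complementary methods always raises the combined probability above the best single method, which is the qualitative effect the abstract reports.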

Dissertations / Theses on the topic "Data integration method"

1

Chavali, Krishna Kumar. "Integration of statistical and neural network method for data analysis." Morgantown, W. Va. : [West Virginia University Libraries], 2006. https://eidr.wvu.edu/etd/documentdata.eTD?documentid=4749.

Full text
Abstract:
Thesis (M.S.)--West Virginia University, 2006.
Title from document title page. Document formatted into pages; contains viii, 68 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 50-51).
APA, Harvard, Vancouver, ISO, and other styles
2

Lin, Shih-Yung. "Integration and processing of high-resolution moiré-interferometry data." Diss., Virginia Tech, 1992. http://hdl.handle.net/10919/40181.

Full text
Abstract:
A new hybrid method combining moiré interferometry, a high-resolution data-reduction technique, a two-dimensional data-smoothing method, and the Finite Element Method (FEM) has been successfully developed. This hybrid method has been applied to residual strain analyses of composite panels, strain concentrations around optical fibers embedded in composites, and the cruciform composite shear test. The hybrid method allows moiré data to be collected with higher precision and accuracy by digitizing overexposed moiré patterns (U and V fields) with appropriate carrier fringes. The resolution of the data is ± 20 nm. The data extracted from the moiré patterns are interfaced to an FEM package through an automatic mesh generator. This mesh generator produces a nonuniform FEM mesh by connecting the digitized data points into triangles. The mesh, which uses the digitized displacement data as boundary conditions, is then fed to and processed by a commercial FEM package. Due to the natural scatter of the displacement data digitized from moiré patterns, the accuracy of strain values is significantly affected. A modified finite-element model with linear spring elements is introduced so data smoothing can be done easily in two-dimensional space. The results of the data smoothing are controlled by limiting the stretch of the springs to less than the resolution of the experimental method. With the full-field hybrid method, strain contours from moiré interferometry can be easily obtained with good accuracy. If the properties of the material are known, the stress patterns can also be obtained. In addition, this method can be used to analyze any two-dimensional displacement data, including data from the grid method and holography.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
3

Graciolli, Vinicius Medeiros. "A novel classification method applied to well log data calibrated by ontology based core descriptions." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2018. http://hdl.handle.net/10183/174993.

Full text
Abstract:
A method for the automatic detection of lithological types and layer contacts was developed through the combined statistical analysis of a suite of conventional wireline logs, calibrated by the systematic description of cores. The intent of this project is to allow the integration of rock data into reservoir models. The cores are described with the support of an ontology-based nomenclature system that extensively formalizes a large set of rock attributes, including lithology, texture, primary and diagenetic composition, and depositional, diagenetic, and deformational structures. The descriptions are stored in a relational database along with the records of conventional wireline logs (gamma ray, resistivity, density, neutron, sonic) of each analyzed well. This structure allows defining prototypes of combined log values for each recognized lithology by calculating the mean and variance-covariance of the values measured by each log tool for each of the lithologies described in the cores. The statistical algorithm is able to learn with each addition of a described and logged core interval, progressively refining the automatic lithological identification. The detection of lithological contacts is performed by smoothing each of the logs with two moving means of different window sizes. The results of each pair of smoothed logs are compared, and the places where the lines cross define the locations of abrupt shifts in the log values, potentially indicating a change of lithology. The results of applying this method to each log are then unified into a single assessment of lithological boundaries. The mean and variance-covariance data derived from the core samples are then used to build an n-dimensional Gaussian distribution for each recognized lithology. At this point, Bayesian priors are also calculated for each lithology. These distributions are checked against each of the previously detected lithological intervals by means of a probability density function, evaluating how close the interval is to each lithology prototype and allowing the assignment of a lithological type to each interval. The developed method was tested on a set of wells in the Sergipe-Alagoas basin, and the prediction accuracy achieved during testing is superior to classic pattern recognition methods such as neural networks and KNN classifiers. The method was then combined with neural networks and KNN classifiers into a multi-agent system. The results show significant potential for effective operational application to the construction of geological models for the exploration and development of areas with a large volume of conventional wireline log data and representative cored intervals.
APA, Harvard, Vancouver, ISO, and other styles
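The contact-detection step described in the Graciolli abstract above, smoothing each log with two moving means of different window sizes and flagging depths where the smoothed curves cross, can be sketched as follows. The trailing-window smoothing, the window sizes, and the example series are illustrative assumptions, not the thesis's actual parameters.

```python
def moving_mean(values, window):
    # Trailing moving mean; the effective window is shorter near the start.
    out = []
    for i in range(len(values)):
        lo = max(0, i - window + 1)
        out.append(sum(values[lo:i + 1]) / (i + 1 - lo))
    return out

def contact_candidates(log, short_win=3, long_win=9):
    """Indices where the short and long moving means cross,
    i.e. candidate depths for abrupt lithology changes."""
    short = moving_mean(log, short_win)
    long_ = moving_mean(log, long_win)
    crossings, prev_sign = [], 0
    for i in range(len(log)):
        d = short[i] - long_[i]
        sign = (d > 0) - (d < 0)
        # A sign flip of (short - long) marks a crossing of the two curves.
        if sign and prev_sign and sign != prev_sign:
            crossings.append(i)
        if sign:
            prev_sign = sign
    return crossings
```

Each crossing marks a depth index where the log value shifts abruptly; the thesis then classifies the intervals between such contacts against per-lithology Gaussian prototypes.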
4

Stock, Kristin Mary. "A new method for representing and translating the semantics of heterogeneous spatial databases." Thesis, Queensland University of Technology, 2000.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Sukcharoenpong, Anuchit. "Shoreline Mapping with Integrated HSI-DEM using Active Contour Method." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1406147249.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Darrell, Leopold Augustus. "Development of an NDT method to characterise flaws based on multiple eddy current sensor integration and data fusion." Thesis, Leeds Beckett University, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.245778.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Lindström, Maria, and Lena Ljungwald. "A study of the integration of complementary analysis methods : Analysing qualitative data for distributed tactical operations." Thesis, Linköping University, Department of Computer and Information Science, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-4750.

Full text
Abstract:

Complex socio-technical systems, like command and control work in military operations and rescue operations, are becoming more and more common in society, and there is a growing need for more useful and effective systems. Qualitative data from complex socio-technical systems can be challenging to analyse. This thesis probes one way of enhancing existing analysis methods to better suit this task.

Our case study is carried out at FOI (the Swedish Defence Research Agency). One of FOI’s tasks is to analyse complex situations, for example military operations, and they have developed an approach called the Reconstruction – exploration approach (R&E) for analysing distributed tactical operations (DTOs). The R&E approach has a rich contextual approach, but lacks a systematic analytic methodology.

The assignment of this thesis is to investigate how the R&E approach could be enhanced and possibly merged with other existing cognitive analysis methods to better suit the analysis of DTOs. We identified that the R&E approach’s main weaknesses were its lack of structure and insufficient handling of subjective data, which contributed to difficulties when performing a deeper analysis. The approach also needed a well-defined analysis method to increase the validity of the identified results.

One way of improvement was to integrate the R&E approach with several cognitive analysis methods based on their respective individual strengths. We started by analysing the R&E approach and then identified qualities in other methods that complemented the weaknesses in the R&E approach. Finally we developed an integrated method.

The Critical Decision Method (CDM) appeared to be the most suitable method for integration with the R&E approach. Nevertheless, the CDM did not have all the qualities asked for, so we also chose to use functions from other methods included in our initial analysis: ETA and Grounded Theory.

The integration resulted in an approach with a well-defined analysis method and the ability to handle subjective data. This can contribute to a deeper analysis of DTOs.

APA, Harvard, Vancouver, ISO, and other styles
8

Söderström, Eva. "Merging Modelling Techniques: A Case Study and its Implications." Thesis, University of Skövde, Department of Computer Science, 1999. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-393.

Full text
Abstract:

There are countless methods in the field of Information Systems Development (ISD) today, of which only a few have received much attention from practitioners. These ISD methods are described and developed using knowledge from the field of Method Engineering (ME). Most methods concern either what a system is to contain or how the system is to be realised, but as of now, there is no best method for all situations. Bridging the gap between the fuzzier "what" methods and the more formal "how" methods is difficult, if not impossible. Methods therefore need to be integrated to cover as much of the systems life cycle as possible. An integration of two methods, one from each side of the gap, can be performed in a number of different ways, each with its own obstacles to overcome.

The case study we performed concerns an integration of the fuzzier Business Process Model (BPM) in the EKD method with the more formal description technique SDL (Specification and Description Language). One meta-model was created per technique; these were then used to compare BPM and SDL. The integration process consisted of translating EKD business process diagrams into their SDL correspondences while carefully documenting and analysing the problems encountered. These problems mainly arose from either transaction-independence differences or deviations in method focus. The case study resulted in a number of implications for both EKD and SDL, as well as for ME, and includes suggestions for future work.

APA, Harvard, Vancouver, ISO, and other styles
9

Forst, Marie Bess. "Zoophonics keyboards: A venue for technology integration in kindergarten." CSUSB ScholarWorks, 2004. https://scholarworks.lib.csusb.edu/etd-project/2560.

Full text
Abstract:
The purpose of the project was to create a program of instruction that seamlessly meshed with my current emergent literacy curriculum, a popularly used phonics program entitled Zoo-phonics, which can easily be applied by other kindergarten teachers using the same phonics instruction program.
APA, Harvard, Vancouver, ISO, and other styles
10

Zeng, Sai. "Knowledge-based FEA Modeling Method for Highly Coupled Variable Topology Multi-body Problems." Diss., Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/4772.

Full text
Abstract:
The increasingly competitive market is forcing industry to develop higher-quality products more quickly and less expensively. Engineering analysis, at the same time, plays an important role in helping designers evaluate the performance of a designed product against design requirements. In the context of automated CAD/FEA integration, domain-dependent engineers' different usage views of product models cause an information gap between CAD and FEA models, which impedes interoperability among these engineering tools and the automatic transformation of an idealized design model into a solvable FEA model. Especially in highly coupled variable topology multi-body (HCVTMB) problems, this transformation process is usually very labor-intensive and time-consuming. In this dissertation, a knowledge-based FEA modeling method, which consists of three information models and the transformation processes between these models, is presented. An Analysis Building Block (ABB) model represents the idealized analytical concepts in an FEA modeling process. Solution Method Models (SMMs) represent these analytical concepts in a solution technique-specific format. When FEA is used as the solution technique, an SMM consists of a Ready-to-Mesh Model (RMM) and a Control Information Model (CIM). An RMM is obtained from an ABB through geometry manipulation so that a quality mesh can be automatically generated using FEA tools. CIMs contain information that controls the FEA modeling and solving activities. A Solution Tool Model (STM) represents an analytical model at the tool-specific level to guide the entire FEA modeling process. Two information transformation processes are presented between these information models. A solution method mapping transforms an ABB into an RMM through a complex cell decomposition process and an attribute association process. A solution tool mapping transforms an SMM into an STM by mimicking an engineer's selection of FEA modeling operations.
Four HCVTMB industrial FEA modeling cases are presented for demonstration and validation. These involve thermo-mechanical analysis scenarios: a simple chip package, a Plastic Ball Grid Array (PBGA), and an Enhanced Ball Grid Array (EBGA), as well as a thermal analysis scenario: another PBGA. Compared to traditional methods, results indicate that this method provides better knowledge capture and decreases the modeling time from days/hours to hours/minutes.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Data integration method"

1

Kronlöf, Klaus, ed. Method integration: Concepts and case studies. Chichester: Wiley, 1993.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Cottrell, J. Austin. Isogeometric analysis: Toward integration of CAD and FEA. Chichester, West Sussex, U.K: J. Wiley, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Ma, Weijun (Computer scientist), ed. Xin xi xi tong ji cheng fang fa yu ji shu: Information System Integration Method and Technology. Beijing Shi: Qi xiang chu ban she, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Walker, Duncan Moore Henry. Yield simulation for integrated circuits. Boston: Kluwer Academic Publishers, 1987.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Distinguished Instructor Short Course (2003 Tulsa, Okla.). Geostatistics for seismic data integration in earth models: 2003 Distinguished Instructor Short Course. Tulsa, OK: Society of Exploration Geophysicists, 2003.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

NATO Advanced Research Workshop on Real-Time Integration Methods for Mechanical System Simulation (1989 Snowbird, Utah). Real-time integration methods for mechanical system simulation. Berlin: Springer-Verlag, 1991.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Formal methods in circuit design. Cambridge: Cambridge University Press, 1993.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Ravikumār, C. P. Parallel methods for VLSI layout design. Norwood, N.J: Ablex, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

National Research Council (U.S.). Committee on Applied and Theoretical Statistics, ed. Steps toward large-scale data integration in the sciences: Summary of a workshop. Washington, D.C: National Academies Press, 2010.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

CHARME'99 (1999 Bad Herrenalb, Germany). Correct hardware design and verification methods: 10th IFIP WG10.5 advanced research working conference, CHARME'99, Bad Herrenalb, Germany, September 27-29, 1999 : proceedings. New York: Springer, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Data integration method"

1

Gao, Guofeng, Hui Li, Rong Chen, Xin Ge, and Shikai Guo. "Identification of High Priority Bug Reports via Integration Method." In Big Data, 336–49. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-2922-7_23.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Lê Cao, Kim-Anh, and Zoe Marie Welham. "Choose the right method for the right question in mixOmics." In Multivariate Data Integration Using R, 29–44. Boca Raton: Chapman and Hall/CRC, 2021. http://dx.doi.org/10.1201/9781003026860-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Caers, Jef. "Data Integration using the Probability Perturbation Method." In Geostatistics Banff 2004, 13–22. Dordrecht: Springer Netherlands, 2005. http://dx.doi.org/10.1007/978-1-4020-3610-1_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Wang, Hong, and Yanshen Sun. "Dynamic Multi-relational Networks Integration and Extended Link Prediction Method." In Intelligence Science and Big Data Engineering. Big Data and Machine Learning Techniques, 193–203. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-23862-3_19.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Kern, Rafał, Grzegorz Dobrowolski, and Ngoc Thanh Nguyen. "A Method for Response Integration in Federated Data Warehouses." In New Trends in Computational Collective Intelligence, 63–73. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-10774-5_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Krasnokucki, Daniel, Grzegorz Kwiatkowski, and Tomasz Jastrząb. "A New Method of XML-Based Wordnets’ Data Integration." In Beyond Databases, Architectures and Structures. Towards Efficient Solutions for Data Analysis and Knowledge Representation, 302–15. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-58274-0_25.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Jamshidi, Jafar, Antony Roy Mileham, and Geraint Wyn Owen. "Rapid and Accurate Data Integration Method for Reverse Engineering Applications." In Advances in Integrated Design and Manufacturing in Mechanical Engineering II, 163–75. Dordrecht: Springer Netherlands, 2007. http://dx.doi.org/10.1007/978-1-4020-6761-7_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Macioł, Andrzej. "Integration of Data and Rules in Inference with Queries Method." In Business Information Systems, 424–35. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-79396-0_37.

9

Kurik, L., V. Sinivee, M. Lints, and U. Kallavus. "Method for Data Collection and Integration into 3D Architectural Model." In Lecture Notes in Electrical Engineering, 707–17. New York, NY: Springer New York, 2012. http://dx.doi.org/10.1007/978-1-4614-3535-8_59.

10

Xiang, Shijing, and Ming Fang. "Research of the data integration method for drilling data warehouse based on Hadoop." In Advances in Energy Science and Equipment Engineering II, 1827–32. Boca Raton, FL: CRC Press, 2017. http://dx.doi.org/10.1201/9781315116174-188.


Conference papers on the topic "Data integration method"

1

Grachev, Ya V., X. Liu, A. N. Tsypkin, V. G. Bespalov, S. A. Kozlov, and X. C. Zhang. "Data Spectral Encoding Method with Pulsed Terahertz Sources." In Optoelectronic Devices and Integration. Washington, D.C.: OSA, 2015. http://dx.doi.org/10.1364/oedi.2015.ow2c.8.

2

Mo, Lin, and Hua Zheng. "A Method for Measuring Data Quality in Data Integration." In 2008 International Seminar on Future Information Technology and Management Engineering. IEEE, 2008. http://dx.doi.org/10.1109/fitme.2008.146.

3

"Ontology Similarity Measurement Method in Rapid Data Integration." In International Conference on Data Technologies and Applications. SciTePress - Science and Technology Publications, 2012. http://dx.doi.org/10.5220/0004037102370240.

4

Chen, Nei-Mao, John T. Kuo, and Yu-Hua Chu. "Characteristics-integration method applied to slant stacked data." In 1985 SEG Technical Program Expanded Abstracts. SEG, 1985. http://dx.doi.org/10.1190/1.1892814.

5

Wang, Rui, and Nianbin Wang. "Ontology-Based Deep Web Data Interface Schemas Integration Method." In 2010 2nd International Conference on E-business and Information System Security (EBISS). IEEE, 2010. http://dx.doi.org/10.1109/ebiss.2010.5473489.

6

Synnergren, Jane, Björn Olsson, and Jonas Gamalielsson. "A data integration method for exploring gene regulatory mechanisms." In Proceedings of the 2nd international workshop. New York, New York, USA: ACM Press, 2008. http://dx.doi.org/10.1145/1458449.1458468.

7

Wang, Chao, Jie Lu, and Guangquan Zhang. "An ontology data matching method for web information integration." In the 10th International Conference. New York, New York, USA: ACM Press, 2008. http://dx.doi.org/10.1145/1497308.1497349.

8

Kalinichenko, Leonid. "Method for Data Models Integration in the Common Paradigm." In Proceedings of the First East-European Symposium on Advances in Databases and Information Systems. BCS Learning & Development, 1997. http://dx.doi.org/10.14236/ewic/adbis1997.23.

9

Feng, Chuyan, Wei Wu, Huimin Ren, Xiangyu Han, and Xin Tong. "Research on multivariate data integration method based on ETL." In 5th International Conference on Mechatronics and Computer Technology Engineering (MCTE 2022), edited by Dalin Zhang. SPIE, 2022. http://dx.doi.org/10.1117/12.2660837.

10

Wu, Xiaojun, Yubo Dong, Sheng Yuan, Xiaochun Ren, Wei Wang, and Cheng Zhang. "One Master Data Identification Method Based on Stacking Integration." In 2021 International Conference on Digital Society and Intelligent Systems (DSInS). IEEE, 2021. http://dx.doi.org/10.1109/dsins54396.2021.9670572.


Reports on the topic "Data integration method"

1

Al Rashdan, Ahmad Y., Cameron J. Krome, Shawn W. St. Germain, and John Rosenlof. Method and application of data integration at a nuclear power plant. Light Water Reactor Sustainability Program report. Office of Scientific and Technical Information (OSTI), June 2019. http://dx.doi.org/10.2172/1546737.

2

Weston, L., D. Whitehead, and N. Graves. Recovery actions in PRA (probabilistic risk assessment) for the Risk Methods Integration and Evaluation Program (RMIEP): Volume 1, Development of the data-based method. Office of Scientific and Technical Information (OSTI), June 1987. http://dx.doi.org/10.2172/6345600.

3

Whitehead, D. Recovery actions in PRA (Probabilistic Risk Assessment) for the risk methods integration and evaluation program (RMIEP): Volume 2, Application of the data-based method. Office of Scientific and Technical Information (OSTI), December 1987. http://dx.doi.org/10.2172/5654159.

4

González-Montaña, Luis Antonio. Semantic-based methods for morphological descriptions: An applied example for Neotropical species of genus Lepidocyrtus Bourlet, 1839 (Collembola: Entomobryidae). Verlag der Österreichischen Akademie der Wissenschaften, November 2021. http://dx.doi.org/10.1553/biosystecol.1.e71620.

Abstract:
The production of semantic annotations has gained renewed attention due to the development of anatomical ontologies and the documentation of morphological data. Two methods are proposed for this production, differing in their methodological and philosophical approaches: the class-based method and the instance-based method. In the first, semantic annotations are established as class expressions, while in the second, the annotations incorporate individuals. An empirical evaluation of these methods was applied to the morphological description of Neotropical species of the genus Lepidocyrtus (Collembola: Entomobryidae: Lepidocyrtinae). The semantic annotations are expressed as RDF triples, a more flexible representation than the Entity-Quality syntax commonly used in the description of phenotypes. The morphological descriptions were built in Protégé 5.4.0 and stored in an RDF store created with Fuseki Jena. Semantic annotations based on RDF triples increase the interoperability and integration of data from diverse sources, e.g., museum data. However, computational challenges remain, related to the development of semi-automatic methods for generating RDF triples, translating between text and RDF triples, and access by non-expert users.
5

Toutin, Th. Multisource Data Integration: Comparison of Geometric and Radiometric Methods. Natural Resources Canada/ESS/Scientific and Technical Publishing Services, 1995. http://dx.doi.org/10.4095/219858.

6

Dempsey, Terri L. Handling the Qualitative Side of Mixed Methods Research: A Multisite, Team-Based High School Education Evaluation Study. RTI Press, September 2018. http://dx.doi.org/10.3768/rtipress.2018.mr.0039.1809.

Abstract:
Attention to mixed methods research has increased in recent years, particularly among funding agencies that increasingly require a mixed methods approach for program evaluation. At the same time, researchers operating within large-scale, rapid-turnaround research projects are faced with the reality that collecting and analyzing large amounts of qualitative data typically requires an intense amount of project resources and time. However, practical examples of efficiently collecting and handling high-quality qualitative data within these studies are limited. More examples are also needed of procedures for integrating the qualitative and quantitative strands of a study, from design to interpretation, in ways that can facilitate efficiencies. This paper provides a detailed description of the strategies used to collect and analyze qualitative data in what the research team believed to be an efficient, high-quality way within a team-based mixed methods evaluation study of science, technology, engineering, and math (STEM) high-school education. The research team employed an iterative approach to qualitative data analysis that combined matrix analyses with Microsoft Excel and the qualitative data analysis software program ATLAS.ti. This approach yielded a number of practical benefits. Selected preliminary results illustrate how this approach can simplify analysis and facilitate data integration.
7

Cheng, Peng, James V. Krogmeier, Mark R. Bell, Joshua Li, and Guangwei Yang. Detection and Classification of Concrete Patches by Integrating GPR and Surface Imaging. Purdue University, 2021. http://dx.doi.org/10.5703/1288284317320.

Abstract:
This research considers the detection, location, and classification of patches in concrete and asphalt-on-concrete pavements using data taken from ground penetrating radar (GPR) and the WayLink 3D Imaging System. In particular, the project seeks to develop a patching table for “inverted-T” patches. A number of deep neural net methods were investigated for patch detection from 3D elevation and image observation, but the results were inconclusive, partly because of a dearth of training data. Later, a method based on thresholding IRI values computed on a 12-foot window was used to localize pavement distress, particularly as seen by patch settling. This method was far more promising. In addition, algorithms were developed for segmentation of the GPR data and for classification of the ambient pavement and the locations and types of patches found in it. The results so far are promising but far from perfect, with a relatively high rate of false alarms. The two project parts were combined to produce a fused patching table. Several hundred miles of data were captured with the WayLink System to compare with a much more limited GPR dataset. The primary dataset was captured on I-74. A software application for MATLAB has been written to aid in automating patch table creation.
9

Dobson, C. L. A comparison of methods to evaluate reservoir dissolved oxygen data. Average cross-sectional area versus time X depth integrations. Office of Scientific and Technical Information (OSTI), August 1993. http://dx.doi.org/10.2172/10184388.

10

Schnabel, Filipina, and Danielle Aldridge. Effectiveness of EHR-Depression Screening Among Adult Diabetics in an Urban Primary Care Clinic. University of Tennessee Health Science Center, April 2021. http://dx.doi.org/10.21007/con.dnp.2021.0003.

Abstract:
Background: Diabetes mellitus (DM) and depression are important comorbid conditions that can lead to more serious health outcomes. The American Diabetes Association (ADA) supports routine screening for depression as part of standard diabetes management. The PHQ2 and PHQ9 questionnaires are good diagnostic screening tools for major depressive disorders in Type 2 diabetes mellitus (DM2). This quality improvement study aims to compare the rates of depression screening, treatment, and referral to behavioral health in adult patients with DM2 before and after integration of depression screening tools into the electronic health record (EHR). Methods: We conducted a retrospective chart review of patients aged 18 years and above with a diagnosis of DM2 and no initial diagnosis of depression or other mental illness. Chart reviews covered 2018 or earlier for pre-integration data and 2020 to the present for post-integration data. Sixty subjects were randomly selected from a pool of 33,695 patients in the clinic with DM2 from the years 2013-2021: thirty from before the integration of the PHQ2 and PHQ9 depression screening tools into the EHR and thirty from after. The study population ranged from 18 to 83 years old. Results: All subjects (100%) were screened using the PHQ2 both before and after integration. Twenty percent of screened patients had a positive PHQ2 before integration, while 10% had a positive PHQ2 after integration. Twenty percent of patients were screened with a PHQ9 pre-integration, accounting for 100% of subjects with a positive PHQ2. However, of the 10% of patients with a positive PHQ2 post-integration, only 6.7% of subjects were screened, meaning not all patients with a positive PHQ2 were adequately screened post-integration. Interestingly, 10% of patients were treated with antidepressants before integration, while none were treated with medications in the post-integration group. There were no referrals made to the behavioral health team in either group. Conclusion: There is no difference in the prevalence of depression screening before and after integration of depression screening tools into the EHR. The study noted a decrease in antidepressant treatment after integration; however, other undetermined factors could have influenced this. Furthermore, not all patients with a positive PHQ2 in the post-integration group were screened with the PHQ9. The authors are unsure whether the integration of the depression screens influenced this change. In both groups, there was no difference in referrals to the behavioral health team. Implications for Nursing Practice: This quality improvement study shows that providers are good at screening their DM2 patients for depression whether or not the screening tools were incorporated into the EHR. However, future studies regarding provider, support staff, and patient convenience relating to the accessibility and availability of the tool should be conducted. Additional issues to consider are documentation reliability, the hours of work required to scan documents into the chart, the risk of documentation getting lost, and the use of paper that requires shredding to comply with privacy regulations.