Dissertations / Theses on the topic 'Numerical integration – Data processing'

Consult the top 50 dissertations / theses for your research on the topic 'Numerical integration – Data processing.'

1

Ives, Zachary G. "Efficient query processing for data integration /." Thesis, Connect to this title online; UW restricted, 2002. http://hdl.handle.net/1773/6864.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Chen, Chuan. "Numerical algorithms for data processing and analysis." HKBU Institutional Repository, 2016. https://repository.hkbu.edu.hk/etd_oa/277.

Full text
Abstract:
Magnetic nanoparticles (NPs) with sizes ranging from 2 to 20 nm in diameter represent an important class of artificial nanostructured materials, since the NP size is comparable to the size of a magnetic domain. They have potential applications in data storage, catalysis, permanent magnetic nanocomposites, and biomedicine. To begin with, a brief overview of the background of Fe-based bimetallic NPs and their applications in data storage and catalysis is presented in Chapter 1. In Chapter 2, L10-ordered FePt NPs with high coercivity were directly prepared from a novel bimetallic acetylenic alternating copolymer P3 by a one-step pyrolysis method without post-thermal annealing. The chemical ordering, morphology and magnetic properties were studied. Magnetic measurements showed that a record coercivity of 3.6 T (1 T = 10 kOe) was obtained in FePt NPs. By comparing the resultant FePt NPs synthesized under Ar and Ar/H2, the characterization showed that the incorporation of H2 affects the nucleation and promotes the growth of FePt NPs. The L10 FePt NPs were also successfully patterned on a Si substrate by nanoimprint lithography (NIL). The highly ordered ferromagnetic arrays on a desired substrate for bit-patterned media (BPM) were studied and show bright prospects for the progress of data storage. Furthermore, we also reported a new FePt-containing metallopolymer P4 as the single-source precursor for metal alloy NP synthesis, where the metal fractions were on the side chain and the ratio could be easily controlled. This polymer was synthesized from random copolymer poly(styrene-4-ethynylstyrene) PES-PS and bimetallic precursor TPy-FePt ([Pt(4’-ferrocenyl-(N^N^N))Cl]Cl) by a Sonogashira coupling reaction. After pyrolysis of P4, the stoichiometry of Fe and Pt atoms in the synthesized NPs is close to 1:1, which is more precise than when using TPy-FePt as the precursor. Polymer P4 was also more favorable for patterning by high-throughput NIL as compared to TPy-FePt. Ferromagnetic nanolines, potentially as bit-patterned magnetic recording media, were successfully fabricated from P4 and fully characterized. In Chapter 3, a novel organometallic compound TPy-FePd-1 [4’-ferrocenyl-(N^N^N)PdOCOCH3] was synthesized and structurally characterized; its crystal structure showed a coplanar Pd center and a Pd-Pd distance of 3.17 Å. The two metals Fe and Pd were evenly distributed at the molecular level and remained tightly coupled to each other, benefiting from the metal-metal (Pd-Pd) and ligand π-π stacking interactions, all of which facilitated nucleation without sintering during preparation of the FePd NPs. Ferromagnetic FePd NPs of ca. 16.2 nm in diameter were synthesized by one-pot pyrolysis of the single-source precursor TPy-FePd-1 under getter gas with metal-ion reduction and minimal nanoparticle coalescence, which have a nearly equal atomic ratio (Fe/Pd = 49/51) and exhibit a coercivity of 4.9 kOe at 300 K. By imprinting the mixed chloroform solution of TPy-FePd-1 and polystyrene (PS) on Si, reproducible patterns of nanochains were formed due to the excellent self-assembly properties and the incompatibility between TPy-FePd-1 and PS during slow evaporation of the solvents. The FePd nanochains, with an average length of ca. 260 nm, were evenly dispersed around the PS nanospheres by self-assembly of TPy-FePd-1. In addition, the orientation of the FePd nanochains could also be controlled by tuning the morphology of PS, and the length was shorter in the confined space of the PS.
The organic skeletons of TPy-FePd-1 and PS were carbonized and removed by pyrolysis under Ar/H2 (5 wt%), and only magnetic FePd alloy nanochains with a domain structure were left. Furthermore, a bimetallic complex TPy-FePd-2 was prepared and used as a single-source precursor to synthesize ferromagnetic FePd NPs by one-pot pyrolysis. The resultant FePd NPs have a mean size of 19.8 nm and show a coercivity of 1.02 kOe. In addition, the functional group (-NCMe) in TPy-FePd-2 was easily substituted by a pyridyl group. A random copolymer PS-P4VP was used to coordinate with TPy-FePd-2, and the as-synthesized polymer made the metal fraction disperse evenly along the flexible chain. Fabrication of FePd NPs from the polymers was also investigated, and the size could be easily controlled by tuning the metal fraction in the polymer. FePd NPs with mean sizes of 10.9, 14.2 and 17.9 nm were prepared from the metallopolymer with 5 wt%, 10 wt% and 20 wt% metal fractions, respectively. In Chapter 4, molybdenum disulfide (MoS2) monolayers decorated with ferromagnetic FeCo NPs on the edges were synthesized through a one-step pyrolysis of precursor molecules in an argon atmosphere. The FeCo precursor was spin-coated on the MoS2 monolayer grown on a Si/SiO2 substrate. Highly ordered body-centered cubic (bcc) FeCo NPs were revealed under optimized pyrolysis conditions, possessing a coercivity of up to 1000 Oe at room temperature. The FeCo NPs were well-positioned along the edge sites of the MoS2 monolayers. The vibration modes of Mo and S atoms were confined after FeCo NP decoration, as characterized by Raman spectroscopy. These MoS2 monolayers decorated with ferromagnetic FeCo NPs can be used for novel catalytic materials with magnetic recycling capabilities. The sizes of NPs grown on MoS2 monolayers are more uniform than those from other preparation routes. Moreover, the optimized pyrolysis temperature and conditions provide recipes for decorating related noble catalytic materials. Finally, Chapters 5 and 6 present the concluding remarks and the experimental details of the work described in Chapters 2-4.
APA, Harvard, Vancouver, ISO, and other styles
3

Jakovljevic, Sasa. "Data collecting and processing for substation integration enhancement." Texas A&M University, 2003. http://hdl.handle.net/1969/93.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Mattasantharam, R. (Rubini). "3D web visualization of continuous integration big data." Master's thesis, University of Oulu, 2018. http://urn.fi/URN:NBN:fi:oulu-201812063239.

Full text
Abstract:
Continuous Integration (CI) is a practice used to automate the software build and its tests for every code integration to a shared repository. CI runs thousands of test scripts every day in a software organization. Every test produces data, such as test result logs containing errors, warnings, performance measurements and build metrics. This data volume tends to grow at unprecedented rates for the builds produced in the CI system, and the amount of integrated test result data grows over time. Visualizing and manipulating this real-time, dynamic data is a challenge for organizations. The 2D visualization of big data has been actively in use in the software industry. Though 2D visualization has numerous advantages, this study focuses on the 3D representation of CI big data and its advantages over 2D visualization. Interactivity with the data and the system, and accessibility of the data anytime and anywhere, are two important requirements for the system to be usable. Thus, the study focused on creating a 3D user interface to visualize CI system data in a 3D web environment. Three-dimensional user interfaces have been studied by many researchers, who have identified various advantages of 3D visualization along with various interaction techniques and have described how such systems are useful in real-world 3D applications. However, the usability of 3D user interfaces for visualization has not yet reached a desirable level, especially in the software industry with its complex data. The purpose of this thesis is to explore the use of 3D data visualization that could help the CI system users of a beneficiary organization in interpreting and exploring CI system data. The study focuses on designing and creating a 3D user interface to provide a more effective and usable system for CI data exploration. The design science research framework was chosen as a suitable research method to conduct the study. This study identifies the advantages of applying 3D visualization to software system data and then proceeds to explore how 3D visualization could help users in exploring the software data through visualization and its features. The results of the study reveal that 3D visualization helps the beneficiary organization to view and compare multiple datasets in a single screen space, and to see a holistic view of large datasets as well as focused details of multiple datasets of various categories in a single screen space. The results also indicate that 3D visualization helps the beneficiary organization's CI team to represent big data better in 3D than in 2D.
APA, Harvard, Vancouver, ISO, and other styles
5

Liao, Zhining. "Query processing for data integration from multiple data sources over the Internet." Thesis, University of Ulster, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.422192.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Lin, Shih-Yung. "Integration and processing of high-resolution moiré-interferometry data." Diss., Virginia Tech, 1992. http://hdl.handle.net/10919/40181.

Full text
Abstract:
A new hybrid method combining moiré interferometry, a high-resolution data-reduction technique, a two-dimensional data-smoothing method, and the Finite Element Method (FEM) has been successfully developed. This hybrid method has been applied to residual strain analyses of composite panels, strain concentrations around optical fibers embedded in composites, and the cruciform composite shear test. This hybrid method allows moiré data to be collected with higher precision and accuracy by digitizing overexposed moiré patterns (U & V fields) with appropriate carrier fringes. The resolution of the data is ±20 nm. The data extracted from the moiré patterns are interfaced to an FEM package through an automatic mesh generator. This mesh generator produces a nonuniform FEM mesh by connecting the digitized data points into triangles. The mesh, which uses digitized displacement data as boundary conditions, is then fed to and processed by a commercial FEM package. Due to the natural scatter of the displacement data digitized from moiré patterns, the accuracy of strain values is significantly affected. A modified finite-element model with linear spring elements is introduced so data smoothing can be done easily in two-dimensional space. The results of the data smoothing are controlled by limiting the stretch of those springs to be less than the resolution of the experimental method. With the full-field hybrid method, the strain contours from moiré interferometry can be easily obtained with good accuracy. If the properties of the material are known, the stress patterns can also be obtained. In addition, this method can be used to analyze any two-dimensional displacement data, including data from the grid method and holography.
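As a rough illustration of the constrained-smoothing idea described above (a one-dimensional analogue, not the thesis's finite-element spring formulation), the Python sketch below smooths noisy displacement samples while never letting a smoothed value drift further from its measured value than the stated ±20 nm resolution; the gauge length, sample spacing and noise data are invented for the example.

    import numpy as np

    def constrained_smooth(u_measured, tol=20e-9, n_iter=50):
        # Repeatedly replace each interior sample by the average of its neighbours,
        # but never move further than 'tol' (the measurement resolution) from the
        # originally measured value.
        u = u_measured.copy()
        for _ in range(n_iter):
            averaged = 0.5 * (u[:-2] + u[2:])
            u[1:-1] = np.clip(averaged, u_measured[1:-1] - tol, u_measured[1:-1] + tol)
        return u

    x = np.linspace(0.0, 1e-3, 21)                               # 1 mm gauge length (hypothetical)
    u_true = 1e-6 * x / x[-1]                                     # 1 micron of uniform stretch
    u_meas = u_true + np.random.uniform(-20e-9, 20e-9, x.size)    # +/- 20 nm digitising scatter
    strain = np.gradient(constrained_smooth(u_meas), x)           # typically much less noisy than np.gradient(u_meas, x)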
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
7

Eberius, Julian. "Query-Time Data Integration." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-191560.

Full text
Abstract:
Today, data is collected in ever increasing scale and variety, opening up enormous potential for new insights and data-centric products. However, in many cases the volume and heterogeneity of new data sources precludes up-front integration using traditional ETL processes and data warehouses. In some cases, it is even unclear if and in what context the collected data will be utilized. Therefore, there is a need for agile methods that defer the effort of integration until the usage context is established. This thesis introduces Query-Time Data Integration as an alternative concept to traditional up-front integration. It aims at enabling users to issue ad-hoc queries on their own data as if all potential other data sources were already integrated, without declaring specific sources and mappings to use. Automated data search and integration methods are then coupled directly with query processing on the available data. The ambiguity and uncertainty introduced through fully automated retrieval and mapping methods is compensated by answering those queries with ranked lists of alternative results. Each result is then based on different data sources or query interpretations, allowing users to pick the result most suitable to their information need. To this end, this thesis makes three main contributions. Firstly, we introduce a novel method for Top-k Entity Augmentation, which is able to construct a top-k list of consistent integration results from a large corpus of heterogeneous data sources. It improves on the state of the art by producing a set of individually consistent but mutually diverse alternative solutions, while minimizing the number of data sources used. Secondly, based on this novel augmentation method, we introduce the DrillBeyond system, which is able to process Open World SQL queries, i.e., queries referencing arbitrary attributes not defined in the queried database. The original database is then augmented at query time with Web data sources providing those attributes. Its hybrid augmentation/relational query processing enables the use of ad-hoc data search and integration in data analysis queries, and improves both performance and quality when compared to using separate systems for the two tasks. Finally, we studied the management of large-scale dataset corpora such as data lakes or Open Data platforms, which are used as data sources for our augmentation methods. We introduce Publish-time Data Integration as a new technique for data curation systems managing such corpora, which aims at improving the individual reusability of datasets without requiring up-front global integration. This is achieved by automatically generating metadata and format recommendations, allowing publishers to enhance their datasets with minimal effort. Collectively, these three contributions are the foundation of a Query-time Data Integration architecture that enables ad-hoc data search and integration queries over large heterogeneous dataset collections.
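A toy sketch of the top-k augmentation idea summarised above: greedily assemble k alternative answers, each covering the queried entities with as few sources as possible, while penalising reuse of sources across answers so that the answers stay mutually diverse. The source names, entity sets and penalty weight are invented for illustration and do not reflect the actual DrillBeyond algorithms.

    def topk_augmentations(sources, entities, k=2, reuse_penalty=2):
        # Greedy set-cover style selection with a diversity penalty on reused sources.
        answers, used_before = [], set()
        for _ in range(k):
            covered, chosen = set(), []
            while not entities <= covered:
                candidates = [s for s in sources if sources[s] & (entities - covered)]
                if not candidates:
                    break
                best = max(candidates,
                           key=lambda s: len(sources[s] & (entities - covered))
                                         - (reuse_penalty if s in used_before else 0))
                chosen.append(best)
                covered |= sources[best]
            answers.append(chosen)
            used_before |= set(chosen)
        return answers

    # Hypothetical web-table corpus: which entities each source can provide the missing attribute for.
    sources = {"web_table_1": {"a", "b"}, "web_table_2": {"c"}, "web_table_3": {"a", "b", "c"}}
    print(topk_augmentations(sources, {"a", "b", "c"}))
    # e.g. [['web_table_3'], ['web_table_1', 'web_table_2']]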
APA, Harvard, Vancouver, ISO, and other styles
8

Jia, Hong. "Clustering of categorical and numerical data without knowing cluster number." HKBU Institutional Repository, 2013. http://repository.hkbu.edu.hk/etd_ra/1495.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Jones, Jonathan A. "Nuclear magnetic resonance data processing methods." Thesis, University of Oxford, 1992. http://ora.ox.ac.uk/objects/uuid:7df97c9a-4e65-4c10-83eb-dfaccfdccefe.

Full text
Abstract:
This thesis describes the application of a wide variety of data processing methods, in particular the Maximum Entropy Method (MEM), to data from Nuclear Magnetic Resonance (NMR) experiments. Chapter 1 provides a brief introduction to NMR and to data processing, which is developed in Chapter 2. NMR is described in terms of the classical model due to Bloch, and the principles of conventional (Fourier transform) data processing are developed. This is followed by a description of less conventional techniques. The MEM is derived on several grounds, and related to both Bayesian reasoning and Shannon information theory. Chapter 3 describes several methods of evaluating the quality of NMR spectra obtained by a variety of data processing techniques; the simple criterion of spectral appearance is shown to be completely unsatisfactory. A Monte Carlo method is described which allows several different techniques to be compared, and the relative advantages of Fourier transformation and the MEM are assessed. Chapter 4 describes in vivo NMR, particularly the application of the MEM to data from Phase Modulated Rotating Frame Imaging (PMRFI) experiments. In this case the conventional data processing is highly unsatisfactory, and MEM processing results in much clearer spectra. Chapter 5 describes the application of a range of techniques to the estimation and removal of splittings from NMR spectra. The various techniques are discussed using simple examples, and then applied to data from the amino acid iso-leucine. The thesis ends with five appendices which contain historical and philosophical notes, detailed calculations pertaining to PMRFI spectra, and a listing of the MEM computer program.
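For orientation, one common formulation of MEM spectral reconstruction (not necessarily the exact functional used in this thesis) chooses the trial spectrum f that maximises a regularised objective:

    maximise   Q(f) = S(f) - \lambda\, C(f)
    S(f) = -\sum_i f_i \log(f_i / m_i)                (entropy of the trial spectrum f relative to a default model m)
    C(f) = \sum_k |D_k - (T f)_k|^2 / \sigma_k^2      (chi-squared misfit of the mock FID T f to the measured data D)

Here T is the transform from spectrum to mock time-domain data, sigma_k the noise level, and lambda a Lagrange multiplier chosen so that the misfit remains statistically acceptable while the entropy term suppresses unsupported spectral features.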
APA, Harvard, Vancouver, ISO, and other styles
10

Leung, Chi-man, and 梁志文. "Integration of modern GIS into orienteering course planning and map making." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2003. http://hub.hku.hk/bib/B2977813X.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Bauckmann, Jana. "Dependency discovery for data integration." Phd thesis, Universität Potsdam, 2013. http://opus.kobv.de/ubp/volltexte/2013/6664/.

Full text
Abstract:
Data integration aims to combine data from different sources and to provide users with a unified view of these data. This task is as challenging as it is valuable. In this thesis we propose algorithms for dependency discovery to provide necessary information for data integration. We focus on inclusion dependencies (INDs) in general and a special form named conditional inclusion dependencies (CINDs): (i) INDs enable the discovery of structure in a given schema. (ii) INDs and CINDs support the discovery of cross-references or links between schemas. An IND “A in B” simply states that all values of attribute A are included in the set of values of attribute B. We propose an algorithm that discovers all inclusion dependencies in a relational data source. The challenge of this task is the complexity of testing all attribute pairs and, further, of comparing all of each attribute pair's values. The complexity of existing approaches depends on the number of attribute pairs, while ours depends only on the number of attributes. Thus, our algorithm enables profiling of entirely unknown data sources with large schemas by discovering all INDs. Further, we provide an approach to extract foreign keys from the identified INDs. We extend our IND discovery algorithm to also find three special types of INDs: (i) composite INDs, such as “AB in CD”, (ii) approximate INDs that allow a certain proportion of the values of A not to be included in B, and (iii) prefix and suffix INDs that represent special cross-references between schemas. Conditional inclusion dependencies are inclusion dependencies with a limited scope defined by conditions over several attributes. Only the matching part of the instance must adhere to the dependency. We generalize the definition of CINDs, distinguishing covering and completeness conditions, and define quality measures for conditions. We propose efficient algorithms that identify covering and completeness conditions conforming to given quality thresholds. The challenge for this task is twofold: (i) Which (and how many) attributes should be used for the conditions? (ii) Which attribute values should be chosen for the conditions? Previous approaches rely on pre-selected condition attributes or can only discover conditions applying to quality thresholds of 100%. Our approaches were motivated by two application domains: data integration in the life sciences and link discovery for linked open data. We show the efficiency and the benefits of our approaches for use cases in these domains.
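A minimal sketch of the unary IND test described above ("A in B" holds when every value of A also occurs in B); the relation and attribute names are hypothetical, and a real discovery algorithm of course avoids naively materialising all value sets for every attribute pair.

    # Check the unary inclusion dependency "orders.customer_id in customers.id".
    def ind_holds(values_a, values_b):
        return set(values_a) <= set(values_b)

    orders_customer_id = [1, 2, 2, 5]        # attribute A (hypothetical instance)
    customers_id = [1, 2, 3, 4, 5]           # attribute B (hypothetical instance)
    print(ind_holds(orders_customer_id, customers_id))   # True -> candidate foreign key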
APA, Harvard, Vancouver, ISO, and other styles
12

Benatar, Gil. "Thermal/structural integration through relational database management." Thesis, Georgia Institute of Technology, 1987. http://hdl.handle.net/1853/19484.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Oelofse, Andries Johannes. "Development of a MAIME-compliant microarray data management system for functional genomics data integration." Pretoria : [s.n.], 2006. http://upetd.up.ac.za/thesis/available/etd-08222007-135249.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Dickson, Neil Edwin Matthew. "The integration of multi-scale hydrogeophysical data into numerical groundwater flow models." Thesis, Queen's University Belfast, 2015. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.680157.

Full text
Abstract:
Throughout this research, geophysical data is utilised to constrain numerical groundwater flow models at two applied study areas: a sandstone aquifer in Northern Ireland and a basement rock aquifer in Benin, West Africa. In Northern Ireland, airborne passive magnetics data are used to determine regional heterogeneity occurrence, combined with methods of upscaling / equivalence and a density function. Furthermore, a stochastic component is undertaken in the form of multiple point statistics. This analysis performs a probability simulation and pattern matching to determine a statistical occurrence of heterogeneity distribution. In Benin, point magnetic resonance sounding data and electrical resistivity tomography surveys are utilised to determine relationships to hydrogeological properties to aid many conceptualisations of the region. All studies employed finite element groundwater flow modelling, alongside comparative statistics and model ranking to determine the success and applicability of such analysis. In Northern Ireland, the deterministic analysis indicates that an intermediate level of upscaling (between field scale and one regional anisotropy value) provides statistically significant results at regional scale. The stochastic analysis effectively 'cleans' the magnetics data to provide a new distribution of regional heterogeneity. Modelling results are relatively comparable to the deterministic analysis and demonstrate the successful application of continuous geophysical data in model parameterisation. In Benin, all models provide significant results despite variations in model geometry and parameter conceptualisation. Point geophysical data permits effective model creation and parameter distribution through positive correlation to hydro-structural controls. For all models, minimal boundary conditions are applied and no post-processing is performed. As a result, the benefit of adapting geophysics for model parameterisation is clearly evident and suggests new hydrogeological paradigms for the study areas. Further work is required with regard to predicted anthropogenic and climate change scenarios.
APA, Harvard, Vancouver, ISO, and other styles
15

Siu, Ka Wai. "Numerical algorithms for data analysis with imaging and financial applications." HKBU Institutional Repository, 2018. https://repository.hkbu.edu.hk/etd_oa/550.

Full text
Abstract:
In this thesis, we study models and numerical algorithms for data analysis, with applications to image processing and financial forecasting. The thesis is composed of two parts, namely tensor regression and data assimilation methods for image restoration. We start by investigating the tensor regression problem in Chapter 2. It is a generalization of classical regression that adopts and analyzes much more information by using multi-dimensional arrays. Since the regression problem is subject to multiple solutions, we propose a regularized tensor regression model. By imposing a low-rank property on the solution and considering the structure of the tensor product, we develop an algorithm which is suitable for scalable implementations. The regularization method is used to select useful solutions, which depend on the application. The proposed model is solved by the alternating minimization method, and we prove the convergence of the objective function values and iterates by the maximization-minimization (MM) technique. We study different factors which affect the performance of the algorithm, including sample sizes, solution ranks and noise levels. Applications include image compression and financial forecasting. In Chapter 3, we apply filtering methods from data assimilation to image restoration problems. Traditionally, data assimilation methods optimally combine a predictive state from a dynamical system with real, partial observations. The motivation is to improve the model forecast with real observations. We construct artificial dynamics for the non-blind deblurring problem. By making use of the spatial information of a single image, a span of ensemble members is constructed. A two-stage use of the ensemble transform Kalman filter (ETKF) is adopted to deblur corrupted images. The theoretical background of the ETKF and the use of artificial dynamics via the stage augmentation method are provided. Numerical experiments include image and video processing. Concluding remarks and a discussion of future extensions are included in Chapter 4.
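As a loose illustration of the alternating-minimisation idea mentioned above (a rank-1 matrix special case, not the thesis's regularised tensor model), the sketch below fits the bilinear model y_n = u' X_n v by alternately solving two ordinary least-squares problems; all data are synthetic.

    import numpy as np

    def rank1_regression(Xs, y, d1, d2, n_iter=100):
        # With v fixed, the model is linear in u (features X_n v);
        # with u fixed, it is linear in v (features X_n' u).
        rng = np.random.default_rng(0)
        u, v = rng.standard_normal(d1), rng.standard_normal(d2)
        for _ in range(n_iter):
            A = np.array([X @ v for X in Xs])
            u, *_ = np.linalg.lstsq(A, y, rcond=None)
            B = np.array([X.T @ u for X in Xs])
            v, *_ = np.linalg.lstsq(B, y, rcond=None)
        return u, v

    rng = np.random.default_rng(1)
    u_true, v_true = rng.standard_normal(3), rng.standard_normal(4)
    Xs = [rng.standard_normal((3, 4)) for _ in range(200)]
    y = np.array([u_true @ X @ v_true for X in Xs])
    u_hat, v_hat = rank1_regression(Xs, y, 3, 4)
    print(np.linalg.norm(np.outer(u_hat, v_hat) - np.outer(u_true, v_true)))  # shrinks towards 0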
APA, Harvard, Vancouver, ISO, and other styles
16

Marquardt, Justus. "Metadatendesign zur Integration von Online Analytical Processing in das Wissensmanagement /." Hamburg : Kovač, 2008. http://www.verlagdrkovac.de/978-3-8300-3598-5.htm.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Gao, Yang. "On the integration of qualitative and quantitative methods in data fusion." Thesis, University of Oxford, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.240463.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Marquardt, Justus. "Metadatendesign zur Integration von online analytical processing in das Wissensmanagement." Hamburg Kovač, 2007. http://www.verlagdrkovac.de/978-3-8300-3598-5.htm.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Choy, Wing Yiu. "Using numerical methods and artificial intelligence in NMR data processing and analysis." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape7/PQDD_0024/NQ50131.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Choy, Wing Yiu 1969. "Using numerical methods and artificial intelligence in NMR data processing and analysis." Thesis, McGill University, 1998. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=35864.

Full text
Abstract:
In this thesis, we applied both numerical methods and artificial intelligence techniques to NMR data processing and analysis. First, a comprehensive study of the Iterative Quadratic Maximum Likelihood (IQML) method applied to NMR spectral parameter estimation is reported. The IQML is compared to other conventional time domain data analysis methods. Extensive simulations demonstrate the superior performance of the IQML method. We also develop a new technique, which uses a genetic algorithm with a priori knowledge, to improve the quantification of NMR spectral parameters. The newly proposed method outperforms the other conventional methods, especially in situations where signals are close in frequency and the signal-to-noise ratio of the FID is low.
The usefulness of the Singular Value Decomposition (SVD) method in NMR data processing is further exploited. A new two-dimensional spectral processing scheme based on SVD is proposed for suppressing strong diagonal peaks. The superior performance of this method is demonstrated on an experimental phase-sensitive COSY spectrum.
Finally, we studied the feasibility of using neural network predicted secondary structure information in the NMR data analysis. Protein chemical shift databases are compiled and are used with the neural network predicted protein secondary structure information to improve the accuracy of protein chemical shift prediction. The potential of this strategy for amino acid classification in NMR resonance assignment is explored.
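As a generic illustration of SVD-based NMR signal processing (one-dimensional Hankel-SVD denoising of a synthetic FID, not the two-dimensional diagonal-peak suppression scheme developed in the thesis), a hedged Python sketch:

    import numpy as np

    def hankel_svd_denoise(fid, rank, m=None):
        # Build a Hankel matrix from the FID, keep only the leading singular components,
        # and average the anti-diagonals back into a one-dimensional signal.
        n = len(fid)
        m = m or n // 2
        H = np.array([fid[i:i + m] for i in range(n - m + 1)])
        U, s, Vt = np.linalg.svd(H, full_matrices=False)
        Hr = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        out = np.zeros(n, dtype=complex)
        count = np.zeros(n)
        for i in range(Hr.shape[0]):
            out[i:i + m] += Hr[i]
            count[i:i + m] += 1
        return out / count

    t = np.arange(256) / 256.0
    fid = np.exp((2j * np.pi * 40 - 5.0) * t) + 0.1 * np.random.randn(256)  # one damped sinusoid plus noise
    denoised = hankel_svd_denoise(fid, rank=1)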
APA, Harvard, Vancouver, ISO, and other styles
21

Sonmez, Sunercan Hatice Kevser. "Data Integration Over Horizontally Partitioned Databases In Service-oriented Data Grids." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612414/index.pdf.

Full text
Abstract:
Information integration over distributed and heterogeneous resources has been challenging in many respects: coping with various kinds of heterogeneity, including data model, platform and access interfaces; coping with various forms of data distribution and maintenance policies; scalability, performance, security and trust, reliability and resilience, legal issues, etc. It is obvious that each of these dimensions deserves a separate thread of research efforts. One particular challenge among those listed above that is most relevant to the work presented in this thesis is coping with various forms of data distribution and maintenance policies. This thesis aims to provide a service-oriented data integration solution over data Grids for cases where distributed data sources are partitioned with overlapping sections of various proportions. This is an interesting variation which combines both replicated and partitioned data within the same data management framework. Thus, the data management infrastructure has to deal with specific challenges regarding the identification, access and aggregation of partitioned data with varying proportions of overlapping sections. To provide a solution, we have extended OGSA-DAI DQP, a well-known service-oriented data access and integration middleware with distributed query processing facilities, by incorporating a UnionPartitions operator into its algebra in order to cope with various unusual forms of horizontally partitioned databases. As a result, our solution extends OGSA-DAI DQP in two respects: (1) a new operator type is added to the algebra to perform a specialized union of partitions with different characteristics; (2) the OGSA-DAI DQP Federation Description is extended to include additional metadata to facilitate the successful execution of the newly introduced operator.
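In spirit, the specialised union described above merges horizontally partitioned relations whose partitions overlap, without duplicating the overlapping tuples. A simplified Python sketch follows; the table names, key and rows are invented, and the real operator works inside the OGSA-DAI DQP query engine rather than on Python dictionaries.

    def union_partitions(partitions, key="id"):
        # Concatenate horizontally partitioned relations, dropping tuples already seen
        # (the overlapping sections), keyed on a primary-key attribute.
        seen, merged = set(), []
        for part in partitions:
            for row in part:
                if row[key] not in seen:
                    seen.add(row[key])
                    merged.append(row)
        return merged

    site_a = [{"id": 1, "city": "Ankara"}, {"id": 2, "city": "Izmir"}]
    site_b = [{"id": 2, "city": "Izmir"}, {"id": 3, "city": "Bursa"}]  # overlaps site_a on id = 2
    print(union_partitions([site_a, site_b]))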
APA, Harvard, Vancouver, ISO, and other styles
22

Beaton, Duncan. "Integration of data description and quality information using metadata for spatial data and spatial information systems." Thesis, University of Newcastle Upon Tyne, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.321263.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Tamborrino, Alexandre. "A Real-Time Reactive Platform for Data Integration and Event Stream Processing." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-177203.

Full text
Abstract:
This thesis presents a Real-time Reactive platform for Data Integration and Event Stream Processing. The Data Integration part is composed of data pullers that incrementally pull data changes from REST data sources and propagate them as streams of immutable events across the system according to the Event-Sourcing principle. The Stream Processing part is a tree-like structure of event-sourced stream processors, where a processor can react in various ways to events sent by its parent and send derived sub-streams of events to child processors. A processor use case is maintaining a pre-computed view on aggregated data, which makes it possible to define low-read-latency business dashboards that are updated in real time. The platform follows Reactive architecture principles to maximize performance and minimize resource consumption, using an asynchronous non-blocking architecture with an adaptive push-pull stream processing model and automatic back-pressure. Moreover, the platform uses functional programming abstractions for simple and composable asynchronous programming. Performance tests have been performed on a prototype application, validating the architecture model by showing the expected performance patterns for event latency between the top of the processing tree and the leaves, and the expected fault-tolerance behaviour with acceptable recovery times.
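A toy event-sourced processor in Python, illustrating the general pattern of maintaining a pre-computed view from a stream of immutable events and emitting a derived sub-stream; the event shapes, names and class are invented and do not correspond to the platform's actual API.

    class CountByTypeProcessor:
        # Reacts to each immutable event from its parent, updates a pre-computed view,
        # and emits a derived event for its children.
        def __init__(self):
            self.view = {}

        def on_event(self, event):
            kind = event["type"]
            self.view[kind] = self.view.get(kind, 0) + 1
            return {"type": "count_updated", "key": kind, "value": self.view[kind]}

    events = [{"type": "page_view"}, {"type": "click"}, {"type": "page_view"}]
    processor = CountByTypeProcessor()
    derived_stream = [processor.on_event(e) for e in events]
    print(processor.view)   # {'page_view': 2, 'click': 1}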
APA, Harvard, Vancouver, ISO, and other styles
24

Constantinescu, Emil Mihai. "Adaptive Numerical Methods for Large Scale Simulations and Data Assimilation." Diss., Virginia Tech, 2008. http://hdl.handle.net/10919/27938.

Full text
Abstract:
Numerical simulation is necessary to understand natural phenomena, make assessments and predictions in various research and engineering fields, develop new technologies, etc. New algorithms are needed to take advantage of the increasing computational resources and utilize the emerging hardware and software infrastructure with maximum efficiency. Adaptive numerical discretization methods can accommodate problems with various physical, scale, and dynamic features by adjusting the resolution, order, and the type of method used to solve them. In applications that simulate real systems, the numerical accuracy of the solution is typically just one of the challenges. Measurements can be included in the simulation to constrain the numerical solution through a process called data assimilation in order to anchor the simulation in reality. In this thesis we investigate adaptive discretization methods and data assimilation approaches for large-scale numerical simulations. We develop and investigate novel multirate and implicit-explicit methods that are appropriate for multiscale and multiphysics numerical discretizations. We construct and explore data assimilation approaches for, but not restricted to, atmospheric chemistry applications. A generic approach for describing the structure of the uncertainty in initial conditions that can be applied to the most popular data assimilation approaches is also presented. We show that adaptive numerical methods can effectively address the discretization of large-scale problems. Data assimilation complements the adaptive numerical methods by correcting the numerical solution with real measurements. Test problems and large-scale numerical experiments validate the theoretical findings. Synergistic approaches that use adaptive numerical methods within a data assimilation framework need to be investigated in the future.
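For context, the analysis step shared by many data assimilation schemes corrects a forecast state with observations via a Kalman-type update; the small Python sketch below shows the textbook linear case with invented numbers, not the thesis's atmospheric-chemistry systems or its structured uncertainty descriptions.

    import numpy as np

    def analysis_step(x_f, P_f, y, H, R):
        # Correct the forecast x_f with observations y: the classic Kalman analysis update.
        S = H @ P_f @ H.T + R
        K = P_f @ H.T @ np.linalg.inv(S)
        x_a = x_f + K @ (y - H @ x_f)
        P_a = (np.eye(len(x_f)) - K @ H) @ P_f
        return x_a, P_a

    x_f = np.array([1.0, 0.5])          # forecast state (illustrative)
    P_f = np.diag([0.2, 0.1])           # forecast error covariance
    H = np.array([[1.0, 0.0]])          # observe only the first state component
    y, R = np.array([1.3]), np.array([[0.05]])
    print(analysis_step(x_f, P_f, y, H, R)[0])   # analysis state pulled towards the observation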
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
25

McMahon, Michelle J. "Three-dimensional integration of remotely sensed imagery and subsurface geological data." Thesis, University of Aberdeen, 1993. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=216342.

Full text
Abstract:
The standard approach to integration of satellite imagery and sub-surface geological data has been the comparison of a map-view (two-dimensional) image interpretation with a selection of sub-surface cross-sections. The relationship between surface and subsurface geology can be better understood through quantitative three-dimensional (3-D) computer modelling. This study tests techniques to integrate a 3-D digital terrain model with 3-D sub-surface interpretations. Data types integrated, from a portion of the Paradox Basin, SE Utah, USA, include Landsat TM imagery, digital elevation data (DEM), sub-surface gravity and magnetic data, and wellbore data. Models are constructed at a variety of data resolutions. Combined modelling of basement and topographic features suggests the traditional lineament analysis approach to structural interpretation is over-simplistic. Integration of DEM and image data displayed in 3-D proved more effective for lithology discrimination than a map-view approach. Automated strike and dip interpretation algorithms require DEM data at resolutions of the order of 30 metres or better. Methods are described for the creation of fault-plane maps from three-dimensional displays of surface and subsurface data. The approach used in this study of linking existing software packages (Erdas image processing system, CPS3 mapping package and SGM and GTM three-dimensional geological modelling packages) is recommended for future studies. The methodology developed in this study is beneficial to interpretation of imagery data in frontier exploration areas.
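As a hedged sketch of the kind of automated strike-and-dip estimation mentioned above, one can least-squares fit a plane z = ax + by + c to DEM samples along a bedding trace and read dip and dip direction from the fitted gradient; the coordinates below are invented and assume a 30 m grid.

    import numpy as np

    def dip_from_points(x, y, z):
        # Least-squares plane z = a*x + b*y + c; dip is the slope of the plane and the
        # dip direction is the azimuth of steepest descent (0 degrees = north = +y).
        A = np.column_stack([x, y, np.ones_like(x)])
        (a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)
        dip = np.degrees(np.arctan(np.hypot(a, b)))
        dip_direction = (np.degrees(np.arctan2(-a, -b)) + 360.0) % 360.0
        return dip, dip_direction

    x = np.array([0.0, 30.0, 0.0, 30.0])       # 30 m DEM grid (hypothetical)
    y = np.array([0.0, 0.0, 30.0, 30.0])
    z = np.array([100.0, 100.0, 95.0, 95.0])   # surface falling to the north
    print(dip_from_points(x, y, z))            # roughly (9.5 degrees, 0 degrees)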
APA, Harvard, Vancouver, ISO, and other styles
26

Wilson, Miyako Watanabe. "The constrained object representation for engineering analysis integration." Thesis, Georgia Institute of Technology, 2000. http://hdl.handle.net/1853/17328.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Kidane, Dawit K. "Rule-based land cover classification model : expert system integration of image and non-image spatial data." Thesis, Stellenbosch : Stellenbosch University, 2005. http://hdl.handle.net/10019.1/50445.

Full text
Abstract:
Thesis (MSc)--Stellenbosch University, 2005.
Remote sensing and image processing tools provide speedy and up-to-date information on land resources. Although remote sensing is the most effective means of land cover and land use mapping, it is not without limitations. The accuracy of image analysis depends on a number of factors, of which the image classifier used is probably the most significant. It is noted that there is no perfect classifier, but some robust classifiers achieve higher accuracy results than others. For certain land cover/uses, discrimination based only on spectral properties is extremely difficult and often produces poor results. The use of ancillary data can improve the classification process. Some classifiers incorporate ancillary data before or after the classification process, which limits the full utilization of the information contained in the ancillary data. Expert classification, on the other hand, makes better use of ancillary data by incorporating data directly into the classification process. In this study an expert classification model was developed based on spatial operations designed to identify a specific land cover/use, by integrating both spectral and available ancillary data. Ancillary data were derived either from the spectral channels or from other spatial data sources such as a DEM (Digital Elevation Model) and topographical maps. The model was developed in ERDAS Imagine image-processing software, using the expert engineer as a final integrator of the different constituent spatial operations. An attempt was made to identify the Level I land cover classes in the South African National Land Cover classification scheme hierarchy. Rules were determined on the basis of expert knowledge or statistical calculations of mean and variance on training samples. Although rules could be determined by using statistical applications, such as classification and regression trees (CART), the absence of adequate and accurate training data for all land cover classes and the fact that all land cover classes do not require the same predictor variables make this option less desirable. The result of the accuracy assessment showed that the overall classification accuracy was 84.3% and the kappa statistic 0.829. Although this level of accuracy might be suitable for most applications, the model is flexible enough to be improved further.
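A toy illustration of an expert-system rule of the kind described above, combining a spectral index with ancillary terrain attributes; the band values, thresholds and class names are invented and are not those of the South African National Land Cover scheme.

    def classify_pixel(nir, red, slope_deg, elevation_m):
        # One rule chain mixing a spectral index (NDVI) with ancillary terrain data.
        ndvi = (nir - red) / (nir + red + 1e-9)
        if ndvi < 0.1:
            return "water" if elevation_m < 5 else "bare ground"
        if ndvi > 0.5 and slope_deg < 10:
            return "cultivated land"
        return "natural vegetation"

    print(classify_pixel(nir=0.45, red=0.10, slope_deg=4, elevation_m=220))   # cultivated land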
APA, Harvard, Vancouver, ISO, and other styles
28

Kim, Daeyoung. "Strongly partitioned system architecture for integration of real-time applications." [Gainesville, Fla.] : University of Florida, 2001. http://etd.fcla.edu/etd/uf/2001/anp4300/dkim-phd-aug01.pdf.

Full text
Abstract:
Thesis (Ph. D.)--University of Florida, 2001.
Title from first page of PDF file. Document formatted into pages; contains xiii, 148 p.; also contains graphics. Vita. Includes bibliographical references (p. 141-147).
APA, Harvard, Vancouver, ISO, and other styles
29

Rangan, Ravi M. "Engineering data integration in a discrete part design and manufacturing environment." Diss., Georgia Institute of Technology, 1990. http://hdl.handle.net/1853/18837.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Marupudi, Surendra Brahma. "Framework for Semantic Integration and Scalable Processing of City Traffic Events." Wright State University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=wright1472505847.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Schabort, Willem Petrus Du Toit. "Integration of kinetic models with data from 13C-metabolic flux experiments." Thesis, Link to the online version, 2007. http://hdl.handle.net/10019/707.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Werner, Jeffrey M. "Payload Data Analyzer and Payload Data Generator System for Space Station Integration and Test." International Foundation for Telemetering, 1997. http://hdl.handle.net/10150/607574.

Full text
Abstract:
International Telemetering Conference Proceedings / October 27-30, 1997 / Riviera Hotel and Convention Center, Las Vegas, Nevada
To support the processing of International Space Station (ISS) Payloads, the Kennedy Space Center (KSC) had the need to develop specialized test and validation equipment to quickly identify interface problems between the payload or experiment under test and the communication and telemetry downlink systems. To meet this need, the Payload Data Analyzer (PDA) System was developed by the Data Systems Technology Division (DSTD) of NASA’s Goddard Space Flight Center (GSFC) to provide a suite of troubleshooting tools and data snapshot features allowing for diagnosis and validation of payload interfaces. The PDA System, in conjunction with the Payload Data Generator (PDG) System, allow for a full set of programmable payload validation tools which can quickly be deployed to solve crucial interface problems. This paper describes the architecture and tools built in the PDA which help facilitate Space Station Payload Processing.
APA, Harvard, Vancouver, ISO, and other styles
33

Schwarz, Holger. "Integration von Data-Mining und online analytical processing : eine Analyse von Datenschemata, Systemarchitekturen und Optimierungsstrategien /." [S.l. : s.n.], 2003. http://www.bsz-bw.de/cgi-bin/xvms.cgi?SWB10720634.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Everhorn, Tobias. "Processing and analyzing of chromatin interaction data and numerical fitting of the statistical helix." Thesis, KTH, Tillämpad fysik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-210304.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Qiu, Pengxiang [Verfasser]. "Automated data processing and numerical methods for travel-time based hydraulic tomography / Pengxiang Qiu." Göttingen : Niedersächsische Staats- und Universitätsbibliothek Göttingen, 2020. http://d-nb.info/1215906188/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Chai, Sek Meng. "Real time image processing on parallel arrays for gigascale integration." Diss., Georgia Institute of Technology, 1999. http://hdl.handle.net/1853/15513.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Nangue, Calvain Raoul. "Guidelines for the successful integration of ICT in schools in Cameroon." Thesis, Nelson Mandela Metropolitan University, 2011. http://hdl.handle.net/10948/1311.

Full text
Abstract:
ICT integration in secondary schools in Sub-Saharan Africa is still at an early stage and already faces several setbacks that may undermine the various initiatives undertaken by governments and the private sector to promote the use of computers in schools. Based on literature and other research, this may be attributed to the fact that no guidelines for proper ICT adoption in secondary schools exist; furthermore, most integration has been done haphazardly, with no systematic approach based on existing frameworks or tailored to the real context of the schools concerned. The present study aimed to provide guidelines for the successful integration of ICT into schools in Cameroon. A review of some existing frameworks for ICT integration in schools, as well as the innovative pathways that some developing countries have taken to ensure the successful integration of ICT into schools, was conducted through a literature review, revealing the trends and challenges of ICT integration in schools in Sub-Saharan Africa. The current status of ICT in schools in Cameroon, which is at an introductory stage, was established from the available literature. This led to the use of a single case study from the Western Region of Cameroon, where four secondary schools were selected from the most advanced schools in terms of ICT integration. Participants consisting of principals, ICT co-ordinators, teachers, and students were interviewed in order to establish the current status of ICT in each school, as well as the factors affecting or promoting the adoption of ICT. Teachers’ and students’ surveys, as well as existing documentation, were used to triangulate the data gathered from interviews with school principals and ICT co-ordinators. Data were descriptively analysed, and the findings revealed that ICT is at the introductory stage of integration in Fluck’s model of ICT development in schools. At school level, the lack of infrastructure and of an ICT adoption plan were found to be the key factors opposing ICT integration, whereas several enablers were identified, such as the positive attitude of teachers towards ICT, the existence of a minimum recurring budget for ICT adoption through parents’ funding, and the continually decreasing cost of ICT infrastructure on the market. Based on the findings and experiences from successfully proven projects, a set of guidelines was derived for schools’ decision-makers. It is critical to put in place a well-structured policy for ICT in the school and to recognise all the ICT-related costs.
APA, Harvard, Vancouver, ISO, and other styles
38

Liu, Gang. "Numerical modelling of damage to masonry buildings due to tunnelling." Thesis, University of Oxford, 1997. http://ora.ox.ac.uk/objects/uuid:c1390020-daba-40cc-b922-e27314bea2b5.

Full text
Abstract:
Accurate assessment of the damage to buildings due to tunnelling in soft ground becomes an important issue when a tunnel is constructed under historic masonry buildings in urban areas. The current two-stage approach in which settlements estimated from a Gaussian curve are applied to a building does not consider soil-structure interaction and fails to give a correct prediction of the damage. This thesis describes a complete three-dimensional finite element model for assessment of the settlement damage to masonry buildings induced by tunnelling in London Clay, and an investigation of the interaction between a masonry building and the ground. A macroscopic elastic no tension model, which assumes the material has zero tensile strength but infinite compressive strength, is developed to simulate the behaviour of masonry. Numerical techniques are proposed to improve the stability of the calculation. The comparison of the no tension and elastic models, by applying Gaussian curve settlement troughs to both a plain wall and a facade, shows that the no tension model predicts different behaviour of the masonry building during tunnelling, including different cracking patterns and damage grades. Two-dimensional finite element analyses combining the building, modelled by the no tension material, and the ground, modelled by a nested yield surface model, give insight into the interaction between the masonry structure and the ground. They suggest the importance of the stresses in the soil prior to the excavation in affecting the ground movements during tunnelling. Thus the weight of the building controls the overall magnitude of the ground movements beneath the building, while the stiffness of the building affects the shape of the trough. A key aspect of the behaviour of the masonry building is the formation of stress arches. Finally, the three-dimensional finite element analyses are described. Both symmetric and unsymmetric cases are analysed. The results show that the three-dimensional analysis gives more realistic modelling of the problem and is likely to be necessary for practical situations, especially when a building is not symmetrically located with respect to the tunnel, a case which cannot be analysed in two dimensions. A special tying scheme is proposed for the connection of the nodes belonging to elements of different types, which are defined in their own local co-ordinate systems. Different types of tie elements are formulated and implemented for connection between two-dimensional and three-dimensional elements in various combinations.
APA, Harvard, Vancouver, ISO, and other styles
39

Parshall, Elaine Ruth. "A numerical model of optical beam propagation in photorefractive crystals and comparisons with experiment /." Thesis, Connect to Dissertations & Theses @ Tufts University, 1995.

Find full text
Abstract:
Thesis (Ph.D.)--Tufts University, 1995.
Adviser: M. Cronin-Golomb. Submitted to the Dept. of Electrical Engineering. Includes bibliographical references. Access restricted to members of the Tufts University community. Also available via the World Wide Web;
APA, Harvard, Vancouver, ISO, and other styles
40

Ng, Hang-yi, and 吳杏儀. "An internet integration system design for housing management: case study on HKHA District Tenancy Management Office." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2003. http://hub.hku.hk/bib/B42577421.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Patil, Manjiri Pandurang. "Schema exportation and integration for achieving information sharing in a transnational setting." [Gainesville, Fla.] : University of Florida, 2005. http://purl.fcla.edu/fcla/etd/UFE0009360.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Hilger, James Daniel. "Contour integration and interpolation geometry, phenomenology, and multiple inputs /." Diss., Restricted to subscribing institutions, 2009. http://proquest.umi.com/pqdweb?did=1973074431&sid=1&Fmt=2&clientId=1564&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Väntti, S. (Sami). "Automated processing and gathering of data provided by the continuous integration of Nokia’s 5G uplane." Master's thesis, University of Oulu, 2018. http://urn.fi/URN:NBN:fi:oulu-201806062484.

Full text
Abstract:
The agile way of working has gained ground among developers and companies in software development. Large projects may have thousands of commits per day. Software updates and patches come more often and more regularly these days. A faster development pace demands tools and procedures that can handle nonstop automated software building and testing. Continuous integration (CI) is a phase in software development that takes care of software testing, deployment and releasing. This thesis concentrates on the testing phase. CI executes thousands of tests on developers’ changes in source code every single day. Different tests provide valuable information about the code’s condition and performance. CI-provided data should be presented in a way that makes it efficient for developers or CI personnel to go through. This thesis introduces two different database solutions with visualization tools. It also describes how data from the CI machinery is gathered and processed into the databases. The system in this thesis is a distributed system that ingests data from CI and processes it.
The use of agile development methods in software development has become common among developers and companies. In large projects, thousands of changes and additions may be made to the code per day. Nowadays, software is updated often and regularly. A fast development pace requires tools and methods that can keep up with the speed of software development. Continuous integration (CI) is a phase of software development that covers automated testing and release of the software. This master's thesis focuses on the testing phase of CI and the processing of the data it produces. CI runs thousands of tests on developers' code changes. The tests run by CI provide valuable data on the performance of the code. However, this data should be presented better than it is at present, so that developers can see the test results more efficiently. This thesis introduces two different database solutions, both with visualization tools for presenting the data. The thesis also covers how data is gathered from the CI machinery and the different solutions for processing it into the databases. The system built for this thesis is a distributed system that gathers data from CI and processes it.
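As a rough illustration of the kind of ingestion step this abstract describes, the sketch below flattens a hypothetical JSON test report into a relational table; the field names are invented and SQLite stands in for the two database solutions compared in the thesis.

```python
import json
import sqlite3

# Hypothetical shape of a CI test report; field names are illustrative only.
SAMPLE_REPORT = json.dumps({
    "build_id": "1234",
    "commit": "abcdef0",
    "results": [
        {"suite": "regression", "case": "tc_001", "status": "PASS", "duration_s": 12.4},
        {"suite": "regression", "case": "tc_002", "status": "FAIL", "duration_s": 3.1},
    ],
})

def ingest(report_json, conn):
    """Flatten one CI report into a relational table for later visualisation."""
    report = json.loads(report_json)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS test_results ("
        "build_id TEXT, commit_sha TEXT, suite TEXT, test_case TEXT, "
        "status TEXT, duration_s REAL)"
    )
    rows = [
        (report["build_id"], report["commit"], r["suite"], r["case"],
         r["status"], r["duration_s"])
        for r in report["results"]
    ]
    conn.executemany("INSERT INTO test_results VALUES (?, ?, ?, ?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
ingest(SAMPLE_REPORT, conn)
print(conn.execute("SELECT status, COUNT(*) FROM test_results GROUP BY status").fetchall())
```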
APA, Harvard, Vancouver, ISO, and other styles
44

Moreira, Helder. "Sensor data integration and management of smart environments." Master's thesis, Universidade de Aveiro, 2016. http://hdl.handle.net/10773/17884.

Full text
Abstract:
Master's degree in Computer and Telematics Engineering
In a world of constant technological development and accelerated population growth, an increase in the use of energy resources is being observed. With buildings responsible for a large share of this energy consumption, several research efforts are being pursued to create energy-efficient buildings and smart spaces. This dissertation aims, in a first stage, to present a review of the current solutions combining building automation systems and the Internet of Things. Subsequently, a building automation solution is presented, based on Internet of Things principles and exploiting the advantages of complex event processing systems, in order to provide greater integration of the multiple systems existing in a building. This solution is then validated through an implementation based on lightweight protocols designed for the Internet of Things, high-performance platforms, and complex methods for the analysis of large streams of data. The implementation is also applied to a real-world scenario and will be used as the standard solution for management and automation of an existing building.
In a world of constant technological development and accelerated population growth, an increased use of energy resources is being observed. With buildings responsible for a large share of this energy consumption, a lot of research activities are pursued with the goal to create energy efficient buildings and smart spaces. This dissertation aims to, in a first stage, present a review of the current solutions combining Building Automation Systems (BAS) and Internet of Things (IoT). Then, a solution for building automation is presented based on IoT principles and exploiting the advantages of Complex Event Processing (CEP) systems, to provide higher integration of the multiple building subsystems. This solution was validated through an implementation, based on standard lightweight protocols designed for IoT, high performance and real time platforms, and complex methods for analysis of large streams of data. The implementation is also applied to a real world scenario, and will be used as a standard solution for management and automation of an existing building
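The role that complex event processing plays in such a building automation solution can be sketched with a toy rule over a sensor stream. The event shape, room name and threshold below are illustrative; in a real deployment readings would arrive over a broker such as MQTT and rules would run in a CEP engine.

```python
from collections import deque
from statistics import mean

def high_temperature_events(readings, window=5, threshold=28.0):
    """Toy complex-event rule: emit an event when the moving average of the
    last `window` temperature readings for a room exceeds `threshold`.

    `readings` is an iterable of (room, temperature) pairs; a plain generator
    stands in here for the CEP engine and message broker of a real system.
    """
    windows = {}
    for room, temp in readings:
        buf = windows.setdefault(room, deque(maxlen=window))
        buf.append(temp)
        if len(buf) == window and mean(buf) > threshold:
            yield {"event": "HIGH_TEMP", "room": room, "avg": round(mean(buf), 2)}

stream = [("lab", t) for t in (26.0, 27.5, 28.5, 29.0, 30.0, 31.0)]
for event in high_temperature_events(stream):
    print(event)
```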
APA, Harvard, Vancouver, ISO, and other styles
45

Bauckmann, Jana, Ulf Leser, and Felix Naumann. "Efficient and exact computation of inclusion dependencies for data integration." Universität Potsdam, 2010. http://opus.kobv.de/ubp/volltexte/2010/4139/.

Full text
Abstract:
Data obtained from foreign data sources often come with only superficial structural information, such as relation names and attribute names. Other types of metadata that are important for effective integration and meaningful querying of such data sets are missing. In particular, relationships among attributes, such as foreign keys, are crucial metadata for understanding the structure of an unknown database. The discovery of such relationships is difficult, because in principle for each pair of attributes in the database each pair of data values must be compared. A precondition for a foreign key is an inclusion dependency (IND) between the key and the foreign key attributes. We present Spider, an algorithm that efficiently finds all INDs in a given relational database. It leverages the sorting facilities of the DBMS but performs the actual comparisons outside of the database to save computation. Spider analyzes very large databases up to an order of magnitude faster than previous approaches. We also evaluate in detail the effectiveness of several heuristics to reduce the number of necessary comparisons. Furthermore, we generalize Spider to find composite INDs covering multiple attributes, and partial INDs, which are true INDs for all but a certain number of values. This last type is particularly relevant when integrating dirty data, as is often the case in the life sciences domain, which is our driving motivation.
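The inclusion-dependency test underlying Spider can be illustrated on small in-memory relations. The sketch below checks unary INDs by value-set containment; table and column names are invented, and the real algorithm streams DBMS-sorted value lists rather than materialising sets.

```python
from itertools import permutations

def unary_inds(tables):
    """Find all unary inclusion dependencies A ⊆ B between attributes.

    `tables` maps table names to {column name: list of values}. This naive
    version materialises value sets and checks containment directly; Spider
    instead merges DBMS-sorted value lists to avoid holding them in memory.
    """
    # Collect the distinct value set of every attribute
    attrs = {
        (t, c): set(values)
        for t, cols in tables.items()
        for c, values in cols.items()
    }
    return [
        (a, b) for a, b in permutations(attrs, 2)
        if attrs[a] and attrs[a] <= attrs[b]
    ]

tables = {
    "orders":    {"customer_id": [1, 2, 2, 3]},
    "customers": {"id": [1, 2, 3, 4], "name": ["Ann", "Bob", "Cy", "Di"]},
}
for dep, ref in unary_inds(tables):
    print(f"{dep} ⊆ {ref}")   # candidate foreign-key relationship
```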
APA, Harvard, Vancouver, ISO, and other styles
46

Howe, Bill. "Gridfields: Model-Driven Data Transformation in the Physical Sciences." PDXScholar, 2006. https://pdxscholar.library.pdx.edu/open_access_etds/2676.

Full text
Abstract:
Scientists' ability to generate and store simulation results is outpacing their ability to analyze them via ad hoc programs. We observe that these programs exhibit an algebraic structure that can be used to facilitate reasoning and improve performance. In this dissertation, we present a formal data model that exposes this algebraic structure, then implement the model, evaluate it, and use it to express, optimize, and reason about data transformations in a variety of scientific domains. Simulation results are defined over a logical grid structure that allows a continuous domain to be represented discretely in the computer. Existing approaches for manipulating these gridded datasets are incomplete. The performance of SQL queries that manipulate large numeric datasets is not competitive with that of specialized tools, and the up-front effort required to deploy a relational database makes them unpopular for dynamic scientific applications. Tools for processing multidimensional arrays can only capture regular, rectilinear grids. Visualization libraries accommodate arbitrary grids, but no algebra has been developed to simplify their use and afford optimization. Further, these libraries are data dependent—physical changes to data characteristics break user programs. We adopt the grid as a first-class citizen, separating topology from geometry and separating structure from data. Our model is agnostic with respect to dimension, uniformly capturing, for example, particle trajectories (1-D), sea-surface temperatures (2-D), and blood flow in the heart (3-D). Equipped with data, a grid becomes a gridfield. We provide operators for constructing, transforming, and aggregating gridfields that admit algebraic laws useful for optimization. We implement the model by analyzing several candidate data structures and incorporating their best features. We then show how to deploy gridfields in practice by injecting the model as middleware between heterogeneous, ad hoc file formats and a popular visualization library. In this dissertation, we define, develop, implement, evaluate and deploy a model of gridded datasets that accommodates a variety of complex grid structures and a variety of complex data products. We evaluate the applicability and performance of the model using datasets from oceanography, seismology, and medicine and conclude that our model-driven approach offers significant advantages over the status quo.
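A toy structure in the spirit of the gridfield idea pairs grid topology with bound data and a single restrict operator. The sketch below is an illustrative simplification, not the model's actual operator algebra or implementation.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class GridField:
    """A toy 'gridfield': grid topology (1-cells as node pairs) plus data
    bound to the 0-cells (nodes). Loosely inspired by the model described
    above; the real algebra defines a richer operator set over k-cells."""
    nodes: List[int]
    cells: List[Tuple[int, int]]          # 1-cells as pairs of node ids
    data: Dict[int, float]                # node id -> bound value

    def restrict(self, predicate: Callable[[float], bool]) -> "GridField":
        """Keep only nodes whose bound value satisfies the predicate,
        and the cells whose endpoints all survive."""
        keep = {n for n in self.nodes if predicate(self.data[n])}
        return GridField(
            nodes=sorted(keep),
            cells=[c for c in self.cells if all(n in keep for n in c)],
            data={n: self.data[n] for n in keep},
        )

# A 1-D grid of 4 nodes carrying sea-surface temperatures
gf = GridField(nodes=[0, 1, 2, 3],
               cells=[(0, 1), (1, 2), (2, 3)],
               data={0: 14.2, 1: 15.8, 2: 16.1, 3: 13.9})
warm = gf.restrict(lambda t: t > 15.0)
print(warm.nodes, warm.cells)   # [1, 2] [(1, 2)]
```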
APA, Harvard, Vancouver, ISO, and other styles
47

Kirsch, Matthew Robert. "Signal Processing Algorithms for Analysis of Categorical and Numerical Time Series: Application to Sleep Study Data." Case Western Reserve University School of Graduate Studies / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=case1278606480.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Mostert, Nicolette. "Towards an extended enterprise through e-Business integration." Thesis, Port Elizabeth Technikon, 2004. http://hdl.handle.net/10948/268.

Full text
Abstract:
The focus of this project will be on introducing the concept of an extended enterprise to business leaders, subsequently presenting e-Business Integration and the supporting role that it can play in the establishment of an extended enterprise. Various literature sources will be consolidated to describe the integration approaches and supporting integration technologies and standards that can be employed in establishing integrated communication between the members of the extended enterprise. Finally, a phased approach will be proposed that can be employed in supporting the establishment of an extended enterprise through e-Business Integration.
APA, Harvard, Vancouver, ISO, and other styles
49

Harris, Jeff R. "Processing and integration of geochemical data for mineral exploration: Application of statistics, geostatistics and GIS technology." Thesis, University of Ottawa (Canada), 2002. http://hdl.handle.net/10393/6421.

Full text
Abstract:
Geographic Information Systems (GIS) used in concert with statistical and geostatistical software provide the geologist with a powerful tool for processing, visualizing and analysing geoscience data for mineral exploration applications. This thesis focuses on different methods for analysing, visualizing and integrating geochemical data sampled from various media (rock, till, soil, humus) with other types of geoscience data. Different methods for defining geochemical anomalies and separating geochemical anomalies due to mineralization from other lithologic or surficial factors (i.e. true from false anomalies) are investigated. With respect to lithogeochemical data, this includes methods to distinguish between altered and unaltered samples, methods (normalization) for separating lithologic from mineralization effects, and various statistical and visual methods for identifying anomalous geochemical concentrations from background. With respect to surficial geochemical data, methods for identifying bedrock signatures and scavenging effects are presented. In addition, a new algorithm, the dispersal train identification algorithm (DTIA), is presented; it broadly helps to identify and characterize anisotropies in till data due to glacial dispersion and, more specifically, identifies potential dispersal trains using a number of statistical parameters. The issue of interpolation of geochemical data is addressed and methods for determining whether geochemical data should or should not be interpolated are presented. New methods for visualizing geochemical data using red-green-blue (RGB) ternary displays are illustrated. Finally, techniques for integrating geochemical data with other geoscience data to produce mineral prospectivity maps are demonstrated. Both data-driven and knowledge-driven GIS modeling methodologies are used (and compared) for producing prospectivity maps. New ways of preparing geochemical data for input to modeling are demonstrated with the aim of getting the most out of the data for mineral exploration purposes. Processing geochemical data by sub-populations, either by geographic unit (i.e., lithology) or by geochemical classification and alteration style, was useful for better identification of geochemical anomalies with respect to background, and for assessing varying alteration styles. Normal probability plots of geochemical concentrations based on spatial (lithologic) divisions and Principal Component Analysis (PCA) were found to be particularly useful for identifying geochemical anomalies and for identifying associations between major oxide elements that in turn reflect different alteration styles. (Abstract shortened by UMI.)
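One idea from this abstract, screening for anomalies within lithological sub-populations, can be sketched as a robust threshold per rock unit. The element, sample values and the median + k*MAD rule below are illustrative only; the thesis compares several statistical and visual methods.

```python
import numpy as np

def anomalies_by_lithology(values, lithology, k=2.0):
    """Flag geochemical anomalies relative to the background of each
    lithological sub-population, using a robust median + k*MAD threshold.

    Illustrative screen only; probability plots, PCA and other methods
    discussed in the thesis are not reproduced here.
    """
    values = np.asarray(values, dtype=float)
    lithology = np.asarray(lithology)
    flags = np.zeros(values.shape, dtype=bool)
    for unit in np.unique(lithology):
        sel = lithology == unit
        med = np.median(values[sel])
        mad = np.median(np.abs(values[sel] - med)) * 1.4826  # ~sigma for normal data
        flags[sel] = values[sel] > med + k * mad
    return flags

# Hypothetical Cu (ppm) sampled over two rock units with different backgrounds
cu   = [35, 40, 38, 180, 12, 15, 14, 90]
lith = ["basalt"] * 4 + ["granite"] * 4
print(anomalies_by_lithology(cu, lith))   # anomaly flag per sample
```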
APA, Harvard, Vancouver, ISO, and other styles
50

Mat, Lela Mohamed Said bin. "The integration of remotely sensed data using Landsat and radar imagery with ancillary information for forest management." Thesis, University of Nottingham, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.314550.

Full text
APA, Harvard, Vancouver, ISO, and other styles