Dissertations / Theses on the topic 'Data integration method'

To see the other types of publications on this topic, follow the link: Data integration method.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'Data integration method.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Chavali, Krishna Kumar. "Integration of statistical and neural network method for data analysis." Morgantown, W. Va. : [West Virginia University Libraries], 2006. https://eidr.wvu.edu/etd/documentdata.eTD?documentid=4749.

Full text
Abstract:
Thesis (M.S.)--West Virginia University, 2006.
Title from document title page. Document formatted into pages; contains viii, 68 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 50-51).
APA, Harvard, Vancouver, ISO, and other styles
2

Lin, Shih-Yung. "Integration and processing of high-resolution moiré-interferometry data." Diss., Virginia Tech, 1992. http://hdl.handle.net/10919/40181.

Full text
Abstract:
A new hybrid method combining moiré interferometry, a high-resolution data-reduction technique, a two-dimensional data-smoothing method, and the Finite Element Method (FEM) has been successfully developed. This hybrid method has been applied to residual strain analyses of composite panels, strain concentrations around optical fibers embedded in composites, and cruciform composite shear tests. It allows moiré data to be collected with higher precision and accuracy by digitizing overexposed moiré patterns (U and V fields) with appropriate carrier fringes. The resolution of the data is ±20 nm. The data extracted from the moiré patterns are interfaced to an FEM package through an automatic mesh generator. This mesh generator produces a nonuniform FEM mesh by connecting the digitized data points into triangles. The mesh, which uses the digitized displacement data as boundary conditions, is then fed to and processed by a commercial FEM package. Because of the natural scatter of the displacement data digitized from moiré patterns, the accuracy of strain values is significantly affected. A modified finite-element model with linear spring elements is therefore introduced so that data smoothing can be done easily in two-dimensional space. The results of the data smoothing are controlled by limiting the stretch of those springs to less than the resolution of the experimental method. With the full-field hybrid method, the strain contours from moiré interferometry can be obtained easily with good accuracy. If the material properties are known, the stress patterns can also be obtained. In addition, this method can be used to analyze any two-dimensional displacement data, including data from the grid method and holography.
Ph. D.
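As an illustration of the mesh-generation step described in the abstract, the short Python sketch below triangulates digitized displacement points and computes a constant strain per triangle from linear shape functions. It is only an assumed reconstruction of that step (using scipy's Delaunay triangulation), not the author's code.
```python
# A minimal sketch of the triangulation-and-strain step described above, not the
# author's original code: digitized (x, y) points with measured displacements
# (u, v) are meshed with a Delaunay triangulation, and a constant strain is
# computed per triangle from linear shape functions.
import numpy as np
from scipy.spatial import Delaunay

def strains_per_triangle(points, u, v):
    """points: (n, 2) digitized coordinates; u, v: (n,) displacement components."""
    tri = Delaunay(points)
    strains = []
    for s in tri.simplices:
        (x1, y1), (x2, y2), (x3, y3) = points[s]
        # Gradient of a linear interpolant over the triangle:
        # [[x2-x1, y2-y1], [x3-x1, y3-y1]] @ grad = [f2-f1, f3-f1]
        A = np.array([[x2 - x1, y2 - y1], [x3 - x1, y3 - y1]])
        du = np.linalg.solve(A, [u[s[1]] - u[s[0]], u[s[2]] - u[s[0]]])
        dv = np.linalg.solve(A, [v[s[1]] - v[s[0]], v[s[2]] - v[s[0]]])
        eps_xx, eps_yy = du[0], dv[1]
        gamma_xy = du[1] + dv[0]          # engineering shear strain
        strains.append((eps_xx, eps_yy, gamma_xy))
    return tri, np.asarray(strains)
```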
APA, Harvard, Vancouver, ISO, and other styles
3

Graciolli, Vinicius Medeiros. "A novel classification method applied to well log data calibrated by ontology based core descriptions." Biblioteca Digital de Teses e Dissertações da UFRGS, 2018. http://hdl.handle.net/10183/174993.

Full text
Abstract:
A method for the automatic detection of lithological types and layer contacts was developed through the combined statistical analysis of a suite of conventional wireline logs, calibrated by the systematic description of cores. The intent of this project is to allow the integration of rock data into reservoir models. The cores are described with the support of an ontology-based nomenclature system that extensively formalizes a large set of rock attributes, including lithology, texture, primary and diagenetic composition, and depositional, diagenetic and deformational structures. The descriptions are stored in a relational database along with the records of conventional wireline logs (gamma ray, resistivity, density, neutron, sonic) of each analyzed well. This structure allows prototypes of combined log values to be defined for each recognized lithology, by calculating the mean and the variance-covariance of the values measured by each log tool for each of the lithologies described in the cores. The statistical algorithm is able to learn with each addition of a described and logged core interval, progressively refining the automatic lithological identification. The detection of lithological contacts is performed by smoothing each of the logs with two moving means of different window sizes. The results of each pair of smoothed logs are compared, and the places where the lines cross define the locations of abrupt shifts in the log values, potentially indicating a change of lithology. The results from applying this method to each log are then unified in a single assessment of lithological boundaries. The mean and variance-covariance data derived from the core samples are then used to build an n-dimensional Gaussian distribution for each of the recognized lithologies. At this point, Bayesian priors are also calculated for each lithology. These distributions are checked against each of the previously detected lithological intervals by means of a probability density function, evaluating how close the interval is to each lithology prototype and allowing the assignment of a lithological type to each interval. The developed method was tested on a set of wells in the Sergipe-Alagoas basin, and the prediction accuracy achieved during testing is superior to classic pattern recognition methods such as neural networks and KNN classifiers. The method was then combined with neural networks and KNN classifiers into a multi-agent system. The results show significant potential for effective operational application to the construction of geological models for the exploration and development of areas with large volumes of conventional wireline log data and representative cored intervals.
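The contact-detection and classification steps described above (crossings of two moving means, then per-lithology n-dimensional Gaussians with priors) can be sketched as follows; the window sizes and the prototype data structure are illustrative assumptions, not the thesis implementation.
```python
# Illustrative sketch (not the thesis implementation) of the two steps described
# above: boundary detection from crossings of two moving means, and lithology
# assignment from per-lithology multivariate Gaussians with prior probabilities.
import numpy as np
from scipy.stats import multivariate_normal

def moving_mean(x, w):
    return np.convolve(x, np.ones(w) / w, mode="same")

def detect_contacts(log, w_short=5, w_long=21):
    """Depth indices where the short and long moving means cross."""
    diff = moving_mean(log, w_short) - moving_mean(log, w_long)
    return np.where(np.diff(np.sign(diff)) != 0)[0]

def classify_interval(interval_logs, prototypes):
    """interval_logs: (n_samples, n_logs) values within one detected interval.
    prototypes: {lithology: (mean_vector, covariance_matrix, prior)}."""
    x = interval_logs.mean(axis=0)
    scores = {lith: np.log(prior) + multivariate_normal.logpdf(x, mean=mu, cov=cov)
              for lith, (mu, cov, prior) in prototypes.items()}
    return max(scores, key=scores.get)
```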
APA, Harvard, Vancouver, ISO, and other styles
4

Stock, Kristin Mary. "A new method for representing and translating the semantics of heterogeneous spatial databases." Thesis, Queensland University of Technology, 2000.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Sukcharoenpong, Anuchit. "Shoreline Mapping with Integrated HSI-DEM using Active Contour Method." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1406147249.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Darrell, Leopold Augustus. "Development of an NDT method to characterise flaws based on multiple eddy current sensor integration and data fusion." Thesis, Leeds Beckett University, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.245778.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Lindström, Maria, and Lena Ljungwald. "A study of the integration of complementary analysis methods : Analysing qualitative data for distributed tactical operations." Thesis, Linköping University, Department of Computer and Information Science, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-4750.

Full text
Abstract:

Complex socio-technical systems, such as command and control work in military and rescue operations, are becoming more and more common in society, and there is a growing need for more useful and effective systems. Qualitative data from complex socio-technical systems can be challenging to analyse. This thesis explores one way of enhancing existing analysis methods to better suit this task.

Our case study was carried out at FOI (the Swedish Defence Research Agency). One of FOI's tasks is to analyse complex situations, for example military operations, and they have developed an approach called the Reconstruction-Exploration (R&E) approach for analysing distributed tactical operations (DTOs). The R&E approach is rich in context but lacks a systematic analytic methodology.

The aim of this thesis is to investigate how the R&E approach could be enhanced and possibly merged with other existing cognitive analysis methods to better suit the analysis of DTOs. We identified that the R&E approach's main weaknesses were its lack of structure and its insufficient handling of subjective data, which made it difficult to perform a deeper analysis. The approach also needed a well-defined analysis method to increase the validity of the identified results.

One route to improvement was to integrate the R&E approach with several cognitive analysis methods, based on their respective strengths. We started by analysing the R&E approach and then identified qualities in other methods that complemented its weaknesses. Finally, we developed an integrated method.

The Critical Decision Method (CDM) appeared to be the most suitable method for integration with the R&E approach. Nevertheless, the CDM did not have all the qualities required, so we also used elements from other methods included in our initial analysis: ETA and Grounded Theory.

The integration resulted in an approach with a well-defined analysis method and the ability to handle subjective data. This can contribute to a deeper analysis of DTOs.

APA, Harvard, Vancouver, ISO, and other styles
8

Söderström, Eva. "Merging Modelling Techniques: A Case Study and its Implications." Thesis, University of Skövde, Department of Computer Science, 1999. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-393.

Full text
Abstract:

There are countless methods in the field of Information Systems Development (ISD) today, of which only a few have received much attention from practitioners. These ISD methods are described and developed using knowledge from the field of Method Engineering (ME). Most methods concern either what a system is to contain or how the system is to be realised, but as of now there is no single best method for all situations. Bridging the gap between the fuzzier "what" methods and the more formal "how" methods is difficult, if not impossible. Methods therefore need to be integrated to cover as much of the systems life cycle as possible. An integration of two methods, one from each side of the gap, can be performed in a number of different ways, each with its own obstacles to overcome.

The case study we performed concerns a method integration of the fuzzier Business Process Model (BPM) in the EKD method with the more formal description technique SDL (Specification and Description Language). One meta-model was created per technique; these were then used to compare BPM and SDL. The integration process consisted of translating EKD business process diagrams into their SDL correspondences while carefully documenting and analysing the problems encountered. These problems mainly arose from either differences in transaction independence or deviations in method focus. The case study resulted in a number of implications for both EKD and SDL, as well as for ME, and includes suggestions for future work.

APA, Harvard, Vancouver, ISO, and other styles
9

Forst, Marie Bess. "Zoophonics keyboards: A venue for technology integration in kindergarten." CSUSB ScholarWorks, 2004. https://scholarworks.lib.csusb.edu/etd-project/2560.

Full text
Abstract:
The purpose of the project was to create a program of instruction that seamlessly meshed with my current emergent literacy curriculum, a popularly used phonics program entitled Zoo-phonics, which can easily be applied by other kindergarten teachers using the same phonics instruction program.
APA, Harvard, Vancouver, ISO, and other styles
10

Zeng, Sai. "Knowledge-based FEA Modeling Method for Highly Coupled Variable Topology Multi-body Problems." Diss., Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/4772.

Full text
Abstract:
The increasingly competitive market is forcing industry to develop higher-quality products more quickly and less expensively. Engineering analysis, at the same time, plays an important role in helping designers evaluate the performance of the designed product against design requirements. In the context of automated CAD/FEA integration, the domain-dependent engineers' different usage views of product models cause an information gap between CAD and FEA models, which impedes interoperability among these engineering tools and the automatic transformation of an idealized design model into a solvable FEA model. Especially in highly coupled variable topology multi-body (HCVTMB) problems, this transformation process is usually very labor-intensive and time-consuming. In this dissertation, a knowledge-based FEA modeling method, which consists of three information models and the transformation processes between these models, is presented. An Analysis Building Block (ABB) model represents the idealized analytical concepts in an FEA modeling process. Solution Method Models (SMMs) represent these analytical concepts in a solution technique-specific format. When FEA is used as the solution technique, an SMM consists of a Ready to Mesh Model (RMM) and a Control Information Model (CIM). An RMM is obtained from an ABB through geometry manipulation so that a quality mesh can be automatically generated using FEA tools. CIMs contain information that controls the FEA modeling and solving activities. A Solution Tool Model (STM) represents an analytical model at the tool-specific level to guide the entire FEA modeling process. Two information transformation processes are presented between these information models. A solution method mapping transforms an ABB into an RMM through a complex cell decomposition process and an attribute association process. A solution tool mapping transforms an SMM into an STM by mimicking an engineer's selection of FEA modeling operations. Four HCVTMB industrial FEA modeling cases are presented for demonstration and validation. These involve thermo-mechanical analysis scenarios: a simple chip package, a Plastic Ball Grid Array (PBGA), and an Enhanced Ball Grid Array (EBGA), as well as a thermal analysis scenario: another PBGA. Compared to traditional methods, results indicate that this method provides better knowledge capture and decreases the modeling time from days/hours to hours/minutes.
APA, Harvard, Vancouver, ISO, and other styles
11

Ryd, Jonatan, and Jeffrey Persson. "Development of a pipeline to allow continuous development of software onto hardware : Implementation on a Raspberry Pi to simulate a physical pedal using the Hardware In the Loop method." Thesis, KTH, Hälsoinformatik och logistik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-296952.

Full text
Abstract:
Saab wants to examine the Hardware In the Loop method as a concept and what an infrastructure for Hardware In the Loop would look like. Hardware In the Loop is based on continuously testing hardware, which is simulated. The software Saab wants to use for the Hardware In the Loop method is Jenkins, which is a Continuous Integration and Continuous Delivery tool. To simulate the hardware, they want to examine the use of an Application Programming Interface between a Raspberry Pi and Robot Framework. The reason Saab wants this examined is that they believe this method can improve the rate of testing and the quality of the tests, and thereby the quality of their products. The theory behind Hardware In the Loop, Continuous Integration, and Continuous Delivery is explained in this thesis. The Hardware In the Loop method was implemented on top of the Continuous Integration and Continuous Delivery tool Jenkins. An Application Programming Interface between the General Purpose Input/Output pins on a Raspberry Pi and Robot Framework was developed. With these implementations done, the Hardware In the Loop method was successfully integrated, with a Raspberry Pi used to simulate the hardware.
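A keyword library of the kind the thesis describes, exposing Raspberry Pi GPIO control to Robot Framework, might look roughly like the sketch below; the pin number and keyword names are assumptions, and this is not the authors' code.
```python
# hil_gpio.py -- a minimal sketch (not the thesis code) of a Robot Framework
# keyword library that drives a Raspberry Pi GPIO pin to simulate a pedal.
# The pin number and keyword names are illustrative assumptions.
import RPi.GPIO as GPIO

class PedalSimulator:
    """Robot Framework keywords for simulating a pedal on a GPIO output pin."""
    ROBOT_LIBRARY_SCOPE = "SUITE"

    def __init__(self, pin=17):
        self.pin = pin
        GPIO.setmode(GPIO.BCM)
        GPIO.setup(self.pin, GPIO.OUT, initial=GPIO.LOW)

    def press_pedal(self):
        GPIO.output(self.pin, GPIO.HIGH)   # exposed as keyword "Press Pedal"

    def release_pedal(self):
        GPIO.output(self.pin, GPIO.LOW)    # exposed as keyword "Release Pedal"

    def cleanup(self):
        GPIO.cleanup(self.pin)
```
A Jenkins pipeline stage could then simply invoke the Robot Framework test suites that import this library, so each commit is exercised against the simulated pedal.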
APA, Harvard, Vancouver, ISO, and other styles
12

Manser, Paul. "Methods for Integrative Analysis of Genomic Data." VCU Scholars Compass, 2014. http://scholarscompass.vcu.edu/etd/3638.

Full text
Abstract:
In recent years, the development of new genomic technologies has allowed for the investigation of many regulatory epigenetic marks besides expression levels, on a genome-wide scale. As the price for these technologies continues to decrease, study sizes will not only increase, but several different assays are beginning to be used for the same samples. It is therefore desirable to develop statistical methods to integrate multiple data types that can handle the increased computational burden of incorporating large data sets. Furthermore, it is important to develop sound quality control and normalization methods as technical errors can compound when integrating multiple genomic assays. DNA methylation is a commonly studied epigenetic mark, and the Infinium HumanMethylation450 BeadChip has become a popular microarray that provides genome-wide coverage and is affordable enough to scale to larger study sizes. It employs a complex array design that has complicated efforts to develop normalization methods. We propose a novel normalization method that uses a set of stable methylation sites from housekeeping genes as empirical controls to fit a local regression hypersurface to signal intensities. We demonstrate that our method performs favorably compared to other popular methods for the array. We also discuss an approach to estimating cell-type admixtures, which is a frequent biological confound in these studies. For data integration we propose a gene-centric procedure that uses canonical correlation and subsequent permutation testing to examine correlation or other measures of association and co-localization of epigenetic marks on the genome. Specifically, a likelihood ratio test for general association between data modalities is performed after an initial dimension reduction step. Canonical scores are then regressed against covariates of interest using linear mixed effects models. Lastly, permutation testing is performed on weighted correlation matrices to test for co-localization of relationships to physical locations in the genome. We demonstrate these methods on a set of developmental brain samples from the BrainSpan consortium and find substantial relationships between DNA methylation, gene expression, and alternative promoter usage primarily in genes related to axon guidance. We perform a second integrative analysis on another set of brain samples from the Stanley Medical Research Institute.
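The gene-centric association test described above can be illustrated by a simplified sketch that computes the first canonical correlation between two data modalities and assesses it against a permutation null; this stands in for, but does not reproduce, the dissertation's likelihood-ratio and mixed-model machinery.
```python
# A simplified sketch (not the dissertation's code) of testing association
# between two data modalities for one gene via canonical correlation and a
# permutation null: rows are samples, columns are features (e.g. methylation
# probes in X, expression measures in Y), both as numpy arrays.
import numpy as np
from sklearn.cross_decomposition import CCA

def cca_permutation_test(X, Y, n_perm=1000, seed=0):
    rng = np.random.default_rng(seed)
    cca = CCA(n_components=1)

    def first_canonical_corr(A, B):
        Ac, Bc = cca.fit(A, B).transform(A, B)
        return abs(np.corrcoef(Ac[:, 0], Bc[:, 0])[0, 1])

    observed = first_canonical_corr(X, Y)
    null = np.array([first_canonical_corr(X, Y[rng.permutation(len(Y))])
                     for _ in range(n_perm)])
    p_value = (1 + np.sum(null >= observed)) / (1 + n_perm)
    return observed, p_value
```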
APA, Harvard, Vancouver, ISO, and other styles
13

Ming, Jingsi. "Statistical methods for integrative analysis of genomic data." HKBU Institutional Repository, 2018. https://repository.hkbu.edu.hk/etd_oa/545.

Full text
Abstract:
Thousands of risk variants underlying complex phenotypes (quantitative traits and diseases) have been identified in genome-wide association studies (GWAS). However, several challenges remain on the way to a deeper understanding of the genetic architectures of complex phenotypes. First, the majority of GWAS hits are in non-coding regions and their biological interpretation is still unclear. Second, most complex traits are suggested to be highly polygenic, i.e., they are affected by a vast number of risk variants with individually small or moderate effects, and a large proportion of risk variants with small effects remain unknown. Third, accumulating evidence from GWAS suggests the pervasiveness of pleiotropy, a phenomenon in which some genetic variants are associated with multiple traits, but there is a lack of a unified framework that can scalably reveal the relationships among a large number of traits while simultaneously prioritizing genetic variants with functional annotations integrated. In this thesis, we propose two statistical methods to address these challenges using integrative analysis of summary statistics from GWASs and functional annotations. In the first part, we propose a latent sparse mixed model (LSMM) to integrate functional annotations with GWAS data. Not only does it increase the statistical power of identifying risk variants, but it also offers more biological insight by detecting relevant functional annotations. To make LSMM scalable to millions of variants and hundreds of functional annotations, we developed an efficient variational expectation-maximization (EM) algorithm for model parameter estimation and statistical inference. We first conducted comprehensive simulation studies to evaluate the performance of LSMM. We then applied it to analyze 30 GWASs of complex phenotypes integrated with nine genic category annotations and 127 cell-type-specific functional annotations from the Roadmap project. The results demonstrate that our method possesses more statistical power than conventional methods and can help researchers achieve a deeper understanding of the genetic architecture of these complex phenotypes. In the second part, we propose a latent probit model (LPM) which combines summary statistics from multiple GWASs and functional annotations to characterize relationships and increase the statistical power to identify risk variants. LPM can also perform hypothesis testing for pleiotropy and annotation enrichment. To keep LPM scalable as the number of GWASs increases, we developed an efficient parameter-expanded EM (PX-EM) algorithm which can be executed in parallel. We first validated the performance of LPM through comprehensive simulations, then applied it to analyze 44 GWASs with nine genic category annotations. The results demonstrate the benefits of LPM and offer new insights into disease etiology.
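As a deliberately simplified illustration of the latent-variable idea behind LSMM and LPM (not the models themselves), the sketch below fits a two-group mixture to GWAS z-scores with a basic EM algorithm, separating a null component from a risk-variant component.
```python
# A deliberately simplified illustration of the idea underlying such models
# (not LSMM or LPM themselves): a two-component mixture on GWAS z-scores,
# null N(0, 1) versus risk N(0, sigma^2), fitted with a basic EM algorithm.
import numpy as np
from scipy.stats import norm

def fit_two_group_mixture(z, n_iter=200):
    pi, sigma2 = 0.1, 4.0                       # initial risk proportion and variance
    for _ in range(n_iter):
        # E-step: posterior probability that each variant is a risk variant
        f1 = norm.pdf(z, scale=np.sqrt(sigma2))
        f0 = norm.pdf(z, scale=1.0)
        gamma = pi * f1 / (pi * f1 + (1 - pi) * f0)
        # M-step: update mixture weight and alternative-component variance
        # (clamped at 1.0 so the risk component stays wider than the null)
        pi = gamma.mean()
        sigma2 = max((gamma * z**2).sum() / gamma.sum(), 1.0)
    return pi, sigma2, gamma
```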
APA, Harvard, Vancouver, ISO, and other styles
14

Lysenko, Artem. "Integration strategies and data analysis methods for plant systems biology." Thesis, University of Nottingham, 2012. http://eprints.nottingham.ac.uk/27798/.

Full text
Abstract:
Understanding how function relates to multiple layers of interactions between biological entities is one of the key goals of bioinformatics research, in particular in areas such as systems biology. However, the realisation of this objective is hampered by the sheer volume and multi-level heterogeneity of potentially relevant information. This work addressed this issue by developing a set of integration pipelines and analysis methods as part of the Ondex data integration framework. The integration process incorporated both relevant data from a set of publicly available databases and information derived from prediction approaches, which were also implemented as part of this work. These methods were used to assemble integrated datasets that were of relevance to the study of the model plant species Arabidopsis thaliana and applicable to network-driven analysis. Particular attention was paid to the evaluation and comparison of the different sources of these data. Approaches were implemented for the identification and characterisation of functional modules in integrated networks and used to study and compare networks constructed from different types of data. The benefits of data integration were also demonstrated in three different bioinformatics research scenarios. The analysis of the constructed datasets also resulted in a better understanding of the functional role of genes identified in a study of a nitrogen uptake mutant and allowed candidate genes to be selected for further exploration.
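Module identification in an integrated network, one of the analyses mentioned above, can be illustrated generically with networkx community detection; the thesis itself works within the Java-based Ondex framework, and the gene identifiers and edges below are placeholders.
```python
# A generic sketch of module identification in an integrated network (the thesis
# uses the Ondex framework; this is only an illustration with networkx).
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical integrated network: nodes are genes/proteins, edges carry the
# data source they came from (e.g. co-expression, protein interaction).
G = nx.Graph()
G.add_edge("AT1G01010", "AT1G01020", source="coexpression")
G.add_edge("AT1G01020", "AT1G01030", source="protein_interaction")
G.add_edge("AT1G01030", "AT1G01010", source="coexpression")
G.add_edge("AT2G01008", "AT2G01021", source="protein_interaction")

modules = greedy_modularity_communities(G)
for i, module in enumerate(modules):
    print(f"module {i}: {sorted(module)}")
```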
APA, Harvard, Vancouver, ISO, and other styles
15

Gao, Yang. "On the integration of qualitative and quantitative methods in data fusion." Thesis, University of Oxford, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.240463.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Khalilikhah, Majid. "Traffic Sign Management: Data Integration and Analysis Methods for Mobile LiDAR and Digital Photolog Big Data." DigitalCommons@USU, 2016. https://digitalcommons.usu.edu/etd/4744.

Full text
Abstract:
This study links traffic sign visibility and legibility to quantify the effects of damage or deterioration on sign retroreflective performance. In addition, this study proposes GIS-based data integration strategies to obtain and extract climate, location, and emission data for in-service traffic signs. The proposed data integration strategy can also be used to assess all transportation infrastructures’ physical condition. Additionally, non-parametric machine learning methods are applied to analyze the combined GIS, Mobile LiDAR imaging, and digital photolog big data. The results are presented to identify the most important factors affecting sign visual condition, to predict traffic sign vandalism that obstructs critical messages to drivers, and to determine factors contributing to the temporary obstruction of the sign messages. The results of data analysis provide insight to inform transportation agencies in the development of sign management plans, to identify traffic signs with a higher likelihood of failure, and to schedule sign replacement.
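A non-parametric analysis of the kind described, relating integrated features to sign condition, might be sketched as below with a random forest; the feature names and values are hypothetical stand-ins, not the study's data.
```python
# Illustrative sketch only: a non-parametric classifier relating hypothetical
# GIS-derived features to traffic sign condition (the study's actual features
# and models come from mobile LiDAR, photolog and climate data).
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

signs = pd.DataFrame({
    "annual_precip_mm":   [300, 850, 420, 610, 220, 930],
    "avg_daily_traffic":  [1200, 15400, 800, 9700, 430, 22000],
    "sign_age_years":     [3, 12, 7, 15, 2, 20],
    "distance_to_road_m": [2.5, 1.8, 3.0, 1.5, 4.2, 1.2],
    "condition_ok":       [1, 0, 1, 0, 1, 0],   # 1 = acceptable visual condition
})
X, y = signs.drop(columns="condition_ok"), signs["condition_ok"]

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
# Feature importances hint at the most influential factors for sign condition.
print(dict(zip(X.columns, model.feature_importances_)))
```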
APA, Harvard, Vancouver, ISO, and other styles
17

Weirauch, Matthew T. "Data integration methods for systems-level investigation of gene functional association networks /." Diss., Digital Dissertations Database. Restricted to UC campuses, 2009. http://uclibs.org/PID/11984.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Constantinescu, Emil Mihai. "Adaptive Numerical Methods for Large Scale Simulations and Data Assimilation." Diss., Virginia Tech, 2008. http://hdl.handle.net/10919/27938.

Full text
Abstract:
Numerical simulation is necessary to understand natural phenomena, make assessments and predictions in various research and engineering fields, develop new technologies, etc. New algorithms are needed to take advantage of the increasing computational resources and utilize the emerging hardware and software infrastructure with maximum efficiency. Adaptive numerical discretization methods can accommodate problems with various physical, scale, and dynamic features by adjusting the resolution, order, and the type of method used to solve them. In applications that simulate real systems, the numerical accuracy of the solution is typically just one of the challenges. Measurements can be included in the simulation to constrain the numerical solution through a process called data assimilation in order to anchor the simulation in reality. In this thesis we investigate adaptive discretization methods and data assimilation approaches for large-scale numerical simulations. We develop and investigate novel multirate and implicit-explicit methods that are appropriate for multiscale and multiphysics numerical discretizations. We construct and explore data assimilation approaches for, but not restricted to, atmospheric chemistry applications. A generic approach for describing the structure of the uncertainty in initial conditions that can be applied to the most popular data assimilation approaches is also presented. We show that adaptive numerical methods can effectively address the discretization of large-scale problems. Data assimilation complements the adaptive numerical methods by correcting the numerical solution with real measurements. Test problems and large-scale numerical experiments validate the theoretical findings. Synergistic approaches that use adaptive numerical methods within a data assimilation framework need to be investigated in the future.
Ph. D.
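One of the building blocks referred to above, an implicit-explicit (IMEX) time-stepping scheme, can be illustrated with a minimal first-order sketch that treats a stiff linear term implicitly and the remaining term explicitly; this is a generic example, not a method from the thesis.
```python
# A minimal illustration (not from the thesis) of an implicit-explicit (IMEX)
# Euler scheme for u' = A u + g(t, u), treating the stiff linear part A u
# implicitly and the non-stiff part g explicitly.
import numpy as np

def imex_euler(A, g, u0, t0, t_end, dt):
    I = np.eye(len(u0))
    u, t = np.array(u0, dtype=float), t0
    history = [(t, u.copy())]
    while t < t_end - 1e-12:
        # Solve (I - dt*A) u_{k+1} = u_k + dt * g(t_k, u_k)
        u = np.linalg.solve(I - dt * A, u + dt * g(t, u))
        t += dt
        history.append((t, u.copy()))
    return history

# Example: a stiff decay coupled with a mild explicit forcing term.
A = np.array([[-1000.0, 0.0], [0.0, -0.5]])
g = lambda t, u: np.array([np.sin(t), 0.1 * np.cos(t)])
trajectory = imex_euler(A, g, u0=[1.0, 1.0], t0=0.0, t_end=1.0, dt=0.01)
```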
APA, Harvard, Vancouver, ISO, and other styles
19

Choo, Jae gul. "Integration of computational methods and visual analytics for large-scale high-dimensional data." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/49121.

Full text
Abstract:
With the increasing amount of collected data, large-scale high-dimensional data analysis is becoming essential in many areas. These data can be analyzed either by using fully computational methods or by leveraging human capabilities via interactive visualization. However, each method has its drawbacks. While a fully computational method can deal with large amounts of data, it lacks depth in its understanding of the data, which is critical to the analysis. With the interactive visualization method, the user can give a deeper insight on the data but suffers when large amounts of data need to be analyzed. Even with an apparent need for these two approaches to be integrated, little progress has been made. As ways to tackle this problem, computational methods have to be re-designed both theoretically and algorithmically, and the visual analytics system has to expose these computational methods to users so that they can choose the proper algorithms and settings. To achieve an appropriate integration between computational methods and visual analytics, the thesis focuses on essential computational methods for visualization, such as dimension reduction and clustering, and it presents fundamental development of computational methods as well as visual analytic systems involving newly developed methods. The contributions of the thesis include (1) the two-stage dimension reduction framework that better handles significant information loss in visualization of high-dimensional data, (2) efficient parametric updating of computational methods for fast and smooth user interactions, and (3) an iteration-wise integration framework of computational methods in real-time visual analytics. The latter parts of the thesis focus on the development of visual analytics systems involving the presented computational methods, such as (1) Testbed: an interactive visual testbed system for various dimension reduction and clustering methods, (2) iVisClassifier: an interactive visual classification system using supervised dimension reduction, and (3) VisIRR: an interactive visual information retrieval and recommender system for large-scale document data.
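The two-stage dimension reduction idea can be illustrated generically: a fast linear reduction to a few dozen dimensions followed by a nonlinear 2-D embedding for plotting. The sketch below uses PCA and t-SNE as stand-ins and is not the thesis's specific framework.
```python
# A generic sketch of two-stage dimension reduction for visualization (not the
# thesis's specific framework): a fast linear reduction to ~50 dimensions,
# followed by a nonlinear 2-D embedding suitable for plotting.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5000))       # stand-in for high-dimensional data

X_50 = PCA(n_components=50, random_state=0).fit_transform(X)                  # stage 1
X_2d = TSNE(n_components=2, init="pca", random_state=0).fit_transform(X_50)   # stage 2
```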
APA, Harvard, Vancouver, ISO, and other styles
20

Jeanmougin, Marine. "Statistical methods for robust analysis of transcriptome data by integration of biological prior knowledge." Thesis, Evry-Val d'Essonne, 2012. http://www.theses.fr/2012EVRY0029/document.

Full text
Abstract:
Recent advances in Molecular Biology have led biologists toward high-throughput genomic studies. In particular, the investigation of the human transcriptome offers unprecedented opportunities for understanding cellular and disease mechanisms. In this PhD, we focus on providing robust statistical methods dedicated to the treatment and analysis of high-throughput transcriptome data. We discuss the differential analysis approaches available in the literature for identifying genes associated with a phenotype of interest and propose a comparison study. We provide practical recommendations on the appropriate method to be used, based on various simulation models and real datasets. With the eventual goal of overcoming the inherent instability of differential analysis strategies, we have developed an innovative approach called DiAMS, for DIsease Associated Modules Selection. This method selects significant modules of genes rather than individual genes and involves the integration of both transcriptome and protein interaction data in a local-score strategy. We then focus on the development of a framework to infer gene regulatory networks by integrating a biologically informative prior over network structures using Gaussian graphical models. This approach offers the possibility of exploring the molecular relationships between genes, leading to the identification of altered regulations potentially involved in disease processes. Finally, we apply our statistical developments to the study of metastatic relapse in breast cancer.
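The Gaussian graphical model step can be sketched with a plain graphical lasso estimator, as below; note that the thesis's contribution is the injection of a biological prior on network structure, which this generic sketch omits, and the expression matrix here is simulated.
```python
# Sketch of inferring a gene-gene network with a Gaussian graphical model via
# the graphical lasso. The thesis's method additionally injects a biological
# prior on the network structure, which this plain estimator does not.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(1)
expression = rng.normal(size=(200, 30))        # 200 samples x 30 genes (stand-in data)

model = GraphicalLasso(alpha=0.2).fit(expression)
precision = model.precision_                   # nonzero off-diagonal entries = edges
edges = [(i, j) for i in range(30) for j in range(i + 1, 30)
         if abs(precision[i, j]) > 1e-8]
print(f"{len(edges)} inferred regulatory edges")
```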
APA, Harvard, Vancouver, ISO, and other styles
21

Aich, Sudipto. "Evaluation of Driver Performance While Making Unprotected Intersection Turns Utilizing Naturalistic Data Integration Methods." Thesis, Virginia Tech, 2011. http://hdl.handle.net/10919/76892.

Full text
Abstract:
Among all vehicle crashes that occur annually, intersection-related crashes are over-represented. The research conducted here uses an empirical approach to study driver behavior at intersections in a naturalistic paradigm. A data-mining algorithm was used to aggregate data from two different naturalistic databases to obtain instances of unprotected turns at the same intersection. Several dependent variables were analyzed, including visual entropy, mean duration of glances to locations in the driver's view, and gap acceptance/rejection time. Kinematic dependent variables included peak/average speed and peak longitudinal and lateral acceleration. Results indicated that visual entropy and peak speed differ among drivers of the three age groups (older, middle-age, teen) in the presence of traffic in the intersecting streams while negotiating a left turn. Differences in gap acceptance times were not significant but approached significance, with older drivers accepting larger gaps than the younger teen drivers. Significant differences were observed for peak speed and average speed during a left turn, with younger drivers exhibiting higher values for both. Overall, this research has contributed to two types of engineering application. First, the analyses of traffic levels, gap acceptance, and gap non-acceptance represented exploratory efforts that ventured into new areas of technical content using newly available naturalistic driving data. Second, the findings from this thesis are among the few that can be used to inform the further development, refinement, and testing of technology (and training) solutions intended to assist drivers in making successful turns and avoiding crashes at intersections.
Master of Science
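Visual entropy, one of the dependent variables above, is commonly computed as the Shannon entropy of the distribution of glances over viewing locations; the sketch below shows that calculation with illustrative location labels rather than the study's coding scheme.
```python
# A small sketch of computing visual entropy as the Shannon entropy of the
# distribution of glances over viewing locations (location names are
# illustrative, not the study's coding scheme).
import numpy as np

glances = ["forward", "left_mirror", "forward", "speedometer",
           "forward", "left_window", "left_window", "forward"]

locations, counts = np.unique(glances, return_counts=True)
p = counts / counts.sum()
visual_entropy = -np.sum(p * np.log2(p))   # bits; higher = more dispersed scanning
print(round(visual_entropy, 3))
```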
APA, Harvard, Vancouver, ISO, and other styles
22

Håkansson, Michael. "Matching Methods for Information Sharing with Supply Chain Context." Thesis, KTH, Industriell Management, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-199188.

Full text
Abstract:
The productivity and competitiveness of companies fundamentally depend on their ability to handle information. With the available technology, the opportunities to collect and utilise information are better than ever. One of the industries that has proven to benefit significantly from analysing large quantities of information is the retail industry. However, before information can be analysed it has to be obtained, which often means that information has to flow between members of a supply chain. The purpose of this study was to investigate which methods are suitable for sharing information between suppliers and retailers in different contexts. The research was conducted as a case study within the Swedish sporting goods industry, where the information-sharing relationship between one supplier and seven of its customers was investigated. The studied methods for information sharing were manual document handling, web portals, and a third-party EDI service provider. The third-party EDI solution benefits both parties; however, this method is not always applicable. If resources are scarce for both communicating parties and no technological solution for information sharing is in place, manual document handling is a suitable short-term solution. If one party with ample resources frequently shares information with parties that cannot afford to invest in technological information-sharing solutions, a portal can be a suitable compromise, letting the company that invests in the portal gain efficiency benefits while the other parties continue to provide information manually.
APA, Harvard, Vancouver, ISO, and other styles
23

Wu, Chao-Min. "Computational Methods for Integrating Different Anatomical Data Sets of The Human Tongue /." The Ohio State University, 1996. http://rave.ohiolink.edu/etdc/view?acc_num=osu148793324553722.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Potter, Dustin Paul. "A combinatorial approach to scientific exploration of gene expression data: An integrative method using Formal Concept Analysis for the comparative analysis of microarray data." Diss., Virginia Tech, 2005. http://hdl.handle.net/10919/28792.

Full text
Abstract:
Functional genetics is the study of the genes present in a genome of an organism, the complex interplay of all genes and their environment being the primary focus of study. The motivation for such studies is the premise that gene expression patterns in a cell are characteristic of its current state. The availability of the entire genome for many organisms now allows scientists unparalleled opportunities to characterize, classify, and manipulate genes or gene networks involved in metabolism, cellular differentiation, development, and disease. System-wide studies of biological systems have been made possible by the advent of high-throughput and large-scale tools such as microarrays which are capable of measuring the mRNA levels of all genes in a genome. Tools and methods for the integration, visualization, and modeling of the large-scale data obtained in typical systems biology experiments are indispensable. Our work focuses on a method that integrates gene expression values obtained from microarray experiments with biological functional information related to the genes measured in order to make global comparisons of multiple experiments. In our method, the integrated data is represented as a lattice and, using appropriate measures, a reference experiment can be compared to samples from a database of similar experiments, and a ranking of similarity is returned. In this work, support for the validity of our method is demonstrated both theoretically and empirically: a mathematical description of the lattice structure with respect to the integrated information is developed and the method is applied to data sets of both simulated and reported microarray experiments. A fast algorithm for constructing the lattice representation is also developed.
Ph. D.
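The lattice representation at the core of the method is a concept lattice in Formal Concept Analysis; the brute-force sketch below derives the formal concepts of a tiny, hypothetical gene-by-annotation context and is meant only to illustrate the idea, not the fast algorithm developed in the dissertation.
```python
# Naive illustration of Formal Concept Analysis on a small binary context of
# genes x functional annotations (the dissertation develops a fast algorithm;
# this brute-force closure over attribute subsets is only for illustration).
from itertools import combinations

context = {                      # hypothetical gene -> annotation sets
    "geneA": {"kinase", "membrane"},
    "geneB": {"kinase", "nucleus"},
    "geneC": {"kinase", "membrane", "nucleus"},
}
attributes = set().union(*context.values())

def extent(attrs):               # genes having all attributes in attrs
    return {g for g, a in context.items() if attrs <= a}

def intent(genes):               # attributes shared by all genes
    return set.intersection(*(context[g] for g in genes)) if genes else set(attributes)

concepts = set()
for r in range(len(attributes) + 1):
    for attrs in combinations(sorted(attributes), r):
        genes = extent(set(attrs))
        concepts.add((frozenset(genes), frozenset(intent(genes))))

for genes, attrs in sorted(concepts, key=lambda c: -len(c[0])):
    print(sorted(genes), "<->", sorted(attrs))
```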
APA, Harvard, Vancouver, ISO, and other styles
25

Golbamaki, Bakhtyari Azadi. "Integration of toxicity data from experiments and non-testing methods within a weight of evidence procedure." Thesis, Open University, 2018. http://oro.open.ac.uk/55615/.

Full text
Abstract:
Assessment of human health and environmental risk is based on multiple sources of information, requiring the integration of lines of evidence in order to reach a conclusion. There is an increasing need for data to fill the gaps and for new methods of data integration. From a regulatory point of view, risk assessors take advantage of all the available data by means of weight-of-evidence (WOE) and expert judgement approaches to develop conclusions about the risk posed by chemicals and also by nanoparticles. The integration of physico-chemical properties and toxicological effects sheds light on relationships between molecular properties and biological effects, leading to non-testing methods. (Quantitative) structure-activity relationships ((Q)SAR) and read-across are examples of non-testing methods. In this dissertation, (i) two new structure-based carcinogenicity models, (ii) ToxDelta, a new read-across model for the mutagenicity endpoint, and (iii) a genotoxicity model for metal oxide nanoparticles are introduced. Within the latter section, a best-professional-judgement method is employed for the selection of reliable data from scientific publications to develop a database of nanomaterials with their genotoxicity effects. We developed a decision tree model for the classification of these nanomaterials. The (Q)SAR models used in qualitative WOE approaches often lack transparency, resulting in risk estimates whose uncertainties need to be quantified. Our two structure-based carcinogenicity models provide transparent reasoning in their predictions. Additionally, ToxDelta provides better-supported read-across based on the analysis of differences between molecular structures. We propose a basic qualitative WOE framework that couples the in silico model predictions with inspection of similar compounds. We demonstrate the application of this framework to two realistic case studies, and discuss how to deal with different and sometimes conflicting data obtained from various in silico models in qualitative WOE terms to facilitate the structured and transparent development of answers to scientific questions.
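The read-across idea, predicting an endpoint for a query structure from its most similar tested analogues, can be sketched with RDKit fingerprints and Tanimoto similarity; the SMILES strings and labels below are placeholders, and the sketch is not the ToxDelta model.
```python
# A minimal read-across sketch (not the thesis's ToxDelta model): predict a
# binary endpoint for a query structure from its nearest analogues by Tanimoto
# similarity of Morgan fingerprints. SMILES and labels are placeholders.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

training = [("c1ccccc1O", 1), ("c1ccccc1N", 1), ("CCO", 0), ("CCCC", 0)]  # (SMILES, toxic?)
query = "c1ccccc1OC"

def fingerprint(smiles):
    return AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smiles), 2, nBits=2048)

fp_q = fingerprint(query)
scored = sorted(((DataStructs.TanimotoSimilarity(fp_q, fingerprint(s)), label)
                 for s, label in training), reverse=True)
k_nearest = scored[:2]                      # the two most similar analogues
prediction = round(sum(label for _, label in k_nearest) / len(k_nearest))
print(k_nearest, "->", "toxic" if prediction else "non-toxic")
```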
APA, Harvard, Vancouver, ISO, and other styles
26

Kromanis, Rolands. "Structural performance evaluation of bridges : characterizing and integrating thermal response." Thesis, University of Exeter, 2015. http://hdl.handle.net/10871/17440.

Full text
Abstract:
Bridge monitoring studies indicate that the quasi-static response of a bridge, while dependent on various input forces, is affected predominantly by variations in temperature. In many structures, the quasi-static response can even be approximated as equal to the thermal response. Consequently, interpretation of measurements from quasi-static monitoring requires accounting for the thermal response in the measurements. Developing solutions to this challenge, which is critical to relating measurements to decision-making and thereby realizing the full potential of SHM for bridge management, is the main focus of this research. This research proposes a data-driven approach, referred to as temperature-based measurement interpretation (TB-MI), for structural performance evaluation of bridges based on continuous bridge monitoring. The approach characterizes and predicts the thermal response of structures by exploiting the relationship between temperature distributions across a bridge and the measured bridge response. The TB-MI approach has two components: (i) a regression-based thermal response prediction (RBTRP) methodology and (ii) an anomaly detection methodology. The RBTRP methodology generates models to predict real-time structural response from distributed temperature measurements. The anomaly detection methodology analyses prediction error signals, the differences between predicted and real-time response, to detect the onset of anomaly events. In order to generate realistic data sets for evaluating the proposed TB-MI approach, this research built a small-scale truss structure in the laboratory as a test-bed. The truss is subjected to accelerated diurnal temperature cycles using a system of heating lamps, and various damage scenarios are also simulated on this structure. This research further investigates whether the underlying concept of using distributed temperature measurements to predict thermal response can be implemented using physics-based models, with the Cleddau Bridge as a case study. This research also extends the general concept of predicting bridge response from knowledge of input loads to predicting structural response due to traffic loads. Starting from the TB-MI approach, it creates an integrated approach for analyzing measured response due to both thermal and vehicular loads. The proposed approaches are evaluated on measurement time-histories from a number of case studies, including numerical models, the laboratory-scale truss and full-scale bridges. Results illustrate that the approaches accurately predict thermal response and that anomaly events are detectable using signal processing techniques such as the signal subtraction method and cointegration. The study demonstrates that the proposed TB-MI approach is applicable for interpreting measurements from full-scale bridges and can be integrated within a measurement interpretation platform for continuous bridge monitoring.
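The regression-based thermal response prediction idea can be sketched as below: fit a regression from distributed temperature measurements to a response signal on a healthy baseline period, then flag anomalies when the prediction error exceeds a threshold. The data and threshold are illustrative assumptions.
```python
# Sketch of the regression-based thermal response prediction idea described
# above (illustrative only): predict a response signal from distributed
# temperature measurements, then flag anomalies from the prediction error.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
temps = rng.normal(20, 5, size=(500, 8))            # 8 temperature sensors (stand-in data)
response = temps @ rng.normal(size=8) + rng.normal(0, 0.1, 500)   # e.g. a strain signal

train = slice(0, 300)                                # healthy baseline period
model = LinearRegression().fit(temps[train], response[train])

error = response - model.predict(temps)              # prediction error signal
threshold = 4 * error[train].std()                   # assumed anomaly threshold
anomalies = np.where(np.abs(error) > threshold)[0]
print(f"{len(anomalies)} samples exceed the anomaly threshold")
```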
APA, Harvard, Vancouver, ISO, and other styles
27

Bylesjö, Max. "Latent variable based computational methods for applications in life sciences : Analysis and integration of omics data sets." Doctoral thesis, Umeå universitet, Kemi, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-1616.

Full text
Abstract:
With the increasing availability of high-throughput systems for parallel monitoring of multiple variables, e.g. levels of large numbers of transcripts in functional genomics experiments, massive amounts of data are being collected even from single experiments. Extracting useful information from such systems is a non-trivial task that requires powerful computational methods to identify common trends and to help detect the underlying biological patterns. This thesis deals with the general computational problems of classifying and integrating high-dimensional empirical data using a latent variable based modeling approach. The underlying principle of this approach is that a complex system can be characterized by a few independent components that characterize the systematic properties of the system. Such a strategy is well suited for handling noisy, multivariate data sets with strong multicollinearity structures, such as those typically encountered in many biological and chemical applications. The main foci of the studies this thesis is based upon are applications and extensions of the orthogonal projections to latent structures (OPLS) method in life science contexts. OPLS is a latent variable based regression method that separately describes systematic sources of variation that are related and unrelated to the modeling aim (for instance, classifying two different categories of samples). This separation of sources of variation can be used to pre-process data, but also has distinct advantages for model interpretation, as exemplified throughout the work. For classification cases, a probabilistic framework for OPLS has been developed that allows the incorporation of both variance and covariance into classification decisions. This can be seen as a unification of two historical classification paradigms based on either variance or covariance. In addition, a non-linear reformulation of the OPLS algorithm is outlined, which is useful for particularly complex regression or classification tasks. The general trend in functional genomics studies in the post-genomics era is to perform increasingly comprehensive characterizations of organisms in order to study the associations between their molecular and cellular components in greater detail. Frequently, abundances of all transcripts, proteins and metabolites are measured simultaneously in an organism at a current state or over time. In this work, a generalization of OPLS is described for the analysis of multiple data sets. It is shown that this method can be used to integrate data in functional genomics experiments by separating the systematic variation that is common to all data sets considered from sources of variation that are specific to each data set.
Functional genomics is a research field whose ultimate goal is to characterize all genes in an organism's genome. This includes studies of how DNA is transcribed into mRNA, how the mRNA is then translated into proteins, and how these proteins interact with and affect the organism's biochemical processes. The traditional approach has been to study the function, regulation and translation of one gene at a time. New technology in the field has, however, made it possible to study how thousands of transcripts, proteins and small molecules behave jointly in an organism at a given point in time or over time. In practice, this also means that large amounts of data are generated even from small, isolated experiments. Finding global trends and extracting useful information from such data sets is a non-trivial computational problem that requires advanced and interpretable mathematical models. This thesis describes the development and application of computational methods for classifying and integrating large amounts of empirical (measured) data. Common to all the methods is that they are based on latent variables: variables that are not measured directly but are computed from other, observed variables. This concept is well suited to studies of complex systems that can be described by a few independent factors characterizing the main properties of the system, which is typical of many chemical and biological systems. The methods described in the thesis are general but have mainly been developed for and applied to data from biological experiments. The thesis demonstrates how these methods can be used to find complex relationships between measured data and other factors of interest without losing the properties of the method that are critical for interpreting the results. The methods are applied to find common and unique features of transcript regulation and how these are affected by and affect small molecules in poplar trees. In addition, a larger experiment in poplar is described in which the relationships between levels of transcripts, proteins and small molecules are investigated with the developed methods.
APA, Harvard, Vancouver, ISO, and other styles
28

Bylesjö, Max. "Latent variable based computational methods for applications in life sciences : Analysis and integration of omics data sets /." Umeå : Chemistry Kemi, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-1616.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Kaden, Marika. "Integration of Auxiliary Data Knowledge in Prototype Based Vector Quantization and Classification Models." Doctoral thesis, Universitätsbibliothek Leipzig, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-206413.

Full text
Abstract:
This thesis deals with the integration of auxiliary data knowledge into machine learning methods, especially prototype based classification models. The problem of classification is diverse, and evaluating the result by accuracy alone is not adequate in many applications. Therefore, the classification tasks are analyzed more deeply. Possibilities to extend prototype based methods to integrate extra knowledge about the data or the classification goal are presented in order to obtain problem-adequate models. One of the proposed extensions is a Generalized Learning Vector Quantization for the direct optimization of statistical measures beyond classification accuracy. Modifications of the metric adaptation in Generalized Learning Vector Quantization for functional data, i.e. data with lateral dependencies in the features, are also considered.
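Since the abstract builds on Generalized Learning Vector Quantization, a minimal numpy sketch of the standard GLVQ update (Sato-Yamada cost with an identity transfer function) may help fix ideas; the toy two-class data, the learning rate and the one-prototype-per-class initialization are assumptions, and none of the thesis's extensions (statistical measures, functional metric adaptation) are included.

```python
import numpy as np

def glvq_train(X, y, prototypes, proto_labels, lr=0.01, epochs=30):
    """Minimal GLVQ sketch (Sato-Yamada cost with identity transfer).

    X: (n, d) samples, y: (n,) labels,
    prototypes: (m, d) initial prototypes, proto_labels: (m,) their classes.
    """
    W = prototypes.copy()
    for _ in range(epochs):
        for x, c in zip(X, y):
            d = np.sum((W - x) ** 2, axis=1)                # squared distances to all prototypes
            same, other = proto_labels == c, proto_labels != c
            j = np.flatnonzero(same)[np.argmin(d[same])]    # closest correct prototype
            k = np.flatnonzero(other)[np.argmin(d[other])]  # closest incorrect prototype
            dj, dk = d[j], d[k]
            denom = (dj + dk) ** 2
            # Gradient steps on mu = (dj - dk) / (dj + dk).
            W[j] += lr * 4.0 * dk / denom * (x - W[j])
            W[k] -= lr * 4.0 * dj / denom * (x - W[k])
    return W

# Toy usage: two Gaussian classes, one prototype per class (initialized at class means).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
protos = np.array([X[y == 0].mean(0), X[y == 1].mean(0)])
W = glvq_train(X, y, protos, np.array([0, 1]))
```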
APA, Harvard, Vancouver, ISO, and other styles
30

Varatharajah, Thujeepan. "Integrating UCD with Agile Methods : From the perspective of UX-Designers." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-262705.

Full text
Abstract:
With the increasing popularity of Agile methods in software development projects, an emerging question is how Agile incorporates user needs into its process, which is the staple of User Centered Design (UCD). Existing reports indicate that integrating Agile and UCD has been shown to improve both the process and the end product, and that the two are a natural fit. However, there is also a general lack of guidelines on how an effective integration may be done, and further research has been requested. This study aims to provide that by portraying some aspects of how Agile and UCD may be integrated in practice, but also some factors that may affect such an integration. This is done through an empirical study, by gaining insights from the perspective of UX-designers who are part of Scrum teams. Ten UX-designers took part in semi-structured interviews, and based on a thematic analysis, the results are portrayed in terms of suggested factors to consider when integrating Agile and UCD methods.
As Agile methods grow in popularity in software development projects, the question arises of how Agile work integrates user-centered requirements into its process, an area that is the focus of User Centered Design (UCD). Available reports indicate that integrating Agile and UCD has led to improved processes and end products, and that the two processes are compatible with each other. However, there is considered to be a lack of guidelines on how to integrate the two processes, and further studies on the subject are requested. This study aims to offer exactly that by presenting some aspects of how Agile and UCD can be integrated in practice, as well as examples of factors that can affect how well the integration succeeds. This is done through an empirical study, drawing on insights from UX-designers working in different Scrum projects. Ten UX-designers took part in semi-structured interviews, and based on a thematic analysis, the results are presented as suggested factors to consider when integrating Agile and UCD methods.
APA, Harvard, Vancouver, ISO, and other styles
31

Blankenburg, Hagen [Verfasser], and Mario [Akademischer Betreuer] Albrecht. "Computational methods for integrating and analyzing human systems biology data / Hagen Blankenburg. Betreuer: Mario Albrecht." Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2014. http://d-nb.info/1062535944/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Goodbrand, Alan D. "Integrating informal and formal requirements methods, a practical approach for systems employing spatially referenced data." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ64954.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Myers, Robert J. "Problem-based learning: a case study in integrating teachers, students, methods, and hypermedia data bases." Diss., Virginia Tech, 1993. http://hdl.handle.net/10919/40302.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Lucchi, Francesca <1984&gt. "Reverse Engineering tools: development and experimentation of innovative methods for physical and geometrical data integration and post-processing." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2013. http://amsdottorato.unibo.it/5837/.

Full text
Abstract:
In recent years, the use of Reverse Engineering systems has attracted considerable interest for a wide range of applications. Consequently, many research activities focus on the accuracy and precision of the acquired data and on improvements to the post-processing phase. In this context, this PhD thesis deals with the definition of two novel methods for data post-processing and for data fusion between physical and geometrical information. In particular, a technique has been defined for characterizing the error in the 3D point coordinates acquired by an optical triangulation laser scanner, with the aim of identifying adequate correction arrays to apply under different acquisition parameters and operating conditions. The systematic error in the acquired data is thus compensated, increasing accuracy. Moreover, the definition of a 3D thermogram is examined. The geometrical information of an object and its thermal properties, obtained from a thermographic inspection, are combined so that a temperature value is available for each recognizable point. Data acquired by the optical triangulation laser scanner are also used to normalize the temperature values and make the thermal data independent of the thermal camera's point of view.
The use of Reverse Engineering techniques has become widespread and well established in recent years, to the point that these systems are commonly employed in numerous applications. Therefore, many research activities are devoted to analyzing the acquired data in terms of accuracy and precision and to defining innovative post-processing techniques. Within this panorama, the research activity presented in this doctoral thesis is aimed at defining two methodologies, one intended to facilitate data processing operations and the other to enable a straightforward data fusion between physical and geometrical information about the same object. In particular, the first approach identifies the error component in the coordinates of points acquired with an optical triangulation scanning system. A suitable correction matrix for the systematic component has been identified, depending on the operating conditions and the acquisition parameters of the system. As a result, an improvement of the system performance has been achieved in terms of increased accuracy of the acquired data. The second research topic addressed in this thesis consists of integrating the geometrical data coming from a 3D scan with the temperature information measured by a thermographic inspection. A 3D thermogram is thus obtained by appropriately registering the corresponding temperature value onto each acquired point. The geometrical information coming from the laser scan was also used to normalize the thermogram, making it independent of the viewpoint of the thermographic acquisition.
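To make the idea of a 3D thermogram concrete, here is a small sketch (assuming scipy) that assigns a temperature to each laser-scanned point from the nearest thermographic sample; the arrays, the nearest-neighbour assignment and the distance threshold are invented for illustration and are not the registration and normalization procedure developed in the thesis.

```python
import numpy as np
from scipy.spatial import cKDTree

def build_3d_thermogram(points, thermal_points, thermal_values, max_dist=0.01):
    """Assign a temperature to each scanned 3D point from the nearest thermal sample.

    points:         (n, 3) laser-scanned coordinates (metres)
    thermal_points: (m, 3) 3D positions of thermographic samples
    thermal_values: (m,)   temperatures at those positions
    Returns an (n,) array of temperatures (NaN where no sample is close enough).
    """
    tree = cKDTree(thermal_points)
    dist, idx = tree.query(points, k=1)
    temps = thermal_values[idx].astype(float)
    temps[dist > max_dist] = np.nan   # no thermal information near this point
    return temps

# Toy usage with synthetic data.
rng = np.random.default_rng(2)
cloud = rng.uniform(0, 1, (1000, 3))
thermo_xyz = rng.uniform(0, 1, (200, 3))
thermo_T = 20 + 5 * thermo_xyz[:, 0]           # temperature rising along x
thermogram = build_3d_thermogram(cloud, thermo_xyz, thermo_T, max_dist=0.2)
```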
APA, Harvard, Vancouver, ISO, and other styles
35

Heine, Jennifer Miers. "Staff Development Methods for Planning Lessons with Integrated Technology." Thesis, University of North Texas, 2002. https://digital.library.unt.edu/ark:/67531/metadc3343/.

Full text
Abstract:
This study compared cooperative and individual staff development methods for planning lessons with integrated technology. Twenty-three teachers from one elementary school participated in the study. The sample was the entire population. Nine participants were assigned to the control group, and fourteen participants were assigned to the experimental group. Names of participants were randomly drawn to determine group assignment. Participants in the control group worked individually in all three staff development sessions, while participants in the experimental group chose a partner, with whom they worked cooperatively in all three staff development sessions. Each participant or pair of participants submitted a lesson plan prior to participation in three staff development sessions. Following the sessions, each participant or pair of participants submitted a lesson plan. Three independent raters rated lesson plans to determine the participants' respective levels on the Level of Technology Implementation Observation Checklist (Moersch, 2001). The ratings of the lesson plans submitted before the training were compared to those collected after the training using a two-by-two mixed model ANOVA. The occasion (pre- vs. post-test), group, and interaction variables were all statistically significant at the .1 level; however, only the occasion variable had a strong effect size. These data suggest that (1) all teachers who participated in the training, whether individually or cooperatively, were able to develop lesson plans at a higher level of technology implementation and (2) cooperative staff development methods had no advantage over individual staff development methods with respect to teachers' ability to write lessons with integrated technology.
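For readers who want to reproduce this kind of analysis, a two-by-two mixed-model ANOVA (occasion as the within-subject factor, group as the between-subject factor) can be sketched as follows, assuming the pingouin package and entirely invented ratings; this is not the study's data.

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format data: one LoTI rating per teacher and occasion.
df = pd.DataFrame({
    "teacher":  list(range(1, 24)) * 2,
    "occasion": ["pre"] * 23 + ["post"] * 23,
    "group":    (["individual"] * 9 + ["cooperative"] * 14) * 2,
    "loti":     [2, 1, 2, 3, 2, 1, 2, 2, 3,                       # pre, individual
                 2, 2, 1, 3, 2, 2, 1, 2, 3, 2, 2, 1, 2, 2,        # pre, cooperative
                 4, 3, 4, 4, 3, 3, 4, 4, 4,                       # post, individual
                 4, 4, 3, 5, 4, 4, 3, 4, 5, 4, 4, 3, 4, 4],       # post, cooperative
})

# Two-by-two mixed-model ANOVA: occasion is the within-subject factor,
# group the between-subject factor, teachers are the subjects.
aov = pg.mixed_anova(data=df, dv="loti", within="occasion",
                     between="group", subject="teacher")
print(aov[["Source", "F", "p-unc", "np2"]])
```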
APA, Harvard, Vancouver, ISO, and other styles
36

Feitosa, Neto José Alencar. "Estimação do erro em redes de sensores sem fios." Universidade Federal de Alagoas, 2008. http://repositorio.ufal.br/handle/riufal/815.

Full text
Abstract:
Wireless Sensor Networks (WSNs) are presented in the context of information acquisition, and we propose a generic model based on the processes of signal sampling and reconstruction. We then define a measure of performance using the error when reconstructing the signal. The analytical assessment of this measure in a variety of scenarios is infeasible, so we propose and implement a Monte Carlo experiment for estimating the contribution of six factors to the performance of a WSN, namely: (i) the spatial distribution of sensors, (ii) the granularity of the phenomenon being monitored, (iii) the way in which sensors sample the phenomenon (constant characteristic functions defined on Voronoi cells or on circles), (iv) the communication between sensors (either among neighboring Voronoi cells or among sensors within a range), (v) the clustering and aggregation algorithms (LEACH and SKATER), and (vi) the reconstruction techniques (by Voronoi cells and by Kriging). We conclude that all these factors have a significant influence on the performance of a WSN, and we are able to quantitatively assess this influence.
We present wireless sensor networks in the context of information acquisition, and we propose a generic model based on the processes of signal sampling and reconstruction. Using this model, we define a measure of network performance through the signal reconstruction error. Given the analytical complexity of computing this error in different scenarios, we propose and implement a Monte Carlo experiment that allows a quantitative assessment of the contribution of several factors to the performance of a wireless sensor network. These factors are (i) the spatial distribution of the sensors, (ii) the granularity of the phenomenon under observation, (iii) the way in which the sensors sample the phenomenon (constant characteristic functions over Voronoi cells and over circles), (iv) the communication characteristics between sensors (by neighborhood between Voronoi cells and by communication radius), (v) the clustering and aggregation algorithms (LEACH and SKATER), and (vi) the reconstruction techniques (by Voronoi and by Kriging). The results obtained allow us to conclude that all these factors significantly influence the performance of a wireless sensor network and, through the working methodology, it was possible to measure this influence in all the scenarios considered.
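A stripped-down version of one cell of such a Monte Carlo experiment, reconstructing a synthetic field by nearest-sensor (Voronoi) interpolation and reporting the reconstruction error, might look like the sketch below (assuming numpy and scipy); the test field, sensor counts and replicate count are invented, and the communication, clustering and Kriging factors studied in the thesis are omitted.

```python
import numpy as np
from scipy.spatial import cKDTree

def field(xy):
    """Synthetic phenomenon to be monitored (a smooth 2D field)."""
    return np.sin(2 * np.pi * xy[:, 0]) * np.cos(2 * np.pi * xy[:, 1])

def voronoi_reconstruction_error(n_sensors, n_eval=2000, rng=None):
    """One Monte Carlo replicate: drop sensors uniformly at random, reconstruct
    the field by nearest-sensor (Voronoi cell) interpolation, return the RMSE."""
    if rng is None:
        rng = np.random.default_rng()
    sensors = rng.uniform(0, 1, (n_sensors, 2))
    readings = field(sensors)                       # each sensor samples the field at its location
    eval_pts = rng.uniform(0, 1, (n_eval, 2))
    _, nearest = cKDTree(sensors).query(eval_pts, k=1)
    reconstructed = readings[nearest]               # piecewise-constant over Voronoi cells
    return np.sqrt(np.mean((reconstructed - field(eval_pts)) ** 2))

# Monte Carlo estimate of the mean reconstruction error for two network sizes.
rng = np.random.default_rng(3)
for n in (25, 100):
    errors = [voronoi_reconstruction_error(n, rng=rng) for _ in range(200)]
    print(n, "sensors -> mean RMSE", round(float(np.mean(errors)), 3))
```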
APA, Harvard, Vancouver, ISO, and other styles
37

Singh, Nitesh Kumar [Verfasser]. "Integrating diverse biological sources and computational methods for the analysis of high-throughput expression data / Nitesh Kumar Singh." Greifswald : Universitätsbibliothek Greifswald, 2014. http://d-nb.info/1060136937/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Kramer, Frank [Verfasser], Tim [Akademischer Betreuer] Beißbarth, and Stephan [Akademischer Betreuer] Waack. "Integration of Pathway Data as Prior Knowledge into Methods for Network Reconstruction / Frank Kramer. Gutachter: Tim Beißbarth ; Stephan Waack. Betreuer: Tim Beißbarth." Göttingen : Niedersächsische Staats- und Universitätsbibliothek Göttingen, 2014. http://d-nb.info/105990764X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Meng, Chen [Verfasser], Bernhard [Akademischer Betreuer] Küster, and Dmitrij [Akademischer Betreuer] Frischmann. "Application of multivariate methods to the integrative analysis of high-throughput omics data / Chen Meng. Betreuer: Bernhard Küster. Gutachter: Bernhard Küster ; Dmitrij Frischmann." München : Universitätsbibliothek der TU München, 2016. http://d-nb.info/1082347299/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Ayllón-Benítez, Aarón. "Development of new computational methods for a synthetic gene set annotation." Thesis, Bordeaux, 2019. http://www.theses.fr/2019BORD0305.

Full text
Abstract:
Advances in the analysis of differential gene expression have sparked a keen interest in the study of gene sets showing similar expression under the same experimental condition. Classical approaches to interpreting the biological information rely on statistical methods. However, these methods focus on the best-known genes while generating redundant information that can be eliminated by taking into account the structure of the knowledge resources that provide the annotation. During this thesis, we explored different methods for annotating gene sets. First, we present the visual solutions developed to facilitate the interpretation of the annotation results for one or several gene sets. In this work, we developed a visualization prototype, called MOTVIS, that explores the annotation of a collection of gene sets. MOTVIS uses a combination of two interconnected views: a tree view that provides a global overview of the data as well as detailed information about the gene sets, and a visualization that makes it possible to focus on the annotation terms of interest. Combining these two visualizations has the advantage of facilitating the understanding of biological results when complex data are represented. Second, we address the limitations of statistical enrichment approaches by proposing an original method that analyzes the impact of using different semantic similarity measures to annotate gene sets. To evaluate the impact of each measure, we considered two criteria as relevant for assessing a high-quality synthetic annotation of a gene set: (i) the number of annotation terms must be reduced considerably while keeping a sufficient level of detail, and (ii) the number of genes described by the selected terms must be maximized. Thus, nine semantic similarity measures were analyzed to find the best possible compromise between reducing the number of terms and maintaining a sufficient level of detail provided by the chosen terms. Using the Gene Ontology (GO) to annotate the gene sets, we obtained better results for node-based semantic similarity measures, which use the attributes of the terms, compared to edge-based measures, which use the relations connecting the terms. Finally, we developed GSAn, a web server based on the previous developments and dedicated to the a priori annotation of a gene set. GSAn integrates MOTVIS as a visualization tool to jointly present the representative terms and the genes of the studied set. We compared GSAn with enrichment tools and showed that the results of GSAn constitute a good compromise for maximizing gene coverage while minimizing the number of terms. The last point explored is a step aimed at studying the feasibility of integrating other resources into GSAn. We thus integrated two resources, one describing human diseases with the Disease Ontology (DO) and the other metabolic pathways with Reactome. The goal was to provide additional information to the end users of GSAn. We evaluated the impact of adding these resources to GSAn when analyzing gene sets.
The integration improved the results by covering more genes without significantly affecting the number of terms involved. Then, GO terms were mapped to DO and Reactome terms, both before and after the computations performed by GSAn. We showed that a mapping process applied beforehand yielded a larger number of inter-relations between the two knowledge resources.
The revolution in new sequencing technologies, by strongly improving the production of omics data, is leading to new understandings of the relations between genotype and phenotype. To interpret and analyze data grouped according to a phenotype of interest, methods based on statistical enrichment have become a standard in biology. However, these methods synthesize the biological information by a priori selecting the over-represented terms and focus on the most studied genes, which may represent a limited coverage of annotated genes within a gene set. During this thesis, we explored different methods for annotating gene sets. In this context, we developed three studies allowing the annotation of gene sets and thus improving the understanding of their biological context. First, visualization approaches were applied to represent annotation results provided by enrichment analysis for a gene set or a repertoire of gene sets. In this work, a visualization prototype called MOTVIS (MOdular Term VISualization) has been developed to provide an interactive representation of a repertoire of gene sets combining two visual metaphors: a treemap view that provides an overview and also displays detailed information about gene sets, and an indented tree view that can be used to focus on the annotation terms of interest. MOTVIS has the advantage of overcoming the limitations of each visual metaphor when used individually. This illustrates the interest of using different visual metaphors to facilitate the comprehension of biological results by representing complex data. Secondly, to address the issues of enrichment analysis, a new method for analyzing the impact of using different semantic similarity measures on gene set annotation was proposed. To evaluate the impact of each measure, two relevant criteria were considered for characterizing a "good" synthetic gene set annotation: (i) the number of annotation terms has to be drastically reduced while maintaining a sufficient level of detail, and (ii) the number of genes described by the selected terms should be as large as possible. Thus, nine semantic similarity measures were analyzed to identify the best possible compromise between both criteria while maintaining a sufficient level of detail. Using GO to annotate the gene sets, we observed better results with node-based measures that use the terms' characteristics than with edge-based measures that use the relations between terms. The annotation of the gene sets achieved with the node-based measures did not exhibit major differences regardless of the characteristics of the terms used. Then, we developed GSAn (Gene Set Annotation), a novel gene set annotation web server that uses semantic similarity measures to synthesize a priori GO annotation terms. GSAn contains the interactive visualization MOTVIS, dedicated to visualizing the representative terms of gene set annotations. Compared to enrichment analysis tools, GSAn has shown excellent results in terms of maximizing the gene coverage while minimizing the number of terms. Finally, the third work consisted of enriching the annotation results provided by GSAn. Since the knowledge described in GO may not be sufficient for interpreting gene sets, other biological information, such as pathways and diseases, may be useful to provide a wider biological context. Thus, two additional knowledge resources, namely Reactome and the Disease Ontology (DO), were integrated within GSAn. In practice, GO terms were mapped to terms of Reactome and DO, before and after applying the GSAn method.
The integration of these resources improved the results in terms of gene coverage without significantly affecting the number of terms involved. Two strategies were applied to find mappings (generated or extracted from the web) between each new resource and GO. We have shown that applying the mapping process before running the GSAn method yields a larger number of inter-relations between the two knowledge resources.
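The node-based versus edge-based distinction discussed above can be illustrated with a toy node-based measure: a Resnik-style similarity that scores two terms by the information content of their most informative common ancestor. The tiny ontology and annotation corpus below are invented stand-ins for GO, and this is not one of the nine measures as implemented in GSAn.

```python
import math
from itertools import chain

# Toy is-a ontology: child -> parents (a tiny, invented stand-in for GO).
PARENTS = {
    "metabolic_process": [],
    "catabolic_process": ["metabolic_process"],
    "biosynthetic_process": ["metabolic_process"],
    "lipid_catabolic_process": ["catabolic_process"],
    "protein_biosynthesis": ["biosynthetic_process"],
}
# Invented annotation corpus used to estimate term probabilities.
ANNOTATIONS = {
    "geneA": {"lipid_catabolic_process"},
    "geneB": {"protein_biosynthesis"},
    "geneC": {"catabolic_process"},
    "geneD": {"metabolic_process"},
}

def ancestors(term):
    """Return the term itself plus all of its ancestors."""
    result = {term}
    for p in PARENTS[term]:
        result |= ancestors(p)
    return result

def information_content():
    """IC(t) = -log p(t), where p(t) counts genes annotated to t or any descendant."""
    counts = {t: 0 for t in PARENTS}
    for terms in ANNOTATIONS.values():
        for t in set(chain.from_iterable(ancestors(x) for x in terms)):
            counts[t] += 1
    total = len(ANNOTATIONS)
    return {t: -math.log(c / total) for t, c in counts.items() if c > 0}

def resnik(t1, t2, ic):
    """Node-based (Resnik) similarity: IC of the most informative common ancestor."""
    common = ancestors(t1) & ancestors(t2)
    return max(ic[t] for t in common)

ic = information_content()
print(resnik("lipid_catabolic_process", "catabolic_process", ic))
print(resnik("lipid_catabolic_process", "protein_biosynthesis", ic))
```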
APA, Harvard, Vancouver, ISO, and other styles
41

Melamid, Elan. "What works? integrating multiple data sources and policy research methods in assessing need and evaluating outcomes in community-based child and family service systems /." Santa Monica, Calif. : RAND, 2002. http://www.rand.org/publications/RGSD/RGSD161/RGSD161.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Chen, Kai. "Mitigating Congestion by Integrating Time Forecasting and Realtime Information Aggregation in Cellular Networks." FIU Digital Commons, 2011. http://digitalcommons.fiu.edu/etd/412.

Full text
Abstract:
An iterative travel time forecasting scheme, named the Advanced Multilane Prediction based Real-time Fastest Path (AMPRFP) algorithm, is presented in this dissertation. This scheme is derived from the conventional kernel estimator based prediction model by associating the real-time nonlinear impacts caused by neighboring arcs' traffic patterns with the historical traffic behaviors. The AMPRFP algorithm is evaluated by prediction of the travel time of congested arcs in the urban area of Jacksonville City. Experimental results illustrate that the proposed scheme is able to significantly reduce both the relative mean error (RME) and the root-mean-squared error (RMSE) of the predicted travel time. To obtain high quality real-time traffic information, which is essential to the performance of the AMPRFP algorithm, a data clean scheme enhanced empirical learning (DCSEEL) algorithm is also introduced. This novel method investigates the correlation between distance and direction in the geometrical map, which is not considered in existing fingerprint localization methods. Specifically, empirical learning methods are applied to minimize the error that exists in the estimated distance. A direction filter is developed to clean joints that have a negative influence on the localization accuracy. Synthetic experiments in urban, suburban and rural environments are designed to evaluate the performance of the DCSEEL algorithm in determining the cellular probe's position. The results show that the cellular probe's localization accuracy can be notably improved by the DCSEEL algorithm. Additionally, a new fast correlation technique is developed for overcoming the time efficiency problem of the existing correlation-algorithm-based floating car data (FCD) technique. The matching process is transformed into a 1-dimensional (1-D) curve matching problem, and the Fast Normalized Cross-Correlation (FNCC) algorithm is introduced to supersede the Pearson product-moment correlation coefficient (PMCC) algorithm in order to achieve the real-time requirement of the FCD method. The fast correlation technique shows a significant improvement in reducing the computational cost without affecting the accuracy of the matching process.
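As an illustration of the 1-D curve matching step described above, the following numpy sketch slides a probe curve along a reference curve and scores each offset with a normalized cross-correlation; the synthetic trace and the plain (non-fast) formulation are assumptions, so this shows the idea behind FNCC-style matching rather than the dissertation's implementation.

```python
import numpy as np

def normalized_cross_correlation(reference, probe):
    """Slide `probe` along `reference` and return the NCC score at each offset.

    Both inputs are 1-D arrays; the score at offset k compares probe with
    reference[k:k+len(probe)] after removing each window's mean and scale.
    """
    n = len(probe)
    p = (probe - probe.mean()) / probe.std()
    scores = np.empty(len(reference) - n + 1)
    for k in range(len(scores)):
        window = reference[k:k + n]
        w = (window - window.mean()) / window.std()
        scores[k] = float(np.dot(p, w)) / n          # score lies in [-1, 1]
    return scores

# Toy usage: find where a curve fragment best matches a longer trace.
rng = np.random.default_rng(4)
trace = np.cumsum(rng.normal(0, 1, 500))             # synthetic 1-D curve
fragment = trace[200:260] + rng.normal(0, 0.1, 60)   # noisy copy of a segment
scores = normalized_cross_correlation(trace, fragment)
print("best matching offset:", int(np.argmax(scores)))  # expected near 200
```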
APA, Harvard, Vancouver, ISO, and other styles
43

Arnold, Matthias [Verfasser], Hans-Werner [Akademischer Betreuer] [Gutachter] Mewes, Florian [Gutachter] Kronenberg, and Fabian J. [Gutachter] Theis. "Supporting the evidence for human trait-associated genetic variants by computational biology methods and multi-level data integration. / Matthias Arnold ; Gutachter: Florian Kronenberg, Fabian J. Theis, Hans-Werner Mewes ; Betreuer: Hans-Werner Mewes." München : Universitätsbibliothek der TU München, 2016. http://d-nb.info/1113749164/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Ruppel, Antonia [Verfasser], Frank [Akademischer Betreuer] Lisker, Frank [Gutachter] Lisker, and Laura [Gutachter] Crispini. "A multi-method approach to study the geodynamic evolution of eastern Dronning Maud Land in East Antarctica by integrating geophysical data with surface geology / Antonia Ruppel ; Gutachter: Frank Lisker, Laura Crispini ; Betreuer: Frank Lisker." Bremen : Staats- und Universitätsbibliothek Bremen, 2019. http://d-nb.info/1194156746/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Obeidat, Laith Mohammad. "Enhancing the Indoor-Outdoor Visual Relationship: Framework for Developing and Integrating a 3D-Geospatial-Based Inside-Out Design Approach to the Design Process." Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/97726.

Full text
Abstract:
This research study aims to enhance the effectiveness of the architectural design process regarding the exploration and framing of the best visual connections to the outside environment within built environments. Specifically, it aims to develop a framework for developing and integrating an inside-out design approach, augmented and informed by digital 3D geospatial data, as a way to enhance designers' explorative ability and decision-making regarding the visual connection to the outside environment. To do so, the strategy of logical argumentation is used to analyze and study the phenomenon of making visual connections to a surrounding context. The initial recommendation of this stage is to integrate an inside-out design approach that operates through digital immersion in 3D digital representations of the surrounding context. This strategy helps to identify the basic logical steps of the proposed inside-out design process. Then, the method of immersive case study is used to test and further develop the proposed process by designing a specific building, an Art Museum on the campus of Virginia Tech. Finally, the Delphi method is used to evaluate the necessity and importance of the proposed approach to the design process and its ability to achieve this goal. A multi-round survey was distributed to measure the consensus among a number of experts regarding the proposed design approach and its developed design tool. Overall, the findings indicate full agreement among the participating experts regarding the proposed design approach, with some differing concerns regarding the proposed design tool.
Doctor of Philosophy
Achieving a well-designed visual connection to one's surroundings is considered by many philosophers and theorists to be an essential aspect of our spatial experience within built environments. The goal of this research is to help designers achieve better visual connections to the outside environment and therefore create more meaningful spatial experiences within the built environment. This research aims to enhance the ability of designers to explore the best possible views and make the right design decisions to frame these views of the outdoors from the inside of their buildings. Of course, the physical presence of designers at a building site has been the traditional method of determining the best views; however, this is not always possible during the design process for many reasons. Thus, this research aims to find a more effective alternative to visiting a building site in order to inform each design decision regarding the quality of its visual connection to the outdoors. To do so, this research developed a proposed inside-out design approach to be integrated into the design process. Specifically, it outlines a process that allows designers to be digitally immersed within an accurate 3D representation of the surrounding context, which helps them explore views from multiple angles inside the space and, in response, make the most suitable design decisions. To further develop the proposed process, it was used in this research to design an Art Museum on the Virginia Tech campus.
APA, Harvard, Vancouver, ISO, and other styles
46

Obrocki, Lea Marit [Verfasser]. "Advances in geoarchaeological site formation research by integrating geophysical methods, direct push sensing techniques and stratigraphic borehole data - case studies from central Europe and the western Peloponnese around ancient Olympia - / Lea Marit Obrocki." Mainz : Universitätsbibliothek Mainz, 2019. http://d-nb.info/118923730X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Wang, Yuepeng. "Integrative methods for gene data analysis and knowledge discovery on the case study of KEDRI's brain gene ontology a thesis submitted to Auckland University of Technology in partial fulfilment of the requirements for the degree of Master of Computer and Information sciences, 2008 /." Click here to access this resource online, 2008. http://hdl.handle.net/10292/467.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Lemaréchal, Jean-Didier. "Estimation des propriétés dynamiques des réseaux cérébraux à large échelle par modèles de masse neurale de potentiels évoqués cortico-corticaux Comparison of two integration methods for dynamic causal modeling of electrophysiological data. NeuroImage An atlas of neural parameters based on dynamic causal modeling of cortico-cortical evoked potentials." Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALS007.

Full text
Abstract:
This thesis work concerns the modeling of cortico-cortical evoked potentials (CCEPs) induced by intracerebral electrical stimulation during epilepsy surgery procedures with stereo-electroencephalography. For this purpose, we use neural mass models of the dynamic causal modeling (DCM) type. First, we demonstrate the importance of using an accurate integration technique to solve the system of differential equations that formalizes the dynamics of the model (Lemaréchal et al., 2018), in particular for a precise estimation of the neuronal parameters of the model. In a second study, we develop this methodology and apply it to the CCEPs of the F-TRACT project database. Axonal propagation delays and speeds between brain regions, as well as local synaptic time constants, are estimated and projected onto cortical parcellations validated by the international neuroimaging community. The large number of data sets used in this study (>300) makes it possible, in particular, to highlight differences in the dynamic properties of connectivity as a function of the age of the populations considered (Lemaréchal et al., submitted). Finally, the last piece of work shows how, in the Bayesian context of DCM, a connectivity atlas can be used to improve the specification and estimation of a neural mass model for explaining surface electrophysiological data of the electroencephalographic or magnetoencephalographic type, by providing prior distributions on its connectivity parameters. Overall, this thesis proposes new estimates of the dynamic properties of cortico-cortical interactions. Thanks to the publication and release of new atlases gathering these neuronal properties, the generated results can already be used for a better specification and a more precise estimation of whole-brain neuronal models.
This thesis work aims at modeling cortico-cortical evoked potentials (CCEPs) induced by intracortical direct electrical stimulation in epileptic patients recorded with stereo-electroencephalography during epilepsy surgery. Neural mass models implemented within the dynamic causal modeling (DCM) framework are used for this purpose. We first demonstrate the importance of using an accurate integration scheme to solve the system of differential equations governing the global dynamics of the model, in particular to obtain precise estimates of the neuronal parameters of the model (Lemaréchal et al., 2018). In a second study, this methodology is applied to a large dataset from the F-TRACT project. The axonal conduction delays and speeds between brain regions, as well as the local synaptic time constants, are estimated, and their spatial mapping is obtained based on validated cortical parcellation schemes. Interestingly, the large amount of data included in this study allows differences in brain dynamics between the young and the older populations to be highlighted (Lemaréchal et al., submitted). Finally, in the Bayesian context of DCM, we show that an atlas of connectivity can improve the specification and the estimation of a neural mass model, for electroencephalographic and magnetoencephalographic studies, by providing a priori distributions on the connectivity parameters of the model. To sum up, this work provides novel insights into the dynamical properties of cortico-cortical interactions. The publication of our results in the form of an atlas of neuronal properties already provides an effective tool for a better specification of whole brain neuronal models.
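The point about integration schemes can be made concrete on a single second-order synaptic equation of the kind that appears in neural mass models: a coarse forward-Euler step visibly distorts the response that an adaptive Runge-Kutta solver resolves. The parameter values, the pulse input and the simplified one-population equation in the sketch below are assumptions, not the DCM implementation evaluated in the thesis.

```python
import numpy as np
from scipy.integrate import solve_ivp

H, tau = 3.25, 0.01          # synaptic gain (mV) and time constant (s), illustrative values

def u(t):
    """Brief input pulse, a crude stand-in for an electrical stimulation."""
    return 1.0 if 0.0 <= t < 0.002 else 0.0

def rhs(t, state):
    """Second-order synaptic dynamics written as a first-order system."""
    x, xdot = state
    return [xdot, (H / tau) * u(t) - (2.0 / tau) * xdot - x / tau**2]

t_end = 0.1

# Coarse forward-Euler integration (deliberately large step).
dt = 0.002
t_euler = np.arange(0.0, t_end, dt)
state = np.zeros(2)
x_euler = []
for t in t_euler:
    x_euler.append(state[0])
    state = state + dt * np.array(rhs(t, state))

# Adaptive Runge-Kutta reference solution on a fine step.
sol = solve_ivp(rhs, (0.0, t_end), [0.0, 0.0], max_step=1e-4, dense_output=True)
x_ref = sol.sol(t_euler)[0]

print("max |Euler - RK| :", float(np.max(np.abs(np.array(x_euler) - x_ref))))
```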
APA, Harvard, Vancouver, ISO, and other styles
49

Qasim, Lara. "System reconfiguration : A Model based approach; From an ontology to the methodology bridging engineering and operations Model-Based System Reconfiguration: A Descriptive Study of Current Industrial Challenges Towards a reconfiguration framework for systems engineering integrating use phase data An overall Ontology for System Reconfiguration using Model-Based System Engineering An Ontology for System Reconfiguration: Integrated Modular Avionics IMA Case Study A model-based method for System Reconfiguration." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPAST031.

Full text
Abstract:
System evolutions must be managed in a way that guarantees the effectiveness and efficiency of the system throughout its life cycle, in particular when dealing with complex systems that require years of development and decades of use. System reconfiguration is essential for the management of complex systems, because it ensures the flexibility and adaptability of systems with respect to their evolution. System reconfiguration ensures operational effectiveness and increases system qualities (for example, reliability, availability, safety, etc.). This thesis was carried out in partnership with a company operating in the aerospace, space, transportation, defense and security domains. Companies are showing a growing interest in system reconfiguration in order to guarantee their operational effectiveness. The objective of this thesis is to propose a model-based approach to support system reconfiguration. Through a descriptive study, based on a field study and an analysis of the state of the art, the development of support for system reconfiguration was identified as a major industrial challenge. The main challenge consists of identifying the data related to system reconfiguration and the mechanisms for integrating them in order to reach this objective. In this thesis, we present an ontology, which we have named OSysRec, that integrates the data necessary for system reconfiguration and management. In addition, OSysRec aggregates the three aspects that are indispensable for managing the system reconfiguration process: structure, dynamics, and management. We also present a model-based method (MBSysRec) that integrates the reconfiguration data and bridges the engineering and operational phases. This multidisciplinary method involves combinatorial generation of configurations and multi-criteria decisions for their evaluation and selection. On two case studies, we were able to demonstrate the validity of this method for finding efficient and relevant solutions. This thesis is a first step towards the implementation of a model-based approach for system reconfiguration that enables system flexibility and adaptability.
System evolutions have to be managed to ensure system effectiveness and efficiency through its whole lifecycle, particularly when it comes to complex systems that take years of development and dozens of years of usage. System Reconfiguration is key in complex systems management, as it is an enabler of system flexibility and adaptability regarding system evolutions. System reconfiguration ensures operational effectiveness and increases system qualities (e.g., reliability, availability, safety, and usability).This research has been conducted in the context of a large international aerospace, space, ground transportation, defense, and security company. This research aims at supporting system reconfiguration during operations.First, we conducted a descriptive study based on a field study and a literature review to identify the industrial challenges related to system reconfiguration. The main issue lies in the development of reconfiguration support. More specifically, challenges related to data identification and integration were identified.In this thesis, we present the OSysRec ontology, which captures and formalizes the reconfiguration data. The ontology synthesizes the structure, dynamics, and management aspects necessary to support the system reconfiguration process in an overall manner.Furthermore, we present a model-based method (MBSysRec) that integrates system reconfiguration data and bridges both the engineering and the operational phases. MBSysRec is a multidisciplinary method that involves combinatorial configuration generation and a multi-criteria decision-making method for configuration evaluation and selection.This thesis is a step towards a model-based approach for system reconfiguration of evolving systems, ensuring their flexibility and adaptability
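The two ingredients named above, combinatorial configuration generation and multi-criteria evaluation, can be sketched in a few lines; the subsystems, criteria, weights and the weighted-sum scoring below are invented for illustration and do not represent the MBSysRec method itself.

```python
from itertools import product

# Hypothetical design space: alternative options for each subsystem.
OPTIONS = {
    "power":   [{"name": "battery",   "cost": 2, "reliability": 0.95,  "mass": 3},
                {"name": "fuel_cell", "cost": 5, "reliability": 0.99,  "mass": 2}],
    "compute": [{"name": "single",    "cost": 1, "reliability": 0.90,  "mass": 1},
                {"name": "redundant", "cost": 3, "reliability": 0.995, "mass": 2}],
    "link":    [{"name": "radio",     "cost": 1, "reliability": 0.97,  "mass": 1},
                {"name": "optical",   "cost": 4, "reliability": 0.93,  "mass": 1}],
}
WEIGHTS = {"cost": -0.3, "reliability": 10.0, "mass": -0.2}   # invented preferences

def generate_configurations(options):
    """Combinatorial generation: one option per subsystem."""
    names, choices = zip(*options.items())
    for combo in product(*choices):
        yield dict(zip(names, combo))

def score(config, weights):
    """Weighted-sum multi-criteria evaluation of one configuration."""
    total_cost = sum(part["cost"] for part in config.values())
    total_mass = sum(part["mass"] for part in config.values())
    reliability = 1.0
    for part in config.values():
        reliability *= part["reliability"]          # series reliability assumption
    return (weights["cost"] * total_cost
            + weights["mass"] * total_mass
            + weights["reliability"] * reliability)

best = max(generate_configurations(OPTIONS), key=lambda c: score(c, WEIGHTS))
print({k: v["name"] for k, v in best.items()}, round(score(best, WEIGHTS), 3))
```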
APA, Harvard, Vancouver, ISO, and other styles
50

Chen, Jiuqiang. "Designing scientific workflows following a structure and provenance-aware strategy." Phd thesis, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00931122.

Full text
Abstract:
Workflow systems provide provenance management modules that collect information about executions (data consumed and produced), making it possible to ensure the reproducibility of an experiment. For several reasons, the complexity of the workflow structure and of its executions is increasing, making workflow reuse more difficult. The overall objective of this thesis is to improve workflow reuse by providing strategies to reduce the complexity of workflow structures while preserving provenance. Two strategies are introduced. First, we introduce SPFlow, a provenance-preserving rewriting algorithm for scientific workflows that transforms any directed acyclic graph (DAG) into a simpler, series-parallel (SP) structure. These structures allow the design of polynomial algorithms for performing complex operations on workflows (for example, comparing them), whereas the same operations are associated with NP-hard problems for general DAG structures. Second, we propose a technique capable of reducing the redundancy present in workflows by detecting and removing the patterns responsible for this redundancy, called "anti-patterns". We designed the DistillFlow algorithm, which can transform a workflow into a semantically equivalent "distilled" workflow with a more concise structure, from which anti-patterns are removed as far as possible. Our solutions (SPFlow and DistillFlow) have been systematically tested on large collections of real workflows, in particular with the Taverna system. Our tools are available at: https://www.lri.fr/~chenj/.
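To give a flavour of the kind of structural rewriting involved, the toy networkx sketch below collapses duplicated tasks that apply the same operation to the same inputs in a workflow DAG; this simple merge is only an illustration of redundancy removal and is neither SPFlow nor DistillFlow.

```python
import networkx as nx

def merge_duplicate_tasks(workflow):
    """Merge nodes that perform the same operation on exactly the same inputs.

    `workflow` is a DAG whose nodes carry an 'op' attribute. Duplicated tasks
    (same op, same set of predecessors) are collapsed into a single node.
    """
    g = workflow.copy()
    changed = True
    while changed:
        changed = False
        seen = {}
        for node in list(nx.topological_sort(g)):
            key = (g.nodes[node]["op"], frozenset(g.predecessors(node)))
            if key in seen:
                keep = seen[key]
                for succ in list(g.successors(node)):   # reroute outputs to the kept node
                    g.add_edge(keep, succ)
                g.remove_node(node)
                changed = True
            else:
                seen[key] = node
    return g

# Toy workflow: the same normalization step is duplicated on the same input.
wf = nx.DiGraph()
wf.add_node("load", op="load_csv")
wf.add_node("norm1", op="normalize")
wf.add_node("norm2", op="normalize")
wf.add_node("merge", op="join")
wf.add_edges_from([("load", "norm1"), ("load", "norm2"),
                   ("norm1", "merge"), ("norm2", "merge")])

reduced = merge_duplicate_tasks(wf)
print(sorted(reduced.nodes()))   # ['load', 'merge', 'norm1'] (one normalize node remains)
```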
APA, Harvard, Vancouver, ISO, and other styles
