Dissertations / Theses on the topic 'Meta Models'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'Meta Models.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Bäckström, Fredrik, and Anders Ivarsson. "Meta-Model Guided Error Correction for UML Models." Thesis, Linköping University, Department of Computer and Information Science, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-8746.

Full text
Abstract:

Modeling is a complex process that is hard to do in a structured and controlled way. Many companies provide a set of guidelines for model structure, naming conventions and other modeling rules. Using meta-models to describe these guidelines makes it possible to check whether a UML model follows the guidelines or not. Providing this error checking of UML models is only one step on the way to making modeling software an even more valuable and powerful tool.

Moreover, by providing correction suggestions and automatic correction of these errors, we try to give the modeler as much help as possible in creating correct UML models. Since the area of model correction based on meta-models has not been researched earlier, we have taken an explorative approach.

The aim of the project is to create an extension of the program MetaModelAgent, by Objektfabriken, which is a meta-modeling plug-in for IBM Rational Software Architect. The thesis shows that error correction of UML models based on meta-models is a possible way to provide automatic checking of modeling guidelines. The developed prototype is able to give correction suggestions and automatic correction for many types of errors that can occur in a model.

The results imply that meta-model guided error correction techniques should be further researched and developed to enhance the functionality of existing modeling software.
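As an illustration of the general idea only (not the MetaModelAgent extension developed in the thesis), the following Python sketch checks a toy UML-like model against a couple of invented meta-model guideline rules and proposes corrections; the rule set and model structure are hypothetical.

```python
import re

# Hypothetical, simplified representation of a model: a list of named classes.
model = [
    {"name": "customerOrder", "documented": False},
    {"name": "Invoice", "documented": True},
]

def check_naming(cls):
    """Guideline: class names should be UpperCamelCase."""
    if not re.match(r"^[A-Z][A-Za-z0-9]*$", cls["name"]):
        fixed = cls["name"][0].upper() + cls["name"][1:]
        return f"rename '{cls['name']}' to '{fixed}'"
    return None

def check_documentation(cls):
    """Guideline: every class needs a documentation note."""
    if not cls["documented"]:
        return f"add a documentation note to '{cls['name']}'"
    return None

rules = [check_naming, check_documentation]

for cls in model:
    for rule in rules:
        suggestion = rule(cls)
        if suggestion:
            print(f"Guideline violation: {suggestion}")
```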

APA, Harvard, Vancouver, ISO, and other styles
2

Trefan, Laszlo. "Development of empirical models for pork quality." Thesis, University of Edinburgh, 2011. http://hdl.handle.net/1842/5761.

Full text
Abstract:
Pork quality is an important issue for the whole meat chain, from producers, abattoirs and retailers through to customers, and is affected by a web of multi-factorial actions that occur throughout the pork production chain. A vast amount of information is available on how these diverse factors influence different pork quality traits. However, results derived from individual studies often vary and are in some cases even contradictory due to different experimental designs or different pork quality assessment techniques or protocols. Also, individual influencing factors are often studied in isolation, ignoring interacting effects. A suitable method is therefore required to account for a range of interacting factors, to combine the results from different experiments and to derive generic response laws. The aim of this thesis was to use meta-analyses to produce quantitative, predictive models that describe how diverse factors affect pork quality over a range of experimental conditions.
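By way of illustration only (this is not the thesis's pork-quality model), a minimal random-effects pooling step of the kind that underlies such meta-analyses can be sketched with the DerSimonian-Laird estimator; the effect sizes and variances below are made up.

```python
import numpy as np

# Hypothetical per-study effect sizes (e.g. differences in a quality trait)
# and their within-study variances.
y = np.array([0.30, 0.10, 0.45, 0.22, 0.05])
v = np.array([0.02, 0.04, 0.03, 0.05, 0.01])

# Fixed-effect weights and heterogeneity statistic Q.
w = 1.0 / v
mu_fe = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - mu_fe) ** 2)
k = len(y)

# DerSimonian-Laird estimate of the between-study variance tau^2.
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (k - 1)) / c)

# Random-effects weights and pooled estimate with its standard error.
w_re = 1.0 / (v + tau2)
mu_re = np.sum(w_re * y) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))

print(f"tau^2 = {tau2:.4f}, pooled effect = {mu_re:.3f} +/- {1.96 * se_re:.3f}")
```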
APA, Harvard, Vancouver, ISO, and other styles
3

Liu, Yang. "Automatic calibration of numerical models using meta-models." Thesis, University of Exeter, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.430566.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Bashar, Hasanain. "Meta-modelling of intensive computational models." Thesis, University of Sheffield, 2016. http://etheses.whiterose.ac.uk/13667/.

Full text
Abstract:
Engineering process design for applications that use computationally intensive nonlinear dynamical systems can be expensive in time and resources. The presented work reviews the concept of a meta-model as a way to improve the efficiency of this process. The proposed meta-model will have a computational advantage in implementation over the computationally intensive model, therefore reducing the time and resources required to design an engineering process. This work proposes to meta-model a computationally intensive nonlinear dynamical system using a reduced-order linear parameter-varying system modelling approach with local linear models in velocity-based linearization form. The parameters of the linear time-varying meta-model are blended using Gaussian process regression models. The meta-model structure is transparent and relates directly to the dynamics of the computationally intensive model, while the velocity-based local linear models faithfully reproduce the original system dynamics anywhere in the operating space of the system. The non-parametric blending of the meta-model's local linear models by Gaussian process regression models is ideal for dealing with data sparsity and provides uncertainty information about the meta-model predictions. The proposed meta-model structure has been applied to second-order nonlinear dynamical systems, a small nonlinear transmission line model, a medium-sized fluid dynamics problem and the computationally intensive nonlinear transmission line model of order 5000.
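A minimal sketch of the blending idea, assuming scikit-learn is available: a Gaussian process regressor interpolates one local-model parameter (here an invented scalar gain) across the operating space and returns a predictive standard deviation alongside the mean. This is illustrative only and is not the reduced-order LPV meta-model built in the thesis.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical operating points at which local linear models were identified,
# and the value of one local-model parameter (e.g. a linearized gain) at each.
op_points = np.array([[0.0], [0.5], [1.0], [1.5], [2.0]])
local_gain = np.array([1.00, 1.35, 1.90, 2.60, 3.45])

# GP regression blends the local parameters over the operating space.
kernel = RBF(length_scale=0.5) + WhiteKernel(noise_level=1e-4)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(op_points, local_gain)

# Query the blended parameter (and its uncertainty) at new operating points.
query = np.linspace(0.0, 2.0, 9).reshape(-1, 1)
mean, std = gp.predict(query, return_std=True)
for q, m, s in zip(query.ravel(), mean, std):
    print(f"operating point {q:.2f}: gain ~ {m:.2f} (std {s:.3f})")
```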
APA, Harvard, Vancouver, ISO, and other styles
5

Kajero, Olumayowa T. "Meta-model assisted calibration of computational fluid dynamics simulation models." Thesis, University of Surrey, 2017. http://epubs.surrey.ac.uk/813857/.

Full text
Abstract:
Computational fluid dynamics (CFD) is a computer-based analysis of the dynamics of fluid flow, and it is widely used in chemical and process engineering applications. However, computation usually becomes a herculean task when calibration of the CFD models with experimental data, or sensitivity analysis of the output relative to the inputs, is required. This is due to the simulation process being highly computationally intensive, often requiring a large number of simulation runs, with a single simulation run taking hours or days to be completed. Hence, in this research project, the kriging meta-modelling method was coupled with the expected improvement (EI) global optimisation approach to address the CFD model calibration challenge. In addition, a kriging meta-model based sensitivity analysis technique was implemented to study the model parameter input-output relationship. A novel EI measure was developed for the sum of squared errors (SSE), which conforms to a generalised chi-square distribution and for which existing normal distribution-based EI measures are not applicable. This novel EI measure suggested the values of the CFD model parameters to simulate with, hence minimising the SSE and improving the match between simulation and experiments. To test the proposed methodology, a non-CFD numerical simulation case of a semi-batch reactor was considered as a case study, which confirmed a saving in computational time and an improvement in the agreement of the simulation model with the actual plant data. The usefulness of the developed method has subsequently been demonstrated through a CFD case study of single-phase flow in both a straight-type and a convergent-divergent-type annular jet pump, where both a single turbulence model parameter, C_μ, and two turbulence model parameters, C_μ and C_2ε, were considered for calibration. Sensitivity analysis was subsequently based on C_μ as the input parameter. In calibration using both single and two model parameters, a significant improvement in the agreement with experimental data was obtained. The novel method gave a significant reduction in simulation computational time as compared to traditional CFD. A new correlation was proposed relating C_μ to the flow ratio, which could serve as a guide for future simulations. The meta-model based calibration aids exploration of different parameter combinations which would have been computationally challenging using CFD alone. In addition, computational time was significantly reduced with kriging-assisted sensitivity analysis studies which explored the effect of different C_μ values on the output, the pressure coefficient. The numerical simulation case of the semi-batch reactor was also used as a basis of comparison between the previous EI measure and the newly proposed EI measure, which overall revealed that the latter gave a significant improvement with a smaller number of simulation runs as compared to the former. The research studies carried out have hence been able to propose and successfully demonstrate the use of a novel methodology for faster calibration and sensitivity analysis studies of computational fluid dynamics simulations. This is essential in the design, analysis and optimisation of chemical and process engineering systems.
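For orientation, a minimal sketch of the standard (Gaussian) expected-improvement criterion that kriging-based calibration builds on, using made-up surrogate predictions; note that the thesis develops a different, chi-square-based EI for the SSE, which is not what is shown here.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """Standard EI for minimisation, assuming a Gaussian predictive distribution
    with mean `mu` and standard deviation `sigma` at each candidate point."""
    sigma = np.maximum(sigma, 1e-12)          # avoid division by zero
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# Hypothetical kriging predictions of the calibration objective (e.g. SSE)
# at a handful of candidate parameter values.
candidates = np.array([0.05, 0.07, 0.09, 0.11, 0.13])   # e.g. trial C_mu values
mu_pred    = np.array([4.2, 3.1, 2.6, 2.9, 3.8])         # surrogate mean
sd_pred    = np.array([0.8, 0.6, 0.9, 0.4, 0.7])         # surrogate std. dev.
f_best     = 2.8                                          # best objective seen so far

ei = expected_improvement(mu_pred, sd_pred, f_best)
best = candidates[np.argmax(ei)]
print("EI per candidate:", np.round(ei, 3))
print("next parameter value to simulate:", best)
```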
APA, Harvard, Vancouver, ISO, and other styles
6

Popaiz, Alessandro <1987&gt. "Business Meta Model: A Structured Framework for Business Models Representation." Master's Degree Thesis, Università Ca' Foscari Venezia, 2013. http://hdl.handle.net/10579/2390.

Full text
Abstract:
Given the central role of Business Models in company characterization, it is not surprising that a great deal of effort has been spent in studying suitable representations for them. Most of the proposed models, however, pursue a semi-formal human-readable graphical paradigm that is mainly meant to be discussed among stakeholders, rather than to be easily handled by information systems. In this paper we introduce a formal meta model that aspires to be general enough to capture the expressiveness of most currently adopted paradigms. At the same time, each produced Business Model instance is regular and structured enough to be processed through automated algorithms. Specifically, data are organized as a structured graph, allowing for the adoption of well-known graph-based mining techniques. The ability of the framework to deal with real-world scenarios is assessed by modelling several actual companies. Further, some examples of data processing are given, specifically with the aim of spotting common patterns within a data base of Business Models.
APA, Harvard, Vancouver, ISO, and other styles
7

Barrowman, Nicholas J. "Nonlinear mixed effects models for meta-analysis." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp02/NQ57342.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Salimi-Khorshidi, Gholamreza. "Statistical models for neuroimaging meta-analytic inference." Thesis, University of Oxford, 2011. http://ora.ox.ac.uk/objects/uuid:40a10327-7f36-42e7-8120-ae04bd8be1d4.

Full text
Abstract:
A statistical meta-analysis combines the results of several studies that address a set of related research hypotheses, thus increasing the power and reliability of the inference. Meta-analytic methods are over 50 years old and play an important role in science, pooling evidence from many trials to provide answers that any one trial would have insufficient samples to address. On the other hand, the number of neuroimaging studies is growing dramatically, with many of these publications containing conflicting results, or being based on only a small number of subjects. Hence there has been increasing interest in using meta-analysis methods to find consistent results for a specific functional task, or for predicting the results of a study that has not been performed directly. The current state of neuroimaging meta-analysis is limited to coordinate-based meta-analysis (CBMA), i.e., using only the coordinates of activation peaks that are reported by a group of studies, in order to "localize" the brain regions that respond to a certain type of stimulus. This class of meta-analysis suffers from a series of problems and hence cannot produce results as accurate as desired. In this research, we describe the problems that existing CBMA methods suffer from and introduce a hierarchical mixed-effects image-based meta-analysis (IBMA) solution that incorporates the sufficient statistics (i.e., voxel-wise effect size and its associated uncertainty) from each study. In order to improve the statistical-inference stage of our proposed IBMA method, we introduce a nonparametric technique that is capable of adjusting such an inference for spatial nonstationarity. Given that, in common practice, neuroimaging studies rarely provide the full image data, in an attempt to improve the existing CBMA techniques we introduce a fully automatic model-based approach that employs Gaussian-process regression (GPR) for estimating the meta-analytic statistic image from its corresponding sparse and noisy observations (i.e., the collected foci). To conclude, we introduce a new way to approach neuroimaging meta-analysis that enables the analysis to result in information such as "functional connectivity" and networks of the brain regions' interactions, rather than just localizing the functions.
APA, Harvard, Vancouver, ISO, and other styles
9

Ko, Hung-Tse. "Distribution system meta-models in an electronic commerce environment." Ohio : Ohio University, 2001. http://www.ohiolink.edu/etd/view.cgi?ohiou1173977323.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Preston, Carrol Lesley. "Statistical models of publication bias in meta-analysis." Thesis, Queen Mary, University of London, 2000. http://qmro.qmul.ac.uk/xmlui/handle/123456789/1624.

Full text
Abstract:
Objectives: To review, apply and compare existing publication bias methodology. To extend the selection model methods that adjust combined estimates and to develop models to adjust for publication bias and heterogeneity simultaneously. Methods: Methodologies that test for the existence of publication bias, estimate the number of missing studies, and adjust combined estimates for publication bias are reviewed. Parametric weighted distribution methodology is developed further. The existing family of distributions is extended to include a logistic function. Weight functions previously limited to modelling selection based on two-tailed p-values have been restructured for one-tailed p-values. The selection mechanism model has been developed to incorporate both p-values and precision. The model for effect size has been developed to incorporate linear predictors, so heterogeneity and publication bias can be modelled simultaneously. Data: Two systematic reviews taken from the Cochrane Library and simulation studies. Results: Methods that test for the existence of publication bias or estimate the number of missing studies are limited by the strength of their assumptions and low power. Weighted distributions offer the only way to directly assess the impact of publication bias. In data sets in which there is heterogeneity or the true treatment effect is null, modelling the selection mechanism on p-values only can lead to over-adjusted estimates and considerable variability between estimates with wide confidence intervals. Extending the selection model to include precision reduces this. It is then possible to include other covariates such as study quality or type. The effect-size model can be extended in a similar way to include linear predictors. Combination of these two models allows simultaneous consideration of the influence of publication bias and heterogeneity. Conclusions: Weighted distributions offer a flexible approach to modelling publication bias. Inclusion of precision in the selection model reduces the sensitivity of the model to the shape of the selection model, improving consistency of results. No selection model should be used on its own but in conjunction with others to allow a sensitivity approach.
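For orientation only, the sketch below shows a much simpler and widely used funnel-plot asymmetry check (an Egger-style regression of standardized effects on precision), not the weighted-distribution selection models developed in the thesis; the effect sizes and standard errors are invented, and statsmodels is assumed to be available.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical study effect sizes (log odds ratios) and their standard errors.
effect = np.array([0.42, 0.35, 0.58, 0.12, 0.30, 0.55, 0.48, 0.20])
se     = np.array([0.30, 0.25, 0.40, 0.10, 0.18, 0.35, 0.32, 0.12])

# Egger-style regression: standardized effect against precision.
# A non-zero intercept suggests small-study (possible publication) bias.
snd = effect / se            # standardized effects
precision = 1.0 / se
X = sm.add_constant(precision)
fit = sm.OLS(snd, X).fit()

intercept, slope = fit.params
print(f"intercept = {intercept:.3f} (p = {fit.pvalues[0]:.3f}), slope = {slope:.3f}")
```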
APA, Harvard, Vancouver, ISO, and other styles
11

Kroetz, Henrique Machado. "Meta-modelagem em confiabilidade estrutural." Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/18/18134/tde-08042015-162956/.

Full text
Abstract:
A aplicação de simulações numéricas em problemas de confiabilidade estrutural costuma estar associada a grandes custos computacionais, dada a pequena probabilidade de falha inerente às estruturas. Ainda que diversos casos possam ser endereçados através de técnicas de redução da variância das amostras, a solução de problemas envolvendo grande número de graus de liberdade, respostas dinâmicas, não lineares e problemas de otimização na presença de incertezas são comumente ainda inviáveis de se resolver por esta abordagem. Tais problemas, porém, podem ser resolvidos através de representações analíticas que aproximam a resposta que seria obtida com a utilização de modelos computacionais mais complexos, chamadas chamados meta-modelos. O presente trabalho trata da compilação, assimilação, programação em computador e comparação de técnicas modernas de meta-modelagem no contexto da confiabilidade estrutural, utilizando representações construídas a partir de redes neurais artificiais, expansões em polinômios de caos e através de krigagem. Estas técnicas foram implementadas no programa computacional StRAnD - Structural Reliability Analysis and Design, desenvolvido junto ao Departamento de Engenharia de Estruturas, USP, resultando assim em um benefício permanente para a análise de confiabilidade estrutural junto à Universidade de São Paulo.
The application of numerical simulations to structural reliability problems is often associated with high computational costs, given the small probability of failure inherent to the structures. Although many cases can be addressed using variance reduction techniques, solving problems involving large number of degrees of freedom, nonlinear and dynamic responses, and problems of optimization in the presence of uncertainties are sometimes still infeasible to solve by this approach. Such problems, however, can be solved by analytical representations that approximate the response that would be obtained with the use of more complex computational models, called meta-models. This work deals with the collection, assimilation, computer programming and comparison of modern meta-modeling techniques in the context of structural reliability, using representations constructed from artificial neural networks, polynomial chaos expansions and Kriging. These techniques are implemented in the computer program StRAnD - Structural Reliability Analysis and Design, developed at the Department of Structural Engineering, USP; thus resulting in a permanent benefit to structural reliability analysis at the University of São Paulo.
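As a toy illustration of the meta-model idea in reliability analysis (not the StRAnD implementation), the sketch below fits a quadratic polynomial response surface to a few evaluations of a simple limit-state function and then estimates the failure probability by Monte Carlo sampling of the surrogate; the limit state and input distributions are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Expensive" limit-state function stand-in: failure when g(x) < 0.
def limit_state(x1, x2):
    return 3.0 - x1**2 - 0.5 * x2

# A small design of experiments on which the true model is evaluated.
x1_d, x2_d = np.meshgrid(np.linspace(-2, 2, 5), np.linspace(-2, 2, 5))
x1_d, x2_d = x1_d.ravel(), x2_d.ravel()
g_d = limit_state(x1_d, x2_d)

# Quadratic polynomial response surface fitted by least squares.
basis = lambda a, b: np.column_stack([np.ones_like(a), a, b, a*b, a**2, b**2])
coef, *_ = np.linalg.lstsq(basis(x1_d, x2_d), g_d, rcond=None)
surrogate = lambda a, b: basis(a, b) @ coef

# Monte Carlo on the cheap surrogate with standard normal inputs.
n = 200_000
x1_s, x2_s = rng.standard_normal(n), rng.standard_normal(n)
pf_meta = np.mean(surrogate(x1_s, x2_s) < 0.0)
pf_direct = np.mean(limit_state(x1_s, x2_s) < 0.0)   # possible here only because g is cheap
print(f"Pf via meta-model: {pf_meta:.4f}   Pf via direct MC: {pf_direct:.4f}")
```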
APA, Harvard, Vancouver, ISO, and other styles
12

Wambalaba, Wamukota Francis. "Towards Comprehensive Migration Modeling: A Meta-analytic Approach." PDXScholar, 1993. https://pdxscholar.library.pdx.edu/open_access_etds/1244.

Full text
Abstract:
In view of theoretical proliferations in migration studies, there is a need for a more comprehensive approach to migration modeling. A central problem identified in this study was the multitude of potential variables for migration research and the lack of established procedures for selecting among them. Several studies on migration have attempted to answer common migration questions, but with differing variables and therefore divergent conclusions. There is thus a strong potential for misinterpretation by researchers and policy makers. Partial theories of migration have been developed rather than a unified one. This study offers an objective process through which variables may be selected for purposes of migration model design or interpreting completed studies by researchers, policy makers and others. Meta-analysis was used to develop a heuristic framework as an operational tool for selection of migration modeling options. Because meta-analysis uses past studies as its data, a wide range of previous literature was reviewed. The literature was derived from a number of disciplines, i.e., economics, sociology, geography, demography, and schools of thought within disciplines to move toward a unified modeling framework. The variables identified for the meta-analytic procedure were further subjected to a factor analysis to identify the inherent variable constructs. The 1980 intrastate migration between counties in the state of Oregon was used. The data were obtained from the IRS County to County Migration Records, the County and City Data Book, and the 1980 Census of Population. Seven clusters (constructs) emerged. They included: urban amenity, low mobility, individual mobility, negative amenity, low spatial mobility, mobility, and amenity. Each cluster was representative of a partial approach. These clusters were then tested by a regression analysis by sorting them out into amenity, spatial, and mobility related variables. The two most frequently used techniques, i.e., the basic Ordinary Least Squares (OLS) and the gravity approach, were used with the same data as in factor analysis. Both OLS and the gravity approach produced a similar pattern of results. Thus, when mobility, spatial, and amenity variables were tested individually their R2 was not as high as when variables were selected from each (in spite of having the same number of variables in each). These findings have several implications. Thus a rationalized unified model, where each significant cluster is represented by a variable, allows parsimonious prediction of migration. A factor analysis is the key technique in pinpointing the minimal set of useful variables. The significance of this heuristic approach also has further implications. First, identification of an analytical structure for the development of a unified theory in migration studies. This heuristic is useful as an applied forecasting device and an academic tool in policy areas. Secondly, it provides a framework that may be useful in other social sciences' development of theory. This modeling heuristic has some caveats. Whether an OLS or gravity model specification is used, a factor analysis of potential independent variables is an essential step. In some cases, actual data for this factor analysis may be expensive and difficult to obtain. Variables representing all clusters may not be available: irreducible specification errors are implied.
Also, factor analysis requires some qualitative interpretation to elaborate clusters, both in naming them and selecting those to appear in the reduced model. Hence, there is not a single specification from a given structure. Similarly, qualitative analysis is critical in phase I of the framework. However, in both of these instances, a wide coverage of literature provides reasonable insurance against subjective error.
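A schematic sketch of that two-step pipeline (factor analysis to extract constructs, then regression on the construct scores), using simulated data and assuming scikit-learn; it illustrates the heuristic only and is not the thesis's Oregon analysis.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Hypothetical county-level data: 40 counties, 9 candidate migration predictors.
X = rng.standard_normal((40, 9))
migration_rate = X[:, 0] - 0.5 * X[:, 4] + 0.3 * X[:, 7] + rng.normal(0, 0.5, 40)

# Step 1: factor analysis groups the candidate variables into a few constructs.
fa = FactorAnalysis(n_components=3, random_state=0).fit(X)
scores = fa.transform(X)            # one score per construct per county
print("loadings shape:", fa.components_.shape)

# Step 2: regress the migration rate on one representative score per construct.
ols = LinearRegression().fit(scores, migration_rate)
print("R^2 using construct scores:", round(ols.score(scores, migration_rate), 3))
```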
APA, Harvard, Vancouver, ISO, and other styles
13

Husna, Husain Dantu Ram. "Models to combat email spam botnets and unwanted phone calls." [Denton, Tex.] : University of North Texas, 2008. http://digital.library.unt.edu/permalink/meta-dc-6095.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Bakbergenuly, Ilyas. "Transformation bias in mixed effects models of meta-analysis." Thesis, University of East Anglia, 2017. https://ueaeprints.uea.ac.uk/65314/.

Full text
Abstract:
When binary data exhibit greater variation than expected, statistical methods have to account for extra-binomial variation. Possible explanations for extra-binomial variation include intra-cluster dependence or the variability of binomial probabilities. Both of these reasons lead to overdispersion of binomial counts and the resulting heterogeneity in their meta-analysis. Variance stabilizing or normalizing transformations are often applied to binomial counts to enable the use of standard methods based on normality. In meta-analysis, this is routinely done for the inference on the overall effect measure. However, these transformations might result in biases in the presence of overdispersion. We study biases arising as a result of transformations of binary variables in random or mixed effects models. We demonstrate considerable biases arising from standard log-odds and arcsine transformations both for single studies and in meta-analysis. We also explore possibilities of bias correction. In meta-analysis, the heterogeneity of the log odds ratios across the studies is usually incorporated by the standard (additive) random effects model (REM). An alternative, multiplicative random effects model is based on the concept of overdispersion. The multiplicative factor in this overdispersed random effects model can be interpreted as an intra-class correlation parameter. This model arises when one or both binomial distributions in the 2 by 2 tables are changed to beta-binomial distributions. The Mantel-Haenszel and inverse-variance approaches are extended to this setting. The estimation of the random effect parameter is based on profiling the modified Breslow-Day test and improving the approximation for the distribution of the Q statistic in the Mandel-Paule method. The biases and coverages from the new methods are compared to standard methods through simulation studies. The misspecification of the REM with respect to the mechanism of its generation is an important issue which is also discussed in this thesis.
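For reference, a small sketch of the two standard transformations the thesis examines (log-odds and arcsine), together with their usual large-sample variance approximations; this illustrates the transformations only, not the bias analysis or the corrections developed in the thesis.

```python
import numpy as np

# Hypothetical event counts x out of n in a set of studies.
x = np.array([12, 30, 5, 44, 18])
n = np.array([50, 120, 40, 150, 60])

# Log-odds transformation with a 0.5 continuity correction,
# and its usual large-sample variance approximation.
log_odds = np.log((x + 0.5) / (n - x + 0.5))
var_log_odds = 1.0 / (x + 0.5) + 1.0 / (n - x + 0.5)

# Arcsine (angular) transformation and its approximate variance 1/(4n).
arcsine = np.arcsin(np.sqrt(x / n))
var_arcsine = 1.0 / (4.0 * n)

for lo, vlo, a, va in zip(log_odds, var_log_odds, arcsine, var_arcsine):
    print(f"log-odds {lo:+.3f} (var {vlo:.4f})   arcsine {a:.3f} (var {va:.4f})")
```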
APA, Harvard, Vancouver, ISO, and other styles
15

Dockes, Jérôme. "Statistical models for comprehensive meta-analysis of neuroimaging studies." Electronic Thesis or Diss., Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLT048.

Full text
Abstract:
La neuroimagerie permet d’étudier les liens entre la structure et le fonctionnement du cerveau. Des milliers d’études de neuroimagerie sont publiées chaque année. Il est difficile d’exploiter cette grande quantité de résultats. En effet, chaque étude manque de puissance statistique et peut reporter beaucoup de faux positifs. De plus, certains effets sont spécifiques à un protocole expérimental et difficile à reproduire. Les méta-analyses rassemblent plusieurs études pour identifier les associations entre structures anatomiques et processus cognitifs qui sont établies de manière consistante dans la littérature. Les méthodes classiques de méta-analyse commencent par constituer un échantillon d’études focalisées sur un même processus mental ou une même maladie. Ensuite, un test statistique permet de délimiter les régions cérébrales dans lesquelles le nombre d’observations reportées est significatif. Dans cette thèse, nous introduisons une nouvelle forme de méta-analyse, qui s’attache à construire des prédictions plutôt qu’à tester des hypothèses. Nous introduisons des modèles statistiques qui prédisent la distribution spatiale des observations neurologiques à partir de la description textuelle d’une expérience, d’un processus cognitif ou d’une maladie cérébrale. Notre approche est basée sur l’apprentissage statistique supervisé qui fournit un cadre classique pour évaluer et comparer les modèles. Nous construisons le plus grand jeu de données d’études de neuroimagerie et de coordonnées stéréotaxiques existant, qui rassemble plus de 13 000 publications. Dans la dernière partie, nous nous intéressons au décodage: prédire des états psychologiques à partir de l’activité cérébrale. La méta-analyse standard est un outil indispensable pour distinguer les vraies découvertes du bruit et des artefacts parmi les résultats publiés en neuroimagerie. Cette thèse introduit des méthodes adaptées à la méta-analyse prédictive. Cette approche est complémentaire de la méta-analyse standard, et aide à interpréter les résultats d’études de neuroimagerie ainsi qu’à formuler des hypothèses ou des a priori statistiques
Thousands of neuroimaging studies are published every year. Exploiting this huge amount of results is difficult. Indeed, individual studies lack statistical power and report many spurious findings. Even genuine effects are often specific to particular experimental settings and difficult to reproduce. Meta-analysis aggregates studies to identify consistent trends in reported associations between brain structure and behavior. The standard approach to meta-analysis starts by gathering a sample of studies that investigate the same mental process or disease. Then, a statistical test delineates brain regions where there is a significant agreement among reported findings. In this thesis, we develop a different kind of meta-analysis that focuses on prediction rather than hypothesis testing. We build predictive models that map textual descriptions of experiments, mental processes or diseases to anatomical regions in the brain. Our supervised learning approach comes with a natural quantitative evaluation framework, and we conduct extensive experiments to validate and compare statistical models. We collect and share the largest existing dataset of neuroimaging studies and stereotactic coordinates. This dataset contains the full text and locations of neurological observations for over 13 000 publications. In the last part, we turn to decoding: inferring mental states from brain activity. We perform this task through meta-analysis of fMRI statistical maps collected from an online data repository. We use fMRI data to distinguish a wide range of mental conditions. Standard meta-analysis is an essential tool to distinguish true discoveries from noise and artifacts. This thesis introduces methods for predictive meta-analysis, which complement the standard approach and help interpret neuroimaging results and formulate hypotheses or formal statistical priors.
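A toy sketch of the text-to-brain-coordinate prediction idea, assuming scikit-learn: TF-IDF features of short study descriptions feed a ridge regression that outputs stereotactic (x, y, z) coordinates. The descriptions and coordinates are fabricated, and this is not the pipeline or the 13 000-study dataset described in the thesis.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge

# Hypothetical study descriptions and a single reported peak (x, y, z) per study.
descriptions = [
    "auditory stimulation with spoken sentences",
    "passive listening to music and tones",
    "visual checkerboard stimulation",
    "face recognition task with visual stimuli",
    "finger tapping motor task",
]
peaks = np.array([
    [-54.0, -22.0, 8.0],
    [-50.0, -18.0, 6.0],
    [10.0, -86.0, 2.0],
    [40.0, -52.0, -18.0],
    [-38.0, -24.0, 56.0],
])

# Bag-of-words features, then a linear (ridge) mapping to coordinates.
vec = TfidfVectorizer()
X = vec.fit_transform(descriptions)
model = Ridge(alpha=1.0).fit(X, peaks)

query = ["listening to spoken language"]
print("predicted peak:", np.round(model.predict(vec.transform(query))[0], 1))
```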
APA, Harvard, Vancouver, ISO, and other styles
16

Ulrich, Michael David. "Meta-Analysis Using Bayesian Hierarchical Models in Organizational Behavior." BYU ScholarsArchive, 2009. https://scholarsarchive.byu.edu/etd/2349.

Full text
Abstract:
Meta-analysis is a tool used to combine the results from multiple studies into one comprehensive analysis. First developed in the 1970s, meta-analysis is a major statistical method in academic, medical, business, and industrial research. There are three traditional ways in which a meta-analysis is conducted: fixed or random effects, and using an empirical Bayesian approach. Derivations for conducting meta-analysis on correlations in the industrial psychology and organizational behavior (OB) discipline were reviewed by Hunter and Schmidt (2004). In this approach, Hunter and Schmidt propose an empirical Bayesian analysis where the results from previous studies are used as a prior. This approach is still widely used in OB despite recent advances in Bayesian methodology. This paper presents the results of a hierarchical Bayesian model for conducting meta-analysis of correlations and then compares these results to a traditional Hunter-Schmidt analysis conducted by Judge et al. (2001). In our approach we treat the correlations from previous studies as a likelihood, and present a prior distribution for correlations.
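A minimal sketch of a fully Bayesian hierarchical meta-analysis of correlations (Fisher z-transform, then a normal-normal hierarchy sampled with a simple random-walk Metropolis algorithm); the correlations and sample sizes are invented, and this is not the thesis's model or the Judge et al. (2001) data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical study correlations and sample sizes.
r = np.array([0.31, 0.18, 0.42, 0.25, 0.10, 0.36])
n = np.array([120, 85, 60, 200, 150, 95])

# Fisher z-transform; approximate within-study variance is 1/(n - 3).
z = np.arctanh(r)
v = 1.0 / (n - 3)

def log_post(mu, log_tau):
    """Normal-normal hierarchy: z_i ~ N(mu, tau^2 + v_i), with weak priors."""
    tau2 = np.exp(log_tau) ** 2
    s2 = tau2 + v
    loglik = -0.5 * np.sum(np.log(2 * np.pi * s2) + (z - mu) ** 2 / s2)
    logprior = -0.5 * (mu / 10.0) ** 2 - 0.5 * (log_tau / 2.0) ** 2
    return loglik + logprior

# Random-walk Metropolis over (mu, log_tau).
samples, state = [], np.array([0.0, -1.0])
curr = log_post(*state)
for _ in range(20000):
    prop = state + rng.normal(0, 0.05, size=2)
    cand = log_post(*prop)
    if np.log(rng.uniform()) < cand - curr:
        state, curr = prop, cand
    samples.append(state.copy())

samples = np.array(samples[5000:])          # discard burn-in
mu_draws = samples[:, 0]
print("posterior mean correlation:", round(float(np.tanh(mu_draws.mean())), 3))
print("95% interval:", np.round(np.tanh(np.percentile(mu_draws, [2.5, 97.5])), 3))
```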
APA, Harvard, Vancouver, ISO, and other styles
17

Dzikiewicz, Joseph. "Cyrano: a meta model for federated database systems." Diss., This resource online, 1996. http://scholar.lib.vt.edu/theses/available/etd-11082006-133632/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Powers, James Murray. "Population-averaged models for diagnostic accuracy studies and meta-analysis." Thesis, The University of North Carolina at Chapel Hill, 2013. http://pqdtopen.proquest.com/#viewpdf?dispub=3562789.

Full text
Abstract:

Modern medical decision making often involves one or more diagnostic tools (such as laboratory tests and/or radiographic images) that must be evaluated for their discriminatory ability to detect presence (or absence) of current health state. The first paper of this dissertation extends regression model diagnostics to the Receiver Operating Characteristic (ROC) curve generalized linear model (ROC-GLM) in the setting of individual-level data from a single study through application of generalized estimating equations (GEE) within a correlated binary data framework (Alonzo and Pepe, 2002). Motivated by the need for model diagnostics for the ROC-GLM model (Krzanowski and Hand, 2009), GEE cluster-deletion diagnostics (Preisser and Qaqish, 1996) are applied in an example data set to identify cases that have undue influence on the model parameters describing the ROC curve. In addition, deletion diagnostics are applied in an earlier stage in the estimation of the ROC-GLM, when a linear model is chosen to represent the relationship between the test measurement and covariates in the control subjects. The second paper presents a new model for diagnostic test accuracy meta-analysis. The common analysis framework for the meta-analysis of diagnostic studies is the generalized linear mixed model, in particular, the bivariate logistic-normal random effects model. Considering that such cluster-specific models are most appropriately used if the model for a given cluster (i.e. study) is of interest, a population-average (PA) model may be appropriate in diagnostic test meta-analysis settings where mean estimates of sensitivity and specificity are desired. A PA model for correlated binomial outcomes is estimated with GEE in the meta-analysis of two data sets. It is compared to an indirect method of estimation of PA parameters based on transformations of bivariate random effects model parameters. The third paper presents an analysis guide for a new SAS macro, PAMETA (Population-averaged meta-analysis), for fitting population-averaged (PA) diagnostic accuracy models with GEE as described in the second paper. The impact of covariates, influential clusters and observations is investigated in the analysis of two example data sets.
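As a generic illustration of fitting a population-averaged model with GEE (assuming statsmodels; this is not the PAMETA macro or the SAS workflow described in the dissertation), the sketch below fits a logistic marginal model to simulated clustered binary outcomes using an exchangeable working correlation.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)

# Simulated data: 30 studies (clusters), 40 subjects each, one binary covariate.
n_clusters, m = 30, 40
groups = np.repeat(np.arange(n_clusters), m)
x = rng.integers(0, 2, size=n_clusters * m)            # e.g. diseased vs. healthy
cluster_eff = np.repeat(rng.normal(0, 0.7, n_clusters), m)
p = 1.0 / (1.0 + np.exp(-(-0.5 + 1.2 * x + cluster_eff)))
y = rng.binomial(1, p)

# Population-averaged logistic model via GEE with exchangeable working correlation.
X = sm.add_constant(x)
model = sm.GEE(y, X, groups=groups,
               family=sm.families.Binomial(),
               cov_struct=sm.cov_struct.Exchangeable())
result = model.fit()
print(result.summary())
```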

APA, Harvard, Vancouver, ISO, and other styles
19

Lins, Hertz Wilton de Castro. "Especificação e implementação de uma linguagem para transformação de modelos MOF em repositórios dMOF." Universidade Federal do Rio Grande do Norte, 2006. http://repositorio.ufrn.br:8080/jspui/handle/123456789/15511.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
This work presents the specification and implementation of a language for transformations of models defined according to the OMG (Object Management Group) MOF (Meta Object Facility) specification. The specification uses an approach based on ECA (Event-Condition-Action) rules and was produced on the basis of a set of previously defined usage scenarios. The parser responsible for guaranteeing that the syntactic structure of the language is correct was built with the JavaCC (Java Compiler Compiler) tool, and the syntax of the language was described with EBNF (Extended Backus-Naur Form). The implementation is divided into three parts: the creation of the interpreter itself in Java, the creation of an executor for the actions specified in the language, and its integration with the kind of repository considered (generated by the DSTC dMOF tool). A final prototype was developed and tested on the previously defined scenarios.
Este trabalho apresenta a especificação e a implementação de uma linguagem de Transformações em Modelos definidos segundo a especificação MOF (Meta Object Facility) da OMG (Object Management Group). A especificação utiliza uma abordagem baseada em regras ECA (Event-Condition-Action) e foi feita com base em um conjunto de cenários de uso previamente definidos. O parser responsável por garantir que a estrutura sintática da linguagem está correta foi construído com a ferramenta JavaCC (Java Compiler Compiler) e a descrição da sintaxe da linguagem foi feita com EBNF (Extended Backus-Naur Form). A implementação está dividida em três partes: a criação do programa interpretador propriamente dito em Java, a criação de um executor das ações especificadas na linguagem e sua integração com o tipo de repositório considerado (gerados pela ferramenta DSTC dMOF). Um protótipo final foi desenvolvido e testado nos cenários previamente definidos.
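As a language-neutral illustration of the event-condition-action pattern that such a transformation language is built around (sketched here in Python rather than in the thesis's own JavaCC/EBNF-defined language, whose syntax is not reproduced), a rule can be represented as an event name, a condition predicate and an action:

```python
# A hypothetical, minimal event-condition-action (ECA) engine.
class Rule:
    def __init__(self, event, condition, action):
        self.event, self.condition, self.action = event, condition, action

def dispatch(rules, event, element):
    """Fire every rule registered for `event` whose condition holds."""
    for rule in rules:
        if rule.event == event and rule.condition(element):
            rule.action(element)

# Example: when a model element is created without a name, assign a default one.
rules = [
    Rule(
        event="element_created",
        condition=lambda e: not e.get("name"),
        action=lambda e: e.update(name="Unnamed" + e["kind"]),
    )
]

element = {"kind": "Class", "name": ""}
dispatch(rules, "element_created", element)
print(element)   # {'kind': 'Class', 'name': 'UnnamedClass'}
```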
APA, Harvard, Vancouver, ISO, and other styles
20

Clowes, Darren. "Hybrid semantic-document models." Thesis, Loughborough University, 2013. https://dspace.lboro.ac.uk/2134/14736.

Full text
Abstract:
This thesis presents the concept of hybrid semantic-document models to aid information management when using standards for complex technical domains such as military data communication. These standards are traditionally text-based documents for human interpretation, but prose sections can often be ambiguous and can lead to discrepancies and subsequent implementation problems. Many organisations produce semantic representations of the material to ensure common understanding and to exploit computer-aided development. In developing these semantic representations, no relationship is maintained to the original prose. Maintaining relationships between the original prose and the semantic model has key benefits, including assessing conformance at a semantic level, and enabling original content authors to explicitly define their intentions, thus reducing ambiguity and facilitating computer-aided functionality. Through the use of a case study method based on the military standard MIL-STD-6016C, a framework of relationships is proposed. These relationships can integrate with common document modelling techniques and provide the necessary functionality to allow semantic content to be mapped into document views. These relationships are then generalised for applicability to a wider context. Additionally, this framework is coupled with a templating approach which, for repeating sections, can improve consistency and further enhance quality. A reflective approach to model-driven web rendering is presented and evaluated. This reflective approach uses self-inspection at runtime to read directly from the model, thus eliminating the need for any generative processes which result in data duplication across sources used for different purposes.
APA, Harvard, Vancouver, ISO, and other styles
21

Schulze, Frank. "Meta Modelle - Neue Planungswerkzeuge für Materialflußsysteme." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2018. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-233345.

Full text
Abstract:
Meta-models are computational models that approximately describe or reproduce the behaviour of technical systems. They are derived from observations of simulation models of those technical systems; they are therefore models of models, i.e. meta-models. Meta-models differ fundamentally from analytical approaches to system description. Whereas analytical approaches reflect the actual characteristics of the system under consideration in their mathematical structure, meta-models are always approximations. The advantage of meta-models lies in their simple form: they are easy to build and to apply. Their disadvantage is that they describe the system behaviour only approximately and, in some circumstances, incompletely. In the following, the construction of meta-models is illustrated using a service (queueing) system. First, the possibilities of an analytical description are assessed. Then two different kinds of meta-model, polynomials and neural networks, are presented. The possibilities and limits of these forms of representing system behaviour are discussed. Finally, practical fields of application of meta-models in material flow planning and simulation are outlined.
APA, Harvard, Vancouver, ISO, and other styles
22

Hauksson, Hilmar. "Metamodeling for Business Model Design : Facilitating development and communication of Business Model Canvas (BMC) models with an OMG standards-based metamodel." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-138139.

Full text
Abstract:
Interest in business models and business modeling has increased rapidly since the mid-1990s and there are numerous approaches used to create business models. The business model concept has many definitions which can lead to confusion and slower progress in the research and development of business models. A business model ontology (BMO) was created in 2004 where the business model concept was conceptualized based on an analysis of existing literature. A few years later the Business Model Canvas (BMC) was published: a popular business modeling approach providing a high-level, semi-formal approach to create and communicate business models. While this approach is easy to use, the informality and high-level approach can cause ambiguity and it has limited computer-aided support available. In order to propose a solution to address this problem and facilitate the development and communication of Business Model Canvas models, two artifacts are created, demonstrated and evaluated: a structured metamodel for the Business Model Canvas and its implementation in an OMG standards-based modeling tool to provide tool support for BMC modeling. This research is carried out following the design science approach where the artifacts are created to better understand and improve the identified problem. The problem and its background are explicated and the planned artifacts and requirements are outlined. The design and development of the artifacts are detailed and the resulting BMC metamodel is presented as a class diagram in Unified Modeling Language (UML) and implemented to provide tool support for BMC modeling. A demonstration with a business model and an evaluation is performed with expert interviews and informed arguments. The creation of a BMC metamodel exposed some ambiguity in the definition and use of the Business Model Canvas and the importance of graphical presentation and flexibility in the tools used. The evaluation of the resulting artifacts suggests that the artifacts do facilitate the development and communication of Business Model Canvas models by improving the encapsulation and communication of information in a standardized way, and thereby the goals of the research are met.
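As a rough illustration of what making the canvas machine-processable can look like (a hypothetical Python data structure, not the UML/OMG standards-based metamodel actually defined in the thesis):

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical, simplified rendering of Business Model Canvas building blocks.
@dataclass
class BuildingBlock:
    name: str                  # e.g. "Key Partners", "Value Propositions"
    elements: List[str] = field(default_factory=list)

@dataclass
class BusinessModelCanvas:
    business: str
    blocks: List[BuildingBlock] = field(default_factory=list)

    def block(self, name: str) -> BuildingBlock:
        """Return a named block, creating it on first access."""
        for b in self.blocks:
            if b.name == name:
                return b
        b = BuildingBlock(name)
        self.blocks.append(b)
        return b

bmc = BusinessModelCanvas("Example coffee shop")
bmc.block("Customer Segments").elements.append("commuters")
bmc.block("Value Propositions").elements.append("fast takeaway coffee")
print([(b.name, b.elements) for b in bmc.blocks])
```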
APA, Harvard, Vancouver, ISO, and other styles
23

Massa, Maria Sofia. "Combining information from Gaussian graphical models." Doctoral thesis, Università degli studi di Padova, 2008. http://hdl.handle.net/11577/3425480.

Full text
Abstract:
Given a joint probability distribution, one can generally find its marginal components. However, the inverse procedure is not straightforward, or even possible. In this thesis, we shall study the wide context of combining families of distributions. In particular, we shall consider absolutely continuous distributions with respect to a product measure. The conditions for compatibility of the marginal families shall be the initial research problem to be investigated. Next, we shall classify two types of combination corresponding to the initially available information. The methodology previously introduced shall lead the way to studying the combination of families of multivariate normal distributions respecting some conditional independence relationships, i.e., Gaussian graphical models. Examples of combination of Gaussian graphical models and methods to estimate the parameters of the joint family of distributions shall also be provided. Finally, we shall perform two simulation studies to assess the proposed methodology.
APA, Harvard, Vancouver, ISO, and other styles
24

Nascimento, Fábio Fialho do 1983. "Análise estocástica linear de estruturas complexas usando meta-modelo modal." [s.n.], 2015. http://repositorio.unicamp.br/jspui/handle/REPOSIP/265736.

Full text
Abstract:
Orientador: José Maria Campos dos Santos
Dissertação (mestrado) - Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica
Resumo: Este trabalho tem como objetivo geral investigar abordagens para a análise de incerteza em problemas de dinâmica estrutural, de forma computacionalmente eficiente, no contexto industrial. Neste sentido, utilizou-se um metamodelo, baseado no método da superfície de resposta, para simplificar a etapa do cálculo dos modos e das frequências naturais na análise de resposta em frequência da estrutura. Para viabilizar a análise de grandes modelos, a solução de elementos finitos foi realizada pelo Nastran®. O MatLab® foi utilizado para manipular os autovalores e autovetores, e calcular as FRFs. Já o processo de amostragem das variáveis, a preparação da superfície de resposta e a integração com os demais aplicativos, foram realizados por meio do Isight®. Inicialmente, a abordagem foi avaliada em um modelo simples de um para-brisa veicular, com espessura, modo de elasticidade e densidade como parâmetros incertos. Posteriormente, o método foi aplicado para um modelo de uma estrutura veicular com milhares graus de liberdade. Neste caso, as variáveis aleatórias consideradas foram espessuras de vinte peças estampadas. Todas as variáveis foram consideradas com distribuição normal. Para quantificar a incerteza na resposta dinâmica, a simulação por Monte Carlo foi conduzida em conjunto com o metamodelo. A variabilidade das frequências naturais e da FRF é comparada com o resultado do Monte Carlo direto
Abstract: This work has the general objective of investigating approaches for uncertainty analysis in structural dynamics problems in a computationally efficient manner in an industrial context. In this sense, we used a metamodel based on the response surface method to simplify the calculation of modes and natural frequencies in the frequency response analysis of a structure. In order to make the process feasible for large models, the finite element solution was performed using Nastran®. MatLab® was used to manipulate the eigenvalues and eigenvectors and calculate the FRFs. Isight® was responsible for the variable sampling process, response surface preparation and integration with the other applications. Initially, the approach was assessed on a simple model of a car windshield with its thickness, Young's modulus and material density as uncertain parameters. Later the method was applied to a vehicle structure model with thousands of degrees of freedom. In this case, the random variables considered were the thicknesses of twenty stamped parts. A Gaussian distribution was considered for all variables. For the purpose of uncertainty quantification in the dynamic response, Monte Carlo simulation was performed over the metamodel. The variability of the natural frequencies and FRF is compared against direct Monte Carlo results.
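A small sketch of the general surrogate-plus-Monte-Carlo idea (not the Nastran/Isight workflow of the dissertation): a quadratic response surface is fitted to a handful of evaluations of a cheap stand-in "model" (the natural frequency of a one-degree-of-freedom oscillator), and input uncertainty is then propagated through the surrogate.

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in for the expensive model: first natural frequency of a 1-DOF oscillator,
# with stiffness given in units of 1e5 N/m and mass in kg.
freq = lambda k, m: np.sqrt(k * 1.0e5 / m) / (2.0 * np.pi)

# Small design of experiments around the nominal stiffness and mass.
k_d = np.linspace(0.9, 1.1, 5)
m_d = np.linspace(9.0, 11.0, 5)
K, M = np.meshgrid(k_d, m_d)
K, M = K.ravel(), M.ravel()
F = freq(K, M)

# Quadratic response surface (meta-model) fitted by least squares.
basis = lambda k, m: np.column_stack([np.ones_like(k), k, m, k*m, k**2, m**2])
coef, *_ = np.linalg.lstsq(basis(K, M), F, rcond=None)
surrogate = lambda k, m: basis(k, m) @ coef

# Propagate normally distributed uncertainty on stiffness and mass.
n = 100_000
k_s = rng.normal(1.0, 0.03, n)
m_s = rng.normal(10.0, 0.3, n)
f_meta = surrogate(k_s, m_s)
f_true = freq(k_s, m_s)            # feasible here only because the model is cheap
print(f"meta-model: mean {f_meta.mean():.3f} Hz, std {f_meta.std():.4f}")
print(f"direct MC : mean {f_true.mean():.3f} Hz, std {f_true.std():.4f}")
```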
Mestrado
Mecânica dos Sólidos e Projeto Mecânico
Mestre em Engenharia Mecânica
APA, Harvard, Vancouver, ISO, and other styles
25

COLAGROSSI, MARCO. "META-ANALYSIS AND META-REGRESSION ANALYSIS IN ECONOMICS: METHODOLOGY AND APPLICATIONS." Doctoral thesis, Università Cattolica del Sacro Cuore, 2017. http://hdl.handle.net/10280/19697.

Full text
Abstract:
A partire dagli anni ’80, la diffusione dei metodi statistici, abbinata ai progressi nelle capacità computazionali dei personal computers, ha progressivamente facilitato i ricercatori nel testare empiricamente le proprie teorie. Gli economisti sono diventati in grado di eseguire milioni di regressioni prima di pranzo senza abbandonare le proprie scrivanie. Purtroppo, ciò ha portato ad un accumulo di evidenze spesso eterogenee, quando non contradditorie se non esplicitamente in conflitto. Per affrontare il problema, questa tesi fornirà una panoramica dei metodi meta-analitici disponibili in economia. Nella prima parte verranno introdotte le intuizioni alla base dei modelli gerarchici a fattori fissi e casuali capaci di risolvere le problematicità derivanti dalla presenza di osservazioni non indipendenti. Verrà inoltre affrontato il tema dell’errore sistematico di pubblicazione in presenza di elevata eterogeneità tra gli studi. La metodologia verrà successivamente applicata, nella seconda e terza parte, a due diverse aree della letteratura economica: l’impatto del rapporto banca-impresa sulle prestazioni aziendali e il dibattito sulla relazione fra democrazia e crescita. Mentre nel primo caso la correlazione negativa non è influenzata da fattori specifici ai singoli paesi, il contrario è vero per spiegare l’impatto (statisticamente non significativo) delle istituzioni democratiche sullo sviluppo economico. Quali siano questi fattori è però meno chiaro; gli studiosi non hanno ancora individuato le co-variate – o la corretta misurazione di esse – capaci di spiegare questa discussa relazione.
Starting in the late 1980s, improved computing performances and spread knowledge of statistical methods allowed researchers to put their theories to test. Formerly constrained economists became able [to] run millions of regressions before lunch without leaving their desks. Unfortunately, this led to an accumulation of often conflicting evidences. To address such issue, this thesis will provide an overview of the meta-analysis methods available in economics. The first paper will explain the intuitions behind fixed and random effects models in such a framework. It will then detail how multilevel modelling can help overcome hierarchical dependence issues. Finally, it will address the problem of publication bias in presence of high between-studies heterogeneity. Such methods will be then applied, in the second and third papers, to two different areas of the economics literature: the effect of relationship banking on firm performances and the democracy and growth conundrum. Results are far-reaching. While in the first case the documented negative relation is not driven by country-specific characteristics the opposite is true for the (statistically insignificant) impact of democratic institutions on economic growth. What these characteristics are is, however, less clear. Scholars have not yet found the covariates - or their suitable proxies - that matter to explain such much-debated relationship.
APA, Harvard, Vancouver, ISO, and other styles
26

COLAGROSSI, MARCO. "META-ANALYSIS AND META-REGRESSION ANALYSIS IN ECONOMICS: METHODOLOGY AND APPLICATIONS." Doctoral thesis, Università Cattolica del Sacro Cuore, 2017. http://hdl.handle.net/10280/19697.

Full text
Abstract:
A partire dagli anni ’80, la diffusione dei metodi statistici, abbinata ai progressi nelle capacità computazionali dei personal computers, ha progressivamente facilitato i ricercatori nel testare empiricamente le proprie teorie. Gli economisti sono diventati in grado di eseguire milioni di regressioni prima di pranzo senza abbandonare le proprie scrivanie. Purtroppo, ciò ha portato ad un accumulo di evidenze spesso eterogenee, quando non contradditorie se non esplicitamente in conflitto. Per affrontare il problema, questa tesi fornirà una panoramica dei metodi meta-analitici disponibili in economia. Nella prima parte verranno introdotte le intuizioni alla base dei modelli gerarchici a fattori fissi e casuali capaci di risolvere le problematicità derivanti dalla presenza di osservazioni non indipendenti. Verrà inoltre affrontato il tema dell’errore sistematico di pubblicazione in presenza di elevata eterogeneità tra gli studi. La metodologia verrà successivamente applicata, nella seconda e terza parte, a due diverse aree della letteratura economica: l’impatto del rapporto banca-impresa sulle prestazioni aziendali e il dibattito sulla relazione fra democrazia e crescita. Mentre nel primo caso la correlazione negativa non è influenzata da fattori specifici ai singoli paesi, il contrario è vero per spiegare l’impatto (statisticamente non significativo) delle istituzioni democratiche sullo sviluppo economico. Quali siano questi fattori è però meno chiaro; gli studiosi non hanno ancora individuato le co-variate – o la corretta misurazione di esse – capaci di spiegare questa discussa relazione.
Starting in the late 1980s, improved computing performances and spread knowledge of statistical methods allowed researchers to put their theories to test. Formerly constrained economists became able [to] run millions of regressions before lunch without leaving their desks. Unfortunately, this led to an accumulation of often conflicting evidences. To address such issue, this thesis will provide an overview of the meta-analysis methods available in economics. The first paper will explain the intuitions behind fixed and random effects models in such a framework. It will then detail how multilevel modelling can help overcome hierarchical dependence issues. Finally, it will address the problem of publication bias in presence of high between-studies heterogeneity. Such methods will be then applied, in the second and third papers, to two different areas of the economics literature: the effect of relationship banking on firm performances and the democracy and growth conundrum. Results are far-reaching. While in the first case the documented negative relation is not driven by country-specific characteristics the opposite is true for the (statistically insignificant) impact of democratic institutions on economic growth. What these characteristics are is, however, less clear. Scholars have not yet found the covariates - or their suitable proxies - that matter to explain such much-debated relationship.
APA, Harvard, Vancouver, ISO, and other styles
27

Zhuang, Cuili. "Meta-analysis of gene expression in mouse models of neurodegenerative disorders." Thesis, University of British Columbia, 2017. http://hdl.handle.net/2429/61320.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Palmer, Kurt D. "Data collection plans and meta models for chemical process flowsheet simulators." Diss., Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/24511.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Molesini, Ambra <1980&gt. "Meta-models, environment and layers: agent-oriented engineering of complex systems." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2008. http://amsdottorato.unibo.it/930/1/Tesi_Molesini_Ambra.pdf.

Full text
Abstract:
Traditional software engineering approaches and metaphors fall short when applied to areas of growing relevance such as electronic commerce, enterprise resource planning, and mobile computing: such areas, in fact, generally call for open architectures that may evolve dynamically over time so as to accommodate new components and meet new requirements. This is probably one of the main reasons that the agent metaphor and the agent-oriented paradigm are gaining momentum in these areas. This thesis deals with the engineering of complex software systems in terms of the agent paradigm. This paradigm is based on the notions of agent and systems of interacting agents as fundamental abstractions for designing, developing and managing at runtime typically distributed software systems. However, today the engineer often works with technologies that do not support the abstractions used in the design of the systems. For this reason the research on methodologies becomes the basic point in the scientific activity. Currently most agent-oriented methodologies are supported by small teams of academic researchers, and as a result, most of them are in an early stage and still in the first context of mostly "academic" approaches for agent-oriented systems development. Moreover, such methodologies are not well documented and very often defined and presented only by focusing on specific aspects of the methodology. The role played by meta-models becomes fundamental for comparing and evaluating the methodologies. In fact a meta-model specifies the concepts, rules and relationships used to define methodologies. Although it is possible to describe a methodology without an explicit meta-model, formalising the underpinning ideas of the methodology in question is valuable when checking its consistency or planning extensions or modifications. A good meta-model must address all the different aspects of a methodology, i.e. the process to be followed, the work products to be generated and those responsible for making all this happen. In turn, specifying the work products that must be developed implies defining the basic modelling building blocks from which they are built. As a building block, the agent abstraction alone is not enough to fully model all the aspects related to multi-agent systems in a natural way. In particular, different perspectives exist on the role that environment plays within agent systems: however, it is clear at least that all non-agent elements of a multi-agent system are typically considered to be part of the multi-agent system environment. The key role of environment as a first-class abstraction in the engineering of multi-agent systems is today generally acknowledged in the multi-agent system community, so environment should be explicitly accounted for in the engineering of multi-agent systems, working as a new design dimension for agent-oriented methodologies. At least two main ingredients shape the environment: environment abstractions - entities of the environment encapsulating some functions - and topology abstractions - entities of environment that represent the (either logical or physical) spatial structure. In addition, the engineering of non-trivial multi-agent systems requires principles and mechanisms for supporting the management of the system representation complexity. These principles lead to the adoption of a multi-layered description, which could be used by designers to provide different levels of abstraction over multi-agent systems.
The research in these fields has lead to the formulation of a new version of the SODA methodology where environment abstractions and layering principles are exploited for en- gineering multi-agent systems.
APA, Harvard, Vancouver, ISO, and other styles
30

Molesini, Ambra <1980>. "Meta-models, environment and layers: agent-oriented engineering of complex systems." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2008. http://amsdottorato.unibo.it/930/.

Full text
Abstract:
Traditional software engineering approaches and metaphors fall short when applied to areas of growing relevance such as electronic commerce, enterprise resource planning, and mobile computing: such areas, in fact, generally call for open architectures that may evolve dynamically over time so as to accommodate new components and meet new requirements. This is probably one of the main reasons why the agent metaphor and the agent-oriented paradigm are gaining momentum in these areas. This thesis deals with the engineering of complex software systems in terms of the agent paradigm. This paradigm is based on the notions of agent and systems of interacting agents as fundamental abstractions for designing, developing and managing typically distributed software systems at runtime. However, today the engineer often works with technologies that do not support the abstractions used in the design of the systems. For this reason, research on methodologies becomes a central point of scientific activity. Currently most agent-oriented methodologies are supported by small teams of academic researchers, and as a result, most of them are in an early stage and still largely confined to mostly "academic" approaches to agent-oriented systems development. Moreover, such methodologies are not well documented and are very often defined and presented by focusing only on specific aspects of the methodology. The role played by meta-models becomes fundamental for comparing and evaluating the methodologies. In fact, a meta-model specifies the concepts, rules and relationships used to define methodologies. Although it is possible to describe a methodology without an explicit meta-model, formalising the underpinning ideas of the methodology in question is valuable when checking its consistency or planning extensions or modifications. A good meta-model must address all the different aspects of a methodology, i.e. the process to be followed, the work products to be generated, and those responsible for making all this happen. In turn, specifying the work products that must be developed implies defining the basic modelling building blocks from which they are built. As a building block, the agent abstraction alone is not enough to fully model all the aspects related to multi-agent systems in a natural way. In particular, different perspectives exist on the role that environment plays within agent systems; however, it is clear at least that all non-agent elements of a multi-agent system are typically considered to be part of the multi-agent system environment. The key role of environment as a first-class abstraction in the engineering of multi-agent systems is today generally acknowledged in the multi-agent systems community, so environment should be explicitly accounted for in the engineering of multi-agent systems, working as a new design dimension for agent-oriented methodologies. At least two main ingredients shape the environment: environment abstractions (entities of the environment encapsulating some functions) and topology abstractions (entities of the environment that represent the logical or physical spatial structure). In addition, the engineering of non-trivial multi-agent systems requires principles and mechanisms for supporting the management of the system representation complexity. These principles lead to the adoption of a multi-layered description, which could be used by designers to provide different levels of abstraction over multi-agent systems.
The research in these fields has led to the formulation of a new version of the SODA methodology, where environment abstractions and layering principles are exploited for engineering multi-agent systems.
APA, Harvard, Vancouver, ISO, and other styles
31

Schulze, Frank. "Meta Modelle - Neue Planungswerkzeuge für Materialflußsysteme." Technische Universität Dresden, 1999. https://tud.qucosa.de/id/qucosa%3A29807.

Full text
Abstract:
Meta-models are computational models that approximately describe or reproduce the behaviour of technical systems. They are derived from observations of simulation models of those technical systems; they are therefore models of models, that is, meta-models. Meta-models differ fundamentally from analytical approaches to system description. Whereas analytical approaches reproduce the actual structure of the system under consideration in their mathematical form, meta-models are always approximations. The advantage of meta-models lies in their simple form: they are easy to build and to apply. Their disadvantage is the only approximate and, in some cases, incomplete description of the system behaviour. In the following, the construction of meta-models is illustrated using a service (queueing) system. First, the possibilities of an analytical description are assessed. Then two different kinds of meta-models, polynomials and neural networks, are presented, and the possibilities and limits of these forms of representing system behaviour are discussed. Finally, practical fields of application of meta-models in material flow planning and simulation are outlined.
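To make the idea concrete, here is a minimal Python sketch of a polynomial meta-model, one of the two forms discussed above: a cheap analytic stand-in replaces the discrete-event simulation of the service system, a handful of "simulation runs" are observed at chosen design points, and a cubic polynomial is fitted to them. All numbers and function names are illustrative, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_mean_wait(utilization, service_time=1.0, noise_sd=0.05):
    """Stand-in for a discrete-event simulation of a single-server queue:
    the analytic M/M/1 mean waiting time plus observation noise."""
    w = utilization * service_time / (1.0 - utilization)
    return w + rng.normal(0.0, noise_sd)

# "Experiments" at a handful of design points, as one would run the simulator
rho_design = np.linspace(0.1, 0.8, 8)
wait_obs = np.array([simulate_mean_wait(r) for r in rho_design])

# Meta-model: a cubic polynomial fitted to the observed responses
meta_model = np.poly1d(np.polyfit(rho_design, wait_obs, deg=3))

# The meta-model can now answer "what if" questions cheaply
for rho in (0.25, 0.5, 0.75):
    approx = meta_model(rho)
    exact = rho / (1.0 - rho)
    print(f"rho={rho:.2f}  meta-model={approx:.3f}  analytic={exact:.3f}")
```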
APA, Harvard, Vancouver, ISO, and other styles
32

Sena, Emily Shamiso. "Systematic review and meta-analysis of animal models of acute ischaemic stroke." Thesis, University of Edinburgh, 2010. http://hdl.handle.net/1842/4824.

Full text
Abstract:
Ischaemic stroke is responsible for substantial death and disability and creates a huge financial burden for healthcare budgets worldwide. At present there are few effective treatments for acute stroke and these are urgently required. Increased understanding of the ischaemic cascade has generated interest in neuroprotection for focal cerebral ischaemia. However, treatment effects observed for over 500 interventions in animal models have yet to be translated to the clinic. Systematic review and meta-analysis allows unbiased identification of all relevant data for a given intervention and gives a clearer view of its true efficacy and of the limitations to its therapeutic potential. Understanding the reasons for this bench-to-bedside failure and providing quantitative explanations may help to address these discrepancies. Random effects weighted mean difference meta-analyses of six interventions (tirilazad, tPA, NXY-059, hypothermia, piracetam and IL1-RA) found reported study quality to be consistently low. In some instances, potential sources of bias were associated with overestimations of efficacy. Likewise, clinical trials have tested interventions under conditions where efficacy was not observed in animals. Cumulative meta-analysis suggests that for tPA the estimate of efficacy is stable after the inclusion of data from 1500 animals; hypothermia and FK506 are the only other interventions to have been tested in at least 1500 animals. Meta-regression suggests that biological rather than methodological factors are better predictors of outcome. A major limitation of these data is the impact of publication bias, and this work suggests that effect sizes from meta-analyses are inflated by about 31% because 16% of studies remain unpublished. The systematic review and meta-analysis of hypothermia was used to plan experiments investigating the possible impact of pethidine, a drug used to prevent shivering. This in vivo experiment, in which potential sources of bias were minimised, suggests that pethidine does not influence the observed efficacy of hypothermia in an animal model of ischaemic stroke. This thesis reports that animal studies of ischaemic stroke are often not conducted with sufficient rigour. Both minimising potential sources of bias in individual experiments and using meta-analysis to summarise data from a number of experiments may be helpful in improving the translation of neuroprotective efficacy in ischaemic stroke.
APA, Harvard, Vancouver, ISO, and other styles
33

Siangphoe, Umaporn. "META-ANALYSIS OF GENE EXPRESSION STUDIES." VCU Scholars Compass, 2015. http://scholarscompass.vcu.edu/etd/4040.

Full text
Abstract:
Combining effect sizes from individual studies using random-effects models is a common approach in high-dimensional gene expression data. However, unknown study heterogeneity can arise from inconsistency of sample qualities and experimental conditions. High heterogeneity of effect sizes can reduce the statistical power of the models. We proposed two new methods for random effects estimation and measures of model variation and of the strength of study heterogeneity. We then developed a statistical technique to test for the significance of random effects and identify heterogeneous genes. We also proposed another meta-analytic approach that incorporates informative weights in the random effects meta-analysis models. We compared the proposed methods with standard and existing meta-analytic techniques in the classical and Bayesian frameworks. We demonstrate our results through a series of simulations and an application to gene expression data from neurodegenerative diseases.
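For orientation, the following sketch shows the standard random-effects machinery the abstract builds on, applied gene-by-gene to simulated data: inverse-variance pooling, Cochran's Q and I² as heterogeneity measures, and the usual DerSimonian-Laird moment estimator of the between-study variance. It illustrates the baseline methods only, not the new estimators or informative weights proposed in the dissertation; all data are simulated.

```python
import numpy as np

rng = np.random.default_rng(1)
n_genes, n_studies = 1000, 6

# Simulated per-study effect sizes (rows: genes, cols: studies) and their variances
true_effect = rng.normal(0.0, 0.3, size=(n_genes, 1))
v = rng.uniform(0.02, 0.10, size=(n_genes, n_studies))   # within-study variances
y = true_effect + rng.normal(0.0, np.sqrt(v))             # observed effects

w = 1.0 / v                                                # fixed-effect weights
mu_fixed = (w * y).sum(axis=1) / w.sum(axis=1)

# Cochran's Q and I^2 quantify between-study heterogeneity for each gene
Q = (w * (y - mu_fixed[:, None]) ** 2).sum(axis=1)
df = n_studies - 1
I2 = np.clip((Q - df) / Q, 0.0, 1.0)

# DerSimonian-Laird between-study variance and random-effects pooled estimate
c = w.sum(axis=1) - (w ** 2).sum(axis=1) / w.sum(axis=1)
tau2 = np.clip((Q - df) / c, 0.0, None)
w_star = 1.0 / (v + tau2[:, None])
mu_random = (w_star * y).sum(axis=1) / w_star.sum(axis=1)

print("genes with I^2 > 50%:", int((I2 > 0.5).sum()))
print("example pooled effect (fixed vs random):", mu_fixed[0], mu_random[0])
```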
APA, Harvard, Vancouver, ISO, and other styles
34

Martins, Rogério Mendonça [UNESP]. "Estudo do crescimento da cana-de-açúcar através da meta-análise." Universidade Estadual Paulista (UNESP), 2001. http://hdl.handle.net/11449/90617.

Full text
Abstract:
Mathematics and statistics are used as working tools to better explain research results. In the agronomic field, several studies have been carried out, arriving at different mathematical models. This work aims to verify the influence of vinasse application on sugarcane production using statistical models, based on data from published articles summarised by the procedures established in meta-analysis. It was possible to fit the Mitscherlich model and a third-degree polynomial regression to the yield data from experiments conducted on sandy soil; for the clay soil only a third-degree model could be fitted. The choice of the model that best fits these results is made through meta-analysis.
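As an illustration of the model fitting described above, the sketch below fits a Mitscherlich response curve, y = A(1 - exp(-c(x + b))), and a third-degree polynomial to hypothetical dose-yield data; the numbers are invented placeholders, not the vinasse data analysed in the thesis.

```python
import numpy as np
from scipy.optimize import curve_fit

def mitscherlich(x, A, b, c):
    """Mitscherlich response curve: asymptote A, shift b, rate c."""
    return A * (1.0 - np.exp(-c * (x + b)))

# Hypothetical vinasse doses (m3/ha) and sugarcane yields (t/ha); illustrative only
dose = np.array([0.0, 50.0, 100.0, 150.0, 200.0, 250.0, 300.0])
yield_t = np.array([70.0, 88.0, 99.0, 106.0, 110.0, 112.0, 113.0])

popt, pcov = curve_fit(mitscherlich, dose, yield_t, p0=[115.0, 90.0, 0.01])
A, b, c = popt
print(f"Mitscherlich fit: A={A:.1f}  b={b:.1f}  c={c:.4f}")

# A third-degree polynomial, as fitted for the clay-soil data, for comparison
poly = np.poly1d(np.polyfit(dose, yield_t, deg=3))
print("cubic prediction at 175 m3/ha:", round(poly(175.0), 1))
```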
APA, Harvard, Vancouver, ISO, and other styles
35

Egan, Kieren. "Systematic review and meta-analysis of transgenic mouse models of Alzheimer's disease." Thesis, University of Edinburgh, 2014. http://hdl.handle.net/1842/9538.

Full text
Abstract:
The increasing prevalence of Alzheimer's disease poses a considerable socioeconomic challenge in the years ahead. There are few clinical treatments available and none capable of halting or slowing the progressive nature of the condition. Despite decades of experimental research and testing over 300 interventions in transgenic mouse models of the condition, clinical success has remained elusive. Deepening our understanding of how such studies have been conducted is likely to provide insights which could inform future preclinical and clinical research. Therefore I performed a systematic review and meta-analysis of interventions tested in transgenic mouse models of Alzheimer's disease. My systematic search was performed by electronically searching for publications reporting the efficacy of interventions tested in transgenic models of Alzheimer's disease. Across these publications I extracted data regarding study characteristics and reported study quality alongside outcome data for pathology (i.e. plaque burden, amyloid beta species, tau, cellular infiltrates and neurodegeneration) and neurobehaviour. From these data I calculated estimates of efficacy using random effects meta-analysis and subsequently investigated the potential impact of study quality and study characteristics on observed effect size. My search identified 427 publications, 357 interventions and 55 transgenic models representing 11,688 animals and 1774 experiments. There were a number of principal concerns regarding the dataset: (i) the reported study quality of such studies was relatively low; less than 1 in 5 publications reported blinded assessment of outcome or random allocation to group, and no studies reported a sample size calculation, (ii) the depth of data on any individual intervention was relatively poor: only 16 interventions had outcomes described in 5 or more publications, and (iii) publication bias analyses suggested 1 in 5 pathological and 1 in 7 neurobehavioural experiments remain unpublished. Where I inspected relationships between outcomes, meta-regression identified a number of notable associations. Changes in amyloid beta 40 were reflective of changes in amyloid beta 42 (R2 = 0.84, p<0.01), and within the Morris water maze changes in the 'training' acquisition phase could explain 44% of the changes in the probe 'test' phase (p<0.05). Additionally, I identified measures of neurodegeneration as the best pathological predictors of changes in neurobehaviour (R2 = 0.72, p<0.01). Collectively this work identifies a number of potential weaknesses within in vivo modelling of Alzheimer's disease and demonstrates how the use of empirical data can inform both preclinical and clinical studies.
APA, Harvard, Vancouver, ISO, and other styles
36

Wang, Zeyu. "Reliability Analysis and Updating with Meta-models: An Adaptive Kriging-Based Approach." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1574789534726544.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Lamb, Kevin Vieira. "Analyzing Metacommunity Models with Statistical Variance Partitioning: A Review and Meta-Analysis." BYU ScholarsArchive, 2020. https://scholarsarchive.byu.edu/etd/9248.

Full text
Abstract:
The relative importance of deterministic processes versus chance is one of the most important questions in science. We analyze the success of variance partitioning (VP) methods used to explain variation in β-diversity and partition it into environmental, spatial, and spatially structured environmental components. We test the hypotheses that 1) the number of environmental descriptors in a study would be positively correlated with the percentage of β-diversity explained by the environment, and that the environment would explain more variation in β-diversity than spatial or shared factors in VP analyses, 2) increasing the complexity of environmental descriptors would help account for more of the total variation in β-diversity, and 3) studies based on functional groups would account for more of the total variation in β-diversity than studies based on taxonomic data. Results show that the amount of unexplained β-diversity is on average 65.6%. There was no evidence that the number of environmental descriptors, increased complexity of environmental descriptors, or the use of functional diversity allowed researchers to account for more variation in β-diversity. We review the characteristics of studies that account for a large percentage of variation in β-diversity, as well as explanations for studies that accounted for little variation in β-diversity.
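The arithmetic behind variance partitioning can be sketched as follows: the response is regressed on the environmental matrix, the spatial matrix, and both together, and the three R² values are combined into pure environmental [a], spatially structured environmental [b], pure spatial [c], and unexplained [d] fractions. This toy version uses a single response and ordinary R² rather than the community matrices and adjusted R² of a full analysis; all data and variable names are simulated for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200

# Invented predictors: environmental variables partly structured in space
space = rng.normal(size=(n, 2))                      # e.g. spatial eigenvector axes
env = 0.6 * space[:, :1] + 0.8 * rng.normal(size=(n, 3))
beta_div = 1.5 * env[:, 0] + 0.8 * space[:, 1] + rng.normal(size=n)

def r2(X, y):
    """Ordinary-least-squares R^2 of y on X (with intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ coef
    return 1.0 - resid.var() / y.var()

r2_env = r2(env, beta_div)                              # [a] + [b]
r2_space = r2(space, beta_div)                          # [b] + [c]
r2_both = r2(np.column_stack([env, space]), beta_div)   # [a] + [b] + [c]

a = r2_both - r2_space            # pure environment
c = r2_both - r2_env              # pure space
b = r2_env + r2_space - r2_both   # spatially structured environment
d = 1.0 - r2_both                 # unexplained

print(f"[a] env={a:.2f}  [b] shared={b:.2f}  [c] space={c:.2f}  [d] unexplained={d:.2f}")
```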
APA, Harvard, Vancouver, ISO, and other styles
38

Weng, Chin-Fang. "Fixed versus mixed parameterization in logistic regression models application to meta-analysis /." College Park, Md.: University of Maryland, 2008. http://hdl.handle.net/1903/8985.

Full text
Abstract:
Thesis (M.A.) -- University of Maryland, College Park, 2008.
Thesis research directed by: Dept. of Mathematics. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
APA, Harvard, Vancouver, ISO, and other styles
39

Asthorsson, Axel. "Simulation meta-modeling of complex industrial production systems using neural networks." Thesis, University of Skövde, School of Humanities and Informatics, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-1036.

Full text
Abstract:

Simulations are widely used for analysis and design of complex systems. Real-world complex systems are often too complex to be expressed with tractable mathematical formulations. Therefore simulations are often used instead of mathematical formulations because of their flexibility and ability to model real-world complex systems in some detail. Simulation models can often be complex and slow, which has led to the development of simulation meta-models: simpler and faster models of complex simulation models. Artificial neural networks (ANNs) have been studied for use as simulation meta-models with varying results. This final year project further studies the use of ANNs as simulation meta-models by comparing the predictive performance of five different neural network architectures: feed-forward, generalized feed-forward, modular, radial basis, and Elman artificial neural networks, where the underlying simulation is of a complex production system. The results were that all architectures gave acceptable results, although Elman and feed-forward ANNs performed best in the tests conducted here. The differences in accuracy and generalization were small.
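A minimal sketch of one of the architectures compared in the thesis, a feed-forward network used as a simulation meta-model, assuming scikit-learn is available; the "simulation" is replaced by a cheap analytic stand-in, and the network size and other settings are illustrative choices, not those of the thesis.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

# Stand-in for a slow production-line simulation: throughput as a nonlinear
# function of two decision variables (buffer size, machine rate) plus noise
X = rng.uniform([1.0, 0.5], [20.0, 2.0], size=(400, 2))
y = 10.0 * np.tanh(0.2 * X[:, 0]) * X[:, 1] + rng.normal(0.0, 0.2, size=400)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Feed-forward network as a simulation meta-model
meta = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0),
)
meta.fit(X_train, y_train)

print("held-out R^2 of the neural-network meta-model:",
      round(meta.score(X_test, y_test), 3))
```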

APA, Harvard, Vancouver, ISO, and other styles
40

Martins, Rogério Mendonça 1968. "Estudo do crescimento da cana-de-açúcar através da meta-análise /." Botucatu : [s.n.], 2001. http://hdl.handle.net/11449/90617.

Full text
Abstract:
Advisor: Sheila Zambello de Pinho
Abstract: Mathematics and statistics are used as working tools to better explain research results. In the agronomic field, several studies have been carried out, arriving at different mathematical models. This work aims to verify the influence of vinasse application on sugarcane production using statistical models, based on data from published articles summarised by the procedures established in meta-analysis. It was possible to fit the Mitscherlich model and a third-degree polynomial regression to the yield data from experiments conducted on sandy soil; for the clay soil only a third-degree model could be fitted. The choice of the model that best fits these results is made through meta-analysis.
Master's degree
APA, Harvard, Vancouver, ISO, and other styles
41

Ovidiu, Parvu. "Computational model validation using a novel multiscale multidimensional spatio-temporal meta model checking approach." Thesis, Brunel University, 2016. http://bura.brunel.ac.uk/handle/2438/11863.

Full text
Abstract:
Computational models of complex biological systems can provide a better understanding of how living systems function but need to be validated before they are employed for real-life (e.g. clinical) applications. One of the most frequently employed in silico approaches for validating such models is model checking. Traditional model checking approaches are limited to uniscale non-spatial computational models because they do not explicitly distinguish between different scales, and do not take properties of (emergent) spatial structures (e.g. density of a multicellular population) into account. This thesis defines a novel multiscale multidimensional spatio-temporal meta model checking methodology which enables validating multiscale (spatial) computational models of biological systems relative to how both numeric (e.g. concentrations) and spatial system properties are expected to change over time and across multiple scales. The methodology has two important advantages. First, it supports computational models encoded using various high-level modelling formalisms because it is defined relative to time series data and not the models used to produce them. Secondly, the methodology is generic because it can be automatically reconfigured according to case study specific types of spatial structures and properties using the meta model checking approach. In addition, the methodology could be employed for multiple domains of science, but we illustrate its applicability here only against biological case studies. To automate the computational model validation process, the approach was implemented in software tools, which are made freely available online. Their efficacy is illustrated against two uniscale quantitative computational models, encoding phase variation in bacterial colonies and the chemotactic aggregation of cells, and four multiscale models, encoding the rat cardiovascular system dynamics, the uterine contractions of labour, the Xenopus laevis cell cycle and the acute inflammation of the gut and lung. This novel model checking approach will enable the efficient construction of reliable multiscale computational models of complex systems.
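As a toy illustration of validating a model against how a numeric property is expected to change over time (not the spatio-temporal meta model checking methodology itself), the sketch below evaluates bounded "eventually" and "globally" operators over a simulated density time series; all names, formulas and thresholds are invented.

```python
import numpy as np

# Simulated time series from a hypothetical multicellular model:
# cell-population density sampled once per hour for 24 hours
t = np.arange(0, 25)
density = 0.1 + 0.9 / (1.0 + np.exp(-(t - 10) / 2.0))   # logistic growth

def eventually(series, predicate, t_index, lo, hi):
    """Bounded 'eventually': predicate holds at some time point in [lo, hi]."""
    window = series[(t_index >= lo) & (t_index <= hi)]
    return bool(np.any(predicate(window)))

def globally(series, predicate, t_index, lo, hi):
    """Bounded 'globally': predicate holds at every time point in [lo, hi]."""
    window = series[(t_index >= lo) & (t_index <= hi)]
    return bool(np.all(predicate(window)))

# "Within the first 15 hours the density exceeds 0.8, and it stays above 0.7
# from hour 15 until the end of the simulation."
spec_holds = (eventually(density, lambda d: d > 0.8, t, 0, 15)
              and globally(density, lambda d: d > 0.7, t, 15, 24))
print("specification satisfied:", spec_holds)
```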
APA, Harvard, Vancouver, ISO, and other styles
42

Brody-Moore, Peter. "Bayesian Hierarchical Meta-Analysis of Asymptomatic Ebola Seroprevalence." Scholarship @ Claremont, 2019. https://scholarship.claremont.edu/cmc_theses/2228.

Full text
Abstract:
The continued study of asymptomatic Ebolavirus infection is necessary to develop a more complete understanding of Ebola transmission dynamics. This paper conducts a meta-analysis of eight studies that measure seroprevalence (the number of subjects that test positive for anti-Ebolavirus antibodies in their blood) in subjects with household exposure or known case-contact with Ebola, but that have shown no symptoms. In our two random effects Bayesian hierarchical models, we find estimated seroprevalences of 8.76% and 9.72%, significantly higher than the 3.3% found by a previous meta-analysis of these eight studies. We also produce a variation of this meta-analysis where we exclude two of the eight studies. In this model, we find an estimated seroprevalence of 4.4%, much lower than our first two Bayesian hierarchical models. We believe a random effects model more accurately reflects the heterogeneity between studies and thus asymptomatic Ebola is more seroprevalent than previously believed among subjects with household exposure or known case-contact. However, a strong conclusion cannot be reached on the seriousness of asymptomatic Ebola without an international testing standard and more data collection using this adopted standard.
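A minimal sketch of a random-effects Bayesian hierarchical model of the kind described, assuming PyMC is installed: each study's seroprevalence varies on the logit scale around a common mean, and the pooled prevalence is the inverse logit of that mean. The counts below are invented placeholders, not the data from the eight studies.

```python
import numpy as np
import pymc as pm

# Placeholder study data: number seropositive and number tested per study
positives = np.array([4, 7, 2, 11, 5, 3, 9, 6])
tested = np.array([60, 90, 40, 120, 70, 50, 110, 80])

with pm.Model() as random_effects_model:
    mu = pm.Normal("mu", mu=0.0, sigma=2.0)              # mean log-odds of seroprevalence
    tau = pm.HalfNormal("tau", sigma=1.0)                 # between-study heterogeneity
    theta = pm.Normal("theta", mu=mu, sigma=tau, shape=len(tested))
    p = pm.Deterministic("p", pm.math.invlogit(theta))    # study-level seroprevalences
    pm.Binomial("y", n=tested, p=p, observed=positives)
    pooled = pm.Deterministic("pooled", pm.math.invlogit(mu))
    idata = pm.sample(2000, tune=2000, target_accept=0.95, random_seed=0)

print("posterior mean pooled seroprevalence:",
      float(idata.posterior["pooled"].mean()))
```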
APA, Harvard, Vancouver, ISO, and other styles
43

Mansour, Asmaa. "Modeling outcome estimates in meta-analysis using fixed and mixed effects linear models." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape11/PQDD_0001/MQ44216.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Mansour, Asmaâ. "Modeling outcome estimates in meta-analysis using fixed and mixed effects linear models." Thesis, McGill University, 1998. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=20585.

Full text
Abstract:
The main objective of this thesis is to present a quantitative method for modeling data collected from different studies on the same research topic. This quantitative method is called meta-analysis.
The first step of a meta-analysis is the literature search, conducted using computerized and manual search strategies to identify relevant studies. The second step is the data abstraction from the relevant papers. In general, at least two independent raters systematically abstract the information, and an interrater reliability check is performed.
The next step is the quantitative analysis of the abstracted data. For this purpose, it is possible to use either a fixed or a mixed effects linear model. Under the fixed effects model, only the variability due to sampling error is considered. In contrast, under the mixed effects model, an additional random effects variance is considered. Both the method of moments and the method of maximum likelihood can be used to estimate the parameters of the model.
Finally, the use of the above-mentioned models and methods of estimation is illustrated with a data set on the prognosis of depression in the elderly, made available by Dr. Martin Cole from the Department of Psychiatry at St. Mary's Hospital Center in Montreal.
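To make the contrast concrete, the sketch below pools toy study-level estimates under the fixed effects model, then estimates the additional random effects variance both by the method of moments (the DerSimonian-Laird estimator) and by a maximum likelihood fixed-point iteration; the numbers are illustrative only, not the depression-prognosis data used in the thesis.

```python
import numpy as np

# Toy per-study effect estimates (e.g. log odds ratios) and within-study variances
y = np.array([0.40, 0.12, 0.55, -0.05, 0.30, 0.21])
v = np.array([0.04, 0.09, 0.06, 0.12, 0.05, 0.08])

# Fixed effects model: only sampling error, inverse-variance weights
w = 1.0 / v
mu_fixed = np.sum(w * y) / np.sum(w)

# Method of moments (DerSimonian-Laird) estimate of the random effects variance
Q = np.sum(w * (y - mu_fixed) ** 2)
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2_mm = max(0.0, (Q - (len(y) - 1)) / c)

# Maximum likelihood estimate of the same variance, by fixed-point iteration
tau2_ml = tau2_mm
for _ in range(200):
    w_star = 1.0 / (v + tau2_ml)
    mu_ml = np.sum(w_star * y) / np.sum(w_star)
    tau2_ml = max(0.0, np.sum(w_star ** 2 * ((y - mu_ml) ** 2 - v)) / np.sum(w_star ** 2))

w_re = 1.0 / (v + tau2_mm)
mu_random = np.sum(w_re * y) / np.sum(w_re)

print(f"fixed-effects mu = {mu_fixed:.3f}")
print(f"tau^2 (moments) = {tau2_mm:.3f}, tau^2 (ML) = {tau2_ml:.3f}")
print(f"random-effects mu = {mu_random:.3f}")
```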
APA, Harvard, Vancouver, ISO, and other styles
45

FILGUEIRAS, ALBERTO. "NEURAL BASIS OF PHONOLOGICAL WORKING MEMORY: TESTING THEORETICAL MODELS USING FMRI META-ANALYSIS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2015. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=26568@1.

Full text
Abstract:
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
CONSELHO NACIONAL DE DESENVOLVIMENTO CIENTÍFICO E TECNOLÓGICO
Phonological working memory can be defined as a group of mental processes used to encode, store, maintain, manipulate and retrieve auditory information. It is the foundation of other higher and more complex cognitive functions such as planning, task switching, logical and abstract reasoning, and language. Some evidence shows a relationship between the development of phonological working memory and later language acquisition and general fluid intelligence. Contemporary anthropology discusses the role of working memory as a rudimentary form of thought and its consequences for the development of tools and culture among hominids. It is accepted that the expansion of the frontal region of the skull made room for new cortical formations in the brain, especially in the frontal lobe. The prefrontal cortex is believed to play an important role in working memory tasks. At the same time, working memory is a relatively recent psychological discovery, and several authors propose different theoretical models to explain it. Among the most important, Alan Baddeley, Nelson Cowan and Adele Diamond are those whose theories are the most studied and implemented by researchers testing them. Studying the neural basis of phonological working memory can help shed light on both points: the role of the prefrontal cortex in human evolution, especially in working memory functioning, and which theoretical model is the most reliable from a neuropsychological perspective. To do this, we conducted a meta-analysis using the activation likelihood estimation method and discuss the results in the light of modern evolutionary and cognitive psychology.
Phonological working memory can be defined as a set of mental processes that encode, store, maintain, manipulate, and retrieve auditory information. It is the foundation for other complex and higher cognitive functions, such as planning, task switching, logical and abstract reasoning, and language. Some evidence shows a relationship between the development of phonological working memory and further language acquisition and general fluid intelligence. Current neuroscience discusses the networks and brain regions that account for working memory. Working memory relies on a parietal-frontal network that is divided according to memory and attention. It has been hypothesized that the prefrontal cortex plays an important role in working memory tasks. Working memory is a relatively recent psychological discovery, and several authors suggest different theoretical models to explain it. Among the most important are those proposed by Alan Baddeley, Nelson Cowan, and Adele Diamond, which have been the most studied and implemented in attempts to test their hypotheses. Studying the neural basis of phonological working memory will help shed light on the organization and location of mnemonic and attentional functions in the brain. The present study comprised a meta-analysis of functional magnetic resonance imaging studies on phonological working memory that were published between 2000 and 2014. The results showed that one region in the temporal lobe and another region in the fronto-polar cortex were clustered intersections of phonological working memory, suggesting that these brains regions may account for sensorial memory and the central executive, respectively.
APA, Harvard, Vancouver, ISO, and other styles
46

Pissini, Carla Fernanda. "Aplicações em meta-análise sob um enfoque bayesiano usando dados médicos." Universidade Federal de São Carlos, 2006. https://repositorio.ufscar.br/handle/ufscar/4593.

Full text
Abstract:
Financiadora de Estudos e Projetos
In this work, we consider the use of meta-analysis under a Bayesian approach. Meta-analysis is a statistical technique that combines the results of different independent studies with the purpose of reaching general conclusions. The term was introduced by Glass (1976) and is used when the number of studies on a given research question is small. The models proposed for meta-analysis usually involve many parameters, and the Bayesian approach using MCMC (Markov Chain Monte Carlo) methods is an appropriate alternative for combining information from independent studies; the use of hierarchical Bayesian models allows several independent studies to be combined to obtain accurate inferences about a given treatment. As a numerical illustration, we consider real medical data sets from different studies and, in the analysis, use fixed and random effects models and mixtures of normal distributions for the error of the regression model. In a further application we relate meta-analysis and education, through the effect of teacher expectation on students' IQ.
APA, Harvard, Vancouver, ISO, and other styles
47

Jayatilleke, Gaya Buddhinath, and buddhinath@gmail com. "A Model Driven Component Agent Framework for Domain Experts." RMIT University. Computer Science and Information Technology, 2007. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20080222.162529.

Full text
Abstract:
Industrial software systems are becoming more complex with a large number of interacting parts distributed over networks. Due to the inherent complexity in the problem domains, most such systems are modified over time to incorporate emerging requirements, making incremental development a suitable approach for building complex systems. In domain specific systems it is the domain experts as end users who identify improvements that better suit their needs. Examples include meteorologists who use weather modeling software, engineers who use control systems and business analysts in business process modeling. Most domain experts are not fluent in systems programming and changes are realised through software engineers. This process hinders the evolution of the system, making it time consuming and costly. We hypothesise that if domain experts are empowered to make some of the system changes, it would greatly ease the evolutionary process, thereby making the systems more effective. Agent Oriented Software Engineering (AOSE) is seen as a natural fit for modeling and implementing distributed complex systems. With concepts such as goals and plans, agent systems support easy extension of functionality that facilitates incremental development. Further agents provide an intuitive metaphor that works at a higher level of abstraction compared to the object oriented model. However agent programming is not at a level accessible to domain experts to capitalise on its intuitiveness and appropriateness in building complex systems. We propose a model driven development approach for domain experts that uses visual modeling and automated code generation to simplify the development and evolution of agent systems. Our approach is called the Component Agent Framework for domain-Experts (CAFnE), which builds upon the concepts from Model Driven Development and the Prometheus agent software engineering methodology. CAFnE enables domain experts to work with a graphical representation of the system , which is easier to understand and work with than textual code. The model of the system, updated by domain experts, is then transformed to executable code using a transformation function. CAFnE is supported by a proof-of-concept toolkit that implements the visual modeling, model driven development and code generation. We used the CAFnE toolkit in a user study where five domain experts (weather forecasters) with no prior experience in agent programming were asked to make changes to an existing weather alerting system. Participants were able to rapidly become familiar with CAFnE concepts, comprehend the system's design, make design changes and implement them using the CAFnE toolkit.
APA, Harvard, Vancouver, ISO, and other styles
48

Scheithauer, Gregor [Verfasser] [Akademischer Betreuer]. "A Service Description Method for Service Ecosystems - Meta Models, Modeling Notations, and Model Transformations / Gregor Scheithauer. Betreuer: Gregor Scheithauer." Bamberg : Universitätsbibliothek der Universität Bamberg, 2011. http://d-nb.info/1014896738/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Propst, Michael David. "Applying Linear Regression and Neural Network Meta-Models for Evolutionary Algorithm Based Simulation Optimization." NCSU, 2009. http://www.lib.ncsu.edu/theses/available/etd-08112009-164218/.

Full text
Abstract:
The increase in computing power over the last decade has led to an increase in the use of simulation programs to model real-world optimization problems, as well as in the complexity with which these problems can be modeled. Once a model has been built, an experimental design is often used to determine the effects certain parameters have on the problem and to find good settings that optimize a set of outputs. However, these problems often have a large number of variables or parameters that can be changed, with wide value ranges, and as simulation models become increasingly complex they become computationally expensive to run. Most of these problems are non-linear and may not have a true optimal solution, owing to the inherent variability in real-world applications and the stochastic simulation model. Evolutionary algorithms are a class of computational optimization techniques that harness the power of the computer to solve a problem. The application of evolutionary search as a simulation optimization technique has yielded reasonable results. However, the algorithm can take a long time evaluating just one set of decision variables, owing to replications and the computational time of a single simulation run, not to mention the sheer number of different sets that have to be evaluated to find good solutions for these complex problems. Linear regression and neural-network meta-models can be used to generate a surface model of the simulation. Evaluating the meta-model is very fast compared to the simulation model. Therefore, this thesis combines the use of evolutionary algorithms, simulation models and meta-models to produce a more efficient simulation optimization technique. The two types of meta-models are tested to determine their effectiveness as meta-modeling techniques and their overall effectiveness in finding the best solution.
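A minimal sketch of the combination described above: an evolutionary loop in which a cheap regression meta-model, refitted from all simulation runs so far, pre-screens offspring so that only the most promising candidates are evaluated by the "expensive" simulation (stubbed here by an analytic function plus noise). The quadratic surrogate and all parameter values are illustrative choices, not the linear regression or neural-network models from the thesis.

```python
import numpy as np

rng = np.random.default_rng(4)

def expensive_simulation(x):
    """Stand-in for a slow stochastic simulation run (objective to be minimised)."""
    return np.sum((x - 1.5) ** 2) + rng.normal(0.0, 0.05)

def fit_quadratic_meta_model(X, y):
    """Least-squares fit of a simple quadratic response surface."""
    F = np.column_stack([np.ones(len(X)), X, X ** 2])
    coef, *_ = np.linalg.lstsq(F, y, rcond=None)
    return lambda Z: np.column_stack([np.ones(len(Z)), Z, Z ** 2]) @ coef

dim, pop_size, generations = 3, 20, 30
pop = rng.uniform(-5.0, 5.0, size=(pop_size, dim))
fitness = np.array([expensive_simulation(x) for x in pop])
archive_X, archive_y = pop.copy(), fitness.copy()

for _ in range(generations):
    # Generate offspring by mutating randomly chosen parents
    parents = pop[rng.integers(pop_size, size=3 * pop_size)]
    offspring = parents + rng.normal(0.0, 0.3, size=parents.shape)

    # Meta-model screens offspring; only the most promising hit the simulator
    meta = fit_quadratic_meta_model(archive_X, archive_y)
    promising = offspring[np.argsort(meta(offspring))[:pop_size]]
    new_fitness = np.array([expensive_simulation(x) for x in promising])

    archive_X = np.vstack([archive_X, promising])
    archive_y = np.concatenate([archive_y, new_fitness])

    # (mu + lambda) survivor selection
    combined = np.vstack([pop, promising])
    combined_fit = np.concatenate([fitness, new_fitness])
    keep = np.argsort(combined_fit)[:pop_size]
    pop, fitness = combined[keep], combined_fit[keep]

print("best objective found:", round(fitness.min(), 4),
      "at", np.round(pop[np.argmin(fitness)], 2))
```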
APA, Harvard, Vancouver, ISO, and other styles
50

Warn, David Edward. "Applications and extensions of Bayesian hierarchical models for meta-analysis of binary outcome data." Thesis, University of Cambridge, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.620461.

Full text
APA, Harvard, Vancouver, ISO, and other styles
