Dissertations / Theses on the topic 'Models'

Consult the top 50 dissertations / theses for your research on the topic 'Models.'


1

Andriushchenko, Roman. "Computer-Aided Synthesis of Probabilistic Models." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2020. http://www.nusl.cz/ntk/nusl-417269.

Full text
Abstract:
This thesis deals with the problem of automated synthesis of probabilistic systems: given a family of Markov chains, how can we efficiently identify the one that satisfies a given specification? Such families often arise in various areas of engineering when modelling systems under uncertainty and decision-making, and even the simplest synthesis questions are NP-hard. We investigate existing techniques based on counterexample-guided inductive synthesis (CEGIS) and counterexample-guided abstraction refinement (CEGAR), and propose a novel integrated method for probabilistic synthesis. Experiments on relevant models demonstrate that the proposed technique is not only comparable with state-of-the-art methods, but in most cases significantly outperforms existing approaches, sometimes by several orders of magnitude.
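The synthesis question this abstract poses (which member of a finite family of Markov chains meets a given specification?) can be made concrete with a minimal sketch. Everything below is a hypothetical toy, not the thesis's method: the family is a set of step probabilities p for a two-state chain with an absorbing goal state, and the specification asks that the goal be reached within 10 steps with probability at least 0.9.

```python
def reach_prob(p, steps=10):
    # probability of hitting the absorbing goal state of a two-state chain
    # within `steps` transitions, when each step succeeds with probability p
    return 1.0 - (1.0 - p) ** steps

def synthesize(family, threshold=0.9):
    # naive one-by-one search; CEGIS-style methods would instead use a
    # counterexample from a failed candidate to prune many members at once
    for p in family:
        if reach_prob(p) >= threshold:
            return p
    return None

family = [0.05, 0.1, 0.2, 0.3]
result = synthesize(family)   # -> 0.3
```

The point of CEGIS/CEGAR is precisely to avoid this exhaustive loop when the family is exponentially large.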
APA, Harvard, Vancouver, ISO, and other styles
2

Rozestraten, Artur Simões. "Estudo sobre a história dos modelos arquitetônicos na antigüidade: origens e características das primeiras maquetes de arquiteto." Universidade de São Paulo, 2003. http://www.teses.usp.br/teses/disponiveis/16/16131/tde-09062009-145825/.

Full text
Abstract:
Este estudo se propõe a identificar dentre os diversos exemplos de modelos arquitetônicos da Antigüidade atualmente conhecidos pela arqueologia e descritos na literatura aqueles que podem ser caracterizados como as primeiras maquetes de arquiteto, isto é, objetos diretamente relacionados ao conhecimento, planejamento e comunicação de conteúdos arquitetônicos. O recuo à Antigüidade se faz necessário na medida em que essa dissertação se propõe a estudar as origens da relação entre modelos tridimensionais e a atividade de arquitetos na cultura ocidental. Em termos cronológicos, este estudo inicia-se cerca de 6.000 anos antes de Cristo e encerra-se no Mundo Romano (séc. V d.C.). Em termos geográficos, este estudo aborda objetos produzidos por culturas do sudeste da Europa neolítica, conjuntos de objetos de culturas do Oriente-Próximo, objetos egípcios, egeanos (cretenses e cicládicos), cipriotas, gregos, villanovianos e romanos. Essa pesquisa conclui que as evidências materiais da existência de maquetes de arquiteto na Antigüidade Clássica são raras e pouco precisas. Alguns objetos no entanto se aproximam dessa caracterização e merecem estudos futuros mais aprofundados, são eles: o conjunto de tijolos miniatura de Tepe Gawra (c. 3500 a.C.); o modelo egípcio de Dashour (1990-1730 a.C.); o modelo minóico de Arkhanes (1.700-1.630 a.C.); os modelos romanos de Óstia (séc. I a.C.), o modelo de templo de Niha (séc. II d.C.), o modelo de teatro de Baalbek (séc. II d.C.), e o modelo de stadium de Villa Adriana (séc. II d.C.).
This study intends to identify the first architects' models among the several architectural models already known and described in the literature. Architects' models are three-dimensional objects directly related to the knowledge, planning and communication of architectural matters. Going back to Antiquity is necessary in order to study the origins of the relation between three-dimensional models and architects' work in the Western world. Chronologically, this study begins at about 6000 BC and ends in the Roman world (c. 200 AD). In geographical terms, it focuses on objects produced by Neolithic Southeastern European cultures, Near Eastern cultures, Egyptian culture, Aegean cultures (Cretan and Cycladic), and Cypriot, Greek, Villanovan and Roman cultures. Material evidence for architects' models is rare and imprecise throughout Antiquity. Nevertheless, a few objects come very close to architects' work and deserve deeper future study: the miniature brick ensemble from Tepe Gawra (c. 3500 BC); the Egyptian Dahshur model (1990-1730 BC); the Minoan model of Arkhanes (1700-1630 BC); and the Roman models of Ostia (1st century BC), Niha, Baalbek and Villa Adriana (2nd century AD).
3

Kang, Changsung. "Model testing for causal models." [Ames, Iowa : Iowa State University], 2008.

Find full text
4

LIMA, FILHO Luiz Medeiros de Araújo. "Modelos simétricos transformados não-lineares com diferentes distribuições dos erros: aplicações em ciências florestais." Universidade Federal Rural de Pernambuco, 2009. http://www.tede2.ufrpe.br:8080/tede2/handle/tede2/5175.

Full text
Abstract:
Historically, eucalyptus wood has been used for the most varied purposes, such as firewood, charcoal, cellulose, railway sleepers, posts for electrification, bark for tanning leather, essential oils and civil construction. The Gypsum Pole of Araripe in Pernambuco is a great consumer of firewood for gypsum production. Given the great need to find economic and environmental alternatives for the area, the sustainable production of eucalyptus, a fast-growing tree of great versatility, has an important role. In planning sustainable forest management, one variable is of extreme importance: growth. Modelling growth is fundamental for forecasting productivity, site quality and population dynamics. Usually, growth curves are fitted through nonlinear models developed empirically to relate, for instance, height and age. The Chapman-Richards model is a nonlinear model frequently used to model forest growth. In studies of this type it is generally assumed that the errors approximately follow the normal distribution. However, modelling growth under the assumption of normally distributed errors is quite sensitive to atypical values, which can generate bad estimates of the parameters. To correct this problem, a new class of transformed symmetrical models was developed, considering for the errors symmetrical continuous distributions with heavier tails than the normal distribution and allowing a possible nonlinear structure for the mean. With the expectation of obtaining better estimates of eucalyptus growth, the following error distributions were applied to the Chapman-Richards model: normal, Student's t, Cauchy, power exponential, logistic I and logistic II. The Student's t distribution with 2 degrees of freedom was the most efficient for estimating height and circumference growth of eucalyptus in the Gypsum Pole of Pernambuco.
Historicamente, a madeira do Eucalyptus é usada para os mais variados fins, tais como; lenha, carvão vegetal, celulose, dormentes ferroviários, postes para eletrificação, casca para curtir couro, óleos essenciais, construção civil, etc. O Pólo Gesseiro do Araripe em Pernambuco é um grande consumidor de madeira para produção de gesso. Devido à grande necessidade de se buscar uma alternativa econômica e ambiental para a região é de interesse obter uma produção sustentável para o Eucalyptus, uma vez que esta é uma árvore de rápido crescimento e grande versatilidade. No planejamento do manejo florestal sustentado uma variável é de extrema importância: o crescimento. Sua modelagem é fundamental na prognose da produtividade, qualidade do local e dinâmica de populações. Geralmente, as curvas de crescimento são estudadas por meio de modelos não-lineares desenvolvidos empiricamente para relacionar, por exemplo, altura e idade. Um modelo não-linear bastante utilizado na prática para modelar curvas de crescimento é o modelo de Chapman-Richards. Em estudos deste tipo, em geral, assume-se que os erros seguem distribuição normal. Contudo, a modelagem sob a suposição de erros com distribuição normal é bastante sensível a valores atípicos que por ventura possam ocorrer, podendo distorcer as estimativas dos parâmetros. Para corrigir esse problema Cordeiro et al. (2009) desenvolveram uma nova classe de modelos simétricos transformados considerando para os erros distribuições contínuas simétricas com caudas mais pesadas do que a distribuição normal e permitindo uma possível estrutura não-linear para a média. 
Dessa forma, com a expectativa de obter melhores estimativas de crescimento de Eucalyptus, aplicaram-se ao modelo de Chapman-Richards as seguintes distribuições dos erros: normal, t de Student, Cauchy, exponencial potência, logística I e logística II. A distribuição t de Student com 2 graus de liberdade apresentou as melhores estimativas de crescimento em altura e circunferência de Eucalyptus no Pólo Gesseiro de Pernambuco.
5

Kotsalis, Georgios. "Model reduction for Hidden Markov models." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/38255.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2006.
Includes bibliographical references (leaves 57-60).
The contribution of this thesis is the development of tractable computational methods for reducing the complexity of two classes of dynamical systems, finite alphabet Hidden Markov Models and Jump Linear Systems with finite parameter space. The reduction algorithms employ convex optimization and numerical linear algebra tools and do not pose any structural requirements on the systems at hand. In the Jump Linear Systems case, a distance metric based on randomization of the parametric input is introduced. The main point of the reduction algorithm lies in the formulation of two dissipation inequalities, which in conjunction with a suitably defined storage function enable the derivation of low complexity models, whose fidelity is controlled by a guaranteed upper bound on the stochastic L2 gain of the approximation error. The developed reduction procedure can be interpreted as an extension of the balanced truncation method to the broader class of Jump Linear Systems. In the Hidden Markov Model case, Hidden Markov Models are identified with appropriate Jump Linear Systems that satisfy certain constraints on the coefficients of the linear transformation. This correspondence enables the development of a two step reduction procedure.
In the first step, the image of the high dimensional Hidden Markov Model in the space of Jump Linear Systems is simplified by means of the aforementioned balanced truncation method. Subsequently, in the second step, the constraints that reflect the Hidden Markov Model structure are imposed by solving a low dimensional non-convex optimization problem. Numerical simulation results provide evidence that the proposed algorithm computes accurate reduced order Hidden Markov Models, while achieving a compression of the state space by orders of magnitude.
by Georgios Kotsalis.
Ph.D.
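The balanced truncation step the abstract builds on can be sketched in a few lines for an ordinary stable discrete-time linear system. This is the classical square-root method on a toy example, not the thesis's Jump Linear System extension; the matrices A, B, C are made up for illustration.

```python
import numpy as np

def dlyap(A, Q, iters=200):
    # fixed-point iteration for the discrete Lyapunov equation X = A X A^T + Q;
    # converges when A is Schur stable (all eigenvalues inside the unit circle)
    X = Q.copy()
    for _ in range(iters):
        X = A @ X @ A.T + Q
    return X

def balanced_truncation(A, B, C, r):
    # square-root balanced truncation of a stable discrete-time LTI system
    Wc = dlyap(A, B @ B.T)            # controllability Gramian
    Wo = dlyap(A.T, C.T @ C)          # observability Gramian
    L = np.linalg.cholesky(Wc)        # Wc = L L^T
    U, s, _ = np.linalg.svd(L.T @ Wo @ L)
    hsv = np.sqrt(s)                  # Hankel singular values, descending
    T = L @ U / np.sqrt(hsv)          # balancing transformation
    Ti = np.linalg.inv(T)
    Ab, Bb, Cb = Ti @ A @ T, Ti @ B, C @ T
    return Ab[:r, :r], Bb[:r], Cb[:, :r], hsv

# made-up stable 2-state example, reduced to order 1
A = np.array([[0.5, 0.1], [0.0, 0.3]])
B = np.array([[1.0], [0.1]])
C = np.array([[1.0, 0.2]])
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=1)
```

In the balanced coordinates both Gramians equal diag(hsv), so truncating the states with small Hankel singular values discards the least controllable/observable dynamics.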
6

Pommellet, Adrien. "On model-checking pushdown systems models." Thesis, Sorbonne Paris Cité, 2018. http://www.theses.fr/2018USPCC207/document.

Full text
Abstract:
Cette thèse introduit différentes méthodes de vérification (ou model-checking) sur des modèles de systèmes à pile. En effet, les systèmes à pile (pushdown systems) modélisent naturellement les programmes séquentiels grâce à une pile infinie qui peut simuler la pile d'appel du logiciel. La première partie de cette thèse se concentre sur la vérification sur des systèmes à pile de la logique HyperLTL, qui enrichit la logique temporelle LTL de quantificateurs universels et existentiels sur des variables de chemin. Il a été prouvé que le problème de la vérification de la logique HyperLTL sur des systèmes d'états finis est décidable ; nous montrons que ce problème est en revanche indécidable pour les systèmes à pile ainsi que pour la sous-classe des systèmes à pile visibles (visibly pushdown systems). Nous introduisons donc des algorithmes d'approximation de ce problème, que nous appliquons ensuite à la vérification de politiques de sécurité. Dans la seconde partie de cette thèse, dans la mesure où la représentation de la pile d'appel par les systèmes à pile est approximative, nous introduisons les systèmes à surpile (pushdown systems with an upper stack) ; dans ce modèle, les symboles retirés de la pile d'appel persistent dans la zone mémoire au dessus du pointeur de pile, et peuvent être plus tard écrasés par des appels sur la pile. Nous montrons que les ensembles de successeurs post* et de prédécesseurs pre* d'un ensemble régulier de configurations ne sont pas réguliers pour ce modèle, mais que post* est toutefois contextuel (context-sensitive), et que l'on peut ainsi décider de l'accessibilité d'une configuration. Nous introduisons donc des algorithmes de sur-approximation de post* et de sous-approximation de pre*, que nous appliquons à la détection de débordements de pile et de manipulations nuisibles du pointeur de pile. 
Enfin, dans le but d'analyser des programmes avec plusieurs fils d'exécution, nous introduisons le modèle des réseaux à piles dynamiques synchronisés (synchronized dynamic pushdown networks), que l'on peut voir comme un réseau de systèmes à pile capables d'effectuer des changements d'états synchronisés, de créer de nouveaux systèmes à piles, et d'effectuer des actions internes sur leur pile. Le problème de l'accessibilité étant naturellement indécidable pour un tel modèle, nous calculons une abstraction des chemins d'exécutions entre deux ensembles réguliers de configurations. Nous appliquons ensuite cette méthode à un processus itératif de raffinement des abstractions
In this thesis, we propose different model-checking techniques for pushdown system models. Pushdown systems (PDSs) are indeed known to be a natural model for sequential programs, as they feature an unbounded stack that can simulate the assembly stack of an actual program. Our first contribution consists in model-checking the logic HyperLTL that adds existential and universal quantifiers on path variables to LTL against pushdown systems (PDSs). The model-checking problem of HyperLTL has been shown to be decidable for finite state systems. We prove that this result does not hold for pushdown systems nor for the subclass of visibly pushdown systems. Therefore, we introduce approximation algorithms for the model-checking problem, and show how these can be used to check security policies. In the second part of this thesis, as pushdown systems can fail to accurately represent the way an assembly stack actually operates, we introduce pushdown systems with an upper stack (UPDSs), a model where symbols popped from the stack are not destroyed but instead remain just above its top, and may be overwritten by later push rules. We prove that the sets of successors post* and predecessors pre* of a regular set of configurations of such a system are not always regular, but that post* is context-sensitive, hence, we can decide whether a single configuration is forward reachable or not. We then present methods to overapproximate post* and under-approximate pre*. Finally, we show how these approximations can be used to detect stack overflows and stack pointer manipulations with malicious intent. Finally, in order to analyse multi-threaded programs, we introduce in this thesis a model called synchronized dynamic pushdown networks (SDPNs) that can be seen as a network of pushdown processes executing synchronized transitions, spawning new pushdown processes, and performing internal pushdown actions. The reachability problem for this model is obviously undecidable. 
Therefore, we compute an abstraction of the execution paths between two regular sets of configurations. We then apply this abstraction framework to an iterative abstraction refinement scheme.
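To make the pushdown-system setting concrete, here is a deliberately naive bounded-stack reachability check over a made-up three-rule PDS. Exact reachability analysis computes pre*/post* by P-automaton saturation, as in the thesis; this toy merely illustrates what configurations and rules look like.

```python
from collections import deque

# a pushdown system: rules (state, top_symbol) -> (new_state, pushed_word),
# where pushed_word replaces the popped top ("" = pop, two symbols = push)
RULES = {
    ("p", "A"): [("p", "BA"), ("q", "")],
    ("p", "B"): [("p", "")],
    ("q", "B"): [("q", "")],
}

def reachable(start, target, max_stack=6):
    # breadth-first search over configurations with a bounded stack depth;
    # exact PDS reachability would use P-automaton saturation instead, so
    # this bounded search is only an illustrative under-approximation
    seen, frontier = {start}, deque([start])
    while frontier:
        conf = frontier.popleft()
        if conf == target:
            return True
        state, stack = conf
        if not stack:
            continue
        top, rest = stack[0], stack[1:]
        for nstate, word in RULES.get((state, top), []):
            nconf = (nstate, word + rest)
            if len(nconf[1]) <= max_stack and nconf not in seen:
                seen.add(nconf)
                frontier.append(nconf)
    return False
```

Because the stack is unbounded in the real model, the state space is infinite and this cap-based search can only under-approximate reachability.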
7

Peak, Russell Speights. "Product model-based analytical models (PBAMs) : a new representation of engineering analysis models." Diss., Georgia Institute of Technology, 1993. http://hdl.handle.net/1853/18379.

Full text
8

Mateluna, Diego Ignacio Gallardo. "Extensões em modelos de sobrevivência com fração de cura e efeitos aleatórios." Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-24062014-202301/.

Full text
Abstract:
Neste trabalho são apresentadas algumas extensões de modelos de sobrevivência com fração de cura, assumindo o contexto em que as observações estão agrupadas. Dois efeitos aleatórios são incorporados para cada grupo: um para explicar o efeito no tempo de sobrevida das observações suscetíveis e outro para explicar a probabilidade de cura. Apresenta-se uma abordagem clássica através dos estimadores REML e uma abordagem bayesiana através do uso de processos de Dirichlet. Discute-se alguns estudos de simulação em que avalia-se o desempenho dos estimadores propostos, além de comparar as duas abordagens. Finalmente, ilustram-se os resultados com dados reais.
In this work, some extensions of survival models with a cure fraction are presented, assuming a context in which the observations are grouped into clusters. Two random effects are incorporated for each group: one to explain the effect on the survival time of susceptible observations and another to explain the probability of cure. A classical approach through REML estimators is presented, as well as a Bayesian approach through the Dirichlet process. Besides comparing both approaches, some simulation studies that evaluate the performance of the proposed estimators are discussed. Finally, the results are illustrated with real data.
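A quick way to see what a "cure fraction" means in survival data is to simulate from the mixture cure model: each subject is cured with some probability and otherwise fails at an exponential time. The sketch below is a made-up illustration (no REML or Dirichlet-process machinery); with long follow-up, the survival curve plateaus near the cure probability.

```python
import random

random.seed(1)

def simulate_cure_data(n, cure_prob=0.3, rate=1.0, censor_time=8.0):
    # mixture cure model: a subject is cured with probability cure_prob and
    # then never fails (censored at end of follow-up); otherwise the failure
    # time is exponential with the given rate
    data = []
    for _ in range(n):
        if random.random() < cure_prob:
            data.append((censor_time, 0))            # cured -> censored
        else:
            t = random.expovariate(rate)
            data.append((min(t, censor_time), int(t <= censor_time)))
    return data

data = simulate_cure_data(5000)
# with long follow-up, the fraction still event-free at the end of the study
# approximates the cure fraction (the plateau of the survival curve)
plateau = sum(1 for t, d in data if d == 0) / len(data)
```

The thesis's models add cluster-level random effects to both the cure probability and the survival time of the susceptibles, which this sketch omits.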
9

Fernandes, Walney Reis. "Modelos de emparelhamento integráveis." Universidade de São Paulo, 2010. http://www.teses.usp.br/teses/disponiveis/43/43134/tde-21102010-121332/.

Full text
Abstract:
O objetivo deste trabalho foi o estudo do Ansatz de Bethe Algébrico (ABA), que é uma técnica utilizada na obtenção dos auto-estados do hamiltoniano de inúmeros modelos da Mecânica Estatística e da Teoria Quântica de Campos. Aplicamos este procedimento na diagonalização de três modelos de spins: o modelo de Heisenberg, o modelo de Heisenberg-Sklyanin e o modelo de Heisenberg-Cherednik. Na diagonalização do primeiro modelo, não foi possível encontrar todos os auto-estados do hamiltoniano através do ABA e, durante o procedimento de obtenção das expressões analíticas, nos deparamos com um conjunto de identidades inédito na literatura. A matriz de borda do modelo de Heisenberg-Sklyanin acopla o último e o primeiro sítios, generalizando o modelo anterior, e permite estabelecer uma relação limite com outros modelos integráveis. Neste caso também não conseguimos obter todos os auto-estados utilizando a técnica do ABA. Diferentemente do que ocorreu para os primeiros modelos, o de Heisenberg-Cherednik, com acoplamentos que alternam a intensidade ao longo da cadeia de spin, apresentou um conjunto completo de auto-estados quando diagonalizado pelo ABA.
The goal of this work was to study the Algebraic Bethe Ansatz (ABA), a technique used to obtain the eigenstates of the Hamiltonian of many models of Statistical Mechanics and Quantum Field Theory. We apply this procedure to diagonalize three types of spin models: the Heisenberg model, the Heisenberg-Sklyanin model and the Heisenberg-Cherednik model. In diagonalizing the first model, we could not find all the eigenstates of the Hamiltonian through the ABA, and during the procedure for obtaining the analytical expressions we came across a set of identities apparently new to the literature. Sklyanin's boundary matrix couples the first and last sites, generalizing the previous model, and provides a limiting relation to other integrable models. In this case we also could not obtain all the eigenstates using the ABA technique. Unlike the first two models, the Heisenberg-Cherednik model, with couplings of alternating intensity along the spin chain, presented a complete set of eigenstates when diagonalized by the ABA.
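For chains small enough, the Bethe-ansatz spectrum can be cross-checked by brute-force exact diagonalization. The sketch below builds the spin-1/2 Heisenberg (XXX) Hamiltonian for four sites with periodic boundary conditions; it is a generic numerical check, not the ABA construction discussed in the thesis.

```python
import numpy as np

# spin-1/2 operators (Pauli matrices divided by two)
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2

def site_op(op, i, n):
    # embed a single-site operator at position i of an n-site chain
    out = np.array([[1.0 + 0j]])
    for k in range(n):
        out = np.kron(out, op if k == i else np.eye(2, dtype=complex))
    return out

def heisenberg(n, J=1.0):
    # H = J * sum_i S_i . S_{i+1} with periodic boundary conditions
    H = np.zeros((2 ** n, 2 ** n), dtype=complex)
    for i in range(n):
        j = (i + 1) % n
        for op in (sx, sy, sz):
            H += J * site_op(op, i, n) @ site_op(op, j, n)
    return H

H = heisenberg(4)
energies = np.linalg.eigvalsh(H)   # ascending; ground state first
```

For n = 4 the Hamiltonian factors as (S1+S3)·(S2+S4), so the ground-state energy can be checked by hand against the numerical spectrum.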
10

Ribeiro, Darielder Jesus. "Modelos de contato com probabilidades aperiódicas." Universidade de São Paulo, 2005. http://www.teses.usp.br/teses/disponiveis/43/43134/tde-20052014-190949/.

Full text
Abstract:
A análise de modelos de contato na presença de elementos de desordem fixa indica o surgimento de desvios em relação ao comportamento crítico do modelo uniforme subjacente. Nesse trabalho consideramos o efeito da aperiodicidade, que também é capaz de produzir flutuações de natureza geométrica. Utilizamos distribuições aperiódicas de probabilidades, definidas através de regras de substituição determinísticas, a fim de analisar o comportamento crítico desses modelos de contato. Realizamos simulações de Monte Carlo para modelos definidos por três regras distintas, caracterizadas por um expoente w, associado à intensidade das flutuações geométricas. Nos modelos A e B, com w = -1 e w = 0, não constatamos qualquer mudança em relação à classe de universalidade crítica da percolação direcionada. Já no Modelo C, com w = 0.6309, as flutuações geométricas alteram a classe de universalidade crítica.
The analysis of contact models in the presence of quenched disorder indicates the onset of deviations with respect to the critical behavior of the underlying uniform system. In the present work, we consider the effects of aperiodicity, which are also known to produce fluctuations of a geometric nature. We use aperiodic distributions of probabilities, given by deterministic substitution rules, in order to analyze the critical behavior. We performed Monte Carlo simulations for three different rules, characterized by an exponent w, which gauges the intensity of the geometric fluctuations. For models A and B, with w = -1 and w = 0, we have not detected any changes with respect to the universality class of directed percolation. For model C, with w = 0.6309, the geometric fluctuations change the critical universality class.
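The kind of Monte Carlo simulation described can be sketched with a toy discrete-time contact process whose site probabilities follow an aperiodic substitution sequence (here the Fibonacci rule a → ab, b → a). The rates and the update rule below are invented for illustration and do not reproduce the models A, B, C of the thesis.

```python
import random

random.seed(0)

def fibonacci_word(n):
    # deterministic substitution rule a -> ab, b -> a (an aperiodic sequence)
    w = "a"
    while len(w) < n:
        w = "".join("ab" if c == "a" else "a" for c in w)
    return w[:n]

def step(active, probs, L):
    # one parallel update: each active site tries to (re)activate itself and
    # its two neighbours, each attempt succeeding with the target site's
    # aperiodically assigned probability
    nxt = set()
    for i in active:
        for j in ((i - 1) % L, i, (i + 1) % L):
            if random.random() < probs[j]:
                nxt.add(j)
    return nxt

L = 200
word = fibonacci_word(L)
probs = [0.6 if c == "a" else 0.5 for c in word]   # invented rates
active = set(range(L))                             # fully active start
for _ in range(50):
    active = step(active, probs, L)
density = len(active) / L
```

In a critical-behavior study one would track the density (or survival probability) near the transition point for each substitution rule and extract the critical exponents.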
11

Rotelli, Vanderlei. "Maquetes: o estado da arte." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/16/16132/tde-05072017-100041/.

Full text
Abstract:
Este estudo se propõe a falar sobre as maquetes de apresentação, que são executadas por profissionais especializados, para o Mercado Imobiliário. Foi feito um levantamento sobre como este mercado trabalha atualmente no Brasil, estudando metodologias, materiais e maquinários utilizados. Para tanto, foram feitas visitas técnicas a três das principais empresas especializadas no setor, documentando fotograficamente as formas de trabalho, os materiais mais utilizados e quais máquinas podem ser encontradas. Foi analisada, também, uma das inovações mais recentes e revolucionárias nesta área que é a Fabricação Digital (FD), que, além de influenciar diretamente o setor de construção de maquetes, está alterando a metodologia de trabalho dos arquitetos e designers, pois torna muito mais tênue a linha entre o desenho digital e a execução do modelo físico. Para tanto, foi feita uma visita técnica em um dos doze FabLabs Livres SP, abertos a todos os interessados na cidade de São Paulo, onde foi avaliado o maquinário disponível e suas possibilidades de uso, bem como o acesso a esta nova tecnologia disponibilizada pela Prefeitura Municipal de São Paulo. Estes Laboratórios são abertos ao público e oferecem, além do acesso direto às máquinas, cursos que ensinam a projetar e utilizar esta nova tecnologia.
This study intends to discuss presentation models, which are produced by specialized professionals for the real estate market. A survey was made of how this market currently works in Brazil, studying the methodologies, materials and machinery used. For that, technical visits were made to three of the main companies specialized in this sector, photographically documenting the forms of work, the most used materials and the machines found there. The study also analyzes one of the most recent and revolutionary innovations in this area, Digital Fabrication, which, in addition to directly influencing the model construction sector, is changing architects' and designers' working methodology, since it makes the line between the digital drawing and the completion of the physical model much thinner. A technical visit was made to one of the twelve FabLabs Livres SP, open to everyone interested in the city of São Paulo, where the available machinery and its possibilities of use were evaluated, as well as access to this new technology provided by the São Paulo City Hall. These laboratories are open to the public and offer, in addition to direct access to the machines, courses on how to design and use this new technology.
12

Seid, Hamid Jemila. "New residuals in multivariate bilinear models : testing hypotheses, diagnosing models and validating model assumptions /." Uppsala : Dept. of Biometry and Engineering, Swedish University of Agricultural Sciences, 2005. http://epsilon.slu.se/200583.pdf.

Full text
13

Liu, Yi. "On Model Reduction of Distributed Parameter Models." Licentiate thesis, KTH, Signals, Sensors and Systems, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-1541.

Full text
14

Giese, Holger, and Stephan Hildebrandt. "Efficient model synchronization of large-scale models." Universität Potsdam, 2009. http://opus.kobv.de/ubp/volltexte/2009/2928/.

Full text
Abstract:
Model-driven software development requires techniques to consistently propagate modifications between different related models in order to realize its full potential. For large-scale models, efficiency is essential in this respect. In this paper, we present an improved model synchronization algorithm based on triple graph grammars that is highly efficient and can therefore synchronize even large-scale models sufficiently fast. We show that the overall algorithm has optimal complexity provided that it dominates the rule matching, and we further present extensive measurements that demonstrate the efficiency of the presented model transformation and synchronization technique.
Die modellgetriebene Softwareentwicklung benötigt Techniken zur Übertragung von Änderungen zwischen verschiedenen zusammenhängenden Modellen, um vollständig nutzbar zu sein. Bei großen Modellen spielt hier die Effizienz eine entscheidende Rolle. In diesem Bericht stellen wir einen verbesserten Modellsynchronisationsalgorithmus vor, der auf Tripel-Graph-Grammatiken basiert. Dieser arbeitet sehr effizient und kann auch sehr große Modelle schnell synchronisieren. Wir können zeigen, dass der Gesamtalgorithmus eine optimale Komplexität aufweist, sofern er die Ausführung dominiert. Die Effizienz des Algorithmus wird durch einige Benchmarkergebnisse belegt.
15

Alharthi, Muteb. "Bayesian model assessment for stochastic epidemic models." Thesis, University of Nottingham, 2016. http://eprints.nottingham.ac.uk/33182/.

Full text
Abstract:
A crucial practical advantage of infectious disease modelling as a public health tool lies in its application to evaluating various disease-control policies. However, such evaluation is of limited use unless a sufficiently accurate epidemic model is applied. If the model provides an adequate fit, it is possible to interpret parameter estimates, compare disease epidemics and implement control procedures. Methods to assess and compare stochastic epidemic models in a Bayesian framework are not well established, particularly in epidemic settings with missing data. In this thesis, we develop novel methods for both model adequacy and model choice for stochastic epidemic models. We work with continuous-time epidemic models and assume that only the case detection times of infected individuals, corresponding to removal times, are available. Throughout, we illustrate our methods using both simulated outbreak data and real disease data. Data-augmented Markov chain Monte Carlo (MCMC) algorithms are employed to make inference for unobserved infection times and model parameters. Under a Bayesian framework, we first conduct a systematic investigation of three different but natural methods of model adequacy for SIR (Susceptible-Infective-Removed) epidemic models. We proceed to develop a new two-stage method for assessing the adequacy of epidemic models. In this two-stage method, two predictive distributions are examined, namely the predictive distribution of the final size of the epidemic and the predictive distribution of the removal times. The idea is based on looking explicitly at the discrepancy between the observed and predicted removal times using the posterior predictive model checking approach, in which the notions of Bayesian residuals and the posterior predictive p-value are utilized. This approach differs, most importantly, from classical likelihood-based approaches by taking into account uncertainty in both model stochasticity and model parameters.
The two-stage method explores how SIR models with different infection mechanisms, infectious periods and population structures can be assessed and distinguished given only a set of removal times. In the last part of this thesis, we consider Bayesian model choice methods for epidemic models. We derive explicit forms for Bayes factors in two different epidemic settings, given complete epidemic data. Additionally, in the setting where the available data are partially observed, we extend the existing power posterior method for estimating Bayes factors to models incorporating missing data and successfully apply our missing-data extension of the power posterior method to various epidemic settings. We further consider the performance of the deviance information criterion (DIC) method to select between epidemic models.
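Since the methods above assume only removal times are observed, a simple way to generate such data is Gillespie simulation of the standard stochastic SIR model. The sketch below is a generic simulator with invented parameter values, not the thesis's inference machinery.

```python
import random

random.seed(42)

def simulate_sir(n, beta, gamma, i0=1):
    # Gillespie simulation of a stochastic SIR epidemic; returns the removal
    # (case-detection) times, the only data assumed observed in the thesis
    s, i, t = n - i0, i0, 0.0
    removals = []
    while i > 0:
        inf_rate = beta * s * i / n      # rate of new infections
        rem_rate = gamma * i             # rate of removals
        total = inf_rate + rem_rate
        t += random.expovariate(total)   # time to the next event
        if random.random() < inf_rate / total:
            s, i = s - 1, i + 1          # infection event
        else:
            i -= 1                       # removal event
            removals.append(t)
    return removals

removals = simulate_sir(n=200, beta=2.0, gamma=1.0)
final_size = len(removals)
```

In the posterior predictive checks described above, datasets like this would be drawn from the fitted model and their final size and removal times compared with the observed ones.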
16

Billah, Baki 1965. "Model selection for time series forecasting models." Monash University, Dept. of Econometrics and Business Statistics, 2001. http://arrow.monash.edu.au/hdl/1959.1/8840.

Full text
17

Cloth, Lucia. "Model checking algorithms for Markov reward models." Enschede : University of Twente [Host], 2006. http://doc.utwente.nl/55445.

Full text
18

Kadhem, Safaa K. "Model fit diagnostics for hidden Markov models." Thesis, University of Plymouth, 2017. http://hdl.handle.net/10026.1/9966.

Full text
Abstract:
Hidden Markov models (HMMs) are an efficient tool to describe and model the underlying behaviour of many phenomena. HMMs assume that the observed data are generated independently from a parametric distribution, conditional on an unobserved process that satisfies the Markov property. Model selection, that is, determining the number of hidden states for these models, is an important issue and represents the main interest of this thesis. Applying likelihood-based criteria for HMMs is a challenging task, as the likelihood function of these models is not available in closed form. Using the data augmentation approach, we derive two forms of the likelihood function of a HMM in closed form, namely the observed and the conditional likelihoods. Subsequently, we develop several modified versions of the Akaike information criterion (AIC) and the Bayesian information criterion (BIC), approximated under the Bayesian principle. We also develop several versions of the deviance information criterion (DIC). These proposed versions are based on the type of likelihood, i.e. conditional or observed likelihood, and also on whether the hidden states are dealt with as missing data or as additional parameters in the model. This latter point is referred to as the concept of focus. Finally, we consider model selection from a predictive viewpoint. To this end, we develop the so-called widely applicable information criterion (WAIC). We assess the performance of these various proposed criteria via simulation studies and real-data applications. In this thesis, we apply Poisson HMMs to model spatial dependence in count data via an application to traffic safety crashes for three highways in the UK. The ultimate interest is in identifying highway segments which have distinctly higher crash rates. Selecting an optimal number of states is an important part of the interpretation. For this purpose, we employ model selection criteria to determine the optimal number of states.
We also use several goodness-of-fit checks to assess the model fitted to the data. We implement an MCMC algorithm and check its convergence. We examine the sensitivity of the results to the prior specification, a potential problem given small sample sizes. The Poisson HMMs adopted can provide a different model for analysing spatial dependence on networks. It is possible to identify segments with a higher posterior probability of classification in a high risk state, a task that could prioritise management action.
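The likelihood-based criteria discussed in this abstract all start from the observed-data likelihood of a Poisson HMM, which the forward algorithm computes in linear time. The sketch below is an illustrative reimplementation, not the author's code; the parameter count in `aic_bic` and the two-state example values are assumptions of this demonstration.

```python
import numpy as np
from math import lgamma

def poisson_logpmf(x, lam):
    # log of the Poisson pmf: x*log(lam) - lam - log(x!)
    return x * np.log(lam) - lam - lgamma(x + 1)

def hmm_loglik(obs, pi0, A, lam):
    """Observed-data log-likelihood of a Poisson HMM via the scaled forward algorithm."""
    alpha = pi0 * np.exp([poisson_logpmf(obs[0], l) for l in lam])
    ll = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for x in obs[1:]:
        alpha = (alpha @ A) * np.exp([poisson_logpmf(x, l) for l in lam])
        c = alpha.sum()
        ll += np.log(c)
        alpha = alpha / c
    return ll

def aic_bic(ll, K, n):
    # free parameters: K-1 initial probs, K*(K-1) transition probs, K Poisson means
    k = (K - 1) + K * (K - 1) + K
    return -2 * ll + 2 * k, -2 * ll + k * np.log(n)
```

For K = 1 the recursion reduces to a plain sum of Poisson log-pmfs, which is a convenient sanity check on the implementation.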
APA, Harvard, Vancouver, ISO, and other styles
19

Li, Lingzhu. "Model checking for general parametric regression models." HKBU Institutional Repository, 2019. https://repository.hkbu.edu.hk/etd_oa/654.

Full text
Abstract:
Model checking for regressions has drawn considerable attention in the last three decades. Compared with global smoothing tests, local smoothing tests, which are more sensitive to high-frequency alternatives, can only detect local alternatives distinct from the null model at a much slower rate when the dimension of the predictor is high. When the number of covariates is large, the nonparametric estimation used in local smoothing tests lacks efficiency, and the corresponding tests then have trouble maintaining the significance level and detecting alternatives. To tackle this issue, we propose two methods under a high but fixed dimension framework. Further, we investigate a model checking test under divergent dimension, where the numbers of covariates and unknown parameters diverge with the sample size n. The first proposed test is constructed upon a typical kernel-based local smoothing test using the projection method. By employing projection and integration, the resulting test statistic has a closed form that depends only on the residuals and the distances between sample points. A merit of the developed test is that the distance is easy to implement compared with kernel estimation, especially when the dimension is high. Moreover, the test inherits some features of local smoothing tests owing to its construction. Although it is similar in spirit to an Integrated Conditional Moment test, it leads to a test whose weight function helps to collect more information from the samples than the Integrated Conditional Moment test does. Simulations and real data analysis demonstrate the power of the test. The second test, which is a synthesis of local and global smoothing tests, aims at overcoming the slow convergence rate caused by nonparametric estimation in local smoothing tests. A significant feature of this approach is that it allows nonparametric estimation-based tests, under the alternatives, to also share the merits of existing empirical process-based tests.
The proposed hybrid test can detect local alternatives at the fastest possible rate, like the empirical process-based tests, and simultaneously retains the sensitivity to high-frequency alternatives of the nonparametric estimation-based ones. This feature is achieved by utilizing an indicative dimension from the field of dimension reduction. As a by-product, we present a systematic study of a residual-related central subspace for model adaptation, showing when alternative models can be indicated and when they cannot. Numerical studies are conducted to verify its application. Since data volumes are increasing, the numbers of predictors and unknown parameters may diverge as the sample size n goes to infinity. Model checking under divergent dimension, however, is almost uncharted in the literature. In this thesis, an adaptive-to-model test is proposed to handle the divergent dimension, based on the two previously introduced tests. Theoretical results show that, to obtain asymptotic normality of the parameter estimator, the number of unknown parameters should be of order o(n^{1/3}). As a spinoff, we also demonstrate the asymptotic properties of the estimators of the residual-related central subspace and the central mean subspace under different hypotheses.
APA, Harvard, Vancouver, ISO, and other styles
20

Lattimer, Alan Martin. "Model Reduction of Nonlinear Fire Dynamics Models." Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/70870.

Full text
Abstract:
Due to the complexity, multi-scale, and multi-physics nature of the mathematical models for fires, current numerical models require too much computational effort to be useful in design and real-time decision making, especially when dealing with fires over large domains. To reduce the computational time while retaining the complexity of the domain and physics, our research has focused on several reduced-order modeling techniques. Our contributions are improving wildland fire reduced-order models (ROMs), creating new ROM techniques for nonlinear systems, and preserving optimality when discretizing a continuous-time ROM. Currently, proper orthogonal decomposition (POD) is being used to reduce wildland fire-spread models with limited success. We use a technique known as the discrete empirical interpolation method (DEIM) to address the slowness due to the nonlinearity. We create new methods to reduce nonlinear models, such as the Burgers' equation, that perform better than POD over a wider range of input conditions. Further, these ROMs can often be constructed without needing to capture full-order solutions a priori. This significantly reduces the off-line costs associated with creating the ROM. Finally, we investigate methods of time-discretization that preserve the optimality conditions in a certain norm associated with the input to output mapping of a dynamical system. In particular, we are able to show that the Crank-Nicolson method preserves the optimality conditions, but other single-step methods do not. We further clarify the need for these discrete-time ROMs to match at infinity in order to ensure local optimality.
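The POD step mentioned in this abstract is, at its core, a truncated SVD of a snapshot matrix. The following is a minimal sketch on synthetic snapshot data; the snapshot field and the 99.99% energy cutoff are assumptions of this illustration, not choices from the dissertation.

```python
import numpy as np

# synthetic snapshot matrix: columns are spatial states at successive times
x = np.linspace(0.0, 1.0, 200)
t = np.linspace(0.0, 1.0, 60)
X = (np.sin(np.pi * x)[:, None] * np.exp(-t)[None, :]
     + 0.3 * np.sin(3 * np.pi * x)[:, None] * np.cos(5 * t)[None, :])

U, s, _ = np.linalg.svd(X, full_matrices=False)
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
r = int(np.searchsorted(energy, 0.9999)) + 1   # modes capturing 99.99% of energy
Phi = U[:, :r]                                  # POD basis
X_r = Phi @ (Phi.T @ X)                         # rank-r reconstruction
rel_err = np.linalg.norm(X - X_r) / np.linalg.norm(X)
```

Because the synthetic field is a sum of two separable terms, two POD modes reconstruct it essentially exactly; for a genuine fire model, DEIM would additionally be needed to make the nonlinear term cheap to evaluate in the reduced space.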
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
21

Volinsky, Christopher T. "Bayesian model averaging for censored survival models /." Thesis, Connect to this title online; UW restricted, 1997. http://hdl.handle.net/1773/8944.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Vasconcelos, Julio Cezar Souza. "Modelo linear parcial generalizado simétrico." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/11/11134/tde-26072017-105153/.

Full text
Abstract:
In this work we propose the symmetric generalized partial linear model, based on generalized partial linear models and symmetric linear models: the response variable follows a distribution belonging to the symmetric family, and the linear predictor has a parametric and a nonparametric component. Distributions in this class include the normal, Student-t, power exponential, slash and hyperbolic, among others. A brief review of the concepts used throughout the work is presented, namely residual analysis, local influence, the smoothing parameter, splines, cubic splines, natural cubic splines and the backfitting algorithm, among others. In addition, a brief account of GAMLSS (generalized additive models for location, scale and shape) is given. The models were fitted using the gamlss package available in the free software R, and model selection was based on the Akaike information criterion (AIC). Finally, an application is presented based on a set of real data from Chile's financial area.
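The backfitting algorithm listed among the reviewed concepts can be sketched for a partial linear model y = Xβ + f(z) + ε, alternating a least-squares step for β with a smoothing step for f. This is a generic illustration with a Gaussian kernel smoother and simulated data, not the gamlss implementation; bandwidth and iteration count are arbitrary choices.

```python
import numpy as np

def kernel_smooth(z, r, h):
    # Nadaraya-Watson estimate of E[r | z] at the sample points
    W = np.exp(-0.5 * ((z[:, None] - z[None, :]) / h) ** 2)
    return (W @ r) / W.sum(axis=1)

def backfit_plm(y, X, z, h=0.1, iters=50):
    beta = np.zeros(X.shape[1])
    f = np.zeros_like(y)
    for _ in range(iters):
        beta = np.linalg.lstsq(X, y - f, rcond=None)[0]   # parametric step
        f = kernel_smooth(z, y - X @ beta, h)             # nonparametric step
        f -= f.mean()          # identifiability: centre the nonparametric part
    return beta, f

rng = np.random.default_rng(1)
n = 300
z = rng.uniform(0.0, 1.0, n)
X = rng.normal(size=(n, 2))
beta_true = np.array([1.5, -2.0])
y = X @ beta_true + np.sin(2 * np.pi * z) + 0.1 * rng.normal(size=n)
beta_hat, f_hat = backfit_plm(y, X, z)
```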
APA, Harvard, Vancouver, ISO, and other styles
23

Cabella, Brenno Caetano Troca. "Modelos aplicados ao crescimento e tratamento de tumores e à disseminação da dengue e tuberculose." Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/59/59135/tde-01082012-110701/.

Full text
Abstract:
The generalization of growth models by means of a control parameter was first proposed by Richards in 1959. In our work, we propose an alternative generalization, obtaining both an empirical and a microscopic interpretation of the control parameter. More specifically, when considering the proliferation of cells, the parameter is related to the range of interaction and to the fractal dimension of the cellular structure. We obtain the analytical solution of this differential equation and show that, by an appropriate choice of scale, the data collapse, representing independence of parameters and initial conditions. Furthermore, when the harvesting rate is interpreted as the removal of individuals from a population, it can be associated with treatment aimed at extinguishing a population of cancer cells. In epidemiological models, we propose to model the dynamics of dengue transmission using ordinary differential equations. Our model takes into account the dynamics of both the host and the vector, so we control the dynamics of both populations. We also include the enhancing effect in order to verify its influence on the dynamics of disease spread; this effect is considered one of the main hypotheses to explain hemorrhagic dengue, which can lead to death. We study this epidemiological model of dengue in order to reveal which factors lead to the dissemination of the more severe form of the disease and, possibly, to suggest public health policies to prevent it. We also implement a model of tuberculosis transmission using agent-based computational modeling, which offers the possibility of explicitly representing heterogeneity at the individual level. This approach allows us to deal with each individual in particular, unlike differential equation models, in which all individuals in the same compartment interact in a similar way, as in a mean-field interaction.
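The host-vector dengue dynamics described in this abstract can be illustrated with a generic two-population compartmental ODE system. The sketch below is a Ross-Macdonald-style toy model with made-up parameter values, not the thesis model; the enhancing effect is omitted.

```python
import numpy as np

def simulate(T=200.0, dt=0.01):
    Nh, Nv = 1.0, 2.0                   # normalised host and vector populations
    beta_h, beta_v = 0.4, 0.4           # hypothetical transmission rates
    gamma, mu = 1.0 / 7.0, 1.0 / 10.0   # host recovery and vector mortality rates
    Sh, Ih, Rh = 0.999, 0.001, 0.0      # host compartments (S-I-R)
    Sv, Iv = Nv, 0.0                    # vector compartments (S-I)
    peak = Ih
    for _ in range(int(T / dt)):        # forward-Euler integration
        new_h = beta_h * Sh * Iv / Nh   # new host infections (vector -> host)
        new_v = beta_v * Sv * Ih / Nh   # new vector infections (host -> vector)
        dSh, dIh, dRh = -new_h, new_h - gamma * Ih, gamma * Ih
        dSv, dIv = mu * Nv - new_v - mu * Sv, new_v - mu * Iv
        Sh += dt * dSh; Ih += dt * dIh; Rh += dt * dRh
        Sv += dt * dSv; Iv += dt * dIv
        peak = max(peak, Ih)
    return Sh, Ih, Rh, Sv, Iv, peak
```

With these parameter values the basic reproduction number is well above one, so the simulation produces a full epidemic wave while conserving both populations.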
APA, Harvard, Vancouver, ISO, and other styles
24

Tonner, Jaromír. "Overcomplete Mathematical Models with Applications." Doctoral thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2010. http://www.nusl.cz/ntk/nusl-233893.

Full text
Abstract:
Chen, Donoho and Saunders (1998) study the problem of finding sparse representations of vectors (signals) using special overcomplete systems of vectors spanning the signal space. Such systems (sometimes also called frames) are typically built either by extending an existing basis or by merging several bases. Unlike vectors, which form finite-dimensional spaces, the problem can also be formulated more generally within infinite-dimensional separable Hilbert spaces (Veselý, 2002b; Christensen, 2003). This functional approach allows us to find more precise representations of objects in these spaces which, unlike vectors, are not discrete. In this dissertation I deal with finding sparse representations in overcomplete models of time series of random variables with finite second moments. A numerical study captures the advantages and limitations of this approach applied to generalized linear models and to multivariate ARMA models. From the analysis of many numerical simulations as well as models of real processes, we can say that these methods reliably identify parameters close to zero, and thus allow us to reduce an originally ill-conditioned, overparametrized model, significantly reducing the number of estimated parameters. As a consequence, we need not worry about model orders, whose determination is usually a preliminary step of standard techniques. For shorter time series (100 samples or fewer), sparse estimates give better predictions than those based on standard methods (e.g. maximum likelihood in MATLAB - MATLAB System Identification Toolbox (IDENT)). For longer time series (500 samples or more), both techniques give essentially equally accurate predictions. On the other hand, solving these problems is more demanding, including in computation time, which nevertheless remains acceptable.
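Sparse representation in an overcomplete system, as studied here in the spirit of Chen, Donoho and Saunders' basis pursuit, can be illustrated with iterative soft-thresholding (ISTA) on a synthetic problem. The dimensions, penalty weight and iteration count below are arbitrary choices of this sketch, not values from the dissertation.

```python
import numpy as np

def ista(A, y, lam=0.05, iters=1000):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - y) / L        # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(2)
n, p, k = 60, 120, 4                          # overcomplete: more atoms than samples
A = rng.normal(size=(n, p)) / np.sqrt(n)
x_true = np.zeros(p)
x_true[rng.choice(p, k, replace=False)] = rng.normal(2.0, 0.5, k)
y = A @ x_true
x_hat = ista(A, y)
```

The l1 penalty drives the coefficients of unused atoms to exactly zero, which is the mechanism behind the "parameters close to zero" reduction described in the abstract.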
APA, Harvard, Vancouver, ISO, and other styles
25

Alves, Vitor Alex Oliveira. "Comparação de Métodos Diretos e de Dois-Passos na identificação de sistemas em malha fechada." Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/3/3139/tde-31052011-142428/.

Full text
Abstract:
Closed-loop system identification has considerable practical appeal, since it provides increased safety during the collection of experimental data and, at the same time, yields models better suited to the design of high-performance control systems. One of the main objectives of this thesis is a thorough comparison between Direct Methods (applied to closed-loop identification) and Two-Step Methods, the latter belonging to the Joint Input/Output approach. Complementing this comparison, a new two-step algorithm, Double Filtering, is proposed, and its convergence properties are analyzed in detail. The performance of the models identified by Direct and Two-Step methods is compared statistically through Monte Carlo simulations. The Two-Step methods considered in this thesis are u-Filtering (VAN DEN HOF; SCHRAMA, 1993), y-Filtering (HUANG; SHAH, 1997) and Double Filtering. A variant of the u-Filtering method is also proposed, providing two distinct ways of describing the output sensitivity function associated with the process under study (FORSSELL; LJUNG, 1999). The performance comparison criteria adopted in this thesis include free-run model validation (open-loop operation), in which responses to rectangular pulses are analyzed, and, with greater emphasis, closed-loop validation using the same controller installed in the system under study. The latter employs excitation signals of the same nature as those adopted in the identification tests, but with different realizations. Each validation is accompanied by its corresponding fit (LJUNG, 1999), a merit index that measures the proximity between the time responses of the physical system and of its mathematical model.
Process frequency responses are also considered, since they form the basis for determining the model uncertainty upper bound (ZHU, 2001). These upper bounds, together with the frequency responses of each identified model, provide grades (A, B, C or D) for the models; the thesis therefore uses merit indexes based on both time and frequency responses. The influence of the type and amplitude of the excitation signal applied to the loop (or, equivalently, of the signal-to-noise ratio) on the accuracy of the identified models is analyzed. The work also investigates the relationship between model quality and the point of application of the excitation signal: the reference of the control loop or the controller output. Finally, it is verified how the controller tuning affects the identified models. All simulations employ quasi non-stationary disturbance signals, typical of the process industries (ESMAILI et al., 2000). The results indicate that Direct Methods are more accurate when the model structure and order adopted in the identification are identical to those of the actual process, whereas Two-Step Methods are capable of providing very reliable models even when the adopted structure and order differ from those of the process under study.
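The Direct Method central to this comparison amounts to ordinary prediction-error estimation applied to closed-loop records of the plant input and output. A minimal sketch for a first-order ARX plant under proportional control follows; the plant coefficients, controller gain and noise level are invented for the illustration and are not from the thesis.

```python
import numpy as np

rng = np.random.default_rng(3)
a_true, b_true, Kp = 0.8, 0.5, 0.6       # plant y[t+1] = a*y[t] + b*u[t] + e, P control
N = 4000
r = np.sign(rng.normal(size=N))          # PRBS-like reference excitation
y = np.zeros(N)
u = np.zeros(N)
for t in range(N - 1):
    u[t] = Kp * (r[t] - y[t])                                  # feedback law
    y[t + 1] = a_true * y[t] + b_true * u[t] + 0.05 * rng.normal()

# Direct Method: least-squares ARX fit on the closed-loop data (y, u)
Phi = np.column_stack([y[:-1], u[:-1]])
a_hat, b_hat = np.linalg.lstsq(Phi, y[1:], rcond=None)[0]
```

With the correct model structure and white noise, the direct estimate is consistent despite the feedback, which mirrors the thesis' finding that Direct Methods excel when structure and order match the true process.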
APA, Harvard, Vancouver, ISO, and other styles
26

Rubáš, Aleš. "Transformace byznys modelů pro prostředí sociálních sítí." Master's thesis, Vysoká škola ekonomická v Praze, 2012. http://www.nusl.cz/ntk/nusl-165089.

Full text
Abstract:
This paper deals with the application of business models in the area of social networks. The aim is to identify relevant business models for the social network industry, based on a chosen methodology, and to analyse the business success of applying these models. The first part defines and describes existing business models. The next part defines social networks and selects potential business models suitable for implementation in the social network industry. The analytical part investigates chosen social networks from the perspective of the business models they implement, identifies the respective business models defined in the theoretical part, and evaluates the success of their application. The paper concludes with an analysis of the business models used by the selected social networks, focusing on the differences between the application of these models in the social network industry and in other industries.
APA, Harvard, Vancouver, ISO, and other styles
27

Escobar, Lindber Ivan Salas. "Modelos seesaw a baixas energias e modelo de violação mínima de sabor no modelo seesaw tipo III." Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/43/43134/tde-26032013-145746/.

Full text
Abstract:
While all models of Majorana neutrino masses lead to the same dimension-five effective operator, which does not conserve lepton number, the dimension-six operators induced at low energies conserve lepton number and differ depending on the high-energy model of new physics. We derive the low-energy dimension-six operators which are characteristic of generic seesaw models, in which neutrino masses result from the exchange of heavy fields that may be fermionic singlets, fermionic triplets or scalar triplets. The resulting operators may lead to effects observable in the near future if the coefficients of the dimension-five and dimension-six operators are decoupled. In this work we present the minimal flavor violation model in the context of the type III seesaw model, in which it is possible to obtain this decoupling. This allows the flavor structure of the model to be reconstructed from the values of the light neutrino masses and mixing parameters, even in the presence of CP-violating phases.
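For reference, the structure the abstract alludes to can be written schematically; type I is shown for compactness, and in the type III case the heavy singlet mass matrix M is replaced by that of the fermionic triplets. This is a textbook-style sketch, not the thesis' notation:

```latex
% d=5 (Weinberg) operator: violates lepton number, generates neutrino masses
\delta\mathcal{L}_{d=5} \sim \frac{c^{d=5}_{\alpha\beta}}{2}
  \left(\overline{L^{c}_{\alpha}}\,\tilde{H}^{*}\right)
  \left(\tilde{H}^{\dagger} L_{\beta}\right) + \mathrm{h.c.},
\qquad
c^{d=5} \sim Y^{T} M^{-1} Y,
\qquad
m_{\nu} \simeq -\frac{v^{2}}{2}\, c^{d=5}.

% d=6 operator: conserves lepton number; schematically
c^{d=6} \sim Y^{\dagger} M^{-2}\, Y.
```

The decoupling discussed in the abstract corresponds to keeping the lepton-number-violating combination Y^T M^{-1} Y small (hence light neutrino masses) while Y^† M^{-2} Y remains sizable enough to produce observable low-energy effects.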
APA, Harvard, Vancouver, ISO, and other styles
28

Lozano, Dairon Andrés Jiménez. "Modelo de Heisenberg Antiferromagnético de spin-1/2 na rede triangular com interações competitivas." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/43/43134/tde-21092016-212043/.

Full text
Abstract:
In this dissertation we study spin systems on low-dimensional lattices at zero temperature, analyzing their quantum phase transitions. More precisely, we study the properties of the ground state and the possible phase transitions of the antiferromagnetic spin-1/2 quantum Heisenberg model with interactions between first and second neighbors on several lattices, and in particular on the triangular lattice, which is the focus of our study. To obtain an approximate ground state, we use a variational method in which the lattice is partitioned into a set of plaquettes of sites, and the ground state is written as a tensor product of plaquette states. For the triangular lattice, we choose a triangle as the plaquette. Four phases were found: the antiferromagnetic Néel phase, the collinear phase, a modified Néel phase and one we call the resonating valence bond phase. We obtained the energies and sublattice magnetizations as functions of the ratio between first- and second-neighbor interactions. Between the Néel and collinear phases, we observe the resonating valence bond phase, characterized as a singlet with respect to the spin of each plaquette.
APA, Harvard, Vancouver, ISO, and other styles
29

Evers, Ludger. "Model fitting and model selection for 'mixture of experts' models." Thesis, University of Oxford, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.445776.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Rosenbaum, Rimolo Jorge. "Una Mirada Sobre la Negociación Colectiva en América Latina." Derecho & Sociedad, 2017. http://repositorio.pucp.edu.pe/index/handle/123456789/118352.

Full text
Abstract:
The author analyzes the current status of freedom of association in Latin America, specifically collective bargaining and the different models in which this institution presents itself after the neoliberal policies of economic deregulation of the nineteen-nineties.
APA, Harvard, Vancouver, ISO, and other styles
31

Bína, Vladislav. "Mnoharozměrná pravděpodobnostní rozdělení: Struktura a učení." Doctoral thesis, Vysoká škola ekonomická v Praze, 2010. http://www.nusl.cz/ntk/nusl-72677.

Full text
Abstract:
The thesis considers the representation of a discrete multidimensional probability distribution using the apparatus of compositional models. It focuses on the theoretical background and the structure of the search space for structure-learning algorithms in the framework of such models, with particular attention to the subclass of decomposable models. Based on the theoretical results, basic learning techniques are proposed and compared.
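The composition operator underlying compositional models can be illustrated for discrete distributions stored as arrays. This sketch implements the usual right composition over a shared variable y, f(x,y) composed with g(y,z) giving f(x,y)·g(y,z)/g(y); the toy probability tables are invented for the example.

```python
import numpy as np

def compose(f, g):
    """Right composition of two discrete distributions.

    f has shape (X, Y) over variables (x, y); g has shape (Y, Z) over (y, z).
    The result over (x, y, z) is f(x,y) * g(y,z) / g(y), with 0/0 taken as 0
    (f is assumed dominated by the marginal of g on y).
    """
    gy = g.sum(axis=1)                       # marginal of g on the shared variable y
    ratio = np.divide(g, gy[:, None], out=np.zeros_like(g),
                      where=gy[:, None] > 0)
    return f[:, :, None] * ratio[None, :, :]

f = np.array([[0.1, 0.2], [0.3, 0.4]])       # P(x, y)
g = np.array([[0.25, 0.25], [0.1, 0.4]])     # P(y, z)
h = compose(f, g)
```

The defining property, that the composed distribution keeps f as its (x, y) marginal while borrowing the conditional of z given y from g, is easy to verify numerically.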
APA, Harvard, Vancouver, ISO, and other styles
32

Rosales, Juan Carlos. "Modelagem matematica da dinamica da Leishmaniose." [s.n.], 2005. http://repositorio.unicamp.br/jspui/handle/REPOSIP/307213.

Full text
Abstract:
Advisor: Hyun Mo Yang
Dissertation (master's degree) - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica
Abstract: We model leishmaniasis, considering initially the urban (domestic) cycle with two hosts and then extending the results to the peridomestic cycle with n hosts. In the urban case we consider the vector population first constant and then with a carrying capacity. We simplify the model to assess one of the risk factors for leishmaniasis, deforestation. We derive the expressions for the basic reproduction number in all cases from the stability analysis. Simulations were carried out with data from endemic zones. Finally, we perform a sensitivity analysis of the basic reproduction number for the two-host case.
Master's degree
Mathematical Epidemiology. Biomathematics
Master in Applied Mathematics
APA, Harvard, Vancouver, ISO, and other styles
33

Bäckström, Fredrik, and Anders Ivarsson. "Meta-Model Guided Error Correction for UML Models." Thesis, Linköping University, Department of Computer and Information Science, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-8746.

Full text
Abstract:

Modeling is a complex process which is quite hard to do in a structured and controlled way. Many companies provide a set of guidelines for model structure, naming conventions and other modeling rules. Using meta-models to describe these guidelines makes it possible to check whether a UML model follows the guidelines or not. Providing this error checking of UML models is only one step on the way to making modeling software an even more valuable and powerful tool.

Moreover, by providing correction suggestions and automatic correction of these errors, we try to give the modeler as much help as possible in creating correct UML models. Since the area of model correction based on meta-models has not been researched earlier, we have taken an explorative approach.

The aim of the project is to create an extension of the program MetaModelAgent, by Objektfabriken, which is a meta-modeling plug-in for IBM Rational Software Architect. The thesis shows that error correction of UML models based on meta-models is a possible way to provide automatic checking of modeling guidelines. The developed prototype is able to give correction suggestions and automatic correction for many types of errors that can occur in a model.

The results imply that meta-model guided error correction techniques should be further researched and developed to enhance the functionality of existing modeling software.
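A minimal sketch of the idea of checking model elements against guideline rules and offering automatic corrections might look as follows; the element representation, rules and names are invented for illustration and are not MetaModelAgent's actual API:

```python
# Guidelines are predicates over model elements, each paired with a
# correction suggestion (a function producing a fixed element).

def class_name_is_capitalized(elem):
    return elem["kind"] != "class" or elem["name"][:1].isupper()

def attribute_is_lowercase(elem):
    return elem["kind"] != "attribute" or elem["name"][:1].islower()

GUIDELINES = [
    (class_name_is_capitalized, lambda e: dict(e, name=e["name"].capitalize())),
    (attribute_is_lowercase, lambda e: dict(e, name=e["name"][:1].lower() + e["name"][1:])),
]

def check_and_correct(model):
    """Return (violations, corrected_model) for a list of model elements."""
    violations, corrected = [], []
    for elem in model:
        fixed = elem
        for rule, fix in GUIDELINES:
            if not rule(fixed):
                violations.append((elem["name"], rule.__name__))
                fixed = fix(fixed)
        corrected.append(fixed)
    return violations, corrected

model = [
    {"kind": "class", "name": "customer"},
    {"kind": "attribute", "name": "FirstName"},
]
violations, corrected = check_and_correct(model)
```

The separation between detecting a violation and proposing a fix mirrors the thesis's distinction between error checking and correction suggestions.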

APA, Harvard, Vancouver, ISO, and other styles
34

Belitz, Christiane. "Model Selection in Generalised Structured Additive Regression Models." Diss., lmu, 2007. http://nbn-resolving.de/urn:nbn:de:bvb:19-78896.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Ribbing, Jakob. "Covariate Model Building in Nonlinear Mixed Effects Models." Doctoral thesis, Uppsala : Acta Universitatis Upsaliensis, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-7923.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Sommer, Julia. "Regularized estimation and model selection in compartment models." Diss., Ludwig-Maximilians-Universität München, 2013. http://nbn-resolving.de/urn:nbn:de:bvb:19-157673.

Full text
Abstract:
Dynamic imaging series acquired in medical and biological research are often analyzed with the help of compartment models. Compartment models provide a parametric, nonlinear function of interpretable, kinetic parameters describing how some concentration of interest evolves over time. Aiming to estimate the kinetic parameters, this leads to a nonlinear regression problem. In many applications, the number of compartments needed in the model is not known from biological considerations but should be inferred from the data along with the kinetic parameters. As data from medical and biological experiments are often available in the form of images, the spatial data structure of the images has to be taken into account. This thesis addresses the problem of parameter estimation and model selection in compartment models. Besides a penalized maximum likelihood based approach, several Bayesian approaches, including a hierarchical model with Gaussian Markov random field priors and a model state approach with flexible model dimension, are proposed and evaluated to accomplish this task. Existing methods are extended for parameter estimation and model selection in more complex compartment models. However, in nonlinear regression and, in particular, for more complex compartment models, redundancy issues may arise. This thesis analyzes difficulties arising due to redundancy issues and proposes several approaches to alleviate those redundancy issues by regularizing the parameter space. The potential of the proposed estimation and model selection approaches is evaluated in simulation studies as well as for two in vivo imaging applications: a dynamic contrast enhanced magnetic resonance imaging (DCE-MRI) study on breast cancer and a study on the binding behavior of molecules in living cell nuclei observed in a fluorescence recovery after photobleaching (FRAP) experiment.
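As a toy illustration of the regression problem behind compartment models, a one-compartment model C(t) = A·exp(-k·t) can be fitted by linearizing to log C = log A - k·t and using ordinary least squares (real multi-compartment fits are nonlinear and iterative; the data here are synthetic):

```python
import numpy as np

# Simulate noisy concentrations from a one-compartment decay model.
rng = np.random.default_rng(0)
A_true, k_true = 2.0, 0.5
t = np.linspace(0.1, 8.0, 40)
conc = A_true * np.exp(-k_true * t) * np.exp(rng.normal(0, 0.01, t.size))

# Linearized fit: log C = log A - k t, so a degree-1 polyfit recovers both.
slope, intercept = np.polyfit(t, np.log(conc), 1)
k_hat, A_hat = -slope, np.exp(intercept)
```

With more compartments the model becomes a sum of exponentials and the linearization no longer applies, which is where the estimation and selection problems the thesis studies begin.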
APA, Harvard, Vancouver, ISO, and other styles
37

Smith, Peter William Frederick. "Edge exclusion and model selection in graphical models." Thesis, Lancaster University, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.315138.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Hain, Thomas. "Hidden model sequence models for automatic speech recognition." Thesis, University of Cambridge, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.620302.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Magalla, Champa Hemanthi. "Model adequacy tests for exponential family regression models." Diss., Kansas State University, 2012. http://hdl.handle.net/2097/13640.

Full text
Abstract:
Doctor of Philosophy
Department of Statistics
James Neill
The problem of testing for lack of fit in exponential family regression models is considered. Such nonlinear models are the natural extension of Normal nonlinear regression models and generalized linear models. As is usually the case, inadequately specified models have an adverse impact on statistical inference and scientific discovery. Models of interest are curved exponential families determined by a sequence of predictor settings and mean regression function, considered as a sub-manifold of the full exponential family. Constructed general alternative models are based on clusterings in the mean parameter components and allow likelihood ratio testing for lack of fit associated with the mean, equivalently natural parameter, for a proposed null model. A maximin clustering methodology is defined in this context to determine suitable clusterings for assessing lack of fit. In addition, a geometrically motivated goodness of fit test statistic for exponential family regression based on the information metric is introduced. This statistic is applied to the cases of logistic regression and Poisson regression, and in both cases it can be seen to be equal to a form of the Pearson χ² statistic. This same statement is true for multinomial regression. In addition, the problem of testing for equal means in a heteroscedastic Normal model is discussed. In particular, a saturated 3 parameter exponential family model is developed which allows for equal means testing with unequal variances. A simulation study was carried out for the logistic and Poisson regression models to investigate comparative performance of the likelihood ratio test, the deviance test and the goodness of fit test based on the information metric. For logistic regression, the Hosmer-Lemeshow test was also included in the simulations.
Notably, the likelihood ratio test had comparable power with that of the Hosmer-Lemeshow test under both m- and n-asymptotics, with superior power for constructed alternatives. A distance function defined between densities and based on the information metric is also given. For logistic models, as the natural parameters go to plus or minus infinity, the densities become more and more deterministic and limits of this distance function are shown to play an important role in the lack of fit analysis. A further simulation study investigated the power of a likelihood ratio test and a geometrically derived test based on the information metric for testing equal means in heteroscedastic Normal models.
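The Pearson χ² statistic mentioned above can be illustrated for grouped binary data; the observed counts and fitted probabilities below are invented, not taken from the thesis:

```python
import numpy as np

# Pearson chi-square lack-of-fit statistic for grouped binary data:
# sum over groups of (observed - expected)^2 / binomial variance.
n = np.array([50, 50, 50, 50])               # trials per predictor setting
y = np.array([5, 14, 27, 41])                # observed successes
p_hat = np.array([0.10, 0.30, 0.55, 0.80])   # fitted success probabilities

expected = n * p_hat
pearson_x2 = np.sum((y - expected) ** 2 / (n * p_hat * (1 - p_hat)))
```

Under an adequate model, this statistic is compared to a chi-square reference distribution whose degrees of freedom depend on the number of groups and fitted parameters.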
APA, Harvard, Vancouver, ISO, and other styles
40

Ågren, Thuné Anders, and Åhfeldt Theo Puranen. "Extracting scalable program models for TLA model checking." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280344.

Full text
Abstract:
Program verification has long been of interest to researchers and practitioners for its role in asserting reliability in critical systems. Many such systems feature reactive behavior, where temporal properties are of interest. Consequently, a number of systems and program verification tools for dealing with temporal logic have been developed. One such is TLA, whose main purpose is to verify temporal properties of systems using model checking. A TLA model is determined by a logical formula that describes all possible behaviors of a system. TLA is primarily used to verify abstract system designs, as it is considered ill-suited for implementation code in real programming languages. This thesis investigates how TLA models can be extracted from real code, in order to verify temporal properties of the code. The main problem is getting the model size to scale well with the size of the code, while still being representative. The paper presents a general method for achieving this, which utilizes deductive verification to abstract away unnecessary implementation details from the model. Specifically, blocks which can be considered atomic are identified in the original code and replaced with Hoare-style assertions representing only the data transformation performed in the block. The result can then be translated to a more compact TLA model. The assertions, known as block contracts, are verified separately using deductive verification, ensuring the model remains representative. We successfully instantiate the method on a simple C program, using the tool Frama-C to perform deductive verification on blocks of code and translating the result to a TLA model in several steps. The PlusCal algorithm language is used as an intermediary to simplify the translation, and block contracts are successfully translated to TLA using a simple encoding. The results show promise, but there is future work to be done.
Program verification has long been of interest for assuring the reliability of critical systems. Many such systems exhibit reactive behavior, where temporal properties are of interest. Consequently, a number of systems and program verification tools for handling temporal logic have been developed. One such is TLA, whose main purpose is to verify properties of abstract algorithms using model checking. A TLA model is determined by a logical formula that describes all possible behaviors of a given system. TLA is considered less suitable for real implementation code and is primarily used to verify properties of abstract system models. This thesis investigates how TLA models can be extracted from real code in order to verify the code's temporal properties. The main problem is to create a model of manageable size, even for larger programs, that is still representative. We present a general method for achieving this that uses deductive verification to abstract unnecessary implementation details away from the model. Code blocks that can be regarded as atomic in the original code are identified and replaced with block contracts representing the data transformation performed in the block. The result can then be translated into a more compact TLA model. The contracts are verified separately with deductive verification, which ensures that the model remains representative. We successfully instantiate the method on a simple C program. The tool Frama-C is used to perform deductive verification on code blocks, and the result is translated into a TLA model in several steps. The algorithm language PlusCal is used as an intermediate step to simplify the translation, and block contracts are translated to TLA with a simple encoding. The results are promising, but several points require further work.
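The explicit-state model checking that TLA-style tools perform can be sketched in miniature: explore all reachable states breadth-first and test an invariant on each. The toy transition system below (two "processes" incrementing counters modulo 4) is invented for this sketch and is not from the thesis:

```python
from collections import deque

def next_states(state):
    """Successors of a state: either process increments its counter mod 4."""
    x, y = state
    return [((x + 1) % 4, y), (x, (y + 1) % 4)]

def check_invariant(initial, invariant):
    """Breadth-first exploration; return (invariant_holds, states_explored)."""
    seen, queue = {initial}, deque([initial])
    while queue:
        s = queue.popleft()
        if not invariant(s):
            return False, len(seen)
        for t in next_states(s):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return True, len(seen)

holds, n = check_invariant((0, 0), lambda s: 0 <= s[0] + s[1] <= 6)
```

The state-space size is what the thesis's abstraction attacks: replacing an atomic block with a contract collapses many intermediate states into one transition.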
APA, Harvard, Vancouver, ISO, and other styles
41

Vaidyanathan, Sivaranjani. "Bayesian Models for Computer Model Calibration and Prediction." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1435527468.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Guo, Yixuan. "Bayesian Model Selection for Poisson and Related Models." University of Cincinnati / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1439310177.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Tan, Falong. "Projected adaptive-to-model tests for regression models." HKBU Institutional Repository, 2017. https://repository.hkbu.edu.hk/etd_oa/390.

Full text
Abstract:
This thesis investigates Goodness-of-Fit tests for parametric regression models. With the help of sufficient dimension reduction techniques, we develop adaptive-to-model tests using projection in both the fixed dimension settings and the diverging dimension settings. The first part of the thesis develops a globally smoothing test in the fixed dimension settings for a parametric single index model. When the dimension p of covariates is larger than 1, existing empirical process-based tests either have non-tractable limiting null distributions or are not omnibus. To attack this problem, we propose a projected adaptive-to-model approach. If the null hypothesis is a parametric single index model, our method can fully utilize the dimension reduction structure under the null as if the regressors were one-dimensional. Then a martingale transformation proposed by Stute, Thies, and Zhu (1998) leads our test to be asymptotically distribution-free. Moreover, our test can automatically adapt to the underlying alternative models such that it can be omnibus and thus detect all alternative models departing from the null at the fastest possible convergence rate in hypothesis testing. A comparative simulation is conducted to check the performance of our test. We also apply our test to a self-noise mechanisms data set for illustration. The second part of the thesis proposes a globally smoothing test for parametric single-index models in the diverging dimension settings. In high dimensional data analysis, the dimension p of covariates is often large even though it may be still small compared with the sample size n. Thus we should regard p as a diverging number as n goes to infinity. With this in mind, we develop an adaptive-to-model empirical process as the basis of our test statistic, when the dimension p of covariates diverges to infinity as the sample size n tends to infinity. 
We also show that the martingale transformation proposed by Stute, Thies, and Zhu (1998) still works in the diverging dimension settings. The limiting distributions of the adaptive-to-model empirical process under both the null and the alternative are discussed in this new situation. Simulation examples are conducted to show the performance of this test when p grows with the sample size n. The last chapter of the thesis considers the same problem as in the second part. Bierens (1982) first constructed tests based on projection pursuit techniques and obtained an integrated conditional moment (ICM) test. We notice that Bierens's (1982) test performs very badly for large p, although it may be viewed as a globally smoothing test. With the help of sufficient dimension reduction techniques, we propose an adaptive-to-model integrated conditional moment test for regression models in the diverging dimension setting. We also give the asymptotic properties of the new tests under both the null and alternative hypotheses in this new situation. When p grows with the sample size n, simulation studies show that our new tests perform much better than Bierens's (1982) original test.
APA, Harvard, Vancouver, ISO, and other styles
44

Muzy, Paulo de Tarso Artencio. "Inomogeneidades no espaço (desordem fraca; modelos de p-spins) e representação no espaço de Fock em problemas da física estatística." Universidade de São Paulo, 2004. http://www.teses.usp.br/teses/disponiveis/43/43134/tde-26022014-093522/.

Full text
Abstract:
We investigate the relevance of (weak) disorder correlated along d_1 dimensions in ferromagnetic Potts models on several (d-dimensional) hierarchical lattices. We show that for d - d_1 = 1 the weak-disorder approximation produces an unphysical fixed point, indicating that the critical behavior cannot be described by a perturbative scheme. For d - d_1 > 1, the disorder is relevant and produces a physically acceptable fixed point. We establish a relevance criterion based on the crossover exponent. We then examine random models with competing interactions of p spherical spins, in the Curie-Weiss version, which can be solved without the replica method. We obtain the phase diagrams of models including 2- and 4-spin interactions, assuming simple forms (according to the Hopfield or van Hemmen schemes) for the random terms. We show that the Hopfield and van Hemmen choices do not change the topology of the phase diagrams. Finally, we present a review of the Fock-space construction for Hamiltonian systems, originally proposed by M. Schönberg in order to obtain classical statistical mechanics from the Liouville equation. The same kind of formalism can be applied to the master equation of a stochastic system. As an example, we derive the evolution operator of the linear Glauber model in the number representation.
APA, Harvard, Vancouver, ISO, and other styles
45

Mölders, Nicole. "Concepts for coupling hydrological and meteorological models." Universitätsbibliothek Leipzig, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-215597.

Full text
Abstract:
Earth system modeling, climate modeling, water resource research as well as integrated modeling (e.g., climate impact studies) require the coupling of hydrological and meteorological models. The paper presents recent concepts for such a coupling. It points out the difficulties to be solved and provides a brief overview of recently realized couplings. Furthermore, a concept for a hydrometeorological module to couple hydrological and meteorological models is introduced.
Water resource research, Earth system and climate modeling, as well as integrated modeling (e.g., climate impact research) require the coupling of hydrological and meteorological models. This article presents concepts for such a coupling. It points out the difficulties to be solved and gives a brief overview of couplings realized so far. It also introduces a concept for a hydrometeorological module for coupling hydrological with meteorological models.
APA, Harvard, Vancouver, ISO, and other styles
46

Křemen, Jaroslav. "Dynamické modely poptávky po penězích. Aplikace na ČR." Master's thesis, Vysoká škola ekonomická v Praze, 2008. http://www.nusl.cz/ntk/nusl-11932.

Full text
Abstract:
This work deals with theories of the demand for money and applications of error correction models and VAR models to the demand for money in the Czech Republic. The theoretical part discusses theories of the demand for money, with an emphasis on Patinkin's theory, and the formulation of VAR models and error correction models. In the application part, VAR(1) and VAR(2) models and error correction models are applied to the demand for money in the Czech Republic. The data used in this work come from the information systems of the CNB and the CZSO.
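A VAR(1) model of the kind applied above, y_t = A·y_{t-1} + e_t, can be estimated by ordinary least squares, one regression per equation. This sketch uses simulated data with an invented coefficient matrix, not the Czech monetary data:

```python
import numpy as np

# Simulate a stationary bivariate VAR(1) process.
rng = np.random.default_rng(1)
A_true = np.array([[0.5, 0.1],
                   [0.2, 0.3]])
T = 2000
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A_true @ y[t - 1] + rng.normal(0, 0.1, 2)

# OLS: stack lagged values as regressors; lstsq solves X @ B ~ Y, so B = A.T.
X, Y = y[:-1], y[1:]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
```

Higher-order VAR(p) models add further lags as columns of X, and error correction models add a lagged cointegration residual as an extra regressor.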
APA, Harvard, Vancouver, ISO, and other styles
47

Makhtar, Mokhairi. "Contributions to Ensembles of Models for Predictive Toxicology Applications. On the Representation, Comparison and Combination of Models in Ensembles." Thesis, University of Bradford, 2012. http://hdl.handle.net/10454/5478.

Full text
Abstract:
The increasing variety of data mining tools offers a large palette of types and representation formats for predictive models. Managing the models then becomes a big challenge, as do reusing the models and keeping model and data repositories consistent. Sustainable access to these models and assessment of their quality are otherwise limited for researchers. The Data and Model Governance (DMG) approach makes it easier to process and support complex solutions. In this thesis, contributions are proposed towards ensembles of models, with a focus on model representation, comparison and usage. Predictive toxicology was chosen as an application field to demonstrate the proposed approach of representing predictive models linked to data for DMG. Analysis methods such as predictive model comparison and predictive model combination for reusing models from a collection were also studied. Thus, in this thesis, an original structure for the pool of models, called Predictive Toxicology Markup Language (PTML), is proposed. PTML offers a representation scheme for predictive toxicology data and models generated by data mining tools. The proposed representation makes it possible to compare models and select the relevant ones based on different performance measures, using the proposed similarity-measuring techniques. The relevant models were selected using a proposed cost function that is a composite of performance measures: Accuracy (Acc), False Negative Rate (FNR) and False Positive Rate (FPR). The cost function ensures that only quality models are selected as candidate members of an ensemble. The proposed algorithm for optimising and combining Acc, FNR and FPR of ensemble models, using the double-fault measure as the diversity measure, improves Acc by between 0.01 and 0.30 for all toxicology data sets compared to other ensemble methods such as Bagging, Stacking, Bayes and Boosting.
The highest improvements in Acc were for the data sets Bee (0.30), Oral Quail (0.13) and Daphnia (0.10). A small improvement (of about 0.01) in Acc was achieved for Dietary Quail and Trout. Important results from combining all three performance measures also relate to reducing the distance between FNR and FPR by about 0.17 to 0.28 for the Bee, Daphnia, Oral Quail and Trout data sets. For the Dietary Quail data set the improvement was only about 0.01, but this data set is well known as a difficult learning exercise. For the five UCI data sets tested, similar results were achieved, with Acc improvements between 0.10 and 0.11, further closing the gaps between FNR and FPR. In conclusion, the results show that by combining performance measures (Acc, FNR and FPR), as proposed within this thesis, Acc increased and the distance between FNR and FPR decreased.
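A composite cost over Acc, FNR and FPR of the kind described above might be sketched as follows; the equal weights are an assumption for illustration, not the thesis's calibrated cost function:

```python
def rates(tp, fp, tn, fn):
    """Accuracy, false negative rate and false positive rate from counts."""
    acc = (tp + tn) / (tp + fp + tn + fn)
    fnr = fn / (fn + tp)
    fpr = fp / (fp + tn)
    return acc, fnr, fpr

def cost(tp, fp, tn, fn, w=(1.0, 1.0, 1.0)):
    """Lower is better: penalize inaccuracy, missed positives, false alarms."""
    acc, fnr, fpr = rates(tp, fp, tn, fn)
    return w[0] * (1 - acc) + w[1] * fnr + w[2] * fpr

# A model that misses fewer positives should score a lower cost.
good = cost(tp=45, fp=5, tn=45, fn=5)
bad = cost(tp=30, fp=5, tn=45, fn=20)
```

Ranking candidate models by such a cost, rather than by accuracy alone, is what keeps high-FNR models out of the candidate pool for an ensemble.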
APA, Harvard, Vancouver, ISO, and other styles
48

Ruivo, Marina Pereira. "Risco de modelo : análise à robustez do CreditMetrics." Master's thesis, Instituto Superior de Economia e Gestão, 2013. http://hdl.handle.net/10400.5/11416.

Full text
Abstract:
Master in Finance
Internal credit risk models are an essential tool in the management activity of banking institutions. The dependence of financial institutions on these models and the credibility placed in them can, in environments of great instability, generate biased results. The evaluation of a portfolio's CreditVaR using a credit model such as CreditMetrics is one example of this. CreditMetrics, developed by J.P. Morgan in 1997, evaluates the distribution of changes in the future value of the portfolio based on an analysis of the migration of the issuers' credit quality. This project analyzes the risks that variation in the main parameters of the CreditMetrics model (the rating transition matrix, the recovery rate and the correlation between assets) poses to the risk of a real credit portfolio belonging to a Portuguese investment bank. Beyond the impact on the CreditVaR measure, we analyze more broadly the impact of varying these parameters on the expected value and on the shape of the portfolio's loss distribution.
Internal credit risk models are essential tools in the risk management of banks. The dependence of financial institutions on these models and their trust in them may, in highly volatile environments, generate biased results. The evaluation of a portfolio's CreditVaR using an internal credit risk model such as CreditMetrics is one example of this. The CreditMetrics model, developed by J.P. Morgan in 1997, evaluates the distribution of changes in the future value of a portfolio based on an analysis of the migration of the credit quality of the issuers of the securities in the portfolio. This project analyzes the effects of shocks to the main parameters of the CreditMetrics model (the credit quality transition matrix, the recovery rate and the correlation between assets) on the risk of a real credit portfolio owned by a Portuguese investment bank. Beyond the impact in terms of the CreditVaR measure, we analyze more broadly the impact of these shocks on the expected loss and on the shape of the portfolio loss distribution.
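A Monte Carlo sketch of a CreditMetrics-style CreditVaR for a single exposure: draw end-of-period ratings from a transition matrix, revalue the position, and take a loss quantile. The two-state transition probabilities and bond values below are invented for illustration, not the portfolio in the project:

```python
import numpy as np

rng = np.random.default_rng(2)
ratings = ["investment", "default"]
transition = np.array([0.98, 0.02])   # P(stay), P(default) from "investment"
values = np.array([100.0, 40.0])      # position value in each end-of-period state
                                      # (40.0 reflects an assumed recovery rate)

# Simulate end-of-period states, revalue, and compute losses vs. no migration.
draws = rng.choice(len(ratings), size=100_000, p=transition)
losses = values[0] - values[draws]
credit_var_99 = np.quantile(losses, 0.99)
```

Shocking the transition probabilities or the recovery value and recomputing the quantile is the kind of parameter sensitivity the project studies, here reduced to a toy with only two rating states.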
APA, Harvard, Vancouver, ISO, and other styles
49

Owens, Charles Ray. "Donating Behavior in Children: The Effect of the Model's Similarity with the Model and Parental Models." DigitalCommons@USU, 1985. https://digitalcommons.usu.edu/etd/5318.

Full text
Abstract:
Model similarity and familiarity were investigated for adult and similar-aged models demonstrating prosocial behavior. Third, fourth and fifth graders (75 male and 75 female) participated. Subjects were given questionnaires regarding their most and least preferred peers and their most preferred parent. The models were described as similar to the subject for some groups. Subjects were given instructions concerning a sorting task and cash certificates they would earn. Fifty control subjects viewed a video that contained neither prosocial nor antisocial behavior. For the remaining subjects, a 2 (sex of subject) X 2 (similar age model versus adult model) X 5 (treatment) factorial design was employed. The 5 treatment factors were: unfamiliar models described as a) similar, b) dissimilar, c) with no similarity mentioned, and familiar models who were d) preferred (either a best friend or preferred parent), and e) least preferred (either a least preferred peer or parent). Subjects (except the control group) saw a video taped model who demonstrated a sorting task and collected 20 certificates. All models shared 10 certificates by placing them in a canister marked "for the poor children". Subjects completed the task and had an opportunity to share while alone. Significantly more sharing occurred in the similar-age group than in the adult-model group, both of which imitated more than the control group. There was no difference in the imitation of males and females overall. There was no difference between the groups that saw unfamiliar models who were described as similar and the groups that saw unfamiliar models with no similarity mentioned. Each of these produced more imitative donating than the control, the familiar preferred model, and the unfamiliar model described as dissimilar groups. The familiar least preferred model group shared more than the control group. There were significant interaction effects between sex and treatment and between sex, treatment, and age of model.
Unfamiliar models with no similarity mentioned and peer models each produced more sharing than parent models. Subjects who observed an unfamiliar model described as similar donated more than those seeing an unfamiliar model described as dissimilar. An unfamiliar age-mate model produced more sharing than a familiar and preferred friend. Donations were greater when the subject observed a least preferred peer rather than a best friend. This difference was due to the female subjects' performance.
APA, Harvard, Vancouver, ISO, and other styles
50

Hellman, Samuel. "Evaluating model selection criteria for nuisance models in causal inference : (when the true models are finite mixtures)." Thesis, Umeå universitet, Statistik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-122686.

Full text
Abstract:
When using inverse probability weighting (IPW) and doubly robust (DR) estimators for estimating the causal effect, we need nuisance models for estimating the propensity scores and, for the DR estimator, also the outcome regression. These nuisance models are often created using a selection of covariates, higher order and interaction terms, which requires some model selection criterion. In this paper we study selection criteria and their performance in fitting nuisance models for semi-parametric estimation of the causal effect. This is done as a simulation study. Apart from the usual simulation design using known parametric models for the true values of the outcomes and propensity scores, we also use finite mixture model simulation. From that we can evaluate how changing the design approach changes the characteristics of the nuisance models corresponding to the different criteria, and whether we can still find the causal effect using them. In all simulations we found that the Bayesian Information Criterion generally creates the nuisance models with the fewest estimated parameters, especially for estimating the propensity scores, where it rarely uses more than one covariate. Due to this, the IPW estimations of the causal effect were generally more biased for that criterion than for the others. When using DR estimators, the robustness of the estimator causes all criteria to create equally unbiased estimations with low variance. Using LASSO for model selection was generally the best, as it created estimations with the lowest bias and variance. When using finite mixture models in the simulation design, the differences between the estimations corresponding to the different criteria disappear. When we only use mixtures for the propensity scores we can still find the true causal effect, even for the IPW estimators that only use the propensity scores as a nuisance parameter.
When applying finite mixture models to both the propensity scores and the outcome regressions, we overestimate the causal effect for all criteria, and we see little to no difference between the estimators made using the different criteria. In that case we also could not find the true causal effect using a non-parametric matching estimator.
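The IPW estimator discussed above can be sketched with the true propensity scores (so no nuisance-model selection is involved); the simulated data have a known treatment effect of 2.0 and are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50_000
x = rng.normal(size=n)
prop = 1 / (1 + np.exp(-x))            # true propensity score P(T=1 | x)
treat = rng.binomial(1, prop)
y = 1.0 + 0.5 * x + 2.0 * treat + rng.normal(size=n)

# IPW estimate of the average treatment effect: weight each observed
# outcome by the inverse probability of the treatment it received.
ate_ipw = np.mean(treat * y / prop) - np.mean((1 - treat) * y / (1 - prop))
```

In practice the propensity scores must themselves be estimated by a nuisance model, which is exactly where the choice of selection criterion studied in the thesis enters.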
APA, Harvard, Vancouver, ISO, and other styles