To see the other types of publications on this topic, follow the link: Model.

Dissertations / Theses on the topic 'Model'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations / theses for your research on the topic 'Model.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Andriushchenko, Roman. "Computer-Aided Synthesis of Probabilistic Models." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2020. http://www.nusl.cz/ntk/nusl-417269.

Full text
Abstract:
This thesis deals with the problem of automated synthesis of probabilistic systems: given a family of Markov chains, how can we efficiently identify the member that satisfies a given specification? Such families frequently arise in various areas of engineering when modelling systems under uncertainty, and deciding even the simplest synthesis questions is an NP-hard problem. In this work, we examine existing techniques based on counterexample-guided inductive synthesis (CEGIS) and on counterexample-guided abstraction refinement (CEGAR), and we propose a novel integrated method for probabilistic synthesis. Experiments on relevant models demonstrate that the proposed technique is not only comparable with state-of-the-art methods but in most cases significantly outperforms existing approaches, sometimes by several orders of magnitude.
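The synthesis question in this abstract can be illustrated with the brute-force baseline that such work improves on: enumerate a small parametric family of Markov chains and check each member against a reachability specification by solving a linear system. The chain structure, parameter grid, and threshold below are invented for illustration; this is a sketch of plain enumeration, not the CEGIS/CEGAR method of the thesis.

```python
import numpy as np

def reach_prob(P, target, fail):
    # Probability of eventually reaching `target` from each state,
    # with boundary conditions x[target] = 1 and x[fail] = 0.
    n = P.shape[0]
    x = np.zeros(n)
    x[target] = 1.0
    transient = [s for s in range(n) if s not in (target, fail)]
    A = np.eye(len(transient)) - P[np.ix_(transient, transient)]
    b = P[np.ix_(transient, [target])].sum(axis=1)
    x[transient] = np.linalg.solve(A, b)
    return x

def chain(p):
    # Family member: from state 0, reach the target (state 1) with
    # probability p, loop with 0.5*(1-p), fail (state 2) otherwise.
    return np.array([[0.5 * (1 - p), p, 0.5 * (1 - p)],
                     [0.0, 1.0, 0.0],
                     [0.0, 0.0, 1.0]])

family = [0.1 * k for k in range(1, 10)]
threshold = 0.8  # specification: P(reach target from state 0) >= 0.8
satisfying = [p for p in family if reach_prob(chain(p), 1, 2)[0] >= threshold]
```

Here the reachability probability is 2p/(1+p), so only the members with p >= 2/3 satisfy the specification; CEGIS/CEGAR-style methods aim to prune such families without checking every member.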
APA, Harvard, Vancouver, ISO, and other styles
2

Evers, Ludger. "Model fitting and model selection for 'mixture of experts' models." Thesis, University of Oxford, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.445776.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Kang, Changsung. "Model testing for causal models." [Ames, Iowa : Iowa State University], 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Coskun, Sarp Arda. "PathCase-SB Model Simulation and Model Composition Tools for Systems Biology Models." Case Western Reserve University School of Graduate Studies / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=case1328556115.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Kotsalis, Georgios. "Model reduction for Hidden Markov models." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/38255.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2006.
Includes bibliographical references (leaves 57-60).
The contribution of this thesis is the development of tractable computational methods for reducing the complexity of two classes of dynamical systems, finite alphabet Hidden Markov Models and Jump Linear Systems with finite parameter space. The reduction algorithms employ convex optimization and numerical linear algebra tools and do not pose any structural requirements on the systems at hand. In the Jump Linear Systems case, a distance metric based on randomization of the parametric input is introduced. The main point of the reduction algorithm lies in the formulation of two dissipation inequalities, which in conjunction with a suitably defined storage function enable the derivation of low complexity models, whose fidelity is controlled by a guaranteed upper bound on the stochastic L2 gain of the approximation error. The developed reduction procedure can be interpreted as an extension of the balanced truncation method to the broader class of Jump Linear Systems. In the Hidden Markov Model case, Hidden Markov Models are identified with appropriate Jump Linear Systems that satisfy certain constraints on the coefficients of the linear transformation. This correspondence enables the development of a two step reduction procedure.
(cont.) In the first step, the image of the high dimensional Hidden Markov Model in the space of Jump Linear Systems is simplified by means of the aforementioned balanced truncation method. Subsequently, in the second step, the constraints that reflect the Hidden Markov Model structure are imposed by solving a low dimensional non convex optimization problem. Numerical simulation results provide evidence that the proposed algorithm computes accurate reduced order Hidden Markov Models, while achieving a compression of the state space by orders of magnitude.
by Georgios Kotsalis.
Ph.D.
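The balanced truncation that this abstract extends to Jump Linear Systems can be sketched for an ordinary stable discrete-time linear system using the classical square-root algorithm; the 4-state system below is invented toy data, and this is a minimal sketch rather than the thesis's method.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov, svd, cholesky

def balanced_truncation(A, B, C, r):
    # Controllability and observability Gramians of the stable
    # discrete-time system (A, B, C).
    Wc = solve_discrete_lyapunov(A, B @ B.T)
    Wo = solve_discrete_lyapunov(A.T, C.T @ C)
    # Square-root balancing: factor both Gramians; the singular
    # values of the cross product are the Hankel singular values.
    Lc = cholesky(Wc, lower=True)
    Lo = cholesky(Wo, lower=True)
    U, s, Vt = svd(Lo.T @ Lc)
    S = np.diag(s[:r] ** -0.5)
    T = Lc @ Vt[:r].T @ S        # reduced state -> full state
    Ti = S @ U[:, :r].T @ Lo.T   # full state -> reduced state
    return Ti @ A @ T, Ti @ B, C @ T, s

rng = np.random.default_rng(0)
A = np.diag([0.9, 0.5, 0.3, 0.1])      # stable toy dynamics
B = rng.standard_normal((4, 1))
C = rng.standard_normal((1, 4))
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=2)
```

States with small Hankel singular values contribute little to the input-output map, which is why truncating them yields an error bound in terms of the discarded values.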
APA, Harvard, Vancouver, ISO, and other styles
6

Papacchini, Fabio. "Minimal model reasoning for modal logic." Thesis, University of Manchester, 2015. https://www.research.manchester.ac.uk/portal/en/theses/minimal-model-reasoning-for-modal-logic(dbfeb158-f719-4640-9cc9-92abd26bd83e).html.

Full text
Abstract:
Model generation and minimal model generation are useful for tasks such as model checking, query answering and debugging of logical specifications. Due to this variety of applications, several minimality criteria and model generation methods for classical logics have been studied. Minimal model generation for modal logics, however, has not received the same attention from the research community. This thesis aims to fill this gap by investigating minimality criteria and designing minimal model generation procedures for all the sublogics of the multi-modal logic S5(m) and their extensions with universal modalities. All the procedures are minimal model sound and complete, in the sense that they generate all and only minimal models. The starting point of the investigation is the definition of a Herbrand semantics for modal logics, on which a syntactic minimality criterion is devised. The syntactic nature of the minimality criterion allows for an efficient minimal model generation procedure; on the other hand, the resulting minimal models can be redundant or semantically non-minimal with respect to each other. To overcome the syntactic limitations of the first minimality criterion, the thesis moves from minimal modal Herbrand models to semantic minimality criteria based on subset-simulation. First, theoretical procedures for the generation of models minimal modulo subset-simulation are presented. These procedures are minimal model sound and complete, but they might not terminate. The minimality criterion and the procedures are then refined in such a way that termination can be ensured while preserving minimal model soundness and completeness.
APA, Harvard, Vancouver, ISO, and other styles
7

Pommellet, Adrien. "On model-checking pushdown systems models." Thesis, Sorbonne Paris Cité, 2018. http://www.theses.fr/2018USPCC207/document.

Full text
Abstract:
In this thesis, we propose different model-checking techniques for pushdown system models. Pushdown systems (PDSs) are indeed known to be a natural model for sequential programs, as they feature an unbounded stack that can simulate the assembly stack of an actual program. Our first contribution consists in model-checking the logic HyperLTL that adds existential and universal quantifiers on path variables to LTL against pushdown systems (PDSs). The model-checking problem of HyperLTL has been shown to be decidable for finite state systems. We prove that this result does not hold for pushdown systems nor for the subclass of visibly pushdown systems. Therefore, we introduce approximation algorithms for the model-checking problem, and show how these can be used to check security policies. In the second part of this thesis, as pushdown systems can fail to accurately represent the way an assembly stack actually operates, we introduce pushdown systems with an upper stack (UPDSs), a model where symbols popped from the stack are not destroyed but instead remain just above its top, and may be overwritten by later push rules. We prove that the sets of successors post* and predecessors pre* of a regular set of configurations of such a system are not always regular, but that post* is context-sensitive, hence, we can decide whether a single configuration is forward reachable or not. We then present methods to overapproximate post* and under-approximate pre*. Finally, we show how these approximations can be used to detect stack overflows and stack pointer manipulations with malicious intent. Finally, in order to analyse multi-threaded programs, we introduce in this thesis a model called synchronized dynamic pushdown networks (SDPNs) that can be seen as a network of pushdown processes executing synchronized transitions, spawning new pushdown processes, and performing internal pushdown actions. The reachability problem for this model is obviously undecidable. 
Therefore, we compute an abstraction of the execution paths between two regular sets of configurations. We then apply this abstraction framework to an iterative abstraction refinement scheme.
APA, Harvard, Vancouver, ISO, and other styles
8

Bartošík, Tomáš. "Metody simulace dodávky výkonu z větrných elektráren." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2008. http://www.nusl.cz/ntk/nusl-217592.

Full text
Abstract:
This Master's thesis studies wind-energy power supply and compares the character of wind power supply in the Czech Republic with that abroad. The thesis opens with a short introduction to historical wind applications and continues with the theory of wind engines, their construction and equipment. The next part describes the characteristics and physics of wind energy: the influence of wind speed on the power output of a wind turbine and the physical limits of wind-engine efficiency. Meteorological forecasting possibilities are then discussed. The following chapter classifies wind power plants by geographical location, characterises them, presents individual cases of wind-energy business growth in the Czech Republic and other countries, and lists many locations in the Czech Republic suitable for wind parks. Chapter 5 describes the data-analysis methods and presents the results of day-period and year-period analyses. A simple forecast model is sketched and built in the following chapter, which describes regression-analysis methods such as the autoregressive moving-average (ARMA) model, which can give satisfactory results, and the Markov-switching autoregressive (MSAR) model. The step beyond such statistical forecast models leads to sophisticated large forecasting systems, which require meteorological forecast data and historical wind-power data analysed by statistical models; these systems have been developed recently and are in ordinary use today.
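The ARMA-style statistical forecasting mentioned in the abstract can be illustrated with a minimal least-squares autoregressive fit on synthetic data; the series and coefficients below are invented stand-ins for measured wind-power output, and a production model would use a dedicated time-series library.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "wind power" series: an AR(2) process with known
# coefficients (invented data standing in for measured output).
true_phi = (0.6, 0.3)
x = np.zeros(2000)
for t in range(2, len(x)):
    x[t] = true_phi[0] * x[t - 1] + true_phi[1] * x[t - 2] + rng.standard_normal()

# Least-squares AR(2) fit: regress x[t] on its two lagged values.
X = np.column_stack([x[1:-1], x[:-2]])
y = x[2:]
phi_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# One-step-ahead forecast from the last two observations.
forecast = phi_hat[0] * x[-1] + phi_hat[1] * x[-2]
```

With 2000 observations the estimated coefficients recover the generating values closely, which is the basic premise behind the ARMA forecasting the abstract describes.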
APA, Harvard, Vancouver, ISO, and other styles
9

Makarov, Daniil. "Business Model Innovations." Master's thesis, Vysoká škola ekonomická v Praze, 2012. http://www.nusl.cz/ntk/nusl-162595.

Full text
Abstract:
The thesis covers the phenomenon of business model innovation. It provides the theoretical background of the concept, based on the works of several scientists who stand at the origins of the discipline. The paper also introduces the principles of design thinking applied to business model innovation, in order to obtain superior results and to serve as a guideline for ideation processes and for presenting enhancements to existing business models. The practical part is devoted to applying the described concepts to real-life examples, which can especially help small companies in their battle with incumbents. Three industries are analyzed to identify the flaws in their current state. New business models that can disrupt the corresponding industries are offered at the end of each case.
APA, Harvard, Vancouver, ISO, and other styles
10

Martin, Jeffrey Harold. "Evaluating models for Bible teaching at a residential summer camp an expository model, a reenactment model, and an experiential model /." Online full text .pdf document, available to Fuller patrons only, 2003. http://www.tren.com.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Nilsson, Anna, and Michelle Hansen. "Singaporemodellen – en modell för undervisning i matematik [The Singapore model – a model for teaching mathematics]." Thesis, Malmö universitet, Malmö högskola, Institutionen för materialvetenskap och tillämpad matematik (MTM), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-42040.

Full text
Abstract:
The aim of this literature review is to examine what the Singapore model is and what characterises it. To produce the review, we chose to study Singapore's history of mathematics education and the characteristic features of the model. Since Sweden lacks relevant research on the Singapore model, it was pertinent for us to look at what other countries have done and, from that, to form a picture of what significance the model could have in Swedish schools. Our digital background search in the ERIC (via EBSCO) and Google Scholar search engines yielded a range of scientific articles, which we selected for comparison and analysis. The results of these articles gave an understanding of how Singapore's mathematics education system is structured and how it constantly maintains a high standard of education. Research findings from other countries' implementations of Singapore mathematics have shown successful effects. The reviewed articles indicate that the Singapore model and its particular structure contribute to increased mathematical knowledge for pupils, and that it would be advantageous for teachers in Sweden to adopt it.
APA, Harvard, Vancouver, ISO, and other styles
12

Yoshimura, Arihiro. "Essays on Semiparametric Model Selection and Model Averaging." Kyoto University, 2015. http://hdl.handle.net/2433/199059.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Liu, Yi. "On Model Reduction of Distributed Parameter Models." Licentiate thesis, KTH, Signals, Sensors and Systems, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-1541.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Giese, Holger, and Stephan Hildebrandt. "Efficient model synchronization of large-scale models." Universität Potsdam, 2009. http://opus.kobv.de/ubp/volltexte/2009/2928/.

Full text
Abstract:
Model-driven software development requires techniques to consistently propagate modifications between different related models in order to realize its full potential. For large-scale models, efficiency is essential in this respect. In this paper, we present an improved model synchronization algorithm based on triple graph grammars that is highly efficient and can therefore synchronize even large-scale models sufficiently fast. We show that the overall algorithm has optimal complexity if the rule matching dominates it, and we present extensive measurements that demonstrate the efficiency of the presented model transformation and synchronization technique.
APA, Harvard, Vancouver, ISO, and other styles
15

Alharthi, Muteb. "Bayesian model assessment for stochastic epidemic models." Thesis, University of Nottingham, 2016. http://eprints.nottingham.ac.uk/33182/.

Full text
Abstract:
A crucial practical advantage of infectious disease modelling as a public health tool lies in its application to evaluating various disease-control policies. However, such evaluation is of limited use unless a sufficiently accurate epidemic model is applied. If the model provides an adequate fit, it is possible to interpret parameter estimates, compare disease epidemics and implement control procedures. Methods to assess and compare stochastic epidemic models in a Bayesian framework are not well established, particularly in epidemic settings with missing data. In this thesis, we develop novel methods for both model adequacy and model choice for stochastic epidemic models. We work with continuous-time epidemic models and assume that only case detection times of infected individuals are available, corresponding to removal times. Throughout, we illustrate our methods using both simulated outbreak data and real disease data. Data-augmented Markov chain Monte Carlo (MCMC) algorithms are employed to make inference for unobserved infection times and model parameters. Under a Bayesian framework, we first conduct a systematic investigation of three different but natural methods of model adequacy for SIR (Susceptible-Infective-Removed) epidemic models. We proceed to develop a new two-stage method for assessing the adequacy of epidemic models. In this two-stage method, two predictive distributions are examined, namely the predictive distribution of the final size of the epidemic and the predictive distribution of the removal times. The idea is based on looking explicitly at the discrepancy between the observed and predicted removal times using the posterior predictive model checking approach, in which the notions of Bayesian residuals and the posterior predictive p-value are utilized. This approach differs, most importantly, from classical likelihood-based approaches by taking into account uncertainty in both model stochasticity and model parameters.
The two-stage method explores how SIR models with different infection mechanisms, infectious periods and population structures can be assessed and distinguished given only a set of removal times. In the last part of this thesis, we consider Bayesian model choice methods for epidemic models. We derive explicit forms for Bayes factors in two different epidemic settings, given complete epidemic data. Additionally, in the setting where the available data are partially observed, we extend the existing power posterior method for estimating Bayes factors to models incorporating missing data and successfully apply our missing-data extension of the power posterior method to various epidemic settings. We further consider the performance of the deviance information criterion (DIC) method to select between epidemic models.
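The posterior predictive check on the final epidemic size can be sketched with a toy chain-binomial (Reed-Frost) SIR model; the observed outbreak size and the stand-in "posterior" draws below are invented, whereas the thesis obtains parameter draws from data-augmented MCMC on removal times.

```python
import numpy as np

rng = np.random.default_rng(1)

def final_size(n, i0, p):
    # Reed-Frost chain-binomial SIR: each susceptible escapes
    # infection by each current infective independently, so the
    # per-generation infection probability is 1 - (1 - p)**i.
    s, i, removed = n - i0, i0, 0
    while i > 0:
        new_i = rng.binomial(s, 1 - (1 - p) ** i)
        s, removed, i = s - new_i, removed + i, new_i
    return removed  # total number ever infected

observed_final_size = 27                   # invented outbreak data
posterior_p = rng.beta(2, 38, size=2000)   # stand-in for MCMC draws of p
replicates = np.array([final_size(100, 1, p) for p in posterior_p])

# Posterior predictive p-value: how extreme is the observed final
# size relative to outbreaks replicated under the posterior?
ppp = np.mean(replicates >= observed_final_size)
```

Values of the posterior predictive p-value near 0 or 1 flag a discrepancy between the fitted model and the observed final size, which is the first stage of the two-stage assessment described above.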
APA, Harvard, Vancouver, ISO, and other styles
16

Billah, Baki 1965. "Model selection for time series forecasting models." Monash University, Dept. of Econometrics and Business Statistics, 2001. http://arrow.monash.edu.au/hdl/1959.1/8840.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Cloth, Lucia. "Model checking algorithms for Markov reward models." Enschede : University of Twente [Host], 2006. http://doc.utwente.nl/55445.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Kadhem, Safaa K. "Model fit diagnostics for hidden Markov models." Thesis, University of Plymouth, 2017. http://hdl.handle.net/10026.1/9966.

Full text
Abstract:
Hidden Markov models (HMMs) are an efficient tool to describe and model the underlying behaviour of many phenomena. HMMs assume that the observed data are generated independently from a parametric distribution, conditional on an unobserved process that satisfies the Markov property. Model selection, that is, determining the number of hidden states for these models, is an important issue and represents the main interest of this thesis. Applying likelihood-based criteria for HMMs is a challenging task, as the likelihood function of these models is not available in closed form. Using the data augmentation approach, we derive two closed forms of the likelihood function of a HMM, namely the observed and the conditional likelihoods. Subsequently, we develop several modified versions of the Akaike information criterion (AIC) and Bayesian information criterion (BIC) approximated under the Bayesian principle. We also develop several versions of the deviance information criterion (DIC). These proposed versions are based on the type of likelihood, i.e. conditional or observed likelihood, and also on whether the hidden states are dealt with as missing data or additional parameters in the model. This latter point is referred to as the concept of focus. Finally, we consider model selection from a predictive viewpoint. To this end, we develop the so-called widely applicable information criterion (WAIC). We assess the performance of these various proposed criteria via simulation studies and real-data applications. In this thesis, we apply Poisson HMMs to model spatial dependence in count data via an application to traffic safety crashes on three highways in the UK. The ultimate interest is in identifying highway segments which have distinctly higher crash rates. Selecting an optimal number of states is an important part of the interpretation. For this purpose, we employ model selection criteria to determine the optimal number of states.
We also use several goodness-of-fit checks to assess the model fitted to the data. We implement an MCMC algorithm and check its convergence. We examine the sensitivity of the results to the prior specification, a potential problem given small sample sizes. The Poisson HMMs adopted can provide a different model for analysing spatial dependence on networks. It is possible to identify segments with a higher posterior probability of classification in a high risk state, a task that could prioritise management action.
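The likelihood-based criteria discussed here can be illustrated by computing a Poisson HMM log-likelihood with the scaled forward algorithm and comparing BIC against a one-state (plain Poisson) model. The parameters below are fixed rather than estimated, purely for illustration; the thesis estimates them and works with several modified criteria.

```python
import numpy as np
from scipy.stats import poisson

def hmm_loglik(y, trans, rates, init):
    # Scaled forward algorithm: log-likelihood of counts y under a
    # Poisson HMM with transition matrix `trans` and state `rates`.
    alpha = init * poisson.pmf(y[0], rates)
    ll = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for obs in y[1:]:
        alpha = (alpha @ trans) * poisson.pmf(obs, rates)
        ll += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return ll

rng = np.random.default_rng(7)
# Synthetic counts from a 2-state Poisson HMM (invented parameters).
trans = np.array([[0.9, 0.1], [0.2, 0.8]])
rates = np.array([2.0, 10.0])
states = [0]
for _ in range(499):
    states.append(rng.choice(2, p=trans[states[-1]]))
y = rng.poisson(rates[np.array(states)])

# BIC = k*log(n) - 2*loglik; compare a 1-state model (plain Poisson,
# MLE rate = sample mean) against the 2-state HMM.
n = len(y)
ll1 = poisson.logpmf(y, y.mean()).sum()
ll2 = hmm_loglik(y, trans, rates, np.array([0.5, 0.5]))
bic1 = 1 * np.log(n) - 2 * ll1
bic2 = 5 * np.log(n) - 2 * ll2   # 5 free parameters in the 2-state HMM
```

For this well-separated two-state data, the HMM's BIC is much lower than the single-Poisson BIC, so the criterion correctly favours two hidden states.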
APA, Harvard, Vancouver, ISO, and other styles
19

Li, Lingzhu. "Model checking for general parametric regression models." HKBU Institutional Repository, 2019. https://repository.hkbu.edu.hk/etd_oa/654.

Full text
Abstract:
Model checking for regressions has drawn considerable attention in the last three decades. Compared with global smoothing tests, local smoothing tests, which are more sensitive to high-frequency alternatives, can only detect local alternatives distinct from the null model at a much slower rate when the dimension of the predictor is high. When the number of covariates is large, the nonparametric estimations used in local smoothing tests lack efficiency, and the corresponding tests then have trouble maintaining the significance level and detecting the alternatives. To tackle this issue, we propose two methods under a high but fixed dimension framework. Further, we investigate a model checking test under divergent dimension, where the numbers of covariates and unknown parameters diverge with the sample size n. The first proposed test is constructed upon a typical kernel-based local smoothing test using the projection method. By employing projection and integration, the resulting test statistic has a closed form that depends only on the residuals and distances of the sample points. A merit of the developed test is that the distance is easy to implement compared with kernel estimation, especially when the dimension is high. Moreover, the test inherits some features of local smoothing tests owing to its construction. Although it is ultimately similar in spirit to an Integrated Conditional Moment test, it leads to a test with a weight function that helps to collect more information from the samples than the Integrated Conditional Moment test does. Simulations and real data analysis demonstrate the power of the test. The second test, which is a synthesis of local and global smoothing tests, aims at solving the slow convergence rate caused by nonparametric estimation in local smoothing tests. A significant feature of this approach is that it allows nonparametric estimation-based tests, under the alternatives, to also share the merits of existing empirical process-based tests.
The proposed hybrid test can detect local alternatives at the fastest possible rate, like the empirical process-based tests, and simultaneously retains the sensitivity to high-frequency alternatives of the nonparametric estimation-based ones. This feature is achieved by utilizing an indicative dimension in the field of dimension reduction. As a by-product, we give a systematic study of a residual-related central subspace for model adaptation, showing when alternative models can be indicated and when they cannot. Numerical studies are conducted to verify its application. Since data volumes are increasing, the numbers of predictors and unknown parameters may diverge as the sample size n goes to infinity. Model checking under divergent dimension, however, is almost uncharted in the literature. In this thesis, an adaptive-to-model test is proposed to handle the divergent dimension, based on the two previously introduced tests. Theoretical results show that, to obtain the asymptotic normality of the parameter estimator, the number of unknown parameters should be of order o(n^(1/3)). Also, as a spinoff, we demonstrate the asymptotic properties of the estimators of the residual-related central subspace and the central mean subspace under different hypotheses.
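An Integrated Conditional Moment-type statistic of the kind discussed in this abstract can be sketched as a kernel-weighted double sum of residual products, with a simplified wild-bootstrap calibration; the data, Gaussian weight function, and calibration below are illustrative assumptions, not the thesis's proposed tests.

```python
import numpy as np

rng = np.random.default_rng(3)

def icm_stat(x, e):
    # ICM-type statistic: kernel-weighted double sum of residual
    # products, with Gaussian weight exp(-(xi - xj)^2 / 2).
    d = x[:, None] - x[None, :]
    return (np.outer(e, e) * np.exp(-0.5 * d**2)).sum() / len(x)

n = 200
x = rng.uniform(-2, 2, n)
y = 1.0 + 2.0 * x + rng.standard_normal(n)   # null: the linear model holds
X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
e = y - X @ beta

# Simplified wild-bootstrap calibration: sign-flip the residuals
# with Rademacher weights (a full implementation would refit).
t_obs = icm_stat(x, e)
boot = [icm_stat(x, e * rng.choice([-1.0, 1.0], n)) for _ in range(500)]
p_value = np.mean(np.array(boot) >= t_obs)
```

Because the weight matrix is positive semi-definite, the statistic is non-negative, and large values relative to the bootstrap replicates indicate departure from the null model.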
APA, Harvard, Vancouver, ISO, and other styles
20

Lattimer, Alan Martin. "Model Reduction of Nonlinear Fire Dynamics Models." Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/70870.

Full text
Abstract:
Due to the complexity, multi-scale, and multi-physics nature of the mathematical models for fires, current numerical models require too much computational effort to be useful in design and real-time decision making, especially when dealing with fires over large domains. To reduce the computational time while retaining the complexity of the domain and physics, our research has focused on several reduced-order modeling techniques. Our contributions are improving wildland fire reduced-order models (ROMs), creating new ROM techniques for nonlinear systems, and preserving optimality when discretizing a continuous-time ROM. Currently, proper orthogonal decomposition (POD) is being used to reduce wildland fire-spread models with limited success. We use a technique known as the discrete empirical interpolation method (DEIM) to address the slowness due to the nonlinearity. We create new methods to reduce nonlinear models, such as the Burgers' equation, that perform better than POD over a wider range of input conditions. Further, these ROMs can often be constructed without needing to capture full-order solutions a priori. This significantly reduces the off-line costs associated with creating the ROM. Finally, we investigate methods of time-discretization that preserve the optimality conditions in a certain norm associated with the input to output mapping of a dynamical system. In particular, we are able to show that the Crank-Nicolson method preserves the optimality conditions, but other single-step methods do not. We further clarify the need for these discrete-time ROMs to match at infinity in order to ensure local optimality.
Ph. D.
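The proper orthogonal decomposition (POD) mentioned in the abstract reduces, in its simplest form, to an SVD of a snapshot matrix; the snapshots below are an invented rank-two field standing in for fire-model states, so this is a sketch of the generic technique rather than the thesis's wildland-fire ROMs.

```python
import numpy as np

# Snapshot matrix: each column is the full-order state at one time
# (invented data: two decaying sine modes).
x = np.linspace(0, 1, 200)
t = np.linspace(0, 1, 50)
snapshots = np.column_stack([
    np.exp(-ti) * np.sin(np.pi * x)
    + 0.1 * np.exp(-4 * ti) * np.sin(3 * np.pi * x)
    for ti in t
])

# POD: left singular vectors of the snapshot matrix, truncated once
# the cumulative singular-value energy passes a tolerance.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.9999)) + 1
basis = U[:, :r]

# Reduced representation: project snapshots onto the POD basis.
coeffs = basis.T @ snapshots
reconstruction = basis @ coeffs
```

Since the toy field is exactly rank two, the energy criterion selects r = 2 and the reconstruction is exact; for real fire dynamics the truncated modes carry the approximation error, and DEIM-style methods then keep the nonlinear terms cheap in the reduced coordinates.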
APA, Harvard, Vancouver, ISO, and other styles
21

Volinsky, Christopher T. "Bayesian model averaging for censored survival models /." Thesis, Connect to this title online; UW restricted, 1997. http://hdl.handle.net/1773/8944.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Kramdi, Seifeddine. "A modal approach to model computational trust." Thesis, Toulouse 3, 2015. http://www.theses.fr/2015TOU30146/document.

Full text
Abstract:
The concept of trust is a socio-cognitive concept that plays an important role in representing interactions within concurrent systems. When the complexity of a computational system and its unpredictability makes standard security solutions (commonly called hard security solutions) inapplicable, computational trust is one of the most useful concepts to design protocols of interaction. In this work, our main objective is to present a prospective survey of the field of study of computational trust. We will also present two trust models, based on logical formalisms, and show how they can be studied and used. While trying to stay general in our study, we use service-oriented architecture paradigm as a context of study when examples are needed. Our work is subdivided into three chapters. The first chapter presents a general view of the computational trust studies. Our approach is to present trust studies in three main steps. Introducing trust theories as first attempts to grasp notions linked to the concept of trust, fields of application, that explicit the uses that are traditionally associated to computational trust, and finally trust models, as an instantiation of a trust theory, w.r.t. some formal framework. Our survey ends with a set of issues that we deem important to deal with in priority in order to help the advancement of the field. The next two chapters present two models of trust. Our first model is an instantiation of Castelfranchi & Falcone's socio-cognitive trust theory. Our model is implemented using a Dynamic Epistemic Logic that we propose. The main originality of our solution is the fact that our trust definition extends the original model to complex action (programs, composed services, etc.) and the use of authored assignment as a special kind of atomic actions. The use of our model is then illustrated in a case study related to service-oriented architecture. 
Our second model extends our socio-cognitive definition to an abductive framework that allows us to associate trust to explanations. Our framework is an adaptation of Bochman's production relations to the epistemic case. Since Bochman approach was initially proposed to study causality, our definition of trust in this second model presents trust as a special case of causal reasoning, applied to a social context. We end our manuscript with a conclusion that presents how we would like to extend our work
APA, Harvard, Vancouver, ISO, and other styles
23

Kozak, Tugrul Mustafa. "Investigation Of Model Updating Techniques And Their Applications To Aircraft Structures." Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/2/12607558/index.pdf.

Full text
Abstract:
Mathematical models that are built to simulate the behavior of structures most often respond differently than the actual structures in their initial state. In order to use the mathematical models and their computational outputs instead of testing the real structure under every possible case, it is mandatory to have a mathematical model that reflects the characteristics of the actual structure as closely as possible. In this thesis, the so-called model updating techniques used to make mathematical models respond the way the actual structures do are investigated. Case studies using computationally generated test data are performed with the direct and indirect modal updating techniques, using software developed for each method investigated. After investigating the direct and indirect modal updating techniques, two of them, one using frequency response functions and the other using modal sensitivities, are determined to be the most suitable for aircraft structures. Generic software is developed for the technique using modal sensitivities. A modal test is carried out on a scaled aircraft model. The test data is used to update the finite element model of the scaled aircraft using the modal sensitivities, and the usability of the method is thus evaluated. The finite element model of a real aircraft is also updated from modal test data using the modal sensitivities. A new error localization technique and a model updating routine are also proposed in this thesis. This model updating routine is applied in several case studies using computationally generated test data, and it is concluded that it is capable of updating the mathematical models even with incomplete measured data.
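The sensitivity-based updating approach the thesis favors can be illustrated in miniature. The sketch below is an assumption-laden toy, not the thesis's software: a 2-DOF spring-mass system stands in for an aircraft finite element model, and the names (`eigfreqs`, `k_true`) are invented. A stiffness parameter is corrected iteratively from finite-difference modal sensitivities until the model's natural frequencies match the "measured" ones:

```python
import numpy as np

# Natural frequencies of a 2-DOF spring-mass chain with stiffness k
# (unit masses). Eigenvalues of K are omega^2, so f = sqrt(lambda).
def eigfreqs(k):
    K = np.array([[2.0 * k, -k], [-k, k]])
    lam = np.linalg.eigvalsh(K)
    return np.sqrt(lam)

k_true, k = 4.0, 2.0          # "measured" structure vs. initial model
f_meas = eigfreqs(k_true)     # stands in for modal test data

for _ in range(20):
    f = eigfreqs(k)
    # Finite-difference modal sensitivity df/dk.
    S = (eigfreqs(k + 1e-6) - f) / 1e-6
    # Least-squares parameter correction from the frequency residual.
    dk = np.linalg.lstsq(S.reshape(-1, 1), f_meas - f, rcond=None)[0][0]
    k += dk
```

After a few iterations `k` converges to the "measured" stiffness, which is the essence of sensitivity-based updating; real applications update many parameters against many measured modes at once.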
APA, Harvard, Vancouver, ISO, and other styles
24

Dočkal, Michal. "Korekce barev na základě znalosti scény a osvětlení při 3D skenování." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2019. http://www.nusl.cz/ntk/nusl-400670.

Full text
Abstract:
This thesis focuses on the design and verification of improvements to the texture module of the RoScan robotic system. RoScan is a robotic scanner for scanning and diagnosing the human body. The current texturing module of this system is described. Furthermore, the thesis describes the theory of light, color, and shading in computer graphics. Subsequently, improvements to the RoScan texture module are proposed based on these principles. The last part deals with the implementation of a test script in Matlab and verification of the functionality of the proposed solution.
APA, Harvard, Vancouver, ISO, and other styles
25

Mölders, Nicole. "Concepts for coupling hydrological and meteorological models." Universitätsbibliothek Leipzig, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-215597.

Full text
Abstract:
Earth system modeling, climate modeling, water resource research as well as integrated modeling (e.g., climate impact studies) require the coupling of hydrological and meteorological models. The paper presents recent concepts for such a coupling. It points out the difficulties to be solved and provides a brief overview of recently realized couplings. Furthermore, a concept of a hydrometeorological module for coupling hydrological and meteorological models is introduced
Water resource research, Earth system and climate modeling, as well as integrated modeling (e.g., climate impact research) require the coupling of hydrological and meteorological models. This article presents concepts for such a coupling. It points out the difficulties to be solved and gives a brief overview of couplings realized so far. Furthermore, it presents a concept for a hydrometeorological module for coupling hydrological with meteorological models
APA, Harvard, Vancouver, ISO, and other styles
26

Pechena, Iryna. "Partilha de rendimento e decisões intra-familiares em Portugal : o caso das decisões de pedidos de empréstimo." Master's thesis, Instituto Superior de Economia e Gestão, 2013. http://hdl.handle.net/10400.5/11359.

Full text
Abstract:
Master's in Finance
Portugal's economic and financial situation and the growth of household indebtedness motivate a study of how families manage their finances and make financial decisions. This research focuses on empirical evidence of income pooling among Portuguese couples and on how they decide to take out a loan (jointly or autonomously). The database used is the recently released Module on Intra-Household Sharing of Resources of the Survey on Income and Living Conditions (ICOR), carried out for the first time by Statistics Portugal (INE) and Eurostat. The sample comprises 1,440 couples, and the unit of observation is the individual (N = 2,880) in a representative sample of residents in Portugal aged over 16 and living as a couple, with or without children. Three probit models were built: i) income pooling; ii) sharing of the borrowing decision; and iii) autonomy of the borrowing decision. The results indicate no empirical evidence of income pooling and that the unitary model does not adequately represent the behavior of Portuguese couples. There is evidence that the financial decision in question is made without a negotiation process, and that this process does not fit the assumptions of the collective model but is instead of the egalitarian-partnership type, without gender-specific areas of responsibility being defined for each partner.
The economic and financial situation of Portugal and the increase of excessive debt owed by families lead to the development of studies about families' financial management and about the financial decision-making process. This investigation focuses on income pooling by Portuguese household members and on the borrowing decision-making process, i.e., whether the decision is shared or autonomous. The database used is the recently provided Module on Intra-Household Sharing of Resources of the European Union Statistics on Income and Living Conditions (EU-SILC), made available for the first time by Statistics Portugal (INE) and Eurostat. The final selected sample comprises 1,440 couples; the unit of observation is the individual (N = 2,880) resident in Portugal, over 16 years old. Three probit models were built: i) income pooling; ii) the sharing of the borrowing decision; and iii) the autonomy of the borrowing decision. The results show that there is no evidence of income pooling and that the unitary model is not adequate for representing Portuguese couples' behavior. The results also suggest that the financial decision (borrowing) is made without a previous negotiation process. This process does not fit the assumptions of the collective model, but is more likely an egalitarian partnership model, in which areas of responsibility are not assigned to each partner according to gender.
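A probit specification like the three models described above can be sketched as follows. This is an illustrative toy on synthetic data, not the thesis's ICOR/EU-SILC sample, and all names and values are invented: the probability of a binary outcome is P(y = 1 | x) = Φ(β₀ + β₁x), fitted by maximum likelihood.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Synthetic latent-variable data: y = 1 iff b0 + b1*x + e > 0, e ~ N(0, 1),
# which gives P(y = 1 | x) = Phi(b0 + b1 * x).
rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=n)
b0_true, b1_true = 0.5, 1.0
y = (rng.normal(size=n) < b0_true + b1_true * x).astype(float)

def neg_loglik(b):
    p = norm.cdf(b[0] + b[1] * x)
    p = np.clip(p, 1e-10, 1 - 1e-10)  # guard against log(0)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# Maximum likelihood estimate of the probit coefficients.
b_hat = minimize(neg_loglik, x0=np.zeros(2)).x
```

With a reasonably large sample the estimates land close to the data-generating coefficients; the thesis fits such models to survey responses rather than simulated draws.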
APA, Harvard, Vancouver, ISO, and other styles
27

Bäckström, Fredrik, and Anders Ivarsson. "Meta-Model Guided Error Correction for UML Models." Thesis, Linköping University, Department of Computer and Information Science, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-8746.

Full text
Abstract:

Modeling is a complex process which is quite hard to do in a structured and controlled way. Many companies provide a set of guidelines for model structure, naming conventions and other modeling rules. Using meta-models to describe these guidelines makes it possible to check whether a UML model follows the guidelines or not. Providing this error checking of UML models is only one step on the way to making modeling software an even more valuable and powerful tool.

Moreover, by providing correction suggestions and automatic correction of these errors, we try to give the modeler as much help as possible in creating correct UML models. Since the area of model correction based on meta-models has not been researched earlier, we have taken an explorative approach.

The aim of the project is to create an extension of the program MetaModelAgent, by Objektfabriken, which is a meta-modeling plug-in for IBM Rational Software Architect. The thesis shows that error correction of UML models based on meta-models is a possible way to provide automatic checking of modeling guidelines. The developed prototype is able to give correction suggestions and automatic correction for many types of errors that can occur in a model.

The results imply that meta-model guided error correction techniques should be further researched and developed to enhance the functionality of existing modeling software.

APA, Harvard, Vancouver, ISO, and other styles
28

Belitz, Christiane. "Model Selection in Generalised Structured Additive Regression Models." Diss., lmu, 2007. http://nbn-resolving.de/urn:nbn:de:bvb:19-78896.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Ribbing, Jakob. "Covariate Model Building in Nonlinear Mixed Effects Models." Doctoral thesis, Uppsala : Acta Universitatis Upsaliensis, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-7923.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Sommer, Julia. "Regularized estimation and model selection in compartment models." Diss., Ludwig-Maximilians-Universität München, 2013. http://nbn-resolving.de/urn:nbn:de:bvb:19-157673.

Full text
Abstract:
Dynamic imaging series acquired in medical and biological research are often analyzed with the help of compartment models. Compartment models provide a parametric, nonlinear function of interpretable, kinetic parameters describing how some concentration of interest evolves over time. Aiming to estimate the kinetic parameters, this leads to a nonlinear regression problem. In many applications, the number of compartments needed in the model is not known from biological considerations but should be inferred from the data along with the kinetic parameters. As data from medical and biological experiments are often available in the form of images, the spatial data structure of the images has to be taken into account. This thesis addresses the problem of parameter estimation and model selection in compartment models. Besides a penalized maximum likelihood based approach, several Bayesian approaches (including a hierarchical model with Gaussian Markov random field priors and a model state approach with flexible model dimension) are proposed and evaluated to accomplish this task. Existing methods are extended for parameter estimation and model selection in more complex compartment models. However, in nonlinear regression and, in particular, for more complex compartment models, redundancy issues may arise. This thesis analyzes difficulties arising due to redundancy issues and proposes several approaches to alleviate those redundancy issues by regularizing the parameter space. The potential of the proposed estimation and model selection approaches is evaluated in simulation studies as well as for two in vivo imaging applications: a dynamic contrast enhanced magnetic resonance imaging (DCE-MRI) study on breast cancer and a study on the binding behavior of molecules in living cell nuclei observed in a fluorescence recovery after photobleaching (FRAP) experiment.
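The nonlinear regression problem described above can be made concrete with a minimal sketch. The model and values here are hypothetical (a one-compartment uptake curve with invented parameters, not the thesis's DCE-MRI or FRAP data):

```python
import numpy as np
from scipy.optimize import curve_fit

# One-compartment model: concentration C(t) = A * (1 - exp(-k * t)),
# with interpretable kinetic parameters A (amplitude) and k (rate).
def one_compartment(t, A, k):
    return A * (1.0 - np.exp(-k * t))

# Simulated noisy concentration-time series.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 50)
true_A, true_k = 2.0, 0.8
y = one_compartment(t, true_A, true_k) + rng.normal(0.0, 0.02, t.size)

# Nonlinear least squares recovers the kinetic parameters.
(A_hat, k_hat), _ = curve_fit(one_compartment, t, y, p0=(1.0, 1.0))
```

Fitting one such curve per image voxel, and deciding how many exponential terms the data support, is where the model selection and redundancy issues discussed in the abstract arise.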
APA, Harvard, Vancouver, ISO, and other styles
31

Smith, Peter William Frederick. "Edge exclusion and model selection in graphical models." Thesis, Lancaster University, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.315138.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Tummel, Kurt (Kurt K. ). "Investigation of model micro-scale reconnecting plasma modes." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/44822.

Full text
Abstract:
Thesis (S.B.)--Massachusetts Institute of Technology, Dept. of Physics, 2008.
Includes bibliographical references (leaf 27).
We numerically and analytically investigate the linear complex mode frequencies of model micro-reconnecting plasma modes which have transverse wavelengths of the order of the electron skin depth c/ω_pe. This model mode, which can have finite wavelength parallel to the magnetic field, is found in the limit of a straight and uniform magnetic field in the presence of temperature gradients. The theory of the related micro-reconnecting modes has been previously developed in view of explaining the observation of macroscopic instabilities which are not predicted by the drift tearing mode theory [2]. These micro-reconnecting modes are radially localized by magnetic shear and lead to the formation of microscopic magnetic islands. We derive the model dispersion equation, which closely follows the derivation of the micro-reconnecting mode dispersion equation [1], under relevant conditions using the drift kinetic approximation. We also consider the dispersion relation in the fluid limit [1]. We examine the solutions of the resulting dispersion relations and confirm the driving effect of the electron temperature gradient, and the stabilizing effect of a density gradient.
by Kurt Tummel.
S.B.
APA, Harvard, Vancouver, ISO, and other styles
33

Hain, Thomas. "Hidden model sequence models for automatic speech recognition." Thesis, University of Cambridge, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.620302.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Magalla, Champa Hemanthi. "Model adequacy tests for exponential family regression models." Diss., Kansas State University, 2012. http://hdl.handle.net/2097/13640.

Full text
Abstract:
Doctor of Philosophy
Department of Statistics
James Neill
The problem of testing for lack of fit in exponential family regression models is considered. Such nonlinear models are the natural extension of Normal nonlinear regression models and generalized linear models. As is usually the case, inadequately specified models have an adverse impact on statistical inference and scientific discovery. Models of interest are curved exponential families determined by a sequence of predictor settings and mean regression function, considered as a sub-manifold of the full exponential family. Constructed general alternative models are based on clusterings in the mean parameter components and allow likelihood ratio testing for lack of fit associated with the mean, equivalently natural parameter, for a proposed null model. A maximin clustering methodology is defined in this context to determine suitable clusterings for assessing lack of fit. In addition, a geometrically motivated goodness of fit test statistic for exponential family regression based on the information metric is introduced. This statistic is applied to the cases of logistic regression and Poisson regression, and in both cases it can be seen to be equal to a form of the Pearson χ² statistic. This same statement is true for multinomial regression. In addition, the problem of testing for equal means in a heteroscedastic Normal model is discussed. In particular, a saturated 3 parameter exponential family model is developed which allows for equal means testing with unequal variances. A simulation study was carried out for the logistic and Poisson regression models to investigate comparative performance of the likelihood ratio test, the deviance test and the goodness of fit test based on the information metric. For logistic regression, the Hosmer-Lemeshow test was also included in the simulations.
Notably, the likelihood ratio test had comparable power with that of the Hosmer-Lemeshow test under both m- and n-asymptotics, with superior power for constructed alternatives. A distance function defined between densities and based on the information metric is also given. For logistic models, as the natural parameters go to plus or minus infinity, the densities become more and more deterministic and limits of this distance function are shown to play an important role in the lack of fit analysis. A further simulation study investigated the power of a likelihood ratio test and a geometrically derived test based on the information metric for testing equal means in heteroscedastic Normal models.
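The Pearson chi-squared form the abstract refers to can be computed directly for grouped logistic data. The sketch below uses toy counts and hypothetical fitted probabilities (invented values, not the thesis's simulations): for group i with n_i trials, y_i successes and fitted probability p_i, X² = Σ (y_i − n_i p_i)² / (n_i p_i (1 − p_i)).

```python
import numpy as np

# Pearson chi-squared lack-of-fit statistic for grouped binomial
# (logistic regression) data.
def pearson_chi2(y, n, p):
    y, n, p = map(np.asarray, (y, n, p))
    resid = y - n * p                       # observed minus fitted counts
    return float(np.sum(resid**2 / (n * p * (1.0 - p))))

# Toy grouped data with hypothetical fitted probabilities.
y = np.array([3, 7, 12, 18])
n = np.array([20, 20, 20, 20])
p = np.array([0.2, 0.3, 0.55, 0.85])
x2 = pearson_chi2(y, n, p)
```

A large value of `x2` relative to a chi-squared reference distribution signals lack of fit; the thesis compares this kind of statistic with likelihood ratio, deviance and Hosmer-Lemeshow tests.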
APA, Harvard, Vancouver, ISO, and other styles
35

Vlastník, Jan. "Výpočtový model řetězového pohonu jako modul virtuálního motoru." Doctoral thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2010. http://www.nusl.cz/ntk/nusl-233885.

Full text
Abstract:
This work deals with methods of creating computational models for the analysis of the chain drive of camshafts in combustion engines. Methods for simulating the drive mechanism are compared, and a new method is presented for simulating the tensioner and guide bar by means of a modal reduction of an elastic body in the multibody system. The work describes the individual parts of the chain gear and the mathematical formulation of the differential equations of motion. Algorithms describing the mutual interaction of bodies in contact are also given, along with computations for determining the individual parameters needed to set up a chain drive model. The tensile characteristics of the chain are determined by an FEM program. The chain model is analyzed in several alternative arrangements. FEM calculations of the stiffness of contacts between the chain and the chain wheels and between the chain and the guide bars are described. The computational model was created in the MSC ADAMS program. The computation is carried out for stabilized crankshaft speeds of 3,000, 4,500 and 6,000 rpm and for a continuous run-up from idle to 6,000 rpm with the camshafts under a constant torsional load. A computation is also carried out with the camshafts loaded by a torsion moment derived from the cam shape. The time histories obtained are processed by means of FFT, and Campbell diagrams are constructed for their evaluation. The results are compared with modal analyses of the individual parts of the chain gear to determine their mutual interaction.
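The FFT post-processing step mentioned above can be sketched generically. The signal here is a hypothetical two-tone trace, not the thesis's chain-force data: a single-sided amplitude spectrum picks out the dominant excitation frequency that a Campbell diagram would then track across engine speed.

```python
import numpy as np

fs = 1000.0                         # sampling frequency [Hz]
t = np.arange(0, 1.0, 1.0 / fs)
# Hypothetical response: a 50 Hz main order plus a weaker 120 Hz component.
signal = 2.0 * np.sin(2 * np.pi * 50.0 * t) + 0.5 * np.sin(2 * np.pi * 120.0 * t)

# Single-sided amplitude spectrum.
spec = np.abs(np.fft.rfft(signal)) / len(t) * 2
freqs = np.fft.rfftfreq(len(t), d=1.0 / fs)
dominant = freqs[np.argmax(spec)]   # frequency of the strongest component
```

Repeating this at each stabilized speed, and plotting the spectral peaks against speed, yields the Campbell diagram used in the evaluation.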
APA, Harvard, Vancouver, ISO, and other styles
36

Ågren, Thuné Anders, and Åhfeldt Theo Puranen. "Extracting scalable program models for TLA model checking." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280344.

Full text
Abstract:
Program verification has long been of interest to researchers and practitioners for its role in asserting reliability in critical systems. Many such systems feature reactive behavior, where temporal properties are of interest. Consequently, a number of systems and program verification tools for dealing with temporal logic have been developed. One such is TLA, whose main purpose is to verify temporal properties of systems using model checking. A TLA model is determined by a logical formula that describes all possible behaviors of a system. TLA is primarily used to verify abstract system designs, as it is considered ill-suited for implementation code in real programming languages. This thesis investigates how TLA models can be extracted from real code, in order to verify temporal properties of the code. The main problem is getting the model size to scale well with the size of the code, while still being representative. The paper presents a general method for achieving this, which utilizes deductive verification to abstract away unnecessary implementation details from the model. Specifically, blocks which can be considered atomic are identified in the original code and replaced with Hoare-style assertions representing only the data transformation performed in the block. The result can then be translated to a more compact TLA model. The assertions, known as block contracts, are verified separately using deductive verification, ensuring the model remains representative. We successfully instantiate the method on a simple C program, using the tool Frama-C to perform deductive verification on blocks of code and translating the result to a TLA model in several steps. The PlusCal algorithm language is used as an intermediary to simplify the translation, and block contracts are successfully translated to TLA using a simple encoding. The results show promise, but there is future work to be done.
Program verification has long been of interest for assuring the reliability of critical systems. Many such systems exhibit reactive behavior, where temporal properties are of interest. Consequently, a number of systems and program verification tools for handling temporal logic have been developed. One such is TLA, whose main purpose is to verify properties of abstract algorithms using model checking. A TLA model is determined by a logical formula that describes all possible behaviors of a given system. TLA is considered less suitable for real implementation code and is mainly used to verify properties of abstract system models. This thesis investigates how TLA models can be extracted from real code in order to verify the code's temporal properties. The main problem is to create a model of manageable size, even for larger programs, that is still representative. We present a general method for achieving this that uses deductive verification to abstract unnecessary implementation details away from the model. Blocks of code that can be considered atomic in the original code are identified and replaced with block contracts representing the data transformation performed in the block. The result can then be translated into a more compact TLA model. The contracts are verified separately with deductive verification, which ensures that the model remains representative. We successfully instantiate the method on a simple C program. The tool Frama-C is used to perform deductive verification on code blocks, and several steps are carried out to translate the result into a TLA model. The algorithm language PlusCal is used as an intermediate step to simplify the translation, and block contracts are translated into TLA with a simple encoding. The results are promising, but several points require further work.
APA, Harvard, Vancouver, ISO, and other styles
37

Vaidyanathan, Sivaranjani. "Bayesian Models for Computer Model Calibration and Prediction." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1435527468.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Guo, Yixuan. "Bayesian Model Selection for Poisson and Related Models." University of Cincinnati / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1439310177.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Tan, Falong. "Projected adaptive-to-model tests for regression models." HKBU Institutional Repository, 2017. https://repository.hkbu.edu.hk/etd_oa/390.

Full text
Abstract:
This thesis investigates Goodness-of-Fit tests for parametric regression models. With the help of sufficient dimension reduction techniques, we develop adaptive-to-model tests using projection in both the fixed dimension settings and the diverging dimension settings. The first part of the thesis develops a globally smoothing test in the fixed dimension settings for a parametric single index model. When the dimension p of covariates is larger than 1, existing empirical process-based tests either have non-tractable limiting null distributions or are not omnibus. To attack this problem, we propose a projected adaptive-to-model approach. If the null hypothesis is a parametric single index model, our method can fully utilize the dimension reduction structure under the null as if the regressors were one-dimensional. Then a martingale transformation proposed by Stute, Thies, and Zhu (1998) leads our test to be asymptotically distribution-free. Moreover, our test can automatically adapt to the underlying alternative models such that it can be omnibus and thus detect all alternative models departing from the null at the fastest possible convergence rate in hypothesis testing. A comparative simulation is conducted to check the performance of our test. We also apply our test to a self-noise mechanisms data set for illustration. The second part of the thesis proposes a globally smoothing test for parametric single-index models in the diverging dimension settings. In high dimensional data analysis, the dimension p of covariates is often large even though it may be still small compared with the sample size n. Thus we should regard p as a diverging number as n goes to infinity. With this in mind, we develop an adaptive-to-model empirical process as the basis of our test statistic, when the dimension p of covariates diverges to infinity as the sample size n tends to infinity. 
We also show that the martingale transformation proposed by Stute, Thies, and Zhu (1998) still works in the diverging dimension settings. The limiting distributions of the adaptive-to-model empirical process under both the null and the alternative are discussed in this new situation. Simulation examples are conducted to show the performance of this test when p grows with the sample size n. The last chapter of the thesis considers the same problem as the second part. Bierens (1982) first constructed tests based on projection pursuit techniques and obtained an integrated conditional moment (ICM) test. We notice that Bierens's (1982) test performs very badly for large p, although it may be viewed as a globally smoothing test. With the help of sufficient dimension reduction techniques, we propose an adaptive-to-model integrated conditional moment test for regression models in the diverging dimension setting. We also give the asymptotic properties of the new tests under both the null and alternative hypotheses in this new situation. When p grows with the sample size n, simulation studies show that our new tests perform much better than Bierens's (1982) original test.
APA, Harvard, Vancouver, ISO, and other styles
40

Marshall, Lucy Amanda, Civil & Environmental Engineering, Faculty of Engineering, UNSW. "Bayesian analysis of rainfall-runoff models: insights to parameter estimation, model comparison and hierarchical model development." Awarded by: University of New South Wales. Civil and Environmental Engineering, 2006. http://handle.unsw.edu.au/1959.4/32268.

Full text
Abstract:
One challenge that faces hydrologists in water resources planning is to predict the catchment's response to a given rainfall. Estimation of parameter uncertainty (and model uncertainty) allows assessment of the risk in likely applications of hydrological models. Bayesian statistical inference, with computations carried out via Markov Chain Monte Carlo (MCMC) methods, offers an attractive approach to model specification, allowing for the combination of any pre-existing knowledge about individual models and their respective parameters with the available catchment data to assess both parameter and model uncertainty. This thesis develops and applies Bayesian statistical tools for parameter estimation, comparison of model performance and hierarchical model aggregation. The work presented has three main sections. The first area of research compares four MCMC algorithms for simplicity, ease of use, efficiency and speed of implementation in the context of conceptual rainfall-runoff modelling. Included is an adaptive Metropolis algorithm that has characteristics that are well suited to hydrological applications. The utility of the proposed adaptive algorithm is further expanded by the second area of research in which a probabilistic regime for comparing selected models is developed and applied. The final area of research introduces a methodology for hydrologic model aggregation that is flexible and dynamic. Rigidity in the model structure limits representation of the variability in the flow generation mechanism, which becomes a limitation when the flow processes are not clearly understood. The proposed Hierarchical Mixtures of Experts (HME) model architecture is designed to do away with this limitation by selecting individual models probabilistically based on predefined catchment indicators. In addition, the approach allows a more flexible specification of the model error to better assess the risk of likely outcomes based on the model simulations.
Application of the approach to lumped and distributed rainfall runoff models for a variety of catchments shows that by assessing different catchment predictors the method can be a useful tool for prediction of catchment response.
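The adaptive Metropolis sampler mentioned in the abstract can be sketched in a few lines. The outline below follows a Haario-style scheme (the proposal covariance is tuned from the chain's own history) on a toy bivariate normal posterior; it is an illustration of the general technique, not the thesis's actual rainfall-runoff likelihood.

```python
import numpy as np

def adaptive_metropolis(log_post, theta0, n_iter=5000, adapt_start=500, eps=1e-8):
    """Adaptive Metropolis sketch: after a burn-in phase, the proposal
    covariance is re-estimated from the chain's past samples."""
    d = len(theta0)
    sd = 2.4**2 / d                      # standard scaling factor for dimension d
    chain = np.empty((n_iter, d))
    chain[0] = theta0
    lp = log_post(theta0)
    cov = np.eye(d)
    rng = np.random.default_rng(0)
    for t in range(1, n_iter):
        if t > adapt_start:              # adapt proposal from chain history
            cov = sd * (np.cov(chain[:t].T) + eps * np.eye(d))
        prop = rng.multivariate_normal(chain[t - 1], cov)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
            chain[t], lp = prop, lp_prop
        else:
            chain[t] = chain[t - 1]
    return chain

# Toy target: standard bivariate normal posterior
chain = adaptive_metropolis(lambda th: -0.5 * th @ th, np.zeros(2))
print(chain[2500:].mean(axis=0))   # posterior mean should be near [0, 0]
```

In a hydrological application, `log_post` would combine the rainfall-runoff model's likelihood with the parameter priors.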
APA, Harvard, Vancouver, ISO, and other styles
41

Hull, Lynette. "FRACTION MODELS THAT PROMOTE UNDERSTANDING FOR ELEMENTARY STUDENTS." Master's thesis, University of Central Florida, 2005. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3127.

Full text
Abstract:
This study examined the use of the set, area, and linear models of fraction representation to enhance elementary students' conceptual understanding of fractions. Students' preferences regarding the set, area, and linear models of fractions during independent work were also investigated. This study took place in a 5th grade class consisting of 21 students in a suburban public elementary school. Students participated in classroom activities which required them to use manipulatives to represent fractions using the set, area, and linear models. Students also had experiences using the models to investigate equivalent fractions, compare fractions, and perform operations. Students maintained journals throughout the study, completed a pre- and post-assessment, participated in class discussions, and participated in individual interviews concerning their fraction model preference. Analysis of the data revealed an increase in conceptual understanding. The data concerning student preferences were inconsistent, as students' choices during independent work did not always reflect the preferences indicated in the interviews.
M.A.
Department of Teaching and Learning Principles
Education
K-8 Mathematics and Science Education
APA, Harvard, Vancouver, ISO, and other styles
42

Abreu, Maria Inês Mendes Alves Pereira de. "Modelos de gestão pública e as opções de reforma do XIX governo constitucional: análise do discurso." Master's thesis, Instituto Superior de Ciências Sociais e Políticas, 2016. http://hdl.handle.net/10400.5/11606.

Full text
Abstract:
Master's dissertation in Management and Public Policy.
This research aims to understand and evaluate the reform options of the XIX Constitutional Government, in order to determine how closely they approach any of the public administration governance models considered in the study (the Bureaucratic Model, the New Public Management Model and the Public Governance Model). The starting question was therefore defined as: "Which public administration management model do the reform options of the XIX Constitutional Government approach?" To answer it, three hypotheses were formulated which, through their validation or rejection, are intended to help reach the desired conclusions. These hypotheses are tested through content analysis, with a corpus of documents likely to reveal the government's reform options (the Electoral Program, speeches by the Prime Minister, the Guide for State Reform, and the Memorandum of Understanding).
APA, Harvard, Vancouver, ISO, and other styles
43

Fröhlich, Kristina, Alexander Pogoreltsev, and Christoph Jacobi. "The 48 Layer COMMA-LIM Model." Universitätsbibliothek Leipzig, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-217766.

Full text
Abstract:
COMMA-LIM (Cologne Model of the Middle Atmosphere - Leipzig Institute for Meteorology) is a 3D mechanistic gridpoint model extending from about 0 to 135 km on a logarithmic pressure coordinate z = -H ln(p/p0), where H = 7 km and p0 is the reference pressure at the lower boundary. The vertical resolution of the 24-layer version has been increased to 48 layers, and several improvements have been made to the parameterisation of radiative processes, to the heating/cooling due to atmospheric waves and turbulence, and to the numerical realisation of horizontal diffusion and filtering. The description is divided into a section on the changes in the dynamical part and a section on the modifications to the radiation routines. Finally, the seasonal climatologies are shown and discussed to demonstrate what COMMA-LIM is capable of reproducing.
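The log-pressure coordinate quoted in the abstract is straightforward to evaluate; the snippet below assumes a reference pressure p0 of 1000 hPa, which is an illustrative value rather than the model's documented setting.

```python
import math

H = 7.0        # scale height in km, as stated in the abstract
p0 = 1000.0    # reference pressure at the lower boundary (hPa; assumed value)

def log_pressure_height(p):
    """Log-pressure height z = -H ln(p/p0)."""
    return -H * math.log(p / p0)

print(log_pressure_height(p0))               # 0.0 km at the lower boundary
print(round(log_pressure_height(100.0), 2))  # ~16.12 km near the 100 hPa level
```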
APA, Harvard, Vancouver, ISO, and other styles
44

Partington, Mike. "AUTOMATIC IMAGE TO MODEL ALIGNMENT FOR PHOTO-REALISTIC URBAN MODEL RECONSTRUCTION." UKnowledge, 2001. http://uknowledge.uky.edu/gradschool_theses/218.

Full text
Abstract:
We introduce a hybrid approach in which images of an urban scene are automatically aligned with a base geometry of the scene to determine model-relative external camera parameters. The algorithm takes as input a model of the scene and images with approximate external camera parameters, and aligns the images to the model by extracting the facades from the images and aligning the facades with the model by minimizing a multivariate objective function. The resulting image-pose pairs can be used to render photo-realistic views of the model via texture mapping. Several natural extensions to the base hybrid reconstruction technique are also introduced. These extensions, which include vanishing-point-based calibration refinement and video-stream-based reconstruction, increase the accuracy of the base algorithm, reduce the amount of data that must be provided by the user as input to the algorithm, and provide a mechanism for automatically calibrating a large set of images for post-processing steps such as automatic model enhancement and fly-through model visualization. Traditionally, photo-realistic urban reconstruction has been approached from purely image-based or model-based approaches. Recently, research has been conducted on hybrid approaches, which combine the use of images and models. Such approaches typically require user assistance for camera calibration. Our approach is an improvement over these methods because it does not require user assistance for camera calibration.
APA, Harvard, Vancouver, ISO, and other styles
45

Al, Mallah Amr. "Model-based testing of model transformations." Thesis, McGill University, 2011. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=96856.

Full text
Abstract:
Model Driven Engineering (MDE) research has achieved major progress in the past few years. Though MDE research and adoption are moving forward at an increasing pace, there are still a few major challenges left to be addressed. Model Transformations (MT) represent an essential part of MDE that is gradually reaching maturity. Testing MT has been shown to be a challenging task due to a new set of problems. In this thesis we attempt to complement the work done so far by the research community to address MT testing challenges. We use findings from research in classical testing to create a prospective view of MT testing challenges and opportunities. More specifically, we focus on two challenges: model comparison and automating test execution through a testing framework. First, we introduce a model comparison approach (based on an existing graph comparison algorithm) that is customizable and fine-tuned to perform best in testing situations. The performance of our algorithm is thoroughly investigated against different types of models. Second, we introduce TUnit: a modelled framework for testing model transformations. We demonstrate the benefit of using TUnit in supporting the process of testing transformations in regression testing and enabling semantic equivalence by extending our case study to perform a comparison of coverability graphs of Petri nets.
APA, Harvard, Vancouver, ISO, and other styles
46

Amini, Moghadam Shahram. "Model Uncertainty & Model Averaging Techniques." Diss., Virginia Tech, 2012. http://hdl.handle.net/10919/28398.

Full text
Abstract:
The primary aim of this research is to shed more light on the issue of model uncertainty in applied econometrics in general, and in cross-country growth as well as happiness and well-being regressions in particular. Model uncertainty consists of three main types: theory uncertainty, focusing on which principal determinants of economic growth or happiness should be included in a model; heterogeneity uncertainty, relating to whether or not the parameters that describe growth or happiness are identical across countries; and functional form uncertainty, relating to which growth and well-being regressors enter the model linearly and which ones enter nonlinearly. Model averaging methods, including Bayesian model averaging and Frequentist model averaging, are the main statistical tools that incorporate theory uncertainty into the estimation process. To address functional form uncertainty, a variety of techniques have been proposed in the literature. One suggestion, for example, involves adding regressors that are nonlinear functions of the initial set of theory-based regressors, or adding regressors whose values are zero below some threshold and non-zero above that threshold. In recent years, however, there has been rising interest in using a nonparametric framework to address nonlinearities in growth and happiness regressions. The goal of this research is twofold. First, while Bayesian approaches are the dominant methods used in economic empirics to average over the model space, I take a fresh look at Frequentist model averaging techniques and propose statistical routines that computationally ease the implementation of these methods. I provide empirical examples showing that Frequentist estimators can compete with their Bayesian peers. The second objective is to use recently developed nonparametric techniques to overcome the issue of functional form uncertainty while analyzing the variance of the distribution of per capita income.
The nonparametric paradigm allows nonlinearities in growth and well-being regressions to be addressed by relaxing both the functional form assumptions and the traditional assumptions on the structure of the error terms.
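As an illustration of the Frequentist side of model averaging, the sketch below computes smoothed-AIC weights over a small model space. The AIC values and point predictions are invented for the example, and this weighting scheme is one common choice in the literature, not necessarily the estimator developed in the dissertation.

```python
import numpy as np

def aic_weights(aics):
    """Smoothed-AIC model-averaging weights: w_m is proportional to
    exp(-AIC_m / 2), normalised over the model space."""
    a = np.asarray(aics, dtype=float)
    w = np.exp(-0.5 * (a - a.min()))    # subtract the minimum for numerical stability
    return w / w.sum()

# Three candidate growth regressions with hypothetical AIC values
aics = [210.3, 212.1, 215.9]
w = aic_weights(aics)
preds = np.array([2.1, 2.4, 1.8])       # each model's point prediction
avg = w @ preds                          # model-averaged prediction
print(w.round(3), round(avg, 3))
```

Better-fitting models (lower AIC) receive higher weight, so the averaged prediction leans toward the first candidate here.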
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
47

Makhtar, Mokhairi. "Contributions to Ensembles of Models for Predictive Toxicology Applications. On the Representation, Comparison and Combination of Models in Ensembles." Thesis, University of Bradford, 2012. http://hdl.handle.net/10454/5478.

Full text
Abstract:
The increasing variety of data mining tools offers a large palette of types and representation formats for predictive models. Managing the models then becomes a major challenge, as does reusing the models and keeping model and data repositories consistent, while sustainable access to these models and assessment of their quality remain limited for researchers. The Data and Model Governance (DMG) approach makes it easier to process and support complex solutions. In this thesis, contributions are proposed towards ensembles of models, with a focus on model representation, comparison and usage. Predictive toxicology was chosen as an application field to demonstrate the proposed approach of representing predictive models linked to data for DMG. Further analysis methods, such as predictive model comparison and predictive model combination for reusing the models from a collection, were also studied. Thus, in this thesis an original structure for the pool of models, called Predictive Toxicology Markup Language (PTML), was proposed to represent predictive toxicology models. PTML offers a representation scheme for predictive toxicology data and models generated by data mining tools. In this research, the proposed representation offers possibilities to compare models and to select the relevant models based on different performance measures, using proposed similarity measuring techniques. The relevant models were selected using a proposed cost function that is a composite of performance measures such as Accuracy (Acc), False Negative Rate (FNR) and False Positive Rate (FPR). The cost function ensures that only quality models are selected as candidate models for an ensemble. The proposed algorithm for the optimisation and combination of Acc, FNR and FPR of ensemble models, using the double fault measure as the diversity measure, improves Acc by between 0.01 and 0.30 for all toxicology data sets compared to other ensemble methods such as Bagging, Stacking, Bayes and Boosting.
The highest improvements in Acc were for the data sets Bee (0.30), Oral Quail (0.13) and Daphnia (0.10). A small improvement (of about 0.01) in Acc was achieved for Dietary Quail and Trout. Important results from combining all three performance measures also relate to reducing the distance between FNR and FPR for the Bee, Daphnia, Oral Quail and Trout data sets by about 0.17 to 0.28. For the Dietary Quail data set the improvement was only about 0.01, but this data set is well known as a difficult learning exercise. For the five UCI data sets tested, similar results were achieved, with Acc improvements between 0.10 and 0.11, further closing the gaps between FNR and FPR. In conclusion, the results show that by combining the performance measures (Acc, FNR and FPR) as proposed within this thesis, Acc increased and the distance between FNR and FPR decreased.
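A composite cost over Acc, FNR and FPR can be illustrated with a minimal sketch. The candidate scores and equal weights below are hypothetical, and the linear combination is just one plausible reading of "a composite of Acc, FNR and FPR", not the thesis's actual formula.

```python
def model_cost(acc, fnr, fpr, w_acc=1.0, w_fnr=1.0, w_fpr=1.0):
    """Hypothetical composite cost: reward accuracy, penalise
    false-negative and false-positive rates. Weights are illustrative."""
    return w_acc * (1.0 - acc) + w_fnr * fnr + w_fpr * fpr

# Candidate classifiers as (name, Acc, FNR, FPR)
models = [("m1", 0.82, 0.30, 0.10),
          ("m2", 0.80, 0.15, 0.18),
          ("m3", 0.85, 0.40, 0.05)]
ranked = sorted(models, key=lambda m: model_cost(*m[1:]))
print(ranked[0][0])   # lowest-cost candidate for the ensemble
```

Note how the second model wins despite the lowest raw accuracy, because its FNR and FPR are more balanced; that is the kind of trade-off a composite cost is meant to capture.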
APA, Harvard, Vancouver, ISO, and other styles
48

Průchová, Anna. "Makroekonomická analýza pomocí DSGE modelů." Master's thesis, Vysoká škola ekonomická v Praze, 2012. http://www.nusl.cz/ntk/nusl-124606.

Full text
Abstract:
Dynamic stochastic general equilibrium (DSGE) models are derived from microeconomic principles and retain the hypothesis of rational expectations under policy changes; they are therefore resistant to the Lucas critique. The DSGE model has become associated with New Keynesian thinking, and the basic New Keynesian model is studied in this thesis. The three equations of this model are the dynamic IS curve, the Phillips curve, and a monetary policy rule. Blanchard and Kahn's approach is introduced as the solution strategy for the linearized model. Two methods for evaluating DSGE models are presented: calibration and Bayesian estimation. Calibrated parameters are used to fit the model to the Czech economy, and the results of numerical experiments are compared with empirical data from the Czech Republic. Finally, the DSGE model's suitability for monetary policy analysis is evaluated.
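In log-linearised form, the three equations named in the abstract are commonly written as follows (standard textbook notation; the thesis's exact specification may differ):

```latex
\begin{align}
x_t &= \mathbb{E}_t x_{t+1} - \frac{1}{\sigma}\left(i_t - \mathbb{E}_t \pi_{t+1} - r_t^{n}\right) && \text{(dynamic IS curve)}\\
\pi_t &= \beta\, \mathbb{E}_t \pi_{t+1} + \kappa\, x_t && \text{(New Keynesian Phillips curve)}\\
i_t &= \rho + \phi_{\pi}\, \pi_t + \phi_{x}\, x_t && \text{(monetary policy rule)}
\end{align}
```

Here $x_t$ is the output gap, $\pi_t$ inflation, $i_t$ the nominal interest rate and $r_t^{n}$ the natural rate; the Blanchard-Kahn conditions then determine whether this linear system has a unique stable solution.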
APA, Harvard, Vancouver, ISO, and other styles
49

Borgesen, Jørgen Frenken. "Efficient optimization for Model Predictive Control in reservoir models." Thesis, Norwegian University of Science and Technology, Department of Engineering Cybernetics, 2009. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9959.

Full text
Abstract:

The purpose of this thesis was to study the use of adjoint methods for gradient calculations in Model Predictive Control (MPC) applications. The goal was to find and test efficient optimization methods for use in MPC on oil reservoir models. Handling output constraints in the optimization problem was studied more closely, since such constraints greatly deteriorate the efficiency of MPC applications. Adjoint and finite-difference approaches for gradient calculations were tested on reservoir models to determine their efficiency on this particular type of problem. Techniques for reducing the number of output constraints were also utilized to decrease the computation time further. The results of this study show that adjoint methods can greatly decrease the computation time for reservoir simulations, and combining them with techniques that reduce the number of output constraints can reduce the computation time even more. Adjoint methods require some more work in the modeling process, but the simulation time can be greatly reduced. The principal conclusion is that more specialized optimization algorithms can reduce the simulation time for reservoir models.
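The efficiency argument for adjoints can be shown on a toy linear system: one backward sweep yields the gradient of the cost with respect to every input, whereas finite differences need one extra simulation per input. The dynamics below are an illustrative stand-in, not the thesis's reservoir equations.

```python
import numpy as np

# Toy linear dynamics x[k+1] = A x[k] + B u[k], cost J = sum of ||x[k]||^2.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
N, x0 = 20, np.array([1.0, 0.0])

def simulate(u):
    """Forward simulation returning the cost and the state trajectory."""
    x, J, xs = x0.copy(), 0.0, [x0.copy()]
    for k in range(N):
        x = A @ x + B @ u[k]
        J += x @ x
        xs.append(x.copy())
    return J, xs

def adjoint_gradient(u):
    """One backward sweep gives dJ/du for every input at once."""
    _, xs = simulate(u)
    lam = np.zeros(2)
    g = np.zeros((N, 1))
    for k in reversed(range(N)):
        lam = 2.0 * xs[k + 1] + A.T @ lam   # adjoint recursion
        g[k] = B.T @ lam
    return g

u = np.full((N, 1), 0.1)
g_adj = adjoint_gradient(u)

# Check one component against a one-sided finite difference
eps = 1e-6
up = u.copy(); up[5] += eps
J1, _ = simulate(u)
J2, _ = simulate(up)
print(abs((J2 - J1) / eps - g_adj[5, 0]))   # small: the two gradients agree
```

The adjoint costs one forward plus one backward pass regardless of the number of inputs; the finite-difference approach would need N + 1 simulations for the same gradient vector, which is the gap the thesis exploits.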

APA, Harvard, Vancouver, ISO, and other styles
50

Feng, Chunxia. "Transit Bus Load-Based Modal Emission Rate Model Development." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/14583.

Full text
Abstract:
Heavy-duty diesel vehicle (HDDV) operations are a major source of pollutant emissions in major metropolitan areas. Accurate estimation of heavy-duty diesel vehicle emissions is essential in air quality planning efforts because highway and non-road heavy-duty diesel emissions account for a significant fraction of the oxides of nitrogen (NOx) and particulate matter (PM) emissions inventories. Yet, major modeling deficiencies in the current MOBILE6 modeling approach for heavy-duty diesel vehicles have been widely recognized for more than ten years. While the most recent MOBILE6.2 model integrates marginal improvements to various internal conversion and correction factors, fundamental flaws inherent in the modeling approach still remain. The major effort of this research is to develop a new heavy-duty vehicle load-based modal emission rate model that overcomes some of the limitations of existing models and emission rate prediction methods. This model is part of the proposed Heavy-Duty Diesel Vehicle Modal Emission Modeling (HDDV-MEM) framework, which was developed by the Georgia Institute of Technology. HDDV-MEM first predicts second-by-second engine power demand as a function of vehicle operating conditions and then applies brake-specific emission rates to these activity predictions. To provide better estimates at the microscopic level, this modeling approach is designed to predict second-by-second emissions from onroad vehicle operations. This research statistically analyzes the database provided by EPA and yields a model for predicting emissions at the microscopic level based on engine power demand and driving mode. The results enhance the ability of engine power demand to explain emissions and underline the importance of simulating engine power in real-world applications. The modeling approach provides a significant improvement in HDDV emissions modeling compared to the current average-speed cycle-based emissions models.
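The load-based modal idea (estimate second-by-second power demand first, then apply a brake-specific emission rate) can be sketched as follows. Every coefficient here is illustrative, chosen only to be physically plausible for a heavy truck; none are HDDV-MEM values.

```python
def engine_power_kw(v_ms, a_ms2, mass_kg=15000.0, grade=0.0):
    """Road-load power demand: rolling resistance + aerodynamic drag +
    grade + inertia forces, times speed, divided by drivetrain efficiency.
    All coefficients are assumed, not HDDV-MEM parameters."""
    g, crr, rho, cd_a, eta = 9.81, 0.008, 1.2, 6.0, 0.9
    f = (mass_kg * g * (crr + grade)      # rolling resistance + grade
         + 0.5 * rho * cd_a * v_ms**2     # aerodynamic drag
         + mass_kg * a_ms2)               # inertia
    return max(f * v_ms / eta / 1000.0, 0.0)

def nox_rate_gps(power_kw, bser_g_per_kwh=8.0):
    """Second-by-second emission rate = brake-specific rate x instantaneous power."""
    return bser_g_per_kwh * power_kw / 3600.0

# One second of cruising at 20 m/s on flat ground
p = engine_power_kw(20.0, 0.0)
print(round(p, 1), "kW,", round(nox_rate_gps(p), 3), "g/s NOx")
```

Summing such per-second rates over a measured speed trace gives a modal, activity-based inventory instead of an average-speed cycle estimate.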
APA, Harvard, Vancouver, ISO, and other styles