Dissertations / Theses on the topic 'Limit models'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Limit models.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Calhoun, Grayson Ford. "Limit theory for overfit models." Diss., [La Jolla] : University of California, San Diego, 2009. http://wwwlib.umi.com/cr/ucsd/fullcit?p3359804.

Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2009.
Title from first page of PDF file (viewed July 23, 2009). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 104-109).
2

Andersson, Håkan. "Limit theorems for some stochastic epidemic models." Stockholm : Stockholm University, 1994. http://catalog.hathitrust.org/api/volumes/oclc/40258819.html.

3

Odero, Geophrey Otieno. "Limit Cycles and Dynamics of Rumor Models." Digital Commons @ East Tennessee State University, 2013. https://dc.etsu.edu/etd/1236.

Abstract:
This thesis discusses limit cycles and the behavior of rumor models. The first part presents the deterministic Daley-Kendall (DK) model with arrivals and departures and compares the Susceptibles, Infectives and Removed (SIR) model with the DK model. The second part is a qualitative analysis of the general behavior of an extension of the Daley-Kendall model; here we discuss how the halting rate of spreaders causes the model to change from a stable equilibrium to a stable limit cycle. In the third part we carry out model validation, using both synthetic and real data sets and fitting them to the numerical solutions of the extended Daley-Kendall model. Finally, we find the parameter estimates and standard errors. In this way we can decide whether the numerical solutions quantifying the relationships between the variables obtained from the qualitative analysis can be accepted as the best description of the data. We discuss sensitivity analysis results and traditional sensitivity functions.
4

Persson, Jonas. "Strings as Sigma Models and in the Tensionless Limit." Doctoral thesis, Uppsala : Acta Universitatis Upsaliensis, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-7783.

5

Gao, Xuefeng. "Stochastic models for service systems and limit order books." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/50238.

Abstract:
Stochastic fluctuations can have profound impacts on engineered systems. Nonetheless, we can achieve significant benefits such as cost reduction based upon expanding our fundamental knowledge of stochastic systems. The primary goal of this thesis is to contribute to our understanding by developing and analyzing stochastic models for specific types of engineered systems. The knowledge gained can help management to optimize decision making under uncertainty. This thesis has three parts. In Part I, we study many-server queues that model large-scale service systems such as call centers. We focus on the positive recurrence of piecewise Ornstein-Uhlenbeck (OU) processes and the validity of using these processes to predict the steady-state performance of the corresponding many-server queues. In Part II, we investigate diffusion processes constrained to the positive orthant under infinitesimal changes in the drift. This sensitivity analysis on the drift helps us understand how changes in service capacities at individual stations in a stochastic network would affect the steady-state queue-length distributions. In Part III, we study the trading mechanism known as limit order book. We are motivated by a desire to better understand the interplay among order flow rates, liquidity fluctuation, and optimal executions. The goal is to characterize the temporal evolution of order book shape on the “macroscopic” time scale.
6

Meurer, Anne [Verfasser]. "Interacting Particle Models with their Limit Equations / Anne Meurer." München : Verlag Dr. Hut, 2016. http://d-nb.info/1120763983/34.

7

Visram, Abeed. "Asymptotic limit analysis for numerical models of atmospheric frontogenesis." Thesis, Imperial College London, 2014. http://hdl.handle.net/10044/1/23219.

Abstract:
Accurate prediction of the future state of the atmosphere is important throughout society, ranging from the weather forecast in a few days' time to modelling the effects of a changing climate over decades and generations. The equations which govern how the atmosphere evolves have long been known: the Navier-Stokes equations, the laws of thermodynamics and the equation of state. Unfortunately, the nonlinearity of the equations prohibits analytic solutions, so simplified models of particular flow phenomena have historically been, and continue to be, used alongside numerical models of the full equations. In this thesis, the two-dimensional Eady model of shear-driven frontogenesis (the creation of atmospheric fronts) was used to investigate how errors made in a localised region can affect the global solution. Atmospheric fronts are the boundary between two different air masses, typically characterised by a sharp change in air temperature and wind direction. This change occurs across a small length of O(10 km), whereas the extent of the front itself can be O(1000 km). Fronts are a prominent feature of mid-latitude weather systems and, despite their narrow width, are part of the large-scale, global solution. Any errors made locally in the treatment of fronts will therefore affect the global solution. This thesis uses the convergence of the Euler equations to the semigeostrophic equations, a simplified model which is representative of the large-scale flow, including fronts. The Euler equations were solved numerically using current operational techniques. It was shown that highly predictable solutions could be obtained, and the theoretical convergence rate maintained, even in the presence of near-discontinuous solutions given by intense fronts. Numerical solutions with successively increased resolution showed that the potential vorticity, which is a fundamental quantity in determining the large-scale, balanced flow, approached the semigeostrophic limit solution. Regions of negative potential vorticity, indicative of local areas of instability, were reduced at high resolution. In all cases, the width of the front reduced to the grid scale. While qualitative features of the limit solution were reproduced, a stark contrast in amplitude was found: the results of this thesis were approximately half the amplitude of the limit solution. Some attempts were made at increasing the intensity of the front through spatial and temporal averaging. A scheme was proposed that conserves the potential vorticity within the Eady model.
8

So, Mee Chi. "Optimizing credit limit policy by Markov decision process models." Thesis, University of Southampton, 2009. https://eprints.soton.ac.uk/68761/.

Abstract:
Credit cards have become an essential product for most consumers. Lenders have recognized the profit that can be achieved from the credit card market and have introduced different credit cards to attract consumers; the credit card market has thus undergone keen competition in recent years. Lenders realize that their operating decisions are crucial in determining how much profit is achieved from a card. This thesis focuses on the most well-known operating policy: the management of the credit limit. Lenders have traditionally applied static decision models to manage the credit limit of credit card accounts. A growing number of lenders, though, want improved models so as to monitor the long-term risk and return of credit card borrowers. This study aims to use the Markov Decision Process, a well-developed sequential decision model, to adjust the credit limit of current credit card accounts. The behavioural score, which is a way of assessing a credit card holder's default risk in the next year, is used as the key parameter to monitor the risk of every individual account. The model formulation and the corresponding application techniques, such as coarse-classification of states and the choice of the Markov order, are discussed in this thesis. One major concern in using a Markov Decision Process model is the small sample size in certain states. In general, credit card lenders have lots of data; however, there may be no examples in the data of transitions from certain states to default, particularly for high-quality credit card accounts. If one simply uses zero to estimate these states' transition probabilities, this leads to apparent 'structural zero' states which change the connectedness of the dynamics in the state space. A method is developed in this thesis to overcome such problems in real applications. The economy and retail credit risk are highly correlated, and so one key focus of this study is the interaction between credit card behavioural score migrations and the economy. This study uses different credit card datasets, one from Hong Kong and one from the United Kingdom, to examine the impact of the economy on credit card borrowers' behaviour. The economies in these two areas were different during the sampling period. Based on these empirical findings, this study generalizes the use of macroeconomic measurements in credit limit models. This thesis also proposes segmenting the credit card accounts by their repayment patterns. The credit card population in general can be segmented into Transactors and Revolvers, and empirical findings show that the impact of the economy is significantly different for the two groups. This study provides a detailed picture of the application of Markov Decision Process models in adjusting the credit limit of credit card accounts.
9

Liu, Ying, and 劉影. "Limit equilibrium methods for slope stability analysis." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2002. http://hub.hku.hk/bib/B42576684.

10

Pilipauskaité, Vytauté. "Limit theorems for spatio-temporal models with long-range dependence." Thesis, Nantes, 2017. http://www.theses.fr/2017NANT4057/document.

Abstract:
The thesis is devoted to limit theorems for stochastic models with long-range dependence. We first consider a random-coefficient AR(1) process, which can have long memory provided the distribution of the autoregressive coefficient concentrates near the unit root. We identify three different limit regimes in the scheme of joint temporal-contemporaneous aggregation for independent copies of the random-coefficient AR(1) process and for its copies driven by common innovations. Next, we discuss nonparametric estimation of the distribution of the autoregressive coefficient given multiple random-coefficient AR(1) series. We prove the weak convergence of the empirical process based on estimates of unobservable autoregressive coefficients to a generalized Brownian bridge, and apply this result to draw statistical inference from panel AR(1) data. In the second part of the thesis we focus on spatial models in dimension 2. We define a nonlinear random field as the Appell polynomial of a linear random field with long-range dependence. For the nonlinear random field, we investigate the limit of its normalized partial sums over rectangles and prove the existence of a scaling transition. Finally, we study a similar scaling of the random grain model and obtain two change points in its limits.
11

Baaqeel, Hanan. "Central limit theorems and statistical inference for some random graph models." Thesis, University of Nottingham, 2015. http://eprints.nottingham.ac.uk/29294/.

Abstract:
Random graphs and networks are of great importance in many fields, including mathematics, computer science, statistics, biology and sociology. This research aims to develop statistical theory and methods of statistical inference for random graphs in novel directions. A major strand of the research is the development of conditional goodness-of-fit tests for random graph models and for random block graph models. On the theoretical side, this entails proving a new conditional central limit theorem for certain graph statistics, which are closely related to the number of two-stars and the number of triangles, and where the conditioning is on the number of edges in the graph. A second strand of the research is to develop composite likelihood methods for estimation of the parameters in exponential random graph models. Composite likelihood methods based on edge data have previously been widely used. A novel contribution of the thesis is the development of composite likelihood methods based on more complicated data structures. The goals of this PhD thesis also include testing the numerical performance of the novel methods in extensive simulation studies and through applications to real graphical data sets.
12

Granelli, Andrea. "Limit theorems and stochastic models for dependence and contagion in financial markets." Thesis, Imperial College London, 2016. http://hdl.handle.net/10044/1/56943.

Abstract:
We analyse the effect of dependence between financial assets in the setting of the variance risk premium and the Brownian semistationary process. The variance risk premium (VRP) refers to the premium demanded for holding assets whose variance is exposed to stochastic shocks. This thesis identifies a new modelling framework for equity indices and presents, for the first time, explicit analytical formulas for their VRP in a multivariate stochastic volatility setting, which includes multivariate non-Gaussian Ornstein-Uhlenbeck processes and Wishart processes. Moreover, we propose to incorporate contagion within the equity index via a multivariate Hawkes process and find that the resulting dynamics of the VRP represent a convincing alternative to the models studied in the literature to date. The Brownian semistationary process (BSS) is in general not a semimartingale, and its univariate asymptotic limit theory outside the semimartingale framework has been developed over recent years, owing to the increasing number of its applications in finance and in the modelling of turbulence. We expand the reach of the theory by proving new probabilistic limit theorems for the 2-dimensional version of the process, using techniques from Malliavin calculus.
13

Rabino, Bruno António Santos. "Efeito Combinado das Estratégias e do Limit Order Book num Mercado Artificial." Master's thesis, Instituto Superior de Economia e Gestão, 2011. http://hdl.handle.net/10400.5/4489.

Abstract:
Master's in Information Systems Management
Agent-based models are increasingly being used to model artificial societies of financial markets. Though the use of empirical data is the most common practice of computational approaches applied to those markets, there has been increasingly frequent use of artificial environments, either in addition or as an alternative to empirical approaches. Over the past years, most artificial environments have been supported by Agent-Based Models (ABM), which are able to reproduce the behavior of the entities involved in a given phenomenon. Interactions take place among the model entities and between the entities and the environment. In so doing, ABMs allow us to test research questions and help explain the phenomenon at hand. The topic of this work is the creation of an ABM to evaluate the effect of the joint participation of two different types of behavior. The first is the existence of a set of strategies of individual agents. The second is restricted to the presence of a Limit Order Book (LOB). It is known that in economics, the validation of results obtained in artificial environments is often carried out by comparing those results with a set of stylized facts. Therefore, this dissertation begins with the presentation of the main stylized facts of financial markets, followed by the framing of computer simulation of artificial societies and the identification of fundamental and specific aspects of ABM. The third chapter presents the basic ideas of the Limit Order Book and some behavior-based strategies. The presentation and analysis of results focuses primarily on the evaluation of three different scenarios: 1) we consider the existence of a mechanism where agents' individual strategies are used in deciding to buy, to sell or to do nothing; 2) we consider only the existence of a simplified Limit Order Book, where investor decisions are taken at random and the price dynamics are handled solely by the Limit Order Book; and 3) we consider the combined effect of the above (strategies and LOB), where the price dynamics continue to be handled by the LOB but the buying/selling decisions are now defined by the agent strategies. Overall, we conclude that in an artificial market with intelligent agents, the presence of a Limit Order Book plays the leading role over the existence of a set of strategies, i.e., over the intelligence of the players. Finally, some possible enhancements and extensions that would make the model more complete are presented.
14

Sindel, Michael. "Numerical Renormalization Group studies of Quantum Impurity Models in the Strong Coupling Limit." Diss., lmu, 2005. http://nbn-resolving.de/urn:nbn:de:bvb:19-31150.

15

Höschler, Marcel [Verfasser], and Peter [Akademischer Betreuer] Bank. "Limit order book models and optimal trading strategies / Marcel Höschler. Betreuer: Peter Bank." Berlin : Universitätsbibliothek der Technischen Universität Berlin, 2011. http://d-nb.info/101494676X/34.

16

Osterrieder, Jörg Robert. "Arbitrage, the limit order book and market microstructure aspects in financial market models." kostenfrei, 2007. http://e-collection.ethbib.ethz.ch/view/eth:29478.

17

Newbury, James. "Limit order books, diffusion approximations and reflected SPDEs : from microscopic to macroscopic models." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:825d9465-842b-424b-99d0-ff4dfa9ebfc5.

Abstract:
Motivated by a zero-intelligence approach, the aim of this thesis is to unify the microscopic (discrete price and volume), mesoscopic (discrete price and continuous volume) and macroscopic (continuous price and volume) frameworks of limit order books, with a view to providing a novel yet analytically tractable description of their behaviour in a high- to ultra-high-frequency setting. Starting with the canonical microscopic framework, the first part of the thesis examines the limiting behaviour of the order book process when order arrival and cancellation rates are sent to infinity and when volumes are considered to be of infinitesimal size. Mathematically speaking, this amounts to establishing the weak convergence of a discrete-space process to a mesoscopic diffusion limit. This step is initially carried out in a reduced-form context, in other words, by simply looking at the best bid and ask queues, before the procedure is extended to the whole book. This subsequently leads us to the second part of the thesis, which is devoted to the transition between mesoscopic and macroscopic models of limit order books, where the general idea is to send the tick size to zero, or equivalently, to consider infinitely many price levels. The macroscopic limit is then described in terms of reflected SPDEs, which typically arise in stochastic interface models. Numerical applications are finally presented, notably via the simulation of the mesoscopic and macroscopic limits, which can be used as market simulators for short-term price prediction or optimal execution strategies.
18

Müller, Noela [Verfasser], Ralph [Gutachter] Neininger, and Rudolf [Gutachter] Grübel. "Central limit theorems for multicolour urn models / Noela Müller ; Gutachter: Ralph Neininger, Rudolf Grübel." Frankfurt am Main : Universitätsbibliothek Johann Christian Senckenberg, 2018. http://d-nb.info/1151572322/34.

19

Mihoci, Andrija. "Structural adaptive models in financial econometrics." Doctoral thesis, Humboldt-Universität zu Berlin, Wirtschaftswissenschaftliche Fakultät, 2012. http://dx.doi.org/10.18452/16597.

Abstract:
Modern methods in statistics and econometrics successfully deal with stylized facts observed on financial markets. The presented techniques aim to understand the dynamics of financial market data more accurately than traditional approaches, and economic and financial benefits are achievable. The results are evaluated here in practical examples that mainly focus on forecasting financial data. Our applications include: (i) modelling and forecasting of liquidity supply, (ii) localizing multiplicative error models and (iii) providing evidence for the empirical pricing kernel paradox across countries.
20

Fournier, Mathew. "Investigations into the Shear Strength Reduction method using distinct element models." Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/2492.

Abstract:
This thesis reports a detailed investigation into the use of the Shear Strength Reduction (SSR) method to determine factor of safety values in discontinuum models using the Universal Distinct Element Code. The SSR method depends on the definition of failure within the model, and two different criteria were compared: the numerical unbalanced-force definition and a more qualitative displacement-monitoring based method. A parametric study was first undertaken, using a simple homogeneous rock slope, with three different joint networks representing common kinematic states. Lessons learned from this study were then applied to a more complex case history used for validation of the SSR method. The discontinuum models allow the failure surface to propagate based on constitutive models that better idealize the rockmass than simpler methods such as limit equilibrium (e.g. method-of-slices or wedge solutions) and even numerical continuum models (e.g. finite difference, finite element). Joints are explicitly modelled and can exert a range of influences on the SSR result. Simple elasto-plastic models are used for both the intact rock and the joint properties. Strain-softening models are also discussed with respect to the SSR method. The results presented highlight several important relationships to consider, related to both numerical procedures and numerical input parameters. The case history was modelled similarly to how a typical forward analysis would be undertaken: i.e., simple models with complexities added incrementally. The results for this case generally depict a rotational failure mode with a reduced factor of safety due to the presence of joints within the rockmass, when compared to a traditional limit equilibrium analysis. Some models with large persistence of steeply dipping joints were able to capture the actual failure surface.
Softening models were employed in order to mimic the generation and propagation of joints through the rockmass in a continuum; however, only discontinuum models using explicitly defined joints in the model were able to capture the correct failure surface.
21

Merino, Aceituno Sara. "Contributions in fractional diffusive limit and wave turbulence in kinetic theory." Thesis, University of Cambridge, 2015. https://www.repository.cam.ac.uk/handle/1810/256994.

Abstract:
This thesis is split into two different topics. Firstly, we study anomalous transport from kinetic models. Secondly, we consider the equations coming from weak wave turbulence theory and study them via mean-field limits of finite stochastic particle systems. $\textbf{Anomalous transport from kinetic models.}$ The goal is to understand how fractional diffusion arises from kinetic equations. We explain how fractional diffusion corresponds to anomalous transport and its relation to the classical diffusion equation. In previous works it has been seen that particle systems undergoing free transport and scattering with the medium can give rise to fractional phenomena in two cases: firstly, if in the dynamics of the particles there is a heavy-tailed equilibrium distribution; and secondly, if the scattering rate is degenerate for small velocities. We use these known results from the literature to study the emergence of fractional phenomena for some particular kinetic equations. Firstly, we study BGK-type equations conserving not only mass (as in previous results), but also momentum and energy. In the hydrodynamic limit we obtain a fractional diffusion equation for the temperature and density, making use of the Boussinesq relation, and we also demonstrate that with the same rescaling fractional diffusion cannot additionally be derived for the momentum. But considering the case of conservation of mass and momentum only, we do obtain the incompressible Stokes equation with fractional diffusion in the hydrodynamic limit for heavy-tailed equilibria. Secondly, we study diffusion phenomena arising from the transport of energy in an anharmonic chain. More precisely, we consider the so-called FPU-$\beta$ chain, a very simple model for a one-dimensional crystal in which atoms are coupled to their nearest neighbours by a harmonic potential, weakly perturbed by a nonlinear quartic potential.
The starting point of our mathematical analysis is a kinetic equation; lattice vibrations, responsible for heat transport, are modelled by an interacting gas of phonons whose evolution is described by the Boltzmann Phonon Equation. Our main result is the derivation of an anomalous diffusion equation for the temperature. $\textbf{Weak wave turbulence theory and mean-field limits for stochastic particle systems.}$ The isotropic 4-wave kinetic equation is considered in its weak formulation using model homogeneous kernels. Existence and uniqueness of solutions is proven in a particular setting. We also consider finite stochastic particle systems undergoing instantaneous coagulation-fragmentation phenomena and give conditions in which this system approximates the solution of the equation (mean-field limit).
APA, Harvard, Vancouver, ISO, and other styles
22

Huang, Xinming. "Development of Reduced-Order Flame Models for Prediction of Combustion Instability." Diss., Virginia Tech, 2001. http://hdl.handle.net/10919/29763.

Full text
Abstract:
Lean-premixed combustion has the advantage of low emissions for modern gas turbines, but it is susceptible to thermoacoustic instabilities, which can result in large amplitude pressure oscillations in the combustion chamber. The thermoacoustic limit cycle is generated by the unsteady heat release dynamics coupled to the combustor acoustics. In this dissertation, we focused on reduced-order modeling of the dynamics of a laminar premixed flame. From first principles of combustion dynamics, a physically-based, reduced-order, nonlinear model was developed based on the proper orthogonal decomposition technique and generalized Galerkin method. In addition, the describing function for the flame was measured experimentally and used to identify an empirical nonlinear flame model. Furthermore, a linear acoustic model was developed and identified for the Rijke tube experiment. Closed-loop thermoacoustic modeling using the first principles flame model coupled to the linear acoustics successfully reproduced the linear instability and predicted the thermoacoustic limit cycle amplitude. With the measured experimental flame data and the modeled linear acoustics, the describing function technique was applied for limit cycle analysis. The thermoacoustic limit cycle amplitude was predicted with reasonable accuracy, and the closed-loop model also predicted the performance for a phase shift controller. Some problems found in the predictions for high heat release cases were documented.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
23

Söhl, Jakob. "Central limit theorems and confidence sets in the calibration of Lévy models and in deconvolution." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2013. http://dx.doi.org/10.18452/16732.

Full text
Abstract:
Zentrale Grenzwertsätze und Konfidenzmengen werden in zwei verschiedenen, nichtparametrischen, inversen Problemen ähnlicher Struktur untersucht, und zwar in der Kalibrierung eines exponentiellen Lévy-Modells und im Dekonvolutionsmodell. Im ersten Modell wird eine Geldanlage durch einen exponentiellen Lévy-Prozess dargestellt, Optionspreise werden beobachtet und das charakteristische Tripel des Lévy-Prozesses wird geschätzt. Wir zeigen, dass die Schätzer fast sicher wohldefiniert sind. Zu diesem Zweck beweisen wir eine obere Schranke für Trefferwahrscheinlichkeiten von gaußschen Zufallsfeldern und wenden diese auf einen Gauß-Prozess aus der Schätzmethode für Lévy-Modelle an. Wir beweisen gemeinsame asymptotische Normalität für die Schätzer von Volatilität, Drift und Intensität und für die punktweisen Schätzer der Sprungdichte. Basierend auf diesen Ergebnissen konstruieren wir Konfidenzintervalle und -mengen für die Schätzer. Wir zeigen, dass sich die Konfidenzintervalle in Simulationen gut verhalten, und wenden sie auf Optionsdaten des DAX an. Im Dekonvolutionsmodell beobachten wir unabhängige, identisch verteilte Zufallsvariablen mit additiven Fehlern und schätzen lineare Funktionale der Dichte der Zufallsvariablen. Wir betrachten Dekonvolutionsmodelle mit gewöhnlich glatten Fehlern. Bei diesen ist die Schlechtgestelltheit des Problems durch die polynomielle Abfallrate der charakteristischen Funktion der Fehler gegeben. Wir beweisen einen gleichmäßigen zentralen Grenzwertsatz für Schätzer von Translationsklassen linearer Funktionale, der die Schätzung der Verteilungsfunktion als Spezialfall enthält. Unsere Ergebnisse gelten in Situationen, in denen eine Wurzel-n-Rate erreicht werden kann, genauer gesagt gelten sie, wenn die Sobolev-Glattheit der Funktionale größer als die Schlechtgestelltheit des Problems ist.
Central limit theorems and confidence sets are studied in two different but related nonparametric inverse problems, namely in the calibration of an exponential Lévy model and in the deconvolution model. In the first set-up, an asset is modeled by an exponential of a Lévy process, option prices are observed and the characteristic triplet of the Lévy process is estimated. We show that the estimators are almost surely well-defined. To this end, we prove an upper bound for hitting probabilities of Gaussian random fields and apply this to a Gaussian process related to the estimation method for Lévy models. We prove joint asymptotic normality for estimators of the volatility, the drift, the intensity and for pointwise estimators of the jump density. Based on these results, we construct confidence intervals and sets for the estimators. We show that the confidence intervals perform well in simulations and apply them to option data of the German DAX index. In the deconvolution model, we observe independent, identically distributed random variables with additive errors and we estimate linear functionals of the density of the random variables. We consider deconvolution models with ordinary smooth errors. Then the ill-posedness of the problem is given by the polynomial decay rate with which the characteristic function of the errors decays. We prove a uniform central limit theorem for the estimators of translation classes of linear functionals, which includes the estimation of the distribution function as a special case. Our results hold in situations, for which a square-root-n-rate can be obtained, more precisely, if the Sobolev smoothness of the functionals is larger than the ill-posedness of the problem.
APA, Harvard, Vancouver, ISO, and other styles
24

Corner, Sebastien Marc William. "Screw-Fastened Cold-Formed Steel-to-Steel Shear Connection Behavior and Models." Thesis, Virginia Tech, 2014. http://hdl.handle.net/10919/78073.

Full text
Abstract:
This research introduces a proposed model for predicting the tilting angle and limit states of single-fastened cold-formed steel-to-steel shear connections. Predictions are validated through an experimental study considering ply configuration and a single #10 hex washer-head fastener, centered in a 102 mm by 102 mm three-boundary window. The fastener tilting angle is captured using an automated, optical non-contact measurement procedure. The results are used to identify cold-formed steel shear connection deformation as load progresses, including tilting, bearing, and combined tilting-bearing at the plies, and thread tension, shear and bearing failure of the fastener. Results show that fastener tilting has a kinematic effect on the connection: tilting is predicted as a function of ply thickness and fastener pitch, local ply bending is the main deformation of the connection while the fastener tilts, and fastener bending and shear failure occur when the fastener does not tilt.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
25

Predoiu, Silviu Nicolae. "Optimal Execution in a General One-Sided Limit-Order Book and Endogenous Dynamic Completeness of Financial Models." Research Showcase @ CMU, 2011. http://repository.cmu.edu/dissertations/115.

Full text
Abstract:
This thesis consists of two parts. The first is a result obtained under the supervision of Steven Shreve and in collaboration with Gennady Shaikhet. Our work yielded a detailed description of the optimal strategies for a large investor who needs to buy a large amount of shares of a stock over a finite time horizon. Our model of the limit order book of the underlying stock generalizes known results to continuous time and to arbitrary distributions of the limit order book. See the introduction section of chapter 2 for a more detailed discussion. The second part is a result obtained under the supervision of Dmitry Kramkov. Our work yielded a sufficient condition on the structure of the economic factors, the dividends of traded assets and the total endowment in a single-agent economy, such that in an Arrow-Debreu-Radner equilibrium the market is complete. The main result is formulated as an integral representation theorem. Our work generalizes and complements fairly recent results in this direction (at the time of this thesis) by requiring less smoothness of the driving diffusion process at the expense of seemingly stronger conditions on the terminal dividends of the assets. See the introduction section of chapter 3 for a more detailed discussion.
APA, Harvard, Vancouver, ISO, and other styles
26

Buschi, Daniele. "Zero-intelligence Models e crisi di liquidità endogene nei mercati finanziari." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/21933/.

Full text
Abstract:
The aim of this thesis is to describe market dynamics, and in particular price dynamics, from a physical point of view. Models and approximations typical of the physics of complex systems are presented, outlining a fundamental collaboration (between Physics and Economics) that is undergoing significant development, above all thanks to the large amount of data available today. The thesis is structured in three main chapters. The first chapter describes the elementary dynamics of the price, illustrating the basic principles of the random walk, introduces the concepts needed to understand the zero-intelligence model, and motivates the choice of a model that describes price dynamics by emphasising the consequences of the institutional structure of the market rather than the rationality of the agents. The second chapter presents the so-called zero-intelligence model, which describes the dynamics of one of the most common price-formation microstructures: the limit order book. The model is developed first in a section that uses dimensional analysis to make predictions about various quantities of economic interest, confirmed by numerically simulated data, and then in a final section in which, through the mean-field approximation, two theoretical approaches are outlined that explain the results of the model more formally. A third chapter discusses a modification of the model of the second chapter obtained by adding feedback. This factor strongly affects the probability of the appearance of liquidity crises and, although a phase transition is not rigorously shown, the probability changes abruptly with the intensity of the feedback.
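As a rough illustration of the zero-intelligence mechanism described in the abstract, the following sketch simulates a limit order book in which unit limit orders, market orders and cancellations arrive at random, with no strategic behaviour. All parameters and function names are illustrative choices for this sketch, not values or code from the thesis.

```python
import random

def simulate_zi_book(steps=20_000, grid=200, p_limit=0.6, p_market=0.2, seed=1):
    """Zero-intelligence order book on an integer price grid: with probability
    p_limit a unit limit order arrives at a uniform admissible price (buys
    strictly below the best ask, sells strictly above the best bid), with
    probability p_market a unit market order consumes one unit at the touch,
    and otherwise one resting unit is cancelled at random."""
    random.seed(seed)
    bids, asks = {}, {}          # price -> resting depth
    spreads = []
    for _ in range(steps):
        best_bid = max(bids) if bids else grid // 2 - 1
        best_ask = min(asks) if asks else grid // 2 + 1
        u = random.random()
        if u < p_limit:          # limit order on a random admissible price
            if random.random() < 0.5:
                p = random.randint(0, best_ask - 1)
                bids[p] = bids.get(p, 0) + 1
            else:
                p = random.randint(best_bid + 1, grid - 1)
                asks[p] = asks.get(p, 0) + 1
        elif u < p_limit + p_market:   # market order eats the best quote
            side, p = (bids, best_bid) if random.random() < 0.5 else (asks, best_ask)
            if side.get(p, 0) > 0:
                side[p] -= 1
                if side[p] == 0:
                    del side[p]
        elif bids or asks:       # cancellation of a random resting unit
            side = random.choice([s for s in (bids, asks) if s])
            p = random.choice(list(side))
            side[p] -= 1
            if side[p] == 0:
                del side[p]
        if bids and asks:
            spreads.append(min(asks) - max(bids))
    return spreads

spreads = simulate_zi_book()
```

Because buy limit orders are placed strictly below the best ask and sells strictly above the best bid, the book never crosses, and the simulated spread is the kind of observable whose statistics the dimensional analysis of the model predicts.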
APA, Harvard, Vancouver, ISO, and other styles
27

Ysusi, Mendoza Carla Mariana. "Estimation of the variation of prices using high-frequency financial data." Thesis, University of Oxford, 2005. http://ora.ox.ac.uk/objects/uuid:1b520271-2a63-428d-b5a0-e7e9c4afdc66.

Full text
Abstract:
When high-frequency data are available, realised variance and realised absolute variation can be calculated from intra-day prices. In the context of a stochastic volatility model, realised variance and realised absolute variation estimate the integrated variance and the integrated spot volatility respectively. A central limit theory enables us to do filtering and smoothing using model-based and model-free approaches in order to improve the precision of these estimators. When the log-price process involves a finite-activity jump process, realised variance estimates the quadratic variation of both the continuous and jump components. Other consistent estimators of integrated variance can be constructed on the basis of realised multipower variation, i.e. realised bipower, tripower and quadpower variation. These objects are robust to jumps in the log-price process; therefore, given adequate asymptotic assumptions, the difference between realised multipower variation and realised variance provides a tool to test for jumps in the process. Realised variance becomes biased in the presence of market microstructure effects, whereas realised bipower, tripower and quadpower variation are more robust in such a situation. Nevertheless, there is always a trade-off between bias and variance: bias is due to market microstructure noise when sampling at high frequencies, and variance is due to the asymptotic assumptions when sampling at low frequencies. By subsampling and averaging realised multipower variation this effect can be reduced, thereby allowing calculations at higher frequencies.
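The jump-robustness of bipower variation can be made concrete with a small numerical sketch (synthetic data and illustrative parameters of my choosing, not code from the thesis): realised variance picks up both the diffusive variation and the squared jump, whereas bipower variation stays close to the integrated variance, so their difference signals the jump.

```python
import math
import random

def realised_variance(returns):
    """Sum of squared intra-day log-returns; estimates the quadratic
    variation (integrated variance plus squared jumps)."""
    return sum(r * r for r in returns)

def realised_bipower_variation(returns):
    """mu1^-2 * sum |r_i||r_{i-1}| with mu1 = E|N(0,1)| = sqrt(2/pi);
    robust to finite-activity jumps, so it estimates integrated variance only."""
    mu1 = math.sqrt(2.0 / math.pi)
    return sum(abs(a) * abs(b) for a, b in zip(returns[1:], returns)) / mu1**2

random.seed(0)
n, sigma = 10_000, 0.2                    # intra-day grid, constant spot volatility
returns = [random.gauss(0.0, sigma / math.sqrt(n)) for _ in range(n)]
returns[n // 2] += 0.5                    # inject a single jump

rv = realised_variance(returns)           # near sigma^2 + 0.5^2 = 0.29
bv = realised_bipower_variation(returns)  # near sigma^2 = 0.04
```

The gap `rv - bv` is the raw ingredient of the jump tests mentioned in the abstract; under the appropriate central limit theory it is rescaled into a proper test statistic.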
APA, Harvard, Vancouver, ISO, and other styles
28

Bayly, Matthew. "Translocations of Mimulus cardinalis beyond the northern range limit show that dispersal limitation can invalidate ecological niche models." Thesis, University of British Columbia, 2015. http://hdl.handle.net/2429/56267.

Full text
Abstract:
Correlative ecological niche models, built with species' occurrence records, have become the most widespread method to forecast range shifts under climate change, but these models assume that species' range limits are driven by their niche limits. If a species' range limit is instead the result of dispersal limitation, then these correlative models will be poorly calibrated and largely inaccurate. I used experimental field transplants within and beyond the northern range limit of the scarlet monkeyflower (Mimulus cardinalis) to test for dispersal limitation and to see whether climate-based ecological niche models could accurately predict site-level suitability. I also compared predictions from the niche models to a previous study that transplanted the species beyond its upper elevational range limit, which is known to be fitness limited rather than dispersal limited. Predictions from the niche model closely matched observed fitness from the field transplant experiment across the species' elevational range limit, but not across the species' northern latitudinal range limit. Consistently high fitness was maintained within and beyond the northern range limit, even in sites of low predicted suitability, suggesting that the northern range limit is dispersal limited. I then constructed an alternative ecological niche model for M. cardinalis with stream habitat variables rather than climatic variables, controlling for the influence of climate mechanistically with a simple thermal envelope. This alternative model demonstrated a large amount of suitable habitat beyond the northern range limit, further supporting the conclusion that this range limit is largely dispersal limited rather than fitness limited. Dispersal limitation presents a serious systemic challenge for the correlative niche modelling framework and its associated applications.
By combining niche models with field experiments, I was able to show both the strengths and weaknesses of these methods and use existing theory of dispersal limitation as a framework to assess the accuracy of these models.
Science, Faculty of
Botany, Department of
Graduate
APA, Harvard, Vancouver, ISO, and other styles
29

Subramaniam, Balaji. "Metrics, Models and Methodologies for Energy-Proportional Computing." Diss., Virginia Tech, 2015. http://hdl.handle.net/10919/56492.

Full text
Abstract:
Massive data centers housing thousands of computing nodes have become commonplace in enterprise computing, and the power consumption of such data centers is growing at an unprecedented rate. Exacerbating such costs, data centers are often over-provisioned to avoid costly outages associated with the potential overloading of electrical circuitry. However, such over-provisioning is often unnecessary, since a data center rarely operates at its maximum capacity. It is imperative that we realize effective strategies to control the power consumption of the server and improve the energy efficiency of data centers. Adding to the problem is the inability of servers to exhibit energy proportionality, which diminishes the overall energy efficiency of the data center. Therefore, in this dissertation, we investigate whether it is possible to achieve energy proportionality at the server- and cluster-level by efficient power and resource provisioning. Towards this end, we provide a thorough analysis of energy proportionality at the server- and cluster-level and provide insight into the power-saving opportunities and mechanisms to improve energy proportionality. Specifically, we make the following contributions at the server-level using enterprise-class workloads. We analyze the average power consumption of the full system as well as the subsystems and describe the energy proportionality of these components, characterize the instantaneous power profile of enterprise-class workloads using the on-chip energy meters, design a runtime system based on a load prediction model and an optimization framework to set the appropriate power constraints to meet specific performance targets, and then present the effects of our runtime system on energy proportionality, average power, performance and instantaneous power consumption of enterprise applications. We then make the following contributions at the cluster-level.
Using data serving, web searching and data caching as our representative workloads, we first analyze the component-level power distribution on a cluster. Second, we characterize how these workloads utilize the cluster. Third, we analyze the potential of power provisioning techniques (i.e., active low-power, turbo and idle low-power modes) to improve the energy proportionality. We then describe the ability of active low-power modes to provide trade-offs in power and latency. Finally, we compare and contrast power provisioning and resource provisioning techniques. This thesis sheds light on mechanisms to tune the power provisioned for a system under strict performance targets and opportunities to improve energy proportionality and instantaneous power consumption via efficient power and resource provisioning at the server- and cluster-level.
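Energy proportionality can be quantified in several ways; the sketch below uses one simple linear-deviation score (an illustrative metric chosen for this sketch, not necessarily the one used in the dissertation) that compares a measured power-versus-utilisation curve against the ideal line through zero.

```python
def energy_proportionality(power, peak=None):
    """Illustrative proportionality score: compare the measured power curve
    (power[i] = watts at utilisation u_i, sampled uniformly on [0, 1]) with
    the ideal linear curve P_peak * u, and report
        EP = 1 - (A_measured - A_ideal) / A_ideal
    where A is the (trapezoidal) area under each curve.  EP = 1 means
    perfectly proportional; lower values mean a larger fixed-power overhead."""
    n = len(power)
    peak = peak if peak is not None else power[-1]
    us = [i / (n - 1) for i in range(n)]
    ideal = [peak * u for u in us]

    def trap(ys):
        # average height of the curve = area on [0, 1]
        return sum((ys[i] + ys[i + 1]) / 2 for i in range(n - 1)) / (n - 1)

    a_meas, a_ideal = trap(power), trap(ideal)
    return 1 - (a_meas - a_ideal) / a_ideal

# A server idling at half its peak power is far from proportional:
ep_flat = energy_proportionality([50, 62.5, 75, 87.5, 100])    # EP = 0.5
ep_linear = energy_proportionality([0, 25, 50, 75, 100])       # EP = 1.0
```

Scores of this kind make statements such as "turbo and idle low-power modes improve energy proportionality" comparable across servers and clusters.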
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
30

Du, Preez Franco B. "Comparative cross-species analysis of detailed kinetic models of glycolysis." Thesis, Stellenbosch : University of Stellenbosch, 2009. http://hdl.handle.net/10019.1/1208.

Full text
Abstract:
Thesis (PhD (Biochemistry))--University of Stellenbosch, 2009.
ENGLISH ABSTRACT: With the recent advances in the field of molecular biology, there is an increased need to integrate data on the various constituents of the cell in kinetic models that can predict and describe cellular behavior. When working towards a description of the entire cell using such kinetic models, the question arises: How do we compare different models for a given biological network? This is the central question addressed in my thesis and I developed and applied mathematical and computational methods for comparing dozens of existing models of erythrocyte and yeast glycolysis. To compare the steady-state behavior in models of erythrocyte glycolysis, I focussed on the function of the pathway, which is to supply the cell with Gibbs-free energy (γ-phosphate of ATP). I used supply-demand analysis in the framework of metabolic control analysis to make this comparison, which revealed that the ATP concentrations were homeostatically buffered at varying supply rates. I also applied this approach to compare steady-state behavior in models of yeast glycolysis, finding that they were not necessarily optimized for homeostatic maintenance of the ATP concentration and that in models for this organism the rate of ATP production is often determined by the supply reactions of glycolysis. In addition, I tested whether a kinetic model can describe novel behavior if it is adjusted to conditions different from those for which the model was originally constructed. More specifically, using a model of steady-state yeast glycolysis, I showed that small adjustments to the original enzyme concentrations are enough to obtain an oscillating model, which shows a remarkable resemblance to the experimentally observed oscillations. Importantly, some of these enzyme concentration changes are known to occur during the pre-treatment of the cells which is necessary to obtain oscillatory behavior.
To the best of my knowledge, the resulting model is the first detailed kinetic model that describes the experimentally observed strong synchronization of glycolytic oscillations in yeast populations. To analyze the dynamic behavior of yeast glycolytic models and to compare different models in terms of dynamics, I introduced a framework used in physics and engineering to create a vector based, two dimensional graphical representation of the oscillating metabolites and reactions of glycolysis. Not only was it possible to make a concise comparison of the set of models, but with the method I could also quantify the contribution of the interactions in the network to the transduction of the oscillations. Furthermore I could distinguish between different mechanisms of oscillation for each of the models, and demonstrated how the framework can be used to create such representations for experimental data sets.
AFRIKAANSE OPSOMMING: Met die onlangse vooruitgang in die veld van molekulêre biologie, is daar ’n toenemende behoefte om data rakende die verskeie komponente van die sel in kinetiese modelle te integreer, om sodanig selgedrag te voorspel en te beskryf. As daar gepoog word om ’n beskrywing van die sel as geheel te verkry d.m.v. sulke kinetiese modelle, ontstaan die vraag: Hoe vergelyk ons verskillende modelle van ’n gegewe biologiese netwerk? Dit is die sentrale vraag wat my tesis aanspreek en ek het wiskundige en numeriese metodes ontwikkel en toegepas om talle bestaande modelle van gis- en eritrosietglikolise te vergelyk. Om die bestendige-toestand gedrag in modelle van eritrosietglikolise te vergelyk, het ek gefokus op die funksie van die padweg, naamlik om die sel met Gibbs-vrye energie (γ-fosfaat van ATP) te voorsien. Ek het vraag-aanbod analise in die raamwerk van metaboliese kontrole analise gebruik om hierdie vergelyking te maak, wat getoon het dat die ATP konsentrasies homeostaties gebuffer was by verskillende aanbod tempos. Ek het ook hierdie aanpak gebruik om die bestendige-toestand gedrag in modelle van gisglikolise te vergelyk, en het bevind dat hulle nie noodwendig geoptimiseer is om ’n homeostatiese balans in die ATP konsentrasie te handhaaf nie, en dat in modelle vir hierdie organisme, die tempo van ATP produksie dikwels bepaal word deur die aanbod reaksies van glikolise. Ek het verder ook bepaal of so ’n kinetiese model nuwe soorte gedrag kan beskryf, as dit aangepas word aan omstandighede wat verskil van dié waarvoor die model oorspronklik gekonstrueer was. Meer spesifiek, deur ’n model van bestendige-toestand gisglikolise te gebruik, kon ek wys dat klein veranderinge aan die oorspronklike ensiem konsentrasies genoeg was om ’n ossilerende model te verkry, wat opmerklik ooreenstem met die eksperimenteel waargenome ossilasies.
Let ook daarop dat sommige van hierdie ensiem konsentrasie veranderinge plaasvind tydens die voorafbehandeling van die selle, wat essensieel is om die ossilasies waar te neem. Tot die beste van my kennis is die model wat ek met hierdie prosedures verkry het, die eerste gedetaileerde kinetiese model wat die eksperimenteel waargenome sterk sinkronisasie in ossilerende gis populasies voorspel. Om gis glikolitiese modelle te vergelyk in terme van hul dinamiese gedrag, het ek ’n raamwerk wat in fisika en ingenieurswese gebruik word, ingespan om ’n vektor-gebaseerde, twee-dimensionele grafiese voorstelling van die ossilerende metaboliete en reaksies te maak. Hierdie raamwerk het dit nie net moontlik gemaak om ’n kompakte vergelyking van ’n stel modelle te maak nie, maar ek kon ook die bydrae van interaksies in die netwerk tot transduksie van die ossilasies kwantifiseer. Ek kon verder onderskeid tref tussen die verskillende ossilasiemeganismes vir elk van die modelle, en het ook gedemonstreer hoe die raamwerk gebruik kan word om sulke voorstellings vir eksperimentele datastelle te skep.
APA, Harvard, Vancouver, ISO, and other styles
31

Jeunesse, Paulien. "Estimation non paramétrique du taux de mort dans un modèle de population générale : Théorie et applications. A new inference strategy for general population mortality tables Nonparametric adaptive inference of birth and death models in a large population limit Nonparametric inference of age-structured models in a large population limit with interactions, immigration and characteristics Nonparametric test of time dependance of age-structured models in a large population limit." Thesis, Paris Sciences et Lettres (ComUE), 2019. http://www.theses.fr/2019PSLED013.

Full text
Abstract:
L’étude du taux de mortalité dans des modèles de population humaine ou en biologie est le cœur de ce travail. Cette thèse se situe à la frontière de la statistique des processus, de la statistique non-paramétrique et de l’analyse. Dans une première partie, centrée sur une problématique actuarielle, un algorithme est proposé pour estimer les tables de mortalité, utiles en assurance. Cet algorithme se base sur un modèle déterministe de population. Ces nouvelles estimations améliorent les résultats actuels en prenant en compte la dynamique globale de la population. Ainsi les naissances sont incorporées dans le modèle pour calculer le taux de mort. De plus, ces estimations sont mises en lien avec les travaux précédents, assurant ainsi la continuité théorique de notre travail. Dans une deuxième partie, nous nous intéressons à l’estimation du taux de mortalité dans un modèle stochastique de population. Cela nous pousse à utiliser des arguments propres à la statistique des processus et à la statistique non-paramétrique. On trouve alors des estimateurs non-paramétriques adaptatifs dans un cadre anisotrope pour la mortalité et la densité de population, ainsi que des inégalités de concentration non asymptotiques quantifiant la distance entre le modèle stochastique et le modèle déterministe limite utilisé dans la première partie. On montre que ces estimateurs restent optimaux dans un modèle où le taux de mort dépend d’interactions, comme dans le cas de la population logistique. Dans une troisième partie, on considère la réalisation d’un test pour détecter la présence d’interactions dans le taux de mortalité. Ce test permet en réalité de juger de la dépendance temporelle de ce taux. Sous une hypothèse, on montre alors qu’il est possible de détecter la présence d’interactions. Un algorithme pratique est proposé pour réaliser ce test.
In this thesis, we study the mortality rate in different population models, with applications to demography and biology. The mathematical framework combines the statistics of processes, nonparametric estimation and analysis. In a first part, an algorithm is proposed to estimate mortality tables. This problem comes from actuarial science, and the aim is to apply our results in the insurance field. The algorithm is founded on a deterministic population model, and the new estimates we obtain improve on current results. Their advantage is to take the global population dynamics into account: births enter the model used to compute the mortality rate. Finally, these estimates are linked with previous works, a point of great importance in actuarial science. In a second part, we are interested in the estimation of the mortality rate in a stochastic population model, using tools from nonparametric estimation and the statistics of processes; indeed, the mortality rate is a function of two variables, time and age. We propose minimax-optimal and adaptive estimators for the mortality and the population density. We also prove non-asymptotic concentration inequalities quantifying the deviation between the stochastic process and the deterministic limit used in the first part. We show that our estimators remain optimal in a model where the mortality is influenced by interactions, as is the case, for example, for the logistic population. In a third part, we consider the testing problem of detecting the existence of interactions. This test is in fact designed to detect the time dependence of the mortality rate. Under the assumption that the time dependence of the mortality rate comes only from the interactions, we can detect the presence of interactions. Finally, we propose an algorithm to perform this test.
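The nonparametric idea behind mortality-rate estimation can be sketched with the classical occurrence/exposure estimator (standard in survival analysis; the thesis' adaptive estimators are far more refined, and the function below is only an illustrative sketch): on each age bin, the death rate is estimated as the number of deaths divided by the total time at risk, and for exponential lifetimes it recovers the constant hazard.

```python
import random

def death_rate_by_age(lifetimes, bin_width=1.0, max_age=10.0):
    """Occurrence/exposure estimator: for each age bin [k*w, (k+1)*w), the
    estimated death rate is (deaths in the bin) / (total time lived in the
    bin by individuals at risk).  Lifetimes beyond max_age are censored:
    they contribute exposure but no death."""
    n_bins = int(max_age / bin_width)
    deaths = [0] * n_bins
    exposure = [0.0] * n_bins
    for t in lifetimes:
        for k in range(n_bins):
            lo, hi = k * bin_width, (k + 1) * bin_width
            if t >= hi:
                exposure[k] += bin_width    # survived the whole bin
            elif t > lo:
                exposure[k] += t - lo       # died inside the bin
                deaths[k] += 1
                break
            else:
                break
    return [d / e if e > 0 else float("nan") for d, e in zip(deaths, exposure)]

# Exponential lifetimes have constant hazard mu, which the estimator recovers:
random.seed(7)
mu = 0.5
lifetimes = [random.expovariate(mu) for _ in range(100_000)]
rates = death_rate_by_age(lifetimes, bin_width=1.0, max_age=4.0)
```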
APA, Harvard, Vancouver, ISO, and other styles
32

Renlund, Henrik. "Recursive Methods in Urn Models and First-Passage Percolation." Doctoral thesis, Uppsala universitet, Matematisk statistik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-145430.

Full text
Abstract:
This PhD thesis consists of a summary and four papers which deal with stochastic approximation algorithms and first-passage percolation. Paper I deals with the a.s. limiting properties of bounded stochastic approximation algorithms in relation to the equilibrium points of the drift function. Applications are given to some generalized Pólya urn processes. Paper II continues the work of Paper I and investigates under what circumstances one gets asymptotic normality from a properly scaled algorithm. The algorithms are shown to converge in some other circumstances, although the limiting distribution is not identified. Paper III deals with the asymptotic speed of first-passage percolation on a graph called the ladder when the times associated to the edges are independent, exponentially distributed with the same intensity. Paper IV generalizes the work of Paper III in allowing more edges in the graph as well as not having all intensities equal.
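The link between urns and stochastic approximation in Paper I can be illustrated with a classical example (an independent sketch, not code from the thesis): in a Friedman-type urn where each draw adds a ball of the opposite colour, the white proportion follows a Robbins-Monro recursion $X_{n+1} = X_n + \frac{1}{n+1}(h(X_n) + \text{noise})$ with drift $h(x) = 1 - 2x$, so it converges almost surely to the unique stable zero $x = 1/2$ of the drift.

```python
import random

def friedman_urn(steps=50_000, white=1, black=1, seed=3):
    """Friedman urn: draw a ball uniformly, replace it, and add one ball of
    the OPPOSITE colour.  The white proportion X_n then satisfies a
    stochastic approximation recursion with drift h(x) = 1 - 2x, whose
    unique stable zero is x = 1/2, so X_n -> 1/2 almost surely."""
    random.seed(seed)
    for _ in range(steps):
        if random.random() < white / (white + black):
            black += 1   # drew white: add a black ball
        else:
            white += 1   # drew black: add a white ball
    return white / (white + black)

prop = friedman_urn()    # close to 1/2
```

In the classical Pólya urn (add a ball of the SAME colour) the drift vanishes identically, every point is an equilibrium, and the proportion converges to a random Beta-distributed limit instead, which is why the a.s. analysis of equilibrium points in Paper I is the natural framework.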
APA, Harvard, Vancouver, ISO, and other styles
33

Fredes, Carrasco Luis. "Some models on the interface of probability and combinatorics : particle systems and maps." Thesis, Bordeaux, 2019. http://www.theses.fr/2019BORD0142/document.

Full text
Abstract:
Cette thèse se compose de plusieurs travaux portant sur deux branches de la théorie des probabilités : processus de particules et cartes planaires aléatoires. Un premier travail concerne les aspects algébriques des mesures invariantes des processus de particules. Nous obtenons des conditions nécessaires et suffisantes sous lesquelles un processus de particules en temps continu avec espace d’états local discret possède une mesure invariante simple. Dans un deuxième travail nous étudions un modèle "biologique" de coexistence de 2 espèces en compétition sur un espace partagé, et soumis à des épidémies modélisées par un modèle probabiliste appelé "feux de forêts". Notre résultat principal montre que pour deux espèces, il existe des régions explicites de paramètres pour lesquelles une espèce domine ou les deux espèces coexistent. Il s’agit d’un des premiers modèles pour lesquels la coexistence d’espèces sur le long terme est prouvée. Les troisième et quatrième travaux portent sur les cartes planaires décorées par des arbres. Dans le troisième nous présentons une bijection entre l’ensemble des cartes décorées par des arbres et le produit cartésien entre l’ensemble des arbres planaires et l’ensemble des cartes à bord simple. Nous obtenons quelques formules de comptage et quelques outils pour l’étude de cartes aléatoires décorées par un arbre. Le quatrième travail montre que les triangulations et quadrangulations aléatoires uniformes avec f faces, bord simple de taille p et décorées par un arbre avec a arêtes, convergent en loi pour la topologie locale vers différentes limites, dépendant du comportement fini ou infini de la limite de f, p et a.
This thesis consists of several works exploring models belonging to two branches of probability theory: interacting particle systems and random planar maps. A first work concerns algebraic aspects of the invariant measures of interacting particle systems. We obtain necessary and sufficient conditions for some continuous-time particle systems with discrete local state space to have a simple invariant measure. In a second work we investigate the effect on survival and coexistence of introducing forest fire epidemics to a certain two-species spatial competition model. Our main results show that, for the two-type model, there are explicit parameter regions where either one species dominates or there is coexistence; contrary to the same model without forest fires, for which the fittest species always dominates. The third and fourth works are related to tree-decorated planar maps. In the third work we present a bijection between the set of tree-decorated maps and the Cartesian product between the set of trees and the set of maps with a simple boundary. We obtain some counting results and some tools to study random decorated map models. In the fourth work we prove that uniform tree-decorated triangulations and quadrangulations with f faces, boundary of length p and decorated by a tree of size a converge weakly for the local topology to different limits, depending on the finite or infinite behavior of f, p and a.
APA, Harvard, Vancouver, ISO, and other styles
34

Jeong, Minsoo. "Asymptotics for the maximum likelihood estimators of diffusion models." [College Station, Tex.]: Texas A&M University, 2008. http://hdl.handle.net/1969.1/ETD-TAMU-2335.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

O'Leary, Brendon R. "In search of the electron's electric dipole moment in thorium monoxide: An improved upper limit, systematic error models, and apparatus upgrades." Thesis, Yale University, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10633259.

Full text
Abstract:

Searches for violations of discrete symmetries can be sensitive probes of physics beyond the Standard Model. Many models, such as supersymmetric theories, introduce new particles at higher masses that carry new CP-violating phases, which are thought to be of order unity. Such phases could generate measurable permanent electric dipole moments (EDMs) of particles. The ACME collaboration has measured the electron's EDM to be consistent with zero, with an order-of-magnitude improvement over the previous best precision (J. Baron et al., ACME collaboration, Science 343 (2014), 269-272), using a spin precession measurement performed in the H state of a beam of thorium monoxide (ThO). This limit constrains time-reversal-violating physics for particles with masses well into the TeV scale. In this thesis I discuss the details of this measurement, with an emphasis on the data analysis, the search for systematic errors, and the systematic error models that contributed to this result. I also discuss implemented and planned upgrades to the experimental apparatus intended both to improve the statistical sensitivity and to reduce its susceptibility to systematic errors. At this time, the upgraded apparatus has been demonstrated to have a statistical sensitivity to the electron EDM more than a factor of 10 better than that of our previous measurement.

APA, Harvard, Vancouver, ISO, and other styles
36

Janssen, Arend. "Order book models, signatures and numerical approximations of rough differential equations." Thesis, University of Oxford, 2012. http://ora.ox.ac.uk/objects/uuid:264e96b3-f449-401b-8768-337acab59cab.

Full text
Abstract:
We construct a mathematical model of an order-driven market where traders can submit limit orders and market orders to buy and sell securities. We adapt the notion of no free lunch of Harrison and Kreps, and of Jouini and Kallal, to our setting, and we prove a no-arbitrage theorem for the model of the order-driven market. Furthermore, we compute signatures of order books of different financial markets. Signatures, i.e. the full sequence of iterated integrals of a path, are one of the fundamental objects of the theory of rough paths. The theory of rough paths provides a framework to describe the evolution of dynamical systems that are driven by rough signals, including rough paths based on Brownian motion and fractional Brownian motion (see the work of Lyons). We show how to obtain the solution of a polynomial differential equation and its (truncated) signature from the signature of the driving signal and the initial value. We also present and analyse an ODE method for the numerical solution of rough differential equations. We derive error estimates and prove that the method achieves the same rate of convergence as the corresponding higher-order Euler schemes studied by Davie and by Friz and Victoir, while enhancing stability. The method has been implemented for the case of polynomial vector fields as part of the CoRoPa software package, which is available at http://coropa.sourceforge.net. We describe both the algorithm and the implementation, and we show by examples how it can be used to compute the pathwise solution of stochastic rough differential equations driven by Brownian rough paths and fractional Brownian rough paths.
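The signature objects this abstract refers to can be computed in closed form for a piecewise-linear path: each linear segment contributes a simple tensor expression to the first two levels. The following sketch (not from the thesis; the function name is illustrative) assumes the path is given as an array of sample points.

```python
import numpy as np

def signature_level2(path):
    """Levels 1 and 2 of the signature of a piecewise-linear path
    (rows of `path` are sample points). Level 2 collects the iterated
    integrals S[i, j] = integral of (X^i - X^i_0) dX^j over the path."""
    x0 = path[0]
    s1 = path[-1] - x0                      # level 1: total increment
    s2 = np.zeros((path.shape[1], path.shape[1]))
    for a, b in zip(path[:-1], path[1:]):   # per-segment contribution
        inc = b - a
        s2 += np.outer(a - x0, inc) + 0.5 * np.outer(inc, inc)
    return s1, s2
```

A quick sanity check is the shuffle identity S2 + S2^T = S1 ⊗ S1, which holds for any path and pins down the antisymmetric part (the Lévy area) as the genuinely new level-2 information.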
APA, Harvard, Vancouver, ISO, and other styles
37

Borghi, Giacomo. "Consensus-Based Optimization on Hypersurfaces." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/21783/.

Full text
Abstract:
This thesis presents a consensus-based algorithm for optimization constrained to hypersurfaces. The method is a metaheuristic optimization technique in which a set of interacting particles moves according to a mechanism combining deterministic and stochastic dynamics so as to build consensus around a point of the domain where the objective function attains a minimum. The dynamics is governed by a system of SDEs and is studied through the formalism of kinetic theory for interacting particle models. First, the system is shown to be well posed and its mean-field limit is formally derived. The consensus mechanism is then studied analytically and computationally, with particular attention to the difficulties introduced by enforcing the constraint. Finally, experiments are carried out on classical test functions.
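The mechanism described in this abstract can be sketched in a few lines for the simplest hypersurface, the unit sphere: each step combines a drift toward a weighted consensus point, scaled noise, and a projection back onto the constraint. This is a generic illustration of the idea, not the thesis's exact scheme; all parameter values are illustrative.

```python
import numpy as np

def cbo_sphere_step(X, f, rng, lam=1.0, sigma=0.2, beta=30.0, dt=0.05):
    """One Euler-Maruyama step of a consensus-based optimization dynamic,
    projected back onto the unit sphere after each update (a sketch)."""
    w = np.exp(-beta * np.array([f(x) for x in X]))   # Gibbs-type weights
    v = (w[:, None] * X).sum(axis=0) / w.sum()        # weighted consensus point
    diff = v - X
    Y = (X + lam * diff * dt                          # drift toward consensus
         + sigma * np.linalg.norm(diff, axis=1, keepdims=True)
           * rng.normal(size=X.shape) * np.sqrt(dt))  # noise shrinks near consensus
    return Y / np.linalg.norm(Y, axis=1, keepdims=True)  # enforce the constraint
```

Iterating this step with moderate noise drives the ensemble toward a consensus near a minimiser of f on the sphere, since the noise amplitude vanishes as the particles gather.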
APA, Harvard, Vancouver, ISO, and other styles
38

Santos, Lucas Araújo. "Limites de escala em modelos de armadilhas." Universidade Federal da Paraíba, 2015. http://tede.biblioteca.ufpb.br:8080/handle/tede/8043.

Full text
Abstract:
Let X = {X_t, t ≥ 0; X_0 = 0} be a mean-zero β-stable random walk on Z with inhomogeneous jump rates {τ_i^{-1} : i ∈ Z}, where β ∈ (1, 2] and {τ_i : i ∈ Z} is a family of independent random variables whose common marginal distribution lies in the basin of attraction of an α-stable law with α ∈ (0, 2]. In this work we derive results about the long-time behavior of this process and obtain its scaling limit. To this end, we first study probability on metric spaces, in particular the space D of functions that are right-continuous with left-hand limits. We also present some results on stable laws that are directly related to the problem above.
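A minimal simulation of the kind of trap model this abstract studies can make the setup concrete: each site of Z holds a heavy-tailed mean waiting time τ_i, and the walker waits an exponential time with rate τ_i^{-1} before jumping to a uniform neighbour. This is a generic sketch (Pareto tails as a simple member of the stable basin of attraction; names and defaults are illustrative).

```python
import numpy as np

def trap_walk(n_steps, alpha=0.5, size=10001, seed=0):
    """Symmetric trap model on Z (a sketch): site i holds the walker for
    an exponential time of mean tau_i, where the tau_i are i.i.d. with
    P(tau > t) = t^(-alpha), hence in the basin of attraction of an
    alpha-stable law. Returns the list of (time, position) pairs."""
    rng = np.random.default_rng(seed)
    tau = rng.uniform(size=size) ** (-1.0 / alpha)  # heavy-tailed mean waits
    pos, t, path = size // 2, 0.0, [(0.0, 0)]
    for _ in range(n_steps):
        t += rng.exponential(tau[pos])              # wait in the current trap
        pos += rng.choice((-1, 1))                  # jump to a uniform neighbour
        path.append((t, pos - size // 2))
    return path
```

The deep traps (large τ_i) dominate the clock process, which is what produces the non-trivial scaling limits studied in the thesis.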
APA, Harvard, Vancouver, ISO, and other styles
39

Kirby, Nicholas O. "Modeling, Analysis, and Simulation of Discrete-Continuum Models of Step-Flow Epitaxy: Bunching Instabilities and Continuum Limits." UKnowledge, 2011. http://uknowledge.uky.edu/gradschool_diss/187.

Full text
Abstract:
Vicinal surfaces consist of terraces separated by atomic steps. In the step-flow regime, deposited atoms (adatoms) diffuse on terraces, eventually reaching steps where they attach to the crystal, thereby causing the steps to move. There are two main objectives of this work. First, we analyze rigorously the differences in qualitative behavior between vicinal surfaces consisting of infinitely many steps and nanowires whose top surface consists of a small number of steps bounded by a reflecting wall. Second, we derive the continuum model that describes the macroscopic behavior of vicinal surfaces from detailed microscopic models of step dynamics. We use the standard theory of Burton-Cabrera-Frank (BCF) to show that in the presence of an Ehrlich-Schwoebel barrier, i.e., a preferential attachment of adatoms from the lower terraces, N-periodic step motions are stable with respect to step collisions. Nonetheless, for N > 2 step collisions may occur. Moreover, we consider a single perturbed terrace, in which we distinguish three cases: no attachment from the upper terraces (perfect ES barrier), no attachment from the lower terraces (perfect inverse ES barrier), and symmetric attachment. For a perfect ES barrier, steps never collide regardless of the initial perturbation. In contrast, for a perfect inverse ES barrier, collisions occur for any nonzero perturbation. Finally, for symmetric attachment, step collisions occur for sufficiently large outward perturbations. To model nanowire growth, we consider rectilinear steps and concentric steps bounded by reflecting walls. In contrast to a vicinal surface with infinitely many steps, we prove analytically that the Ehrlich-Schwoebel barrier is destabilizing with respect to step collisions. We further consider nanowire growth with desorption, and prove that the initial conditions that lead to step collisions are characterized by a unique step motion trajectory. 
We take as our starting point a thermodynamically consistent (TC) generalization of the BCF model to derive PDE that govern the evolution of the vicinal surface at the macroscale. Whereas the BCF model yields a fourth-order parabolic equation for the surface height, the TC model yields a system of coupled equations for the surface height and the surface chemical potential.
APA, Harvard, Vancouver, ISO, and other styles
40

Söhl, Jakob [Verfasser], Markus [Akademischer Betreuer] Reiß, Vladimir [Akademischer Betreuer] Spokoiny, and Richard [Akademischer Betreuer] Nickl. "Central limit theorems and confidence sets in the calibration of Lévy models and in deconvolution / Jakob Söhl. Gutachter: Markus Reiß ; Vladimir Spokoiny ; Richard Nickl." Berlin : Humboldt Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2013. http://d-nb.info/103457258X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Morel, Guillaume. "Asymptotic-preserving and well-balanced schemes for transport models using Trefftz discontinuous Galerkin method." Thesis, Sorbonne université, 2018. http://www.theses.fr/2018SORUS556.

Full text
Abstract:
Some solutions of the transport equation admit a diffusion limit and boundary layers which may be very costly to approximate with naive numerical methods. A possible way to address these issues is to consider well-balanced (WB) and asymptotic-preserving (AP) schemes, which are known, in some cases, to greatly improve the numerical solution on coarse meshes. This thesis deals with the study and analysis of a Trefftz discontinuous Galerkin (TDG) scheme for a model transport problem with linear relaxation. We show that the TDG method naturally provides well-balanced and asymptotic-preserving discretizations, since exact solutions, possibly non-polynomial, are used locally in the basis functions. In particular, the formulation of the TDG method is given for the general case of Friedrichs systems. In the practical examples, special attention is devoted to the PN approximation of the transport equation. For this two-dimensional model, polynomial and exponential basis functions are constructed and the convergence of the scheme is studied. Numerical examples on the P1 and P3 models show that the TDG method outperforms the standard discontinuous Galerkin method on tests with stiff source terms. In particular, the TDG method leads to efficient schemes for capturing boundary layers and the diffusion limit of the transport equation.
APA, Harvard, Vancouver, ISO, and other styles
42

Navarrete, Hurtado Hugo Ariel. "Electromagnetic models for ultrasound image processing." Doctoral thesis, Universitat Politècnica de Catalunya, 2016. http://hdl.handle.net/10803/398235.

Full text
Abstract:
Speckle noise appears when coherent illumination is employed, as for example in laser, synthetic aperture radar (SAR), sonar, magnetic resonance, X-ray and ultrasound imagery. Backscattered echoes from the randomly distributed scatterers in the microscopic structure of the medium are the origin of the speckle phenomenon, which gives coherent imaging its granular appearance. It can be shown that speckle noise is multiplicative in nature, strongly correlated and, more importantly, non-Gaussian. These characteristics differ greatly from the traditional assumption of additive white Gaussian noise often made in image segmentation, filtering and image processing in general, which reduces the effectiveness of those methods for extracting information from the final image; this kind of noise therefore severely impairs both human and machine interpretation of the images. Statistical modeling is particularly relevant when dealing with speckled data in order to obtain efficient image-processing algorithms. In addition, clinical ultrasound imaging systems employ nonlinear signal processing to reduce the dynamic range of the input echo signal, so that it matches the smaller dynamic range of the display device, and to emphasize objects with weak backscatter. This reduction in dynamic range is normally achieved through a logarithmic amplifier (logarithmic compression), which selectively compresses large input signals. Such nonlinear compression totally changes the statistics of the input envelope signal, and a closed-form expression for the density function of the log-transformed data is usually hard to derive. This thesis is concerned with the statistical distributions of the log-compressed amplitude signal in coherent imagery, and its main objective is to develop a general statistical model for log-compressed ultrasound B-scan images.
The developed model is adapted, making the pertinent physical analogies, from the multiplicative model used in the synthetic aperture radar (SAR) context. It is shown that the proposed model successfully describes log-compressed data generated from different models proposed in the specialized ultrasound image-processing literature, and it is successfully applied to in-vivo echocardiographic (ultrasound) B-scan images. The necessary theorems are established to give a rigorous mathematical proof of the validity and generality of the model. A physical interpretation of the parameters is given, and connections are established between the generalized central limit theorems, the multiplicative model and the compound representations underlying the different models proposed to date. It is shown that the log-amplifier parameters are included among the model parameters, and all model parameters are estimated using the method of moments and maximum likelihood. Finally, three applications are developed: speckle-noise identification and filtering; segmentation of in-vivo echocardiographic (ultrasound) B-scan images; and a novel approach to evaluating the heart's ejection fraction.
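The log-compression step that motivates this thesis has one consequence worth seeing directly: it turns multiplicative speckle into additive noise, so the spread of the compressed signal no longer depends on the underlying reflectivity level. The snippet below is a generic illustration (gain and distributions are illustrative, not the thesis's model).

```python
import numpy as np

def log_compress(envelope, gain=20.0, offset=0.0):
    """Display-chain logarithmic compression of an envelope signal
    (a generic sketch; real scanners use device-specific gains)."""
    return gain * np.log(envelope) + offset

# Multiplicative speckle: observed envelope = reflectivity * noise.
rng = np.random.default_rng(0)
noise = rng.rayleigh(scale=1.0, size=100_000)
weak, strong = 1.0 * noise, 50.0 * noise        # two reflectivity levels
# After log compression, log(a * n) = log(a) + log(n): the noise term is
# additive, so its spread is the same at both levels.
spread_weak = np.std(log_compress(weak))
spread_strong = np.std(log_compress(strong))
```

This level-independence of the compressed speckle spread is exactly why additive-noise statistics, rather than the raw envelope statistics, are the right object to model after compression.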
APA, Harvard, Vancouver, ISO, and other styles
43

Yang, Wenzhe. "The arithmetic geometry of mirror symmetry and the conifold transition." Thesis, University of Oxford, 2018. http://ora.ox.ac.uk/objects/uuid:e55a7b22-a268-4c57-9d98-c0547ecdcef9.

Full text
Abstract:
The central theme of this thesis is the application of mirror symmetry to the study of the arithmetic geometry of Calabi-Yau threefolds. It formulates a conjecture about the properties of the limit mixed Hodge structure at the large complex structure limit of an arbitrary mirror threefold, which is supported by a two-parameter example of a self-mirror Calabi-Yau threefold. It further studies the connections between this conjecture and Voevodsky's mixed motives. This thesis also studies the connections between the conifold transition and Beilinson's conjecture on the values of L-functions at integral points. It carefully studies the arithmetic geometry of the conifold in the mirror family of the quintic Calabi-Yau threefold and its L-function, which is shown to provide a very interesting example for Beilinson's conjecture.
APA, Harvard, Vancouver, ISO, and other styles
44

Ozan, Cem. "Estimation Of Grain Characteristics Of Soils By Using Cone Penetration Test (cpt) Data." Master's thesis, METU, 2003. http://etd.lib.metu.edu.tr/upload/1088988/index.pdf.

Full text
Abstract:
Because no soil samples are recovered during conventional cone penetration testing (CPT), soils must be classified on the basis of the recorded tip resistance, sleeve friction and pore pressure (if available) values. However, currently available soil classification models rest on deterministic, judgemental determination of the classification boundaries, which does not address the uncertainties intrinsic to the problem. Moreover, the size and quality of the databases used in developing these models are undocumented and thus questionable. Similar limitations exist in the development of SPT-CPT correlations, which are widely used in SPT-dominated design such as the assessment of soil liquefaction triggering. To eliminate these limitations, this study presents (1) a new probabilistic CPT-based soil classification methodology, and (2) new SPT-CPT correlations that address the uncertainties intrinsic to the problems. For these purposes, a database of 400 CPT/SPT boring data pairs was compiled, and probabilistic models were developed that correlate CPT tip and sleeve friction values to the actual soil classification, and CPT tip resistance to the SPT blow count N. The new set of correlations, whose model parameters were estimated by the maximum likelihood method, is judged to represent a robust and defensible basis for (1) predicting soil type from CPT data and (2) estimating the SPT N-value for given CPT data.
APA, Harvard, Vancouver, ISO, and other styles
45

Piltz, Sofia Helena. "Models for adaptive feeding and population dynamics in plankton." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:811fd94d-d58e-48fa-8848-ad7dc37a099f.

Full text
Abstract:
Traditionally, differential-equation models for population dynamics have considered organisms as "fixed" entities in terms of their behaviour and characteristics. However, there have been many observations of adaptivity in organisms, both at the level of behaviour and as an evolutionary change of traits, in response to the environmental conditions. Taking such adaptiveness into account alters the qualitative dynamics of traditional models and is an important factor to be included, for example, when developing reliable model predictions under changing environmental conditions. In this thesis, we consider piecewise-smooth and smooth dynamical systems to represent adaptive change in a 1 predator-2 prey system. First, we derive a novel piecewise-smooth dynamical system for a predator switching between its preferred and alternative prey type in response to prey abundance. We consider a linear ecological trade-off and discover a novel bifurcation as we change the slope of the trade-off. Second, we reformulate the piecewise-smooth system as two novel 1 predator-2 prey smooth dynamical systems. As opposed to the piecewise-smooth system that includes a discontinuity in the vector fields and assumes that a predator switches its feeding strategy instantaneously, we relax this assumption in these systems and consider continuous change in a predator trait. We use plankton as our reference organism because they serve as an important model system. We compare the model simulations with data from Lake Constance on the German-Swiss-Austrian border and suggest possible mechanistic explanations for cycles in plankton concentrations in spring.
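The piecewise-smooth switching described in this abstract can be sketched as a Filippov-type vector field: the predator feeds exclusively on whichever prey currently offers the higher payoff, so the right-hand side is discontinuous across a switching surface. All rates and the payoff rule below are illustrative, not the thesis's calibrated model.

```python
def switching_rhs(state, r1=1.0, r2=0.8, e=0.5, m=0.3, b1=1.2, b2=1.0, q=1.0):
    """Right-hand side of a 1 predator-2 prey system in which the predator
    feeds only on the prey giving the higher payoff b_i * p_i (a sketch;
    parameter values are illustrative)."""
    p1, p2, z = state
    on_prey1 = b1 * p1 >= b2 * p2           # switching surface: b1*p1 = b2*p2
    g1 = q * z * p1 if on_prey1 else 0.0    # predation pressure on prey 1
    g2 = 0.0 if on_prey1 else q * z * p2    # ... or on prey 2
    dp1 = r1 * p1 - g1                      # logistic terms omitted for brevity
    dp2 = r2 * p2 - g2
    dz = e * (g1 + g2) - m * z              # conversion efficiency e, mortality m
    return (dp1, dp2, dz)
```

The trade-off the thesis varies corresponds here to the slope relating b1 and b2; sliding motion on the switching surface is what the smooth reformulations in the later chapters replace with a continuous trait.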
APA, Harvard, Vancouver, ISO, and other styles
46

Corker, Lloyd A. "A test for Non-Gaussian distributions on the Johannesburg stock exchange and its implications on forecasting models based on historical growth rates." University of Western Cape, 2002. http://hdl.handle.net/11394/7447.

Full text
Abstract:
Masters of Commerce
If share price fluctuations follow a simple random walk, then forecasting models based on historical growth rates have little ability to forecast share price movements acceptably over a given period. The simple random-walk description of share price dynamics is obtained when a large number of investors have equal probability to buy or sell based on their own opinions. This description of the stock market is in essence the Efficient Market Hypothesis (EMH), the central concept around which financial modelling is based, including the Black-Scholes model and other important theoretical underpinnings of capital market theory such as mean-variance portfolio selection, arbitrage pricing theory (APT), the security market line and the capital asset pricing model (CAPM). These theories, which postulate that risk can be reduced to zero, set the foundation for option pricing and are a key component of the financial software packages used for pricing and forecasting in the financial industry. The model used by Black and Scholes and the other models mentioned above are Gaussian, i.e. they exhibit a random nature; this Gaussian property, together with the existence of expected returns and continuous time paths (also Gaussian properties), allows the use of stochastic calculus to solve complex Black-Scholes models. However, if the markets are not Gaussian, then the idea that risk can be reduced to zero can lead to a misleading and potentially disastrous sense of security in the financial markets. This study tests the null hypothesis that share prices on the JSE follow a random walk by means of graphical techniques, such as symmetry plots and quantile-quantile plots, to analyse the test distributions. Both graphical techniques produced evidence for the rejection of normality. Evidence leading to the rejection of the hypothesis was also found through nonparametric (distribution-free) methods, at the 1% level of significance, using the Anderson-Darling and runs tests.
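The runs test used in this study fits in a few lines: under the random-walk null, the number of runs of consecutive up/down moves is approximately normal, with known mean and variance. A minimal sketch (function name illustrative) for a sequence of +1/-1 signs:

```python
import math

def runs_test_z(signs):
    """Wald-Wolfowitz runs test statistic for a +/-1 sequence. Under the
    random-walk null, the number of runs R is approximately normal with
    the mean and variance below, so z is approximately N(0, 1)."""
    n1 = sum(1 for s in signs if s > 0)
    n2 = len(signs) - n1
    n = n1 + n2
    runs = 1 + sum(1 for a, b in zip(signs, signs[1:]) if a != b)
    mean = 2.0 * n1 * n2 / n + 1.0
    var = 2.0 * n1 * n2 * (2.0 * n1 * n2 - n) / (n ** 2 * (n - 1.0))
    return (runs - mean) / math.sqrt(var)
```

A |z| well above the 1% critical value (about 2.58) rejects randomness; a strictly alternating sequence, for instance, has far too many runs to be a random walk.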
APA, Harvard, Vancouver, ISO, and other styles
47

Zhang, Liming. "Limit Values and Factors influencing Limit Values of Spruce." Thesis, Uppsala universitet, Statistiska institutionen, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-155062.

Full text
Abstract:
We collected data on the decomposition of spruce litter to determine the limit values of mass loss and to identify the chemical and climatic factors that influence those limit values. Our data contained 28 decomposition sequences of spruce, mainly from Sweden and partly from other places. We chose mean annual temperature (MAT) and mean annual precipitation (MAP) as climatic factors, and water solubles, lignin, N, P, K, Ca, Mg and Mn as chemical factors. We then estimated the limit values by fitting a nonlinear model to mass loss over time, and identified the influential factors using a linear mixed model. We conclude that the linear mixed model is an appropriate and efficient approach for determining the factors, that P and MAP are the significant factors, and that species is a good random effect for explaining the variance within groups.
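The nonlinear limit-value step can be illustrated with one common asymptotic form, ML(t) = m(1 - e^(-kt)), where m is the limit value of accumulated mass loss. The sketch below fits it by a coarse grid search purely for illustration; the thesis (and decomposition studies generally) would use nonlinear least squares, and the functional form here is an assumption.

```python
import numpy as np

def limit_value_fit(t, ml):
    """Fit ML(t) = m * (1 - exp(-k * t)) to (time, mass-loss) data by a
    coarse grid search over (m, k); returns the pair minimising the
    squared error. A didactic sketch, not a production estimator."""
    best, best_err = None, np.inf
    for m in np.linspace(50, 100, 101):        # candidate limit values (%)
        for k in np.linspace(0.1, 3.0, 59):    # candidate rate constants
            err = np.sum((ml - m * (1 - np.exp(-k * t))) ** 2)
            if err < best_err:
                best, best_err = (m, k), err
    return best
```

The fitted m is the asymptote the mass-loss curve approaches; it is these per-sequence asymptotes that the linear mixed model then relates to P, MAP and the other factors.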
APA, Harvard, Vancouver, ISO, and other styles
48

Vaccheri, Matteo. "Stochastic particle systems and their application in financial modelling." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021.

Find full text
Abstract:
In this thesis we study the macroscopic properties of stochastic particle systems, that is, the behaviour of systems of SDEs when a large number of particles is considered. Here we are mainly interested in using such systems to model phenomena, so we approach them with an essentially practical aim. The main problem we face is finding a way to approximate the distribution of the system, since in most cases it is not explicit. The key fact we want to show is that there are two ways to approximate this distribution: through the macroscopic limit or through PDEs. Finally, we put these concepts into action to calibrate local stochastic volatility models to market prices.
APA, Harvard, Vancouver, ISO, and other styles
49

Gava, Renato Jacob. "Limite de escala do modelo de armadilhas numa árvore." Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-24092014-224507/.

Full text
Abstract:
We present the K process on a tree, a Markov process with instantaneous states which generalises the K process on the complete graph, as the limit of the trap model on a tree, and we apply this result to derive a scaling limit for the GREM-like trap model.
APA, Harvard, Vancouver, ISO, and other styles
50

Geng, Xin. "Limit order book resilience and cross impact limit order book model." Thesis, University of Warwick, 2015. http://wrap.warwick.ac.uk/67146/.

Full text
Abstract:
This thesis comprises five chapters. The first chapter gives a brief introduction to the existing literature on the optimal trade execution problem, the concept of the limit order book (LOB), market impact models and their underlying market microstructure. We also briefly review the regularity problem for market impact models and the resilience effect of the LOB market, and introduce some notions about limit order book trading. In the second chapter, a game-theoretical model due to Rosu [74] is introduced, and same-side and opposite-side resilience are reinterpreted for this model. The solution structure of a Markov equilibrium of this model is obtained for same-side resilience through a rigorous mathematical analysis, and a sufficient condition for the existence of real-valued solutions in this situation is provided. We also reproduce the results of Rosu [74] on opposite-side resilience in this LOB model. In the third chapter, we extend the LOB market impact model of Obizhaeva and Wang [65] by introducing two-sided resilience and a general LOB shape function. Two existing LOB market impact models are then replicated by our extended model by letting the cross-impact resilience rate go to zero and to infinity, respectively. In the last two chapters, we present two applications of our extended market impact model, which help us study the optimal execution problem and market regularity issues. We find that the minimum cost in the zero-spread LOB model is a lower bound on the minimum cost in our extended market impact LOB model, and that models with zero bid-ask spread have weaker regularity conditions than those with a non-zero bid-ask spread.
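The Obizhaeva-Wang style setting this abstract extends can be sketched with a block-shaped book and exponential resilience: each child order walks up the book, and the temporary impact decays before the next trade. This is a generic single-sided sketch (depth, resilience rate and prices are illustrative; the thesis's model adds two-sided resilience, cross-impact and a general shape function).

```python
import math

def execution_cost(trades, dt, q=1000.0, rho=1.0, s0=100.0):
    """Cost of a buy schedule in a block-shaped LOB with exponential
    resilience (an Obizhaeva-Wang style sketch): q is the constant depth,
    rho the resilience rate, s0 the unaffected best ask."""
    d, cost = 0.0, 0.0                             # d = outstanding impact
    for x in trades:
        cost += x * (s0 + d) + x * x / (2.0 * q)   # walk up the (shifted) book
        d += x / q                                 # impact of this child order
        d *= math.exp(-rho * dt)                   # book refills before next trade
    return cost
```

With positive resilience, splitting a parent order lets the book refill between child orders, so a split schedule is strictly cheaper than a single block of the same total size.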
APA, Harvard, Vancouver, ISO, and other styles