Theses on the topic "Qa273"


Consult the top 50 theses for your research on the topic "Qa273".


You can also download the full text of the scholarly publication in PDF format and read its abstract online when this information is included in the metadata.

Browse theses on a variety of disciplines and organise your bibliography correctly.

1

Chernyavsky, Igor L. « A multiscale analysis of flow and transport in the human placenta ». Thesis, University of Nottingham, 2011. http://eprints.nottingham.ac.uk/13678/.

Full text
Abstract:
The human placenta is characterised by a unique circulatory arrangement, with numerous villous trees containing fetal vessels immersed in maternal blood. Placental tissue therefore manifests a multiscale structure balancing microscopic delivery of nutrients and macroscopic flow. The aims of this study are to examine the interaction between these scales and to understand the influence of placental organisation on the effectiveness of nutrient uptake, which can be compromised in pathologies like pre-eclampsia and diabetes. We first systematically analyse solute transport by a unidirectional flow past an array of microscopic sinks, taking up a dissolved nutrient or gas, for both regular and random sink distributions. We classify distinct asymptotic transport regimes, each characterised by the dominance of advective, diffusive or uptake effects at the macroscale, and analyse a set of simplified model problems to assess the accuracy of homogenization approximations as a function of governing parameters (Péclet and Damköhler numbers) and the statistical properties of the sink distribution. The difference between the leading-order homogenization approximation and the exact solute distribution is determined by large spatial gradients at the scale of individual villi (depending on transport parameter values) and substantial fluctuations that can be correlated over lengthscales comparable to the whole domain. In addition, we consider the nonlinear advective effects of solute-carriers, such as red blood cells carrying oxygen. Homogenization of the solute-carrier-facilitated transport introduces an effective Péclet number that depends on the slowly varying leading-order concentration, so that an asymptotic transport regime can be changed within the domain. At large Péclet and Damköhler numbers (typical for oxygen transport in the human placenta), nonlinear advection due to solute-carriers leads to a more uniform solute distribution than for a linear carrier-free transport, suggesting a "homogenizing" effect of red blood cells on placental oxygen transport. We then use image analysis and homogenization concepts to extract the effective transport properties (diffusivity and hydraulic resistance) from the microscopic images of histological sections of the normal human placenta. The resulting two-dimensional tensor quantities allow us to assess the anisotropy of placental tissue for solute transport. We also show how the pattern of villous centres of mass can be characterised using an integral correlation measure, and identify the minimum spatial scale over which the distribution of villous branches appears statistically homogeneous. Finally, we propose a mathematical model for maternal blood flow in a placental functional unit (a placentone), describing flow of maternal blood via Darcy's law and steady advective transport of a dissolved nutrient. An analytical method of images and computational integration along streamlines are employed to find flow and solute concentration distributions, which are illustrated for a range of governing system parameters. Predictions of the model agree with experimental radioangiographic studies of tracer dynamics in the intervillous space. The model supports the hypothesis that basal veins are located on the periphery of the placentone in order to optimise delivery of nutrients.
We also explain the importance of dilatation of maternal spiral arteries and suggest the existence of an optimal volume fraction of villous tissue, both of which can be involved in placental dysfunction. The theoretical studies of this thesis thus constitute a step towards modelling-based diagnostics and treatment of placental disorders.
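As a toy illustration of the macroscale balance analysed in the first part, the sketch below (Python, not from the thesis) solves a steady one-dimensional advection-diffusion-uptake equation and shows how the Péclet and Damköhler numbers set the dominant effect; the equation, grid and boundary conditions are invented for the example.

import numpy as np

# Steady 1D advection-diffusion-uptake on x in [0, 1]:
#   Pe * dc/dx = d2c/dx2 - Da * c,   c(0) = 1, c'(1) = 0,
# a minimal macroscale analogue of flow past a field of uptake sinks.
def solve_transport(Pe, Da, n=200):
    h = 1.0 / (n - 1)
    A = np.zeros((n, n)); b = np.zeros(n)
    A[0, 0] = 1.0; b[0] = 1.0                       # inlet concentration
    for i in range(1, n - 1):                       # centred differences
        A[i, i - 1] = 1.0 / h**2 + Pe / (2 * h)
        A[i, i]     = -2.0 / h**2 - Da
        A[i, i + 1] = 1.0 / h**2 - Pe / (2 * h)
    A[-1, -1] = 1.0; A[-1, -2] = -1.0; b[-1] = 0.0  # zero-flux outlet
    return np.linalg.solve(A, b)

for Pe, Da in [(0.1, 0.1), (10, 0.1), (0.1, 10), (10, 10)]:
    c = solve_transport(Pe, Da)
    print(f"Pe={Pe:5}, Da={Da:5}: mean concentration {c.mean():.3f}")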
2

Li, Mengdi. « Stochastic modelling and optimization with applications to actuarial models ». Thesis, University of Nottingham, 2012. http://eprints.nottingham.ac.uk/12702/.

Full text
Abstract:
This thesis is devoted to ruin theory, which is sometimes referred to as collective ruin theory. In Actuarial Science, one of the most important problems is to determine the finite-time or infinite-time ruin probability of the risk process in an insurance company. To treat a realistic economic situation, the random interest factor should be taken into account. We first define the model with the interest rate, approximate the ruin probability for the model by Brownian motion, and develop several numerical methods to evaluate the ruin probability. Then we construct several models which incorporate possible investment strategies. We estimate the parameters from the simulated data. Then we find the optimal investment strategy with a given upper bound on the ruin probability. Finally we study the ruin probability for our class of models with heavy-tailed claim size distributions.
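For orientation, here is a minimal Monte Carlo sketch of estimating a finite-time ruin probability. The model assumptions (compound-Poisson surplus, exponential claims, continuously compounded interest, all parameter values) are illustrative and not taken from the thesis.

import numpy as np

rng = np.random.default_rng(1)

def ruin_probability(u, c, lam, claim_mean, r, T, n_paths=20000):
    # Surplus starts at u, earns premiums at rate c and interest at
    # rate r > 0, and pays exponential claims at Poisson rate lam.
    ruined = 0
    for _ in range(n_paths):
        x, t = u, 0.0
        while True:
            w = rng.exponential(1.0 / lam)          # wait for next claim
            if t + w > T:
                break
            # deterministic growth between claims: dx = (r*x + c) dt
            x = x * np.exp(r * w) + c * (np.exp(r * w) - 1.0) / r
            x -= rng.exponential(claim_mean)        # pay the claim
            t += w
            if x < 0:
                ruined += 1
                break
    return ruined / n_paths

print(ruin_probability(u=10.0, c=1.2, lam=1.0, claim_mean=1.0, r=0.03, T=50.0))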
3

Worby, Colin J. « Statistical inference and modelling for nosocomial infections and the incorporation of whole genome sequence data ». Thesis, University of Nottingham, 2013. http://eprints.nottingham.ac.uk/13154/.

Full text
Abstract:
Healthcare-associated infections (HCAIs) remain a problem worldwide, and can cause severe illness and death. The increasing level of antibiotic resistance among bacteria that cause HCAIs limits infection treatment options, and is a major concern. Statistical modelling is a vital tool in developing an understanding of HCAI transmission dynamics. In this thesis, stochastic epidemic models are developed and used with the aim of investigating methicillin-resistant Staphylococcus aureus (MRSA) transmission and intervention measures in hospital wards. A detailed analysis of MRSA transmission and the effectiveness of patient isolation was performed, using data collected from several general medical wards in London. A Markov chain Monte Carlo (MCMC) algorithm was used to derive parameter estimates, accounting for unobserved transmission dynamics. A clear reduction in transmission associated with the use of patient isolation was estimated. A Bayesian framework offers considerable benefits and flexibility when dealing with missing data; however, model comparison is difficult, and existing methods are far from universally accepted. Two commonly used Bayesian model selection tools, reversible jump MCMC and the deviance information criterion (DIC), were thoroughly investigated in a transmission model setting, using both simulated and real data. The collection of whole genome sequence (WGS) data is becoming easier, faster and cheaper than ever before. With WGS data likely to become abundant in the near future, the development of sophisticated analytical tools and models to exploit such genetic information is of great importance. New methods were developed to model MRSA transmission, using both genetic and epidemiological data, allowing for the reconstruction of transmission networks and simultaneous estimation of key transmission parameters. This approach was tested with simulated data and employed on WGS data collected from two Thai intensive care units. This work offers much scope for future investigations into genetic diversity and more complex transmission models, once suitable data become available.
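As a flavour of the inferential machinery, the sketch below runs a random-walk Metropolis algorithm for a single transmission rate in a toy discrete-time colonisation model; the data, prior and model are invented for illustration and are far simpler than those analysed in the thesis.

import numpy as np

rng = np.random.default_rng(0)

# Toy ward data: each day, n_sus susceptibles are exposed to n_col
# colonised patients and new_col acquisitions are observed.
n_sus   = np.array([20, 18, 17, 15, 14])
n_col   = np.array([ 2,  3,  4,  5,  6])
new_col = np.array([ 1,  1,  2,  1,  2])

def log_lik(beta):
    if beta <= 0:
        return -np.inf
    p = 1.0 - np.exp(-beta * n_col)    # daily acquisition probability
    return np.sum(new_col * np.log(p) + (n_sus - new_col) * np.log(1 - p))

# Random-walk Metropolis with an exponential(1) prior on beta.
beta, samples = 0.05, []
for it in range(20000):
    prop = beta + 0.02 * rng.normal()
    log_acc = (log_lik(prop) - prop) - (log_lik(beta) - beta)
    if np.log(rng.random()) < log_acc:
        beta = prop
    samples.append(beta)

print("posterior mean of beta:", np.mean(samples[5000:]))  # after burn-in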
4

Lee, Wai Ha. « Continuous and discrete properties of stochastic processes ». Thesis, University of Nottingham, 2010. http://eprints.nottingham.ac.uk/11194/.

Full text
Abstract:
This thesis considers the interplay between the continuous and discrete properties of random stochastic processes. It is shown that the special cases of the one-sided Lévy-stable distributions can be connected to the class of discrete-stable distributions through a doubly-stochastic Poisson transform. This facilitates the creation of a one-sided stable process for which the N-fold statistics can be factorised explicitly. The evolution of the probability density functions is found through a Fokker-Planck style equation which is of the integro-differential type and contains non-local effects which are different from those postulated for a symmetric-stable process, or indeed the Gaussian process. Using the same Poisson transform interrelationship, an exact method for generating discrete-stable variates is found. It has already been shown that discrete-stable distributions occur in the crossing statistics of continuous processes whose autocorrelation exhibits fractal properties. The statistical properties of a nonlinear filter analogue of a phase-screen model are calculated, and the level crossings of the intensity analysed. It is found that rather than being Poisson, the distribution of the number of crossings over a long integration time is either binomial or negative binomial, depending solely on the Fano factor. The asymptotic properties of the inter-event density of the process are found to be accurately approximated by a function of the Fano factor and the mean of the crossings alone.
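The Poisson-transform connection can be illustrated directly in the special case of index 1/2, where the one-sided stable law is a scaled Lévy distribution with a simple sampler. The sketch below is that illustrative special case only, not the thesis's general method; it checks P(N=0) against the probability generating function exp(-a(1-z)^nu) with a = sqrt(mu).

import numpy as np

rng = np.random.default_rng(7)

# For nu = 1/2, S = 0.5 / Z**2 with Z standard normal is one-sided
# stable with Laplace transform exp(-sqrt(s)); mixing a Poisson over
# mu*S yields a discrete-stable variate of index 1/2.
def discrete_stable_half(mu, size):
    z = rng.standard_normal(size)
    s = 0.5 / z**2
    return rng.poisson(mu * s)

x = discrete_stable_half(mu=1.0, size=100000)
print("P(N=0) empirical:           ", np.mean(x == 0))
print("P(N=0) exact exp(-sqrt(mu)):", np.exp(-1.0))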
5

Jones, Zofia. « Topics in the mathematical modelling of nanotoxicology ». Thesis, University of Nottingham, 2012. http://eprints.nottingham.ac.uk/12436/.

Full text
Abstract:
Over the last ten years questions related to the safety of nanoparticles and their possible toxic effects have become well-established. The government's Health and Safety Laboratories (HSL) at Buxton are currently attempting to determine their possible toxicity in the workplace. It is their responsibility to establish what levels of exposure can be considered safe in the workplace. This project is a CASE studentship with HSL and aims to start developing mathematical models relating to nanotoxicology. After reviewing the available literature, three key mechanisms which are involved in the possible toxicity of nanoparticles emerge. One mechanism is the oxidative stress they cause once they enter individual cells. The second mechanism is the damage done to the surface of the lung if they are not successfully phagocytosed on inhalation. Finally, the third mechanism is their propensity to aggregate both when dispersed in the air and when they are found inside the body. These three topics are dealt with in Parts I, II and III respectively. There has been much concern over how carbon nanotubes (CNTs) may cause oxidative stress. Oxidative stress occurs when there is an overload of negatively charged species in the cell. These are collectively known as Reactive Oxidative Species (ROS). ROS are always present in a cell as they are the natural product of the metabolic pathways. By their reactivity, they readily cause damage to other molecules in the cell, so every cell produces anti-oxidants in order to control the concentration of ROS. However, when the concentration of ROS becomes too high the concentration of anti-oxidants becomes depleted and the cell can become too damaged to function. In this case it dies by necrosis. When a cell dies by necrosis it can cause irritation and further damage to surrounding cells. Oxidative stress can also trigger the immune response so that the cell dies by self-programmed apoptotic cell death, which limits this damage to surrounding cells. It is best to avoid unnecessary cell death; however, not undergoing apoptosis risks a more damaging necrotic death. Part I develops models of Tumor Necrosis Factor-alpha (TNF-alpha) activated pathways. This model consists of three signalling cascades. One pathway triggers apoptosis while a second inhibits apoptosis. These two models are based on pre-existing models. This work introduces a third pathway which activates Activator Protein-1 (AP-1). This pathway includes two well-known ROS-sensitive elements. These are the ROS-sensitive activation of the Mitogen-Activated Protein Kinase (MAPK) cascade and the ROS-sensitive deactivation of the MAPK phosphatases. These three pathways are regulated by three sets of inhibiting reactions and inhibitors to these inhibitors. The effect of these inhibitors is to introduce a time-lag between the initial TNF-alpha extracellular signal and the death of the cell by apoptosis. This time-lag is regulated by the concentration of intracellular ROS and the concentration of anti-oxidants. Different combinations of inhibitors can be switched on or off before running the model. The effectiveness of the oxidative stress sensitive elements in regulating apoptosis can therefore be optimised while different sets of inhibitors are active. Two qualitatively different types of solutions are found. The cell can be either only transiently active, over a shorter period of time, or persistently active, over a longer period of time.
This could provide some guidance to biologists investigating TNF-alpha activation of the immune system. On inhalation, CNTs have been found to reach the alveoli, where air exchange occurs in the lung. The only mechanism available to remove debris in these delicate regions of the lung is provided by lung macrophages. Macrophages work by enclosing unwanted matter in an organelle called a lysosome and then moving this debris away to where it can be cleared by cilia. Non-organic material does not trigger a macrophage response as strongly as organic material, which also triggers the immune system. The shape of fibrous material makes it more difficult for a macrophage to successfully form a lysosome and to move the material away once it has been engulfed. Frustrated phagocytosis releases harmful acids and enzymes which can damage the alveoli, causing oxidative stress. If debris cannot be removed, then dead cells may form around the debris to protect the surrounding tissue, forming a granuloma. Both scarring from frustrated phagocytosis and granuloma formation will impair the function of the lung. In Part II, insight is gained on how a cell membrane can engulf an object with a high aspect ratio. The mechanisms of phagocytosis are complex in terms of both cell signalling cascades and the polymerisation and de-polymerisation of the actin network. In order to find a model which takes into account the geometry of a cell as a whole, this picture has been simplified. An energy minimisation approach is used where the surface of a cell is taken to be a surface of rotation around an axis, which is taken to be the axis of a fibre. In Chapter 4, the free energy is taken to be that of a liquid drop, resting on a solid surface, in vapour, where only the surface and volume energies are considered. The surface tension is taken to account for the tension in the lipid bilayer on the surface of the macrophage. In Chapter 5, the free energy is extended to also include a Helfrich or bending energy which specifically takes into account the energy taken to bend a lipid bilayer. It is assumed that, in order to conserve the limited resources of a macrophage, the shape of a lipid membrane which has successfully engulfed a particle will be energetically stable with regards to these surface, volume and bending energies as a macrophage reaches the final stages of phagocytosis. This does not take into account the energy required to remodel the cytoskeleton for the cell to reach this shape. However, the bending energy associated with cell membranes of increasing length can be used to suggest the amount of energy required in this dynamical process. It is found in Chapter 4 that, when no Helfrich energy is included in the energy minimisation, the only limiting shape possible in the limit of increasing length-to-radius ratio of the fibre is a sphere. When the Helfrich energy is included, three different boundary conditions are imposed. The first boundary condition sets the forces associated with the bending energy to zero at the edge of the membrane. At the point of contact between the membrane and the fibre, the forces reduce to those of a classical solid/liquid/vapour interface. The second boundary condition imposes the length of the droplet. This length can be incrementally increased to find solutions of increasing length. Finally, a third boundary condition is imposed which sets the contact angle of the membrane at the surface of the fibre to zero. By imposing these three boundary conditions, a variety of membrane shapes were obtained.
These results are expected to be a useful guide to experimentalists observing different shapes of macrophages under different conditions. Part III, in Section 7.1, pin-points frameworks of models which use concepts from polymer physics to predict the volume of an aggregate of CNTs and also to understand how nanoparticles interact with chain-like proteins. However, no new results are presented in Part III.
6

Verykouki, Eleni. « Stochastic modelling and Bayesian inference for the effect of antimicrobial treatments on transmission and carriage of nosocomial pathogens ». Thesis, University of Nottingham, 2013. http://eprints.nottingham.ac.uk/13642/.

Full text
Abstract:
Nosocomial pathogens are usually organisms such as fungi and bacteria that are associated with infections caused in a hospital environment. Examples include Clostridium difficile, Pseudomonas aeruginosa, Vancomycin-resistant enterococcus and Methicillin-resistant Staphylococcus aureus (MRSA). MRSA, like most of the nosocomial pathogens, is resistant to antibiotics and is one of the most serious causes of infections. In this thesis we assess the effects of antibiotics and antiseptics on carriage and transmission of MRSA. We use highly detailed patient-level data taken from two Intensive Care Unit (ICU) wards in Guy's and St Thomas' Hospital in London, where patients were receiving daily antimicrobial treatment and a decolonisation protocol was used. We work in discrete time and employ three different patient-level stochastic models in a Bayesian framework to explore the effectiveness of antimicrobial treatment on MRSA. We also develop suitable methods of model assessment. The first two models assume that there is no transmission between patients in the ICU wards. Initially a Markov model is used, assuming perfect swab test specificity and sensitivity, to describe the colonisation status of an individual on a daily basis. Results are obtained using Gaussian random walk Metropolis-Hastings algorithms. We find some evidence that decolonisation treatment and Oxazolidinone have a positive effect in clearing MRSA carriage. The second model is a hidden Markov model and assumes perfect swab test specificity but imperfect sensitivity. We obtain the results using data-augmented Markov Chain Monte Carlo (MCMC) algorithms to make inference for the unobserved patient colonisation states. We find evidence that the antiseptic treatment used during the decolonisation period is effective in the clearance of MRSA carriage. In the third case we assume that there is MRSA transmission between the patients in the ICUs. We use three different stochastic transmission models which overcome many of the unrealistic assumptions of other models. A data-augmented MCMC algorithm is employed in order to estimate the transmission rates of MRSA between the patients, assuming imperfect swab test sensitivity. We found no or limited evidence that antibiotic use affects the transmission process, whereas antiseptic treatment was found to have an effect.
7

Czogiel, Irina. « Statistical inference for molecular shapes ». Thesis, University of Nottingham, 2010. http://eprints.nottingham.ac.uk/12217/.

Full text
Abstract:
This thesis is concerned with developing statistical methods for evaluating and comparing molecular shapes. Techniques from statistical shape analysis serve as a basis for our methods. However, as molecules are fuzzy objects of electron clouds which constantly undergo vibrational motions and conformational changes, these techniques should be modified to be more suitable for the distinctive features of molecular shape. The first part of this thesis is concerned with the continuous nature of molecules. Based on molecular properties which have been measured at the atom positions, a continuous field-based representation of a molecule is obtained using methods from spatial statistics. Within the framework of reproducing kernel Hilbert spaces, a similarity index for two molecular shapes is proposed which can then be used for the pairwise alignment of molecules. The alignment is carried out using Markov chain Monte Carlo methods and posterior inference. In the Bayesian setting, it is also possible to introduce additional parameters (mask vectors) which allow for the fact that only part of the molecules may be similar. We apply our methods to a dataset of 31 steroid molecules which fall into three activity classes with respect to the binding activity to a common receptor protein. To investigate which molecular features distinguish the activity classes, we also propose a generalisation of the pairwise method to the simultaneous alignment of several molecules. The second part of this thesis is concerned with the dynamic aspect of molecular shapes. Here, we consider a dataset containing time series of DNA configurations which have been obtained using molecular dynamic simulations. For each considered DNA duplex, both a damaged and an undamaged version are available, and the objective is to investigate whether or not the damage induces a significant difference to the mean shape of the molecule. To do so, we consider bootstrap hypothesis tests for the equality of mean shapes. In particular, we investigate the use of a computationally inexpensive algorithm which is based on the Procrustes tangent space. Two versions of this algorithm are proposed. The first version is designed for independent configuration matrices while the second version is specifically designed to accommodate temporal dependence of the configurations within each group and is hence more suitable for the DNA data.
8

White, Simon Richard. « Stochastic epidemics conditioned on their final outcome ». Thesis, University of Nottingham, 2010. http://eprints.nottingham.ac.uk/11274/.

Full text
Abstract:
This thesis investigates the representation of a stochastic epidemic process as a directed random graph; we use this representation to impute the missing information in final size data to make Bayesian statistical inference about the model parameters using MCMC techniques. The directed random graph representation is analysed, in particular its behaviour under the condition that the epidemic has a given final size. This is used to construct efficient updates for MCMC algorithms. The MCMC method is extended to include two-level mixing models and two-type models, with a general framework given for an arbitrary number of levels and types. Partially observed epidemics, that is, where the number of susceptibles is unknown or where only a subset of the population is observed, are analysed. The method is applied to several well known data sets and comparisons are made with previous results. Finally, the method is applied to data of an outbreak of Equine Influenza (H3N8) at Newmarket in 2003, with a comparison to another analysis of the same data. Practical issues of implementing the method are discussed and are overcome using parallel computing (GNU OpenMP) and arbitrary precision arithmetic (GNU MPFR).
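A minimal sketch of the directed-random-graph representation (homogeneous mixing, a constant contact probability, and invented parameter values, so far simpler than the household and multi-type models of the thesis) is:

import numpy as np

rng = np.random.default_rng(3)

# Draw an arc i -> j if i, were it to become infected, would contact j.
# The final outbreak is then everyone reachable from the initial case.
def final_size(n, p, seed=0):
    arcs = rng.random((n, n)) < p
    np.fill_diagonal(arcs, False)
    infected, frontier = {seed}, [seed]
    while frontier:
        i = frontier.pop()
        for j in np.flatnonzero(arcs[i]):
            if j not in infected:
                infected.add(int(j)); frontier.append(int(j))
    return len(infected)

n, R0 = 200, 1.5
sizes = [final_size(n, R0 / n) for _ in range(500)]
print("mean final size:", np.mean(sizes))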
9

Deligiannidis, Georgios. « Some results associated with random walks ». Thesis, University of Nottingham, 2010. http://eprints.nottingham.ac.uk/13104/.

Full text
Abstract:
In this thesis we treat three problems from the theory and applications of random walks. The first question we tackle is from the theory of the optimal stopping of random walks. We solve the infinite-horizon optimal stopping problem for a class of reward functions admitting a representation introduced in Boyarchenko and Levendorskii [1], and obtain closed expressions for the expected reward and optimal stopping time. Our methodology is a generalization of an early paper by Darling et al. [2] and is based on probabilistic techniques: in particular a path decomposition related to the Wiener-Hopf factorization. Examples from the literature and perturbations are treated to demonstrate the flexibility of our approach. The second question is related to the path structure of lattice random walks. We obtain the exact asymptotics of the variance of the self-intersection local time Vn, which counts the number of times the paths of a random walk intersect themselves. Our approach extends and improves upon that of Bolthausen [3], by making use of complex power series. In particular we state and prove a complex Tauberian lemma, which avoids the assumption of monotonicity present in the classical Tauberian theorem. While a bound of order O(n²) has previously been claimed in the literature ([3], [4]), we argue that existing methods only show the upper bound O(n² log n), unless extra conditions are imposed to ensure monotonicity of the underlying sequence. Using the complex Tauberian approach we show that Var(Vn) ∼ Cn², thus settling a long-standing misunderstanding. Finally, in the last chapter, we prove a functional central limit theorem for one-dimensional random walk in random scenery, a result conjectured in 1979 by Kesten and Spitzer [5]. Essentially, random walk in random scenery is the process defined by the partial sums of a collection of random variables (the random scenery), sampled by a random walk. We show that for Z-valued random walk attracted to the symmetric Cauchy law, and centered random scenery with second moments, a functional central limit theorem holds, thus proving the Kesten and Spitzer [5] conjecture which had remained open since 1979. Our proof makes use of the asymptotic results obtained in Chapter 3.
10

Knock, Edward Stuart. « Stochastic epidemic models for emerging diseases incorporating household structure and contact tracing ». Thesis, University of Nottingham, 2011. http://eprints.nottingham.ac.uk/12046/.

Full text
Abstract:
In this thesis, three stochastic epidemic models for intervention for emerging diseases are considered. The models are variants of real-time, responsive intervention, based upon observing diagnosed cases and targeting intervention towards individuals they have infected or are likely to have infected, be they housemates or named contacts. These models are: (i) a local tracing model for a disease spreading amongst a community of households, wherein intervention (vaccination and/or isolation) is directed towards housemates of diagnosed individuals, (ii) a contact tracing model for a disease spreading amongst a homogeneously-mixing population, with isolation of traced contacts of a diagnosed individual, (iii) a local tracing and contact tracing model for a disease spreading amongst a community of households, with intervention directed towards housemates of both diagnosed and traced individuals. These are quantified by deriving threshold parameters that determine whether the disease will infect a few individuals or a sizeable proportion of the population, as well as probabilities for such events occurring.
11

Michelbrink, Daniel. « A Martingale approach to optimal portfolios with jump-diffusions and benchmarks ». Thesis, University of Nottingham, 2012. http://eprints.nottingham.ac.uk/12612/.

Full text
Abstract:
We consider various portfolio optimization problems when the stock prices follow jump-diffusion processes. In the first part the classical optimal consumption-investment problem is considered. The investor's goal is to maximize utility from consumption and terminal wealth over a finite investment horizon. We present results that modify and extend the duality approach that can be found in Kramkov and Schachermayer (1999). The central result is that the optimal trading strategy and optimal equivalent martingale measure can be determined as a solution to a system of non-linear equations. In another problem a benchmark process is introduced, which the investor tries to outperform. The benchmark can either be a generic jump-diffusion process or, as a special case, a wealth process of a trading strategy. Similar techniques as in the first part of the thesis can be applied to reach a solution. In the special case that the benchmark is a wealth process, the solution can be deduced from the first part's consumption-investment problem via a transform of the parameters. The benchmark problem presented here gives a different approach to benchmarks than in, for instance, Browne (1999b) or Pra et al. (2004). It is also, as far as the author is aware, the first time that martingale methods are employed for this kind of problem. As a side effect of our analysis some interesting relationships to Platen's benchmark approach (cf. Platen (2006)) and change of numeraire techniques (cf. Geman et al. (1995)) can be observed. In the final part of the thesis the set of trading strategies in the previous two problems is restricted by constraints. These constraints are, for example, a prohibition of short-selling or a restriction on the number of assets. Conditions are provided under which a solution to the two problems can still be found. This extends the work of Cvitanic and Karatzas (1993) to jump-diffusions where the initial market set-up is incomplete.
12

Mitchell, Mark J. « Mathematical modelling of carbon dioxide dissolution and reaction processes ». Thesis, University of Nottingham, 2012. http://eprints.nottingham.ac.uk/14502/.

Full text
Abstract:
Carbon dioxide dissolution into water is a ubiquitous chemical process on earth, and having a full understanding of this process is becoming ever more important as we seek to understand the consequences of 250 years of exponentially-increasing anthropogenic CO2 emissions to the atmosphere since the start of the Industrial Revolution. We examine the dissolution of CO2 into water in a number of contexts. First, we analyse what happens to a range of chemical species dissolved in water following an injection of additional CO2. We consider the well-mixed problem, and use the method of matched asymptotic expansions to obtain new expressions for the changes in the species' concentrations with time, the new final chemical equilibrium, and the time scales over which this equilibrium is reached, as functions of time, the parameters and the initial condition. These results can be used to help predict the changes in the pH and concentrations of dissolved carbonic species that will occur in the oceans as a result of anthropogenic CO2 emissions, and in saline aquifer formations after pumping CO2 deep underground. Second, we consider what happens deep underground in a saline aquifer when CO2 has been pumped in, spreads through the pore space, and dissolves into the resident water, when advection, diffusion, and chemical reaction have varying levels of relative importance. We examine the length scales over which the dissolved CO2 will spread out through an individual pore, ahead of a spreading drop of CO2, and the concentrations of the different chemical species within the pore, in the steady-state case. Finally, some experiments have been carried out to investigate the effect of an injection of gaseous CO2 on the chemical composition and pH of a saturated limestone aquifer formation. As the CO2 enters the soil, it dissolves into the water, and we model the changes in the chemical composition of the water/limestone mixture with time.
13

Ingrey, Philip Charles. « Optical limits in Left-Handed Media ». Thesis, University of Nottingham, 2010. http://eprints.nottingham.ac.uk/11392/.

Full text
Abstract:
This thesis determines the response of Left-Handed Media (LHM) to surface effects. A LHM half-space with a roughened interface, modelled by a graded-index boundary, is shown to give rise to an analytical solution for the propagation of electromagnetic radiation through this inhomogeneous layer. Significant field localization is generated within the layer, caused by the coherent superposition of evanescent waves. The localization is shown to greatly deteriorate transmission when losses are present. The addition of a second interface to the LHM, creating a perfect lens configuration, allows for the exploration of evanescent mode propagation through a perfect lens with roughened boundaries. The effects of the field localisations at the boundaries serve to diminish the resolving capability of the lens. Specifically, the layers produce an effect that is qualitatively similar to nonlinearly enhanced dissipation. Ray-optics is used to analyse negative refraction through a roughened interface, prescribed by Gaussian statistics. This shows that rays can focus at smaller distances from the interface due to the negative refractive effects. Moreover, a new reflection mechanism is shown to exist for LHM. Consequently an impedance-matched configuration involving LHM (such as the perfect lens) with a roughened interface can still display reflection. A physical-optics approach is used to determine the mean intensity and fluctuations of a wave passing into a half-space of LHM through a roughened interface in two ways: firstly, through the perturbation analysis of Rice theory, which shows that the scattered field evolves from a real Gaussian process near the surface into a complex Gaussian process as distance into the second medium increases; secondly, through large-scale Monte Carlo simulations that show that illuminating a roughened interface between air and a LHM produces a regime for enhanced focussing of light close to the boundary, generating caustics that are brighter, fluctuate more, and cause Gaussian speckle at distances closer to the interface than in right-handed matter.
14

Stone, Nicola. « Gaussian process emulators for uncertainty analysis in groundwater flow ». Thesis, University of Nottingham, 2011. http://eprints.nottingham.ac.uk/11989/.

Full text
Abstract:
In the field of underground radioactive waste disposal, complex computer models are used to describe the flow of groundwater through rocks. An important property in this context is transmissivity, the ability of the groundwater to pass through rocks, and the transmissivity field can be represented by a stochastic model. The stochastic model is included in complex computer models which determine the travel time for radionuclides released at one point to reach another. As well as the uncertainty due to the stochastic model, there may also be uncertainties in the inputs of these models. In order to quantify the uncertainties, Monte Carlo analyses are often used. However, for computationally expensive models, it is not always possible to obtain a large enough sample to provide accurate enough uncertainty analyses. In this thesis, we present the use of Bayesian emulation methodology as an alternative to Monte Carlo in the analysis of stochastic models. The idea behind Bayesian emulation methodology is that information can be obtained from a small number of runs of the model using a small sample from the input distribution. This information can then be used to make inferences about the output of the model given any other input. The current Bayesian emulation methodology is extended to emulate two statistics of a stochastic computer model; the mean and the distribution function of the output. The mean is a simple output statistic to emulate and provides some information about how the output changes due to changes in each input. The distribution function is more complex to emulate, however it is an important statistic since it contains information about the entire distribution of the outputs. Distribution functions of radionuclide travel times have been used as part of risk analyses for underground radioactive waste disposal. The extended methodology is presented using a case study.
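A bare-bones Gaussian-process emulator, of the generic kind this methodology builds on (squared-exponential kernel; the toy simulator, design and hyperparameters below are assumptions for illustration, not the thesis's groundwater model):

import numpy as np

rng = np.random.default_rng(5)

def simulator(x):                      # stand-in for an expensive model
    return np.sin(3 * x) + 0.5 * x

def k(a, b, ell=0.3, amp=1.0):         # squared-exponential kernel
    return amp * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

X = np.linspace(0, 1, 8)               # a small design of model runs
y = simulator(X)
Xs = np.linspace(0, 1, 101)            # untried inputs to predict at

K = k(X, X) + 1e-8 * np.eye(len(X))    # jitter for numerical stability
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
Ks = k(Xs, X)
mean = Ks @ alpha                      # emulator posterior mean
v = np.linalg.solve(L, Ks.T)
var = np.diag(k(Xs, Xs)) - np.sum(v**2, axis=0)   # posterior variance

print("max predictive sd:", np.sqrt(var.max()))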
15

Ball, Sue. « Stochastic models of ion channels ». Thesis, University of Nottingham, 2001. http://eprints.nottingham.ac.uk/11277/.

Full text
Abstract:
This thesis is concerned with models and inference for single ion channels. Molecular modelling studies are used as the basis for biologically realistic, large state-space gating models of the nicotinic acetylcholine receptor which enable single-channel kinetic behaviour to be characterized in terms of a small number of free parameters. A model is formulated which incorporates known structural information concerning pentameric subunit composition, interactions between neighbouring subunits and knowledge of the behaviour of agonist binding sites within the receptor-channel proteins. Expressions are derived for various channel properties and results are illustrated using numerical examples. The model is adapted and extended to demonstrate how properties of the calcium ion-activated potassium ion channel may be modelled. A two-state stochastic model for ion channels which incorporates time interval omission is examined. Two new methods for overcoming a non-identifiability problem induced by time interval omission are introduced and simulation studies are presented in support of these methods. A framework is presented for analysing the asymptotic behaviour of the method-of-moments estimators of the mean lengths of open and closed sojourns. This framework is used to clarify the origin of the non-identifiability and to construct confidence sets for the mean sojourn lengths. A conjecture concerning the number of solutions of the moment estimating equations is proved.
16

Shortland, Christopher Francis. « Some results on boundary hitting times for one-dimensional diffusion processes ». Thesis, University of Nottingham, 1993. http://eprints.nottingham.ac.uk/11465/.

Full text
Abstract:
Boundary hitting times for one-dimensional diffusion processes have applications in a variety of areas of mathematics. Unfortunately, for most choices of diffusions and boundaries, the exact exit distribution is unknown, and an approximation has to be made. The primary requirements of an approximation, from a practical viewpoint, are that it is both accurate and easily computable. The main currently used approximations are discussed, and a new method is developed for two-sided boundaries, where current methodology provides very few techniques. In order to produce new approximations, we will make use of results about the ordering of stochastic processes, and conditioning processes not to have hit a boundary. These topics are introduced in full detail, and a number of results are proved. The ability to order conditioned processes is exploited to provide exact, analytic bounds on the exit distribution. This technique also produces a new approximation, which, for Brownian motion exiting concave or convex boundaries, is shown to be superior to the standard tangent approximation. To illustrate the uses of these approximations, and general boundary hitting time results, we investigate a class of optimal stopping problems, motivated by a sequential analysis problem. Properties of the optimal stopping boundary are found using analytic techniques for a wide class of cost functions, and both one- and two-sided boundaries. A number of results are proved concerning the expected stopping cost in cases of "near optimality". Numerical examples are used, throughout this thesis, to illustrate the principal results and exit distribution approximations.
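For a straight-line boundary the exit distribution of Brownian motion is known in closed form (the Bachelier-Lévy formula), which makes a convenient benchmark for simulation-based approximations; a sketch with invented parameters, not one of the thesis's examples:

import numpy as np
from math import erf, exp, sqrt

rng = np.random.default_rng(11)

def Phi(x):                                  # standard normal CDF
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# First passage of standard Brownian motion to the line a + b*t, a > 0:
# P(tau <= T) = 1 - Phi((a+bT)/sqrt(T)) + exp(-2ab) * Phi((bT-a)/sqrt(T)).
a, b, T, dt = 1.0, 0.5, 2.0, 1e-3
exact = 1.0 - Phi((a + b * T) / sqrt(T)) + exp(-2 * a * b) * Phi((b * T - a) / sqrt(T))

steps = int(T / dt)
t = dt * np.arange(1, steps + 1)
n, hits = 10000, 0
for _ in range(n):
    w = np.cumsum(sqrt(dt) * rng.standard_normal(steps))
    hits += np.any(w >= a + b * t)

print("exact     P(tau <= T):", exact)
print("simulated P(tau <= T):", hits / n)   # slightly low: discrete grid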
17

Fayomi, Aisha Fouad. « Robust versions of classical multivariate techniques based on the Cauchy likelihood ». Thesis, University of Nottingham, 2013. http://eprints.nottingham.ac.uk/13446/.

Full text
Abstract:
Classical multivariate analysis techniques such as principal components analysis (PCA), canonical correlation analysis (CCA) and discriminant analysis (DA) can be badly affected when extreme outliers are present. The purpose of this thesis is to present new robust versions of these methods. Our approach is based on the following observation: the classical approaches to PCA, CCA and DA can all be interpreted as operations on a Gaussian likelihood function. Consequently, PCA, CCA and DA can be robustified by replacing the Gaussian likelihood with a Cauchy likelihood. The performance of the Cauchy version of each of these procedures is studied in detail both theoretically, through calculation of the relevant influence function, and numerically, through numerous examples involving real and simulated data. Our results demonstrate that the new procedures have good robustness properties which are certainly far superior to those of the classical versions.
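The core idea can be sketched for PCA as follows: fit a multivariate Cauchy (a t distribution with one degree of freedom) by EM, then take the eigenvectors of the fitted scatter matrix instead of the sample covariance. The data and tuning choices below are illustrative only; the thesis develops the theory and influence functions properly.

import numpy as np

rng = np.random.default_rng(2)

def cauchy_fit(X, iters=200):
    # EM for multivariate Cauchy location mu and scatter Sigma; the
    # weights (p+1)/(1+d) downweight points with large Mahalanobis d.
    n, p = X.shape
    mu, Sigma = X.mean(0), np.cov(X.T)
    for _ in range(iters):
        R = X - mu
        d = np.einsum('ij,jk,ik->i', R, np.linalg.inv(Sigma), R)
        w = (p + 1.0) / (1.0 + d)
        mu = (w[:, None] * X).sum(0) / w.sum()
        R = X - mu
        Sigma = (w[:, None] * R).T @ R / n
    return mu, Sigma

X = rng.standard_normal((300, 3)) @ np.diag([3.0, 1.0, 0.3])
X[:10] += 50.0                               # gross outliers
mu, Sigma = cauchy_fit(X)
vals, vecs = np.linalg.eigh(Sigma)
print("leading robust PC:", vecs[:, -1].round(2))  # ~ first axis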
18

Baaqeel, Hanan. « Central limit theorems and statistical inference for some random graph models ». Thesis, University of Nottingham, 2015. http://eprints.nottingham.ac.uk/29294/.

Full text
Abstract:
Random graphs and networks are of great importance in many fields, including mathematics, computer science, statistics, biology and sociology. This research aims to develop statistical theory and methods of statistical inference for random graphs in novel directions. A major strand of the research is the development of conditional goodness-of-fit tests for random graph models and for random block graph models. On the theoretical side, this entails proving a new conditional central limit theorem for certain graph statistics, which are closely related to the number of two-stars and the number of triangles, and where the conditioning is on the number of edges in the graph. A second strand of the research is to develop composite likelihood methods for estimation of the parameters in exponential random graph models. Composite likelihood methods based on edge data have previously been widely used. A novel contribution of the thesis is the development of composite likelihood methods based on more complicated data structures. The goals of this PhD thesis also include testing the numerical performance of the novel methods in extensive simulation studies and through applications to real graphical data sets.
19

Wilson, Lorna Rachel Maven. « The impact of periodicity on the zero-crossings of random functions ». Thesis, University of Nottingham, 2015. http://eprints.nottingham.ac.uk/30472/.

Full text
Abstract:
Continuous random processes are used to model a huge variety of real world phenomena. In particular, the zero-crossings of such processes find application in modelling processes of diffusion, meteorology, genetics, finance and applied probability. Understanding the zero-crossings behaviour improves prediction of phenomena initiated by a threshold crossing, as well as extremal problems where the turning points of the process are of interest. To identify the Probability Density Function (PDF) for the times between successive zero-crossings of a stochastic process is a challenging problem with a rich history. This thesis considers the effect of an oscillatory auto-correlation function on the zero-crossings of a Gaussian process. Examining statistical properties of the number of zeros in a fixed time period, it is found that increasing the rate of oscillations in the auto-correlation function results in more ‘deterministic’ realisations of the process. The random interval times between successive zeros become more regular, and the variance is reduced. Accurate calculation of the variance is achieved through analysing the correlation between intervals, which numerical simulations show can be anti-correlated or correlated, depending on the rate of oscillations in the auto-correlation function. The persistence exponent describes the tail of the inter-event PDF, which is steeper where zero-crossings occur more regularly. It exhibits a complex phenomenology, strongly influenced by the oscillatory nature of the auto-correlation function. The interplay between random and deterministic components of a system governs its complexity. In an ever-more complex world, the potential applications for this scale of ‘regularity’ in a random process are far reaching and powerful.
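A small simulation in the spirit of this set-up (with an illustrative choice of oscillatory autocorrelation, not necessarily the one studied in the thesis): sample a stationary Gaussian process by randomised spectral synthesis, count zero-crossings, and compare with the Rice rate sqrt(-rho''(0))/pi.

import numpy as np

rng = np.random.default_rng(4)

# Target autocorrelation rho(t) = exp(-t^2/2) * cos(omega*t); drawing
# spectral frequencies as omega + N(0,1) gives E[cos(Om*t)] = rho(t),
# so the synthesised process has exactly this autocorrelation.
def simulate_crossings(omega, T=200.0, n=20000, M=400):
    t = np.linspace(0, T, n)
    Om = omega + rng.standard_normal(M)      # random spectral frequencies
    ph = rng.uniform(0, 2 * np.pi, M)        # random phases
    x = np.sqrt(2.0 / M) * np.cos(np.outer(t, Om) + ph).sum(axis=1)
    return np.count_nonzero(np.diff(np.sign(x)) != 0) / T

for omega in [0.0, 2.0, 5.0]:
    rate = np.mean([simulate_crossings(omega) for _ in range(5)])
    print(f"omega={omega}: simulated rate {rate:.2f},",
          f"Rice rate {np.sqrt(1 + omega**2) / np.pi:.2f}")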
20

Cao, Yufei. « Zero-crossing intervals of Gaussian and symmetric stable processes ». Thesis, University of Nottingham, 2017. http://eprints.nottingham.ac.uk/39997/.

Full text
Abstract:
The zero-crossing problem is the determination of the probability density function of the intervals between the successive axis crossings of a stochastic process. This thesis studies the properties of the zero-crossings of stationary processes belonging to the symmetric-stable class, of Gaussian and non-Gaussian type, corresponding to stability index ν = 2 and 0 < ν < 2 respectively.
21

Foulger, Iain. « Quantum walks and quantum search on graphene lattices ». Thesis, University of Nottingham, 2014. http://eprints.nottingham.ac.uk/27717/.

Full text
Abstract:
This thesis details research I have carried out in the field of quantum walks, which are the quantum analogue of classical random walks. Quantum walks have been shown to offer a significant speed-up compared to classical random walks for certain tasks and for this reason there has been considerable interest in their use in algorithmic settings, as well as in experimental demonstrations of such phenomena. One of the most interesting developments in quantum walk research is their application to spatial searches, where one searches for a particular site of some network or lattice structure. There has been much work done on the creation of discrete- and continuous-time quantum walk search algorithms on various lattice types. However, it has remained an issue that continuous-time searches on two-dimensional lattices have required the inclusion of additional memory in order to be effective, memory which takes the form of extra internal degrees of freedom for the walker. In this work, we describe how the need for extra degrees of freedom can be negated by utilising a graphene lattice, demonstrating that a continuous-time quantum search in the experimentally relevant regime of two dimensions is possible. This is achieved through alternative methods of marking a particular site to those of previous searches, creating a quantum search protocol at the Dirac point in graphene. We demonstrate that this search mechanism can also be adapted to allow state transfer across the lattice. These two processes offer new methods for channelling information across lattices between specific sites and support the possibility of graphene devices which operate at a single-atom level. Recent experiments on microwave analogues of graphene that adapt these ideas, which we will detail, demonstrate the feasibility of realising the quantum search and transfer mechanisms on graphene.
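For context, a continuous-time quantum walk search is easy to demonstrate on a complete graph, where a good hopping rate and search time are known; the sketch below shows the generic mechanism and is not the graphene construction of the thesis.

import numpy as np

# Evolve under H = -gamma*A - |m><m| from the uniform state and watch
# the amplitude accumulate on the marked vertex m. On the complete
# graph, gamma = 1/N works and the walker finds the marked site in
# time of order pi*sqrt(N)/2.
N, m = 64, 0
A = np.ones((N, N)) - np.eye(N)          # complete-graph adjacency
H = -(1.0 / N) * A
H[m, m] -= 1.0                           # marked-vertex perturbation
psi0 = np.ones(N) / np.sqrt(N)           # uniform initial state

t_opt = np.pi * np.sqrt(N) / 2.0
w, V = np.linalg.eigh(H)                 # spectral evolution exp(-iHt)
psi = V @ (np.exp(-1j * w * t_opt) * (V.T @ psi0))
print("success probability at t_opt:", abs(psi[m])**2)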
22

Davis, Ben. « Stochastic epidemic models on random networks : casual contacts, clustering and vaccination ». Thesis, University of Nottingham, 2017. http://eprints.nottingham.ac.uk/47272/.

Full text
Abstract:
There has been considerable recent interest in models for epidemics on networks describing social contacts. This thesis considers a stochastic SIR (Susceptible - Infective - Removed) model for the spread of an epidemic among a population of individuals, with a random network of social contacts, that is partitioned into households and in which individuals also make casual contacts, i.e. with people chosen uniformly at random from the population. The behaviour of the model as the population tends to infinity is investigated. A threshold parameter that governs whether or not the epidemic with an initial infective can become established is obtained, as is the probability that such an outbreak occurs and, if so, how large it will become. The behaviour of this model is then compared to that of a finite population using Monte Carlo simulations. The effect of the different transmission routes on the final outcome of an epidemic and the effect of introducing social contacts and clustering to the network on the performance of various vaccination strategies are also investigated.
23

Pérez, López Iker. « Results in stochastic control : optimal prediction problems and Markov decision processes ». Thesis, University of Nottingham, 2015. http://eprints.nottingham.ac.uk/28395/.

Full text
Abstract:
This thesis is divided into two main topics. The first part studies variations of optimal prediction problems introduced in Shiryaev, Zhou and Xu (2008) and Du Toit and Peskir (2009) in a randomized terminal-time set-up and for different families of utility measures. The work presents optimal stopping rules that apply under different criteria, introduces a numerical technique to build approximations of stopping boundaries for fixed terminal-time problems, and suggests that previously reported stopping rules extend to certain generalizations of measures. The second part of the thesis is concerned with analysing optimal wealth allocation techniques within a defaultable financial market similar to Bielecki and Jang (2007). It studies a portfolio optimization problem combining a continuous-time jump market and a defaultable security, and presents numerical solutions through conversion into a Markov Decision Process and characterization of its value function as the unique fixed point of a contracting operator. This work analyses allocation strategies under several families of utility functions, and highlights significant portfolio selection differences with previously reported results.
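The fixed-point characterisation is the familiar one behind value iteration: the Bellman operator is a contraction with factor equal to the discount, so iterating it converges to the optimal value function. A generic sketch on a tiny random MDP (invented transition and reward arrays, unrelated to the portfolio application):

import numpy as np

rng = np.random.default_rng(9)
nS, nA, gamma = 5, 3, 0.9
P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # P[s, a] = next-state law
R = rng.random((nS, nA))                        # expected rewards

V = np.zeros(nS)
for _ in range(500):
    Q = R + gamma * P @ V                       # Bellman backup, Q[s, a]
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:       # contraction => converges
        break
    V = V_new

print("optimal values:", V.round(3))
print("greedy policy :", Q.argmax(axis=1))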
24

Tilūnaitė, Agnė. « Modelling of intracellular calcium dynamics ». Thesis, University of Nottingham, 2018. http://eprints.nottingham.ac.uk/48909/.

Full text
Abstract:
Ca2+ as a universal messenger participates in a great variety of physiological functions and biological events such as cell maturation, chemotaxis or gene expression. These diverse functions are controlled through complex spatio-temporal calcium patterns. To date it is known that these patterns depend on stimulus type and concentration. However, the majority of these observations were from constant or step-change stimulation protocols. Under these conditions two leading hypotheses for the stimulus encoding into cytosolic calcium responses were proposed, namely amplitude and frequency modulation. Under physiological conditions, however, cells often experience time-dependent stimuli such as transient changes in neurotransmitter or oscillations in hormone concentrations. How cells transduce such dynamic stimuli into an appropriate response is an open question. We exposed HEK293 cells and astrocytes to dynamically varying time courses of carbachol and ATP, respectively, and investigated the corresponding cellular calcium activity. While single cells generally fail to follow the applied stimulation due to their intrinsic stochasticity and heterogeneity, faithful signal reconstruction is observed at the population level. We suggest eight possible population representation measures and, using a mutual information measure, show that the area under the curve and the total number of spikes are the most informative ones. Next we provide simple transfer functions that explain how dynamic stimulation is encoded into the area under the curve and ensemble calcium spike rates. Cells in a physiological environment often experience diverse stimulation time courses which can be reproduced experimentally. Furthermore, cell populations may differ in the number of cells or exhibit various spatial distributions. In order to understand how these conditions affect population responses, we compute the single-cell response to a given dynamic stimulus. Single-cell variability and the small number of calcium spikes per cell pose a significant modelling challenge, but we demonstrate that Gaussian processes can successfully describe calcium spike rates in these circumstances and outperform standard tools such as peri-stimulus time histograms and kernel smoothing. Having the single-cell response model will allow us to compare responses of various sets of cells to the observed population response and consequently obtain insight into tissue-wide calcium oscillations for heterogeneous cell populations. Finally, in vivo astrocytes respond to a range of hormones and neurotransmitters. Furthermore, these agonists can have different characteristics; for example, glutamate is a fast excitatory transmitter, while ATP can be an inhibitory transmitter. Despite this, how (or if at all) astrocytes differentiate between different agonists is still not clear. We hypothesize that astrocytes discriminate between different stimuli by exploiting the spatio-temporal complexity of calcium responses. We show how 2D à trous wavelet decomposition combined with the Bhattacharyya distance measure can be applied to test this hypothesis.
25

Dinh, Jean-Louis T. Q. « Mathematical modelling of the floral transition ». Thesis, University of Nottingham, 2017. http://eprints.nottingham.ac.uk/45106/.

Full text
Abstract:
The floral transition is a developmental process through which some plants commit to flowering and stop producing leaves. This is controlled by changes in gene expression in the shoot apical meristem (SAM). Many of the genes involved are known, but their interactions are usually only studied one by one, or in small sets. While it might be necessary to properly ascertain the existence of regulatory interactions from a biological standpoint, it cannot really provide insight into the functioning of the floral-transition process as a whole. For this reason, a modelling approach has been used to integrate knowledge from multiple studies. Several approaches were applied, starting with ordinary differential equation (ODE) models. It revealed in two cases – one on rice and one on Arabidopsis thaliana – that the currently available data were not sufficient to build data-driven ODE models. The main issues were the low temporal resolution of the time series, the low spatial resolution of the sampling methods used on meristematic tissue, and the lack of gene expression measurements in studies of factors affecting the floral transition. These issues made the available gene expression time series of little use to infer the regulatory mechanisms involved. Therefore, another approach based on qualitative data was investigated. It relies on data extracted from published in situ hybridization (ISH) studies, and Boolean modelling. The ISH data clearly showed that shoot apical meristems are not homogeneous and contain multiple spatial domains corresponding to coexisting steady-states of the same regulatory network. Using genetic programming, Boolean models with the right steady-states were successfully generated. Finally, the third modelling approach builds upon one of the generated Boolean models and implements its logic in a 3D tissue model of the SAM. As Boolean models cannot represent quantitative spatio-temporal phenomena such as passive transport, the model had to be translated into ODEs. This model successfully reproduced the patterning of SAM genes in a static tissue structure. The main biological conclusions of this thesis are that the spatial organization of gene expression in the SAM is a crucial part of the floral transition and of the development of inflorescences, and that it is mediated by the transport of mobile proteins and hormones. On the modelling front, this work shows that quantitative ODE models, despite their popularity, cannot be applied to all situations. When the data are insufficient, simpler approaches like Boolean models and ODE models with qualitatively selected parameters can provide suitable alternatives and facilitate large-scale explorations of the space of possible models, due to their low computational cost.
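A steady state of a Boolean network is a state fixed by every update rule, so for small networks the coexisting steady states can be found by exhaustive enumeration. The three-gene rules below are invented for illustration and are not the floral-transition network.

import itertools

# A state is steady if every gene's rule returns its current value.
rules = {
    'A': lambda s: s['A'] or s['B'],        # self-maintaining, activated by B
    'B': lambda s: not s['C'],              # repressed by C
    'C': lambda s: s['A'] and not s['B'],   # activated by A, repressed by B
}

genes = sorted(rules)
for values in itertools.product([False, True], repeat=len(genes)):
    state = dict(zip(genes, values))
    if all(rules[g](state) == state[g] for g in genes):
        print("steady state:", {g: int(state[g]) for g in genes})

This tiny network already has two coexisting steady states, the Boolean analogue of the multiple spatial domains observed in the SAM.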
26

Gao, Yu. « Statistical modelling of games ». Thesis, University of Nottingham, 2016. http://eprints.nottingham.ac.uk/33298/.

Full text
Abstract:
This thesis focuses on the statistical modelling of a selection of games, namely the minority game, the urn model and the Hawk-Dove game. Chapters 1 and 2 give a brief introduction and survey of the field. In Chapter 3, the key characteristics of the minority game are reproduced. In addition, the minority game is extended to include wealth distribution and a leverage effect. By assuming that each player has initial wealth which rises and falls according to profit and loss, with the potential for borrowing and bankruptcy, we find that the modelled wealth distribution may follow a power law and that leverage increases the instability of the system. In Chapter 4, to explore the effects of memory, we construct a model where agents with memories of different lengths compete for finite resources. Using analytical and numerical approaches, we demonstrate that an instability exists at a critical memory length, and that players with different memory lengths are able to compete with each other and achieve a state of co-existence. The analytical solution is found to be connected to the well-known urn model. Additionally, our findings reveal that the temperature is related to the agent's memory. Due to its general nature, this memory model could potentially be relevant to a variety of other game models. In Chapter 5, our main finding is extended to the Hawk-Dove game by introducing the memory parameter to each agent playing the game. An assumption is made that agents try to maximise their profits by learning from past experiences stored in their finite memories. We show that the analytical results obtained from these two games are in agreement with the results from our simulations. It is concluded that the instability occurs when agents' memory lengths reach the critical value. Finally, Chapter 6 provides some concluding remarks and outlines some potential future work.
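For readers unfamiliar with the minority game, here is a minimal sketch of the standard Challet-Zhang setup (without the wealth and leverage extensions of the thesis); all parameter values are illustrative.

```python
# Minimal minority game: N agents, S strategies each, memory m.
import numpy as np

rng = np.random.default_rng(1)
N, S, m, T = 101, 2, 3, 2000
P = 2 ** m                                   # number of possible histories
strategies = rng.choice([-1, 1], size=(N, S, P))
scores = np.zeros((N, S))
history = 0
attendance = []

for t in range(T):
    best = scores.argmax(axis=1)             # each agent plays its best strategy
    actions = strategies[np.arange(N), best, history]
    A = actions.sum()
    attendance.append(A)
    # strategies that predicted the minority side gain virtual points
    scores += -strategies[:, :, history] * np.sign(A)
    bit = int(A > 0)                          # public information: sign of attendance
    history = ((history << 1) | bit) % P

print(np.var(attendance) / N)                # volatility per agent
```

The volatility per agent as a function of P/N reproduces the game's characteristic efficiency curve, the kind of key characteristic Chapter 3 reproduces before adding wealth dynamics.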
27

Wicks, Thomas J. « Molecular simulation of nucleation in polymers ». Thesis, University of Nottingham, 2016. http://eprints.nottingham.ac.uk/32012/.

Full text
Abstract:
We develop several new algorithms using molecular simulation to investigate the nucleation barrier of a single, freely-jointed polymer chain. In the first part of the thesis, we use a free particle model to develop a new biasing technique, which uses an automated feedback mechanism to overcome the poor sampling of crystal states in a thermodynamic system. Our feedback technique does not require any prior knowledge of the nucleation barrier and enables good representative sampling of all available states of interest. In the second part of the thesis, we simulate the nucleation barrier of the single, freely-jointed, square-well chain. We use our feedback technique and parallel tempering with a nonstandard temperature distribution to overcome poor sampling of crystal states and configuration mixing issues respectively. We also provide some comparative analysis of different choices of configurational order parameters for the single chain. Finally, we apply stretching to the chain to approximate flow-induced crystallisation and investigate the effect of different degrees of stretch on the nucleation barrier. We verify the quality of our simulation with careful monitoring of several criteria, including the acceptance ratios of configuration swaps between simulations with adjacent temperatures, evolution of the energy traces as a result of configuration swaps between tempering levels, and ensuring effective de-correlation of configurations through reptation moves. Our simulations provide strong reproducible results for the base, the peak and beyond the peak of the barrier for the quiescent and stretched single chain. We observe a remarkably strong effect of modest stretching on the nucleation barrier for a single chain, which can potentially lead to dramatic effects on the nucleation rate. Our simulation code has been made publicly available, with details provided in an appendix.
28

Duduială, Ciprian Ionut. « Stochastic nonlinear models of DNA breathing at a defect ». Thesis, University of Nottingham, 2010. http://eprints.nottingham.ac.uk/11027/.

Full text
Abstract:
Deoxyribonucleic acid (DNA) is a long polymer consisting of two chains of bases, in which the genetic information is stored. A base from one chain has a corresponding base on the other chain, which together form a so-called base-pair. Molecular-dynamics simulations of a normal DNA duplex show that breathing events – the temporary opening of one or more base-pairs – typically occur on the microsecond time-scale. Using the molecular dynamics package AMBER, we analyse, for twist angles in the range 30-40 degrees, a 12 base-pair DNA duplex solvated in a water box, which contains the 'rogue' base difluorotoluene (F) in place of a thymine base (T). This replacement makes breathing occur on the nanosecond time-scale. The cost of simulating such large systems, together with the variation of breathing length and frequency with helical twist, led us to create a simplified model capable of accurately predicting DNA behaviour. Starting from a nonlinear Klein-Gordon lattice model and adding noise and damping to our system, we obtain a new mesoscopic model of the DNA duplex whose behaviour is close to that observed in experiments and all-atom MD simulations. Defects are considered in the inter-chain interactions as well as in the along-chain interactions. The system parameters are fitted to AMBER data using the maximum likelihood method. This model enables us to discuss the role of the fluctuation-dissipation relations in the derivation of reduced (mesoscopic) models, the differences between the potential of mean force and the potential energies used in Klein-Gordon lattices, and how breathing can be viewed as competition between the along-chain elastic energy, the inter-chain binding energy and the entropy term of the system's free energy. Using traditional analysis methods, such as principal component analysis, data autocorrelation, normal modes and the Fourier transform, we compare the AMBER and SDE simulations to emphasize the strength of the proposed model. In addition, the Fourier transform of the trajectory of the A-F base-pair suggests that DNA is a self-organised system, and our SDE model is capable of preserving this behaviour. However, we conclude that the critical DNA behaviour needs further investigation, since it might offer some information about bubble nucleation and growth and even about DNA transcription and replication.
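A hedged sketch of this kind of stochastic lattice model: Euler-Maruyama integration of a damped, thermally driven Klein-Gordon-type chain with a Morse-like inter-chain potential. The potential and all parameters are illustrative stand-ins, not the AMBER-fitted values from the thesis.

```python
# Damped, noisy nonlinear lattice integrated with Euler-Maruyama.
import numpy as np

rng = np.random.default_rng(2)
n, dt, steps = 12, 1e-3, 50000
gamma, kT, k_chain = 0.5, 0.1, 1.0
y = np.zeros(n)      # base-pair openings
v = np.zeros(n)      # velocities

def morse_force(y, D=1.0, a=2.0):
    # force = -dV/dy for the Morse potential V(y) = D*(1 - exp(-a*y))**2
    e = np.exp(-a * y)
    return -2.0 * D * a * e * (1.0 - e)

for _ in range(steps):
    coupling = k_chain * (np.roll(y, 1) - 2 * y + np.roll(y, -1))  # along-chain
    noise = np.sqrt(2 * gamma * kT * dt) * rng.normal(size=n)       # thermal bath
    v += (morse_force(y) + coupling - gamma * v) * dt + noise
    y += v * dt

print(y)  # instantaneous base-pair displacements
```

The noise amplitude and damping are tied by the fluctuation-dissipation relation (variance 2*gamma*kT*dt), which is exactly the point the abstract raises about deriving reduced models.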
29

Blackwell, Paul Gavin. « The stochastic modelling of social and territorial behaviour ». Thesis, University of Nottingham, 1990. http://eprints.nottingham.ac.uk/13594/.

Full text
Abstract:
This thesis considers mathematical models of the interaction between social and territorial behaviour in animals, mainly by probabilistic methods. Chapter 1 introduces the Resource Dispersion Hypothesis, which suggests that territorial behaviour plus dispersed food resources can explain the existence of social groups, and describes an existing model of the process, due to Carr and Macdonald. In Chapter 2 the model of Carr and Macdonald is analysed, and in Chapter 3 an improved model is suggested and its main properties derived, primarily using renewal theory. Chapters 4 and 5 consider various spatial models for territory formation, and the effect of spatial factors on social behaviour, using analytic and simulation-based methods. Chapter 6 considers the evolution of social behaviour using both discrete-time deterministic models and branching processes to investigate the viability of different strategies of social behaviour in the presence of dispersed resources.
30

Spencer, Simon. « Stochastic epidemic models for emerging diseases ». Thesis, University of Nottingham, 2008. http://eprints.nottingham.ac.uk/11132/.

Full text
Abstract:
In this thesis several problems concerning the stochastic modelling of emerging infections are considered. Mathematical modelling is often the only available method of predicting the extent of an emerging disease and assessing proposed control measures, as there may be little or no available data on previous outbreaks. Only stochastic models capture the inherent randomness in disease transmission observed in real-life outbreaks, which can strongly influence the outcome of an emerging epidemic because case numbers will initially be small compared with the population size. Chapter 2 considers a model for diseases in which some of the cases exhibit no symptoms and are therefore difficult to observe. Examples of such diseases include influenza, mumps and polio. This chapter investigates the problem of determining whether or not the epidemic has died out if a period containing no symptomatic individuals is observed. When modelling interventions, it is realistic to include a delay between observing the presence of infection and the implementation of control measures. Chapter 3 quantifies the effect that the length of such a delay has on an epidemic amongst a population divided into households. As well as a constant delay, an exponentially distributed delay is also considered. Chapter 4 develops a model for the spread of an emerging strain of influenza in humans. By considering the probability that an outbreak will be contained within a region in which an intervention strategy is active, it becomes possible to quantify and therefore compare the effectiveness of intervention strategies.
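As a rough illustration of the delayed-intervention setting of Chapter 3, the sketch below runs a Gillespie simulation of a homogeneously mixing stochastic SIR epidemic in which the contact rate is cut a fixed delay after the first detection; the omission of household structure and all parameter values are simplifying assumptions.

```python
# Gillespie SIR with a fixed detection-to-intervention delay.
import numpy as np

def sir_with_delay(N=1000, beta=1.5, gamma_=1.0, delay=2.0, control=0.5, seed=0):
    rng = np.random.default_rng(seed)
    S, I, t = N - 1, 1, 0.0
    t_detect = 0.0                       # assume the index case is detected at t=0
    while I > 0:
        b = beta * (control if t >= t_detect + delay else 1.0)
        rate_inf = b * S * I / N         # infection events
        rate_rem = gamma_ * I            # removal events
        total = rate_inf + rate_rem
        t += rng.exponential(1.0 / total)
        if rng.uniform() < rate_inf / total:
            S, I = S - 1, I + 1
        else:
            I -= 1
    return N - 1 - S                     # final size, excluding the index case

print(np.mean([sir_with_delay(seed=s) for s in range(50)]))
```

Comparing the mean final size across values of `delay` quantifies, in this toy setting, the effect that the abstract studies for household-structured populations.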
31

Shaw, Laurence M. « SIR epidemics in a population of households ». Thesis, University of Nottingham, 2016. http://eprints.nottingham.ac.uk/38606/.

Full text
Abstract:
The severity of the outbreak of an infectious disease is highly dependent upon the structure of the population through which it spreads. This thesis considers the stochastic SIR (susceptible → infective → removed) household epidemic model, in which individuals mix with other individuals in their household at a far higher rate than with any other member of the population. This model gives a more realistic view of transmission dynamics for many diseases than the traditional model, in which all individuals in a population mix homogeneously, but retains mathematical tractability, allowing us to draw inferences from disease data. This thesis considers inference from epidemic data acquired both after an outbreak has finished and while it is still in its early, 'emerging' phase. An asymptotically unbiased method for estimating within-household infectious contact rate(s) from emerging epidemic data is developed, as well as hypothesis testing based on final size epidemic data. Finally, we investigate the use of both emerging and final size epidemic data to estimate the vaccination coverage required to prevent a large scale epidemic from occurring. Throughout the thesis we also consider the exact form of the household epidemic model which should be used. Specifically, we consider models in which the level of infectious contact between two individuals in the same household varies according to the size of their household.
32

Dunster, Joanne L. « Mathematical models of soft tissue injury repair : towards understanding musculoskeletal disorders ». Thesis, University of Nottingham, 2012. http://eprints.nottingham.ac.uk/27797/.

Full text
Abstract:
The process of soft tissue injury repair at the cellular level can be decomposed into three phases: acute inflammation including coagulation, proliferation and remodelling. While the later phases are well understood, the early phase is less so. We produce a series of new mathematical models for the early phases, coagulation and inflammation. The models produced are relevant not only to soft tissue injury repair but also to the many disease states in which coagulation and inflammation play a role. The coagulation cascade and the subsequent formation of the enzyme thrombin are central to the creation of blood clots. By focusing on a subset of reactions that occur within the coagulation cascade, we develop a model that exhibits a rich asymptotic structure. Using singular perturbation theory we produce a sequence of simpler time-dependent models which enable us to elucidate the physical mechanisms that underlie the cascade and the formation of thrombin. There is considerable interest in identifying new therapeutic targets within the coagulation cascade, as current drugs for treating pathological coagulation (thrombosis) target multiple factors and cause the unwelcome side effect of excessive bleeding. Factor XI is thought to be a potential therapeutic target, as it is implicated in pathological coagulation but not in haemostasis (the stopping of bleeding), but its mechanism of activation is controversial. By extending our previous model of the coagulation cascade to include the whole cascade (albeit in a simplistic way), we use numerical methods to simulate experimental data of the coagulation cascade under normal as well as specific-factor-deficient conditions. We then provide simulations supporting the hypothesis that thrombin activates factor XI. Interest in inflammation is now increasing due to it being implicated in such diverse conditions as Alzheimer's disease, cancer and heart disease. Inflammation can either resolve or settle into a self-perpetuating condition which, in the context of soft tissue repair, is termed chronic inflammation. Inflammation has traditionally been thought to subside gradually, but new biological interest centres on the anti-inflammatory processes (relating to macrophages) that are thought to promote resolution and the pro-inflammatory role that neutrophils can play by causing damage to healthy tissue. We develop a new ordinary differential equation model of the inflammatory process that accounts for populations of neutrophils and macrophages. We use numerical techniques and bifurcation theory to characterise and elucidate the physiological mechanisms that are dominant during the inflammatory phase and the roles they play in the healing process. There is therapeutic interest in modifying the rate of neutrophil apoptosis, but we find that increased apoptosis is anti-inflammatory only when accompanied by macrophage removal. We develop a simplified version of the model of inflammation, reducing a system of nine ordinary differential equations to six while retaining the physical processes of neutrophil apoptosis and macrophage-driven anti-inflammatory mechanisms. The simplified model reproduces the key outcomes that we relate to resolution or chronic inflammation. We then present preliminary work on the inclusion of the spatial effects of chemotaxis and diffusion.
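A toy caricature of the neutrophil-macrophage ODE modelling described above; the variables, functional forms and rates below are assumptions for illustration, not the thesis equations.

```python
# Two-variable inflammation caricature: neutrophils n, macrophages m.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, src=0.5, nu=1.0, phi=0.8, mu=0.3, k=0.5):
    n, m = y
    recruit = src + phi * n**2 / (k**2 + n**2)   # damage-driven pro-inflammatory feedback
    dn = recruit - nu * n                        # neutrophil apoptosis at rate nu
    dm = nu * n - mu * m                         # macrophages clear apoptotic neutrophils
    return [dn, dm]

sol = solve_ivp(rhs, (0.0, 50.0), [1.0, 0.0], dense_output=True)
n_end, m_end = sol.y[:, -1]
print(f"long-time state ~ n={n_end:.3f}, m={m_end:.3f}")
# Sweeping nu (apoptosis) and mu (macrophage removal) and tracking the
# long-time state is the numerical analogue of the bifurcation analysis
# distinguishing resolution from chronic inflammation.
```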
33

van, Horssen Merlijn. « Large deviations and dynamical phase transitions for quantum Markov processes ». Thesis, University of Nottingham, 2014. http://eprints.nottingham.ac.uk/27741/.

Full text
Abstract:
Quantum Markov processes are widely used models of the dynamics of open quantum systems, a fundamental topic in theoretical and mathematical physics with important applications in experimental realisations of quantum systems, such as ultracold atomic gases, and in new quantum information technologies such as quantum metrology and quantum control. In this thesis we present a mathematical framework which effectively characterises dynamical phase transitions in quantum Markov processes, using the theory of large deviations, by combining insights developed in non-equilibrium dynamics with techniques from quantum information and probability. We provide a natural decomposition of quantum Markov chains into phases, paving the way for the rigorous treatment of critical features of such systems, such as phase transitions and phase purification. A full characterisation of dynamical phase transitions beyond properties of the steady state is described from a dynamical perspective through critical behaviour of the quantum jump trajectories. We extend a fundamental result from large deviations for classical Markov chains, the Sanov theorem, to a quantum setting; we prove this Sanov theorem for the output of quantum Markov chains, a result which could be extended to a quantum Donsker-Varadhan theory. We perform an in-depth analysis of the atom maser, an infinite-dimensional quantum Markov process exhibiting various types of critical behaviour: for certain parameters it exhibits strong intermittency in the atom detection counts, and has a bistable stationary state. We show that the atom detection counts satisfy a large deviations principle, and therefore we deal with a phase cross-over rather than a genuine phase transition, although the latter occurs in the limit of infinite pumping rate. As a corollary, we obtain the Central Limit Theorem for the counting process.
34

Davies, Jonathan. « Sparse regression methods with measurement-error for magnetoencephalography ». Thesis, University of Nottingham, 2017. http://eprints.nottingham.ac.uk/48062/.

Full text
Abstract:
Magnetoencephalography (MEG) is a neuroimaging method for mapping brain activity based on magnetic field recordings. The inverse problem associated with MEG is severely ill-posed and is complicated by the presence of high collinearity in the forward (leadfield) matrix. This means that accurate source localisation can be challenging. The most commonly used methods for solving the MEG problem do not employ sparsity to help reduce the dimensions of the problem. In this thesis we review a number of the sparse regression methods that are widely used in statistics, as well as some more recent methods, and assess their performance in the context of MEG data. Due to the complexity of the forward model in MEG, the presence of measurement error in the leadfield matrix can create issues in the spatial resolution of the data. Therefore we investigate the impact of measurement error on sparse regression methods as well as how we can correct for it. We adapt the conditional score and simulation extrapolation (SIMEX) methods for use with sparse regression methods and build on an existing corrected lasso method to cover the elastic net penalty. These methods are demonstrated using a number of simulations for different types of measurement error and are also tested with real MEG data. The measurement-error methods perform well in simulations, including high-dimensional examples, where they are able to correct for attenuation bias in the true covariates. However, the extent of their correction is much more restricted in the more complex MEG data, where covariates are highly correlated and there is uncertainty over the distribution of the error.
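A hedged sketch of the SIMEX idea applied to a lasso fit: progressively add extra measurement noise to the covariates, track how the coefficients attenuate, and extrapolate back to the no-error level. The error variance, penalty and extrapolation degree below are illustrative choices, not the thesis methodology.

```python
# SIMEX-corrected lasso on a toy error-in-covariates problem.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
n, p, sigma_u = 200, 10, 0.5
X = rng.normal(size=(n, p))
beta = np.r_[2.0, -1.5, np.zeros(p - 2)]
y = X @ beta + rng.normal(0, 0.5, n)
W = X + rng.normal(0, sigma_u, (n, p))          # error-contaminated covariates

lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])   # added-noise multipliers
est = []
for lam in lambdas:
    fits = []
    for _ in range(20):                          # average over noise draws
        Wb = W + rng.normal(0, np.sqrt(lam) * sigma_u, (n, p))
        fits.append(Lasso(alpha=0.05).fit(Wb, y).coef_)
    est.append(np.mean(fits, axis=0))
est = np.array(est)

# Quadratic extrapolation of each coefficient back to lambda = -1 (zero error)
coef_simex = [np.polyval(np.polyfit(lambdas, est[:, j], 2), -1.0) for j in range(p)]
print(np.round(coef_simex, 2))
```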
35

Sajib, Anamul. « A Bayesian model for the unlabelled size-and-shape analysis ». Thesis, University of Nottingham, 2018. http://eprints.nottingham.ac.uk/55511/.

Full text
Abstract:
This thesis considers the development of efficient MCMC sampling methods for Bayesian models used for the pairwise alignment of two unlabelled configurations. We introduce ideas from differential geometry, along with other recent developments in unlabelled shape analysis, as a means of creating novel and more efficient MCMC sampling methods for such models. For example, we improve the performance of the sampler for the model of Green and Mardia (2006) by sampling the rotation A ∈ SO(3) using geodesic Monte Carlo (MCMC defined on a manifold) and the matching matrix using the Forbes and Lauritzen (2014) matching sampler, originally developed for the fingerprint matching problem. We also propose a new Bayesian model, together with implementation methods, motivated by the desire for further improvement. The proposed model and its implementation methods exploit the continuous nature of the parameter space of our Bayesian model and thus move around easily in this continuous space, providing highly efficient convergence and exploration of the target posterior distribution. The proposed Bayesian model and its implementation methods generalise two existing models, the Bayesian hierarchical and regression models introduced by Green and Mardia (2006) and Taylor, Mardia and Kent (2003) respectively, and resolve many shortcomings of existing implementation methods: slow convergence, trapping in local modes and dependence on initial starting values when sampling from high-dimensional and multi-modal posterior distributions. We illustrate our model and its implementation methods on the alignment of two proteins and two gels, and we find that the performance of the proposed implementation methods under the proposed model is better than current implementation techniques of existing models in both real and simulated data sets.
36

Howitt, Ryan. « Stochastic modelling of repeat-mediated phase variation in Campylobacter jejuni ». Thesis, University of Nottingham, 2018. http://eprints.nottingham.ac.uk/52218/.

Full text
Abstract:
It is of interest to determine how populations of bacteria whose genes exhibit an ON/OFF switching property (phase variation) evolve over time from an initial population. By statistical analysis of two in vitro experimental Campylobacter jejuni datasets containing 28 genes assumed to be phase variable, we find evidence of small networks of genes which exhibit dependent evolutionary behaviour. This violates the assumption that the genes in these datasets do not interact with one another in the way they mutate during the division of cells, motivating the development of a model which attempts to explain the evolution of such genes with factors other than mutation alone. We show that discrete probability distributions at observation times can be estimated by utilising two stochastic models. One model provides an explanation in terms of mutation rates in genes, resembling a Markov chain under the assumption of a near-infinite population size. The second provides an explanation in terms of both mutation and natural selection. However, the addition of selection parameters makes this model resemble a non-linear Markov process, which makes further analysis less straightforward. An algorithm is constructed to test whether the mutation-only model can sufficiently explain the evolution of single phase-variable genes, using distributions and mutation rates from data as examples. This algorithm shows that the mutation-only model is inadequate for the phase-variable genes believed to show dependent evolutionary behaviour. We use Approximate Bayesian Computation to estimate selection parameters for the mutation-with-selection model, whereby inference is derived from samples drawn from an approximation of the joint posterior distribution of the model parameters. We illustrate this method on an example of three genes which show evidence of dependent evolutionary behaviour in our two datasets.
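A minimal sketch of the ABC step for a mutation-with-selection model, assuming a toy Wright-Fisher-type simulator of a single ON/OFF gene; the simulator, prior range and acceptance tolerance are illustrative assumptions, not the thesis's model.

```python
# ABC rejection sampler for a selection coefficient s.
import numpy as np

rng = np.random.default_rng(4)

def simulate_on_fraction(s, mu=1e-3, N=10000, gens=100, f0=0.1):
    f = f0
    for _ in range(gens):
        f = f + mu * (1 - f) - mu * f        # symmetric ON <-> OFF mutation
        f = f * (1 + s) / (1 + s * f)        # selection favouring ON when s > 0
        f = rng.binomial(N, f) / N           # finite-population drift
    return f

observed = 0.62                              # e.g. final ON fraction in the data
accepted = [s for s in rng.uniform(-0.2, 0.2, 5000)
            if abs(simulate_on_fraction(s) - observed) < 0.02]
print(len(accepted), np.mean(accepted) if accepted else None)
```

The accepted values approximate draws from the posterior of s; extending the summary statistic to several genes jointly is what allows dependence between genes to be probed.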
37

Alharthi, Muteb. « Bayesian model assessment for stochastic epidemic models ». Thesis, University of Nottingham, 2016. http://eprints.nottingham.ac.uk/33182/.

Full text
Abstract:
A crucial practical advantage of infectious disease modelling as a public health tool lies in its application to evaluate various disease-control policies. However, such evaluation is of limited use unless a sufficiently accurate epidemic model is applied. If the model provides an adequate fit, it is possible to interpret parameter estimates, compare disease epidemics and implement control procedures. Methods to assess and compare stochastic epidemic models in a Bayesian framework are not well-established, particularly in epidemic settings with missing data. In this thesis, we develop novel methods for both model adequacy and model choice for stochastic epidemic models. We work with continuous-time epidemic models and assume that only case detection times of infected individuals are available, corresponding to removal times. Throughout, we illustrate our methods using both simulated outbreak data and real disease data. Data-augmented Markov chain Monte Carlo (MCMC) algorithms are employed to make inference for unobserved infection times and model parameters. Under a Bayesian framework, we first conduct a systematic investigation of three different but natural methods of model adequacy for SIR (Susceptible-Infective-Removed) epidemic models. We proceed to develop a new two-stage method for assessing the adequacy of epidemic models. In this two-stage method, two predictive distributions are examined, namely the predictive distribution of the final size of the epidemic and the predictive distribution of the removal times. The idea is based on looking explicitly at the discrepancy between the observed and predicted removal times using the posterior predictive model checking approach, in which the notions of Bayesian residuals and the posterior predictive p-value are utilized. This approach differs, most importantly, from classical likelihood-based approaches by taking into account uncertainty in both model stochasticity and model parameters. The two-stage method explores how SIR models with different infection mechanisms, infectious periods and population structures can be assessed and distinguished given only a set of removal times. In the last part of this thesis, we consider Bayesian model choice methods for epidemic models. We derive explicit forms for Bayes factors in two different epidemic settings, given complete epidemic data. Additionally, in the setting where the available data are partially observed, we extend the existing power posterior method for estimating Bayes factors to models incorporating missing data and successfully apply our missing-data extension of the power posterior method to various epidemic settings. We further consider the performance of the deviance information criterion (DIC) method to select between epidemic models.
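To make the posterior predictive check concrete, here is a generic hedged sketch: for each posterior draw, simulate replicate data and record how often a discrepancy measure for the replicate exceeds that of the observed data. The exponential "removal time" model below is a stand-in, not the thesis's epidemic model.

```python
# Posterior predictive p-value for a stand-in removal-time model.
import numpy as np

rng = np.random.default_rng(6)
observed = rng.exponential(1.0, 30)             # stand-in: observed removal times

def simulate_replicate(theta, n):
    return rng.exponential(1.0 / theta, n)      # replicate data given rate theta

def discrepancy(data, theta):
    return abs(data.mean() - 1.0 / theta)       # distance to the model mean

# Posterior for the exponential rate under a vague prior (conjugate gamma)
posterior_draws = rng.gamma(observed.size, 1.0 / observed.sum(), 2000)
exceed = [discrepancy(simulate_replicate(th, observed.size), th)
          >= discrepancy(observed, th) for th in posterior_draws]
print("posterior predictive p-value:", np.mean(exceed))
```

A p-value near 0 or 1 flags inadequacy; values near 0.5 indicate the observed discrepancy is typical of data the fitted model generates, which is the logic behind the two-stage check.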
38

Haque, Mainul. « Mathematical modelling of eukaryotic stress-response gene networks ». Thesis, University of Nottingham, 2012. http://eprints.nottingham.ac.uk/12509/.

Full text
Abstract:
Mathematical modelling of gene regulatory networks is a relatively new area which is playing an important role in theoretical and experimental investigations that seek to open the door to understanding the real mechanisms that take place in living systems. The current thesis concentrates on studying the animal stress-response gene regulatory network by seeking to predict the consequences of environmental hazards caused by chemical mixtures (typical of industrial pollution). Organisms exposed to pollutants display multiple defensive stress responses, which together constitute an interlinked gene network (the Stress-Response Network; SRN). Multiple SRN reporter-gene outputs have been monitored during single and combined chemical exposures in transgenic strains of two invertebrates, Caenorhabditis elegans and Drosophila melanogaster. Reporter expression data from both species have been integrated into mathematical models describing the dynamic behaviour of the SRN and incorporating its known regulatory gene circuits. We describe mathematical models of several different stress-response networks, incorporating various methods of activation and inhibition, including the formation of complexes and gene regulation (through several known transcription factors). Although the full details of the protein interactions forming these types of circuits are not yet well known, we seek to include the relevant proteins acting in different cellular compartments. We propose and analyse a number of different models that describe four different stress-response gene networks and, through a combination of analytical (including stability, bifurcation and asymptotic) and numerical methods, we study these models to gain insight into the effect of several stresses on gene networks. A detailed time-dependent asymptotic analysis is performed for the relevant models in order to clarify the roles of the distinct biochemical reactions that make up several important protein production processes. In two models we were able to verify the theoretical predictions against the corresponding laboratory observations carried out by collaborators in Britain and India.
39

March, Jack. « Determining the location of an impact site from bloodstain spatter patterns : computer-based analysis of estimate uncertainty ». Thesis, University of Nottingham, 2005. http://eprints.nottingham.ac.uk/10166/.

Full text
Abstract:
The estimation of the location in which an impact event took place from its resultant impact spatter bloodstain pattern can be a significant investigative issue in the reconstruction of a crime scene. The bloodstain pattern analysis methods through which an estimate is constructed utilise the established bloodstain pattern analysis principles of spatter bloodstain directionality, impact angle calculation, and straight-line trajectory approximation. Uncertainty, however, can be shown to be present in the theoretical definition and practical approximation of an impact site; the theoretical justification for impact angle calculation; spatter bloodstain sample selection; the dimensional measurement of spatter bloodstain morphologies; the inability to fully incorporate droplet flight dynamics; and the limited numerical methods used to describe mathematical estimates. An experimental computer-based research design was developed to investigate this uncertainty. A series of experimental impact spatter patterns were created, and an exhaustive spatter bloodstain recording methodology developed and implemented. A computer application was developed providing a range of analytical approaches to the investigation of estimate uncertainty, including a three-dimensional computer graphic virtual investigative environment. The analytical computer application was used to generate a series of estimates using a broad spatter bloodstain sampling strategy, with six potentially probative estimates analysed in detail. Two additional pilot projects investigating the utility of a sampled photographic recording methodology and an automated image analysis approach to spatter bloodstain measurement were also conducted. The results of these analyses indicate that, with further development, the application of similar analytical approaches to the construction and investigation of an estimate could prove effective in minimising the effect that estimate uncertainty might have on informing the conclusions of this forensic reconstructive process, and thereby reaffirm the scientific expert evidential status of estimate techniques within legal contexts.
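The established bloodstain-pattern principles the abstract cites can be made concrete with a small sketch: the standard stain-ellipse relation alpha = arcsin(width/length) for the impact angle, and a straight-line (tangent) height estimate from the 2D point of convergence. The numbers are purely illustrative.

```python
# Impact angle and straight-line origin-height estimate for one stain.
import math

width, length = 4.0, 8.0                  # stain ellipse axes (mm)
alpha = math.degrees(math.asin(width / length))
distance_to_convergence = 300.0           # mm, from 2D directionality lines
height = distance_to_convergence * math.tan(math.radians(alpha))
print(f"impact angle ~ {alpha:.1f} deg, origin height ~ {height:.0f} mm")
```

Because droplets actually follow curved trajectories, this straight-line approximation systematically overestimates the origin height, which is one of the sources of uncertainty the thesis investigates.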
40

Hall, Fenella T. H. « Mathematical models for class-D amplifiers ». Thesis, University of Nottingham, 2011. http://eprints.nottingham.ac.uk/11891/.

Full text
Abstract:
We here analyse a number of class-D amplifier topologies. Class-D amplifiers operate by converting an audio input signal into a high-frequency square wave output, whose lower-frequency components can accurately reproduce the input. Their high power efficiency and potential for low distortion make them suitable for use in a wide variety of electronic devices. By calculating the outputs from a classical class-D design implementing different sampling schemes, we demonstrate that a more recent method, called the Fourier transform/Poisson resummation method, has many advantages over the double Fourier series method, which is the traditional technique employed for this analysis. We thereby show that when natural sampling is used the input signal is reproduced exactly in the low-frequency part of the output, with no distortion. Although this is a known result, our calculations present the method and notation that we later develop. The classical class-D design is prone to noise, and therefore negative feedback is often included in the circuit. Subsequently we incorporate the Fourier transform/Poisson resummation method into a formalised and succinct analysis of a first-order negative feedback amplifier. Using perturbation expansions we derive the audio-frequency part of the output, demonstrating that negative feedback introduces undesirable distortion. Here we reveal the next-order terms in the output compared with previous work, giving further insight into the nonlinear distortion. We then further extend the analysis to examine two more complex negative feedback topologies, namely a second-order and a derivative negative feedback design. Modelling each of these amplifiers presents an increased challenge due to the differences in their respective circuit designs, and in addition, for the derivative negative feedback amplifier we must consider scaling regimes based on the relative magnitudes of the frequencies involved. For both designs we establish novel expressions for the output, including the most significant distortion terms.
41

Milne, Andrew. « Topics in flow in fractured media ». Thesis, University of Nottingham, 2011. http://eprints.nottingham.ac.uk/11957/.

Full text
Abstract:
Many geological formations consist of crystalline rocks that have very low matrix permeability but allow flow through an interconnected network of fractures. Understanding the flow of groundwater through such rocks is important in considering disposal of radioactive waste in underground repositories. A specific area of interest is the conditioning of fracture transmissivities on measured values of pressure in these formations. This is the process where the values of fracture transmissivities in a model are adjusted to obtain a good fit of the calculated pressures to measured pressure values. While there are existing methods to condition transmissivity fields on transmissivity, pressure and flow measurements for a continuous porous medium there is little literature on conditioning fracture networks. Conditioning fracture transmissivities on pressure or flow values is a complex problem because the measured pressures are dependent on all the fracture transmissivities in the network. This thesis presents two new methods for conditioning fracture transmissivities in a discrete fracture network on measured pressure values. The first approach adopts a linear approximation when fracture transmissivities are mildly heterogeneous; this approach is then generalised to the minimisation of an objective function when fracture transmissivities are highly heterogeneous. This method is based on a generalisation of previous work on conditioning transmissivity values in a continuous porous medium. The second method developed is a Bayesian conditioning method. Bayes’ theorem is used to give an expression of proportionality for the posterior distribution of fracture log transmissivities in terms of the prior distribution and the data available through pressure measurements. The fracture transmissivities are assumed to be log normally distributed with a given mean and covariance, and the measured pressures are assumed to be normally distributed values each with a given error. From the expression of proportionality for the posterior distribution of fracture transmissivities the modes of the posterior distribution (the points of highest likelihood for the fracture transmissivities given the measured pressures) are numerically computed. Both algorithms are implemented in the existing finite element code NAPSAC developed and marketed by Serco Technical Services, which models groundwater flow in a fracture network.
42

Stephens, David A. « Bayesian edge-detection in image processing ». Thesis, University of Nottingham, 1990. http://eprints.nottingham.ac.uk/11723/.

Full text
Abstract:
Problems associated with the processing and statistical analysis of image data are the subject of much current interest, and many sophisticated techniques for extracting semantic content from degraded or corrupted images have been developed. However, such techniques often require considerable computational resources, and thus are, in certain applications, inappropriate. The detection of localised discontinuities, or edges, in the image can be regarded as a pre-processing operation in relation to these sophisticated techniques which, if implemented efficiently and successfully, can provide a means for an exploratory analysis that is useful in two ways. First, such an analysis can be used to obtain quantitative information relating to the underlying structures from which the various regions in the image are derived, about which we would generally be a priori ignorant. Secondly, in cases where the inference problem relates to discovery of the unknown location or dimensions of a particular region or object, or where we merely wish to infer the presence or absence of structures having a particular configuration, an accurate edge-detection analysis can circumvent the need for the subsequent sophisticated analysis. Relatively little interest has been focussed on the edge-detection problem within a statistical setting. In this thesis, we formulate the edge-detection problem in a formal statistical framework, and develop a simple and easily implemented technique for the analysis of images derived from two-region single-edge scenes. We extend this technique in three ways: first, to allow the analysis of more complicated scenes; secondly, by incorporating spatial considerations; and thirdly, by considering images of various qualitative natures. We also study edge reconstruction and representation given the results obtained from the exploratory analysis, and a cognitive problem relating to the detection of objects modelled by members of a class of simple convex objects. Finally, we study in detail aspects of one of the sophisticated image analysis techniques, and the important general statistical applications of the theory on which it is founded.
43

Amaral, Getulio J. A. « Bootstrap and empirical likelihood methods in statistical shape analysis ». Thesis, University of Nottingham, 2004. http://eprints.nottingham.ac.uk/11399/.

Full text
Abstract:
The aim of this thesis is to propose bootstrap and empirical likelihood confidence regions and hypothesis tests for use in statistical shape analysis. Bootstrap and empirical likelihood methods have some advantages when compared to conventional methods. In particular, they are nonparametric methods, so it is not necessary to choose a family of distributions when building confidence regions or testing hypotheses. There has been very little work on bootstrap and empirical likelihood methods in statistical shape analysis. Only one paper (Bhattacharya and Patrangenaru, 2003) has considered bootstrap methods in statistical shape analysis, and only for constructing confidence regions. There are no published papers on the use of empirical likelihood methods in statistical shape analysis. Existing methods for building confidence regions and testing hypotheses in shape analysis have some limitations. The Hotelling and Goodall confidence regions and hypothesis tests are not appropriate for data sets with low concentration. The main reason is that these methods are designed for data with high concentration, and if this assumption is violated, the methods do not perform well. On the other hand, simulation results show that the bootstrap and empirical likelihood methods developed in this thesis are appropriate for the statistical shape analysis of data sets with low concentration. For highly concentrated data sets all the methods show similar performance. Theoretical aspects of bootstrap and empirical likelihood methods are also considered. Both methods are based on asymptotic results, and those results are explained in this thesis. It is proved that the bootstrap methods proposed in this thesis are asymptotically pivotal. Computational aspects are discussed. All the bootstrap algorithms are implemented in "R". An algorithm for computing empirical likelihood tests for several populations is also implemented in "R".
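A minimal sketch of a pivotal bootstrap confidence region, assuming toy 2D coordinates as a stand-in for shape coordinates: a studentised (Hotelling-type) statistic is bootstrapped and its quantile defines the region, echoing the asymptotic pivotality result mentioned above.

```python
# Bootstrap-t (studentised) confidence region for a 2D mean.
import numpy as np

rng = np.random.default_rng(7)
data = rng.normal([1.0, 2.0], 0.5, size=(40, 2))   # toy planar coordinates
mean = data.mean(axis=0)

def studentised(sample, centre):
    d = sample.mean(axis=0) - centre
    S = np.cov(sample, rowvar=False) / len(sample)  # covariance of the mean
    return float(d @ np.linalg.solve(S, d))         # Hotelling-type statistic

stats = [studentised(data[rng.integers(0, 40, 40)], mean) for _ in range(2000)]
crit = np.quantile(stats, 0.95)
print("95% region: all x with T(x) <=", round(crit, 2))
```

Because the statistic is studentised, the bootstrap distribution of T does not depend (asymptotically) on unknown scale parameters, which is what makes the resulting region pivotal.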
44

Demiris, Nikolaos. « Bayesian inference for stochastic epidemic models using Markov chain Monte Carlo methods ». Thesis, University of Nottingham, 2004. http://eprints.nottingham.ac.uk/10078/.

Full text
Abstract:
This thesis is concerned with statistical methodology for the analysis of stochastic SIR (Susceptible->Infective->Removed) epidemic models. We adopt the Bayesian paradigm and develop suitably tailored Markov chain Monte Carlo (MCMC) algorithms. The focus is on methods that are easy to generalise in order to accommodate epidemic models with complex population structures. Additionally, the models are general enough to be applicable to a wide range of infectious diseases. We introduce the stochastic epidemic models of interest and the MCMC methods we shall use, and we review existing methods of statistical inference for epidemic models. We develop algorithms that utilise multiple-precision arithmetic to overcome the well-known numerical problems in the calculation of the final size distribution for the generalised stochastic epidemic. Consequently, we use these exact results to evaluate the precision of asymptotic theorems previously derived in the literature. We also use the exact final size probabilities to obtain the posterior distribution of the threshold parameter R_0. We proceed to develop methods of statistical inference for an epidemic model with two levels of mixing. This model assumes that the population is partitioned into subpopulations and permits infection on both local (within-group) and global (population-wide) scales. We adopt two different data augmentation algorithms. The first method introduces an appropriate latent variable, the final severity, for which we have asymptotic information in the event of an outbreak among a population with a large number of groups. Hence, approximate inference can be performed conditional on a 'major' outbreak, a common assumption for stochastic processes with threshold behaviour such as epidemics and branching processes. In the last part of this thesis we use a random graph representation of the epidemic process and we impute more detailed information about the infection spread. The augmented state-space contains aspects of the infection spread that have been impossible to obtain before. Additionally, the method is exact in the sense that it works for any (finite) population and group sizes and it does not assume that the epidemic is above threshold. Potential uses of the extra information include the design and testing of appropriate prophylactic measures like different vaccination strategies. An attractive feature is that the two algorithms complement each other, in the sense that when the number of groups is large the approximate method (which is faster) is almost as accurate as the exact one and can be used instead. Finally, it is straightforward to extend our methods to more complex population structures like overlapping groups, small-world and scale-free networks.
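The final-size computation referred to above can be illustrated with the classical triangular system for the Markovian SIR model (exponential infectious periods), evaluated here in exact rational arithmetic as a simple stand-in for the multiple-precision approach; the numerical instability arises precisely because this recursion is badly conditioned in floating point.

```python
# Exact final-size distribution for the generalised stochastic epidemic
# via the classical triangular system, in exact rational arithmetic.
from fractions import Fraction
from math import comb

def final_size_dist(n, m, lam, gamma):
    """n initial susceptibles, m initial infectives, infection rate lam,
    removal rate gamma; returns P(final size = 0..n) as exact Fractions."""
    lam, gamma = Fraction(lam), Fraction(gamma)
    phi = lambda theta: gamma / (gamma + theta)   # Laplace transform of Exp(gamma)
    p = []
    for l in range(n + 1):
        phil = phi(lam * (n - l) / n)
        s = sum(comb(n - k, l - k) * p[k] / phil ** (k + m) for k in range(l))
        p.append((comb(n, l) - s) * phil ** (l + m))
    return p

dist = final_size_dist(n=20, m=1, lam=2, gamma=1)
assert sum(dist) == 1                             # exact arithmetic: sums to one
print([float(x) for x in dist[:5]])
```

The same exact probabilities, computed for a grid of lam values, are what allow a posterior for the threshold parameter R_0 to be evaluated without Monte Carlo error.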
45

Christen, José Andrés. « Bayesian interpretation of radiocarbon results ». Thesis, University of Nottingham, 1994. http://eprints.nottingham.ac.uk/11035/.

Full text
Abstract:
Over the last thirty years radiocarbon dating has been widely used in archaeology and related fields to address a wide range of chronological questions. Because of some inherent stochastic factors of a complex nature, radiocarbon dating presents a rich source of challenging statistical problems. The chronological questions posed commonly involve the interpretation of groups of radiocarbon determinations, and often substantial amounts of a priori information are available. The statistical techniques used until very recently could only deal with the analysis of one determination at a time, and no prior information could be included in the analysis. However, over the last few years some problems have been successfully tackled using the Bayesian paradigm. In this thesis we expand that work and develop a general statistical framework for the Bayesian interpretation of radiocarbon determinations. Firstly, we consider the problem of radiocarbon calibration and develop a novel approach. Secondly, we develop a statistical framework which permits the inclusion of prior archaeological knowledge and illustrate its use with a wide range of examples. We discuss various generic problems, among them replication, summarisation, floating chronologies and archaeological phase structures. The techniques used to obtain the posterior distributions of interest are numerical and, in most cases, we have used Markov chain Monte Carlo (MCMC) methods. We also discuss the sampling routines needed for the implementation of the MCMC methods used in our examples. Thirdly, we address the very important problem of outliers in radiocarbon dating and develop an original methodology for the identification of outliers in sets of radiocarbon determinations. We show how our framework can be extended to permit the identification of outliers. Finally, we apply this extended framework to the analysis of a substantial archaeological dating problem.
46

Polson, Nicholas G. « Bayesian perspectives on statistical modelling ». Thesis, University of Nottingham, 1988. http://eprints.nottingham.ac.uk/11292/.

Full text
Abstract:
This thesis explores the representation of probability measures in a coherent Bayesian modelling framework, together with the ensuing characterisation properties of posterior functionals. First, a decision theoretic approach is adopted to provide a unified modelling criterion applicable to assessing prior-likelihood combinations, design matrices, model dimensionality and choice of sample size. The utility structure and associated Bayes risk induces a distance measure, introducing concepts from differential geometry to aid in the interpretation of modelling characteristics. Secondly, analytical and approximate computations for the implementation of the Bayesian paradigm, based on the properties of the class of transformation models, are discussed. Finally, relationships between distance measures (in the form of either a derivative of a Bayes mapping or an induced distance) are explored, with particular reference to the construction of sensitivity measures.
47

Lee, T. D. « Implementation of the Bayesian paradigm for highly parameterised linear models ». Thesis, University of Nottingham, 1986. http://eprints.nottingham.ac.uk/14421/.

Full text
Abstract:
This thesis re-examines the Bayes hierarchical linear model and the associated issue of variance component estimation in the light of new numerical procedures, and demonstrates that the Bayes linear model is indeed a practical proposition. Technical issues considered include the development of analytical procedures essential for efficient evaluation of the likelihood function, and a partial characterisation of the difficulty of likelihood evaluation. A general non-informative prior distribution for the hierarchical linear model is developed. Extensions to spherically symmetric error distributions are shown to be practicable and useful. The numerical technique enables the sensitivity of the results to the prior structure, error structure and model structure to be investigated. An extended example is considered which illustrates these analytical and numerical techniques in a 15 dimensional problem. A second example provides a critical examination of a British Standards Institute paper, and develops further techniques for handling alternative spherically symmetric error distributions. Recent work on variance component estimation is viewed from the Bayesian perspective, and areas for further work are identified.
48

Xu, Xiaoguang. « Bayesian nonparametric inference for stochastic epidemic models ». Thesis, University of Nottingham, 2015. http://eprints.nottingham.ac.uk/29170/.

Full text
Abstract:
Modelling of infectious diseases is a topic of great importance. Despite the enormous attention given to the development of methods for efficient parameter estimation, there has been relatively little activity in the area of nonparametric inference for epidemics. In this thesis, we develop new methodology which enables nonparametric estimation of the parameters which govern transmission within a Bayesian framework. Many standard modelling and data analysis methods use underlying assumptions (e.g. concerning the rate at which new cases of disease will occur) which are rarely challenged or tested in practice. We relax these assumptions and analyse data from disease outbreaks in a Bayesian nonparametric framework. We first apply our Bayesian nonparametric methods to small-scale epidemics. In a standard SIR model, the overall force of infection is assumed to have a parametric form. We relax this assumption and treat it as a function which depends only on time. We then place a Gaussian process prior on it and infer it using data-augmented Markov chain Monte Carlo (MCMC) algorithms. Our methods are illustrated by applications to simulated data as well as smallpox data. We also investigate the infection rate in the SIR model using our methods. More precisely, we assume the infection rate is time-varying and place a Gaussian process prior on it. Results are obtained using data augmentation methods and standard MCMC algorithms. We illustrate our methods using simulated data and respiratory disease data. We find our methods work fairly well for the stochastic SIR model. We also investigate large-scale epidemics in a Bayesian nonparametric framework. For large epidemics in large populations, we usually observe surveillance data which typically provide the number of new infection cases occurring during each observation period. We infer the infection rate for each observation period by placing Gaussian process priors on them. Our methods are illustrated using real data, namely a time series of the incidence of measles in London (1948-1957).
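A minimal numpy sketch of Gaussian-process smoothing of a time-varying rate, assuming a squared-exponential kernel and synthetic data (not the measles series, and without the data-augmented MCMC the thesis uses):

```python
# GP regression on a noisy proxy for a time-varying infection rate.
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0, 10, 60)
true_rate = 1.0 + 0.8 * np.sin(t)               # "true" time-varying rate
obs = true_rate + rng.normal(0, 0.2, t.size)    # noisy observations

def sq_exp(a, b, ell=1.0, var=1.0):
    return var * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

K = sq_exp(t, t)
noise = 0.04 * np.eye(t.size)                   # observation noise variance
post_mean = K @ np.linalg.solve(K + noise, obs)
post_cov = K - K @ np.linalg.solve(K + noise, K)
sd = np.sqrt(np.clip(np.diag(post_cov), 0, None))
print(post_mean[:5], sd[:5])
```

The GP prior encodes only smoothness, so the inferred rate is free to take any smooth shape, which is the sense in which the inference is nonparametric.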
49

Yu, Chen. « The use of mixture models in capture-recapture ». Thesis, University of Kent, 2015. https://kar.kent.ac.uk/50775/.

Full text
Abstract:
Mixture models have been widely used to model heterogeneity. In this thesis, we focus on the use of mixture models in capture--recapture, for both closed populations and open populations. We provide both practical and theoretical investigations. A new model is proposed for closed populations and the practical difficulties of model fitting for mixture models are demonstrated for open populations. As the number of model parameters can increase with the number of mixture components, whether we can estimate all of the parameters using the method of maximum likelihood is an important issue. We explore this using formal methods and develop general rules to ensure that all parameters are estimable.
50

Hubbard, Ben Arthur. « Parameter redundancy with applications in statistical ecology ». Thesis, University of Kent, 2014. https://kar.kent.ac.uk/47436/.

Full text
Abstract:
This thesis is concerned with parameter redundancy in statistical ecology models. If it is not possible to estimate all the parameters, a model is termed parameter redundant. Parameter redundancy commonly occurs when parameters are confounded in the model, so that the model could be reparameterised in terms of a smaller number of parameters. In principle, it is possible to use symbolic algebra to determine whether or not all the parameters of a certain ecological model can be estimated using classical methods of statistical inference. We examine a variety of ecological models. We begin by exploring models based on marking a number of animals and observing the same animals at future time points. These observations can occur either when a marked animal is recovered dead, in mark-recovery modelling, or when a marked animal is recaptured alive, in capture-recapture modelling. We also explore capture-recapture-recovery models, where both dead recoveries and live recaptures can be observed in the same study. We go on to explore occupancy models, which are used to estimate the probability of presence or absence of living species through repeated detection surveys; these models have the advantage that individuals are not required to be marked. A variety of occupancy models are examined, including models with season-dependent, group-dependent and species-dependent parameters, among others. We investigate parameter redundancy by deriving general results for a variety of models in which the parameter dependencies can be relaxed to suit different studies. We also analyse how the results change for specific data sets, and how sparse data influence whether or not a model is parameter redundant, using procedures written in Maple. This theory on parameter redundancy is vital for the correct use of these ecological models, so that valid statistical inference can be made.
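The symbolic check described above can be sketched in a few lines, here in sympy as an accessible stand-in for the Maple procedures: build the derivative matrix of an exhaustive summary with respect to the parameters and compare its rank with the parameter count. The toy summary below, in which two parameters only ever appear as a product, is a hypothetical mark-recovery-style confounding.

```python
# Symbolic parameter-redundancy check via the rank of the derivative matrix.
import sympy as sp

phi, lam = sp.symbols("phi lambda", positive=True)
summary = sp.Matrix([phi * lam, phi**2 * lam**2])  # only the product phi*lam enters
D = summary.jacobian([phi, lam])
print("rank:", D.rank(), "of", 2, "parameters")    # rank 1 < 2 => parameter redundant
```

A deficient rank means the model is parameter redundant; the rank deficiency (here 1) also tells you how many estimable parameter combinations exist.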