Dissertations / Theses on the topic 'Modelling, Markov chain'



Consult the top 47 dissertations / theses for your research on the topic 'Modelling, Markov chain.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Samci, Karadeniz Rukiye. "Modelling share prices as a random walk on a Markov chain." Thesis, University of Leicester, 2017. http://hdl.handle.net/2381/40129.

Full text
Abstract:
In the financial area, a simple but also realistic means of modelling real data is very important. Several approaches are considered to model and analyse the data presented herein. We start by considering a random walk on an additive functional of a discrete time Markov chain perturbed by Gaussian noise as a model for the data. Since working with a continuous time model is more convenient for option pricing, we consider the renowned (and open) embedding problem for Markov chains: not every discrete time Markov chain has an underlying continuous time Markov chain. One of the main goals of this research is to analyse whether the discrete time model permits extension or embedding to the continuous time model. In addition, the volatility of share price data is estimated and analysed by the same procedure as for share price processes. This part of the research is an extensive case study on the embedding problem for financial data and its volatility. Another approach to modelling share price data is to consider a random walk on the lamplighter group. Specifically, we model data as a Markov chain with a hidden random walk on the lamplighter group Z3 and on the tensor product of groups Z2 ⊗ Z2. The lamplighter group has a specific structure where the hidden information is actually explicit. We assume that the positions of the lamplighters are known, but we do not know the status of the lamps. This is referred to as a hidden random walk on the lamplighter group. A biased random walk is constructed to fit the data. Monte Carlo simulations are used to find the best fit, minimising the trace norm difference between the tensor product of the original transition matrices and the transition matrices estimated from the (appropriately split) data. Splitting the data is a key method for both our first and second models, and the tensor product structure comes from this split. This requires us to deal with missing data, to which we apply a variety of statistical techniques such as the Expectation-Maximization algorithm and the C4.5 machine learning algorithm. In this work we also analyse quantum data and compute option prices for the binomial model via quantum data.
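As a rough illustration of the embedding question described in this abstract (the transition matrix and tolerances below are hypothetical, not taken from the thesis), a standard necessary check computes the principal matrix logarithm of an estimated transition matrix and tests whether it is a valid generator; failure of this particular branch of the logarithm does not by itself rule out embeddability.

```python
# Minimal sketch: test whether a discrete-time transition matrix P could arise
# from a continuous-time Markov chain by checking if its principal matrix
# logarithm is a valid generator (non-negative off-diagonals, zero row sums).
import numpy as np
from scipy.linalg import expm, logm

def candidate_generator(P, tol=1e-8):
    Q = logm(P).real                                   # discard tiny imaginary residue
    off_diag_ok = np.all(Q - np.diag(np.diag(Q)) >= -tol)
    zero_row_sums = np.allclose(Q.sum(axis=1), 0.0, atol=1e-6)
    return Q, bool(off_diag_ok and zero_row_sums)

# Hypothetical 3-state chain for discretised share-price moves.
P = np.array([[0.90, 0.08, 0.02],
              [0.05, 0.90, 0.05],
              [0.02, 0.08, 0.90]])
Q, embeddable = candidate_generator(P)
print(embeddable, np.allclose(expm(Q), P, atol=1e-6))
```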
APA, Harvard, Vancouver, ISO, and other styles
2

Gallagher, Raymond. "Uncertainty modelling in quantitative risk analysis." Thesis, University of Liverpool, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.367676.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Pang, Wan-Kai. "Modelling ordinal categorical data : a Gibbs sampler approach." Thesis, University of Southampton, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.323876.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Kawale, Sujay J. "Implication of Terrain Topology Modelling on Ground Vehicle Reliability." Thesis, Virginia Tech, 2010. http://hdl.handle.net/10919/31241.

Full text
Abstract:
The accuracy of computer-based ground vehicle durability and ride quality simulations depends on accurate representation of road surface topology as an excitation to vehicle dynamics simulation software, since most of the excitation input to a vehicle as it traverses terrain is provided by the surface topology. It is not computationally efficient to utilise physically measured terrain topology for these simulations, since extremely large data sets would be required to represent terrain of all desired types. Moreover, performing repeated simulations on the same set of measured data would not provide the random character typical of real-world usage. There exist several methods of synthesising terrain data through the use of stochastic or mathematical models in order to capture such physical properties of measured terrain as roughness, bank angle and grade. In the first part of this work, the autoregressive model and the Markov chain model have been applied to generate synthetic two-dimensional terrain profiles. The synthesised terrain profiles are expected to capture the statistical properties of the measured data. A methodology is then proposed to assess the performance of these models of terrain in capturing the statistical properties of the measured terrain. This is done through the application of several statistical property tests to the measured and synthesised terrain profiles. The second part of this work describes the procedure that has been followed to assess the performance of these models in capturing the vehicle-component fatigue-inducing characteristics of the measured terrain, by predicting suspension component fatigue life based on the loading conditions obtained from the measured terrain and the corresponding synthesised terrain. The terrain model assessment methodology presented in this work can be applied to any model of terrain, serving to identify which terrain models are suited to which type of terrain.
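A minimal sketch of the Markov chain route to terrain synthesis mentioned above, assuming a first-order chain over discretised elevation increments; the stand-in profile, bin count and smoothing are placeholders rather than the thesis's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
measured = np.cumsum(rng.normal(0.0, 0.01, size=5000))   # stand-in for a measured profile (m)

# Discretise elevation increments into states and estimate the transition matrix.
increments = np.diff(measured)
n_states = 16
edges = np.quantile(increments, np.linspace(0, 1, n_states + 1))
states = np.clip(np.digitize(increments, edges[1:-1]), 0, n_states - 1)

T = np.ones((n_states, n_states))                 # +1 smoothing avoids empty rows
for a, b in zip(states[:-1], states[1:]):
    T[a, b] += 1
T /= T.sum(axis=1, keepdims=True)

# Synthesise a new profile by sampling the chain and mapping states back
# to representative increments (bin midpoints).
mids = 0.5 * (edges[:-1] + edges[1:])
s = states[0]
synthetic = [measured[0]]
for _ in range(len(measured) - 1):
    s = rng.choice(n_states, p=T[s])
    synthetic.append(synthetic[-1] + mids[s])
synthetic = np.asarray(synthetic)
```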
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
5

Bray, Isabelle Cella. "Modelling the prevalence of Down syndrome with applications of Markov chain Monte Carlo methods." Thesis, University of Plymouth, 1998. http://hdl.handle.net/10026.1/2408.

Full text
Abstract:
This thesis was motivated by applications in the epidemiology of Down syndrome and prenatal screening for Down syndrome. Methodological problems arising in these applications include under-ascertainment of cases in livebirth studies, double-sampled data with missing observations and coarsening of data. These issues are considered from a classical perspective using maximum likelihood and from a Bayesian viewpoint employing Markov chain Monte Carlo (MCMC) techniques. Livebirth prevalence studies published in the literature used a variety of data collection methods and many are of uncertain completeness. In two of the nine studies an estimate of the level of under-reporting is available. We present a meta-analysis of these studies in which maternal age-related risks and the levels of under-ascertainment in individual studies are estimated simultaneously. A modified logistic model is used to describe the relationship between Down syndrome prevalence and maternal age. The model is then extended to include data from several studies of prevalence rates observed at times of chorionic villus sampling (CVS) and amniocentesis. New estimates for spontaneous loss rates between the times of CVS, amniocentesis and live birth are presented. The classical analysis of live birth prevalence data is then compared with an MCMC analysis which allows prior information concerning ascertainment to be incorporated. This approach is particularly attractive since the double-sampled data structure includes missing observations. The MCMC algorithm, which uses single-component Metropolis-Hastings steps to simulate model parameters and missing data, is run under three alternative prior specifications. Several convergence diagnostics are also considered and compared. Finally, MCMC techniques are used to model the distribution of fetal nuchal translucency (NT), an ultrasound marker for Down syndrome. The data are a mixture of measurements rounded to whole millimetres and measurements more accurately recorded to one decimal place. An MCMC algorithm is applied to simulate the proportion of measurements rounded to whole millimetres and parameters to describe the distribution of NT in unaffected and Down syndrome pregnancies. Predictive probabilities of Down syndrome given NT and maternal age are then calculated.
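For readers unfamiliar with the machinery, the following is a minimal single-component Metropolis-Hastings sketch for a plain logistic model of prevalence against maternal age. The thesis uses a modified logistic model and real registry data; the counts, priors and step sizes here are hypothetical.

```python
import numpy as np
from scipy.special import expit
from scipy.stats import binom, norm

rng = np.random.default_rng(1)

# Hypothetical aggregated data: maternal ages, births and affected cases.
age = np.array([20, 25, 30, 35, 40, 45], dtype=float)
n = np.array([5000, 8000, 9000, 6000, 2500, 600])
y = np.array([4, 7, 10, 17, 25, 20])

def log_post(theta):
    a, b = theta
    p = expit(a + b * (age - 35.0))              # logistic prevalence model
    return (binom.logpmf(y, n, p).sum()
            + norm.logpdf(a, -6.0, 3.0)          # vague priors
            + norm.logpdf(b, 0.0, 1.0))

theta = np.array([-6.0, 0.1])
samples = []
for _ in range(20000):
    for i, step in enumerate([0.1, 0.02]):       # single-component random-walk updates
        prop = theta.copy()
        prop[i] += rng.normal(0.0, step)
        if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
            theta = prop
    samples.append(theta.copy())
samples = np.array(samples)[5000:]               # discard burn-in
```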
APA, Harvard, Vancouver, ISO, and other styles
6

Loza, Reyes Elisa. "Classification of phylogenetic data via Bayesian mixture modelling." Thesis, University of Bath, 2010. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.519916.

Full text
Abstract:
Conventional probabilistic models for phylogenetic inference assume that an evolutionary tree, and a single set of branch lengths and stochastic process of DNA evolution, are sufficient to characterise the generating process across an entire DNA alignment. Unfortunately such a simplistic, homogeneous formulation may be a poor description of reality when the data arise from heterogeneous processes. A well-known example is when sites evolve at heterogeneous rates. This thesis is a contribution to the modelling and understanding of heterogeneity in phylogenetic data. We propose a method for the classification of DNA sites based on Bayesian mixture modelling. Our method not only accounts for heterogeneous data but also identifies the underlying classes and enables their interpretation. We also introduce novel MCMC methodology with the same, or greater, estimation performance than existing algorithms but with lower computational cost. We find that our mixture model can successfully detect evolutionary heterogeneity and demonstrate its direct relevance by applying it to real DNA data. One of these applications is the analysis of sixteen strains of one of the bacterial species that cause Lyme disease. Results from that analysis have helped understanding the evolutionary paths of these bacterial strains and, therefore, the dynamics of the spread of Lyme disease. Our method is discussed in the context of DNA but it may be extended to other types of molecular data. Moreover, the classification scheme that we propose is evidence of the breadth of application of mixture modelling and a step forwards in the search for more realistic models of the processes that underlie phylogenetic data.
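The allocation step at the heart of such a Bayesian mixture classification can be sketched as follows; the per-class site log-likelihoods and mixture weights below are stand-ins, not output of the phylogenetic model described in the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in per-site log-likelihoods under K = 3 candidate classes (rows: sites);
# in a phylogenetic mixture these would come from class-specific substitution models.
loglik = rng.normal(size=(1000, 3))
log_weights = np.log(np.array([0.5, 0.3, 0.2]))    # current mixture weights

# Full-conditional class probabilities and one Gibbs draw of the site allocations.
logp = loglik + log_weights
logp -= logp.max(axis=1, keepdims=True)            # stabilise before exponentiating
probs = np.exp(logp)
probs /= probs.sum(axis=1, keepdims=True)
z = np.array([rng.choice(3, p=p) for p in probs])

# Conjugate (Dirichlet) update of the mixture weights given the allocations.
counts = np.bincount(z, minlength=3)
new_weights = rng.dirichlet(1.0 + counts)
```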
APA, Harvard, Vancouver, ISO, and other styles
7

Garzon, Rozo Betty Johanna. "Modelling operational risk using skew t-copulas and Bayesian inference." Thesis, University of Edinburgh, 2016. http://hdl.handle.net/1842/25751.

Full text
Abstract:
Operational risk losses are heavy tailed and are likely to be asymmetric and extremely dependent among business lines/event types. The analysis of dependence via copula models has mainly focussed on the bivariate case. In the vast majority of instances, symmetric elliptical copulas are employed to model dependence for severities. This thesis proposes a new methodology to assess, in a multivariate way, the asymmetry and extreme dependence between severities, and to calculate the capital for operational risk. This methodology simultaneously uses (i) several parametric distributions and an alternative mixture distribution (the Lognormal for the body of losses and the generalised Pareto Distribution for the tail) using a technique from extreme value theory, (ii) the multivariate skew t-copula applied for the first time across severities and (iii) Bayesian theory. The first is used to model severities: I simultaneously test several parametric distributions and the mixture distribution for each business line, which enables me to consider multiple combinations of the severity distribution and to find which fits most closely. The second is used to model asymmetry and extreme dependence effectively in high dimensions. The third is used to estimate the copula model: given the high multivariate component (i.e. eight business lines and seven event types) and the incorporation of mixture distributions, it is highly difficult to implement maximum likelihood. Therefore, I use a Bayesian inference framework and Markov chain Monte Carlo simulation to evaluate the posterior distribution and to estimate and make inferences about the parameters of the skew t-copula model. The research analyses an updated operational loss data set, SAS® Operational Risk Global Data (SAS OpRisk Global Data), to model operational risk at international financial institutions. I then evaluate the impact of this multivariate, asymmetric and extreme dependence on estimating the total regulatory capital, among other established multivariate copulas. My empirical findings are consistent with other studies reporting thin and medium-tailed loss distributions. My approach substantially outperforms symmetric elliptical copulas, demonstrating that modelling dependence via the skew t-copula provides a more efficient allocation of capital charges, up to 56% smaller than that indicated by the standard Basel model.
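A small sketch of the spliced severity distribution described above (Lognormal body below a threshold, generalised Pareto tail above it), with entirely hypothetical parameter values; in the thesis these quantities are estimated within the Bayesian framework.

```python
import numpy as np
from scipy.stats import lognorm, genpareto

# Hypothetical parameters: lognormal body, GPD tail above threshold u,
# with probability p_tail that a loss exceeds u.
u, p_tail = 1e5, 0.05
body = lognorm(s=1.2, scale=np.exp(9.0))
tail = genpareto(c=0.4, loc=u, scale=5e4)

def severity_pdf(x):
    x = np.asarray(x, dtype=float)
    below = (1 - p_tail) * body.pdf(x) / body.cdf(u)   # body renormalised on (0, u]
    above = p_tail * tail.pdf(x)                        # GPD tail beyond u
    return np.where(x <= u, below, above)

losses = np.array([2e4, 8e4, 2e5, 5e5])
print(severity_pdf(losses))
```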
APA, Harvard, Vancouver, ISO, and other styles
8

Bleki, Zolisa. "Efficient Bayesian analysis of spatial occupancy models." Master's thesis, University of Cape Town, 2020. http://hdl.handle.net/11427/32469.

Full text
Abstract:
Species conservation initiatives play an important role in ecological studies. Occupancy models have been a useful tool for ecologists to make inference about species distribution and occurrence. Bayesian methodology is a popular framework used to model the relationship between species and environmental variables. In this dissertation we develop a Gibbs sampling method using a logit link function in order to model posterior parameters of the single-season spatial occupancy model. We incorporate the widely used Intrinsic Conditional Autoregressive (ICAR) prior model to specify the spatial random effect in our sampler. We also develop OccuSpytial, a statistical package implementing our Gibbs sampler in the Python programming language. The aim of this study is to highlight the computational efficiency that can be obtained by employing several techniques, which include exploiting the sparsity of the precision matrix of the ICAR model and also making use of Polya-Gamma latent variables to obtain closed form expressions for the posterior conditional distributions of the parameters of interest. An algorithm for efficiently sampling from the posterior conditional distribution of the spatial random effects parameter is also developed and presented. To illustrate the sampler's performance, a number of simulation experiments are considered, and the results are compared to those obtained by using a Gibbs sampler incorporating Restricted Spatial Regression (RSR) to specify the spatial random effect. Furthermore, we fit our model to the Helmeted guineafowl (Numida meleagris) dataset obtained from the 2nd South African Bird Atlas Project database in order to obtain a distribution map of the species. We compare our results with those obtained from the RSR variant of our sampler, those obtained by using the stocc statistical package (written using the R programming language), and those obtained from not specifying any spatial information about the sites in the data. It was found that using RSR to specify spatial random effects is both statistically and computationally more efficient than specifying them using ICAR. The OccuSpytial implementations of both the ICAR and RSR Gibbs samplers have significantly shorter runtimes than the other implementations considered.
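To illustrate one ingredient mentioned in this abstract, the sketch below builds the sparse ICAR precision matrix Q = tau(D - W) for a toy grid of sites and draws a spatial effect from the corresponding Gaussian (with a small jitter, since the ICAR prior is improper). It is not the OccuSpytial implementation and omits the Polya-Gamma occupancy updates.

```python
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(3)

# Hypothetical 10x10 grid of sites with rook-neighbourhood adjacency matrix W.
n_side = 10
n = n_side * n_side
rows, cols = [], []
for i in range(n_side):
    for j in range(n_side):
        k = i * n_side + j
        for di, dj in [(0, 1), (1, 0)]:
            if i + di < n_side and j + dj < n_side:
                l = (i + di) * n_side + (j + dj)
                rows += [k, l]
                cols += [l, k]
W = sp.csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n))
D = sp.diags(np.asarray(W.sum(axis=1)).ravel())

tau = 2.0
Q = tau * (D - W) + 1e-6 * sp.identity(n)      # jitter: the ICAR prior is improper

# Draw eta ~ N(0, Q^{-1}) via the Cholesky factor of the (small, dense) precision.
L = np.linalg.cholesky(Q.toarray())
eta = np.linalg.solve(L.T, rng.standard_normal(n))
```

For the grid sizes of a real atlas dataset one would keep Q sparse and use a sparse Cholesky factorisation instead of the dense factor used in this toy example.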
APA, Harvard, Vancouver, ISO, and other styles
9

Henriques, Bruno M. "Hybrid galaxy evolution modelling : Monte Carlo Markov Chain parameter estimation in semi-analytic models of galaxy formation." Thesis, University of Sussex, 2010. http://sro.sussex.ac.uk/id/eprint/2334/.

Full text
Abstract:
We introduce a statistical exploration of the parameter space of the Munich semi-analytic model built upon the Millennium dark matter simulation. This is achieved by applying a Monte Carlo Markov Chain (MCMC) method to constrain the 6 free parameters that define the stellar mass function at redshift zero. The model is tested against three different observational data sets, including the galaxy K-band luminosity function, B−V colours, and the black hole-bulge mass relation, to obtain mean values, confidence limits and likelihood contours for the best fit model. We discuss how the model parameters affect each galaxy property and find that there are strong correlations between them. We analyse to what extent these are simply reflections of the observational constraints, or whether they can lead to improved understanding of the physics of galaxy formation. When all the observations are combined, the need to suppress dwarf galaxies requires the strength of the supernova feedback to be significantly higher in our best-fit solution than in previous work. We interpret this fact as an indication of the need to improve the treatment of low mass objects. As a possible solution, we introduce the process of satellite disruption, caused by tidal forces exerted by central galaxies on their merging companions. We apply similar MCMC sampling techniques to the new model, which allows us to discuss the impact of disruption on the basic physics of the model. The new best fit model has a likelihood four times better than before, reasonably reproducing all the observational constraints as well as the metallicity of galaxies, and predicting intra-cluster light. We interpret this as an indication of the need to include the new recipe. We point out the remaining limitations of the semi-analytic model and discuss possible improvements that might increase its predictive power in the future.
APA, Harvard, Vancouver, ISO, and other styles
10

Dao, Trong Nghia, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW. "Modelling 802.11 networks for multimedia applications." Publisher: University of New South Wales. Electrical Engineering & Telecommunications, 2008. http://handle.unsw.edu.au/1959.4/41222.

Full text
Abstract:
This thesis is concerned with the development of new mathematical models for the IEEE 802.11's access mechanisms, with a particular focus on DCF and EDCA. Accurate mathematical models for the DCF and EDCA access mechanisms provide many benefits, such as improved performance analysis, easier network capacity planning, and robust network design. A feature that permeates the work presented in this thesis is the application of our new models to network environments where both saturated and non-saturated traffic sources are present. The scenario in which multiple traffic sources are present is more technically challenging, but provides for a more realistic setting. Our first contribution is the development of a new Markov model for non-saturated DCF in order to predict the network throughput. This model takes into account several details of the protocol that have been hitherto neglected. In addition, we apply a novel treatment of the packet service time within our model. We show how the inclusion of these effects provides more accurate predictions of network throughput than earlier works. Our second contribution is the development of a new analytical model for EDCA, again in order to predict network throughput. Our new EDCA model is based on a replacement of the normal AIFS parameter of EDCA with a new parameter more closely associated with DCF. This novel procedure allows EDCA to be viewed as a modified multi-mode version of DCF. Our third contribution is the simultaneous application of our new Markov models to both the non-saturated and the saturated regime. Hitherto, network throughput predictions for these regimes have required completely separate mathematical models. The convergence property of our model in the two regimes provides a new method to estimate the network capacity of the network. Our fourth contribution relates to predictions for the multimedia capacity of 802.11 networks. Our multimedia capacity analysis, which is based on modifications to our Markov model, is new in that it can be applied to a broad range of quality of service requirements. Finally, we highlight the use of our analysis in the context of emerging location-enabled networks.
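Markov models of DCF/EDCA ultimately require the chain's stationary distribution, from which throughput is derived. A generic sketch of that step for a small illustrative chain (not the thesis's backoff model) is shown below.

```python
import numpy as np

# Small illustrative transition matrix (e.g. a handful of backoff-stage states);
# the DCF/EDCA chains analysed in the thesis are much larger.
P = np.array([[0.2, 0.8, 0.0],
              [0.5, 0.0, 0.5],
              [0.9, 0.0, 0.1]])

# Solve pi P = pi subject to sum(pi) = 1 as an overdetermined linear system.
n = P.shape[0]
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.concatenate([np.zeros(n), [1.0]])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi, pi @ P)        # pi should be invariant under P
```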
APA, Harvard, Vancouver, ISO, and other styles
11

Wanderley, Matos de Abreu Thiago. "Modeling and performance analysis of IEEE 802.11-based chain networks." Thesis, Lyon 1, 2015. http://www.theses.fr/2015LYO10030/document.

Full text
Abstract:
The IEEE 802.11 protocol, based on the CSMA/CA principles, is widely deployed in current communications, mostly due to its simplicity and low cost implementation. One common usage can be found in multi-hop wireless networks, where communications between nodes may involve relay nodes. A simple topology of these networks including one source and one destination is commonly known as a chain. In this thesis, a hierarchical modeling framework, composed of two levels, is presented in order to analyze the associated performance of such chains. The upper level models the chain topology and the lower level models each of its nodes. It estimates the performance of the chain in terms of the attained throughput and datagram losses, according to different patterns of channel degradation. In terms of precision, the model delivers, in general, accurate results. Furthermore, the time needed for solving it remains very small. The proposed model is then applied to chains with 2, 3 and 4 nodes, in the presence of occasional hidden nodes, finite buffers and a non-perfect physical layer. Moreover, the use of the proposed model allows us to highlight some properties inherent to such networks. For instance, it is shown that a chain presents a performance maximum (with regard to the attained throughput) according to the system workload level, and that this performance collapses as the workload increases. This represents a non-trivial behavior of wireless networks and cannot be easily identified; however, the model captures this non-trivial effect. Finally, some of the impacts on chain performance due to the IEEE 802.11 mechanisms are analyzed and detailed. The strong synchronization among the nodes of a chain is described, together with the challenge it poses for the modeling of such networks. The proposed model overcomes this obstacle and allows an easy evaluation of the chain performance.
APA, Harvard, Vancouver, ISO, and other styles
12

Zijerveld, Leonardus Jacobus Johannes. "Integrated modelling and Bayesian inference applied to population and disease dynamics in wildlife : M.bovis in badgers in Woodchester Park." Thesis, University of Edinburgh, 2013. http://hdl.handle.net/1842/7733.

Full text
Abstract:
Understanding demographic and disease processes in wildlife populations tends to be hampered by incomplete observations which can include significant errors. Models provide useful insights into the potential impacts of key processes and the value of such models greatly improves through integration with available data in a way that includes all sources of stochasticity and error. To date, the impact on disease of spatial and social structures observed in wildlife populations has not been widely addressed in modelling. I model the joint effects of differential fecundity and spatial heterogeneity on demography and disease dynamics, using a stochastic description of births, deaths, social-geographic migration, and disease transmission. A small set of rules governs the rates of births and movements in an environment where individuals compete for improved fecundity. This results in realistic population structures which, depending on the mode of disease transmission can have a profound effect on disease persistence and therefore has an impact on disease control strategies in wildlife populations. I also apply a simple model with births, deaths and disease events to the long-term observations of TB (Mycobacterium bovis) in badgers in Woodchester Park. The model is a continuous time, discrete state space Markov chain and is fitted to the data using an implementation of Bayesian parameter inference with an event-based likelihood. This provides a flexible framework to combine data with expert knowledge (in terms of model structure and prior distributions of parameters) and allows us to quantify the model parameters and their uncertainties. Ecological observations tend to be restricted in terms of scope and spatial temporal coverage and estimates are also affected by trapping efficiency and disease test sensitivity. My method accounts for such limitations as well as the stochastic nature of the processes. I extend the likelihood function by including an error term that depends on the difference between observed and inferred state space variables. I also demonstrate that the estimates improve by increasing observation frequency, combining the likelihood of more than one group and including variation of parameter values through the application of hierarchical priors.
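The continuous-time, discrete-state formulation described above can be simulated with a Gillespie-style algorithm; the following sketch uses a toy birth-death-transmission model with hypothetical rates, not the fitted badger-TB model.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical per-capita rates: birth, death, and density-dependent transmission.
birth, death, beta = 0.6, 0.4, 0.002
S, I, t, t_end = 50, 2, 0.0, 20.0
history = [(t, S, I)]

while t < t_end and S + I > 0:
    N = S + I
    rates = np.array([birth * N,       # birth of a susceptible
                      death * S,       # death of a susceptible
                      death * I,       # death of an infected
                      beta * S * I])   # transmission event
    total = rates.sum()
    t += rng.exponential(1.0 / total)  # exponential waiting time (CTMC)
    event = rng.choice(4, p=rates / total)
    if event == 0:
        S += 1
    elif event == 1:
        S -= 1
    elif event == 2:
        I -= 1
    else:
        S -= 1
        I += 1
    history.append((t, S, I))
```

In the thesis such event sequences enter an event-based likelihood within a Bayesian parameter-inference scheme rather than being simulated forward with known rates.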
APA, Harvard, Vancouver, ISO, and other styles
13

Al-Saadony, Muhannad. "Bayesian stochastic differential equation modelling with application to finance." Thesis, University of Plymouth, 2013. http://hdl.handle.net/10026.1/1530.

Full text
Abstract:
In this thesis, we consider some popular stochastic differential equation models used in finance, such as the Vasicek Interest Rate model, the Heston model and a new fractional Heston model. We discuss how to perform inference about unknown quantities associated with these models in the Bayesian framework. We describe sequential importance sampling, the particle filter and the auxiliary particle filter. We apply these inference methods to the Vasicek Interest Rate model and the standard stochastic volatility model, both to sample from the posterior distribution of the underlying processes and to update the posterior distribution of the parameters sequentially, as data arrive over time. We discuss the sensitivity of our results to prior assumptions. We then consider the use of Markov chain Monte Carlo (MCMC) methodology to sample from the posterior distribution of the underlying volatility process and of the unknown model parameters in the Heston model. The particle filter and the auxiliary particle filter are also employed to perform sequential inference. Next we extend the Heston model to the fractional Heston model, by replacing the Brownian motions that drive the underlying stochastic differential equations by fractional Brownian motions, so allowing a richer dependence structure across time. Again, we use a variety of methods to perform inference. We apply our methodology to simulated and real financial data with success. We then discuss how to make forecasts using both the Heston and the fractional Heston model. We make comparisons between the models and show that using our new fractional Heston model can lead to improved forecasts for real financial data.
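As an illustration of the sequential inference mentioned in this abstract, here is a bootstrap particle filter for the standard discrete-time stochastic volatility model with fixed, hypothetical parameters; the thesis treats richer models and also updates parameters sequentially.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)

# Simulate data from the standard SV model:
#   h_t = mu + phi*(h_{t-1} - mu) + sigma*eta_t,   y_t = exp(h_t/2)*eps_t.
mu, phi, sigma, T = -1.0, 0.95, 0.2, 500
h = np.empty(T)
h[0] = mu
for t in range(1, T):
    h[t] = mu + phi * (h[t - 1] - mu) + sigma * rng.standard_normal()
y = np.exp(h / 2) * rng.standard_normal(T)

# Bootstrap particle filter for the latent log-volatility.
N = 2000
particles = rng.normal(mu, sigma / np.sqrt(1 - phi**2), N)   # stationary initialisation
filtered_mean = np.empty(T)
for t in range(T):
    particles = mu + phi * (particles - mu) + sigma * rng.standard_normal(N)
    logw = norm.logpdf(y[t], scale=np.exp(particles / 2))
    w = np.exp(logw - logw.max())
    w /= w.sum()
    filtered_mean[t] = w @ particles
    particles = particles[rng.choice(N, N, p=w)]              # multinomial resampling
```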
APA, Harvard, Vancouver, ISO, and other styles
14

Muhammad, Sayyed Auwn. "Probabilistic Modelling of Domain and Gene Evolution." Doctoral thesis, KTH, Beräkningsvetenskap och beräkningsteknik (CST), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-191352.

Full text
Abstract:
Phylogenetic inference relies heavily on statistical models that have been extended and refined over the past years into complex hierarchical models to capture the intricacies of evolutionary processes. The wealth of information in the form of fully sequenced genomes has led to the development of methods that are used to reconstruct the gene and species evolutionary histories in greater and more accurate detail. However, genes are composed of evolutionarily conserved sequence segments called domains, and domains can also be affected by duplications, losses, and bifurcations implied by gene or species evolution. This thesis proposes an extension of evolutionary models, such as duplication-loss, rate, and substitution, that have previously been used to model gene evolution, to model domain evolution. In this thesis, I propose DomainDLRS: a comprehensive, hierarchical Bayesian method, based on the DLRS model by Åkerborg et al., 2009, that models domain evolution as occurring inside the gene and species tree. The method incorporates a birth-death process to model the domain duplications and losses along with a domain sequence evolution model with a relaxed molecular clock assumption. The method employs a variant of the Markov chain Monte Carlo technique called Grouped Independence Metropolis-Hastings for the estimation of the posterior distribution over domain and gene trees. By using this method, we performed analyses of Zinc-Finger and PRDM9 gene families, which provide interesting insights into domain evolution. Finally, a synteny-aware approach for gene homology inference, called GenFamClust, is proposed that uses similarity and gene neighbourhood conservation to improve the homology inference. We evaluated the accuracy of our method on synthetic and two biological datasets consisting of Eukaryotes and Fungal species. Our results show that the use of synteny with similarity provides a significant improvement in homology inference.


APA, Harvard, Vancouver, ISO, and other styles
15

Singh, Karandeep. "Statistical modelling and analysis of traffic : a dynamic approach." Thesis, Loughborough University, 2012. https://dspace.lboro.ac.uk/2134/9421.

Full text
Abstract:
In both developed and emerging economies, major cities continue to experience increasing traffic congestion. To address this issue, complex Traffic Management Systems (TMS) have been employed in recent years to help manage traffic. These systems fuse traffic-surveillance-related information from a variety of sensors deployed across traffic networks. A TMS requires real-time information to make effective control decisions and to deliver trustworthy information to users, such as travel time, congestion level, etc. There are three fundamental inputs required by TMS, namely, traffic volume, vehicular speed, and traffic density. Using conventional traffic loop detectors one can directly measure flow and velocity. However, traffic density is more difficult to measure. The situation becomes more difficult for multi-lane motorways due to drivers' lane-change behaviour. This research investigates statistical modelling and analysis of traffic flow. It contributes to the literature of transportation and traffic management research in several respects. First, it takes into account lane-changes in traffic modelling through incorporating a Markov chain model to describe the drivers' lane-change behaviour. Secondly, the lane change probabilities between two adjacent lanes are not assumed to be fixed but rather they depend on the current traffic condition. A discrete choice model is used to capture drivers' lane-choice behaviour. The drivers' choice probabilities are modelled by several traffic-condition related attributes such as vehicle time headway, traffic density and speed. This results in a highly nonlinear state equation for traffic density. To address the issue of high nonlinearity of the state space model, the EKF and UKF are used to estimate the traffic density recursively. In addition, a new transformation approach has been proposed to transform the observation equation from a nonlinear form to a linear one so that the potential approximation in the EKF & UKF can be avoided. Numerical studies have been conducted to investigate the performance of the developed method. The proposed method outperformed the existing methods for traffic density estimation in simulation studies. Furthermore, it is shown that the computational cost for updating the estimate of traffic densities for a multi-lane motorway is kept at a minimum so that online applications are feasible in practice. Consequently, the traffic densities can be monitored and the relevant information can be fed into the traffic management system of interest.
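One way to picture the discrete-choice lane-change component described above is the following sketch, in which a binary logit in traffic-condition attributes fills a two-lane Markov transition matrix; the attributes, coefficients and functional form are illustrative only, not the model developed in the thesis.

```python
import numpy as np

def lane_change_matrix(density, speed, headway, beta=(-1.5, 0.8, -0.5, 0.3)):
    """Two-lane transition matrix with condition-dependent change probabilities.

    The probability of moving to the adjacent lane follows a binary logit in
    hypothetical attributes: relative density, relative speed and time headway.
    """
    b0, b_dens, b_speed, b_head = beta
    p = np.empty(2)
    for lane, other in [(0, 1), (1, 0)]:
        u = (b0
             + b_dens * (density[lane] - density[other])
             + b_speed * (speed[other] - speed[lane])
             + b_head * headway[lane])
        p[lane] = 1.0 / (1.0 + np.exp(-u))          # logit choice probability
    return np.array([[1 - p[0], p[0]],
                     [p[1], 1 - p[1]]])

P = lane_change_matrix(density=[45.0, 30.0], speed=[60.0, 80.0], headway=[1.2, 2.0])
print(P)
```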
APA, Harvard, Vancouver, ISO, and other styles
16

Lopez, lopera Andres Felipe. "Gaussian Process Modelling under Inequality Constraints." Thesis, Lyon, 2019. https://tel.archives-ouvertes.fr/tel-02863891.

Full text
Abstract:
Conditioning Gaussian processes (GPs) by inequality constraints gives more realistic models. This thesis focuses on the finite-dimensional approximation of GP models proposed by Maatouk (2015), which satisfies the constraints everywhere in the input space. Several contributions are provided. First, we study the use of Markov chain Monte Carlo methods for truncated multinormals. They result in efficient sampling for linear inequality constraints. Second, we explore the extension of the model, previously limited to three-dimensional spaces, to higher dimensions. The introduction of a noise effect allows us to go up to dimension five. We propose a sequential algorithm based on knot insertion, which concentrates the computational budget on the most active dimensions. We also explore the Delaunay triangulation as an alternative to tensorisation. Finally, we study the case of additive models in this context, theoretically and on problems involving hundreds of input variables. Third, we give theoretical results on inference under inequality constraints. The asymptotic consistency and normality of maximum likelihood estimators are established. The main methods throughout this manuscript are implemented in the R programming language. They are applied to risk assessment problems in nuclear safety and coastal flooding, accounting for positivity and monotonicity constraints. As a by-product, we also show that the proposed GP approach provides an original framework for modelling Poisson processes with stochastic intensities.
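Below is a minimal coordinate-wise Gibbs sampler for a multivariate normal truncated to positivity constraints, illustrating the kind of truncated-multinormal sampling discussed in this abstract; this naive variant is for illustration and is not the sampler studied in the thesis.

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(6)

mu = np.array([0.5, -0.2, 0.1])
Sigma = np.array([[1.0, 0.6, 0.3],
                  [0.6, 1.0, 0.5],
                  [0.3, 0.5, 1.0]])
Q = np.linalg.inv(Sigma)                 # precision matrix
lower, upper = 0.0, np.inf               # positivity constraint on every coordinate

x = np.full(3, 0.5)                      # feasible starting point
draws = []
for _ in range(5000):
    for i in range(3):
        idx = [j for j in range(3) if j != i]
        cond_var = 1.0 / Q[i, i]         # conditional variance from the precision
        cond_mean = mu[i] - cond_var * Q[i, idx] @ (x[idx] - mu[idx])
        a = (lower - cond_mean) / np.sqrt(cond_var)
        b = (upper - cond_mean) / np.sqrt(cond_var)
        x[i] = truncnorm.rvs(a, b, loc=cond_mean, scale=np.sqrt(cond_var),
                             random_state=rng)
    draws.append(x.copy())
draws = np.array(draws)[1000:]           # discard burn-in
```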
APA, Harvard, Vancouver, ISO, and other styles
17

Dannenberg, Frits Gerrit Willem. "Modelling and verification for DNA nanotechnology." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:a0b5343b-dcee-44ff-964b-bdf5a6f8a819.

Full text
Abstract:
DNA nanotechnology is a rapidly developing field that creates nanoscale devices from DNA, which enables novel interfaces with biological material. Their therapeutic use is envisioned and applications in other areas of basic science have already been found. These devices function at physiological conditions and, owing to their molecular scale, are subject to thermal fluctuations during both preparation and operation of the device. Troubleshooting a failed device is often difficult and we develop models to characterise two separate devices: DNA walkers and DNA origami. Our framework is that of continuous-time Markov chains, abstracting away much of the underlying physics. The resulting models are coarse but enable analysis of system-level performance, such as ‘the molecular computation eventually returns the correct answer with high probability’. We examine the applicability of probabilistic model checking to provide guarantees on the behaviour of nanoscale devices, and to this end we develop novel model checking methodology. We model a DNA walker that autonomously navigates a series of junctions, and we derive design principles that increase the probability of correct computational output. We also develop a novel parameter synthesis method for continuous-time Markov chains, for which the synthesised models guarantee a predetermined level of performance. Finally, we develop a novel discrete stochastic assembly model of DNA origami from first principles. DNA origami is a widespread method for creating nanoscale structures from DNA. Our model qualitatively reproduces experimentally observed behaviour and using the model we are able to rationally steer the folding pathway of a novel polymorphic DNA origami tile, controlling the eventual shape.
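For a small continuous-time Markov chain, the transient quantities that probabilistic model checking reasons about (for example, the probability of reaching the correct final state by a given time) can be obtained from the matrix exponential of the generator; the toy walker chain and rates below are hypothetical.

```python
import numpy as np
from scipy.linalg import expm

# Toy walker chain: states 0-2 are track junctions, state 3 is the correct
# absorbing anchorage and state 4 an incorrect one. Rates are hypothetical.
Q = np.array([[-1.0,  1.0,  0.0, 0.0, 0.0],
              [ 0.0, -1.2,  1.0, 0.0, 0.2],
              [ 0.0,  0.0, -1.1, 1.0, 0.1],
              [ 0.0,  0.0,  0.0, 0.0, 0.0],
              [ 0.0,  0.0,  0.0, 0.0, 0.0]])

p0 = np.array([1.0, 0.0, 0.0, 0.0, 0.0])   # walker starts at the first junction
for t in (10.0, 100.0, 1000.0):
    pt = p0 @ expm(Q * t)
    print(t, pt[3], pt[4])   # probability of correct vs incorrect anchorage by time t
```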
APA, Harvard, Vancouver, ISO, and other styles
18

Szymczak, Marcin. "Programming language semantics as a foundation for Bayesian inference." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/28993.

Full text
Abstract:
Bayesian modelling, in which our prior belief about the distribution on model parameters is updated by observed data, is a popular approach to statistical data analysis. However, writing specific inference algorithms for Bayesian models by hand is time-consuming and requires significant machine learning expertise. Probabilistic programming promises to make Bayesian modelling easier and more accessible by letting the user express a generative model as a short computer program (with random variables), leaving inference to the generic algorithm provided by the compiler of the given language. However, it is not easy to design a probabilistic programming language correctly and define the meaning of programs expressible in it. Moreover, the inference algorithms used by probabilistic programming systems usually lack formal correctness proofs and bugs have been found in some of them, which limits the confidence one can have in the results they return. In this work, we apply ideas from the areas of programming language theory and statistics to show that probabilistic programming can be a reliable tool for Bayesian inference. The first part of this dissertation concerns the design, semantics and type system of a new, substantially enhanced version of the Tabular language. Tabular is a schema-based probabilistic language, which means that instead of writing a full program, the user only has to annotate the columns of a schema with expressions generating corresponding values. By adopting this paradigm, Tabular aims to be user-friendly, but this unusual design also makes it harder to define the syntax and semantics correctly and reason about the language. We define the syntax of a version of Tabular extended with user-defined functions and pseudo-deterministic queries, design a dependent type system for this language and endow it with a precise semantics. We also extend Tabular with a concise formula notation for hierarchical linear regressions, define the type system of this extended language and show how to reduce it to pure Tabular. In the second part of this dissertation, we present the first correctness proof for a Metropolis-Hastings sampling algorithm for a higher-order probabilistic language. We define a measure-theoretic semantics of the language by means of an operationally-defined density function on program traces (sequences of random variables) and a map from traces to program outputs. We then show that the distribution of samples returned by our algorithm (a variant of “Trace MCMC” used by the Church language) matches the program semantics in the limit.
APA, Harvard, Vancouver, ISO, and other styles
19

Shahtahmassebi, Golnaz. "Bayesian modelling of ultra high-frequency financial data." Thesis, University of Plymouth, 2011. http://hdl.handle.net/10026.1/894.

Full text
Abstract:
The availability of ultra high-frequency (UHF) data on transactions has revolutionised data processing and statistical modelling techniques in finance. The unique characteristics of such data, e.g. discrete structure of price change, unequally spaced time intervals and multiple transactions have introduced new theoretical and computational challenges. In this study, we develop a Bayesian framework for modelling integer-valued variables to capture the fundamental properties of price change. We propose the application of the zero inflated Poisson difference (ZPD) distribution for modelling UHF data and assess the effect of covariates on the behaviour of price change. For this purpose, we present two modelling schemes; the first one is based on the analysis of the data after the market closes for the day and is referred to as off-line data processing. In this case, the Bayesian interpretation and analysis are undertaken using Markov chain Monte Carlo methods. The second modelling scheme introduces the dynamic ZPD model which is implemented through Sequential Monte Carlo methods (also known as particle filters). This procedure enables us to update our inference from data as new transactions take place and is known as online data processing. We apply our models to a set of FTSE100 index changes. Based on the probability integral transform, modified for the case of integer-valued random variables, we show that our models are capable of explaining well the observed distribution of price change. We then apply the deviance information criterion and introduce its sequential version for the purpose of model comparison for off-line and online modelling, respectively. Moreover, in order to add more flexibility to the tails of the ZPD distribution, we introduce the zero inflated generalised Poisson difference distribution and outline its possible application for modelling UHF data.
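A sketch of a zero-inflated Poisson-difference probability mass function built from the Skellam distribution, the family proposed in this abstract for tick-by-tick price changes; the inflation probability and rates are hypothetical and, in the thesis, are estimated within the Bayesian framework.

```python
import numpy as np
from scipy.stats import skellam

def zpd_pmf(k, pi0, mu1, mu2):
    """Zero-inflated Poisson difference: extra mass pi0 at zero,
    otherwise a Skellam(mu1, mu2)-distributed price change."""
    k = np.asarray(k)
    base = skellam.pmf(k, mu1, mu2)
    return np.where(k == 0, pi0 + (1 - pi0) * base, (1 - pi0) * base)

changes = np.arange(-4, 5)
print(zpd_pmf(changes, pi0=0.3, mu1=0.8, mu2=0.7))
```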
APA, Harvard, Vancouver, ISO, and other styles
20

Sha, Sha. "Performance Modelling and Analysis of Handover and Call Admission Control Algorithm for Next Generation Wireless Networks." Thesis, University of Bradford, 2011. http://hdl.handle.net/10454/5509.

Full text
Abstract:
The next generation wireless system (NGWS) has been conceived as a ubiquitous wireless environment. It integrates existing heterogeneous access networks, as well as future networks, and will offer high speed data, real-time applications (e.g. Voice over IP, videoconferencing) and real-time multimedia (e.g. real-time audio and video) support with a certain Quality of Service (QoS) level to mobile users. It is required that the mobile nodes have the capability of selecting services that are offered by each provider and determining the best path through the various networks. Efficient radio resource management (RRM) is one of the key issues required to support global roaming of the mobile users among different network architectures of the NGWS, and a precise call admission control (CAC) scheme satisfies the requirements of high network utilization, cost reduction, minimum handover latency and high-level QoS of all the connections. This thesis describes an adaptive class-based CAC algorithm, which is expected to prioritize the arriving channel resource requests based on the user's classification and the channel allocation policy. The proposed CAC algorithm is coupled with Fuzzy Logic (FL) and Pre-emptive Resume (PR) theories to manage and improve the performance of the integrated wireless network system. The novel algorithm is assessed using a mathematical analytic method to measure the performance by evaluating the handover dropping probability and the system utilization.
APA, Harvard, Vancouver, ISO, and other styles
21

Reynolds, Toby J. "Bayesian modelling of integrated data and its application to seabird populations." Thesis, University of St Andrews, 2010. http://hdl.handle.net/10023/1635.

Full text
Abstract:
Integrated data analyses are becoming increasingly popular in studies of wild animal populations where two or more separate sources of data contain information about common parameters. Here we develop an integrated population model using abundance and demographic data from a study of common guillemots (Uria aalge) on the Isle of May, southeast Scotland. A state-space model for the count data is supplemented by three demographic time series (productivity and two mark-recapture-recovery (MRR)), enabling the estimation of prebreeder emigration rate - a parameter for which there is no direct observational data, and which is unidentifiable in the separate analysis of MRR data. A Bayesian approach using MCMC provides a flexible and powerful analysis framework. This model is extended to provide predictions of future population trajectories. Adopting random effects models for the survival and productivity parameters, we implement the MCMC algorithm to obtain a posterior sample of the underlying process means and variances (and population sizes) within the study period. Given this sample, we predict future demographic parameters, which in turn allows us to predict future population sizes and obtain the corresponding posterior distribution. Under the assumption that recent, unfavourable conditions persist in the future, we obtain a posterior probability of 70% that there is a population decline of >25% over a 10-year period. Lastly, using MRR data we test for spatial, temporal and age-related correlations in guillemot survival among three widely separated Scottish colonies that have varying overlap in nonbreeding distribution. We show that survival is highly correlated over time for colonies/age classes sharing wintering areas, and essentially uncorrelated for those with separate wintering areas. These results strongly suggest that one or more aspects of winter environment are responsible for spatiotemporal variation in survival of British guillemots, and provide insight into the factors driving multi-population dynamics of the species.
APA, Harvard, Vancouver, ISO, and other styles
22

Vierheller, Janine. "Modelling excitation coupling in ventricular cardiac myocytes." Doctoral thesis, Humboldt-Universität zu Berlin, 2018. http://dx.doi.org/10.18452/19158.

Full text
Abstract:
Excitation-contraction coupling (ECC) is of central importance to enable the contraction of the cardiac myocyte via calcium influx. The electrical signal of a neighbouring cell causes the membrane depolarization of the sarcolemma and L-type Ca2+ channels (LCCs) open. The amplification process is initiated. This process is known as calcium-induced calcium release (CICR). The calcium influx through the LCCs activates the ryanodine receptors (RyRs) of the sarcoplasmic reticulum (SR). The Ca2+ release of the SR accumulates calcium in the cytoplasm. For many decades models for these processes were developed. However, previous models have not combined the spatially resolved concentration dynamics of the dyadic cleft, including the stochastic simulation of individual calcium channels, and the whole cell calcium dynamics with a whole cardiac myocyte electrophysiology model. In this study, we developed a novel approach to resolve concentration gradients from single channel to whole cell level by using quasistatic approximation and the finite element method for integrating partial differential equations. We ran a series of simulations with different RyR Markov chain models, different parameters for the SR components, sodium-calcium exchanger conditions, and included mitochondria to approximate the physiological behaviour of a rabbit ventricular cardiac myocyte. The new multi-scale simulation tool which we developed makes use of high performance computing to reveal detailed information about the distribution, regulation, and importance of components involved in ECC. This tool will find application in the investigation of heart contraction and heart failure.
APA, Harvard, Vancouver, ISO, and other styles
23

Roy, Shubhabrata. "A Complete Framework for Modelling Workload Volatility of VoD System - a Perspective to Probabilistic Management." PhD thesis, Ecole normale supérieure de Lyon - ENS LYON, 2014. http://tel.archives-ouvertes.fr/tel-01061418.

Full text
Abstract:
There are some new challenges in system administration and design to optimize the resource management for a cloud-based application. Some applications demand stringent performance requirements (e.g. delay and jitter bounds), while some applications exhibit bursty (volatile) workloads. This thesis proposes an epidemic-model-inspired (and continuous-time Markov chain based) framework, which can reproduce workload volatility, namely the "buzz effects" (when there is a sudden increase in a content's popularity) of a Video on Demand (VoD) system. Two estimation procedures (a heuristic and a Markov chain Monte Carlo (MCMC) based approach) are also proposed in this work to calibrate the model against workload traces. The model parameters obtained from the calibration procedures reveal some interesting properties of the model. Based on numerical simulations, the precision of both procedures has been analyzed, showing that both of them perform reasonably. However, the MCMC procedure outperforms the heuristic approach. This thesis also compares the proposed model with other existing models, examining the goodness-of-fit of some statistical properties of real workload traces. Finally, this work suggests a probabilistic resource provisioning approach based on a Large Deviation Principle (LDP). The LDP statistically characterizes the buzz effects that cause extreme workload volatility. This analysis exploits the information obtained using the LDP of the VoD system for defining resource management policies. These policies may be of some interest to all stakeholders in the emerging context of cloud networking.
APA, Harvard, Vancouver, ISO, and other styles
24

Filatenkova, Milana S. "Quantitative tool for in vivo analysis of DNA-binding proteins using High Resolution Sequencing Data." Thesis, University of Edinburgh, 2016. http://hdl.handle.net/1842/20414.

Full text
Abstract:
DNA-binding proteins (DBPs) such as repair proteins, DNA polymerases, recombinases, transcription factors, etc. manifest diverse stochastic behaviours dependent on physiological conditions inside the cell. Now that multiple independent in vitro studies have extensively characterised different aspects of the biochemistry of DBPs, computational and mathematical tools that would be able to integrate this information into a coherent framework are in huge demand, especially when attempting a transition to in vivo characterisation of these systems. ChIP-Seq is the method commonly used to study DBPs in vivo. This method generates high resolution sequencing data: a population-scale readout of the activity of DBPs on the DNA. The mathematical tools available for the analysis of this type of data are at the moment very restrictive in their ability to extract mechanistic and quantitative details on the activity of DBPs. The main trouble that researchers experience when analysing such population-scale sequencing data is effectively disentangling complexity in these data, since the observed output often combines diverse outcomes of multiple unsynchronised processes reflecting biomolecular variability. Although it is a static snapshot, ChIP-Seq can be effectively utilised as a readout for the dynamics of DBPs in vivo. This thesis features a new approach to ChIP-Seq analysis: accessing the concealed details of the dynamic behaviour of DBPs on DNA using probabilistic modelling, statistical inference and numerical optimisation. In order to achieve this, I propose to integrate previously acquired assumptions about the behaviour of DBPs into a Markov chain model, which allows their intrinsic stochasticity to be taken into account. By incorporating this model into a statistical model of data acquisition, the experimentally observed output can be simulated and then compared to in vivo data to reverse engineer the stochastic activity of DBPs on the DNA. Conventional tools normally employ simple empirical models where the parameters have no link with the mechanistic reality of the process under scrutiny. This thesis marks the transition from qualitative analysis to mechanistic modelling in an attempt to make the most of the high resolution sequencing data. It is also worth noting that from a computer science point of view DBPs are of great interest since they are able to perform stochastic computation on DNA by responding in a probabilistic manner to the patterns encoded in the DNA. The theoretical framework proposed here allows the complex responses of these molecular machines to sequence features to be characterised quantitatively.
APA, Harvard, Vancouver, ISO, and other styles
25

Galetti, Erica. "Seismic interferometry and non-linear tomography." Thesis, University of Edinburgh, 2015. http://hdl.handle.net/1842/10506.

Full text
Abstract:
Seismic records contain information that allows geoscientists to make inferences about the structure and properties of the Earth’s interior. Traditionally, seismic imaging and tomography methods require wavefields to be generated and recorded by identifiable sources and receivers, and use these directly-recorded signals to create models of the Earth’s subsurface. However, in recent years the method of seismic interferometry has revolutionised earthquake seismology by allowing unrecorded signals between pairs of receivers, pairs of sources, and source-receiver pairs to be constructed as Green’s functions using either cross-correlation, convolution or deconvolution of wavefields. In all of these formulations, seismic energy is recorded and emitted by surrounding boundaries of receivers and sources, which need not be active and impulsive but may even constitute continuous, naturally-occurring seismic ambient noise. In the first part of this thesis, I provide a comprehensive overview of seismic interferometry, its background theory, and examples of its application. I then test the theory and evaluate the effects of approximations that are commonly made when the interferometric formulae are applied to real datasets. Since errors resulting from some approximations can be subtle, these tests must be performed using almost error-free synthetic data produced with an exact waveform modelling method. To make such tests challenging the method and associated code must be applicable to multiply-scattering media. I developed such a modelling code specifically for interferometric tests and applications. Since virtually no errors are introduced into the results from modelling, any difference between the true and interferometric waveforms can safely be attributed to specific origins in interferometric theory. I show that this is not possible when using other, previously available methods: for example, the errors introduced into waveforms synthesised by finite-difference methods due to the modelling method itself, are larger than the errors incurred due to some (still significant) interferometric approximations; hence that modelling method can not be used to test these commonly-applied approximations. I then discuss the ability of interferometry to redatum seismic energy in both space and time, allowing virtual seismograms to be constructed at new locations where receivers may not have been present at the time of occurrence of the associated seismic source. I present the first successful application of this method to real datasets at multiple length scales. Although the results are restricted to limited bandwidths, this study demonstrates that the technique is a powerful tool in seismologists’ arsenal, paving the way for a new type of ‘retrospective’ seismology where sensors may be installed at any desired location at any time, and recordings of seismic events occurring at any other time can be constructed retrospectively – even long after their energy has dissipated. Within crustal seismology, a very common application of seismic interferometry is ambient-noise tomography (ANT). ANT is an Earth imaging method which makes use of inter-station Green’s functions constructed from cross-correlation of seismic ambient noise records. It is particularly useful in seismically quiescent areas where traditional tomography methods that rely on local earthquake sources would fail to produce interpretable results due to the lack of available data. 
Once constructed, interferometric Green’s functions can be analysed using standard waveform analysis techniques, and inverted for subsurface structure using more or less traditional imaging methods. In the second part of this thesis, I discuss the development and implementation of a fully non-linear inversion method which I use to perform Love-wave ANT across the British Isles. Full non-linearity is achieved by allowing both raypaths and model parametrisation to vary freely during inversion in Bayesian, Markov chain Monte Carlo tomography, the first time that this has been attempted. Since the inversion produces not only one, but a large ensemble of models, all of which fit the data to within the noise level, statistical moments of different order such as the mean or average model, or the standard deviation of seismic velocity structures across the ensemble, may be calculated: while the ensemble average map provides a smooth representation of the velocity field, a measure of model uncertainty can be obtained from the standard deviation map. In a number of real-data and synthetic examples, I show that the combination of variable raypaths and model parametrisation is key to the emergence of previously-unobserved, loop-like uncertainty topologies in the standard deviation maps. These uncertainty loops surround low- or high-velocity anomalies. They indicate that, while the velocity of each anomaly may be fairly well reconstructed, its exact location and size tend to remain uncertain; loops parametrise this location uncertainty, and hence constitute a fully non-linearised, Bayesian measure of spatial resolution. The uncertainty in anomaly location is shown to be due mainly to the location of the raypaths that were used to constrain the anomaly also only being known approximately. The emergence of loops is therefore related to the variation in raypaths with velocity structure, and hence to 2nd and higher order wave-physics. Thus, loops can only be observed using non-linear inversion methods such as the one described herein, explaining why these topologies have never been observed previously. I then present the results of fully non-linearised Love-wave group-velocity tomography of the British Isles in different frequency bands. At all of the analysed periods, the group-velocity maps show a good correlation with known geology of the region, and also robustly detect novel features. The shear-velocity structure with depth across the Irish Sea sedimentary basin is then investigated by inverting the Love-wave group-velocity maps, again fully non-linearly using Markov chain Monte Carlo inversion, showing an approximate depth to basement of 5 km. Finally, I discuss the advantages and current limitations of the fully non-linear tomography method implemented in this project, and provide guidelines and suggestions for its improvement.
APA, Harvard, Vancouver, ISO, and other styles
26

Trabelsi, Brahim. "Simulation numérique de l’écoulement et mélange granulaires par des éléments discrets ellipsoïdaux." Phd thesis, Toulouse, INPT, 2013. http://oatao.univ-toulouse.fr/9300/1/trabelsi.pdf.

Full text
Abstract:
Granular materials are ubiquitous: they are found both in nature and in many industrial applications. Among the industrial applications using granular materials is powder mixing in the food, chemical, metallurgical and pharmaceutical industries. Characterising and studying the behaviour of these materials is necessary for understanding several natural phenomena, such as dune movement and snow avalanches, and industrial processes such as the flow and mixing of grains in a blender. The varied behaviour of granular materials makes them impossible to classify among the three states of matter: solid, liquid and gas. This has led to their being described as a "fourth state" of matter, situated between solid and liquid. The objective of this work is to design and implement efficient discrete element methods (DEM) for the simulation and analysis of mixing and segregation processes of ellipsoidal particles in industrial tumbling mixers such as the hoop mixer. In the DEM, the most CPU-time-critical step is contact detection and resolution, so making the DEM efficient requires optimising this step. We propose to combine the geometric potential model with the algebraic contact condition between two ellipsoids proposed by Wang et al. to build an efficient algorithm for detecting external contact between ellipsoidal particles, and then prove a theoretical result and devise an algorithm for internal contact. Furthermore, coupling the DEM with a Markov chain reduces the simulation time very substantially: the transition matrix is determined from a short simulation, and the state of the system is then computed with the Markov chain model. Indeed, using the theory of strictly positive matrices and the Perron-Frobenius theorem, one can approximate the number of transitions required for convergence to a given state.
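As a sketch of the DEM-Markov chain coupling described above, the snippet below estimates a transition matrix from a short (here synthetic) sequence of particle cell indices and then propagates the occupancy distribution until it converges; the strictly positive matrix guarantees convergence by the Perron-Frobenius theorem. The cell discretisation, the fake trajectory and the tolerance are illustrative assumptions.

```python
import numpy as np

def estimate_transition_matrix(cell_sequence, n_cells):
    """Count cell-to-cell moves of a tracked particle during a short DEM run."""
    counts = np.zeros((n_cells, n_cells))
    for a, b in zip(cell_sequence[:-1], cell_sequence[1:]):
        counts[a, b] += 1
    counts += 1e-6                      # keep the matrix strictly positive (Perron-Frobenius)
    return counts / counts.sum(axis=1, keepdims=True)

def propagate(p0, P, tol=1e-8, max_steps=10_000):
    """Advance the occupancy distribution until it stops changing."""
    p = np.asarray(p0, dtype=float)
    for k in range(max_steps):
        p_next = p @ P
        if np.abs(p_next - p).sum() < tol:
            return p_next, k            # mixed (stationary) state and steps needed to reach it
        p = p_next
    return p, max_steps

# Usage: a synthetic cell-index trajectory standing in for short DEM output.
rng = np.random.default_rng(0)
trajectory = rng.integers(0, 4, size=500)
P = estimate_transition_matrix(trajectory, n_cells=4)
stationary, steps = propagate([1.0, 0.0, 0.0, 0.0], P)
print(stationary, steps)
```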
APA, Harvard, Vancouver, ISO, and other styles
27

Widén, Joakim. "System Studies and Simulations of Distributed Photovoltaics in Sweden." Doctoral thesis, Uppsala universitet, Fasta tillståndets fysik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-132907.

Full text
Abstract:
Grid-connected photovoltaic (PV) capacity is increasing worldwide, mainly due to extensive subsidy schemes for renewable electricity generation. A majority of newly installed systems are distributed small-scale systems located in distribution grids, often at residential customers. Recent developments suggest that such distributed PV generation (PV-DG) could gain more interest in Sweden in the near future. With prospects of decreasing system prices, an extensive integration does not seem impossible. In this PhD thesis the opportunities for utilisation of on-site PV generation and the consequences of a widespread introduction are studied. The specific aims are to improve modelling of residential electricity demand to provide a basis for simulations, to study load matching and grid interaction of on-site PV and to add to the understanding of power system impacts. Time-use data (TUD) provided a realistic basis for residential load modelling. Both a deterministic and a stochastic approach for generating different types of end-use profiles were developed. The models are capable of realistically reproducing important electric load properties such as diurnal and seasonal variations, short time-scale fluctuations and random load coincidence. The load matching capability of residential on-site PV was found to be low by default but possible to improve to some extent by different measures. Net metering reduces the economic effects of the mismatch and has a decisive impact on the production value and on the system sizes that are reasonable to install for a small-scale producer. Impacts of large-scale PV-DG on low-voltage (LV) grids and on the national power system were studied. Power flow studies showed that voltage rise in LV grids is not a limiting factor for integration of PV-DG. Variability and correlations with large-scale wind power were determined using a scenario for large-scale building-mounted PV. Profound impacts on the power system were found only for the most extreme scenarios.
Erroneously printed as Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Science and Technology 711
APA, Harvard, Vancouver, ISO, and other styles
28

Dorff, Rebecca. "Modelling Infertility with Markov Chains." BYU ScholarsArchive, 2013. https://scholarsarchive.byu.edu/etd/4070.

Full text
Abstract:
Infertility affects approximately 15% of couples. Testing and interventions are costly in time, money, and emotional energy. This paper will discuss using Markov decision and multi-armed bandit processes to identify a systematic approach to interventions that will lead to the desired baby while minimizing costs.
APA, Harvard, Vancouver, ISO, and other styles
29

O'Leary, Rebecca A. "Informed statistical modelling of habitat suitability for rare and threatened species." Queensland University of Technology, 2008. http://eprints.qut.edu.au/17779/.

Full text
Abstract:
In this thesis a number of statistical methods have been developed and applied to habitat suitability modelling for rare and threatened species. Data available on these species are typically limited. Therefore, developing these models from these data can be problematic and may produce prediction biases. To address these problems there are three aims of this thesis. The first aim is to develop and implement frequentist and Bayesian statistical modelling approaches for these types of data. The second aim is to develop and implement expert elicitation methods. The third aim is to apply these novel approaches to Australian rare and threatened species case studies with the intention of habitat suitability modelling. The first aim is fulfilled by investigating two innovative approaches for habitat suitability modelling and a sensitivity analysis of the second approach to priors. The first approach is a new multilevel framework developed to model the species distribution at multiple scales and identify excess zeros (absences outside the species range). A statistical modelling approach to the identification of excess zeros has not previously been attempted. The second approach is an extension and application of Bayesian classification trees to modelling the habitat suitability of a threatened species. This is the first 'real' application of this approach in ecology. Lastly, sensitivity analysis of the priors in Bayesian classification trees is examined for a real case study; previously, the sensitivity of this approach to priors had not been examined. To address the second aim, expert elicitation methods are developed, extended and compared in this thesis. In particular, one elicitation approach is extended from previous research, three elicitation methods are compared, and one new elicitation approach is proposed. These approaches are illustrated for habitat suitability modelling of a rare species, and the opinions of one or two experts are elicited. The first approach utilises a simple questionnaire, in which expert opinion is elicited on whether increasing values of a covariate either increases, decreases or does not substantively impact on a response. This approach is extended to express this information as a mixture of three normally distributed prior distributions, which are then combined with available presence/absence data in a logistic regression. This is one of the first elicitation approaches within the habitat suitability modelling literature that is appropriate for experts with limited statistical knowledge and can be used to elicit information from single or multiple experts. Three relatively new approaches to eliciting expert knowledge in a form suitable for Bayesian logistic regression are compared, one of which is the questionnaire approach. This comparison includes a summary of the advantages and disadvantages of the three methods, the results from the elicitations, and a comparison of the prior and posterior distributions. An expert elicitation approach is also developed for classification trees, in which the size and structure of the tree are elicited; numerous elicitation approaches have been proposed for logistic regression, but none had been suggested for classification trees. The last aim of this thesis is addressed in all chapters, since the statistical approaches proposed and extended in this thesis have been applied to real case studies. Two case studies have been examined in this thesis.
The first is the rare native Australian thistle (Stemmacantha australis), for which the dataset contains a large number of absences distributed over the majority of Queensland and a small number of presence sites that lie only within South-East Queensland. This case study motivated the multilevel modelling framework. The second case study is the threatened Australian brush-tailed rock-wallaby (Petrogale penicillata). The application and sensitivity analysis of Bayesian classification trees, and all expert elicitation approaches investigated in this thesis, are applied to this case study. This work has several implications for the conservation and management of rare and threatened species. Novel statistical approaches addressing the first aim provide extensions to existing methods, or propose a new approach, for identification of current and potential habitat. We demonstrate that better model predictions can be achieved using each method, compared to standard techniques. Elicitation approaches addressing the second aim ensure expert knowledge in various forms can be harnessed for habitat modelling, a particular benefit for rare and threatened species, which typically have limited data. Throughout, innovations in statistical methodology are both motivated and illustrated via habitat modelling for two rare and threatened species: the native thistle Stemmacantha australis and the brush-tailed rock-wallaby Petrogale penicillata.
APA, Harvard, Vancouver, ISO, and other styles
30

Dixon, William J., and bill dixon@dse vic gov au. "Uncertainty in Aquatic Toxicological Exposure-Effect Models: the Toxicity of 2,4-Dichlorophenoxyacetic Acid and 4-Chlorophenol to Daphnia carinata." RMIT University. Biotechnology and Environmental Biology, 2005. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20070119.163720.

Full text
Abstract:
Uncertainty is pervasive in risk assessment. In ecotoxicological risk assessments, it arises from such sources as a lack of data, the simplification and abstraction of complex situations, and ambiguities in assessment endpoints (Burgman 2005; Suter 1993). When evaluating and managing risks, uncertainty needs to be explicitly considered in order to avoid erroneous decisions and to be able to make statements about the confidence that we can place in risk estimates. Although informative, previous approaches to dealing with uncertainty in ecotoxicological modelling have been found to be limited, inconsistent and often based on assumptions that may be false (Ferson & Ginzburg 1996; Suter 1998; Suter et al. 2002; van der Hoeven 2004; van Straalen 2002a; Verdonck et al. 2003a). In this thesis a Generalised Linear Modelling approach is proposed as an alternative, congruous framework for the analysis and prediction of a wide range of ecotoxicological effects. This approach was used to investigate the results of toxicity experiments on the effect of 2,4-Dichlorophenoxyacetic Acid (2,4-D) formulations and 4-Chlorophenol (4-CP, an associated breakdown product) on Daphnia carinata. Differences between frequentist Maximum Likelihood (ML) and Bayesian Markov-Chain Monte-Carlo (MCMC) approaches to statistical reasoning and model estimation were also investigated. These approaches are inferentially disparate and place different emphasis on aleatory and epistemic uncertainty (O'Hagan 2004). Bayesian MCMC and Probability Bounds Analysis methods for propagating uncertainty in risk models are also compared for the first time. For simple models, Bayesian and frequentist approaches to Generalised Linear Model (GLM) estimation were found to produce very similar results when non-informative prior distributions were used for the Bayesian models. Potency estimates and regression parameters were found to be similar for identical models, signifying that Bayesian MCMC techniques are at least a suitable and objective replacement for frequentist ML for the analysis of exposure-response data. Applications of these techniques demonstrated that Amicide formulations of 2,4-D are more toxic to Daphnia than their unformulated, Technical Acid parent. Different results were obtained from Bayesian MCMC and ML methods when more complex models and data structures were considered. In the analysis of 4-CP toxicity, the treatment of two different factors as fixed or random in standard and Mixed-Effect models was found to affect variance estimates to the degree that different conclusions would be drawn from the same model, fit to the same data. Associated discrepancies in the treatment of overdispersion between ML and Bayesian MCMC analyses were also found to affect results. Bayesian MCMC techniques were found to be superior to the ML ones employed for the analysis of complex models because they enabled the correct formulation of hierarchical (nested) data structures within a binomial logistic GLM. Application of these techniques to the analysis of results from 4-CP toxicity testing on two strains of Daphnia carinata found that between-experiment variability was greater than that within-experiments or between-strains. Perhaps surprisingly, this indicated that long-term laboratory culture had not significantly affected the sensitivity of one strain when compared to cultures of another strain that had recently been established from field populations.
The results from this analysis highlighted the need for repetition of experiments, proper model formulation in complex analyses and careful consideration of the effects of pooling data on characterising variability and uncertainty. The GLM framework was used to develop three-dimensional surface models of the effects of different length pulse exposures, and subsequent delayed toxicity, of 4-CP on Daphnia. These models described the relationship between exposure duration and intensity (concentration) on toxicity, and were constructed for both pulse and delayed effects. Statistical analysis of these models found that significant delayed effects occurred following the full range of pulse exposure durations, and that both exposure duration and intensity interacted significantly and concurrently with the delayed effect. These results indicated that failure to consider delayed toxicity could lead to significant underestimation of the effects of pulse exposure, and therefore increase uncertainty in risk assessments. A number of new approaches to modelling ecotoxicological risk and to propagating uncertainty were also developed and applied in this thesis. In the first of these, a method for describing and propagating uncertainty in conventional Species Sensitivity Distribution (SSD) models was described. This utilised Probability Bounds Analysis to construct a nonparametric 'probability box' on an SSD based on EC05 estimates and their confidence intervals. Predictions from this uncertain SSD and the confidence interval extrapolation methods described by Aldenberg and colleagues (2000; 2002a) were compared. It was found that the extrapolation techniques underestimated the width of uncertainty (confidence) intervals by 63% and the upper bound by 65%, when compared to the Probability Bounds (P-Bounds) approach, which was based on actual confidence estimates derived from the original data. An alternative approach to formulating ecotoxicological risk modelling was also proposed and was based on a Binomial GLM. In this formulation, the model is first fit to the available data in order to derive mean and uncertainty estimates for the parameters. This 'uncertain' GLM model is then used to predict the risk of effect from possible or observed exposure distributions. This risk is described as a whole distribution, with a central tendency and uncertainty bounds derived from the original data and the exposure distribution (if this is also 'uncertain'). Bayesian and P-Bounds approaches to propagating uncertainty in this model were compared using an example of the risk of exposure to a hypothetical (uncertain) distribution of 4-CP for the two Daphnia strains studied. This comparison found that the Bayesian and P-Bounds approaches produced very similar mean and uncertainty estimates, with the P-Bounds intervals always being wider than the Bayesian ones. This difference is due to the different methods for dealing with dependencies between model parameters by the two approaches, and is confirmation that the P-Bounds approach is better suited to situations where data and knowledge are scarce. The advantages of the Bayesian risk assessment and uncertainty propagation method developed are that it allows calculation of the likelihood of any effect occurring, not just the (probability) bounds, and that the same software (WinBUGS) and model construction may be used to fit regression models and predict risks simultaneously.
The GLM risk modelling approaches developed here are able to explain a wide range of response shapes (including hormesis) and underlying (non-normal) distributions, and do not involve expression of the exposure-response as a probability distribution, hence solving a number of problems found with previous formulations of ecotoxicological risk. The approaches developed can also be easily extended to describe communities, include modifying factors, mixed-effects, population growth, carrying capacity and a range of other variables of interest in ecotoxicological risk assessments. While the lack of data on the toxicological effects of chemicals is the most significant source of uncertainty in ecotoxicological risk assessments today, methods such as those described here can assist by quantifying that uncertainty so that it can be communicated to stakeholders and decision makers. As new information becomes available, these techniques can be used to develop more complex models that will help to bridge the gap between the bioassay and the ecosystem.
APA, Harvard, Vancouver, ISO, and other styles
31

Clark, Edward Blair. "A framework for modelling stochastic optimisation algorithms with Markov chains." Thesis, University of York, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.495863.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Erlwein, Christina. "Applications of hidden Markov models in financial modelling." Thesis, Brunel University, 2008. http://bura.brunel.ac.uk/handle/2438/7898.

Full text
Abstract:
Various models driven by a hidden Markov chain in discrete or continuous time are developed to capture the stylised features of market variables whose levels or values constitute the underliers of financial derivative contracts or investment portfolios. Since the parameters switch between regimes, changes and developments in the economy are readily reflected in these models as soon as they arise. The change of probability measure technique and the EM algorithm are fundamental techniques utilised in the optimal parameter estimation. Recursive adaptive filters for the state of the Markov chain and other auxiliary processes related to the Markov chain are derived, which in turn yield self-tuning dynamic financial models. A hidden Markov model (HMM)-based modelling set-up for commodity prices is developed and the predictability of the gold market under this setting is examined. An Ornstein-Uhlenbeck (OU) model with HMM parameters is proposed and, under this set-up, we address two statistical inference issues: the sensitivity of the model to small changes in parameter estimates and the selection of the optimal number of states. The extended OU model is implemented on a data set of 30-day Canadian T-bill yields. An exponential of a Markov-switching OU process plus a compound Poisson process is put forward as a model for the evolution of electricity spot prices. Using a data set compiled by Nord Pool, we illustrate the vast improvements gained by incorporating regimes in the model. A multivariate HMM is employed as a framework for solving two asset allocation problems; one involves the mean-variance utility function and the other entails a CVaR constraint. Finally, the valuation of credit default swaps highlights the important considerations necessitated by pricing in a regime-switching environment. Certain numerical schemes are applied to obtain approximations for the default probabilities and swap rates.
APA, Harvard, Vancouver, ISO, and other styles
33

Groen, Maria Margaretha de. "Modelling interception and transpiration at monthly time steps : introducing daily variability through Markov chains /." Lisse : Swets & Zeitlinger, 2002. http://www.loc.gov/catdir/enhancements/fy0647/2003275124-d.html.

Full text
Abstract:
Thesis (doctoral) - Delft University of Technology, Delft, 2002.
"Dissertation submitted in fulfillment of the requirements of the Board for Doctorates of Delft University of Technology and of the Academic Board of the International Institute for Infrastructural, Hydraulic and Environmental Engineering for the Degree of Doctor to be defended in public on Monday, 29 April 2002 at 13:30 hours in Delft, The Netherlands." Includes bibliographical references (p. [191]-199).
APA, Harvard, Vancouver, ISO, and other styles
34

Bondesson, Carl. "Modelling of Safety Concepts for Autonomous Vehicles using Semi-Markov Models." Thesis, Uppsala universitet, Signaler och System, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-353060.

Full text
Abstract:
Autonomous vehicles will soon be a reality in everyday life. However, before they can be used commercially, the vehicles need to be proven safe. The current standard for functional safety on roads, ISO 26262, does not yet cover autonomous vehicles, which is why in this project an approach using semi-Markov models is used to assess safety. A semi-Markov process is a stochastic process described by a state space model in which the transitions between states can be arbitrarily distributed. The approach is realized as a MATLAB tool in which the user can apply a steady-state based analysis, a Loss and Risk based measure of safety, to assess safety. The tool works and can assess the safety of semi-Markov systems as long as they are irreducible and positive recurrent. For systems that fulfil these properties, it is possible to draw conclusions about the safety of the system through a risk analysis, and about which autonomous driving level the system is in through a sensitivity analysis. The developed tool, or the semi-Markov modelling approach more generally, might be a good complement to ISO 26262.
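The steady-state computation underlying such a steady-state based analysis can be sketched as follows: the stationary distribution of the embedded chain of an irreducible, positive recurrent semi-Markov process is weighted by the mean sojourn times. The three-state example (nominal, degraded, safe stop) and its numbers are purely illustrative and not taken from the thesis.

```python
import numpy as np

def semi_markov_steady_state(P, mean_sojourn):
    """Long-run state probabilities of an irreducible, positive recurrent semi-Markov process.

    P            : transition matrix of the embedded Markov chain
    mean_sojourn : expected holding time in each state (only the mean matters here)
    """
    # Stationary distribution of the embedded chain: left eigenvector of P for eigenvalue 1.
    eigvals, eigvecs = np.linalg.eig(P.T)
    v = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1))])
    pi = v / v.sum()
    # Weight by how long the process actually spends in each state per visit.
    w = pi * mean_sojourn
    return w / w.sum()

# Usage: illustrative 3-state safety concept (0 = nominal, 1 = degraded, 2 = safe stop).
P = np.array([[0.0, 0.9, 0.1],
              [0.7, 0.0, 0.3],
              [1.0, 0.0, 0.0]])
mean_sojourn = np.array([100.0, 5.0, 1.0])   # e.g. seconds spent per visit
print(semi_markov_steady_state(P, mean_sojourn))
```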
APA, Harvard, Vancouver, ISO, and other styles
35

Greening, Philip. "The influence of market structure, collaboration and price competition on supply network disruptions in open and closed markets." Thesis, Cranfield University, 2013. http://dspace.lib.cranfield.ac.uk/handle/1826/8473.

Full text
Abstract:
The relaxation of international boundaries has enabled the globalisation of markets, making available an ever increasing number of specialised suppliers and markets. Inevitably this results in supply chains sharing suppliers and customers, reflected in a network of relationships. Within this context, firms' buyers configure their supply relationships based on their perception of supply risk. Risk is managed either by increasing trust or commitment or by increasing the number of suppliers. Increasing trust and commitment facilitates collaboration and reduces the propensity for a supplier to exit the relationship. Conversely, increasing the number of suppliers reduces dependency and increases the ease of making alternative supply arrangements. The emergent network of relationships is dynamic, complex and, due in no small part to the influence of inventory management practices, tightly coupled. This critical organisation of the network describes a system that, contrary to existing supply chain conceptualisations, exists far from equilibrium, requiring a different, more appropriate theoretical lens through which to view it. This thesis adopts a Complex Adaptive Systems (CAS) perspective to position supply networks as tightly coupled complex systems which, according to Normal Accident Theory (NAT), are vulnerable to disruptions as a consequence of normal operations. The consequently boundless and emergent nature of supply networks makes them difficult to research using traditional empirical methods; instead, this research builds a generalised agent-based computer model of a supply network, allowing network constituents (agents) to take autonomous parallel action reflecting the true emergent nature of supply networks. This thesis uses the results from a series of carefully designed computer experiments to elucidate how supply networks respond to a variety of market structures and permitted agent behaviours. Market structures define the vertical (between-tier) and horizontal (within-tier) levels of price differentiation. Within each structure, agents are permitted to autonomously modify their prices (constrained by the market structure) and to collaborate by sharing demand information. By examining how supply networks respond to different permitted agent behaviours in a range of market structures, this thesis makes four contributions. Firstly, it extends NAT by incorporating the adaptive nature of supply network constituents. Secondly, it extends supply chain management by specifying supply networks as dynamic, not static, phenomena. Thirdly, it extends supply chain risk management by developing an understanding of the impact of different permitted behaviour combinations on the network's vulnerability to disruptions in the context of normal operations. Finally, by developing an understanding of how normal operations affect a supply network's vulnerability to disruptions, it informs the practice of supply chain risk management.
APA, Harvard, Vancouver, ISO, and other styles
36

Voskoglou, Michael Gr. "Mathematical modelling in classroom: The importance of validation of the constructed model." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-83164.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Zingmark, Per-Henrik. "Models for Ordered Categorical Pharmacodynamic Data." Doctoral thesis, Uppsala : Acta Universitatis Upsaliensis: Univ.-bibl. [distributör], 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-6125.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Dobay, Eduardo Sangiorgio. "Complexidade e tomada de decisão." Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/43/43134/tde-22012015-135228/.

Full text
Abstract:
In this work we developed a simple probabilistic modelling framework to describe the decision-making process of human agents who are presented with the task of predicting elements of a random sequence generated by a Markov chain of memory L. The framework arises from a Bayesian approach in which the agent infers a probability distribution from a series of observations of the sequence and of its own answers, under the assumption that the agent's memory has length K. As a result of the Bayesian approach, the agent adopts an optimal strategy that consists in perseverating on the most likely alternative given the history of the last few trials. Because of this, and because of experimental evidence that humans tend to adopt suboptimal strategies such as probability matching in this kind of problem, variations on the model were developed in an attempt to describe the behaviour adopted by humans more closely. In that sense, the 'shift' (possible action taken by the agent on its response) and 'reward' (possible result of the action) variables were adopted in the formulation of the model, and parameters inspired by models of dopaminergic action were added to allow deviation from the optimal strategy resulting from the Bayesian approach. The models built in this framework were simulated computationally for many values of the parameters, including the agent's memory K and the Markov chain's memory L. Through correlation analysis, these results were compared to experimental data, from a research group at the Institute of Biomedical Sciences at USP, on decision-making tasks involving people of various ages (3 to 73 years old) and Markov chains of memories 0, 1 and 2. In this comparison it was concluded that the differences between age groups in the experiment can be explained in our modelling through variation of the agent's memory length K (children up to 5 years old show a limit of K = 1, and those up to 12 years old show a limit of K = 2) and through variation of a learning reinforcement parameter: depending on the group and the decision situation to which the individuals were exposed, the fitted value of this parameter ranged from 10% below to 30% above its original value according to the Bayesian approach.
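A minimal sketch of the Bayesian agent described in this abstract is given below: the agent keeps Dirichlet(1)-smoothed counts of outcomes conditional on the last K symbols and either perseverates on the most probable alternative or probability-matches. The choice of prior, the binary alphabet and the two response rules are illustrative assumptions.

```python
import numpy as np
rng = np.random.default_rng(1)

def generate_sequence(L, n, n_symbols=2):
    """Random sequence from a Markov chain of memory L over {0, ..., n_symbols - 1}."""
    table = rng.dirichlet(np.ones(n_symbols), size=n_symbols ** L)
    seq = list(rng.integers(0, n_symbols, size=L))
    for _ in range(n - L):
        ctx = int("".join(map(str, seq[-L:])), base=n_symbols)
        seq.append(int(rng.choice(n_symbols, p=table[ctx])))
    return seq

def agent_accuracy(seq, K, n_symbols=2, matching=False):
    """Bayesian agent with memory K and a Dirichlet(1) prior over next-symbol counts."""
    counts = np.ones((n_symbols ** K, n_symbols))            # prior pseudo-counts
    preds = []
    for t in range(K, len(seq)):
        ctx = int("".join(map(str, seq[t - K:t])), base=n_symbols)
        p = counts[ctx] / counts[ctx].sum()
        # Either perseverate on the most likely symbol or probability-match.
        preds.append(int(rng.choice(n_symbols, p=p)) if matching else int(np.argmax(p)))
        counts[ctx, seq[t]] += 1                             # Bayesian update after feedback
    return np.mean(np.array(preds) == np.array(seq[K:]))     # fraction of correct predictions

# Usage: prediction accuracy of perseverating vs probability-matching agents, for several K.
seq = generate_sequence(L=2, n=2000)
for K in (1, 2, 3):
    print(K, agent_accuracy(seq, K), agent_accuracy(seq, K, matching=True))
```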
APA, Harvard, Vancouver, ISO, and other styles
39

Razetti, Agustina. "Modélisation et caractérisation de la croissance des axones à partir de données in vivo." Thesis, Université Côte d'Azur (ComUE), 2018. http://www.theses.fr/2018AZUR4016/document.

Full text
Abstract:
How the brain wires up during development remains an open question across scientific disciplines. Fruitful efforts have been made to elucidate the mechanisms of axonal growth, such as pathfinding and guidance molecules. However, recent evidence suggests that other actors are involved in neuron growth in vivo. Notably, axons develop in populations and are embedded in mechanically constrained environments. Thus, to fully understand this dynamic process, one must take into account collective mechanisms and mechanical interactions within the axonal populations. However, techniques to measure this directly in living brains are currently lacking or heavy to implement. This thesis emerges from a multidisciplinary collaboration to shed light on axonal development in vivo and on how complex adult axonal morphologies are attained. Our work is inspired and validated by images of single wild-type and mutated Drosophila γ axons, which we segmented and normalised. We first proposed a mathematical framework for the morphological study and classification of axonal groups. From this analysis we hypothesised that axon growth derives from a stochastic process, and that the variability and complexity of axonal trees result from its intrinsic nature, as well as from elongation strategies developed to overcome the mechanical constraints of the developing brain. We designed a mathematical model of single axon growth based on Gaussian Markov chains with two parameters, accounting for axon rigidity and attraction to the target field. We estimated the model parameters from data, and simulated the growing axons embedded in spatially constrained populations to test our hypothesis. We dealt with themes from applied mathematics as well as from biology, and unveiled unexplored effects of collective growth on axonal development in vivo.
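A minimal two-dimensional sketch of a growth model of this kind is given below: each elongation step is a Gaussian perturbation of the previous growth direction (rigidity) biased towards a target point (attraction). The parameter names kappa and lam and all numerical values are illustrative assumptions, not the fitted values reported in the thesis.

```python
import numpy as np
rng = np.random.default_rng(0)

def grow_axon(n_steps=200, step=1.0, kappa=0.8, lam=0.1,
              target=(100.0, 0.0), sigma=0.3):
    """2-D axon tip trajectory from a Gaussian Markov chain on growth directions.

    kappa : rigidity   (weight given to the previous growth direction)
    lam   : attraction (weight given to the unit vector pointing at the target field)
    sigma : standard deviation of the Gaussian directional noise
    """
    target = np.asarray(target, dtype=float)
    pos = np.zeros((n_steps + 1, 2))
    direction = np.array([1.0, 0.0])                # initial outgrowth direction
    for t in range(n_steps):
        to_target = target - pos[t]
        to_target /= np.linalg.norm(to_target)
        mean_dir = kappa * direction + lam * to_target
        new_dir = mean_dir + sigma * rng.standard_normal(2)   # Gaussian Markov step
        direction = new_dir / np.linalg.norm(new_dir)
        pos[t + 1] = pos[t] + step * direction
    return pos

# Usage: a rigid, weakly attracted axon versus a floppy, strongly attracted one.
straight = grow_axon(kappa=0.95, lam=0.05)
wiggly = grow_axon(kappa=0.30, lam=0.40)
print(straight[-1], wiggly[-1])                     # final tip positions
```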
APA, Harvard, Vancouver, ISO, and other styles
40

Khayyat, Khalid M. Jamil. "Performance modelling and QoS support for wireless Ad Hoc networks." Thesis, 2011. http://hdl.handle.net/1828/3632.

Full text
Abstract:
We present a Markov chain analysis for studying the performance of wireless ad hoc networks. The models presented in this dissertation support an arbitrary backoff strategy. We found that the most important parameter affecting the performance of binary exponential backoff is the initial backoff window size. Our experimental results show that the probability of collision can be reduced when the initial backoff window size equals the number of terminals. Thus, the throughput of the system increases and, at the same time, the delay to transmit the frame is reduced. In our second contribution, we present a new analytical model of a Medium Access Control (MAC) layer for wireless ad hoc networks that takes into account frame retry limits for a four-way handshaking mechanism. This model offers flexibility to address some design issues such as the effects of traffic parameters as well as possible improvements for wireless ad hoc networks. It effectively captures important network performance characteristics such as throughput, channel utilization, delay, and average energy. Under this analytical framework, we evaluate the effect of the Request-to-Send (RTS) state on unsuccessful transmission probability and its effect on performance, particularly when the hidden terminal problem is dominant, the traffic is heavy, or the data frame length is very large. By using our proposed model, we show that the probability of collision can be reduced when using a Request-to-Send/Clear-to-Send (RTS/CTS) mechanism. Thus, the throughput increases and, at the same time, the delay and the average energy to transmit the frame decrease. In our third contribution, we present a new analytical model of a MAC layer for wireless ad hoc networks that takes into account channel bit errors and frame retry limits for a two-way handshaking mechanism. This model offers flexibility to address design issues such as the effects of traffic parameters and possible improvements for wireless ad hoc networks. We illustrate that an important parameter affecting the performance of binary exponential backoff is the initial backoff window size. We show that for a low bit error rate (BER) the throughput increases and, at the same time, the delay and the average energy to transmit the frame decrease. Results also show that the negative acknowledgment-based (NAK-based) model proves more useful for a high BER. In our fourth contribution, we present a new analytical model of a MAC layer for wireless ad hoc networks that takes into account Quality of Service (QoS) of the MAC layer for a two-way handshaking mechanism. The model includes a high priority traffic class (class 1) and a low priority traffic class (class 2). Extension of the model to more QoS levels is easily accomplished. We illustrate how the Arbitration InterFrame Space (AIFS) and small backoff window size limits affect performance: they cause the frame to start contending for the channel earlier and to complete the backoff sooner. As a result, the probability of sending the frame increases. Under this analytical framework, we evaluate the effect of QoS on successful transmission probability and its effect on performance, particularly when high priority traffic is dominant.
Graduate
APA, Harvard, Vancouver, ISO, and other styles
41

Thyer, Mark Andrew. "Modelling Long-Term Persistence in Hydrological Time Series." 2001. http://hdl.handle.net/1959.13/24891.

Full text
Abstract:
The hidden state Markov (HSM) model is introduced as a new conceptual framework for modelling long-term persistence in hydrological time series. Unlike the stochastic models currently used, the conceptual basis of the HSM model can be related to the physical processes that influence long-term hydrological time series in the Australian climatic regime. A Bayesian approach was used for model calibration. This enabled rigorous evaluation of parameter uncertainty, which proved crucial for the interpretation of the results. Applying the single site HSM model to rainfall data from selected Australian capital cities provided some revealing insights. In eastern Australia, where there is a significant influence from the tropical Pacific weather systems, the results showed a weak wet and medium dry state persistence was likely to exist. In southern Australia the results were inconclusive. However, they suggested a weak wet and strong dry persistence structure may exist, possibly due to the infrequent incursion of tropical weather systems in southern Australia. This led to the postulate that the tropical weather systems are the primary cause of two-state long-term persistence. The single and multi-site HSM model results for the Warragamba catchment rainfall data supported this hypothesis. A strong two-state persistence structure was likely to exist in the rainfall regime of this important water supply catchment. In contrast, the single and multi-site results for the Williams River catchment rainfall data were inconsistent. This illustrates that further work is required to understand the application of the HSM model. Comparisons with the lag-one autoregressive [AR(1)] model showed that it was not able to reproduce the same long-term persistence as the HSM model. However, with record lengths typical of real data, the difference between the two approaches was not statistically significant. Nevertheless, it was concluded that the HSM model provides a conceptually richer framework than the AR(1) model.
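A minimal generative sketch of the two-state hidden state Markov idea follows: a hidden wet/dry climate state evolves as a Markov chain and annual rainfall is drawn from a state-dependent normal distribution. All transition probabilities and rainfall statistics are illustrative, not the calibrated values of the thesis.

```python
import numpy as np
rng = np.random.default_rng(42)

def simulate_hsm(n_years, p_stay_wet=0.9, p_stay_dry=0.8,
                 mu=(900.0, 600.0), sd=(150.0, 120.0)):
    """Annual rainfall from a two-state hidden state Markov (HSM) model.

    State 0 is a 'wet' climate state and state 1 a 'dry' one; p_stay_* give the
    persistence of each hidden state, and mu/sd are the state-conditional
    rainfall mean and standard deviation (mm/year).
    """
    P = np.array([[p_stay_wet, 1 - p_stay_wet],
                  [1 - p_stay_dry, p_stay_dry]])
    state = 0
    rain = np.empty(n_years)
    for t in range(n_years):
        rain[t] = rng.normal(mu[state], sd[state])
        state = rng.choice(2, p=P[state])           # hidden state persists between years
    return rain

# Long-term persistence shows up as positive lag-1 autocorrelation of annual totals.
series = simulate_hsm(10_000)
print(np.corrcoef(series[:-1], series[1:])[0, 1])
```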
PhD Doctorate
APA, Harvard, Vancouver, ISO, and other styles
42

PUTRA, MARWAN SURACHMAN, and Marwan Surachman Putra. "MODELLING LAND USE AND LAND COVER CHANGE BY INTEGRATING CELLULAR AUTOMATA AND MARKOV CHAIN- A CASE STUDY IN CILIWUNG WATERSHED, INDONESIA." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/ee26d6.

Full text
Abstract:
Master's thesis
National Taipei University
English-taught Master's Degree Program in Urban Governance
Academic year 107 (2018)
The steep increase in the urban population of Jakarta has exerted heavy pressure on land resources in the periphery over the last few decades, especially in the Ciliwung watershed, which contributes significantly to environmental problems in Jakarta. Land use simulations are of particular interest to rural-urban and regional planners and to governments, because the future impacts of actions and policies are decisive for a more sustainable future. The purposes of this thesis are to explore drivers and components of change and to simulate future land use and land cover change. Remote sensing data were used to generate classification maps for 1997, 2007, and 2018 with four classes: forest, water body, vegetation, and built-up area. Multicriteria decision-making and fuzzy parameter standardisation approaches were applied to produce a transition suitability image. Cramer's V was used to determine the significant contributors and drivers of land use and land cover change, and the results showed that transportation and land price are two of the most influential factors. A Markov chain and cellular automata were employed to generate simulated maps of the Ciliwung watershed for 2028 and 2038. The model was validated against the actual land use map for 2018 and gave a Kappa value of 0.6907, indicating acceptable simulation accuracy. Based on the simulation, the forest area might decrease from 4,987 ha to 4,916 ha, the vegetation cover might decrease from 10,666 ha to 8,560 ha, and the water body might decrease slightly from 84 ha to 81 ha. Conversely, the built-up area might increase significantly from 22,873 ha to 25,052 ha. In the end, this thesis indicates that simulating land use and land cover is essential to help local authorities and government agencies better understand a complex land-use system and to develop an improved land use management strategy that considers possible future development trends.
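The Markov chain half of such a CA-Markov projection can be sketched as follows: a class-to-class transition matrix is estimated from two co-registered land cover maps and used to project class areas one period ahead. The synthetic rasters below stand in for the classified maps, and the cellular-automata spatial allocation step is not shown.

```python
import numpy as np
rng = np.random.default_rng(7)

CLASSES = ["forest", "water body", "vegetation", "built-up"]

def transition_matrix(map_t0, map_t1, n_classes):
    """Row-stochastic class-to-class transition probabilities between two dates."""
    counts = np.zeros((n_classes, n_classes))
    for a, b in zip(map_t0.ravel(), map_t1.ravel()):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def project_areas(current_map, P, cell_area_ha=1.0):
    """Markov projection of per-class area one period ahead."""
    areas = np.bincount(current_map.ravel(), minlength=P.shape[0]) * cell_area_ha
    return areas @ P

# Usage: synthetic 100 x 100 rasters standing in for two classified land cover maps.
map_t0 = rng.integers(0, 4, size=(100, 100))
map_t1 = map_t0.copy()
changed = rng.random(map_t1.shape) < 0.1            # 10% of cells change class
map_t1[changed] = rng.integers(0, 4, size=changed.sum())

P = transition_matrix(map_t0, map_t1, n_classes=4)
for name, area in zip(CLASSES, project_areas(map_t1, P)):
    print(f"{name:>11}: {area:8.1f} ha projected one period ahead")
```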
APA, Harvard, Vancouver, ISO, and other styles
43

Seyed, Momen Kaveh. "Identifying Nursing Activities to Estimate the Risk of Cross-contamination." Thesis, 2012. http://hdl.handle.net/1807/34914.

Full text
Abstract:
Hospital Acquired Infections (HAI) are a global patient safety challenge, costly to treat, and affect hundreds of millions of patients annually worldwide. It has been shown that the majority of HAI are transferred to patients by caregivers' hands and, therefore, can be prevented by proper hand hygiene (HH). However, many factors, including cognitive load, cause caregivers to forget to cleanse their hands. Hand hygiene compliance among caregivers remains low around the world. In this thesis I showed that it is possible to build a wearable accelerometer-based HH reminder system to identify ongoing nursing activities with the patient, indicate the high-risk activities, and prompt the caregivers to clean their hands. Eight subjects participated in this study, each wearing five wireless accelerometer sensors on the wrist, upper arms and the back. A pattern recognition approach was used to classify six nursing activities offline. Time-domain features including the mean, standard deviation, energy, and correlation among accelerometer axes were found to be suitable. On average, the 1-Nearest Neighbour classifier was able to classify the activities with 84% accuracy. A novel algorithm was developed to adaptively segment the accelerometer signals to identify the start and stop time of each nursing activity. The overall accuracy of the algorithm for a total of 96 events performed by 8 subjects was approximately 87%. The accuracy was higher than 91% for 5 out of 8 subjects. The sequence of nursing activities was modelled by an 18-state Markov chain. The model was evaluated against recently published data. The simulation results showed that the risk of cross-contamination decreases exponentially with the frequency of HH, and this happens most rapidly up to a 50%-60% hand hygiene rate. It was also found that if the caregiver enters the room with a high risk of transferring infection to the current patient then, given the assumptions in this study, a HH rate of only 55% is capable of reducing the risk of infection transfer to the lowest level. This may help to prevent the next patient from acquiring infection, preventing an infection outbreak. The model is also capable of simulating the effects of imperfect HH on the risk of cross-contamination.
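A heavily reduced sketch of the idea behind the cross-contamination model is shown below: a two-state hand-contamination chain is propagated over a sequence of patient contacts to show how the transfer risk falls as the hand hygiene rate rises. The transition probabilities and the number of contacts are invented for illustration and are far simpler than the thesis's 18-state chain.

```python
def contamination_risk(hh_rate, p_pickup=0.4, p_transfer=0.3, n_contacts=20):
    """Probability that at least one of a caregiver's patient contacts transfers contamination.

    hh_rate    : probability the caregiver cleans hands before a contact
    p_pickup   : probability the hands become contaminated during a contact
    p_transfer : probability contaminated hands transfer contamination to the patient
    """
    contaminated = 0.0                  # probability the hands are currently contaminated
    prob_no_transfer = 1.0
    for _ in range(n_contacts):
        contaminated *= (1 - hh_rate)                    # hand hygiene clears contamination
        prob_no_transfer *= (1 - contaminated * p_transfer)
        contaminated += (1 - contaminated) * p_pickup    # contact may contaminate the hands
    return 1 - prob_no_transfer

for rate in (0.0, 0.25, 0.5, 0.6, 0.75, 1.0):
    print(f"HH rate {rate:.2f}: transfer risk {contamination_risk(rate):.3f}")
```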
APA, Harvard, Vancouver, ISO, and other styles
44

Abhinav, S. "Stochastic Modelling of Vehicle-Structure Interactions : Dynamic State And Parameter Estimation, And Global Response Sensitivity Analysis." Thesis, 2016. http://etd.iisc.ernet.in/handle/2005/2736.

Full text
Abstract:
The analysis of vehicle-structure interaction systems plays a significant role in the design and maintenance of bridges. In recent years, the assessment of the health of existing bridges and the design of new ones has gained significance, in part due to the progress made in the development of faster-moving locomotives, the desire for lighter bridges, and the imposition of performance criteria against rare events such as the occurrence of earthquakes and fire. A probabilistic analysis would address these issues, and would also assist in the determination of reliability and in estimating the remaining life of the structure. In this thesis, we aim to develop tools for the probabilistic analysis of vehicle-structure interaction systems, namely state estimation, parameter identification and global response sensitivity analysis, which are also applicable to the broader class of structural dynamical systems. The thesis is composed of six chapters and three appendices, whose contents are described in brief in the following paragraphs.
In chapter 1, we introduce the problem of probabilistic analysis of vehicle-structure interactions. The introduction is organized in three parts, dealing separately with forward problems, inverse problems, and global response sensitivity analysis. We begin with an overview of the modelling and analysis of vehicle-structure interaction systems, including the application of spatial substructuring and mesh partitioning schemes. Following this, we describe Bayesian techniques for state and parameter estimation for the general class of state-space models of dynamical systems, including the application of the Kalman filter and particle filters for state estimation, MCMC sampling based filters for parameter identification, and the extended Kalman filter, the unscented Kalman filter and the ensemble Kalman filter for the problem of combined state and parameter identification. In this context, we present the Rao-Blackwellization method, which leads to variance reduction in particle filtering. Finally, we present the techniques of global response sensitivity analysis, including Sobol's analysis and distance-based measures of sensitivity indices. We provide an outline and a review of the literature on each of these topics. In our review, we identify the difficulties encountered when adopting these tools in problems involving vehicle-structure interaction systems and, corresponding to these issues, identify some open problems for research. These problems are addressed in chapters 2, 3, 4 and 5.
In chapter 2, we study the application of finite element modelling, combined with numerical solutions of the governing stochastic differential equations, to analyse instrumented nonlinear moving vehicle-structure systems. The focus of the chapter is on achieving computational efficiency by deploying, within a single modelling framework, three substructuring schemes with different methodological moorings. The schemes considered include spatial substructuring schemes (involving free-interface coupling methods), a spatial mesh partitioning scheme for the governing stochastic differential equations (involving a predictor-corrector method with implicit integration schemes for linear regions and explicit schemes for local nonlinear regions), and the Rao-Blackwellization scheme (which permits the use of Kalman filtering for linear substructures and Monte Carlo filters for nonlinear substructures). The main effort in this work is expended on combining these schemes with provisions for interfacing the substructures while taking into account the relative motion of the vehicle and the supporting structure. The problem is formulated with reference to an archetypal beam and a multi-degree-of-freedom moving oscillator with spatially localized nonlinear characteristics. The study takes into account imperfections in mathematical modelling, guide-way unevenness, and measurement noise. The numerical results demonstrate a notable reduction in computational effort achieved on account of the introduction of the substructuring schemes.
In chapter 3, we address the identification of system parameters of structural systems using dynamical measurement data. When Markov chain Monte Carlo (MCMC) samplers are used for system parameter identification, one faces computational difficulties in dealing with large amounts of measurement data and/or low levels of measurement noise. Such exigencies are likely to occur in parameter identification for dynamical systems, where the amount of vibratory measurement data and the number of parameters to be identified can be large. In such cases, the posterior probability density function of the system parameters tends to have regions of narrow support, and a finite-length MCMC chain is unlikely to cover the pertinent regions. In this chapter, strategies based on modification of the measurement equations and subsequent corrections are proposed to alleviate this difficulty. This involves artificial enhancement of the measurement noise, assimilation of transformed packets of measurements, and a global iteration strategy to improve the choice of prior models. Illustrative examples include a laboratory study on a beam-moving trolley system.
In chapter 4, we consider the combined estimation of the system states and parameters of vehicle-structure interaction systems. To this end, we formulate a framework which uses MCMC sampling for parameter estimation and particle filtering for state estimation. Chapters 2 and 3 describe the computational issues faced when adopting these techniques individually; when they are used together, both sets of issues arise and the complexity of the estimation problem is greatly increased. In this chapter, we address these computational issues by adopting the substructuring techniques proposed in chapter 2 and the parameter identification method based on modified measurement models presented in chapter 3. The proposed method is illustrated on a computational study of a beam-moving oscillator system with localized nonlinearities, as well as on a laboratory study on a beam-moving trolley system.
In chapter 5, we present global response sensitivity indices for structural dynamical systems with random system parameters excited by multiple random excitations. Two new procedures for evaluating global response sensitivity measures with respect to the excitation components are proposed. The first procedure is valid for the stationary response of linear systems under stationary random excitations and is based on the notion of Hellinger's metric of distance between two power spectral density functions. The second procedure is more generally valid and is based on an L2-norm distance measure between two probability density functions. Specific cases which admit exact solutions are presented, and solution procedures based on Monte Carlo simulations are outlined for a more general class of problems. The applicability of the proposed procedures to the case of random system parameters is demonstrated using suitable illustrations, which include studies on a parametrically excited linear system and a nonlinear random vibration problem involving a moving oscillator-beam system with excitations due to random support motions and guide-way unevenness.
In chapter 6, we summarize the contributions made in chapters 2, 3, 4 and 5 and, on the basis of these studies, present a few problems for future research. In addition to these chapters, three appendices are included in the thesis. Appendices A and B correspond to chapter 3: in appendix A, we study the effect of large measurement data sets and small measurement noise on the nature of the posterior probability density functions, while appendix B illustrates the MCMC sampling based parameter estimation procedure of chapter 3 using a laboratory study on a bending-torsion coupled, geometrically non-linear building frame under earthquake support motion. In appendix C, we present Ito-Taylor time discretization schemes for the stochastic delay differential equations encountered in chapter 5.
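The Hellinger-metric procedure referred to in chapter 5 of this abstract is easy to illustrate in isolation. The sketch below is not drawn from the thesis: the single-degree-of-freedom oscillator, the white-noise excitation components and the use of Welch's PSD estimator are all assumptions made purely for illustration. It simply shows how a Hellinger distance between two normalised response power spectral densities, one with a given excitation component present and one with it suppressed, can serve as a sensitivity index for that component.

```python
# A minimal sketch (not from the thesis) of a Hellinger-distance sensitivity measure
# between two response power spectral densities, estimated here with Welch's method.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)

def sdof_response(force, dt, wn=2 * np.pi * 2.0, zeta=0.05):
    """Semi-implicit Euler integration of x'' + 2*zeta*wn*x' + wn**2*x = force."""
    x, v = 0.0, 0.0
    out = np.empty_like(force)
    for i, f in enumerate(force):
        a = f - 2 * zeta * wn * v - wn ** 2 * x
        v += dt * a
        x += dt * v
        out[i] = x
    return out

def hellinger(psd_a, psd_b, df):
    """Hellinger distance between two PSDs, each first normalised to unit area."""
    pa = psd_a / np.trapz(psd_a, dx=df)
    pb = psd_b / np.trapz(psd_b, dx=df)
    return np.sqrt(0.5 * np.trapz((np.sqrt(pa) - np.sqrt(pb)) ** 2, dx=df))

dt, n = 1e-3, 200_000
support_motion = rng.normal(0.0, 1.0, n)   # excitation component 1 (illustrative white noise)
guideway = rng.normal(0.0, 0.5, n)         # excitation component 2 (illustrative white noise)

y_full = sdof_response(support_motion + guideway, dt)
y_frozen = sdof_response(support_motion, dt)  # guide-way component suppressed

f, psd_full = welch(y_full, fs=1.0 / dt, nperseg=4096)
_, psd_frozen = welch(y_frozen, fs=1.0 / dt, nperseg=4096)

# A larger distance indicates that the suppressed component matters more for the response PSD.
print("Hellinger-based sensitivity index:", hellinger(psd_full, psd_frozen, f[1] - f[0]))
```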
APA, Harvard, Vancouver, ISO, and other styles
45

Kvanta, Hugo. "Modelling Safety of Autonomous Driving with Semi-Markov Processes." Thesis, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-444617.

Full text
Abstract:
With the advent of autonomous vehicles, the issue of safety evaluation has become key. ISO 26262 recommends using Markov chains. However, in their most common form, Markov chains lack the flexibility required to model non-exponential probability distributions and systems displaying parallelism. In these cases, generalized semi-Markov processes are better suited, though they are significantly more taxing to analyze mathematically. This thesis instead explores the option of simulating these systems directly via MATLAB's Simulink and Stateflow. A system currently under study by Scania, here called CASE, was used as an example. The results showed that direct simulation is indeed possible, but the computational times are significantly greater than those of standard MATLAB functions. The method should therefore be employed on parallel systems when results with a high level of fidelity are needed and alternative methods are not available.
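A generalized semi-Markov process of the kind described above can also be explored with a plain Monte Carlo script. The sketch below is not the Simulink/Stateflow model from the thesis; the state names, sojourn-time distributions and transition probabilities are invented for illustration. It shows the core idea that makes semi-Markov models attractive here: non-exponential holding times are handled by sampling the sojourn time and the embedded-chain transition separately.

```python
# Hedged sketch: Monte Carlo simulation of a small, invented semi-Markov safety model.
import numpy as np

rng = np.random.default_rng(1)

# Sojourn-time samplers in hours; non-exponential distributions motivate the semi-Markov model.
SOJOURN = {
    "nominal":   lambda: rng.weibull(1.5) * 100.0,
    "degraded":  lambda: rng.lognormal(1.0, 0.5),
}

# Embedded-chain transition probabilities (rows sum to 1); absorbing states have no row.
TRANSITIONS = {
    "nominal":  [("degraded", 1.0)],
    "degraded": [("nominal", 0.70), ("safe_stop", 0.29), ("hazard", 0.01)],
}

def simulate(horizon_hours=1_000.0):
    """Run one trajectory; report whether 'hazard' is reached before the horizon."""
    state, t = "nominal", 0.0
    while t < horizon_hours and state in TRANSITIONS:
        t += SOJOURN[state]()                       # sample the (non-exponential) sojourn time
        nxt, probs = zip(*TRANSITIONS[state])
        state = rng.choice(nxt, p=probs)            # sample the embedded-chain transition
    return state == "hazard" and t < horizon_hours

n_runs = 20_000
p_hazard = sum(simulate() for _ in range(n_runs)) / n_runs
print(f"Estimated probability of reaching 'hazard' within the horizon: {p_hazard:.4f}")
```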
APA, Harvard, Vancouver, ISO, and other styles
46

Deus, Raquel Margarida Viana Faria de. "GIS-based measurement, analysis and modelling of land-use and land-cover change in coastal areas. The case of the Algarve, Portugal." Doctoral thesis, 2015. http://hdl.handle.net/10362/16262.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Šejnová, Gabriela. "Vliv stochastického chování iontových kanálů na přenos signálu a informace na excitabilních neuronálních membránách [The influence of stochastic ion channel behaviour on signal and information transmission in excitable neuronal membranes]." Master's thesis, 2017. http://www.nusl.cz/ntk/nusl-356195.

Full text
Abstract:
The stochastic behavior of voltage-gated ion channels causes fluctuations of conductances and voltages across neuronal membranes, contributing to the neuronal noise which is ubiquitous in the nervous system. While this phenomenon can be observed in other parts of the neuron as well, here we concentrated on the axon and the way channel noise influences its input-output characteristics. This was analysed with our newly created computational compartmental model, programmed in the Matlab environment, built using the Hodgkin-Huxley mathematical formalism with channel noise implemented via an extended Markov chain Monte Carlo method. The model was thoroughly verified to plausibly simulate a mammalian axon of a CA3 neuron. Based on our simulations, we quantitatively confirmed the findings that channel noise is most prominent on membranes with smaller numbers of Na+ and K+ channels and that it markedly increases the variability of travel times of action potentials (APs) along axons, thereby decreasing the temporal precision of APs. The simulations analysing the effect of axonal demyelination and axonal diameter correlated well with other findings reported in the literature. We further focused on the spike pattern and how its propagation is influenced by inter-spike intervals (ISIs). We found that APs fired...
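The channel-noise mechanism described in this abstract can be illustrated with a small, self-contained simulation. The sketch below is not the compartmental CA3 axon model from the thesis: it treats each K+ channel as a single two-state (closed/open) Markov chain with Hodgkin-Huxley n-gate rates at a fixed voltage, a deliberate simplification of the full gating scheme, but it reproduces the qualitative point that conductance fluctuations grow as the number of channels decreases.

```python
# Hedged toy model: channel noise from a finite population of two-state K+ channels,
# each simulated as an independent Markov chain at a fixed membrane voltage.
import numpy as np

rng = np.random.default_rng(2)

def hh_n_gate_rates(v_mV):
    """Hodgkin-Huxley n-gate opening/closing rates (per ms), voltage relative to rest."""
    alpha = 0.01 * (10.0 - v_mV) / (np.exp((10.0 - v_mV) / 10.0) - 1.0)
    beta = 0.125 * np.exp(-v_mV / 80.0)
    return alpha, beta

def simulate_open_fraction(n_channels, v_mV=30.0, dt=0.01, steps=20_000):
    """Fraction of open channels over time; each channel flips state stochastically."""
    alpha, beta = hh_n_gate_rates(v_mV)
    open_state = np.zeros(n_channels, dtype=bool)
    trace = np.empty(steps)
    for t in range(steps):
        u = rng.random(n_channels)
        # Closed -> open with probability alpha*dt, open -> closed with probability beta*dt.
        open_state = np.where(open_state, u >= beta * dt, u < alpha * dt)
        trace[t] = open_state.mean()
    return trace

for n in (100, 1_000, 10_000):
    trace = simulate_open_fraction(n)
    print(f"N = {n:6d}: mean open fraction = {trace.mean():.3f}, std = {trace.std():.4f}")
# The standard deviation shrinks roughly as 1/sqrt(N): channel noise is largest on
# membranes with few channels, consistent with the finding quoted in the abstract.
```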
APA, Harvard, Vancouver, ISO, and other styles
