Dissertations / Theses on the topic 'Dependence modelling'

Consult the top 50 dissertations / theses for your research on the topic 'Dependence modelling.'

1

Taku, Marie Manyi. "Modelling Dependence of Insurance Risks." Thesis, Linnéuniversitetet, Institutionen för datavetenskap, fysik och matematik, DFM, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-9064.

Abstract:
Modelling one-dimensional data can be performed in several well-known ways. Modelling two-dimensional data is a more open question: there is no unique way to describe the dependence of two-dimensional data. In this thesis, dependence is modelled by copulas. Insurance data from two different regions (Göinge and Kronoberg) in southern Sweden are investigated. A suitable model is found in which the marginal data are Normal Inverse Gaussian distributed, and a copula is a better dependence measure than the usual linear correlation together with Gaussian marginals.
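The approach sketched in this abstract, Normal Inverse Gaussian (NIG) marginals joined by a copula, can be illustrated with standard scientific-Python tools. The snippet below is a minimal sketch, not the thesis code: the data are synthetic stand-ins for the two regional claim series, and the Gaussian copula is an assumed choice for the example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical stand-ins for the Goinge and Kronoberg claim series.
x = rng.standard_t(df=5, size=500)
y = 0.6 * x + rng.standard_t(df=5, size=500)

# 1. Fit Normal Inverse Gaussian marginals to each series.
nig_x = stats.norminvgauss.fit(x)
nig_y = stats.norminvgauss.fit(y)

# 2. Probability-transform each margin to (0, 1) with its fitted NIG cdf.
u = np.clip(stats.norminvgauss.cdf(x, *nig_x), 1e-6, 1 - 1e-6)
v = np.clip(stats.norminvgauss.cdf(y, *nig_y), 1e-6, 1 - 1e-6)

# 3. Fit a Gaussian copula: its parameter is the correlation of the normal
#    scores, which in general differs from the raw linear correlation.
rho_copula = np.corrcoef(stats.norm.ppf(u), stats.norm.ppf(v))[0, 1]
rho_linear = np.corrcoef(x, y)[0, 1]
print(rho_copula, rho_linear)
```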
2

Lecei, Ivan. "Modelling extremal dependence." Ulm: Universität Ulm, 2018. http://d-nb.info/1173249745/34.

3

Johnson, Jill Suzanne. "Modelling Dependence in Extreme Environmental Events." Thesis, University of Newcastle upon Tyne, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.525050.

4

Yu, Lanhua. "Risk management : modelling dependence between asset returns." Thesis, Imperial College London, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.420966.

5

Ancona-Navarrete, Miguel A. "Dependence modelling and spatial prediction for extreme values." Thesis, Lancaster University, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.369658.

6

Kereszturi, Monika. "Assessing and modelling extremal dependence in spatial extremes." Thesis, Lancaster University, 2017. http://eprints.lancs.ac.uk/86369/.

Abstract:
Offshore structures, such as oil platforms and vessels, must be built such that they can withstand extreme environmental conditions (e.g., high waves and strong winds) that may occur during their lifetime. This means that it is essential to quantify probabilities of the occurrence of such extreme events. However, a difficulty arises in that there are very limited data available at these levels. The statistical field of extreme value theory provides asymptotically motivated models for extreme events, hence allowing extrapolation to very rare events. In addition to the risk to a single site, we are also interested in the joint risk of multiple offshore platforms being affected by the same extreme event. In order to understand joint extremal behaviour for two or more locations, the spatial dependence between the different locations must be considered. Extremal dependence between two locations can be of two types: asymptotic independence (AI), when the extremes at the two sites are unlikely to occur together, and asymptotic dependence (AD), when it is possible for both sites to be affected simultaneously. For finite samples it is often difficult to determine which type of dependence the data are more consistent with. In a large ocean basin it is reasonable to expect both of these features to be present, with some nearby locations AD, the dependence decreasing with distance, and some far-apart locations AI. In this thesis we develop new diagnostic tools for distinguishing between AD and AI and illustrate these on North Sea wave height data. We also investigate how extremal dependence changes with direction and find evidence for spatial anisotropy in our data set. The most widely used spatial models assume asymptotic dependence or perfect independence between sites, which is often unrealistic in practice. Models that attempt to capture both AD and AI exist, but they are difficult to implement in practice due to their complexity, and they are restricted in the forms of AD and AI they can model. In this thesis we introduce a family of bivariate distributions that exhibits the full range of short-, medium- and long-range extremal dependence required for pairwise dependence modelling in spatial applications.
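A standard first diagnostic for the AD/AI distinction described above is the empirical coefficient χ(u) = P(V > u | U > u), computed on data transformed to uniform margins: χ(u) tending to a positive limit as u → 1 points towards asymptotic dependence, while decay towards zero points towards asymptotic independence. A minimal sketch, with synthetic data standing in for the North Sea wave heights:

```python
import numpy as np

def chi(u, v, thresholds):
    """Empirical chi(t) = P(V > t | U > t) on copula-scale data."""
    return np.array([np.mean((u > t) & (v > t)) / np.mean(u > t)
                     for t in thresholds])

rng = np.random.default_rng(1)
n = 20_000
# A Gaussian copula with rho < 1 is the textbook asymptotically
# independent case; rank-transform the sample to the copula scale.
z = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.8], [0.8, 1.0]], size=n)
u = (np.argsort(np.argsort(z[:, 0])) + 1) / (n + 1)
v = (np.argsort(np.argsort(z[:, 1])) + 1) / (n + 1)

print(chi(u, v, np.linspace(0.90, 0.995, 10)))  # drifts towards 0 (AI)
```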
7

Menguturk, Levent Ali. "Information-based jumps, asymmetry and dependence in financial modelling." Thesis, Imperial College London, 2013. http://hdl.handle.net/10044/1/10953.

Abstract:
In mathematical finance, economies are often presented with the specification of a probability space equipped with a filtration that encodes information flow. The information-based framework of Brody, Hughston and Macrina (BHM) emphasises the role of market information in deriving asset price dynamics, instead of assuming price behaviour from the start. We extend the BHM framework by (i) modelling the nature of access to information through information blockages and activations of new information sources, and (ii) introducing a new class of multivariate Markov processes that we call Generalised Liouville Processes (GLPs), which can model the flow of information about vectors of assets. The analysis of access to information allows us to derive price dynamics with jumps. It additionally enables us to develop an information-switching framework, and to price derivatives under regime-switching economies. We also indicate some geometrical aspects of the appearance of new information sources. We represent information jumps on the unit sphere in the Hilbert space of square-integrable functions, and on hyperbolic spaces. We use differential geometry, information theory and what we call n-order piecewise enlargements of filtrations to dynamically quantify the impact of sudden changes in the sources of information. This helps us to model the stochastic evolution of what may be viewed as information asymmetry. In related work, we construct GLPs on finite time horizons by splitting so-called Lévy random bridges into non-overlapping subprocesses. The terminal values of GLPs have generalised multivariate Liouville distributions, and GLPs can model a wide spectrum of information-driven dependence structures between assets. The law of an n-dimensional GLP under an equivalent measure is that of an n-vector of independent Lévy processes. We focus on a special type of GLPs that we call Archimedean Survival Processes (ASPs). The terminal value of an ASP has an ℓ1-norm symmetric distribution and, hence, an Archimedean survival copula.
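The ℓ1-norm symmetric structure mentioned at the end has a simple stochastic representation (McNeil and Nešlehová, 2009): X = R·S, with S uniform on the unit simplex and R an independent radial variable, and the survival copula of X is then Archimedean. A minimal sketch of sampling such a vector, with an arbitrarily assumed radial law:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 50_000, 3

# S uniform on the unit simplex: normalised iid unit exponentials.
e = rng.exponential(size=(n, d))
s = e / e.sum(axis=1, keepdims=True)

# Radial part R: a Gamma law, chosen arbitrarily for the illustration.
r = rng.gamma(shape=3.0, size=n)

x = r[:, None] * s   # an l1-norm symmetric vector; its survival copula
                     # is Archimedean, with generator the Williamson
                     # d-transform of the law of R.
print(np.corrcoef(x, rowvar=False).round(2))   # exchangeable dependence
```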
8

Carreira, Inês Duarte. "Modelling dependence between frequency and severity of insurance claims." Master's thesis, Instituto Superior de Economia e Gestão, 2017. http://hdl.handle.net/10400.5/14631.

Abstract:
Master's degree in Actuarial Science
The estimation of the individual loss is an important task in pricing insurance policies. The standard approach assumes independence between claim frequency and severity, which may not be a realistic assumption. In this text, the dependence between claim counts and claim sizes is explored in a Generalized Linear Model framework. A conditional severity model and a copula model are presented as alternatives for modelling this dependence and are then applied to a data set provided by a Portuguese insurance company. Finally, a comparison with the independence scenario is carried out.
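To make the conditional severity alternative concrete: the claim count (or a function of it) enters the severity regression as a covariate, so the expected claim size shifts with the number of claims. The simulation below sketches that mechanism under assumed parameter values; it is illustrative only, not the thesis model or the insurer's data.

```python
import numpy as np

rng = np.random.default_rng(3)
n_policies = 10_000

# Frequency: Poisson claim counts per policy.
counts = rng.poisson(lam=0.8, size=n_policies)
claimed = counts > 0

# Conditional severity: log-link mean depending on the count, mimicking
# a Gamma GLM with the count as covariate (beta0, beta1 are assumed).
beta0, beta1 = 7.0, -0.15      # negative beta1: more claims, smaller claims
mean_sev = np.exp(beta0 + beta1 * counts[claimed])
shape = 2.0                    # assumed Gamma shape
avg_claim = rng.gamma(shape, mean_sev / shape)   # mean claim size per policy

# The induced (negative) dependence between counts and claim sizes:
print(np.corrcoef(counts[claimed], avg_claim)[0, 1])
```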
9

Werner, Christoph. "Structured expert judgement for dependence in probabilistic modelling of uncertainty : advances along the dependence elicitation process." Thesis, University of Strathclyde, 2018. http://digitool.lib.strath.ac.uk:80/R/?func=dbin-jump-full&object_id=30519.

Abstract:
In decision and risk analysis problems, modelling uncertainty probabilistically provides key insights and information for decision makers. A common challenge is that uncertainties are typically not isolated but interlinked which introduces complex (and often unexpected) effects on the model output. Therefore, dependence needs to be taken into account and modelled appropriately if simplifying assumptions, such as independence, are not sensible. Similar to the case of univariate uncertainty, relevant historical data to quantify a (dependence) model are often lacking or too costly to obtain. This may be true even when data on a model's univariate quantities, such as marginal probabilities, are available. Then, specifying dependence between the uncertain variables through expert judgement is the only sensible option. A structured and formal process to the elicitation is essential for ensuring methodological robustness. This thesis consists of three published works and two papers which are to be published (one under review and one working paper). Two of these works provide comprehensive overviews from different perspectives about the research on dependence elicitation processes. Based on these reviews, novel risk assessment and expert judgement methods are proposed - (1) allowing experts to structure and share their knowledge and beliefs about dependence relationships prior to a quantitative assessment and (2) ensuring experts' (detailed) quantitative assessments are feasible while their elicitation is intuitive. The original research presented in this thesis is applied in case-studies with experts in real risk modelling contexts for the UK Higher Education sector, terrorism risk and future risk of antibacterial multi-drug resistance.
10

Xia, Xinghua. "Essays on dependence modelling with vine copulas and its applications." Thesis, University of Leicester, 2018. http://hdl.handle.net/2381/42235.

Abstract:
This thesis contains three essays on dependence modelling with high-dimensional vine copulas and its applications in credit portfolio risk, asset allocation and international financial contagion. In the first essay, we demonstrate the superiority of vine copulas over the multivariate Gaussian copula when modelling the dependence structure of credit portfolio risk factors. We introduce vine copulas to model the dependence structure of the log returns of multiple risk factors in a combined framework of threshold-model and mixture-model credit risk modelling. The second essay studies asset allocation decisions with alternative investments in the presence of regime switching. We find evidence that two regimes, characterised as bear and bull states, are required to capture the joint distribution of stock, bond and alternative investment returns. Optimal asset allocation varies considerably across these states and changes over time. Therefore, in order to capture the observed asymmetric dependence and tail dependence in financial asset returns, we introduce a high-dimensional vine copula and construct a multivariate vine copula regime-switching model, which accounts for asymmetric dependence and tail dependence in high-dimensional data. The third essay explores the cross-market dependence between six popular equity indices (S&P 500, NASDAQ 100, FTSE 100, DAX 30, Euro Stoxx 50 and Nikkei 225) and their corresponding volatility indices (VIX, VXN, VFTSE, VDAX, VSTOXX and VXJ). In particular, we propose a novel dynamic method that combines the Generalised Autoregressive Score (GAS) method with a high-dimensional R-vine copula approach, which is able to capture the time-varying tail dependence coefficient (TDC) of index returns.
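The building block of a vine copula is the pair-copula h-function (a conditional distribution function). The sketch below samples from a three-dimensional C-vine using Gaussian pair copulas, chosen only because their h-function has a simple closed form; the essays use richer pair families, and the correlation parameters here are assumed for illustration.

```python
import numpy as np
from scipy.stats import norm

def h(u, v, rho):
    """Gaussian pair-copula h-function: conditional cdf C(u | v)."""
    return norm.cdf((norm.ppf(u) - rho * norm.ppf(v)) / np.sqrt(1 - rho**2))

def h_inv(w, v, rho):
    """Inverse of the h-function in its first argument."""
    return norm.cdf(norm.ppf(w) * np.sqrt(1 - rho**2) + rho * norm.ppf(v))

def sample_cvine3(n, rho12, rho13, rho23_1, rng):
    """3-dim C-vine: pairs (1,2) and (1,3) in tree 1, (2,3 | 1) in tree 2."""
    w = np.clip(rng.uniform(size=(n, 3)), 1e-10, 1 - 1e-10)
    u1 = w[:, 0]
    u2 = h_inv(w[:, 1], u1, rho12)
    a = h_inv(w[:, 2], h(u2, u1, rho12), rho23_1)   # = F(u3 | u1)
    u3 = h_inv(a, u1, rho13)
    return np.column_stack([u1, u2, u3])

rng = np.random.default_rng(4)
u = sample_cvine3(50_000, rho12=0.7, rho13=0.4, rho23_1=0.3, rng=rng)
print(np.corrcoef(u, rowvar=False).round(2))   # pairwise dependence induced
```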
11

Abadi, Mostafa Shams Esfand. "Analysis of new techniques for risk aggregation and dependence modelling." Master's thesis, Instituto Superior de Economia e Gestão, 2015. http://hdl.handle.net/10400.5/9239.

Abstract:
Master's degree in Actuarial Science
In risk aggregation we are interested in the distribution of the sum of dependent risks. The objective of risk aggregation and dependence modelling is to model dependent insurance portfolios adequately in order to evaluate the overall risk exposure. This master thesis investigates some practical aspects of modelling risk aggregation and dependency. We give an introduction to a copula-based hierarchical aggregation model built through a reordering algorithm. This approach is easily applicable in high dimensions and consists of a tree structure, bivariate copulas, and marginal distributions. The method is empirically illustrated using the Danish fire insurance data set, collected at Copenhagen Reinsurance over the period 1980 to 1990, in which every total claim is divided into three risks: a building loss, a loss of contents and a loss of profits caused by the same fire.
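The reordering step at the heart of this kind of aggregation model can be shown in a few lines: independent marginal samples are reshuffled so that their ranks match a copula sample, which preserves the marginals while imposing the dependence, and only then are the risks summed. A minimal two-risk sketch with assumed marginals and a Clayton copula:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n, theta = 100_000, 2.0

# Step 1: independent samples from two assumed marginal loss laws.
x1 = stats.lognorm(s=1.0, scale=np.exp(8)).rvs(n, random_state=rng)
x2 = stats.pareto(b=3.0, scale=5e3).rvs(n, random_state=rng)

# Step 2: a Clayton copula sample via the frailty construction.
v = rng.gamma(1.0 / theta, size=n)
u = (1.0 + rng.exponential(size=(n, 2)) / v[:, None]) ** (-1.0 / theta)

# Step 3: reorder each marginal sample to follow the copula's ranks.
def reorder(x, u_col):
    ranks = stats.rankdata(u_col, method="ordinal").astype(int) - 1
    return np.sort(x)[ranks]

total = reorder(x1, u[:, 0]) + reorder(x2, u[:, 1])
print(np.quantile(total, 0.995))   # e.g. a 99.5% aggregate-loss quantile
```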
12

Lundman, Josef. "Modelling Energy Dependence of Liquid Ionisation Chambers Using Fluence Pencil Kernels." Thesis, Umeå universitet, Radiofysik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-53909.

Abstract:
The high demand on accuracy in radiotherapy is to a large extent ensured through measurements of dose to water. The liquid ionisation chamber (LIC) is a type of detector that has several desirable properties for such measurements, e.g. a small active volume and a minimally direction-dependent response. There are, however, still gaps in knowledge concerning fundamental characteristics of this kind of detector. One of these characteristics is the variation of the detector's response relative to water with varying beam quality. This work aims to increase the knowledge of the LIC's behaviour and attempts to devise a method for constructing correction factors for the response variation. The response model proposed by Eklund and Ahnesjö [2009] has been evaluated for two LICs, one filled with isooctane and the other with tetramethylsilane (TMS). The evaluation was done for two photon beams, 6 and 15 MV. It was found that the energy-dependent response calculations from this method could not explain the difference between the LIC and reference air-filled ionisation chamber measurements in the larger fields. The response model leads to corrections for the TMS-filled LIC in the direction away from the reference measurements. For the LIC filled with isooctane the corrections point towards the reference but were too small to completely explain the difference.
13

Backman, Fredrik. "Dependence Modelling and Risk Analysis in a Joint Credit-Equity Framework." Thesis, KTH, Matematisk statistik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-168154.

Abstract:
This thesis is set in the intersection between separate types of financial markets, with emphasis on joint risk modelling. Relying on empirical findings pointing toward the existence of dependence across equity and corporate debt markets, a simulation framework intended to capture this property is developed. A few different types of models form the building blocks of the framework, including stochastic processes describing the evolution of equity and credit risk factors in continuous time, as well as a credit-rating-based model providing a mechanism for imposing dependent credit migrations and defaults for firms participating in the market. A flexible modelling framework results, proving capable of generating dependence of varying strength and shape, across as well as within the studied markets. Particular focus is given to the way markets interact in the tails of the distributions. By means of simulation, it is highlighted that dependence as produced by the model tends to spread asymmetrically, with simultaneously extreme outcomes occurring more frequently in lower than in upper tails. Attempts to fit the model to observed market data featuring historical stock index and corporate bond index values are promising, as both the marginal distributions and the dependence connecting the investigated asset types appear largely replicable, although we conclude that further validation remains to be done.
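The rating-based ingredient described above is commonly implemented by drawing correlated latent variables and bucketing them against thresholds backed out of a transition matrix. The sketch below uses a one-factor Gaussian structure for a single scenario; the transition probabilities and asset correlation are assumptions for illustration, not values from the thesis.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
n_firms, rho = 1_000, 0.3

# Assumed one-year probabilities of (default, downgrade, stay, upgrade).
probs = np.array([0.02, 0.08, 0.85, 0.05])
cuts = norm.ppf(np.cumsum(probs)[:-1])    # latent-scale thresholds

# One-factor Gaussian latent variables: one common draw per scenario
# plus idiosyncratic noise; the common factor makes migrations dependent.
common = rng.standard_normal()
latent = np.sqrt(rho) * common + np.sqrt(1 - rho) * rng.standard_normal(n_firms)

# Bucket each firm: 0 = default, 1 = downgrade, 2 = stay, 3 = upgrade.
state = np.searchsorted(cuts, latent)
print(np.bincount(state, minlength=4) / n_firms)
```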
14

Deara, Mohamed Ahmed. "Replacement policies for a two-component system with failure dependence." Thesis, University of Salford, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.366316.

15

Oduneye, Chris Emeka. "Credit modelling : generating spread dynamics with intensities and creating dependence with copulas." Thesis, Imperial College London, 2011. http://hdl.handle.net/10044/1/6910.

Abstract:
The thesis is an investigation into the pricing of credit risk under the intensity framework, with a copula generating default dependence between obligors. The challenge of quantifying credit risk and the derivatives associated with the asset class has seen an explosion of mathematical research into the topic. As credit markets developed, the modelling of credit risk on a portfolio level under the intensity framework was unsatisfactory in that either:

1. The state variables of the intensities were driven by diffusion processes and so could not generate the observed level of default correlation (see Schönbucher (2003a)); or,

2. When a jump component was added to the state variables, the problem of low default correlation was solved, but the model became intractable, with a high number of parameters to calibrate (see Chapovsky and Tevaras (2006)); or,

3. Use was made of the conditional independence framework (see Duffie and Garleanu (2001)). Here, conditional on a common factor, obligors' intensities are independent. However, the framework does not produce the observed level of default correlation, especially for portfolios with obligors that are dispersed in terms of credit quality.

Practitioners seeking interpretable parameters, tractability and the reproduction of observed default correlations shifted away from generating default dependence with intensities and applied copula technology to credit portfolio pricing. The one-factor Gaussian copula and some natural extensions, all falling under the factor framework, became standard approaches. The factor framework is an efficient means of generating dependence between obligors; its problem is that it gives no representation of the dynamics of credit risk, which arise because credit spreads evolve with time. A comprehensive framework which seeks to address these issues is developed in the thesis. The framework has four stages:

1. Choose an intensity model and calibrate the initial term structure.

2. Calibrate the variance parameter of the chosen state variable of the intensity model.

3. When extending to a portfolio of obligors, choose a copula and calibrate to standard market portfolio products.

4. Combine the two modelling frameworks, copula and intensity, to produce a dynamic model that generates dependence amongst obligors.

The thesis contributes to the literature in the following ways:

• It finds explicit analytical formulae for the pricing of credit default swaptions with an intensity process driven by the extended Vasicek model. From this, an efficient calibration routine is developed. Many works (Jamshidian (2002), Morini and Brigo (2007) and Schönbucher (2003b)) have focused on modelling credit swap spreads directly with modified versions of the Black and Scholes option formula. The drawback of a modified Black and Scholes approach is that pricing more exotic structures, whose value depends on the term structure of credit spreads, is not feasible. In addition, directly modelling credit spreads, as required under these approaches, offers no explicit way of simulating default times. In contrast, intensity models provide a direct mechanism to simulate default times and a representation of the term structure of credit spreads. Brigo and Alfonsi (2005) and Bielecki et al. (2008) also consider intensity modelling for the purpose of pricing credit default swaptions. In their works the dynamics of the intensity process are driven by the Cox, Ingersoll and Ross (CIR) model. Both works are constrained because the parameters of the CIR model they consider are constant; this means that when there is more than one tradeable credit default swaption, exact calibration of the model is usually not possible. This restriction is not in place in our methodology.

• The thesis develops a new method, called the loss algorithm, to construct the loss distribution of a portfolio of obligors. The current standard approach, developed by Turc et al. (2004), requires differentiation of an interpolated curve (see Hagan and West (2006) for the difficulties of such an approach) and assumes the existence of a base correlation curve. The loss algorithm requires neither the existence of a base correlation curve nor differentiation of an interpolated curve to imply the portfolio loss distribution.

• Schubert and Schönbucher (2001) show theoretically how to combine copula models and stochastic intensity models. In the thesis the Schubert and Schönbucher (2001) framework is implemented by combining the extended Vasicek model and the Gaussian copula model, and an analysis of the impact of the parameters of the combined models, and of how they interact, is given:

– The analysis is performed by considering two products, securitised loans with embedded triggers and leveraged credit linked notes with recourse. Both products depend on two obligors, a counterparty and a reference obligor.

– Default correlation is shown to impact significantly on pricing.

– We establish that large volatilities in the spread dynamics of the reference obligor or counterparty create a de-correlating impact: the higher the volatility, the lower the impact of default correlation.

– The analysis is new because, classically, spread dynamics are not considered when modelling dependence between obligors.

• The thesis introduces a notion called the stochastic liquidity threshold, which illustrates a new way to induce intensity dynamics into the factor framework.

• Finally, the thesis shows that the valuation results for single-obligor credit default swaptions can be extended to portfolio index swaptions after assuming that losses on the portfolio occur on a discretised set and independently of the index spread level.
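Under an intensity framework like the one above, default times can be simulated directly: default occurs when the integrated intensity first exceeds an independent unit-exponential draw. The sketch below uses a Vasicek-style mean-reverting intensity purely to fix ideas; the parameters are assumed and no claim is made that this reproduces the thesis calibration.

```python
import numpy as np

rng = np.random.default_rng(7)
n_paths, n_steps, T = 10_000, 250, 5.0
dt = T / n_steps

# Vasicek-style intensity: d(lam) = kappa * (theta - lam) dt + sigma dW.
kappa, theta, sigma, lam0 = 0.5, 0.03, 0.01, 0.02

lam = np.full(n_paths, lam0)
cum = np.zeros(n_paths)
tau = np.full(n_paths, np.inf)
barrier = rng.exponential(size=n_paths)   # unit-exponential triggers

for k in range(n_steps):
    lam += kappa * (theta - lam) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    lam = np.maximum(lam, 0.0)            # crude floor: Vasicek can go negative
    cum += lam * dt
    hit = (cum >= barrier) & np.isinf(tau)
    tau[hit] = (k + 1) * dt

print(np.mean(tau <= T))   # simulated 5-year default probability
```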
16

Berg, Edvin, and Karl Wilhelm Lange. "Enhancing ESG-Risk Modelling - A study of the dependence structure of sustainable investing." Thesis, KTH, Matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-266378.

Abstract:
The interest in sustainable investing has increased significantly during recent years. Asset managers and institutional investors are urged by their stakeholders to invest more sustainably, reducing their investment universe. This thesis has found that sustainable investments have a different linear dependence structure compared to the regional markets in Europe and North America, but not in Asia-Pacific. However, the largest drawdowns of a sustainability-compliant portfolio have historically been lower than those of a random market portfolio, especially in Europe and North America.
17

Kosgodagan, Alex. "High-dimensional dependence modelling using Bayesian networks for the degradation of civil infrastructures and other applications." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2017. http://www.theses.fr/2017IMTA0020/document.

Abstract:
This thesis explores high-dimensional deterioration-related problems using Bayesian networks (BN). Asset managers are becoming more and more familiar with reasoning under uncertainty, as traditional physics-based models fail to fully encompass the dynamics of large-scale degradation issues. Probabilistic dependence is able to achieve this, while the ability to incorporate randomness is enticing. In fact, dependence in BN is mainly expressed in two ways: on the one hand, classic conditional probabilities that lean on the well-known Bayes rule and, on the other hand, a more recent class of BN featuring copulae and rank correlation as dependence metrics. Both theoretical and practical contributions are presented for the two classes of BN, referred to as discrete dynamic and non-parametric BN, respectively. Issues related to the parametrisation of each class are addressed. For the discrete dynamic class, we extend the current framework by incorporating an additional dimension. We observed that this dimension allows more control over the deterioration mechanism through the main endogenous governing variables impacting it. For the non-parametric class, we demonstrate its remarkable capacity to handle a high-dimensional crack growth issue for a steel bridge. We further show that this type of BN can characterise any Markov process.
18

Zugic, Richard. "Modelling the tribology of thin film interfaces." Thesis, University of Oxford, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.365788.

19

Aparicio Acosta, Felipe Miguel. "Nonlinear modelling and analysis under long-range dependence with an application to positive time series." [S.l.]: [s.n.], 1995. http://library.epfl.ch/theses/?nr=1381.

20

Labukhin, Dmitry. "Modelling, design, and simulation of facet reflection and gain polarization dependence in semiconductor optical amplifiers." McMaster University, 2007.

21

Patta, Vaia. "Aspects of categorical physics : a category for modelling dependence relations and a generalised entropy functor." Thesis, University of Oxford, 2018. http://ora.ox.ac.uk/objects/uuid:8bfd2a2d-524e-4ffa-953b-33d66ba186ed.

Abstract:
Two applications of Category Theory are considered. The link between them is applications to Physics and more specifically to Entropy. The first research chapter is broader in scope and not explicitly about Physics, although connections to Statistical Mechanics are made towards the end of the chapter. Matroids are abstract structures that describe dependence, and strong maps are certain structure-preserving functions between them with desirable properties. We examine properties of various categories of matroids and strong maps: we compute limits and colimits; we find free and cofree constructions of various subcategories; we examine factorisation structures, including a translation principle from geometric lattices; we find functors with convenient properties to/from vector spaces, multisets of vectors, geometric lattices, and graphs; we determine which widely used operations on matroids are functorial (these include deletion, contraction, series and parallel connection, and a simplification monad); lastly, we find a categorical characterisation of the greedy algorithm. In conclusion, this project determines which aspects of Matroid Theory are most and least conducive to categorical treatment. The purpose of the second research chapter is to provide a categorical framework for generalising and unifying notions of Entropy in various settings, exploiting the fact that Entropy is a monotone subadditive function. A categorical characterisation of Entropy through a category of thermodynamical systems and adiabatic processes is found. A modelling perspective (adiabatic categories) that directly generalises an existing model is compared to an axiomatisation through topological and linear structures (topological weak semimodules), where the latter is based on a categorification of semimodules. Properties of each class of categories are examined; most notably a cancellation property of adiabatic categories generalising an existing result, and an adjunction between the categories of weak semimodules and symmetric monoidal categories. An adjunction between categories of adiabatic categories and topological weak semimodules is found. We examine in which cases each of these classes of categories constitutes a traced monoidal category. Lastly, examples of physical applications are provided. In conclusion, this project uncovers a way of, and makes progress towards, retrieving the statistical formulation of Entropy from simple axioms.
22

Santos, Mariana Faria dos. "Modelling claim counts of homogeneous risk groups using copulas." Master's thesis, Instituto Superior de Economia e Gestão, 2010. http://hdl.handle.net/10400.5/2932.

Abstract:
Master's degree in Actuarial Science
Over the years, modelling the dependence between random variables has been a challenge in many areas, like insurance and finance. Recently, with the new capital requirement regime for the European insurance business, this subject has been increasing in importance since, according to Solvency II, the insurer's risks should be modelled separately and then aggregated following some dependence structure. The challenge of this framework is to achieve an accurate way of joining dependent risks in order not to over- or underestimate the capital requirements. The aim of this thesis is to give a practical application of a multivariate model based on copulas, as well as all the important theoretical concepts related to the theory of copulas. Although the definition of copula dates from 1959, only recently have authors such as Clemen and Reilly (1999), Daul et al (2003), Dias (2004), Frees and Valdez (1998) and Embrechts et al (2003) applied the copula framework to finance and insurance data. In the meantime, estimation procedures and goodness-of-fit tests have been developed in the literature. In this thesis we introduce the theory of copulas together with the study of copula models for insurance data. Beforehand, the copula definition and properties, as well as some types of copulas discussed in the literature, are introduced. We present methods to estimate the parameters and a goodness-of-fit test for copulas. Afterwards, we present a summary of Solvency II and a multivariate model to fit claim counts between three homogeneous risk groups that are the core of the automobile business: third party liability property damages, third party liability bodily injury and material own damages. The methodology followed is based on copula models and the procedures are carried out in two steps. First, we model the marginal distributions of each risk group and test the goodness-of-fit of each distribution. We propose two models to fit the marginal distributions: a discrete and an approximating continuous model. In the first model we test a Poisson and a Negative Binomial distribution, whereas in the second one we test Gamma and Normal distribution approximations. In spite of being the natural approach to fit the claim counts, the discrete model has some limitations, since copulas have serious restrictions when the marginals are discrete. Thus, a continuous model is proposed to fit the data as an alternative avoiding these limitations. Finally, we fit different copula families, estimating their parameters through procedures presented along this thesis. We evaluate the goodness-of-fit using a statistical test based on the empirical copula concept.
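The goodness-of-fit idea mentioned at the end, comparing a fitted copula with the empirical copula, usually takes a Cramér-von Mises form, summing the squared distance between the two at the observed points. A minimal bivariate sketch, with pseudo-observations built from ranks and the independence copula standing in for the fitted model:

```python
import numpy as np
from scipy.stats import rankdata

def empirical_copula(u, points):
    """C_n(p): fraction of pseudo-observations componentwise <= p."""
    return np.mean(np.all(u[:, None, :] <= points[None, :, :], axis=2), axis=0)

rng = np.random.default_rng(8)
x = rng.standard_normal((500, 2))
x[:, 1] += 0.8 * x[:, 0]                     # dependent synthetic data

# Pseudo-observations: ranks scaled into (0, 1).
u = np.column_stack([rankdata(x[:, j]) for j in range(2)]) / (len(x) + 1)

# Cramer-von Mises statistic against a toy fitted model, here the
# independence copula C(u, v) = u * v; in practice the fitted family's
# cdf replaces it, and critical values come from a bootstrap.
s_n = np.sum((empirical_copula(u, u) - u[:, 0] * u[:, 1]) ** 2)
print(s_n)
```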
23

Forrest, Robyn Elizabeth. "Simulation models for estimating productivity and trade-offs in the data-limited fisheries of New South Wales, Australia." Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/3417.

Abstract:
Recent shifts towards ecosystem-based fisheries management (EBFM) around the world have necessitated consideration of the effects of fishing on a larger range of species than previously. Non-selective multispecies fisheries are particularly problematic for EBFM, as they can contribute to erosion of ecosystem structure. The trade-off between catch of productive commercial species and abundance of low-productivity species is unavoidable in most multispecies fisheries. A first step in evaluating this trade-off is estimating the productivity of different species, but this is often hampered by poor data. This thesis develops techniques for estimating productivity for data-limited species and aims to help clarify EBFM policy objectives for the fisheries of New South Wales (NSW), Australia. It begins with the development of an age-structured model parameterised in terms of the optimal harvest rate, UMSY. UMSY is a measure of productivity, comparable among species and easily communicated to managers. It also represents a valid threshold for prevention of overfishing. The model is used to derive UMSY for 54 Atlantic fish stocks for which recruitment parameters had previously been estimated. In most cases, UMSY was strongly limited by the age at which fish were first caught. However, for some species, UMSY was more strongly constrained by life history attributes. The model was then applied to twelve species of Australian deepwater dogshark (Order Squaliformes), known to have been severely depleted by fishing. Results showed that the range of possible values of UMSY for these species is very low indeed. These findings enabled a preliminary stock assessment for three dogsharks (Centrophorus spp.) currently being considered for threatened species listing. Preliminary results suggest they have been overfished and that overfishing continues. Finally, an Ecopath with Ecosim ecosystem model, representing the 1976 NSW continental slope, is used to illustrate trade-offs in the implementation of fishing policies under alternative policy objectives. Results are compared with those of a biogeochemical ecosystem model (Atlantis) of the same system, built by scientists from CSIRO. While there were large differences in model predictions for individual species, they gave similar results when ranking alternative fishing policies, suggesting that ecosystem models may be useful for exploring broad-scale strategic management options.
24

Snguanyat, Ongorn. "Stochastic modelling of financial time series with memory and multifractal scaling." Queensland University of Technology, 2009. http://eprints.qut.edu.au/30240/.

Abstract:
Financial processes may possess long memory and their probability densities may display heavy tails. Many models have been developed to deal with this tail behaviour, which reflects the jumps in the sample paths. On the other hand, the presence of long memory, which contradicts the efficient market hypothesis, is still an issue for further debate. These difficulties present challenges for the problems of memory detection and of modelling the co-presence of long memory and heavy tails. This PhD project aims to respond to these challenges. The first part aims to detect memory in a large number of financial time series on stock prices and exchange rates using their scaling properties. Since financial time series often exhibit stochastic trends, a common form of nonstationarity, strong trends in the data can lead to false detection of memory. We take advantage of a technique known as multifractal detrended fluctuation analysis (MF-DFA) that can systematically eliminate trends of different orders. This method is based on the identification of scaling of the q-th-order moments and is a generalisation of the standard detrended fluctuation analysis (DFA), which uses only the second moment, that is, q = 2. We also consider the rescaled range R/S analysis and the periodogram method to detect memory in financial time series and compare their results with the MF-DFA. An interesting finding is that short memory is detected for stock prices of the American Stock Exchange (AMEX) and long memory is found present in the time series of two exchange rates, namely the French franc and the Deutsche mark. Electricity price series of the five states of Australia are also found to possess long memory. For these electricity price series, heavy tails are also pronounced in their probability densities. The second part of the thesis develops models to represent short-memory and long-memory financial processes as detected in Part I. These models take the form of continuous-time AR(∞)-type equations whose kernel is the Laplace transform of a finite Borel measure. By imposing appropriate conditions on this measure, short memory or long memory in the dynamics of the solution will result. A specific form of the models, which has a good MA(∞)-type representation, is presented for the short-memory case. Parameter estimation of this type of model is performed via least squares, and the models are applied to the stock prices in the AMEX, which were established in Part I to possess short memory. By selecting the kernel in the continuous-time AR(∞)-type equations to have the form of a Riemann-Liouville fractional derivative, we obtain a fractional stochastic differential equation driven by Brownian motion. This type of equation is used to represent financial processes with long memory, whose dynamics are described by the fractional derivative in the equation. These models are estimated via quasi-likelihood, namely via a continuous-time version of the Gauss-Whittle method. The models are applied to the exchange rates and the electricity prices of Part I with the aim of confirming their possible long-range dependence established by MF-DFA. The third part of the thesis provides an application of the results established in Parts I and II to characterise and classify financial markets. We pay attention to the New York Stock Exchange (NYSE), the American Stock Exchange (AMEX), the NASDAQ Stock Exchange (NASDAQ) and the Toronto Stock Exchange (TSX). The parameters from MF-DFA and those of the short-memory AR(∞)-type models are employed in this classification. We propose the Fisher discriminant algorithm to find a classifier in the two- and three-dimensional spaces of data sets and then provide cross-validation to verify discriminant accuracies. This classification is useful for understanding and predicting the behaviour of different processes within the same market. The fourth part of the thesis investigates the heavy-tailed behaviour of financial processes which may also possess long memory. We consider fractional stochastic differential equations driven by stable noise to model financial processes such as electricity prices. The long memory of electricity prices is represented by a fractional derivative, while the stable noise input models their non-Gaussianity via the tails of their probability density. A method using the empirical densities and MF-DFA is provided to estimate all the parameters of the model and simulate sample paths of the equation. The method is then applied to analyse daily spot prices for five states of Australia. Comparisons with the results obtained from the R/S analysis, the periodogram method and MF-DFA are provided. The results from fractional SDEs agree with those from MF-DFA, which are based on multifractal scaling, while those from the periodograms, which are based on the second order, seem to underestimate the long-memory dynamics of the process. This highlights the need for and usefulness of fractal methods in modelling non-Gaussian financial processes with long memory.
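The MF-DFA procedure used for memory detection in Part I reduces to three steps: integrate the centred series, detrend it in windows of varying scale, and study how the q-th-order fluctuation function scales with window size. A compact sketch (q = 2 recovers ordinary DFA); the white-noise input is a placeholder for the thesis data, so the fitted exponent should come out near 0.5.

```python
import numpy as np

def mfdfa(x, scales, q=2.0, order=1):
    """q-th-order fluctuation function F_q(s) with polynomial detrending."""
    profile = np.cumsum(x - np.mean(x))            # integrated profile
    fq = []
    for s in scales:
        t = np.arange(s)
        var = []
        for i in range(len(profile) // s):         # detrend each window
            seg = profile[i * s:(i + 1) * s]
            resid = seg - np.polyval(np.polyfit(t, seg, order), t)
            var.append(np.mean(resid ** 2))
        fq.append(np.mean(np.asarray(var) ** (q / 2)) ** (1 / q))
    return np.asarray(fq)

rng = np.random.default_rng(9)
x = rng.standard_normal(10_000)
scales = np.unique(np.logspace(1.2, 3.0, 15).astype(int))
h_q = np.polyfit(np.log(scales), np.log(mfdfa(x, scales)), 1)[0]
print(h_q)   # generalised Hurst exponent; > 0.5 would suggest long memory
```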
25

Rodríguez Gasén, Rosa. "Modelling SEP events: latitudinal and longitudinal dependence of the injection rate of shock-accelerated protons and their flux profiles." Doctoral thesis, Universitat de Barcelona, 2011. http://hdl.handle.net/10803/31855.

Abstract:
Gradual SEP events are one of the greatest hazards in the space environment, particularly for the launch and operation of spacecraft and for manned exploration. Predictions of their occurrence and intensity are essential to ensure the proper operation of technical and scientific instruments. However, there is at present a large gap between observations and the models of these events that could lead to predictions. This work focuses on the modelling of SEP events, particularly on the influence of the observer's relative position and of the shock strength on the simulated SEP flux profiles. Part I of the thesis deals with 3D MHD simulations of interplanetary shocks. We have studied the potential relevance of the latitude of the observer on the evolution of the strength of the shock and its influence on the injection rate of shock-accelerated particles and, thus, on the resulting flux profiles. It is the first time that such a dependence on latitude has been quantified in the modelling of SEP events, because most of the codes used so far to simulate interplanetary shocks are not 3D codes or have been applied only to near-ecliptic events. To study the influence of the latitude of the observer and the strength of the shock on the SEP flux profiles, we have simulated the propagation of two shocks (slow and fast) up to several observers placed at different positions with respect to the nose of the shock. We have calculated the evolution of the plasma and magnetic field variables at the cobpoint, and we have derived the injection rate of shock-accelerated particles and the resulting proton flux profiles to be measured by each observer. We have discussed how observers located at different positions in space measure different SEP profiles, showing that variations in latitude may result in intensity changes of up to one order of magnitude. In Part II, we have used a new shock-and-particle model to simulate the 1 March 1979 SEP event, which was observed by three different spacecraft. These spacecraft were positioned at similar radial distances but at significantly different angular positions with respect to the associated solar source location. This particular scenario allows us to test the capability of the model to study the relevance of longitudinal variations in the shape of the intensity flux profiles, and to derive the injection rate of shock-accelerated particles. Despite the interest of multi-spacecraft events, and owing to the restrictions they impose, this is only the second multi-spacecraft scenario for which the shock and particle characteristics have been modelled. For the first time, a simulation of the propagation of an interplanetary shock has simultaneously reproduced the shock arrival time and the relevant plasma jumps across the shock at three spacecraft. We have fitted the proton intensities at the three spacecraft for different energy channels, and we have derived the particle transport conditions in space. We have quantified the efficiency of the shock at injecting particles on its way toward each observer, and we have discussed the influence of the observer's relative position on the injection rate of shock-accelerated particles. We have concluded that in this specific event the evolution of the injection rate cannot be completely explained in terms of the normalized velocity jump. The work performed during this thesis shows that the injection rate of shock-accelerated particles and the resulting flux profiles depend both on the latitude and on the longitude of the observer. This implies that more SEP events have to be modelled in order to establish this conclusion on firm ground.
26

Kostet, Daniel. "Railway bridges with floating slab track systems : Numerical modelling of rail stresses - Dependence on properties of floating slab mats." Thesis, Luleå tekniska universitet, Byggkonstruktion och -produktion, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-67297.

Abstract:
The increased use of continuously welded rails in railway systems makes it necessary to increase the control of rail stresses to avoid instability and damage to the rails. Large stresses are especially prone to appear at discontinuities in the railway system, such as bridges, due to the interaction between the track and the bridge. The interaction leads to increased horizontal forces in the rails due to the changed stiffness between the embankment and the bridge, temperature variations, bending of the bridge structure because of vertical traffic loads, and braking and traction forces. If the compressive rail stresses become too high it is necessary to use costly and maintenance-requiring devices such as rail expansion joints and other rail expansion devices. These devices increase the railway system's life-cycle cost and should be avoided if possible. The use of non-ballasted track on high-speed railways, tramways and subways has increased, since this kind of track requires less maintenance and, according to some investigations, has a lower life-cycle cost compared to ballasted track. The non-ballasted track is usually made of a track slab to which the rails are connected through fastenings. The track slab is connected to the bridge structure and held in place by shear keys. When non-ballasted tracks are used in populated areas it is sometimes necessary to introduce a vibration and noise damping solution. One possible solution is to introduce a floating slab mat (elastic mat) under the track slab on the bridge. The influence of the floating slab mat's properties on the rail stresses is investigated in this degree project. The investigation was performed through numerical modelling of two railway bridges using the finite element software SOFiSTiK. The results from the investigation showed that there was a small reduction of the compressive rail stresses, by approximately 3–7% (depending on the stiffness of the elastic support, load positions and the properties of the mat), when a mat was installed under the track slab. The results also showed that there was a small reduction (up to approximately 1%) of the compressive stresses in the rail when the thickness of the mat was increased and the stiffness of the mat was reduced. This reduction of the compressive stresses is assumed to be caused by the mat being mounted on the sides of the shear keys. The lower stiffness of the mat allows the track slab and the bridge deck to move more freely parallel to each other in the horizontal direction. This leads to a decrease of the stresses in the rail due to a lower interaction between the track and the bridge. It was also shown that the rail stresses increased if the friction between the slab mat and the bridge deck was considered. This is because of an increase of the interaction between the track and the bridge due to the mat's horizontal stiffness.
27

Salih, Sarmed. "Rate-dependent cohesive-zone models for fracture and fatigue." Thesis, University of Manchester, 2018. https://www.research.manchester.ac.uk/portal/en/theses/ratedependent-cohesivezone-models-for-fracture-and-fatigue(d8bfee97-1a75-4418-8916-b5a7cf8cdfd9).html.

Full text
Abstract:
Despite the phenomena of fracture and fatigue having been the focus of academic research for more than 150 years, the field remains in effect an empirical science lacking a complete and comprehensive set of predictive solutions. In this regard, the research in this thesis focuses on the development of new cohesive-zone models for fracture and fatigue that are able to capture strain-rate effects. For the case of monotonic fracture in ductile material, different combinations of material response are examined, with rate effects appearing either in the bulk material, localised to the cohesive zone, or in both. The development of a new rate-dependent cohesive-zone model (CZM) required first an analysis of two existing methods for incorporating rate dependency, i.e. either via a temporal critical stress or a temporal critical separation. The analysis revealed unrealistic crack behaviour at high loading rates. The new rate-dependent cohesive model introduced in the thesis couples the temporal responses of critical stress and critical separation and is shown to provide a stable and realistic solution to dynamic fracture. For the case of fatigue, a new frequency-dependent cohesive-zone model (FDCZM) has been developed for the simulation of both high- and low-cycle fatigue-crack growth in elasto-plastic material. The developed model provides an alternative approach that delivers the accuracy of the loading-unloading hysteresis damage model along with the computational efficiency of the equally well-established envelope load-damage model by incorporating a fast-track feature. With the fast-track procedure, a particular damage state for one loading cycle is 'frozen in' over a predefined number of cycles. Stress and strain states are subsequently updated, followed by an update of the damage state in the representative loading cycle, which again is 'frozen in' and applied over the same number of cycles. The process is repeated up to failure. The technique is shown to be highly efficient in terms of time and cost, and is particularly effective when a large number of frozen cycles can be applied without significant loss of accuracy. To demonstrate the practical worth of the approach, the effect that the frequency has on fatigue-crack growth in austenitic stainless steel 304 is analysed. It is found that the crack growth rate (da/dN) decreases with increasing frequency up to a frequency of 5 Hz, after which it levels off. The behaviour, which can be linked to martensitic phase transformation, is shown to be accurately captured by the new FDCZM.
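For orientation, a minimal Python sketch of the 'fast-track' cycle-jumping idea described in this abstract: the damage rate resolved for one representative cycle is held fixed over a block of cycles. The damage law, function names and all numbers are invented placeholders, not the FDCZM of the thesis.

```python
# Minimal sketch of the 'fast-track' cycle-jump scheme, under invented
# assumptions: a scalar damage variable grows according to a placeholder
# per-cycle law, and the rate from one resolved cycle is 'frozen in'
# over blocks of n_freeze cycles until failure (damage reaches 1).

def damage_increment_per_cycle(damage, stress_amplitude):
    """Hypothetical per-cycle damage growth law (placeholder only)."""
    return 1e-4 * stress_amplitude * (1.0 + damage)

def fast_track_fatigue(stress_amplitude, n_freeze=100, max_cycles=1_000_000):
    damage, cycles = 0.0, 0
    while damage < 1.0 and cycles < max_cycles:
        # Resolve one representative loading cycle to get the current rate.
        rate = damage_increment_per_cycle(damage, stress_amplitude)
        # Freeze that rate over a whole block instead of resolving each cycle.
        damage += rate * n_freeze
        cycles += n_freeze
    return cycles, min(damage, 1.0)

# Larger n_freeze is cheaper but less accurate, as the abstract notes.
print(fast_track_fatigue(stress_amplitude=2.0, n_freeze=100))
```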
APA, Harvard, Vancouver, ISO, and other styles
28

Gaigalas, Raimundas. "A Non-Gaussian Limit Process with Long-Range Dependence." Doctoral thesis, Uppsala : Matematiska institutionen, Univ. [distributör], 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-3993.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Tate, Catriona Mary. "Effect of probe-target sequence mismatches on the results of microarray hybridisations : position-dependence, modelling and impact of evolutionary distance." Thesis, University of Manchester, 2010. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.529202.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Griffith, Daniel, Manfred M. Fischer, and James P. LeSage. "The spatial autocorrelation problem in spatial interaction modelling: A comparison of two common solutions." Springer, 2017. http://epub.wu.ac.at/6032/1/spatial.pdf.

Full text
Abstract:
Spatial interaction models of the gravity type are widely used to describe origin-destination flows. They draw attention to three types of variables to explain variation in spatial interactions across geographic space: variables that characterize the origin region of interaction, variables that characterize the destination region of interaction, and variables that measure the separation between origin and destination regions. A violation of standard minimal assumptions for least squares estimation may be associated with two problems: spatial autocorrelation within the residuals, and spatial autocorrelation within explanatory variables. This paper compares a spatial econometric solution with the spatial statistical Moran eigenvector spatial filtering solution to accounting for spatial autocorrelation within model residuals. An example using patent citation data that capture knowledge flows across 257 European regions serves to illustrate the application of the two approaches.
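As a rough illustration of the second of the two solutions compared here, the following Python sketch builds Moran eigenvectors from a doubly-centred spatial weight matrix and appends them as regressors to absorb residual spatial autocorrelation. The weight matrix, the data and the choice of ten eigenvectors are all synthetic assumptions.

```python
import numpy as np

# Sketch of Moran eigenvector spatial filtering (ESF): eigenvectors of the
# doubly-centred spatial weight matrix are added to the regression design
# to soak up spatially autocorrelated residual variation.
rng = np.random.default_rng(0)
n = 100
W = (rng.random((n, n)) < 0.05).astype(float)   # toy spatial weight matrix
W = np.triu(W, 1)
W = W + W.T                                     # symmetric, zero diagonal

M = np.eye(n) - np.ones((n, n)) / n             # centring projector I - 11'/n
vals, vecs = np.linalg.eigh(M @ W @ M)          # Moran eigenvectors
E = vecs[:, np.argsort(vals)[::-1][:10]]        # keep largest-eigenvalue vectors

X = rng.normal(size=(n, 2))                     # toy origin/destination covariates
y = X @ np.array([1.0, -0.5]) + rng.normal(size=n)

X_esf = np.column_stack([np.ones(n), X, E])     # filtered regression design
beta, *_ = np.linalg.lstsq(X_esf, y, rcond=None)
print(beta[:3])                                 # intercept and the two slopes
```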
APA, Harvard, Vancouver, ISO, and other styles
31

Pesee, Chatchai. "Stochastic Modelling of Financial Processes with Memory and Semi-Heavy Tails." Queensland University of Technology, 2005. http://eprints.qut.edu.au/16057/.

Full text
Abstract:
This PhD thesis aims to study financial processes which have semi-heavy-tailed marginal distributions and may exhibit memory. The traditional Black-Scholes model is expanded to incorporate memory via an integral operator, resulting in a class of market models which still preserve the completeness and arbitrage-free conditions needed for replication of contingent claims. This approach is used to estimate the implied volatility of the resulting model. The first part of the thesis investigates the semi-heavy-tailed behaviour of financial processes. We treat these processes as continuous-time random walks characterised by a transition probability density governed by a fractional Riesz-Bessel equation. This equation extends the Feller fractional heat equation which generates α-stable processes. These latter processes have heavy tails, while those generated by the fractional Riesz-Bessel equation have semi-heavy tails, which are more suitable for modelling financial data. We propose a quasi-likelihood method to estimate the parameters of the fractional Riesz-Bessel equation based on the empirical characteristic function. The second part considers a dynamic model of complete financial markets in which the prices of European calls and puts are given by the Black-Scholes formula. The model has memory and can distinguish between historical volatility and implied volatility. A new method is then provided to estimate the implied volatility from the model. The third part of the thesis considers the problem of classification of financial markets using high-frequency data. The classification is based on the measure representation of high-frequency data, which is then modelled as a recurrent iterated function system. The new methodology developed is applied to some stock prices, stock indices, foreign exchange rates and other financial time series of some major markets. In particular, the models and techniques are used to analyse the SET index, the SET50 index and the MAI index of the Stock Exchange of Thailand.
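To make the empirical-characteristic-function device concrete, here is a generic Python sketch that fits distribution parameters by matching the model characteristic function to the empirical one. For simplicity the model CF is Gaussian; the thesis instead works with the CF implied by the fractional Riesz-Bessel equation, which is not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

# Generic illustration of estimation via the empirical characteristic
# function (ECF): minimise a distance between the ECF and a model CF
# over a grid of arguments. Data and the Gaussian model are stand-ins.
rng = np.random.default_rng(1)
x = rng.normal(loc=0.5, scale=2.0, size=5000)   # synthetic 'returns'
t = np.linspace(0.05, 1.0, 20)                  # grid of CF arguments

ecf = np.exp(1j * np.outer(t, x)).mean(axis=1)  # empirical CF at each t

def model_cf(t, mu, sigma):
    return np.exp(1j * mu * t - 0.5 * (sigma * t) ** 2)

def objective(theta):
    mu, log_sigma = theta
    diff = ecf - model_cf(t, mu, np.exp(log_sigma))
    return np.sum(np.abs(diff) ** 2)            # unit-weighted L2 distance

fit = minimize(objective, x0=[0.0, 0.0])
print(fit.x[0], np.exp(fit.x[1]))               # estimates of mu, sigma
```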
APA, Harvard, Vancouver, ISO, and other styles
32

Sun, Huiling. "System dependency modelling." Thesis, Loughborough University, 2007. https://dspace.lboro.ac.uk/2134/34918.

Full text
Abstract:
It is common for modern engineering systems to feature dependency relationships between their components. The existence of these dependencies renders fault tree analysis (FTA), and its efficient implementation, the Binary Decision Diagram (BDD) approach, inappropriate for predicting the system failure probability. Whilst the Markov method provides an alternative means of analysing systems of this nature, it is susceptible to state-space explosion for large, or even moderately sized, systems. Within this thesis, a process is proposed to improve the applicability of the Markov analysis. With this process, the smallest independent sections (modules) which contain each dependency type are identified in a fault tree and analysed by the most efficient method. Thus, BDD and Markov analysis are applied in a combined way to improve the analysis efficiency: the BDD method is applied to modules which contain no dependency, and Markov analysis to modules in which dependencies exist. Different types of dependency which can arise in an engineering system assessment are identified, and algorithms for establishing a Markov model have been developed for each type. Three types of system are investigated in this thesis in the context of dependency modelling: the continuously operating system, the active-on-demand system and the phased-mission system. Different quantification techniques have been developed for each type of system to obtain the system failure probability and other useful predictive measures. Investigation is also carried out into the use of BDD in assessing non-repairable systems involving dependencies, and general processes have been established to enable the quantification.
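A toy Python illustration of the Markov side of this combined scheme: a module with a cold-standby spare, which an ordinary fault tree cannot represent, is quantified from a small state-transition model whose failure probability would then feed the BDD as a basic event. The rates, mission time and three-state structure are invented for illustration.

```python
import numpy as np
from scipy.linalg import expm

# Cold-standby module as a tiny Markov chain (all rates invented):
# state 0 = primary operating, spare dormant
# state 1 = primary failed, spare operating
# state 2 = module failed (absorbing)
lam = 1e-3                           # failure rate of the operating unit (/h)
Q = np.array([[-lam,  lam,  0.0],
              [ 0.0, -lam,  lam],
              [ 0.0,  0.0,  0.0]])   # generator matrix

t = 1000.0                           # mission time in hours
P = expm(Q * t)                      # state-transition probabilities over [0, t]
p_fail = P[0, 2]                     # probability the module has failed by t
print(p_fail)                        # feeds the BDD at the module's basic event
```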
APA, Harvard, Vancouver, ISO, and other styles
33

Essman, Carl. "Social preconditions of collective action among NGO:s : A social network analysis of the information exchanges between 55 NGO:s in Georgia." Thesis, Stockholms universitet, Sociologiska institutionen, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-118567.

Full text
Abstract:
Individual shortcomings and the need for resources stimulate organizations' desire to establish collaborative relations with each other. An organization tends to prefer to collaborate with other familiar organizations. The information available to an organization about its peers is necessary for its ability to appreciate the suitability of potential partners, as well as their capabilities and ability to contribute to a successful collaborative relation. In a three-stage analytical process, social network analysis and statistical network modelling are applied to investigate the correlation between patterns of communication and the extent to which organizations establish collaborative relationships. Within a theoretical framework of resource dependence theory and social capital, data on information exchanges, resource exchanges and common advocacy among 55 humanitarian organizations are mapped. The first analytical stage explicates the structures of the collected information exchanges and evaluates the prevalence of communication structures that facilitate coordination. The second stage assesses the extent of inter-organizational involvement in collaborative relationships. The third stage combines these results to demonstrate the covariance between the prevalence of coordination-facilitating structures and the extent of collaborative relations. The results indicate that the collected information exchanges exhibit few coordination-facilitating structures and that the organizations are only to a very limited extent engaged in collaborative relationships with each other. While consistent with previous research on the importance of communication for coordination, these observations illustrate the negative consequences of lacking communication. This analysis contributes added empirical evidence to solidify our understanding of organizational behaviour in inter-organizational interaction and of tendencies to establish collaborative relations.
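A small Python sketch of the kind of structural statistics the first analytical stage could rest on, using networkx on a synthetic 55-node directed network. Reciprocated ties and triadic closure are our illustrative choices of coordination-facilitating structures; the thesis may well use different configurations.

```python
import networkx as nx

# Synthetic stand-in for a directed information-exchange network among
# 55 NGOs; density and seed are arbitrary choices for illustration.
G = nx.gnp_random_graph(55, 0.05, seed=7, directed=True)

# Two simple statistics often read as facilitating coordination:
print("reciprocity:", nx.overall_reciprocity(G))            # mutual exchanges
print("transitivity:", nx.transitivity(G.to_undirected()))  # triadic closure
```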
APA, Harvard, Vancouver, ISO, and other styles
34

Wollscheid, Daniel [Verfasser], Alexander [Akademischer Betreuer] Lion, and Leif [Akademischer Betreuer] Kari. "Predeformation and frequency dependence of filler-reinforced rubber under vibration : Experiments - Modelling - Finite Element Implementation / Daniel Wollscheid. Universität der Bundeswehr München, Fakultät für Luft- und Raumfahrttechnik. Gutachter: Alexander Lion ; Leif Kari. Betreuer: Alexander Lion." Neubiberg : Universitätsbibliothek der Universität der Bundeswehr München, 2014. http://d-nb.info/1071848372/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Wollscheid, Daniel [Verfasser], Alexander [Akademischer Betreuer] Lion, and Leif [Akademischer Betreuer] Kari. "Predeformation and frequency dependence of filler-reinforced rubber under vibration : Experiments - Modelling - Finite Element Implementation / Daniel Wollscheid. Universität der Bundeswehr München, Fakultät für Luft- und Raumfahrttechnik. Gutachter: Alexander Lion ; Leif Kari. Betreuer: Alexander Lion." Neubiberg : Universitätsbibliothek der Universität der Bundeswehr München, 2014. http://nbn-resolving.de/urn:nbn:de:bvb:706-4247.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Shahtahmasebi, Said. "Statistical modelling of dependency in old age." Thesis, Bangor University, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.318077.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Fernández, S. Alejandro D. "Modelling the temperature dependences of Silicon Carbide BJTs." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-202754.

Full text
Abstract:
Silicon Carbide (SiC), owing to its large bandgap, has proved to be a very viable semiconductor material for the development of extreme-temperature electronics. Moreover, its electrical properties, such as the critical field (E_crit) and the saturation velocity (v_sat), are superior to those of the commercially abundant silicon, making it a better alternative for RF and high-power applications. The in-house SiC BJT process at KTH has matured considerably over the years, and recently developed devices and circuits have been shown to work at temperatures exceeding 500 °C. However, the functional reliability of more complex circuits requires the use of simulators and device models to describe the behaviour of the constituent devices. SPICE Gummel-Poon (SGP) is one such model describing the behaviour of BJT devices; it is simpler than other models because of its relatively small number of parameters. A simple semi-empirical DC compact model has been successfully developed for low-voltage SiC BJT applications, based on a temperature-dependent SiC-SGP model. The temperature dependences of the SGP parameters have been studied; the parameters have been extracted, some of them optimized over a wide temperature range, and compared with the measured data. The accuracy of the compact model built on these parameters has likewise been demonstrated by comparison with measurements. A fairly accurate performance at the required working conditions, and good correlation of the SiC compact model with the measured results, has been achieved.
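As an illustration of what 'temperature dependences of the SGP parameters' involves, the Python sketch below applies the standard SPICE-style temperature scaling of the saturation current. The bandgap value (about 3.26 eV for 4H-SiC) and all remaining parameter values are assumptions for illustration, not the extracted model of the thesis.

```python
import numpy as np

# SPICE-style temperature mapping of the SGP saturation current IS:
# IS(T) = IS(Tnom) * (T/Tnom)^XTI * exp(Eg * (T/Tnom - 1) / Vt(T)).
# Values below are illustrative placeholders.
k_over_q = 8.617333e-5              # Boltzmann constant / charge, eV/K

def vt(T):
    """Thermal voltage kT/q in volts."""
    return k_over_q * T

def is_of_T(is_tnom, T, Tnom=300.0, eg=3.26, xti=3.0):
    """Saturation current vs temperature (eg ~3.26 eV for 4H-SiC)."""
    return (is_tnom * (T / Tnom) ** xti
            * np.exp(eg * (T / Tnom - 1.0) / vt(T)))

for T in (300.0, 500.0, 800.0):     # up to the >500 C regime mentioned above
    print(T, is_of_T(1e-30, T))
```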
APA, Harvard, Vancouver, ISO, and other styles
38

Ayis, Salma Ahmed. "Modelling unobserved heterogeneity : theoretical and practical aspects." Thesis, University of Southampton, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.261592.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Dakpo, K. Hervé. "Non-parametric modelling of pollution-generating technologies : theoretical and methodological considerations, with an application to the case of greenhouse gas emissions in suckler breeding systems in French grassland areas." Thesis, Clermont-Ferrand 1, 2015. http://www.theses.fr/2015CLF10474/document.

Full text
Abstract:
The growing importance of environmental matters in the social responsibility of firms has generated many frameworks of analysis in the economic literature. Among those frameworks, performance evaluation and benchmarking using non-parametric Data Envelopment Analysis (DEA) have increased at a very fast rate. This PhD research focuses on models that include undesirable outputs, such as pollution, in the overall production system, to appraise the eco-efficiency of decision making units (DMUs). Besides, the recent awareness of the large contribution of agriculture, and particularly livestock farming, to global warming has highlighted for this sector the challenge of reaching both economic and environmental performance. In this line, the overall objective of this dissertation is to provide a theoretical and empirical background for modelling pollution-generating technologies and to suggest theoretical improvements that are consistent with the particular case of greenhouse gas emissions in extensive livestock systems. Firstly, we showed that all existing approaches that deal with undesirable outputs in non-parametric analysis (i.e. DEA) have some strong drawbacks; however, the models grounded on the estimation of multiple independent sub-technologies offer interesting opportunities. Secondly, we developed a new framework that extends the by-production approach through the introduction of explicit dependence constraints that link the sub-technologies in order to build a unified system. Thirdly, an empirical comparison of this by-production modelling extension with the existing approaches, using a sample of French sheep meat farms, revealed some inconsistencies in the latter. Finally, we expanded the new by-production formulation to account for dynamic aspects related to the presence of adjustment costs. The application to the case of French suckler cow farms underlined the necessity of accounting for dynamic aspects and also showed high heterogeneity in the investment strategies of these farmers.
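For orientation only, the sketch below sets up the standard DEA building block on which such models rest: an output-oriented efficiency programme under variable returns to scale, solved with scipy's linprog on synthetic data. The by-production extension with linked sub-technologies for good outputs and emissions is the thesis's contribution and is not reproduced here.

```python
import numpy as np
from scipy.optimize import linprog

# Output-oriented VRS DEA for one unit j:
#   max phi  s.t.  sum_k lambda_k x_ik <= x_ij,
#                  sum_k lambda_k y_rk >= phi * y_rj,
#                  sum_k lambda_k = 1,  lambda >= 0.
rng = np.random.default_rng(2)
X = rng.uniform(1, 10, size=(20, 2))        # inputs of 20 synthetic farms
Y = rng.uniform(1, 10, size=(20, 1))        # single good output

def output_efficiency(j):
    n = X.shape[0]
    c = np.zeros(n + 1)
    c[-1] = -1.0                            # maximise phi (linprog minimises)
    A_ub, b_ub = [], []
    for i in range(X.shape[1]):             # input constraints
        A_ub.append(np.append(X[:, i], 0.0))
        b_ub.append(X[j, i])
    for r in range(Y.shape[1]):             # output constraints
        A_ub.append(np.append(-Y[:, r], Y[j, r]))
        b_ub.append(0.0)
    A_eq = np.ones((1, n + 1))
    A_eq[0, -1] = 0.0                       # VRS: sum of lambdas equals 1
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
                  A_eq=A_eq, b_eq=[1.0], bounds=[(0, None)] * (n + 1))
    return res.x[-1]                        # phi >= 1; phi == 1 means efficient

print(output_efficiency(0))
```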
APA, Harvard, Vancouver, ISO, and other styles
40

Ho, Duc Thang. "Context dependent fuzzy modelling and its applications." Thesis, University of Nottingham, 2013. http://eprints.nottingham.ac.uk/13574/.

Full text
Abstract:
Fuzzy rule-based systems (FRBS) use the principles of fuzzy sets and fuzzy logic to describe vague and imprecise statements, and provide a facility for expressing the behaviour of a system in a human-understandable language. Fuzzy information, once defined by a fuzzy system, is fixed regardless of the circumstances, which makes it very difficult to capture the effect of context on the meaning of the fuzzy terms. While efforts have been made to integrate contextual information into the representation of fuzzy sets, it remains the case that the context model is often very restrictive and/or problem specific. The work reported in this thesis is our attempt to create a practical framework for integrating contextual information into the representation of fuzzy sets, so as to improve the interpretability as well as the accuracy of the fuzzy system. Throughout this thesis, we have examined the capability of the proposed context-dependent fuzzy sets, both stand-alone and in combination with other methods, in various application scenarios ranging from time series forecasting to complicated car-racing control systems. In all of the applications, the highly competitive performance of our approach has proven its effectiveness and efficiency compared with existing techniques in the literature.
APA, Harvard, Vancouver, ISO, and other styles
41

Lumley, Thomas. "Marginal regression modelling of weakly dependent data /." Thesis, Connect to this title online; UW restricted, 1998. http://hdl.handle.net/1773/9555.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Song, Fei. "Modelling time-dependent plastic behaviour of geomaterials." Doctoral thesis, Universitat Politècnica de Catalunya, 2021. http://hdl.handle.net/10803/672273.

Full text
Abstract:
Representing the time-dependent plastic behaviour of geomaterials is a critical issue in the correct application of tunnelling design techniques such as the convergence-confinement method or numerical modelling. Furthermore, during underground excavations below the water table, the effect of seepage flow cannot be ignored, and the behaviour of the tunnel must be analysed in a coupled hydro-mechanical framework. The main objective of this thesis is to analyse the response of tunnels excavated in saturated, time-dependent plastic rock masses. For this purpose, a time-dependent plastic constitutive model has been developed and implemented in the software CODE_BRIGHT to simulate the time-dependent, strain-softening and creep-induced failure behaviour of geomaterials. Moreover, a coupled hydro-mechanical model is utilised to simulate the interaction between solid deformations and fluid flows. The obtained results provide relevant insights into the response of tunnels excavated in saturated, time-dependent plastic rock masses. However, numerical difficulties might occur when modelling multi-stage excavation problems, when considering coupled multi-physics processes or non-linear mechanical material models, especially if the layers or pieces of excavated material are relatively coarse. In order to mitigate these numerical difficulties, a smoothed excavation (SE) method has been proposed and implemented in the software CODE_BRIGHT, which can improve numerical efficiency and mitigate non-convergence issues. Subsequently, to analyse the stability of tunnels with a combined support system, numerical solutions have been developed for tunnels excavated in strain-softening rock masses, considering the whole process of tunnel advancement and the sequential installation of primary and secondary support systems. For this purpose, the actual compatibility conditions at both the rock-support interface and the support-support interface are considered. This provides a convenient alternative method for the preliminary design of supported tunnels.
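To fix ideas about the convergence-confinement method mentioned above, here is a deliberately simplified Python sketch with a purely elastic ground reaction curve, u = (p0 - p) R (1 + nu) / E, intersected with a linear support confinement line. The thesis treats the much harder time-dependent plastic, hydro-mechanically coupled case numerically; all numbers below are invented.

```python
import numpy as np

# Elastic ground reaction curve (GRC) versus a linear support line:
# the equilibrium point fixes the final support pressure and wall
# displacement. Parameters are illustrative placeholders.
E, nu, R, p0 = 2e9, 0.3, 5.0, 10e6          # rock modulus, Poisson, radius, in-situ stress
k_s = 1e8                                   # support stiffness (Pa per m of wall movement)
u_inst = 0.005                              # wall displacement when support is installed

p = np.linspace(0.0, p0, 500)               # internal (support) pressure grid
u_ground = (p0 - p) * R * (1 + nu) / E      # GRC: more relaxation, more convergence
p_support = np.maximum(k_s * (u_ground - u_inst), 0.0)  # support confinement line

i = np.argmin(np.abs(p - p_support))        # grid point closest to the intersection
print("equilibrium pressure ~", p[i], "Pa; wall displacement ~", u_ground[i], "m")
```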
APA, Harvard, Vancouver, ISO, and other styles
43

Ridley, L. M. "Dependency modelling using fault-tree and cause-consequence analysis." Thesis, Loughborough University, 2000. https://dspace.lboro.ac.uk/2134/7350.

Full text
Abstract:
The technique of fault tree analysis is commonly used to assess the probability of failure of industrial systems. During the analysis of the fault tree, the component failures are assumed to occur independently. When this condition is not satisfied, alternative approaches such as the Markov method can be used. Constructing the Markov representation of a system is not as intuitive a process for engineers as fault tree construction, since the state-transition diagram does not readily document the failure logic. In addition, the size of the Markov diagram increases rapidly as the number of components in the system increases. This thesis presents the development of a new model which uses a combination of conventional fault tree methods and Markov methods to solve systems containing sequential or standby failures. New gates were developed in order to incorporate the dependent failures in the fault tree structure, and the new assessment method was shown to solve these systems efficiently. With these extended fault tree capabilities in place, the technique was embedded within an optimisation framework to obtain the best system performance for systems containing standby failures. Sequential failures can be represented on a fault tree by using the Priority-AND gate; however, they can also be represented on a cause-consequence diagram. As with the fault tree analysis method, the cause-consequence diagram documents the failure logic of the system. In addition, the cause-consequence diagram produces the exact failure probability in a very efficient calculation procedure, which has significant implications in terms of efficiency for static systems. Construction and analysis rules were devised for the cause-consequence diagram and used on systems containing independent and dependent failures.
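A quick Monte-Carlo sketch of the Priority-AND gate mentioned above, for two components with exponential lifetimes: the gate output occurs only if A fails before B and both fail within the mission time. The rates and mission time are invented; the closed-form check is standard for this two-input case.

```python
import numpy as np

# Priority-AND: output = (T_A < T_B) and (T_B <= t_mission), i.e. A fails
# first and B also fails within the mission. Exponential lifetimes assumed.
rng = np.random.default_rng(3)
lam_a, lam_b, t_mission, n = 1e-3, 2e-3, 1000.0, 1_000_000

ta = rng.exponential(1.0 / lam_a, n)
tb = rng.exponential(1.0 / lam_b, n)
p_sim = np.mean((ta < tb) & (tb <= t_mission))

# Closed form from integrating the density of T_B against P(T_A < s):
p_exact = ((1 - np.exp(-lam_b * t_mission))
           - lam_b / (lam_a + lam_b)
           * (1 - np.exp(-(lam_a + lam_b) * t_mission)))
print(p_sim, p_exact)   # the two values should agree to ~3 decimal places
```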
APA, Harvard, Vancouver, ISO, and other styles
44

Kaehkoenen, Kalle Esa Eelis. "Modelling activity dependencies for building construction project scheduling." Thesis, University of Reading, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.336061.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Jämsä, Anne. "In vitro modelling of tau phosphorylating kinases: emphasis on Cdk5." Stockholm, 2007. http://diss.kib.ki.se/2007/978-91-7357-400-6/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Doherty, B. "Context dependency and sub-band based modelling for speech recognition." Thesis, Queen's University Belfast, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.368773.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Dimitrova, Dimitrina S. "Dependent risk modelling in (re)insurance and ruin." Thesis, City, University of London, 2007. http://openaccess.city.ac.uk/18910/.

Full text
Abstract:
The work presented in this dissertation is motivated by the observation that the classical (re)insurance risk modelling assumptions of independent and identically distributed claim amounts, Poisson claim arrivals and premium income accumulating linearly at a certain rate, starting from possibly non-zero initial capital, are often not realistic and are violated in practice. There is an abundance of examples in which dependence is observed at various levels of the underlying risk model. Developing risk models which are more general than the classical one, and which can successfully incorporate dependence between claim amounts consecutively arriving at the insurance company and/or dependence between the claim inter-arrival times, is at the heart of this dissertation. The main objective is to consider such general models and to address the problem of (non-)ruin within a finite-time horizon of an insurance company. Furthermore, the aim is to consider general risk and performance measures in the context of a risk sharing arrangement such as an excess of loss (XL) reinsurance contract. There are two parties involved in an XL reinsurance contract and their interests are contradictory, as first noted by Karl Borch in the 1960s. Therefore, we define joint risk and performance measures, between the cedent and the reinsurer, both based on the probability of ruin, and show how the latter can be used to optimally set the parameters of an XL reinsurance treaty. Explicit expressions for the proposed risk and performance measures are derived and are used efficiently in numerical illustrations.
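A rough Monte-Carlo sketch of such a joint, finite-horizon ruin measure under an XL treaty: per claim X the cedent retains min(X, d) and the reinsurer pays the excess (X - d)+, each party receiving linear premium income. The claim distribution, retention and all other numbers are invented, and the thesis derives explicit expressions rather than simulating.

```python
import numpy as np

# Joint finite-horizon ruin under an excess of loss treaty, by simulation.
# Assumptions (all invented): Poisson arrivals, exponential claim sizes,
# linear premiums; cedent retains min(X, d), reinsurer pays max(X - d, 0).
rng = np.random.default_rng(4)
T, lam, d = 10.0, 5.0, 4.0                 # horizon, claim rate, retention
u_c, u_r = 10.0, 5.0                       # initial capitals
c_c, c_r = 6.0, 2.0                        # premium rates

def simulate_once():
    t, paid_c, paid_r = 0.0, 0.0, 0.0
    ruin_c = ruin_r = False
    while True:
        t += rng.exponential(1.0 / lam)    # next claim arrival
        if t > T:
            break
        x = rng.exponential(1.5)           # claim size
        paid_c += min(x, d)
        paid_r += max(x - d, 0.0)
        ruin_c |= u_c + c_c * t - paid_c < 0
        ruin_r |= u_r + c_r * t - paid_r < 0
    return ruin_c, ruin_r

res = np.array([simulate_once() for _ in range(10_000)])
print("cedent:", res[:, 0].mean(), " reinsurer:", res[:, 1].mean(),
      " joint (either ruined):", (res[:, 0] | res[:, 1]).mean())
```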
APA, Harvard, Vancouver, ISO, and other styles
48

Panchenko, Valentyn. "Nonparametric methods in economics and finance: dependence, causality and prediction." [S.l. : Amsterdam : s.n.] ; Universiteit van Amsterdam [Host], 2006. http://dare.uva.nl/document/30844.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Ranciati, Saverio <1988&gt. "Statistical modelling of spatio-temporal dependencies in NGS data." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amsdottorato.unibo.it/7680/.

Full text
Abstract:
Next-generation sequencing (NGS) has rapidly become the current standard in genetics-related analysis. This switch from microarrays to NGS required new statistical strategies to address the research questions inherent to the phenomena considered. First and foremost, NGS datasets usually consist of discrete observations characterized by overdispersion - that is, a discrepancy between expected and observed variability - and an abundance of zeros, measured across a huge number of regions of the genome. With respect to chromatin immunoprecipitation sequencing (ChIP-Seq), a class of NGS data, a primary focus is to discover the underlying (unobserved) pattern of 'enrichment': more particularly, there is interest in the interactions between genes (or broader regions of the genome) and proteins, as they describe the mechanism of regulation under different conditions such as healthy or damaged tissue. Another interesting research question involves the clustering of these observations into groups that have practical relevance and interpretability, considering in particular that a single unit could potentially be allocated to more than one of these clusters, as it is reasonable to assume that its participation is not exclusive to one and only one biological function and/or mechanism. Many of these complex processes could also be described by sets of ordinary differential equations (ODEs), which are mathematical representations of the changes of a system through time, following dynamics governed by parameters of interest. In this thesis, we address the aforementioned tasks and research questions employing different statistical strategies, such as model-based clustering, graphical models, penalized smoothing and regression. We propose extensions of the existing approaches to better fit the problem at hand, and we elaborate the methodology in a Bayesian environment, with a focus on incorporating the structural dependencies - both spatial and temporal - of the data at our disposal.
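To make the overdispersion-plus-excess-zeros point concrete, here is a generic Python sketch that fits a zero-inflated negative binomial model to synthetic counts by maximum likelihood. This illustrates the data features only, not the spatio-temporal Bayesian models developed in the thesis.

```python
import numpy as np
from scipy.stats import nbinom
from scipy.optimize import minimize

# Zero-inflated negative binomial (ZINB): with probability pi the count is
# a structural zero, otherwise NB with mean mu and size (dispersion) 'size'.
rng = np.random.default_rng(5)
true_pi, true_mu, true_size = 0.3, 5.0, 2.0
zeros = rng.random(2000) < true_pi
nb_draws = rng.negative_binomial(true_size,
                                 true_size / (true_size + true_mu), 2000)
counts = np.where(zeros, 0, nb_draws)

def negloglik(theta):
    logit_pi, log_mu, log_size = theta
    pi = 1.0 / (1.0 + np.exp(-logit_pi))
    mu, size = np.exp(log_mu), np.exp(log_size)
    p = size / (size + mu)                  # NB 'success' parametrisation
    pmf = nbinom.pmf(counts, size, p)
    lik = np.where(counts == 0, pi + (1 - pi) * pmf, (1 - pi) * pmf)
    return -np.sum(np.log(lik + 1e-300))    # guard against log(0)

fit = minimize(negloglik, x0=[0.0, 1.0, 0.0], method="Nelder-Mead")
print(fit.x)   # roughly logit(0.3), log(5), log(2)
```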
APA, Harvard, Vancouver, ISO, and other styles
50

Zurauskiene, Justina. "Bayesian nonparametric approaches to modelling dependencies in systems biology." Thesis, Imperial College London, 2014. http://hdl.handle.net/10044/1/30660.

Full text
Abstract:
All living organisms exhibit complex behaviour, and this is a result of the underlying regulatory mechanisms that occur at cellular and molecular levels. For this reason such reactions are of central importance in the field of systems biology. Throughout this thesis we are concerned with mathematical models that allow us to better understand and represent the biological phenomena behind experimental data, and equally to make predictions about key regulatory processes happening in the cells. Specifically, this work explores and demonstrates how modern Bayesian nonparametric techniques, namely Gaussian process regression and Dirichlet process mixture models, can be applied in order to model complex systems biology data. Here we have developed a new technique based on Gaussian process regression approaches to model metabolic regulatory processes at the cellular level. Our technique allows us to model noisy metabolite time course data and predicts dynamical metabolic flux behaviour in the associated pathways; we demonstrate that by learning the dependencies between several metabolites we can strengthen our predictions in sparsely sampled regions. We furthermore discuss when Gaussian processes can accurately reconstruct the underlying functions and when they are subject to the Nyquist limit. Next we proceed to modelling biological processes that occur at the molecular level. Here we are interested in studying large and diverse functional genomics datasets. A variety of computational techniques allow us to analyse such data and model biological processes underlying them; an important class of these methods are techniques that permit the detection of heterogeneity in experimentally observed data. Here we employ Dirichlet processes to estimate the number of clusters within such genomic datasets and further propose a new method to tackle the data fusion problem. Our technique primarily relies on the outcomes from nonparametric Bayesian clustering approaches and is based on graph theory concepts, but in parallel we also discuss and show how this graph-theoretical approach can be extended to integrate results from non-Bayesian type clustering algorithms. We show that by integrating several data types we can successfully identify e.g. sets of genes that are regulated by similar transcription factors.
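A minimal Gaussian process regression sketch in Python, the core tool this abstract describes for noisy, sparsely sampled metabolite time courses: an RBF kernel and the exact posterior mean and variance at new time points. The data are synthetic and the hyperparameters are fixed by hand rather than learned.

```python
import numpy as np

# GP regression with an RBF kernel: posterior mean and variance at new
# inputs given noisy observations. Hyperparameters fixed for simplicity.
def rbf(a, b, ell=1.0, sf=1.0):
    d2 = (a[:, None] - b[None, :]) ** 2
    return sf ** 2 * np.exp(-0.5 * d2 / ell ** 2)

rng = np.random.default_rng(6)
t_obs = np.sort(rng.uniform(0, 10, 15))          # sparse, irregular samples
y_obs = np.sin(t_obs) + 0.1 * rng.normal(size=15)

sn2 = 0.01                                       # observation noise variance
K = rbf(t_obs, t_obs) + sn2 * np.eye(15)
t_new = np.linspace(0, 10, 200)
Ks = rbf(t_new, t_obs)

alpha = np.linalg.solve(K, y_obs)
mean = Ks @ alpha                                # posterior mean
var = (rbf(t_new, t_new).diagonal()
       - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T)))
print(mean[:3], var[:3])                         # predictions in a sparse region
```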
APA, Harvard, Vancouver, ISO, and other styles
