Dissertations / Theses on the topic 'Mixture cure model'

Consult the top 15 dissertations / theses for your research on the topic 'Mixture cure model.'

You can also download the full text of each publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Krachey, Elizabeth Catherine. "Variations on the Accelerated Failure Time Model: Mixture Distributions, Cure Rates, and Different Censoring Scenarios." NCSU, 2009. http://www.lib.ncsu.edu/theses/available/etd-08182009-102357/.

Abstract:
The accelerated failure time (AFT) model is a popular model for time-to-event data. It provides a useful alternative when the proportional hazards assumption is in question, and it has an intuitive linear-regression interpretation in which the logarithm of the survival time is regressed on the covariates. We have explored several deviations from the standard AFT model. Standard survival analysis assumes that, given perfect follow-up, every patient will eventually experience the event of interest. However, in some clinical trials a number of patients may never experience such an event and are, in essence, considered cured of the disease. In such a scenario the Kaplan-Meier survival curve levels off at a nonzero proportion: there is a window of time in which most or all of the events occur, while heavy censoring occurs in the tail. The two-component mixture cure model provides a means of adjusting the AFT model to account for this cured fraction. Chapters 1 and 2 propose parametric and semiparametric estimation procedures for this cure rate AFT model. Survival analysis methods for interval censoring have been much slower to develop than those for right censoring, in part because interval-censored data have a more complex censoring mechanism and because the counting process theory developed for right-censored data does not extend to them. Owing to the analytical difficulty associated with interval-censored data, recent estimation strategies have focused on implementation rather than on the large-sample theoretical justification of the semiparametric AFT model. Chapter 3 proposes a semiparametric Bayesian estimation procedure for the AFT model under interval censoring.
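The two-component model referred to here writes the population survival function as S(t) = 1 - p(x) + p(x)S_u(t), where p(x) is the probability of being susceptible and S_u is the survival function of the susceptibles. As a rough illustration of that likelihood structure, and not Krachey's own code, the following sketch simulates data from a fully parametric (log-normal) cure rate AFT model and fits it by maximum likelihood; the logistic incidence part and all parameter names are our own assumptions.

    # A minimal sketch, assuming a log-normal AFT latency and a logistic
    # incidence model: S(t) = (1 - p) + p * S_u(t). Not the dissertation's code.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import expit
    from scipy.stats import norm

    rng = np.random.default_rng(42)
    n = 2000
    x = rng.normal(size=n)

    a0, a1 = 0.5, 1.0               # incidence: logit P(susceptible) = a0 + a1*x
    b0, b1, sigma = 1.0, -0.5, 0.8  # latency: log T = b0 + b1*x + sigma*eps

    susceptible = rng.uniform(size=n) < expit(a0 + a1 * x)
    log_t = b0 + b1 * x + sigma * rng.normal(size=n)
    t = np.where(susceptible, np.exp(log_t), np.inf)  # cured subjects never fail
    c = rng.uniform(1.0, 30.0, size=n)                # administrative censoring
    time, event = np.minimum(t, c), (t <= c).astype(float)

    def negloglik(theta):
        a0, a1, b0, b1, log_sig = theta
        sig = np.exp(log_sig)
        p = expit(a0 + a1 * x)
        z = (np.log(time) - (b0 + b1 * x)) / sig
        f_u = norm.pdf(z) / (sig * time)   # log-normal density of the susceptibles
        s_u = norm.sf(z)                   # ... and their survival function
        lik = np.where(event == 1, p * f_u, (1 - p) + p * s_u)
        return -np.sum(np.log(lik + 1e-300))

    fit = minimize(negloglik, x0=np.zeros(5), method="BFGS")
    print(fit.x)  # estimates of (a0, a1, b0, b1, log sigma)

The semiparametric procedures of Chapters 1 and 2 would leave the error distribution unspecified; the sketch only conveys how events and censored observations enter the mixture likelihood.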
2

Erich, Roger Alan. "Regression Modeling of Time to Event Data Using the Ornstein-Uhlenbeck Process." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1342796812.

3

Seppä, K. (Karri). "Quantifying regional variation in the survival of cancer patients." Doctoral thesis, Oulun yliopisto, 2012. http://urn.fi/urn:isbn:9789526200118.

Abstract:
Monitoring regional variation in the survival of cancer patients is an important tool for assessing whether regional equity in cancer care is being realised. When regions are small or sparsely populated, the random component in the total variation across regions becomes prominent. The broad aim of this doctoral thesis is to develop methods for assessing regional variation in the cause-specific and relative survival of cancer patients in a country, and for quantifying the public health impact of that regional variation in the presence of competing hazards of death, using summary measures that are interpretable also for policy-makers and other stakeholders. Methods are proposed for summarising the survival of a patient population with incomplete follow-up in terms of the mean and median survival times. A cure fraction model with two sets of random effects for regional variation is fitted to cause-specific survival data in a Bayesian framework using Markov chain Monte Carlo simulation. This hierarchical model is extended to the estimation of relative survival, where the expected survival is estimated by region and treated as a random quantity. The public health impact of regional variation is quantified by the extra survival time and the number of avoidable deaths that would be gained if all patients achieved the most favourable level of relative survival. The proposed methods were applied to real data sets from the Finnish Cancer Registry. Estimates of the mean and median survival times of colon and thyroid cancer patients, respectively, were corrected for the bias caused by the inherent selection of patients, with respect to age at diagnosis, during the period of diagnosis. The cure fraction model allowed estimation of regional variation in the cause-specific and relative survival of breast and colon cancer patients, respectively, with a parsimonious number of parameters, yielding reasonable estimates also for sparsely populated hospital districts.
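In our own notation (the thesis's exact estimands may differ), the model and summary measure described here can be sketched as a cure fraction model for the relative survival ratio of region k, with avoidable deaths obtained by replacing each region's relative survival with the most favourable level R*(t):

    R_k(t) = \pi_k + (1 - \pi_k)\, S_{u,k}(t)                % cure fraction \pi_k, latency survival S_{u,k}
    S_k^{\mathrm{obs}}(t) = S_k^{\mathrm{exp}}(t)\, R_k(t)    % observed = expected (population) x relative survival
    \mathrm{AD}_k(t) = N_k\, S_k^{\mathrm{exp}}(t)\, \bigl[ R^{*}(t) - R_k(t) \bigr]   % avoidable deaths among N_k patients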
4

Kutal, Durga Hari. "Various Approaches on Parameter Estimation in Mixture and Non-mixture Cure Models." Thesis, Florida Atlantic University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10929031.

Abstract:

Analyzing lifetime data with long-term survivors is an important topic in medical applications. Cure models are commonly used to analyze survival data that include a proportion of cured subjects, or long-term survivors. To accommodate this cured proportion, both mixture and non-mixture cure models are considered. In this dissertation, we use both maximum likelihood and Bayesian methods to estimate the model parameters. Simulation studies are carried out to verify the finite-sample performance of the estimation methods. Real data analyses are reported to illustrate goodness-of-fit under Fréchet, Weibull and Exponentiated Exponential susceptible distributions. Among the three parametric susceptible distributions, the Fréchet is the most promising.
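The two families compared here differ in how the cure fraction enters the population survival function. The sketch below is our own illustration (with a Fréchet susceptible distribution, as in the dissertation, but parameter names of our choosing): it contrasts the mixture form S(t) = pi + (1 - pi)S0(t) with the non-mixture, or promotion-time, form S(t) = exp{-theta F0(t)}, whose cure fraction is exp(-theta).

    # Sketch contrasting mixture and non-mixture (promotion-time) cure models
    # with a Frechet susceptible distribution; our own toy parameters.
    import numpy as np

    def frechet_cdf(t, alpha, s):
        """F0(t) = exp(-(t/s)**(-alpha)) for t > 0."""
        return np.exp(-(np.asarray(t, float) / s) ** (-alpha))

    def mixture_surv(t, pi, alpha, s):
        return pi + (1.0 - pi) * (1.0 - frechet_cdf(t, alpha, s))

    def nonmixture_surv(t, theta, alpha, s):
        return np.exp(-theta * frechet_cdf(t, alpha, s))

    t = np.linspace(0.1, 50, 5)
    print(mixture_surv(t, pi=0.3, alpha=1.5, s=5.0))
    print(nonmixture_surv(t, theta=-np.log(0.3), alpha=1.5, s=5.0))  # same cure fraction

Both curves plateau at the cure fraction 0.3 as t grows; choosing theta = -log(pi) aligns the two models' long-term survivor proportions.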

Next, we extend the non-mixture cure model to include a change point in a covariate for right-censored data. A smoothed likelihood approach is used to address the problem that the log-likelihood function is not differentiable with respect to the change point. The simulation study is based on the non-mixture change-point cure model with an exponential distribution for the susceptible subjects, and the results reveal convincing performance of the proposed estimation method.
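The non-differentiability arises because the change point enters the model through an indicator 1{x > c}. A standard smoothing device, sketched below under our own assumptions (the dissertation's exact kernel and bandwidth choices may differ), replaces the indicator with a smooth sigmoid so that gradient-based maximization becomes possible:

    # Logistic smoothing of the indicator 1{x > c}; tends to the indicator
    # as the bandwidth h -> 0. A generic illustration, not the thesis's code.
    import numpy as np
    from scipy.special import expit

    def smooth_indicator(x, c, h):
        return expit((x - c) / h)

    x = np.array([0.2, 0.9, 1.1, 2.0])
    for h in (0.5, 0.1, 0.01):
        print(h, smooth_indicator(x, c=1.0, h=h).round(3))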

5

Weston, Claire Louise. "Applications of non-mixture cure models in childhood cancer studies." Thesis, University of Leicester, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.492826.

Abstract:
The United Kingdom Children's Cancer Study Group (UKCCSG) was formed in 1977 with the aims of improving the management, and advancing knowledge and study, of childhood cancers. UKCCSG studies are usually analysed using Cox models to assess whether certain prognostic factors influence survival. Cox models assume that hazards are proportional and that all individuals will eventually experience the event of interest, resulting in long-term survival of zero. In childhood cancer this may not be the case, as survival rates in excess of 70% are often observed. Parametric cure models have been proposed as an alternative method for analysing long-term outcome data in cases such as these.
6

Ward, Alexander P. "Modelling Response Patterns for A Large-Scale Mail Survey Study Using Mixture Cure Models." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1555587554123989.

7

Calsavara, Vinicius Fernando. "Modelos de sobrevivência com fração de cura usando um termo de fragilidade e tempo de vida Weibull modificada generalizada." Universidade Federal de São Carlos, 2011. https://repositorio.ufscar.br/handle/ufscar/4546.

Abstract:
In survival analysis, some studies are characterized by a significant fraction of units that will never experience the event of interest, even if followed for a long period of time. For the analysis of such long-term data, we consider the standard mixture model of Berkson & Gage, assuming the generalized modified Weibull distribution for the lifetimes of individuals at risk. This model includes several classes of models as special cases, which makes it useful for discriminating among models. The standard mixture model implicitly assumes that the individuals experiencing the event of interest have homogeneous risk. Alternatively, we consider the standard mixture model with a frailty term in order to quantify unobservable heterogeneity among individuals. This model is characterized by the inclusion of an unobservable random variable representing information that cannot be, or has not been, observed. We assume a multiplicative frailty with a gamma distribution. For the lifetimes of individuals at risk we assume the Weibull distribution, obtaining the Weibull standard mixture model with frailty. For both models we carried out simulation studies to analyze the frequentist properties of the estimation procedures. Applications to real data sets demonstrated the applicability of the proposed models, with parameter estimates obtained via both maximum likelihood and Bayesian approaches.
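Integrating out a mean-one gamma frailty with variance theta gives the susceptible survival function in closed form, so the frailty mixture model has a simple population survival function. A minimal sketch in our notation, assuming a Weibull baseline cumulative hazard H0(t) = (t/lam)**k:

    # S(t) = pi + (1 - pi) * (1 + theta * H0(t)) ** (-1/theta): standard mixture
    # model with multiplicative gamma frailty on the susceptibles (our notation).
    import numpy as np

    def frailty_cure_surv(t, pi, theta, lam, k):
        H0 = (np.asarray(t, float) / lam) ** k
        return pi + (1.0 - pi) * (1.0 + theta * H0) ** (-1.0 / theta)

    t = np.linspace(0.0, 20.0, 5)
    print(frailty_cure_surv(t, pi=0.25, theta=0.5, lam=4.0, k=1.3))
    # theta -> 0 recovers the frailty-free Weibull standard mixture model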
8

Pešout, Pavel. "Přístupy k shlukování funkčních dat." Doctoral thesis, Vysoká škola ekonomická v Praze, 2007. http://www.nusl.cz/ntk/nusl-77066.

Abstract:
Classification is a very common task in information processing and an important problem in many sectors of science and industry. When data are measured as functions of a dependent variable such as time, standard algorithms may not capture the shape of each individual curve properly, because they consider only the chosen measurement points. For this reason, this thesis focuses on techniques that directly address the problems of curve clustering and of classifying new individuals. The main goal of this work is to develop alternative methodologies by extending various statistical approaches, to consolidate established algorithms, to present modified forms fitted to the demands of the clustering problem, and to compare several efficient curve clustering methods through extensive experiments on simulated data, including a comprehensive comparison of their practical utility. The proposed clustering algorithms are based on two principles. First, it is assumed that the set of trajectories can be modelled probabilistically as sequences of points generated from a finite mixture model consisting of regression components; density-based clustering methods using maximum likelihood estimation are therefore investigated to recognize the most homogeneous partitioning. Attention is paid both to the maximum likelihood approach, which treats the cluster memberships as model parameters, and to the probabilistic model with the iterative Expectation-Maximization algorithm, which treats them as random variables. To deal with the hidden-data problem, both Gaussian and less conventional gamma mixtures are considered and adapted for use in two dimensions. To cope with high variability within each subpopulation, a two-level random-effects regression mixture is introduced that allows an individual to vary from the template for its group. Second, the well-known K-Means algorithm is applied to the estimated regression coefficients. Because K-Means is not invariant to linear transformations, attention is devoted to the problem of optimal data fitting; to overcome this problem, the clustering task is integrated with Markov Chain Monte Carlo approaches. The thesis is also concerned with functional discriminant analysis, including linear and quadratic scores and their modified probabilistic forms using random mixtures; as with K-Means, it is shown how to apply Fisher's method of canonical scores to the regression coefficients. Experiments on simulated datasets demonstrate the performance of all the methods mentioned and make it possible to choose those that are most accurate and time-efficient. A further contribution is a set of recommendations for practical application. The implementation was carried out in Mathematica 4.0. Finally, the possibilities offered by curve clustering algorithms in broad research areas of modern science are examined, such as neurology, genome studies, and speech and image recognition systems, with future integration with ubiquitous computing left open. Their utility in economics is illustrated by an application to claims analysis for life insurance products. The goals of the thesis have been achieved.
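As a toy illustration of the second principle, the regression-coefficient route can be sketched in a few lines: fit a basis to each curve, then cluster the coefficient vectors. The cubic-polynomial basis, simulated curves and scikit-learn call below are our own choices, not the thesis's Mathematica implementation.

    # Cluster curves via per-curve regression coefficients + K-Means (toy sketch).
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 30)
    # two toy groups of curves: rising vs. falling trends plus noise
    curves = np.vstack([2 * t + rng.normal(0, 0.1, (20, t.size)),
                        1 - t + rng.normal(0, 0.1, (20, t.size))])

    coefs = np.array([np.polyfit(t, y, deg=3) for y in curves])  # features
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coefs)
    print(labels)

As the thesis points out, K-Means applied this way is not invariant to linear transformations of the coefficients, which is what motivates integrating the clustering step with MCMC approaches.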
9

Lee, Kyeong Eun. "Bayesian models for DNA microarray data analysis." Diss., Texas A&M University, 2005. http://hdl.handle.net/1969.1/2465.

Abstract:
Selection of significant genes via expression patterns is important in a microarray problem. Owing to the small sample size and large number of variables (genes), the selection process can be unstable. This research proposes a hierarchical Bayesian model for gene (variable) selection. We employ latent variables in a regression setting and use a Bayesian mixture prior to perform the variable selection. Due to the binary nature of the data, the posterior distributions of the parameters are not in explicit form, and we need a combination of truncated sampling and Markov Chain Monte Carlo (MCMC) based computation techniques to simulate the posterior distributions. The Bayesian model is flexible enough to identify the significant genes as well as to perform future predictions. The method is applied to cancer classification via cDNA microarrays. In particular, the genes BRCA1 and BRCA2 are associated with a hereditary disposition to breast cancer, and the method is used to identify the set of significant genes for classifying BRCA1 versus others. Microarray data can also be applied to survival models. We address the issue of how to reduce the dimension in building the model by selecting significant genes, as well as assessing the estimated survival curves, and we consider the well-known Weibull regression and semiparametric proportional hazards (PH) models for survival analysis. With microarray data, we need to consider the case where the number of covariates p exceeds the number of samples n. Specifically, for a given vector of response values, which are times to event (death or censored times), and p gene expressions (covariates), we address the issue of how to reduce the dimension by selecting the responsible genes, which control the survival time. This approach enables us to estimate the survival curve when n << p. In our approach, rather than fixing the number of selected genes, we assign a prior distribution to this number. This creates additional flexibility by allowing the imposition of constraints, such as bounding the dimension via a prior, which in effect works as a penalty. To implement our methodology, we use a Markov Chain Monte Carlo (MCMC) method. We demonstrate the use of the methodology with (a) diffuse large B-cell lymphoma (DLBCL) complementary DNA (cDNA) data and (b) breast carcinoma data. Lastly, we propose a mixture of Dirichlet process models using the discrete wavelet transform for curve clustering. In order to characterize these time-course gene expressions, we consider them as trajectory functions of time and gene-specific parameters and obtain their wavelet coefficients by a discrete wavelet transform. We then build cluster curves using a mixture of Dirichlet process priors.
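One common form of the Bayesian mixture ("spike-and-slab") prior referred to here, written in our notation (the dissertation's exact specification may differ), attaches a latent inclusion indicator gamma_j to each gene:

    \beta_j \mid \gamma_j \;\sim\; \gamma_j\, \mathrm{N}(0, \tau^2) + (1 - \gamma_j)\, \delta_0,
    \qquad \gamma_j \sim \mathrm{Bernoulli}(p_j)

MCMC then alternates between updating the indicators and the nonzero coefficients, and the posterior of the count \sum_j \gamma_j controls the model dimension, which is how a prior bound on the number of selected genes acts as a penalty.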
10

Gouveia, Bruno Pauka. "Modelo de mistura padrão com tempos de vida exponenciais ponderados." Universidade Federal de São Carlos, 2010. https://repositorio.ufscar.br/handle/ufscar/4544.

Abstract:
In this work, we briefly introduce the concepts of long-term survival analysis. We dedicate ourselves exclusively to the standard mixture cure model of Boag (1949) and Berkson & Gage (1952), presenting its derivation and the probability function of immunes, which is taken from the model itself, and we investigate the identifiability issues of the mixture model. Motivated by the possibility that an experimental design can lead to biased sample selection, we study weighted probability distributions, more specifically the family of weighted exponential distributions and its properties. We study two distributions that belong to this family: the length-biased exponential distribution and the beta exponential distribution. Using the GAMLSS package in R, we conduct simulation studies intended to demonstrate the bias that occurs when the possibility of a weighted sample is ignored.
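In the exponential case the bias being illustrated has a closed form: under length-biased sampling the density becomes f_w(x) = x f(x)/E[X], so an Exp(rate) sample becomes a Gamma(2, 1/rate) sample and the naive exponential estimate of the mean doubles. A toy sketch of this fact (our own, independent of the thesis's GAMLSS code):

    # Length-biased exponential = Gamma(shape=2): naive fitting overstates the mean.
    import numpy as np

    rng = np.random.default_rng(1)
    rate = 0.5                      # true exponential rate, mean = 2
    biased = rng.gamma(shape=2.0, scale=1.0 / rate, size=100_000)

    print("true mean:", 1 / rate)                               # 2.0
    print("naive estimate from biased sample:", biased.mean())  # ~ 2 / rate = 4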
11

Sorin, Edouard. "Fissuration en modes mixtes dans le bois : diagnostic et évaluation des méthodes de renforcement local." Thesis, Bordeaux, 2018. http://www.theses.fr/2018BORD0264/document.

Abstract:
This thesis was carried out at the University of Bordeaux. The project concerns timber construction, and in particular understanding the phenomena that cause cracks in timber structures, one objective being to design effective local reinforcement methods for structural elements. The study proceeds in several stages: understanding the phenomena involved in crack initiation in structural timber; modelling mixed-mode cracking; and searching for reinforcement solutions with a detailed understanding of the impact of different reinforcement types on crack propagation. The study is accompanied by an experimental campaign to verify the effectiveness of the chosen reinforcements and to identify the impact of size effects on the prediction models. Large-scale tests are therefore performed to better understand group effects and size effects in the material as used in practice. The aim is then to define predictive tools for the strength of reinforced systems and inspection methods for Quality Assurance Plans.
The purpose of reinforcing assemblies and structural elements in wood is to overcome the resistance limits of the material by transferring greater loads in areas that can otherwise lead to premature cracking in structures. The reinforcements used can be made of steel, composite materials or wood, and their anchorage can be mechanical (screwed fasteners) or by adhesion (structural bonding, such as glued-in rods). In both cases, the transfer of loading remains poorly understood, and the effects of crack initiation and crack deflection are not well apprehended. In engineering practice, the wood resistance in the reinforced area is neglected, in line with the precautionary principle. Current scientific investigations consider the resistance of such techniques without accounting for the interactions between the quasi-brittle behaviour of the wood and the reinforcements, which govern the gain in mechanical performance. These solutions can, however, fail through progressive splitting of the wood and loss of anchorage of the reinforcement. It therefore seems appropriate to propose predictions of the short-term splitting strength of reinforced and unreinforced beams, which can then be used to explore long-term failure mechanisms. In this study, a global model for predicting the ultimate strength of structural components subject to splitting, both reinforced and unreinforced, was developed. It accounts for the quasi-brittle behaviour of the wood and for mixed-mode crack propagation, using a mixing law established on the R-curves. The relevance of this modelling was then compared with the current Eurocode 5 design methods for notched beams, through experimental campaigns conducted at different scales.
12

Faria, Adriano Augusto de. "Essays in empirical finance." reponame:Repositório Institucional do FGV, 2017. http://hdl.handle.net/10438/19503.

Abstract:
This thesis is a collection of essays in empirical finance, mainly focused on term structure models. In the first three chapters, we develop methods to extract the yield curve from government and corporate bonds, and we measure the performance of these methods in pricing, Value at Risk and forecasting exercises. The last chapter, in turn, discusses the effects of different proxies for the optimal portfolio on the estimation of a CCAPM model.

In the first chapter, we propose a segmented model to deal with the seasonalities appearing in real yield curves. In different markets, the short end of the real yield curve is influenced by seasonalities of the price index that imply a lack of smoothness in this segment. Borrowing from the flexibility of spline models, a B-spline function is used to fit the short end of the yield curve, while the medium and long ends are captured by a parsimonious parametric four-factor exponential model. We illustrate the benefits of the proposed term structure model by estimating real yield curves in one of the biggest government index-linked bond markets in the world. Our model is simultaneously able to fit the yield curve and to provide unbiased Value at Risk estimates for different portfolios of bonds negotiated in this market.

Chapter 2 introduces a novel framework for the estimation of corporate bond spreads based on mixture models. The modelling methodology allows us to enhance the informational content used to estimate the firm-level term structure by clustering firms together using observable firm characteristics. Our model builds on the previous literature linking firm-level characteristics to credit spreads. Specifically, we show that by clustering firms using their observable variables, instead of traditional matrix pricing (clustering by rating/sector), it is possible to achieve gains of several orders of magnitude in bond pricing. Empirically, we construct a large panel of firm-level explanatory variables based on previous research and evaluate their performance in explaining credit spread differences. Relying on panel data regressions, we identify the most significant factors driving credit spreads to include in our term structure model. Using this selected sample, we show that our methodology significantly improves in-sample fit and produces reliable out-of-sample price estimates compared with traditional models.

Chapter 3 presents the paper "Forecasting the Brazilian Term Structure Using Macroeconomic Factors", published in the Brazilian Review of Econometrics (BRE). It studies forecasting of the Brazilian interest rate term structure using common factors extracted from a wide database of macroeconomic series covering January 2000 to May 2012. First, the model proposed by Moench (2008) is implemented, in which the dynamics of the short-term interest rate are modelled using a factor-augmented VAR and the term structure is derived using the restrictions implied by no-arbitrage. As in the original study, this model achieved better predictive performance than the usual benchmarks, but its results deteriorated with increasing maturity. To avoid this problem, we propose that the dynamics of each rate be modelled jointly with the macroeconomic factors, eliminating the no-arbitrage restrictions; this produced superior forecasting results. Finally, the macro factors were inserted into a parsimonious parametric three-factor exponential model.

The last chapter presents the paper "Empirical Selection of Optimal Portfolios and its Influence in the Estimation of Kreps-Porteus Utility Function Parameters", also published in the BRE. It investigates how selecting different portfolios to represent the optimal aggregate wealth endogenously derived in equilibrium models with Kreps-Porteus recursive utility affects the estimation of parameters related to the elasticity of intertemporal substitution and risk aversion. We argue that the usual market-wide stock index is not a good portfolio to represent the optimal wealth of the representative agent, and we propose the portfolio of the investment fund industry as an alternative. Especially for Brazil, where that industry invests most of its resources in fixed income, this substitution of the optimal proxy portfolio caused a significant increase in the estimated risk aversion coefficient and the elasticity of intertemporal substitution in consumption.
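The "parsimonious parametric four-factor exponential model" mentioned in the first chapter is of the Svensson type; as a stand-in under that assumption (the thesis's exact parametrization may differ), such a curve can be sketched as:

    # Svensson-style four-factor exponential yield curve (illustrative stand-in).
    import numpy as np

    def svensson(tau, b0, b1, b2, b3, l1, l2):
        x1, x2 = tau / l1, tau / l2
        f1 = (1 - np.exp(-x1)) / x1
        return (b0 + b1 * f1 + b2 * (f1 - np.exp(-x1))
                + b3 * ((1 - np.exp(-x2)) / x2 - np.exp(-x2)))

    tau = np.array([0.25, 1.0, 5.0, 10.0, 30.0])  # maturities in years
    print(svensson(tau, b0=0.05, b1=-0.02, b2=0.01, b3=0.005, l1=1.5, l2=8.0))

In the segmented model of Chapter 1, a B-spline would replace this parametric form at the seasonal short end of the curve.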
13

Prabel, Benoit. "MODÉLISATION AVEC LA MÉTHODE X-FEM DE LA PROPAGATION DYNAMIQUE ET DE L'ARRÊT DE FISSURE DE CLIVAGE DANS UN ACIER DE CUVE REP." Phd thesis, INSA de Lyon, 2007. http://tel.archives-ouvertes.fr/tel-00278939.

Abstract:
This thesis presents a study of the propagation and arrest of a cleavage crack in a PWR reactor pressure vessel steel. A good understanding of the phenomena involved requires solid experimental data as well as a powerful modelling tool. The X-FEM method is implemented in Cast3m, making it possible to simulate crack propagation efficiently with finite elements. Two techniques for updating the level-set functions, together with a non-conforming integration scheme, are proposed. The crack propagation test campaign covers three geometries: CT specimens, and rings compressed in mode I and in mixed mode. Propagation velocities are recorded, and fractographic examination identifies cleavage as the failure mechanism. A propagation model based on the principal stress at the crack tip is identified, in which the critical cleavage stress depends on the loading rate. This model makes it possible to predict accurately, by numerical simulation, the crack behaviour observed experimentally.
14

蔡長宇. "Semiparametric analysis for mixture cure model using prevalent survival data." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/26453975967007014919.

15

Huang, Wei-Lun (黃韋倫). "Semiparametric Analysis of Transformation Mixture Cure Models for Interval-censored Data." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/74t8xb.

Abstract:
Master's thesis
National Yang-Ming University
Institute of Public Health
Academic year 106 (ROC calendar)
Interval-censored data arise when individuals are observed periodically, so that the event is only known to occur within a certain interval. In this thesis, we consider mixed-case interval-censored data. Because medical treatment has progressed greatly, some patients can be cured of certain diseases; in genetic epidemiology, subjects are nonsusceptible to certain genetic diseases when they do not carry the specific genes related to them. To analyze this type of data, we propose transformation mixture cure models, in which the first part is a logistic regression model for the probability of cure, and the second part is a semiparametric transformation model for the event time of the non-cured subjects. The transformation model includes the Cox proportional hazards model and the proportional odds model as special cases. We develop an EM algorithm to obtain the semiparametric maximum likelihood estimators and estimate the variance of the parameter estimates with profile likelihood. In simulation studies, we consider practical scenarios to evaluate the performance of the proposed approach. Finally, we apply the proposed approach to analyze data on HIV-1 infection among persons with hemophilia.
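At the core of EM algorithms for mixture cure models is the E-step weight: the posterior probability that a censored subject is uncured. The sketch below is ours and shows the simpler right-censored version of the formula; the interval-censored E-step in the thesis would instead use the probability that an uncured subject's event falls after the last inspection time.

    # E-step weight for a censored subject in a mixture cure model (right-censored
    # illustration; our notation, not the thesis's interval-censored estimator).
    def uncured_weight(pi_x, s_u):
        """pi_x: P(uncured | covariates); s_u: latency survival at the censoring time."""
        return pi_x * s_u / ((1.0 - pi_x) + pi_x * s_u)

    print(uncured_weight(pi_x=0.7, s_u=0.4))  # ~0.483: censoring shifts mass toward 'cured'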