Dissertations / Theses on the topic 'Survival analysis'


Consult the top 50 dissertations / theses for your research on the topic 'Survival analysis.'


You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Wardak, Mohammad Alif. "Survival analysis." CSUSB ScholarWorks, 2005. https://scholarworks.lib.csusb.edu/etd-project/2810.

Full text
Abstract:
Survival analysis is a statistical approach that accounts for the amount of time an experimental unit contributes to a study. A Mayo Clinic study of 418 primary biliary cirrhosis patients followed over a ten-year period was used. The Kaplan-Meier estimator, a non-parametric statistic, and the Cox proportional hazards method were the tools applied. The Kaplan-Meier results report the total and censored counts.
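The Kaplan-Meier estimator applied in this thesis has a simple product-limit form: at each event time the survival estimate is multiplied by (1 - d/n), where d is the number of events and n the number at risk. A minimal illustrative sketch in Python (not code from the thesis):

```python
def kaplan_meier(times, events):
    """Product-limit estimate of the survival function S(t).

    times:  observed times (event or censoring)
    events: 1 if the event occurred, 0 if censored
    Returns (time, survival probability) pairs at each event time.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, e in data[i:] if tt == t and e == 1)
        ties = sum(1 for tt, e in data[i:] if tt == t)
        if deaths > 0:
            surv *= 1 - deaths / n_at_risk  # product-limit step
            curve.append((t, surv))
        n_at_risk -= ties
        i += ties
    return curve
```

With a toy sample of four subjects, one censored, the estimate steps down only at observed event times.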
APA, Harvard, Vancouver, ISO, and other styles
2

Abrams, Keith Rowland. "Bayesian survival analysis." Thesis, University of Liverpool, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.316744.

Full text
Abstract:
In cancer research the efficacy of a new treatment is often assessed by means of a clinical trial. In such trials the outcome measure of interest is usually time to death from entry into the study. The time to intermediate events may also be of interest, for example time to the spread of the disease to other organs (metastases). Thus, cancer clinical trials can be seen to generate multi-state data, in which patients may be in any one of a finite number of states at a particular time. The classical analysis of data from cancer clinical trials uses a survival regression model. This type of model allows for the fact that patients in the trial will have been observed for different lengths of time and for some patients the time to the event of interest will not be observed (censored). The regression structure means that a measure of treatment effect can be obtained after allowing for other important factors. Clinical trials are not conducted in isolation, but are part of an ongoing learning process. In order to assess the current weight of evidence for the use of a particular treatment, a Bayesian approach is necessary. Such an approach allows for the formal inclusion of prior information, either in the form of clinical expertise or the results from previous studies, into the statistical analysis. An initial Bayesian analysis, for a single non-recurrent event, can be performed using non-temporal models that consider the occurrence of events up to a specific time from entry into the study. Although these models are conceptually simple, they do not explicitly allow for censoring or covariates. In order to address both of these deficiencies a Bayesian fully parametric multiplicative intensity regression model is developed. The extra complexity of this model means that approximate integration techniques are required. Asymptotic Laplace approximations and the more computer-intensive Gauss-Hermite quadrature are shown to perform well and yield virtually identical results.
By adopting counting process notation the multiplicative intensity model is extended to the multi-state scenario quite easily. These models are used in the analysis of a cancer clinical trial to assess the efficacy of neutron therapy compared to standard photon therapy for patients with cancer of the pelvic region. In this trial there is prior information both in the form of clinical prior beliefs and results from previous studies. The usefulness of multi-state models is also demonstrated in the analysis of a pilot quality of life study. Bayesian multi-state models are shown to provide a coherent framework for the analysis of clinical studies, both interventionist and observational, yielding clinically meaningful summaries about the current state of knowledge concerning the disease/treatment process.
3

Baade, Ingrid Annette. "Survival analysis diagnostics." Thesis, Queensland University of Technology, 1997.

Find full text
4

林國輝 and Kwok-fai Lam. "Topics in survival analysis." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1994. http://hub.hku.hk/bib/B30408994.

Full text
5

Lam, Kwok-fai. "Topics in survival analysis /." Hong Kong : University of Hong Kong, 1994. http://sunzi.lib.hku.hk/hkuto/record.jsp?B13829919.

Full text
6

Yuan, Lin. "Bayesian nonparametric survival analysis." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq22253.pdf.

Full text
7

North, Bernard. "Contributions to survival analysis." Thesis, University of Reading, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.266152.

Full text
8

Nhogue, Wabo Blanche Nadege. "Hedge Funds and Survival Analysis." Thèse, Université d'Ottawa / University of Ottawa, 2013. http://hdl.handle.net/10393/26257.

Full text
Abstract:
Using data from Hedge Fund Research, Inc. (HFR), this study adapts and expands on existing methods in survival analysis in an attempt to investigate whether hedge fund mortality can be predicted on the basis of certain hedge fund characteristics. The main idea is to determine the characteristics which contribute the most to the survival and failure probabilities of hedge funds, and to interpret them. We establish hazard models with time-independent covariates, as well as time-varying covariates, to interpret the selected hedge fund characteristics. Our results show that size, age, performance, strategy, annual audit, offshore status and fund denomination are the characteristics that best explain hedge fund failure. We find that a 1% increase in performance decreases the hazard by 3.3%, that small funds and funds less than five years old are the most likely to die, and that the event-driven strategy performs best compared to the others. The risk of death is 0.668 times lower for funds that indicated that an annual audit is performed than for funds that did not. The risk of death for offshore hedge funds is 1.059 times higher than for non-offshore hedge funds.
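The hazard-ratio figures quoted above follow directly from the Cox model, where a coefficient β on a covariate translates into a multiplicative effect exp(β) on the hazard. A toy illustration (the coefficient value is back-derived from the quoted 3.3% figure, not taken from the thesis):

```python
import math

def hazard_ratio(beta, delta=1.0):
    """Multiplicative change in the hazard when the covariate rises by `delta` units."""
    return math.exp(beta * delta)

# A 3.3% hazard reduction per 1% performance gain corresponds to beta of
# roughly -0.0336, since exp(-0.0336) is about 0.967.
hr_performance = hazard_ratio(-0.0336)
```

A hazard ratio below 1 (negative β) marks a protective covariate; above 1, a harmful one.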
9

Salter, Amy Beatrix. "Multivariate dependencies in survival analysis." Title page, contents and introduction only, 1999. http://web4.library.adelaide.edu.au/theses/09PH/09phs177.pdf.

Full text
Abstract:
Bibliography: leaves 177-181. This thesis investigates factors associated with retention of injecting drug users on the South Australian methadone program over the decade 1981 to mid-1991. Truncated multivariate survival models are proposed for the analysis of data from the program, and the theory of graphical chain models is applied to the data. A detailed analysis is presented which gives further insight into the nature of the relationships that exist amongst these data. This provides an application of graphical chain models to survival data.
10

Wienke, Andreas. "Frailty models in survival analysis." [S.l.] : [s.n.], 2007. http://deposit.ddb.de/cgi-bin/dokserv?idn=985529598.

Full text
11

Zhang, Xinjian. "HIV/AIDS relative survival analysis." unrestricted, 2007. http://etd.gsu.edu/theses/available/etd-07262007-123251/.

Full text
Abstract:
Thesis (M.S.)--Georgia State University, 2007.
Title from file title page. Gengsheng (Jeff) Qin, committee chair; Ruiguang (Rick) Song, Xu Zhang, Yu-Sheng Hsu, committee members. Electronic text (79 p. : col. ill.) : digital, PDF file. Description based on contents viewed Sept. 16, 2008. Includes bibliographical references (p. 38-42).
12

Putcha, Venkata Rama Prasad. "Random effects in survival analysis." Thesis, University of Reading, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.312431.

Full text
13

Bruno, Rexanne Marie. "Statistical Analysis of Survival Data." UNF Digital Commons, 1994. http://digitalcommons.unf.edu/etd/150.

Full text
Abstract:
The terminology and ideas involved in the statistical analysis of survival data are explained including the survival function, the probability density function, the hazard function, censored observations, parametric and nonparametric estimations of these functions, the product limit estimation of the survival function, and the proportional hazards estimation of the hazard function with explanatory variables. In Appendix A these ideas are applied to the actual analysis of the survival data for 54 cervical cancer patients.
14

Aguilar, Tamara Alejandra Fernandez. "Gaussian processes for survival analysis." Thesis, University of Oxford, 2017. http://ora.ox.ac.uk/objects/uuid:b5a7a3b2-d1bd-40f1-9b8d-dbb2b9cedd29.

Full text
Abstract:
Survival analysis is an old area of statistics dedicated to the study of time-to-event random variables. Typically, survival data have three important characteristics. First, the response is a waiting time until the occurrence of a predetermined event. Second, the response can be "censored", meaning that we do not observe its actual value but only a bound for it. Third, covariates are usually present. While there exist some feasible parametric methods for modelling this type of data, they usually impose very strong assumptions on the real complexity of the response and how it interacts with the covariates. While these assumptions allow us to have tractable inference schemes, we lose inference power and overlook important relationships in the data. Due to the inherent limitations of parametric models, it is natural to consider non-parametric approaches. In this thesis, we introduce a novel Bayesian non-parametric model for survival data. The model is based on using a positive map of a Gaussian process with stationary covariance function as a prior over the so-called hazard function. This model is thoroughly studied in terms of prior behaviour and posterior consistency. Alternatives to incorporate covariates are discussed, as well as an exact and tractable inference scheme.
15

Shani, Najah Turki. "Multivariate analysis and survival analysis with application to company failure." Thesis, Bangor University, 1991. https://research.bangor.ac.uk/portal/en/theses/multivariate-analysis-and-survival-analysis-with-application-to-company-failure(a031bf91-13bc-4367-b4fc-e240ab54a73b).html.

Full text
Abstract:
This thesis offers an explanation of the statistical modelling of corporate financial indicators in the context where the life of a company is terminated. Whilst it is natural for companies to fail or close down, an excess of failure causes a reduction in the activity of the economy as a whole. Therefore, studies on business failure identification leading to models which may provide early warnings of impending financial crisis may make some contribution to improving economic welfare. This study considers a number of bankruptcy prediction models such as multiple discriminant analysis and logit, and then introduces survival analysis as a means of modelling corporate failure. Then, with a data set of UK companies which failed, or were taken over, or were still operating when the information was collected, we provide estimates of failure probabilities as a function of survival time, and we specify the significance of financial characteristics which are covariates of survival. Three innovative statistical methods are introduced. First, a likelihood solution is provided to the problem of takeovers and mergers in order to incorporate such events into the dichotomous outcome of failure and survival. Second, we move away from the more conventional matched pairs sampling framework to one that reflects the prior probabilities of failure and construct a sample of observations which are randomly censored, using stratified sampling to reflect the structure of the group of failed companies. The third innovation concerns the specification of survival models, which relate the hazard function to the length of survival time and to a set of financial ratios as predictors. These models also provide estimates of the rate of failure and of the parameters of the survival function. The overall adequacy of these models has been assessed using residual analysis and it has been found that the Weibull regression model fitted the data better than other parametric models. 
The proportional hazard model also fitted the data adequately and appears to provide a promising approach to the prediction of financial distress. Finally, the empirical analysis reported in this thesis suggests that survival models have lower classification error than discriminant and logit models.
16

Fontenelle, Otávio Fernandes. "Survival Analysis; Micro and Small Enterprises; Modeling Survival Data, Data Characterization Survival; parametric Estimator KAPLAN-MEIER." Universidade Federal do Ceará, 2009. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=4173.

Full text
Abstract:
The main objective of this research is to explore economic issues that may have an impact on the lifetime of small businesses during 2002 to 2006. The group of enterprises studied was selected from the database of taxpayers recorded with the fiscal authority of the State of Ceará. The methodology draws on the branch of statistics which deals with survival analysis, called duration analysis or duration modelling in economics. A non-linear model was applied, with Kaplan-Meier as the chosen non-parametric estimator. Through that methodology, scenarios were developed based on the following attributes: the county where the enterprises were established; economic activities based on the national classification, fiscal version 1.0/1.1; and, finally, the relationship between the State of Ceará, as fiscal authority, and the enterprises. The counties were grouped applying two stratification parameters: gross domestic product (GDP) per capita and investment in education per capita. Before any stratification, only counties with thirty or more enterprises starting their activities in the year 2002 were considered in the scenarios for analysis.
This dissertation aims to investigate economic factors that may influence the survival of micro and small enterprises (MSEs) that pay the ICMS tax (the tax on the circulation of goods and on interstate and intermunicipal transport and communication services) in the State of Ceará over the period 2002 to 2006. To this end, a statistical technique called survival analysis was applied, based on non-linear models with Kaplan-Meier as the chosen non-parametric estimator. With the survival data duly modelled, they were stratified by the municipalities in which the MSEs were located; within the scope of ICMS operations, by economic activity according to the national classification of economic activities (CNAE), fiscal version 1.0/1.1; and, finally, by the relationship of the State, as fiscal authority, with these small establishments, temporarily restricting their revenue or even cancelling their state registration and preventing the continuation of their activities. For the municipalities, per capita gross domestic product (GDP) and average per capita investment in education were used as stratification indices for the survival curves of firms located in municipalities with 30 or more establishments opened in 2002. Among other findings, two important observations were that the municipality of Fortaleza is an outlier relative to the other municipalities, and that the survival curve of firms that suffered no intervention from the tax authority strongly dominates that of firms that did.
17

葉英傑 and Ying-Kit David Ip. "Analysis of clustered grouped survival data." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2001. http://hub.hku.hk/bib/B31226127.

Full text
18

Ip, Ying-Kit David. "Analysis of clustered grouped survival data /." Hong Kong : University of Hong Kong, 2001. http://sunzi.lib.hku.hk/hkuto/record.jsp?B2353011x.

Full text
19

Aparicio, Vázquez Ignacio. "Venn Prediction for Survival Analysis : Experimenting with Survival Data and Venn Predictors." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-278823.

Full text
Abstract:
The goal of this work is to expand knowledge in the field of Venn Prediction applied to survival data. Standard Venn Predictors have been used with Random Forests and binary classification tasks. However, they have not been utilised to predict events with survival data, nor in combination with Random Survival Forests. With the help of a data transformation, the survival task is transformed into several binary classification tasks. One key aspect of Venn Prediction is the categories. The standard number of categories is two, one for each class to predict. In this work, the usage of ten categories is explored and the performance differences between two and ten categories are investigated. Seven data sets are evaluated, and their results presented with two and ten categories. For the Brier Score and Reliability Score metrics, two categories offered the best results, while Quality performed better with ten categories. Occasionally, the models are too optimistic. Venn Predictors rectify this behaviour and produce well-calibrated probabilities.
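The data transformation mentioned above — recasting a survival task as binary classification — is commonly done by fixing a horizon and asking whether the event occurred by then; subjects censored before the horizon carry no label. A minimal sketch of this idea (the details are assumptions, not the thesis's exact transformation):

```python
def survival_to_binary(times, events, horizon):
    """Turn (time, event) pairs into a binary task: 'event by `horizon`?'

    times:   observed times
    events:  1 if the event occurred at that time, 0 if censored
    horizon: prediction horizon defining the binary label
    Subjects censored before the horizon have unknown labels and are dropped.
    Returns (labels, kept_indices).
    """
    labels, kept = [], []
    for i, (t, e) in enumerate(zip(times, events)):
        if t <= horizon and e == 1:
            labels.append(1)   # event observed within the horizon
            kept.append(i)
        elif t > horizon:
            labels.append(0)   # known to have survived past the horizon
            kept.append(i)
        # else: censored before the horizon, label unknown, subject dropped
    return labels, kept
```

Repeating this over several horizons yields the "several binary classification tasks" the abstract describes.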
20

Müller, Andrea Martina. "Genetic association analysis with survival phenotypes." Diss., lmu, 2009. http://nbn-resolving.de/urn:nbn:de:bvb:19-99742.

Full text
21

Aron, Liviu. "Genetic analysis of dopaminergic neuron survival." Diss., lmu, 2010. http://nbn-resolving.de/urn:nbn:de:bvb:19-117874.

Full text
22

Mannathoko, Bame Joshua. "Survival analysis of SMMEs in Botswana." Thesis, Nelson Mandela Metropolitan University, 2011. http://hdl.handle.net/10948/1531.

Full text
Abstract:
This study investigates the factors influencing survival of micro enterprises funded by the Department of Youth in Botswana. Data drawn from 271 business ventures established between the years 2005 and 2009 were analysed using the Cox proportional hazards model (CPHM), a survival analysis technique. Results from the analysis suggest that businesses operated by younger owners face a higher risk of failure than businesses owned by older entrepreneurs, while firm size at start-up was also a significant determinant of survival. As components of human capital, a personal contribution to the start-up capital and prior employment experience were also found to be significant predictors of business survival. Regarding the gender of the business owner, the claim that female-operated businesses face a higher probability of failure than businesses run by males was not supported by the study results. The amount of funding from the Department of Youth at start-up was found not to have any influence on the survival or failure outcomes of the business projects. Based on these findings, certain policy implications can be deduced. This study recommends that policy makers focus more on the human capital requirements of beneficiaries of government business development initiatives, as well as on the entrepreneur's contribution to start-up capital, in order to increase the success rate of the business ventures. In addition, the capacity to perform continuous monitoring and mentoring of government-funded business ventures, particularly SMMEs, should be increased within the relevant departments, or alternatively outsourcing of the requisite skills should be considered. Lastly, it is proposed that this research be replicated at a larger scale in future.
23

Lu, Xuewen. "Semiparametric regression models in survival analysis." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape15/PQDD_0030/NQ27458.pdf.

Full text
24

Duchesne, Thierry. "Multiple time scales in survival analysis." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape10/PQDD_0002/NQ44759.pdf.

Full text
25

Gay, John Michael. "Frailties in the Bayesian survival analysis." Thesis, Imperial College London, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.405009.

Full text
26

Bond, Simon James. "Aspects of competing risks survival analysis." Thesis, University of Warwick, 2004. http://wrap.warwick.ac.uk/54208/.

Full text
Abstract:
This thesis is focused on the topic of competing risks survival analysis. The first chapter provides an introduction and motivation with a brief literature review. Chapter 2 considers the fundamental functional of all competing risks data: the crude incidence function. This function is considered in the light of the counting process framework, which provides powerful mathematics to calculate confidence bands in an analytical form, rather than by bootstrapping or simulation. Chapter 3 takes the Peterson bounds and considers what happens in the event of covariate information. Fortunately, these bounds do become tighter in some cases. Chapter 4 considers what can be inferred about the effect of covariates in the case of competing risks. The conclusion is that there exist bounds on any covariate-time transformation. These two preceding chapters are illustrated with a data set in chapter 5. Chapter 6 considers the result of Heckman and Honoré (1989) and investigates the question of their generalisation. It reaches the conclusion that the simple assumption of a univariate covariate-time transformation is not enough to provide identifiability. More practical questions of modelling dependent competing risks data through the use of frailty models to induce dependence are considered in chapter 7. A practical and implementable model is illustrated. A diversion is taken into more abstract probability theory in chapter 8, which considers the Bayesian non-parametric tool: Pólya trees. The novel framework of this tool is explained and some results are obtained concerning the limiting random density function and the issues which arise when trying to integrate with a realised Pólya distribution as the integrating measure. Chapter 9 applies the theory of chapters 7 and 8 to a competing risks data set from a prostate cancer clinical trial. This has several continuous baseline covariates and gives the opportunity to use the frailty model discussed in chapter 7, where the unknown frailty distribution is modelled using a Pólya tree as considered in chapter 8. An overview of the thesis is provided in chapter 10, and directions for future research are considered there.
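The crude incidence function at the heart of this thesis can be estimated nonparametrically by accumulating, at each event time, the overall survival just before that time multiplied by the fraction failing from the cause of interest. A minimal sketch (illustrative only, not the counting-process construction used in the thesis):

```python
def cumulative_incidence(times, events, cause):
    """Crude (cumulative) incidence of one cause in the presence of competing risks.

    times:  observed times
    events: 0 = censored, otherwise the cause code of the observed event
    cause:  the cause whose incidence is wanted
    Returns (time, cumulative incidence) pairs at event times of that cause.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    overall_surv = 1.0   # Kaplan-Meier estimate for "any event"
    cif = 0.0
    out = []
    i = 0
    while i < len(data):
        t = data[i][0]
        d_cause = sum(1 for tt, e in data[i:] if tt == t and e == cause)
        d_any = sum(1 for tt, e in data[i:] if tt == t and e != 0)
        ties = sum(1 for tt, e in data[i:] if tt == t)
        if d_cause > 0:
            # increment uses the overall survival just before t
            cif += overall_surv * d_cause / n_at_risk
            out.append((t, cif))
        if d_any > 0:
            overall_surv *= 1 - d_any / n_at_risk
        n_at_risk -= ties
        i += ties
    return out
```

Unlike one-minus-Kaplan-Meier per cause, this estimate correctly treats failures from competing causes as removals from the risk set rather than censorings.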
27

Olivi, Alessandro. "Survival analysis of gas turbine components." Thesis, Linköpings universitet, Statistik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-129707.

Full text
Abstract:
Survival analysis is applied to mechanical components installed in gas turbines. We use field experience data collected from repair inspection reports. These data are highly censored since the exact time-to-event is unknown; we only know that it lies before or after the repair inspection time. As the event of interest we consider the irreparability level of the mechanical components. The aim is to estimate survival functions that depend on the different environmental attributes of the sites where the gas turbines operate, and then to use this information to obtain optimal time points for preventive maintenance. Optimal times are calculated by minimising a cost function which considers the expected costs of preventive and corrective maintenance. A further aim is the investigation of the effect of five different failure modes on the component lifetime. The methods used are based on the Weibull distribution; in particular, we apply the Bayesian Weibull AFT model and the Bayesian Generalized Weibull model. The latter is preferable for its greater flexibility and better performance. Results reveal that components from gas turbines located in a heavy industrial environment at a greater distance from the sea tend to have shorter lifetimes. Failure mode A appears to be the most harmful to the component lifetime. The model used is capable of predicting customer-specific optimal replacement times based on the effect of environmental attributes. Predictions can also be extended to new components installed at new customer sites.
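The maintenance optimisation described above is in the spirit of the classical age-replacement policy: choose the preventive replacement age T minimising the long-run cost rate g(T) = (c_p R(T) + c_f (1 - R(T))) / integral of R(u) du over [0, T], where R is the reliability (survival) function. A numerical sketch under an assumed Weibull reliability and made-up costs (none of the numbers come from the thesis):

```python
import math

def weibull_reliability(t, shape, scale):
    """Weibull survival function R(t) = exp(-(t/scale)^shape)."""
    return math.exp(-((t / scale) ** shape))

def cost_rate(T, shape, scale, c_prev, c_fail, steps=500):
    """Long-run cost per unit time when replacing preventively at age T."""
    R_T = weibull_reliability(T, shape, scale)
    # Expected cycle length: integral of R(u) du on [0, T] by the trapezoidal rule
    dt = T / steps
    expected_cycle = sum(
        0.5 * dt * (weibull_reliability(k * dt, shape, scale)
                    + weibull_reliability((k + 1) * dt, shape, scale))
        for k in range(steps)
    )
    expected_cost = c_prev * R_T + c_fail * (1 - R_T)
    return expected_cost / expected_cycle

def optimal_replacement_age(shape, scale, c_prev, c_fail):
    """Grid search for the age minimising the cost rate."""
    grid = [scale * k / 100 for k in range(5, 300)]
    return min(grid, key=lambda T: cost_rate(T, shape, scale, c_prev, c_fail))
```

With an increasing hazard (shape greater than 1) and failures costlier than planned replacements, the optimum is finite and well below the characteristic life; with shape at or below 1, preventive replacement never pays off.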
28

Siannis, Fotios. "Sensitivity analysis for correlated survival models." Thesis, University of Warwick, 2001. http://wrap.warwick.ac.uk/78861/.

Full text
Abstract:
In this thesis we introduce a model for informative censoring. We assume that the joint distribution of the failure and censoring times depends on a parameter δ, which is a measure of the possible dependence, and on a bias function B(t,θ). Knowledge of δ means that the joint distribution is fully specified, while B(t,θ) can be any function of the failure times. Being unable to draw inferences about δ, we perform a sensitivity analysis on the parameters of interest for small values of δ, based on a first-order approximation. This gives us an idea of how robust our estimates are in the presence of small dependencies, and whether the ignorability assumption can lead to misleading results. Initially we propose the model for the general parametric case. This is the simplest possible case, and we explore the different choices for the standardized bias function. After choosing a suitable function for B(t,θ), we explore the potential interpretation of δ through its relation to the correlation between quantities of the failure and censoring processes. Generalizing our parametric model, we propose a proportional hazards structure, allowing the presence of covariates. At this stage we present a data set from a leukemia study in which knowledge, under certain assumptions, of the censoring and death times of a number of patients allows us to explore the impact of informative censoring on our estimates. Following the analysis of the above data, we introduce an extension to Cox's partial likelihood, which we will call the "modified Cox's partial likelihood", based on the assumption that censored times do contribute information about the parameters of interest. Finally, we perform parametric bootstraps to assess the validity of our model and to explore up to what values of the parameter δ our approximation holds.
29

Alqahtani, Khaled Mubarek A. "Survival analysis based on genomic profiles." Thesis, University of Leeds, 2016. http://etheses.whiterose.ac.uk/16471/.

Full text
Abstract:
Accurate survival prediction is critical in the management of cancer patients' care and well-being. Previous studies have shown that copy number alterations (CNA) in some key genes are individually associated with disease phenotypes and patients' prognosis. However, in many complex diseases like cancer, it is expected that a large number of genes with such an association span the genome. Furthermore, genome-wide CNA profiles are person-specific. Each patient has their own profile, and any differences in the profile between patients may help to explain the differences in the patients' survival. Hence, extracting the relevant information in the genome-wide CNA profile is critical in the prediction of cancer patients' survival. It is currently a modelling challenge to incorporate the genome-wide CNA profiles, in addition to the patients' clinical information, to predict cancer patients' survival. Therefore, the focus of this thesis is to establish or develop statistical methods that are able to include CNA (ultra-high dimensional data) in survival analysis. In order to address this objective, we proceed in two main parts. The first part of the thesis concentrates on CNA estimation. CNA can be estimated using the ratio of a tumour sample to a normal sample; therefore, we investigate approximations of the distribution of the ratio of two Poisson random variables. In the second part of the thesis, we extend the Cox proportional hazards (PH) model for prediction of patients' survival probability by incorporating the genome-wide CNA profiles as random predictors. The patients' clinical information remains as fixed predictors in the model. In this part, three types of distribution for the random effects are investigated. First, the random effects are assumed to be normally distributed with mean zero and a diagonal-structure covariance matrix with equal variances and covariances of zero.
The diagonal structure is the simplest possible structure for a variance-covariance matrix; it implies independence between neighbouring genomic windows. However, CNAs have dependencies between neighbouring genomic windows, and spatial characteristics which are ignored with such a covariance structure. We address the spatial dependence structure of CNAs. In order to achieve this, we start by discussing other structures of variance-covariance matrices of random effects (a compound symmetry covariance matrix, and the inverse of the covariance matrix). Then, we impose smoothness using first and second differences of random effects. Specifically, the random effects are assumed to be correlated random effects that follow a mixture of two distributions, normal and Cauchy, for the first or second differences (SCox). Our approach in these two scenarios was genome-wide, in the sense that we took into account all of the CNA information in the genome; in this regard, the model does not include a variable selection mechanism. Third, as the previous methods employ all predictors regardless of their relevance, which makes it difficult to interpret the results, we introduce a novel algorithm based on a sparse smoothed Cox model (SSCox) within a random effects framework to model the survival time using the patients' clinical characteristics as fixed effects and CNA profiles as random effects. We assumed the CNA coefficients to be correlated random effects that follow a mixture of three distributions: normal (to achieve shrinkage around the mean values), Cauchy for the second-order differences (to gain smoothness), and Laplace (to achieve sparsity). We illustrate each method with a real dataset from a lung cancer cohort as well as simulated data. For the simulation studies, we find that our SSCox method generally performed better than the sparse partial least-squares methods in prediction performance.
Our estimator had smaller mean square error and mean absolute error than its main competitors. For the real dataset, we find that the SSCox model is suitable and has enabled survival probability prediction based on the patients' clinical information and CNA profiles. The results indicate that cancer T- and N-staging are significant factors affecting patients' survival, and the estimates of the random effects allow us to examine the contribution of genomic regions across the genome to survival.
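The SSCox abstract above describes a three-part mixture penalty on the coefficients: normal shrinkage, Cauchy on second-order differences for smoothness, and Laplace for sparsity. A minimal sketch of the corresponding negative log-prior terms follows; the scale parameters `tau_n`, `tau_c`, `tau_l` and the dropped normalizing constants are illustrative assumptions, not the thesis's actual parameterization.

```python
import math

def penalty_components(beta, tau_n=1.0, tau_c=1.0, tau_l=1.0):
    """Negative log-prior terms loosely mirroring the SSCox mixture:
    normal shrinkage on the coefficients, Cauchy on second-order
    differences (smoothness), Laplace on the coefficients (sparsity).
    Additive constants are dropped; tau_* are illustrative scales."""
    normal = sum(b * b / (2 * tau_n ** 2) for b in beta)
    second_diffs = [beta[i + 1] - 2 * beta[i] + beta[i - 1]
                    for i in range(1, len(beta) - 1)]
    cauchy = sum(math.log(1 + (d / tau_c) ** 2) for d in second_diffs)
    laplace = sum(abs(b) / tau_l for b in beta)
    return normal, cauchy, laplace

# A smooth coefficient profile incurs a smaller roughness (Cauchy) term.
smooth = [0.0, 0.1, 0.2, 0.3, 0.4]
wiggly = [0.0, 0.4, 0.0, 0.4, 0.0]
print(penalty_components(smooth)[1] < penalty_components(wiggly)[1])  # True
```

In a penalized Cox fit these three terms would be added to the negative partial log-likelihood before optimization.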
APA, Harvard, Vancouver, ISO, and other styles
30

Hirst, William Mark. "Outcome measurement error in survival analysis." Thesis, University of Liverpool, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.366352.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Glazier, Seth William. "Sequential Survival Analysis with Deep Learning." BYU ScholarsArchive, 2019. https://scholarsarchive.byu.edu/etd/7528.

Full text
Abstract:
Survival analysis is the collection of statistical techniques used to model the time of occurrence, i.e. survival time, of an event of interest such as death, marriage, the lifespan of a consumer product or the onset of a disease. Traditional survival analysis methods rely on assumptions that make it difficult, if not impossible, to learn the complex non-linear relationships between the covariates and survival time that are inherent in many real-world applications. We first demonstrate that a recurrent neural network (RNN) is better suited to model problems with non-linear dependencies in synthetic time-dependent and non-time-dependent experiments.
APA, Harvard, Vancouver, ISO, and other styles
32

Thamrin, Sri Astuti. "Bayesian survival analysis using gene expression." Thesis, Queensland University of Technology, 2013. https://eprints.qut.edu.au/62666/1/Sri_Astuti_Thamrin_Thesis.pdf.

Full text
Abstract:
This thesis developed and applied Bayesian models for the analysis of survival data. Gene expression was considered as a set of explanatory variables within the Bayesian survival model, which can be considered the new contribution in the analysis of such data. The censoring that is inherent in survival data has also been addressed in terms of its impact on the fitting of a finite mixture of Weibull distributions with and without covariates. To investigate this, simulation studies were carried out under several censoring percentages. A censoring percentage as high as 80% is acceptable here, as the work involved high-dimensional data. Lastly, a Bayesian model averaging approach was developed to incorporate model uncertainty in the prediction of survival.
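The fitting of a Weibull mixture under censoring, as studied above, rests on a censored-data likelihood: observed events contribute the mixture density, censored observations the mixture survival function. A minimal sketch for a two-component mixture under right censoring follows (the thesis works in a Bayesian setting with possibly more components; the parameterization here is an illustrative assumption).

```python
import math

def weib_pdf(t, shape, scale):
    """Weibull density f(t) = (k/s)(t/s)^(k-1) exp(-(t/s)^k)."""
    return (shape / scale) * (t / scale) ** (shape - 1) * math.exp(-(t / scale) ** shape)

def weib_surv(t, shape, scale):
    """Weibull survival function S(t) = exp(-(t/s)^k)."""
    return math.exp(-(t / scale) ** shape)

def mixture_loglik(times, events, w, pars1, pars2):
    """Censored-data log-likelihood of a two-component Weibull mixture:
    events[i] = 1 -> density contribution, 0 -> survival contribution."""
    ll = 0.0
    for t, d in zip(times, events):
        if d:
            ll += math.log(w * weib_pdf(t, *pars1) + (1 - w) * weib_pdf(t, *pars2))
        else:
            ll += math.log(w * weib_surv(t, *pars1) + (1 - w) * weib_surv(t, *pars2))
    return ll
```

With shape = scale = 1 the mixture collapses to a unit exponential, so an event at t = 1 contributes log f(1) = -1, a convenient sanity check.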
APA, Harvard, Vancouver, ISO, and other styles
33

Lee, Yau-wing. "Modelling multivariate survival data using semiparametric models." Click to view the E-thesis via HKUTO, 2000. http://sunzi.lib.hku.hk/hkuto/record/B4257528X.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Zhang, Zhigang. "Nonproportional hazards regression models for survival analysis /." free to MU campus, to others for purchase, 2004. http://wwwlib.umi.com/cr/mo/fullcit?p3144473.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Volinsky, Christopher T. "Bayesian model averaging for censored survival models /." Thesis, Connect to this title online; UW restricted, 1997. http://hdl.handle.net/1773/8944.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

White, Ceri. "Cluster analysis : algorithms, hazards and small area relative survival." Thesis, University of South Wales, 2008. https://pure.southwales.ac.uk/en/studentthesis/cluster-analysis(b799eddf-4d11-4cd2-9cd0-3d0480dcaedd).html.

Full text
Abstract:
This thesis presents research that has demonstrated the use of clustering algorithms in the analysis of datasets routinely collected by cancer registries. This involved a review of existing algorithms and their application in studies of spatial and temporal variations in cancer rates. As a result of continuing public and scientific concern, there has been an increase in the number of cancer-related enquiries in recent years, which has helped to raise the profile of the work of cancer registries. There are no official guidelines on the approach to be taken in such studies in relation to cluster analysis. In this study, a variety of clustering algorithms were applied to leukaemia data collected by the Welsh Cancer Intelligence and Surveillance Unit in order to propose an approach that could be adopted in future investigations of cancer incidence in Wales. For example, different methodologies have been employed to determine whether an excess risk occurs near hazardous sources, and one of the studies in the portfolio compares the results of using three methods to determine whether an increased risk of cancer occurs in the vicinity of landfill sites and electric power lines. This uses new digital products that permit a more detailed estimation of the population at risk and a sensitivity analysis of the results of such investigations. In the third portfolio study, analysis of relative survival at small area level has been made possible using a new level of geographical resolution that has recently been released in the United Kingdom. This study shows the benefits of using this new level of geography for small area studies of cancer survival, where there are generally small numbers of deaths per spatial unit. It is anticipated that together these research studies will be of wider benefit to other registries in the UK charged with investigating spatial and temporal variations in cancer rates.
APA, Harvard, Vancouver, ISO, and other styles
37

Zhou, Feifei, and 周飞飞. "Cure models for univariate and multivariate survival data." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2011. http://hub.hku.hk/bib/B45700977.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Parast, Layla. "Landmark Prediction of Survival." Thesis, Harvard University, 2012. http://dissertations.umi.com/gsas.harvard:10085.

Full text
Abstract:
The importance of developing personalized risk prediction estimates has become increasingly evident in recent years. In general, patient populations may be heterogeneous and represent a mixture of different unknown subtypes of disease. When the source of this heterogeneity and resulting subtypes of disease are unknown, accurate prediction of survival may be difficult. However, in certain disease settings the onset time of an observable intermediate event may be highly associated with these unknown subtypes of disease and thus may be useful in predicting long term survival. Throughout this dissertation, we examine an approach to incorporate intermediate event information for the prediction of long term survival: the landmark model. In Chapter 1, we use the landmark modeling framework to develop procedures to assess how a patient's long term survival trajectory may change over time given good intermediate outcome indications along with prognosis based on baseline markers. We propose time-varying accuracy measures to quantify the predictive performance of landmark prediction rules for residual life and provide resampling-based procedures to make inference about such accuracy measures. We illustrate our proposed procedures using a breast cancer dataset. In Chapter 2, we aim to incorporate intermediate event time information for the prediction of survival. We propose a fully non-parametric procedure to incorporate intermediate event information when only a single baseline discrete covariate is available for prediction. When a continuous covariate or multiple covariates are available, we propose to incorporate intermediate event time information using a flexible varying coefficient model. To evaluate the performance of the resulting landmark prediction rule and quantify the information gained by using the intermediate event, we use robust non-parametric procedures. We illustrate these procedures using a dataset of post-dialysis patients with end-stage renal disease.
In Chapter 3, we consider improving efficiency by incorporating intermediate event information in a randomized clinical trial setting. We propose a semi-nonparametric two-stage procedure to estimate survival by incorporating intermediate event information observed before the landmark time. In addition, we present a testing procedure using these resulting estimates to test for a difference in survival between two treatment groups. We illustrate these proposed procedures using an AIDS dataset.
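At the core of landmark prediction is conditioning on survival up to the landmark time t0: the quantity of interest is S(t | T > t0) = S(t) / S(t0). A minimal sketch follows, using a small Kaplan-Meier estimator for S(t); the dissertation's actual estimators are model-based and more sophisticated.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier estimate; returns sorted (event time, S(t)) steps."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    s, steps, i = 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, d in data if tt == t and d)
        total = sum(1 for tt, _ in data if tt == t)
        if deaths:
            s *= 1 - deaths / at_risk
            steps.append((t, s))
        at_risk -= total
        i += total
    return steps

def surv_at(steps, t):
    """Evaluate the step function S(t)."""
    s = 1.0
    for tt, ss in steps:
        if tt <= t:
            s = ss
    return s

def landmark_survival(steps, t0, t):
    """Survival conditional on being alive at the landmark time t0:
    S(t | T > t0) = S(t) / S(t0)."""
    return surv_at(steps, t) / surv_at(steps, t0)

km = kaplan_meier([1, 2, 3, 4], [1, 1, 1, 1])
print(landmark_survival(km, 1, 3))  # S(3)/S(1) = 0.25/0.75, about 1/3
```

Updating t0 as a patient accrues follow-up (and intermediate event information) is what makes the landmark prediction dynamic.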
APA, Harvard, Vancouver, ISO, and other styles
39

Oller, Piqué Ramon. "Survival analysis issues with interval-censored data." Doctoral thesis, Universitat Politècnica de Catalunya, 2006. http://hdl.handle.net/10803/6520.

Full text
Abstract:
Survival analysis is used in various fields for analyzing data involving the duration between two events. It is also known as event history analysis, lifetime data analysis, reliability analysis or time to event analysis. One of the difficulties which arise in this area is the presence of censored data. The lifetime of an individual is censored when it cannot be exactly measured but partial information is available. Different circumstances can produce different types of censoring. Interval censoring refers to the situation when the event of interest cannot be directly observed and it is only known to have occurred during a random interval of time. This kind of censoring has produced a lot of work in the last years and typically occurs for individuals in a study being inspected or observed intermittently, so that an individual's lifetime is known only to lie between two successive observation times.

This PhD thesis is divided into two parts which handle two important issues of interval censored data. The first part is composed by Chapter 2 and Chapter 3 and it is about formal conditions which allow estimation of the lifetime distribution to be based on a well known simplified likelihood. The second part is composed by Chapter 4 and Chapter 5 and it is devoted to the study of test procedures for the k-sample problem. The present work reproduces several material which has already been published or has been already submitted.

In Chapter 1 we give the basic notation used in this PhD thesis. We also describe the nonparametric approach to estimate the distribution function of the lifetime variable. Peto (1973) and Turnbull (1976) were the first authors to propose an estimation method which is based on a simplified version of the likelihood function. Other authors have studied the uniqueness of the solution given by this method (Gentleman and Geyer, 1994) or have improved it with new proposals (Wellner and Zhan, 1997).

Chapter 2 reproduces the paper of Oller et al. (2004). We prove the equivalence between different characterizations of noninformative censoring appeared in the literature and we define an analogous constant-sum condition to the one derived in the context of right censoring. We prove as well that when the noninformative condition or the constant-sum condition holds, the simplified likelihood can be used to obtain the nonparametric maximum likelihood estimator (NPMLE) of the failure time distribution function. Finally, we characterize the constant-sum property according to different types of censoring. In Chapter 3 we study the relevance of the constant-sum property in the identifiability of the lifetime distribution. We show that the lifetime distribution is not identifiable outside the class of constant-sum models. We also show that the lifetime probabilities assigned to the observable intervals are identifiable inside the class of constant-sum models. We illustrate all these notions with several examples.

Chapter 4 has partially been published in the survey paper of Gómez et al. (2004). It gives a general view of those procedures which have been applied to the nonparametric problem of comparing two or more interval-censored samples. We also develop some S-Plus routines which implement the permutational version of the Wilcoxon test, the Logrank test and the t-test for interval-censored data (Fay and Shih, 1998). This part of the PhD thesis is completed in Chapter 5 by different proposals for extending Jonckheere's test. In order to test for an increasing trend in the k-sample problem, Abel (1986) gives one of the few generalizations of Jonckheere's test for interval-censored data. We also suggest different Jonckheere-type tests according to the tests presented in Chapter 4. We use permutational and Monte Carlo approaches. We give computer programs for each proposal and perform a simulation study in order to compare the power of each proposal under different parametric assumptions and different alternatives. We motivate both chapters with the analysis of a set of data from a study of the benefits of zidovudine in patients in the early stages of HIV infection (Volberding et al., 1995).

Finally, Chapter 6 summarizes the results and addresses those aspects which remain to be completed.
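The Peto/Turnbull simplified likelihood discussed above concentrates all NPMLE mass on the "innermost" intervals (maximal intersections) of the observed censoring intervals. A minimal sketch of finding them follows, using a common characterization: pairs (l, r) with l a left endpoint, r a right endpoint, l <= r, no left endpoint in (l, r], and no right endpoint in [l, r). This is a textbook construction, not code from the thesis.

```python
def innermost_intervals(intervals):
    """Turnbull's innermost intervals (maximal intersections) of closed
    observed intervals [l_i, r_i]. The interval-censored NPMLE assigns
    all of its probability mass to these sets."""
    lefts = sorted({l for l, _ in intervals})
    rights = sorted({r for _, r in intervals})
    out = []
    for l in lefts:
        for r in rights:
            if l <= r \
               and not any(l < l2 <= r for l2 in lefts) \
               and not any(l <= r2 < r for r2 in rights):
                out.append((l, r))
    return sorted(out)

# All three intervals share only the point 2, so the mass sits there.
print(innermost_intervals([(0, 2), (1, 3), (2, 4)]))  # [(2, 2)]
# Disjoint observations each keep their own support set.
print(innermost_intervals([(0, 1), (2, 3)]))          # [(0, 1), (2, 3)]
```

Once the innermost intervals are known, an EM (self-consistency) iteration distributes probability mass among them to maximize the simplified likelihood.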
APA, Harvard, Vancouver, ISO, and other styles
40

Kriner, Monika. "Survival Analysis with Multivariate adaptive Regression Splines." Diss., lmu, 2007. http://nbn-resolving.de/urn:nbn:de:bvb:19-73695.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Villeneuve, Paul. "Population based survival analysis of childhood cancer." Thesis, University of Ottawa (Canada), 1995. http://hdl.handle.net/10393/10442.

Full text
Abstract:
Purpose: To assess the survival of children diagnosed with cancer between 1982 and 1988 using population-based data. Subjects: 4715 patients diagnosed with cancer prior to age twenty, between 1982 and 1988, as reported by population-based cancer registries. Mortality status (up to December 31, 1991) was ascertained by linking subjects to the Canadian mortality database. Methods: Actuarial survival rates were calculated, and the role of covariates (i.e. gender, age at diagnosis, year of diagnosis) on survival was assessed using the proportional hazards model. Results: The five-year survival rate of children diagnosed with a primary malignancy between 1982 and 1988 was 69%. Among those cancers examined, age at diagnosis was a significant prognostic factor for children diagnosed with leukaemia, neuroblastoma, astrocytoma, ependymoma, and rhabdomyosarcoma (p < 0.05). Infants with leukaemia had a substantially poorer prognosis when compared to children diagnosed after age one. Conversely, those diagnosed with neuroblastoma prior to age one had a considerably improved chance of survival. After adjusting for age and year of diagnosis, females were found to have a markedly higher survival for acute lymphocytic leukaemia and ependymomas (p < 0.05). Improved survival was observed for children diagnosed more recently with acute lymphocytic leukaemia, acute non-lymphocytic leukaemia and non-Hodgkin's lymphoma (p < 0.02). There was evidence to suggest that survival also improved by year of diagnosis among children with fibrosarcoma (p = 0.05), Wilms' tumour (p = 0.07) and ovarian germ cell malignancies (p = 0.06). No significant trends in survival were observed for the other forms of childhood cancer examined. Geographical variations in survival were assessed for children diagnosed with either acute lymphocytic leukaemia, an astrocytoma or Hodgkin's disease.
The survival rates among children diagnosed with acute lymphocytic leukaemia or astrocytoma in British Columbia between 1987 and 1988 were found to be significantly higher when compared to the remainder of the cohort (p < 0.05).
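The actuarial (life-table) survival rates reported above are computed interval by interval: the effective number at risk discounts withdrawals by half, and the cumulative rate is the product of the conditional survival probabilities. A minimal sketch, with illustrative counts:

```python
def actuarial_survival(deaths, withdrawals, initial_n):
    """Actuarial (life-table) estimate: per interval, the effective
    number at risk is n - withdrawals/2, and the cumulative survival
    is the running product of the conditional survival probabilities."""
    n = initial_n
    s = 1.0
    rates = []
    for d, w in zip(deaths, withdrawals):
        effective = n - w / 2
        s *= 1 - d / effective
        rates.append(s)
        n -= d + w
    return rates

# 100 patients; year 1: 10 deaths, none withdrawn; year 2: 9 deaths, 10 withdrawn.
print(actuarial_survival([10, 9], [0, 10], 100))
```

With yearly intervals, the fifth entry of such a sequence would be the five-year survival rate quoted in the abstract.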
APA, Harvard, Vancouver, ISO, and other styles
42

Kakuma, Ritsuko. "Delirium in the elderly : a survival analysis." Thesis, McGill University, 2000. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=33414.

Full text
Abstract:
Mortality rates have consistently been shown to be greater in patients with delirium compared to those without. Published work over the last decade has revealed, however, that several confounding factors play key roles in contributing to the excess mortality in the delirium population, and that statistical adjustment for these factors in multivariate analyses minimizes, if not eliminates, the association between delirium and mortality. These factors include pre-existing dementia, advanced age, severe medical illness, diminished functional status, and intoxication or withdrawal from medications. However, past studies on the prognosis and prognostic indicators of delirium have been limited to subjects admitted to hospital, where the sample may include both incident and prevalent cases of delirium.
Objective. To determine whether prevalent delirium is an independent predictor for mortality among elderly patients seen in the Emergency department. Potentially confounding factors were assessed to reveal their prognostic contributions in this population. Survival analysis was carried out using the Cox Proportional Hazards Modelling technique.
Methods. As part of a larger study, 268 patients seen in the Emergency department in two Montreal hospitals (107 delirium cases, 161 controls) were followed up at 6-month intervals for a total of 18 months. Dates of death for the deceased were obtained from the Ministère de la Santé et des Services sociaux.
Results. The analysis revealed a non-significant association between delirium and mortality rate for the English-speaking subjects, when adjusted for age, sex, pre-morbid cognitive decline (IQCODE), basic ADL, instrumental ADL, comorbidity, number of medications, education (years), eyesight, and hearing problems (p = 0.752, HR = 1.095, CI: 0.622-1.929). On the other hand, for the French-speaking subjects, the same model revealed a highly significant association between delirium and death rate (p = 0.001, HR = 9.078, CI: 2.362-34.892). Possible explanations for the different results are discussed.
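The hazard ratios above come from maximizing the Cox partial likelihood. A minimal sketch for a single binary covariate follows, fitted by gradient ascent with Breslow's handling of ties; the data are made up for illustration, and real software uses Newton-Raphson with full covariate adjustment.

```python
import math

def cox_fit(times, events, x, iters=200, lr=0.5):
    """Maximize the Cox partial log-likelihood for one covariate by
    gradient ascent (Breslow convention for ties). Returns the
    estimated log hazard ratio beta; HR = exp(beta)."""
    beta = 0.0
    n = len(times)
    for _ in range(iters):
        grad = 0.0
        for i in range(n):
            if not events[i]:
                continue
            # Risk set: subjects still under observation at times[i].
            risk = [j for j in range(n) if times[j] >= times[i]]
            denom = sum(math.exp(beta * x[j]) for j in risk)
            xbar = sum(x[j] * math.exp(beta * x[j]) for j in risk) / denom
            grad += x[i] - xbar
        beta += lr * grad / n
    return beta

# Subjects with x = 1 tend to die earlier -> positive log hazard ratio.
beta = cox_fit([1, 2, 3, 4, 5, 6, 7, 8], [1] * 8, [1, 1, 1, 0, 1, 0, 0, 0])
print(beta > 0)  # True
```

The partial likelihood conditions out the baseline hazard, which is why only the order of the event times and the risk sets matter.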
APA, Harvard, Vancouver, ISO, and other styles
43

Xiao, Yongling. "Flexible marginal structural models for survival analysis." Thesis, McGill University, 2012. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=107571.

Full text
Abstract:
In longitudinal studies, both treatments and covariates may vary throughout the follow-up period. Time-dependent (TD) Cox proportional hazards (PH) models can be used to model the effect of time-varying treatments on the hazard. However, two challenges exist in such modelling. First, accurate modelling of the effects of TD treatments on the hazard requires resolving the uncertainty about the etiological relevance of treatments taken in different time periods. The second challenge arises in the presence of TD confounders affected by prior treatments. Assuming the absence of the other challenge, two different methodologies, weighted cumulative exposure (WCE) and marginal structural models (MSM), have recently been proposed to address each challenge separately. In this thesis, I proposed the combination of these methodologies so as to address both challenges simultaneously, as both may commonly arise in combination in longitudinal studies. In the first manuscript, I proposed and validated a novel approach to implement the marginal structural Cox proportional hazards model (referred to as Cox MSM) with inverse-probability-of-treatment weighting (IPTW) directly via a weighted time-dependent Cox PH model, rather than via a pooled logistic regression approximation. The simulations show that the IPTW estimator yields consistent estimates of the causal effect of treatment, but it may suffer from large variability due to some extremely high IPT weights. The precision of the IPTW estimator could be improved by normalizing the stabilized IPT weights. Simple weight truncation has been proposed and commonly used in practice as another solution to reduce the large variability of IPTW estimators. However, truncation levels are typically chosen based on ad hoc criteria which have not been systematically evaluated.
Thus, in the second manuscript, I proposed a systematic data-adaptive approach to select the optimal truncation level which minimizes the estimated expected MSE of the IPTW estimates. In simulation, the new approach performed as well as approaches that simply truncate the stabilized weights at high percentiles, such as the 99th or 99.5th of their distribution, in terms of reducing the variance and improving the MSE of the estimates. In the third manuscript, I proposed a new, flexible model to estimate the cumulative effect of time-varying treatment in the presence of time-dependent confounders/mediators. The model incorporated weighted cumulative exposure modelling in a marginal structural Cox model. Specifically, weighted cumulative exposure was used to summarize the treatment history, defined as the weighted sum of the past treatments. The function that assigns different weights to treatments received at different times was modelled with cubic regression splines. The stabilized IPT weights for each person at each visit were calculated to account for the time-varying confounding and mediation. The weighted Cox MSM, using stabilized IPT weights, was fitted to estimate the total causal cumulative effect of the treatments on the hazard. Simulations demonstrate that the proposed new model can estimate the total causal cumulative effect, i.e. capture both the direct and the indirect (mediated by the TD confounder) treatment effects. Bootstrap-based 95% confidence bounds for the estimated weight function were constructed, and the impact of some extreme IPT weights on the estimates of the causal cumulative effect was explored. In the last manuscript, I applied the WCE MSM to the Swiss HIV Cohort Study (SHCS) to re-assess whether cumulative exposure to abacavir therapy may increase the risk of cardiovascular events, such as myocardial infarction or cardiovascular-related death.
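The stabilized IPT weights and percentile truncation discussed above can be sketched as follows. The weight is a product over visits of the probability of the received treatment given treatment history, divided by the same probability given history plus confounders; here those probabilities are taken as given rather than estimated from models, which is an illustrative simplification.

```python
def stabilized_weights(num_probs, den_probs):
    """Stabilized IPT weight per subject: product over visits of
    P(treatment received | treatment history) /
    P(treatment received | treatment history + confounders)."""
    weights = []
    for nums, dens in zip(num_probs, den_probs):
        w = 1.0
        for p_num, p_den in zip(nums, dens):
            w *= p_num / p_den
        weights.append(w)
    return weights

def truncate(weights, pct=99):
    """Cap weights at the given percentile of their empirical
    distribution, the simple truncation discussed in the abstract."""
    cut = sorted(weights)[min(len(weights) - 1, int(len(weights) * pct / 100))]
    return [min(w, cut) for w in weights]

# Two visits: the confounder made treatment half as likely at visit 1.
print(stabilized_weights([[0.5, 0.5]], [[0.25, 0.5]]))  # [2.0]
```

Fitting a Cox model with each subject-visit weighted by these quantities yields the marginal structural estimate of the treatment effect.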
APA, Harvard, Vancouver, ISO, and other styles
44

Long, Yongxian, and 龙泳先. "Semiparametric analysis of interval censored survival data." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2010. http://hub.hku.hk/bib/B45541152.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

McKinley, Trevelyan John. "Spatial survival analysis of infectious animal diseases." Thesis, University of Exeter, 2007. http://hdl.handle.net/10036/27033.

Full text
Abstract:
This thesis investigates the feasibility of using spatial survival modelling techniques to develop dynamic space-time predictive models of risk for infectious animal disease epidemics. Examples of diseases with potentially vast socioeconomic impacts include avian influenza, bovine tuberculosis and foot-and-mouth disease (FMD), all of which have received wide coverage in the recent media. The relatively sporadic occurrence of such large-scale animal disease outbreaks makes determination of optimal control policies difficult, and policy makers must balance the relative impacts of different response strategies based on little prior information. It is in this situation that the use of mathematical and statistical modelling techniques can provide powerful insights into the future course of an infectious epidemic. The motivating example for this thesis is the outbreak of FMD in Devon in 2001; however, we are interested in developing more general techniques that can be applied to other animal diseases. Many of the models fitted to the 2001 UK FMD data set have focussed on modelling the global spread of the disease across the entire country and then using these models to assess the effects of nationwide response strategies. However, it has been shown that the dynamics of the disease are not uniform across the whole of the UK and can vary significantly across different spatial regions. Of interest here is exploring whether modelling at a smaller spatial scale can provide more useful measures of risk and guide the development of more efficient control policies. We begin by introducing some of the main epidemiological issues and concepts involved in modelling infectious animal diseases, from the microscopic through to the farm population level. We then discuss the various mathematical modelling techniques that have been applied previously and how they relate to the biological principles discussed in the earlier chapters.
We then highlight some limitations of these approaches and offer potential ways in which survival analysis techniques could be used to overcome some of these problems. To this end we formulate a spatial survival model and fit it to the Devon data set with some naive initial covariates that fail to capture the dynamics of the disease. Some work by colleagues at the Veterinary Laboratories Agency, Weybridge (Arnold 2005), produced estimates of viral excretion rates over time for infected herds of different species types, and these form the basis for the development of a dynamic space-time varying viral load covariate that quantifies the viral load acting at any spatial location at any point in time. The novel use of this covariate as a means of censoring the data set via exposure is then introduced, though the models still fail to explain the variation in the epidemic process. Two potential reasons for this are identified: the possible presence of non-localised infections and/or susceptibility that varies between premises. We then explore ways in which the survival approach can be extended to model more than one epidemic process through the use of mixture and long-term survivor models. Some simple simulations suggest that resistance to infection is the most likely cause of the poor model fits, and a series of more complex simulation experiments shows that both the mixture and long-term survivor models offer various advantages over the conventional approach when resistance is present in the data set. Key to their performance, however, is the ability to correctly capture the mixing, although in the worst-case scenario they still replicate the results of the conventional model. We also use these simulations to explore potential ways in which space-time predictions of the hazard of infection can be used to target control policies at areas of ‘high risk’ of infection.
This shows the importance of ensuring that the scale of the control order matches the scale of the epidemic, and suggests possible dangers in using global-level models to derive response strategies for situations where the dynamics of the disease change at smaller spatial scales. Finally, we apply these techniques to the Devon data set and offer some conclusions and future work.
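The long-term survivor (or 'cure') models described in this abstract split the population into an immune fraction and a susceptible fraction whose failure times follow a conventional survival distribution. A minimal sketch, assuming an exponential baseline hazard and a fixed cure fraction p; the functional form and names here are illustrative choices, not the thesis's actual specification:

```python
import math

def cure_survival(t, p, lam):
    """Population survival under a long-term survivor (cure) model:
    a fraction p never fails (is immune); the remaining 1 - p fail
    at constant exponential rate lam."""
    return p + (1.0 - p) * math.exp(-lam * t)

def log_lik(times, events, p, lam):
    """Right-censored log-likelihood: observed failures contribute the
    density of the susceptible sub-population; censored observations
    contribute the mixture survival function (they may be immune or
    simply not yet failed)."""
    ll = 0.0
    for t, d in zip(times, events):
        if d:  # failure observed at time t
            ll += math.log((1.0 - p) * lam * math.exp(-lam * t))
        else:  # censored at time t
            ll += math.log(cure_survival(t, p, lam))
    return ll
```

Maximising `log_lik` over `p` and `lam` gives crude estimates of the immune fraction and the hazard; the thesis itself works with far richer spatial and time-varying covariates.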
APA, Harvard, Vancouver, ISO, and other styles
46

Freeman, S. N. "Statistical analysis of avian breeding and survival." Thesis, University of Kent, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.278099.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Anzures-Cabrera, Judith. "Survival analysis : competing risks, truncation and immunes." Thesis, University of Warwick, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.413431.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Nieto-Barajas, Luis E. "Bayesian nonparametric survival analysis via Markov processes." Thesis, University of Bath, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.343767.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Saul, Alan D. "Gaussian process based approaches for survival analysis." Thesis, University of Sheffield, 2016. http://etheses.whiterose.ac.uk/17946/.

Full text
Abstract:
Traditional machine learning focuses on the situation where a fixed number of features is available for each data point. In medical applications, each individual patient will typically have a different set of clinical tests associated with them, resulting in a varying number of observed features per patient. An important indicator of interest in medical domains is survival information. Survival data present their own particular challenges, such as censoring. The aim of this thesis is to explore how machine learning ideas can be transferred to the domain of clinical data analysis. We consider two primary challenges: first, how survival models can be made more flexible through non-linearisation, and second, methods for missing-data imputation to handle the varying number of observed features per patient. We use the framework of Gaussian process modelling to unify our approaches, allowing the dual challenges of survival data and missing data to be addressed. The results show promise, although challenges remain. In particular, when a large proportion of the data is missing, inferences become more uncertain. Principled handling of this uncertainty requires propagating it through any Gaussian process model used for subsequent regression.
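As a rough illustration of the Gaussian process machinery this abstract builds on, below is a minimal NumPy sketch of exact GP regression with a squared-exponential kernel; the kernel choice, noise level and function names are illustrative assumptions rather than the thesis's actual models:

```python
import numpy as np

def rbf(a, b, lengthscale=1.0, variance=1.0):
    """Squared-exponential (RBF) covariance between two sets of 1-D inputs."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-2):
    """Posterior mean and pointwise variance of a zero-mean GP regression."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_train, x_test)
    Kss = rbf(x_test, x_test)
    # Cholesky factorisation for a stable solve of K alpha = y
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(Kss - v.T @ v)
    return mean, var
```

The posterior variance returned here is exactly the kind of uncertainty that, in the setting the abstract describes, must be carried through when inputs are themselves imputed from partially missing patient records.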
APA, Harvard, Vancouver, ISO, and other styles
50

Zhou, Xiao Hua. "Robust procedures in survival analysis and reliability /." The Ohio State University, 1991. http://rave.ohiolink.edu/etdc/view?acc_num=osu148769392319865.

Full text
APA, Harvard, Vancouver, ISO, and other styles
