Dissertations / Theses on the topic 'Survival data analysis'


Consult the top 50 dissertations / theses for your research on the topic 'Survival data analysis.'


1

LIU, XIAOQIU. "Managing Cardiovascular Risk in Hypertension: Methodological Issues in Blood Pressure Data Analysis." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2017. http://hdl.handle.net/10281/154475.

Full text
Abstract:
Hypertension remains, in 2017, a leading cause of mortality and disability worldwide. A number of issues related to the determinants of cardiovascular risk in hypertensive patients and to strategies for better hypertension control are still unresolved. In this context, the aims of my research programme were: 1. To investigate the contribution of blood pressure variability to the risk of cardiovascular mortality in hypertensive patients; in this setting, different methods for assessing blood pressure variability, and different models exploring the link between blood pressure variability and outcome, were investigated. 2. To assess whether a hypertension management strategy based on haemodynamic assessment of patients through impedance cardiography might lead to better hypertension control over 24 hours than a conventional approach based only on blood pressure measurement during clinic visits. To these ends, this thesis summarises data obtained by performing a) an in-depth analysis of a study conducted in the Dublin hypertensive population, including 11,492 subjects, and b) an analysis of longitudinal data collected within the BEAUTY (BEtter control of blood pressure in hypertensive pAtients monitored Using the hoTman® sYstem) study. In the Dublin study, the Cox proportional hazards model and accelerated failure time models were used to estimate the additional effect of blood pressure variability on cardiovascular mortality over and above the effect of increased mean BP levels, with an attempt to identify the best threshold values for risk stratification. In the BEAUTY study, mixed models and generalized estimating equations were used for the longitudinal data analysis.
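The abstract refers to "different methods for assessing blood pressure variability" without listing them. As a hedged illustration only (the specific indices below, standard deviation, coefficient of variation and average real variability, are an assumption based on common practice in the BP-variability literature, not a statement of what the thesis used), such summaries can be computed per subject as:

```python
def bp_variability(readings):
    """Common blood-pressure variability summaries for one subject.

    readings: successive BP measurements (e.g. 24-h ambulatory SBP, mmHg).
    Returns the mean, standard deviation (SD), coefficient of variation
    (CV, as a percentage of the mean) and average real variability
    (ARV, the mean absolute difference between successive readings).
    """
    n = len(readings)
    mean = sum(readings) / n
    sd = (sum((x - mean) ** 2 for x in readings) / (n - 1)) ** 0.5
    cv = 100.0 * sd / mean
    arv = sum(abs(b - a) for a, b in zip(readings, readings[1:])) / (n - 1)
    return {"mean": mean, "sd": sd, "cv": cv, "arv": arv}
```

For example, `bp_variability([120, 130, 120, 130])` gives a mean of 125 mmHg and an ARV of 10 mmHg: identical means can hide very different visit-to-visit variability, which is the point of studying such indices alongside mean BP.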
APA, Harvard, Vancouver, ISO, and other styles
2

TASSISTRO, ELENA. "Adverse events in survival data: from clinical questions to methods for statistical analysis." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2022. http://hdl.handle.net/10281/365520.

Full text
Abstract:
When studying a novel treatment with a survival time outcome, failure can be defined to include a serious adverse event (AE) among the endpoints typically considered, such as relapse or progression. These events act as competing risks: the occurrence of relapse as first event, and the subsequent treatment change, exclude the possibility of observing AEs related to the treatment itself. In principle, the analysis of AEs can be tackled by two different approaches: 1. description of the observed occurrence of AEs as first event, where the treatment's ability to protect from relapse affects the chance of observing AEs through the action of the competing risks; 2. assessment of the treatment's impact on the development of AEs in patients who are relapse-free over time, where one should consider the occurrence of AEs as if relapse did not exclude the possibility of observing treatment-related AEs. In the first part of the thesis we reviewed the strategy of analysis for the two approaches, starting from the type of clinical question of interest. We then identified the suitable quantities and possible estimators (crude proportion, AE rate, crude incidence, Kaplan-Meier and Aalen-Nelson smoothed estimators of the cause-specific hazard) and judged them according to two features usually needed in a survival context: (i) the estimator should account for the presence of right censoring; (ii) the theoretical quantity and the estimator should be functions of time. In the second part of the thesis we proposed alternative methods, such as regression models, stratified Kaplan-Meier curves and inverse probability of censoring weighting, to relax the assumption of independence between the potential time to AE and the potential time to relapse. We showed through simulations that these methods overcome the problems related to the use of standard competing risks estimators in the second approach.

In particular, we simulated different scenarios setting the hazard of relapse to be independent of two binary covariates, dependent on X1 only, or dependent on both covariates X1 and X2, including through their interaction. We showed that one can handle patients' selection, and thus obtain conditional independence between the two potential times, by adjusting for all the observed covariates. Of note, even adjusting only for a few observed covariates, as happens in practice because of unmeasured covariates, gives less biased estimates than the naive Kaplan-Meier estimate obtained by censoring at relapse. In fact, we proved that the naive Kaplan-Meier estimate is always biased unless the hazard of relapse is independent of the covariate values. In a hypothetical scenario where all the covariates are observed, the weighted average survival estimate, obtained either nonparametrically or from the Cox model, and the survival estimate from inverse probability of censoring weighting, would be unbiased (both methods applied adjusting for both covariates). In addition, we point out that with inverse probability of censoring weighting one can obtain biased estimates when not all possible interactions between the observed covariates are included in the model used to estimate the weights. However, inclusion of the interaction is not needed when the weighted Cox model is used, since, conditional on the observed covariates, this model is robust for estimating the average survival. Nevertheless, a limitation of the weighted average survival method is that it can be applied only in the presence of binary (or categorical) covariates, since if a covariate is continuous it is impossible to identify the subgroups within which the survival function is estimated.
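The contrast at the heart of the first approach, a cause-specific cumulative incidence that respects competing risks versus the naive 1 − Kaplan-Meier obtained by censoring at the competing event, can be shown with a minimal sketch. This is not the thesis code: both estimators below are the standard textbook ones, and the toy data (cause 0 = censored, 1 = AE, 2 = relapse) are hypothetical.

```python
def cumulative_incidence(times, causes, cause=1):
    """Aalen-Johansen-type cumulative incidence of one event type.

    causes: 0 = censored, 1 = adverse event, 2 = relapse (competing risk).
    Returns the cumulative incidence of `cause` after the last observed time.
    """
    data = sorted(zip(times, causes))
    at_risk = len(data)
    surv = 1.0   # probability of being free of *any* event just before t
    cif = 0.0
    i = 0
    while i < len(data):
        t = data[i][0]
        d_cause = sum(1 for u, c in data if u == t and c == cause)
        d_any = sum(1 for u, c in data if u == t and c != 0)
        removed = sum(1 for u, c in data if u == t)
        if d_any > 0:
            cif += surv * d_cause / at_risk
            surv *= 1.0 - d_any / at_risk
        at_risk -= removed
        while i < len(data) and data[i][0] == t:
            i += 1   # skip past tied times
    return cif

def naive_one_minus_km(times, causes, cause=1):
    """1 - Kaplan-Meier, censoring the competing event: biased upward."""
    data = sorted(zip(times, causes))
    at_risk = len(data)
    surv = 1.0
    i = 0
    while i < len(data):
        t = data[i][0]
        d = sum(1 for u, c in data if u == t and c == cause)
        removed = sum(1 for u, c in data if u == t)
        if d > 0:
            surv *= 1.0 - d / at_risk
        at_risk -= removed
        while i < len(data) and data[i][0] == t:
            i += 1
    return 1.0 - surv
```

With four subjects, two AEs and two relapses and no censoring, the cumulative incidence of AE ends at the observed proportion 2/4 = 0.5, while the naive 1 − KM exceeds it, which is exactly the overestimation the abstract warns about.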
3

Bruno, Rexanne Marie. "Statistical Analysis of Survival Data." UNF Digital Commons, 1994. http://digitalcommons.unf.edu/etd/150.

Full text
Abstract:
The terminology and ideas involved in the statistical analysis of survival data are explained, including the survival function, the probability density function, the hazard function, censored observations, parametric and nonparametric estimation of these functions, the product-limit estimation of the survival function, and the proportional hazards estimation of the hazard function with explanatory variables. In Appendix A these ideas are applied to the actual analysis of the survival data for 54 cervical cancer patients.
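The product-limit (Kaplan-Meier) estimator mentioned above steps the survival curve down at each distinct event time by the factor 1 − d/n, where d is the number of events and n the number still at risk. A minimal sketch (illustrative only, not code from the thesis):

```python
def kaplan_meier(times, events):
    """Product-limit estimate of the survival function.

    times  : observed times (event or censoring)
    events : 1 if the event occurred, 0 if the observation was censored
    Returns a list of (time, S(t)) pairs at each distinct event time.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(e for (u, e) in data if u == t)
        removed = sum(1 for (u, e) in data if u == t)
        if deaths > 0:
            surv *= 1.0 - deaths / n_at_risk   # product-limit step
            curve.append((t, surv))
        n_at_risk -= removed                    # drop events and censorings
        while i < len(data) and data[i][0] == t:
            i += 1                              # skip past tied times
    return curve
```

For instance, `kaplan_meier([1, 2, 3, 4, 5], [1, 1, 0, 1, 1])` steps down only at the event times 1, 2, 4 and 5; the censoring at t = 3 removes a subject from the risk set without a step, which is how censored observations enter the estimator.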
4

Fontenelle, Otávio Fernandes. "Survival Analysis; Micro and Small Enterprises; Modeling Survival Data, Data Characterization Survival; parametric Estimator KAPLAN-MEIER." Universidade Federal do Ceará, 2009. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=4173.

Full text
Abstract:
The main objective of this research is to explore economic issues that may have affected the lifetime of small businesses from 2002 to 2006. The group of enterprises studied was selected from the database of taxpayers recorded by the fiscal authority of the State of Ceará. The methodology draws on the branch of statistics dealing with survival analysis, known in economics as duration analysis or duration modelling. A non-linear model was applied, with KAPLAN-MEIER as the chosen non-parametric estimator. Through this methodology, scenarios were developed based on the following attributes: the county where the enterprises were established; economic activity according to the national classification, fiscal version 1.0/1.1; and, finally, the relationship between the State of Ceará, as fiscal authority, and the enterprises. The counties were grouped using two stratification parameters: gross domestic product (GDP) per capita and investment in education per capita. Before any stratification, only counties with thirty or more enterprises starting their activities in 2002 were considered in the scenarios for analysis.
The dissertation aims to investigate economic factors that may influence the survival of micro and small enterprises (MEPs) liable for the ICMS (the tax on the circulation of goods and on interstate and intermunicipal transport and communication services) of the State of Ceará from 2002 to 2006. To this end, a statistical technique called survival analysis was applied, based on non-linear models, with KAPLAN-MEIER as the chosen non-parametric estimator. With the survival data duly modelled, they were stratified by the municipality where each MEP was located; with respect to ICMS operations, by economic activity according to the national classification of economic activities (CNAE), fiscal version 1.0/1.1; and, finally, by the relationship of the State, as fiscal authority, with these small establishments, temporarily restricting their revenue or even cancelling their state registration, preventing the continuation of their activities. For the municipalities, the indices used to stratify the survival curves were per capita gross domestic product (GDP) and average per capita investment in education, for enterprises located in municipalities with 30 or more establishments activated in 2002. Among other findings, two important observations were that the municipality of Fortaleza is an outlier relative to the other municipalities, and that the survival curve of enterprises that suffered no intervention by the tax authority strongly dominates that of those that did.
5

葉英傑 and Ying-Kit David Ip. "Analysis of clustered grouped survival data." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2001. http://hub.hku.hk/bib/B31226127.

Full text
6

Ip, Ying-Kit David. "Analysis of clustered grouped survival data /." Hong Kong : University of Hong Kong, 2001. http://sunzi.lib.hku.hk/hkuto/record.jsp?B2353011x.

Full text
7

Lee, Yau-wing. "Modelling multivariate survival data using semiparametric models." Click to view the E-thesis via HKUTO, 2000. http://sunzi.lib.hku.hk/hkuto/record/B4257528X.

Full text
8

Nhogue, Wabo Blanche Nadege. "Hedge Funds and Survival Analysis." Thèse, Université d'Ottawa / University of Ottawa, 2013. http://hdl.handle.net/10393/26257.

Full text
Abstract:
Using data from Hedge Fund Research, Inc. (HFR), this study adapts and expands on existing methods in survival analysis to investigate whether hedge fund mortality can be predicted from certain fund characteristics. The main idea is to determine the characteristics that contribute most to the survival and failure probabilities of hedge funds, and to interpret them. We fit hazard models with time-independent covariates, as well as time-varying covariates, to interpret the selected characteristics. Our results show that size, age, performance, strategy, annual audit, offshore status and fund denomination are the characteristics that best explain hedge fund failure. We find that a 1% increase in performance decreases the hazard by 3.3%, that small funds and funds less than five years old are the most likely to die, and that the event-driven strategy performs best compared to the others. The risk of death is 0.668 times lower for funds that indicated that an annual audit is performed than for funds that did not. The risk of death for offshore hedge funds is 1.059 times higher than for non-offshore hedge funds.
9

Kulich, Michal. "Additive hazards regression with incomplete covariate data /." Thesis, Connect to this title online; UW restricted, 1997. http://hdl.handle.net/1773/9562.

Full text
10

梁翠蓮 and Tsui-lin Leung. "Proportional odds model for survival data." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1999. http://hub.hku.hk/bib/B42575011.

Full text
11

Leung, Tsui-lin. "Proportional odds model for survival data." Click to view the E-thesis via HKUTO, 1999. http://sunzi.lib.hku.hk/hkuto/record/B42575011.

Full text
12

林國輝 and Kwok-fai Lam. "Topics in survival analysis." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1994. http://hub.hku.hk/bib/B30408994.

Full text
13

Lam, Kwok-fai. "Topics in survival analysis /." Hong Kong : University of Hong Kong, 1994. http://sunzi.lib.hku.hk/hkuto/record.jsp?B13829919.

Full text
14

李友榮 and Yau-wing Lee. "Modelling multivariate survival data using semiparametric models." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B4257528X.

Full text
15

Zhou, Feifei, and 周飞飞. "Cure models for univariate and multivariate survival data." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2011. http://hub.hku.hk/bib/B45700977.

Full text
16

Liu, Fei, and 劉飛. "Statistical inference for banding data." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2008. http://hub.hku.hk/bib/B41508701.

Full text
17

Liu, Fei. "Statistical inference for banding data." Click to view the E-thesis via HKUTO, 2008. http://sunzi.lib.hku.hk/hkuto/record/B41508701.

Full text
18

Oller, Piqué Ramon. "Survival analysis issues with interval-censored data." Doctoral thesis, Universitat Politècnica de Catalunya, 2006. http://hdl.handle.net/10803/6520.

Full text
Abstract:
Survival analysis is used in various fields for analyzing data involving the duration between two events. It is also known as event history analysis, lifetime data analysis, reliability analysis or time-to-event analysis. One of the difficulties which arise in this area is the presence of censored data. The lifetime of an individual is censored when it cannot be exactly measured but partial information is available. Different circumstances can produce different types of censoring. Interval censoring refers to the situation where the event of interest cannot be directly observed and is only known to have occurred during a random interval of time. This kind of censoring has generated much research in recent years and typically occurs when individuals in a study are inspected or observed intermittently, so that an individual's lifetime is known only to lie between two successive observation times.

This PhD thesis is divided into two parts which handle two important issues of interval-censored data. The first part is composed of Chapter 2 and Chapter 3, and concerns formal conditions which allow estimation of the lifetime distribution to be based on a well-known simplified likelihood. The second part is composed of Chapter 4 and Chapter 5, and is devoted to the study of test procedures for the k-sample problem. The present work reproduces material which has already been published or submitted for publication.

In Chapter 1 we give the basic notation used in this PhD thesis. We also describe the nonparametric approach to estimate the distribution function of the lifetime variable. Peto (1973) and Turnbull (1976) were the first authors to propose an estimation method which is based on a simplified version of the likelihood function. Other authors have studied the uniqueness of the solution given by this method (Gentleman and Geyer, 1994) or have improved it with new proposals (Wellner and Zhan, 1997).

Chapter 2 reproduces the paper of Oller et al. (2004). We prove the equivalence between different characterizations of noninformative censoring appeared in the literature and we define an analogous constant-sum condition to the one derived in the context of right censoring. We prove as well that when the noninformative condition or the constant-sum condition holds, the simplified likelihood can be used to obtain the nonparametric maximum likelihood estimator (NPMLE) of the failure time distribution function. Finally, we characterize the constant-sum property according to different types of censoring. In Chapter 3 we study the relevance of the constant-sum property in the identifiability of the lifetime distribution. We show that the lifetime distribution is not identifiable outside the class of constant-sum models. We also show that the lifetime probabilities assigned to the observable intervals are identifiable inside the class of constant-sum models. We illustrate all these notions with several examples.

Chapter 4 has partially been published in the survey paper of Gómez et al. (2004). It gives a general view of those procedures which have been applied to the nonparametric problem of comparing two or more interval-censored samples. We also develop some S-Plus routines which implement the permutational version of the Wilcoxon test, the Logrank test and the t-test for interval-censored data (Fay and Shih, 1998). This part of the PhD thesis is completed in Chapter 5 by different proposals for extending Jonckheere's test. In order to test for an increasing trend in the k-sample problem, Abel (1986) gives one of the few generalizations of Jonckheere's test for interval-censored data. We also suggest different Jonckheere-type tests based on the tests presented in Chapter 4. We use permutational and Monte Carlo approaches. We give computer programs for each proposal and perform a simulation study in order to compare the power of each proposal under different parametric models and trend assumptions. We motivate both chapters with the analysis of a set of data from a study of the benefits of zidovudine in patients in the early stages of HIV infection (Volberding et al., 1995).

Finally, Chapter 6 summarizes the results and addresses those aspects which remain to be completed.
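The simplified-likelihood NPMLE of Peto (1973) and Turnbull (1976) discussed in this abstract is usually computed by a self-consistency (EM) iteration. A minimal sketch, under the simplifying assumption that a finite candidate support for the lifetime is supplied directly, rather than derived from the maximal intersections of the observation intervals as in Turnbull's full algorithm:

```python
def interval_npmle(intervals, support, n_iter=500):
    """Self-consistency (EM) sketch of the NPMLE for interval-censored data.

    intervals: list of (left, right) observations with left <= T <= right
    support:   candidate support points for the mass of the lifetime T
               (every interval must contain at least one support point)
    Returns the estimated probability masses over the support points.
    """
    m = len(support)
    p = [1.0 / m] * m                      # start from the uniform distribution
    for _ in range(n_iter):
        new = [0.0] * m
        for (lo, hi) in intervals:
            # E-step: distribute this subject's unit mass over the support
            # points compatible with its observation interval
            inside = [j for j, s in enumerate(support) if lo <= s <= hi]
            tot = sum(p[j] for j in inside)
            for j in inside:
                new[j] += p[j] / tot
        # M-step: average the redistributed masses over subjects
        p = [x / len(intervals) for x in new]
    return p
```

With two subjects observed in (0, 1) and one in (2, 3), and support points 0.5 and 2.5, the iteration converges to masses 2/3 and 1/3, matching the obvious empirical answer; less separable intervals are where the iteration does real work.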
19

Long, Yongxian, and 龙泳先. "Semiparametric analysis of interval censored survival data." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2010. http://hub.hku.hk/bib/B45541152.

Full text
20

Aparicio, Vázquez Ignacio. "Venn Prediction for Survival Analysis : Experimenting with Survival Data and Venn Predictors." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-278823.

Full text
Abstract:
The goal of this work is to expand knowledge in the field of Venn Prediction applied to survival data. Standard Venn Predictors have been used with Random Forests and binary classification tasks; however, they have not been used to predict events with survival data, nor in combination with Random Survival Forests. With the help of a data transformation, the survival task is transformed into several binary classification tasks. One key aspect of Venn Prediction is the categories. The standard number of categories is two, one for each class to predict. In this work, the use of ten categories is explored and the performance differences between two and ten categories are investigated. Seven data sets are evaluated, and their results presented with two and ten categories. For the Brier Score and Reliability Score metrics, two categories offered the best results, while Quality performed better with ten categories. Occasionally, the models are too optimistic; Venn Predictors rectify this and produce well-calibrated probabilities.
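A minimal binary Venn predictor may help fix the idea of categories and multiprobabilities; this is an illustrative sketch of the standard construction, not the thesis implementation (which pairs Venn prediction with Random Survival Forests). For brevity the taxonomy here assigns categories from the object alone, ignoring the assumed label; any label-dependent taxonomy would follow the same pattern.

```python
def venn_predict(cal_x, cal_y, x_new, taxonomy):
    """Minimal binary Venn predictor (illustrative sketch).

    cal_x, cal_y : calibration objects and their 0/1 labels
    x_new        : test object
    taxonomy(x)  : maps an object to a category
    Returns the multiprobability: for each assumed label of x_new, the
    frequency of label 1 in x_new's category (with x_new included).
    """
    probs = {}
    for assumed in (0, 1):
        # tentatively complete the data with the assumed label
        xs = cal_x + [x_new]
        ys = cal_y + [assumed]
        cat = taxonomy(x_new)
        # empirical frequency of label 1 within the test object's category
        same = [y for x, y in zip(xs, ys) if taxonomy(x) == cat]
        probs[assumed] = sum(same) / len(same)
    return probs
```

The two values bracket the probability of label 1; the guaranteed-calibration property of Venn predictors is exactly that one of these category frequencies is valid, whichever label turns out to be true.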
21

Hirst, William Mark. "Outcome measurement error in survival analysis." Thesis, University of Liverpool, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.366352.

Full text
22

Liu, Yang. "Transformation models for survival data analysis and applications." Tallahassee, Florida : Florida State University, 2009. http://etd.lib.fsu.edu/theses/available/etd-03242009-145017/.

Full text
Abstract:
Thesis (Ph. D.)--Florida State University, 2009.
Advisor: Xu-Feng Niu, Florida State University, College of Arts and Sciences, Dept. of Statistics. Title and description from dissertation home page (viewed on Nov. 18, 2009). Document formatted into pages; contains x, 97 pages. Includes bibliographical references.
23

Wang, Huan. "Survival analysis for censored data under referral bias." Thesis, University of Brighton, 2014. https://research.brighton.ac.uk/en/studentTheses/5b39ddc3-1c64-4dd2-8182-a4014c6b97b6.

Full text
Abstract:
This work arises from a hepatitis C cohort study and focuses on estimating the effects of covariates on progression to cirrhosis. In hepatitis C cohort studies, patients may be recruited to the cohort with referral bias because clinically the patients with more rapid disease progression are preferentially referred to liver clinics. This referral bias can lead to significantly biased estimates of the effects of covariates on progression to cirrhosis.
24

Lim, Hee-Jeong. "Statistical analysis of interval-censored and truncated survival data /." free to MU campus, to others for purchase, 2001. http://wwwlib.umi.com/cr/mo/fullcit?p3025635.

Full text
25

Zhang, Yue. "Bayesian Cox Models for Interval-Censored Survival Data." University of Cincinnati / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1479476510362603.

Full text
26

Pecková, Monika. "Efficiency based adaptive tests for censored survival data /." Thesis, Connect to this title online; UW restricted, 1997. http://hdl.handle.net/1773/9599.

Full text
27

Nwi-Mozu, Isaac. "Robustness of Semi-Parametric Survival Model: Simulation Studies and Application to Clinical Data." Digital Commons @ East Tennessee State University, 2019. https://dc.etsu.edu/etd/3618.

Full text
Abstract:
An efficient way of analyzing survival clinical data such as cancer data is a great concern to health experts. In this study, we investigate and propose an efficient way of handling survival clinical data. Simulation studies were conducted to compare the performance of various survival model techniques using the R package "survsim". Model performance was assessed across varying sample sizes, from small to large ($n > 5000$). For small and mild samples, the performance of the semi-parametric model outperforms or approximates that of the parametric model. However, for large samples, the parametric model outperforms the semi-parametric model. We compared the effectiveness and reliability of the proposed techniques using real clinical data of mild sample size. Finally, systematic steps on how to model and explain the proposed techniques on real survival clinical data are provided.
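The parametric arm of such a simulation can be sketched in plain Python as an analogue of what the thesis does with the R package "survsim" (the generator and estimator below are illustrative stand-ins, not the thesis's code): simulate right-censored exponential data, then recover the hazard with its closed-form maximum likelihood estimate.

```python
import random

def simulate_censored_exponential(n, rate, cens_rate, seed=0):
    """Simulate right-censored exponential survival times with an
    independent exponential censoring mechanism (illustrative)."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        t = rng.expovariate(rate)         # true event time
        c = rng.expovariate(cens_rate)    # independent censoring time
        data.append((min(t, c), t <= c))  # (observed time, event indicator)
    return data

def exponential_hazard_mle(data):
    """Closed-form parametric MLE of the hazard under right censoring:
    number of observed events divided by total follow-up time."""
    events = sum(1 for _, observed in data if observed)
    exposure = sum(t for t, _ in data)
    return events / exposure

data = simulate_censored_exponential(n=5000, rate=0.5, cens_rate=0.2)
print(round(exponential_hazard_mle(data), 2))  # close to the true hazard 0.5
```

Repeating this over a grid of sample sizes and censoring rates is the skeleton of the kind of comparison the abstract describes.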
28

Nieto-Barajas, Luis E. "Bayesian nonparametric survival analysis via Markov processes." Thesis, University of Bath, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.343767.

Full text
29

Martinenko, Evgeny. "Functional Data Analysis and its application to cancer data." Doctoral diss., University of Central Florida, 2014. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/6323.

Full text
Abstract:
The objective of the current work is to develop novel procedures for the analysis of functional data and apply them for investigation of gender disparity in survival of lung cancer patients. In particular, we use the time-dependent Cox proportional hazards model where the clinical information is incorporated via time-independent covariates, and the current age is modeled using its expansion over wavelet basis functions. We developed computer algorithms and applied them to the data set which is derived from Florida Cancer Data depository data set (all personal information which allows to identify patients was eliminated). We also studied the problem of estimation of a continuous matrix-variate function of low rank. We have constructed an estimator of such function using its basis expansion and subsequent solution of an optimization problem with the Schattennorm penalty. We derive an oracle inequality for the constructed estimator, study its properties via simulations and apply the procedure to analysis of Dynamic Contrast medical imaging data.
Ph.D.
Doctorate
Mathematics
Sciences
Mathematics
30

Boyd, Katherine. "Non-ignorable missing covariate data in parametric survival analysis." Thesis, University of Warwick, 2007. http://wrap.warwick.ac.uk/55751/.

Full text
Abstract:
Within any epidemiological study missing data are almost inevitable. These missing data are often ignored; however, unless we can assume quite restrictive mechanisms, this will lead to biased estimates. Our motivation is a data set collected to study the long-term effect of severity of disability upon survival in children with cerebral palsy (henceforth CP). The analysis of such an old data set brings to light statistical difficulties. The main issue in these data is the amount of missing covariate data. We raise concerns about the mechanism causing data to be missing. We present a flexible class of joint models for the survival times and the missing data mechanism which allows us to vary the mechanism causing the missing data. Simulation studies show this model to be both precise and reliable in estimating survival with missing data. We show that long-term survival in the moderately disabled is high and, therefore, a large proportion will be surviving to times when they require care specifically for elderly CP sufferers. In particular, our models suggest that survival from diagnosis is considerably higher than has previously been estimated from these data. This thesis contributes to the discussion of possible methods for dealing with NMAR data.
31

Rajeev, Deepthi. "Separate and Joint Analysis of Longitudinal and Survival Data." Diss., CLICK HERE for online access, 2007. http://contentdm.lib.byu.edu/ETD/image/etd1775.pdf.

Full text
32

Macis, Ambra. "Statistical Models and Machine Learning for Survival Data Analysis." Doctoral thesis, Università degli studi di Brescia, 2023. https://hdl.handle.net/11379/568945.

Full text
Abstract:
The main topic of this thesis is survival analysis, a collection of methods used in longitudinal studies in which the interest lies not only in the occurrence (or not) of a particular event, but also in the time needed to observe it. Over the years, first statistical models and then machine learning methods have been proposed to address survival analysis studies. The first part of the work provides an introduction to the basic concepts of survival analysis and an extensive review of the existing literature. In particular, the focus is set on the main statistical models (nonparametric, semiparametric and parametric) and, among machine learning methods, on survival trees and random survival forests. For these methods the main proposals introduced during the last decades are described. In the second part of the thesis, my research contributions are reported. These works focused mainly on two aims: (1) the rationalization into a unified protocol of the computational approach, which is nowadays based on several existing packages with little documentation, several still obscure points and also some bugs, and (2) the application of survival data analysis methods in an unusual context where, to the best of our knowledge, this approach had never been used. In particular, the first contribution consisted of writing a tutorial aimed at enabling interested users to approach these methods, bringing order to the many existing algorithms and packages and providing solutions to the several related computational issues. It deals with the main steps to follow when a simulation study is carried out, paying attention to: (i) survival data simulation, (ii) model fitting and (iii) performance assessment. The second contribution is based on the application of survival analysis methods, both statistical models and machine learning algorithms, to analysing the offensive performance of National Basketball Association (NBA) players.
In particular, variable selection has been performed to determine the main variables associated with the probability of exceeding a given number of scored points during the post All-Stars game season segment and the time needed to do so. Concluding, this thesis aims to lay the ground for the development of a unified framework able to harmonize the existing fragmented approaches and free of computational issues. Moreover, the findings of this thesis suggest that a survival analysis approach can also be extended to new contexts.
33

Che, Huiwen. "Cutoff sample size estimation for survival data: a simulation study." Thesis, Uppsala universitet, Statistiska institutionen, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-234982.

Full text
Abstract:
This thesis demonstrates the possible cutoff sample size point that balances goodness of estimation and study expenditure using a practical cancer case. As it is crucial to determine the sample size when designing an experiment, researchers attempt to find a suitable sample size that achieves the desired power and budget efficiency at the same time. The thesis shows how simulation can be used for sample size and precision calculations with survival data. The presentation concentrates on the simulation involved in carrying out the estimates and precision calculations. The Kaplan-Meier estimator and the Cox regression coefficient are chosen as point estimators, and the precision measurements focus on the mean square error and the standard error.
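The Kaplan-Meier point estimator used in such precision studies can be sketched generically in plain Python (the data and function name are illustrative, not taken from the thesis):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier estimate as a list of (event time, survival) pairs.

    times  : observed follow-up times
    events : 1 if the event was observed, 0 if the time is censored
    """
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    s, curve = 1.0, []
    for i in order:
        if events[i]:
            s *= (at_risk - 1) / at_risk  # one event among those at risk
            curve.append((times[i], s))
        at_risk -= 1  # event or censoring removes one subject from the risk set
    return curve

# survival steps down at event times only; censoring shrinks the risk set
print(kaplan_meier([1, 2, 3, 4], [1, 1, 0, 1]))
```

Simulating many such data sets at each candidate sample size and summarising the spread of the curve gives the mean square error and standard error the abstract refers to.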
34

Louw, Elizabeth Magrietha. "Fitting of survival functions for grouped data on insurance policies." Pretoria : [s.n.], 2005. http://upetd.up.ac.za/thesis/available/etd-11282005-123928.

Full text
35

Cai, Jianwen. "Generalized estimating equations for censored multivariate failure time data /." Thesis, Connect to this title online; UW restricted, 1992. http://hdl.handle.net/1773/9581.

Full text
36

Wong, Kin-yau, and 黃堅祐. "Analysis of interval-censored failure time data with long-term survivors." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2012. http://hub.hku.hk/bib/B48199473.

Full text
Abstract:
Failure time data analysis, or survival analysis, is involved in various research fields, such as medicine and public health. One basic assumption in standard survival analysis is that every individual in the study population will eventually experience the event of interest. However, this assumption is usually violated in practice, for example when the variable of interest is the time to relapse of a curable disease, resulting in the existence of long-term survivors. Also, the presence of unobservable risk factors in the group of susceptible individuals may introduce heterogeneity to the population, which is not properly addressed in standard survival models. Moreover, the individuals in the population may be grouped in clusters, where there are associations among observations from a cluster. There are methodologies in the literature to address each of these problems, but there is yet no natural and satisfactory way to accommodate the coexistence of a non-susceptible group and the heterogeneity in the susceptible group under a univariate setting. Also, various kinds of associations among survival data with a cure are not properly accommodated. To address the above-mentioned problems, a class of models is introduced to model univariate and multivariate data with long-term survivors. A semiparametric cure model for univariate failure time data with long-term survivors is introduced. It accommodates a proportion of non-susceptible individuals and the heterogeneity in the susceptible group using a compound-Poisson distributed random effect term, which is commonly called a frailty. It is a frailty-Cox model which does not place any parametric assumption on the baseline hazard function. An estimation method using multiple imputation is proposed for right-censored data, and the method is naturally extended to accommodate interval-censored data.
The univariate cure model is extended to a multivariate setting by introducing correlations among the compound-Poisson frailties for individuals from the same cluster. This multivariate cure model is similar to a shared frailty model where the degree of association among each pair of observations in a cluster is the same. The model is further extended to accommodate repeated measurements from a single individual, leading to serially correlated observations. Similar estimation methods using multiple imputation are developed for the multivariate models. The univariate model is applied to breast cancer data and the multivariate models are applied to the hypobaric decompression sickness data from the National Aeronautics and Space Administration, although the methodologies are applicable to a wide range of data sets.
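The population-level structure shared by such cure models can be sketched in a few lines; note this toy version uses an exponential susceptible component rather than the thesis's compound-Poisson frailty-Cox formulation, and the names are illustrative.

```python
import math

def mixture_cure_survival(t, cure_frac, hazard):
    """Population survival under a simple mixture cure model:
    S(t) = pi + (1 - pi) * S_u(t), where pi is the cured (non-susceptible)
    fraction and S_u is here taken exponential for illustration."""
    return cure_frac + (1.0 - cure_frac) * math.exp(-hazard * t)

# unlike a standard survival curve, this one plateaus at the cure fraction
print(round(mixture_cure_survival(50.0, cure_frac=0.3, hazard=0.5), 3))
```

The plateau at the cure fraction is what motivates cure models: a standard model, forced through the same data, would misattribute the long-term survivors to an implausibly light-tailed event-time distribution.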
published_or_final_version
Statistics and Actuarial Science
Master
Master of Philosophy
37

Louw, Elizabeth Magrietha. "Fitting of survival functions for grouped data on insurance policies." Diss., University of Pretoria, 2002. http://hdl.handle.net/2263/29891.

Full text
Abstract:
The aim of the research is the statistical modelling of parametric survival distributions for grouped survival data on long- and short-term policies in the insurance industry, by means of maximum likelihood estimation subject to constraints. This methodology leads to explicit expressions for the estimates of the parameters, as well as for approximated variances and covariances of the estimates, and gives exact maximum likelihood estimates of the parameters. This makes direct extension to more complex designs feasible. The statistical modelling offers parametric models for survival distributions, in contrast with the non-parametric models that are used commonly in the actuarial profession. When the parametric models provide a good fit to data, they tend to give more precise estimates of the quantities of interest such as odds ratios, hazard ratios or median lifetimes. These estimates form the statistical foundation for scientific decision-making with respect to actuarial design, maintenance and marketing of insurance policies. Although the methodology in this thesis is developed specifically for the insurance industry, it may be applied in the normal context of research and scientific decision-making, including for example survival distributions for the medical, biological, engineering, econometric and sociological sciences.
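As background to grouped-data survival fitting, the classical actuarial (life-table) estimator with the usual half-withdrawal adjustment can be sketched as follows (the data and function name are illustrative; this is the standard non-parametric baseline, not the thesis's constrained maximum likelihood method):

```python
def life_table_survival(intervals):
    """Actuarial (life-table) survival estimate for grouped data.

    intervals : list of (n_at_risk, deaths, withdrawals) per interval;
    withdrawals are assumed to occur, on average, at mid-interval."""
    s, out = 1.0, []
    for n, deaths, withdrawals in intervals:
        effective = n - withdrawals / 2.0  # half-withdrawal adjustment
        s *= 1.0 - deaths / effective      # conditional survival of the interval
        out.append(s)
    return out

# two intervals: 100 policies with 10 deaths and 10 withdrawals, then 80 with 5 deaths
print(life_table_survival([(100, 10, 10), (80, 5, 0)]))
```

A parametric fit replaces these interval-by-interval products with a distribution whose parameters are estimated jointly, which is where the precision gains described in the abstract come from.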
Dissertation (PhD (Mathematical Statistics))--University of Pretoria, 2002.
Mathematics and Applied Mathematics
unrestricted
38

Boudreau, Christian. "Duration Data Analysis in Longitudinal Survey." Thesis, University of Waterloo, 2003. http://hdl.handle.net/10012/1043.

Full text
Abstract:
Considerable amounts of event history data are collected through longitudinal surveys. These surveys have many particularities or features that are the result of the dynamic nature of the population under study and of the fact that data collected through longitudinal surveys involve the use of complex survey designs, with clustering and stratification. These particularities include: attrition, seam effect, censoring, left truncation and complications in the variance estimation due to the use of complex survey designs. This thesis focuses on the last two points. Statistical methods based on the stratified Cox proportional hazards model that account for intra-cluster dependence, when the sampling design is uninformative, are proposed. This is achieved using the theory of estimating equations in conjunction with empirical process theory. Issues concerning analytic inference from survey data and the use of weighted versus unweighted procedures are also discussed. The proposed methodology is applied to data from the U.S. Survey of Income and Program Participation (SIPP) and data from the Canadian Survey of Labour and Income Dynamics (SLID). Finally, different statistical methods for handling left-truncated sojourns are explored and compared. These include the conditional partial likelihood and other methods, based on the Exponential or the Weibull distributions.
39

Shinohara, Russell. "Estimation of survival of left truncated and right censored data under increasing hazard." Thesis, McGill University, 2007. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=100210.

Full text
Abstract:
When subjects are recruited through a cross-sectional survey they have already experienced the initiation of the event of interest, say the onset of a disease. This method of recruitment results in subjects with a longer duration of the disease having a higher chance of being selected. It follows that censoring in such a case is not non-informative. The application of standard techniques for right-censored data thus introduces a bias to the analysis; this is referred to as length bias. This thesis examines the case where the subjects are assumed to enter the study at a uniform rate, allowing for the analysis in a more efficient unconditional manner. In particular, a new method for unconditional analysis is developed based on the framework of a conditional estimator. This new method is then applied to several data sets and compared with the conditional technique of Tsai [23].
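The length-bias phenomenon described above is easy to reproduce in a small simulation (illustrative plain Python, not the thesis's method): with exponentially distributed durations and onsets occurring uniformly in time, prevalent cases have roughly twice the mean duration of incident cases.

```python
import random

def mean_prevalent_duration(n, rate, window=50.0, seed=1):
    """Simulate cross-sectional (prevalent) sampling of disease durations.

    Onsets occur uniformly over [0, window]; only cases whose duration
    extends past the survey time 'window' are recruited, so longer
    durations are over-represented (length bias)."""
    rng = random.Random(seed)
    observed = []
    while len(observed) < n:
        onset = rng.uniform(0.0, window)
        duration = rng.expovariate(rate)
        if onset + duration >= window:  # still a case at the survey time
            observed.append(duration)
    return sum(observed) / n

# the true mean duration is 1/rate = 2; the length-biased mean is about 4
print(round(mean_prevalent_duration(10000, rate=0.5), 1))
```

For the exponential, the length-biased distribution is Gamma(2, rate) with mean 2/rate, which is why ignoring the sampling scheme roughly doubles the apparent disease duration.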
40

López, Segovia Lucas. "Survival data analysis with heavy-censoring and long-term survivors." Doctoral thesis, Universitat Politècnica de Catalunya, 2014. http://hdl.handle.net/10803/276170.

Full text
Abstract:
The research developed in this thesis has been motivated by two datasets, introduced in Chapter 2: one concerning the mortality of calves from birth to weaning, the other the survival of patients diagnosed with melanoma. In both cases the percentage of censoring is high and immune individuals are very likely to be present, so a proper analysis accounting for the possibility of a non-negligible proportion of cured individuals has to be performed. Cure models are introduced in Chapter 3 together with the software available to perform the analysis, such as SAS, R and STATA, among others. We investigate the effect that heavy censoring could have on the estimation of the regression coefficients in the Cox model via a simulation study which considers several scenarios given by different sample sizes and censoring levels; results are presented in Chapter 4. An application of a mixture cure model, which includes a Cox model for the survival part and a logistic model for the cure part, to patients with melanoma is described in Chapter 5. In addition, discussions of tests for sufficient follow-up and of censoring levels are also presented for these data. The data analysis is carried out using the SAS macro PSPMCM. The results show that patients with negative Sentinel Lymph Node (SLN) biopsy status, Clark level of invasion I-III, the Superficial Spreading Melanoma (SSM) histopathological subtype, age under 46 years, and female sex are more likely to be cured, whereas patients with melanoma of the head and neck, Breslow depth ≥ 4 mm and ulceration present are at increased risk of relapse. In particular, patients with Breslow depth ≥ 4 mm are at higher risk of death. Furthermore, since mixture cure models do not have the proportional hazards property for the entire population, they can be extended to non-mixture cure models by means of nonlinear transformation models as defined in Tsodikov (2003).
An application of the extended hazard models to the mortality of calves is presented in Chapter 6. The methodology allows estimates to be obtained for the cure rate as well as for genetic and environmental effects in each herd. A relevant feature of the non-mixture cure models is that they model separately the factors which could affect survival from those affecting the cure part, making the interpretation of these models relatively easy. Results are shown in Section 6.3.1 and were obtained using the NLTM library of the statistical package R. The short-term (mortality) and long-term (survivors) effects are determined for each factor, as well as their statistical significance in each herd. For example, in herd 1 we find that calving month and difficulty at birth are the statistically significant factors for the non-susceptible (long-term survivor) proportion: calves born in March-August have a lower probability of surviving than those born in September-February, and the probability of surviving is much lower for those with a difficult calving. In herd 7 the effect of calving difficulty differs from herd 1; only the strongly assisted category is significant, with calves born from strongly assisted calvings having a lower probability of surviving than calves from unassisted calvings. Regarding short-term (mortality) effects, we find statistically significant predictors only in herd 7, where the risk of death for calves born to older mothers, hence with a longer reproductive life, is twice that for calves born to younger mothers. The results obtained have been compared with those from standard survival models. A discussion is also included of the erroneous conclusions that standard models are likely to yield when the cure fraction is not taken into account.
41

Branson, Michael Robert. "The analysis of survival data in which patients switch treatment." Thesis, University of Reading, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.394239.

Full text
42

DIAS, Cícero Rafael Barros. "New continuous distributions applied to lifetime data and survival analysis." Universidade Federal de Pernambuco, 2016. https://repositorio.ufpe.br/handle/123456789/17307.

Full text
Abstract:
Statistical analysis of lifetime data is an important topic in engineering, the biomedical and social sciences and other areas. There is a clear need for extended forms of the classical distributions to obtain more flexible distributions with better fits. In this work, we study and propose new distributions and new classes of continuous distributions. We present the work in three independent parts. In the first one, we study in some detail a lifetime model of the beta-generated class proposed by Eugene; Lee; Famoye (2002). The new distribution is called the beta Nadarajah-Haghighi distribution, which can be used to model survival data. Its failure rate function is quite flexible and takes several forms depending on its parameters. The proposed model includes as special cases several important distributions discussed in the literature, such as the exponential, generalized exponential (GUPTA; KUNDU, 1999), extended exponential (NADARAJAH; HAGHIGHI, 2011) and exponential-type (LEMONTE, 2013) distributions. We provide a comprehensive mathematical treatment of the new distribution and obtain explicit expressions for the moments, generating and quantile functions, incomplete moments, order statistics and entropies. The method of maximum likelihood is used for estimating the model parameters and the observed information matrix is derived. We fit the proposed model to a real data set to demonstrate empirically its flexibility and potential. In the second part, we study general mathematical properties of a new generator of continuous distributions with three extra shape parameters called the exponentiated Marshall-Olkin family. We present some special models of the new class and some of its mathematical properties including moments and generating function. The method of maximum likelihood is used for estimating the model parameters. We illustrate the usefulness of the new distributions by means of two applications to real data sets.
In the third part, we propose another new class of distributions based on the distribution introduced by Nadarajah and Haghighi (2011). We study some mathematical properties of this new class, called the Nadarajah-Haghighi-G (NH-G) family of distributions. Some special models are presented and we obtain explicit expressions for the quantile function, ordinary and incomplete moments, generating function and order statistics. The estimation of the model parameters is explored by maximum likelihood and we illustrate the flexibility of the new family with two applications to real data.
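For reference, the Nadarajah and Haghighi (2011) extended exponential that underlies the NH-G class has survival function S(x) = exp{1 − (1 + λx)^α}; a minimal sketch (the function name is illustrative) showing that α = 1 recovers the ordinary exponential:

```python
import math

def nh_survival(x, alpha, lam):
    """Survival function of the Nadarajah-Haghighi (2011) extended
    exponential: S(x) = exp(1 - (1 + lam*x)**alpha) for x >= 0.
    With alpha = 1 this reduces to the exponential exp(-lam*x)."""
    return math.exp(1.0 - (1.0 + lam * x) ** alpha)

# alpha = 1 recovers the exponential survival function exactly
print(nh_survival(2.0, alpha=1.0, lam=0.5) == math.exp(-1.0))  # → True
```

Varying α away from 1 bends the hazard to be increasing or decreasing, which is the extra flexibility the abstract refers to.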
APA, Harvard, Vancouver, ISO, and other styles
43

Kelly, Jodie. "Topics in the statistical analysis of positive and survival data." Thesis, Queensland University of Technology, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
44

Akcin, Haci Mustafa. "Direct adjustment method on Aalen's additive hazards model for competing risks data." unrestricted, 2008. http://etd.gsu.edu/theses/available/etd-04182008-095207/.

Full text
Abstract:
Thesis (M.S.)--Georgia State University, 2008.
Title from file title page. Xu Zhang, committee chair; Yichuan Zhao, Jiawei Liu, Yu-Sheng Hsu, committee members. Electronic text (51 p.) : digital, PDF file. Description based on contents viewed July 15, 2008. Includes bibliographical references (p. 50-51).
APA, Harvard, Vancouver, ISO, and other styles
45

Cheung, Tak-lun Alan, and 張德麟. "Modelling multivariate interval-censored and left-truncated survival data using proportional hazards model." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2003. http://hub.hku.hk/bib/B29536637.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Meddis, Alessandra. "Inference and validation of prognostic marker for correlated survival data with application to cancer." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASR005.

Full text
Abstract:
Clustered survival data often arise in medical research. They are characterized by correlations between observations belonging to the same cluster. Here, we discuss some extensions to clustered data in different contexts: evaluating the performance of a candidate biomarker, and assessing the treatment effect in an individual patient data (IPD) meta-analysis with competing risks. The former was motivated by the IMENEO study, an IPD meta-analysis in which the prognostic validity of Circulating Tumor Cells (CTCs) was of interest. Our objective was to determine how well CTCs discriminate patients who died within t years from those who did not, comparing individuals with the same tumor stage. Although the covariate-specific time-dependent ROC curve has been widely used for biomarker discrimination, no methodology exists that can handle clustered censored data. We propose an estimator for the covariate-specific time-dependent ROC curve and the area under the ROC curve when clustered failure times are observed. We consider a shared frailty model for the effect of the covariates and the biomarker on the outcome, in order to account for the cluster effect. A simulation study showed negligible bias for the proposed estimator and for a nonparametric estimator based on inverse probability of censoring weighting (IPCW), whereas a semiparametric estimator that ignores the clustering is markedly biased.
We further considered an IPD meta-analysis with competing risks to assess the benefit of adding chemotherapy to radiotherapy on each competing endpoint for patients with nasopharyngeal carcinoma. Recommendations for the analysis of competing risks in the context of randomized clinical trials are well established. Surprisingly, no formal guidelines have yet been proposed for conducting an IPD meta-analysis with competing risk endpoints. To fill this gap, this work details how to handle the heterogeneity between trials via a stratified regression model for competing risks, and it highlights that the usual metrics of inconsistency for assessing heterogeneity can readily be employed. The typical issues that arise with meta-analyses, and the advantages due to the availability of patient-level characteristics, are underlined. We propose a landmark approach for the cumulative incidence function to investigate the impact of follow-up time on the treatment effect. The assumption of non-informative cluster size was made in both analyses. The cluster size is said to be informative when the outcome depends on the size of the cluster conditional on a set of covariates. Intuitively, a meta-analysis would meet this assumption. However, non-informative cluster size is commonly assumed even though it may not hold in some situations, leading to incorrect results. Informative cluster size (ICS) is a challenging problem, and its presence has an impact on the choice of the correct methodology. We discuss in more detail the interpretation of results and which quantities can be estimated under which conditions. We propose a test for ICS with censored clustered data. To our knowledge, this is the first such test in the context of survival analysis. A simulation study was conducted to assess the power of the test, and some illustrative examples are provided. The implementation of each of these developments is available at https://github.com/AMeddis
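As a rough illustration of the nonparametric IPCW idea mentioned above (deliberately ignoring the clustering and the covariate adjustment, which are the actual contributions of the thesis), a minimal time-dependent AUC estimator might look as follows. The function names and the toy data are assumptions for illustration only: cases are subjects with an event by horizon t, controls are those still at risk past t, and weights come from a Kaplan-Meier estimate of the censoring distribution.

```python
import numpy as np

def km_censoring_survival(time, event):
    """Kaplan-Meier estimate of the censoring survival G(t) = P(C > t),
    treating censorings (event == 0) as the 'events' of interest."""
    order = np.argsort(time)
    t, d = time[order], event[order]
    n = len(t)
    at_risk = n - np.arange(n)
    factors = 1.0 - (d == 0) / at_risk
    surv = np.cumprod(factors)
    def G(s):
        idx = np.searchsorted(t, s, side="right") - 1
        return 1.0 if idx < 0 else surv[idx]
    return G

def ipcw_auc(marker, time, event, t):
    """IPCW estimator of the time-dependent AUC at horizon t:
    weighted probability that a case's marker exceeds a control's."""
    G = km_censoring_survival(time, event)
    case = (time <= t) & (event == 1)
    ctrl = time > t
    w_case = np.array([1.0 / G(ti) for ti in time[case]])
    w_ctrl = np.full(ctrl.sum(), 1.0 / G(t))
    num = den = 0.0
    for mi, wi in zip(marker[case], w_case):
        for mj, wj in zip(marker[ctrl], w_ctrl):
            num += wi * wj * ((mi > mj) + 0.5 * (mi == mj))
            den += wi * wj
    return num / den

# Toy check: without censoring the weights are all 1 and the estimator
# reduces to the empirical concordance between cases and controls.
marker = np.array([6.0, 5.0, 4.0, 3.0, 2.0, 1.0])
time = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
event = np.ones(6, dtype=int)
auc = ipcw_auc(marker, time, event, t=3.5)
```

In this perfectly separated toy example the AUC is 1, and reversing the marker yields 0, which is a convenient sanity check before applying the estimator to censored data.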
APA, Harvard, Vancouver, ISO, and other styles
47

Nan, Bin. "Information bounds and efficient estimates for two-phase designs with lifetime data /." Thesis, Connect to this title online; UW restricted, 2001. http://hdl.handle.net/1773/9587.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Li, Qiuju. "Statistical inference for joint modelling of longitudinal and survival data." Thesis, University of Manchester, 2014. https://www.research.manchester.ac.uk/portal/en/theses/statistical-inference-for-joint-modelling-of-longitudinal-and-survival-data(65e644f3-d26f-47c0-bbe1-a51d01ddc1b9).html.

Full text
Abstract:
In longitudinal studies, data collected within a subject or cluster are inherently correlated, and special care is needed to account for such correlation in the analysis. Within this framework, three topics are discussed in this thesis. In Chapter 2, the joint modelling of a multivariate longitudinal process consisting of different types of outcomes is discussed. In the large cohort study of the UK North Staffordshire osteoarthritis project, longitudinal trivariate outcomes of continuous, binary and ordinal data are observed at baseline, year 3 and year 6. Instead of analysing each process separately, joint modelling is proposed for the trivariate outcomes to account for the inherent association by introducing random effects and the covariance matrix G. The influence of the covariance matrix G on statistical inference for the fixed-effects parameters is investigated within the Bayesian framework. The study shows that joint modelling of the multivariate longitudinal process reduces bias and provides more reliable results than modelling each process separately. Along with longitudinal measurements taken intermittently, a counting process of events in time is often observed during a longitudinal study. It is of interest to investigate the relationship between time to event and the longitudinal process; at the same time, measurements of the longitudinal process may be truncated by terminal events such as death. Thus, it may be crucial to jointly model the survival and longitudinal data. It is common to propose linear mixed-effects models for a longitudinal process of continuous outcomes and a Cox regression model for the survival data to characterize the relationship between time to event and the longitudinal process, under some standard assumptions.
In Chapter 3, we investigate the influence on statistical inference for survival data when the assumption of mutual independence of the random errors in the linear mixed-effects model for the longitudinal process is violated. The study uses the conditional score estimation approach, which provides robust estimators and has a computational advantage. A generalised sufficient statistic for the random effects is proposed to account for the correlation remaining among the random errors, which is characterized by the data-driven method of modified Cholesky decomposition. The simulation study shows that this provides nearly unbiased estimation and efficient statistical inference. Chapter 4 seeks to account for both the current and past information of the longitudinal process in the survival component of the joint model. In the last 15 to 20 years, it has been popular, or even standard, to assume that the longitudinal process affects the counting process of events in time only through its current value; however, this need not always be true, as recognised by investigators in more recent studies. An integral over the trajectory of the longitudinal process, along with a weighting curve, is proposed to account for both current and past information, to improve inference and reduce the underestimation of the effects of the longitudinal process on the hazard. A plausible approach to statistical inference for the proposed models is presented, along with a real data analysis and a simulation study.
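The modified Cholesky decomposition mentioned above factors a covariance matrix Σ as TΣT' = D, with T unit lower triangular and D diagonal, giving the residual correlation an unconstrained, data-driven parameterisation. A minimal sketch of the factorisation (the numeric matrix is an illustrative assumption):

```python
import numpy as np

def modified_cholesky(sigma):
    """Factor a covariance matrix so that T @ sigma @ T.T = D, where T is
    unit lower triangular and D is diagonal with positive entries."""
    L = np.linalg.cholesky(sigma)  # sigma = L @ L.T, L lower triangular
    d = np.diag(L)
    M = L / d                      # unit lower triangular: sigma = M @ diag(d**2) @ M.T
    T = np.linalg.inv(M)           # inverse of unit lower triangular is unit lower triangular
    D = np.diag(d ** 2)
    return T, D

# Illustrative 3x3 covariance matrix.
sigma = np.array([[4.0, 2.0, 1.0],
                  [2.0, 3.0, 0.5],
                  [1.0, 0.5, 2.0]])
T, D = modified_cholesky(sigma)
```

The appeal of this parameterisation is that the sub-diagonal entries of T and the log-diagonal of D are unconstrained, so they can be modelled freely without risking a non-positive-definite covariance estimate.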
APA, Harvard, Vancouver, ISO, and other styles
49

Mason, Tracey. "Application of survival methods for the analysis of adverse event data." Thesis, Keele University, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.267646.

Full text
Abstract:
The systematic collection of Adverse Events (AEs) arose with the Thalidomide incident. Prior to this, the development and marketing of drugs was not regulated in any way; it was the teratogenic effects which raised people's awareness of the damage prescription drugs could cause. This thesis begins by describing the background to the foundation of the Committee on Safety of Medicines (CSM) and how AEs are collected today. The thesis investigates survival analysis, discriminant analysis and logistic regression to identify prognostic indicators. These indicators are developed to build, assess and compare predictive models, to see whether the factors identified are similar across the methodologies used and, if so, whether the background assumptions are valid in this case. ROC analysis is used to classify the prognostic indices produced by a valid cut-off point; in many medical applications the emphasis is on creating the index, with the cut-off points chosen by clinical judgement. Here, ROC analysis is used to give a statistical basis to that decision. In addition, neural networks are investigated and compared to the other models. Two sets of data are explored within the thesis: first, data from a Phase III clinical trial used to assess the efficacy and safety of a new drug for slowing the advance of Alzheimer's disease, in which AEs were collected routinely; and second, data from a drug monitoring system used by the Department of Rheumatology at the Haywood Hospital to identify patients likely to require a change in their medication based on their blood results.
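One standard way to give the cut-off choice a statistical basis, as described above, is Youden's J statistic (sensitivity + specificity − 1), maximised over candidate thresholds. The function and toy data below are illustrative assumptions, not the thesis's actual prognostic index.

```python
import numpy as np

def youden_cutoff(score, label):
    """Return the threshold maximising Youden's J = sensitivity + specificity - 1,
    classifying score >= threshold as positive."""
    score = np.asarray(score, dtype=float)
    label = np.asarray(label, dtype=int)
    best_thr, best_j = None, -np.inf
    for thr in np.unique(score):
        pred = score >= thr
        sens = (pred & (label == 1)).sum() / (label == 1).sum()
        spec = (~pred & (label == 0)).sum() / (label == 0).sum()
        j = sens + spec - 1.0
        if j > best_j:
            best_thr, best_j = thr, j
    return best_thr, best_j

# Well-separated toy scores: the optimal cut-off isolates the positives exactly.
thr, j = youden_cutoff([0.1, 0.2, 0.8, 0.9], [0, 0, 1, 1])
```

For perfectly separated groups J reaches its maximum of 1 at the smallest positive score; on real data the maximising threshold trades sensitivity against specificity.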
APA, Harvard, Vancouver, ISO, and other styles
50

Rosenberg, Magdalena. "Survival Time : A Survey on the Current Survival Time for an Unprotected Public System." Thesis, Högskolan i Halmstad, Sektionen för Informationsvetenskap, Data– och Elektroteknik (IDE), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-23199.

Full text
Abstract:
Survival time: what exactly does the term imply, and what is the best method to measure it? Several experts within the field of Internet security have used the term; some have gone further and presented statistics on survival time over the years. This bachelor thesis aims to present a universal definition of the term and, further, to measure the current survival time for a given unprotected system. Through the deployment of a decoy, data will be captured and collected via port monitoring. The main focus lies on building a time curve presenting the estimated time for an unprotected public system to be detected on the Internet, and the time that then elapses until the system is attacked.
APA, Harvard, Vancouver, ISO, and other styles