Dissertations / Theses on the topic 'Censored failure time outcome'

To see the other types of publications on this topic, follow the link: Censored failure time outcome.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 25 dissertations / theses for your research on the topic 'Censored failure time outcome.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

ROTA, MATTEO. "Cut-point finding methods for continuous biomarkers." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2013. http://hdl.handle.net/10281/40114.

Full text
Abstract:
My PhD dissertation deals with statistical methods for cut-point finding for continuous biomarkers. Categorization is often needed for clinical decision making when dealing with diagnostic (or prognostic) biomarkers and a dichotomous or censored failure time outcome. This allows the definition of two or more prognostic risk groups, or patient stratifications for inclusion in randomized clinical trials (RCTs). We investigate the following cut-point finding methods: minimum P-value, Youden index, concordance probability and point closest-to-(0,1) corner in the ROC plane. We compare them by assuming both Normal and Gamma biomarker distributions, showing whether they lead to the identification of the same true cut-point, and further investigate their performance by simulation. Within the framework of censored survival data, we consider new estimation approaches for the optimal cut-point, which use a conditional weighting method to estimate the true positive and false positive fractions. Motivating examples on real datasets are discussed within the dissertation for both the dichotomous and the censored failure time outcome. In all simulation scenarios, the point closest-to-(0,1) corner in the ROC plane and the concordance probability approaches outperformed the other methods. Both these methods showed good performance in the estimation of the optimal cut-point of a biomarker. However, to improve the communicability of results, the Youden index or the concordance probability associated with the estimated cut-point could be reported to summarize the associated classification accuracy. The use of the minimum P-value approach for cut-point finding is not recommended because its objective function is computed under the null hypothesis of no association between the true disease status and the biomarker X. This is in contrast with the presence of some discrimination potential of the biomarker X that leads to the dichotomization issue.
The investigated cut-point finding methods are based on measures, i.e. sensitivity and specificity, defined conditionally on the outcome. My PhD dissertation raises the question of whether these methods could be applied starting from predictive values, which typically represent the most useful information for clinical decisions on treatments. However, while sensitivity and specificity are invariant to disease prevalence, predictive values vary across populations with different disease prevalence. This is an important drawback of the use of predictive values for cut-point finding. More generally, great care should be taken when establishing a biomarker cut-point for clinical use. Methods for categorizing new biomarkers are often essential in clinical decision-making, even though categorization of a continuous biomarker comes at a considerable loss of power and information. In the future, new methods involving the study of the functional form between the biomarker and the outcome through regression techniques such as fractional polynomials or spline functions should be considered to alternatively define cut-points for clinical use. Moreover, in spite of the aforementioned drawback related to the use of predictive values, we also think that additional new methods for cut-point finding should be developed starting from predictive values.
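The two best-performing criteria in this abstract are easy to state concretely for a dichotomous outcome. The sketch below is a minimal illustration (function names and the grid search are mine, not the dissertation's code; the censored-outcome case with conditional weighting is not covered):

```python
import numpy as np

def cutpoint_scores(marker, disease, cutpoints):
    """For each candidate cut-point c, classify marker > c as test-positive
    and return arrays of (sensitivity, specificity)."""
    marker = np.asarray(marker, float)
    disease = np.asarray(disease, bool)
    sens, spec = [], []
    for c in cutpoints:
        pos = marker > c
        sens.append(pos[disease].mean())      # true positive fraction
        spec.append((~pos[~disease]).mean())  # true negative fraction
    return np.array(sens), np.array(spec)

def youden_cutpoint(marker, disease, cutpoints):
    """Cut-point maximizing the Youden index J = sensitivity + specificity - 1."""
    sens, spec = cutpoint_scores(marker, disease, cutpoints)
    j = sens + spec - 1.0
    k = int(np.argmax(j))
    return cutpoints[k], j[k]

def closest_to_01_cutpoint(marker, disease, cutpoints):
    """Cut-point whose ROC point (1 - spec, sens) is closest to the (0, 1) corner."""
    sens, spec = cutpoint_scores(marker, disease, cutpoints)
    d = np.hypot(1.0 - spec, 1.0 - sens)
    k = int(np.argmin(d))
    return cutpoints[k], d[k]
```

On a simulated Normal biomarker (controls N(0,1), cases N(2,1)) both criteria recover a cut-point near the true value of 1.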
2

Gorelick, Jeremy. "Nonparametric analysis of interval-censored failure time data." Diss., Columbia, Mo. : University of Missouri--Columbia, 2009. http://hdl.handle.net/10355/7009.

Full text
Abstract:
Title from PDF of title page (University of Missouri--Columbia, viewed on Feb 26, 2010). The entire thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file; a non-technical public abstract appears in the public.pdf file. Dissertation advisor: Dr. (Tony) Jianguo Sun. Includes bibliographical references.
3

Wang, Lianming. "Statistical analysis of multivariate interval-censored failure time data." Diss., Columbia, Mo. : University of Missouri-Columbia, 2006. http://hdl.handle.net/10355/4375.

Full text
Abstract:
Thesis (Ph.D.)--University of Missouri-Columbia, 2006.
The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file (viewed on May 2, 2007). Vita. Includes bibliographical references.
4

Cai, Jianwen. "Generalized estimating equations for censored multivariate failure time data /." Thesis, Connect to this title online; UW restricted, 1992. http://hdl.handle.net/1773/9581.

Full text
5

Chen, Man-Hua. "Statistical analysis of multivariate interval-censored failure time data." Diss., Columbia, Mo. : University of Missouri-Columbia, 2007. http://hdl.handle.net/10355/4776.

Full text
Abstract:
Thesis (Ph.D.)--University of Missouri-Columbia, 2007.
The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file (viewed on March 6, 2009). Includes bibliographical references.
6

Zhao, Qiang. "Nonparametric treatment comparisons for interval-censored failure time data /." free to MU campus, to others for purchase, 2004. http://wwwlib.umi.com/cr/mo/fullcit?p3144474.

Full text
7

Zhu, Chao. "Nonparametric and semiparametric methods for interval-censored failure time data." Diss., Columbia, Mo. : University of Missouri-Columbia, 2006. http://hdl.handle.net/10355/4415.

Full text
Abstract:
Thesis (Ph.D.)--University of Missouri-Columbia, 2006.
The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file (viewed on May 2, 2007). Vita. Includes bibliographical references.
8

Wong, Kin-yau, and 黃堅祐. "Analysis of interval-censored failure time data with long-term survivors." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2012. http://hub.hku.hk/bib/B48199473.

Full text
Abstract:
Failure time data analysis, or survival analysis, is involved in various research fields, such as medicine and public health. One basic assumption in standard survival analysis is that every individual in the study population will eventually experience the event of interest. However, this assumption is usually violated in practice, for example when the variable of interest is the time to relapse of a curable disease, resulting in the existence of long-term survivors. Also, the presence of unobservable risk factors in the group of susceptible individuals may introduce heterogeneity to the population, which is not properly addressed in standard survival models. Moreover, the individuals in the population may be grouped in clusters, where there are associations among observations from a cluster. There are methodologies in the literature to address each of these problems, but there is as yet no natural and satisfactory way to accommodate the coexistence of a non-susceptible group and the heterogeneity in the susceptible group under a univariate setting. Also, various kinds of associations among survival data with a cure are not properly accommodated. To address the above-mentioned problems, a class of models is introduced to model univariate and multivariate data with long-term survivors. A semiparametric cure model for univariate failure time data with long-term survivors is introduced. It accommodates a proportion of non-susceptible individuals and the heterogeneity in the susceptible group using a compound-Poisson distributed random effect term, which is commonly called a frailty. It is a frailty-Cox model which does not place any parametric assumption on the baseline hazard function. An estimation method using multiple imputation is proposed for right-censored data, and the method is naturally extended to accommodate interval-censored data.
The univariate cure model is extended to a multivariate setting by introducing correlations among the compound-Poisson frailties for individuals from the same cluster. This multivariate cure model is similar to a shared frailty model, where the degree of association among each pair of observations in a cluster is the same. The model is further extended to accommodate repeated measurements from a single individual, leading to serially correlated observations. Similar estimation methods using multiple imputation are developed for the multivariate models. The univariate model is applied to a breast cancer data set and the multivariate models are applied to the hypobaric decompression sickness data from the National Aeronautics and Space Administration, although the methodologies are applicable to a wide range of data sets.
Statistics and Actuarial Science
Master of Philosophy
9

Bouadoumou, Maxime K. "Jackknife Empirical Likelihood for the Accelerated Failure Time Model with Censored Data." Digital Archive @ GSU, 2011. http://digitalarchive.gsu.edu/math_theses/112.

Full text
Abstract:
Kendall and Gehan estimating functions are used to estimate the regression parameter in the accelerated failure time (AFT) model with censored observations. The accelerated failure time model is a preferred survival analysis method because it maintains a consistent association between the covariate and the survival time. The jackknife empirical likelihood method is used because it overcomes computational difficulty by circumventing the construction of the nonlinear constraint. Jackknife empirical likelihood turns the statistic of interest into a sample mean based on jackknife pseudo-values. A U-statistic approach is used to construct the confidence intervals for the regression parameter. We conduct a simulation study to compare the Wald-type procedure, the empirical likelihood, and the jackknife empirical likelihood in terms of coverage probability and average length of confidence intervals. The jackknife empirical likelihood method has better performance and overcomes the under-coverage problem of the Wald-type method. A real data set is also used to illustrate the proposed methods.
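The pseudo-value construction that turns the statistic of interest into a sample mean can be sketched in a few lines (illustrative code; the thesis applies this to Kendall and Gehan estimating functions rather than to the toy statistic used below):

```python
import numpy as np

def jackknife_pseudovalues(data, statistic):
    """Jackknife pseudo-values: pseudo_i = n * T(full) - (n - 1) * T(without i).
    Their sample mean is the jackknife estimate of the statistic."""
    data = np.asarray(data)
    n = data.size
    t_full = statistic(data)
    pseudo = np.empty(n)
    for i in range(n):
        t_loo = statistic(np.delete(data, i))  # leave-one-out statistic
        pseudo[i] = n * t_full - (n - 1) * t_loo
    return pseudo
```

For a U-statistic such as the unbiased sample variance, the mean of the pseudo-values reproduces the statistic exactly, which is what lets empirical likelihood for a sample mean be applied afterwards.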
10

Goodall, R. L. "Analysis of interval-censored failure time data with application to studies of HIV infection." Thesis, University College London (University of London), 2007. http://discovery.ucl.ac.uk/1446247/.

Full text
Abstract:
In clinical trials and cohort studies the event of interest is often not observable, and is known only to have occurred between the visit when the event was first observed and the previous visit; such data are called interval-censored. This thesis develops three pieces of research that build upon published methods for interval-censored data. Novel methods are developed which can be applied via self-written macros in the standard packages, with the aim of increasing the use of appropriate methods in applied medical research. The non-parametric maximum likelihood estimator (NPMLE) is the most common statistical method for estimating the survivor function for interval-censored data. However, the choice of method for obtaining confidence intervals for the survivor function is unclear. Three methods are assessed and compared using simulated data and data from the MRC Delta trial. Non- or semi-parametric methods that correctly account for interval-censoring are not readily available in statistical packages. Typically the event time is taken to be the right endpoint of the censoring interval and standard methods (e.g. Kaplan-Meier) for the analysis of right-censored failure time data are used, giving biased estimates of the survival curve. A simulation study compared simple imputation using the right endpoint and interval midpoint to the NPMLE and a proposed smoothed version of the NPMLE that extends the work of Pan and Chappell. These methods were also applied to data from the CHIPS study. Different approaches to the estimation of the effect of a binary covariate are compared: (i) a proportional hazards model, (ii) a piecewise exponential model, (iii) a simpler proportional hazards model based on imputed event times, and (iv) a proposed approximation to the piecewise exponential model that is a more rigorous alternative to simple imputation methods whilst simple to fit using standard software.
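The bias from right-endpoint imputation that this comparison targets can be demonstrated with a toy simulation (a unit visit schedule and exponential event times are my assumptions, not the thesis's design):

```python
import numpy as np

rng = np.random.default_rng(0)
t_true = rng.exponential(1.0, size=20_000)   # true (unobserved) event times
visit_after = np.ceil(t_true)                # first scheduled visit at or after the event
right_imputed = visit_after                  # right-endpoint imputation
mid_imputed = visit_after - 0.5              # midpoint of the interval (visit - 1, visit]

# Both imputations replace the unknown exact time; the right endpoint is
# systematically late, the midpoint is much closer on average.
bias_right = right_imputed.mean() - t_true.mean()
bias_mid = mid_imputed.mean() - t_true.mean()
```

With unit-spaced visits the right endpoint overshoots the true mean event time by roughly 0.58 here, while the midpoint is off by under 0.1, which is why naive right-endpoint analyses give biased survival curves.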
11

Lim, Hee-Jeong. "Statistical analysis of interval-censored and truncated survival data /." free to MU campus, to others for purchase, 2001. http://wwwlib.umi.com/cr/mo/fullcit?p3025635.

Full text
12

Shinohara, Russell. "Estimation of survival of left truncated and right censored data under increasing hazard." Thesis, McGill University, 2007. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=100210.

Full text
Abstract:
When subjects are recruited through a cross-sectional survey they have already experienced the initiation of the event of interest, say the onset of a disease. This method of recruitment means that subjects with a longer duration of the disease have a higher chance of being selected. It follows that censoring in such a case is informative. The application of standard techniques for right-censored data thus introduces a bias into the analysis; this is referred to as length-bias. This thesis examines the case where the subjects are assumed to enter the study at a uniform rate, allowing for analysis in a more efficient unconditional manner. In particular, a new method for unconditional analysis is developed based on the framework of a conditional estimator. This new method is then applied to several data sets and compared with the conditional technique of Tsai [23].
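The length-bias itself is simple to reproduce: under cross-sectional recruitment, the chance of catching a subject is proportional to the disease duration. A minimal sketch (exponential durations are an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(1)
durations = rng.exponential(1.0, size=100_000)   # population durations, mean 1

# Cross-sectional sampling: selection probability proportional to duration.
p = durations / durations.sum()
sampled = durations[rng.choice(durations.size, size=50_000, p=p)]
```

For a unit-mean exponential the length-biased mean is E[X^2]/E[X] = 2, so the cross-sectional sample roughly doubles the population mean duration; ignoring this inflates survival estimates.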
13

Colomay, Harold K. (Harold Kenney). "A survey of a class of nonparametric two-sample tests for right censored failure time data /." Thesis, McGill University, 1992. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=56645.

Full text
Abstract:
This thesis provides an in-depth look at a specific class of nonparametric two-sample procedures for right-censored failure time data: standardized weighted log-rank (SWL) statistics. This family of tests comprises the well-known Gehan, Efron, and log-rank procedures. The first two of these reduce to the Wilcoxon test when censoring is absent, while the third is a censored-data generalization of the Savage test. Two topics of particular interest to us are (1) the generation of SWL statistics as score tests within the context of some popular regression models, and (2) asymptotic and small-sample behavior.
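Members of the SWL family differ only in the weight attached to each distinct event time. A minimal sketch of the unstandardized score (my own illustrative code; the variance standardization that makes these tests "standardized" is omitted): w(t) = 1 gives the log-rank score, and w(t) = number at risk gives Gehan's statistic.

```python
import numpy as np

def weighted_logrank_numerator(time, event, group, weight="logrank"):
    """U = sum over distinct event times t of w(t) * (d1 - n1 * d / n),
    where d = events at t, n = number at risk just before t, and '1'
    refers to group 1.  weight: 'logrank' (w = 1) or 'gehan' (w = n)."""
    time = np.asarray(time, float)
    event = np.asarray(event, bool)
    group = np.asarray(group, int)
    u = 0.0
    for t in np.unique(time[event]):
        at_risk = time >= t
        n = at_risk.sum()
        n1 = (at_risk & (group == 1)).sum()
        d = (event & (time == t)).sum()
        d1 = (event & (time == t) & (group == 1)).sum()
        w = n if weight == "gehan" else 1.0
        u += w * (d1 - n1 * d / n)
    return u
```

A positive score here means group 1 experienced more events than expected under the null; with no censoring, the Gehan weighting recovers a Wilcoxon-type comparison, as the abstract notes.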
14

Cheung, Tak-lun Alan, and 張德麟. "Modelling multivariate interval-censored and left-truncated survival data using proportional hazards model." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2003. http://hub.hku.hk/bib/B29536637.

Full text
15

Lu, Yinghua. "Empirical Likelihood Inference for the Accelerated Failure Time Model via Kendall Estimating Equation." Digital Archive @ GSU, 2010. http://digitalarchive.gsu.edu/math_theses/76.

Full text
Abstract:
In this thesis, we study two methods for inference on parameters in the accelerated failure time model with right-censored data. One is the Wald-type method, which involves parameter estimation. The other is the empirical likelihood method, which is based on the asymptotic distribution of the likelihood ratio. We employ a monotone censored-data version of the Kendall estimating equation, and construct confidence intervals from both methods. In the simulation studies, we compare the empirical likelihood (EL) and the Wald-type procedure in terms of coverage accuracy and average length of confidence intervals. It is concluded that the empirical likelihood method has better performance. We also compare the EL for Kendall's rank regression estimator with the EL for other well-known estimators and find advantages of the EL for the Kendall estimator for small sample sizes. Finally, a real clinical trial data set is used for the purpose of illustration.
16

Lu, Min. "A Study of the Calibration Regression Model with Censored Lifetime Medical Cost." Digital Archive @ GSU, 2006. http://digitalarchive.gsu.edu/math_theses/14.

Full text
Abstract:
Medical cost has received increasing interest recently in biostatistics and public health. Statistical analysis and inference for lifetime medical cost have been challenged by the fact that the survival times are censored for some study subjects and their subsequent costs are unknown. Huang (2002) proposed the calibration regression model, a semiparametric regression tool for studying the medical cost associated with covariates. In this thesis, an inference procedure is investigated using the empirical likelihood ratio method. Unadjusted and adjusted empirical likelihood confidence regions are constructed for the regression parameters. We compare the proposed empirical likelihood methods with the normal-approximation-based method. Simulation results show that the proposed empirical likelihood ratio method outperforms the normal-approximation-based method in terms of coverage probability. In particular, the adjusted empirical likelihood is the best one, overcoming the under-coverage problem.
17

Kelly, Jodie. "Topics in the statistical analysis of positive and survival data." Thesis, Queensland University of Technology, 1998.

Find full text
18

Assareh, Hassan. "Bayesian hierarchical models in statistical quality control methods to improve healthcare in hospitals." Thesis, Queensland University of Technology, 2012. https://eprints.qut.edu.au/53342/1/Hassan_Assareh_Thesis.pdf.

Full text
Abstract:
Quality oriented management systems and methods have become the dominant business and governance paradigm. From this perspective, satisfying customers' expectations by supplying reliable, good quality products and services is the key factor for an organization and even a government. During recent decades, Statistical Quality Control (SQC) methods have been developed as the technical core of quality management and the continuous improvement philosophy, and are now being applied widely to improve the quality of products and services in industrial and business sectors. Recently SQC tools, in particular quality control charts, have been used in healthcare surveillance. In some cases, these tools have been modified and developed to better suit the health sector's characteristics and needs. It seems that some of the work in the healthcare area has evolved independently of the development of industrial statistical process control methods. Therefore analysing and comparing paradigms and the characteristics of quality control charts and techniques across the different sectors presents some opportunities for transferring knowledge and future development in each sector. Meanwhile, the capabilities of the Bayesian approach, particularly Bayesian hierarchical models and computational techniques in which all uncertainty is expressed as a structure of probability, facilitate decision making and cost-effectiveness analyses. Therefore, this research investigates the use of the quality improvement cycle in a health setting using clinical data from a hospital. The need for clinical data for monitoring purposes is investigated in two respects. A framework and appropriate tools from the industrial context are proposed and applied to evaluate and improve data quality in available datasets and data flow; then a data capturing algorithm using Bayesian decision making methods is developed to determine an economical sample size for statistical analyses within the quality improvement cycle.
After ensuring clinical data quality, some characteristics of control charts in the health context, including the necessity of monitoring attribute data and correlated quality characteristics, are considered. To this end, multivariate control charts from an industrial context are adapted to monitor radiation delivered to patients undergoing diagnostic coronary angiograms, and various risk-adjusted control charts are constructed and investigated for monitoring binary outcomes of clinical interventions as well as post-intervention survival time. Meanwhile, adoption of a Bayesian approach is proposed as a new framework for estimation of the change point following a control chart's signal. This estimate aims to facilitate root-cause efforts in the quality improvement cycle, since it cuts the search for the potential causes of detected changes to a tighter time-frame prior to the signal. This approach enables us to obtain highly informative estimates for change point parameters, since probability-distribution-based results are obtained. Using Bayesian hierarchical models and Markov chain Monte Carlo computational methods, Bayesian estimators of the time and the magnitude of various change scenarios, including step change, linear trend and multiple changes in a Poisson process, are developed and investigated. The benefits of change point investigation are revisited and promoted in monitoring hospital outcomes, where the developed Bayesian estimator reports the true time of the shifts, compared to a priori known causes, detected by control charts in monitoring the rate of excess usage of blood products and major adverse events during and after cardiac surgery in a local hospital. The development of the Bayesian change point estimators is then continued in healthcare surveillance for processes in which pre-intervention characteristics of patients affect the outcomes.
In this setting, at first, the Bayesian estimator is extended to capture the patient mix, covariates, through risk models underlying risk-adjusted control charts. Variations of the estimator are developed to estimate the true time of step changes and linear trends in the odds ratio of intensive care unit outcomes in a local hospital. Secondly, the Bayesian estimator is extended to identify the time of a shift in mean survival time after a clinical intervention which is being monitored by risk-adjusted survival time control charts. In this context, the survival time after a clinical intervention is also affected by patient mix, and the survival function is constructed using a survival prediction model. The empirical results and simulations in each research component indicate that the developed Bayesian estimators are a strong alternative for change point estimation within the quality improvement cycle in healthcare surveillance as well as in industrial and business contexts. The superiority of the proposed Bayesian framework and estimators is enhanced when probability quantification, flexibility and generalizability of the developed model are also considered. The advantages of the Bayesian approach seen in the general context of quality control may also be extended to the industrial and business domains where quality monitoring was initially developed.
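As a toy illustration of change-point estimation in a Poisson process, the posterior over the change time can be computed on a grid (a deliberately simplified sketch: a single step change, both rates treated as known, uniform prior over the change time; the thesis uses full Bayesian hierarchical models fitted by MCMC):

```python
import numpy as np

def changepoint_posterior(counts, lam0, lam1):
    """Posterior over the step-change time tau (uniform prior), assuming
    counts[:tau] ~ Poisson(lam0) and counts[tau:] ~ Poisson(lam1)."""
    counts = np.asarray(counts, float)
    n = counts.size
    taus = np.arange(1, n)
    loglik = np.array([
        counts[:tau].sum() * np.log(lam0) - lam0 * tau
        + counts[tau:].sum() * np.log(lam1) - lam1 * (n - tau)
        for tau in taus
    ])
    w = np.exp(loglik - loglik.max())   # normalize in a numerically stable way
    return taus, w / w.sum()
```

The posterior gives a full probability distribution over the change time rather than a single signal time, which is the "highly informative estimate" the abstract refers to.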
19

Chang, Yin-Chu, and 張茵筑. "Joint analysis of longitudinal and interval-censored failure time data." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/72nher.

Full text
20

Huang, Jin-long, and 黃進龍. "Nonparametric tests for interval-censored failure time data via multiple imputation." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/am7z65.

Full text
Abstract:
Doctoral dissertation
National Sun Yat-sen University
Graduate Institute of Applied Mathematics
ROC academic year 96 (2007-08)
Interval-censored failure time data often occur in follow-up studies where subjects can only be followed periodically and the failure time can only be known to lie in an interval. In this paper we consider the problem of comparing two or more interval-censored samples. We propose a multiple imputation method for discrete interval-censored data to impute exact failure times from interval-censored observations, and then apply existing tests for exact data, such as the log-rank test, to the imputed exact data. The test statistic and covariance matrix are calculated by our proposed multiple imputation technique. The formula of the covariance matrix estimator is similar to the estimator used by Follmann, Proschan and Leifer (2003) for clustered data. Through simulation studies we find that the performance of the proposed log-rank type test is comparable to that of the test proposed by Finkelstein (1986), and is better than that of the two existing log-rank type tests proposed by Sun (2001) and Zhao and Sun (2004), due to the differences in the method of multiple imputation and the covariance matrix estimation. The proposed method is illustrated by means of an example involving patients with breast cancer. We also investigate applying our method to other two-sample comparison tests for exact data, such as Mantel's test (1967) and the integrated weighted difference test.
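The imputation step can be sketched generically (a simplified stand-in: the paper imputes under an estimated distribution and applies the log-rank test with a dedicated covariance estimator, whereas this toy draws uniformly within each censoring interval and averages a plain rank statistic over imputations):

```python
import numpy as np

def mi_rank_statistic(left, right, group, n_imp=200, seed=0):
    """Multiple-imputation sketch: draw an exact failure time uniformly
    inside each subject's censoring interval (left, right), apply a
    complete-data rank statistic, and average over imputations."""
    rng = np.random.default_rng(seed)
    left, right, group = map(np.asarray, (left, right, group))
    stats = []
    for _ in range(n_imp):
        t = rng.uniform(left, right)           # one imputed complete data set
        ranks = t.argsort().argsort() + 1      # ranks of imputed failure times
        stats.append(ranks[group == 1].mean() - ranks[group == 0].mean())
    return float(np.mean(stats))
```

A statistic far from zero suggests the two interval-censored samples differ; the averaging over imputations is what the proposed covariance estimator then has to account for.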
21

Sun, De-Yu, and 孫德宇. "Generalized rank tests for univariate and bivariate interval-censored failure time data." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/66877955733235934635.

Full text
Abstract:
Master's thesis
National Sun Yat-sen University
Graduate Institute of Applied Mathematics
ROC academic year 91 (2002-03)
In Part 1 of this paper, we adapt Turnbull's algorithm to estimate the distribution function of univariate interval-censored and truncated failure time data. We also propose four non-parametric tests to test whether two groups of the data come from the same distribution. The powers of the proposed test statistics are compared by simulation under different distributions. The proposed tests are then used to analyze an AIDS study. In Part 2, for bivariate interval-censored data, we propose some models of how to generate the data and several methods to measure the correlation between the two variates. We also propose several nonparametric tests to determine whether the two variates are mutually independent or whether they have the same distribution. We demonstrate the performance of these tests by simulation and give an application to an AIDS study (ACTG 181).
22

Hsu, Hung-Yen, and 許鴻彥. "The distribution of a non-parametric test for interval-censored failure time data." Thesis, 1999. http://ndltd.ncl.edu.tw/handle/46813527210810946696.

Full text
Abstract:
Master's thesis
National Sun Yat-sen University
Department of Applied Mathematics
ROC academic year 87 (1998-99)
A generalized non-parametric test for interval-censored failure time data is proposed for determining whether p lifetime populations come from the same distribution. However, the distribution of a non-parametric statistic is not easy to obtain, so a simulation study is necessary. In this article, we propose a simulation procedure for determining the failure time distribution based on discrete interval-censored failure time data. The simulation results indicate that the proposed test statistic is approximately a constant times a chi-square distribution with (p-1) degrees of freedom.
23

Luh, Horng-Huey, and 陸虹惠. "The distribution of a non-parametric test for interval-censored and truncated failure time data." Thesis, 1999. http://ndltd.ncl.edu.tw/handle/19381589012742766359.

Full text
Abstract:
Master's thesis
National Sun Yat-sen University
Department of Applied Mathematics
ROC academic year 87 (1998-99)
In this paper, we discuss the distribution of a non-parametric test based on incomplete data for which the measurement of a survival time is known only to belong to an interval. The survival time of interest itself is also observed from a truncated distribution and is known only to lie in an interval. The test is proposed for determining whether p lifetime populations come from the same distribution. To find the distribution of the test statistic we carry out a simulation study. Simulation results indicate that the test statistic is approximately distributed as (1/c) times a chi-square distribution with p-1 degrees of freedom, where the constant c may depend on some factors.
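Given simulated null statistics, the constant c in such a scaled chi-square approximation can be estimated by moment matching: if the statistic is approximately (1/c) times chi-square with p-1 degrees of freedom, its mean is (p-1)/c. A toy check on synthetic draws (not the thesis's simulation design):

```python
import numpy as np

def estimate_scale(null_stats, p):
    """Moment-matching estimate of c when null_stats ~ (1/c) * chi2(p - 1):
    E[stat] = (p - 1) / c, so c = (p - 1) / mean(stat)."""
    return (p - 1) / np.mean(null_stats)
```

Matching the variance as well (Var = 2(p-1)/c^2) would give a quick diagnostic of whether the scaled chi-square form actually fits.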
24

Kuo, Yu-Yu, and 郭育佑. "A generalization of rank tests based on interval-censored failure time data and its application to AIDS studies." Thesis, 2000. http://ndltd.ncl.edu.tw/handle/60525455288344490979.

Full text
Abstract:
Master's thesis
National Sun Yat-sen University
Graduate Institute of Applied Mathematics
ROC academic year 88 (1999-2000)
In this paper we propose a generalized rank test based on discrete interval-censored failure time data to determine whether two lifetime populations come from the same distribution. It reduces to the log-rank test or the Wilcoxon test when one has exact or right-censored data. Simulation shows that the proposed test performs quite satisfactorily. An example is presented to demonstrate how the proposed test can be applied in an AIDS study.
25

Han, Baoguang. "Statistical analysis of clinical trial data using Monte Carlo methods." Thesis, 2014. http://hdl.handle.net/1805/4650.

Full text
Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
In medical research, data analysis often requires complex statistical methods for which no closed-form solutions are available. Under such circumstances, Monte Carlo (MC) methods have found many applications. In this dissertation, we proposed several novel statistical models in which MC methods are utilized. For the first part, we focused on semicompeting risks data in which a non-terminal event is subject to dependent censoring by a terminal event. Based on an illness-death multistate survival model, we proposed flexible random effects models. Further, we extended our model to the setting of joint modeling, where both semicompeting risks data and repeated marker data are simultaneously analyzed. Since the proposed methods involve high-dimensional integrations, Bayesian Markov chain Monte Carlo (MCMC) methods were utilized for estimation. The use of Bayesian methods also facilitates the prediction of individual patient outcomes. The proposed methods were demonstrated in both simulation and case studies. For the second part, we focused on the re-randomization test, which is a nonparametric method that makes inferences solely based on the randomization procedure used in clinical trials. With this type of inference, a Monte Carlo method is often used for generating null distributions of the treatment difference. However, an issue was recently discovered when subjects in a clinical trial were randomized with unbalanced treatment allocation to two treatments according to the minimization algorithm, a randomization procedure frequently used in practice. The null distribution of the re-randomization test statistic was found not to be centered at zero, which compromised the power of the test. In this dissertation, we investigated the properties of the re-randomization test and proposed a weighted re-randomization method to overcome this issue. The proposed method was demonstrated through extensive simulation studies.
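The Monte Carlo re-randomization test can be sketched generically (illustrative code: balanced 1:1 complete randomization stands in for the minimization algorithm; the minimization setting with unbalanced allocation is precisely where, as the abstract notes, the naive null distribution can fail to be centered at zero):

```python
import numpy as np

def complete_randomization(rng, n):
    """Stand-in assignment scheme: balanced 1:1 complete randomization."""
    a = np.array([1] * (n // 2) + [0] * (n - n // 2))
    rng.shuffle(a)
    return a

def rerandomization_pvalue(y, observed, assign_fn, n_mc=2000, seed=0):
    """Two-sided Monte Carlo p-value: compare the observed mean difference
    with a null distribution obtained by re-running the randomization."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y, float)

    def diff(a):
        return y[a == 1].mean() - y[a == 0].mean()

    obs = diff(np.asarray(observed))
    null = np.array([diff(assign_fn(rng, y.size)) for _ in range(n_mc)])
    return float(np.mean(np.abs(null) >= abs(obs)))
```

Swapping `complete_randomization` for a minimization routine with unbalanced allocation reproduces the setting the weighted re-randomization method addresses.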
