Academic literature on the topic 'Censored failure time outcome'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Censored failure time outcome.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Censored failure time outcome"

1

Zhou, Qingning, Jianwen Cai, and Haibo Zhou. "Outcome-dependent sampling with interval-censored failure time data." Biometrics 74, no. 1 (August 3, 2017): 58–67. http://dx.doi.org/10.1111/biom.12744.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Antolini, Laura, and Maria Grazia Valsecchi. "Performance of binary markers for censored failure time outcome: nonparametric approach based on proportions." Statistics in Medicine 31, no. 11-12 (December 12, 2011): 1113–28. http://dx.doi.org/10.1002/sim.4443.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Issa, Naim, Camden L. Lopez, Aleksandar Denic, Sandra J. Taler, Joseph J. Larson, Walter K. Kremers, Luisa Ricaurte, et al. "Kidney Structural Features from Living Donors Predict Graft Failure in the Recipient." Journal of the American Society of Nephrology 31, no. 2 (January 23, 2020): 415–23. http://dx.doi.org/10.1681/asn.2019090964.

Full text
Abstract:
Background: Nephrosclerosis, nephron size, and nephron number vary among kidneys transplanted from living donors. However, whether these structural features predict kidney transplant recipient outcomes is unclear. Methods: Our study used computed tomography (CT) and implantation biopsy to investigate donated kidney features as predictors of death-censored graft failure at three transplant centers participating in the Aging Kidney Anatomy study. We used global glomerulosclerosis, interstitial fibrosis/tubular atrophy, artery luminal stenosis, and arteriolar hyalinosis to measure nephrosclerosis; mean glomerular volume, cortex volume per glomerulus, and mean cross-sectional tubular area to measure nephron size; and calculations from CT cortical volume and glomerular density on biopsy to assess nephron number. We also determined the death-censored risk of graft failure with each structural feature after adjusting for the predictive clinical characteristics of donor and recipient. Results: The analysis involved 2293 donor-recipient pairs. Mean recipient follow-up was 6.3 years, during which 287 death-censored graft failures and 424 deaths occurred. Factors that predicted death-censored graft failure independent of both donor and recipient clinical characteristics included interstitial fibrosis/tubular atrophy, larger cortical nephron size (but not nephron number), and smaller medullary volume. In a subset with 12 biopsy section slides, arteriolar hyalinosis also predicted death-censored graft failure. Conclusions: Subclinical nephrosclerosis, larger cortical nephron size, and smaller medullary volume in healthy donors modestly predict death-censored graft failure in the recipient, independent of donor or recipient clinical characteristics. These findings provide insights into a graft's "intrinsic quality" at the time of donation, and further support the use of intraoperative biopsies to identify kidney grafts that are at higher risk for failure.
APA, Harvard, Vancouver, ISO, and other styles
4

Thongprayoon, Charat, Caroline C. Jadlowiec, Shennen A. Mao, Michael A. Mao, Napat Leeaphorn, Wisit Kaewput, Pattharawin Pattharanitima, Pitchaphon Nissaisorakarn, Matthew Cooper, and Wisit Cheungpasitporn. "Distinct phenotypes of kidney transplant recipients aged 80 years or older in the USA by machine learning consensus clustering." BMJ Surgery, Interventions, & Health Technologies 5, no. 1 (February 2023): e000137. http://dx.doi.org/10.1136/bmjsit-2022-000137.

Full text
Abstract:
Objectives: This study aimed to identify distinct clusters of very elderly kidney transplant recipients aged ≥80 and assess clinical outcomes among these unique clusters. Design: Cohort study with a machine learning (ML) consensus clustering approach. Setting and participants: All very elderly (age ≥80 at time of transplant) kidney transplant recipients in the Organ Procurement and Transplantation Network/United Network for Organ Sharing database from 2010 to 2019. Main outcome measures: Distinct clusters of very elderly kidney transplant recipients and their post-transplant outcomes, including death-censored graft failure, overall mortality and acute allograft rejection among the assigned clusters. Results: Consensus cluster analysis was performed in 419 very elderly kidney transplant recipients and identified three distinct clusters that best represented the clinical characteristics of very elderly kidney transplant recipients. Recipients in cluster 1 received standard Kidney Donor Profile Index (KDPI) non-extended criteria donor (ECD) kidneys from deceased donors. Recipients in cluster 2 received kidneys from older, hypertensive ECD deceased donors with a KDPI score ≥85%. Kidneys for cluster 2 patients had longer cold ischaemia time and the highest use of machine perfusion. Recipients in clusters 1 and 2 were more likely to be on dialysis at the time of transplant (88.3%, 89.4%). Recipients in cluster 3 were more likely to be preemptive (39%) or to have had a dialysis duration of less than 1 year (24%). These recipients received living donor kidney transplants. Cluster 3 had the most favourable post-transplant outcomes. Compared with cluster 3, cluster 1 had comparable survival but higher death-censored graft failure, while cluster 2 had lower patient survival, higher death-censored graft failure and more acute rejection. Conclusions: Our study used an unsupervised ML approach to cluster very elderly kidney transplant recipients into three clinically unique clusters with distinct post-transplant outcomes. These findings from an ML clustering approach provide additional understanding towards individualised medicine and opportunities to improve care for very elderly kidney transplant recipients.
APA, Harvard, Vancouver, ISO, and other styles
5

Mohsin Saeed, Noora, and Faiz Ahmed Mohamed Elfaki. "Parametric Weibull Model Based on Imputations Techniques for Partly Interval Censored Data." Austrian Journal of Statistics 49, no. 3 (February 20, 2020): 30–37. http://dx.doi.org/10.17713/ajs.v49i3.1027.

Full text
Abstract:
The term survival analysis is used in a broad sense to describe the collection of statistical procedures for analysing data in which the outcome variable of interest is the time until an event occurs. The time to failure of an experimental unit may be censored, and the censoring can be right, left, interval, or partly interval censored (PIC). In this paper, the analysis of this model is conducted based on a parametric Weibull model via PIC data. Moreover, two imputation techniques are used: left point and right point. The effectiveness of the proposed model is tested through numerical analysis on simulated and secondary data sets.
APA, Harvard, Vancouver, ISO, and other styles
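The right-point imputation described in the abstract above can be sketched in a few lines. Everything below (the interval widths, the Weibull parameters, the Nelder-Mead fit) is an illustrative assumption, not the authors' implementation: each (L, R] censoring interval is replaced by its right endpoint, and a Weibull model is then fitted to the imputed times by ordinary maximum likelihood.

```python
import numpy as np
from scipy.optimize import minimize

def weibull_neg_loglik(params, t):
    """Negative log-likelihood of an uncensored Weibull(shape, scale) sample."""
    shape, scale = params
    if shape <= 0 or scale <= 0:
        return np.inf
    z = t / scale
    return -np.sum(np.log(shape / scale) + (shape - 1) * np.log(z) - z ** shape)

def fit_weibull_right_endpoint(intervals):
    """Right-point imputation: replace each (L, R] interval by R, then fit by ML."""
    t = np.array([r for _, r in intervals], dtype=float)
    res = minimize(weibull_neg_loglik, x0=[1.0, t.mean()],
                   args=(t,), method="Nelder-Mead")
    return res.x  # (shape, scale)

rng = np.random.default_rng(0)
true_t = rng.weibull(2.0, 500) * 3.0                        # Weibull(shape 2, scale 3)
intervals = [(max(t - 0.5, 0.0), t + 0.5) for t in true_t]  # coarse inspection windows
shape, scale = fit_weibull_right_endpoint(intervals)
```

Left-point imputation is the same sketch with the right endpoint replaced by the left one; comparing the two fits shows how sensitive the parameter estimates are to the imputation choice.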
6

Zhou, Qingning, Jianwen Cai, and Haibo Zhou. "Semiparametric inference for a two-stage outcome-dependent sampling design with interval-censored failure time data." Lifetime Data Analysis 26, no. 1 (January 7, 2019): 85–108. http://dx.doi.org/10.1007/s10985-019-09461-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Guure, Chris B., and Samuel Bosomprah. "Bayesian Perspective on Random Censored Survival Data." International Scholarly Research Notices 2014 (October 29, 2014): 1–9. http://dx.doi.org/10.1155/2014/430357.

Full text
Abstract:
A unit is said to be randomly censored when the information on the time of occurrence of an event is not available due to loss to follow-up, withdrawal, or nonoccurrence of the outcome event before the end of the study. It is assumed under independent random/noninformative censoring that each individual has his/her own failure time T and censoring time C; however, one can only observe the random vector, say, (X, δ). The classical approach is considered for analysing the generalised exponential distribution with random or noninformative censored samples, which occur most often in biological or medical studies. Bayes methods are also considered via a numerical approximation suggested by Lindley in 1980 and the Laplace approximation procedure developed by Tierney and Kadane in 1986, with assumed informative priors alongside the linear exponential and squared error loss functions. A simulation study is carried out to compare the estimators proposed in this paper. Two datasets are also illustrated.
APA, Harvard, Vancouver, ISO, and other styles
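The random-censoring setup described in the abstract above, observing only (X, δ) with X = min(T, C) and δ = 1{T ≤ C}, is easy to simulate. The sketch below uses plain exponential failure times for concreteness (the paper works with the generalised exponential distribution), so the closed-form censored-data MLE of the rate applies; the rates chosen are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
T = rng.exponential(2.0, n)            # failure times, rate 0.5
C = rng.exponential(4.0, n)            # independent censoring times
X = np.minimum(T, C)                   # observed follow-up time
delta = (T <= C).astype(int)           # 1 = failure observed, 0 = censored

# With exponential failure times, the MLE of the rate under random
# right-censoring is (number of events) / (total time at risk).
rate_hat = delta.sum() / X.sum()
```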
8

Wang, Jian, and Sanjay Shete. "Estimation of indirect effect when the mediator is a censored variable." Statistical Methods in Medical Research 27, no. 10 (January 30, 2017): 3010–25. http://dx.doi.org/10.1177/0962280217690414.

Full text
Abstract:
A mediation model explores the direct and indirect effects of an initial variable (X) on an outcome variable (Y) by including a mediator (M). In many realistic scenarios, investigators observe censored data instead of the complete data. Current research in mediation analysis for censored data focuses mainly on censored outcomes, but not censored mediators. In this study, we proposed a strategy based on the accelerated failure time model and a multiple imputation approach. We adapted a measure of the indirect effect for the mediation model with a censored mediator, which can assess the indirect effect at both the group and individual levels. Based on simulation, we established the bias in the estimations of different paths (i.e. the effects of X on M [a], of M on Y [b] and of X on Y given mediator M [c′]) and indirect effects when analyzing the data using the existing approaches, including a naïve approach implemented in software such as Mplus, complete-case analysis, and the Tobit mediation model. We conducted simulation studies to investigate the performance of the proposed strategy compared to that of the existing approaches. The proposed strategy accurately estimates the coefficients of different paths, indirect effects and percentages of the total effects mediated. We applied these mediation approaches to the study of SNPs, age at menopause and fasting glucose levels. Our results indicate that there is no indirect effect of association between SNPs and fasting glucose level that is mediated through the age at menopause.
APA, Harvard, Vancouver, ISO, and other styles
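The product-of-paths structure described in the abstract above (a for X→M, b for M→Y given X, with indirect effect a·b) can be sketched for fully observed data; the coefficients and OLS fits below are illustrative assumptions. The paper's contribution is precisely what this naive sketch lacks: handling a censored mediator M via an AFT model and multiple imputation.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 5000
X = rng.normal(size=n)
M = 0.5 * X + rng.normal(size=n)             # path a = 0.5
Y = 0.7 * M + 0.3 * X + rng.normal(size=n)   # paths b = 0.7, c' = 0.3

# OLS for each path on complete, uncensored data; censoring in M is
# exactly what breaks this naive product-of-coefficients estimate.
a = np.polyfit(X, M, 1)[0]
design = np.column_stack([M, X, np.ones(n)])
b, c_prime = np.linalg.lstsq(design, Y, rcond=None)[0][:2]
indirect = a * b                              # indirect effect, true value 0.35
```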
9

Wu, Xianfeng, Xiao Yang, Xinhui Liu, Chunyan Yi, Qunying Guo, Xiaoran Feng, Haiping Mao, Fengxian Huang, and Xueqing Yu. "Patient Survival and Technique Failure in Continuous Ambulatory Peritoneal Dialysis Patients with Prior Stroke." Peritoneal Dialysis International: Journal of the International Society for Peritoneal Dialysis 36, no. 3 (May 2016): 308–14. http://dx.doi.org/10.3747/pdi.2014.00030.

Full text
Abstract:
Background: To investigate patient survival and technique failure in patients with prior stroke receiving continuous ambulatory peritoneal dialysis (CAPD) in Southern China. Methods: This was a retrospective study. All subjects were recruited from the peritoneal dialysis center in The First Affiliated Hospital of Sun Yat-sen University from 1 January 2006 to 31 December 2010. All eligible patients were assigned to the stroke group or the non-stroke group according to a history of stroke before receiving CAPD. The primary outcomes were all-cause mortality and death-censored technique failure. Cox regression was used to estimate risk factors of all-cause mortality and death-censored technique failure. Results: Of the 1,068 recruited patients, 75 (7.0%) patients had a previous history of stroke. All-cause mortality and death-censored technique failure were significantly higher in the stroke group compared with the non-stroke group (odds ratio [OR] 2.67, 95% confidence interval [CI] 1.59 – 4.46 and OR 2.52, 95% CI 1.19 – 5.34, respectively). Older age (per 10 years, hazard ratio [HR] 1.90, 95% CI 1.07 – 3.38), lower body mass index (BMI 18.5 – 23.9 vs < 18.5 kg/m2 reference, HR 0.17, 95% CI 0.05 – 0.55) and time to the first episode of peritonitis (HR 0.93, 95% CI 0.89 – 0.96) were independently associated with increased risk of all-cause mortality in patients with prior stroke. In addition, time to the first episode of peritonitis was associated with decreased risk of death-censored technique failure (HR 0.91, 95% CI 0.84 – 0.99) in those with prior stroke. Conclusions: Continuous ambulatory peritoneal dialysis patients with prior stroke had high rates of all-cause mortality and technique failure compared with those without prior stroke. Older age, lower BMI, and time to the first episode of peritonitis were independent risk factors of all-cause mortality in patients with prior stroke.
APA, Harvard, Vancouver, ISO, and other styles
10

Stapleton, Caragh P., Graham M. Lord, Peter J. Conlon, and Gianpiero L. Cavalleri. "The relationship between donor-recipient genetic distance and long-term kidney transplant outcome." HRB Open Research 3 (July 29, 2020): 47. http://dx.doi.org/10.12688/hrbopenres.13021.1.

Full text
Abstract:
Background: We set out to quantify shared genetic ancestry between unrelated kidney donor-recipient pairs and test it as a predictor of time to graft failure. Methods: In a homogenous, unrelated, European cohort of deceased-donor kidney transplant pairs (n pairs = 1,808), we calculated, using common genetic variation, shared ancestry at the genic (n loci=40,053) and genomic level. We conducted a sub-analysis focused on transmembrane protein coding genes (n transcripts=8,637) and attempted replication of a previously published nonsynonymous transmembrane mismatch score. Measures of shared genetic ancestry were tested in a survival model against time to death-censored graft failure. Results: Shared ancestry calculated across the human leukocyte antigen (HLA) significantly associated with graft survival in individuals who had a high serological mismatch (n pairs = 186) with those who did not have any HLA mismatches indicating that shared ancestry calculated specific loci can capture known associations with genes impacting graft outcome. None of the other measures of shared ancestry at a genic level, genome-wide scale, transmembrane subset or nonsynonymous transmembrane mismatch score analysis were significant predictors of time to graft failure. Conclusions: In a large unrelated, deceased-donor European ancestry renal transplant cohort, shared donor-recipient genetic ancestry, calculated using common genetic variation, has limited value in predicting transplant outcome both on a genomic scale and at a genic level (other than at the HLA loci).
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Censored failure time outcome"

1

ROTA, MATTEO. "Cut-point finding methods for continuous biomarkers." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2013. http://hdl.handle.net/10281/40114.

Full text
Abstract:
My PhD dissertation deals with statistical methods for cut-point finding for continuous biomarkers. Categorization is often needed for clinical decision making when dealing with diagnostic (or prognostic) biomarkers and a dichotomous or censored failure time outcome. This allows the definition of two or more prognostic risk groups, or also patient stratifications for inclusion in randomized clinical trials (RCTs). We investigate the following cut-point finding methods: minimum P-value, Youden index, concordance probability and point closest-to-(0,1) corner in the ROC plane. We compare them by assuming both Normal and Gamma biomarker distributions, showing whether they lead to the identification of the same true cut-point, and further investigate their performance by simulation. Within the framework of censored survival data, we consider new estimation approaches for the optimal cut-point, which use a conditional weighting method to estimate the true positive and false positive fractions. Motivating examples on real datasets are discussed within the dissertation for both the dichotomous and censored failure time outcome. In all simulation scenarios, the point closest-to-(0,1) corner in the ROC plane and concordance probability approaches outperformed the other methods. Both these methods showed good performance in the estimation of the optimal cut-point of a biomarker. However, to improve the communicability of results, the Youden index or the concordance probability associated with the estimated cut-point could be reported to summarize the associated classification accuracy. The use of the minimum P-value approach for cut-point finding is not recommended because its objective function is computed under the null hypothesis of absence of association between the true disease status and X. This is in contrast with the presence of some discrimination potential of the biomarker X that leads to the dichotomization issue.
The investigated cut-point finding methods are based on measures, i.e. sensitivity and specificity, defined conditionally on the outcome. My PhD dissertation opens the question of whether these methods could be applied starting from predictive values, which typically represent the most useful information for clinical decisions on treatments. However, while sensitivity and specificity are invariant to disease prevalence, predictive values vary across populations with different disease prevalence. This is an important drawback of the use of predictive values for cut-point finding. More generally, great care should be taken when establishing a biomarker cut-point for clinical use. Methods for categorizing new biomarkers are often essential in clinical decision-making, even if categorization of a continuous biomarker comes at a considerable loss of power and information. In the future, new methods involving the study of the functional form between the biomarker and the outcome through regression techniques such as fractional polynomials or spline functions should be considered to alternatively define cut-points for clinical use. Moreover, in spite of the aforementioned drawback related to the use of predictive values, we also think that additional new methods for cut-point finding should be developed starting from predictive values.
APA, Harvard, Vancouver, ISO, and other styles
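Two of the cut-point criteria compared in this dissertation, the Youden index and the point closest-to-(0,1) corner in the ROC plane, can be sketched for the uncensored binary-outcome case (the censored case needs the conditional weighting the abstract mentions). The marker distributions and search grid below are illustrative assumptions.

```python
import numpy as np

def cutpoint_criteria(x, y, cuts):
    """Youden index and closest-to-(0,1) cut-points for marker x, binary y."""
    sens = np.array([np.mean(x[y == 1] > c) for c in cuts])   # true positive fraction
    spec = np.array([np.mean(x[y == 0] <= c) for c in cuts])  # true negative fraction
    c_youden = cuts[np.argmax(sens + spec - 1.0)]             # max J = sens + spec - 1
    c_01 = cuts[np.argmin((1.0 - sens) ** 2 + (1.0 - spec) ** 2)]
    return c_youden, c_01

rng = np.random.default_rng(2)
n = 20_000
x = np.concatenate([rng.normal(0.0, 1.0, n),    # marker in controls
                    rng.normal(2.0, 1.0, n)])   # marker in cases
y = np.repeat([0, 1], n)
c_youden, c_01 = cutpoint_criteria(x, y, np.linspace(-2.0, 4.0, 601))
```

For two equal-variance normal populations with equal group sizes, both criteria target the midpoint of the two means, here 1.0.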
2

Gorelick, Jeremy Sun Jianguo. "Nonparametric analysis of interval-censored failure time data." Diss., Columbia, Mo. : University of Missouri--Columbia, 2009. http://hdl.handle.net/10355/7009.

Full text
Abstract:
Title from PDF of title page (University of Missouri--Columbia, viewed on Feb 26, 2010). The entire thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file; a non-technical public abstract appears in the public.pdf file. Dissertation advisor: Dr. (Tony) Jianguo Sun. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
3

Wang, Lianming. "Statistical analysis of multivariate interval-censored failure time data." Diss., Columbia, Mo. : University of Missouri-Columbia, 2006. http://hdl.handle.net/10355/4375.

Full text
Abstract:
Thesis (Ph.D.)--University of Missouri-Columbia, 2006.
The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file viewed on (May 2, 2007) Vita. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
4

Cai, Jianwen. "Generalized estimating equations for censored multivariate failure time data /." Thesis, Connect to this title online; UW restricted, 1992. http://hdl.handle.net/1773/9581.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Chen, Man-Hua. "Statistical analysis of multivariate interval-censored failure time data." Diss., Columbia, Mo. : University of Missouri-Columbia, 2007. http://hdl.handle.net/10355/4776.

Full text
Abstract:
Thesis (Ph.D.)--University of Missouri-Columbia, 2007.
The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file (viewed on March 6, 2009) Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
6

Zhao, Qiang. "Nonparametric treatment comparisons for interval-censored failure time data /." free to MU campus, to others for purchase, 2004. http://wwwlib.umi.com/cr/mo/fullcit?p3144474.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Zhu, Chao. "Nonparametric and semiparametric methods for interval-censored failure time data." Diss., Columbia, Mo. : University of Missouri-Columbia, 2006. http://hdl.handle.net/10355/4415.

Full text
Abstract:
Thesis (Ph.D.)--University of Missouri-Columbia, 2006.
The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file viewed on (May 2, 2007) Vita. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
8

Wong, Kin-yau, and 黃堅祐. "Analysis of interval-censored failure time data with long-term survivors." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2012. http://hub.hku.hk/bib/B48199473.

Full text
Abstract:
Failure time data analysis, or survival analysis, is involved in various research fields, such as medicine and public health. One basic assumption in standard survival analysis is that every individual in the study population will eventually experience the event of interest. However, this assumption is usually violated in practice, for example when the variable of interest is the time to relapse of a curable disease, resulting in the existence of long-term survivors. Also, presence of unobservable risk factors in the group of susceptible individuals may introduce heterogeneity to the population, which is not properly addressed in standard survival models. Moreover, the individuals in the population may be grouped in clusters, where there are associations among observations from a cluster. There are methodologies in the literature to address each of these problems, but there is yet no natural and satisfactory way to accommodate the coexistence of a non-susceptible group and the heterogeneity in the susceptible group under a univariate setting. Also, various kinds of associations among survival data with a cure are not properly accommodated. To address the above-mentioned problems, a class of models is introduced to model univariate and multivariate data with long-term survivors. A semiparametric cure model for univariate failure time data with long-term survivors is introduced. It accommodates a proportion of non-susceptible individuals and the heterogeneity in the susceptible group using a compound-Poisson distributed random effect term, which is commonly called a frailty. It is a frailty-Cox model which does not place any parametric assumption on the baseline hazard function. An estimation method using multiple imputation is proposed for right-censored data, and the method is naturally extended to accommodate interval-censored data.
The univariate cure model is extended to a multivariate setting by introducing correlations among the compound-Poisson frailties for individuals from the same cluster. This multivariate cure model is similar to a shared frailty model where the degree of association among each pair of observations in a cluster is the same. The model is further extended to accommodate repeated measurements from a single individual leading to serially correlated observations. Similar estimation methods using multiple imputation are developed for the multivariate models. The univariate model is applied to a breast cancer data set and the multivariate models are applied to the hypobaric decompression sickness data from the National Aeronautics and Space Administration, although the methodologies are applicable to a wide range of data sets.
Master of Philosophy thesis, Statistics and Actuarial Science
APA, Harvard, Vancouver, ISO, and other styles
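The compound-Poisson frailty device in the thesis above yields a cured fraction because the frailty has positive mass at zero: with N ~ Poisson(ρ) gamma summands, P(Z = 0) = e^(−ρ). A minimal simulation sketch, where ρ and the gamma parameters are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50_000
rho = 0.8                          # Poisson rate of the compound sum
N = rng.poisson(rho, n)            # number of gamma summands per subject
Z = np.array([rng.gamma(1.5, 1.0, k).sum() for k in N])  # frailty; 0 when N = 0

cure_frac = np.mean(Z == 0.0)      # P(Z = 0) = exp(-rho): the cured fraction
# conditional survival S(t | Z=z) = exp(-z * H0(t)) tends to P(Z = 0) as t grows
```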
9

Bouadoumou, Maxime K. "Jackknife Empirical Likelihood for the Accelerated Failure Time Model with Censored Data." Digital Archive @ GSU, 2011. http://digitalarchive.gsu.edu/math_theses/112.

Full text
Abstract:
The Kendall and Gehan estimating functions are used to estimate the regression parameter in the accelerated failure time (AFT) model with censored observations. The accelerated failure time model is the preferred survival analysis method because it maintains a consistent association between the covariate and the survival time. The jackknife empirical likelihood method is used because it overcomes computational difficulty by circumventing the construction of the nonlinear constraint. Jackknife empirical likelihood turns the statistic of interest into a sample mean based on jackknife pseudo-values. A U-statistic approach is used to construct the confidence intervals for the regression parameter. We conduct a simulation study to compare the Wald-type procedure, the empirical likelihood, and the jackknife empirical likelihood in terms of coverage probability and average length of confidence intervals. The jackknife empirical likelihood method has better performance and overcomes the under-coverage problem of the Wald-type method. A real data set is also used to illustrate the proposed methods.
APA, Harvard, Vancouver, ISO, and other styles
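The jackknife pseudo-value construction at the heart of jackknife empirical likelihood can be sketched generically; the statistic and data below are illustrative assumptions, not the thesis's Kendall or Gehan estimating functions.

```python
import numpy as np

def jackknife_pseudovalues(x, stat):
    """V_i = n*stat(x) - (n-1)*stat(x with i deleted); JEL treats the V_i
    as an approximately i.i.d. sample whose mean estimates the parameter."""
    n = len(x)
    full = stat(x)
    loo = np.array([stat(np.delete(x, i)) for i in range(n)])  # leave-one-out
    return n * full - (n - 1) * loo

rng = np.random.default_rng(3)
x = rng.exponential(2.0, 200)
pv = jackknife_pseudovalues(x, np.mean)
```

For the sample mean the pseudo-values reduce exactly to the observations themselves, which is why treating them as an i.i.d. sample is plausible for smooth statistics.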
10

Goodall, R. L. "Analysis of interval-censored failure time data with application to studies of HIV infection." Thesis, University College London (University of London), 2007. http://discovery.ucl.ac.uk/1446247/.

Full text
Abstract:
In clinical trials and cohort studies the event of interest is often not observable and is known only to have occurred between the visit when the event was first observed and the previous visit; such data are called interval-censored. This thesis develops three pieces of research that build upon published methods for interval-censored data. Novel methods are developed which can be applied via self-written macros in standard packages, with the aim of increasing the use of appropriate methods in applied medical research. The non-parametric maximum likelihood estimator [1,2] (NPMLE) is the most common statistical method for estimating the survivor function for interval-censored data. However, the choice of method for obtaining confidence intervals for the survivor function is unclear. Three methods are assessed and compared using simulated data and data from the MRC Delta trial [3]. Non- or semi-parametric methods that correctly account for interval-censoring are not readily available in statistical packages. Typically the event time is taken to be the right endpoint of the censoring interval and standard methods (e.g. Kaplan-Meier) for the analysis of right-censored failure time data are used, giving biased estimates of the survival curve. A simulation study compared simple imputation using the right endpoint and interval midpoint to the NPMLE and a proposed smoothed version of the NPMLE that extends the work of Pan and Chappell [4]. These methods were also applied to data from the CHIPS study [5]. Different approaches to estimating the effect of a binary covariate are compared: (i) a proportional hazards model [6], (ii) a piecewise exponential model [7], (iii) a simpler proportional hazards model based on imputed event times, and (iv) a proposed approximation to the piecewise exponential model that is a more rigorous alternative to simple imputation methods whilst simple to fit using standard software.
APA, Harvard, Vancouver, ISO, and other styles
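The "simple imputation then Kaplan-Meier" approach compared in the thesis above can be sketched directly; the inspection schedule and event-time distribution below are illustrative assumptions. Midpoint imputation assigns each event to the centre of its censoring interval and then applies the standard product-limit estimator.

```python
import numpy as np

def kaplan_meier(times, events):
    """Product-limit estimator; returns event times and S(t) just after each."""
    uniq = np.unique(times[events == 1])
    surv, s = [], 1.0
    for u in uniq:
        at_risk = np.sum(times >= u)
        deaths = np.sum((times == u) & (events == 1))
        s *= 1.0 - deaths / at_risk
        surv.append(s)
    return uniq, np.array(surv)

rng = np.random.default_rng(4)
true_t = rng.exponential(2.0, 1000)       # latent event times
L = np.floor(true_t)                      # inspections at 0, 1, 2, ...
R = L + 1.0                               # event known only to lie in (L, R]
mid = (L + R) / 2.0                       # midpoint imputation
t_grid, S = kaplan_meier(mid, np.ones(len(mid), dtype=int))
```

Plotting S against the true exponential survivor function makes the bias the thesis discusses visible: the imputed curve is a step function concentrated on the interval midpoints.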

Books on the topic "Censored failure time outcome"

1

Interval-censored time-to-event data: Methods and applications. Boca Raton: Chapman and Hall/CRC, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Hart, Graeme K., and David Pilcher. Severity of illness scoring systems. Oxford University Press, 2016. http://dx.doi.org/10.1093/med/9780199600830.003.0029.

Full text
Abstract:
Clinical outcome comparisons for research and quality assurance require risk adjustment measures validated in the population of interest. There are many scoring systems using intensive care unit (ICU)-specific or administrative data sets, or both. Risk-adjusted ICU and hospital mortality outcome measures may not be granular enough or may be censored before the absolute risk of the studied outcome reaches that of the population at large. Data linkage methods may be used to examine longer-term outcomes. Organ failure scores provide a method for assessing the intra-episode time course of illness, and scores using treatment variables may be useful for assessing care requirements. Each adjustment system has specific merits and limitations, which must be understood for appropriate use. Graphical representations of the comparisons facilitate understanding and a time-appropriate response to variations in outcome. There are, as yet, no universally accepted measures for severity of illness and risk adjustment in deteriorating patients outside the ICU.
APA, Harvard, Vancouver, ISO, and other styles
3

Statistical Analysis of Interval-Censored Failure Time Data. Springer London, Limited, 2006.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

The Statistical Analysis of Interval-censored Failure Time Data. New York, NY: Springer New York, 2006. http://dx.doi.org/10.1007/0-387-37119-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Sun, Jianguo. The Statistical Analysis of Interval-censored Failure Time Data. Springer, 2010.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Sun, Jianguo, Karl E. Peace, and Ding-Geng Chen. Interval-Censored Time-to-event Data. Taylor & Francis Group, 2020.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

The Statistical Analysis of Interval-censored Failure Time Data (Statistics for Biology and Health). Springer, 2006.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Bradbury, Ian K. The robust analysis of right censored survival data using the accelerated failure time model. 1986.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Chen, Ding-Geng. Interval-Censored Time-To-Event Data: Methods and Applications. Taylor & Francis Group, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Lesaffre, Emmanuel, Kris Bogaerts, and Arnost Komárek. Survival Analysis with Interval-Censored Data. Taylor & Francis Group, 2020.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Censored failure time outcome"

1

Wu, Qiwei, Hui Zhao, and Jianguo Sun. "Variable Selection of Interval-Censored Failure Time Data." In Emerging Topics in Statistics and Biostatistics, 475–87. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-42196-0_20.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Gao, Xiaoming, Michael G. Hudgens, and Fei Zou. "Case-Cohort Studies with Time-Dependent Covariates and Interval-Censored Outcome." In Emerging Topics in Modeling Interval-Censored Survival Data, 221–34. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-12366-5_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Du, Mingyue. "Overview of Recent Advances on the Analysis of Interval-Censored Failure Time Data." In Emerging Topics in Modeling Interval-Censored Survival Data, 9–24. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-12366-5_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Sevilimedu, Varadan, Lili Yu, Ding-Geng Chen, and Yuhlong Lio. "Misclassification Simulation Extrapolation Procedure for Interval-Censored Log-Logistic Accelerated Failure Time Model." In Emerging Topics in Modeling Interval-Censored Survival Data, 295–308. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-12366-5_15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Cook, Tyler, Zhigang Zhang, and Jianguo Sun. "Simulation Studies on the Effects of the Censoring Distribution Assumption in the Analysis of Interval-Censored Failure Time Data." In Monte-Carlo Simulation-Based Statistical Modeling, 319–46. Singapore: Springer Singapore, 2017. http://dx.doi.org/10.1007/978-981-10-3307-0_15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Bacharoudis, Konstantinos, Atanas Popov, and Svetan Ratchev. "Application of Advanced Simulation Methods for the Tolerance Analysis of Mechanical Assemblies." In IFIP Advances in Information and Communication Technology, 153–67. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-72632-4_11.

Full text
Abstract:
In the frame of a statistical tolerance analysis of complex assemblies, for example an aircraft wing, the capability to predict, accurately and quickly, specified very small quantiles of the distribution of the assembly key characteristic becomes crucial. The problem is significantly magnified when the tolerance synthesis problem is considered, in which several tolerance analyses are performed and thus a reliability analysis problem is nested inside an optimisation one in a fully probabilistic approach. The need to reduce the computational time and accurately estimate the specified probabilities is critical. Therefore, herein, a systematic study of several state-of-the-art simulation methods is performed, and they are critically evaluated with respect to their efficiency in dealing with tolerance analysis problems. It is demonstrated that tolerance analysis problems are characterised by high dimensionality, high non-linearity of the state functions, disconnected failure domains, implicit state functions and small probability estimations. Therefore, the successful implementation of reliability methods becomes a formidable task. Herein, advanced simulation methods are combined with in-house developed assembly models based on the Homogeneous Transformation Matrix method as well as off-the-shelf Computer Aided Tolerance tools. The main outcome of the work is that by using an appropriate reliability method, computational time can be reduced whilst the probability of defective products can be accurately predicted. Furthermore, the connection of advanced mathematical toolboxes with off-the-shelf 3D tolerance tools in a process integration framework introduces benefits for successfully dealing with the tolerance allocation problem in the future using dedicated and powerful computational tools.
APA, Harvard, Vancouver, ISO, and other styles
7

van Kreveld, Marc. "The accelerated failure time model." In Survival Analysis with Interval-Censored Data, 179–220. Chapman and Hall/CRC, 2017. http://dx.doi.org/10.1201/9781315116945-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

van Kreveld, Marc. "The Bayesian accelerated failure time model." In Survival Analysis with Interval-Censored Data, 381–421. Chapman and Hall/CRC, 2017. http://dx.doi.org/10.1201/9781315116945-12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Sun, Jianguo. "Statistical Analysis of Doubly Interval-Censored Failure Time Data." In Handbook of Statistics, 105–22. Elsevier, 2003. http://dx.doi.org/10.1016/s0169-7161(03)23006-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Dai, Hongsheng, and Huan Wang. "Accelerated failure time model for truncated and censored survival data." In Analysis for Time-to-Event Data under Censoring and Truncation, 61–78. Elsevier, 2017. http://dx.doi.org/10.1016/b978-0-12-805480-2.50004-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Censored failure time outcome"

1

Karimi, Mostafa, Noor Akma Ibrahim, Mohd Rizam Abu Bakar, and Jayanthi Arasan. "Rank-based inference for the accelerated failure time model in the presence of interval censored data." In INNOVATIONS THROUGH MATHEMATICAL AND STATISTICAL RESEARCH: Proceedings of the 2nd International Conference on Mathematical Sciences and Statistics (ICMSS2016). Author(s), 2016. http://dx.doi.org/10.1063/1.4952568.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Cheng, S. C. "An application of the model building of fuzzy dynamic integrated judgment in the total failure time of censored data." In IEEE Annual Meeting of the Fuzzy Information Processing Society, NAFIPS '04. IEEE, 2004. http://dx.doi.org/10.1109/nafips.2004.1337435.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Jones, David I. G. "Stress Rupture of Ceramics: Time-Temperature Relationships." In ASME 1987 International Gas Turbine Conference and Exhibition. American Society of Mechanical Engineers, 1987. http://dx.doi.org/10.1115/87-gt-81.

Full text
Abstract:
Ceramic materials, such as Silicon Carbide and Silicon Nitride, have been considered as replacements for metals in many components of modern engines, as opportunities have arisen to make use of their unique properties. Improved production capabilities and end-product defect control have contributed to this progress, but limited information on the effect of these parameters on long term behavior, such as creep and stress rupture, may have contributed to some of the difficulties. This paper will examine the stress-rupture behavior of some ceramic materials, to demonstrate a unique relationship between the time to failure, at a given stress, and the temperature. The outcome is a single parameter combining the time to failure and temperature variables, which can allow designers to predict the useful life of ceramic components more accurately than insufficient data would otherwise allow.
APA, Harvard, Vancouver, ISO, and other styles
4

Fujiyama, K., H. Suzuki, and T. Tsuboi. "Risk Analysis of Failure Under Combined Damage Modes of Steam Turbine Components." In ASME 2010 International Mechanical Engineering Congress and Exposition. ASMEDC, 2010. http://dx.doi.org/10.1115/imece2010-38457.

Full text
Abstract:
The Risk Based Maintenance (RBM) procedure was applied to a steam turbine casing as a typical example of components suffering from combined damage modes such as creep-fatigue cracking. Risk analysis was conducted through the field inspection database for cracks at the portion under creep-fatigue conditions. The primary stage of RBM is a semi-quantitative risk analysis for risk prioritization of events using risk matrixes coupled with parts breakdown trees and event trees. The secondary stage is a quantitative probabilistic risk assessment (PRA) to optimize maintenance intervals for the prioritized issues. The unreliability functions were expressed as two-dimensional log-normal-type probability functions of operation time and start-up cycles. As the field data include censored or suspended points, the precise correlation factors are not always obtained from the data sets. To estimate the most likely correlation factor, Bayesian inference was introduced into the analysis of the two-dimensional probability functions. The risk functions of operation periods were obtained using the assumed operation pattern, that is, the ratio of start-up cycles to operation time, by substituting this relation into the two-dimensional probability distribution functions. The total expected cost function was defined as the sum of the periodical repair cost rate and the total risk cost of the subject event and component. The cost function usually has an optimum cost point, which can be used as the basis for decision making on maintenance intervals.
APA, Harvard, Vancouver, ISO, and other styles
5

Carlucci, Elisa, and Leonardo Tognarelli. "Mixed Weibull Distribution as Best Representative of Forced Outage Distribution to be Implemented in BlockSim." In ASME 2014 Power Conference. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/power2014-32282.

Full text
Abstract:
When an RBD is required, a failure distribution and an outage distribution are requested as inputs for each block. Should field data exist, the two distributions can be obtained by managing the data in the most appropriate manner. While the failure distribution is often a right-censored data set, the outage distribution is always a time-to-failure distribution obtained through RRX interpolation. Moreover, the reliability approach being conservative, the Weibull 3P assumption is welcome, because the gamma value, in this particular circumstance, guarantees a minimum outage duration. However, it has been noticed that while the 2P can sometimes be insufficiently conservative, the 3P can turn out too conservative, with the risk of declaring a lower target than the actual one and hence the consequence of not being commercially competitive. The proposed approach applies and develops the mixed Weibull technique, where each subpopulation distribution comes from a single dataset, which represents a step forward from the traditional 2P and the more conservative 3P.
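The 2P, 3P, and mixed Weibull forms contrasted in this abstract can be sketched as reliability (survival) functions; the parameter values in the usage note are illustrative, not taken from the paper:

```python
import math

def weibull_reliability(t, beta, eta, gamma=0.0):
    """2-parameter Weibull when gamma == 0; 3-parameter when gamma > 0.

    The location parameter gamma enforces a minimum time before any event
    can occur, which is why the 3P fit guarantees a minimum outage duration
    and is the more conservative choice.
    """
    if t <= gamma:
        return 1.0
    return math.exp(-(((t - gamma) / eta) ** beta))

def mixed_weibull_reliability(t, subpops):
    """Mixed Weibull: weighted sum of subpopulation reliabilities.

    subpops is a list of (weight, beta, eta) tuples; weights sum to 1,
    one tuple per dataset-derived subpopulation.
    """
    return sum(w * weibull_reliability(t, b, e) for w, b, e in subpops)
```

For instance, `mixed_weibull_reliability(5.0, [(0.6, 1.0, 10.0), (0.4, 3.0, 20.0)])` blends a short-outage and a long-outage subpopulation, and a 3P call such as `weibull_reliability(3.0, 2.0, 10.0, gamma=5.0)` returns 1.0, reflecting the guaranteed minimum duration.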
APA, Harvard, Vancouver, ISO, and other styles
6

Schenk, Bjoern, Peggy J. Brehm, M. N. Menon, William T. Tucker, and Alonso D. Peralta. "A New Probabilistic Approach for Accurate Fatigue Data Analysis of Ceramic Materials." In ASME 1999 International Gas Turbine and Aeroengine Congress and Exhibition. American Society of Mechanical Engineers, 1999. http://dx.doi.org/10.1115/99-gt-319.

Full text
Abstract:
Statistical methods for the design of ceramic components for time-dependent failure modes have been developed which can significantly enhance component reliability, reduce baseline data generation costs, and lead to more accurate estimates of slow crack growth (SCG) parameters. These methods are incorporated into the AlliedSignal Engines CERAMIC and ERICA computer codes. Use of the codes facilitates generation of material strength parameters and SCG parameters simultaneously, by pooling fast fracture data from specimens that are of different sizes, or stressed by different loading conditions, with data derived from static fatigue experiments. The codes also include approaches to calculation of confidence bounds for the Weibull and SCG parameters of censored data and for the predicted reliability of ceramic components. This paper presents a summary of this new fatigue data analysis technique and an example demonstrating the capabilities of the codes with respect to time-dependent failure modes. This work was sponsored by the U.S. Department of Energy Oak Ridge National Laboratory (DoE/ORNL) under Contract No. DE-AC05-84OR21400.
APA, Harvard, Vancouver, ISO, and other styles
7

Nagata, Tadahisa, and Ken-ichiro Sugiyama. "Performance Evaluation of Japanese Nuclear Power Plant Based on Open Data and Information." In 17th International Conference on Nuclear Engineering. ASMEDC, 2009. http://dx.doi.org/10.1115/icone17-75914.

Full text
Abstract:
The operation term of Japanese nuclear power plants is less than 13 months. Moreover, the refuel outage period is longer than in other countries. It is likely that excessive preventive maintenance results in early “infant mortality” failures (early failures). However, no statistical evaluation report on the failure type of Japanese nuclear power plants was found. Therefore, evaluation of plant engineering performance was attempted on open data/information with a statistical method. The Weibull distribution/analysis, which needs not only failure data but also censored data such as preventive maintenance, was applied to plant performance evaluation. As open data/information, the annual report “Operational Status of Nuclear Facilities in Japan” is very popular as the operation database of Japanese nuclear power plants. However, maintenance dates/information for failed equipment that caused plant shutdowns were not reported in this annual report. Therefore, all equipment was assumed to have been maintained during every shutdown, because this assumption generally yields conservative results for the failure rate and Mean Time Before Failure (MTBF) when discussing early failures. Data up to March 2007 were collected from these annual reports. As a result of the plant performance evaluation, the failure type was the early “infant mortality” type. Excessive maintenance probably resulted in this early failure type. Moreover, the influence of early failures and the effects of plant design modifications were evaluated. Almost 30% of failures occurred within one month after restart, and the failure type excluding these failures was the chance (random) type. Early failures within one month may need close evaluation to improve maintenance. The effect of design modifications to Japanese nuclear power plants was confirmed.
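The “infant mortality” versus “chance (random)” classification above comes from the Weibull shape parameter: beta < 1 gives a decreasing failure rate (early failures), while beta = 1 gives a constant rate. A minimal sketch with illustrative parameter values (not taken from the paper):

```python
import math

def weibull_hazard(t, beta, eta):
    """Instantaneous failure rate h(t) = (beta/eta) * (t/eta)**(beta - 1).

    Decreasing in t for beta < 1 (infant mortality), constant for
    beta == 1 (random failures), increasing for beta > 1 (wear-out).
    """
    return (beta / eta) * (t / eta) ** (beta - 1.0)

def weibull_mtbf(beta, eta):
    """Mean Time Before Failure of a 2-parameter Weibull:
    MTBF = eta * Gamma(1 + 1/beta)."""
    return eta * math.gamma(1.0 + 1.0 / beta)
```

With beta = 1 the hazard reduces to 1/eta and the MTBF to eta (the exponential case); with beta = 0.5 the hazard at 10 hours exceeds the hazard at 100 hours, which is the early-failure signature the abstract describes.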
APA, Harvard, Vancouver, ISO, and other styles
8

Schenk, Bjoern, Peggy J. Brehm, M. N. Menon, Alonso D. Peralta, and William T. Tucker. "Status of the CERAMIC/ERICA Probabilistic Life Prediction Codes Development for Structural Ceramic Applications." In ASME 1999 International Gas Turbine and Aeroengine Congress and Exhibition. American Society of Mechanical Engineers, 1999. http://dx.doi.org/10.1115/99-gt-318.

Full text
Abstract:
Statistical methods for design of ceramic components for fast fracture and time-dependent failure modes have been advanced to significantly enhance component reliability and reduce baseline data generation costs. The statistical methods are incorporated in AlliedSignal Engines CERAMIC and ERICA computer codes. Use of the codes facilitates generation of material strength and fatigue parameters by pooling data from specimens that are of varying sizes, stressed by dissimilar loading conditions, or tested at a variety of temperatures, and by including specimens that have been proof tested. The codes include approaches to calculate confidence bounds for Weibull and fatigue parameters of censored data as well as for predicted reliability of ceramic components. Rather than presenting highly detailed, previously published theoretical derivations (Cuccio, et al., 1994, Schenk, et al., 1998, Schenk, et al., 1999), this paper presents a summary of general methodology and a synopsis of code features and capabilities. The work was sponsored by the Department of Energy/Oak Ridge National Laboratory (DoE/ORNL) under Contract DE-AC05-84OR21400.
APA, Harvard, Vancouver, ISO, and other styles
9

Wereszczak, Andrew A., Kristin Breder, Mark J. Andrews, Timothy P. Kirkland, and Mattison K. Ferber. "Strength Distribution Changes in a Silicon Nitride as a Function of Stressing Rate and Temperature." In ASME 1998 International Gas Turbine and Aeroengine Congress and Exhibition. American Society of Mechanical Engineers, 1998. http://dx.doi.org/10.1115/98-gt-527.

Full text
Abstract:
Machining damage (a surface flaw) and porous-region-flaw (a volume flaw) populations limited the flexure strengths of a commercially available silicon nitride at 25°C, while these same flaws, along with inclusions, limited flexure strengths at 850°C. The machining damage and porous region flaws were the primary interest in the present study because they caused failure at both temperatures. Censoring revealed that the two-parameter Weibull strength distributions representing each flaw population changed as a function of stressing rate (i.e., dynamic fatigue) and temperature. A decrease in the Weibull scaling parameter is recognized as an indication of slow crack growth or time-dependent strength reduction in monolithic ceramics. Available life prediction codes used for reliability predictions of structural ceramic components consider the slow crack growth phenomenon. However, changes in the Weibull modulus are infrequently observed or reported, and typically are not accounted for in these life prediction codes. In the present study, changes in both Weibull parameters for the strength distributions provided motivation to the authors to survey what factors (e.g., residual stress, slow crack growth, and changes in failure mechanisms) could provide partial or full explanation of the observed distribution changes in this silicon nitride. Lastly, exercises were performed to examine the effects of strength distribution changes on the failure probability prediction of a diesel exhaust valve. Because the surface area and volume of this valve were substantially larger than those of the tested bend bars, it was found that the valve’s failure probability analysis amplified some slight or inconclusive distribution changes which were not evident from the interpretation of the censored bend bar strength data.
APA, Harvard, Vancouver, ISO, and other styles
10

Goh, S. H., E. Susanto, Song Li, B. L. Yeoh, M. H. THor, and D. Zhou. "Automated Multi-Level Circuit Net Trace for Hotspot Analysis." In ISTFA 2019. ASM International, 2019. http://dx.doi.org/10.31399/asm.cp.istfa2019p0079.

Full text
Abstract:
Post-fault isolation layout net tracing and circuit analysis based on abnormal hotspots is a critical step because it directly impacts the outcome of failure analysis. In this work, we review current commercial net tracing solutions in terms of their strengths and drawbacks. As an enhancement, a new net tracing methodology that enables automation and the capability to execute tracing beyond first-level transistors is introduced. This approach could potentially eliminate manual net tracing and significantly improve the overall failure analysis turnaround time.
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Censored failure time outcome"

1

Jiang, Zhiping, Ao Zhang, Shuxing Wang, Quanlei Ren, and Yizhu Wang. Prognostic value of ASXL1 mutations in patients with myelodysplastic syndromes and acute myeloid leukemia: A meta-analysis. INPLASY - International Platform of Registered Systematic Review and Meta-analysis Protocols, April 2022. http://dx.doi.org/10.37766/inplasy2022.4.0013.

Full text
Abstract:
Review question / Objective: A meta-analysis was performed to investigate the prognostic value of ASXL1 mutations in patients with myelodysplastic syndromes and acute myeloid leukemia. Condition being studied: Some MDS or AML patients have ASXL1 mutations while others do not. Main outcome(s): We used OS as the primary endpoint and AML transformation as the secondary endpoint. OS was defined as either death (failure) or survival at the last follow-up. AML transformation was defined as starting when the patient entered the trial and proceeding to the time of AML diagnosis. Combined HRs and 95% CIs for OS and AML transformation were used to evaluate the prognostic effect of ASXL1 mutations using the generic inverse variance method.
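The generic inverse-variance method mentioned here can be sketched as follows; this is a fixed-effect sketch on made-up numbers (the review itself may use a random-effects model), recovering each study's standard error from its reported 95% CI:

```python
import math

def pooled_hazard_ratio(hrs, cis):
    """Fixed-effect inverse-variance pooling of hazard ratios.

    hrs: list of per-study HRs; cis: list of (lower, upper) 95% CIs.
    The SE of each log HR is recovered from the CI width:
    SE = (ln(upper) - ln(lower)) / (2 * 1.96).
    Returns (pooled HR, pooled 95% CI lower, pooled 95% CI upper).
    """
    z = 1.959964  # two-sided 95% normal quantile
    log_hrs, weights = [], []
    for hr, (lo, hi) in zip(hrs, cis):
        se = (math.log(hi) - math.log(lo)) / (2.0 * z)
        log_hrs.append(math.log(hr))
        weights.append(1.0 / se ** 2)  # inverse-variance weight
    pooled_log = sum(w * l for w, l in zip(weights, log_hrs)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return (math.exp(pooled_log),
            math.exp(pooled_log - z * pooled_se),
            math.exp(pooled_log + z * pooled_se))
```

Pooling two identical studies, e.g. `pooled_hazard_ratio([2.0, 2.0], [(1.0, 4.0), (1.0, 4.0)])`, leaves the point estimate at 2.0 but narrows the confidence interval, which is the expected behavior of inverse-variance weighting.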
APA, Harvard, Vancouver, ISO, and other styles
2

Treadwell, Jonathan R., James T. Reston, Benjamin Rouse, Joann Fontanarosa, Neha Patel, and Nikhil K. Mull. Automated-Entry Patient-Generated Health Data for Chronic Conditions: The Evidence on Health Outcomes. Agency for Healthcare Research and Quality (AHRQ), March 2021. http://dx.doi.org/10.23970/ahrqepctb38.

Full text
Abstract:
Background. Automated-entry consumer devices that collect and transmit patient-generated health data (PGHD) are being evaluated as potential tools to aid in the management of chronic diseases. The need exists to evaluate the evidence regarding consumer PGHD technologies, particularly for devices that have not gone through Food and Drug Administration evaluation. Purpose. To summarize the research related to automated-entry consumer health technologies that provide PGHD for the prevention or management of 11 chronic diseases. Methods. The project scope was determined through discussions with Key Informants. We searched MEDLINE and EMBASE (via EMBASE.com), In-Process MEDLINE and PubMed unique content (via PubMed.gov), and the Cochrane Database of Systematic Reviews for systematic reviews or controlled trials. We also searched ClinicalTrials.gov for ongoing studies. We assessed risk of bias and extracted data on health outcomes, surrogate outcomes, usability, sustainability, cost-effectiveness outcomes (quantifying the tradeoffs between health effects and cost), process outcomes, and other characteristics related to PGHD technologies. For isolated effects on health outcomes, we classified the results in one of four categories: (1) likely no effect, (2) unclear, (3) possible positive effect, or (4) likely positive effect. When we categorized the data as “unclear” based solely on health outcomes, we then examined and classified surrogate outcomes for that particular clinical condition. Findings. We identified 114 unique studies that met inclusion criteria. The largest number of studies addressed patients with hypertension (51 studies) and obesity (43 studies). Eighty-four trials used a single PGHD device, 23 used 2 PGHD devices, and the other 7 used 3 or more PGHD devices. Pedometers, blood pressure (BP) monitors, and scales were commonly used in the same studies. 
Overall, we found a “possible positive effect” of PGHD interventions on health outcomes for coronary artery disease, heart failure, and asthma. For obesity, we rated the health outcomes as unclear, and the surrogate outcomes (body mass index/weight) as likely no effect. For hypertension, we rated the health outcomes as unclear, and the surrogate outcomes (systolic BP/diastolic BP) as possible positive effect. For cardiac arrhythmias or conduction abnormalities we rated the health outcomes as unclear and the surrogate outcome (time to arrhythmia detection) as likely positive effect. The findings were “unclear” regarding PGHD interventions for diabetes prevention, sleep apnea, stroke, Parkinson’s disease, and chronic obstructive pulmonary disease. Most studies did not report harms related to PGHD interventions; the relatively few harms reported were minor and transient, with event rates usually comparable to harms in the control groups. Few studies reported cost-effectiveness analyses, and only for PGHD interventions for hypertension, coronary artery disease, and chronic obstructive pulmonary disease; the findings were variable across different chronic conditions and devices. Patient adherence to PGHD interventions was highly variable across studies, but patient acceptance/satisfaction and usability was generally fair to good. However, device engineers independently evaluated consumer wearable and handheld BP monitors and considered the user experience to be poor, while their assessment of smartphone-based electrocardiogram monitors found the user experience to be good. Student volunteers involved in device usability testing of the Weight Watchers Online app found it well-designed and relatively easy to use. Implications. Multiple randomized controlled trials (RCTs) have evaluated some PGHD technologies (e.g., pedometers, scales, BP monitors), particularly for obesity and hypertension, but health outcomes were generally underreported. 
We found evidence suggesting a possible positive effect of PGHD interventions on health outcomes for four chronic conditions. Lack of reporting of health outcomes and insufficient statistical power to assess these outcomes were the main reasons for “unclear” ratings. The majority of studies on PGHD technologies still focus on non-health-related outcomes. Future RCTs should focus on measurement of health outcomes. Furthermore, future RCTs should be designed to isolate the effect of the PGHD intervention from other components in a multicomponent intervention.
APA, Harvard, Vancouver, ISO, and other styles