
Journal articles on the topic 'Prediction of survival; Probability; Time models'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Prediction of survival; Probability; Time models.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Lan, Yu, and Daniel F. Heitjan. "Adaptive parametric prediction of event times in clinical trials." Clinical Trials 15, no. 2 (January 29, 2018): 159–68. http://dx.doi.org/10.1177/1740774517750633.

Abstract:
Background: In event-based clinical trials, it is common to conduct interim analyses at planned landmark event counts. Accurate prediction of the timing of these events can support logistical planning and the efficient allocation of resources. As the trial progresses, one may wish to use the accumulating data to refine predictions. Purpose: Available methods to predict event times include parametric cure and non-cure models and a nonparametric approach involving Bayesian bootstrap simulation. The parametric methods work well when their underlying assumptions are met, and the nonparametric method gives calibrated but inefficient predictions across a range of true models. In the early stages of a trial, when predictions have high marginal value, it is difficult to infer the form of the underlying model. We seek to develop a method that will adaptively identify the best-fitting model and use it to create robust predictions. Methods: At each prediction time, we repeat the following steps: (1) resample the data; (2) identify, from among a set of candidate models, the one with the highest posterior probability; and (3) sample from the predictive posterior of the data under the selected model. Results: A Monte Carlo study demonstrates that the adaptive method produces prediction intervals whose coverage is robust within the family of selected models. The intervals are generally wider than those produced assuming the correct model, but narrower than nonparametric prediction intervals. We demonstrate our method with applications to two completed trials: The International Chronic Granulomatous Disease study and Radiation Therapy Oncology Group trial 0129. Limitations: Intervals produced under any method can be badly calibrated when the sample size is small and unhelpfully wide when predicting the remote future. Early predictions can be inaccurate if there are changes in enrollment practices or trends in survival. Conclusions: An adaptive event-time prediction method that selects the model given the available data can give improved robustness compared to methods based on less flexible parametric models.
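As a rough illustration of the resample/select/simulate loop sketched in this abstract, the snippet below fits a few candidate parametric families to bootstrapped event times and simulates future events from the best-fitting one. It is only a sketch: the candidate families, the BIC-based selection (standing in for the paper's posterior model probabilities), and the omission of censoring and enrollment modeling are all simplifying assumptions.

```python
# Illustrative resample -> select -> simulate loop for event-time prediction.
# BIC stands in for posterior model probability; censoring is ignored for brevity.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
candidates = {"exponential": stats.expon, "weibull": stats.weibull_min, "lognormal": stats.lognorm}

def predict_remaining_event_times(event_times, n_future, n_rep=200):
    """Simulate future event times via bootstrap, model selection, and predictive draws."""
    sims = []
    for _ in range(n_rep):
        boot = rng.choice(event_times, size=len(event_times), replace=True)   # step 1: resample
        best_name, best_bic, best_params = None, np.inf, None
        for name, dist in candidates.items():                                  # step 2: best-fitting family
            params = dist.fit(boot, floc=0)
            loglik = np.sum(dist.logpdf(boot, *params))
            bic = -2 * loglik + len(params) * np.log(len(boot))
            if bic < best_bic:
                best_name, best_bic, best_params = name, bic, params
        draw = candidates[best_name].rvs(*best_params, size=n_future, random_state=rng)  # step 3: simulate
        sims.append(np.sort(draw))
    sims = np.array(sims)
    return np.percentile(sims, [2.5, 50, 97.5], axis=0)   # prediction intervals per future event
```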
2

Gensheimer, Michael F., and Balasubramanian Narasimhan. "A scalable discrete-time survival model for neural networks." PeerJ 7 (January 25, 2019): e6257. http://dx.doi.org/10.7717/peerj.6257.

Abstract:
There is currently great interest in applying neural networks to prediction tasks in medicine. It is important for predictive models to be able to use survival data, where each patient has a known follow-up time and event/censoring indicator. This avoids information loss when training the model and enables generation of predicted survival curves. In this paper, we describe a discrete-time survival model that is designed to be used with neural networks, which we refer to as Nnet-survival. The model is trained with the maximum likelihood method using mini-batch stochastic gradient descent (SGD). The use of SGD enables rapid convergence and application to large datasets that do not fit in memory. The model is flexible, so that the baseline hazard rate and the effect of the input data on hazard probability can vary with follow-up time. It has been implemented in the Keras deep learning framework, and source code for the model and several examples is available online. We demonstrate the performance of the model on both simulated and real data and compare it to existing models Cox-nnet and Deepsurv.
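The core of a discrete-time survival network of this kind is its likelihood: the network outputs a conditional hazard for each follow-up interval, and the loss rewards surviving the observed intervals and (for uncensored patients) experiencing the event in the final one. The NumPy sketch below illustrates that loss and the resulting survival curve; it is not the Nnet-survival package's own code, and the interval layout and names are assumptions.

```python
# Illustrative NumPy version of a discrete-time survival negative log-likelihood.
import numpy as np

def discrete_time_nll(hazard, last_interval, event):
    """hazard: (n, J) conditional event probabilities per interval (e.g. from a sigmoid output layer).
    last_interval: index of the interval containing the event or censoring time.
    event: 1 if the event was observed, 0 if censored."""
    n = hazard.shape[0]
    ll = 0.0
    for i in range(n):
        j = last_interval[i]
        ll += np.sum(np.log(1 - hazard[i, :j]))   # intervals survived before j
        if event[i]:
            ll += np.log(hazard[i, j])            # event occurred in interval j
        else:
            ll += np.log(1 - hazard[i, j])        # still event-free at censoring
    return -ll / n

def survival_curve(hazard_row):
    # Predicted survival curve for one subject: S(t_j) = prod_{l <= j} (1 - h_l)
    return np.cumprod(1 - hazard_row)
```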
3

Ren, Kan, Jiarui Qin, Lei Zheng, Zhengyu Yang, Weinan Zhang, Lin Qiu, and Yong Yu. "Deep Recurrent Survival Analysis." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4798–805. http://dx.doi.org/10.1609/aaai.v33i01.33014798.

Abstract:
Survival analysis is a hotspot in statistical research for modeling time-to-event information with data censorship handling, which has been widely used in many applications such as clinical research, information systems and other fields with survivorship bias. Many works have been proposed for survival analysis, ranging from traditional statistical methods to machine learning models. However, the existing methodologies either utilize counting-based statistics on the segmented data, or have a pre-assumption on the event probability distribution w.r.t. time. Moreover, few works consider sequential patterns within the feature space. In this paper, we propose a Deep Recurrent Survival Analysis model which combines deep learning for conditional probability prediction at a fine-grained level of the data, and survival analysis for tackling the censorship. By capturing the time dependency through modeling the conditional probability of the event for each sample, our method predicts the likelihood of the true event occurrence and estimates the survival rate over time, i.e., the probability of the non-occurrence of the event, for the censored data. Meanwhile, without assuming any specific form of the event probability distribution, our model shows great advantages over the previous works on fitting various sophisticated data distributions. In experiments on three real-world tasks from different fields, our model significantly outperforms the state-of-the-art solutions under various metrics.
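In the discrete-time notation typically used for such recurrent models (notation assumed here, with h_l the conditional event probability at step l), the survival curve and the likelihood contributions of uncensored and censored samples are:

```latex
S(t_j \mid x) = \prod_{l \le j} (1 - h_l), \qquad
p(t_j \mid x) = h_j \prod_{l < j} (1 - h_l), \qquad
\mathcal{L} = \prod_{i:\ \delta_i = 1} p(t_i \mid x_i) \;\prod_{i:\ \delta_i = 0} S(c_i \mid x_i).
```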
4

Li, Kan, and Sheng Luo. "Dynamic predictions in Bayesian functional joint models for longitudinal and time-to-event data: An application to Alzheimer’s disease." Statistical Methods in Medical Research 28, no. 2 (July 28, 2017): 327–42. http://dx.doi.org/10.1177/0962280217722177.

Abstract:
In the study of Alzheimer’s disease, researchers often collect repeated measurements of clinical variables, event history, and functional data. If the health measurements deteriorate rapidly, patients may reach a level of cognitive impairment and are diagnosed as having dementia. An accurate prediction of the time to dementia based on the information collected is helpful for physicians to monitor patients’ disease progression and to make early informed medical decisions. In this article, we first propose a functional joint model to account for functional predictors in both longitudinal and survival submodels in the joint modeling framework. We then develop a Bayesian approach for parameter estimation and a dynamic prediction framework for predicting the subjects’ future outcome trajectories and risk of dementia, based on their scalar and functional measurements. The proposed Bayesian functional joint model provides a flexible framework to incorporate many features both in joint modeling of longitudinal and survival data and in functional data analysis. Our proposed model is evaluated by a simulation study and is applied to the motivating Alzheimer’s Disease Neuroimaging Initiative study.
5

Alemazkoor, Negin, Conrad J. Ruppert, and Hadi Meidani. "Survival analysis at multiple scales for the modeling of track geometry deterioration." Proceedings of the Institution of Mechanical Engineers, Part F: Journal of Rail and Rapid Transit 232, no. 3 (March 9, 2017): 842–50. http://dx.doi.org/10.1177/0954409717695650.

Abstract:
Defects in track geometry have a notable impact on the safety of rail transportation. In order to make the optimal maintenance decisions to ensure the safety and efficiency of railroads, it is necessary to analyze the track geometry defects and develop reliable defect deterioration models. In general, standard deterioration models are typically developed for a segment of track. As a result, these coarse-scale deterioration models may fail to predict whether the isolated defects in a segment will exceed the safety limits after a given time period or not. In this paper, survival analysis is used to model the probability of exceeding the safety limits of the isolated defects. These fine-scale models are then used to calculate the probability of whether each segment of the track will require maintenance after a given time period. The model validation results show that the prediction quality of the coarse-scale segment-based models can be improved by exploiting information from the fine-scale defect-based deterioration models.
6

Sun, Zhaohong, Wei Dong, Jinlong Shi, Kunlun He, and Zhengxing Huang. "Attention-Based Deep Recurrent Model for Survival Prediction." ACM Transactions on Computing for Healthcare 2, no. 4 (October 31, 2021): 1–18. http://dx.doi.org/10.1145/3466782.

Abstract:
Survival analysis exhibits profound effects on health service management. Traditional approaches for survival analysis have a pre-assumption on the time-to-event probability distribution and seldom consider sequential visits of patients on medical facilities. Although recent studies leverage the merits of deep learning techniques to capture non-linear features and long-term dependencies within multiple visits for survival analysis, the lack of interpretability prevents deep learning models from being applied to clinical practice. To address this challenge, this article proposes a novel attention-based deep recurrent model, named AttenSurv , for clinical survival analysis. Specifically, a global attention mechanism is proposed to extract essential/critical risk factors for interpretability improvement. Thereafter, Bi-directional Long Short-Term Memory is employed to capture the long-term dependency on data from a series of visits of patients. To further improve both the prediction performance and the interpretability of the proposed model, we propose another model, named GNNAttenSurv , by incorporating a graph neural network into AttenSurv, to extract the latent correlations between risk factors. We validated our solution on three public follow-up datasets and two electronic health record datasets. The results demonstrated that our proposed models yielded consistent improvement compared to the state-of-the-art baselines on survival analysis.
7

Tan, Ping, Lu Yang, Hang Xu, and Qiang Wei. "Novel perioperative parameters-based nomograms for survival outcomes in upper tract urothelial carcinoma after radical nephroureterectomy." Journal of Clinical Oncology 37, no. 7_suppl (March 1, 2019): 414. http://dx.doi.org/10.1200/jco.2019.37.7_suppl.414.

Abstract:
414 Background: Recently, several postoperative nomograms for cancer-specific survival (CSS) after radical nephroureterectomy (RNU) were proposed, but they did not incorporate the same variables; meanwhile, many preoperative blood-based parameters, which were recently reported to be related to survival, were not included in their models. In addition, no nomogram for overall survival (OS) was available to date. Methods: The full data of 716 patients were available. The whole cohort was randomly divided into two cohorts: the training cohort for developing the nomograms (n = 508) and the validation cohort for validating the models (n = 208). Univariate and multivariate Cox proportional hazards regression models were used for establishing the prediction models. The discriminative accuracy of the nomograms was measured by Harrell’s concordance index (C-index). The clinical usefulness and net benefit of the predictive models were estimated and visualized by using decision curve analyses (DCA). Results: The median follow-up time was 42.0 months (IQR: 18.0-76.0). For CSS, tumor size, grade and pT stage, lymph node metastasis, NLR, PLR and fibrinogen level were identified as independent risk factors in the final model, while tumor grade and pT stage, lymph node metastasis, PLR, Cys-C and fibrinogen level were identified as independent predictors for the OS model. The C-index for CSS prediction was 0.82 (95%CI: 0.79-0.85), and the OS nomogram model had an accuracy of 0.83 (95%CI: 0.80-0.86). The results of bootstrapping showed no deviation from the ideal. The calibration plots for the probability of CSS and OS at 3 or 5 years after RNU showed a favorable agreement between the prediction by the nomograms and actual observation. In the external validation cohort, the C-indexes of the nomograms for predicting CSS and OS were 0.79 (95%CI: 0.74-0.84) and 0.80 (95%CI: 0.75-0.85), respectively. As indicated by calibration plots, optimal agreement was observed between prediction and observation in the external cohort. Conclusions: The nomograms developed and validated based on preoperative blood-based parameters were superior to any single variable for predicting CSS and OS after RNU.
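A minimal sketch of the multivariable Cox modelling and C-index validation workflow described here, using the lifelines library; the column names and CSV files are placeholders rather than the study's actual data.

```python
# Sketch of a Cox-model-plus-C-index workflow with lifelines.
# The DataFrame columns and file names below are placeholders, not the study's variables.
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

train = pd.read_csv("train.csv")   # hypothetical columns: time, event, tumor_size, pT_stage, NLR, PLR, fibrinogen
test = pd.read_csv("test.csv")

cph = CoxPHFitter()
cph.fit(train, duration_col="time", event_col="event")
cph.print_summary()                # hazard ratios for the candidate predictors

# Discrimination on the external validation cohort (Harrell's C-index):
risk = cph.predict_partial_hazard(test)
c_index = concordance_index(test["time"], -risk, test["event"])
print(f"validation C-index: {c_index:.2f}")
```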
8

Liu, Xing-Rong, Yudi Pawitan, and Mark Clements. "Parametric and penalized generalized survival models." Statistical Methods in Medical Research 27, no. 5 (September 1, 2016): 1531–46. http://dx.doi.org/10.1177/0962280216664760.

Abstract:
We describe generalized survival models, where g(S(t|z)), for link function g, survival S, time t, and covariates z, is modeled by a linear predictor in terms of covariate effects and smooth time effects. These models include proportional hazards and proportional odds models, and extend the parametric Royston–Parmar models. Estimation is described for both fully parametric linear predictors and combinations of penalized smoothers and parametric effects. The penalized smoothing parameters can be selected automatically using several information criteria. The link function may be selected based on prior assumptions or using an information criterion. We have implemented the models in R. All of the penalized smoothers from the mgcv package are available for smooth time effects and smooth covariate effects. The generalized survival models perform well in a simulation study, compared with some existing models. The estimation of smooth covariate effects and smooth time-dependent hazard or odds ratios is simplified, compared with many non-parametric models. Applying these models to three cancer survival datasets, we find that the proportional odds model is better than the proportional hazards model for two of the datasets.
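Schematically, the model family described here links the survival function to a linear predictor of covariate effects and smooth time effects; two familiar special cases follow from the choice of link (the linear-predictor form below is a simplified illustration):

```latex
g\bigl(S(t \mid z)\bigr) = \eta(t, z) = s_0(t) + z^{\top}\beta, \qquad
g(S) = \log\{-\log S\} \;\Rightarrow\; \text{proportional hazards}, \qquad
g(S) = \log\frac{1 - S}{S} \;\Rightarrow\; \text{proportional odds}.
```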
9

Andrinopoulou, Eleni-Rosalina, D. Rizopoulos, Johanna JM Takkenberg, and E. Lesaffre. "Combined dynamic predictions using joint models of two longitudinal outcomes and competing risk data." Statistical Methods in Medical Research 26, no. 4 (June 9, 2015): 1787–801. http://dx.doi.org/10.1177/0962280215588340.

Abstract:
Nowadays there is an increased medical interest in personalized medicine and tailoring decision making to the needs of individual patients. Within this context our developments are motivated from a Dutch study at the Cardio-Thoracic Surgery Department of the Erasmus Medical Center, consisting of patients who received a human tissue valve in aortic position and who were thereafter monitored echocardiographically. Our aim is to utilize the available follow-up measurements of the current patients to produce dynamically updated predictions of both survival and freedom from re-intervention for future patients. In this paper, we propose to jointly model multiple longitudinal measurements combined with competing risk survival outcomes and derive the dynamically updated cumulative incidence functions. Moreover, we investigate whether different features of the longitudinal processes would change significantly the prediction for the events of interest by considering different types of association structures, such as time-dependent trajectory slopes and time-dependent cumulative effects. Our final contribution focuses on optimizing the quality of the derived predictions. In particular, instead of choosing one final model over a list of candidate models which ignores model uncertainty, we propose to suitably combine predictions from all considered models using Bayesian model averaging.
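The model-averaging step mentioned at the end can be written generically as follows (notation assumed here): each candidate joint model contributes its dynamic prediction, weighted by its posterior model probability.

```latex
% \hat{\pi}_k(u \mid t) denotes model M_k's prediction for the event of interest
% (e.g. a cumulative incidence at horizon u given survival to time t and the longitudinal history).
\hat{\pi}_{\mathrm{BMA}}(u \mid t) \;=\; \sum_{k=1}^{K} \hat{\pi}_k(u \mid t)\, p\!\left(M_k \mid \mathcal{D}_n\right),
\qquad \sum_{k=1}^{K} p\!\left(M_k \mid \mathcal{D}_n\right) = 1 .
```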
10

Liu, Chuchu, Anja J. Rueten-Budde, Andreas Ranft, Uta Dirksen, Hans Gelderblom, and Marta Fiocco. "Dynamic prediction of overall survival: a retrospective analysis on 979 patients with Ewing sarcoma from the German registry." BMJ Open 10, no. 10 (October 2020): e036376. http://dx.doi.org/10.1136/bmjopen-2019-036376.

Abstract:
Objectives: This study aimed at developing a dynamic prediction model for patients with Ewing sarcoma (ES) to provide predictions at different follow-up times. During follow-up, disease-related information becomes available, which has an impact on a patient’s prognosis. Many prediction models include predictors available at baseline and do not consider the evolution of disease over time. Setting: In the analysis, 979 patients with ES from the Gesellschaft für Pädiatrische Onkologie und Hämatologie registry, who underwent surgery and treatment between 1999 and 2009, were included. Design: A dynamic prediction model was developed to predict updated 5-year survival probabilities from different prediction time points during follow-up. Time-dependent variables, such as local recurrence (LR) and distant metastasis (DM), as well as covariates measured at baseline, were included in the model. The time effects of covariates were investigated by using interaction terms between each variable and time. Results: Developing LR, DM in the lungs (DMp) or extrapulmonary DM (DMo) has a strong effect on the probability of surviving an additional 5 years with HRs and 95% CIs equal to 20.881 (14.365 to 30.353), 6.759 (4.465 to 10.230) and 17.532 (13.210 to 23.268), respectively. The effects of primary tumour location, postoperative radiotherapy (PORT), histological response and disease extent at diagnosis on survival were found to change over time. The HR of PORT versus no PORT at the time of surgery is equal to 0.774 (0.594 to 1.008). One year after surgery, the HR is equal to 1.091 (0.851 to 1.397). Conclusions: The time-varying effects of several baseline variables, as well as the strong impact of time-dependent variables, show the importance of including updated information collected during follow-up in the prediction model to provide accurate predictions of survival.
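The updated 5-year probabilities described here are conditional survival probabilities computed from a prediction (landmark) time s onward, of the general form (notation assumed):

```latex
\pi_i(s) = \Pr\bigl(T_i > s + 5 \mid T_i > s,\, Z_i(s)\bigr)
         = \frac{S\bigl(s + 5 \mid Z_i(s)\bigr)}{S\bigl(s \mid Z_i(s)\bigr)},
```

where Z_i(s) collects the baseline covariates and the LR/DM status known at time s.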
11

Garber, Sean M., John P. Brown, Duncan S. Wilson, Douglas A. Maguire, and Linda S. Heath. "Snag longevity under alternative silvicultural regimes in mixed-species forests of central Maine." Canadian Journal of Forest Research 35, no. 4 (April 1, 2005): 787–96. http://dx.doi.org/10.1139/x05-021.

Abstract:
Predictions of snag longevity, defined here as the probability of snag survival to a given age, are key to designing silvicultural regimes that ensure their availability for wildlife and form an important component of carbon flow models. Species, diameter at breast height, stand density, management regime, and agent of tree mortality were assessed for their effect on snag longevity in a long-term silvicultural study on the Penobscot Experimental Forest in central Maine. Snag recruitment and fall data from USDA Forest Service inventories between 1981 and 1997 were analyzed using parametric survival analysis. A Weibull model fit the data best, indicating a significant lag time followed by rapid fall rates. Half-times varied among species, with Thuja occidentalis L. having the longest (10 years) and Picea species the shortest (6 years). Snag longevity was significantly greater with increasing diameter and decreased with increasing stand density. Agent of mortality and silvicultural treatment were also significant. Two models were developed for estimating probability of snag survival over time, one that included predictor variables unique to the silvicultural systems study on the Penobscot Experimental Forest and one using predictor variables available in most standard inventories. Snag survival models can easily be incorporated into comprehensive forest dynamics models to facilitate estimates of wildlife habitat structure and carbon flow.
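A hedged sketch of the kind of parametric analysis described here: fit a Weibull survival model to snag standing times and read off the half-time (median survival). The lifelines library and the toy data below are illustrative assumptions, not the Penobscot inventory data.

```python
# Fit a Weibull survival model to snag ages and fall indicators, then read off the "half-time".
# The arrays below are made-up illustrations, not the Penobscot Experimental Forest data.
import numpy as np
from lifelines import WeibullFitter

snag_age_years = np.array([2, 3, 5, 6, 6, 7, 8, 9, 11, 12, 14, 16])
fell = np.array([1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1])   # 0 = still standing (censored)

wf = WeibullFitter()
wf.fit(snag_age_years, event_observed=fell)

print(wf.lambda_, wf.rho_)                       # scale and shape; shape > 1 implies a lag, then faster fall rates
print(wf.median_survival_time_)                  # half-time: age by which half of snags are expected to have fallen
print(wf.survival_function_at_times([5, 10]))    # probability a snag is still standing at 5 and 10 years
```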
12

Chen, David, Gaurav Goyal, Ronald S. Go, Sameer A. Parikh, and Che G. Ngufor. "Improved Interpretability of Machine Learning Model Using Unsupervised Clustering: Predicting Time to First Treatment in Chronic Lymphocytic Leukemia." JCO Clinical Cancer Informatics, no. 3 (December 2019): 1–11. http://dx.doi.org/10.1200/cci.18.00137.

Abstract:
PURPOSE Time to event is an important aspect of clinical decision making. This is particularly true when diseases have highly heterogeneous presentations and prognoses, as in chronic lymphocytic leukemia (CLL). Although machine learning methods can readily learn complex nonlinear relationships, many methods are criticized as inadequate because of limited interpretability. We propose using unsupervised clustering of the continuous output of machine learning models to provide discrete risk stratification for predicting time to first treatment in a cohort of patients with CLL. PATIENTS AND METHODS A total of 737 treatment-naïve patients with CLL diagnosed at Mayo Clinic were included in this study. We compared predictive abilities for two survival models (Cox proportional hazards and random survival forest) and four classification methods (logistic regression, support vector machines, random forest, and gradient boosting machine). Probability of treatment was then stratified. RESULTS Machine learning methods did not yield significantly more accurate predictions of time to first treatment. However, automated risk stratification provided by clustering was able to better differentiate patients who were at risk for treatment within 1 year than models developed using standard survival analysis techniques. CONCLUSION Clustering the posterior probabilities of machine learning models provides a way to better interpret machine learning models.
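The central idea, clustering a model's continuous risk output into discrete strata, can be sketched as below; the random-forest probabilities stand in for any model's continuous output, and the data, features, and three-group choice are illustrative assumptions.

```python
# Cluster a model's continuous risk output into discrete risk strata with k-means.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 10))                          # placeholder features
y = (X[:, 0] + rng.normal(size=500) > 0).astype(int)    # placeholder label: treated within 1 year

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
risk = model.predict_proba(X)[:, 1]                     # continuous risk scores

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
group = kmeans.fit_predict(risk.reshape(-1, 1))         # unsupervised cut-points -> discrete strata

# Relabel clusters by mean risk so strata 0/1/2 read as low/intermediate/high risk.
order = np.argsort([risk[group == g].mean() for g in range(3)])
strata = np.argsort(order)[group]
```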
13

Ma, Junsheng, Brian P. Hobbs, and Francesco C. Stingo. "Integrating genomic signatures for treatment selection with Bayesian predictive failure time models." Statistical Methods in Medical Research 27, no. 7 (November 1, 2016): 2093–113. http://dx.doi.org/10.1177/0962280216675373.

Abstract:
Over the past decade, a tremendous amount of resources have been dedicated to the pursuit of developing genomic signatures that effectively match patients with targeted therapies. Although dozens of therapies that target DNA mutations have been developed, the practice of studying single candidate genes has limited our understanding of cancer. Moreover, many studies of multiple-gene signatures have been conducted for the purpose of identifying prognostic risk cohorts, and thus are limited for selecting personalized treatments. Existing statistical methods for treatment selection often model treatment-by-covariate interactions that are difficult to specify, and require prohibitively large patient cohorts. In this article, we describe a Bayesian predictive failure time model for treatment selection that integrates multiple-gene signatures. Our approach relies on a heuristic measure of similarity that determines the extent to which historically treated patients contribute to the outcome prediction of new patients. The similarity measure, which can be obtained from existing clustering methods, imparts robustness to the underlying stochastic data structure, which enhances feasibility in the presence of small samples. Performance of the proposed method is evaluated in simulation studies, and its application is demonstrated through a study of lung squamous cell carcinoma. Our Bayesian predictive failure time approach is shown to effectively leverage genomic signatures to match patients to the therapies that are most beneficial for prolonging their survival.
14

Zheng, Panpan, Shuhan Yuan, and Xintao Wu. "SAFE: A Neural Survival Analysis Model for Fraud Early Detection." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 1278–85. http://dx.doi.org/10.1609/aaai.v33i01.33011278.

Abstract:
Many online platforms have deployed anti-fraud systems to detect and prevent fraudulent activities. However, there is usually a gap between the time that a user commits a fraudulent action and the time that the user is suspended by the platform. How to detect fraudsters in time is a challenging problem. Most of the existing approaches adopt classifiers to predict fraudsters given their activity sequences over time. The main drawback of classification models is that the prediction results between consecutive timestamps are often inconsistent. In this paper, we propose a survival-analysis-based fraud early detection model, SAFE, which maps dynamic user activities to survival probabilities that are guaranteed to be monotonically decreasing over time. SAFE adopts a recurrent neural network (RNN) to handle user activity sequences and directly outputs hazard values at each timestamp; the survival probability derived from the hazard values is then used to achieve consistent predictions. Because we only observe the user suspension time instead of the fraudulent activity time in the training data, we revise the loss function of the regular survival model to achieve fraud early detection. Experimental results on two real-world datasets demonstrate that SAFE outperforms both the survival analysis model and the recurrent neural network model alone, as well as state-of-the-art fraud early detection approaches.
15

Shafipour, Gholamreza, and Abdolvahhab Fetanat. "Survival analysis in supply chains using statistical flowgraph models: Predicting time to supply chain disruption." Communications in Statistics - Theory and Methods 45, no. 21 (January 11, 2016): 6183–208. http://dx.doi.org/10.1080/03610926.2014.957856.

16

Yamaguchi, S., C. Lee, O. Karaer, S. Ban, A. Mine, and S. Imazato. "Predicting the Debonding of CAD/CAM Composite Resin Crowns with AI." Journal of Dental Research 98, no. 11 (August 3, 2019): 1234–38. http://dx.doi.org/10.1177/0022034519867641.

Abstract:
A preventive measure for debonding has not been established and is highly desirable to improve the survival rate of computer-aided design/computer-aided manufacturing (CAD/CAM) composite resin (CR) crowns. The aim of this study was to assess the usefulness of deep learning with a convolution neural network (CNN) method to predict the debonding probability of CAD/CAM CR crowns from 2-dimensional images captured from 3-dimensional (3D) stereolithography models of a die scanned by a 3D oral scanner. All cases of CAD/CAM CR crowns were manufactured from April 2014 to November 2015 at the Division of Prosthodontics, Osaka University Dental Hospital (Ethical Review Board at Osaka University, approval H27-E11). The data set consisted of a total of 24 cases: 12 trouble-free and 12 debonding as known labels. A total of 8,640 images were randomly divided into 6,480 training and validation images and 2,160 test images. Deep learning with a CNN method was conducted to develop a learning model to predict the debonding probability. The prediction accuracy, precision, recall, F-measure, receiver operating characteristic, and area under the curve of the learning model were assessed for the test images. Also, the mean calculation time was measured during the prediction for the test images. The prediction accuracy, precision, recall, and F-measure values of deep learning with a CNN method for the prediction of the debonding probability were 98.5%, 97.0%, 100%, and 0.985, respectively. The mean calculation time was 2 ms/step for 2,160 test images. The area under the curve was 0.998. Artificial intelligence (AI) technology—that is, the deep learning with a CNN method established in this study—demonstrated considerably good performance in terms of predicting the debonding probability of a CAD/CAM CR crown with 3D stereolithography models of a die scanned from patients.
17

Agrawal, Smita, Vivek Vaidya, Prajwal Chandrashekaraiah, Hemant Kulkarni, Li Chen, Karl Rudeen, Babu Narayanan, Orr Inbar, and Brigham Hyde. "Development of an artificial intelligence model to predict survival at specific time intervals for lung cancer patients." Journal of Clinical Oncology 37, no. 15_suppl (May 20, 2019): 6556. http://dx.doi.org/10.1200/jco.2019.37.15_suppl.6556.

Abstract:
6556 Background: Survival prediction models for lung cancer patients could help guide their care and therapy decisions. The objectives of this study were to predict probability of survival beyond 90, 180 and 360 days from any point in a lung cancer patient’s journey. Methods: We developed a Gradient Boosting model (XGBoost) using data from 55k lung cancer patients in the ASCO CancerLinQ database that used 3958 unique variables including Dx and Rx codes, biomarkers, surgeries and lab tests from ≤1 year prior to the prediction point, which was chosen at random for each patient. We used 40% data for training, 25% for hyper-parameter tuning, 20% for testing and 15% for holdout validation. Death date available in the Electronic Health Record was cross checked by linkage to death registries. Results: The model was validated on the holdout set of 8,468 patients. The Area Under the Curve (AUC) for the model was 0.79. The precision and recall for predicting survival beyond the three time points were between 0.7-0.8 and 0.8-0.9 respectively (see table). This compares favourably to other lung cancer survival models created using different machine learning techniques (Jochems 2017, Dekker 2009). A Cox-PH model created using the top 20 variables also had a significantly lower performance (see table). Analysis of input variables yielded distinctive patterns for patient subgroups and time points. Tumor status, medications, lab values and functional status were found to be significant in patient sub cohorts. Conclusions: An AI model to predict survival of lung cancer patients built using a large real world dataset yielded high accuracy. This general model can further be used to predict survival of sub cohorts stratified by variables such as stage or various treatment effects. Such a model could be useful for assessing patient risk and treatment options, evaluating cost and quality of care or determining clinical trial eligibility. [Table: see text]
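One way to frame "survival beyond a fixed horizon from a prediction point" as gradient-boosted classification is sketched below. This is only an assumed illustration (placeholder features, labels that ignore censoring, generic hyperparameters), not the CancerLinQ pipeline described in the abstract.

```python
# One gradient-boosting classifier per horizon, predicting survival beyond 90/180/360 days.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

X = np.random.default_rng(2).normal(size=(2000, 50))            # placeholder features from <=1 year of history
days_survived = np.random.default_rng(3).exponential(300, 2000)  # placeholder outcomes (censoring ignored)

for horizon in (90, 180, 360):
    y = (days_survived > horizon).astype(int)                    # label: alive beyond the horizon
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    clf = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05)
    clf.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"{horizon}-day model AUC: {auc:.2f}")
```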
18

Stanojevic, Sanja, Jenna Sykes, Anne L. Stephenson, Shawn D. Aaron, and George A. Whitmore. "Development and external validation of 1- and 2-year mortality prediction models in cystic fibrosis." European Respiratory Journal 54, no. 3 (May 16, 2019): 1900224. http://dx.doi.org/10.1183/13993003.00224-2019.

Abstract:
Introduction: We aimed to develop a clinical tool for predicting 1- and 2-year risk of death for patients with cystic fibrosis (CF). The model considers patients' overall health status as well as risk of intermittent shock events in calculating the risk of death. Methods: Canadian CF Registry data from 1982 to 2015 were used to develop a predictive risk model using threshold regression. The 2-year risk of death estimated the conditional probability of surviving the second year given survival through the first year. UK CF Registry data from 2007 to 2013 were used to externally validate the model. Results: The combined effect of CF chronic health status and CF intermittent shock risk provided a simple clinical scoring tool for assessing 1-year and 2-year risk of death for an individual CF patient. At a threshold risk of death of ≥20%, the 1-year model had a sensitivity of 74% and specificity of 96%. The area under the receiver operating curve (AUC) for the 2-year mortality model was significantly greater than the AUC for a model that predicted survival based on forced expiratory volume in 1 s <30% predicted (AUC 0.95 versus 0.68 respectively, p<0.001). The Canadian-derived model validated well with the UK data and correctly identified 79% of deaths and 95% of survivors in a single year in the UK. Conclusions: The prediction models provide an accurate risk of death over a 1- and 2-year time horizon. The models performed equally well when validated in an independent UK CF population.
19

Barbieri, Antoine, and Catherine Legrand. "Joint longitudinal and time-to-event cure models for the assessment of being cured." Statistical Methods in Medical Research 29, no. 4 (June 19, 2019): 1256–70. http://dx.doi.org/10.1177/0962280219853599.

Abstract:
Medical time-to-event studies frequently include two groups of patients: those who will not experience the event of interest and are said to be “cured” and those who will develop the event and are said to be “susceptible”. However, the cure status is unobserved in (right-)censored patients. While most of the work on cure models focuses on the time-to-event for the uncured patients (latency) or on the baseline probability of being cured or not (incidence), we focus in this research on the conditional probability of being cured after a medical intervention given survival until a certain time. Assuming the availability of longitudinal measurements collected over time and being informative on the risk to develop the event, we consider joint models for longitudinal and survival data given a cure fraction. These models include a linear mixed model to fit the trajectory of longitudinal measurements and a mixture cure model. In simulation studies, different shared latent structures linking both submodels are compared in order to assess their predictive performance. Finally, an illustration on HIV patient data completes the comparison.
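In a standard mixture cure formulation (notation assumed here), the quantity this paper focuses on, the probability of being cured given survival to time t, has a simple closed form that increases toward one with follow-up:

```latex
\Pr(\text{cured} \mid T > t, z) = \frac{\pi(z)}{\pi(z) + \bigl(1 - \pi(z)\bigr)\, S_u(t \mid z)},
```

where \pi(z) is the incidence (baseline cure probability) and S_u(t | z) the latency survival function of the uncured.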
20

Joffe, Erel, Kevin R. Coombes, Yi Hua Qiu, Suk Young Yoo, Nianxiang Zhang, Elmer V. Bernstam, and Steven M. Kornblau. "Survival Prediction In High Dimensional Datasets – Comparative Evaluation Of Lasso Regularization and Random Survival Forests." Blood 122, no. 21 (November 15, 2013): 1728. http://dx.doi.org/10.1182/blood.v122.21.1728.1728.

Abstract:
Background: High-dimensional data obtained using modern molecular technologies (e.g., gene expression, proteomics) and large clinical datasets are increasingly common. Risk stratification based on such high-dimensional data remains challenging. Traditional statistical models have a limited capability of handling large numbers of variables, non-linear effects, correlations and missing data. More importantly, as more variables are analyzed, models tend to over-fit (i.e., the model provides good predictions on the studied data but performs poorly on other data). Recently two methods have been proposed for handling multivariate analysis of high-dimensional data. The Least Absolute Shrinkage and Selection Operator (LASSO) minimizes the number of Cox regression coefficients, favoring models that are parsimonious and less likely to over-fit. LASSO is computationally efficient and capable of handling correlated variables. Random Survival Forests (RSF) combines multiple decision trees built on randomly selected subsets of variables. This enables the evaluation of all variables, even in the presence of significant correlations or missing data, and reduces over-fitting. Few studies have evaluated these models on realistic datasets. This study compared the performance of LASSO and RSF to that of the traditional Cox Proportional Hazards (CoxPH) model with respect to their ability to predict survival based on high-dimensional reverse phase protein array (RPPA) data from Acute Myeloid Leukemia (AML) patients. Methods: Our data were derived from previous work to define the role of pathway activation in AML using a custom-made RPPA onto which were printed leukemia-enriched cells from 511 newly diagnosed AML patients collected between 1992 and 2007. The RPPA was probed with 231 strictly validated antibodies. Data were normalized and analysis was performed for the 415 subjects who underwent therapy at MD Anderson. The dataset also included 38 clinical variables. We removed cases with a documented survival of less than 4 weeks and imputed a small number of sporadically missing values by random sampling with replacement. We generated LASSO, RSF, and CoxPH models using R. We built RSF models using 1000 trees and 10 random split points. We then identified key features using the RSF max-subtree and the glmnet cross-validation methods. We built RSF and Cox models using these variables only. Models were trained on a bootstrap sample of the size of the dataset, randomly sampled with replacement, and tested on the un-sampled remaining cases. The process was repeated for 50 iterations and results were averaged. We report Harrell’s concordance index (C-Index) and the Brier score for model performance. Harrell’s C-Index is a pairwise comparison of all observations in the test set. It evaluates the probability of erroneously assigning a longer survival time to one case over the other. The Brier score calculates the difference between the predicted and observed probabilities. It evaluates how well the model fits the entire dataset. Results: Cox regression with LASSO regularization had the best performance based on both Brier score and C-Index (Table 1 and Figure 1). For the complete dataset, Cox regression models did not converge even when using forward feature selection. Cox regression following feature selection based on RSF had inferior performance compared to other methods. Conclusions: LASSO and RSF allow for multivariate analysis of high-dimensional data that would not have been possible otherwise.
LASSO regularization outperforms other methods in terms of accuracy and in selecting features for traditional Cox models. Using the latter approach has great appeal, as traditional Cox models allow easy interpretation of the hazard associated with individual variables. Differences in Brier scores are more pronounced than in C-Indexes, possibly indicating a tendency of LASSO regularization to overfit the data compared to RSF. Disclosures: No relevant conflicts of interest to declare.
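For reference, the two performance measures compared in this abstract can be written, in their simplest uncensored forms (the censoring-weighted versions used in practice add inverse-probability-of-censoring weights), as:

```latex
C = \frac{\sum_{i \ne j} \mathbf{1}\{T_i < T_j\}\,\mathbf{1}\{\hat{r}_i > \hat{r}_j\}\,\delta_i}
         {\sum_{i \ne j} \mathbf{1}\{T_i < T_j\}\,\delta_i},
\qquad
\mathrm{BS}(t) = \frac{1}{n}\sum_{i=1}^{n}\bigl(\mathbf{1}\{T_i > t\} - \hat{S}(t \mid x_i)\bigr)^{2},
```

where \hat{r}_i is the predicted risk score and \delta_i the event indicator.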
21

Hong, Fangxin, Brad S. Kahl, and Robert Gray. "Incremental Value in Outcome Prediction with Molecular Signatures in Diffuse Large B-Cell Lymphoma,." Blood 118, no. 21 (November 18, 2011): 3687. http://dx.doi.org/10.1182/blood.v118.21.3687.3687.

Abstract:
Abstract Abstract 3687 INTRODUCTION: Multiple gene expression-based biomarkers have been identified in diffuse large B-cell lymphoma (DLBCL) that are predictive for survival outcomes. Most studies assess predictive significance based on p-value from multivariate Cox regression; few investigations have evaluated the incremental usefulness of these biomarkers in risk prediction. Using the recently developed concordance measures (e.g., C-statistics) on censored survival data, we assessed the usefulness of two published gene-based risk signatures and compared them to the known clinical prognostic factors; with an overall goal of investigating the added value. METHOD: The added value of biomarkers was assessed by C-statistic and the Integrated Discrimination Improvement (IDI). The overall C-statistic is an estimated concordance between prediction and observation (event vs. non-event) - the probability that predicted risk score is higher for subject with earlier time of event. The IDI measures overall improvement in sensitivity and specificity. The signatures we selected are a six-gene predictor by Lossos et al. (2004) and a three-component signature (∼400 genes) by Lenz et al. (2008). We used the Lenz dataset which include 233 patients with DLBCL who received R-CHOP therapy (median follow-up=2.81 yr), and focused on predicting 3-year survival outcome (42% censored). Clinical prognostic factors evaluated are the traditional IPI components (stage, performance status, age, LDH, and number of extra nodal sites). RESULTS: The C-statistic was 0.60 and 0.721 for six-gene predictor and three-component signature, suggesting good discrimination ability by molecular signature when used alone. However, the performance is inferior to IPI risk factors, with a C-statistic of 0.733. When integrating gene signatures with IPI risk factors, the C-statistic was increased to 0.744 and 0.762, an improvement of only 0.011 (95% CI, -0.049 to 0.071) and 0.029 (95% CI, -0.033 to 0.091) for six-gene predictor and three-component signature, respectively. Furthermore, assessment by IDI reveals an added value of only 0.011 (95% CI, -0.008 to 0.081) and 0.076 (95% CI, 0.013 to 0.16) for the two molecular signatures. Kaplan-Meier survival curves for the four quartile groups based on the predictor scores confirms the marginal benefit in risk prediction using molecular signatures. (Figure 1). CONCLUSIONS: These results indicate that molecular biomarkers are inferior to clinical factors for risk assessment in DLBCL and provide little added value in risk prediction. These calculations suggest we will need to consider more than gene expression to develop highly discriminatory risk prediction models. However, the study of gene expression and clinical outcomes retains considerable potential to enhance understanding of disease mechanisms and uncover new therapeutic targets. Disclosures: No relevant conflicts of interest to declare.
22

Rizopoulos, Dimitris, Jeremy M. G. Taylor, Joost Van Rosmalen, Ewout W. Steyerberg, and Johanna J. M. Takkenberg. "Personalized screening intervals for biomarkers using joint models for longitudinal and survival data." Biostatistics 17, no. 1 (August 28, 2015): 149–64. http://dx.doi.org/10.1093/biostatistics/kxv031.

Abstract:
Screening and surveillance are routinely used in medicine for early detection of disease and close monitoring of progression. Motivated by a study of patients who received a human tissue valve in the aortic position, in this work we are interested in personalizing screening intervals for longitudinal biomarker measurements. Our aim in this paper is twofold: first, to appropriately select the model to use at the time point at which the patient was still event-free, and second, based on this model, to select the optimal time point at which to plan the next measurement. To achieve these two goals, we combine information theory measures with optimal design concepts for the posterior predictive distribution of the survival process given the longitudinal history of the subject.
23

Chen, Bo-Huan, Hsiao-Jung Tseng, Wei-Ting Chen, Pin-Cheng Chen, Yu-Pin Ho, Chien-Hao Huang, and Chun-Yen Lin. "Comparing Eight Prognostic Scores in Predicting Mortality of Patients with Acute-On-Chronic Liver Failure Who Were Admitted to an ICU: A Single-Center Experience." Journal of Clinical Medicine 9, no. 5 (May 20, 2020): 1540. http://dx.doi.org/10.3390/jcm9051540.

Abstract:
Limited data is available on long-term outcome predictions for patients with acute-on-chronic liver failure (ACLF) in an intensive care unit (ICU) setting. Assessing the reliability and accuracy of several mortality prediction models for these patients is helpful. Two hundred forty-nine consecutive patients with ACLF and admittance to the liver ICU in a single center in northern Taiwan between December 2012 and March 2015 were enrolled in the study and were tracked until February 2017. Ninety-one patients had chronic hepatitis B-related cirrhosis. Clinical features and laboratory data were collected at or within 24 h of the first ICU admission course. Eight commonly used clinical scores in chronic liver disease were calculated. The primary endpoint was overall survival. Acute physiology and chronic health evaluation (APACHE) III and chronic liver failure consortium (CLIF-C) ACLF scores were significantly superior to other models in predicting overall mortality as determined by time-dependent receiver operating characteristic (ROC) curve analysis (area under the ROC curve (AUROC): 0.817). Subgroup analysis of patients with chronic hepatitis B-related cirrhosis displayed similar results. CLIF-C organ function (OF), CLIF-C ACLF, and APACHE III scores were statistically superior to the mortality probability model III at zero hours (MPM0-III) and the simplified acute physiology (SAP) III scores in predicting 28-day mortality. In conclusion, for 28-day and overall mortality prediction of patients with ACLF admitted to the ICU, APACHE III, CLIF-OF, and CLIF-C ACLF scores might outperform other models. Further prospective study is warranted.
24

Torkey, Hanaa, Mostafa Atlam, Nawal El-Fishawy, and Hanaa Salem. "A novel deep autoencoder based survival analysis approach for microarray dataset." PeerJ Computer Science 7 (April 21, 2021): e492. http://dx.doi.org/10.7717/peerj-cs.492.

Abstract:
Background Breast cancer is one of the major causes of mortality globally. Therefore, different Machine Learning (ML) techniques were deployed for computing survival and diagnosis. Survival analysis methods are used to compute survival probability and the most important factors affecting that probability. Most survival analysis methods are used to deal with clinical features (up to hundreds), hence applying survival analysis methods like cox regression on RNAseq microarray data with many features (up to thousands) is considered a major challenge. Methods In this paper, a novel approach applying autoencoder to reduce the number of features is proposed. Our approach works on features reconstruction, and removal of noise within the data and features with zero variance across the samples, which facilitates extraction of features with the highest variances (across the samples) that most influence the survival probabilities. Then, it estimates the survival probability for each patient by applying random survival forests and cox regression. Applying the autoencoder on thousands of features takes a long time, thus our model is applied to the Graphical Processing Unit (GPU) in order to speed up the process. Finally, the model is evaluated and compared with the existing models on three different datasets in terms of run time, concordance index, and calibration curve, and the most related genes to survival are discovered. Finally, the biological pathways and GO molecular functions are analyzed for these significant genes. Results We fine-tuned our autoencoder model on RNA-seq data of three datasets to train the weights in our survival prediction model, then using different samples in each dataset for testing the model. The results show that the proposed AutoCox and AutoRandom algorithms based on our feature selection autoencoder approach have better concordance index results comparing the most recent deep learning approaches when applied to each dataset. Each gene resulting from our autoencoder model weight is computed. The weights show the degree of effect for each gene upon the survival probability. For instance, four of the most survival-related experimentally validated genes are on the top of our discovered genes weights list, including PTPRG, MYST1, BG683264, and AK094562 for the breast cancer gene expression dataset. Our approach improves the survival analysis in terms of speeding up the process, enhancing the prediction accuracy, and reducing the error rate in the estimated survival probability.
25

Claret, L., J. Lu, Y. Sun, D. Stepan, and R. Bruno. "A modeling framework to simulate motesanib efficacy in thyroid cancer." Journal of Clinical Oncology 27, no. 15_suppl (May 20, 2009): e14553-e14553. http://dx.doi.org/10.1200/jco.2009.27.15_suppl.e14553.

Abstract:
e14553 Background: Motesanib is a highly selective, oral inhibitor of VEGF receptors 1, 2, and 3; PDGFR, and Kit with antiangiogenic and direct antitumor activity. A modeling framework that simulates clinical endpoints, including objective response rate (ORR; per RECIST) and progression-free survival (PFS), was developed to support clinical development of motesanib. This study evaluated the framework using results from a trial of motesanib in thyroid cancer (TC). Methods: Models for tumor growth inhibition (J Clin Oncol 24[18S]:abstract 6025, 2006) with drug effect driven by area under the curve (AUC) (as predicted by a population pharmacokinetic model), overall survival, and probability and duration of dose reductions were developed based on data from 93 differentiated TC (DTC) and 91 medullary TC patients who received motesanib monotherapy (125 mg once daily [QD]) in a phase 2 study (Horm Res 68[suppl 3]:28–9, 2007; NEJM 359:31–42, 2008). The full simulation framework was assessed in predicting dose intensity (starting dose of 125 mg QD), tumor size over time, ORR, and PFS. Dose-response simulations were performed in DTC patients. Results: Survival times followed a Weibull distribution with ECOG performance status, baseline tumor size, and change in tumor size from baseline at week 7 as predictors. The probability of dose reductions was dependent on time and AUC. Time to event Weibull models predicted the duration of dose reductions and dose interruptions. The models correctly predicted median daily exposure intensities up to week 24. The predicted ORR in DTC patients was 15.0% (95% prediction interval [PI], 7.5%-23.7%) compared with the observed ORR of 14.0%. Predicted median PFS was 40 weeks (95% PI, 32–49 wk) compared with the observed median PFS of 40 weeks. Dose- response simulations confirmed the appropriateness of 125-mg QD dosing in DTC: the modeling framework predicted no clinically relevant improvement in PFS would be obtained by dose intensification. Conclusions: This modeling framework (dose reduction/tumor growth inhibition/survival) will be an important tool to simulate clinical response and support clinical development decisions. Further evaluation of the model using additional datasets will be required. [Table: see text]
26

Veerkamp, R. F., S. Brotherstone, B. Engel, and T. H. E. Meuwissen. "Analysis of censored survival data using random regression models." Animal Science 72, no. 1 (February 2001): 1–10. http://dx.doi.org/10.1017/s1357729800055491.

Abstract:
Censoring of records is a problem in the prediction of breeding values for longevity, because breeding values are required before actual lifespan is known. In this study we investigated the use of random regression models to analyse survival data, because this method combines some of the advantages of a multitrait approach and the more sophisticated proportional hazards models. A model was derived for the binary representation of survival data and links with proportional hazards models and generalized linear models are shown. Variance components and breeding values were predicted using a linear approximation, including time-dependent fixed effects and random regression coefficients. Production records in lactations 1 to 5 were available on 24741 cows in the UK, all having had the opportunity to survive five lactations. The random regression model contained a linear regression on milk yield within herd (no. = 1417) by lactation number (no. = 4), Holstein percentage and year-month of calving effect (no. = 72). The additive animal genetic effects were modelled using orthogonal polynomials of order 1 to 4 with random coefficients and the error terms were fitted for each lactation separately, either correlated or not. Variance components from the full (i.e. uncensored) data set, were used to predict breeding values for survival in each lactation from both uncensored and randomly censored data. In the uncensored data, estimates of heritabilities for culling probability in each lactation ranged from 0·02 to 0·04. Breeding values for lifespan (calculated from the survival breeding values) had a range of 2·4 to 3·6 lactations and a standard deviation of 0·25. Correlations between predicted breeding values for 129 bulls, each with more than 30 daughters, from the various data sets ranged from 0·81 to 0·99 and were insensitive to the model used. It is concluded that random regression analysis models used for test-day records analysis of milk yield, might also be of use in the analysis of censored survival data.
27

Biglarian, Akbar, Enayatollah Bakhshi, Ahmad Reza Baghestani, Mahmood Reza Gohari, Mehdi Rahgozar, and Masoud Karimloo. "Nonlinear Survival Regression Using Artificial Neural Network." Journal of Probability and Statistics 2013 (2013): 1–7. http://dx.doi.org/10.1155/2013/753930.

Abstract:
Survival analysis methods deal with a particular type of data: the waiting time until the occurrence of an event. One common method to analyze this sort of data is Cox regression. Sometimes the underlying assumptions of the model are not true, such as non-proportionality for the Cox model. In model building, choosing an appropriate model depends on the complexity and characteristics of the data that affect the appropriateness of the model. One strategy that is now used frequently is the artificial neural network (ANN) model, which requires minimal assumptions. This study aimed to compare predictions of the ANN and Cox models on simulated data sets, in which the average censoring rate was varied from 20% to 80% in both a simple and a complex model. All simulations and comparisons were performed with R 2.14.1.
28

Choi, Jiin, Stewart J. Anderson, Thomas J. Richards, and Wesley K. Thompson. "Prediction of transplant-free survival in idiopathic pulmonary fibrosis patients using joint models for event times and mixed multivariate longitudinal data." Journal of Applied Statistics 41, no. 10 (April 23, 2014): 2192–205. http://dx.doi.org/10.1080/02664763.2014.909784.

29

Sparapani, Rodney, Brent R. Logan, Robert E. McCulloch, and Purushottam W. Laud. "Nonparametric competing risks analysis using Bayesian Additive Regression Trees." Statistical Methods in Medical Research 29, no. 1 (January 7, 2019): 57–77. http://dx.doi.org/10.1177/0962280218822140.

Abstract:
Many time-to-event studies are complicated by the presence of competing risks. Such data are often analyzed using Cox models for the cause-specific hazard function or Fine and Gray models for the subdistribution hazard. In practice, regression relationships in competing risks data are often complex and may include nonlinear functions of covariates, interactions, high-dimensional parameter spaces and nonproportional cause-specific, or subdistribution, hazards. Model misspecification can lead to poor predictive performance. To address these issues, we propose a novel approach: flexible prediction modeling of competing risks data using Bayesian Additive Regression Trees (BART). We study the simulation performance in two-sample scenarios as well as a complex regression setting, and benchmark its performance against standard regression techniques as well as random survival forests. We illustrate the use of the proposed method on a recently published study of patients undergoing hematopoietic stem cell transplantation.
30

Seibold, Heidi, Achim Zeileis, and Torsten Hothorn. "Individual treatment effect prediction for amyotrophic lateral sclerosis patients." Statistical Methods in Medical Research 27, no. 10 (February 21, 2017): 3104–25. http://dx.doi.org/10.1177/0962280217693034.

Full text
Abstract:
A treatment for a complicated disease might be helpful for some but not all patients, which makes predicting the treatment effect for new patients important yet challenging. Here we develop a method for predicting the treatment effect based on patient characteristics and use it for predicting the effect of the only drug (Riluzole) approved for treating amyotrophic lateral sclerosis. Our proposed method of model-based random forests detects similarities in the treatment effect among patients and on this basis computes personalised models for new patients. The entire procedure focuses on a base model, which usually contains the treatment indicator as a single covariate and takes the survival time or a health or treatment success measurement as primary outcome. This base model is used both to grow the model-based trees within the forest, in which the patient characteristics that interact with the treatment are split variables, and to compute the personalised models, in which the similarity measurements enter as weights. We applied the personalised models using data from several clinical trials for amyotrophic lateral sclerosis from the Pooled Resource Open–Access Clinical Trials database. Our results indicate that some amyotrophic lateral sclerosis patients benefit more from the drug Riluzole than others. Our method allows gradually shifting from stratified medicine to personalised medicine and can also be used in assessing the treatment effect for other diseases studied in a clinical trial.
APA, Harvard, Vancouver, ISO, and other styles
31

Yang, Ming, Sheng Luo, and Stacia DeSantis. "Bayesian quantile regression joint models: Inference and dynamic predictions." Statistical Methods in Medical Research 28, no. 8 (July 2, 2018): 2524–37. http://dx.doi.org/10.1177/0962280218784757.

Full text
Abstract:
In the traditional joint models of a longitudinal and time-to-event outcome, a linear mixed model assuming normal random errors is used to model the longitudinal process. However, in many circumstances, the normality assumption is violated and the linear mixed model is not an appropriate sub-model in the joint models. In addition, as the linear mixed model models the conditional mean of the longitudinal outcome, it is not appropriate if clinical interest lies in making inference or prediction on median, lower, or upper ends of the longitudinal process. To this end, quantile regression provides a flexible, distribution-free way to study covariate effects at different quantiles of the longitudinal outcome and it is robust not only to deviation from normality, but also to outlying observations. In this article, we present and advocate the linear quantile mixed model for the longitudinal process in the joint models framework. Our development is motivated by a large prospective study of Huntington’s disease where primary clinical interest is in utilizing longitudinal motor scores and other early covariates to predict the risk of developing Huntington’s disease. We develop a Bayesian method based on the location–scale representation of the asymmetric Laplace distribution, assess its performance through an extensive simulation study, and demonstrate how this linear quantile mixed model-based joint models approach can be used for making subject-specific dynamic predictions of survival probability.
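The Bayesian machinery referred to above hinges on the location-scale mixture representation of the asymmetric Laplace distribution. One standard way to write it for quantile level tau (notation here is generic, not the paper's) is:

```latex
y_{ij} = \mu_{ij} + \theta \,\sigma\, e_{ij} + \psi \,\sigma \sqrt{e_{ij}}\; z_{ij},
\qquad e_{ij} \sim \mathrm{Exp}(1), \quad z_{ij} \sim N(0,1),
\qquad \theta = \frac{1 - 2\tau}{\tau(1 - \tau)}, \quad \psi^{2} = \frac{2}{\tau(1 - \tau)}.
```

Conditional on e_ij the longitudinal sub-model is Gaussian, which is what makes standard MCMC updates tractable within the joint model.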
APA, Harvard, Vancouver, ISO, and other styles
32

Korepanova, Natalia, Heidi Seibold, Verena Steffen, and Torsten Hothorn. "Survival forests under test: Impact of the proportional hazards assumption on prognostic and predictive forests for amyotrophic lateral sclerosis survival." Statistical Methods in Medical Research 29, no. 5 (July 15, 2019): 1403–19. http://dx.doi.org/10.1177/0962280219862586.

Full text
Abstract:
We investigate the effect of the proportional hazards assumption on prognostic and predictive models of the survival time of patients suffering from amyotrophic lateral sclerosis. We theoretically compare the underlying model formulations of several variants of survival forests and implementations thereof, including random forests for survival, conditional inference forests, Ranger, and survival forests with L1 splitting, with two novel variants, namely distributional and transformation survival forests. Theoretical considerations explain the low power of log-rank-based splitting in detecting patterns in non-proportional hazards situations in survival trees and corresponding forests. This limitation can potentially be overcome by the alternative split procedures suggested herein. We empirically investigated this effect using simulation experiments and a re-analysis of the Pooled Resource Open-Access ALS Clinical Trials database of amyotrophic lateral sclerosis survival, giving special emphasis to both prognostic and predictive models.
APA, Harvard, Vancouver, ISO, and other styles
33

Rui Lam, Amanda Yun, Min Min Chan, David Carmody, Ming Ming Teh, Yong Mong Bee, Wynne Hsu, Mong Li Lee, See Kiong Ng, and Marcus Eng Hock Ong. "Predicting Major Adverse Cardiovascular Events in Asian Type 2 Diabetes Patients With Lasso-Cox Regression." Journal of the Endocrine Society 5, Supplement_1 (May 1, 2021): A417—A418. http://dx.doi.org/10.1210/jendso/bvab048.852.

Full text
Abstract:
Background: South-East Asia has seen a dramatic increase in type 2 diabetes (T2D). Risk prediction models for major adverse cardiovascular events (MACE) identify patients who may benefit most from intensive prevention strategies. Existing risk prediction models for T2D were developed mainly in Caucasian populations, limiting their generalizability to Asian populations. We developed a Lasso-Cox regression model to predict the 5-year risk of incident MACE in Asian patients with T2D using data from the largest diabetes registry in Singapore. Methodology: The diabetes registry contained public healthcare data from 9 primary healthcare centers, 4 hospitals and 3 national specialty centers. Data from 120,131 T2D subjects without MACE at baseline, from 2008 to 2018, were used for model development and validation. Patients with less than 5 years of follow-up data were excluded. Lasso-Cox, a semi-parametric variant of the Cox proportional hazards model with l1-regularization, was used to predict the individual survival distribution of incident MACE. A total of 69 features within electronic health records, including demographic data, vital signs, laboratory tests, and prescriptions for blood pressure, lipid and glucose-lowering medication, were supplied to the model. Regression shrinkage and selection via the lasso method was used to identify variables associated with incident MACE. Identified variables were used to generate individual survival probability curves. Incident MACE was defined as the first occurrence of nonfatal myocardial infarction, nonfatal stroke, or CV disease-related death. Results: A total of 12,535 (10.4%) subjects developed MACE between 2008 and 2018. Model performance was evaluated by time-dependent concordance index and Brier score at 1, 2 and 5 years. The results of 5-fold cross validation show that the model displayed good discrimination, achieving time-dependent C-statistics of 0.746±0.005, 0.742±0.003 and 0.738±0.002 at 1, 2 and 5 years respectively. The model demonstrated low Brier scores of 0.0355±0.0004, 0.0601±0.0011 and 0.104±0.004 at 1, 2 and 5 years respectively, indicating good calibration. Factors most predictive of MACE were age and a history of hypertension and hyperlipidemia. Conclusions: We have developed a risk prediction model for MACE in Asian T2D using a large Singaporean T2D cohort, which can be used to support clinical decision-making. The individual survival probability estimates achieve an average C-statistic of 0.742 and are well-calibrated at 1, 2 and 5 years.
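As a schematic of the modelling step described here, the sketch below fits an L1-penalised (lasso) Cox model, assuming a recent lifelines version in which CoxPHFitter accepts penalizer and l1_ratio; the handful of simulated features stand in for the study's 69 registry variables.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(42)
n = 1000
df = pd.DataFrame({
    "age": rng.normal(60, 10, n),
    "sbp": rng.normal(135, 15, n),
    "hba1c": rng.normal(7.5, 1.2, n),
    "noise1": rng.normal(size=n),            # irrelevant feature the lasso should shrink
    "noise2": rng.normal(size=n),
})
risk = 0.03 * (df["age"] - 60) + 0.02 * (df["sbp"] - 135) + 0.3 * (df["hba1c"] - 7.5)
t_event = rng.exponential(scale=8.0 / np.exp(risk.to_numpy()))
t_cens = rng.uniform(1, 10, n)
df["time"] = np.minimum(t_event, t_cens)
df["mace"] = (t_event <= t_cens).astype(int)

cph = CoxPHFitter(penalizer=0.05, l1_ratio=1.0)   # pure lasso penalty
cph.fit(df, duration_col="time", event_col="mace")
print(cph.params_)                 # noise coefficients should be shrunk toward zero
print("Harrell's C:", cph.concordance_index_)

# Individual 5-year survival probabilities for the first few subjects
print(cph.predict_survival_function(df.head(3), times=[5.0]))
```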
APA, Harvard, Vancouver, ISO, and other styles
34

Fang, Hong-Bin, Tong Tong Wu, Aaron P. Rapoport, and Ming Tan. "Survival analysis with functional covariates for partial follow-up studies." Statistical Methods in Medical Research 25, no. 6 (July 11, 2016): 2405–19. http://dx.doi.org/10.1177/0962280214523586.

Full text
Abstract:
Predictive or prognostic analysis plays an increasingly important role in the era of personalized medicine to identify subsets of patients whom the treatment may benefit the most. Although various time-dependent covariate models are available, such models require that covariates be followed over the whole follow-up period. This article studies a new class of functional survival models where the covariates are only monitored in a time interval that is shorter than the whole follow-up period. This paper is motivated by the analysis of a longitudinal study on advanced myeloma patients who received stem cell transplants and T cell infusions after the transplants. The absolute lymphocyte cell counts were collected serially during hospitalization. Those patients are still followed up if they are alive after hospitalization, but their absolute lymphocyte cell counts cannot be measured after that. Another complication is that absolute lymphocyte cell counts are sparsely and irregularly measured. The conventional method using the Cox model with time-varying covariates is not applicable because of the different lengths of observation periods. Analysis based on each single observation obviously underutilizes available information and, more seriously, may yield misleading results. This so-called partial follow-up study design represents an increasingly common predictive modeling problem in which serial measurements of multiple biomarkers are available up to a certain time point that is shorter than the total length of follow-up. We therefore propose a solution to the partial follow-up design. The new method combines functional principal components analysis and survival analysis with selection of those functional covariates. It also has the advantage of handling sparse and irregularly measured longitudinal observations of covariates and measurement errors. Our analysis based on functional principal components reveals that it is the patterns of the trajectories of absolute lymphocyte cell counts, rather than the actual counts, that affect a patient's disease-free survival time.
APA, Harvard, Vancouver, ISO, and other styles
35

Durand, J., S. Pourchet, and F. Goldwasser. "Prediction of short-term outcome in terminally ill cancer patients." Journal of Clinical Oncology 25, no. 18_suppl (June 20, 2007): 19572. http://dx.doi.org/10.1200/jco.2007.25.18_suppl.19572.

Full text
Abstract:
19572 Background: Oncologists tend to overestimate survival of advanced cancer patients (pts), emphasizing the need for objective prognostic models to prevent futility. Methods: From January 2004 to May 2006, a prospective single-center study was done in terminally ill cancer pts referred to a palliative care unit. Evaluation at admission included physical examination and the following routine blood tests: total blood count, hemostasis (Prothrombin Time (PT), activated Partial Thromboplastin Time (aPTT)), inflammatory and nutritional proteins (fibrinogen, ferritin, PINI = [alpha-1 acid glycoprotein x C-reactive protein (CRP)] / [albumin x prealbumin]), serum lipids (total and HDL cholesterol), biochemistry (urea, serum creatinine, calcium, total bilirubin, vitamin B12) and liver enzymes (AST, ALT, LDH). Fatal outcome within the first 48 hours and lack of blood tests were exclusion criteria. A multivariate logistic regression analysis was used to estimate the probability of fatal outcome within 2 weeks (wks) based on baseline patient characteristics. Results: 285 consecutive pts were hospitalized; 246 were evaluable: 133 men and 113 women (median age: 64 years; range: 18–94). Primary tumor was gynecological (18.3%), gastro-intestinal (16.3%), pulmonary (13.4%), hepatic or biliary (13.8%), urological (11.8%), head and neck (9.3%), haematological (3.3%) or other (13.8%). In univariate analysis, survival ≤ 2 wks correlated with male gender (p<0.001), Karnofsky Index (KI) (p<0.001), leucocytes (WBC) (p<0.001), aPTT (p=0.04), urea (p=0.011), total bilirubin (p=0.004), AST (p<0.001), LDH (p=0.008), albumin (p=0.042), prealbumin (p=0.004), CRP (p=0.028), ferritin (p=0.007), vitamin B12 (p=0.042) and PINI (p=0.001). In multivariate analysis, male gender (p=0.034), cutaneous metastases (p=0.012), KI ≤ 20% (p=0.002), WBC > 7.5G/l (p=0.012), fibrinogen > 6.5 g/l (p=0.021), LDH > 2N (p=0.035) and PINI > 20 (p=0.007) were predictive. A mathematical model was built to estimate the probability of fatal outcome within 2 wks. Conclusions: We created a tool which may help physicians better estimate survival of terminally ill cancer pts and prevent unnecessary overtreatment. Our score needs prospective validation in a multicenter study. No significant financial relationships to disclose.
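The mathematical model referred to in the conclusions is, in essence, a multivariable logistic regression returning a probability of death within two weeks. A minimal sketch of that construction (simulated data; the binary flags merely mirror the reported cut-offs) could look as follows.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 250
df = pd.DataFrame({
    "male": rng.integers(0, 2, n),
    "ki_le_20": rng.integers(0, 2, n),          # Karnofsky Index <= 20%
    "wbc_gt_7_5": rng.integers(0, 2, n),        # WBC > 7.5 G/l
    "pini_gt_20": rng.integers(0, 2, n),        # PINI > 20
})
lin = -1.5 + 0.6 * df["male"] + 1.0 * df["ki_le_20"] + 0.7 * df["wbc_gt_7_5"] + 0.9 * df["pini_gt_20"]
df["death_2wk"] = (rng.random(n) < 1 / (1 + np.exp(-lin))).astype(int)

fit = smf.logit("death_2wk ~ male + ki_le_20 + wbc_gt_7_5 + pini_gt_20", data=df).fit()
print(fit.summary())

# Predicted probability of death within two weeks for a new patient profile
new_patient = pd.DataFrame({"male": [1], "ki_le_20": [1], "wbc_gt_7_5": [0], "pini_gt_20": [1]})
print(fit.predict(new_patient))
```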
APA, Harvard, Vancouver, ISO, and other styles
36

Wu, Lunpo, Hongjuan Zheng, Jianfei Fu, Jinlin Du, Shu Zheng, and Liangjing Wang. "The “effect” of T classification on colorectal liver metastasis." Journal of Clinical Oncology 37, no. 15_suppl (May 20, 2019): e15049-e15049. http://dx.doi.org/10.1200/jco.2019.37.15_suppl.e15049.

Full text
Abstract:
e15049 Background: T classification is considered a detailed and credible categorization of the depth of tumor invasion. Generally, as the T category increases, the risk of metastases should rise continuously. However, there is a group of metastatic patients with early T classification who would be expected to have a low metastatic probability. Our study aims to characterize T classification in colorectal liver metastasis (CRLM) in both clinical and biological aspects, and to explore preoperative predictors in order to develop a convenient individual assessment model that helps clinicians identify these patients, whose prognosis is extremely poor. Methods: Tissue samples of primary colorectal cancer were obtained at our center. Patients with CRLM during 2010 to 2014 were identified from the Surveillance, Epidemiology, and End Results (SEER) database. Kaplan-Meier and Cox models were used to analyze the survival differences. We identified preoperative prognostic factors based on the Cox analysis and constructed a nomogram model. The predictive accuracy and discriminative ability were measured by the concordance index (C-index) and calibration curves. Results: The mRNA array from our hospital showed an obvious difference between the T1/2N0M1 subgroup and the T3/4N0M1 subgroup. Patients with early T classification (T1) more often had tumors located in the rectum, with good differentiation, no lymph node metastasis and a higher CEA level. Further survival analysis indicated that early T classification (T1) was an independent prognostic factor associated with poorer survival. When lymph node (N) status was taken into consideration, patients with T1 N+ disease had obviously better survival than T1 N0 patients. A clinical nomogram was constructed based on preoperative factors. The calibration curves for the probability of 1-, 2-, and 5-year overall survival showed good agreement between nomogram prediction and actual observation. Conclusions: The prognosis of T1M1 is markedly poorer than that of T3/4M1. The prognosis of T1N+ is better than that of T1N0. It is time to pay more attention to the high-risk monitoring and screening of T1 in early colon cancer.
APA, Harvard, Vancouver, ISO, and other styles
37

Morin, Amy A., Alisha Albert-Green, Douglas G. Woolford, and David L. Martell. "The use of survival analysis methods to model the control time of forest fires in Ontario, Canada." International Journal of Wildland Fire 24, no. 7 (2015): 964. http://dx.doi.org/10.1071/wf14158.

Full text
Abstract:
This paper presents the results from employing survival analysis methods to model the probability distribution of the control time of forest fires. The Kaplan–Meier estimator, log–location–scale models, accelerated failure time models, and Cox proportional hazards (PH) models are described. Historical lightning and people-caused forest fire data from the Province of Ontario, Canada from 1989 through 2004 are employed to illustrate the use of the Cox PH model. We demonstrate how this methodology can be used to examine the association between the control time of a suppressed forest fire and local factors such as weather, vegetation and fuel moisture, as well as fire management variables including the response time between when a fire is reported and the initiation of suppression action. Significant covariates common to both the lightning and people-caused models were the size of the fire at the onset of initial attack, the Fine Fuel Moisture Code and the Initial Spread Index. The response time was also a significant predictor for the control time of lightning-caused fires, whereas the Drought Code and time of day of initial attack were significant for people-caused fires. Larger values of the covariates in these models were associated with larger survival probabilities.
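Alongside the Cox proportional hazards model illustrated in the paper, the accelerated failure time family mentioned above can be sketched as follows (lifelines' WeibullAFTFitter assumed; covariate names echo the paper but the data are simulated).

```python
import numpy as np
import pandas as pd
from lifelines import WeibullAFTFitter

rng = np.random.default_rng(7)
n = 400
df = pd.DataFrame({
    "size_at_ia": rng.lognormal(mean=0.0, sigma=1.0, size=n),   # ha at initial attack
    "ffmc": rng.normal(88, 5, n),                               # Fine Fuel Moisture Code
    "isi": rng.gamma(2.0, 2.0, n),                              # Initial Spread Index
    "response_time": rng.exponential(2.0, n),                   # hours
})
scale = np.exp(1.0 + 0.15 * np.log1p(df["size_at_ia"]) + 0.05 * (df["ffmc"] - 88) + 0.04 * df["isi"])
control_time = scale * rng.weibull(1.3, n)
censor = rng.uniform(5, 60, n)                 # record ends before control is achieved
df["time"] = np.minimum(control_time, censor)
df["controlled"] = (control_time <= censor).astype(int)

aft = WeibullAFTFitter()
aft.fit(df, duration_col="time", event_col="controlled")
aft.print_summary()
print(aft.predict_median(df.head(3)))          # predicted median control times (hours)
```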
APA, Harvard, Vancouver, ISO, and other styles
38

Wijenayake, Pavithra Rangani, and Takuya Hiroshima. "Prediction of Tree Age Distribution Based on Survival Analysis in Natural Forests: A Case Study of Preserved Permanent Plots in the University of Tokyo Hokkaido Forest, Northern Japan." Environmental Sciences Proceedings 3, no. 1 (November 13, 2020): 50. http://dx.doi.org/10.3390/iecf2020-08077.

Full text
Abstract:
In forests, tree mortality is strongly determined by complex interactions between multiple biotic and abiotic factors, and the analysis of tree mortality is widely implemented in forest management. However, age-based tree mortality remains poorly evaluated quantitatively at the stand scale for uneven-aged forests. The objective of this study was to predict the age distributions of living and dead trees based on survival analyses. We used a combination of tree-ring and census data from two preserved permanent plots in the University of Tokyo Hokkaido Forest, in pan-mixed and sub-boreal natural forests of Hokkaido, northern Japan, to derive site-specific survival models. All the living trees (diameter at breast height ≥5 cm in 2009) were targeted to identify tree ages using a RESISTOGRAPH, a semi-nondestructive device. Periodic tree age data with a 10-year age class were used during the observation periods of 2009–2019, and all the changes (i.e., death and new ingrowth) during these periods were recorded. We verified in advance the time stability of the survival functions between periods. The results show that parametric survival analysis with the Weibull distribution successfully yielded the mortality rate, mortality probability, and survival probability in each plot. Finally, we predicted the future age class distributions of living and dead trees in each plot based on the survival analysis results and discussed their management implications. The estimated mean lifetimes can facilitate decisions on the selection of trees for harvesting in uneven-aged forest management based on selective cutting.
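A bare-bones version of the parametric step described here, fitting a Weibull survival model to censored tree ages and reading off survival probability and mean lifetime, might look like this (lifelines assumed; ages are simulated stand-ins for the ring-count data).

```python
import numpy as np
from scipy.special import gamma
from lifelines import WeibullFitter

rng = np.random.default_rng(11)
n = 500
true_age_at_death = 180 * rng.weibull(1.8, n)                 # years
observed_age = np.minimum(true_age_at_death, rng.uniform(20, 250, n))
died = (true_age_at_death <= observed_age).astype(int)        # 0 = still alive (censored)

wf = WeibullFitter().fit(observed_age, died)
print("lambda (scale):", wf.lambda_, "rho (shape):", wf.rho_)
print("S(100 yr):", wf.survival_function_at_times(100).values)
# Mean lifetime of a Weibull with S(t) = exp(-(t/lambda)**rho)
print("mean lifetime:", wf.lambda_ * gamma(1 + 1 / wf.rho_))
```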
APA, Harvard, Vancouver, ISO, and other styles
39

Wierda, William G., S. OBrien, X. Wang, A. Ferrajoli, S. Faderl, D. Thomas, F. Ravandi, et al. "Weighted Prognostic Models for Survival in Untreated and Previously Treated Patients with CLL." Blood 106, no. 11 (November 16, 2005): 5012. http://dx.doi.org/10.1182/blood.v106.11.5012.5012.

Full text
Abstract:
The clinical course for patients with chronic lymphocytic leukemia (CLL) is remarkably variable. Patient characteristics have been correlated with meaningful clinical endpoints such as time to treatment, response to treatment, progression-free survival, and overall survival (OS) for patients with CLL. Identification of such characteristics enables informed discussion about timing of treatment and treatment options, and can provide insight into the basic biology of the disease. The Rai staging system identifies risk groups for survival based on characteristics in untreated patients. However, within each stage there is still heterogeneity in survival. In addition to factors used in clinical staging, several other characteristics have been correlated with survival, including: age, gender, pattern of marrow infiltration, lymphocyte doubling time, presence of prolymphocytes, presence of chromosome abnormalities, elevated serum levels of beta-2 microglobulin (ß-2M), thymidine kinase, and soluble CD23, IgVH mutation status, and expression of ZAP70 and CD38 by leukemia cells. Alone, each of these independent prognostic factors, including stage, has limited utility in predicting overall survival. A nomogram is a graphic representation of a statistical model with scales for calculating the cumulative effect of weighted variables on the probability of a particular outcome. The strength of using nomograms is that they combine multiple independent variables to predict an outcome and enable appreciation of the prognostic weight of each variable. They are useful in counseling patients and in developing expectations for clinical trials, as well as in identifying patients "at risk" who should be targeted for aggressive therapy or investigational approaches. We retrospectively evaluated 1607 chemotherapy-naive and 1602 previously treated patients with CLL who presented to MD Anderson Cancer Center to identify independent characteristics that could be used to predict OS. Based on significant characteristics identified in univariate analyses, a multivariate model was developed for OS for each patient group. Characteristics evaluated included age, gender, Rai stage, performance status (PS), # of affected node sites, WBC count, absolute lymphocyte count (ALC), HGB, PLT, ALB, serum alkaline phosphatase (AP) and LDH, % BM lymphocytes, spleen and liver size, ß-2M, # of prior therapies and refractoriness to fludarabine for previously treated patients. Univariate and multivariate analyses identified several patient characteristics at presentation that predicted overall survival for each patient group. The final multivariate Cox proportional hazards model included the following characteristics for chemotherapy-naive patients: age, ALC, LDH, β2M, Rai stage, and # of involved lymph node groups. For previously treated patients, the final multivariate Cox hazards model included the following significant characteristics: β2M, # prior Rx, age, IgM, PLT, and HGB. Nomograms were constructed for each group using the respective significant characteristics to estimate 2-, 5-, and 10-year survival probability and median survival time. These prognostic models may help patients and clinicians in decision-making as well as in clinical research and the design of clinical trials. Nomograms are powerful tools for predicting important clinical endpoints including survival, for counseling patients, and for developing and analyzing clinical trials.
APA, Harvard, Vancouver, ISO, and other styles
40

Dueñas-Jurado, J. M., P. A. Gutiérrez, A. Casado-Adam, F. Santos-Luna, A. Salvatierra-Velázquez, S. Cárcel, C. J. C. Robles-Arista, and C. Hervás-Martínez. "New models for donor-recipient matching in lung transplantations." PLOS ONE 16, no. 6 (June 4, 2021): e0252148. http://dx.doi.org/10.1371/journal.pone.0252148.

Full text
Abstract:
Objective One of the main problems of lung transplantation is the shortage of organs as well as reduced survival rates. In the absence of an international standardized model for lung donor-recipient allocation, we set out to develop such a model based on the characteristics of past experiences with lung donors and recipients with the aim of improving the outcomes of the entire transplantation process. Methods This was a retrospective analysis of 404 lung transplants carried out at the Reina Sofía University Hospital (Córdoba, Spain) over 23 years. We analyzed various clinical variables obtained via our experience of clinical practice in the donation and transplantation process. These were used to create various classification models, including classical statistical methods and also incorporating newer machine-learning approaches. Results The proposed model represents a powerful tool for donor-recipient matching, which in this current work, exceeded the capacity of classical statistical methods. The variables that predicted an increase in the probability of survival were: higher pre-transplant and post-transplant functional vital capacity (FVC), lower pre-transplant carbon dioxide (PCO2) pressure, lower donor mechanical ventilation, and shorter ischemia time. The variables that negatively influenced transplant survival were low forced expiratory volume in the first second (FEV1) pre-transplant, lower arterial oxygen pressure (PaO2)/fraction of inspired oxygen (FiO2) ratio, bilobar transplant, elderly recipient and donor, donor-recipient graft disproportion requiring a surgical reduction (Tailor), type of combined transplant, need for cardiopulmonary bypass during the surgery, death of the donor due to head trauma, hospitalization status before surgery, and female and male recipient donor sex. Conclusions These results show the difficulty of the problem which required the introduction of other variables into the analysis. The combination of classical statistical methods and machine learning can support decision-making about the compatibility between donors and recipients. This helps to facilitate reliable prediction and to optimize the grafts for transplantation, thereby improving the transplanted patient survival rate.
APA, Harvard, Vancouver, ISO, and other styles
41

Petros, Firas G., Aradhana M. Venkatesan, Diana Kaya, Chaan S. Ng, Bryan M. Fellman, Jose A. Karam, Christopher G. Wood, and Surena F. Matin. "Conditional survival and landmark analysis for patients with small renal masses undergoing active surveillance at a tertiary care center." Journal of Clinical Oncology 36, no. 6_suppl (February 20, 2018): 609. http://dx.doi.org/10.1200/jco.2018.36.6_suppl.609.

Full text
Abstract:
609 Background: Conditional survival can provide guidance for patients once they have survived a period of time after diagnosis of their disease. We determine conditional survival for patients with small renal masses (SRM) undergoing active surveillance (AS). Methods: Patients were enrolled in a prospective AS registry at our institution between May 2005 and January 2016. Patients with localized SRM ≤4cm were included, with serial radiologic imaging available in-house for re-review. Overall survival (OS) was estimated using the Kaplan-Meier method and modeled via Cox proportional hazards models. The primary end points analyzed were the conditional probability of survival and tumor growth over time. Landmark analysis was used to evaluate survival outcomes. Results: A total of 272 patients were included in this analysis. Mean initial tumor size was 1.74 ± 0.77 cm and mean tumor size closest to the 2-year mark was 1.97 ± 0.83 cm. The likelihood of continued survival to 5 years improved after the 2-year landmark was reached. Patients with tumors < 3cm who survived the first 2-years on AS had a 0.84-0.85 chance of surviving to 5 years, and if they survived 3 years, the probability of surviving to 5 years improved to 0.91. Multivariable Cox proportional hazards analysis of survival revealed eGFR, Charlson comorbidity index (CCI), and tumor size of 3-4cm were significantly predictive of OS both at baseline and at 2-year mark (all p < 0.05). Patients with a tumor size 3-4 cm were at a greater risk of non-RCC death (HR > 3.5; p ≤ 0.001). A linear mixed effects model revealed slow tumor growth (beta: 0.12; p < 0.001) for tumors < 3cm. Adjusted tumor size predictions disclosed parallel growth rates for SRM of < 2cm and 2-2.99cm with insignificant difference (p = 0.969). Conclusions: Our study provides insight into the survival of patients with SRM on AS who have already survived a certain period of time. The conditional survival probability of patients with SRM < 3cm on AS improved after the initial 2 years, suggesting a role for re-counseling for those who survive to the 2-year landmark. Patient factors (renal function and CCI) were significantly associated with survival at baseline and at the 2-year landmark.
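The conditional survival quantity at the heart of this analysis is simply a ratio of marginal survival probabilities, S(5 | 2) = S(5) / S(2). A minimal sketch of that computation from a Kaplan-Meier fit (lifelines assumed; simulated follow-up times, not the registry data) is shown below.

```python
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(5)
n = 272
t_death = rng.exponential(scale=12.0, size=n)        # years
t_cens = rng.uniform(0.5, 10.0, size=n)
time = np.minimum(t_death, t_cens)
event = (t_death <= t_cens).astype(int)

kmf = KaplanMeierFitter().fit(time, event)
s2, s5 = float(kmf.predict(2.0)), float(kmf.predict(5.0))
print("S(5):", round(s5, 3))
print("S(5 | survived 2 years):", round(s5 / s2, 3))  # conditional survival
```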
APA, Harvard, Vancouver, ISO, and other styles
42

Ambrose, Paul G., Alan Forrest, William A. Craig, Chistopher M. Rubino, Sujata M. Bhavnani, George L. Drusano, and Henry S. Heine. "Pharmacokinetics-Pharmacodynamics of Gatifloxacin in a Lethal Murine Bacillus anthracis Inhalation Infection Model." Antimicrobial Agents and Chemotherapy 51, no. 12 (September 17, 2007): 4351–55. http://dx.doi.org/10.1128/aac.00251-07.

Full text
Abstract:
We determined the pharmacokinetic-pharmacodynamic (PK-PD) measure most predictive of gatifloxacin efficacy and the magnitude of this measure necessary for survival in a murine Bacillus anthracis inhalation infection model. We then used population pharmacokinetic models for gatifloxacin and simulation to identify dosing regimens with high probabilities of attaining exposures likely to be efficacious in adults and children. In this work, 6- to 8-week-old nonneutropenic female BALB/c mice received aerosol challenges of 50 to 75 50% lethal doses of B. anthracis (Ames strain, for which the gatifloxacin MIC is 0.125 mg/liter). Gatifloxacin was administered at 6- or 8-h intervals beginning 24 h postchallenge for 21 days, and dosing was designed to produce profiles mimicking fractionated concentration-time profiles for humans. Mice were evaluated daily for survival. Hill-type models were fitted to survival data. To identify potentially effective dosing regimens, adult and pediatric population pharmacokinetic models for gatifloxacin and Monte Carlo simulation were used to generate 5,000 individual patient exposure estimates. The ratio of the area under the concentration-time curve from 0 to 24 h (AUC0-24) to the MIC of the drug for the organism (AUC0-24/MIC ratio) was the PK-PD measure most predictive of survival (R2 = 0.96). The 50% effective dose (ED50) and the ED90 and ED99 corresponded to AUC0-24/MIC ratios of 11.5, 15.8, and 30, respectively, where the maximum effect was 97% survival. Simulation results indicate that a daily gatifloxacin dose of 400 mg for adults and 10 mg/kg of body weight for children gives a 100% probability of attaining the PK-PD target (ED99). Sensitivity analyses suggest that the probability of PK-PD target attainment in adults and children is not affected by increases in MICs for strains of B. anthracis to levels as high as 0.5 mg/liter.
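The Monte Carlo step described here reduces to drawing exposures from a population distribution and counting how often the exposure/MIC ratio clears the target. The sketch below uses an assumed, purely illustrative lognormal AUC0-24 distribution rather than the published gatifloxacin population model.

```python
import numpy as np

rng = np.random.default_rng(2024)
n_subjects = 5000

# Hypothetical population exposure: geometric mean AUC0-24 of 35 mg*h/L
# with roughly 30% between-subject variability on the log scale.
auc = rng.lognormal(mean=np.log(35.0), sigma=0.3, size=n_subjects)

target_ratio = 30.0                  # AUC0-24/MIC target quoted above (ED99)
for mic in [0.125, 0.25, 0.5, 1.0]:
    pta = np.mean(auc / mic >= target_ratio)   # probability of target attainment
    print(f"MIC {mic:>5} mg/L: P(AUC0-24/MIC >= {target_ratio:g}) = {pta:.3f}")
```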
APA, Harvard, Vancouver, ISO, and other styles
43

Kovtun, N. V., I. M. Motuziuk, and R. O. Ganzha. "Using Cox Regression to Forecast of Survival of Women with Multiple Malignant Neoplasms." Statistics of Ukraine 83, no. 4 (December 17, 2018): 65–71. http://dx.doi.org/10.31767/su.4(83)2018.04.08.

Full text
Abstract:
Recently, an increase in the incidence of multiple primary malignant neoplasms has been observed, specifically when two or more unrelated tumors originate from different organs and appear in the body simultaneously or sequentially, one after another. Over the past few years, the interval between the first and second reproductive cancer diagnosis has decreased six-fold, from 11 to just 2 years, while the probability of surviving the next 3 years at 8.5 years past the initial diagnosis has decreased from 0.995 to 0.562. Based on the analysis performed, this paper provides details of survival modelling for women with breast cancer, with the aim of finding the most significant factors affecting the likelihood of survival beyond chance alone. The data used for the research were obtained from the Ukrainian National Institute of Cancer and cover the 1981–2017 period. The modelling was performed using Cox regression with a forward effect selection method and a stay-in p-value threshold of 0.15. The forward method first computes the adjusted chi-square statistic for each variable; it then examines the largest computed statistic and, if it is significant, the corresponding variable is added to the model. Once a variable is entered, it is never removed from the model. Three out of four factors that appeared to be significant according to the forward selection method were also confirmed as significant by the stepwise selection method. The results of the modelling demonstrated the possibility of predicting survival using a certain set of disease features and subjects' characteristics. Testing the global hypothesis for Beta resulted in rejection of the null hypothesis (Beta = 0) in favor of the alternative (Beta ≠ 0), confirming that the models are meaningful and can be used to predict survival in women with breast cancer. According to the results obtained, the most significant disease features and subject characteristics appeared to be: the type of multiple processes (synchronous or metachronous), the presence of relapse and/or metastasis, the type and combination of treatment, and the stage of disease. Cancer with synchronous processes is characterized by greater aggressiveness and reduces survival by almost 13 times compared with cancer with metachronous processes. Even though chemotherapy significantly increases the survival rate of patients, it also affects the probability of relapse and metastasis occurrence, which are 16 times more likely to occur if chemotherapy was part of the treatment. This gives grounds for the assumption that it has an indirect effect on survival and hence needs to be analyzed considering its negative impact on the probability of relapse and metastasis occurrence, which, in turn, reduces survival by 10 times. This fact, in our opinion, indicates the need for further in-depth analysis. A significant difference between survival rates in patients with first- and third-stage cancer has been demonstrated: the chances of surviving with the disease at the first stage are almost 12 times higher than with the disease at the third stage. At the same time, the difference in the survival rates of women with the disease at the second and third stages is not as large, at only 1.6 times.
The modern method of conducting surgery, compared with the standard one, appeared capable of reducing the risk of relapses and metastases by 2.6 times, and breast-conserving surgery in multiple oncological processes by 3 times compared with mastectomy, which allows us to state that both factors have a positive effect on the survival probability and reduce the risk of mortality. Regarding the subgroup models built separately for patients with synchronous processes and patients with metachronous processes, an increase in the sample size is needed to assess the assumed difference in factors affecting survival and to improve the predictive ability of the models. This, in turn, requires additional studies during which the necessary amount of data can be collected.
APA, Harvard, Vancouver, ISO, and other styles
44

Schluchter, Mark D., and Annalisa V. Piccorelli. "Shared parameter models for joint analysis of longitudinal and survival data with left truncation due to delayed entry – Applications to cystic fibrosis." Statistical Methods in Medical Research 28, no. 5 (April 4, 2018): 1489–507. http://dx.doi.org/10.1177/0962280218764193.

Full text
Abstract:
Many longitudinal studies observe time to occurrence of a clinical event such as death, while also collecting serial measurements of one or more biomarkers that are predictive of the event, or are surrogate outcomes of interest. Joint modeling can be used to examine the relationship between the biomarker and the event, and also as a way of adjusting analyses of the biomarker for non-ignorable dropout. In settings such as registry studies, an additional complexity is caused when follow-up of subjects is delayed, referred to as left-truncation of follow-up in the survival analysis setting. If not adjusted for, this can cause bias in estimation of parameters of the survival distribution for the clinical event and in parameters of the longitudinal outcome such as the profile or rate of change over time because subjects may die or have the clinical event before follow-up starts. This paper illustrates how a broad class of shared parameter models can be used to jointly model a time to event outcome along with a longitudinal marker using available nonlinear mixed modeling software, when follow-up times are left truncated. Methods are applied to jointly model survival and decline in lung function in cystic fibrosis patients.
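The left-truncation issue discussed here can be illustrated even without the joint model: subjects should enter the risk set only at their delayed-entry time. The sketch below assumes the entry argument of lifelines' KaplanMeierFitter and uses simulated data, not cystic fibrosis registry data.

```python
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(8)
n = 400
age_at_death = rng.weibull(2.0, n) * 40.0          # years
age_at_entry = rng.uniform(0, 20, n)               # delayed entry into follow-up

# Only subjects still alive at entry are ever observed (the source of bias
# if truncation is ignored).
keep = age_at_death > age_at_entry
entry, death = age_at_entry[keep], age_at_death[keep]
censor = entry + rng.uniform(5, 30, keep.sum())
time = np.minimum(death, censor)
event = (death <= censor).astype(int)

kmf_naive = KaplanMeierFitter().fit(time, event, label="ignoring truncation")
kmf_lt = KaplanMeierFitter().fit(time, event, entry=entry, label="left-truncation adjusted")
print(kmf_naive.median_survival_time_, kmf_lt.median_survival_time_)
```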
APA, Harvard, Vancouver, ISO, and other styles
45

Avila, Olga B., and Harold E. Burkhart. "Modeling survival of loblolly pine trees in thinned and unthinned plantations." Canadian Journal of Forest Research 22, no. 12 (December 1, 1992): 1878–82. http://dx.doi.org/10.1139/x92-245.

Full text
Abstract:
The probability of survival for an individual tree was modeled. A variable screening algorithm, SCREEN, was used to find the best set of predictor variables. Stepwise procedures in SAS and BMDP were also used, and results were compared with those obtained from the SCREEN algorithm. The logistic model, with independent variables that were found to be significant through the SCREEN algorithm, was fitted to the data. The fitted models were validated by splitting the data and applying equations fitted to the estimation set to the data in the testing set. Two methods of data splitting were applied: (i) according to time period and (ii) at random. Validation statistics were similar for both cases. The final distance-dependent logistic model for the unthinned plots included the following independent variables: crown ratio (CR), total height/average height of dominant and codominant trees (HH), and competition index (CI). The logistic model with CR, HH, and quadratic mean diameter/DBH (DD) was used as the distance-independent model of survival for the unthinned plots. The final distance-dependent logistic model for the thinned plots included as predictor variables HH, CI, and CR1.5. For the thinned plots the final distance-independent model used DD, CI, and CR1.5 as independent variables. The logistic models of survival obtained in this study were compared with survival models developed previously for distance-dependent and distance-independent stand simulators. Only slight improvement was shown by the logistic models.
APA, Harvard, Vancouver, ISO, and other styles
46

Hummel, Manuela, Thomas Hielscher, Hans Jürgen Salwender, Christof Scheid, Hans Martin, Uta Bertsch, Hartmut Goldschmidt, Anja Seckinger, and Dirk Hose. "Quantitative Integrative Prediction of Survival Probability in Multiple Myeloma Using Molecular and Clinical Prognostic Factors in 657 Patients Treated with Bortezomib-Based Induction, High-Dose Therapy and Autologous Stem Cell Transplantation." Blood 132, Supplement 1 (November 29, 2018): 403. http://dx.doi.org/10.1182/blood-2018-99-113307.

Full text
Abstract:
Background: Survival in multiple myeloma ranges from months to decades and the majority of patients remain incurable with current treatment approaches. Given this high variability, it would be clinically very useful to quantitatively predict survival on a continuous scale. Current risk prediction models attribute patients to 2-3 groups, i.e. high, intermediate, and low risk. Group size and survival rates vary largely between different systems. Rarely are molecular prognostic factors beyond iFISH used. The widely accepted standard is the revised ISS score (rISS) including serum B2M, albumin, and adverse prognostic aberrations. The aim of our study was to develop quantitative prediction of an individual myeloma patient's three- and five-year survival probability. We integrate prognostic factors into a comprehensive model and evaluate its risk discrimination capabilities in relation to the rISS. Patients and methods: Symptomatic myeloma patients treated up-front with a bortezomib-based induction regimen (PAD/PAd/VCD) and intention to undergo high-dose therapy and autologous stem cell transplantation, with available GEP and iFISH data (n=657), were split into a training group (TG, n=536) and a validation group (VG, n=121). In the TG and VG, 190 and 22 deaths were observed. Median follow-up time was 5.4 and 3.5 years. Distribution of risk factors and 3-year overall survival (OS) were similar in both groups (80% vs 86%). The primary endpoint was OS. The following risk factors were considered for building the prognostic model: age (in years), ISS stage, elevated LDH level (>ULN), creatinine level >2 g/dL, heavy chain type IgA yes/no, del17p13 yes/no, t(4;14) yes/no, +1q21 no/3 copies/>3 copies, GEP-based GEP70-score and proliferation index (GPI). GEP-scores were analyzed as continuous variables. Due to low frequency, t(14;16) was excluded. A multivariable Cox regression model was fitted to estimate the individual prognostic index (PI). A non-stringent backward variable selection procedure with a significance level for staying in the model of p=0.5 was applied to remove only surely non-informative predictors. Model selection, calibration, and validation were performed with the rms R package [Harrell 2017]. Harrell's c-index was used to assess the discrimination performance, and to compare the proposed prognostic model to the rISS [Kang 2015]. Results: Quantitative integrative prediction of survival probability. The final Cox model was used to build a nomogram for estimating survival probabilities (Fig. 1). Points are attributed to each of the remaining prognostic factors and summed up. Total points translate into estimated 3-/5-year OS probabilities on a continuous scale. An example is given for an actual patient: 170 total points correspond to a 3-/5-year OS probability of 51/26%, and the contribution of each of the risk factors, represented by different colors, is visualized (Fig. 1). Of course, the continuous scale can also be used to group patients into low/intermediate/high risk; e.g. a sum of <123/123-171/>171 and <94/94-142/>142 points corresponds to 3-/5-year OS probabilities of >80/50-80/<50%, respectively. Validation and comparison to rISS. The nomogram was validated (VG) regarding discrimination and calibration [Royston 2013]. Discrimination signifies the ability of the model to distinguish patients with poor and good prognosis. The model showed equally good discrimination in the TG (c-index 0.76) and VG (0.75). The time-dependent AUC at 3 years was 0.74 in the VG. In comparison, the c-index for the rISS was 0.65 in the TG and 0.56 in the VG, i.e. significantly lower (P<.001). The AUC of the rISS was 0.57 in the VG. The PI was highly significant in the VG (P<.001) and its regression coefficient was 1.04, very close to the optimal value of 1, indicating no obvious bias or overfitting. Calibration, reflecting accuracy of the estimated survival times, was assessed by smoothed calibration plots of expected versus observed survival probabilities on the VG and TG (bootstrap). Resampling-based evaluation (TG) showed very good calibration, with a tendency toward too pessimistic predictions for high-risk patients in the VG as the more recent patient cohort. Conclusion: We developed and validated an individual quantitative nomogram-based prediction of survival which could be used in clinical routine. Here, integration of molecular prognostic factors (GEP-based risk scores and proliferation) gives significantly superior prediction of survival compared to the rISS. Disclosures Salwender: Celgene: Honoraria, Other: travel support, Research Funding; Amgen: Honoraria, Other: travel support, Research Funding; Takeda: Honoraria; Novartis: Honoraria, Other: travel support, Research Funding; Bristol-Myers Squibb: Honoraria, Other: travel support, Research Funding; Janssen: Honoraria, Other: travel support, Research Funding. Scheid: Janssen: Honoraria; Celgene: Honoraria. Goldschmidt: ArtTempi: Honoraria; Chugai: Honoraria, Research Funding; Novartis: Honoraria, Research Funding; Takeda: Consultancy, Research Funding; Amgen: Consultancy, Research Funding; Mundipharma: Research Funding; Sanofi: Consultancy, Research Funding; Janssen: Consultancy, Honoraria, Research Funding; Celgene: Consultancy, Honoraria, Research Funding; Bristol Myers Squibb: Consultancy, Honoraria, Research Funding; Adaptive Biotechnology: Consultancy. Seckinger: Celgene: Research Funding; EngMab: Research Funding; Sanofi: Research Funding. Hose: EngMab: Research Funding; Celgene: Honoraria, Research Funding; Sanofi: Research Funding.
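The arithmetic behind reading such a nomogram is compact: a Cox-type prognostic index is mapped to survival probabilities through S(t | x) = S0(t)^exp(PI). The sketch below uses hypothetical coefficients and baseline survival values, not the published model.

```python
import numpy as np

# Hypothetical weights for a few of the factors named above, and an assumed
# baseline survival S0(t) for a reference patient with prognostic index 0.
coefs = {"age_per_10yr": 0.25, "iss_stage": 0.30, "ldh_elevated": 0.45, "gep70_per_unit": 0.02}
baseline_surv = {3: 0.85, 5: 0.72}

def predicted_survival(patient, reference):
    """3-/5-year survival for `patient`, with the index centred on `reference`."""
    pi = sum(coefs[k] * (patient[k] - reference[k]) for k in coefs)
    return {t: s0 ** np.exp(pi) for t, s0 in baseline_surv.items()}

reference = {"age_per_10yr": 6.0, "iss_stage": 1, "ldh_elevated": 0, "gep70_per_unit": 0}
patient = {"age_per_10yr": 7.2, "iss_stage": 3, "ldh_elevated": 1, "gep70_per_unit": 15}
print(predicted_survival(patient, reference))   # e.g. {3: ..., 5: ...}
```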
APA, Harvard, Vancouver, ISO, and other styles
47

Simoes, J. P. C., P. Schoning, and M. Butine. "Prognosis of Canine Mast Cell Tumors: A Comparison of Three Methods." Veterinary Pathology 31, no. 6 (November 1994): 637–47. http://dx.doi.org/10.1177/030098589403100602.

Full text
Abstract:
In this study, age, sex, recurrence, metastasis, death rate, and histologic patterns were in agreement with those of previous reports on canine mast cell tumors. Histologic grading, mitotic index, chromosome nucleolar organizer regions stained with silver (AgNORs), and anti-proliferating cell nuclear antigen (PCNA) were evaluated as indicators of prognosis. Histologic grading, AgNORs estimated in 100 cells, and PCNA-labeled fraction estimated in five high power fields (HPFs) were significantly different between recurring and nonrecurring tumors. Those prognostic factors were also significantly different between tumors that metastasized and those that did not. The survival time was lower in dogs with mast cell tumors with histologic grade 3 (Patnaik's), AgNOR counts higher than 2.25, and PCNA count in five HPFs higher than 261. The significance of these factors as markers for prognosis determined by logistic regression analysis differed with the time period considered. By combining the three most significant prognostic factors in a prognostic index, three models were obtained to determine the probability of nonrecurrence at 3, 6, and 9 months after surgery. The models were accurate in the prediction of the outcome of up to 80% of mast cell tumors. The use of these models provides a less subjective means of prognosticating mast cell tumors than the use of any one component alone.
APA, Harvard, Vancouver, ISO, and other styles
48

Elsensohn, MH, E. Dantony, J. Iwaz, E. Villar, C. Couchoud, and R. Ecochard. "Improving survival in end-stage renal disease: A case study." Statistical Methods in Medical Research 28, no. 12 (November 9, 2018): 3579–90. http://dx.doi.org/10.1177/0962280218811357.

Full text
Abstract:
Background: With the increase of life expectancy, end-stage renal disease (ESRD) is affecting a growing number of people. Simultaneously, renal replacement therapies (RRTs) have considerably improved patient survival. We investigated the way current RRT practices would affect patients' survival. Methods: We used a multi-state model to represent the transitions between RRTs and the transition to death. The concept of "crude probability of death" combined with this model allowed estimating the proportions of ESRD-related and ESRD-unrelated deaths. Estimating the ESRD-related death rate requires comparing the mortality rate between ESRD patients and the general population. Predictions of patients' courses through the RRT and death states were obtained by solving a system of Kolmogorov differential equations. The impact of practice on patient survival was quantified using the restricted mean survival time (RMST), which was compared with that of healthy subjects with the same characteristics. Results: The crude probability of ESRD-unrelated death was nearly zero in the youngest patients (18–45 years) but was a sizeable part of deaths in the oldest (≥70 years). Moreover, in the oldest patients, the proportion of expected deaths was higher in patients without vs. with diabetes because the former live longer. In men aged 75 years at first RRT, the predicted RMSTs in patients with and without diabetes were, respectively, 61% and 69% of those of comparable healthy men. Conclusion: Using the concept of "crude probability of death" with multi-state models is feasible and useful to assess the relative benefits of various treatments in ESRD and to help long-term patient management.
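The computational core described here, propagating state-occupation probabilities through the Kolmogorov forward equations and integrating the alive-state probability to obtain a restricted mean survival time, can be sketched for a toy three-state model with constant, purely illustrative transition intensities.

```python
import numpy as np
from scipy.integrate import solve_ivp, trapezoid

# States: 0 = dialysis, 1 = transplanted, 2 = dead (absorbing)
Q = np.array([
    [-(0.10 + 0.12), 0.10, 0.12],   # leave dialysis by transplant (0.10/yr) or death (0.12/yr)
    [0.02, -(0.02 + 0.05), 0.05],   # graft failure back to dialysis, or death
    [0.0, 0.0, 0.0],
])

def forward(t, p):
    return p @ Q                     # Kolmogorov forward equation: dp/dt = p(t) Q

horizon = 10.0                       # years
sol = solve_ivp(forward, (0.0, horizon), y0=[1.0, 0.0, 0.0],
                t_eval=np.linspace(0.0, horizon, 1001), rtol=1e-8)

p_alive = sol.y[0] + sol.y[1]
rmst = trapezoid(p_alive, sol.t)     # restricted mean survival time over 10 years
print("P(dead) at 10 years:", round(sol.y[2, -1], 3))
print("RMST over 10 years:", round(rmst, 2), "years")
```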
APA, Harvard, Vancouver, ISO, and other styles
49

Shouval, Roni, Joshua A. Fein, Myriam Labopin, Fabio Ciceri, Emanuele Angelucci, Didier Blaise, Johanna Tischer, et al. "Prediction of Leukemia-Free Survival Following Haploidentical Stem Cell Transplantation in Acute Myeloid Leukemia: A Study from the Acute Leukemia Working Party of the EBMT." Blood 132, Supplement 1 (November 29, 2018): 485. http://dx.doi.org/10.1182/blood-2018-99-111822.

Full text
Abstract:
Background: Haploidentical (Haplo) stem cell transplantation (SCT) provides a curative option for nearly all acute myeloid leukemia (AML) patients lacking an HLA-matched donor. However, outcomes following Haplo-SCT vary and are dependent on a number of individual features. Integrative prognostic models for decision support towards a Haplo-SCT are lacking. We sought to develop a prediction model of leukemia-free survival (LFS) for AML patients undergoing a Haplo-SCT. Methods: A total of 1,804 de-novo (80%) and secondary (20%) AML patients who received a non-T-cell-depleted Haplo-SCT between the years 2005-2017 were included. All patients were reported to the registry of the Acute Leukemia Working Party (ALWP) of the European Society for Blood and Marrow Transplantation (EBMT). To account for non-linear associations and violation of the proportional hazards assumption, and to reduce bias associated with feature selection, a non-parsimonious non-parametric machine learning algorithm, Random Survival Forest (RSF), was used. RSF provides a continuous probabilistic estimation of LFS by fitting an ensemble of decision trees. Variables included in the model were reflective of patient, disease, and transplantation characteristics. Since RSF models are not readily interpretable (i.e., "black box" models), the variable importance (VIMP) of covariates included in the model (Xv) was assessed by calculating the difference in prediction error before and after permuting Xv. The model's generalizability and accuracy were tested through repeated bootstrapping (5000 iterations) and calculation of the C-index. Results: The median age of the patients was 53 years. The majority had an early disease status (complete remission [CR] 1 [44%]) with intermediate cytogenetic risk (43%) and were undergoing allogeneic transplantation for the first time (93%). Reduced-intensity conditioning (RIC) was used in 57% of cases, and grafts were from peripheral blood in 54% of transplants. For graft-versus-host disease (GvHD) prophylaxis, 82% of the patients received post-transplant cyclophosphamide (PTCy) and 18% anti-thymocyte globulin (ATG). The median follow-up duration was 2.0 years. In the RSF prediction model, the top-ranking variables (Figure A) were disease status, GvHD prophylaxis, time from diagnosis to transplantation, and age. The bootstrapped C-index of the prediction model was 0.66. Prognostic discrimination was assessed by dividing the predicted LFS probabilities into quartiles that were then used to plot Kaplan-Meier curves, demonstrating LFS ranging from 24.8% to 60.1% at 2 years (Figure B). Differing features of the four prognostic groups are listed in the Table. Conclusions: Our group has developed the first prediction model for LFS in AML patients treated with a Haplo-SCT. The model is based on a machine learning technique and provides an individualized estimation of LFS probability. It is conceivable that once this model is verified, it could serve as an important clinical tool when considering a patient for Haplo-SCT. Disclosures Angelucci: Vertex Pharmaceuticals Incorporated (MA) and CRISPR CAS9 Therapeutics AG (CH): Other: Chair DMC; Jazz Pharmaceuticals Italy: Other: Local (national) advisory board; Celgene: Honoraria, Other: Chair DMC; Novartis: Honoraria, Other: Chair Steering Committee TELESTO Protocol; Roche Italy: Other: Local (national) advisory board. Tischer: Jazz Pharmaceuticals: Other: Jazz Advisory Board. Mohty: MaaT Pharma: Consultancy, Honoraria.
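As an indication of what such a model looks like in code, the sketch below fits a random survival forest with scikit-survival (assumed); the four simulated features are stand-ins for the registry variables listed above, and the reported concordance is computed on the training data only.

```python
import numpy as np
import pandas as pd
from sksurv.ensemble import RandomSurvivalForest
from sksurv.util import Surv

rng = np.random.default_rng(13)
n = 600
X = pd.DataFrame({
    "age": rng.normal(53, 12, n),
    "cr1": rng.integers(0, 2, n),                  # transplanted in first CR
    "months_dx_to_sct": rng.gamma(3.0, 2.0, n),
    "ric": rng.integers(0, 2, n),                  # reduced-intensity conditioning
})
risk = 0.02 * (X["age"] - 53) - 0.8 * X["cr1"] + 0.05 * X["months_dx_to_sct"]
t_event = rng.exponential(scale=3.0 / np.exp(risk.to_numpy()))
t_cens = rng.uniform(0.5, 6.0, n)
y = Surv.from_arrays(event=(t_event <= t_cens), time=np.minimum(t_event, t_cens))

rsf = RandomSurvivalForest(n_estimators=200, min_samples_leaf=20, random_state=0)
rsf.fit(X, y)
print("Harrell's C on training data:", round(rsf.score(X, y), 3))
print("relative risk scores, first 5 patients:", rsf.predict(X.head()))
```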
APA, Harvard, Vancouver, ISO, and other styles
50

Kawakami, Takeshi, Yukiya Narita, Isao Oze, Shigenori Kadowaki, Nozomu Machida, Hiroya Taniguchi, Takashi Ura, et al. "Establishment and validation of prognostic nomograms including HER2 status in metastatic gastric cancer." Journal of Clinical Oncology 34, no. 4_suppl (February 1, 2016): 24. http://dx.doi.org/10.1200/jco.2016.34.4_suppl.24.

Full text
Abstract:
24 Background: It remains unclear whether human epidermal growth factor receptor 2 (HER2) status is an outcome-associated biomarker independent of known prognostic factors for metastatic gastric cancer (MGC). There are few reports on nomograms in MGC, while several studies have been published on nomograms for other cancer types. This retrospective study aimed to develop nomograms that combine HER2 status and other prognostic factors for predicting survival outcome of individual patients with MGC starting first-line treatment. Methods: We used a training set of 838 consecutive patients with MGC starting first-line chemotherapy between 2005 and 2012 in Aichi Cancer Center Hospital (ACC) to establish nomograms that calculate the predicted probability of survival at different time points; overall survival (OS) at 1 and 2 years. The covariates analyzed in this model by Cox proportional hazard models included HER2 status, Eastern Cooperative Oncology Group performance status (PS), history of gastrectomy, serum lactic acid dehydrogenase (LDH), and serum alkaline phosphatase levels (ALP). Nomograms were independently validated using data on 269 consecutive patients with MGC who underwent first-line chemotherapy between 2010 and 2012 in Shizuoka Cancer Center Hospital (SCC). Missing covariate data were estimated using multiple imputation methods. The discriminatory ability and accuracy of the models were assessed using Harrell’s c-index. IHC3+ or IHC2+/ISH+ tumors were defined as HER2-positive. Results: Patient characteristics were as follows: median age, 64 vs. 66 years; ECOG PS 0/1/2, 34%/51%/15% vs. 45%/44%/11%; prior gastrectomy, 42% vs. 39%; 1/ > 1 metastatic sites, 56%/44% vs. 43%/57%; high LDH, 76% vs. 27%; high ALP, 22% vs. 26%; and positive/negative HER2 status, 10%/45% vs. 7%/53%, respectively. At a median follow-up of 12.3 (ACC) and 11.6 (SCC) months, 782 and 248 patients had died, and median OS was 12.5 and 12.4 months (P= 1.00), respectively. The nomograms were capable of predicting an OS with a c-index of 0.68 and 0.58. Conclusions: These nomograms may provide objective and approximate prediction of OS for individual MGC patients in clinical settings.
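The discrimination measure quoted here, Harrell's c-index, can be computed directly from predicted risk scores and censored outcomes; a minimal sketch (lifelines assumed, with a toy linear risk score rather than the published nomogram) follows. Note that lifelines' concordance_index expects scores oriented like survival times (higher = longer survival), hence the sign flip on the risk.

```python
import numpy as np
from lifelines.utils import concordance_index

rng = np.random.default_rng(21)
n = 300
ps = rng.integers(0, 3, n)                       # ECOG performance status 0, 1 or 2
ldh_high = rng.integers(0, 2, n)
risk = 0.5 * ps + 0.6 * ldh_high
t_event = rng.exponential(scale=14.0 / np.exp(risk))   # months
t_cens = rng.uniform(3, 36, n)
time = np.minimum(t_event, t_cens)
event = (t_event <= t_cens).astype(int)

c = concordance_index(time, -risk, event)
print("Harrell's c-index:", round(c, 2))
```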
APA, Harvard, Vancouver, ISO, and other styles