Academic literature on the topic 'Mixture cure model'


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Mixture cure model.'


Journal articles on the topic "Mixture cure model"

1

Omer, Mohamed Elamin Abdallah Mohamed Elamin, Mohd Rizam Abu Bakar, Mohd Bakri Adam, and Mohd Shafie Mustafa. "Cure Models with Exponentiated Weibull Exponential Distribution for the Analysis of Melanoma Patients." Mathematics 8, no. 11 (November 2, 2020): 1926. http://dx.doi.org/10.3390/math8111926.

Full text
Abstract:
In survival data analysis, it is commonly presumed that all study subjects will eventually experience the event of concern. Nonetheless, a fraction of these subjects may never experience the event of interest. Cure rate models are usually used to model this type of data. In this paper, we introduce a maximum likelihood estimation analysis for the four-parameter exponentiated Weibull exponential (EWE) distribution in the presence of cured subjects, censored observations, and predictors. Aiming to include the fraction of unsusceptible (cured) individuals in the analysis, a mixture cure model and two non-mixture cure models (the bounded cumulative hazard model and the geometric non-mixture model with EWE distribution) are proposed. The mixture cure model provides a better fit to real data from a melanoma clinical trial than the other two non-mixture cure models.
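The two-component mixture cure model referred to in this abstract splits the population survival into a cured fraction and a latency distribution for the uncured. A minimal sketch, assuming a logistic link for the cure probability and substituting a plain Weibull latency for the paper's four-parameter EWE distribution:

```python
import numpy as np

def mixture_cure_survival(t, x, beta, shape, scale):
    """Standard mixture cure model: S_pop(t|x) = pi(x) + (1 - pi(x)) * S_u(t).

    pi(x)  -- probability of being cured, logistic in the covariate x
    S_u(t) -- survival of the uncured; a plain Weibull stands in here for
              the paper's exponentiated Weibull exponential latency.
    """
    pi = 1.0 / (1.0 + np.exp(-(beta[0] + beta[1] * x)))  # cure probability
    s_u = np.exp(-(t / scale) ** shape)                  # Weibull survival
    return pi + (1.0 - pi) * s_u

# The population survival levels off at the cure fraction as t grows:
for t in (0.0, 1.0, 5.0, 50.0):
    print(round(float(mixture_cure_survival(t, 1.0, (-1.0, 0.5), 1.5, 2.0)), 3))
# the last value ≈ 0.378 = 1 / (1 + exp(0.5)), the cure probability at x = 1
```

This plateau at pi(x) rather than zero is exactly what distinguishes an improper (cure) survival function from an ordinary one.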
APA, Harvard, Vancouver, ISO, and other styles
2

TAWEAB, FAUZIA ALI, NOOR AKMA IBRAHIM, and BADER AHMAD I. ALJAWADI. "ESTIMATION OF CURE FRACTION FOR LOGNORMAL RIGHT CENSORED DATA WITH COVARIATES." International Journal of Modern Physics: Conference Series 09 (January 2012): 308–15. http://dx.doi.org/10.1142/s2010194512005363.

Full text
Abstract:
In clinical studies, a proportion of patients might be unsusceptible to the event of interest and can be considered cured. Survival models that incorporate the cured proportion are known as cure rate models, of which the most widely used is the mixture cure model. However, in cancer clinical trials the mixture model is not the appropriate model, and the viable alternative is the Bounded Cumulative Hazard (BCH) model. In this paper we consider the BCH model to estimate the cure fraction based on the lognormal distribution. The parametric estimation of the cure fraction for right-censored survival data with covariates is obtained by using the EM algorithm.
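For contrast with the mixture formulation, the bounded cumulative hazard (promotion time) model described here writes the population survival as S_pop(t) = exp(-theta * F(t)), so the cure fraction is exp(-theta). A small sketch assuming the lognormal F(t) of the paper; the parameter values are illustrative only, not taken from the paper:

```python
import math

def bch_survival(t, theta, mu, sigma):
    """Bounded cumulative hazard model: S_pop(t) = exp(-theta * F(t)),
    where F is a proper lognormal CDF. Hence S_pop(t) -> exp(-theta) > 0,
    i.e. exp(-theta) is the cure fraction."""
    z = (math.log(t) - mu) / sigma
    F = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # lognormal CDF
    return math.exp(-theta * F)

theta = 1.2
print(round(math.exp(-theta), 4))                    # cure fraction 0.3012
print(round(bch_survival(1e6, theta, 0.5, 1.0), 4))  # survival plateaus at 0.3012
```

Unlike the mixture model, the plateau here is tied to the cumulative hazard bound theta; the EM step the authors use would estimate theta, mu and sigma from censored data.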
3

Swain, Prafulla Kumar, Gurprit Grover, and Komal Goel. "Mixture and Non-Mixture Cure Fraction Models Based on Generalized Gompertz Distribution under Bayesian Approach." Tatra Mountains Mathematical Publications 66, no. 1 (June 1, 2016): 121–35. http://dx.doi.org/10.1515/tmmp-2016-0025.

Full text
Abstract:
Cure fraction models are generally used to model lifetime data with long-term survivors. In a cohort of cancer patients, it has been observed that, due to the development of new drugs, some patients are cured permanently and some are not. The patients who are cured permanently are called cured or long-term survivors, while patients who experience recurrence of the disease are termed susceptible or uncured. Thus, the population is divided into two groups: a group of cured individuals and a group of susceptible individuals. The proportion of cured individuals after treatment is typically known as the cure fraction. In this paper, we introduce a three-parameter (scale, shape and acceleration) generalized Gompertz distribution in the presence of a cure fraction, censored data and covariates for estimating the cure fraction through a Bayesian approach. Inferences are obtained using the standard Markov Chain Monte Carlo technique in the OpenBUGS software.
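A common three-parameter "generalized Gompertz" form (the exact parameterization used in the paper may differ) plugs directly into the mixture cure structure. A hedged sketch:

```python
import math

def gen_gompertz_cdf(t, lam, c, theta):
    """One common three-parameter generalized Gompertz CDF:
    F(t) = (1 - exp(-(lam/c) * (exp(c*t) - 1)))**theta.
    This parameterization is an assumption; the paper's may differ."""
    return (1.0 - math.exp(-(lam / c) * (math.exp(c * t) - 1.0))) ** theta

def mixture_cure_survival(t, pi, lam, c, theta):
    # Cured fraction pi never fails; the rest follow the generalized Gompertz.
    return pi + (1.0 - pi) * (1.0 - gen_gompertz_cdf(t, lam, c, theta))

print(round(mixture_cure_survival(0.0, 0.3, 0.5, 0.1, 2.0), 3))    # 1.0 at t = 0
print(round(mixture_cure_survival(100.0, 0.3, 0.5, 0.1, 2.0), 3))  # plateaus at 0.3
```

In the Bayesian setting of the paper, priors would be placed on (pi, lam, c, theta) and the posterior explored by MCMC; the sketch only shows the likelihood's survival function.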
4

Zhang, Jiajia, and Yingwei Peng. "Accelerated hazards mixture cure model." Lifetime Data Analysis 15, no. 4 (August 21, 2009): 455–67. http://dx.doi.org/10.1007/s10985-009-9126-4.

Full text
5

Scolas, Sylvie, Catherine Legrand, Abderrahim Oulhaj, and Anouar El Ghouch. "Diagnostic checks in mixture cure models with interval-censoring." Statistical Methods in Medical Research 27, no. 7 (November 4, 2016): 2114–31. http://dx.doi.org/10.1177/0962280216676502.

Full text
Abstract:
Models for interval-censored survival data presenting a fraction of "cured" or "immune" patients have recently been proposed in the literature, particularly extending the mixture cure model to interval-censored data. However, little is known about the goodness-of-fit of such models. In a mixture cure model, the survival distribution of the entire population is improper and is expressed in terms of the survival distribution of uncured individuals, i.e. the latency part of the model, and the probability of experiencing the event of interest, i.e. the incidence part. To validate a mixture cure model, the assumptions made on both parts need to be checked: the survival distribution of uncured individuals, the link function used in the latency, and the linearity of the covariates used in both parts of the model. In this work, we investigate the Cox-Snell and deviance residuals and show how they can be adapted and used to perform diagnostic checks when all subjects are right- or interval-censored and some subjects are cured with unknown cure status. A large simulation study investigates the ability of these residuals to detect departures from the assumptions of the mixture model. The developed techniques are applied to a real data set on Alzheimer's disease.
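The Cox-Snell residual idea that the authors adapt is simple to state: r_i = -log S_pop(t_i | x_i) under the fitted model, and if the model is correctly specified the residuals behave like a (censored) unit-exponential sample. A generic sketch of the residual computation, not the authors' interval-censoring extension:

```python
import numpy as np

def cox_snell_residuals(times, surv_fn):
    """Cox-Snell residuals r_i = -log S_pop(t_i) for a fitted model.

    Under a correctly specified model the r_i behave like a (censored)
    sample from the unit exponential, so the Nelson-Aalen cumulative
    hazard of the residuals should lie close to the 45-degree line.
    surv_fn maps a time to the fitted population survival probability.
    """
    return -np.log(np.array([surv_fn(t) for t in times]))

# Sanity check with a known unit-exponential model S(t) = exp(-t):
r = cox_snell_residuals([0.5, 1.0, 2.0], lambda t: np.exp(-t))
print(np.allclose(r, [0.5, 1.0, 2.0]))  # True: residuals reproduce the times
```

For a mixture cure model, surv_fn would be the fitted improper population survival, so cured subjects accumulate bounded residuals, which is what complicates the diagnostics the paper studies.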
6

Marinho, Anna R. S., and Rosangela H. Loschi. "Bayesian cure fraction models with measurement error in the scale mixture of normal distribution." Statistical Methods in Medical Research 29, no. 9 (January 12, 2020): 2411–44. http://dx.doi.org/10.1177/0962280219893034.

Full text
Abstract:
Cure fraction models have been widely used to model time-to-event data when part of the individuals survive long term after the disease and are considered cured. Most cure fraction models neglect the measurement error that some covariates may be subject to, which leads to poor estimates of the cure fraction. We introduce a Bayesian promotion time cure model that accounts for both mismeasured covariates and atypical measurement errors. This is attained by assuming a scale mixture of the normal distribution to describe the uncertainty about the measurement error. Extending previous works, we also assume that the measurement error variance is unknown and should be estimated. Three classes of prior distributions are assumed to model the uncertainty about the measurement error variance. Simulation studies evaluate the proposed model in different scenarios and compare it to the standard promotion time cure fraction model. Results show that the proposed models are competitive. The proposed model is fitted to a dataset from a melanoma clinical trial, assuming that the Breslow depth is mismeasured.
7

Folorunso, Serifat A., Timothy A. O. Oluwasola, Angela U. Chukwu, and Akintunde A. Odukogbe. "Application of Modified Generalized–Gamma Mixture Cure Model in the Analysis of Ovarian Cancer Data." Journal of Physics: Conference Series 2123, no. 1 (November 1, 2021): 012041. http://dx.doi.org/10.1088/1742-6596/2123/1/012041.

Full text
Abstract:
The modeling and analysis of lifetimes for terminal diseases such as cancer is a significant aspect of statistical work. This study considered data from thirty-seven women diagnosed with ovarian cancer and hospitalized for care at the Department of Obstetrics and Gynecology, University of Ibadan, Nigeria. The focus was on the application of a parametric mixture cure model that can handle the skewness associated with survival data: a modified generalized-gamma mixture cure model (MGGMCM). The effectiveness of the MGGMCM was compared with existing parametric mixture cure models using the Akaike Information Criterion, the median time-to-cure, and the variance of the cure rate. It was observed that the MGGMCM is an improved parametric model for the mixture cure model.
8

Peng, Yingwei, and Jeremy M. G. Taylor. "Residual-based model diagnosis methods for mixture cure models." Biometrics 73, no. 2 (September 6, 2016): 495–505. http://dx.doi.org/10.1111/biom.12582.

Full text
9

Jahani, Sardar, Mina Hoseini, Rashed Pourhamidi, Mahshid Askari, and Azam Moslemi. "Determining the Factors Affecting Long-Term and Short-Term Survival of Breast Cancer Patients in Rafsanjan Using a Mixture Cure Model." Journal of Research in Health Sciences 21, no. 2 (May 26, 2021): e00516-e00516. http://dx.doi.org/10.34172/jrhs.2021.51.

Full text
Abstract:
Background: Breast cancer is one of the most common causes of death among women worldwide and the second leading cause of death among Iranian women. The incidence of this malignancy in Iran is 22 per 100,000 women. These patients have long-term survival time with advances in medical sciences. The present study aimed to identify the risk factors of breast cancer using Cox proportional hazard and Cox mixture cure models. Study design: It is a retrospective cohort study. Methods: In this cohort study, we recorded the survival time of 140 breast cancer patients referred to Ali Ibn Abitaleb Hospital in Rafsanjan, Iran, from 2001 to 2015. The Kaplan-Meier curve was plotted; moreover, two Cox proportional hazards and the Cox mixture cure models were fitted for the patients. Data analysis was performed using SAS 9.4 M5 software. Results: The mean age of patients was reported as 47.12 ±12.48 years at the commencement of the study. Moreover, 83.57% of patients were censored. The stage of disease was a significant variable in Cox and the survival portion of Cox mixture cure models (P=0.001). The consumption of herbal tea, tumor size, duration of the last lactation, family history of cancer, and the type of treatment were significant variables in the cured proportion of the Cox mixture cure model (P=0.001). Conclusion: The Cox mixture cure model is a flexible model which is able to distinguish between the long-term and short-term survival of breast cancer patients. For breast cancer patients, cure effective factors were the stage of the disease, consumption of herbal tea, tumor size, duration of the last lactation, family history, and the type of treatment.
10

Amico, Maïlis, Ingrid Van Keilegom, and Catherine Legrand. "The single‐index/Cox mixture cure model." Biometrics 75, no. 2 (March 29, 2019): 452–62. http://dx.doi.org/10.1111/biom.12999.

Full text

Dissertations / Theses on the topic "Mixture cure model"

1

Krachey, Elizabeth Catherine. "Variations on the Accelerated Failure Time Model: Mixture Distributions, Cure Rates, and Different Censoring Scenarios." NCSU, 2009. http://www.lib.ncsu.edu/theses/available/etd-08182009-102357/.

Full text
Abstract:
The accelerated failure time (AFT) model is a popular model for time-to-event data. It provides a useful alternative when the proportional hazards assumption is in question and it provides an intuitive linear regression interpretation where the logarithm of the survival time is regressed on the covariates. We have explored several deviations from the standard AFT model. Standard survival analysis assumes that in the case of perfect follow-up, every patient will eventually experience the event of interest. However, in some clinical trials, a number of patients may never experience such an event, and in essence, are considered cured from the disease. In such a scenario, the Kaplan-Meier survival curve will level off at a nonzero proportion. Hence there is a window of time in which most or all of the events occur, while heavy censoring occurs in the tail. The two-component mixture cure model provides a means of adjusting the AFT model to account for this cured fraction. Chapters 1 and 2 propose parametric and semiparametric estimation procedures for this cure rate AFT model. Survival analysis methods for interval-censoring have been much slower to develop than for the right-censoring case. This is in part because interval-censored data have a more complex censoring mechanism and because the counting process theory developed for right-censored data does not generalize or extend to interval-censored data. Because of the analytical difficulty associated with interval-censored data, recent estimation strategies have focused on the implementation rather than the large sample theoretical justifications of the semiparametric AFT model. Chapter 3 proposes a semiparametric Bayesian estimation procedure for the AFT model under interval-censored data.
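The plateau in the Kaplan-Meier curve described in this abstract is easy to see by simulation: generate times from a two-component AFT mixture and the empirical survival levels off near the cure probability rather than at zero. A sketch with an assumed lognormal AFT latency and illustrative parameters (none taken from the thesis):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
pi_cure = 0.3  # illustrative cure probability, not from the thesis

# Uncured subjects follow a lognormal AFT model: log T = 0.5 + 0.8 * eps.
cured = rng.random(n) < pi_cure
t = np.where(cured, np.inf, np.exp(0.5 + 0.8 * rng.standard_normal(n)))

# With long follow-up the empirical survival plateaus near pi_cure,
# which is why the Kaplan-Meier curve levels off at a nonzero proportion.
print(round(float(np.mean(t > 50.0)), 2))  # ≈ 0.30
```

The heavy censoring in the tail that the abstract mentions corresponds to the cured subjects here, whose event times are never observed.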
2

Erich, Roger Alan. "Regression Modeling of Time to Event Data Using the Ornstein-Uhlenbeck Process." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1342796812.

Full text
3

Seppä, K. (Karri). "Quantifying regional variation in the survival of cancer patients." Doctoral thesis, Oulun yliopisto, 2012. http://urn.fi/urn:isbn:9789526200118.

Full text
Abstract:
Monitoring regional variation in the survival of cancer patients is an important tool for assessing the realisation of regional equity in cancer care. When regions are small or sparsely populated, the random component in the total variation across the regions becomes prominent. The broad aim of this doctoral thesis is to develop methods for assessing regional variation in the cause-specific and relative survival of cancer patients in a country, and for quantifying the public health impact of the regional variation in the presence of competing hazards of death, using summary measures that are interpretable also for policy-makers and other stakeholders. Methods are proposed for summarising the survival of a patient population with incomplete follow-up in terms of the mean and median survival times. A cure fraction model with two sets of random effects for regional variation is fitted to cause-specific survival data in a Bayesian framework using Markov chain Monte Carlo simulation. This hierarchical model is extended to the estimation of relative survival, where the expected survival is estimated by region and considered a random quantity. The public health impact of regional variation is quantified by the extra survival time and the number of avoidable deaths that would be gained if the patients achieved the most favourable level of relative survival. The proposed methods were applied to real data sets from the Finnish Cancer Registry. Estimates of the mean and median survival times of colon and thyroid cancer patients, respectively, were corrected for the bias caused by the inherent selection of patients, with respect to age at diagnosis, during the period of diagnosis. The cure fraction model allowed estimation of regional variation in the cause-specific and relative survival of breast and colon cancer patients, respectively, with a parsimonious number of parameters, yielding reasonable estimates also for sparsely populated hospital districts.
4

Kutal, Durga Hari. "Various Approaches on Parameter Estimation in Mixture and Non-mixture Cure Models." Thesis, Florida Atlantic University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10929031.

Full text
Abstract:

Analyzing lifetime data with long-term survivors is an important topic in medical applications. Cure models are usually used to analyze survival data with a proportion of cured subjects, or long-term survivors. In order to include the proportion of cured subjects, mixture and non-mixture cure models are considered. In this dissertation, we utilize both maximum likelihood and Bayesian methods to estimate model parameters. Simulation studies are carried out to verify the finite-sample performance of the estimation methods. Real data analyses are reported to illustrate the goodness-of-fit via Fréchet, Weibull and Exponentiated Exponential susceptible distributions. Among the three parametric susceptible distributions, Fréchet is the most promising.

Next, we extend the non-mixture cure model to include a change point in a covariate for right censored data. The smoothed likelihood approach is used to address the problem of a log-likelihood function which is not differentiable with respect to the change point. The simulation study is based on the non-mixture change point cure model with an exponential distribution for the susceptible subjects. The simulation results revealed a convincing performance of the proposed method of estimation.

5

Weston, Claire Louise. "Applications of non-mixture cure models in childhood cancer studies." Thesis, University of Leicester, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.492826.

Full text
Abstract:
The United Kingdom Children's Cancer Study Group (UKCCSG) was formed in 1977 with the aims of improving the management of, and advancing knowledge and study of, childhood cancers. UKCCSG studies are usually analysed using Cox models to assess whether certain prognostic factors may have an influence on survival. Cox models assume that proportional hazards hold and that all individuals will eventually experience the event of interest, resulting in a long-term survival of zero. In childhood cancer this may not be the case, as survival rates in excess of 70% are often observed. Parametric cure models have been proposed as an alternative method for analysing long-term outcome data in cases such as these.
6

Ward, Alexander P. "Modelling Response Patterns for A Large-Scale Mail Survey Study Using Mixture Cure Models." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1555587554123989.

Full text
7

Calsavara, Vinicius Fernando. "Modelos de sobrevivência com fração de cura usando um termo de fragilidade e tempo de vida Weibull modificada generalizada." Universidade Federal de São Carlos, 2011. https://repositorio.ufscar.br/handle/ufscar/4546.

Full text
Abstract:
In survival analysis, some studies are characterized by a significant fraction of units that will never experience the event of interest, even if followed for a long period of time. For the analysis of long-term data, we approach the standard mixture model of Berkson & Gage, where we assume the generalized modified Weibull distribution for the lifetime of individuals at risk. This model includes several classes of models as special cases, allowing its use to discriminate between models. The standard mixture model implicitly assumes that the individuals experiencing the event of interest have homogeneous risk. Alternatively, we consider the standard mixture model with a frailty term in order to quantify the unobservable heterogeneity among individuals. This model is characterized by the inclusion of an unobservable random variable, which represents information that cannot be, or has not been, observed. We assume multiplicative frailty with a gamma distribution. For the lifetime of individuals at risk, we assume the Weibull distribution, obtaining the frailty Weibull standard mixture model. For both models, we carried out simulation studies with the purpose of analyzing the frequentist properties of the estimation procedures. Applications to real data sets showed the applicability of the proposed models, in which parameter estimates were obtained using both maximum likelihood and Bayesian approaches.
8

Pešout, Pavel. "Přístupy k shlukování funkčních dat." Doctoral thesis, Vysoká škola ekonomická v Praze, 2007. http://www.nusl.cz/ntk/nusl-77066.

Full text
Abstract:
Classification is a very common task in information processing and an important problem in many sectors of science and industry. When data are measured as functions of a dependent variable such as time, the most widely used algorithms may not capture each individual shape properly, because they consider only the chosen measurement points. For this reason, this thesis focuses on techniques that directly address the curve clustering problem and the classification of new individuals. The main goal of this work is to develop alternative methodologies by extending various statistical approaches, to consolidate already established algorithms, to present their modified forms adapted to the demands of the clustering problem, and to compare several efficient curve clustering methods through extensive reported experiments on simulated data. The proposed clustering algorithms are based on two principles. First, it is presumed that the set of trajectories may be probabilistically modelled as sequences of points generated from a finite mixture model consisting of regression components; density-based clustering methods using maximum likelihood estimation are therefore investigated to recognize the most homogeneous partitioning. Attention is paid both to the maximum likelihood approach, which treats the cluster memberships as model parameters, and to the probabilistic model with the iterative Expectation-Maximization algorithm, which treats them as random variables. To deal with the hidden-data problem, both Gaussian and less conventional gamma mixtures are considered and arranged for use in two dimensions. To cope with data of high variability within each subpopulation, a two-level random-effects regression mixture is introduced, which lets an individual curve vary from the template of its group.
Second, the well-known K-Means algorithm is applied to the estimated regression coefficients. Particular attention is devoted to optimal data fitting, because K-Means is not invariant to linear transformations; to overcome this problem, the clustering task is integrated with Markov Chain Monte Carlo approaches. The thesis is also concerned with functional discriminant analysis, including linear and quadratic scores and their modified probabilistic forms using random mixtures. As with K-Means, it is shown how to apply Fisher's method of canonical scores to the regression coefficients. Experiments on simulated datasets demonstrate the performance of all the mentioned methods and make it possible to choose those with the best results and time efficiency. The implementation is done in Mathematica 4.0. Finally, the possibilities offered by the development of curve clustering algorithms in vast research areas of modern science are examined, such as neurology, genome studies, and speech and image recognition systems, with future investigation into ubiquitous computing not excluded. Utility in economics is illustrated with an application to the claims analysis of some life insurance products. The goals of the thesis have been achieved.
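The second principle (K-Means on estimated regression coefficients) can be illustrated in a few lines: project each observed curve onto a small polynomial basis, then run Lloyd's K-Means on the coefficient vectors. A toy sketch on simulated curves, not the thesis' own code:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 50)

# Two simulated groups of noisy curves observed on a common grid.
curves = np.vstack([
    np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal((20, 50)),
    2 * x - 1 + 0.1 * rng.standard_normal((20, 50)),
])

# Step 1: least-squares projection of each curve onto a cubic basis.
B = np.vander(x, 4)                                   # columns x^3, x^2, x, 1
coef = np.linalg.lstsq(B, curves.T, rcond=None)[0].T  # one row per curve

# Step 2: Lloyd's K-Means (k = 2) on the coefficient vectors.
centers = coef[[0, -1]].copy()                        # crude initialisation
for _ in range(25):
    dist = ((coef[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    labels = dist.argmin(1)
    centers = np.array([coef[labels == k].mean(0) for k in range(2)])

# The two simulated groups are recovered as two pure clusters.
print(len(set(labels[:20])), len(set(labels[20:])))  # 1 1
```

Clustering in coefficient space rather than on the raw grid is what makes the method a functional one; the thesis' point about non-invariance to linear transformations concerns exactly this choice of basis and fit.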
9

Lee, Kyeong Eun. "Bayesian models for DNA microarray data analysis." Diss., Texas A&M University, 2005. http://hdl.handle.net/1969.1/2465.

Full text
Abstract:
Selection of significant genes via expression patterns is important in a microarray problem. Owing to small sample size and a large number of variables (genes), the selection process can be unstable. This research proposes a hierarchical Bayesian model for gene (variable) selection. We employ latent variables in a regression setting and use a Bayesian mixture prior to perform the variable selection. Due to the binary nature of the data, the posterior distributions of the parameters are not in explicit form, and we need to use a combination of truncated sampling and Markov Chain Monte Carlo (MCMC) based computation techniques to simulate the posterior distributions. The Bayesian model is flexible enough to identify the significant genes as well as to perform future predictions. The method is applied to cancer classification via cDNA microarrays. In particular, the genes BRCA1 and BRCA2 are associated with a hereditary disposition to breast cancer, and the method is used to identify the set of significant genes to classify BRCA1 and others. Microarray data can also be applied to survival models. We address the issue of how to reduce the dimension in model building by selecting significant genes as well as assessing the estimated survival curves. Additionally, we consider the well-known Weibull regression and semiparametric proportional hazards (PH) models for survival analysis. With microarray data, we need to consider the case where the number of covariates p exceeds the number of samples n. Specifically, for a given vector of response values, which are times to event (death or censored times), and p gene expressions (covariates), we address the issue of how to reduce the dimension by selecting the responsible genes, which control the survival time. This approach enables us to estimate the survival curve when n << p. In our approach, rather than fixing the number of selected genes, we assign a prior distribution to this number.
The approach creates additional flexibility by allowing the imposition of constraints, such as bounding the dimension via a prior, which in effect works as a penalty. To implement our methodology, we use a Markov Chain Monte Carlo (MCMC) method. We demonstrate the use of the methodology with (a) diffuse large B-cell lymphoma (DLBCL) complementary DNA (cDNA) data and (b) Breast Carcinoma data. Lastly, we propose a mixture of Dirichlet process models using the discrete wavelet transform for curve clustering. In order to characterize these time-course gene expressions, we consider them as trajectory functions of time and gene-specific parameters and obtain their wavelet coefficients by a discrete wavelet transform. We then build cluster curves using a mixture of Dirichlet process priors.
10

Gouveia, Bruno Pauka. "Modelo de mistura padrão com tempos de vida exponenciais ponderados." Universidade Federal de São Carlos, 2010. https://repositorio.ufscar.br/handle/ufscar/4544.

Full text
Abstract:
In this work, we briefly introduce the concepts of long-term survival analysis. We dedicate ourselves exclusively to the standard mixture cure model of Boag (1949) and Berkson & Gage (1952), showing its deduction and presenting the immune probability function, which is derived from the model itself, and we investigate the identifiability issues of the mixture model. Motivated by the possibility that an experimental design can lead to a biased sample selection, we study weighted probability distributions, more specifically the family of weighted exponential distributions and its properties. We study two distributions that belong to this family: the length-biased exponential distribution and the beta exponential distribution. Using the GAMLSS package in R, we carry out simulation studies intended to demonstrate the bias that occurs when the possibility of a weighted sample is ignored.
APA, Harvard, Vancouver, ISO, and other styles
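For readers unfamiliar with the standard mixture cure model of Boag (1949) and Berkson & Gage (1952) that recurs throughout these entries, the population survival function is S(t) = π + (1 − π)S₀(t), where π is the cured (immune) fraction and S₀ is the latency survival of the susceptibles. A minimal sketch, assuming an exponential latency distribution; the cure fraction and rate below are illustrative, not taken from any of the cited works:

```python
import math

def mixture_cure_survival(t, cure_frac, rate):
    """Population survival S(t) = pi + (1 - pi) * exp(-rate * t)
    for the standard mixture cure model with exponential latency."""
    return cure_frac + (1.0 - cure_frac) * math.exp(-rate * t)

# Illustrative parameters: 30% long-term survivors, event rate 0.5 per year.
pi, lam = 0.3, 0.5
s0 = mixture_cure_survival(0.0, pi, lam)       # equals 1.0 at time zero
s_late = mixture_cure_survival(50.0, pi, lam)  # plateaus near pi
```

As t grows, S(t) approaches π rather than zero; this long plateau in a Kaplan–Meier curve is what motivates cure models over ordinary survival models.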

Books on the topic "Mixture cure model"

1

Lattman, Eaton E., Thomas D. Grant, and Edward H. Snell. Shape Reconstructions from Small Angle Scattering Data. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780199670871.003.0004.

Full text
Abstract:
This chapter discusses recovering shape or structural information from SAXS data. Key to any such process is the ability to generate a calculated intensity from a model, and to compare this curve with the experimental one. Models for the particle scattering density can be approximated as pure homogeneous geometric shapes. More complex particle surfaces can be represented by spherical harmonics or by a set of close-packed beads. Sometimes structural information is known for components of a particle. Rigid body modeling attempts to rotate and translate structures relative to one another, such that the resulting scattering profile calculated from the model agrees with the experimental SAXS data. More advanced hybrid modelling procedures aim to incorporate as much structural information as is available, including modelling protein dynamics. Solutions may not always contain a homogeneous set of particles. A common case is the presence of two or more conformations of a single particle or a mixture of oligomeric species. The method of singular value decomposition can extract scattering for conformationally distinct species.
APA, Harvard, Vancouver, ISO, and other styles
2

Bažant, Zdenek P., Jia-Liang Le, and Marco Salviato. Quasibrittle Fracture Mechanics and Size Effect. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780192846242.001.0001.

Full text
Abstract:
Many modern engineering structures are composed of brittle heterogeneous (a.k.a. quasibrittle) materials. These materials include concrete (an archetype), composites, tough ceramics, rocks, cold asphalt mixtures, and many brittle materials at the microscale. Understanding the failure behavior of these materials is of paramount importance for improving the resilience and sustainability of various engineering structures including civil infrastructure, aircraft, ships, military armors, and microelectronic devices. This book provides a comprehensive treatment of quasibrittle fracture mechanics. It first presents a concise but rigorous and complete treatment of the linear elastic fracture mechanics, which is the foundation of all fracture mechanics. The topics covered include energy balance analysis of fracture, analysis of near-tip field and stress intensity factors, Irwin's relationship, J-integral, calculation of compliance function and deflection, and analysis of interfacial crack. Built upon the content of linear elastic fracture mechanics, the book presents various fundamental concepts of nonlinear fracture mechanics, which include estimation of inelastic zone size, cohesive crack model, equivalent linear elastic fracture mechanics model, R-curve, and crack band model. The book also discusses some more advanced concepts such as the effects of the triaxial stress state in the fracture process zone, nonlocal continuum models, and discrete computational model. The significant part of the book is devoted to the discussion of the energetic and statistical size effects, which is a salient feature of quasibrittle fracture. The book also presents probabilistic fracture mechanics, and its consequent reliability-based structural analysis and design of quasibrittle structures. Finally, the book provides an extensive review of various practical applications of quasibrittle fracture mechanics.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Mixture cure model"

1

Wu, Jianrong. "Survival Trial Design under the Mixture Cure Model." In Statistical Methods for Survival Trial Design, 141–65. Boca Raton: Chapman and Hall/CRC, 2018. http://dx.doi.org/10.1201/9780429470172-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Coelho-Barros, Emílio Augusto, Jorge Alberto Achcar, and Josmar Mazucheli. "Mixture and Non-mixture Cure Rate Model Considering the Burr XII Distribution." In Springer Proceedings in Mathematics & Statistics, 217–24. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-13881-7_24.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Ibrahim, Noor Akma, Fauzia Taweab, and Jayanthi Arasan. "A Parametric Non-Mixture Cure Survival Model with Censored Data." In Lecture Notes in Electrical Engineering, 231–38. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-03967-1_17.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Wickrama, Kandauda A. S., Tae Kyoung Lee, Catherine Walker O'Neal, and Frederick Lorenz. "Estimating Curve-of-Factors Growth Curve Models." In Higher-Order Growth Curves and Mixture Modeling with Mplus, 49–102. 2nd ed. New York: Routledge, 2021. http://dx.doi.org/10.4324/9781003158769-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Gordon, Nahida H. "Cure Mixture Models in Breast Cancer Survival Studies." In Lifetime Data: Models in Reliability and Survival Analysis, 107–12. Boston, MA: Springer US, 1996. http://dx.doi.org/10.1007/978-1-4757-5654-8_16.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Wickrama, Kandauda A. S., Tae Kyoung Lee, Catherine Walker O'Neal, and Frederick Lorenz. "Longitudinal Confirmatory Factor Analysis and Curve-of-Factors Growth Curve Models." In Higher-Order Growth Curves and Mixture Modeling with Mplus, 40–48. 2nd ed. New York: Routledge, 2021. http://dx.doi.org/10.4324/9781003158769-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Wickrama, Kandauda A. S., Tae Kyoung Lee, Catherine Walker O'Neal, and Frederick Lorenz. "Latent Growth Curve Model with Non-Normal Variables." In Higher-Order Growth Curves and Mixture Modeling with Mplus, 249–74. 2nd ed. New York: Routledge, 2021. http://dx.doi.org/10.4324/9781003158769-13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Wickrama, Kandauda A. S., Tae Kyoung Lee, Catherine Walker O'Neal, and Frederick Lorenz. "Extending a Parallel Process Latent Growth Curve Model (PPM) to a Factor-of-Curves Model (FCM)." In Higher-Order Growth Curves and Mixture Modeling with Mplus, 103–19. 2nd ed. New York: Routledge, 2021. http://dx.doi.org/10.4324/9781003158769-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Wycinka, Ewa, and Tomasz Jurkiewicz. "Mixture Cure Models in Prediction of Time to Default: Comparison with Logit and Cox Models." In Contemporary Trends and Challenges in Finance, 221–31. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-54885-2_21.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Pal, Manisha, Nripes K. Mandal, and Bikas K. Sinha. "Growth Models for Repeated Measurement Mixture Experiments: Optimal Designs for Parameter Estimation and Growth Prediction." In Advances in Growth Curve and Structural Equation Modeling, 81–94. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-0980-9_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Mixture cure model"

1

Safari, Wende Clarence, Ignacio López-de-Ullibarri, and María Amalia Jácome. "Nonparametric Inference for Mixture Cure Model When Cure Information Is Partially Available." In XoveTIC Conference. Basel Switzerland: MDPI, 2021. http://dx.doi.org/10.3390/engproc2021007017.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Leng, Oh Yit, and Zarina Mohd Khalid. "A comparative study of mixture cure models with covariate." In THE 3RD ISM INTERNATIONAL STATISTICAL CONFERENCE 2016 (ISM-III): Bringing Professionalism and Prestige in Statistics. Author(s), 2017. http://dx.doi.org/10.1063/1.4982849.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Chudova, Darya, Scott Gaffney, Eric Mjolsness, and Padhraic Smyth. "Translation-invariant mixture models for curve clustering." In the ninth ACM SIGKDD international conference. New York, New York, USA: ACM Press, 2003. http://dx.doi.org/10.1145/956750.956763.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Chamroukhi, Faicel, and Herve Glotin. "Mixture model-based functional discriminant analysis for curve classification." In 2012 International Joint Conference on Neural Networks (IJCNN 2012 - Brisbane). IEEE, 2012. http://dx.doi.org/10.1109/ijcnn.2012.6252818.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

San Andrés, Luis, Jing Yang, and Xueliang Lu. "On the Leakage, Torque and Dynamic Force Coefficients of an Air in Oil (Wet) Annular Seal: A CFD Analysis Anchored to Test Data." In ASME Turbo Expo 2018: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/gt2018-77140.

Full text
Abstract:
Subsea pumps and compressors must withstand multi-phase flows whose gas volume fraction (GVF) or liquid volume fraction (LVF) varies over a wide range. Gas or liquid content as a dispersed phase in the primary stream affects the leakage, drag torque, and dynamic forced performance of secondary flow components, namely seals, thus affecting the process efficiency and mechanical reliability of pumping/compressing systems, in particular during transient events with sudden changes in gas (or liquid) content. This paper, complementing a parallel experimental program, presents a computational fluid dynamics (CFD) analysis to predict the leakage, drag power and dynamic force coefficients of a smooth surface, uniform clearance annular seal supplied with an air in oil mixture whose inlet GVF varies discretely from 0.0 to 0.9, i.e., from a pure liquid stream to a nearly all gas content mixture. The test seal has uniform radial clearance Cr = 0.203 mm, diameter D = 127 mm, and length L = 0.36 D. The tests were conducted with an inlet pressure/exit pressure ratio equal to 2.5 and a rotor surface speed of 23.3 m/s (3.5 krpm), similar to conditions in a pump neck wear ring seal. The CFD two-phase flow model, first to be anchored to test data, uses an Euler-Euler formulation and delivers information on the precise evolution of the GVF and the gas and liquid streams’ velocity fields. Recreating the test data, the CFD seal mass leakage and drag power decrease steadily as the GVF increases. A multiple-frequency shaft whirl orbit method aids in the calculation of seal reaction force components, and from which dynamic force coefficients, frequency dependent, follow. For operation with a pure liquid, the CFD results and test data produce a constant cross-coupled stiffness, damping, and added mass coefficients, while also verifying predictive formulas typical of a laminar flow. 
The injection of air in the oil stream, small or large in gas volume, immediately produces force coefficients that are frequency dependent; in particular the direct dynamic stiffness which hardens with excitation frequency. The effect is most remarkable for small GVFs, as low as 0.2. The seal test direct damping and cross-coupled dynamic stiffness continuously drop with an increase in GVF. CFD predictions, along with results from a bulk-flow model (BFM), reproduce the test force coefficients with great fidelity. Incidentally, early engineering practice points out to air injection as a remedy to cure persistent (self-excited) vibration problems in vertical pumps, submersible and large size hydraulic. Presently, the model predictions, supported by the test data, demonstrate that even a small content of gas in the liquid stream significantly raises the seal direct stiffness, thus displacing the system critical speed away to safety. The sound speed of a gas in liquid mixture is a small fraction of those speeds for either the pure oil or the gas, hence amplifying the fluid compressibility that produces the stiffness hardening. The CFD model and a dedicated test rig, predictions and test data complementing each other, enable engineered seals for extreme applications.
APA, Harvard, Vancouver, ISO, and other styles
6

Al-Shaher, Abdullah A. "MIXTURES OF REGRESSION CURVE MODELS FOR ARABIC CHARACTER RECOGNITION." In 6th International Conference on Computer Science and Information Technology. AIRCC Publishing Corporation, 2019. http://dx.doi.org/10.5121/csit.2019.90207.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Arumugam, Sridhar, Adebola S. Kasumu, and Anil K. Mehrotra. "Modeling the Static Cooling of Wax–Solvent Mixtures in a Cylindrical Vessel." In 2012 9th International Pipeline Conference. American Society of Mechanical Engineers, 2012. http://dx.doi.org/10.1115/ipc2012-90691.

Full text
Abstract:
Under subsea conditions, the transportation of ‘waxy’ crude oil through pipelines is accompanied by the precipitation and deposition of higher paraffinic compounds as solids (waxes) onto the cooler surfaces of the pipeline. Wax deposition is more pronounced during shut down of a pipeline since the fluid is held at static conditions. In this study, the static cooling of wax–solvent mixtures in a cylindrical vessel was modeled as a moving boundary formulation involving liquid–solid phase transformation. The deposition process during the transient cooling was treated as a partial freezing/solidification process. Also, the effect of the mixture composition and the cooling rate on the Wax Precipitation Temperature (WPT) or the solubility curve of the wax–solvent mixture was taken into consideration when the bulk liquid phase temperature was lowered below the WAT of the initial mixture composition. The predictions for the transient temperature profiles in the liquid and the deposit region, and the location of the liquid–deposit interface were validated with recently reported experimental results [19]. The predictions were also compared with the predictions for the gelling behavior of wax–solvent mixtures under static cooling reported by Bidmus [19]. The predictions for the temperature profile at seven thermocouple locations and the location of the liquid–deposit interface were in agreement with the experimental results and signified the important role of the solubility curve. The mathematical model presented was based on heat transfer considerations and regarded the deposition process to be thermally driven.
APA, Harvard, Vancouver, ISO, and other styles
8

Yusuf, Madaki Umar, and Mohd Rizam B. Abu Bakar. "A Bayesian estimation on right censored survival data with mixture and non-mixture cured fraction model based on Beta-Weibull distribution." In INNOVATIONS THROUGH MATHEMATICAL AND STATISTICAL RESEARCH: Proceedings of the 2nd International Conference on Mathematical Sciences and Statistics (ICMSS2016). Author(s), 2016. http://dx.doi.org/10.1063/1.4952559.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Ju, Zhaojie, and Honghai Liu. "Hand motion recognition via fuzzy active curve axis Gaussian mixture models: A comparative study." In 2011 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE). IEEE, 2011. http://dx.doi.org/10.1109/fuzzy.2011.6007367.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Cosham, Andrew, David G. Jones, Keith Armstrong, Daniel Allason, and Julian Barnett. "Analysis of Two Dense Phase Carbon Dioxide Full-Scale Fracture Propagation Tests." In 2014 10th International Pipeline Conference. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/ipc2014-33080.

Full text
Abstract:
Two full-scale fracture propagation tests have been conducted using dense phase carbon dioxide (CO2)-rich mixtures at the Spadeadam Test Site, United Kingdom (UK). The tests were conducted on behalf of National Grid Carbon, UK, as part of the COOLTRANS research programme. The semi-empirical Two Curve Model, developed by the Battelle Memorial Institute in the 1970s, is widely used to set the (pipe body) toughness requirements for pipelines transporting lean and rich natural gas. However, it has not been validated for applications involving dense phase CO2 or CO2-rich mixtures. One significant difference between the decompression behaviour of dense phase CO2 and a lean or rich gas is the very long plateau in the decompression curve. The objective of the two tests was to determine the level of ‘impurities’ that could be transported by National Grid Carbon in a 914.0 mm outside diameter, 25.4 mm wall thickness, Grade L450 pipeline, with arrest at an upper shelf Charpy V-notch impact energy (toughness) of 250 J. The level of impurities that can be transported is dependent on the saturation pressure of the mixture. Therefore, the first test was conducted at a predicted saturation pressure of 80.5 barg and the second test was conducted at a predicted saturation pressure of 73.4 barg. A running ductile fracture was successfully initiated in the initiation pipe and arrested in the test section in both of the full-scale tests. The main experimental data, including the layout of the test sections, and the decompression and timing wire data, are summarised and discussed. The results of the two full-scale fracture propagation tests demonstrate that the Two Curve Model is not (currently) applicable to liquid or dense phase CO2 or CO2-rich mixtures.
APA, Harvard, Vancouver, ISO, and other styles