To see other types of publications on this topic, follow this link: Concordance probability method.

Journal articles on the topic "Concordance probability method"

Create a correct reference in APA, MLA, Chicago, Harvard, and several other styles


Consult the top 50 journal articles for your research on the topic "Concordance probability method."

Next to every source in the list of references there is an "Add to bibliography" button. Click on this button, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever these details are included in the metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Liao, Jason J. Z., and Robert Capen. "An Improved Bland-Altman Method for Concordance Assessment." International Journal of Biostatistics 7, no. 1 (January 6, 2011): 1–17. http://dx.doi.org/10.2202/1557-4679.1295.

2

Slatkin, M. "Detecting small amounts of gene flow from phylogenies of alleles." Genetics 121, no. 3 (March 1, 1989): 609–12. http://dx.doi.org/10.1093/genetics/121.3.609.

Abstract:
The method of coalescents is used to find the probability that none of the ancestors of alleles sampled from a population are immigrants. If that is the case for samples from two or more populations, then there would be concordance between the phylogenies of those alleles and the geographic locations from which they are drawn. This type of concordance has been found in several studies of mitochondrial DNA from natural populations. It is shown that if the number of sequences sampled from each population is reasonably large (10 or more), then this type of concordance suggests that the average number of individuals migrating between populations is likely to be relatively small (Nm less than 1) but the possibility of occasional migrants cannot be excluded. The method is applied to the data of E. Bermingham and J. C. Avise on mtDNA from the bowfin, Amia calva.
3

Il’in, A. M., A. R. Danilin, and S. V. Zakharov. "Application of the concordance method of asymptotic expansions to solving boundary-value problems." Journal of Mathematical Sciences 125, no. 5 (February 2005): 610–57. http://dx.doi.org/10.1007/pl00021946.

4

Millwater, Harry R., Michael P. Enright, and Simeon H. K. Fitch. "Convergent Zone-Refinement Method for Risk Assessment of Gas Turbine Disks Subject to Low-Frequency Metallurgical Defects." Journal of Engineering for Gas Turbines and Power 129, no. 3 (September 15, 2006): 827–35. http://dx.doi.org/10.1115/1.2431393.

Abstract:
Titanium gas turbine disks are subject to a rare but not insignificant probability of fracture due to metallurgical defects, particularly hard α. A probabilistic methodology has been developed and implemented in concordance with the Federal Aviation Administration (FAA) Advisory Circular 33.14-1 to compute the probability of fracture of gas turbine titanium disks subject to low-frequency metallurgical (hard α) defects. This methodology is further developed here to ensure that a robust, converged, accurate calculation of the probability is computed that is independent of discretization issues. A zone-based material discretization methodology is implemented, then refined locally through further discretization using risk contribution factors as a metric. The technical approach is akin to “h” refinement in finite element analysis; that is, a local metric is used to indicate regions requiring further refinement, and subsequent refinement yields a more accurate solution. Supporting technology improvements are also discussed, including localized finite element refinement and onion skinning for zone subdivision resolution, and a restart database and parallel processing for computational efficiency. A numerical example is presented for demonstration.
5

Du, Yu, Nicolas Sutton-Charani, Sylvie Ranwez, and Vincent Ranwez. "EBCR: Empirical Bayes concordance ratio method to improve similarity measurement in memory-based collaborative filtering." PLOS ONE 16, no. 8 (August 9, 2021): e0255929. http://dx.doi.org/10.1371/journal.pone.0255929.

Abstract:
Recommender systems aim to provide users with a selection of items, based on predicting their preferences for items they have not yet rated, thus helping them filter out irrelevant ones from a large product catalogue. Collaborative filtering is a widely used mechanism to predict a particular user’s interest in a given item, based on feedback from neighbour users with similar tastes. The way the user’s neighbourhood is identified has a significant impact on prediction accuracy. Most methods estimate user proximity from ratings they assigned to co-rated items, regardless of their number. This paper introduces a similarity adjustment taking into account the number of co-ratings. The proposed method is based on a concordance ratio representing the probability that two users share the same taste for a new item. The probabilities are further adjusted by using the Empirical Bayes inference method before being used to weight similarities. The proposed approach improves existing similarity measures without increasing time complexity and the adjustment can be combined with all existing similarity measures. Experiments conducted on benchmark datasets confirmed that the proposed method systematically improved the recommender system’s prediction accuracy performance for all considered similarity measures.
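
To make the idea of a concordance ratio with Empirical Bayes shrinkage concrete, here is a minimal Python sketch. It is not the authors' EBCR implementation: the "same side of each user's mean" agreement rule and the Beta prior parameters are assumptions chosen for illustration only.

# Hedged sketch: a raw concordance ratio between two users, shrunk toward a
# prior with a simple Beta-Binomial (Empirical Bayes style) adjustment.
import numpy as np

def concordance_ratio(r_u, r_v):
    """Raw concordance: share of co-rated items where both users deviate from
    their own mean rating in the same direction (NaN marks unrated items)."""
    co = ~np.isnan(r_u) & ~np.isnan(r_v)
    if co.sum() == 0:
        return 0.0, 0
    du = r_u[co] - np.nanmean(r_u)
    dv = r_v[co] - np.nanmean(r_v)
    agreements = int(np.sum(du * dv > 0))
    return agreements / co.sum(), int(co.sum())

def eb_adjusted_ratio(agreements, n_co, alpha=2.0, beta=2.0):
    """Beta-Binomial shrinkage: with few co-ratings the estimate is pulled
    toward the prior mean alpha/(alpha+beta)."""
    return (agreements + alpha) / (n_co + alpha + beta)

u = np.array([5, 4, np.nan, 2, 1])   # toy rating vectors
v = np.array([4, 5, 3, np.nan, 2])
ratio, n_co = concordance_ratio(u, v)
print(ratio, eb_adjusted_ratio(ratio * n_co, n_co))
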
6

D'Agostino, M., M. Wagner, J. A. Vazquez-Boland, T. Kuchta, R. Karpiskova, J. Hoorfar, S. Novella, et al. "A Validated PCR-Based Method To Detect Listeria monocytogenes Using Raw Milk as a Food Model—Towards an International Standard." Journal of Food Protection 67, no. 8 (August 1, 2004): 1646–55. http://dx.doi.org/10.4315/0362-028x-67.8.1646.

Abstract:
A PCR assay with an internal amplification control was developed for Listeria monocytogenes. The assay has a 99% detection probability of seven cells per reaction. When tested against 38 L. monocytogenes strains and 52 nontarget strains, the PCR assay was 100% inclusive (positive signal from target) and 100% exclusive (no positive signal from nontarget). The assay was then evaluated in a collaborative trial involving 12 European laboratories, where it was tested against an additional 14 target and 14 nontarget strains. In that trial, the inclusivity was 100% and the exclusivity was 99.4%, and both the accordance (repeatability) and the concordance (reproducibility) were 99.4%. The assay was incorporated within a method for the detection of L. monocytogenes in raw milk, which involves 24 h of enrichment in half-Fraser broth followed by 16 h of enrichment in a medium that can be added directly into the PCR. The performance characteristics of the PCR-based method were evaluated in a collaborative trial involving 13 European laboratories. In that trial, a specificity value (percentage of correct identification of blank samples) of 81.8% was obtained; the accordance was 87.9%, and the concordance was 68.1%. The sensitivity (correct identification of milk samples inoculated with 20 to 200 L. monocytogenes cells per 25 ml) was 89.4%, the accordance was 81.2%, and the concordance was 80.7%. This method provides a basis for the application of routine PCR-based analysis to dairy products and other foodstuffs and should be appropriate for international standardization.
7

Kroner, Daryl G., Megan M. Morrison, and Evan M. Lowder. "A Principled Approach to the Construction of Risk Assessment Categories: The Council of State Governments Justice Center Five-Level System." International Journal of Offender Therapy and Comparative Criminology 64, no. 10-11 (August 20, 2019): 1074–90. http://dx.doi.org/10.1177/0306624x19870374.

Abstract:
Consistent risk category placement of criminal justice clients across instruments will improve the communication of risk. Efforts coordinated by the Council of State Governments (CSG) Justice Center led to the development of a principled (i.e., a system based on a given set of procedures) method of developing risk assessment levels. An established risk assessment instrument (Level of Service Inventory–Revised [LSI-R]) was used to assess the risk-level concordance of the CSG Justice Center Five-Level system. Specifically, concordance was assessed by matching the defining characteristics of the data set with its distribution qualities and by the level/category similarity between the observed reoffending base rate and the statistical probability of reoffending. Support for the CSG Justice Center Five-Level system was found through a probation data set ( N = 24,936) having a greater proportion of offenders in the lower risk levels than a parole/community data set ( N = 36,303). The statistical probabilities of reoffending in each CSG Justice Center system risk level had greater concordance to the observed Five-Level base rates than the base rates from the LSI-R original categories. The concordance evidence for the CSG Justice Center Five-Level system demonstrates the ability of this system to place clients in appropriate risk levels.
8

Gadyl’shin, R. R. "Concordance method of asymptotic expansions in a singularly-perturbed boundary-value problem for the Laplace operator." Journal of Mathematical Sciences 125, no. 5 (February 2005): 579–609. http://dx.doi.org/10.1007/pl00021941.

9

Duwarahan, Jeevana, and Lakshika S. Nawarathna. "An Improved Measurement Error Model for Analyzing Unreplicated Method Comparison Data under Asymmetric Heavy-Tailed Distributions." Journal of Probability and Statistics 2022 (December 15, 2022): 1–13. http://dx.doi.org/10.1155/2022/3453912.

Abstract:
Method comparison studies mainly focus on determining whether two methods of measuring a continuous variable agree well enough to be used interchangeably. Typically, a standard mixed-effects model is used to model method comparison data, assuming normality for both random effects and errors. However, these assumptions are frequently violated in practice due to skewness and heavy tails. In particular, the biases of the methods may vary with the extent of measurement. Thus, we propose a methodology for method comparison data that deals with these issues in the context of a measurement error model (MEM) assuming a skew-t (ST) distribution for the true covariates and a centered Student’s t (cT) distribution for the errors with known error variances, named STcT-MEM. An expectation conditional maximization (ECM) algorithm is used to compute the maximum likelihood (ML) estimates. A simulation study is performed to validate the proposed methodology. The methodology is illustrated by analyzing gold particle data and then compared with the standard measurement error model (SMEM). The likelihood ratio (LR) test is used to identify the most appropriate model among the above models. In addition, the total deviation index (TDI) and concordance correlation coefficient (CCC) were used to check the agreement between the methods. The findings suggest that the proposed framework for analyzing unreplicated method comparison data with asymmetry and heavy tails works effectively for modest and large samples.
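
For reference, the two agreement measures named at the end of this abstract can be written down directly. The sketch below gives the basic moment form of Lin's concordance correlation coefficient and a simple nonparametric total deviation index; the paper itself uses likelihood-based versions under the STcT-MEM model, and the toy measurements are invented.

# Hedged sketch: Lin's CCC and a nonparametric TDI for paired measurements.
import numpy as np

def ccc(x, y):
    """Concordance correlation coefficient from sample moments."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

def tdi(x, y, p=0.9):
    """p-th quantile of absolute differences: p of paired measurements
    differ by no more than this amount."""
    return np.quantile(np.abs(np.asarray(x, float) - np.asarray(y, float)), p)

method_a = [10.1, 11.8, 9.6, 14.2, 12.9]
method_b = [10.4, 11.5, 10.0, 13.8, 13.4]
print(round(ccc(method_a, method_b), 3), round(tdi(method_a, method_b), 3))
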
10

Vanbelle, Sophie, and Emmanuel Lesaffre. "Modeling agreement on bounded scales." Statistical Methods in Medical Research 27, no. 11 (May 8, 2017): 3460–77. http://dx.doi.org/10.1177/0962280217705709.

Abstract:
Agreement is an important concept in medical and behavioral sciences, in particular in clinical decision making, where disagreements may imply different patient management. The concordance correlation coefficient is an appropriate measure to quantify agreement between two scorers on a quantitative scale. However, this measure is based on the first two moments, which could poorly summarize the shape of the score distribution on bounded scales. Bounded outcome scores are common in medical and behavioral sciences. Typical examples are scores obtained on visual analog scales and scores derived as the number of positive items on a questionnaire. These kinds of scores often show a non-standard distribution, like a J- or U-shape, questioning the usefulness of the concordance correlation coefficient as an agreement measure. The logit-normal distribution has been shown to be successful in modeling bounded outcome scores of two types: (1) when the bounded score is a coarsened version of a latent score with a logit-normal distribution on the [0,1] interval and (2) when the bounded score is a proportion with the true probability having a logit-normal distribution. In the present work, a model-based approach, based on a bivariate generalization of the logit-normal distribution, is developed in a Bayesian framework to assess agreement on bounded scales. This method makes it possible to directly study the impact of predictors on the concordance correlation coefficient and can be simply implemented in standard Bayesian software, like JAGS and WinBUGS. The performance of the new method is compared to the classical approach using simulations. Finally, the methodology is used in two different medical domains: cardiology and rheumatology.
11

Cai, Tommaso, Salvatore Privitera, Federica Trovato, Paolo Capogrosso, Federico Dehò, Sebastiano Cimino, Michele Rizzo, et al. "A Proposal of a New Nomogram to Predict the Need for Testosterone ReplACEment (TRACE): A Simple Tool for Everyday Clinical Practice." Journal of Personalized Medicine 12, no. 10 (October 5, 2022): 1654. http://dx.doi.org/10.3390/jpm12101654.

Abstract:
International guidelines suggest using testosterone therapy (TTh) in hypogonadal men presenting symptoms of testosterone deficiency (TD), even though there is no fixed threshold level of T at which TTh should be started. We aimed to develop and validate a nomogram named TRACE (Testosterone ReplACEment) for predicting the need for TTh in patients with “low–normal” total testosterone levels. The following nomogram variables were used: serum T level; serum LH level; BMI; state of nocturnal erections; metabolic comorbidities; and IPSS total score. The nomogram was tested by calculating concordance probabilities, as well as by assessing the calibration of the predicted probability of clinical testosterone deficiency and need for TTh, together with the clinical outcome of TTh. A cohort of 141 patients was used for the development of the nomogram, while a cohort of 123 patients attending another institution was used to externally validate and calibrate it. Sixty-four patients (45.3%) received TTh. Among them, sixty patients (93.7%) reported a significant clinical improvement after TTh. The nomogram had a concordance index of 0.83 [area under the ROC curve 0.81 (95% CI 0.71–0.83)]. In conclusion, the TRACE nomogram accurately predicted the probability of clinical impairment related to TD, and proved to be a simple and reliable tool for selecting hypogonadal patients with not clearly pathological testosterone values who will benefit from TTh.
12

Ragazzo, Michele, Giulio Puleri, Valeria Errichiello, Laura Manzo, Laura Luzzi, Saverio Potenza, Claudia Strafella, et al. "Evaluation of OpenArray™ as a Genotyping Method for Forensic DNA Phenotyping and Human Identification." Genes 12, no. 2 (February 3, 2021): 221. http://dx.doi.org/10.3390/genes12020221.

Abstract:
A custom plate of OpenArray™ technology was evaluated to test 60 single-nucleotide polymorphisms (SNPs) validated for the prediction of eye color, hair color, and skin pigmentation, and for personal identification. The SNPs were selected from already validated subsets (Hirisplex-s, Precision ID Identity SNP Panel, and ForenSeq DNA Signature Prep Kit). The concordance rate and call rate for every SNP were calculated by analyzing 314 sequenced DNA samples. The sensitivity of the assay was assessed by preparing a dilution series of 10.0, 5.0, 1.0, and 0.5 ng. The OpenArray™ platform obtained an average call rate of 96.9% and a concordance rate near 99.8%. Sensitivity testing performed on serial dilutions demonstrated that a sample with 0.5 ng of total input DNA can be correctly typed. The profiles of the 19 SNPs selected for human identification reached a random match probability (RMP) of, on average, 10−8. An analysis of 21 examples of biological evidence from 8 individuals, that generated single short tandem repeat profiles during the routine workflow, demonstrated the applicability of this technology in real cases. Seventeen samples were correctly typed, revealing a call rate higher than 90%. Accordingly, the phenotype prediction revealed the same accuracy described in the corresponding validation data. Despite the reduced discrimination power of this system compared to STR based kits, the OpenArray™ System can be used to exclude suspects and prioritize samples for downstream analyses, providing well-established information about the prediction of eye color, hair color, and skin pigmentation. More studies will be needed for further validation of this technology and to consider the opportunity to implement this custom array with more SNPs to obtain a lower RMP and to include markers for studies of ancestry and lineage.
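
As a rough illustration of how a random match probability is assembled from independent loci under Hardy-Weinberg assumptions, consider the sketch below; the allele frequencies and genotypes are invented and are not the panel data from this study.

# Hedged sketch: RMP for independent SNP loci = product of the genotype
# frequencies of the observed profile (p^2, 2pq, or q^2 per locus).
def genotype_freq(p, genotype):
    q = 1.0 - p
    if genotype == "AA":
        return p * p
    if genotype == "Aa":
        return 2 * p * q
    return q * q  # "aa"

# (allele frequency of A, observed genotype) per locus; illustration values
profile = [(0.30, "Aa"), (0.55, "AA"), (0.12, "aa"), (0.40, "Aa")]
rmp = 1.0
for p, g in profile:
    rmp *= genotype_freq(p, g)
print(f"RMP = {rmp:.2e}")  # shrinks as more independent loci are added
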
13

Kwit, Ewa, Zbigniew Osiński, Antonio Lavazza, and Artur Rzeżutka. "Detection of myxoma virus in the classical form of myxomatosis using an AGID assay: statistical assessment of the assay’s diagnostic performance." Journal of Veterinary Research 64, no. 3 (July 14, 2020): 369–72. http://dx.doi.org/10.2478/jvetres-2020-0049.

Abstract:
Introduction: The aim of the study was to estimate the diagnostic sensitivity (DSe) and specificity (DSp) of an agar gel immunodiffusion (AGID) assay for detection of myxoma virus (MYXV) in the classical form of myxomatosis and to compare its diagnostic performance to that of molecular methods (IAC-PCR, OIE PCR, and OIE real-time PCR). Material and Methods: A panel of MYXV-positive samples of tissue homogenates with low (1 PCR unit – PCRU) and high (3,125 PCRU) virus levels and outbreak samples were used for method comparison studies. The validation parameters of the AGID assay were assessed using statistical methods. Results: The AGID attained DSe of 0.65 (CI95%: 0.53–0.76), DSp of 1.00 (CI95%: 0.40–1.00), and accuracy of 0.67 (CI95%: 0.55–0.76). The assay confirmed its diagnostic usefulness primarily for testing samples containing ≥3,125 PCRU of MYXV DNA. However, in the assaying of samples containing <3,125 PCRU of the virus there was a higher probability of getting false negative results, and only molecular methods showed a 100% sensitivity for samples with low (1 PCRU) virus concentration. The overall concordance of the results between AGID and IAC-PCR was fair (ĸ = 0.40). Full concordance of the results was observed for OIE PCR and OIE real-time PCR when control reference material was analysed. Conclusions: Findings from this study suggest that AGID can be used with some limitations as a screening tool for detection of MYXV infections.
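
For orientation, the agreement statistics quoted here (diagnostic sensitivity, specificity, and Cohen's kappa) can be computed from simple 2x2 tables, as in the sketch below; the counts are invented to roughly echo the quoted DSe and DSp and are not the study's raw data.

# Hedged sketch: DSe/DSp against a reference result, plus Cohen's kappa for
# agreement between two assays. All counts are illustrative.
def se_sp(tp, fp, fn, tn):
    return tp / (tp + fn), tn / (tn + fp)

def cohens_kappa(a, b, c, d):
    """2x2 agreement table: a = both positive, b = assay1+/assay2-,
    c = assay1-/assay2+, d = both negative."""
    n = a + b + c + d
    po = (a + d) / n
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    return (po - pe) / (1 - pe)

print(se_sp(tp=52, fp=0, fn=28, tn=4))        # DSe = 0.65, DSp = 1.00
print(round(cohens_kappa(40, 10, 15, 19), 2))  # fair agreement in this toy table
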
14

Chi, D. S., Y. Sonoda, N. R. Abu-Rustum, C. S. Awtrey, J. Huh, R. R. Barakat, and M. W. Kattan. "Nomogram for survival after primary surgery for bulky stage IIIC ovarian carcinoma." Journal of Clinical Oncology 24, no. 18_suppl (June 20, 2006): 5058. http://dx.doi.org/10.1200/jco.2006.24.18_suppl.5058.

Abstract:
5058 Background: Nomograms have been developed for numerous malignancies to predict a specific individual’s probability of long-term survival based on known prognostic factors. To date, no prediction model has been developed for patients with ovarian cancer. The objective of this study was to develop a nomogram to predict the probability of 4-year survival after primary cytoreductive surgery for bulky stage IIIC ovarian carcinoma. Methods: Nomogram predictor variables included age, tumor grade, histologic type, preoperative platelet count, the presence or absence of ascites, and residual disease status after primary cytoreduction. Disease-specific survival was estimated using the Kaplan-Meier method. Cox proportional hazards regression was used for multivariable analysis. The Cox model was the basis for the nomogram. The concordance index was used as an accuracy measure, with bootstrapping to correct for optimistic bias. Calibration plots were constructed. Results: A total of 462 patients with bulky stage IIIC ovarian carcinoma underwent primary cytoreductive surgery at our institution during the study period of 1/89 to 12/03, of whom 397 were evaluable for inclusion in the study. The median age of the study population was 60 years (range 22–87). The primary surgeon in all cases was an attending gynecologic oncologist. Postoperatively, all patients received platinum-based systemic chemotherapy. Ovarian cancer-specific survival at 4 years was 51%. A nomogram was constructed on the basis of a Cox regression model and the 6 predictor variables. This nomogram was internally validated using bootstrapping and shown to have excellent calibration with a bootstrap-corrected concordance index of 0.67. Conclusions: A nomogram was developed to predict 4-year disease-specific survival after primary cytoreductive surgery for bulky stage IIIC ovarian carcinoma. The nomogram utilizes 6 predictor variables that are readily accessible, assigns a point value to each variable, and then predicts the probability of 4-year survival based on the total point value for an individual patient. This tool should be useful for patient counseling, clinical trial eligibility determination, postoperative management, and follow-up. No significant financial relationships to disclose.
15

Luis-Lima, Sergio, Carolina Mas-Sanmartin, Ana Elena Rodríguez-Rodríguez, Esteban Porrini, Alberto Ortiz, Flavio Gaspari, Laura Diaz-Martin, et al. "A Simplified Iohexol-Based Method to Measure Renal Function in Sheep Models of Renal Disease." Biology 9, no. 9 (August 31, 2020): 259. http://dx.doi.org/10.3390/biology9090259.

Abstract:
Sheep are highly adequate models for human renal diseases because of their many similarities in the histology and physiology of kidney and pathogenesis of kidney diseases. However, the lack of a simple method to measure glomerular filtration rate (GFR) limits its use as a model of renal diseases. Hence, we aimed to develop a simple method to measure GFR based on the plasma clearance of iohexol by assessing different pharmacokinetic models: (a) CL2: two-compartment (samples from 15 to 420 min; reference method); (b) CL1: one-compartment (samples from 60 to 420 min); (c) CL1f: CL1 adjusted by a correction formula and (d) SM: simplified CL2 (15 to 300 min). Specific statistics of agreement were used to test the models against CL2. The agreement between CL1 and CL2 was low, but both CL1f and SM showed excellent agreement with CL2, as indicated by a total deviation index of ~5–6%, a concordance correlation of 0.98–0.99 and a coverage probability of 99–100%, respectively. Hence, the SM approach is preferable due to a reduced number of samples and shorter duration of the procedure; two points that improve animal management and welfare.
16

Robertson, Stephanie, Gustav Stålhammar, Eva Darai-Ramqvist, Mattias Rantalainen, Nicholas P. Tobin, Jonas Bergh, and Johan Hartman. "Prognostic value of Ki67 analysed by cytology or histology in primary breast cancer." Journal of Clinical Pathology 71, no. 9 (March 27, 2018): 787–94. http://dx.doi.org/10.1136/jclinpath-2017-204976.

Abstract:
Aims: The accuracy of biomarker assessment in breast pathology is vital for therapy decisions. The therapy predictive and prognostic biomarkers oestrogen receptor (ER), progesterone receptor, HER2 and Ki67 may act as surrogates to gene expression profiling of breast cancer. The aims of this study were to investigate the concordance of consecutive biomarker assessment by immunocytochemistry on preoperative fine-needle aspiration cytology versus immunohistochemistry (IHC) on the corresponding resected breast tumours, and further to investigate the concordance with molecular subtype and correlation to stage and outcome. Methods: Two retrospective cohorts comprising 385 breast tumours with clinicopathological data, including gene expression-based subtype and up to 10-year overall survival data, were evaluated. Results: In both cohorts, we identified a substantial variation in Ki67 index between cytology and histology and a switch between low and high proliferation within the same tumour in 121/360 cases. ER evaluations were discordant in only 1.5% of the tumours. From cohort 2, gene expression data with PAM50 subtype were used to correlate surrogate subtypes. IHC-based surrogate classification could identify the correct molecular subtype in 60% and 64% of patients by cytology (n=63) and surgical resections (n=73), respectively. Furthermore, high Ki67 in surgical resections but not in cytology was associated with poor overall survival and a higher probability of axillary lymph node metastasis. Conclusions: This study shows considerable differences in the prognostic value of Ki67 but not ER in breast cancer depending on the diagnostic method. Furthermore, our findings show that both methods are insufficient in predicting true molecular subtypes.
17

Zhao, Binsheng, Shing Mirn Lee, Jing Qi, P. David Mozley, David J. Mauro, and Lawrence H. Schwartz. "Minor response rate to predict patient survival." Journal of Clinical Oncology 31, no. 15_suppl (May 20, 2013): 3635. http://dx.doi.org/10.1200/jco.2013.31.15_suppl.3635.

Abstract:
3635 Background: RECIST is widely used to evaluate anticancer therapy efficacy, yet its response cutoff values have not been proven biologically or measurement-error relevant. Our previous study showed that the variability in measuring relative change in total tumor burden (TTB%) was +/-10% in metastatic colorectal cancer (mCRC). Our study investigated the impact of a finer gradation of response categories in predicting survival. Methods: 468 patients enrolled in a phase II/III clinical trial evaluating a systemic therapy in mCRC were analyzed. TTB on baseline and 6-week (+/- 3-wk) scans were obtained per RECIST 1.0. Overall survival (OS) was defined from the start of treatment and the date of the scan using the landmark method. The TTB% was summarized using RECIST category and a finer gradation that included cut-offs established in our variability study (<-30%, -30% - -10%, -10% - 10%, 10% - 20%, >20%). The OS and TTB% correlation was evaluated by the Kaplan Meier method and Cox regression. The discriminatory powers of the response summaries were examined using Harrel’s c statistics and concordance probability. Results: Out of the 468 patients, 141 died. The median survival time for patients with a TTB% at 6 weeks of [-30%, -10%] (n=116), [-10%, 10%] (n=171), [10%, 20%] (n=43), >20% (n=62) were 417, 287, 223 and 172 days, respectively. Among those with a change of < -30% (n=76), over half of patients were still alive. The hazard ratio for OS compared to those with a change of <-30% are listed in the Table. The inclusion of additional categories increased both Harrel’s c-statistic (0.69 vs 0.73) and concordance probability (0.63 vs 0.69). Combining the [-10%, 10%] and the [10%, 20%] categories yielded very similar statistics compared to having all 5 categories. Both definitions of OS yielded consistent results. Conclusions: Evidence suggests that, in the context of this study, the minor change category of [-30%, -10%] at 6-wk correlates with longer survival compared to the change category of [-10%, 20%], suggesting a possible re-evaluation of conventional response cut-off values. [Table: see text]
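
A brute-force sketch of Harrell's c-statistic (a concordance probability) for right-censored survival data, the summary used to compare response categorisations in this abstract; the toy times, events, and risk scores are invented.

# Hedged sketch: Harrell's c over all usable pairs, with ties in the risk
# score counted as 1/2. Real analyses use optimised library routines.
import numpy as np

def harrells_c(time, event, risk):
    """Pair (i, j) is usable if the earlier time is an observed event.
    Concordant if the subject with the earlier event has the higher risk."""
    time, event, risk = map(np.asarray, (time, event, risk))
    conc = ties = usable = 0
    n = len(time)
    for i in range(n):
        for j in range(i + 1, n):
            a, b = (i, j) if time[i] < time[j] else (j, i)
            if not event[a]:
                continue  # earlier time censored -> pair not usable
            usable += 1
            if risk[a] > risk[b]:
                conc += 1
            elif risk[a] == risk[b]:
                ties += 1
    return (conc + 0.5 * ties) / usable

t = [172, 223, 287, 417, 500, 150]   # days
e = [1, 1, 1, 0, 0, 1]               # 1 = death observed, 0 = censored
r = [4, 3, 2, 1, 1, 4]               # higher score = higher predicted risk
print(round(harrells_c(t, e, r), 3))
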
18

Asencio, Elena, Indranil Banik, and Pavel Kroupa. "A massive blow for ΛCDM – the high redshift, mass, and collision velocity of the interacting galaxy cluster El Gordo contradicts concordance cosmology." Monthly Notices of the Royal Astronomical Society 500, no. 4 (November 5, 2020): 5249–67. http://dx.doi.org/10.1093/mnras/staa3441.

Abstract:
El Gordo (ACT-CL J0102-4915) is an extremely massive galaxy cluster (M200 ≈ 3 × 10^15 M⊙) at redshift z = 0.87 composed of two subclusters with a mass ratio of 3.6 merging at speed Vinfall ≈ 2500 km s−1. Such a fast collision between individually rare massive clusters is unexpected in Lambda cold dark matter (ΛCDM) cosmology at such high z. However, this is required for non-cosmological hydrodynamical simulations of the merger to match its observed properties. Here, we determine the probability of finding a similar object in a ΛCDM context using the Jubilee simulation box with a side length of 6 h^-1 Gpc. We search for galaxy cluster pairs that have turned around from the cosmic expansion with properties similar to El Gordo in terms of total mass, mass ratio, redshift, and collision velocity relative to virial velocity. We fit the distribution of pair total mass quite accurately, with the fits used in two methods to infer the probability of observing El Gordo in the surveyed region. The more conservative (and detailed) method involves considering the expected distribution of pairwise mass and redshift for analogue pairs with similar dimensionless parameters to El Gordo in the past light-cone of a z = 0 observer. Detecting one pair with its mass and redshift rules out ΛCDM cosmology at 6.16σ. We also use the results of Kraljic and Sarkar to show that the Bullet Cluster is in 2.78σ tension once the sky coverage of its discovery survey is accounted for. Using a χ² approach, the combined tension can be estimated as 6.43σ. Both collisions arise naturally in a Milgromian dynamics (MOND) cosmology with light sterile neutrinos.
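
The quoted combined tension can be reproduced approximately with the chi-squared combination the authors describe: convert each individual tension to a chi-squared contribution, sum them, and translate the tail probability back to an equivalent two-sided Gaussian significance. The sketch below assumes the two tensions are independent.

# Hedged sketch: combining two Gaussian-sigma tensions via a 2-dof chi-squared.
from scipy.stats import chi2, norm

def combined_sigma(sigmas):
    stat = sum(s ** 2 for s in sigmas)    # chi-squared with len(sigmas) dof
    p = chi2.sf(stat, df=len(sigmas))     # tail probability
    return norm.isf(p / 2)                # two-sided Gaussian equivalent

print(round(combined_sigma([6.16, 2.78]), 2))  # about 6.4, consistent with the abstract
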
19

Zhan, Xiangpeng, Tao Chen, Ming Jiang, Wen Deng, Xiaoqiang Liu, Luyao Chen, and Bin Fu. "A Novel Nomogram and Risk Classification System Predicting the Cancer-Specific Survival of Muscle-Invasive Bladder Cancer Patients after Partial Cystectomy." Journal of Oncology 2022 (March 1, 2022): 1–10. http://dx.doi.org/10.1155/2022/2665711.

Abstract:
Purpose. To establish a prognostic model that estimates cancer-specific survival (CSS) probability for muscle-invasive bladder cancer patients undergoing partial cystectomy. Patients and Methods. 866 patients from the Surveillance, Epidemiology, and End Results (SEER) database (2004–2015) were enrolled in our study. These patients were randomly divided into the development cohort (n = 608) and validation cohort (n = 258) at a ratio of 7 : 3. A Cox regression was performed to select the predictors associated with CSS. The Kaplan–Meier method was used to analyze the survival outcome between different risk groups. The calibration curves, receiver operating characteristic (ROC) curves, and the concordance index (C-index) were utilized to evaluate the performance of the model. Results. The nomogram incorporated age, histology, T stage, N stage, M stage, regional nodes examined, and tumour size. The C-index of the model was 0.733 (0.696–0.77) in the development cohort, while this value was 0.707 (0.705–0.709) in the validation cohort. The AUC of the nomogram was 0.802 for 1-year, 0.769 for 3-year, and 0.799 for 5-year, respectively, in the development cohort, and was 0.731 for 1-year, 0.748 for 3-year, and 0.752 for 5-year, respectively, in the validation cohort. The calibration curves for 1-year, 3-year, and 5-year CSS showed great concordance. Significant differences were observed between high, medium, and low risk groups ( P < 0.001 ). Conclusions. We have constructed a highly discriminative and precise nomogram and a corresponding risk classification system to predict the cancer-specific survival for muscle-invasive bladder cancer patients undergoing partial cystectomy. The model can assist in the decision on choice of treatment, patient counselling, and follow-up scheduling.
20

Jolicoeur, E. Marc, Stefan Verheye, Timothy D. Henry, Lawrence Joseph, Serge Doucet, Christopher J. White, Elazer Edelman, and Shmuel Banai. "A novel method to interpret early phase trials shows how the narrowing of the coronary sinus concordantly improves symptoms, functional status and quality of life in refractory angina." Heart 107, no. 1 (July 21, 2020): 41–46. http://dx.doi.org/10.1136/heartjnl-2020-316644.

Abstract:
Background: Reduction of the coronary sinus was shown to improve angina in patients unsuitable for revascularisation. We assessed whether a percutaneous device that reduces the diameter of the coronary sinus improved outcomes across multiple endpoints in a phase II trial. Methods: We conducted a novel analysis performed as a post hoc efficacy analysis of the COSIRA (Coronary Sinus Reducer for Treatment of Refractory Angina) trial, which enrolled patients with Canadian Cardiovascular Society (CCS) class 3–4 refractory angina. We used four domains: symptoms (CCS Angina Scale), functionality (total exercise duration), ischaemia (imaging) and health-related quality of life. For all domains, we specified a meaningful threshold for change. The primary endpoint was defined as a probability of ≥80% that the reducer exceeded the meaningful threshold on two or more domains (group-level analysis) or that the average efficacy score in the reducer group exceeded the sham control group by at least two points (patient-level analysis). Results: We randomised 104 participants to either a device that narrows the coronary sinus (n=52) or a sham implantation (n=52). The reducer group met the prespecified criteria for concordance at the group level and demonstrated improvement in symptoms (0.59 CCS grade, 95% credible interval (CrI)=0.22 to 0.95), total exercise duration (+27.9%, 95% CrI=2.8% to 59.8%) and quality of life (stability +11.2 points, 95% CrI=3.3 to 19.1; perception +11.0, 95% CrI=3.3 to 18.7). Conclusions: The reducer concordantly improved symptoms, functionality and quality of life compared with a sham intervention in patients with angina unsuitable for coronary revascularisation. Concordant analysis such as this one can help interpret early phase trials and guide the decision to pursue a clinical programme into a larger confirmatory trial. Trial registration number: ClinicalTrials.gov identifier: NCT01205893.
21

Suzuki, Mayumi, Takuma Shibahara, and Yoshihiro Muragaki. "A Method to Extract Feature Variables Contributed in Nonlinear Machine Learning Prediction." Methods of Information in Medicine 59, no. 01 (February 2020): 001–8. http://dx.doi.org/10.1055/s-0040-1701615.

Abstract:
Background: Although advances in prediction accuracy have been made with new machine learning methods, such as support vector machines and deep neural networks, these methods build nonlinear machine learning models and thus lack the ability to explain the basis of their predictions. Improving their explanatory capabilities would increase the reliability of their predictions. Objective: Our objective was to develop a factor analysis technique that enables the presentation of the feature variables used in making predictions, even in nonlinear machine learning models. Methods: The factor analysis technique consists of two parts: a backward analysis technique and a factor extraction technique. We developed a factor extraction technique that extracts feature variables from the posterior probability distribution of a machine learning model, which is calculated by the backward analysis technique. Results: In an evaluation using gene expression data from prostate tumor patients and healthy subjects, the prediction accuracy of a deep neural network model was approximately 5% better than that of a support vector machine model. The rate of concordance between the feature variables extracted in an earlier report using Jensen–Shannon divergence and the ones extracted in this report using backward elimination with the Hilbert–Schmidt independence criterion was 40% for the top five variables, 40% for the top 10, and 49% for the top 100. Conclusion: The results showed that models can be evaluated from different viewpoints by using different factor extraction techniques. In the future, we hope to use this technique to verify the characteristics of the features extracted by the factor extraction technique, and to perform clinical studies using the genes we extracted in this experiment.
22

Zhang, Serin, Jiang Shao, Disa Yu, Xing Qiu, and Jinfeng Zhang. "MatchMixeR: a cross-platform normalization method for gene expression data integration." Bioinformatics 36, no. 8 (January 6, 2020): 2486–91. http://dx.doi.org/10.1093/bioinformatics/btz974.

Abstract:
Motivation: Combining gene expression (GE) profiles generated from different platforms enables studies previously infeasible due to sample size limitations. Several cross-platform normalization methods have been developed to remove the systematic differences between platforms, but they may also remove meaningful biological differences among datasets. In this work, we propose a novel approach that removes the platform differences, not the biological ones. Dubbed ‘MatchMixeR’, we model platform differences by a linear mixed effects regression (LMER) model, and estimate them from matched GE profiles of the same cell line or tissue measured on different platforms. The resulting model can then be used to remove platform differences in other datasets. By using LMER, we achieve a better bias-variance trade-off in parameter estimation. We also design a computationally efficient algorithm based on the moment method, which is ideal for ultra-high-dimensional LMER analysis. Results: Compared with several prominent competing methods, MatchMixeR achieved the highest after-normalization concordance. Subsequent differential expression analyses based on datasets integrated from different platforms showed that using MatchMixeR achieved the best trade-off between true and false discoveries, and this advantage is more apparent in datasets with limited samples or unbalanced group proportions. Availability and implementation: Our method is implemented in an R package, ‘MatchMixeR’, freely available at: https://github.com/dy16b/Cross-Platform-Normalization. Supplementary information: Supplementary data are available at Bioinformatics online.
23

Lovtang, Sara C. P., and Gregg M. Riegel. "Predicting the Occurrence of Downy Brome (Bromus tectorum) in Central Oregon." Invasive Plant Science and Management 5, no. 1 (March 2012): 83–91. http://dx.doi.org/10.1614/ipsm-d-11-00029.1.

Abstract:
Where the nonnative annual grass downy brome proliferates, it has changed ecosystem processes, such as nutrient, energy, and water cycles; successional pathways; and fire regimes. The objective of this study was to develop a model that predicts the presence of downy brome in Central Oregon and to test whether high presence correlates with greater cover. Understory data from the U.S. Department of Agriculture (USDA) Forest Service's Current Vegetation Survey (CVS) database for the Deschutes National Forest, the Ochoco National Forest, and the Crooked River National Grassland were compiled, and the presence of downy brome was determined for 1,092 systematically located plots. Logistic regression techniques were used to develop models for predicting downy brome populations. For the landscape including the eastside of the Cascade Mountains to the northwestern edge of the Great Basin, the following were selected as the best predictors of downy brome: low average March precipitation, warm minimum May temperature, few total trees per acre, many western junipers per acre, and a short distance to nearest road. The concordance index = 0.92. Using the equation from logistic regression, a probability for downy brome infestation was calculated for each CVS plot. The plots were assigned to a plant association group (PAG), and the average probability was calculated for the PAGs in which the CVS plots were located. This method could be duplicated in other areas where vegetation inventories take place.
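
A minimal sketch of the modelling summary used here: fit a logistic regression for presence/absence and report the concordance index, which for a binary outcome equals the area under the ROC curve. The predictors are synthetic stand-ins, not the study's precipitation, temperature, tree-density, and road-distance variables.

# Hedged sketch: logistic regression + concordance index (AUC) on fake data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                     # fake environmental predictors
logit = 0.8 * X[:, 0] - 1.2 * X[:, 1] + 0.5 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))     # simulated presence/absence

model = LogisticRegression().fit(X, y)
p_hat = model.predict_proba(X)[:, 1]
print("concordance index (AUC):", round(roc_auc_score(y, p_hat), 3))
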
24

Blum, Mariela A., Yuki Hayashi, Lianchun Xiao, Akihiro Suzuki, Bradley Sabloff, Dipen M. Maru, Takashi Taketa, et al. "A nomogram associated with high probability of malignant nodes in the surgical specimen after trimodality therapy of patients with esophageal cancer (EC)." Journal of Clinical Oncology 30, no. 15_suppl (May 20, 2012): e14547-e14547. http://dx.doi.org/10.1200/jco.2012.30.15_suppl.e14547.

Abstract:
e14547 Background: The presence of malignant lymph nodes (+ypNodes) in the surgical specimen after preoperative chemoradiation (trimodality therapy) in patients with EC portends a poor prognosis for overall survival (OS) and disease-free survival (DFS). Currently, none of the clinical variables highly correlates with +ypNodes. We hypothesized that a combination of clinical variables could generate a model that associates with high likelihood of +ypNodes after trimodality therapy in EC patients. Methods: We report on 293 consecutive EC patients who received trimodality therapy. A multivariate logistic regression analysis that included pretreatment and post-chemoradiation variables identified independent variables that were used to construct a nomogram for +ypNodes after trimodality therapy in EC patients. Results: Of 293 patients, 91 (31.1%) had +ypNodes. OS (p=0.0002) and DFS (p<0.0001) were shorter in patients with +ypNodes compared to those with –ypNodes. In multivariable analysis, the significant variables for +ypNodes were: baseline T-stage (odds ratio [OR], 7.145; 95% confidence interval [CI], 1.381-36.969; p=0.019), baseline N-stage (OR, 2.246; 95% CI, 1.024-4.926; p=0.044), tumor length (OR, 1.178; 95% CI, 1.024-1.357; p=0.022), induction chemotherapy (OR, 0.471; 95% CI, 0.242-0.915; p=0.026), nodal uptake on post-chemoradiation positron emission tomography (OR, 2.923; 95% CI, 1.007-8.485; p=0.049), and enlarged node(s) on post-chemoradiation computerized tomography (OR, 3.465; 95% CI, 1.549-7.753; p=0.002). The nomogram, after internal validation using the bootstrap method (200 runs), yielded a high concordance index of 0.756. Conclusions: Our nomogram highly correlates with the presence of +ypNodes after chemoradiation and, upon validation, could prove useful in individualizing therapy for EC patients. Supported by UTMDACC and generous donors.
25

Kingwara, Leonard, Muthoni Karanja, Catherine Ngugi, Geoffrey Kangogo, Kipkerich Bera, Maureen Kimani, Nancy Bowen, Dorcus Abuya, Violet Oramisi, and Irene Mukui. "From Sequence Data to Patient Result: A Solution for HIV Drug Resistance Genotyping With Exatype, End to End Software for Pol-HIV-1 Sanger Based Sequence Analysis and Patient HIV Drug Resistance Result Generation." Journal of the International Association of Providers of AIDS Care (JIAPAC) 19 (January 1, 2020): 232595822096268. http://dx.doi.org/10.1177/2325958220962687.

Abstract:
Introduction: With the rapid scale-up of antiretroviral therapy (ART) to treat HIV infection, there are ongoing concerns regarding probable emergence and transmission of HIV drug resistance (HIVDR) mutations. This scale-up has led to an increased need for routine HIVDR testing to inform the clinical decision on a regimen switch. Although the majority of wet laboratory processes are standardized, slow, labor-intensive data transfer and subjective manual sequence interpretation steps are still required to finalize and release patient results. We thus set out to validate the applicability of a software package that generates HIVDR patient results from raw sequence data independently. Methods: We assessed the performance characteristics of Hyrax Bioscience’s Exatype (a sequence-data-to-patient-result, fully automated sequence analysis software, which consolidates RECall, MEGA X and the Stanford HIV database) against the standard method (RECall and the Stanford database). Exatype is a web-based HIV drug resistance bioinformatic pipeline available at sanger.exatype.com. To validate Exatype, we used a test set of 135 remnant HIV viral load samples at the National HIV Reference Laboratory (NHRL). Results: We analyzed, and successfully generated results for, 126 sequences out of 135 specimens with both the standard and Exatype software. Result production using Exatype required minimal hands-on time in comparison to the standard method (6 computation-hours using the standard method versus 1.5 Exatype computation-hours). Concordance between the 2 systems was 99.8% for the 311,227 bases compared. Of the 0.2% discordant bases, 99.7% were attributed to nucleotide mixtures as a result of the sequence editing in RECall. Both methods identified similar (99.1%) critical antiretroviral resistance-associated mutations, resulting in a 99.2% concordance of resistance susceptibility interpretations. The base-calling comparison between the 2 methods had a Cohen’s kappa of 0.97 to 0.99, implying almost perfect agreement with minimal base-calling variation. On a predefined dataset, RECall editing displayed the highest probability of scoring mixtures accurately (1 vs. 0.71) and the lowest chance of inaccurately assigning mixtures to pure nucleotides (0.002–0.0008). This advantage is attributable to the manual sequence editing in RECall. Conclusion: The reduction in hands-on time needed is a benefit when using the Exatype HIV DR sequence analysis platform and result generation tool. There is a minimal difference in base calling between Exatype and the standard method. Although the discrepancy has minimal impact on drug resistance interpretation, allowing sequence editing in Exatype, as in RECall, could significantly improve its performance.
26

Luo, Yuan, Hao Zhang, Xiaoli Zeng, Wei Xu, Xun Wang, Ying Zhang, and Yan Wang. "Nomogram prediction of caries risk among schoolchildren age 7 years based on a cohort study in Shanghai." Journal of International Medical Research 49, no. 11 (November 2021): 030006052110601. http://dx.doi.org/10.1177/03000605211060175.

Abstract:
Objective: Caries risk assessment tools are essential for identifying and providing treatment for individuals at high risk of developing caries. We aimed to develop a nomogram for the assessment and evaluation of caries risk among Chinese children. Methods: We enrolled schoolchildren age 7 years from a primary school in Shanghai. Baseline information of participants was collected using a questionnaire completed by children’s caregivers. A nomogram of a novel prediction scoring model was established based on predictors detected in univariate and multivariate analyses. Predictive accuracy and discriminative ability of the nomogram were calculated using the concordance index (C index). The bootstrap method (1000 samples) was used to decrease overfitting. The net benefit of the model was validated using decision curve analysis. Results: Overall, 406 children with complete information and two completed dental examinations were included in the final analysis. The nomogram based on logistic regression model coefficients demonstrated a C index of 0.766 (95% confidence interval: 0.761–0.771) for caries risk. The net benefit of the decision curve analysis was 38.6% at 55% threshold probability. Conclusion: This nomogram model, derived using dietary habits, oral hygiene status, and caries experience, showed promising predictive ability to assess the caries risk among Chinese children.
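
For reference, the net benefit reported from the decision curve analysis follows a standard formula that can be computed directly, as in the sketch below; the counts are invented for illustration and do not reproduce the study's 38.6% figure.

# Hedged sketch: net benefit at a chosen threshold probability, as used in
# decision curve analysis. tp/fp are counts at that threshold; n is the cohort size.
def net_benefit(tp, fp, n, threshold):
    """NB = TP/n - FP/n * threshold/(1 - threshold)."""
    return tp / n - (fp / n) * threshold / (1 - threshold)

print(round(net_benefit(tp=180, fp=40, n=406, threshold=0.55), 3))
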
27

Chaman Baz, Amir Hossein, Elle van de Wiel, Hans Groenewoud, Mark Arntz, Martin Gotthardt, Jaap Deinum, and Johan Langenhuijsen. "CXCR4-directed [68Ga]Ga-PentixaFor PET/CT versus adrenal vein sampling performance: a study protocol for a randomised two-step controlled diagnoStic Trial Ultimately comparing hypertenSion outcome in primary aldosteronism (CASTUS)." BMJ Open 12, no. 8 (August 2022): e060779. http://dx.doi.org/10.1136/bmjopen-2022-060779.

Abstract:
Introduction: Primary aldosteronism (PA) is the most common form of secondary hypertension. It is caused by overproduction of aldosterone by either a unilateral aldosterone-producing adenoma (APA) or by bilateral adrenal hyperplasia (BAH). Distinction is crucial, because PA is cured by adrenalectomy in APA and is treated by mineralocorticoid receptor antagonists in BAH. The distinction is currently made by adrenal vein sampling (AVS). AVS is a costly, invasive and complex technical procedure with limited availability and is not superior in terms of outcomes to CT scan-based diagnosis. Thus, there is a need for a cheaper, non-invasive and readily available diagnostic tool in PA. We propose a new diagnostic imaging modality employing the positron emission tomography (PET) tracer [68Ga]Ga-PentixaFor. This tracer has high focal uptake in APAs, whereas low uptake was shown in patients with normal adrenals. Thus, [68Ga]Ga-PentixaFor PET/CT is an imaging modality with the potential to improve subtyping of PA. It is readily available, safe and, as an out-patient procedure, a much cheaper diagnostic method than AVS. Methods and analysis: We present a two-step randomised controlled trial (RCT) protocol in which we assess the accuracy of [68Ga]Ga-PentixaFor PET/CT in the first step and compare [68Ga]Ga-PentixaFor PET/CT to AVS in the second step. In the first step, the concordance will be determined between [68Ga]Ga-PentixaFor PET/CT and AVS, and a concordance probability is calculated with a Bayesian prediction model. In the second step, we will compare [68Ga]Ga-PentixaFor PET/CT and AVS for clinical outcome and intensity of hypertensive drug use, defined as daily defined doses, in an RCT. Ethics and dissemination: Ethics approval was acquired from the medical ethical committee East-Netherlands (METC Oost-Nederland). Results will be disseminated through peer-reviewed articles. Trial registration number: NL9625.
28

Kittiskulnam, Piyawan, Krittaya Tiskajornsiri, Pisut Katavetin, Tawatchai Chaiwatanarat, Somchai Eiam-Ong, and Kearkiat Praditpornsilpa. "The failure of glomerular filtration rate estimating equations among obese population." PLOS ONE 15, no. 11 (November 18, 2020): e0242447. http://dx.doi.org/10.1371/journal.pone.0242447.

Abstract:
Background: Obesity is a major public health problem, and increasing numbers of obese individuals are at risk for kidney disease. However, the validity of serum creatinine-based glomerular filtration rate (GFR) estimating equations in the obese population is yet to be determined. Methods: We evaluated the performance of the reexpressed Modification of Diet in Renal Disease (MDRD), reexpressed MDRD with Thai racial factor, Thai estimated GFR (eGFR), and Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equations among obese patients, defined as body mass index (BMI) ≥25 kg/m2, with the reference measured GFR (mGFR) determined by the 99mTc-diethylene triamine penta-acetic acid (99mTc-DTPA) plasma clearance method. Serum creatinine levels were measured using a standardized enzymatic method simultaneously with GFR measurement. Statistical methods for assessing agreement for continuous data, including the total deviation index (TDI), concordance correlation coefficient (CCC), and coverage probability (CP), were used to compare each estimating equation with the reference mGFR. Accuracy within 10%, representing the percentage of estimations falling within ±10% of the mGFR values, was also tested for all equations. Results: A total of 240 Thai obese patients were finally recruited, with a mean BMI of 31.5 ± 5.8 kg/m2. In the total population, all eGFR equations underestimated the reference mGFR. The average TDI values were 55%, indicating that 90% of the estimates fell within the range of -55% to +55% of the reference mGFR. The CP values averaged 0.23 and CCC scores ranged from 0.75 to 0.81, reflecting low to moderate levels of agreement between each eGFR equation and the reference mGFR. The proportions of patients achieving accuracy within 10% ranged from 23% for the reexpressed MDRD equation to 33% for the Thai eGFR formula. Among participants with BMI more than 35 kg/m2 (n = 48), the mean error was extremely large and significantly higher for all equations compared with the lower BMI category. Also, the strength of agreement evaluated by TDI, CCC, and CP was low in the subset of patients with BMI ≥35 kg/m2. Conclusion: Estimating equations generally underestimated the reference mGFR in subjects with obesity. The overall performance of GFR estimating equations demonstrated poor concordance with the reference mGFR among individuals with high BMI levels. In certain clinical settings, such as the decision to initiate dialysis, direct measurement of GFR is required to establish real renal function in the obese population.
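
Two of the agreement summaries used here are simple proportions of estimates falling inside a tolerance band around the measured GFR. A toy sketch (invented values, not the study data):

# Hedged sketch: "accuracy within 10%" and an empirical coverage-probability-style
# proportion, both computed as the share of estimates within a relative tolerance.
import numpy as np

def pct_within(measured, estimated, tol=0.10):
    measured = np.asarray(measured, float)
    estimated = np.asarray(estimated, float)
    rel_err = np.abs(estimated - measured) / measured
    return float(np.mean(rel_err <= tol))

mgfr = [95, 80, 60, 110, 45, 72]   # reference measured GFR (toy)
egfr = [88, 70, 55, 96, 47, 60]    # equation-based estimates (toy)
print("P10 accuracy:", round(pct_within(mgfr, egfr, 0.10), 2))
print("share within +/-30%:", round(pct_within(mgfr, egfr, 0.30), 2))
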
29

Emura, Takeshi, Casimir Ledoux Sofeu, and Virginie Rondeau. "Conditional copula models for correlated survival endpoints: Individual patient data meta-analysis of randomized controlled trials." Statistical Methods in Medical Research 30, no. 12 (October 9, 2021): 2634–50. http://dx.doi.org/10.1177/09622802211046390.

Abstract:
Correlations among survival endpoints are important for exploring surrogate endpoints of the true endpoint. With a valid surrogate endpoint tightly correlated with the true endpoint, the efficacy of a new drug/treatment can be measurable on it. However, the existing methods for measuring correlation between two endpoints impose an invalid assumption: correlation structure is constant across different treatment arms. In this article, we reconsider the definition of Kendall's concordance measure (tau) in the context of individual patient data meta-analyses of randomized controlled trials. According to our new definition of Kendall's tau, its value depends on the treatment arms. We then suggest extending the existing copula (and frailty) models so that their Kendall's tau can vary across treatment arms. Our newly proposed model, a joint frailty-conditional copula model, is the implementation of the new definition of Kendall's tau in meta-analyses. In order to facilitate our approach, we develop an original R function condCox.reg(.) and make it available in the R package joint.Cox ( https://CRAN.R-project.org/package=joint.Cox ). We apply the proposed method to a gastric cancer dataset (3288 patients in 14 randomized trials from the GASTRIC group). This data analysis concludes that Kendall's tau has different values between the surgical treatment arm and the adjuvant chemotherapy arm ( p-value<0.001), whereas disease-free survival remains a valid surrogate at individual level for overall survival in these trials.
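
To make the central idea concrete, the sketch below estimates Kendall's tau separately within each treatment arm on simulated, uncensored data; the paper's joint frailty-conditional copula model does this properly in the presence of censoring, so this is only an illustration of arm-specific association.

# Hedged sketch: arm-specific Kendall's tau between a surrogate and a true
# endpoint on simulated data (no censoring handled here).
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(1)
n = 200
arm = rng.integers(0, 2, size=n)              # 0 = control, 1 = treatment
surrogate = rng.exponential(1.0, size=n)
noise = rng.exponential(1.0, size=n)
# association deliberately stronger in arm 1 than in arm 0
true_endpoint = np.where(arm == 1,
                         0.8 * surrogate + 0.2 * noise,
                         0.2 * surrogate + 0.8 * noise)

for a in (0, 1):
    tau, _ = kendalltau(surrogate[arm == a], true_endpoint[arm == a])
    print(f"arm {a}: Kendall's tau = {tau:.2f}")
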
30

Maistrenko, Oleksandr, Vitalii Khoma, Andrii Shcherba, Yurii Olshevskyi, Yurii Pereverzin, Oleh Popkov, Alexander Kornienko, Oleksandr Shatilo, and Andriy Maneliyk. "Improving a procedure for determining the factors that influence the need of higher education institutions for specialists of the highest qualification." Eastern-European Journal of Enterprise Technologies 1, no. 3(115) (February 28, 2022): 86–96. http://dx.doi.org/10.15587/1729-4061.2022.251027.

Abstract:
A method for calculation of coefficients of the impact of factors on the need for specialists of the highest qualification was proposed. The method is based on expert evaluation methods, in particular, on determining the importance, degree of realization, and tendency of factors that affect the need for highly qualified specialists. The method implements the unit of data reliability verification based on the Kendall coefficient of concordance and Pearson criterion. The method applies an original approach to determining the competence of experts, in particular, by taking into consideration self-evaluation, mutual evaluation, and objective evaluation. The proposed method makes it possible to take into account the influence of factors on the need for specialists of the highest qualification with the possibility of forecasting. The totality of factors that influence the need for specialists of the highest qualification and the magnitude of their impact was determined. They were determined by calculating the indicators of each of the criteria regarding importance, realization, and tendency. Determining was carried out using the algorithm for calculating the coefficients of influence of the factors on the need for specialists of the highest qualification. In general, the following groups of factors were determined: conditions of scientific and scientific-pedagogical activity at a certain institution of higher education, the attractiveness of scientific and scientific-pedagogical activity in a certain country (region), development of industry (speciality). A group of 30 experts was selected to determine the numerical values of the factors, which satisfies the condition for achieving a confidence probability of 0.94. The results of the evaluation of expert judgments revealed that the most influential factors are: social protection (0.87), budget for higher education (0.99), remuneration (0.9), and prestige of scientific and pedagogical activities (0.91). The least influential are: the number of primary positions in the area (0.48) and self-realization opportunities at a higher education institution (0.58).
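
The reliability check described here rests on Kendall's coefficient of concordance W with a chi-squared significance test. A minimal sketch with invented expert rankings and no tie correction:

# Hedged sketch: Kendall's W for m experts ranking n factors, with the usual
# chi-squared check (chi2 = m*(n-1)*W on n-1 degrees of freedom).
import numpy as np
from scipy.stats import chi2

ranks = np.array([            # rows = experts (m), columns = factors (n)
    [1, 2, 3, 4, 5],
    [2, 1, 3, 5, 4],
    [1, 3, 2, 4, 5],
    [1, 2, 4, 3, 5],
])
m, n = ranks.shape
col_sums = ranks.sum(axis=0)
S = np.sum((col_sums - col_sums.mean()) ** 2)
W = 12 * S / (m ** 2 * (n ** 3 - n))
chi2_stat = m * (n - 1) * W
p_value = chi2.sf(chi2_stat, df=n - 1)
print(f"W = {W:.2f}, chi2 = {chi2_stat:.2f}, p = {p_value:.3f}")
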
31

Vossenaar, Marieke, Noel W. Solomons, Siti Muslimatun, Mieke Faber, Olga P. García, Eva Monterrosa, Kesso Gabrielle van Zutphen et Klaus Kraemer. « Nutrient Density as a Dimension of Dietary Quality : Findings of the Nutrient Density Approach in a Multi-Center Evaluation ». Nutrients 13, no 11 (10 novembre 2021) : 4016. http://dx.doi.org/10.3390/nu13114016.

Full text
Abstract:
The nutrient adequacy of a diet is typically assessed by comparing estimated nutrient intakes with established average nutrient requirements; this approach does not consider total energy consumed. In this multinational survey investigation in Indonesia, Mexico, and South Africa, we explore the applications of the “critical nutrient-density approach”—which brings energy requirements into the equation—in the context of public health epidemiology. We conducted 24 h dietary recalls in convenience samples of normal-weight (BMI 18.5–25 kg/m²) or obese (BMI > 30 kg/m²), low-income women in three settings (n = 290). Dietary adequacy was assessed both in absolute terms and using the nutrient-density approach. No significant differences in energy and nutrient intakes were observed between normal-weight and obese women within any of the three samples (p > 0.05). Both the cut-point method (% of EAR) and the critical nutrient-density approach revealed a high probability of inadequate intakes for several micronutrients, but with poor concordance between the two methods. We conclude that applying a true critical nutrient-density reference to a population or subgroup will often require an approximate, empirically derived estimate of habitual energy intake. This will logically mean that more “problem nutrients” are identified in diets examined with the nutrient-density approach, and efforts toward improved food selection, food fortification, or biofortification will frequently be indicated.
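The cut-point logic mentioned in this abstract can be sketched in a few lines. The numbers below are hypothetical (a made-up EAR, simulated intakes, and an assumed 2000 kcal energy requirement) and only illustrate the general idea of the EAR cut-point and critical nutrient-density comparisons, not the survey data.

```python
# Minimal sketch: prevalence of inadequate intake via the EAR cut-point method,
# plus a crude critical nutrient-density variant. All values are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
ear_mg = 8.1                                        # hypothetical EAR (mg/day)
usual_intake_mg = rng.lognormal(mean=np.log(9.0), sigma=0.35, size=500)
print(f"Below EAR: {np.mean(usual_intake_mg < ear_mg):.1%}")

energy_kcal = rng.normal(1900, 250, size=500)       # simulated energy intakes
density = usual_intake_mg / (energy_kcal / 1000)    # nutrient per 1000 kcal
critical_density = ear_mg / (2000 / 1000)           # assumes a 2000 kcal requirement
print(f"Below critical density: {np.mean(density < critical_density):.1%}")
```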
32

Matthay, Katherine K., Veronique Edeline, Jean Lumbroso, Marie Laure Tanguy, Bernard Asselain, Jean Michel Zucker, Dominique Valteau-Couanet, Olivier Hartmann et Jean Michon. « Correlation of Early Metastatic Response by 123I-Metaiodobenzylguanidine Scintigraphy With Overall Response and Event-Free Survival in Stage IV Neuroblastoma ». Journal of Clinical Oncology 21, no 13 (1 juillet 2003) : 2486–91. http://dx.doi.org/10.1200/jco.2003.09.122.

Full text
Abstract:
Purpose: Metaiodobenzylguanidine (MIBG), specifically taken up in cells of sympathetic origin, provides a highly sensitive and specific indicator for the detection of metastases in neuroblastoma. The aim of this study was to correlate early response to therapy by MIBG scan, using a semiquantitative scoring method, with the end induction response and event-free survival (EFS) rate in stage IV neuroblastoma. Patients and Methods: Seventy-five children older than 1 year and with stage IV neuroblastoma had 123I-MIBG scans at diagnosis, after two and four cycles of induction therapy, and before autologous stem-cell transplantation. The scans were read by two independent observers (concordance > 95%) using a semiquantitative method. Absolute and relative (score divided by initial score) MIBG scores were then correlated with overall pretransplantation response, bone marrow response, and EFS. Results: The pretransplantation response rate was 81%, and the 3-year EFS rate was 32%, similar to a concomitant group of 375 stage IV patients. The median relative MIBG scores after two, four, and six cycles were 0.5, 0.24, and 0.12, respectively. The probability of having a complete response or very good partial response before transplantation was significantly higher if the relative score after two cycles was ≤ 0.5, or, if after four cycles, the relative score was ≤ 0.24. Patients with a relative score of ≤ 0.5 after two cycles or a score of ≤ 0.24 after four cycles had an improved EFS rate (P = .053 and .045, respectively). Conclusion: Semiquantitative MIBG score early in therapy provides valuable prognostic information for overall response and EFS, which may be useful in tailoring treatment.
33

Janicki, Łukasz Jerzy, Wiesław Leoński, Jerzy Stanisław Janicki, Mateusz Nowotarski, Mirosław Dziuk et Ryszard Piotrowicz. « Comparative Analysis of the Diagnostic Effectiveness of SATRO ECG in the Diagnosis of Ischemia Diagnosed in Myocardial Perfusion Scintigraphy Performed Using the SPECT Method ». Diagnostics 12, no 2 (25 janvier 2022) : 297. http://dx.doi.org/10.3390/diagnostics12020297.

Full text
Abstract:
There is a great need for early diagnosis of ischemic heart disease (IHD), the most common cause of which is haemodynamic disorders caused mainly by obstructive atherosclerosis of the coronary arteries. The diagnosis of IHD is usually made with functional tests, which include the resting ECG (R) or the examination of significant perfusion disorders during exercise using the SPECT method. Although the ECG (R) test is commonly used in cardiological diagnostics, it has limited diagnostic value, especially in people with a low probability of coronary artery disease (CAD). In order to increase the effectiveness of the ECG (R) examination, SATRO ECG software, based on the single fibres heart activity model (SFHAM), was used to evaluate the electrocardiograms. The introduction of new classifiers derived from the available medical data made it possible to evaluate the diagnostic efficacy of SATRO ECG (TOT) in predicting significant perfusion disorders in the exercise SPECT study (TOT 2). These disorders are most often caused by obstructive atherosclerosis of the coronary arteries, which is the main cause of CAD. A database of 316 patients (219 men and 97 women, aged 57 ± 10 years) was analyzed using resting and stress ECG, perfusion scintigraphy performed using the SPECT method, and SATRO ECG analysis. The diagnostic efficacy parameters of SATRO ECG (TOT) in predicting significant perfusion abnormalities in the exercise-induced SPECT (TOT 2) study were: sensitivity, 99%; specificity, 91%; concordance, 96%; positive predictive value, 96%; and negative predictive value, 97%. Cohen's kappa coefficient was 0.92 (p < 0.001). These results indicate statistically significant agreement between the two diagnostic methods in the diagnosis of IHD.
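The agreement statistics reported in this abstract (sensitivity, specificity, predictive values, overall concordance, and kappa) all follow from a 2x2 table of test results against a reference standard. The sketch below uses hypothetical counts, not the study's data, to show the standard calculations.

```python
# Minimal sketch: diagnostic metrics from a hypothetical 2x2 table
# (test vs. reference standard), including Cohen's kappa.
tp, fn, fp, tn = 95, 1, 9, 95
n = tp + fn + fp + tn

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)
npv = tn / (tn + fn)
agreement = (tp + tn) / n                                     # overall concordance
p_e = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2  # chance agreement
kappa = (agreement - p_e) / (1 - p_e)
print(f"Se={sensitivity:.2f} Sp={specificity:.2f} PPV={ppv:.2f} "
      f"NPV={npv:.2f} concordance={agreement:.2f} kappa={kappa:.2f}")
```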
34

Turkson, Anthony Joe, Francis Ayiah-Mensah et Vivian Nimoh. « Handling Censoring and Censored Data in Survival Analysis : A Standalone Systematic Literature Review ». International Journal of Mathematics and Mathematical Sciences 2021 (24 septembre 2021) : 1–16. http://dx.doi.org/10.1155/2021/9307475.

Full text
Abstract:
The study recognized the importance of understanding how censoring and censored data are handled in survival analysis and the potential biases that may arise if researchers fail to identify and handle these concepts with care. We systematically reviewed the concept of censoring and how researchers have handled censored data, and brought the ideas together under one umbrella. The review covered articles written in English from the late 1950s to the present. We searched NCBI, PubMed, Google Scholar, and other websites to identify theories and publications on the research topic. The review revealed that censoring can bias results and reduce the statistical power of analyses if it is not handled with the appropriate techniques. We also found that, besides the four main approaches to handling censored data (complete-data analysis; imputation; dichotomizing the data; likelihood-based approaches), there are several other innovative approaches. These include censored network estimation; conditional mean imputation; inverse probability of censoring weighting; maximum likelihood estimation; the Buckley–James least squares algorithm; simple multiple imputation strategies; filter algorithms; Bayesian frameworks; the β-substitution method; search-and-score hill-climbing and constraint-based conditional independence algorithms; frequentist methods; Markov chain Monte Carlo for imputed data; quantile regression; random-effects hierarchical Cox proportional hazards models; Lin's concordance correlation coefficient; and the classical maximum likelihood estimate. We infer that the presence of incomplete information about subjects does not necessarily mean that such information must be discarded; rather, it should be incorporated into the study, as it may carry relevant information that holds the key to understanding the research question. We anticipate that this review will help researchers develop a deeper understanding of censoring in survival analysis and select appropriate statistical procedures for such studies, free of bias.
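As a hedged companion to the review's central point that censored observations should be incorporated rather than discarded, the sketch below contrasts a Kaplan-Meier estimate, which uses the censored subjects, with a naive complete-case summary that drops them. It assumes the lifelines Python package and simulated data; the review itself does not prescribe any particular software.

```python
# Minimal sketch (hypothetical data): incorporate right-censored observations
# via the Kaplan-Meier estimator instead of discarding them.
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(2)
event_times = rng.exponential(scale=12.0, size=300)     # true (latent) event times
censor_times = rng.exponential(scale=15.0, size=300)    # censoring times
observed_time = np.minimum(event_times, censor_times)
event_observed = (event_times <= censor_times).astype(int)

kmf = KaplanMeierFitter()
kmf.fit(observed_time, event_observed=event_observed)
print("Kaplan-Meier median survival:", round(kmf.median_survival_time_, 2))

# Naive complete-case summary the review warns against (drops censored subjects):
print("Complete-case mean time:", round(observed_time[event_observed == 1].mean(), 2))
```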
35

Chi, D., O. Zivanovic, V. Kolev, C. Yu, D. A. Levine, Y. Sonoda, N. R. Abu-Rustum, J. Huh, R. R. Barakat et M. W. Kattan. « Nomogram for predicting 5-year survival after primary surgery for epithelial ovarian cancer ». Journal of Clinical Oncology 27, no 15_suppl (20 mai 2009) : 5523. http://dx.doi.org/10.1200/jco.2009.27.15_suppl.5523.

Full text
Abstract:
5523 Background: Nomograms have been shown to be superior to traditional staging systems for predicting an individual's probability of long-term survival. Our objective was to develop a nomogram based on established prognostic factors to predict the probability of 5-year disease-specific survival (DSS) after primary surgery for patients with epithelial ovarian cancer (EOC) and to compare its predictive accuracy with the currently used FIGO staging system. Methods: We identified all patients with EOC who had their primary staging/cytoreductive surgery at our institution from January 1996 to December 2004. DSS was estimated using the Kaplan-Meier method. We analyzed 28 clinical and pathologic factors for prognostic significance. Significant factors on univariate analysis were then included in the Cox proportional hazards regression model, which identified the factors used to construct the nomogram. The concordance index (CI) was used as an accuracy measure, with bootstrapping to correct for optimistic bias. Calibration plots were constructed. Results: There were 478 evaluable patients in the study. The median age was 58 years (range 25–96). The primary surgeon in all cases was an attending gynecologic oncologist. All patients received platinum-based systemic chemotherapy postoperatively. DSS at 5 years was 52%. The most predictive nomogram was constructed using the following 7 predictor variables: age, ASA status, family history suggestive of hereditary breast/ovarian cancer syndrome, preoperative serum albumin level, FIGO stage, tumor histology, and residual disease status after primary surgery. This nomogram was internally validated using bootstrapping and shown to have excellent calibration, with a bootstrap-corrected CI of 0.721. The CI for FIGO staging alone was significantly lower at 0.62 (p = 0.002). Conclusions: We developed a nomogram to predict 5-year DSS after primary surgery for EOC. The nomogram uses 7 variables that are readily accessible, assigns a point value to each variable, and then predicts the probability of 5-year survival based on the total point value for an individual patient. This tool is more accurate than FIGO staging and should be useful for patient counseling, clinical trial eligibility determination, postoperative management, and follow-up. No significant financial relationships to disclose.
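The workflow behind such a nomogram (fit a multivariable Cox model, then report a concordance index) can be sketched in a few lines. The variables, data, and the use of the lifelines package below are illustrative assumptions, not the study's actual model.

```python
# Minimal sketch (hypothetical data): Cox proportional hazards model plus the
# concordance index used as the accuracy measure for a survival nomogram.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 400
df = pd.DataFrame({
    "age": rng.normal(58, 11, n),
    "albumin": rng.normal(3.9, 0.5, n),
    "residual_disease": rng.integers(0, 2, n),
})
risk = (0.03 * (df["age"] - 58) - 0.8 * (df["albumin"] - 3.9)
        + 1.0 * df["residual_disease"]).to_numpy()
df["time"] = rng.exponential(scale=60 * np.exp(-risk))
df["event"] = (rng.random(n) < 0.7).astype(int)

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
print("Concordance index:", round(cph.concordance_index_, 3))
```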
36

Ravindranath, Arun, Naresh Devineni, Upmanu Lall et Paulina Concha Larrauri. « Season-ahead forecasting of water storage and irrigation requirements – an application to the southwest monsoon in India ». Hydrology and Earth System Sciences 22, no 10 (4 octobre 2018) : 5125–41. http://dx.doi.org/10.5194/hess-22-5125-2018.

Full text
Abstract:
Abstract. Water risk management is a ubiquitous challenge faced by stakeholders in the water or agricultural sector. We present a methodological framework for forecasting water storage requirements and present an application of this methodology to risk assessment in India. The application focused on forecasting crop water stress for potatoes grown during the monsoon season in the Satara district of Maharashtra. Pre-season large-scale climate predictors used to forecast water stress were selected based on an exhaustive search method that evaluates for highest ranked probability skill score and lowest root-mean-squared error in a leave-one-out cross-validation mode. Adaptive forecasts were made in the years 2001 to 2013 using the identified predictors and a non-parametric k-nearest neighbors approach. The accuracy of the adaptive forecasts (2001–2013) was judged based on directional concordance and contingency metrics such as hit/miss rate and false alarms. Based on these criteria, our forecasts were correct 9 out of 13 times, with two misses and two false alarms. The results of these drought forecasts were compared with precipitation forecasts from the Indian Meteorological Department (IMD). We assert that it is necessary to couple informative water stress indices with an effective forecasting methodology to maximize the utility of such indices, thereby optimizing water management decisions.
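The forecasting scheme summarized in this abstract (a non-parametric k-nearest-neighbour hindcast evaluated in leave-one-out mode with contingency metrics) can be illustrated with a toy example. The predictors and outcomes below are synthetic stand-ins, not the study's selected climate indices or water-stress index.

```python
# Minimal sketch (synthetic data): leave-one-out k-nearest-neighbour hindcasts
# of a binary stress / no-stress season, scored with hit and false-alarm counts.
import numpy as np

rng = np.random.default_rng(4)
years = 30
predictors = rng.normal(size=(years, 2))              # e.g., pre-season climate indices
stress = (predictors[:, 0] + 0.5 * rng.normal(size=years) > 0).astype(int)

def knn_forecast_loo(X, y, k=5):
    preds = np.empty_like(y)
    for i in range(len(y)):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                                  # leave the target year out
        neighbours = np.argsort(d)[:k]
        preds[i] = int(y[neighbours].mean() >= 0.5)    # majority vote
    return preds

pred = knn_forecast_loo(predictors, stress)
hits = int(np.sum((pred == 1) & (stress == 1)))
misses = int(np.sum((pred == 0) & (stress == 1)))
false_alarms = int(np.sum((pred == 1) & (stress == 0)))
print(f"hits={hits}, misses={misses}, false alarms={false_alarms}")
```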
37

Gul, Sanam, Rosy Ilyas, Shoukat Ali Lohar et Mansoor Ahmed. « Exploring Gender Differences in the Use of Hedges in Pakistani Engineering Research ». Education and Linguistics Research 6, no 1 (10 mars 2020) : 101. http://dx.doi.org/10.5296/elr.v6i1.16646.

Full text
Abstract:
Previous studies suggest significant differences in academic writing across genders and disciplines. English for Academic Purposes (EAP) serves as an international language of communication, reflecting not only readers and writers but also their professional, social, cultural, linguistic, and educational settings (Canagarajah, 2002). Hedges play an important role in academic writing. The aim of the present study was therefore to investigate the use of hedges in Pakistani engineering research articles at the gender level. The study examined Pakistani research articles from two disciplines, civil engineering and electrical engineering, to determine the frequencies and functions of hedges by gender. Hyland and Tse's (2004) interpersonal model of metadiscourse was employed to identify the list of hedges and to examine similarities and differences in their use between male and female writers. The corpus consisted of 100 research articles from the civil engineering and electrical engineering disciplines, written by male and female authors. A mixed-method (qualitative and quantitative) strategy was employed, with probability and stratified sampling. For the analysis, AntConc 3.4.4 (a concordance tool) was used to determine the frequencies of and differences in the use of hedges.
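For readers unfamiliar with corpus frequency counts, the sketch below shows the general idea of tallying hedging devices in two sub-corpora. The hedge list, texts, and normalization are hypothetical; the study itself relied on Hyland and Tse's hedge list and the AntConc concordance tool.

```python
# Minimal sketch: frequency of a small, hypothetical list of hedges in two
# toy sub-corpora, normalized per 1000 tokens.
import re
from collections import Counter

hedges = {"may", "might", "could", "possibly", "perhaps", "suggest", "appear"}
corpora = {
    "male_authors": "The results may indicate that the beam could possibly fail.",
    "female_authors": "The data suggest that settlement might appear earlier than expected.",
}

for label, text in corpora.items():
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter(t for t in tokens if t in hedges)
    per_1000 = 1000 * sum(counts.values()) / max(len(tokens), 1)
    print(label, dict(counts), f"({per_1000:.1f} hedges per 1000 tokens)")
```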
38

Kim, Naru, In Woong Han, Youngju Ryu, Dae Wook Hwang, Jin Seok Heo, Dong Wook Choi et Sang Hyun Shin. « Predictive Nomogram for Early Recurrence after Pancreatectomy in Resectable Pancreatic Cancer : Risk Classification Using Preoperative Clinicopathologic Factors ». Cancers 12, no 1 (6 janvier 2020) : 137. http://dx.doi.org/10.3390/cancers12010137.

Full text
Abstract:
The survival of patients with pancreatic ductal adenocarcinoma (PDAC) is closely related to recurrence. It is necessary to classify the risk factors for early recurrence and to develop a tool for predicting the initial outcome after surgery. Among patients who underwent resection for resectable PDAC at Samsung Medical Center (Seoul, Korea) between January 2007 and December 2016, 631 were assigned to the training set. Analyses were performed to identify preoperative factors affecting early recurrence after surgery. Variables with a p-value < 0.05 in univariable Cox proportional hazards regression were included in the multivariable analysis and used to establish the nomogram. The established nomogram predicts the probability of early recurrence within 12 months after surgery in resectable PDAC. One thousand bootstrap resamplings were used to validate the nomogram. The concordance index was 0.665 (95% confidence interval [CI], 0.637–0.695), and the incremental area under the curve was 0.655 (95% CI, 0.631–0.682). We developed a web-based calculator, and the nomogram is freely available at http://pdac.smchbp.org/. This is the first nomogram to predict early recurrence after surgery for resectable PDAC in the preoperative setting, providing a way to tailor treatment to the risk of individual patients.
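The bootstrap validation mentioned here can be illustrated by resampling a dataset and recomputing the concordance index to obtain an interval, as in the sketch below. The data, the resampling scheme (a simple percentile interval rather than optimism correction), and the use of lifelines.utils.concordance_index are assumptions for illustration only.

```python
# Minimal sketch (hypothetical data): bootstrap percentile interval for a
# concordance index computed from a fixed risk score.
import numpy as np
from lifelines.utils import concordance_index

rng = np.random.default_rng(5)
n = 300
risk_score = rng.normal(size=n)                              # higher = worse prognosis
time = rng.exponential(scale=24 * np.exp(-0.5 * risk_score))
event = (rng.random(n) < 0.6).astype(int)

boot = []
for _ in range(1000):
    idx = rng.integers(0, n, n)
    # lifelines expects scores where larger means longer survival, hence the sign flip
    boot.append(concordance_index(time[idx], -risk_score[idx], event[idx]))
low, high = np.percentile(boot, [2.5, 97.5])
print(f"bootstrap 95% interval for the c-index: ({low:.3f}, {high:.3f})")
```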
39

Wu, Wenrui, Huanxi Zhang, Jinghong Tan, Qian Fu, Jun Li, Chenglin Wu, Huiting Huang et al. « Eplet-Predicted Antigens : An Attempt to Introduce Eplets into Unacceptable Antigen Determination and Calculated Panel-Reactive Antibody Calculation Facilitating Kidney Allocation ». Diagnostics 12, no 12 (28 novembre 2022) : 2983. http://dx.doi.org/10.3390/diagnostics12122983.

Full text
Abstract:
(1) Calculated panel-reactive antibody (CPRA) is a measure of sensitization based on unacceptable antigens (UAs). Determining UAs from single-antigen bead assays at the allele or antigen level may be inappropriate. We aimed to introduce eplets for a better assessment of sensitization. (2) In total, 900 recipients and 1427 donors were enrolled into the candidate and donor pools, respectively. Eplets were taken from the HLA Epitope Registry. UAs were determined from anti-HLA antibodies identified using LIFECODES Single Antigen (LSA) kits. CPRA values were calculated using a simplified donor-filtering method. (3) HLA antigens containing all the eplets of an HLA antigen in the LSA kits (LSA antigen) were defined as eplet-predicted (EP) antigens, whose reactivity could be predicted by that LSA antigen. High concordance in reactivity was found between LSA and EP antigens. More HLA antigens in the population were covered by EP antigens than by LSA antigens. CPRA values at the EP level were higher than at the allele level and lower than at the antigen level. EP antigens facilitated UA determination for non-LSA antigens and helped avoid acute rejection. (4) UA determination using EP antigens can lead to a more accurate assessment of sensitization, enabling a high probability of compatible organs and a low risk of adverse outcomes.
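At its core, the donor-filtering CPRA calculation described here is the percentage of a donor pool carrying at least one of the candidate's unacceptable antigens. The sketch below uses a tiny, made-up donor pool and UA set purely to show that filtering step; real CPRA calculators use large registry donor pools or haplotype frequencies.

```python
# Minimal sketch: CPRA as the share of donors carrying at least one
# unacceptable antigen (UA), whether defined at the antigen, allele,
# or eplet-predicted (EP) level. All typings below are hypothetical.
donor_pool = [
    {"A2", "A24", "B7", "B35", "DR15", "DR4"},
    {"A1", "A3", "B8", "B44", "DR17", "DR7"},
    {"A2", "A11", "B13", "B51", "DR4", "DR11"},
    {"A23", "A30", "B42", "B53", "DR18", "DR9"},
    {"A1", "A2", "B8", "B7", "DR15", "DR17"},
]
unacceptable = {"A2", "DR17"}

incompatible = sum(1 for donor in donor_pool if donor & unacceptable)
cpra = 100 * incompatible / len(donor_pool)
print(f"CPRA = {cpra:.0f}% ({incompatible}/{len(donor_pool)} donors filtered out)")
```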
40

Kommers, Ivar, David Bouget, André Pedersen, Roelant S. Eijgelaar, Hilko Ardon, Frederik Barkhof, Lorenzo Bello et al. « Glioblastoma Surgery Imaging—Reporting and Data System : Standardized Reporting of Tumor Volume, Location, and Resectability Based on Automated Segmentations ». Cancers 13, no 12 (8 juin 2021) : 2854. http://dx.doi.org/10.3390/cancers13122854.

Full text
Abstract:
Treatment decisions for patients with presumed glioblastoma are based on tumor characteristics available from a preoperative MR scan. Tumor characteristics, including volume, location, and resectability, are often estimated or manually delineated. This process is time-consuming and subjective. Hence, comparisons across cohorts, trials, or registries are subject to assessment bias. In this study, we propose a standardized Glioblastoma Surgery Imaging Reporting and Data System (GSI-RADS) based on an automated method of tumor segmentation that provides standard reports on tumor features that are potentially relevant for glioblastoma surgery. As clinical validation, we determine the agreement in extracted tumor features between the automated method and the current standard of manual segmentations from routine clinical MR scans before treatment. In an observational consecutive cohort of 1596 adult patients with a first-time surgery for glioblastoma from 13 institutions, we segmented gadolinium-enhanced tumor parts both by a human rater and by an automated algorithm. Tumor features were extracted from segmentations of both methods and compared to assess differences, concordance, and equivalence. The laterality, contralateral infiltration, and the laterality indices were in excellent agreement. The native and normalized tumor volumes had excellent agreement, consistency, and equivalence. Multifocality, but not the number of foci, had good agreement and equivalence. The location profiles of cortical and subcortical structures were in excellent agreement. The expected residual tumor volumes and resectability indices had excellent agreement, consistency, and equivalence. Tumor probability maps were in good agreement. In conclusion, automated segmentations are in excellent agreement with manual segmentations and practically equivalent regarding tumor features that are potentially relevant for neurosurgical purposes. Standard GSI-RADS reports can be generated by open-access software.
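Two elementary agreement computations underlying this kind of validation (spatial overlap and volume difference between an automated and a manual segmentation) can be sketched on a toy 3D mask. The masks, voxel size, and perturbation below are hypothetical; the GSI-RADS pipeline reports a much richer feature set.

```python
# Minimal sketch: Dice overlap and absolute volume difference between a
# hypothetical manual mask and a slightly perturbed "automated" mask.
import numpy as np

rng = np.random.default_rng(6)
shape = (64, 64, 32)
manual = np.zeros(shape, dtype=bool)
manual[20:40, 20:40, 10:20] = True                   # toy manual segmentation
auto = manual.copy()
auto ^= rng.random(shape) < 0.01                     # flip ~1% of voxels

intersection = np.logical_and(manual, auto).sum()
dice = 2 * intersection / (manual.sum() + auto.sum())
voxel_volume_ml = 0.001                              # assumes 1 mm^3 voxels
vol_diff_ml = abs(int(manual.sum()) - int(auto.sum())) * voxel_volume_ml
print(f"Dice = {dice:.3f}, |volume difference| = {vol_diff_ml:.2f} ml")
```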
41

Bevilacqua, J. L. B., M. W. Kattan, C. Yu, S. Koifman, I. E. Mattos, R. J. Koifman et A. Bergmann. « Nomograms for predicting the risk of arm lymphedema after axillary dissection in breast cancer. » Journal of Clinical Oncology 29, no 27_suppl (20 septembre 2011) : 8. http://dx.doi.org/10.1200/jco.2011.29.27_suppl.8.

Full text
Abstract:
8 Background: Lymphedema (LE) after axillary dissection (AD) is a multifactorial, chronic, and disabling condition that currently affects an estimated 4 million people worldwide. Although several risk factors have been described, it is difficult to estimate the risk in individual patients. We therefore developed nomograms based on a large data set. Methods: Clinicopathological features were collected from a prospective cohort study of 1,054 women with unilateral breast cancer undergoing AD as part of their surgical treatment from 8/2001 to 11/2002. LE was defined as a volume difference of at least 200 mL between arms at 6 months or more after surgery. The cumulative incidence of LE was ascertained by the Kaplan-Meier method, and Cox proportional hazards models were used to predict the risk of developing LE based on the available data at each timepoint: (I) preoperatively; (II) within 6 months from surgery; and (III) 6 months or later after surgery. Results: The 5-year cumulative incidence of LE was 30.3%. Independent risk factors for LE were age, body mass index, ipsilateral arm chemotherapy infusions, level of AD, location of radiotherapy field, development of postoperative seroma, infection, and early edema. When applied to the validation set, the concordance indexes were 0.706, 0.729, and 0.736 for models I, II, and III, respectively. Conclusions: The proposed nomograms can help physicians and patients to predict the 5-year probability of LE after AD for breast cancer. Free online versions of the nomograms will be available.
42

Panta, Manisha, Avdesh Mishra, Md Tamjidul Hoque et Joel Atallah. « ClassifyTE : a stacking-based prediction of hierarchical classification of transposable elements ». Bioinformatics 37, no 17 (3 mars 2021) : 2529–36. http://dx.doi.org/10.1093/bioinformatics/btab146.

Full text
Abstract:
Motivation: Transposable Elements (TEs) or jumping genes are DNA sequences that have an intrinsic capability to move within a host genome from one genomic location to another. Studies show that the presence of a TE within or adjacent to a functional gene may alter its expression. TEs can also cause an increase in the rate of mutation and can even mediate duplications and large insertions and deletions in the genome, promoting gross genetic rearrangements. The proper classification of identified jumping genes is important for analyzing their genetic and evolutionary effects. An effective classifier, which can explain the role of TEs in germline and somatic evolution more accurately, is needed. In this study, we examine the performance of a variety of machine learning (ML) techniques and propose a robust method, ClassifyTE, for the hierarchical classification of TEs with high accuracy, using a stacking-based ML method. Results: We propose a stacking-based approach for the hierarchical classification of TEs. When trained on three different benchmark datasets, our proposed system achieved 4%, 10.68% and 10.13% average percentage improvement (using the hF measure) compared to several state-of-the-art methods. We developed an end-to-end automated hierarchical classification tool based on the proposed approach, ClassifyTE, to classify TEs up to the super-family level. We further evaluated our method on a new TE library generated by a homology-based classification method and found relatively high concordance at higher taxonomic levels. Thus, ClassifyTE paves the way for a more accurate analysis of the role of TEs. Availability and implementation: The source code and data are available at https://github.com/manisa/ClassifyTE. Supplementary information: Supplementary data are available at Bioinformatics online.
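The stacking idea named in this abstract can be demonstrated with scikit-learn's generic stacking ensemble. The dataset, base learners, and meta-learner below are illustrative assumptions; ClassifyTE's actual features, label hierarchy, and code live in the authors' repository linked above.

```python
# Minimal sketch: a stacking ensemble (base learners + meta-learner) on
# synthetic multi-class data, illustrating stacking in general rather than
# the ClassifyTE pipeline itself.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=20, n_classes=3,
                           n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
                ("svm", SVC(probability=True, random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)
stack.fit(X_tr, y_tr)
print("held-out accuracy:", round(stack.score(X_te, y_te), 3))
```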
43

Ribas-Barba, Lourdes, Lluís Serra-Majem, Blanca Román-Viñas, Joy Ngo et Alicia García-Álvarez. « Effects of dietary assessment methods on assessing risk of nutrient intake adequacy at the population level : from theory to practice ». British Journal of Nutrition 101, S2 (juillet 2009) : S64—S72. http://dx.doi.org/10.1017/s0007114509990596.

Full text
Abstract:
The present study evaluated how applying different dietary methods affects risk assessment of inadequate intakes at the population level. A pooled analysis was conducted using data from two Spanish regional representative surveys both applying similar methodology with a total sample of 2615 individuals aged 12–80. Diet was assessed in the entire sample applying data from one 24 h recall (24HR), a mean of two non-consecutive 24HR, both crude and adjusted for intraindividual variability, and a FFQ. Intakes of vitamins A, C, E, thiamin, riboflavin, niacin, vitamin B6, vitamin B12, Fe, Mg, P and Zn were compared to the average nutrient requirement (ANR or estimated average requirement) in the entire sample and also excluding under-reporters applying the ANR cut-point method (and the probability approach for Fe). Higher percentages of intakes below the ANR were seen for 1–24HR and the mean of 2–24HR, except for nutrients with the highest rates of inadequacy (vitamins A, E, folate and Mg). For these micronutrients, higher percentages of inadequacy were obtained by adjusted 24HR data and the lowest with FFQ. For the remaining nutrients, adjusted data gave the lowest inadequacy percentages. The best concordance was seen between 2–24HR and 1–24HR as well as for adjusted 24HR, with the least observed between FFQ and the other methods. Exclusion of under-reporters considerably reduced inadequacy in both daily methods and FFQ. Crude daily data gave higher estimates of inadequate intakes than adjusted data or FFQ. Reproducibility of daily methods was also reasonably good. Results may differ depending on the micronutrient thus impeding reaching conclusions/recommendations common for all micronutrients.
44

Adeoye, John, Mohamad Koohi-Moghadam, Anthony Wing Ip Lo, Raymond King-Yin Tsang, Velda Ling Yu Chow, Li-Wu Zheng, Siu-Wai Choi, Peter Thomson et Yu-Xiong Su. « Deep Learning Predicts the Malignant-Transformation-Free Survival of Oral Potentially Malignant Disorders ». Cancers 13, no 23 (1 décembre 2021) : 6054. http://dx.doi.org/10.3390/cancers13236054.

Full text
Abstract:
Machine-intelligence platforms for the prediction of the probability of malignant transformation of oral potentially malignant disorders are required as adjunctive decision-making platforms in contemporary clinical practice. This study utilized time-to-event learning models to predict malignant transformation in oral leukoplakia and oral lichenoid lesions. A total of 1098 patients with oral white lesions from two institutions were included in this study. In all, 26 features available from electronic health records were used to train four learning algorithms—Cox-Time, DeepHit, DeepSurv, random survival forest (RSF)—and one standard statistical method—Cox proportional hazards model. Discriminatory performance, calibration of survival estimates, and model stability were assessed using a concordance index (c-index), integrated Brier score (IBS), and standard deviation of the averaged c-index and IBS following training cross-validation. This study found that DeepSurv (c-index: 0.95, IBS: 0.04) and RSF (c-index: 0.91, IBS: 0.03) were the two outperforming models based on discrimination and calibration following internal validation. However, DeepSurv was more stable than RSF upon cross-validation. External validation confirmed the utility of DeepSurv for discrimination (c-index—0.82 vs. 0.73) and RSF for individual survival estimates (0.18 vs. 0.03). We deployed the DeepSurv model to encourage incipient application in clinical practice. Overall, time-to-event models are successful in predicting the malignant transformation of oral leukoplakia and oral lichenoid lesions.
45

Cui, Huan, Fei Tu, Cheng Zhang, Chunmao Zhang, Kui Zhao, Juxiang Liu, Shishan Dong, Ligong Chen, Jun Liu et Zhendong Guo. « Real-Time Reverse Transcription Recombinase-Aided Amplification Assay for Rapid Amplification of the N Gene of SARS-CoV-2 ». International Journal of Molecular Sciences 23, no 23 (3 décembre 2022) : 15269. http://dx.doi.org/10.3390/ijms232315269.

Full text
Abstract:
COVID-19 was officially declared a global pandemic disease on 11 March 2020, with severe implications for healthcare systems, economic activity, and human life worldwide. Fast and sensitive amplification of the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) nucleic acids is critical for controlling the spread of this disease. Here, a real-time reverse transcription recombinase-aided amplification (RT-RAA) assay, targeting conserved positions in the nucleocapsid protein gene (N gene) of SARS-CoV-2, was successfully established for SARS-CoV-2. The assay was specific to SARS-CoV-2, and there was no cross-reaction with other important viruses. The sensitivity of the real-time RT-RAA assay was 142 copies per reaction at 95% probability. Furthermore, 100% concordance between the real-time RT-RAA and RT-qPCR assays was achieved after testing 72 clinical specimens. Further linear regression analysis indicated a significant correlation between the real-time RT-RAA and RT-qPCR assays with an R² value of 0.8149 (p < 0.0001). In addition, the amplicons of the real-time RT-RAA assay could be directly visualized by a portable blue light instrument, making it suitable for the rapid amplification of SARS-CoV-2 in resource-limited settings. Therefore, the real-time RT-RAA method allows the specific, sensitive, simple, rapid, and reliable detection of SARS-CoV-2.
46

Bichindaritz, Isabelle, Guanghui Liu et Christopher Bartlett. « Integrative survival analysis of breast cancer with gene expression and DNA methylation data ». Bioinformatics 37, no 17 (3 mars 2021) : 2601–8. http://dx.doi.org/10.1093/bioinformatics/btab140.

Full text
Abstract:
Motivation: Integrative multi-feature fusion analysis on biomedical data has gained much attention recently. In breast cancer, existing studies have demonstrated that combining genomic mRNA data and DNA methylation data can better stratify cancer patients with distinct prognosis than using a single signature. However, those existing methods simply combine these gene features in series and ignore the correlations between separate omics dimensions over time. Results: In the present study, we propose an adaptive multi-task learning method, which combines the Cox loss task with the ordinal loss task, for survival prediction of breast cancer patients using multi-modal learning instead of performing survival analysis on each feature dataset. First, we use the local maximum quasi-clique merging (lmQCM) algorithm to reduce the mRNA and methylation feature dimensions and extract cluster eigengenes, respectively. Then, we add an auxiliary ordinal loss to the original Cox model to improve the ability to optimize the learning process in training and regularization. The auxiliary loss helps to reduce the vanishing gradient problem for earlier layers and helps to decrease the loss of the primary task. Meanwhile, we use an adaptive weights approach to multi-task learning, which weighs multiple loss functions by considering the homoscedastic uncertainty of each task. Finally, we build an ordinal Cox hazards model for survival analysis and use a long short-term memory (LSTM) method to predict patients' survival risk. We use the cross-validation method and the concordance index (C-index) to assess the prediction effect. Stringent cross-verification testing processes for the benchmark dataset and two additional datasets demonstrate that the developed approach is effective, achieving very competitive performance compared with existing approaches. Availability and implementation: https://github.com/bhioswego/ML_ordCOX.
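The "adaptive weights by homoscedastic uncertainty" idea referenced in this abstract is commonly written as a learned, per-task noise scale with a log penalty (in the style of Kendall et al.'s multi-task weighting). The authors' exact formulation may differ, so the expression below is only a hedged sketch with the Cox and ordinal losses as the two tasks.

```latex
% Sketch of an uncertainty-weighted multi-task loss: sigma_1, sigma_2 are learned
% noise scales; the log terms prevent the trivial solution sigma_i -> infinity.
\mathcal{L}_{\mathrm{total}}(\theta,\sigma_1,\sigma_2)
  = \frac{1}{2\sigma_1^{2}}\,\mathcal{L}_{\mathrm{Cox}}(\theta)
  + \frac{1}{2\sigma_2^{2}}\,\mathcal{L}_{\mathrm{ordinal}}(\theta)
  + \log\sigma_1 + \log\sigma_2
```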
47

Gardner, Sue E., Stephen L. Hillis et Rita A. Frantz. « Clinical Signs of Infection in Diabetic Foot Ulcers With High Microbial Load ». Biological Research For Nursing 11, no 2 (14 janvier 2009) : 119–28. http://dx.doi.org/10.1177/1099800408326169.

Full text
Abstract:
Aims. One proposed method for diagnosing infection in diabetic foot ulcers (DFUs) is clinical examination. Twelve different signs of infection have been reported. The purpose of this study was to examine the diagnostic validity of each individual clinical sign, a combination of signs recommended by the Infectious Diseases Society of America (IDSA), and a composite predictor based on all signs of localized wound infection in identifying DFU infection, among a sample of DFUs. Methods. A cross-sectional research design was used. Sixty-four individuals with DFUs were recruited from a Department of Veterans Affairs Medical Center and an academic-affiliated hospital. Each DFU was independently assessed by 2 research team members using the clinical signs and symptoms checklist. Tissue specimens were then obtained via wound biopsy and quantitatively processed. Ulcers with more than 10⁶ organisms per gram of tissue were defined as having a high microbial load. Individual signs and the IDSA combination were assessed for validity by calculating sensitivity, specificity, and concordance probability. The composite predictor was analyzed using the c-index and receiver operating characteristic curves. Results. Twenty-five (39%) of the DFUs had high microbial loads. No individual sign was a significant predictor of high microbial load. The IDSA combination was not a significant predictor either. The c-index of the composite predictor was 0.645, with a 95% confidence interval of 0.559–0.732. Conclusions. Individual signs of infection do not perform well, nor does the IDSA combination of signs. However, a composite predictor based on all signs provides a moderate level of discrimination, suggesting potential clinical utility. Larger sample sizes and alternate reference standards are recommended.
48

Dylenok, A. A., V. V. Rybachkov, V. N. Malashenko, S. V. Kashin, L. B. Shubin et A. B. Vasin. « Predicting optimal surgeon volume in patients with early gastric cancer ». Creative surgery and oncology 12, no 4 (2 janvier 2023) : 282–87. http://dx.doi.org/10.24060/2076-3093-2022-12-4-282-287.

Full text
Abstract:
Introduction. The incidence of gastric cancer remains high, despite the increase in the share of stage I–II cancers — 37.1% in 2019. Surgical treatment remains relevant even in patients with “early” forms of gastric cancer (EGC). Therefore, reliable means of determining the volume of surgery in such patients urgently need to be developed. Aim. To estimate the probability of building a stable predictive model for patients with EGC in order to choose the proper surgical intervention. Materials and methods. The research used data from the “Database of patients with gastric cancer, reflecting statistics of patients with a particular variant of surgical intervention, treated at Yaroslavl Regional Clinical Oncological Hospital during the period from 2009 to 2019”. All patients (n = 266) underwent different volumes of surgery: intraluminal surgery (n = 128), wedge gastric resection (n = 36), or classical gastrectomy or subtotal gastric resection (n = 102). According to the volume of intervention, the patients were stratified into several study groups. Statistical analysis involved the case records of the three groups of patients and was conducted using MedCalc Statistical Software version 20.022 and Statistica 12.5. Results. Ten factors were identified to form a patient model corresponding to each method of surgical treatment. The validity of the division of patients into groups was checked by ROC analysis to determine the sensitivity and specificity of the set of criteria for the division. The following characteristics of the mathematical model were obtained by ROC analysis: concordance coefficient = 88.24%, AUC = 0.893; index J = 0.811; Se = 87.92; Sp = 89.04; +LR = 3.27; -LR = 1.31. Conclusion. Introduction of this approach into clinical practice decreased the rate of gastrectomies and gastric resections by 15% over the last three years.
49

Sousa-e-Silva, Paulo, Manuel J. Coelho-e-Silva, Andre Seabra, Daniela C. Costa, Diogo V. Martinho, João P. Duarte, Tomás Oliveira et al. « Skeletal age assessed by TW2 using 20-bone, carpal and RUS score systems : Intra-observer and inter-observer agreement among male pubertal soccer players ». PLOS ONE 17, no 8 (23 août 2022) : e0271386. http://dx.doi.org/10.1371/journal.pone.0271386.

Full text
Abstract:
The purpose of this study was to determine intra- and inter-observer agreement for the three skeletal ages (SAs) derived from the TW2 method among male pubertal soccer players. The sample included 142 participants aged 11.0–15.3 years. Films of the left hand-wrist were evaluated twice by each of two observers. Twenty bones were rated, and three scoring systems were used to determine SA under the TW2 version: 20-bone, CARPAL, and RUS. Overall agreement rates were 95.1% and 93.8% for Observer A and Observer B, respectively. Although agreement rates between observers differed for 13 bones (5 carpals, metacarpal-I, metacarpal-III, metacarpal-V, proximal phalanges-I, III and V, distal phalanx-III), intra-class correlations were as follows: 0.990 (20-bone), 0.969 (CARPAL), and 0.988 (RUS). For the three SA protocols, bias was negligible: 0.02 years (20-bone), 0.04 years (CARPAL), and 0.03 years (RUS). Observer-associated error was not significant for 20-bone SA (TEM = 0.25 years, %CV = 1.86) or for RUS SA (TEM = 0.31 years, %CV = 2.22). Although the mean difference in CARPAL SAs between observers was significant (observer A: 12.48±1.18 years; observer B: 12.29±1.24 years; t = 4.662, p<0.01), the inter-observer disagreement had little impact (TEM: 0.34 years; %CV: 2.78). The concordance between bone-specific developmental stages seemed somewhat more problematic for the carpals than for the long bones. Finally, when error due to the observer was not greater than one stage and the replicated assignments had equal probability of being lower or higher than the initial assignments, the effect on SAs was trivial or small.
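The technical error of measurement (TEM) and %CV quoted in this abstract follow a standard formula for replicate assessments. The sketch below applies it to made-up skeletal-age ratings from two observers; it is not the study's data.

```python
# Minimal sketch (hypothetical ratings): TEM = sqrt(sum(d^2) / (2n)) for paired
# assessments, and %CV = 100 * TEM / grand mean.
import numpy as np

observer_a = np.array([12.1, 13.4, 11.8, 14.0, 12.7, 13.1])   # skeletal ages (years)
observer_b = np.array([12.3, 13.2, 11.9, 14.3, 12.6, 13.0])

d = observer_a - observer_b
tem = np.sqrt(np.sum(d ** 2) / (2 * len(d)))
grand_mean = np.mean(np.concatenate([observer_a, observer_b]))
cv_percent = 100 * tem / grand_mean
print(f"TEM = {tem:.2f} years, %CV = {cv_percent:.2f}")
```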
50

BABII, Iryna, Vasyl ZDERNYK et Olena KOSIIUK. « EVALUATION OF STRATEGIC RISKS IN THE ACTIVITIES OF ENTERPRISE IN THE PROCESS OF BUSINESS PLANNING ». Herald of Khmelnytskyi National University. Economic sciences 308, no 4 (28 juillet 2022) : 31–37. http://dx.doi.org/10.31891/2307-5740-2022-308-4-5.

Full text
Abstract:
The article examines the relevance of identifying and assessing strategic risks in business planning under uncertainty in the enterprise's operating environment, and considers the essence and nature of threats and risks. It is shown that modern business conditions require constant monitoring of risk-generating factors to create an effective and flexible business-planning system under instability. The typology of risks is studied, their varieties are characterized, and a diagram of their interaction is constructed. The article establishes that strategic risk management is an integral part of drawing up and implementing an enterprise's business plan. Features of the analysis and assessment of preventable risks, strategic risks, and internal risks in the business-planning process are considered, and the authors provide an example of assessing the strategic risks of project implementation within a business plan. Using the studied enterprise as an example, the article identifies separate groups of project-implementation risks in the business-planning process. Based on expert assessments, the average probability of occurrence of each type of risk was calculated as the arithmetic mean of the probabilities given by three experts, and a total assessment of each risk type was obtained taking into account the average weight of that risk for the new project. The stages of assessing the consistency of the experts' opinions using the mathematical method of calculating the concordance coefficient are presented in detail. For the studied enterprise, it was established that the risks of each group can affect project implementation differently; therefore, the preventive measures to be developed will be determined by factors of the external and internal environment. It is concluded that the effectiveness of business planning and of the activities of economic entities largely depends on the adopted method of risk assessment.
