Scientific literature on the topic "Concordance probability method"

Create a correct reference in APA, MLA, Chicago, Harvard, and several other styles

Choose a source:

Browse the thematic lists of journal articles, books, theses, conference reports, and other academic sources on the topic "Concordance probability method."

Next to each source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication in PDF format and read its abstract online whenever this information is included in the metadata.

Journal articles on the topic "Concordance probability method"

1. Liao, Jason J. Z., and Robert Capen. "An Improved Bland-Altman Method for Concordance Assessment." International Journal of Biostatistics 7, no. 1 (January 6, 2011): 1–17. http://dx.doi.org/10.2202/1557-4679.1295.

2. Slatkin, M. "Detecting small amounts of gene flow from phylogenies of alleles." Genetics 121, no. 3 (March 1, 1989): 609–12. http://dx.doi.org/10.1093/genetics/121.3.609.

Abstract:
The method of coalescents is used to find the probability that none of the ancestors of alleles sampled from a population are immigrants. If that is the case for samples from two or more populations, then there would be concordance between the phylogenies of those alleles and the geographic locations from which they are drawn. This type of concordance has been found in several studies of mitochondrial DNA from natural populations. It is shown that if the number of sequences sampled from each population is reasonably large (10 or more), then this type of concordance suggests that the average number of individuals migrating between populations is likely to be relatively small (Nm < 1), but the possibility of occasional migrants cannot be excluded. The method is applied to the data of E. Bermingham and J. C. Avise on mtDNA from the bowfin, Amia calva.
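To make the quantity in this abstract concrete, here is a minimal Monte Carlo sketch in Python: a discrete-generation stand-in for Slatkin's analytical coalescent calculation, not the paper's method. The population size, migration rate, and sample size are illustrative assumptions.

```python
import random

def prob_no_immigrant_ancestors(n_sample=10, N=1000, m=0.0005, reps=2000):
    """Trace n_sample lineages backward through discrete generations in one
    population of an island model. Each generation, some lineage is an
    immigrant with probability 1-(1-m)^k, and some pair of the k lineages
    coalesces with probability ~k(k-1)/(4N). Returns the fraction of
    replicates that reach a single common ancestor with no immigration."""
    hits = 0
    for _ in range(reps):
        k = n_sample
        clean = True
        while k > 1:
            if random.random() < 1.0 - (1.0 - m) ** k:     # an ancestor immigrated
                clean = False
                break
            if random.random() < k * (k - 1) / (4.0 * N):  # a pair coalesced
                k -= 1
        hits += clean
    return hits / reps

if __name__ == "__main__":
    for m in (0.0005, 0.005):  # Nm = 0.5 versus Nm = 5 with N = 1000
        p = prob_no_immigrant_ancestors(m=m)
        print(f"Nm = {1000 * m:.1f}: P(no immigrant ancestors) ~ {p:.3f}")
```

The estimated probability drops sharply as Nm grows, which is the qualitative basis for the paper's conclusion that observed concordance points to Nm below one.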
3. Il’in, A. M., A. R. Danilin, and S. V. Zakharov. "Application of the concordance method of asymptotic expansions to solving boundary-value problems." Journal of Mathematical Sciences 125, no. 5 (February 2005): 610–57. http://dx.doi.org/10.1007/pl00021946.

4. Millwater, Harry R., Michael P. Enright, and Simeon H. K. Fitch. "Convergent Zone-Refinement Method for Risk Assessment of Gas Turbine Disks Subject to Low-Frequency Metallurgical Defects." Journal of Engineering for Gas Turbines and Power 129, no. 3 (September 15, 2006): 827–35. http://dx.doi.org/10.1115/1.2431393.

Abstract:
Titanium gas turbine disks are subject to a rare but not insignificant probability of fracture due to metallurgical defects, particularly hard α. A probabilistic methodology has been developed and implemented in concordance with the Federal Aviation Administration (FAA) Advisory Circular 33.14-1 to compute the probability of fracture of gas turbine titanium disks subject to low-frequency metallurgical (hard α) defects. This methodology is further developed here to ensure that a robust, converged, accurate calculation of the probability is computed that is independent of discretization issues. A zone-based material discretization methodology is implemented, then refined locally through further discretization using risk contribution factors as a metric. The technical approach is akin to “h” refinement in finite element analysis; that is, a local metric is used to indicate regions requiring further refinement, and subsequent refinement yields a more accurate solution. Supporting technology improvements are also discussed, including localized finite element refinement and onion skinning for zone subdivision resolution, and a restart database and parallel processing for computational efficiency. A numerical example is presented for demonstration.
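The aggregation step behind the zone-based methodology is easy to illustrate. The sketch below assumes statistically independent zones, so the disk-level probability of fracture is 1 - prod(1 - p_i), and uses a toy stand-in for the per-zone risk re-integration that would follow each split; it is not the authors' implementation.

```python
def disk_risk(zone_probs):
    """Aggregate per-zone fracture probabilities into a disk-level
    probability, treating zones as independent: P = 1 - prod(1 - p_i)."""
    survive = 1.0
    for p in zone_probs:
        survive *= 1.0 - p
    return 1.0 - survive

def refine_until_converged(zones, split_zone, tol=1e-9, max_rounds=50):
    """Repeatedly split the zone with the largest risk contribution
    (the 'h'-refinement-like loop the abstract describes) until the
    aggregate risk changes by less than tol between rounds."""
    prev = disk_risk(zones)
    cur = prev
    for _ in range(max_rounds):
        worst = max(range(len(zones)), key=lambda i: zones[i])
        zones[worst:worst + 1] = split_zone(zones[worst])  # replace by sub-zones
        cur = disk_risk(zones)
        if abs(cur - prev) < tol:
            break
        prev = cur
    return cur

# Toy stand-in for recomputing risk on the two sub-zones of a split zone:
# refinement redistributes risk and trims discretization error slightly.
zones = [2e-6, 5e-5, 8e-6]
print(refine_until_converged(zones, lambda p: [0.55 * p, 0.42 * p]))
```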
5. Du, Yu, Nicolas Sutton-Charani, Sylvie Ranwez, and Vincent Ranwez. "EBCR: Empirical Bayes concordance ratio method to improve similarity measurement in memory-based collaborative filtering." PLOS ONE 16, no. 8 (August 9, 2021): e0255929. http://dx.doi.org/10.1371/journal.pone.0255929.

Abstract:
Recommender systems aim to provide users with a selection of items, based on predicting their preferences for items they have not yet rated, thus helping them filter out irrelevant ones from a large product catalogue. Collaborative filtering is a widely used mechanism to predict a particular user’s interest in a given item, based on feedback from neighbour users with similar tastes. The way the user’s neighbourhood is identified has a significant impact on prediction accuracy. Most methods estimate user proximity from ratings they assigned to co-rated items, regardless of their number. This paper introduces a similarity adjustment taking into account the number of co-ratings. The proposed method is based on a concordance ratio representing the probability that two users share the same taste for a new item. The probabilities are further adjusted by using the Empirical Bayes inference method before being used to weight similarities. The proposed approach improves existing similarity measures without increasing time complexity and the adjustment can be combined with all existing similarity measures. Experiments conducted on benchmark datasets confirmed that the proposed method systematically improved the recommender system’s prediction accuracy performance for all considered similarity measures.
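A minimal sketch of the idea as the abstract presents it: estimate a concordance ratio from co-rated items, shrink it toward a prior mean with a Beta-style empirical-Bayes adjustment, and use the result to weight any base similarity. The helper names and prior parameters are illustrative assumptions, not the paper's code.

```python
import numpy as np

def concordance_ratio(r_u, r_v):
    """Fraction of co-rated items on which two users agree in direction
    (both above or both below their own mean rating); NaN = unrated."""
    co = ~np.isnan(r_u) & ~np.isnan(r_v)
    n = int(co.sum())
    if n == 0:
        return 0.5, 0
    du = r_u[co] - np.nanmean(r_u)
    dv = r_v[co] - np.nanmean(r_v)
    return float(np.mean(np.sign(du) == np.sign(dv))), n

def eb_adjust(ratio, n, alpha=2.0, beta=2.0):
    """Shrink the raw ratio toward the prior mean alpha/(alpha+beta);
    in the paper the prior would be fitted to the whole rating corpus."""
    return (ratio * n + alpha) / (n + alpha + beta)

def adjusted_similarity(r_u, r_v, base_sim):
    ratio, n = concordance_ratio(r_u, r_v)
    return eb_adjust(ratio, n) * base_sim  # few co-ratings => heavier shrinkage

# toy usage with an assumed precomputed base similarity (e.g., cosine)
u = np.array([5.0, 3.0, np.nan, 1.0, 4.0])
v = np.array([4.0, 2.0, 5.0, np.nan, 5.0])
print(adjusted_similarity(u, v, base_sim=0.9))
```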
6

D'AGOSTINO, M., M. WAGNER, J. A. VAZQUEZ-BOLAND, T. KUCHTA, R. KARPISKOVA, J. HOORFAR, S. NOVELLA et al. « A Validated PCR-Based Method To Detect Listeria monocytogenes Using Raw Milk as a Food Model—Towards an International Standard ». Journal of Food Protection 67, no 8 (1 août 2004) : 1646–55. http://dx.doi.org/10.4315/0362-028x-67.8.1646.

Abstract:
A PCR assay with an internal amplification control was developed for Listeria monocytogenes. The assay has a 99% detection probability of seven cells per reaction. When tested against 38 L. monocytogenes strains and 52 nontarget strains, the PCR assay was 100% inclusive (positive signal from target) and 100% exclusive (no positive signal from nontarget). The assay was then evaluated in a collaborative trial involving 12 European laboratories, where it was tested against an additional 14 target and 14 nontarget strains. In that trial, the inclusivity was 100% and the exclusivity was 99.4%, and both the accordance (repeatability) and the concordance (reproducibility) were 99.4%. The assay was incorporated within a method for the detection of L. monocytogenes in raw milk, which involves 24 h of enrichment in half-Fraser broth followed by 16 h of enrichment in a medium that can be added directly into the PCR. The performance characteristics of the PCR-based method were evaluated in a collaborative trial involving 13 European laboratories. In that trial, a specificity value (percentage of correct identification of blank samples) of 81.8% was obtained; the accordance was 87.9%, and the concordance was 68.1%. The sensitivity (correct identification of milk samples inoculated with 20 to 200 L. monocytogenes cells per 25 ml) was 89.4%, the accordance was 81.2%, and the concordance was 80.7%. This method provides a basis for the application of routine PCR-based analysis to dairy products and other foodstuffs and should be appropriate for international standardization.
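Accordance and concordance as quoted here can be computed from per-laboratory binary outcomes. The sketch below implements one common formulation (within-lab versus between-lab pairwise agreement), with toy data standing in for the trial results.

```python
from itertools import combinations

def accordance_concordance(lab_results):
    """lab_results: one list of binary outcomes (1 = positive) per laboratory.
    Accordance  = chance two identical samples tested in the SAME lab agree.
    Concordance = chance two identical samples tested in DIFFERENT labs agree."""
    within = []
    for res in lab_results:
        pairs = list(combinations(res, 2))
        within.append(sum(a == b for a, b in pairs) / len(pairs))
    accordance = sum(within) / len(within)

    between = []
    for res_i, res_j in combinations(lab_results, 2):
        agree = sum(a == b for a in res_i for b in res_j)
        between.append(agree / (len(res_i) * len(res_j)))
    concordance = sum(between) / len(between)
    return accordance, concordance

# toy example: three labs, eight inoculated samples each
labs = [[1] * 8, [1] * 7 + [0], [1] * 6 + [0] * 2]
print(accordance_concordance(labs))
```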
7. Kroner, Daryl G., Megan M. Morrison, and Evan M. Lowder. "A Principled Approach to the Construction of Risk Assessment Categories: The Council of State Governments Justice Center Five-Level System." International Journal of Offender Therapy and Comparative Criminology 64, no. 10–11 (August 20, 2019): 1074–90. http://dx.doi.org/10.1177/0306624x19870374.

Abstract:
Consistent risk category placement of criminal justice clients across instruments will improve the communication of risk. Efforts coordinated by the Council of State Governments (CSG) Justice Center led to the development of a principled (i.e., a system based on a given set of procedures) method of developing risk assessment levels. An established risk assessment instrument (Level of Service Inventory–Revised [LSI-R]) was used to assess the risk-level concordance of the CSG Justice Center Five-Level system. Specifically, concordance was assessed by matching the defining characteristics of the data set with its distribution qualities and by the level/category similarity between the observed reoffending base rate and the statistical probability of reoffending. Support for the CSG Justice Center Five-Level system was found through a probation data set (N = 24,936) having a greater proportion of offenders in the lower risk levels than a parole/community data set (N = 36,303). The statistical probabilities of reoffending in each CSG Justice Center system risk level had greater concordance with the observed Five-Level base rates than the base rates from the LSI-R original categories. The concordance evidence for the CSG Justice Center Five-Level system demonstrates the ability of this system to place clients in appropriate risk levels.
8. Gadyl’shin, R. R. "Concordance method of asymptotic expansions in a singularly perturbed boundary-value problem for the Laplace operator." Journal of Mathematical Sciences 125, no. 5 (February 2005): 579–609. http://dx.doi.org/10.1007/pl00021941.

9. Duwarahan, Jeevana, and Lakshika S. Nawarathna. "An Improved Measurement Error Model for Analyzing Unreplicated Method Comparison Data under Asymmetric Heavy-Tailed Distributions." Journal of Probability and Statistics 2022 (December 15, 2022): 1–13. http://dx.doi.org/10.1155/2022/3453912.

Abstract:
Method comparison studies mainly focus on determining whether two methods of measuring a continuous variable agree well enough to be used interchangeably. Typically, a standard mixed-effects model is used to model method comparison data, assuming normality for both random effects and errors. However, these assumptions are frequently violated in practice due to skewness and heavy tails. In particular, the biases of the methods may vary with the extent of measurement. Thus, we propose a methodology for method comparison data that deals with these issues in the context of the measurement error model (MEM), assuming a skew-t (ST) distribution for the true covariates and a centered Student’s t (cT) distribution for the errors with known error variances, named STcT-MEM. An expectation conditional maximization (ECM) algorithm is used to compute the maximum likelihood (ML) estimates. A simulation study is performed to validate the proposed methodology. The methodology is illustrated by analyzing gold particle data and then compared with the standard measurement error model (SMEM). The likelihood ratio (LR) test is used to identify the most appropriate model among the above models. In addition, the total deviation index (TDI) and concordance correlation coefficient (CCC) are used to check the agreement between the methods. The findings suggest that the proposed framework for analyzing unreplicated method comparison data with asymmetry and heavy tails works effectively for modest and large samples.
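The two agreement summaries named at the end of the abstract are straightforward to compute. Below is a short sketch of Lin's sample CCC together with a common normal-approximation TDI (the (1+p)/2 normal quantile times the root of the squared mean plus variance of the differences); the simulated data are illustrative, not the paper's gold particle data.

```python
import numpy as np
from scipy.stats import norm

def ccc(x, y):
    """Lin's concordance correlation coefficient (sample version)."""
    mx, my = x.mean(), y.mean()
    sxy = ((x - mx) * (y - my)).mean()
    return 2 * sxy / (x.var() + y.var() + (mx - my) ** 2)

def tdi(x, y, p=0.90):
    """Normal-approximation total deviation index: a bound expected to
    capture a proportion p of the differences D = X - Y."""
    d = x - y
    return norm.ppf((1 + p) / 2) * np.sqrt(d.mean() ** 2 + d.var(ddof=1))

rng = np.random.default_rng(0)
truth = rng.gamma(4.0, 2.0, size=200)            # skewed "true" values
method1 = truth + rng.normal(0.0, 0.5, 200)
method2 = 1.02 * truth + rng.normal(0.0, 0.7, 200)
print(ccc(method1, method2), tdi(method1, method2))
```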
10. Vanbelle, Sophie, and Emmanuel Lesaffre. "Modeling agreement on bounded scales." Statistical Methods in Medical Research 27, no. 11 (May 8, 2017): 3460–77. http://dx.doi.org/10.1177/0962280217705709.

Abstract:
Agreement is an important concept in the medical and behavioral sciences, in particular in clinical decision making, where disagreements may imply different patient management. The concordance correlation coefficient is an appropriate measure to quantify agreement between two scorers on a quantitative scale. However, this measure is based on the first two moments, which can poorly summarize the shape of the score distribution on bounded scales. Bounded outcome scores are common in the medical and behavioral sciences. Typical examples are scores obtained on visual analog scales and scores derived as the number of positive items on a questionnaire. Such scores often show a non-standard distribution, like a J- or U-shape, questioning the usefulness of the concordance correlation coefficient as an agreement measure. The logit-normal distribution has been shown to be successful in modeling bounded outcome scores of two types: (1) when the bounded score is a coarsened version of a latent score with a logit-normal distribution on the [0,1] interval, and (2) when the bounded score is a proportion with the true probability having a logit-normal distribution. In the present work, a model-based approach, based on a bivariate generalization of the logit-normal distribution, is developed in a Bayesian framework to assess agreement on bounded scales. This method permits direct study of the impact of predictors on the concordance correlation coefficient and can be implemented simply in standard Bayesian software, such as JAGS and WinBUGS. The performance of the new method is compared to the classical approach using simulations. Finally, the methodology is applied in two different medical domains: cardiology and rheumatology.
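The building block of this approach, paired bounded scores whose logits are bivariate normal, is simple to simulate, and agreement can then be summarized with the sample CCC. All parameter values below are illustrative assumptions, and the full Bayesian model (JAGS/WinBUGS) is not reproduced.

```python
import numpy as np

def bivariate_logitnormal(n, mu, cov, rng):
    """Draw n paired scores on (0,1) whose logits are bivariate normal."""
    z = rng.multivariate_normal(mu, cov, size=n)
    return 1.0 / (1.0 + np.exp(-z))      # inverse logit; columns = two scorers

rng = np.random.default_rng(3)
s = bivariate_logitnormal(500, mu=[0.8, 1.0],
                          cov=[[1.0, 0.85], [0.85, 1.1]], rng=rng)
x, y = s[:, 0], s[:, 1]

# sample CCC on the bounded scale (J- or U-shaped marginals are possible
# here, which is exactly what motivates the model-based approach)
sxy = np.cov(x, y, bias=True)[0, 1]
print(2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2))
```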

Theses on the topic "Concordance probability method"

1. Choudhary, Pankaj K. "Assessment of Agreement and Selection of the Best Instrument in Method Comparison Studies." The Ohio State University, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=osu1029109764.

2. Rota, Matteo. "Cut-point finding methods for continuous biomarkers." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2013. http://hdl.handle.net/10281/40114.

Abstract:
My PhD dissertation deals with statistical methods for cut-point finding for continuous biomarkers. Categorization is often needed for clinical decision making when dealing with diagnostic (or prognostic) biomarkers and a dichotomous or censored failure-time outcome. This allows the definition of two or more prognostic risk groups, or patient stratifications for inclusion in randomized clinical trials (RCTs). We investigate the following cut-point finding methods: minimum P-value, Youden index, concordance probability, and the point closest to the (0,1) corner in the ROC plane. We compare them assuming both Normal and Gamma biomarker distributions, showing whether they lead to the identification of the same true cut-point, and further investigate their performance by simulation. Within the framework of censored survival data, we consider new estimation approaches for the optimal cut-point that use a conditional weighting method to estimate the true positive and false positive fractions. Motivating examples on real datasets are discussed within the dissertation for both the dichotomous and the censored failure-time outcome. In all simulation scenarios, the point closest to the (0,1) corner in the ROC plane and the concordance probability approach outperformed the other methods. Both these methods showed good performance in the estimation of the optimal cut-point of a biomarker. However, to improve the communicability of results, the Youden index or the concordance probability associated with the estimated cut-point could be reported to summarize the associated classification accuracy. The use of the minimum P-value approach for cut-point finding is not recommended, because its objective function is computed under the null hypothesis of no association between the true disease status and X; this is in contrast with the presence of some discrimination potential of the biomarker X, which is what leads to the dichotomization issue. The investigated cut-point finding methods are based on measures, i.e. sensitivity and specificity, defined conditionally on the outcome. My PhD dissertation opens the question of whether these methods could be applied starting from predictive values, which typically represent the most useful information for clinical decisions on treatments. However, while sensitivity and specificity are invariant to disease prevalence, predictive values vary across populations with different disease prevalence; this is an important drawback of using predictive values for cut-point finding. More generally, great care should be taken when establishing a biomarker cut-point for clinical use. Methods for categorizing new biomarkers are often essential in clinical decision making, even though categorization of a continuous biomarker comes at a considerable loss of power and information. In the future, new methods involving the study of the functional form between the biomarker and the outcome through regression techniques, such as fractional polynomials or spline functions, should be considered to define cut-points for clinical use. Moreover, in spite of the aforementioned drawback related to predictive values, we also think that additional new methods for cut-point finding should be developed starting from predictive values.
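The criteria the dissertation compares are easy to set against each other on a dichotomous outcome. The following sketch scans candidate cut-points and returns the optimum under the Youden index (Se + Sp - 1), the concordance probability (Se * Sp), and the point closest to the (0,1) corner of the ROC plane; the data are simulated for illustration.

```python
import numpy as np

def find_cutpoints(marker, disease):
    """Optimal cut-points under three criteria; a subject is classified
    positive when marker > c."""
    cands = np.unique(marker)
    se = np.array([np.mean(marker[disease == 1] > c) for c in cands])
    sp = np.array([np.mean(marker[disease == 0] <= c) for c in cands])
    return {
        "youden":      cands[np.argmax(se + sp - 1)],
        "concordance": cands[np.argmax(se * sp)],
        "closest01":   cands[np.argmin((1 - se) ** 2 + (1 - sp) ** 2)],
    }

# toy data: biomarker ~ N(0,1) in healthy and N(1.5,1) in diseased subjects
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(1.5, 1.0, 500)])
d = np.repeat([0, 1], 500)
print(find_cutpoints(x, d))
```

For this equal-variance, equal-prevalence setup, all three criteria should land near the midpoint of the two group means.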

Book chapters on the topic "Concordance probability method"

1. Jin, Zhezhen, and Mounir Mesbah. "Unidimensionality, Agreement and Concordance Probability." In Statistical Models and Methods for Reliability and Survival Analysis, 1–19. Hoboken, USA: John Wiley & Sons, Inc., 2014. http://dx.doi.org/10.1002/9781118826805.ch1.


Conference papers on the topic "Concordance probability method"

1. Wang, Hua, and Jun Liu. "Tolerance Simulation of Thin-Walled C-Section Composite Beam in Wingbox Assembly Under Preloading." In ASME 2015 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2015. http://dx.doi.org/10.1115/imece2015-50273.

Abstract:
A tolerance simulation's reliability depends on the concordance between the input probability distribution and the practical situation. Preloading-induced changes in the probability distribution should therefore be considered in a structure's tolerance simulation, especially for composite structures. The paper presents a tolerance simulation method for thin-walled C-section composite beams (TC2B) assembled under preloading, that is, a prescribed clamping force. Based on an FEA model of the TC2B, a preloading-modified probability distribution function of the R-angle spring-in deviation is proposed. Thickness variations of the TC2B are obtained from data on the downscaled composite wingbox. These part variations are input to the tolerance simulation software, and the final assembly variations are obtained. The assembly of the downscaled wingbox illustrates the effect of preloading on the probability distribution of the R-angle spring-in deviation. The results show that tolerance simulation with the modified probability distribution is more accurate than with the initial normal distribution. The tolerance simulation work presented in the paper will enhance the understanding of assembling composite parts with spring-in deviations and help systematically improve precision-control efficiency in the civil aircraft industry.
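The paper's central point, that the input distribution and not just its nominal form drives the simulated assembly variation, can be illustrated with a toy Monte Carlo stack-up. All distribution parameters and the gap function below are invented for illustration; the paper derives the modified spring-in distribution from an FEA model of the TC2B.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# R-angle spring-in deviation (degrees): an assumed nominal normal
# distribution versus an assumed preloading-modified one
springin_nominal = rng.normal(loc=1.2, scale=0.30, size=N)
springin_preload = rng.normal(loc=0.8, scale=0.22, size=N)

# flange thickness variation (mm), identical in both cases
thickness = rng.normal(loc=0.0, scale=0.05, size=N)

def assembly_gap(springin_deg, thickness_mm, flange_mm=40.0):
    """Toy stack-up: spring-in tilts the flange; thickness adds directly."""
    return flange_mm * np.tan(np.radians(springin_deg)) + thickness_mm

for label, s in [("nominal", springin_nominal), ("preloaded", springin_preload)]:
    gap = assembly_gap(s, thickness)
    print(f"{label}: mean gap {gap.mean():.3f} mm, std {gap.std():.3f} mm")
```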
2. French, Anna, and Timothy M. Kowalewski. "Laparoscopic Skill Classification Using the Two-Third Power Law and the Isogony Principle." In 2017 Design of Medical Devices Conference. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/dmd2017-3341.

Abstract:
Surgical skill evaluation is a field that attempts to improve patient outcomes by accurately assessing surgeon proficiency. An important application of the information gathered from skill evaluation is providing feedback to surgeons on their performance. The most commonly utilized methods for judging skill all depend on some type of human intervention. Expert panels are considered the gold standard for skill evaluation, but they are cost-prohibitive and often take weeks or months to deliver scores. The Fundamentals of Laparoscopic Surgery (FLS) is a widely adopted surgical training regime. Its scoring method is based on task time and the number of task-specific errors, which currently requires a human proctor to calculate. This scoring method also requires prior information on the distribution of scores among skill levels, which creates a problem any time a new training module or technique is introduced. These scores are not normally provided while training for the FLS skills test, and [1] has shown that FLS scoring does not add information beyond sorting skill levels by task time. Crowd-sourced methods such as those in [2] have also been used to provide feedback and have shown concordance with patient outcomes; however, it still takes a few hours to generate scores after a training session. It is desirable to find an assessment method that can deliver a score immediately following a training module (or even in real time) and depends neither on human intervention nor on task-specific probability distributions. It is hypothesized that isogony-based surgical tool motion analysis discerns surgical skill level independent of task time.
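The hypothesized skill signal can be extracted from a recorded tool-tip path by estimating the exponent that links speed to curvature: the two-thirds power law predicts v = K * C^(-1/3) for smooth movement. Here is a sketch under that framing (not necessarily the authors' feature pipeline), checked on a harmonic ellipse, which satisfies the law exactly.

```python
import numpy as np

def powerlaw_exponent(x, y, t):
    """Estimate beta in v = K * curvature**(-beta) for a planar path
    sampled at times t, by least squares in log-log space."""
    dx, dy = np.gradient(x, t), np.gradient(y, t)
    ddx, ddy = np.gradient(dx, t), np.gradient(dy, t)
    v = np.hypot(dx, dy)
    curv = np.abs(dx * ddy - dy * ddx) / np.maximum(v, 1e-12) ** 3
    ok = (v > 1e-6) & (curv > 1e-6)          # guard against log of ~0
    slope, _ = np.polyfit(np.log(curv[ok]), np.log(v[ok]), 1)
    return -slope

t = np.linspace(0.0, 2.0 * np.pi, 2000)
x, y = 3.0 * np.cos(t), 1.0 * np.sin(t)      # harmonic ellipse
print(powerlaw_exponent(x, y, t))            # ~ 0.333
```

Deviations of the fitted exponent from 1/3, or scatter around the fit, are the kind of time-independent feature such a classifier could use.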

Organizational reports on the topic "Concordance probability method"

1. Weller, Joel I., Derek M. Bickhart, Micha Ron, Eyal Seroussi, George Liu, and George R. Wiggans. Determination of actual polymorphisms responsible for economic trait variation in dairy cattle. United States Department of Agriculture, January 2015. http://dx.doi.org/10.32747/2015.7600017.bard.

Abstract:
The project’s general objectives were to determine specific polymorphisms at the DNA level responsible for observed quantitative trait loci (QTLs) and to estimate their effects, frequencies, and selection potential in the Holstein dairy cattle breed. The specific objectives were to (1) localize the causative polymorphisms to small chromosomal segments based on analysis of 52 U.S. Holstein bulls each with at least 100 sons with high-reliability genetic evaluations using the a posteriori granddaughter design; (2) sequence the complete genomes of at least 40 of those bulls to 20× coverage; (3) determine causative polymorphisms based on concordance between the bulls’ genotypes for specific polymorphisms and their status for a QTL; (4) validate putative quantitative trait variants by genotyping a sample of Israeli Holstein cows; (5) perform gene expression analysis using statistical methodologies, including determination of signatures of selection, based on somatic cells of cows that are homozygous for contrasting quantitative trait variants; and (6) analyze genes with putative quantitative trait variants using data mining techniques. Current methods for genomic evaluation are based on population-wide linkage disequilibrium between markers and actual alleles that affect traits of interest. Those methods have approximately doubled the rate of genetic gain for most traits in the U.S. Holstein population. With determination of causative polymorphisms, increasing the accuracy of genomic evaluations should be possible by including those genotypes as fixed effects in the analysis models. Determination of causative polymorphisms should also yield useful information on gene function and the genetic architecture of complex traits. Concordance between QTL genotype as determined by the a posteriori granddaughter design and marker genotype was determined for 30 trait-by-chromosomal-segment effects that are segregating in the U.S. Holstein population; a probability of <10⁻²⁰ was used to accept the null hypothesis that no segregating gene within the chromosomal segment was affecting the trait. Genotypes for 83 grandsires and 17,217 sons were determined by either complete sequence or imputation for 3,148,506 polymorphisms across the entire genome. Variant sites were identified from previous studies (such as the 1000 Bull Genomes Project) and from DNA sequencing of bulls unique to this project, which is one of the largest marker variant surveys conducted for the Holstein breed of cattle. Effects for stature on chromosome 11, daughter pregnancy rate on chromosome 18, and protein percentage on chromosome 20 met 3 criteria: (1) complete or nearly complete concordance, (2) nominal significance of the polymorphism effect after correction for all other polymorphisms, and (3) a marker coefficient of determination >40% of the total multiple-regression coefficient of determination for the 30 polymorphisms with highest concordance. The missense polymorphism Phe279Tyr in GHR at 31,909,478 base pairs on chromosome 20 was confirmed as the causative mutation for fat and protein concentration. For the effect on fat percentage, 12 additional missense polymorphisms on chromosome 14 were found that had nearly complete concordance with the suggested causative polymorphism (missense mutation Ala232Glu in DGAT1). The markers used in routine U.S. genomic evaluations were increased from 60,000 to 80,000 by adding markers for known QTLs and markers detected in BARD and other research projects.
Objectives 1 and 2 were completely accomplished, and objective 3 was partially accomplished. Because no new clear-cut causative polymorphisms were discovered, objectives 4 through 6 were not completed.
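As a rough illustration of the concordance screen described above (not the study's exact statistical test), one can count grandsires whose QTL status matches their genotype at a candidate variant and compare the match rate with chance; the arrays and the 50% chance level below are assumptions for the sketch.

```python
import numpy as np
from scipy.stats import binomtest

def concordance_check(qtl_status, marker_genotype):
    """Return the concordance rate between QTL status and marker genotype
    across grandsires, plus a one-sided binomial p-value against chance."""
    matches = int(np.sum(qtl_status == marker_genotype))
    n = len(qtl_status)
    test = binomtest(matches, n, p=0.5, alternative="greater")
    return matches / n, test.pvalue

qtl = np.array([1, 1, 0, 1, 0, 0, 1, 1, 0, 1])   # hypothetical QTL statuses
mrk = np.array([1, 1, 0, 1, 0, 0, 1, 1, 0, 0])   # hypothetical genotypes
print(concordance_check(qtl, mrk))
```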
