Journal articles on the topic 'Diagnostic bias'

Consult the top 50 journal articles for your research on the topic 'Diagnostic bias.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1. Ford, Michael H. "Diagnostic Bias?" Psychiatric Services 36, no. 11 (November 1985): 1218–19. http://dx.doi.org/10.1176/ps.36.11.1218.

2. Chaturvedi, Santosh K. "Munchausen by Proxy: Diagnostic Bias?" Annals of Saudi Medicine 12, no. 1 (January 1992): 107. http://dx.doi.org/10.5144/0256-4947.1992.107.

3. Armenakis, A. "Diagnostic bias in organizational consultation." Omega 18, no. 6 (1990): 563–72. http://dx.doi.org/10.1016/0305-0483(90)90048-e.

4. Escobar, Javier I. "Diagnostic Bias: Racial and Cultural Issues." Psychiatric Services 63, no. 9 (September 2012): 847. http://dx.doi.org/10.1176/appi.ps.20120p847.

5. Heijenbrok-Kal, Majanka H., and M. G. Myriam Hunink. "Adjusting for bias in diagnostic reports." American Journal of Medicine 112, no. 4 (March 2002): 322–24. http://dx.doi.org/10.1016/s0002-9343(02)01042-2.

6. Shah, Hasmukh C. "Diagnostic bias in occupational epidemiologic studies." American Journal of Industrial Medicine 24, no. 2 (August 1993): 249–50. http://dx.doi.org/10.1002/ajim.4700240214.

7. Hughes, J. M., C. Penney, S. Boyd, and P. Daley. "Risk of bias and limits of reporting in diagnostic accuracy studies for commercial point-of-care tests for respiratory pathogens." Epidemiology and Infection 146, no. 6 (March 21, 2018): 747–56. http://dx.doi.org/10.1017/s0950268818000596.

Abstract: Commercial point-of-care (POC) diagnostic tests for Group A Streptococcus, Streptococcus pneumoniae, and influenza virus have large potential diagnostic and financial impact. Many published reports on test performance, often funded by diagnostics companies, are prone to bias. The Standards for Reporting of Diagnostic Accuracy (STARD 2015) are a protocol to encourage accurate, transparent reporting. The Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool evaluates risk of bias and transportability of results. We used these tools to evaluate diagnostic test accuracy studies of POC tests for three respiratory pathogens. For the 96 studies analysed, compliance was <25% for 14/34 STARD 2015 standards, and 3/7 QUADAS-2 domains showed a high risk of bias. All reports lacked reporting of at least one criterion. These biases should be considered in the interpretation of study results.

8. Furukawa, T. A. "Sources of bias in diagnostic accuracy studies and the diagnostic process." Canadian Medical Association Journal 174, no. 4 (February 14, 2006): 481–82. http://dx.doi.org/10.1503/cmaj.060014.

9. Kleijnen, Jos, Marie Westwood, and Penny Whiting. "Applicability of diagnostic studies – statistics, bias and estimates of diagnostic accuracy." Zeitschrift für Evidenz, Fortbildung und Qualität im Gesundheitswesen 105, no. 7 (January 2011): 498–503. http://dx.doi.org/10.1016/j.zefq.2011.07.025.

10. Field, Morton H. "Cognitive bias and diagnostic error (November 2015)." Cleveland Clinic Journal of Medicine 83, no. 6 (June 2016): 407–8. http://dx.doi.org/10.3949/ccjm.83c.06003.

11. Al Fattani, Areej Abdul Ghani, and Abdulla Aljoudi. "Sources of bias in diagnostic accuracy studies." Journal of Applied Hematology 6, no. 4 (2015): 178. http://dx.doi.org/10.4103/1658-5127.171991.

12. Pedersen, Eva SL, and Carmen CM de Jong. "Addressing selection bias in diagnostic accuracy studies." Pediatrics International 61, no. 8 (August 2019): 840. http://dx.doi.org/10.1111/ped.13860.

13. Egglin, T. K. "Context bias. A problem in diagnostic radiology." JAMA: The Journal of the American Medical Association 276, no. 21 (December 4, 1996): 1752–55. http://dx.doi.org/10.1001/jama.276.21.1752.

14. Schmidt, Robert L., and Rachel E. Factor. "Understanding Sources of Bias in Diagnostic Accuracy Studies." Archives of Pathology & Laboratory Medicine 137, no. 4 (April 1, 2013): 558–65. http://dx.doi.org/10.5858/arpa.2012-0198-ra.

Abstract: Context.—Accuracy is an important feature of any diagnostic test. There has been an increasing awareness of deficiencies in study design that can create bias in estimates of test accuracy. Many pathologists are unaware of these sources of bias. Objective.—To explain the causes and increase awareness of several common types of bias that result from deficiencies in the design of diagnostic accuracy studies. Data Sources.—We cite examples from the literature and provide calculations to illustrate the impact of study design features on estimates of diagnostic accuracy. In a companion article by Schmidt et al in this issue, we use these principles to evaluate diagnostic studies associated with a specific diagnostic test for risk of bias and reporting quality. Conclusions.—There are several sources of bias that are unique to diagnostic accuracy studies. Because pathologists are both consumers and producers of such studies, it is important that they be aware of the risk of bias.

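The kind of calculation this abstract mentions is easy to sketch. The following minimal Python illustration uses invented counts, not figures from the article, to show how spectrum composition alone can shift apparent sensitivity even though the test itself is unchanged:

```python
def accuracy(tp: int, fp: int, fn: int, tn: int) -> tuple[float, float]:
    """Sensitivity and specificity from a 2x2 table."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical mixed clinical population with many mild, hard-to-detect cases.
print(accuracy(tp=80, fp=10, fn=20, tn=90))  # (0.80, 0.90)

# Hypothetical case-control sample enriched with severe, obvious cases: the
# same test now looks more sensitive purely because of the spectrum studied.
print(accuracy(tp=95, fp=10, fn=5, tn=90))   # (0.95, 0.90)
```
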
15. Sambanis, Nicholas, and Alexander Michaelides. "A Comment on Diagnostic Tools for Counterfactual Inference." Political Analysis 17, no. 1 (2009): 89–106. http://dx.doi.org/10.1093/pan/mpm032.

Abstract: We evaluate two diagnostic tools used to determine if counterfactual analysis requires extrapolation. Counterfactuals based on extrapolation are model dependent and might not support empirically valid inferences. The diagnostics help researchers identify those counterfactual “what if” questions that are empirically plausible. We show, through simple Monte Carlo experiments, that these diagnostics will often detect extrapolation, suggesting that there is a risk of biased counterfactual inference when there is no such risk of extrapolation bias in the data. This is because the diagnostics are affected by what we call the n/k problem: as the number of data points relative to the number of explanatory variables decreases, the diagnostics are more likely to detect the risk of extrapolation bias even when such risk does not exist. We conclude that the diagnostics provide too severe a test for many data sets used in political science.

16. O’Sullivan, Jack W., Amitava Banerjee, Carl Heneghan, and Annette Pluddemann. "Verification bias." BMJ Evidence-Based Medicine 23, no. 2 (February 27, 2018): 54–55. http://dx.doi.org/10.1136/bmjebm-2018-110919.

Abstract: This article is part of the Catalogue of Bias series. We present a description of verification bias, and outline its potential impact on research studies and the preventive steps to minimise its risk. We also present teaching slides in the online supplementary file. Verification bias (sometimes referred to as ‘work-up bias’) concerns the test(s) used to confirm a diagnosis within a diagnostic accuracy study. Verification bias occurs when only a proportion of the study participants receive confirmation of the diagnosis by the reference standard test, or if some participants receive a different reference standard test.

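The mechanism described in this abstract can be reproduced numerically. Below is a minimal simulation sketch with all rates invented for illustration (it is not taken from the article): when index-test positives are far more likely than negatives to be verified by the reference standard, apparent sensitivity is inflated and apparent specificity deflated.

```python
import numpy as np

rng = np.random.default_rng(0)
n, prevalence = 100_000, 0.2
true_sens, true_spec = 0.80, 0.90

# Simulate disease status and an index test with known accuracy.
disease = rng.random(n) < prevalence
index_pos = np.where(disease, rng.random(n) < true_sens,
                     rng.random(n) < 1 - true_spec)

# Partial verification (hypothetical rates): positives are almost always
# sent for the reference standard, negatives only rarely.
verified = np.where(index_pos, rng.random(n) < 0.95, rng.random(n) < 0.10)

# Accuracy computed only among verified patients, as in a biased study.
d, t = disease[verified], index_pos[verified]
print(f"apparent sensitivity {(t & d).sum() / d.sum():.2f} (true {true_sens})")
print(f"apparent specificity {(~t & ~d).sum() / (~d).sum():.2f} (true {true_spec})")
```
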
17. Aoki, Yosuke. "2. Introducing Representative Cognitive Bias (in Diagnostic Reasoning)." Nihon Naika Gakkai Zasshi 108, Suppl (February 28, 2019): 139b–140a. http://dx.doi.org/10.2169/naika.108.139b.

18. Aoki, Yosuke. "2. Introducing Representative Cognitive Bias (in Diagnostic Reasoning)." Nihon Naika Gakkai Zasshi 108, no. 9 (September 10, 2019): 1842–46. http://dx.doi.org/10.2169/naika.108.1842.

19. Bates, Ann S., Peter A. Margolis, and Arthur T. Evans. "Verification bias in pediatric studies evaluating diagnostic tests." Journal of Pediatrics 122, no. 4 (April 1993): 585–90. http://dx.doi.org/10.1016/s0022-3476(05)83540-1.

20. Mower, William R. "Evaluating Bias and Variability in Diagnostic Test Reports." Annals of Emergency Medicine 33, no. 1 (January 1999): 85–91. http://dx.doi.org/10.1016/s0196-0644(99)70422-1.

21. Petersen, Per Hyltoft, Carl-Henric de Verdier, Torgny Groth, Callum G. Fraser, Ole Blaabjerg, and Mogens Hørder. "The influence of analytical bias on diagnostic misclassifications." Clinica Chimica Acta 260, no. 2 (April 1997): 189–206. http://dx.doi.org/10.1016/s0009-8981(96)06496-0.

22. Diamond, George A. "Selection bias and the evaluation of diagnostic tests." Journal of Chronic Diseases 39, no. 5 (January 1986): 359–60. http://dx.doi.org/10.1016/0021-9681(86)90121-9.

23. Moreno-Rebollo, J. "Miscellanea. Influence diagnostic in survey sampling: conditional bias." Biometrika 86, no. 4 (December 1, 1999): 923–28. http://dx.doi.org/10.1093/biomet/86.4.923.

24. Worster, Andrew, and Christopher Carpenter. "Incorporation bias in studies of diagnostic tests: how to avoid being biased about bias." CJEM 10, no. 2 (March 2008): 174–75. http://dx.doi.org/10.1017/s1481803500009891.

25. Leppänen, Leo, Hanna Tuulonen, and Stefanie Sirén-Heikel. "Automated Journalism as a Source of and a Diagnostic Device for Bias in Reporting." Media and Communication 8, no. 3 (July 10, 2020): 39–49. http://dx.doi.org/10.17645/mac.v8i3.3022.

Abstract: In this article we consider automated journalism from the perspective of bias in news text. We describe how systems for automated journalism could be biased in terms of both the information content and the lexical choices in the text, and what mechanisms allow human biases to affect automated journalism even if the data the system operates on is considered neutral. Hence, we sketch out three distinct scenarios differentiated by the technical transparency of the systems and the level of cooperation of the system operator, affecting the choice of methods for investigating bias. We identify methods for diagnostics in each of the scenarios and note that one of the scenarios is largely identical to investigating bias in non-automatically produced texts. As a solution to this last scenario, we suggest the construction of a simple news generation system, which could enable a type of analysis-by-proxy. Instead of analyzing the system, to which access is limited, one would generate an approximation of the system which can be accessed and analyzed freely. If successful, this method could also be applied to analysis of human-written texts. This would make automated journalism not only a target of bias diagnostics, but also a diagnostic device for identifying bias in human-written news.

26. Vos, Laura M., Andrea H. L. Bruning, Johannes B. Reitsma, Rob Schuurman, Annelies Riezebos-Brilman, Andy I. M. Hoepelman, and Jan Jelrik Oosterheert. "Rapid Molecular Tests for Influenza, Respiratory Syncytial Virus, and Other Respiratory Viruses: A Systematic Review of Diagnostic Accuracy and Clinical Impact Studies." Clinical Infectious Diseases 69, no. 7 (January 28, 2019): 1243–53. http://dx.doi.org/10.1093/cid/ciz056.

Abstract: We systematically reviewed available evidence from Embase, Medline, and the Cochrane Library on diagnostic accuracy and clinical impact of commercially available rapid (results <3 hours) molecular diagnostics for respiratory viruses as compared to conventional molecular tests. Quality of included studies was assessed using the Quality Assessment of Diagnostic Accuracy Studies criteria for diagnostic test accuracy (DTA) studies, and the Cochrane Risk of Bias Assessment and Risk of Bias in Nonrandomized Studies of Interventions criteria for randomized and observational impact studies, respectively. Sixty-three DTA reports (56 studies) were meta-analyzed with a pooled sensitivity of 90.9% (95% confidence interval [CI], 88.7%–93.1%) and specificity of 96.1% (95% CI, 94.2%–97.9%) for the detection of either influenza virus (n = 29), respiratory syncytial virus (RSV) (n = 1), influenza virus and RSV (n = 19), or a viral panel including influenza virus and RSV (n = 14). The 15 included impact studies (5 randomized) were very heterogeneous and results were therefore inconclusive. However, we suggest that implementation of rapid diagnostics in hospital care settings should be considered.

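For intuition about what a pooled estimate like the one above means, here is a deliberately simplified sketch: fixed-effect inverse-variance pooling of per-study sensitivities on the logit scale, with invented data. The review itself would have used more sophisticated DTA meta-analysis models, which this toy does not implement.

```python
import math

def pool_logit(proportions: list[float], sizes: list[int]) -> float:
    """Fixed-effect inverse-variance pooling on the logit scale."""
    logits = [math.log(p / (1 - p)) for p in proportions]
    # Var(logit p) is approximately 1/(n*p*(1-p)), so the weight is its inverse.
    weights = [n * p * (1 - p) for p, n in zip(proportions, sizes)]
    pooled = sum(l * w for l, w in zip(logits, weights)) / sum(weights)
    return 1 / (1 + math.exp(-pooled))

# Three hypothetical studies reporting sensitivity, with their sample sizes.
print(f"pooled sensitivity: {pool_logit([0.88, 0.93, 0.90], [120, 200, 150]):.3f}")
```
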
27. Kim, Chanmin, Xiaoyan Lin, and Kerrie P. Nelson. "Measuring rater bias in diagnostic tests with ordinal ratings." Statistics in Medicine 40, no. 17 (May 9, 2021): 4014–33. http://dx.doi.org/10.1002/sim.9011.

28. Mull, Nikhil, James B. Reilly, and Jennifer S. Myers. "In reply: Cognitive bias and diagnostic error (November 2015)." Cleveland Clinic Journal of Medicine 83, no. 6 (June 2016): 408. http://dx.doi.org/10.3949/ccjm.83c.06004.

29. Mizock, Lauren, and Debra Harkins. "Diagnostic Bias and Conduct Disorder: Improving Culturally Sensitive Diagnosis." Child & Youth Services 32, no. 3 (July 2011): 243–53. http://dx.doi.org/10.1080/0145935x.2011.605315.

30. Rutjes, A. W. S. "Evidence of bias and variation in diagnostic accuracy studies." Canadian Medical Association Journal 174, no. 4 (February 14, 2006): 469–76. http://dx.doi.org/10.1503/cmaj.050090.

31. Spangler, William D. "Single-Source Response Bias in the Job Diagnostic Survey." Psychological Reports 65, no. 2 (October 1989): 531–46. http://dx.doi.org/10.2466/pr0.1989.65.2.531.

Abstract: Tests of the job characteristics model using the Job Diagnostic Survey have been criticized in the literature for having single-source response bias. To test this criticism, undergraduate and graduate students used the Job Diagnostic Survey to describe their job as “student” (the pretest). The same students then worked at and described a contrived job using the survey. Results from the current study suggested that personality and instrument characteristics had relatively minimal effects on interscale correlations of the scores in the survey within and across situations. However, response biases attributable to priming, consistency, and implicit theories artificially inflated interscale correlations.

32. de Groot, Joris A. H., Nandini Dendukuri, Kristel J. M. Janssen, Johannes B. Reitsma, Patrick M. M. Bossuyt, and Karel G. M. Moons. "Adjusting for Differential-verification Bias in Diagnostic-accuracy Studies." Epidemiology 22, no. 2 (March 2011): 234–41. http://dx.doi.org/10.1097/ede.0b013e318207fc5c.

33. Moreno-Rebollo, J. L., A. Muñoz-Reyes, M. D. Jiménez-Gamero, and J. Muñoz-Pichardo. "Influence diagnostic in survey sampling: Estimating the conditional bias." Metrika 55, no. 3 (June 1, 2002): 209–14. http://dx.doi.org/10.1007/s001840100142.

34. Wells, Carolyn K., and Alvan R. Feinstein. "Detection Bias in the Diagnostic Pursuit of Lung Cancer." American Journal of Epidemiology 128, no. 5 (November 1988): 1016–26. http://dx.doi.org/10.1093/oxfordjournals.aje.a115046.

35. Forrest, Laura Urbanski. "Gender Bias in the Diagnostic Classification of Mental Disorders." Contemporary Psychology: A Journal of Reviews 43, no. 10 (October 1998): 697–98. http://dx.doi.org/10.1037/001810.

36. Boonstra, Philip S., Roderick J. A. Little, Brady T. West, Rebecca R. Andridge, and Fernanda Alvarado-Leiton. "A Simulation Study of Diagnostics for Selection Bias." Journal of Official Statistics 37, no. 3 (September 1, 2021): 751–69. http://dx.doi.org/10.2478/jos-2021-0033.

Abstract: A non-probability sampling mechanism arising from nonresponse or non-selection is likely to bias estimates of parameters with respect to a target population of interest. This bias poses a unique challenge when selection is ‘non-ignorable’, that is, dependent on the unobserved outcome of interest, since it is then undetectable and thus cannot be ameliorated. We extend a simulation study by Nishimura et al. (2016), adding two recently published statistics: the ‘standardized measure of unadjusted bias’ (SMUB) and ‘standardized measure of adjusted bias’ (SMAB), which explicitly quantify the extent of bias (in the case of SMUB) or nonignorable bias (in the case of SMAB) under the assumption that a specified amount of nonignorable selection exists. Our findings suggest that this new sensitivity diagnostic is more correlated with, and more predictive of, the true, unknown extent of selection bias than other diagnostics, even when the underlying assumed level of non-ignorability is incorrect.

37. Kühnisch, Jan, Mila Janjic Rankovic, Svetlana Kapor, Ina Schüler, Felix Krause, Stavroula Michou, Kim Ekstrand, et al. "Identifying and Avoiding Risk of Bias in Caries Diagnostic Studies." Journal of Clinical Medicine 10, no. 15 (July 22, 2021): 3223. http://dx.doi.org/10.3390/jcm10153223.

Abstract: Caries diagnostic studies differ with respect to their design, included patients/tooth samples, use of diagnostic and reference methods, calibration, blinding and data reporting. Such heterogeneity makes comparisons between studies difficult and could represent a substantial risk of bias (RoB) when it is not identified. Therefore, the present report aims to describe the development and background of a RoB assessment tool for caries diagnostic studies. The expert group developed and agreed to use a RoB assessment tool during three workshops. Here, existing instruments (e.g., QUADAS 2 and the Joanna Briggs Institute Reviewers’ Manual) influenced the hierarchy and phrasing of the signalling questions that were adapted to the specific dental purpose. The tailored RoB assessment tool that was created consists of 16 signalling questions that are organized in four domains. This tool considers the selection/spectrum bias (1), the bias of the index (2) and reference tests (3), and the bias of the study flow and data analysis (4) and can be downloaded from the journal website. This paper explores possible sources of heterogeneity and bias in caries diagnostic studies and summarizes the relevant methodological aspects.

38. Mamede, Sílvia, Marco Antonio de Carvalho-Filho, Rosa Malena Delbone de Faria, Daniel Franci, Maria do Patrocinio Tenorio Nunes, Ligia Maria Cayres Ribeiro, Julia Biegelmeyer, Laura Zwaan, and Henk G. Schmidt. "‘Immunising’ physicians against availability bias in diagnostic reasoning: a randomised controlled experiment." BMJ Quality & Safety 29, no. 7 (January 27, 2020): 550–59. http://dx.doi.org/10.1136/bmjqs-2019-010079.

Abstract: Background: Diagnostic errors have often been attributed to biases in physicians’ reasoning. Interventions to ‘immunise’ physicians against bias have focused on improving reasoning processes and have largely failed. Objective: To investigate the effect of increasing physicians’ relevant knowledge on their susceptibility to availability bias. Design, settings and participants: Three-phase multicentre randomised experiment with second-year internal medicine residents from eight teaching hospitals in Brazil. Interventions: Immunisation: physicians diagnosed one of two sets of vignettes (either diseases associated with chronic diarrhoea or with jaundice) and compared/contrasted alternative diagnoses with feedback. Biasing phase (1 week later): physicians were biased towards either inflammatory bowel disease or viral hepatitis. Diagnostic performance test: all physicians diagnosed three vignettes resembling inflammatory bowel disease and three resembling hepatitis (however, all with different diagnoses). Physicians who had increased their knowledge of either chronic diarrhoea or jaundice 1 week earlier were expected to resist the bias attempt. Main outcome measurements: Diagnostic accuracy, measured by test score (range 0–1), computed for subjected-to-bias and not-subjected-to-bias vignettes diagnosed by immunised and not-immunised physicians. Results: Ninety-one residents participated in the experiment. Diagnostic accuracy differed on subjected-to-bias vignettes, with immunised physicians performing better than non-immunised physicians (0.40 vs 0.24; difference in accuracy 0.16 (95% CI 0.05 to 0.27); p=0.004), but not on not-subjected-to-bias vignettes (0.36 vs 0.41; difference −0.05 (95% CI −0.17 to 0.08); p=0.45). Bias only hampered non-immunised physicians, who performed worse on subjected-to-bias than on not-subjected-to-bias vignettes (difference −0.17 (95% CI −0.28 to −0.05); p=0.005); immunised physicians’ accuracy did not differ (p=0.56). Conclusions: An intervention directed at increasing knowledge of clinical findings that discriminate between similar-looking diseases decreased physicians’ susceptibility to availability bias, reducing diagnostic errors, in a simulated setting. Future research needs to examine the degree to which the intervention benefits other disease clusters and performance in clinical practice. Trial registration number: 68745917.1.1001.0068.

39. Garner, William A., Douglas C. Strohmer, Cynthia A. Langford, and George J. Boas. "Diagnostic and Treatment Overshadowing Bias Across Disabilities: Are Rehabilitation Professionals Immune?" Journal of Applied Rehabilitation Counseling 25, no. 2 (June 1, 1994): 33–37. http://dx.doi.org/10.1891/0047-2220.25.2.33.

Abstract: The concept of diagnostic overshadowing has historically been applied only to clients with mental retardation. The possibility that diagnostic overshadowing impacts other disability categories was explored in this study, which examined the robustness of diagnostic overshadowing bias when applied to rehabilitation counselor judgments about clients with physical disabilities, as well as clients with mental retardation. A total of 89 rehabilitation professionals were presented with a case scenario that was identical except for the specific disability condition described. The professionals then completed a questionnaire that related to diagnostic impressions and treatment recommendations. Diagnostic overshadowing was exhibited with both mental retardation and physical disabilities. However, no overshadowing was noted for treatment recommendations.

40. Crowley, Ryan J., Yuan Jin Tan, and John P. A. Ioannidis. "Empirical assessment of bias in machine learning diagnostic test accuracy studies." Journal of the American Medical Informatics Association 27, no. 7 (June 17, 2020): 1092–101. http://dx.doi.org/10.1093/jamia/ocaa075.

Abstract: Objective: Machine learning (ML) diagnostic tools have significant potential to improve health care. However, methodological pitfalls may affect diagnostic test accuracy studies used to appraise such tools. We aimed to evaluate the prevalence and reporting of design characteristics within the literature. Further, we sought to empirically assess whether design features may be associated with different estimates of diagnostic accuracy. Materials and Methods: We systematically retrieved 2 × 2 tables (n = 281) describing the performance of ML diagnostic tools, derived from 114 publications in 38 meta-analyses, from PubMed. Data extracted included test performance, sample sizes, and design features. A mixed-effects metaregression was run to quantify the association between design features and diagnostic accuracy. Results: Participant ethnicity and blinding in test interpretation were unreported in 90% and 60% of studies, respectively. Reporting was occasionally lacking for rudimentary characteristics such as study design (28% unreported). Internal validation without appropriate safeguards was used in 44% of studies. Several design features were associated with larger estimates of accuracy, including having unreported (relative diagnostic odds ratio [RDOR], 2.11; 95% confidence interval [CI], 1.43–3.1) or case-control study designs (RDOR, 1.27; 95% CI, 0.97–1.66), and recruiting participants for the index test (RDOR, 1.67; 95% CI, 1.08–2.59). Discussion: Significant underreporting of experimental details was present. Study design features may affect estimates of diagnostic performance in the ML diagnostic test accuracy literature. Conclusions: The present study identifies pitfalls that threaten the validity, generalizability, and clinical value of ML diagnostic tools and provides recommendations for improvement.

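As a reference point for the RDOR figures above: the diagnostic odds ratio of a single 2 × 2 table is (TP/FN)/(FP/TN), and a relative DOR compares two such values (in the article this is done via metaregression; the counts below are purely hypothetical):

```python
def dor(tp: int, fp: int, fn: int, tn: int) -> float:
    """Diagnostic odds ratio: odds of a positive test among the diseased
    divided by odds of a positive test among the non-diseased."""
    return (tp / fn) / (fp / tn)

# Hypothetical 2x2 tables from two study designs.
dor_case_control = dor(tp=90, fp=10, fn=10, tn=90)  # 81.0
dor_cohort = dor(tp=80, fp=15, fn=20, tn=85)        # ~22.7
print(f"relative DOR: {dor_case_control / dor_cohort:.2f}")  # ~3.57
```
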
41. Graber, Mark A. "Heuristics and medical errors. Part 2: How to make better medical decisions." Russian Family Doctor 25, no. 1 (March 15, 2021): 45–52. http://dx.doi.org/10.17816/rfd62009.

Abstract: This publication continues the article 'Heuristics, language and medical errors', published in the 4th issue of the journal Russian Family Doctor for 2020, which described ways of making medical decisions that can lead to errors in patient management, in particular the affect heuristic/visceral bias, attribution error, frame of reference, availability bias, and the one-word-one-meaning fallacy. This article discusses additional sources of diagnostic error, including diagnosis momentum, confirmation bias, representativeness, and premature closure; the conflict that arises from diagnostic uncertainty is also discussed. All errors in management tactics and the diagnostic process are illustrated by clinical cases from the author's personal practice.

42. Chaves, Antônio Barbosa, Alexandre Sampaio Moura, Rosa Malena Delbone de Faria, and Ligia Cayres Ribeiro. "The use of deliberate reflection to reduce confirmation bias among orthopedic surgery residents." Scientia Medica 32, no. 1 (March 7, 2022): e42216. http://dx.doi.org/10.15448/1980-6108.2022.1.42216.

Abstract: Introduction: Cognitive biases might affect decision-making processes such as clinical reasoning, and confirmation bias is among the most important ones. The use of strategies that stimulate deliberate reflection during the diagnostic process seems to reduce availability bias, but its effect in reducing confirmation bias needs to be evaluated. Aims: To examine whether deliberate reflection reduces confirmation bias and increases the diagnostic accuracy of orthopedic residents solving written clinical cases. Methods: Experimental study comparing the diagnostic accuracy of orthopedic residents in the resolution of eight written clinical cases containing a referral diagnosis. Half of the written cases had a wrong referral diagnosis. One group of residents used deliberate reflection (RG), which stimulates comparison and contrast of clinical hypotheses in a systematic manner, and a control group (CG) was asked to provide differential diagnoses with no further instruction. The study included 55 third-year orthopedic residents, 27 allocated to the RG and 28 to the CG. Results: Residents in the RG had higher diagnostic scores than the CG for clinical cases with a correct referral diagnosis (62.0±20.1 vs. 49.1±21.0, respectively; p = 0.021). For clinical cases with an incorrect referral diagnosis, diagnostic accuracy was similar between residents in the RG and those in the CG (39.8±24.3 vs. 44.6±26.7, respectively; p = 0.662). We observed an overall confirmation bias in 26.3% of initial diagnoses (non-analytic phase) and 19.5% of final diagnoses (analytic phase) when solving clinical cases with an incorrect referral diagnosis. Residents from the RG showed a reduction in confirmation of the incorrect referral diagnosis when comparing the initial diagnosis given in the non-analytic phase with the one provided as the final diagnosis (25.9±17.7 vs. 17.6±18.1, respectively; Cohen's d: 0.46; p = 0.003). In the CG, the reduction in the confirmation of incorrect diagnoses was not statistically significant. Conclusions: Confirmation bias was present when residents solved written clinical cases with incorrect referral diagnoses, and deliberate reflection reduced such bias. Despite the reduction in confirmation bias, the diagnostic accuracy of residents from the RG was similar to that of the CG when solving the set of clinical cases with a wrong referral diagnosis.

43. Watari, Takashi, Yasuharu Tokuda, Yu Amano, Kazumichi Onigata, and Hideyuki Kanda. "Cognitive Bias and Diagnostic Errors among Physicians in Japan: A Self-Reflection Survey." International Journal of Environmental Research and Public Health 19, no. 8 (April 12, 2022): 4645. http://dx.doi.org/10.3390/ijerph19084645.

Abstract: This cross-sectional study aimed to clarify how cognitive biases and situational factors related to diagnostic errors among physicians. A self-reflection questionnaire survey on physicians’ most memorable diagnostic error cases was conducted at seven conferences: one each in Okayama, Hiroshima, Matsue, Izumo City, and Osaka, and two in Tokyo. Among the 147 recruited participants, 130 completed and returned the questionnaires. We recruited primary care physicians working in various specialty areas and settings (e.g., clinics and hospitals). Results indicated that the emergency department was the most common setting (47.7%), and the highest frequency of errors occurred during night-time work. An average of 3.08 cognitive biases was attributed to each error. The participants reported anchoring bias (60.0%), premature closure (58.5%), availability bias (46.2%), and hassle bias (33.1%), with the first three being most frequent. Further, multivariate logistic regression analysis for cognitive bias showed that emergency room care can easily induce cognitive bias (adjusted odds ratio 3.96, 95% CI 1.16−13.6, p-value = 0.028). Although limited to a certain extent by its sample collection, due to the sensitive nature of information regarding physicians’ diagnostic errors, this study nonetheless shows correlations with environmental factors (emergency room care situations) that induce cognitive biases which, in turn, cause diagnostic errors.

44. Pan, Lin-Lin, Richard Grotjahn, and Joe Tribbia. "Sources of CAM3 temperature bias during northern winter from diagnostic study of the temperature bias equation." Climate Dynamics 35, no. 7–8 (June 28, 2009): 1411–27. http://dx.doi.org/10.1007/s00382-009-0608-6.

45. Rosen, Eyal, Tomer Goldberger, Ilan Beitlitum, Dan Littner, and Igor Tsesis. "Diagnosis Efficacy of Cone-Beam Computed Tomography in Endodontics—A Systematic Review of High-Level-Evidence Studies." Applied Sciences 12, no. 3 (January 18, 2022): 938. http://dx.doi.org/10.3390/app12030938.

Abstract: Introduction: The integration of clinical inspection and diagnostic imaging forms the basis for endodontic diagnosis, decision making, treatment planning, and outcome assessments. In recent years, CBCT imaging has become a common diagnostic tool in endodontics. CBCT should only be used to ensure that the benefits to the patient exceed the risks. As such, our aim in this study was to evaluate high-level diagnostic efficacy studies and their risk of bias. Methods: A systematic search of the literature was conducted to identify studies evaluating the use of CBCT imaging in endodontics. The following databases were searched: Medline (PubMed), Scopus, and Cochrane Central. The identified studies were subjected to rigorous inclusion criteria. Studies considered as having a high efficacy level were then subjected to a risk of bias assessment using the Cochrane Handbook for Systematic Reviews of Diagnostic Test Accuracy. Results: Initially, 1568 articles were identified for possible inclusion in the review. Following title and abstract assessment, duplicate removal, and a full-text evaluation, 22 studies were included. Of those studies, 2 had a low risk of bias and 20 had a high risk of bias. Six studies investigated non-surgical treatment, eight investigated surgical treatment, two investigated both non-surgical and surgical treatment, and six studies investigated diagnostic thinking or decision making. Conclusion: The evidence for the influence of CBCT on decision making and treatment outcomes in endodontics is predominantly based on studies with a high risk of bias.

46. Schmidt, Robert L. "Verification bias is common in cytopathology studies on diagnostic accuracy." CytoJournal 11 (May 22, 2014): 13. http://dx.doi.org/10.4103/1742-6413.132994.

47. Meeks, Suzanne. "Age bias in the diagnostic decision-making behavior of clinicians." Professional Psychology: Research and Practice 21, no. 4 (1990): 279–84. http://dx.doi.org/10.1037/0735-7028.21.4.279.

48. Serbic, Danijela, and Tamar Pincus. "Diagnostic uncertainty and recall bias in chronic low back pain." Pain 155, no. 8 (August 2014): 1540–46. http://dx.doi.org/10.1016/j.pain.2014.04.030.

49. Li, Qiang. "Improvement of bias and generalizability for computer-aided diagnostic schemes." Computerized Medical Imaging and Graphics 31, no. 4–5 (June 2007): 338–45. http://dx.doi.org/10.1016/j.compmedimag.2007.02.004.

50. Kosinski, Andrzej S., and Huiman X. Barnhart. "Accounting for Nonignorable Verification Bias in Assessment of Diagnostic Tests." Biometrics 59, no. 1 (March 2003): 163–71. http://dx.doi.org/10.1111/1541-0420.00019.