Journal articles on the topic 'Assessment bias'

Consult the top 50 journal articles for your research on the topic 'Assessment bias.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

McClintock, Adelaide H., Tyra Fainstad, Joshua Jauregui, and Lalena M. Yarris. "Countering Bias in Assessment." Journal of Graduate Medical Education 13, no. 5 (October 1, 2021): 725–26. http://dx.doi.org/10.4300/jgme-d-21-00722.1.

2

Taylor, Ronald L. "Bias in Cognitive Assessment." Diagnostique 17, no. 1 (October 1991): 3–5. http://dx.doi.org/10.1177/153450849101700101.

3

Reschly, Daniel J. "Bias in Cognitive Assessment." Diagnostique 17, no. 1 (October 1991): 86–90. http://dx.doi.org/10.1177/153450849101700108.

4

Howell, Kenneth W., Susan S. Bigelow, Elizabeth L. Moore, and Ange M. Evoy. "Bias in Authentic Assessment." Diagnostique 19, no. 1 (October 1993): 387–400. http://dx.doi.org/10.1177/153450849301900105.

5

Howell, Dennis G. "Political Bias Obscured Assessment." Journal - American Water Works Association 87, no. 5 (May 1995): 8. http://dx.doi.org/10.1002/j.1551-8833.1995.tb06354.x.

6

Mahtani, Kamal, Elizabeth A. Spencer, Jon Brassey, and Carl Heneghan. "Catalogue of bias: observer bias." BMJ Evidence-Based Medicine 23, no. 1 (January 24, 2018): 23–24. http://dx.doi.org/10.1136/ebmed-2017-110884.

Abstract:
This article is part of a series featured from the Catalogue of Bias introduced in this volume of BMJ Evidence-Based Medicine that describes biases and outlines their potential impact in research studies. Observer bias is systematic discrepancy from the truth during the process of observing and recording information for a study. Many healthcare observations are at risk of this bias. Evidence shows that treatment effect estimates can be exaggerated by a third to two-thirds in the presence of observer bias in outcome assessment. Preventing observer bias involves proper masking in intervention studies, including the use of matched placebo interventions where appropriate, and training of observers to make assessments consistent and reduce biases resulting from conscious or unconscious prejudices. Where observers are involved in a research study, it is probably not possible for the study to be entirely free of observer biases.
7

Aggarwal, Swati, Tushar Sinha, Yash Kukreti, and Siddarth Shikhar. "Media bias detection and bias short term impact assessment." Array 6 (July 2020): 100025. http://dx.doi.org/10.1016/j.array.2020.100025.

8

Bytzer, Peter. "Information Bias in Endoscopic Assessment." American Journal of Gastroenterology 102, no. 8 (August 2007): 1585–87. http://dx.doi.org/10.1111/j.1572-0241.2006.00911.x.

9

Chernin, Jeffrey, Janice Miner Holden, and Cynthia Chandler. "Bias in Psychological Assessment: Heterosexism." Measurement and Evaluation in Counseling and Development 30, no. 2 (July 1, 1997): 68–76. http://dx.doi.org/10.1080/07481756.1997.12068922.

10

Campbell, Thomas, Chris Dollaghan, Herbert Needleman, and Janine Janosky. "Reducing Bias in Language Assessment." Journal of Speech, Language, and Hearing Research 40, no. 3 (June 1997): 519–25. http://dx.doi.org/10.1044/jslhr.4003.519.

Abstract:
One potential solution to the problem of eliminating bias in language assessment is to identify valid measures that are not affected by subjects' prior knowledge or experience. In this study, 156 randomly selected school-age boys (31% majority; 69% minority) participated in three “processing-dependent” language measures, designed to minimize the contributions of prior knowledge on performance, and one traditional “knowledge-dependent” language test. As expected, minority subjects obtained significantly lower scores than majority participants on the knowledge-dependent test, but the groups did not differ on any of the processing-dependent measures. These results suggest that processing-dependent measures hold considerable promise for distinguishing between children with language disorders, whose poor language performance reflects fundamental psycholinguistic deficits, and children with language differences attributable to differing experiential backgrounds.
11

Ferris, Steven J., Rob A. Kempton, Ian J. Deary, Elizabeth J. Austin, and Margaret V. Shorter. "Carryover Bias in Visual Assessment." Perception 30, no. 11 (November 2001): 1363–73. http://dx.doi.org/10.1068/p2917.

12

Miller, Dave P., Mardi Gomberg-Maitland, and Marc Humbert. "Survivor bias and risk assessment." European Respiratory Journal 40, no. 3 (August 31, 2012): 530–32. http://dx.doi.org/10.1183/09031936.00094112.

13

Mishra, Himanshu, Arul Mishra, and Oscar Moreno. "Bias in Spatial Risk Assessment." Management Science 61, no. 4 (April 2015): 851–63. http://dx.doi.org/10.1287/mnsc.2014.1912.

14

Steblay, Nancy K., and Gary L. Wells. "Assessment of bias in police lineups." Psychology, Public Policy, and Law 26, no. 4 (November 2020): 393–412. http://dx.doi.org/10.1037/law0000287.

15

Fanelli, Daniele, Rodrigo Costas, and John P. A. Ioannidis. "Meta-assessment of bias in science." Proceedings of the National Academy of Sciences 114, no. 14 (March 20, 2017): 3714–19. http://dx.doi.org/10.1073/pnas.1618569114.

Abstract:
Numerous biases are believed to affect the scientific literature, but their actual prevalence across disciplines is unknown. To gain a comprehensive picture of the potential imprint of bias in science, we probed for the most commonly postulated bias-related patterns and risk factors, in a large random sample of meta-analyses taken from all disciplines. The magnitude of these biases varied widely across fields and was overall relatively small. However, we consistently observed a significant risk of small, early, and highly cited studies to overestimate effects and of studies not published in peer-reviewed journals to underestimate them. We also found at least partial confirmation of previous evidence suggesting that US studies and early studies might report more extreme effects, although these effects were smaller and more heterogeneously distributed across meta-analyses and disciplines. Authors publishing at high rates and receiving many citations were, overall, not at greater risk of bias. However, effect sizes were likely to be overestimated by early-career researchers, those working in small or long-distance collaborations, and those responsible for scientific misconduct, supporting hypotheses that connect bias to situational factors, lack of mutual control, and individual integrity. Some of these patterns and risk factors might have modestly increased in intensity over time, particularly in the social sciences. Our findings suggest that, besides one being routinely cautious that published small, highly-cited, and earlier studies may yield inflated results, the feasibility and costs of interventions to attenuate biases in the literature might need to be discussed on a discipline-specific and topic-specific basis.
16

Bradley, Clare. "Sex Bias in Student Assessment Overlooked?" Assessment & Evaluation in Higher Education 18, no. 1 (January 1993): 3–8. http://dx.doi.org/10.1080/0260293930180101.

17

Calsyn, Robert J., and W. Dean Klinkenberg. "Response Bias in Needs Assessment Studies." Evaluation Review 19, no. 2 (April 1995): 217–25. http://dx.doi.org/10.1177/0193841x9501900206.

18

Edwards, Valerie J., Robert F. Anda, Dale F. Nordenberg, Vincent J. Felitti, David F. Williamson, and Jean A. Wright. "Bias assessment for child abuse survey." Child Abuse & Neglect 25, no. 2 (February 2001): 307–12. http://dx.doi.org/10.1016/s0145-2134(00)00238-6.

19

Hammond, T. O., and D. L. Verbyla. "Optimistic bias in classification accuracy assessment." International Journal of Remote Sensing 17, no. 6 (April 1996): 1261–66. http://dx.doi.org/10.1080/01431169608949085.

20

Clarke, Lee. "Politics and bias in risk assessment." Social Science Journal 25, no. 2 (June 1, 1988): 155–65. http://dx.doi.org/10.1016/0362-3319(88)90003-1.

21

Arends, P., R. J. S. Wagner, and M. Mrazik. "SCAT Symptom Recall Bias in Concussed Athletes." Archives of Clinical Neuropsychology 34, no. 5 (July 2019): 733. http://dx.doi.org/10.1093/arclin/acz026.03.

Abstract:
Abstract Purpose The assessment of concussed athletes uses standardized tools like the Sport Concussion Assessment Tool (SCAT). The purpose of this study was to evaluate whether concussed athletes accurately recalled baseline functioning. Methods A retrospective cohort analysis of University football student-athletes from the 2014–16 seasons was conducted. Forty-six student-athletes (M=19.7, SD=1.8) who suffered a concussion during the competitive season underwent a post-concussion assessment with a sports medicine physician within 24 hours of injury. Assessments included use of the symptom report from the SCAT3. Athletes were asked to recall their pre-injury baseline functioning. Results Of the 46 athletes who underwent assessments, 22 reported having at least 1 symptom (M=1.4, SD=2.0) at baseline evaluations. Yet at the initial medical evaluation, only 10 of these athletes correctly recalled having symptoms prior to injury, and none of them accurately reported their total symptom report. Paired t-tests revealed significant differences between athletes' predicted recall of the number of symptoms (M=0.7, SD=1.2) and total symptom scores (M=2.1, SD=3.4); t(45)=-3.28, p=.002, d=0.48. Conclusion Having accurate information for diagnosing concussions is important during concussion medical evaluations. Many clinicians depend on an athlete's subjective reporting of symptom change. Our results indicate that an athlete's accuracy of pre-injury functioning is poor, consistent with what has been termed “the good old days bias.” The findings suggest having baseline assessments can be helpful and clinicians may need to carefully understand an athlete's pre-morbid functioning.
22

Faggion, Clovis Mariano. "Risk of bias assessment should not go beyond reporting assessment." Journal of Clinical Epidemiology 72 (April 2016): 126–27. http://dx.doi.org/10.1016/j.jclinepi.2015.11.014.

23

Blankenberger, McChesney, Schnebly, Moranski, and Dell. "Measuring Racial Bias and General Education Assessment." Journal of General Education 66, no. 1-2 (2018): 42. http://dx.doi.org/10.5325/jgeneeduc.66.1-2.0042.

24

"Significant bias in retrospective assessment of teratogenicity." Inpharma Weekly, no. 1214 (November 1999): 21. http://dx.doi.org/10.2165/00128413-199912140-00047.

25

"Significant bias in retrospective assessment of teratogenicity." Reactions Weekly, no. 778 (November 1999): 2. http://dx.doi.org/10.2165/00128415-199907780-00004.

26

Haque, Abid, Kovosh Dastan, Nathan Patel, Candice Passerella, and Miriam Michael. "Racial Bias in the Assessment of Pain." Archives of Physical Medicine and Rehabilitation 103, no. 3 (March 2022): e41-e42. http://dx.doi.org/10.1016/j.apmr.2022.01.116.

27

Carone, Dominic A. "Assessment of response bias in neurocognitive evaluations." NeuroRehabilitation 36, no. 4 (July 20, 2015): 387–400. http://dx.doi.org/10.3233/nre-151228.

28

Kerkhoff, Christian, Hans R. Künsch, and Christoph Schär. "Assessment of Bias Assumptions for Climate Models." Journal of Climate 27, no. 17 (August 28, 2014): 6799–818. http://dx.doi.org/10.1175/jcli-d-13-00716.1.

Abstract:
Abstract Climate scenarios make implicit or explicit assumptions about the extrapolation of climate model biases from current to future time periods. Such assumptions are inevitable because of the lack of future observations. This manuscript reviews different bias assumptions found in the literature and provides measures to assess their validity. The authors explicitly separate climate change from multidecadal variability to systematically analyze climate model biases in seasonal and regional surface temperature averages, using global and regional climate models (GCMs and RCMs) from the Ensemble-Based Predictions of Climate Changes and Their Impacts (ENSEMBLES) project over Europe. For centennial time scales, it is found that a linear bias extrapolation for GCMs is best supported by the analysis: that is, it is generally not correct to assume that model biases are independent of the climate state. Results also show that RCMs behave markedly differently when forced with different drivers. RCM and GCM biases are not additive, and there is a significant interaction component in the bias of the RCM–GCM model chain that depends on both the RCM and GCM considered. This result questions previous studies that deduce biases (and ultimately projections) in RCM–GCM combinations from reanalysis-driven simulations. The authors suggest that the aforementioned interaction component derives from the refined RCM representation of dynamical and physical processes in the lower troposphere, which may nonlinearly depend upon the larger-scale circulation stemming from the driving GCM. The authors’ analyses also show that RCMs provide added value and that the combined RCM–GCM approach yields, in general, smaller biases in seasonal surface temperature and interannual variability, particularly in summer and even for spatial scales that are, in principle, well resolved by the GCMs.
29

Chatterjee, Nilanjan, and Sholom Wacholder. "Validation Studies: Bias, Efficiency, and Exposure Assessment." Epidemiology 13, no. 5 (September 2002): 503–6. http://dx.doi.org/10.1097/00001648-200209000-00004.

30

Roizen, Michael F., and Alicia Toledano. "Technology Assessment and the Learning Contamination Bias." Anesthesia & Analgesia 79, no. 3 (September 1994): 410–12. http://dx.doi.org/10.1213/00000539-199409000-00003.

31

Becker, Gilbert. "Bias in the assessment of gender differences." American Psychologist 51, no. 2 (February 1996): 154–55. http://dx.doi.org/10.1037/0003-066x.51.2.154.

32

Cramer, J. A. "PHP4: Bias in Categorical Medication Compliance Assessment." Value in Health 6, no. 3 (May 2003): 199. http://dx.doi.org/10.1016/s1098-3015(10)63851-x.

33

Vanstraelen, Michel, and Richard Duffett. "Bias in the assessment of psychiatric emergencies." Psychiatric Bulletin 20, no. 3 (March 1996): 181. http://dx.doi.org/10.1192/pb.20.3.181.

34

Fuchs, Douglas, Lynn S. Fuchs, Maryann H. Power, and Ann M. Dailey. "Bias in the Assessment of Handicapped Children." American Educational Research Journal 22, no. 2 (January 1985): 185–98. http://dx.doi.org/10.3102/00028312022002185.

35

van de Vijver, Fons J. R., and Ype H. Poortinga. "Towards an Integrated Analysis of Bias in Cross-Cultural Assessment." European Journal of Psychological Assessment 13, no. 1 (January 1997): 29–37. http://dx.doi.org/10.1027/1015-5759.13.1.29.

Abstract:
A central methodological aspect of cross-cultural assessment is the interpretability of intergroup differences: Do scores obtained by subjects from different cultural groups have the same psychological meaning? Equivalence (or the absence of bias) is required in making valid cross-cultural comparisons. As cross-cultural comparisons are becoming increasingly popular and important, the problem of bias and its detection is receiving increased attention from researchers. Three kinds of bias are discussed and illustrated, namely construct bias, method bias, and item bias (or differential item functioning). Methods to identify bias are reviewed. An overview is given of common sources of each kind of bias. It is argued that an integrated treatment of all forms of bias is needed to enhance the validity of cross-cultural comparisons. The predominant focus on item bias techniques has the unfortunate implication that construct and method bias are examined insufficiently.
36

Veesart, Amanda, and Alison Barron. "Unconscious bias." Nursing Made Incredibly Easy! 18, no. 2 (2020): 47–49. http://dx.doi.org/10.1097/01.nme.0000653208.69994.12.

37

Olson, Robert, Maureen Parkinson, and Michael McKenzie. "Selection Bias Introduced by Neuropsychological Assessments." Canadian Journal of Neurological Sciences / Journal Canadien des Sciences Neurologiques 37, no. 2 (March 2010): 264–68. http://dx.doi.org/10.1017/s0317167100010039.

Abstract:
Objective: Two prospective studies in patients with brain tumours were performed comparing the Mini Mental State Examination (MMSE) and the Montreal Cognitive Assessment (MoCA). The first assessed their feasibility and the second compared their diagnostic accuracy against a four-hour neuropsychological assessment (NPA). The introduction of the NPA decreased accrual and retention rates. We were therefore concerned regarding potential selection bias. Methods: Ninety-two patients were prospectively accrued and subsequently divided into three categories: (a) no NPA required; (b) withdrew consent to NPA; (c) completed NPA. In order to quantify any potential bias introduced by the NPA, patient demographics and cognitive test scores were compared between the three groups. Results: There were significant differences in age (p<0.001), education (p=0.034), dexamethasone use (p=0.002), MMSE (p=0.005), and MoCA scores (p<0.001) across the different study groups. Furthermore, with increasing involvement of the NPA, patients' cognitive scores and educational status increased, while their age, dexamethasone use, and opioid use all decreased. Individuals who completed the NPA had higher MoCA scores than individuals who were not asked to complete the NPA (24.7 vs. 20.5; p < 0.001). In addition, this relationship held when restricting the analyses to individuals with brain metastases (p < 0.001). Conclusions: In this study, the lengthy NPA chosen introduced a statistically and clinically significant source of selection bias. These results highlight the importance of selecting brief and well-tolerated assessments when possible. However, researchers are challenged by weighing the reduced selection bias associated with brief assessments against the cost of reduced diagnostic accuracy.
38

Orr, Margaret Terry, Ray Pecheone, Jon D. Snyder, Joseph Murphy, Ameetha Palanki, Barbara Beaudin, Liz Hollingworth, and Joan L. Buttram. "Performance Assessment for Principal Licensure: Evidence From Content and Face Validation and Bias Review." Journal of Research on Leadership Education 13, no. 2 (April 4, 2017): 109–38. http://dx.doi.org/10.1177/1942775117701179.

Abstract:
This article presents the validity and bias review feedback and outcomes of new performance-based assessments to evaluate candidates seeking principal licensure. Until now, there has been little empirical work on performance assessment for principal licensure. One state recently developed a multi-task performance assessment for leaders and has implemented it for statewide use in initial principal licensure decisions; this development process is described here, focusing on content validity and bias review, and incorporates candidate and program faculty validation as well. The results demonstrate the content validity, relevance, and feasibility of this new performance assessment for leaders, and yield implications for leader assessment generally.
39

Simonson, Richard J., Joseph R. Keebler, Rosemarie Fernandez, Elizabeth H. Lazzara, and Alex Chaparro. "Over Triage: Injury Classification Mistake or Hindsight Bias?" Proceedings of the International Symposium on Human Factors and Ergonomics in Health Care 11, no. 1 (October 2022): 7–12. http://dx.doi.org/10.1177/2327857922111001.

Abstract:
Patient triage is a critical stage in providing patients with the appropriate level of care required. Multiple metrics are considered in determining appropriate triage at the time of assessment. Due to the complexity of healthcare intervention, patients are often under- or over-triaged. Initiatives to reduce incorrect triages have been developed and implemented. These initiatives, however, may be based on hindsight bias and subsequently result in inaccurate assessments of triage accuracy and lead to improper triage-based education initiatives. This submission proposes the application of the SEIPS framework as a method of mitigating challenges introduced in the triage accuracy assessments due to this potential hindsight bias.
40

Cooney, Carisa M., Pathik Aravind, C. Scott Hultman, Kristen P. Broderick, Robert A. Weber, Sebastian Brooke, Damon S. Cooney, and Scott D. Lifchez. "An Analysis of Gender Bias in Plastic Surgery Resident Assessment." Journal of Graduate Medical Education 13, no. 4 (August 1, 2021): 500–506. http://dx.doi.org/10.4300/jgme-d-20-01394.1.

Abstract:
ABSTRACT Background Previous studies have shown men and women attending physicians rate or provide operating room (OR) autonomy differently to men and women residents, with men attendings providing higher ratings and more OR autonomy to men residents. Particularly with the advent of competency-based training in plastic surgery, differential advancement of trainees influenced by gender bias could have detrimental effects on resident advancement and time to graduation. Objective We determined if plastic surgery residents are assessed differently according to gender. Methods Three institutions' Operative Entrustability Assessment (OEA) data were abstracted from inception through November 2018 from MileMarker, a web-based program that stores trainee operative skill assessments of CPT-coded procedures. Ratings are based on a 5-point scale. Linear regression with postgraduate year adjustment was applied to all completed OEAs to compare men and women attendings' assessments of men and women residents. Results We included 8377 OEAs completed on 64 unique residents (25% women) by 51 unique attendings (29% women): men attendings completed 83% (n = 6972; 5859 assessments of men residents; 1113 of women residents) and women attendings completed 17% (n = 1405; 1025 assessments of men residents; 380 of women residents). Adjusted analysis showed men attendings rated women residents lower than men residents (P < .001); scores by women attendings demonstrated no significant difference (P = .067). Conclusions Our dataset including 4.5 years of data from 3 training programs showed men attendings scored women plastic surgery residents lower than their men counterparts.
41

Brooks, Elizabeth N., and Christopher M. Legault. "Retrospective forecasting — evaluating performance of stock projections for New England groundfish stocks." Canadian Journal of Fisheries and Aquatic Sciences 73, no. 6 (June 2016): 935–50. http://dx.doi.org/10.1139/cjfas-2015-0163.

Abstract:
Projections are used to explore scenarios for catch advice and rebuilding and are an important tool for sustainably managing fisheries. We tested each projection specification for 12 groundfish stocks in the Northwest Atlantic to identify sources of bias and evaluate techniques for reducing bias. Projections were made from assessments using virtual population analysis (VPA) with 1–7 years of recent data removed from the full time series and were then compared with results from a VPA assessment on the full time series of data. The main source of bias in projections was the assessment model estimates of the numbers at age in the terminal model year + 1 (Na,T+1). Recruitment was responsible for more bias in projections beyond 3 years, when population numbers begin to be dominated by cohorts that were statistically generated. Retrospective analysis was performed and several adjustment factors to reduce bias were tested. Even after adjusting for bias, the remaining bias in projections was non-negligible. The direction of bias generally resulted in projected spawning stock biomass (SSB) and catch being overestimated, and the bias in catch was nearly always larger than in SSB. Scientists need to clearly communicate the direction and magnitude of this bias, managers need to consider this additional uncertainty when specifying future catch limits, and both scientists and managers need to develop more robust control rules so that objectives are achieved.
42

Socha, J., N. C. Coops, and W. Ochal. "Assessment of age bias in site index equations." iForest - Biogeosciences and Forestry 9, no. 3 (June 1, 2016): 402–8. http://dx.doi.org/10.3832/ifor1548-008.

43

Walsh, Danielle S. "Proceed, With Caution: Unconscious Bias in Technical Assessment." Journal of Graduate Medical Education 13, no. 5 (October 1, 2021): 673–74. http://dx.doi.org/10.4300/jgme-d-21-00800.1.

44

Marshall, Iain J., Joel Kuiper, and Byron C. Wallace. "Automating Risk of Bias Assessment for Clinical Trials." IEEE Journal of Biomedical and Health Informatics 19, no. 4 (July 2015): 1406–12. http://dx.doi.org/10.1109/jbhi.2015.2431314.

45

Kuo, Yong-Fang, James E. Montie, and Vahakn B. Shahinian. "Reducing Bias in the Assessment of Treatment Effectiveness." Medical Care 50, no. 5 (May 2012): 374–80. http://dx.doi.org/10.1097/mlr.0b013e318245a086.

46

Allen, Lyle M., Grant L. Iverson, and Paul Green. "Computerized Assessment of Response Bias in Forensic Neuropsychology." Journal of Forensic Neuropsychology 3, no. 1-2 (February 18, 2003): 205–25. http://dx.doi.org/10.1300/j151v03n01_02.

47

Zilberberg, Marya D. "Assessment of Reporting Bias for Clostridium difficile Hospitalizations, United States." Emerging Infectious Diseases 14, no. 8 (August 2008): 1334. http://dx.doi.org/10.3201/eid1408.080446.

48

Houssa, Marine, Nathalie Nader-Grosbois, and Alexandra Volckaert. "Assessment of Hostile Attribution Bias in Early Childhood." Psychology 9, no. 5 (2018): 958–76. http://dx.doi.org/10.4236/psych.2018.95060.

49

Posavac, Steven S., J. Joško Brakus, Shailendra Pratap Jain, and Maria L. Cronley. "Selective assessment and positivity bias in environmental valuation." Journal of Experimental Psychology: Applied 12, no. 1 (2006): 43–49. http://dx.doi.org/10.1037/1076-898x.12.1.43.

50

Simon, Charlann S. "Limiting Bias in the Assessment of Bilingual Students." Journal of Childhood Communication Disorders 16, no. 1 (May 1994): 53–55. http://dx.doi.org/10.1177/152574019401600109.
