Academic literature on the topic 'External validity bias'

Create accurate references in APA, MLA, Chicago, Harvard, and other citation styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'External validity bias.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "External validity bias"

1

Lei, Yang, James C. Bezdek, Simone Romano, Nguyen Xuan Vinh, Jeffrey Chan, and James Bailey. "Ground truth bias in external cluster validity indices." Pattern Recognition 65 (May 2017): 58–70. http://dx.doi.org/10.1016/j.patcog.2016.12.003.

2

Andrews, Isaiah, and Emily Oster. "A simple approximation for evaluating external validity bias." Economics Letters 178 (May 2019): 58–62. http://dx.doi.org/10.1016/j.econlet.2019.02.020.

3

Cipriani, Andrea, Marianna Purgato, and Corrado Barbui. "Why internal and external validity of experimental studies are relevant for clinical practice?" Epidemiologia e Psichiatria Sociale 18, no. 2 (June 2009): 101–3. http://dx.doi.org/10.1017/s1121189x00000968.

Abstract:
In randomised controlled trials (RCTs) there are two types of validity: internal validity and external validity. Internal validity refers to the extent to which the observed difference between groups can be correctly attributed to the intervention under investigation. In other words, it is the extent to which the design and conduct of the trial eliminate error. Internal validity might be threatened by two types of errors: systematic error (also called bias) and chance error (also called random error or statistical error) (Keirse & Hanssens, 2000). Systematic error, or bias, may be the consequence of erroneous ways of collecting, analysing and interpreting data. This may produce differences between treatments that are not real, with an overestimation or an underestimation of the true beneficial or harmful effect of an intervention (Juni et al., 2001). In RCTs there are four types of bias: selection bias (when the groups differ in baseline characteristics because of the way participants are selected), performance bias (when the care provided to the trial participants differs systematically between the experimental and control group), detection bias (when there are systematic differences in outcome assessment), and attrition bias (when the loss of participants from the study systematically differs between the experimental and control group). By contrast, chance error, or statistical error, is due to outcome variability that may arise by chance alone. Studies with small sample sizes are more likely to incur this type of error than studies with large sample sizes. Thus, the risk of random error may be minimised by recruiting sufficiently large samples of patients.
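The abstract's closing point, that random error shrinks as the sample grows while bias does not, can be illustrated with a small simulation. This is a hypothetical sketch with made-up numbers, not code from the cited paper: each simulated two-arm trial is unbiased by construction, so all scatter in its effect estimate is chance error.

```python
import random
import statistics

def trial_effect_estimate(n, true_effect=0.5, rng=None):
    """Simulate one two-arm trial with n patients per arm and return
    the estimated treatment effect (difference in group means)."""
    rng = rng or random.Random()
    control = [rng.gauss(0.0, 1.0) for _ in range(n)]
    treated = [rng.gauss(true_effect, 1.0) for _ in range(n)]
    return statistics.mean(treated) - statistics.mean(control)

def estimate_spread(n, reps=2000, seed=42):
    """Standard deviation of the effect estimate across many simulated
    trials: pure random error, since each trial is unbiased by design."""
    rng = random.Random(seed)
    estimates = [trial_effect_estimate(n, rng=rng) for _ in range(reps)]
    return statistics.stdev(estimates)
```

With 20 patients per arm the estimates scatter widely around the true effect of 0.5; with 500 per arm the scatter is several times smaller, while the average estimate stays near 0.5 in both cases. Larger samples reduce random error, but no amount of extra recruitment would remove a systematic bias.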
4

Mayeda, Elizabeth Rose, Eleanor Hayes-Larson, and Hailey Banack. "Who’s in and Who’s Out? Selection Bias in Aging Research." Innovation in Aging 4, Supplement_1 (December 1, 2020): 822. http://dx.doi.org/10.1093/geroni/igaa057.2998.

Abstract:
Selection bias presents a major threat to both internal and external validity in aging research. “Selection bias” refers to sample selection processes that lead to statistical associations in the study sample that are biased estimates of causal effects in the population of interest. These processes can lead to: (1) results that do not generalize to the population of interest (threat to external validity) or (2) biased effect estimates (associations that do not represent causal effects for any population, including the people in the sample; a threat to internal validity). In this presentation, we give an overview of selection bias in aging research. We will describe processes that can give rise to selection bias, highlight why they are particularly pervasive in this field, and present several examples of selection bias in aging research. We end with a brief summary of strategies to prevent and correct for selection bias in aging research.
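The second threat described in this abstract, a selection process producing an association that is causal for no population, can be shown with a toy simulation (hypothetical numbers, not from the presentation): exposure and outcome are independent in the population, but each raises the chance of entering the study sample, a Berkson-type mechanism.

```python
import random

def risk_difference(rows):
    """P(outcome | exposed) - P(outcome | unexposed)
    for a list of (exposed, outcome) boolean pairs."""
    exposed = [d for e, d in rows if e]
    unexposed = [d for e, d in rows if not e]
    return sum(exposed) / len(exposed) - sum(unexposed) / len(unexposed)

def simulate_selection_bias(n=100_000, seed=7):
    rng = random.Random(seed)
    # Exposure and outcome are independent coin flips,
    # so the true risk difference is zero.
    population = [(rng.random() < 0.5, rng.random() < 0.5) for _ in range(n)]
    # Selection depends on BOTH exposure and outcome: a person
    # enters the sample if either is present.
    sample = [(e, d) for e, d in population if e or d]
    return risk_difference(population), risk_difference(sample)
```

In the full population the risk difference is essentially zero, but in the selected sample every unexposed person is a case by construction, so the estimated risk difference is strongly negative: an association with no causal counterpart in any population.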
5

Moore, Randall P., and W. Douglas Robinson. "Artificial Bird Nests, External Validity, and Bias in Ecological Field Studies." Ecology 85, no. 6 (June 2004): 1562–67. http://dx.doi.org/10.1890/03-0088.

6

Bell, Stephen H., Robert B. Olsen, Larry L. Orr, and Elizabeth A. Stuart. "Estimates of External Validity Bias When Impact Evaluations Select Sites Nonrandomly." Educational Evaluation and Policy Analysis 38, no. 2 (June 2016): 318–35. http://dx.doi.org/10.3102/0162373715617549.

7

Khorsan, Raheleh, and Cindy Crawford. "External Validity and Model Validity: A Conceptual Approach for Systematic Review Methodology." Evidence-Based Complementary and Alternative Medicine 2014 (2014): 1–12. http://dx.doi.org/10.1155/2014/694804.

Abstract:
Background. Evidence rankings do not consider equally internal (IV), external (EV), and model validity (MV) for clinical studies including complementary and alternative medicine/integrative health care (CAM/IHC) research. This paper describes this model and offers an EV assessment tool (EVAT©) for weighing studies according to EV and MV in addition to IV. Methods. An abbreviated systematic review methodology was employed to search, assemble, and evaluate the literature that has been published on EV/MV criteria. Standard databases were searched for keywords relating to EV, MV, and bias-scoring from inception to Jan 2013. Tools identified and concepts described were pooled to assemble a robust tool for evaluating these quality criteria. Results. This study assembled a streamlined, objective tool to incorporate into the evaluation of the quality of EV/MV research that is more sensitive to CAM/IHC research. Conclusion. Improved reporting on EV can produce and provide information that will help guide policy makers, public health researchers, and other scientists in the selection, development, and improvement of their research-tested interventions. Overall, clinical studies with high EV have the potential to provide the most useful information about “real-world” consequences of health interventions. It is hoped that this novel tool, which considers IV, EV, and MV on equal footing, will better guide clinical decision making.
8

Huddleston, R. Joseph. "Think Ahead: Cost Discounting and External Validity in Foreign Policy Survey Experiments." Journal of Experimental Political Science 6, no. 2 (November 19, 2018): 108–19. http://dx.doi.org/10.1017/xps.2018.22.

Abstract:
This paper considers the implications of construal level theory in the context of survey experiments probing foreign policy opinion formation. Psychology research demonstrates that people discount the long-term consequences of decisions, thinking about distal or hypothetical events more abstractly than immediate scenarios. I argue that this tendency introduces a bias into survey experiments on foreign policy opinion. Respondents reasoning about an impending military engagement are likelier to consider its costs than are those reasoning in the abstract hypothetical environment. I provide evidence of this bias by replicating a common audience costs experimental design and introducing a prompt to consider casualties. I find that priming respondents to articulate their expectations about casualties in a foreign intervention reduces support and dampens the experimental effect, thereby cutting the estimated absolute audience cost substantially. This result suggests a gap between how survey respondents approach hypothetical and real situations of military intervention.
9

He, Yuanda, Qi Zhou, Sheng Lin, and Liping Zhao. "Validity Evaluation Method Based on Data Driving for On-Line Monitoring Data of Transformer under DC-Bias." Sensors 20, no. 15 (August 3, 2020): 4321. http://dx.doi.org/10.3390/s20154321.

Abstract:
The DC-bias monitoring device of a transformer is easily affected by external noise interference, equipment aging, and communication failure, which makes it difficult to guarantee the validity of monitoring data and causes great problems for future data analysis. For this reason, this paper proposes a validity evaluation method based on data driving for the on-line monitoring data of a transformer under DC-bias. First, the variation rule and threshold range of monitoring data for neutral point DC, vibration, and noise of the transformer under different working conditions are obtained through statistical analysis. Then, the data validity criterion of DC bias monitoring data is proposed to achieve a comprehensive evaluation of data validity based on data threshold, continuity, impact, and correlation. In addition, case studies are carried out on the real measured data of the DC bias magnetic monitoring system of a regional power grid by using this evaluation method. The results show that the proposed method can systematically and comprehensively evaluate the validity of the DC bias monitoring data and can judge whether the monitoring device fails to a certain extent.
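The threshold and continuity parts of the criterion this abstract describes might be sketched as follows. This is a minimal hypothetical illustration: the function name, parameters, and rules are assumptions, not the authors' actual criterion, which also weighs impact and correlation.

```python
def validity_flags(readings, low, high, max_jump):
    """Flag each monitoring sample as valid if it lies within
    [low, high] (threshold check) and does not jump more than
    max_jump from the last in-range sample (continuity check)."""
    flags, prev = [], None
    for x in readings:
        in_range = low <= x <= high
        continuous = prev is None or abs(x - prev) <= max_jump
        flags.append(in_range and continuous)
        if in_range:
            prev = x  # only in-range samples anchor the continuity check
    return flags
```

For example, `validity_flags([1.0, 1.1, 9.0, 1.2], low=0.0, high=5.0, max_jump=0.5)` flags the out-of-range spike while keeping the plausible readings around it.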
10

Larrabee, Glenn J. "Performance Validity and Symptom Validity in Neuropsychological Assessment." Journal of the International Neuropsychological Society 18, no. 4 (May 8, 2012): 625–30. http://dx.doi.org/10.1017/s1355617712000240.

Abstract:
Failure to evaluate the validity of an examinee's neuropsychological test performance can alter prediction of external criteria in research investigations, and in the individual case, result in inaccurate conclusions about the degree of impairment resulting from neurological disease or injury. The terms performance validity referring to validity of test performance (PVT), and symptom validity referring to validity of symptom report (SVT), are suggested to replace less descriptive terms such as effort or response bias. Research is reviewed demonstrating strong diagnostic discrimination for PVTs and SVTs, with a particular emphasis on minimizing false positive errors, facilitated by identifying performance patterns or levels of performance that are atypical for bona fide neurologic disorder. It is further shown that false positive errors decrease, with a corresponding increase in the positive probability of malingering, when multiple independent indicators are required for diagnosis. The rigor of PVT and SVT research design is related to a high degree of reproducibility of results, and large effect sizes of d=1.0 or greater, exceeding effect sizes reported for several psychological and medical diagnostic procedures. (JINS, 2012, 18, 1–7)

Dissertations / Theses on the topic "External validity bias"

1

Natter, Martin, and Markus Feurstein. "Correcting for CBC model bias. A hybrid scanner data - conjoint model." SFB Adaptive Information Systems and Modelling in Economics and Management Science, WU Vienna University of Economics and Business, 2001. http://epub.wu.ac.at/880/1/document.pdf.

Abstract:
Choice-Based Conjoint (CBC) models are often used for pricing decisions, especially when scanner data models cannot be applied. To date, it is unclear how CBC models perform in terms of forecasting real-world shop data. In this contribution, we measure the performance of a Latent Class CBC model not by means of an experimental hold-out sample but via aggregate scanner data. We find that the CBC model does not accurately predict real-world market shares, thus leading to wrong pricing decisions. In order to improve its forecasting performance, we propose a correction scheme based on scanner data. Our empirical analysis shows that the hybrid method improves the performance measures considerably. (author's abstract)
Series: Report Series SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
2

Worges, Matt. "Assessment of bias, inter-rater reliability, and external validity in the use of mobile phone surveys for monitoring bed net coverage and use indicators in Tanzania." Tulane University, 2020.

Abstract:
Introduction: Mass distribution of insecticide-treated nets (ITNs) is a core malaria prevention strategy that has proven to be efficacious and cost-effective in low- and middle-income countries (LMIC). Monitoring ITN coverage, use, and access has relied on household surveys, which are expensive and time-consuming. Recently, mobile phone survey (MPS) methodologies have emerged as a comparatively inexpensive alternative to large-scale population-based household surveys and are becoming increasingly attractive considering the rapid growth trend of mobile phone ownership in LMIC. The overall research objective of the current body of work is to determine if interactive voice response (IVR) MPS can serve to rapidly and reliably monitor ITN indicators in LMIC. Methods: Data collection used either household surveys or IVR MPS, all of which included a module on bed net ownership, access, and use. The first study aim analyzed data from the last five nationally representative household surveys conducted in Tanzania in order to assess and quantify the potential for bias as a result of using MPS over traditional household surveys in estimating bed net coverage indicators. The conceptual design compares surveyed households reporting mobile phone ownership, and thus the potential for participation in an MPS, against all other households regardless of mobile phone ownership over the course of a 10-year period. The second study aim was designed as an individual-level test of inter-rater reliability of bed net indicator estimates between a face-to-face household survey and a follow-up IVR MPS to these same households. The third study aim was designed as a population-level test of external validity comparing ITN coverage indicator results from a nationally representative random-digit dial (RDD) IVR MPS and the malaria module from a nationally representative household survey.
Results: Household mobile phone ownership increased by over 50 percentage points, from 28.1% in 2007-08 to 81.5% in 2017. In more recent years, survey results show that bias in measuring ITN coverage indicators is minimal under a scenario that compares estimates calculated from DHS surveys for all households against those households reporting mobile phone ownership. For the four ITN coverage indicators assessed using the 2017 MIS data, national-level measures of bias did not exceed a 2.5-percentage-point difference for mobile phone-owning households compared to the overall sample of households. Further, regional measures of bias for these same indicators rarely exceeded ±3 percentage points in 2017. The second study aim, which compared bed net indicator estimates between the small-scale household survey and a follow-up MPS, found that agreement between survey modalities was variable depending on the indicator, but was highest for household ownership of at least one bed net of any type (Gwet's AC1 = 0.8). There was low agreement for indicators calculated from counts, reflected in the low concurrent validity of key data elements used to calculate bed net use and access indicators. The third study aim, comparing bed net indicator estimates from a national household survey and an IVR RDD survey, found that the external validity was variable but, in general, the RDD MPS tended to underestimate bed net indicators at the national level. Differences in bed net indicator estimates ranged from 3 to 23 percentage points, but overall, it appeared that indicators non-specific to net treatment status demonstrated less bias in measurement through the RDD MPS when compared against the nationally representative household survey. Conclusions: According to estimates, mobile phone ownership has increased drastically in Tanzania since 2007, suggesting that MPS could presently be used to track population-level indicators of ITN coverage, among others.
The IVR MPS methodology we applied has the potential to serve as a mechanism that can accurately estimate certain bed net indicators, primarily those that would make use of data elements derived from binary response options. Their use could be scaled to much larger RDD surveys to collect discrete packets of information. At a total cost of approximately US$22,000 (2017 USD) to obtain nationally and regionally representative bed net indicator estimates, the cost-for-information benefit is promising, but more research needs to be done to optimize question sets in order to ensure RDD survey results are able to repeatedly track with face-to-face household survey results.

Books on the topic "External validity bias"

1

Elwood, Mark. Selection of subjects for study. Oxford University Press, 2017. http://dx.doi.org/10.1093/med/9780199682898.003.0005.

Abstract:
This chapter discusses principles of subject selection and defines target, source, eligible, entrant, and participant populations. Selection issues and selection bias may affect internal validity, external validity, and modify the hypothesis being tested. It shows methods to reduce selection biases and to define participation rate and response rate. Principles for the selection of the exposed or test group and the comparison groups are shown for all studies. In randomised trials, intention-to-treat analysis, contamination, blinding, data monitoring, stopping rules, the CONSORT format, and trial registration are discussed. For observational studies, it shows the purpose of control groups, issues of definition and choice of controls, institutional and community controls, and frequency and individual matching. Many examples are given.
2

Elwood, Mark. The diagnosis of causation. Oxford University Press, 2017. http://dx.doi.org/10.1093/med/9780199682898.003.0010.

Abstract:
This chapter brings the book together, showing the overall scheme of assessment of causation for one study or in many studies, based on 20 questions in five sections. The scheme includes describing the key features of the study; then assessing observation bias, confounding, and chance variation. The chapter presents the consideration of the positive features of causation: the Bradford Hill guidelines of time relationship, strength, dose-response, consistency, and specificity, leading to an assessment of internal validity. External validity (generalisability) relates to the eligible, source, and target populations. Comparisons with other studies assess consistency and specificity further, but also plausibility and coherence, including analogy and experimental evidence. The chapter shows the overall decision process. Applications to non-causal associations, other types of study, and in designing a study are discussed. In part two, the chapter shows applications of causal reasoning to clinical care and health policy, including hierarchies of evidence, methods used by important groups, and the GRADE system.
3

Elwood, Mark. Critical Appraisal of Epidemiological Studies and Clinical Trials. Oxford University Press, 2017. http://dx.doi.org/10.1093/med/9780199682898.001.0001.

Abstract:
This book presents a system of critical appraisal applicable to clinical, epidemiological and public health studies and to many other fields. It assumes no prior knowledge. The methods are relevant to students, practitioners and policymakers. The book shows how to assess if the results of one study or of many studies show a causal effect. The book discusses study designs: randomised and non-randomised trials, cohort studies, case-control studies, and surveys, showing the presentation of results including person-time and survival analysis, and issues in the selection of subjects. The system shows how to describe a study, how to detect and assess selection biases, observation bias, confounding, and chance variation, and how to assess internal validity and external validity (generalisability). Statistical methods are presented assuming no previous knowledge, and showing applications to each study design. Positive features of causation including strength, dose-response, and consistency are discussed. The book shows how to do systematic reviews and meta-analyses, and discusses publication bias. Systems of assessing all evidence are shown, leading to a general method of critical appraisal based on 20 key questions in five groups, which can be applied to any type of study or any topic. Six chapters show the application of this method to randomised trials, prospective and retrospective cohort studies, and case-control studies. An appendix summarises key statistical methods, each with a worked example. Each main chapter has self-test questions, with answers provided.
