Journal articles on the topic 'Correct counterfactual'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Correct counterfactual.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Ippolito, Michela. "Counterfactuals and Conditional Questions under Discussion." Semantics and Linguistic Theory 23 (August 24, 2013): 194. http://dx.doi.org/10.3765/salt.v23i0.2659.

Full text
Abstract:
In this paper I investigate the issue of the context-dependence of counterfactual conditionals and how the context constrains similarity in selecting the right set of worlds necessary in order to arrive at their correct truth-conditions. I will review previous proposals and conclude that the puzzle of how we measure similarity and thus resolve the context-dependence of counterfactuals remains unsolved. I will then consider an alternative based on the idea of discourse structure and the concept of a question under discussion.
APA, Harvard, Vancouver, ISO, and other styles
2

Mizuno, Teruyuki, and Stefan Kaufmann. "Past-as-Past in counterfactual desire reports: a view from Japanese." Semantics and Linguistic Theory 1 (December 29, 2022): 83. http://dx.doi.org/10.3765/salt.v1i0.5345.

Abstract:
The semantic contribution of Fake Past in counterfactual expressions has been actively debated in recent semantic literature. This study deepens our current understanding of this natural language phenomenon by digging into the behavior of Past tense in Japanese counterfactual desire reports. We show that the Past-as-Past approach to Fake Past makes correct predictions about its semantic behavior.
3

Walker, Andreas, and Maribel Romero. "Counterfactual Donkey Sentences: A Strict Conditional Analysis." Semantics and Linguistic Theory 25 (November 17, 2015): 288. http://dx.doi.org/10.3765/salt.v25i0.3056.

Abstract:
We explore a distinction between ‘high’ and ‘low’ readings in counterfactual donkey sentences and observe three open issues in the current literature on these sentences: (i) van Rooij (2006) and Wang (2009) make different empirical predictions with respect to the availability of ‘high’ donkey readings. We settle this question in favour of van Rooij’s (2006) analysis. (ii) This analysis overgenerates with respect to weak readings in so-called ‘identificational’ donkey sentences. We argue that pronouns in these sentences should not be analysed as donkey pronouns, but as concealed questions or as part of a cleft. (iii) The analysis also undergenerates with respect to NPI licensing in counterfactual antecedents. We propose a strict conditional semantics for counterfactual donkey sentences that derives the correct licensing facts.
4

King, Gary, and Langche Zeng. "Empirical versus Theoretical Claims about Extreme Counterfactuals: A Response." Political Analysis 17, no. 1 (2009): 107–12. http://dx.doi.org/10.1093/pan/mpn010.

Abstract:
In response to the data-based measures of model dependence proposed in King and Zeng (2006), Sambanis and Michaelides (2008) propose alternative measures that rely upon assumptions untestable in observational data. If these assumptions are correct, then their measures are appropriate and ours, based solely on the empirical data, may be too conservative. If instead, and as is usually the case, the researcher is not certain of the precise functional form of the data generating process, the distribution from which the data are drawn, and the applicability of these modeling assumptions to new counterfactuals, then the data-based measures proposed in King and Zeng (2006) are much preferred. After all, the point of model dependence checks is to verify empirically, rather than to stipulate by assumption, the effects of modeling assumptions on counterfactual inferences.
5

Charman, Steve D., and Gary L. Wells. "Can eyewitnesses correct for external influences on their lineup identifications? The actual/counterfactual assessment paradigm." Journal of Experimental Psychology: Applied 14, no. 1 (March 2008): 5–20. http://dx.doi.org/10.1037/1076-898x.14.1.5.

6

PICARD, MARC. "/s/-deletion in Old French and the aftermath of compensatory lengthening." Journal of French Language Studies 14, no. 1 (March 2004): 1–7. http://dx.doi.org/10.1017/s0959269504001371.

Abstract:
It has recently been argued by Gess (2001) that the long vowels resulting from the compensatory lengthening that emerged in the wake of preconsonantal /s/-deletion in Old French had all been shortened by the sixteenth century. Given that many of these long vowels are still present in Canadian French, this conclusion cannot possibly be correct. What will be shown here is precisely how Gess' methodology led him to obtain such counterfactual results.
7

Maria Doose, Anna. "Methods for Calculating Cartel Damages: A Survey." Zeitschrift für Wettbewerbsrecht 12, no. 3 (September 11, 2014): 282–99. http://dx.doi.org/10.15375/zwer-2014-0304.

Abstract:
The paper focuses on the various methods used to quantify cartel damages, which have become increasingly important as private damage suits in the aftermath of antitrust litigation increase. The implementation of these approaches is embedded in current legal environments with regard to the estimation methods used to quantify cartel damages. A direct comparison shows that difference methods are convincing due to their simplicity, the plausibility of their results, and their replicability. Cost-based approaches have hurdles to overcome but are still easy to conduct and comparatively more accurate. In contrast, price prediction takes market changes into account, and market simulation presents the most sophisticated and flexible approach, provided that its assumptions are correct and correctly implemented, and therefore approximates the „real world“ counterfactual as closely as possible.
8

Mata, André. "Further Tests of the Metacognitive Advantage Model." Psihologijske teme 28, no. 1 (2019): 115–24. http://dx.doi.org/10.31820/pt.28.1.6.

Abstract:
This study tested whether people have an accurate sense of how good their reasoning is, as measured by their confidence in their responses, and how good they feel after they give those responses. First, incorrect responders were unjustifiably confident in their responses. However, correct responders were even more confident, and this confidence boost was found to come from their awareness of alternative solutions that are intuitive but incorrect. An affect measure revealed the same pattern: correct responders felt better, and incorrect responders felt worse, after they solved reasoning problems, but this was only the case when post-reasoning affect was measured after participants were instructed to think of alternative solutions. Implications are discussed for the possibility of implicit error monitoring, the role of counterfactual thinking in meta-reasoning, and the use of affective measures in meta-reasoning research.
9

Ludema, Rodney, and Mark Wu. "What is Price Suppression in Abnormal Economic Times? Reflections in Light of the Russia–Commercial Vehicles Ruling." World Trade Review 19, no. 2 (April 2020): 182–95. http://dx.doi.org/10.1017/s1474745620000166.

Abstract:
This article discusses the ambiguity found in the WTO Anti-Dumping Agreement concerning price suppression analysis. Previous case law has established that investigating authorities undertaking to highlight price suppression must conduct a counterfactual analysis. This article examines the difficulties that investigating authorities face in performing such an analysis when the investigating period overlaps with a financial crisis or other abnormal economic circumstances. It suggests that the Appellate Body was correct to require consideration of how profit margins and costs are affected by market circumstances, but ought to pay further attention to the behavior of firms in imperfectly competitive markets.
10

Tešić, Marko, and Ulrike Hahn. "Can counterfactual explanations of AI systems’ predictions skew lay users’ causal intuitions about the world? If so, can we correct for that?" Patterns 3, no. 12 (December 2022): 100635. http://dx.doi.org/10.1016/j.patter.2022.100635.

11

Kearsley, Aaron, Nellie Lew, and Clark Nardinelli. "A Retrospective and Commentary on FDA’s Bar Code Rule." Journal of Benefit-Cost Analysis 9, no. 3 (2018): 496–518. http://dx.doi.org/10.1017/bca.2018.11.

Abstract:
The Food and Drug Administration (FDA) published a final regulation in 2004 that requires pharmaceutical manufacturers to place linear bar codes on certain human drug and biological products. The intent was that bar codes would be part of a system in which healthcare professionals would use bar code scanning equipment and software to electronically verify, against a patient's medication regimen, that the correct medication is being given to the patient before it is administered, which could ultimately reduce medication errors. In the 2004 prospective regulatory impact analysis, FDA anticipated that the rule would stimulate widespread adoption of bar code medication administration technology among hospitals and other facilities, thereby generating public health benefits in the form of averted medication errors. FDA estimated that annualized net benefits would be $5.3 billion. In this retrospective analysis, we reassess the costs and benefits of the bar code rule and our original model and assumptions. Employing the most recent data available on actual adoption rates of bar code medication administration technology since 2004 and other key determinants of the costs and benefits, we examine the impacts of the bar code rule since its implementation and identify approaches to improve the accuracy of future analyses. In this retrospective study, we use alternative models of health information technology diffusion to create counterfactual scenarios against which we compare the benefits and costs of the bar code rule.
The magnitudes of the costs and benefits of the 2004 rule are sensitive to assumptions about the counterfactual technology adoption rate, with the upper-bound range of calculated annualized net benefits between $2.7 billion and $6.6 billion depending on the baseline scenario considered.
Disclaimer: The findings, interpretations, and conclusions expressed in this article are those of the authors in their private capacities, and they do not represent the views of the Food and Drug Administration.
12

Westendorff, Stephanie, Daniel Kaping, Stefan Everling, and Thilo Womelsdorf. "Prefrontal and anterior cingulate cortex neurons encode attentional targets even when they do not apparently bias behavior." Journal of Neurophysiology 116, no. 2 (August 1, 2016): 796–811. http://dx.doi.org/10.1152/jn.00027.2016.

Abstract:
Neurons in anterior cingulate and prefrontal cortex (ACC/PFC) carry information about behaviorally relevant target stimuli. This information is believed to affect behavior by exerting a top-down attentional bias on stimulus selection. However, attention information may not necessarily be a biasing signal but could be a corollary signal that is not directly related to ongoing behavioral success, or it could reflect the monitoring of targets similar to an eligibility trace useful for later attentional adjustment. To test this suggestion we quantified how attention information relates to behavioral success in neurons recorded in multiple subfields in macaque ACC/PFC during a cued attention task. We found that attention cues activated three separable neuronal groups that encoded spatial attention information but were differently linked to behavioral success. A first group encoded attention targets on correct and error trials. This group spread across ACC/PFC and represented targets transiently after cue onset, irrespective of behavior. A second group encoded attention targets on correct trials only, closely predicting behavior. These neurons were prevalent not only in lateral prefrontal but also in anterior cingulate cortex. A third group encoded target locations only on error trials. This group was evident in ACC and PFC and was activated in error trials “as if” attention was shifted to the target location but without evidence for such behavior. These results show that only a portion of neuronally available information about attention targets biases behavior. We speculate that additionally a unique neural subnetwork encodes counterfactual attention information.
13

Castro, Jairo Guillermo Isaza. "Occupational segregation, selection effects and gender wage differences: evidence from urban Colombia." APUNTES DEL CENES 33, no. 57 (September 22, 2014): 73. http://dx.doi.org/10.19053/22565779.2905.

Abstract:
This paper assesses the effects of occupational segregation on the gender wage gap in urban Colombia between 1986 and 2000. The empirical methodology involves a two-step procedure whereby the occupational distributions of workers by gender are modelled using a multinomial logit model in the first stage. In the second stage, the multinomial logit estimates are used not only to derive a counterfactual occupational distribution of women in the absence of workplace discrimination but also to correct for selectivity bias in the wage equations for each occupational category using the procedure suggested by Lee (1983). Besides the explained and unexplained components in conventional decompositions of the gender wage gap, this methodology differentiates between the justified and unjustified effects of the gender allocation of workers across occupational categories. The results for urban Colombia indicate that controlling for selectivity bias at the occupational category level is found to be relevant in all years reviewed in this study. They also suggest that a changing composition of the female labour supply in terms of unobservables (i.e., ability and motivation) is playing a role in the dramatic reduction of the observed wage gap.
14

Senadza, Bernardin, Edward Nketiah-Amponsah, and Samuel Ampaw. "Nonfarm diversification and the well-being of rural farm households in developing countries: Evidence from Ghana using new dataset." Review of Economics 69, no. 3 (December 19, 2018): 207–29. http://dx.doi.org/10.1515/roe-2018-0002.

Abstract:
This paper examines the impact of participation in both farm and nonfarm activities on both household consumption expenditure per adult equivalent and household per capita income in rural Ghana. The objective is to ascertain whether the results are sensitive to the choice of well-being measure. We use a nationally representative dataset on 8,059 rural farm households collected in 2012/13. In order to account for potential selectivity and endogeneity biases, which previous studies failed to correct for, we adopt the endogenous switching regression (ESR) estimation technique. We find diversified households to be systematically different from their undiversified counterparts in terms of socioeconomic and demographic characteristics, thus justifying the empirical method used. Our results indicate a higher observed mean consumption for the diversified sub-sample compared to its counterfactual, implying that households participating in nonfarm enterprise activities in addition to farming have greater mean consumption than households engaged solely in farming. Similar conclusions are reached when income is used instead as the well-being indicator. Our findings thus indicate that the well-being implication of farm-nonfarm diversification is insensitive to the choice of well-being measure.
15

Hubin, Donald C. "Non-tuism." Canadian Journal of Philosophy 21, no. 4 (December 1991): 441–68. http://dx.doi.org/10.1080/00455091.1991.10717256.

Abstract:
Contractarians view justice (or, more ambitiously, all of morality) as being defined by a contract made by rational individuals. No one supposes that this contract is actual, and the fact that it is merely hypothetical raises a number of questions both about the assumptions under which it would be actual and about the force of hypothetical agreement that is contingent on these assumptions. Particular contractarian theories must specify the circumstances of the agreement and the endowments, beliefs, desires, and degree and type of rationality of the agents. How these issues are settled determines the force of the hypothetical agreement. The fact that ignorant people who desired only universal suffering would, under duress, agree to a certain principle gives us no reason to believe the principle is a correct moral principle or to think it rational to accept or act on it: some counterfactual assumptions undermine entirely the moral force of hypothetical agreement. On the other hand, to take people just as they are, with their current beliefs, desires, endowments, and all, is to endorse their ignorance and mistakes as well as any previous injustice that affects their bargaining power.
16

Bracco, Fabrizio, Cinzia Modafferi, and Luca Ferraris. "The role of media in community resilience: Hindsight bias in media narratives after the 2014 Genoa flood." Geopolitical, Social Security and Freedom Journal 1, no. 1 (November 1, 2018): 128–51. http://dx.doi.org/10.2478/gssfj-2018-0007.

Abstract:
Aim: A massive flood due to exceptional rainfall devastated the town of Genoa on 9 October 2014. Media reports focused on the disaster, its causes and the political accountabilities. Reading facts after the event is commonly biased by the hindsight perspective, and the aim of the paper is to investigate the amount and the potential effects of hindsight bias in terms of citizens' risk perception and community resilience. Method: We performed a qualitative analysis of the narratives in the national and local news reports during the aftermath to investigate occurrences of a blaming attitude and cognitive biases. Results: The results showed a considerable number of sentences that were focused on blaming the forecasters, the Civil Protection System, and the local administration. Many narratives were affected by hindsight bias and described the events as simple and linear chain reactions. This led to counterfactual biases, assuming that a simple intervention on a single factor could have prevented the tragic outcome. Conclusion: We claim that the biased nature of the media narratives could affect the citizens' risk perception and their attitude towards the institutions, increasing their exposure to future flood-related threats. We propose that appropriate language would generate correct cognitive frames and, therefore, safer behaviour.
17

Bach, Kent. "Newcomb's Problem: The $1,000,000 Solution." Canadian Journal of Philosophy 17, no. 2 (June 1987): 409–25. http://dx.doi.org/10.1080/00455091.1987.10716444.

Abstract:
The more you think about it, the more baffling Newcomb's Problem becomes. To most people, at first it is obvious which solution is correct (not that they agree on which one), but their confidence can be eroded easily. Only a puzzled few are torn between the two right from the start, and for years so was I. But at last, thanks to a certain meta-argument, one solution came to seem obvious to me. And yet, imagining myself actually faced with Newcomb's choice, I started to worry that I might experience just enough last-minute ambivalence to unsettle my confidence in that argument. Fortunately, I have found a strategy to ensure making the right choice when the chips are down. Not only is Newcomb's Problem puzzling in its own right, it is philosophically significant. The appeal of both solutions reflects a conflict between two plausible conceptions of rational choice. In making a decision, should one consider all of its probabilistic consequences or only its causal consequences? Each conception has its supporters, but some philosophers find them both defensible and see no hope of resolving the conflict. I think the conflict can be resolved, at least in the context of Newcomb's Problem, by properly assessing the relevant counterfactual conditionals.
18

Sánchez-Encalada, Sonia, Myrna Mar Talavera-Torres, Antonio R. Villa-Romero, Marcela Agudelo-Botero, and Rosa María Wong-Chew. "Impact Evaluation of An Interdisciplinary Educational Intervention to Health Professionals for the Treatment of Mild to Moderate Child Malnutrition in Mexico: A Difference-in-Differences Analysis." Healthcare 10, no. 12 (November 30, 2022): 2411. http://dx.doi.org/10.3390/healthcare10122411.

Abstract:
The prevalence of undernutrition in Mexican children younger than 5 years old has been 14% since 2006. There are clinical practice guidelines for mild to moderate malnutrition in children in the Mexican health system; however, they are not applied. In addition, the knowledge and practices of health professionals (HPs) to treat malnutrition in health centers are insufficient to perform adequate assessments and correct treatments. An impact evaluation of an interdisciplinary educational intervention was carried out on 78 HPs for the treatment of low-resource children with mild to moderate malnutrition, with 39 in the intervention group and 37 in the counterfactual group, estimated as the comparison group. A Food and Agriculture Organization (FAO)-validated questionnaire adapted to child malnutrition, covering knowledge, attitudes, and practices, was applied before, after, and 2 months after a malnutrition workshop. The difference-in-differences analysis showed that the educational intervention group had a significant improvement in knowledge, attitudes, and practices before and after the intervention (grades of 54.6 to 79.2, respectively, p = 0.0001), compared with the comparison group (grades of 79.2 and 53.4, respectively, p = 0.0001), which was maintained over two months (grades of 71.8 versus 49.8, respectively, p = 0.0001). The multivariate analysis showed that the probability of improving learning by 30% was 95-fold higher in the educational intervention group versus the comparison group, OR = 95.1 (95% CI 14.9–603.0), and this factor was independent of sex, age, education, or hospital position. Despite the availability of clinical practice guidelines for the assessment and treatment of child malnutrition, education in malnutrition for HPs is effective and needed to achieve a significant improvement in children's health.
19

Chiappa, Silvia. "Path-Specific Counterfactual Fairness." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 7801–8. http://dx.doi.org/10.1609/aaai.v33i01.33017801.

Abstract:
We consider the problem of learning fair decision systems from data in which a sensitive attribute might affect the decision along both fair and unfair pathways. We introduce a counterfactual approach to disregard effects along unfair pathways that does not incur the same loss of individual-specific information as previous approaches. Our method corrects observations adversely affected by the sensitive attribute, and uses these to form a decision. We leverage recent developments in deep learning and approximate inference to develop a VAE-type method that is widely applicable to complex nonlinear models.
20

Tucker, Aviezer. "Historiographic Counterfactuals and the Philosophy of Historiography." Journal of the Philosophy of History 10, no. 3 (November 17, 2016): 333–48. http://dx.doi.org/10.1163/18722636-12341340.

Abstract:
Philosophers and historians debate not only the correct analysis of historiographic counterfactuals and their possible utilities for historiography and its philosophy, but also whether they can be more than speculative. This introduction presents the articles in the special issue on historiographic counterfactuals, shows how they hang together, and identifies the main agreements and disagreements among the authors. Finally, it argues that the debate over historiographic counterfactuals now spills over into the debate about applied or practical historiography: what we can learn from historiography.
21

Nekhamkin, Valery, and Arkadiy Nekhamkin. "Occupational personnel selection during military operations (based on the memoirs of military leaders during the Great Patriotic War): socio-philosophical analysis." Socium i vlast 4 (2020): 82–93. http://dx.doi.org/10.22394/1996-0522-2020-4-82-93.

Abstract:
Introduction. Taking the Workers' and Peasants' Red Army of 1941–1945 as an example, the authors identify features of personnel selection in the army during military operations: the conditions, requirements, criteria, and qualities necessary for promotion to higher command positions. The aim of the study is to identify the mechanism of personnel selection in the armed forces during military operations. Methods. The authors use the following general scientific methods: modeling, structural and functional analysis, systemic and comparative analysis, and movement from the abstract to the concrete and from the concrete to the abstract. The authors make use of P.A. Sorokin's theory of vertical mobility. For further research of the problem, the methodology of synergetics and counterfactual modeling of the past can be involved. Scientific novelty of the research. The following existing concepts of the Red Army officers' dynamics in 1941–1945 are generalized: positive, negative, and moral-psychological selection. Their basic concepts and shortcomings are revealed. An initiative-intellectual concept of officer selection in war is formulated on the basis of P. Sorokin's theory of vertical mobility. The authors identify specific conditions, requirements, and criteria influencing the selection of commanders during the war. Results. A clear contrast between the conditions, requirements, and selection criteria in peacetime and wartime is given. The following specific criteria for selecting officers of the Red Army during the war are highlighted: the presence of combat experience; non-conformism; the ability to take personal responsibility; the ability to analyze and correct errors; non-standard thinking. Military action creates specific conditions that give rise to the selection criteria for commanders. Conclusions. The selection criteria for personnel in peacetime and wartime armies differ sharply. At the beginning of combat operations, stereotyped commanders prevailed. Such people were replaced by commanders able to go beyond the established canons and the orders of their superiors. The work presents a diagram showing the stages of selecting officers in a war: military actions → conditions → selection criteria → requirements → new qualities of a person adapted to combat operations. This leads to the success of military operations and, consequently, to the promotion of officers in rank and position.
22

Beebee, Helen. "Counterfactual Dependence and Broken Barometers: A Response to Flichman's Argument." Crítica (México D. F. En línea) 29, no. 86 (January 8, 1997): 107–19. http://dx.doi.org/10.22201/iifs.18704905e.1997.1065.

Abstract:
The article is a defence of David Lewis's counterfactual analysis of causation against an argument first presented by Eduardo Flichman. Flichman's argument involves a situation in which the following three events occur: p: an atmospheric pressure of 1000 mb; b: the barometer's functioning correctly; r: a barometer reading of 1000 mb. If Lewis's analysis is to succeed, the counterfactual formula (1) ~O(r) □→ ~O(p) must be false. But Lewis himself justifies the claim that (1) is false with the following observation: "If the reading had been higher, would there have been higher pressure? Or would the barometer have been malfunctioning? The second sounds better: a higher reading would have been an incorrect reading." Flichman infers from this assertion that Lewis is committed to: (2) ~O(r) □→ ~O(b). It follows from Lewis's analysis of causation that r caused b; and this, of course, is false. I argue that Flichman is mistaken in inferring that Lewis is committed to (2) from the fact that he makes the claim quoted above. Flichman assumes that there is a certain event b corresponding to the barometer's functioning well. But in fact the only way to make (2) come out true is to suppose that b can be characterized essentially as the barometer's having a dispositional property (namely, the disposition to give the correct reading), and Lewis explicitly denies that dispositional properties can be essential properties of events. On the other hand, if we suppose that the dispositional property is merely an accidental feature of b, then (2) is false. Therefore, Lewis's claim quoted above cannot reasonably be interpreted as the claim that a certain event b counterfactually depends on r; and hence Flichman's objection does not work. [Translation: Héctor Islas]
23

Zheng, Wenjing, Maya Petersen, and Mark J. van der Laan. "Doubly Robust and Efficient Estimation of Marginal Structural Models for the Hazard Function." International Journal of Biostatistics 12, no. 1 (May 1, 2016): 233–52. http://dx.doi.org/10.1515/ijb-2015-0036.

Abstract:
In social and health sciences, many research questions involve understanding the causal effect of a longitudinal treatment on mortality (or time-to-event outcomes in general). Often, treatment status may change in response to past covariates that are risk factors for mortality, and in turn, treatment status may also affect such subsequent covariates. In these situations, Marginal Structural Models (MSMs), introduced by Robins (1997. Marginal structural models. Proceedings of the American Statistical Association, Section on Bayesian Statistical Science, 1–10), are well-established and widely used tools to account for time-varying confounding. In particular, an MSM can be used to specify the intervention-specific counterfactual hazard function, i.e. the hazard for the outcome of a subject in an ideal experiment where he/she was assigned to follow a given intervention on their treatment variables. The parameters of this hazard MSM are traditionally estimated using Inverse Probability of Treatment Weighted (IPTW) estimation (Robins 1999. Marginal structural models versus structural nested models as tools for causal inference. In: Statistical models in epidemiology: the environment and clinical trials. Springer-Verlag, 1999:95–134; Robins et al. 2000; van der Laan and Petersen 2007. Causal effect models for realistic individualized treatment and intention to treat rules. Int J Biostat 2007;3:Article 3; Robins et al. 2008. Estimation and extrapolation of optimal treatment and testing strategies. Statistics in Medicine 2008;27(23):4678–721). This estimator is easy to implement and admits Wald-type confidence intervals. However, its consistency hinges on the correct specification of the treatment allocation probabilities, and the estimates are generally sensitive to large treatment weights (especially in the presence of strong confounding), which are difficult to stabilize for dynamic treatment regimes.
In this paper, we present a pooled targeted maximum likelihood estimator (TMLE; van der Laan and Rubin, 2006. Targeted maximum likelihood learning. The International Journal of Biostatistics 2:1–40) for an MSM for the hazard function under longitudinal dynamic treatment regimes. The proposed estimator is semiparametric efficient and doubly robust, offering bias reduction over the incumbent IPTW estimator when the treatment probabilities may be misspecified. Moreover, the substitution principle rooted in the TMLE potentially mitigates the sensitivity to large treatment weights in IPTW. We compare the performance of the proposed estimator with the IPTW and a non-targeted substitution estimator in a simulation study.
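The consistency caveat above (an unweighted contrast is confounded, and IPTW removes the bias only when the treatment model is correct) can be sketched in a few lines. This is a toy point-treatment illustration of the general IPTW idea, not the paper's longitudinal hazard MSM; all names and numbers are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Confounder L affects both treatment A and outcome Y.
L = rng.normal(size=n)
pA = 1 / (1 + np.exp(-0.8 * L))             # true treatment model P(A=1 | L)
A = rng.binomial(1, pA)
Y = 1.0 * A + 2.0 * L + rng.normal(size=n)  # true causal effect of A is 1.0

# Naive contrast of group means is confounded by L.
naive = Y[A == 1].mean() - Y[A == 0].mean()

# IPTW: weight each subject by 1 / P(received treatment | L). Here the true
# treatment model is used; in practice it is estimated, and consistency
# hinges on specifying it correctly.
w = np.where(A == 1, 1 / pA, 1 / (1 - pA))
iptw = (np.sum(w * A * Y) / np.sum(w * A)
        - np.sum(w * (1 - A) * Y) / np.sum(w * (1 - A)))

print(naive, iptw)  # naive is biased well above 1.0; IPTW is close to 1.0
```

The weighted means form a pseudo-population in which treatment is independent of the confounder, which is the same device the hazard MSM literature uses for time-varying treatments.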
APA, Harvard, Vancouver, ISO, and other styles
24

Van Duuren, Mike, Barbara Dossett, and Dawn Robinson. "Gauging Children’s Understanding of Artificially Intelligent Objects: A Presentation of “Counterfactuals”." International Journal of Behavioral Development 22, no. 4 (December 1998): 871–89. http://dx.doi.org/10.1080/016502598384207.

Full text
Abstract:
Children aged 5 to 11 years and a comparison group of adults were presented with two instances where the behaviour of a computational object was contrary to what might normally be expected of such a device. In both instances findings are discussed with regard to children’s understanding of a computer program and resulting computational behaviour generally. In the first study, children viewed a film featuring a number of robots either acting as traditionally programmed devices or, alternatively, with apparent intentionality. We examine to what extent, if at all, children were aware of this difference. Findings indicated that although the younger children mentioned other alleged differences between the robots, the issue of different loci of control was not a salient one. In the second study, children were encouraged to type two kinds of questions into a computer. The first kind (simple maths questions) required a general solution procedure commonly accessible to a computational object. The second (details of a biographical nature) did not. With respect to the first as well as the second kind of questions the computer was seen to provide apparently correct answers. Findings showed that although with increasing age children were better at articulating the difference between rote- and rule-generated solutions generally, this was not generally accompanied by an accurate assessment of the kinds of problems that could normally be expected to be solved by a computer.
APA, Harvard, Vancouver, ISO, and other styles
25

OLSSON, BJÖRN, BARBARA GAWRONSKA, and BJÖRN ERLENDSSON. "DERIVING PATHWAY MAPS FROM AUTOMATED TEXT ANALYSIS USING A GRAMMAR-BASED APPROACH." Journal of Bioinformatics and Computational Biology 04, no. 02 (April 2006): 483–501. http://dx.doi.org/10.1142/s0219720006002041.

Full text
Abstract:
We demonstrate how automated text analysis can be used to support the large-scale analysis of metabolic and regulatory pathways by deriving pathway maps from textual descriptions found in the scientific literature. The main assumption is that correct syntactic analysis combined with domain-specific heuristics provides a good basis for relation extraction. Our method uses an algorithm that searches through the syntactic trees produced by a parser based on a Referent Grammar formalism, identifies relations mentioned in the sentence, and classifies them with respect to their semantic class and epistemic status (facts, counterfactuals, hypotheses). The semantic categories used in the classification are based on the relation set used in KEGG (Kyoto Encyclopedia of Genes and Genomes), so that pathway maps using KEGG notation can be automatically generated. We present the current version of the relation extraction algorithm and an evaluation based on a corpus of abstracts obtained from PubMed. The results indicate that the method is able to combine a reasonable coverage with high accuracy. We found that 61% of all sentences were parsed, and 97% of the parse trees were judged to be correct. The extraction algorithm was tested on a sample of 300 parse trees and was found to produce correct extractions in 90.5% of the cases.
APA, Harvard, Vancouver, ISO, and other styles
26

Kupczynski, Marian. "Closing the Door on Quantum Nonlocality." Entropy 20, no. 11 (November 15, 2018): 877. http://dx.doi.org/10.3390/e20110877.

Full text
Abstract:
Bell-type inequalities are proven using oversimplified probabilistic models and/or counterfactual definiteness (CFD). If setting-dependent variables describing measuring instruments are correctly introduced, none of these inequalities may be proven. In spite of this, a belief in a mysterious quantum nonlocality is not fading. Computer simulations of Bell tests allow people to study the different ways in which the experimental data might have been created. They also allow for the generation of various counterfactual experiments’ outcomes, such as repeated or simultaneous measurements performed in different settings on the same “photon-pair”, and so forth. They allow for the reinforcing or relaxing of CFD compliance and/or for studying the impact of various “photon identification procedures”, mimicking those used in real experiments. Data samples consistent with quantum predictions may be generated by using a specific setting-dependent identification procedure. It reflects the active role of instruments during the measurement process. Each of the setting-dependent data samples are consistent with specific setting-dependent probabilistic models which may not be deduced using non-contextual local realistic or stochastic hidden variables. In this paper, we will be discussing the results of these simulations. Since the data samples are generated in a locally causal way, these simulations provide additional strong arguments for closing the door on quantum nonlocality.
APA, Harvard, Vancouver, ISO, and other styles
27

Yan, Guanpeng, and Qiang Chen. "rcm: A command for the regression control method." Stata Journal: Promoting communications on statistics and Stata 22, no. 4 (December 2022): 842–83. http://dx.doi.org/10.1177/1536867x221140960.

Full text
Abstract:
The regression control method, also known as the panel-data approach for program evaluation (Hsiao, Ching, and Wan, 2012, Journal of Applied Econometrics 27: 705–740; Hsiao and Zhou, 2019, Journal of Applied Econometrics 34: 463–481), is a convenient method for causal inference in panel data that exploits cross-sectional correlation to construct counterfactual outcomes for a single treated unit by linear regression. In this article, we present the rcm command, which efficiently implements the regression control method with or without covariates. Available methods for model selection include best subset, lasso, and forward stepwise and backward stepwise regression, while available selection criteria include the corrected Akaike information criterion, the Akaike information criterion, the Bayesian information criterion, the modified Bayesian information criterion, and cross-validation. Estimation and counterfactual predictions can be made by ordinary least squares, lasso, or postlasso ordinary least squares. For statistical inference, both the in-space placebo test using fake treatment units and the in-time placebo test using a fake treatment time can be implemented. The rcm command produces a series of graphs for visualization along the way. We demonstrate the use of the rcm command by revisiting classic examples of political and economic integration between Hong Kong and mainland China (Hsiao, Ching, and Wan 2012) and German reunification (Abadie, Diamond, and Hainmueller, 2015, American Journal of Political Science 59: 495–510).
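The core idea described above (exploit cross-sectional correlation with control units, fit a linear regression on pre-treatment data, then predict the treated unit's counterfactual outcomes after treatment) can be sketched outside Stata. A hypothetical Python toy, not the rcm command itself; the data-generating process and numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
T, T0 = 60, 40                       # total periods; treatment starts at T0
factor = rng.normal(size=T)          # common factor driving all units

# Three control units and one treated unit load on the same factor.
controls = np.column_stack([lam * factor + 0.1 * rng.normal(size=T)
                            for lam in (0.5, 1.0, 1.5)])
treated = 0.8 * factor + 0.1 * rng.normal(size=T)
treated[T0:] += 2.0                  # treatment adds a constant effect of 2.0

# Fit the treated unit on the controls (plus intercept), pre-period only.
X_pre = np.column_stack([np.ones(T0), controls[:T0]])
beta, *_ = np.linalg.lstsq(X_pre, treated[:T0], rcond=None)

# Counterfactual: predicted treated outcome had treatment not occurred.
X_post = np.column_stack([np.ones(T - T0), controls[T0:]])
counterfactual = X_post @ beta

att = (treated[T0:] - counterfactual).mean()  # average effect, close to 2.0
print(att)
```

Model selection (which controls to include, by subset, lasso, or stepwise search) and placebo inference are what the rcm command adds on top of this basic regression step.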
APA, Harvard, Vancouver, ISO, and other styles
28

Canavire-Bacarreza, Gustavo, Luis Castro Peñarrieta, and Darwin Ugarte Ontiveros. "Outliers in Semi-Parametric Estimation of Treatment Effects." Econometrics 9, no. 2 (April 30, 2021): 19. http://dx.doi.org/10.3390/econometrics9020019.

Full text
Abstract:
Outliers can be particularly hard to detect, creating bias and inconsistency in the semi-parametric estimates. In this paper, we use Monte Carlo simulations to demonstrate that semi-parametric methods, such as matching, are biased in the presence of outliers. Bad and good leverage point outliers are considered. Bias arises in the case of bad leverage points because they completely change the distribution of the metrics used to define counterfactuals; good leverage points, on the other hand, increase the chance of breaking the common support condition and distort the balance of the covariates, which may push practitioners to misspecify the propensity score or the distance measures. We provide some clues to identify and correct for the effects of outliers following a reweighting strategy in the spirit of the Stahel-Donoho (SD) multivariate estimator of scale and location, and the S-estimator of multivariate location (Smultiv). An application of this strategy to experimental data is also implemented.
APA, Harvard, Vancouver, ISO, and other styles
29

Dobbins, Sarah, Erin Hubbard, and Heather C. Leutwyler. "AGING WITH SCHIZOPHRENIA: RACIAL DISPARITIES IN COGNITIVE IMPAIRMENT." Innovation in Aging 3, Supplement_1 (November 2019): S271—S272. http://dx.doi.org/10.1093/geroni/igz038.1008.

Full text
Abstract:
Introduction: Cognitive impairment (CI) is a core feature of schizophrenia (SCZ). Comorbidities such as substance use/smoking, metabolic syndrome, and medications contribute to CI. There are racial disparities in CI in the general US population. No study has evaluated racial disparities in CI among people with schizophrenia (PWSCZ). The aims of this study were to describe the clinical/psychosocial correlates of and racial disparities in CI. Methods: Cognitive performance in PWSCZ over 55 years old was measured using the MATRICS Consensus Cognitive Battery (MCCB; N=66). We calculated age- and gender-corrected scores for global cognitive performance, which represent the number of standard deviations away from the mean of the MCCB normative sample. Clinical and sociodemographic data were collected. A counterfactual approach was used to explore mediation of CI through education. Results: Our “all-comer” convenience sample was 57.6% white, 25.8% black, and 16.6% other non-white groups. There was a black/non-black disparity in cognitive score (−2.33 vs. −1.68, t=2.843, p<.01). This difference remained significant in a regression model adjusted for age, substance use, smoking, education, antipsychotic medication, and positive/negative symptoms (−0.6611 [95% CI: −1.12, −0.20], overall F(8, 57)=3.690, p=.0016). In the mediation analysis, education accounted for 19% of the disparity in CI. In the counterfactual scenario in which education was distributed equally, education accounted for 48% of the disparity. Conclusion: There are significant racial disparities in cognitive performance among older PWSCZ, and educational attainment may account for a sizable portion of the disparity.
APA, Harvard, Vancouver, ISO, and other styles
30

Krishna, Amrith, Sebastian Riedel, and Andreas Vlachos. "ProoFVer: Natural Logic Theorem Proving for Fact Verification." Transactions of the Association for Computational Linguistics 10 (2022): 1013–30. http://dx.doi.org/10.1162/tacl_a_00503.

Full text
Abstract:
Fact verification systems typically rely on neural network classifiers for veracity prediction, which lack explainability. This paper proposes ProoFVer, which uses a seq2seq model to generate natural logic-based inferences as proofs. These proofs consist of lexical mutations between spans in the claim and the evidence retrieved, each marked with a natural logic operator. Claim veracity is determined solely based on the sequence of these operators. Hence, these proofs are faithful explanations, and this makes ProoFVer faithful by construction. Currently, ProoFVer has the highest label accuracy and the second best score on the FEVER leaderboard. Furthermore, it improves by 13.21 percentage points over the next best model on a dataset with counterfactual instances, demonstrating its robustness. As explanations, the proofs show better overlap with human rationales than attention-based highlights, and the proofs help humans predict model decisions correctly more often than using the evidence directly.
APA, Harvard, Vancouver, ISO, and other styles
31

Harvey, Frank P. "President Al Gore and the 2003 Iraq War: A Counterfactual Test of Conventional “W”isdom." Canadian Journal of Political Science 45, no. 1 (March 2012): 1–32. http://dx.doi.org/10.1017/s0008423911000904.

Full text
Abstract:
The almost universally accepted explanation for the Iraq war is very clear and consistent, namely, the US decision to attack Saddam Hussein's regime on March 19, 2003, was a product of the ideological agenda, misguided priorities, intentional deceptions and grand strategies of President George W. Bush and prominent “neoconservatives” and “unilateralists” on his national security team. Notwithstanding the widespread appeal of this version of history, however, the Bush-neocon war thesis (which I have labelled neoconism) remains an unsubstantiated assertion, a “theory” without theoretical content or historical context, a position lacking perspective and a seriously underdeveloped argument absent a clearly articulated logical foundation. Neoconism is, in essence, a popular historical account that overlooks a substantial collection of historical facts and relevant causal mechanisms that, when combined, represent a serious challenge to the core premises of accepted wisdom. This article corrects these errors, in part, by providing a much stronger account of events and strategies that pushed the US-UK coalition closer to war. The analysis is based on both factual and counterfactual evidence, combines causal mechanisms derived from multiple levels of analysis and ultimately confirms the role of path dependence and momentum as a much stronger explanation for the sequence of decisions that led to war.
APA, Harvard, Vancouver, ISO, and other styles
32

Norkus, Zenonas. "Kiek kartų Lietuvoje buvo restauruotas kapitalizmas? Apie dvi Lietuvos okupacijas ir jų žalos skaičiavimus." Sociologija. Mintis ir veiksmas 33, no. 2 (January 1, 2013): 91–133. http://dx.doi.org/10.15388/socmintvei.2013.2.3807.

Full text
Abstract:
Key words: command administrative system, Hindenburg programme, restoration of capitalism, compensation of occupation damage, varieties of capitalism.
SUMMARY: HOW MANY TIMES WAS CAPITALISM RESTORED IN LITHUANIA? ON TWO OCCUPATIONS OF LITHUANIA AND THEIR DAMAGE CALCULATIONS
The paper compares the political economic systems under German (1915–1918) and Soviet (1940–1941, 1944–1990) occupations in Lithuania. During World War I, Lithuania was part of the German occupation zone Ober Ost, ruled by the high command of the German Eastern front (Oberbefehlshaber Ost). The German military command of the Eastern front under Paul Hindenburg and Erich Ludendorff used Lithuania as a laboratory for a large-scale social experiment, creating the first planned command administrative economy in the world. After they were promoted to the high command of all German armed forces and established a de facto military dictatorship over Germany in 1917–1918, they attempted to establish the Ober Ost system in the metropole. Although the realization of the complete “Hindenburg programme” failed, by 1917 Germany lived under military socialism (Kriegssozialismus) and a coercive economy, which became the example and source of inspiration for the Bolsheviks in constructing the Soviet model of state socialism. In 1940, this model came back to Lithuania, history making the full circle. This means that the market transition of 1990–1992 was the second restoration of capitalism in Lithuania, because in 1918–1922 the capitalist economic system was also restored here, jointly with the establishment of the national state. Contemporary Lithuania demands that Russia, as the successor of the USSR, pay for the damage inflicted on the Lithuanian economy by the Soviet occupation, just as interwar Lithuania demanded the same from Weimar Germany in 1922–1923.
However, while interwar Lithuania asked for compensation only for direct occupation damage, contemporary Lithuania demands compensation for the indirect damage as well. The main part of this damage is the national income that Lithuania did not receive in 1940–1990 because the efficient capitalist economic system was replaced by the less productive state socialist system during this time. However, the calculations of the indirect damage incorrectly assume that all varieties of capitalism are more efficient in developing countries than the command administrative system. The assumption that the variety of capitalism which existed in Lithuania by 1940 (state cooperative capitalism) was no less efficient than Stalinist Soviet socialism is a politically correct one, as is the expectation that under this system independent Lithuania would have become an advanced technological frontier country (“second Finland”) by 1990. Nevertheless, the counterfactual development path of independent capitalist Lithuania in 1940–1990 would have included critical conjunctures and crossroads which could have ended with Lithuania entering a “low road” development path. Tellingly, the Latin American capitalist country Uruguay (similar to Lithuania and the other Baltic countries in its size and economic structure) had a higher GDP per capita than Lithuania in 1940, but by 1990 this level was lower than in Soviet Lithuania, even though Uruguay was never under Soviet Russian occupation, did not construct socialism, and suffered no war damage. Note: The research for this paper was funded by the European Social Fund under the Global Grant measure (Nr. VP1-3.1-ŠMM-07-K-01-010).
APA, Harvard, Vancouver, ISO, and other styles
33

Kebede, Endale, Anne Goujon, and Wolfgang Lutz. "Stalls in Africa’s fertility decline partly result from disruptions in female education." Proceedings of the National Academy of Sciences 116, no. 8 (February 4, 2019): 2891–96. http://dx.doi.org/10.1073/pnas.1717288116.

Full text
Abstract:
Population projections for sub-Saharan Africa have, over the past decade, been corrected upwards because in a number of countries, the earlier declining trends in fertility stalled around 2000. While most studies so far have focused on economic, political, or other factors around 2000, here we suggest that in addition to those period effects, the phenomenon also matched up with disruptions in the cohort trends of educational attainment of women after the postindependence economic and political turmoil. Disruptions likely resulted in a higher proportion of poorly educated women of childbearing age in the late 1990s and early 2000s than there would have been otherwise. In addition to the direct effects of education on lowering fertility, these less-educated female cohorts were also more vulnerable to adverse period effects around 2000. To explore this hypothesis, we combine individual-level data from Demographic and Health Surveys for 18 African countries with and without fertility stalls, thus creating a pooled dataset of more than two million births to some 670,000 women born from 1950 to 1995 by level of education. Statistical analyses indicate clear discontinuities in the improvement of educational attainment of subsequent cohorts of women and stronger sensitivity of less-educated women to period effects. We assess the magnitude of the effect of educational discontinuity through a comparison of the actual trends with counterfactual trends based on the assumption of no education stalls, resulting in up to half a child per woman less in 2010 and 13 million fewer live births over the 1995–2010 period.
APA, Harvard, Vancouver, ISO, and other styles
34

Schmidt, Harald, Dorothy E. Roberts, and Nwamaka D. Eneanya. "Sequential organ failure assessment, ventilator rationing and evolving triage guidance: new evidence underlines the need to recognise and revise, unjust allocation frameworks." Journal of Medical Ethics 48, no. 2 (October 11, 2021): 136–38. http://dx.doi.org/10.1136/medethics-2021-107696.

Full text
Abstract:
We respond to recent comments on our proposal to improve justice in ventilator triage, in which we used as an example New Jersey’s (NJ) publicly available and legally binding Directive Number 2020-03. We agree with Bernard Lo and Doug White that equity implications of triage frameworks should be continually reassessed, which is why we offered six concrete options for improvement, and called for monitoring the consequences of adopted triage models. We disagree with their assessment that we mis-characterised their Model Guidance, as included in the NJ Directive, in ways that undermine our conclusions. They suggest we erroneously described their model as a two-criterion allocation framework; that recognising other operant criterion reveals it ‘likely mitigate[s] rather than exacerbate[s] racial disparities during triage’, and allege that concerns about inequitable outcomes are ‘without evidence’. We highlight two major studies robustly demonstrating why concerns about disparate outcomes are justified. We also show that White and Lo seek to retrospectively—and counterfactually—correct the version of the Model Guideline included in the NJ Directive. However, as our facsimile reproductions show, neither the alleged four-criteria form, nor other key changes, such as dropping the Sequential Organ Failure Assessment score, are found in the Directive. These points matter because (1) our conclusions hence stand, (2) because the public version of the Model Guidance had not been updated to reduce the risk of inequitable outcomes until June 2021 and (3) NJ’s Directive still does not reflect these revisions, and, hence, represents a less equitable version, as acknowledged by its authors. We comment on broader policy implications and call for ways of ensuring accurate, transparent and timely updates for users of high-stakes guidelines.
APA, Harvard, Vancouver, ISO, and other styles
35

Ippolito, Michela. "Counterfactuals and Conditional Questions under Discussion." Semantics and Linguistic Theory, April 3, 2015, 194. http://dx.doi.org/10.3765/salt.v0i0.2659.

Full text
Abstract:
In this paper I investigate the issue of the context-dependence of counterfactual conditionals and how the context constrains similarity in selecting the right set of worlds necessary in order to arrive at their correct truth-conditions. I will review previous proposals and conclude that the puzzle of how we measure similarity and thus resolve the context-dependence of counterfactuals remains unsolved. I will then consider an alternative based on the idea of discourse structure and the concept of a question under discussion.
APA, Harvard, Vancouver, ISO, and other styles
36

Bianco, Elisabetta. "Storia controfattuale e great men in Erodoto e Tucidide." Erga-Logoi. Rivista di storia, letteratura, diritto e culture dell'antichità 9, no. 1 (July 6, 2021). http://dx.doi.org/10.7358/erga-2021-001-bian.

Full text
Abstract:
Thinking about the role of great men in the virtual history of the contemporary age, in this paper we analyse this theme starting from some significant texts of Herodotus and Thucydides, to evaluate whether Greek historiography, too, resorts to counterfactual reasoning in connection with the role of the individual. It emerges that counterfactuals, used perhaps not always intentionally but in any case as a powerful narrative tool, help to define causal relationships and to highlight the important factors and the moral and political responsibilities, above all the ability of the leader to take reasonable decisions. The story of the past as it could have been, in other words counterfactual history and not just real history, could thus encourage readers to reflect in a more engaging way than the historical account alone allows, judging more actively the behaviour of the great men of the past and learning from their decisions, both correct and incorrect.
APA, Harvard, Vancouver, ISO, and other styles
37

Haslinger, Nina, and Viola Schmitt. "What embedded counterfactuals tell us about the semantics of attitudes." Linguistics Vanguard, July 15, 2022. http://dx.doi.org/10.1515/lingvan-2021-0032.

Full text
Abstract:
We discuss German examples where counterfactuals restricting an epistemic modal are embedded under glauben ‘believe’. Such sentences raise a puzzle for the analysis of counterfactuals, modals, and belief attributions within possible-worlds semantics. Their truth conditions suggest that the modal’s domain is determined exclusively by the subject’s belief state, but evaluating the counterfactual separately at each of the subject’s doxastic alternatives does not yield the correct quantificational domain: the domain ends up being determined by the facts of each particular world, which include propositions the subject does not believe. We therefore revise the semantics of counterfactuals: counterfactuals still rely on an ordering among worlds that can be derived from a premise set (Kratzer, Angelika. 1978. Semantik der Rede: Kontexttheorie – Modalwörter – Konditionalsätze (Monographien Linguistik und Kommunikationswissenschaft 38). Königstein: Scriptor; Kratzer, Angelika. 2012 [1981]. The notional category of modality. In Modals and conditionals (Oxford studies in theoretical linguistics 36), 27–69. Oxford: Oxford University Press), but rather than uniquely characterizing a world, this premise set can be compatible with multiple worlds. In belief contexts, the attitude subject’s belief state as a whole determines the relevant ordering. This, in turn, motivates a revision of the semantics of believe: following Yalcin’s work on epistemic modals (Yalcin, Seth. 2007. Epistemic modals. Mind 116. 983–1026), we submit that evaluation indices are complex, consisting of a world and an ordering among worlds. Counterfactuals are sensitive to the ordering component of an index. Attitude verbs shift both components, relativizing the ordering to the attitude subject.
APA, Harvard, Vancouver, ISO, and other styles
38

Lee, Jonathan T. H. "Equitable compensation and Brickenden: fiduciary loyalty and causation." Trusts & Trustees, October 20, 2020. http://dx.doi.org/10.1093/tandt/ttaa069.

Full text
Abstract:
Abstract This article reconsiders the causation rules of equitable compensation for breaches of non-custodial fiduciary duties. It argues that these rules should correspond with and reflect the nature of fiduciary duties. Being a compensatory remedy, correct identification of the breach and the relevant counterfactual are therefore crucial. The failure to do so has led to much unwarranted criticism of the Privy Council’s decision in Brickenden. It also argues that a fiduciary is precluded by principle and policy from relying on the ‘escape route’ argument that the loss would have been suffered by his principal anyway.
APA, Harvard, Vancouver, ISO, and other styles
39

Song, Changcheng. "Financial Illiteracy and Pension Contributions: A Field Experiment on Compound Interest in China." Review of Financial Studies, July 10, 2019. http://dx.doi.org/10.1093/rfs/hhz074.

Full text
Abstract:
I conduct a field experiment to study the relationship between peoples’ misunderstanding of compound interest and their pension contributions in rural China. I find that explaining the concept of compound interest to subjects increased pension contributions by roughly 40%. The treatment effect is larger for those who underestimate compound interest than for those who overestimate compound interest. Moreover, financial education enables households to partially correct their misunderstanding of compound interest. I structurally estimate the level of misunderstanding of compound interest and conduct a counterfactual welfare analysis: lifetime utility increases by about 10% if subjects’ misunderstanding of compound interest is eliminated.
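The misunderstanding the intervention targets is the gap between linear ("simple interest") intuition and actual exponential growth under compounding. A small sketch (the principal, rate, and horizon are illustrative, not figures from the study):

```python
def simple_total(principal, rate, years):
    # What someone who ignores compounding expects: linear growth.
    return principal * (1 + rate * years)

def compound_total(principal, rate, years):
    # Actual value with annual compounding: exponential growth.
    return principal * (1 + rate) ** years

p, r, n = 1000, 0.05, 30
print(simple_total(p, r, n))              # 2500.0
print(round(compound_total(p, r, n), 2))  # 4321.94
```

Over a 30-year pension horizon the compounded total is well over half again the linear estimate, which is why underestimating compound interest can depress contributions.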
APA, Harvard, Vancouver, ISO, and other styles
40

"Supplemental Material for Can Eyewitnesses Correct for External Influences on Their Lineup Identifications? The Actual/Counterfactual Assessment Paradigm." Journal of Experimental Psychology: Applied, 2008. http://dx.doi.org/10.1037/1076-898x.14.1.5.supp.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Nippold, Marilyn A., Kristin Shinham, and Scott LaFavre. "Mastering the Grammar of Past Tense Counterfactual Sentences: A Pilot Study of Adolescents Attending Low-Income Schools." Folia Phoniatrica et Logopaedica, December 8, 2020, 1–10. http://dx.doi.org/10.1159/000511902.

Full text
Abstract:
Background/Aim: This pilot study was designed to determine if adolescents had mastered the grammar of past tense counterfactual (PTCF) sentences (e.g., “If Julie had done all of the track workouts, she might have won the state meet”). Of interest was their ability to use the modal, auxiliary, and past participle verbs correctly in the main clause of a PTCF sentence. Prior research had indicated that PTCF sentences were challenging to older children. Hence, we wished to determine if PTCF sentences would continue to challenge adolescents. Methods: The participants were two groups of adolescents, who were aged 13 and 16 years, and a control group of young adults having a mean age of 22 years (n = 40 per group). Each participant read a set of four fables and completed a PTCF sentence based on the story. Each incomplete sentence contained a subordinate clause that employed the past perfect verb form (e.g., “If the fox had been able to jump higher…”). The participant’s task was to complete the sentence in writing, generating a grammatically correct main clause that contained the present perfect verb form (e.g., “he would have been able to reach the delicious grapes.”). Results: On the PTCF sentences task, the 16-year-olds earned a higher mean raw score than the 13-year-olds, but the two groups did not show a statistically significant difference. However, the 22-year-olds performed significantly better than the 13-year-olds. It was also found that using the correct form of the past participle verb was the most difficult aspect of the task for all three groups, and that mastering the grammar of PTCF sentences continued into adulthood. Discussion/Conclusion: The PTCF sentence is a late linguistic attainment, perhaps due to its infrequent occurrence in spoken language. The study offers implications for the concept of grammatical mastery and for the distinction between prescriptive and descriptive grammar.
APA, Harvard, Vancouver, ISO, and other styles
42

John, Leslie K., George Loewenstein, Andrew Marder, and Michael L. Callaham. "Effect of revealing authors’ conflicts of interests in peer review: randomized controlled trial." BMJ, November 6, 2019, l5896. http://dx.doi.org/10.1136/bmj.l5896.

Full text
Abstract:
Abstract Objective To assess the effect of disclosing authors’ conflict of interest declarations to peer reviewers at a medical journal. Design Randomized controlled trial. Setting Manuscript review process at the Annals of Emergency Medicine. Participants Reviewers (n=838) who reviewed manuscripts submitted between 2 June 2014 and 23 January 2018 inclusive (n=1480 manuscripts). Intervention Reviewers were randomized to either receive (treatment) or not receive (control) authors’ full International Committee of Medical Journal Editors format conflict of interest disclosures before reviewing manuscripts. Reviewers rated the manuscripts as usual on eight quality ratings and were then surveyed to obtain “counterfactual scores”—that is, the scores they believed they would have given had they been assigned to the opposite arm—as well as attitudes toward conflicts of interest. Main outcome measure Overall quality score that reviewers assigned to the manuscript on submitting their review (1 to 5 scale). Secondary outcomes were scores the reviewers submitted for the seven more specific quality ratings and counterfactual scores elicited in the follow-up survey. Results Providing authors’ conflict of interest disclosures did not affect reviewers’ mean ratings of manuscript quality (M_control = 2.70 (SD 1.11) out of 5; M_treatment = 2.74 (1.13) out of 5; mean difference 0.04, 95% confidence interval –0.05 to 0.14), even for manuscripts with disclosed conflicts (M_control = 2.85 (1.12) out of 5; M_treatment = 2.96 (1.16) out of 5; mean difference 0.11, –0.05 to 0.26). Similarly, no effect of the treatment was seen on any of the other seven quality ratings that the reviewers assigned. Reviewers acknowledged conflicts of interest as an important matter and believed that they could correct for them when they were disclosed. However, their counterfactual scores did not differ from actual scores (M_actual = 2.69; M_counterfactual = 2.67; difference in means 0.02, 0.01 to 0.02). When conflicts were reported, a comparison of different source types (for example, government, for-profit corporation) found no difference in effect. Conclusions Current ethical standards require disclosure of conflicts of interest for all scientific reports. As currently implemented, this practice had no effect on any quality ratings of real manuscripts being evaluated for publication by real peer reviewers.
APA, Harvard, Vancouver, ISO, and other styles
43

Galanis, Spyros, and Stelios Kotronis. "Updating Awareness and Information Aggregation." B.E. Journal of Theoretical Economics, July 1, 2020. http://dx.doi.org/10.1515/bejte-2018-0193.

Full text
Abstract:
Abstract The ability of markets to aggregate information through prices is examined in a dynamic environment with unawareness. We find that if all traders are able to minimally update their awareness when they observe a price that is counterfactual to their private information, they will eventually reach an agreement, thus generalising the result of Geanakoplos and Polemarchakis (1982). Moreover, if the traded security is separable, then agreement is on the correct price and there is information aggregation, thus generalising the result of Ostrovsky (2012) for non-strategic traders. We find that a trader increases her awareness if and only if she is able to become aware of something that other traders are already aware of and, under a mild condition, never becomes aware of anything more. In other words, agreement is more the result of understanding each other, rather than being unboundedly sophisticated.
APA, Harvard, Vancouver, ISO, and other styles
44

Yenen, Alp. "Envisioning Turco-Arab Co-Existence between Empire and Nationalism." Die Welt des Islams, April 8, 2020, 1–41. http://dx.doi.org/10.1163/15700607-00600a17.

Full text
Abstract:
Abstract The idea of a continued Turco-Arab co-existence under the Ottoman Sultanate might appear counterfactual or marginal – if not nostalgic – from the sober vantage of knowing “the end of history”. The Ottoman Empire neither survived the Great War nor made way for a multinational co-existence of Turks and Arabs. For contemporaries, however, different models of federalism and multinationalism offered solutions to save the Ottoman Empire and safeguard Turco-Arab co-existence. While the federalist ideas of Ottoman Arabs are far better known in the academic literature, in regards to Ottoman Turks, the commonplace interpretations follow the teleology of the Turkish nation-state formation. In order to correct this misperception, I will illustrate the existence of corresponding Turkish voices and visions of federalism and multinationalism. Envisioning Turco-Arab co-existence was a serious feature of policy debates, especially in the years of crisis from the Balkan Wars to the settlement of post-Ottoman nation-states in the aftermath of the First World War.
APA, Harvard, Vancouver, ISO, and other styles
45

Mulder, R. A., and F. A. Muller. "Modal-Logical Reconstructions of Thought Experiments." Erkenntnis, January 30, 2023. http://dx.doi.org/10.1007/s10670-022-00655-2.

Full text
Abstract:
Abstract Sorensen (Thought experiments, Oxford University Press, New York, 1992) has provided two modal-logical schemas to reconstruct the logical structure of two types of destructive thought experiments: the Necessity Refuter and the Possibility Refuter. The schemas consist of five propositions which Sorensen claims but does not prove to be inconsistent. We show that the five propositions, as presented by Sorensen, are not inconsistent, but by adding a premise (and a logical truth), we prove that the resulting sextet of premises is inconsistent. Häggqvist (Can J Philos 39(1):55–76, 2009) has provided a different modal-logical schema (Counterfactual Refuter), which is equivalent to four premises, again claimed to be inconsistent. We show that this schema also is not inconsistent, for similar reasons. Again, we add another premise to achieve inconsistency. The conclusion is that all three modal-logical reconstructions of the arguments that accompany thought experiments, two by Sorensen and one by Häggqvist, have now been made rigorously correct. This may inaugurate new avenues to respond to destructive thought experiments.
APA, Harvard, Vancouver, ISO, and other styles
46

Hodson, Nathan, and Susan Bewley. "Is one narrative enough? Analytical tools should match the problems they address." Journal of Medical Ethics, September 8, 2020, medethics-2020-106309. http://dx.doi.org/10.1136/medethics-2020-106309.

Full text
Abstract:
Jeff Nisker describes his personal experience of a diagnosis of advanced prostate cancer and the kindnesses he received from friendly doctors. He claims that this narrative account supports the promotion of Prostate Specific Antigen (PSA) screening for asymptomatic men and impugns statisticians, mistakenly thinking that their opposition to PSA screening derives from concerns about financial cost. The account inadvertently demonstrates the danger of over-reliance on a single ethical tool for critical analysis. In the first part of this response, we describe the statistical evidence. The most reliable Cochrane meta-analyses have not shown that PSA screening saves lives overall. Moreover, the high false positive rate of PSA screening leads to overinvestigation which results in unnecessary anxiety and increased cases of unnecessary sepsis, urinary incontinence and sexual dysfunction. Then we describe how narrative ethics alone is an insufficient tool to make claims about policies, such as PSA screening, which have hidden harms. Although Nisker’s story-telling is compelling and evokes emotions, narrative ethics of this sort have an inherent bias against people who would be harmed by the counterfactual. Particular care must be taken to look for and consider those untellable stories. Ethicists who only consider narratives which are readily at hand risk harming those who are voiceless or protected by the status quo. PSA screening is the wrong tool to reduce prostate cancer deaths and narrative ethics is the wrong tool to appraise this policy. It is vital that the correct theoretical tools are applied to the medical and ethical questions under scrutiny.
APA, Harvard, Vancouver, ISO, and other styles
47

Zhang, Junzhe, and Elias Bareinboim. "Fairness in Decision-Making — The Causal Explanation Formula." Proceedings of the AAAI Conference on Artificial Intelligence 32, no. 1 (April 25, 2018). http://dx.doi.org/10.1609/aaai.v32i1.11564.

Full text
Abstract:
AI plays an increasingly prominent role in society since decisions that were once made by humans are now delegated to automated systems. These systems are currently in charge of deciding bank loans, criminals' incarceration, and the hiring of new employees, and it's not difficult to envision that they will in the future underpin most of the decisions in society. Despite the high complexity entailed by this task, there is still not much understanding of basic properties of such systems. For instance, we currently cannot detect (neither explain nor correct) whether an AI system can be deemed fair (i.e., is abiding by the decision-constraints agreed by society) or it is reinforcing biases and perpetuating a preceding prejudicial practice. Issues of discrimination have been discussed extensively in political and legal circles, but there exists still not much understanding of the formal conditions that a system must meet to be deemed fair. In this paper, we use the language of structural causality (Pearl, 2000) to fill in this gap. We start by introducing three new fine-grained measures of transmission of change from stimulus to effect, which we called counterfactual direct (Ctf-DE), indirect (Ctf-IE), and spurious (Ctf-SE) effects. We then derive what we call the causal explanation formula, which allows the AI designer to quantitatively evaluate fairness and explain the total observed disparity of decisions through different discriminatory mechanisms. We apply these measures to various discrimination analysis tasks and run extensive simulations, including detection, evaluation, and optimization of decision-making under fairness constraints. We conclude studying the trade-off between different types of fairness criteria (outcome and procedural), and provide a quantitative approach to policy implementation and the design of fair AI systems.
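The paper's Ctf-DE, Ctf-IE, and Ctf-SE measures are defined via nested counterfactuals in a structural causal model; the much-simplified linear toy below (invented coefficients, not the authors' formula) only illustrates the underlying idea of splitting a total disparity into a direct and a mediated component:

```python
# Toy linear SCM: protected attribute X, mediator M (e.g. a qualification
# score influenced by X), and decision score Y. Coefficients are made up.
B_XM = 0.8   # effect of X on M
B_MY = 0.5   # effect of M on Y
B_XY = 0.3   # direct effect of X on Y

def m(x):
    """Structural equation for the mediator."""
    return B_XM * x

def y(x, m_val):
    """Structural equation for the decision score."""
    return B_XY * x + B_MY * m_val

# Total change in Y when X switches 0 -> 1:
total = y(1, m(1)) - y(0, m(0))        # 0.3 + 0.5 * 0.8 = 0.7
# Direct component: switch X in Y's equation, hold the mediator fixed:
direct = y(1, m(0)) - y(0, m(0))       # 0.3
# Indirect component: hold X fixed, let the mediator respond to X:
indirect = y(0, m(1)) - y(0, m(0))     # 0.4
```

In this linear toy the decomposition is exactly additive (total = direct + indirect); in general, counterfactual effects in nonlinear models do not sum so simply, which is part of what the paper's explanation formula addresses.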
APA, Harvard, Vancouver, ISO, and other styles
48

Fischer, Thomas. "Beyond Bias: Counterfactually Interpretable Items As Criterion for Correct Leadership Research." Academy of Management Proceedings 2022, no. 1 (August 2022). http://dx.doi.org/10.5465/ambpp.2022.373.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Nunes, Anthony P., Danni Zhao, William M. Jesdale, and Kate L. Lapane. "Multiple imputation to quantify misclassification in observational studies of the cognitively impaired: an application for pain assessment in nursing home residents." BMC Medical Research Methodology 21, no. 1 (June 26, 2021). http://dx.doi.org/10.1186/s12874-021-01327-5.

Full text
Abstract:
Abstract Background Despite experimental evidence suggesting that pain sensitivity is not impaired by cognitive impairment, observational studies in nursing home residents have observed an inverse association between cognitive impairment and resident-reported or staff-assessed pain. Under the hypothesis that the inverse association may be partially attributable to differential misclassification due to recall and communication limitations, this study implemented a missing data approach to quantify the absolute magnitude of misclassification of pain, pain frequency, and pain intensity by level of cognitive impairment. Methods Using the 2016 Minimum Data Set 3.0, we conducted a cross-sectional study among newly admitted US nursing home residents. Pain presence, severity, and frequency is assessed via resident-reported measures. For residents unable to communicate their pain, nursing home staff document pain based on direct resident observation and record review. We estimate a counterfactual expected level of pain in the absence of cognitive impairment by multiply imputing modified pain indicators for which the values were retained for residents with no/mild cognitive impairment and set to missing for residents with moderate/severe cognitive impairment. Absolute differences (∆) in the presence and magnitude of pain were calculated as the difference between documented pain and the expected level of pain. Results The difference between observed and expected resident reported pain was greater in residents with severe cognitive impairment (∆ = -10.2%, 95% Confidence Interval (CI): -10.9% to -9.4%) than those with moderate cognitive impairment (∆ = -4.5%, 95% CI: -5.4% to -3.6%). For staff-assessed pain, the magnitude of apparent underreporting was similar between residents with moderate impairment (∆ = -7.2%, 95% CI: -8.3% to -6.0%) and residents with severe impairment (∆ = -7.2%, 95% CI: -8.0% to -6.3%). 
Pain characterized as “mild” had the highest magnitude of apparent underreporting. Conclusions In residents with moderate to severe cognitive impairment, documentation of any pain was lower than expected in the absence of cognitive impairment. This finding supports the hypothesis that an inverse association between pain and cognitive impairment may be explained by differential misclassification. This study highlights the need to develop analytic and/or procedural solutions to correct for recall/reporter bias resulting from cognitive impairment.
APA, Harvard, Vancouver, ISO, and other styles
50

Salih, Hatim, Will McCutcheon, Jonte R. Hance, and John Rarity. "The laws of physics do not prohibit counterfactual communication." npj Quantum Information 8, no. 1 (May 18, 2022). http://dx.doi.org/10.1038/s41534-022-00564-w.

Full text
Abstract:
Abstract It has been conjectured that counterfactual communication is impossible, even for post-selected quantum particles. We strongly challenge this by proposing precisely such a counterfactual scheme where—unambiguously—none of Alice’s photons that correctly contribute to her information about Bob’s message have been to Bob. We demonstrate counterfactuality experimentally by means of weak measurements, and conceptually using consistent histories—thus simultaneously satisfying both criteria without loopholes. Importantly, the fidelity of Alice learning Bob’s bit can be made arbitrarily close to unity.
APA, Harvard, Vancouver, ISO, and other styles