To view other types of publications on this topic, follow the link: Contrastive, scenario and counterfactual explanations.

Journal articles on the topic "Contrastive, scenario and counterfactual explanations"

Consult the top 16 journal articles for your research on the topic "Contrastive, scenario and counterfactual explanations."

Next to each source in the reference list you will find an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, provided the corresponding details are available in the metadata.

Browse journal articles across many disciplines and compile your bibliography correctly.

1

Barzekar, Hosein, and Susan McRoy. "Achievable Minimally-Contrastive Counterfactual Explanations." Machine Learning and Knowledge Extraction 5, no. 3 (August 3, 2023): 922–36. http://dx.doi.org/10.3390/make5030048.

Full text of the source
Abstract:
Decision support systems based on machine learning models should be able to help users identify opportunities and threats. Popular model-agnostic explanation models can identify factors that support various predictions, answering questions such as “What factors affect sales?” or “Why did sales decline?”, but do not highlight what a person should or could do to get a more desirable outcome. Counterfactual explanation approaches address intervention, and some even consider feasibility, but none consider their suitability for real-time applications, such as question answering. Here, we address this gap by introducing a novel model-agnostic method that provides specific, feasible changes that would impact the outcomes of a complex Black Box AI model for a given instance and assess its real-world utility by measuring its real-time performance and ability to find achievable changes. The method uses the instance of concern to generate high-precision explanations and then applies a secondary method to find achievable minimally-contrastive counterfactual explanations (AMCC) while limiting the search to modifications that satisfy domain-specific constraints. Using a widely recognized dataset, we evaluated the classification task to ascertain the frequency and time required to identify successful counterfactuals. For a 90% accurate classifier, our algorithm identified AMCC explanations in 47% of cases (38 of 81), with an average discovery time of 80 ms. These findings verify the algorithm’s efficiency in swiftly producing AMCC explanations, suitable for real-time systems. The AMCC method enhances the transparency of Black Box AI models, aiding individuals in evaluating remedial strategies or assessing potential outcomes.
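The AMCC algorithm itself is not reproduced in this listing, but the following Python sketch illustrates the general idea the abstract describes: try the smallest feasible feature modifications first, restrict candidates to domain-permitted changes, and stop as soon as the black-box prediction flips. All names (`predict`, `allowed_changes`) and the toy model are hypothetical.

```python
from itertools import combinations, product

def find_amcc(instance, predict, allowed_changes, max_features=2):
    """Search for an achievable, minimally contrastive counterfactual.

    instance        -- dict of feature name -> current value
    predict         -- black-box classifier: dict -> class label
    allowed_changes -- dict of feature name -> list of feasible alternative values
                       (encodes domain-specific constraints)
    Returns a modified copy of `instance` with a different prediction,
    changing as few features as possible, or None if none is found.
    """
    original_class = predict(instance)
    # Try single-feature changes first, then pairs, etc. (minimality).
    for k in range(1, max_features + 1):
        for features in combinations(allowed_changes, k):
            for values in product(*(allowed_changes[f] for f in features)):
                candidate = dict(instance)
                candidate.update(zip(features, values))
                if predict(candidate) != original_class:
                    return candidate
    return None

# Toy usage with a hand-written "model": loan approved iff income - debt > 20.
predict = lambda x: "approved" if x["income"] - x["debt"] > 20 else "rejected"
instance = {"income": 50, "debt": 40, "age": 30}
allowed = {"income": [55, 60, 70], "debt": [35, 30, 20]}   # age is immutable
print(find_amcc(instance, predict, allowed))
```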
2

Lai, Chengen, Shengli Song, Shiqi Meng, Jingyang Li, Sitong Yan, and Guangneng Hu. "Towards More Faithful Natural Language Explanation Using Multi-Level Contrastive Learning in VQA." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 2849–57. http://dx.doi.org/10.1609/aaai.v38i3.28065.

Full text of the source
Abstract:
Natural language explanation in visual question answer (VQA-NLE) aims to explain the decision-making process of models by generating natural language sentences to increase users' trust in the black-box systems. Existing post-hoc methods have achieved significant progress in obtaining a plausible explanation. However, such post-hoc explanations are not always aligned with human logical inference, suffering from the issues on: 1) Deductive unsatisfiability, the generated explanations do not logically lead to the answer; 2) Factual inconsistency, the model falsifies its counterfactual explanation for answers without considering the facts in images; and 3) Semantic perturbation insensitivity, the model can not recognize the semantic changes caused by small perturbations. These problems reduce the faithfulness of explanations generated by models. To address the above issues, we propose a novel self-supervised Multi-level Contrastive Learning based natural language Explanation model (MCLE) for VQA with semantic-level, image-level, and instance-level factual and counterfactual samples. MCLE extracts discriminative features and aligns the feature spaces from explanations with visual question and answer to generate more consistent explanations. We conduct extensive experiments, ablation analysis, and case study to demonstrate the effectiveness of our method on two VQA-NLE benchmarks.
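MCLE is not reproduced here; the sketch below only shows the basic contrastive ingredient such methods build on, an InfoNCE-style loss that pulls an explanation embedding toward its factual (positive) counterpart and pushes it away from counterfactual (negative) samples. Embedding sizes and names are illustrative assumptions.

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss for one anchor embedding.

    anchor, positive -- 1-D embedding vectors (e.g. an explanation and the
                        matching factual question/answer representation)
    negatives        -- 2-D array, one counterfactual embedding per row
    Returns a scalar loss; lower means the anchor is closer to the positive
    than to the negatives.
    """
    def cos(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / temperature
    logits -= logits.max()                      # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                    # positive sits at index 0

rng = np.random.default_rng(0)
anchor, positive = rng.normal(size=16), rng.normal(size=16)
negatives = rng.normal(size=(5, 16))
print(float(info_nce(anchor, positive + 0.1 * anchor, negatives)))
```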
3

Sokol, Kacper, and Peter Flach. "Desiderata for Interpretability: Explaining Decision Tree Predictions with Counterfactuals." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 10035–36. http://dx.doi.org/10.1609/aaai.v33i01.330110035.

Full text of the source
Abstract:
Explanations in machine learning come in many forms, but a consensus regarding their desired properties is still emerging. In our work we collect and organise these explainability desiderata and discuss how they can be used to systematically evaluate properties and quality of an explainable system using the case of class-contrastive counterfactual statements. This leads us to propose a novel method for explaining predictions of a decision tree with counterfactuals. We show that our model-specific approach exploits all the theoretical advantages of counterfactual explanations, hence improves decision tree interpretability by decoupling the quality of the interpretation from the depth and width of the tree.
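The paper's model-specific method is not reproduced here; as a rough illustration of class-contrastive counterfactual statements for a tree, the sketch below enumerates root-to-leaf paths that end in a different class and lists the conditions an instance would still have to satisfy to reach them. The toy tree and feature names are invented.

```python
# A toy decision tree as nested tuples:
# (feature, threshold, left_subtree, right_subtree) or a class label at a leaf.
TREE = ("income", 30,
        ("savings", 10, "reject", "accept"),
        "accept")

def paths(node, conditions=()):
    """Enumerate (conditions, leaf_class) pairs for every root-to-leaf path."""
    if isinstance(node, str):                       # leaf
        yield conditions, node
        return
    feature, threshold, left, right = node
    yield from paths(left, conditions + ((feature, "<=", threshold),))
    yield from paths(right, conditions + ((feature, ">", threshold),))

def satisfied(instance, condition):
    feature, op, threshold = condition
    value = instance[feature]
    return value <= threshold if op == "<=" else value > threshold

def counterfactuals(instance, predicted_class):
    """List the paths leading to a different class, i.e. candidate
    class-contrastive counterfactual statements."""
    result = []
    for conditions, leaf_class in paths(TREE):
        if leaf_class != predicted_class:
            unmet = [c for c in conditions if not satisfied(instance, c)]
            result.append((leaf_class, unmet))      # what would have to change
    return result

instance = {"income": 20, "savings": 5}             # predicted "reject"
for target, changes in counterfactuals(instance, "reject"):
    print(f"to get '{target}', change:", changes)
```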
4

Kenny, Eoin M., and Mark T. Keane. "On Generating Plausible Counterfactual and Semi-Factual Explanations for Deep Learning." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 13 (May 18, 2021): 11575–85. http://dx.doi.org/10.1609/aaai.v35i13.17377.

Full text of the source
Abstract:
There is a growing concern that the recent progress made in AI, especially regarding the predictive competence of deep learning models, will be undermined by a failure to properly explain their operation and outputs. In response to this disquiet, counterfactual explanations have become very popular in eXplainable AI (XAI) due to their asserted computational, psychological, and legal benefits. In contrast however, semi-factuals (which appear to be equally useful) have surprisingly received no attention. Most counterfactual methods address tabular rather than image data, partly because the non-discrete nature of images makes good counterfactuals difficult to define; indeed, generating plausible counterfactual images which lie on the data manifold is also problematic. This paper advances a novel method for generating plausible counterfactuals and semi-factuals for black-box CNN classifiers doing computer vision. The present method, called PlausIble Exceptionality-based Contrastive Explanations (PIECE), modifies all “exceptional” features in a test image to be “normal” from the perspective of the counterfactual class, to generate plausible counterfactual images. Two controlled experiments compare this method to others in the literature, showing that PIECE generates highly plausible counterfactuals (and the best semi-factuals) on several benchmark measures.
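PIECE operates on the latent features of a CNN; the numpy sketch below only mimics its core step in a simplified setting: flag features of the test instance that are statistically "exceptional" under the counterfactual class and replace them with that class's typical values. The statistics, threshold, and vector sizes are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def piece_like_edit(x, cf_mean, cf_std, z_threshold=2.0):
    """Return a modified feature vector in which every feature that is
    'exceptional' for the counterfactual class (|z-score| above threshold)
    is replaced by that class's typical (mean) value.

    x       -- latent feature vector of the test instance
    cf_mean -- per-feature mean of the counterfactual class
    cf_std  -- per-feature standard deviation of the counterfactual class
    """
    z = (x - cf_mean) / (cf_std + 1e-8)
    exceptional = np.abs(z) > z_threshold
    x_edited = np.where(exceptional, cf_mean, x)
    return x_edited, exceptional

rng = np.random.default_rng(1)
x = rng.normal(size=8)
cf_mean, cf_std = np.zeros(8), np.ones(8)
x[3] = 5.0                                   # an "exceptional" feature
edited, mask = piece_like_edit(x, cf_mean, cf_std)
print("changed features:", np.flatnonzero(mask))
```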
5

Zahedi, Zahra, Sailik Sengupta, and Subbarao Kambhampati. "‘Why Didn’t You Allocate This Task to Them?’ Negotiation-Aware Task Allocation and Contrastive Explanation Generation." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 9 (March 24, 2024): 10243–51. http://dx.doi.org/10.1609/aaai.v38i9.28890.

Full text of the source
Abstract:
In this work, we design an Artificially Intelligent Task Allocator (AITA) that proposes a task allocation for a team of humans. A key property of this allocation is that when an agent with imperfect knowledge (about their teammate's costs and/or the team's performance metric) contests the allocation with a counterfactual, a contrastive explanation can always be provided to showcase why the proposed allocation is better than the proposed counterfactual. For this, we consider a negotiation process that produces a negotiation-aware task allocation and, when contested, leverages a negotiation tree to provide a contrastive explanation. With human subject studies, we show that the proposed allocation indeed appears fair to a majority of participants and, when not, the explanations generated are judged as convincing and easy to comprehend.
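AITA's negotiation trees are not reproduced here; the toy sketch below only shows the shape of the resulting contrastive explanation: compare the team cost of the proposed allocation with the cost of the contested counterfactual allocation and report why the proposal is no worse. Agents, tasks, and costs are invented.

```python
def allocation_cost(allocation, costs):
    """Total team cost of an allocation {task: agent} given per-(agent, task) costs."""
    return sum(costs[(agent, task)] for task, agent in allocation.items())

def contrastive_explanation(proposed, counterfactual, costs):
    """Explain why the proposed allocation beats the contested counterfactual."""
    c_prop = allocation_cost(proposed, costs)
    c_cf = allocation_cost(counterfactual, costs)
    if c_prop <= c_cf:
        return (f"The proposed allocation costs {c_prop}, while the allocation "
                f"you suggested would cost {c_cf}; the proposal is therefore "
                f"at least as good for the team.")
    return "The counterfactual is cheaper; the proposal should be revised."

costs = {("ann", "t1"): 2, ("ann", "t2"): 5, ("bob", "t1"): 4, ("bob", "t2"): 3}
proposed       = {"t1": "ann", "t2": "bob"}   # cost 2 + 3 = 5
counterfactual = {"t1": "bob", "t2": "ann"}   # cost 4 + 5 = 9
print(contrastive_explanation(proposed, counterfactual, costs))
```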
6

Niiniluoto, Ilkka. "Explanation by Idealized Theories." Kairos. Journal of Philosophy & Science 20, no. 1 (June 1, 2018): 43–63. http://dx.doi.org/10.2478/kjps-2018-0003.

Full text of the source
Abstract:
The use of idealized scientific theories in explanations of empirical facts and regularities is problematic in two ways: they don’t satisfy the condition that the explanans is true, and they may fail to entail the explanandum. An attempt to deal with the latter problem was proposed by Hempel and Popper with their notion of approximate explanation. A more systematic perspective on idealized explanations was developed with the method of idealization and concretization by the Poznan school (Nowak, Krajewski) in the 1970s. If idealizational laws are treated as counterfactual conditionals, they can be true or truthlike, and the concretizations of such laws may increase their degree of truthlikeness. By replacing Hempel’s truth requirement with the condition that an explanatory theory is truthlike one can distinguish several important types of approximate, corrective, and contrastive explanations by idealized theories. The conclusions have important consequences for the debates about scientific realism and anti-realism.
7

Darwiche, Adnan, and Chunxi Ji. "On the Computation of Necessary and Sufficient Explanations." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 5 (June 28, 2022): 5582–91. http://dx.doi.org/10.1609/aaai.v36i5.20498.

Full text of the source
Abstract:
The complete reason behind a decision is a Boolean formula that characterizes why the decision was made. This recently introduced notion has a number of applications, which include generating explanations, detecting decision bias and evaluating counterfactual queries. Prime implicants of the complete reason are known as sufficient reasons for the decision and they correspond to what is known as PI explanations and abductive explanations. In this paper, we refer to the prime implicates of a complete reason as necessary reasons for the decision. We justify this terminology semantically and show that necessary reasons correspond to what is known as contrastive explanations. We also study the computation of complete reasons for multi-class decision trees and graphs with nominal and numeric features for which we derive efficient, closed-form complete reasons. We further investigate the computation of shortest necessary and sufficient reasons for a broad class of complete reasons, which include the derived closed forms and the complete reasons for Sentential Decision Diagrams (SDDs). We provide an algorithm which can enumerate their shortest necessary reasons in output polynomial time. Enumerating shortest sufficient reasons for this class of complete reasons is hard even for a single reason. For this problem, we provide an algorithm that appears to be quite efficient as we show empirically.
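As a concrete, if naive, illustration of the terminology used above, the sketch below brute-forces the prime implicants (sufficient reasons) of a tiny Boolean "complete reason"; the paper's own algorithms target decision trees, decision graphs, and SDDs and are far more efficient. The formula and variable names are invented.

```python
from itertools import combinations, product

VARS = ["a", "b", "c"]
# A toy "complete reason": f = (a AND b) OR (NOT c)
f = lambda a, b, c: (a and b) or (not c)

def is_implicant(term):
    """term: dict var -> bool. Implicant iff every completion satisfies f."""
    free = [v for v in VARS if v not in term]
    for values in product([False, True], repeat=len(free)):
        assignment = dict(term, **dict(zip(free, values)))
        if not f(**assignment):
            return False
    return True

def prime_implicants():
    """Brute force: an implicant is prime if no proper sub-term is an implicant."""
    implicants = []
    for k in range(1, len(VARS) + 1):
        for subset in combinations(VARS, k):
            for values in product([False, True], repeat=k):
                term = dict(zip(subset, values))
                if is_implicant(term) and not any(
                        set(p.items()) < set(term.items()) for p in implicants):
                    implicants.append(term)
    return implicants

# Each printed term is a "sufficient reason": fixing just these literals
# guarantees the decision, no matter how the other features are set.
print(prime_implicants())
```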
8

Crawford, Beverly. "Germany's Future Political Challenges: Imagine that The New Yorker Profiled the German Chancellor in 2015." German Politics and Society 23, no. 4 (December 1, 2005): 69–87. http://dx.doi.org/10.3167/gps.2005.230404.

Full text of the source
Abstract:
What follows is a fictitious scenario, a "thought experiment," meant to project a particular future for Germany if certain assumptions hold. Scenarios are hypotheses that rest on a set of assumptions and one or two "wild cards." They can reveal forces of change that might be otherwise hidden, discard those that are not plausible, and describe the future of trends that are relatively certain. Indeed, scenarios create a particular future in the same way that counterfactual methods create a different past. Counterfactual methods predict how events would have unfolded had a few elements of the story been changed, with a focus on varying conditions that seem important and that can be manipulated. For instance, to explore the effects of military factors on the likelihood of war, one might ask: "how would pre 1914 diplomacy have evolved if the leaders of Europe had not believed that conquest was easy?" Or to explore the importance of broad social and political factors in causing Nazi aggression: "How might the 1930s have unfolded had Hitler died in 1932?" The greater the impact of the posited changes, the more important the actual factors that were manipulated. Assuming that the structure of explanation and prediction are the same, scenario writing pursues a similar method. But, instead of seeking alternative explanations for the past, scenarios project relative certainties and then manipulate the important but uncertain factors, to create a plausible story about the future.
9

Woodcock, Claire, Brent Mittelstadt, Dan Busbridge, and Grant Blank. "The Impact of Explanations on Layperson Trust in Artificial Intelligence–Driven Symptom Checker Apps: Experimental Study." Journal of Medical Internet Research 23, no. 11 (November 3, 2021): e29386. http://dx.doi.org/10.2196/29386.

Full text of the source
Abstract:
Background Artificial intelligence (AI)–driven symptom checkers are available to millions of users globally and are advocated as a tool to deliver health care more efficiently. To achieve the promoted benefits of a symptom checker, laypeople must trust and subsequently follow its instructions. In AI, explanations are seen as a tool to communicate the rationale behind black-box decisions to encourage trust and adoption. However, the effectiveness of the types of explanations used in AI-driven symptom checkers has not yet been studied. Explanations can follow many forms, including why-explanations and how-explanations. Social theories suggest that why-explanations are better at communicating knowledge and cultivating trust among laypeople. Objective The aim of this study is to ascertain whether explanations provided by a symptom checker affect explanatory trust among laypeople and whether this trust is impacted by their existing knowledge of disease. Methods A cross-sectional survey of 750 healthy participants was conducted. The participants were shown a video of a chatbot simulation that resulted in the diagnosis of either a migraine or temporal arteritis, chosen for their differing levels of epidemiological prevalence. These diagnoses were accompanied by one of four types of explanations. Each explanation type was selected either because of its current use in symptom checkers or because it was informed by theories of contrastive explanation. Exploratory factor analysis of participants’ responses followed by comparison-of-means tests were used to evaluate group differences in trust. Results Depending on the treatment group, two or three variables were generated, reflecting the prior knowledge and subsequent mental model that the participants held. When varying explanation type by disease, migraine was found to be nonsignificant (P=.65) and temporal arteritis, marginally significant (P=.09). Varying disease by explanation type resulted in statistical significance for input influence (P=.001), social proof (P=.049), and no explanation (P=.006), with counterfactual explanation (P=.053). The results suggest that trust in explanations is significantly affected by the disease being explained. When laypeople have existing knowledge of a disease, explanations have little impact on trust. Where the need for information is greater, different explanation types engender significantly different levels of trust. These results indicate that to be successful, symptom checkers need to tailor explanations to each user’s specific question and discount the diseases that they may also be aware of. Conclusions System builders developing explanations for symptom-checking apps should consider the recipient’s knowledge of a disease and tailor explanations to each user’s specific need. Effort should be placed on generating explanations that are personalized to each user of a symptom checker to fully discount the diseases that they may be aware of and to close their information gap.
10

Rahimi, Saeed, Antoni B. Moore, and Peter A. Whigham. "Beyond Objects in Space-Time: Towards a Movement Analysis Framework with ‘How’ and ‘Why’ Elements." ISPRS International Journal of Geo-Information 10, no. 3 (March 22, 2021): 190. http://dx.doi.org/10.3390/ijgi10030190.

Full text of the source
Abstract:
Current spatiotemporal data has facilitated movement studies to shift objectives from descriptive models to explanations of the underlying causes of movement. From both a practical and theoretical standpoint, progress in developing approaches for these explanations should be founded on a conceptual model. This paper presents such a model in which three conceptual levels of abstraction are proposed to frame an agent-based representation of movement decision-making processes: ‘attribute,’ ‘actor,’ and ‘autonomous agent’. These in combination with three temporal, spatial, and spatiotemporal general forms of observations distinguish nine (3 × 3) representation typologies of movement data within the agent framework. Thirdly, there are three levels of cognitive reasoning: ‘association,’ ‘intervention,’ and ‘counterfactual’. This makes for 27 possible types of operation embedded in a conceptual cube with the level of abstraction, type of observation, and degree of cognitive reasoning forming the three axes. The conceptual model is an arena where movement queries and the statement of relevant objectives takes place. An example implementation of a tightly constrained spatiotemporal scenario to ground the agent-structure was summarised. The platform has been well-defined so as to accommodate different tools and techniques to drive causal inference in computational movement analysis as an immediate future step.
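The 3 × 3 × 3 structure described in the abstract can be made explicit by enumerating the cube directly; the labels below are taken from the abstract, while the enumeration itself is merely illustrative.

```python
from itertools import product

abstraction = ["attribute", "actor", "autonomous agent"]
observation = ["temporal", "spatial", "spatiotemporal"]
reasoning   = ["association", "intervention", "counterfactual"]

# The 3 x 3 x 3 = 27 cells of the conceptual cube framing movement queries.
cube = list(product(abstraction, observation, reasoning))
print(len(cube))          # 27
print(cube[0])            # ('attribute', 'temporal', 'association')
```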
11

Sokol, Kacper, and Peter Flach. "One Explanation Does Not Fit All." KI - Künstliche Intelligenz 34, no. 2 (February 4, 2020): 235–50. http://dx.doi.org/10.1007/s13218-020-00637-y.

Full text of the source
Abstract:
Abstract The need for transparency of predictive systems based on Machine Learning algorithms arises as a consequence of their ever-increasing proliferation in the industry. Whenever black-box algorithmic predictions influence human affairs, the inner workings of these algorithms should be scrutinised and their decisions explained to the relevant stakeholders, including the system engineers, the system’s operators and the individuals whose case is being decided. While a variety of interpretability and explainability methods is available, none of them is a panacea that can satisfy all diverse expectations and competing objectives that might be required by the parties involved. We address this challenge in this paper by discussing the promises of Interactive Machine Learning for improved transparency of black-box systems using the example of contrastive explanations—a state-of-the-art approach to Interpretable Machine Learning. Specifically, we show how to personalise counterfactual explanations by interactively adjusting their conditional statements and extract additional explanations by asking follow-up “What if?” questions. Our experience in building, deploying and presenting this type of system allowed us to list desired properties as well as potential limitations, which can be used to guide the development of interactive explainers. While customising the medium of interaction, i.e., the user interface comprising of various communication channels, may give an impression of personalisation, we argue that adjusting the explanation itself and its content is more important. To this end, properties such as breadth, scope, context, purpose and target of the explanation have to be considered, in addition to explicitly informing the explainee about its limitations and caveats. Furthermore, we discuss the challenges of mirroring the explainee’s mental model, which is the main building block of intelligible human–machine interactions. We also deliberate on the risks of allowing the explainee to freely manipulate the explanations and thereby extracting information about the underlying predictive model, which might be leveraged by malicious actors to steal or game the model. Finally, building an end-to-end interactive explainability system is a challenging engineering task; unless the main goal is its deployment, we recommend “Wizard of Oz” studies as a proxy for testing and evaluating standalone interactive explainability algorithms.
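The interactive explainer discussed here is not reproduced; the sketch below illustrates just one ingredient, personalising counterfactual explanations by letting the explainee declare features that must not appear in the conditional part (a "What if I cannot change X?" follow-up). The candidate counterfactuals and feature names are hypothetical.

```python
def personalise(counterfactuals, frozen_features):
    """Keep only counterfactuals that do not touch features the user
    declared off-limits (a crude form of interactive personalisation).

    counterfactuals -- list of dicts mapping feature -> suggested new value
    frozen_features -- set of features the explainee cannot or will not change
    """
    return [cf for cf in counterfactuals
            if not frozen_features.intersection(cf)]

candidates = [{"income": 70},             # "had your income been 70 ..."
              {"age": 65},                # "had you been 65 ..."
              {"debt": 10, "age": 40}]
# A follow-up "What if I cannot change my age?" question:
print(personalise(candidates, frozen_features={"age"}))
```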
12

Li, Hanzhe, Jingjing Gu, Xinjiang Lu, Dazhong Shen, Yuting Liu, YaNan Deng, Guoliang Shi, and Hui Xiong. "Beyond Relevance: Factor-level Causal Explanation for User Travel Decisions with Counterfactual Data Augmentation." ACM Transactions on Information Systems, March 22, 2024. http://dx.doi.org/10.1145/3653673.

Full text of the source
Abstract:
Point-of-Interest (POI) recommendation, an important research hotspot in the field of urban computing, plays a crucial role in urban construction. While understanding the process of users’ travel decisions and exploring the causality of POI choosing is not easy due to the complex and diverse influencing factors in urban travel scenarios. Moreover, the spurious explanations caused by severe data sparsity, i.e., misrepresenting universal relevance as causality, may also hinder us from understanding users’ travel decisions. To this end, in this paper, we propose a factor-level causal explanation generation framework based on counterfactual data augmentation for user travel decisions, named Factor-level Causal Explanation for User Travel Decisions (FCE-UTD), which can distinguish between true and false causal factors and generate true causal explanations. Specifically, we first assume that a user decision is composed of a set of several different factors. Then, by preserving the user decision structure with a joint counterfactual contrastive learning paradigm, we learn the representation of factors and detect the relevant factors. Next, we further identify true causal factors by constructing counterfactual decisions with a counterfactual representation generator, in particular, it can not only augment the dataset and mitigate the sparsity but also contribute to clarifying the causal factors from other false causal factors that may cause spurious explanations. Besides, a causal dependency learner is proposed to identify causal factors for each decision by learning causal dependency scores. Extensive experiments conducted on three real-world datasets demonstrate the superiority of our approach in terms of check-in rate, fidelity, and downstream tasks under different behavior scenarios. The extra case studies also demonstrate the ability of FCE-UTD to generate causal explanations in POI choosing.
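FCE-UTD's counterfactual representation generator and causal dependency learner are not reproduced; the sketch below only conveys the underlying intuition of separating causal from merely relevant factors: replace one decision factor at a time with a counterfactual value and keep the factors whose replacement flips the predicted outcome. The toy model and factors are invented.

```python
def causal_factors(decision, alternatives, predict):
    """Flag factors whose replacement flips the predicted outcome, a crude
    analogue of separating causal factors from merely relevant ones.

    decision     -- dict factor name -> value describing a travel decision
    alternatives -- dict factor name -> counterfactual replacement value
    predict      -- black-box model: decision dict -> outcome (e.g. check-in or not)
    """
    baseline = predict(decision)
    causal = []
    for factor, alt_value in alternatives.items():
        counterfactual = dict(decision, **{factor: alt_value})
        if predict(counterfactual) != baseline:
            causal.append(factor)
    return causal

# Toy model: the user checks in only at cheap places close to home.
predict = lambda d: d["distance_km"] < 3 and d["price"] == "low"
decision = {"distance_km": 1, "price": "low", "weather": "rainy"}
alternatives = {"distance_km": 10, "price": "high", "weather": "sunny"}
print(causal_factors(decision, alternatives, predict))   # ['distance_km', 'price']
```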
13

Banerjee, Soumya, Pietro Lio, Peter B. Jones, and Rudolf N. Cardinal. "A class-contrastive human-interpretable machine learning approach to predict mortality in severe mental illness." npj Schizophrenia 7, no. 1 (December 2021). http://dx.doi.org/10.1038/s41537-021-00191-y.

Full text of the source
Abstract:
Machine learning (ML), one aspect of artificial intelligence (AI), involves computer algorithms that train themselves. They have been widely applied in the healthcare domain. However, many trained ML algorithms operate as ‘black boxes’, producing a prediction from input data without a clear explanation of their workings. Non-transparent predictions are of limited utility in many clinical domains, where decisions must be justifiable. Here, we apply class-contrastive counterfactual reasoning to ML to demonstrate how specific changes in inputs lead to different predictions of mortality in people with severe mental illness (SMI), a major public health challenge. We produce predictions accompanied by visual and textual explanations as to how the prediction would have differed given specific changes to the input. We apply it to routinely collected data from a mental health secondary care provider in patients with schizophrenia. Using a data structuring framework informed by clinical knowledge, we captured information on physical health, mental health, and social predisposing factors. We then trained an ML algorithm and other statistical learning techniques to predict the risk of death. The ML algorithm predicted mortality with an area under receiver operating characteristic curve (AUROC) of 0.80 (95% confidence intervals [0.78, 0.82]). We used class-contrastive analysis to produce explanations for the model predictions. We outline the scenarios in which class-contrastive analysis is likely to be successful in producing explanations for model predictions. Our aim is not to advocate for a particular model but show an application of the class-contrastive analysis technique to electronic healthcare record data for a disease of public health significance. In patients with schizophrenia, our work suggests that use or prescription of medications like antidepressants was associated with lower risk of death. Abuse of alcohol/drugs and a diagnosis of delirium were associated with higher risk of death. Our ML models highlight the role of co-morbidities in determining mortality in patients with schizophrenia and the need to manage co-morbidities in these patients. We hope that some of these bio-social factors can be targeted therapeutically by either patient-level or service-level interventions. Our approach combines clinical knowledge, health data, and statistical learning, to make predictions interpretable to clinicians using class-contrastive reasoning. This is a step towards interpretable AI in the management of patients with schizophrenia and potentially other diseases.
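The study's clinical models are not reproduced; the sketch below shows the generic shape of class-contrastive reasoning on tabular inputs: flip one binary feature at a time and report how the predicted risk would have changed. The features, weights, and risk function are invented for illustration and carry no clinical meaning.

```python
def class_contrastive_table(patient, predict_risk):
    """For each binary feature, report the change in predicted risk if that
    feature alone had taken the opposite value."""
    baseline = predict_risk(patient)
    rows = []
    for feature, value in patient.items():
        flipped = dict(patient, **{feature: 1 - value})
        rows.append((feature, predict_risk(flipped) - baseline))
    return baseline, rows

# Toy risk score, NOT a clinical model: weights are invented.
weights = {"alcohol_misuse": 0.20, "delirium": 0.15, "antidepressant_rx": -0.10}
predict_risk = lambda p: 0.1 + sum(weights[f] * v for f, v in p.items())

patient = {"alcohol_misuse": 1, "delirium": 0, "antidepressant_rx": 0}
baseline, rows = class_contrastive_table(patient, predict_risk)
print(f"baseline risk: {baseline:.2f}")
for feature, delta in rows:
    print(f"if {feature} were different: {delta:+.2f}")
```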
14

White, Adam, Kwun Ho Ngan, James Phelan, Kevin Ryan, Saman Sadeghi Afgeh, Constantino Carlos Reyes-Aldasoro, and Artur d’Avila Garcez. "Contrastive counterfactual visual explanations with overdetermination." Machine Learning, May 17, 2023. http://dx.doi.org/10.1007/s10994-023-06333-w.

Full text of the source
Abstract:
A novel explainable AI method called CLEAR Image is introduced in this paper. CLEAR Image is based on the view that a satisfactory explanation should be contrastive, counterfactual and measurable. CLEAR Image seeks to explain an image’s classification probability by contrasting the image with a representative contrast image, such as an auto-generated image obtained via adversarial learning. This produces a salient segmentation and a way of using image perturbations to calculate each segment’s importance. CLEAR Image then uses regression to determine a causal equation describing a classifier’s local input–output behaviour. Counterfactuals are also identified that are supported by the causal equation. Finally, CLEAR Image measures the fidelity of its explanation against the classifier. CLEAR Image was successfully applied to a medical imaging case study where it outperformed methods such as Grad-CAM and LIME by an average of 27% using a novel pointing game metric. CLEAR Image also identifies cases of causal overdetermination, where there are multiple segments in an image that are sufficient individually to cause the classification probability to be close to one.
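CLEAR Image's GAN-generated contrast images and segmentation are not reproduced; the sketch below only illustrates the perturb-and-regress step: toggle image segments on and off, query the classifier, and fit a linear surrogate whose coefficients stand in for segment importances in a CLEAR-style causal equation. The classifier and segment count are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_segments = 6

# Toy black-box classifier: the class probability depends mainly on
# segments 1 and 4 being present (invented for illustration).
def classifier(mask):
    return 0.05 + 0.5 * mask[1] + 0.4 * mask[4] + 0.02 * rng.normal()

# Sample random on/off perturbations of the segments and record the output.
masks = rng.integers(0, 2, size=(200, n_segments)).astype(float)
probs = np.array([classifier(m) for m in masks])

# Fit a linear surrogate (least squares); its coefficients play the role of
# segment importances in a CLEAR-style causal equation.
X = np.hstack([masks, np.ones((len(masks), 1))])        # add intercept column
coef, *_ = np.linalg.lstsq(X, probs, rcond=None)
for i, c in enumerate(coef[:-1]):
    print(f"segment {i}: importance {c:+.2f}")
```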
15

Kuhl, Ulrike, André Artelt, and Barbara Hammer. "Let's go to the Alien Zoo: Introducing an experimental framework to study usability of counterfactual explanations for machine learning." Frontiers in Computer Science 5 (March 21, 2023). http://dx.doi.org/10.3389/fcomp.2023.1087929.

Full text of the source
Abstract:
Introduction To foster usefulness and accountability of machine learning (ML), it is essential to explain a model's decisions in addition to evaluating its performance. Accordingly, the field of explainable artificial intelligence (XAI) has resurfaced as a topic of active research, offering approaches to address the “how” and “why” of automated decision-making. Within this domain, counterfactual explanations (CFEs) have gained considerable traction as a psychologically grounded approach to generate post-hoc explanations. To do so, CFEs highlight what changes to a model's input would have changed its prediction in a particular way. However, despite the introduction of numerous CFE approaches, their usability has yet to be thoroughly validated at the human level. Methods To advance the field of XAI, we introduce the Alien Zoo, an engaging, web-based and game-inspired experimental framework. The Alien Zoo provides the means to evaluate usability of CFEs for gaining new knowledge from an automated system, targeting novice users in a domain-general context. As a proof of concept, we demonstrate the practical efficacy and feasibility of this approach in a user study. Results Our results suggest the efficacy of the Alien Zoo framework for empirically investigating aspects of counterfactual explanations in a game-type scenario and a low-knowledge domain. The proof of concept study reveals that users benefit from receiving CFEs compared to no explanation, both in terms of objective performance in the proposed iterative learning task, and subjective usability. Discussion With this work, we aim to equip research groups and practitioners with the means to easily run controlled and well-powered user studies to complement their otherwise often more technology-oriented work. Thus, in the interest of reproducible research, we provide the entire code, together with the underlying models and user data: https://github.com/ukuhl/IntroAlienZoo.
16

Burns, Alex. "Doubting the Global War on Terror." M/C Journal 14, no. 1 (January 24, 2011). http://dx.doi.org/10.5204/mcj.338.

Full text of the source
Abstract:
Photograph by Gonzalo Echeverria (2010)Declaring War Soon after Al Qaeda’s terrorist attacks on 11 September 2001, the Bush Administration described its new grand strategy: the “Global War on Terror”. This underpinned the subsequent counter-insurgency in Afghanistan and the United States invasion of Iraq in March 2003. Media pundits quickly applied the Global War on Terror label to the Madrid, Bali and London bombings, to convey how Al Qaeda’s terrorism had gone transnational. Meanwhile, international relations scholars debated the extent to which September 11 had changed the international system (Brenner; Mann 303). American intellectuals adopted several variations of the Global War on Terror in what initially felt like a transitional period of US foreign policy (Burns). Walter Laqueur suggested Al Qaeda was engaged in a “cosmological” and perpetual war. Paul Berman likened Al Qaeda and militant Islam to the past ideological battles against communism and fascism (Heilbrunn 248). In a widely cited article, neoconservative thinker Norman Podhoretz suggested the United States faced “World War IV”, which had three interlocking drivers: Al Qaeda and trans-national terrorism; political Islam as the West’s existential enemy; and nuclear proliferation to ‘rogue’ countries and non-state actors (Friedman 3). Podhoretz’s tone reflected a revival of his earlier Cold War politics and critique of the New Left (Friedman 148-149; Halper and Clarke 56; Heilbrunn 210). These stances attracted widespread support. For instance, the United States Marine Corp recalibrated its mission to fight a long war against “World War IV-like” enemies. Yet these stances left the United States unprepared as the combat situations in Afghanistan and Iraq worsened (Ricks; Ferguson; Filkins). Neoconservative ideals for Iraq “regime change” to transform the Middle East failed to deal with other security problems such as Pakistan’s Musharraf regime (Dorrien 110; Halper and Clarke 210-211; Friedman 121, 223; Heilbrunn 252). The Manichean and open-ended framing became a self-fulfilling prophecy for insurgents, jihadists, and militias. The Bush Administration quietly abandoned the Global War on Terror in July 2005. Widespread support had given way to policymaker doubt. Why did so many intellectuals and strategists embrace the Global War on Terror as the best possible “grand strategy” perspective of a post-September 11 world? Why was there so little doubt of this worldview? This is a debate with roots as old as the Sceptics versus the Sophists. Explanations usually focus on the Bush Administration’s “Vulcans” war cabinet: Vice President Dick Cheney, Secretary of Defense Donald Rumsfield, and National Security Advisor Condoleezza Rice, who later became Secretary of State (Mann xv-xvi). The “Vulcans” were named after the Roman god Vulcan because Rice’s hometown Birmingham, Alabama, had “a mammoth fifty-six foot statue . . . [in] homage to the city’s steel industry” (Mann x) and the name stuck. Alternatively, explanations focus on how neoconservative thinkers shaped the intellectual climate after September 11, in a receptive media climate. Biographers suggest that “neoconservatism had become an echo chamber” (Heilbrunn 242) with its own media outlets, pundits, and think-tanks such as the American Enterprise Institute and Project for a New America. Neoconservatism briefly flourished in Washington DC until Iraq’s sectarian violence discredited the “Vulcans” and neoconservative strategists like Paul Wolfowitz (Friedman; Ferguson). 
The neoconservatives' combination of September 11’s aftermath with strongly argued historical analogies was initially convincing. They conferred with scholars such as Bernard Lewis, Samuel P. Huntington and Victor Davis Hanson to construct classicist historical narratives and to explain cultural differences. However, the history of the decade after September 11 also contains mis-steps and mistakes which make it a series of contingent decisions (Ferguson; Bergen). One way to analyse these contingent decisions is to pose “what if?” counterfactuals, or feasible alternatives to historical events (Lebow). For instance, what if September 11 had been a chemical and biological weapons attack? (Mann 317). Appendix 1 includes a range of alternative possibilities and “minimal rewrites” or slight variations on the historical events which occurred. Collectively, these counterfactuals suggest the role of agency, chance, luck, and the juxtaposition of better and worse outcomes. They pose challenges to the classicist interpretation adopted soon after September 11 to justify “World War IV” (Podhoretz). A ‘Two-Track’ Process for ‘World War IV’ After the September 11 attacks, I think an overlapping two-track process occurred with the “Vulcans” cabinet, neoconservative advisers, and two “echo chambers”: neoconservative think-tanks and the post-September 11 media. Crucially, Bush’s “Vulcans” war cabinet succeeded in gaining civilian control of the United States war decision process. Although successful in initiating the 2003 Iraq War this civilian control created a deeper crisis in US civil-military relations (Stevenson; Morgan). The “Vulcans” relied on “politicised” intelligence such as a United Kingdom intelligence report on Iraq’s weapons development program. The report enabled “a climate of undifferentiated fear to arise” because its public version did not distinguish between chemical, biological, radiological or nuclear weapons (Halper and Clarke, 210). The cautious 2003 National Intelligence Estimates (NIE) report on Iraq was only released in a strongly edited form. For instance, the US Department of Energy had expressed doubts about claims that Iraq had approached Niger for uranium, and was using aluminium tubes for biological and chemical weapons development. Meanwhile, the post-September 11 media had become a second “echo chamber” (Halper and Clarke 194-196) which amplified neoconservative arguments. Berman, Laqueur, Podhoretz and others who framed the intellectual climate were “risk entrepreneurs” (Mueller 41-43) that supported the “World War IV” vision. The media also engaged in aggressive “flak” campaigns (Herman and Chomsky 26-28; Mueller 39-42) designed to limit debate and to stress foreign policy stances and themes which supported the Bush Administration. When former Central Intelligence Agency director James Woolsey’s claimed that Al Qaeda had close connections to Iraqi intelligence, this was promoted in several books, including Michael Ledeen’s War Against The Terror Masters, Stephen Hayes’ The Connection, and Laurie Mylroie’s Bush v. The Beltway; and in partisan media such as Fox News, NewsMax, and The Weekly Standard who each attacked the US State Department and the CIA (Dorrien 183; Hayes; Ledeen; Mylroie; Heilbrunn 237, 243-244; Mann 310). This was the media “echo chamber” at work. The group Accuracy in Media also campaigned successfully to ensure that US cable providers did not give Al Jazeera English access to US audiences (Barker). 
Cosmopolitan ideals seemed incompatible with what the “flak” groups desired. The two-track process converged on two now infamous speeches. US President Bush’s State of the Union Address on 29 January 2002, and US Secretary of State Colin Powell’s presentation to the United Nations on 5 February 2003. Bush’s speech included a line from neoconservative David Frumm about North Korea, Iraq and Iran as an “Axis of Evil” (Dorrien 158; Halper and Clarke 139-140; Mann 242, 317-321). Powell’s presentation to the United Nations included now-debunked threat assessments. In fact, Powell had altered the speech’s original draft by I. Lewis “Scooter” Libby, who was Cheney’s chief of staff (Dorrien 183-184). Powell claimed that Iraq had mobile biological weapons facilities, linked to Abu Musab al-Zarqawi. However, the International Atomic Energy Agency’s (IAEA) Mohamed El-Baradei, the Defense Intelligence Agency, the State Department, and the Institute for Science and International Security all strongly doubted this claim, as did international observers (Dorrien 184; Halper and Clarke 212-213; Mann 353-354). Yet this information was suppressed: attacked by “flak” or given little visible media coverage. Powell’s agenda included trying to rebuild an international coalition and to head off weather changes that would affect military operations in the Middle East (Mann 351). Both speeches used politicised variants of “weapons of mass destruction”, taken from the counterterrorism literature (Stern; Laqueur). Bush’s speech created an inflated geopolitical threat whilst Powell relied on flawed intelligence and scientific visuals to communicate a non-existent threat (Vogel). However, they had the intended effect on decision makers. US Under-Secretary of Defense, the neoconservative Paul Wolfowitz, later revealed to Vanity Fair that “weapons of mass destruction” was selected as an issue that all potential stakeholders could agree on (Wilkie 69). Perhaps the only remaining outlet was satire: Armando Iannucci’s 2009 film In The Loop parodied the diplomatic politics surrounding Powell’s speech and the civil-military tensions on the Iraq War’s eve. In the short term the two track process worked in heading off doubt. The “Vulcans” blocked important information on pre-war Iraq intelligence from reaching the media and the general public (Prados). Alternatively, they ignored area specialists and other experts, such as when Coalition Provisional Authority’s L. Paul Bremer ignored the US State Department’s fifteen volume ‘Future of Iraq’ project (Ferguson). Public “flak” and “risk entrepreneurs” mobilised a range of motivations from grief and revenge to historical memory and identity politics. This combination of private and public processes meant that although doubts were expressed, they could be contained through the dual echo chambers of neoconservative policymaking and the post-September 11 media. These factors enabled the “Vulcans” to proceed with their “regime change” plans despite strong public opposition from anti-war protestors. Expressing DoubtsMany experts and institutions expressed doubt about specific claims the Bush Administration made to support the 2003 Iraq War. This doubt came from three different and sometimes overlapping groups. Subject matter experts such as the IAEA’s Mohamed El-Baradei and weapons development scientists countered the UK intelligence report and Powell’s UN speech. However, they did not get the media coverage warranted due to “flak” and “echo chamber” dynamics. 
Others could challenge misleading historical analogies between insurgent Iraq and Nazi Germany, and yet not change the broader outcomes (Benjamin). Independent journalists one group who gained new information during the 1990-91 Gulf War: some entered Iraq from Kuwait and documented a more humanitarian side of the war to journalists embedded with US military units (Uyarra). Finally, there were dissenters from bureaucratic and institutional processes. In some cases, all three overlapped. In their separate analyses of the post-September 11 debate on intelligence “failure”, Zegart and Jervis point to a range of analytic misperceptions and institutional problems. However, the intelligence community is separated from policymakers such as the “Vulcans”. Compartmentalisation due to the “need to know” principle also means that doubting analysts can be blocked from releasing information. Andrew Wilkie discovered this when he resigned from Australia’s Office for National Assessments (ONA) as a transnational issues analyst. Wilkie questioned the pre-war assessments in Powell’s United Nations speech that were used to justify the 2003 Iraq War. Wilkie was then attacked publicly by Australian Prime Minister John Howard. This overshadowed a more important fact: both Howard and Wilkie knew that due to Australian legislation, Wilkie could not publicly comment on ONA intelligence, despite the invitation to do so. This barrier also prevented other intelligence analysts from responding to the “Vulcans”, and to “flak” and “echo chamber” dynamics in the media and neoconservative think-tanks. Many analysts knew that the excerpts released from the 2003 NIE on Iraq was highly edited (Prados). For example, Australian agencies such as the ONA, the Department of Foreign Affairs and Trade, and the Department of Defence knew this (Wilkie 98). However, analysts are trained not to interfere with policymakers, even when there are significant civil-military irregularities. Military officials who spoke out about pre-war planning against the “Vulcans” and their neoconservative supporters were silenced (Ricks; Ferguson). Greenlight Capital’s hedge fund manager David Einhorn illustrates in a different context what might happen if analysts did comment. Einhorn gave a speech to the Ira Sohn Conference on 15 May 2002 debunking the management of Allied Capital. Einhorn’s “short-selling” led to retaliation from Allied Capital, a Securities and Exchange Commission investigation, and growing evidence of potential fraud. If analysts adopted Einhorn’s tactics—combining rigorous analysis with targeted, public denunciation that is widely reported—then this may have short-circuited the “flak” and “echo chamber” effects prior to the 2003 Iraq War. The intelligence community usually tries to pre-empt such outcomes via contestation exercises and similar processes. This was the goal of the 2003 NIE on Iraq, despite the fact that the US Department of Energy which had the expertise was overruled by other agencies who expressed opinions not necessarily based on rigorous scientific and technical analysis (Prados; Vogel). In counterterrorism circles, similar disinformation arose about Aum Shinrikyo’s biological weapons research after its sarin gas attack on Tokyo’s subway system on 20 March 1995 (Leitenberg). Disinformation also arose regarding nuclear weapons proliferation to non-state actors in the 1990s (Stern). 
Interestingly, several of the “Vulcans” and neoconservatives had been involved in an earlier controversial contestation exercise: Team B in 1976. The Central Intelligence Agency (CIA) assembled three Team B groups in order to evaluate and forecast Soviet military capabilities. One group headed by historian Richard Pipes gave highly “alarmist” forecasts and then attacked a CIA NIE about the Soviets (Dorrien 50-56; Mueller 81). The neoconservatives adopted these same tactics to reframe the 2003 NIE from its position of caution, expressed by several intelligence agencies and experts, to belief that Iraq possessed a current, covert program to develop weapons of mass destruction (Prados). Alternatively, information may be leaked to the media to express doubt. “Non-attributable” background interviews to establishment journalists like Seymour Hersh and Bob Woodward achieved this. Wikileaks publisher Julian Assange has recently achieved notoriety due to US diplomatic cables from the SIPRNet network released from 28 November 2010 onwards. Supporters have favourably compared Assange to Daniel Ellsberg, the RAND researcher who leaked the Pentagon Papers (Ellsberg; Ehrlich and Goldsmith). Whilst Elsberg succeeded because a network of US national papers continued to print excerpts from the Pentagon Papers despite lawsuit threats, Assange relied in part on favourable coverage from the UK’s Guardian newspaper. However, suspected sources such as US Army soldier Bradley Manning are not protected whilst media outlets are relatively free to publish their scoops (Walt, ‘Woodward’). Assange’s publication of SIPRNet’s diplomatic cables will also likely mean greater restrictions on diplomatic and military intelligence (Walt, ‘Don’t Write’). Beyond ‘Doubt’ Iraq’s worsening security discredited many of the factors that had given the neoconservatives credibility. The post-September 11 media became increasingly more critical of the US military in Iraq (Ferguson) and cautious about the “echo chamber” of think-tanks and media outlets. Internet sites for Al Jazeera English, Al-Arabiya and other networks have enabled people to bypass “flak” and directly access these different viewpoints. Most damagingly, the non-discovery of Iraq’s weapons of mass destruction discredited both the 2003 NIE on Iraq and Colin Powell’s United Nations presentation (Wilkie 104). Likewise, “risk entrepreneurs” who foresaw “World War IV” in 2002 and 2003 have now distanced themselves from these apocalyptic forecasts due to a series of mis-steps and mistakes by the Bush Administration and Al Qaeda’s over-calculation (Bergen). The emergence of sites such as Wikileaks, and networks like Al Jazeera English and Al-Arabiya, are a response to the politics of the past decade. They attempt to short-circuit past “echo chambers” through providing access to different sources and leaked data. The Global War on Terror framed the Bush Administration’s response to September 11 as a war (Kirk; Mueller 59). Whilst this prematurely closed off other possibilities, it has also unleashed a series of dynamics which have undermined the neoconservative agenda. The “classicist” history and historical analogies constructed to justify the “World War IV” scenario are just one of several potential frameworks. “Flak” organisations and media “echo chambers” are now challenged by well-financed and strategic alternatives such as Al Jazeera English and Al-Arabiya. 
Doubt is one defence against “risk entrepreneurs” who seek to promote a particular idea: doubt guards against uncritical adoption. Perhaps the enduring lesson of the post-September 11 debates, though, is that doubt alone is not enough. What is needed are individuals and institutions that understand the strategies which the neoconservatives and others have used, and who also have the soft power skills during crises to influence critical decision-makers to choose alternatives. Appendix 1: Counterfactuals Richard Ned Lebow uses “what if?” counterfactuals to examine alternative possibilities and “minimal rewrites” or slight variations on the historical events that occurred. The following counterfactuals suggest that the Bush Administration’s Global War on Terror could have evolved very differently . . . or not occurred at all. Fact: The 2003 Iraq War and 2001 Afghanistan counterinsurgency shaped the Bush Administration’s post-September 11 grand strategy. Counterfactual #1: Al Gore decisively wins the 2000 U.S. election. Bush v. Gore never occurs. After the September 11 attacks, Gore focuses on international alliance-building and gains widespread diplomatic support rather than a neoconservative agenda. He authorises Special Operations Forces in Afghanistan and works closely with the Musharraf regime in Pakistan to target Al Qaeda’s muhajideen. He ‘contains’ Saddam Hussein’s Iraq through measurement and signature, technical intelligence, and more stringent monitoring by the International Atomic Energy Agency. Minimal Rewrite: United 93 crashes in Washington DC, killing senior members of the Gore Administration. Fact: U.S. Special Operations Forces failed to kill Osama bin Laden in late November and early December 2001 at Tora Bora. Counterfactual #2: U.S. Special Operations Forces kill Osama bin Laden in early December 2001 during skirmishes at Tora Bora. Ayman al-Zawahiri is critically wounded, captured, and imprisoned. The rest of Al Qaeda is scattered. Minimal Rewrite: Osama bin Laden’s death turns him into a self-mythologised hero for decades. Fact: The UK Blair Government supplied a 50-page intelligence dossier on Iraq’s weapons development program which the Bush Administration used to support its pre-war planning. Counterfactual #3: Rogue intelligence analysts debunk the UK Blair Government’s claims through a series of ‘targeted’ leaks to establishment news sources. Minimal Rewrite: The 50-page intelligence dossier is later discovered to be correct about Iraq’s weapons development program. Fact: The Bush Administration used the 2003 National Intelligence Estimate to “build its case” for “regime change” in Saddam Hussein’s Iraq. Counterfactual #4: A joint investigation by The New York Times and The Washington Post rebuts U.S. Secretary of State Colin Powell’s speech to the United National Security Council, delivered on 5 February 2003. Minimal Rewrite: The Central Intelligence Agency’s whitepaper “Iraq’s Weapons of Mass Destruction Programs” (October 2002) more accurately reflects the 2003 NIE’s cautious assessments. Fact: The Bush Administration relied on Ahmed Chalabi for its postwar estimates about Iraq’s reconstruction. Counterfactual #5: The Bush Administration ignores Chalabi’s advice and relies instead on the U.S. State Department’s 15 volume report “The Future of Iraq”. Minimal Rewrite: The Coalition Provisional Authority appoints Ahmed Chalabi to head an interim Iraqi government. Fact: L. Paul Bremer signed orders to disband Iraq’s Army and to De-Ba’athify Iraq’s new government. 
Counterfactual #6: Bremer keeps Iraq’s Army intact and uses it to impose security in Baghdad to prevent looting and to thwart insurgents. Rather than a De-Ba’athification policy, Bremer uses former Baath Party members to gather situational intelligence. Minimal Rewrite: Iraq’s Army refuses to disband and the De-Ba’athification policy uncovers several conspiracies to undermine the Coalition Provisional Authority. AcknowledgmentsThanks to Stephen McGrail for advice on science and technology analysis.References Barker, Greg. “War of Ideas”. PBS Frontline. Boston, MA: 2007. ‹http://www.pbs.org/frontlineworld/stories/newswar/video1.html› Benjamin, Daniel. “Condi’s Phony History.” Slate 29 Aug. 2003. ‹http://www.slate.com/id/2087768/pagenum/all/›. Bergen, Peter L. The Longest War: The Enduring Conflict between America and Al Qaeda. New York: The Free Press, 2011. Berman, Paul. Terror and Liberalism. W.W. Norton & Company: New York, 2003. Brenner, William J. “In Search of Monsters: Realism and Progress in International Relations Theory after September 11.” Security Studies 15.3 (2006): 496-528. Burns, Alex. “The Worldflash of a Coming Future.” M/C Journal 6.2 (April 2003). ‹http://journal.media-culture.org.au/0304/08-worldflash.php›. Dorrien, Gary. Imperial Designs: Neoconservatism and the New Pax Americana. New York: Routledge, 2004. Ehrlich, Judith, and Goldsmith, Rick. The Most Dangerous Man in America: Daniel Ellsberg and the Pentagon Papers. Berkley CA: Kovno Communications, 2009. Einhorn, David. Fooling Some of the People All of the Time: A Long Short (and Now Complete) Story. Hoboken NJ: John Wiley & Sons, 2010. Ellison, Sarah. “The Man Who Spilled The Secrets.” Vanity Fair (Feb. 2011). ‹http://www.vanityfair.com/politics/features/2011/02/the-guardian-201102›. Ellsberg, Daniel. Secrets: A Memoir of Vietnam and the Pentagon Papers. New York: Viking, 2002. Ferguson, Charles. No End in Sight, New York: Representational Pictures, 2007. Filkins, Dexter. The Forever War. New York: Vintage Books, 2008. Friedman, Murray. The Neoconservative Revolution: Jewish Intellectuals and the Shaping of Public Policy. New York: Cambridge UP, 2005. Halper, Stefan, and Jonathan Clarke. America Alone: The Neo-Conservatives and the Global Order. New York: Cambridge UP, 2004. Hayes, Stephen F. The Connection: How Al Qaeda’s Collaboration with Saddam Hussein Has Endangered America. New York: HarperCollins, 2004. Heilbrunn, Jacob. They Knew They Were Right: The Rise of the Neocons. New York: Doubleday, 2008. Herman, Edward S., and Noam Chomsky. Manufacturing Consent: The Political Economy of the Mass Media. Rev. ed. New York: Pantheon Books, 2002. Iannucci, Armando. In The Loop. London: BBC Films, 2009. Jervis, Robert. Why Intelligence Fails: Lessons from the Iranian Revolution and the Iraq War. Ithaca NY: Cornell UP, 2010. Kirk, Michael. “The War behind Closed Doors.” PBS Frontline. Boston, MA: 2003. ‹http://www.pbs.org/wgbh/pages/frontline/shows/iraq/›. Laqueur, Walter. No End to War: Terrorism in the Twenty-First Century. New York: Continuum, 2003. Lebow, Richard Ned. Forbidden Fruit: Counterfactuals and International Relations. Princeton NJ: Princeton UP, 2010. Ledeen, Michael. The War against The Terror Masters. New York: St. Martin’s Griffin, 2003. Leitenberg, Milton. “Aum Shinrikyo's Efforts to Produce Biological Weapons: A Case Study in the Serial Propagation of Misinformation.” Terrorism and Political Violence 11.4 (1999): 149-158. Mann, James. Rise of the Vulcans: The History of Bush’s War Cabinet. 
New York: Viking Penguin, 2004. Morgan, Matthew J. The American Military after 9/11: Society, State, and Empire. New York: Palgrave Macmillan, 2008. Mueller, John. Overblown: How Politicians and the Terrorism Industry Inflate National Security Threats, and Why We Believe Them. New York: The Free Press, 2009. Mylroie, Laurie. Bush v The Beltway: The Inside Battle over War in Iraq. New York: Regan Books, 2003. Nutt, Paul C. Why Decisions Fail. San Francisco: Berrett-Koelher, 2002. Podhoretz, Norman. “How to Win World War IV”. Commentary 113.2 (2002): 19-29. Prados, John. Hoodwinked: The Documents That Reveal How Bush Sold Us a War. New York: The New Press, 2004. Ricks, Thomas. Fiasco: The American Military Adventure in Iraq. New York: The Penguin Press, 2006. Stern, Jessica. The Ultimate Terrorists. Boston, MA: Harvard UP, 2001. Stevenson, Charles A. Warriors and Politicians: US Civil-Military Relations under Stress. New York: Routledge, 2006. Walt, Stephen M. “Should Bob Woodward Be Arrested?” Foreign Policy 10 Dec. 2010. ‹http://walt.foreignpolicy.com/posts/2010/12/10/more_wikileaks_double_standards›. Walt, Stephen M. “‘Don’t Write If You Can Talk...’: The Latest from WikiLeaks.” Foreign Policy 29 Nov. 2010. ‹http://walt.foreignpolicy.com/posts/2010/11/29/dont_write_if_you_can_talk_the_latest_from_wikileaks›. Wilkie, Andrew. Axis of Deceit. Melbourne: Black Ink Books, 2003. Uyarra, Esteban Manzanares. “War Feels like War”. London: BBC, 2003. Vogel, Kathleen M. “Iraqi Winnebagos™ of Death: Imagined and Realized Futures of US Bioweapons Threat Assessments.” Science and Public Policy 35.8 (2008): 561–573. Zegart, Amy. Spying Blind: The CIA, the FBI and the Origins of 9/11. Princeton NJ: Princeton UP, 2007.