To see the other types of publications on this topic, follow the link: Explanation.

Journal articles on the topic 'Explanation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Explanation.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Atanasova, Pepa, Jakob Grue Simonsen, Christina Lioma, and Isabelle Augenstein. "Diagnostics-Guided Explanation Generation." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (June 28, 2022): 10445–53. http://dx.doi.org/10.1609/aaai.v36i10.21287.

Abstract:
Explanations shed light on a machine learning model's rationales and can aid in identifying deficiencies in its reasoning process. Explanation generation models are typically trained in a supervised way given human explanations. When such annotations are not available, explanations are often selected as those portions of the input that maximise a downstream task's performance, which corresponds to optimising an explanation's Faithfulness to a given model. Faithfulness is one of several so-called diagnostic properties, which prior work has identified as useful for gauging the quality of an explanation without requiring annotations. Other diagnostic properties are Data Consistency, which measures how similar explanations are for similar input instances, and Confidence Indication, which shows whether the explanation reflects the confidence of the model. In this work, we show how to directly optimise for these diagnostic properties when training a model to generate sentence-level explanations, which markedly improves explanation quality, agreement with human rationales, and downstream task performance on three complex reasoning tasks.
2

Clark, Stephen R. L. "The Limits of Explanation: Limited Explanations." Royal Institute of Philosophy Supplement 27 (March 1990): 195–210. http://dx.doi.org/10.1017/s1358246100005117.

Abstract:
When I was first approached to read a paper at the conference from which this volume takes its beginning I expected that Flint Schier, with whom I had taught a course on the Philosophy of Biology in my years at Glasgow, would be with us to comment and to criticize. I cannot let this occasion pass without expressing once again my own sense of loss. I am sure that we would all have gained by his presence, and hope that he would find things both to approve, and disapprove, in the following venture.
3

Kleih, Björn-Christian. "Die mündliche Erklärung zur Abstimmung gemäß § 31 Absatz 1 GOBT – eine parlamentarische Wundertüte mit Potenzial?" Zeitschrift für Parlamentsfragen 51, no. 4 (2020): 865–87. http://dx.doi.org/10.5771/0340-1758-2020-4-865.

Abstract:
According to the Rules of Procedure of the German Bundestag ("GOBT"), every Member of Parliament is granted a five-minute verbal explanation of vote. It is granted for nearly every kind of vote in the House. The verbal explanation is often considered a privilege for MPs going against the position taken by their group. Yet, it is also used to confirm the party position and is abused to continue already closed debates. In either case, it can be a grab bag for both parliament's plenum and its president: the verbal explanation's content is only revealed when the explanation is given. A quantitative and qualitative analysis of the explanations given in the Bundestag shows that explanations from dissenters contribute quantitatively, but not to a large extent. While members of the coalition more often declare that they are going against their parliamentary party group, members of the opposition tend to confirm the line of their respective party. When used to reveal personal implications in the decision-making process, the verbal explanation is meaningful and widely accepted.
4

Fogelin, Lars. "Inference to the Best Explanation: A Common and Effective Form of Archaeological Reasoning." American Antiquity 72, no. 4 (October 2007): 603–26. http://dx.doi.org/10.2307/25470436.

Abstract:
Processual and postprocessual archaeologists implicitly employ the same epistemological system to evaluate the worth of different explanations: inference to the best explanation. This is good since inference to the best explanation is the most effective epistemological approach to archaeological reasoning available. Underlying the logic of inference to the best explanation is the assumption that the explanation that accounts for the most evidence is also most likely to be true. This view of explanation often reflects the practice of archaeological reasoning better than either the hypothetico-deductive method or hermeneutics. This article explores the logic of inference to the best explanation and provides clear criteria to determine what makes one explanation better than another. Explanations that are empirically broad, general, modest, conservative, simple, testable, and address many perspectives are better than explanations that are not. This article also introduces a system of understanding explanation that emphasizes the role of contrastive pairings in the construction of specific explanations. This view of explanation allows for a better understanding of when, and when not, to engage in the testing of specific explanations.
5

Brdnik, Saša, Vili Podgorelec, and Boštjan Šumak. "Assessing Perceived Trust and Satisfaction with Multiple Explanation Techniques in XAI-Enhanced Learning Analytics." Electronics 12, no. 12 (June 8, 2023): 2594. http://dx.doi.org/10.3390/electronics12122594.

Abstract:
This study aimed to observe the impact of eight explainable AI (XAI) explanation techniques on user trust and satisfaction in the context of XAI-enhanced learning analytics while comparing two groups of STEM college students based on their Bologna study level, using various established feature relevance techniques, certainty, and comparison explanations. Overall, the students reported the highest trust in local feature explanation in the form of a bar graph. Additionally, master’s students presented with global feature explanations also reported high trust in this form of explanation. The highest measured explanation satisfaction was observed with the local feature explanation technique in the group of bachelor’s and master’s students, with master’s students additionally expressing high satisfaction with the global feature importance explanation. A detailed overview shows that the two observed groups of students displayed consensus in favored explanation techniques when evaluating trust and explanation satisfaction. Certainty explanation techniques were perceived with lower trust and satisfaction than were local feature relevance explanation techniques. The correlation between itemized results was documented and measured with the Trust in Automation questionnaire and Explanation Satisfaction Scale questionnaire. Master’s-level students self-reported an overall higher understanding of the explanations and higher overall satisfaction with explanations and perceived the explanations as less harmful.
6

Weisberg, Deena Skolnick, Frank C. Keil, Joshua Goodstein, Elizabeth Rawson, and Jeremy R. Gray. "The Seductive Allure of Neuroscience Explanations." Journal of Cognitive Neuroscience 20, no. 3 (March 2008): 470–77. http://dx.doi.org/10.1162/jocn.2008.20040.

Abstract:
Explanations of psychological phenomena seem to generate more public interest when they contain neuroscientific information. Even irrelevant neuroscience information in an explanation of a psychological phenomenon may interfere with people's abilities to critically consider the underlying logic of this explanation. We tested this hypothesis by giving naïve adults, students in a neuroscience course, and neuroscience experts brief descriptions of psychological phenomena followed by one of four types of explanation, according to a 2 (good explanation vs. bad explanation) × 2 (without neuroscience vs. with neuroscience) design. Crucially, the neuroscience information was irrelevant to the logic of the explanation, as confirmed by the expert subjects. Subjects in all three groups judged good explanations as more satisfying than bad ones. But subjects in the two nonexpert groups additionally judged that explanations with logically irrelevant neuroscience information were more satisfying than explanations without. The neuroscience information had a particularly striking effect on nonexperts' judgments of bad explanations, masking otherwise salient problems in these explanations.
7

Skorupski, John. "Explanation in the Social Sciences: Explanation and Understanding in Social Science." Royal Institute of Philosophy Supplement 27 (March 1990): 119–34. http://dx.doi.org/10.1017/s1358246100005075.

Abstract:
Hempelian orthodoxy on the nature of explanation in general, and on explanation in the social sciences in particular, holds that (a) full explanations are arguments, (b) full explanations must include at least one law, and (c) reason explanations are causal. David Ruben disputes (a) and (b) but he does not dispute (c). Nor does he dispute that ‘explanations in both natural and social science need laws in other ways, even when not as part of the explanation itself’ (p. 97 above). The distance between his view and the covering law theory, he points out, ‘is not as great as it may first appear to be’ (p. 97 above).
8

Swinburne, Richard. "The Limits of Explanation: The Limits of Explanation." Royal Institute of Philosophy Supplement 27 (March 1990): 177–93. http://dx.doi.org/10.1017/s1358246100005105.

Abstract:
In purporting to explain the occurrence of some event or process, we cite the causal factors which, we assert, brought it about or keep it in being. The explanation is a true one if those factors did indeed bring it about or keep it in being. In discussing explanation I shall henceforward (unless I state otherwise) concern myself only with true explanations. I believe that there are two distinct kinds of way in which causal factors operate in the world, two distinct kinds of causality, and so two distinct kinds of explanation. For historical reasons, I shall call these kinds of causality and explanations ‘scientific’ and ‘personal’; but I do not imply that there is anything unscientific in a wide sense in invoking personal explanation.
9

Gillett, Carl. "WHY CONSTITUTIVE MECHANISTIC EXPLANATION CANNOT BE CAUSAL." American Philosophical Quarterly 57, no. 1 (January 1, 2020): 31–50. http://dx.doi.org/10.2307/48570644.

Abstract:
In his “New Consensus” on explanation, Wesley Salmon (1989) famously argued that there are two kinds of scientific explanation: global, derivational, and unifying explanations, and then local, ontic explanations backed by causal relations. Following Salmon’s New Consensus, the dominant view in philosophy of science is what I term “neo-Causalism” which assumes that all ontic explanations of singular fact/event are causal explanations backed by causal relations, and that scientists only search for causal patterns or relations and only offer causal explanations of singular facts/events. I argue that there are foundational, and fatal, flaws in the neo-Causal picture. The relations backing constitutive mechanistic explanations of activities of wholes using activities of parts, as well as other species of compositional explanation, cannot be causal relations. Treating them as causal or causation-like is therefore plausibly a category mistake. Compositional explanations in the sciences represent instead a sui generis kind of ontic explanation of singular fact/event backed by sui generis compositional relations. We thus need a pluralistic revision of Salmon’s New Consensus on explanation to reflect these findings.
10

Morton, Adam. "Mathematical Modelling and Contrastive Explanation." Canadian Journal of Philosophy Supplementary Volume 16 (1990): 251–70. http://dx.doi.org/10.1080/00455091.1990.10717228.

Abstract:
This is an enquiry into flawed explanations. Most of the effort in studies of the concept of explanation, scientific or otherwise, has gone into the contrast between clear cases of explanation and clear non-explanations.
11

Hardcastle, Valerie Gray. "[Explanation] Is Explanation Better." Philosophy of Science 64, no. 1 (March 1997): 154–60. http://dx.doi.org/10.1086/392540.

12

Redhead, Michael. "Explanation in Physics: Explanation." Royal Institute of Philosophy Supplement 27 (March 1990): 135–54. http://dx.doi.org/10.1017/s1358246100005087.

Abstract:
In what sense do the sciences explain? Or do they merely describe what is going on without answering why-questions at all? But cannot description at an appropriate ‘level’ provide all that we can reasonably ask of an explanation? Well, what do we mean by explanation anyway? What, if anything, gets left out when we provide a so-called scientific explanation? Are there limits of explanation in general, and scientific explanation, in particular? What are the criteria for a good explanation? Is it possible to satisfy all the desiderata simultaneously? If not, which should we regard as paramount? What is the connection between explanation and prediction? What exactly is it that statistical explanations explain? These are some of the questions that have generated a very extensive literature in the philosophy of science. In attempting to answer them, definite views will have to be taken on related matters, such as physical laws, causality, reduction, and questions of evidence and confirmation, of theory and observation, realism versus antirealism, and the objectivity and rationality of science. I will state my own views on these matters, in the course of this essay. To argue for everything in detail and to do justice to all the alternative views, would fill a book, perhaps several books. I want to lead up fairly quickly to modern physics, and review the explanatory situation there in rather more detail.
13

Chalyi, Serhii, and Volodymyr Leshchynskyi. "POSSIBLE EVALUATION OF THE CORRECTNESS OF EXPLANATIONS TO THE END USER IN AN ARTIFICIAL INTELLIGENCE SYSTEM." Advanced Information Systems 7, no. 4 (December 3, 2023): 75–79. http://dx.doi.org/10.20998/2522-9052.2023.4.10.

Abstract:
The subject of this paper is the process of evaluation of explanations in an artificial intelligence system. The aim is to develop a method for forming a possible evaluation of the correctness of explanations for the end user in an artificial intelligence system. The evaluation of the correctness of explanations makes it possible to increase the user's confidence in the solution of an artificial intelligence system and, as a result, to create conditions for the effective use of this solution. Aims: to structure explanations according to the user's needs; to develop an indicator of the correctness of explanations using the theory of possibilities; to develop a method for evaluating the correctness of explanations using the possibilities approach. The approaches used are a set-theoretic approach to describe the elements of explanations in an artificial intelligence system; a possibility approach to provide a representation of the criterion for evaluating explanations in an intelligent system; a probabilistic approach to describe the probabilistic component of the evaluation of explanations. The following results are obtained. The explanations are structured according to the needs of the user. It is shown that the explanation of the decision process is used by specialists in the development of intelligent systems. Such an explanation represents a complete or partial sequence of steps to derive a decision in an artificial intelligence system. End users mostly use explanations of the result presented by an intelligent system. Such explanations usually define the relationship between the values of input variables and the resulting prediction. The article discusses the requirements for evaluating explanations, considering the needs of internal and external users of an artificial intelligence system. It is shown that it is advisable to use explanation fidelity evaluation for specialists in the development of such systems, and explanation correctness evaluation for external users. An explanation correctness assessment is proposed that uses the necessity indicator in the theory of possibilities. A method for evaluation of explanation fidelity is developed. Conclusions. The scientific novelty of the obtained results is as follows. A possible method for assessing the correctness of an explanation in an artificial intelligence system using the indicators of possibility and necessity is proposed. The method calculates the necessity of using the target value of the input variable in the explanation, taking into account the possibility of choosing alternative values of the variables, which makes it possible to ensure that the target value of the input variable is necessary for the explanation and that the explanation is correct.
14

Kostić, Daniel, and Kareem Khalifa. "The directionality of topological explanations." Synthese 199, no. 5-6 (November 1, 2021): 14143–65. http://dx.doi.org/10.1007/s11229-021-03414-y.

Abstract:
Proponents of ontic conceptions of explanation require all explanations to be backed by causal, constitutive, or similar relations. Among their justifications is that only ontic conceptions can do justice to the ‘directionality’ of explanation, i.e., the requirement that if X explains Y, then not-Y does not explain not-X. Using topological explanations as an illustration, we argue that non-ontic conceptions of explanation have ample resources for securing the directionality of explanations. The different ways in which neuroscientists rely on multiplexes involving both functional and anatomical connectivity in their topological explanations vividly illustrate why ontic considerations are frequently (if not always) irrelevant to explanatory directionality. Therefore, directionality poses no problem to non-ontic conceptions of explanation.
15

Chen, Yuhao, Shi-Jun Luo, Hyoil Han, Jun Miyazaki, and Alfrin Letus Saldanha. "Generating Personalized Explanations for Recommender Systems Using a Knowledge Base." International Journal of Multimedia Data Engineering and Management 12, no. 4 (October 2021): 20–37. http://dx.doi.org/10.4018/ijmdem.2021100102.

Abstract:
In the last decade, we have seen an increase in the need for interpretable recommendations. Explaining why a product is recommended to a user increases user trust and makes the recommendations more acceptable. The authors propose a personalized explanation generation system, PEREXGEN (personalized explanation generation) that generates personalized explanations for recommender systems using a model-agnostic approach. The proposed model consists of a recommender and an explanation module. Since they implement a model-agnostic approach to generate personalized explanations, they focus more on the explanation module. The explanation module consists of a task-specialized item knowledge graph (TSI-KG) generation from a knowledge base and an explanation generation component. They employ the MovieLens and Wikidata datasets and evaluate the proposed system's model-agnostic properties using conventional and state-of-the-art recommender systems. The user study shows that PEREXGEN generates more persuasive and natural explanations.
16

Schurz, Gerhard. "Causality and Unification: How Causality Unifies Statistical Regularities." THEORIA. An International Journal for Theory, History and Foundations of Science 30, no. 1 (March 17, 2015): 73. http://dx.doi.org/10.1387/theoria.11913.

Abstract:
Two key ideas of scientific explanation - explanations as causal information and explanation as unification - have frequently been set into mutual opposition. This paper proposes a "dialectical solution" to this conflict, by arguing that causal explanations are preferable to non-causal explanations because they lead to a higher degree of unification at the level of the explanation of statistical regularities. The core axioms of the theory of causal nets (TC) are justified because they give the best if not the only unifying explanation of two statistical phenomena: screening off and linking up. Alternative explanation attempts are discussed and it is shown why they don't work. It is demonstrated that not the core of TC but extended versions of TC have empirical content, by means of which they can generate independently testable predictions.
17

Janssen, Annelli. "Het web-model." Algemeen Nederlands Tijdschrift voor Wijsbegeerte 111, no. 3 (October 1, 2019): 419–32. http://dx.doi.org/10.5117/antw2019.3.007.jans.

Abstract:
The web-model: A new model of explanation for neuroimaging studies. What can neuroimaging tell us about the relation between our brain and our mind? A lot, or so I argue. But neuroscientists should update their model of explanation. Currently, many explanations are (implicitly) based on what I call the ‘mapping model’: a model of explanation which centers on mapping relations between cognition and the brain. I argue that these mappings give us very little information, and that instead, we should focus on finding causal relations. If we take a difference-making approach to causation, we can find manipulation patterns between neural and cognitive phenomena and start constructing satisfying explanations in neuroimaging studies: explanations based on what I call the web-model of explanation. This model of explanation not only contrasts with the mapping model, but is also different from Craver’s constitutive mechanistic model of explanation (2007), which takes the constitutive relation to be the main explanatory relation. Taking the difference-making idea of the importance of manipulation and control seriously means that sometimes, causal relations are preferred over constitutive relations. If we follow the web-model of explanation, we can do justice to the central role that causation should play in neuroscientific explanations.
18

Gilbert, Nigel. "Explanation and dialogue." Knowledge Engineering Review 4, no. 3 (September 1989): 235–47. http://dx.doi.org/10.1017/s026988890000504x.

Abstract:
Recent approaches to providing advisory knowledge-based systems with explanation capabilities are reviewed. The importance of explaining a system's behaviour and conclusions was recognized early in the development of expert systems. Initial approaches were based on the presentation of an edited proof trace to the user, but while helpful for debugging knowledge bases, these explanations are of limited value to most users. Current work aims to expand the kinds of explanation which can be offered and to embed explanations into a dialogue so that the topic of the explanation can be negotiated between the user and the system. This raises issues of mutual knowledge and dialogue control which are discussed in the review.
19

Jansson, Lina. "Network explanations and explanatory directionality." Philosophical Transactions of the Royal Society B: Biological Sciences 375, no. 1796 (February 24, 2020): 20190318. http://dx.doi.org/10.1098/rstb.2019.0318.

Abstract:
Network explanations raise foundational questions about the nature of scientific explanation. The challenge discussed in this article comes from the fact that network explanations are often thought to be non-causal, i.e. they do not describe the dynamical or mechanistic interactions responsible for some behaviour, instead they appeal to topological properties of network models describing the system. These non-causal features are often thought to be valuable precisely because they do not invoke mechanistic or dynamical interactions and provide insights that are not available through causal explanations. Here, I address a central difficulty facing attempts to move away from causal models of explanation; namely, how to recover the directionality of explanation. Within causal models, the directionality of explanation is identified with the direction of causation. This solution is no longer available once we move to non-causal accounts of explanation. I will suggest a solution to this problem that emphasizes the role of conditions of application. In doing so, I will challenge the idea that sui generis mathematical dependencies are the key to understand non-causal explanations. The upshot is a conceptual account of explanation that accommodates the possibility of non-causal network explanations. It also provides guidance for how to evaluate such explanations. This article is part of the theme issue ‘Unifying the essential concepts of biological networks: biological insights and philosophical foundations’.
20

PREISENDÖRFER, PETER, ANSGAR BITZ, and FRANS J. BEZUIDENHOUT. "IN SEARCH OF BLACK ENTREPRENEURSHIP: WHY IS THERE A LACK OF ENTREPRENEURIAL ACTIVITY AMONG THE BLACK POPULATION IN SOUTH AFRICA?" Journal of Developmental Entrepreneurship 17, no. 01 (March 2012): 1250006. http://dx.doi.org/10.1142/s1084946712500069.

Abstract:
Compared to other ethnic groups, the black population of South Africa has a low participation rate in entrepreneurship activities. The research question of this article is to explain this empirical fact. Based on twenty-four expert interviews, five patterns of explanation are presented and elaborated: a historical apartheid explanation, a financial resources explanation, a human capital explanation, a traits and mindset explanation and a social capital and network explanation. The historical apartheid explanation cannot be qualified independently of the other explanations as a distinctive explanation of its own. Although missing financial resources and shortages of human capital are the factors most often mentioned by the experts, and probably the most important ones, the remaining two explanations (mindset and social network) also deserve attention. A point argued in the conclusion of this article is that socio-cultural values, and the concept of "social capital" in particular, merit further investigation with respect to the question of why there is a lack of black entrepreneurship in South Africa.
21

Ceylan, İsmail İlkan, Thomas Lukasiewicz, Enrico Malizia, Cristian Molinaro, and Andrius Vaicenavičius. "Preferred Explanations for Ontology-Mediated Queries under Existential Rules." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 7 (May 18, 2021): 6262–70. http://dx.doi.org/10.1609/aaai.v35i7.16778.

Abstract:
Recently, explanations for query answers under existential rules have been investigated, where an explanation is an inclusion-minimal subset of a given database that, together with the ontology, entails the query. In this paper, we take a step further and study explanations under different minimality criteria. In particular, we first study cardinality-minimal explanations and hence focus on deriving explanations of minimum size. We then study a more general preference order induced by a weight distribution. We assume that every database fact is annotated with a (penalization) weight, and we are interested in explanations with minimum overall weight. For both preference orders, we study a variety of explanation problems, such as recognizing a preferred explanation, all preferred explanations, a relevant or necessary fact, and the existence of a preferred explanation not containing forbidden sets of facts. We provide a detailed complexity analysis for all the aforementioned problems, thereby providing a more complete picture for explaining query answers under existential rules.
22

Rittle-Johnson, Bethany, and Abbey M. Loehr. "Eliciting explanations: Constraints on when self-explanation aids learning." Psychonomic Bulletin & Review 24, no. 5 (July 1, 2016): 1501–10. http://dx.doi.org/10.3758/s13423-016-1079-5.

23

Mamun, Tauseef Ibne, Kenzie Baker, Hunter Malinowski, Robert R. Hoffman, and Shane T. Mueller. "Assessing Collaborative Explanations of AI using Explanation Goodness Criteria." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 65, no. 1 (September 2021): 988–93. http://dx.doi.org/10.1177/1071181321651307.

Abstract:
Explainable AI represents an increasingly important category of systems that attempt to support human understanding and trust in machine intelligence and automation. Typical systems rely on algorithms to help understand underlying information about decisions and establish justified trust and reliance. Researchers have proposed using goodness criteria to measure the quality of explanations as a formative evaluation of an XAI system, but these criteria have not been systematically investigated in the literature. To explore this, we present a novel collaborative explanation system (CXAI) and propose several goodness criteria to evaluate the quality of its explanations. Results suggest that the explanations provided by this system are typically correct, informative, written in understandable ways, and focus on explanation of larger scale data patterns than are typically generated by algorithmic XAI systems. Implications for how these criteria may be applied to other XAI systems are discussed.
24

Páez, Andrés. "Artificial explanations: the epistemological interpretation of explanation in AI." Synthese 170, no. 1 (July 1, 2008): 131–46. http://dx.doi.org/10.1007/s11229-008-9361-3.

25

Richards, Graham. "The Psychology of Explanation." History & Philosophy of Psychology 7, no. 1 (2005): 53–61. http://dx.doi.org/10.53841/bpshpp.2005.7.1.53.

Abstract:
While there are extensive literatures on the nature of scientific explanation, ‘psychological explanation’ and the logical character of ‘good’ explanations, relatively little has been written that considers the seeking and offering of explanations as psychological phenomena in their own right. In this paper it is suggested that, fundamentally, explanations are responses to specific puzzles and that a ‘good’ explanation is, operationally, simply one that leaves the person requiring it no longer feeling puzzled. (To achieve this it may of course have to meet all kinds of criteria set by the individual in question.) What we need, therefore, is a taxonomy or analysis of the way in which things become puzzling, and of the strategies appropriate in each case for dissipating the puzzle. A tentative exercise of this kind is presented. Adopting this apparently straightforward approach, however, does seem to raise more profound epistemological issues, only hinted at here.
26

Caro-Martínez, Marta, Guillermo Jiménez-Díaz, and Juan A. Recio-García. "Conceptual Modeling of Explainable Recommender Systems: An Ontological Formalization to Guide Their Design and Development." Journal of Artificial Intelligence Research 71 (July 24, 2021): 557–89. http://dx.doi.org/10.1613/jair.1.12789.

Abstract:
With the increasing importance of e-commerce and the immense variety of products, users need help to decide which ones are the most interesting to them. This is one of the main goals of recommender systems. However, users’ trust may be compromised if they do not understand how or why the recommendation was achieved. Here, explanations are essential to improve user confidence in recommender systems and to make the recommendation useful. Providing explanation capabilities into recommender systems is not an easy task as their success depends on several aspects such as the explanation’s goal, the user’s expectation, the knowledge available, or the presentation method. Therefore, this work proposes a conceptual model to alleviate this problem by defining the requirements of explanations for recommender systems. Our goal is to provide a model that guides the development of effective explanations for recommender systems as they are correctly designed and suited to the user’s needs. Although earlier explanation taxonomies sustain this work, our model includes new concepts not considered in previous works. Moreover, we make a novel contribution regarding the formalization of this model as an ontology that can be integrated into the development of proper explanations for recommender systems.
27

Chalyi, Serhii, and Irina Leshchynska. "THE CONCEPTUAL MENTAL MODEL OF EXPLANATION IN AN ARTIFICIAL INTELLIGENCE SYSTEM." Bulletin of National Technical University "KhPI". Series: System Analysis, Control and Information Technologies, no. 1 (9) (July 15, 2023): 70–75. http://dx.doi.org/10.20998/2079-0023.2023.01.11.

Abstract:
The subject of research is the process of formation of explanations in artificial intelligence systems. To solve the problem of the opacity of decision-making in artificial intelligence systems, users should receive an explanation of the decisions made. The explanation allows you to trust these solutions and ensure their use in practice. The purpose of the work is to develop a conceptual mental model of explanation to determine the basic dependencies that determine the relationship between input data, as well as actions to obtain a result in an intelligent system, and its final solution. To achieve the goal, the following tasks are solved: structuring approaches to building mental models of explanations; construction of a conceptual mental model of explanation based on a unified representation of the user's knowledge. Conclusions. The structuring of approaches to the construction of mental models of explanations in intelligent systems has been carried out. Mental models are designed to reflect the user's perception of an explanation. Causal, statistical, semantic, and conceptual approaches to the construction of mental models of explanation are distinguished. It is shown that the conceptual model sets generalized schemes and principles regarding the process of functioning of the intellectual system. Its further detailing is carried out on the basis of a causal approach in the case of constructing an explanation for processes, a statistical approach when constructing an explanation about the result of the system's work, as well as a semantic approach when harmonizing the explanation with the user's basic knowledge. A three-level conceptual mental model of the explanation is proposed, containing levels of concepts regarding the basic principles of the functioning of the artificial intelligence system, an explanation that details this concept in an acceptable and understandable way for the user, as well as basic knowledge about the subject area, which is the basis for the formation of the explanation. In a practical aspect, the proposed model creates conditions for building and organizing a set of agreed explanations that describe the process and result of the intelligent system, considering the possibility of their perception by the user.
28

Izza, Yacine, Alexey Ignatiev, and Joao Marques-Silva. "On Tackling Explanation Redundancy in Decision Trees." Journal of Artificial Intelligence Research 75 (September 29, 2022): 261–321. http://dx.doi.org/10.1613/jair.1.13575.

Abstract:
Decision trees (DTs) epitomize the ideal of interpretability of machine learning (ML) models. The interpretability of decision trees motivates explainability approaches by so-called intrinsic interpretability, and it is at the core of recent proposals for applying interpretable ML models in high-risk applications. The belief in DT interpretability is justified by the fact that explanations for DT predictions are generally expected to be succinct. Indeed, in the case of DTs, explanations correspond to DT paths. Since decision trees are ideally shallow, and so paths contain far fewer features than the total number of features, explanations in DTs are expected to be succinct, and hence interpretable. This paper offers both theoretical and experimental arguments demonstrating that, as long as interpretability of decision trees equates with succinctness of explanations, then decision trees ought not be deemed interpretable. The paper introduces logically rigorous path explanations and path explanation redundancy, and proves that there exist functions for which decision trees must exhibit paths with explanation redundancy that is arbitrarily larger than the actual path explanation. The paper also proves that only a very restricted class of functions can be represented with DTs that exhibit no explanation redundancy. In addition, the paper includes experimental results substantiating that path explanation redundancy is observed ubiquitously in decision trees, including those obtained using different tree learning algorithms, but also in a wide range of publicly available decision trees. The paper also proposes polynomial-time algorithms for eliminating path explanation redundancy, which in practice require negligible time to compute. Thus, these algorithms serve to indirectly attain irreducible, and so succinct, explanations for decision trees. Furthermore, the paper includes novel results related with duality and enumeration of explanations, based on using SAT solvers as witness-producing NP-oracles.
29

Faul, Bogdan V. "Externalism about Moral Responsibility: Modification of A. Mele’s Thought Experiment." Ethical Thought 21, no. 1 (2021): 40–49. http://dx.doi.org/10.21146/2074-4870-2021-21-1-40-49.

Abstract:
The author modifies A. Mele’s thought experiment for externalism about moral responsibility, which suggests that the agent’s history partially determines whether the agent is morally responsible for particular actions, or the consequences of actions. The original thought experiment constructs a situation in which the individual is not morally responsible for the killing because of manipulation, that is, for a reason external to the agent. A. Mele’s theory was criticized by A.V. Mertsalov, D.B. Volkov, and V.V. Vasiliev at the seminar organized by the Moscow Center for Consciousness. The arguments against A. Mele’s theory had the following structure: A. Mele does not show that the historical explanation is the best explanation, because there are competing explanations, no less convincing, which are incompatible with A. Mele’s externalism. The author explicates and analyzes the explanations offered by philosophers from the Moscow Center for Consciousness: the explanation from identity, the explanation from self-identification, the explanation from the condition of knowledge, the explanation from future states. Although these explanations apply to Mele’s original thought experiment, they cannot explain the absence of moral responsibility in the modified thought experiment proposed by the author: the explanations from identity and self-identification are excluded by the gradual change in the agent structure of personality; the explanation of knowledge conditions is refuted by including knowledge of manipulation in the conditions of the thought experiment; the explanation of future states is excluded by removing relevant future states from the thought experiment.
30

Madumal, Prashan. "Explainable Agency in Reinforcement Learning Agents." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 10 (April 3, 2020): 13724–25. http://dx.doi.org/10.1609/aaai.v34i10.7134.

Abstract:
This thesis explores how reinforcement learning (RL) agents can provide explanations for their actions and behaviours. As humans, we build causal models to encode cause-effect relations of events and use these to explain why events happen. Taking inspiration from cognitive psychology and social science literature, I build causal explanation models and explanation dialogue models for RL agents. By mimicking human-like explanation models, these agents can provide explanations that are natural and intuitive to humans.
31

Khyade, Vitthalrao Bhimasha, Priti Madhukar Gaikwad, and Pranita Rajendra Vare. "Explanation of Nymphalidae Butterflies." International Academic Journal of Science and Engineering 05, no. 02 (December 19, 2018): 87–110. http://dx.doi.org/10.9756/iajse/v5i1/1810029.

32

Hiller, Sara, Stefan Rumann, Kirsten Berthold, and Julian Roelle. "Example-based learning: should learners receive closed-book or open-book self-explanation prompts?" Instructional Science 48, no. 6 (September 1, 2020): 623–49. http://dx.doi.org/10.1007/s11251-020-09523-4.

Abstract:
In learning from examples, students are often first provided with basic instructional explanations of new principles and concepts and second with examples thereof. In this sequence, it is important that learners self-explain by generating links between the basic instructional explanations’ content and the examples. Therefore, it is well established that learners receive self-explanation prompts. However, there is hardly any research on whether these prompts should be provided in a closed-book format—in which learners cannot access the basic instructional explanations during self-explaining and thus have to retrieve the main content of the instructional explanations that is needed to explain the examples from memory (i.e., retrieval practice)—or in an open-book format in which learners can access the instructional explanations during self-explaining. In two experiments, we varied whether learners received closed- or open-book self-explanation prompts. We also varied whether learners were prompted to actively process the main content of the basic instructional explanations before they proceeded to the self-explanation prompts. When the learners were not prompted to actively process the basic instructional explanations, closed-book prompts yielded detrimental effects on immediate and delayed (1 week) posttest performance. When the learners were prompted to actively process the basic instructional explanations beforehand, closed-book self-explanation prompts were not less beneficial than open-book prompts regarding performance on a delayed posttest. We conclude that at least when the retention interval does not exceed 1 week, closed-book self-explanation prompts do not entail an added value and can even be harmful in comparison to open-book ones.
33

Lei, Xia, Jia-Jiang Lin, Xiong-Lin Luo, and Yongkai Fan. "Explaining deep residual networks predictions with symplectic adjoint method." Computer Science and Information Systems, no. 00 (2023): 47. http://dx.doi.org/10.2298/csis230310047l.

Abstract:
Understanding the decisions of deep residual networks (ResNets) is receiving much attention as a way to ensure their security and reliability. Recent research, however, lacks theoretical analysis to guarantee the faithfulness of explanations and could produce an unreliable explanation. In order to explain ResNet predictions, we suggest a provably faithful explanation for ResNets using a surrogate explainable model, a neural ordinary differential equation network (Neural ODE). First, ResNets are proved to converge to a Neural ODE, and the Neural ODE is regarded as a surrogate model to explain the decision-making attribution of the ResNets. Then the decision feature and the explanation map of inputs belonging to the target class for the Neural ODE are generated via the symplectic adjoint method. Finally, we prove that the explanations of the Neural ODE can sufficiently approximate those of the ResNet. Experiments show that the proposed explanation method has higher faithfulness with lower computational cost than other explanation approaches, and it is effective for troubleshooting and optimizing a model via the explanation.
34

Castro, Eduardo. "A deductive-nomological model for mathematical scientific explanation." Principia: an international journal of epistemology 24, no. 1 (April 28, 2020): 1–27. http://dx.doi.org/10.5007/1808-1711.2020v24n1p1.

Abstract:
I propose a deductive-nomological model for mathematical scientific explanation. In this regard, I modify Hempel’s deductive-nomological model and test it against some of the following recent paradigmatic examples of the mathematical explanation of empirical facts: the seven bridges of Königsberg, the North American synchronized cicadas, and Hénon-Heiles Hamiltonian systems. I argue that mathematical scientific explanations that invoke laws of nature are qualitative explanations, and ordinary scientific explanations that employ mathematics are quantitative explanations. I analyse the repercussions of this deductive-nomological model on causal explanations.
35

Maroney, James J., Timothy J. Rupert, and Martha L. Wartick. "The Perceived Fairness of Taxing Social Security Benefits: The Effect of Explanations Based on Different Dimensions of Tax Equity." Journal of the American Taxation Association 24, no. 2 (September 1, 2002): 79–92. http://dx.doi.org/10.2308/jata.2002.24.2.79.

Abstract:
In this study, we construct explanations for the taxation of social security benefits based on previously identified dimensions of fairness (exchange, horizontal, and vertical equity). We then conduct an experiment to examine whether providing senior citizen taxpayers with explanations increases the perceived fairness of taxing social security. The results indicate that for those subjects with the greatest self-interest (subjects currently taxed on a portion of their social security benefits), the exchange equity explanation had the most consistent positive effects on both acceptance of the explanation and on the perceived fairness of taxing social security benefits. On the other hand, for those subjects not currently taxed on their social security benefits, the vertical equity explanation was more likely to be accepted than either the exchange or horizontal equity explanation. However, while these subjects agreed with the vertical equity explanation, it did not increase their fairness perceptions. These findings illustrate how important it is for tax policy makers striving to increase perceptions of fairness to carefully consider and develop explanations for tax provisions.
36

Kwan, Kai-man. "An Atheistic Argument from Naturalistic Explanations of Religious Belief: A Preliminary Reply to Robert Nola." Religions 13, no. 11 (November 10, 2022): 1084. http://dx.doi.org/10.3390/rel13111084.

Abstract:
Robert Nola has recently defended an argument against the existence of God on the basis of naturalistic explanations of religious belief. I will critically evaluate his argument in this paper. Nola’s argument takes the form of an inference to the best explanation: since the naturalistic stance offers a better explanation of religious belief relative to the theistic explanation, the ontology of God(s) is eliminated. I rebut Nola’s major assumption that naturalistic explanations and theistic explanations of religion are incompatible. I go on to criticize Nola’s proposed naturalistic explanations: Freudianism, a Hypersensitive Agency Detection Device, and a Moralising Mind-Policing God. I find these inadequate as actual explanations of religious belief. Even if they are correct, they will not show that theism is false. So Nola’s argument fails to convince.
37

Ruben, David-Hillel. "Explanation in the Social Sciences: Singular Explanation and the Social Sciences." Royal Institute of Philosophy Supplement 27 (March 1990): 95–117. http://dx.doi.org/10.1017/s1358246100005063.

Abstract:
Are explanations in the social sciences fundamentally (logically or structurally) different from explanations in the natural sciences? Many philosophers think that they are, and I call such philosophers ‘difference theorists’. Many difference theorists locate that difference in the alleged fact that only in the natural sciences does explanation essentially include laws.
38

Baron, Sam, Mark Colyvan, and David Ripley. "A Counterfactual Approach to Explanation in Mathematics." Philosophia Mathematica 28, no. 1 (December 2, 2019): 1–34. http://dx.doi.org/10.1093/philmat/nkz023.

Abstract:
Our goal in this paper is to extend counterfactual accounts of scientific explanation to mathematics. Our focus, in particular, is on intra-mathematical explanations: explanations of one mathematical fact in terms of another. We offer a basic counterfactual theory of intra-mathematical explanations, before modelling the explanatory structure of a test case using counterfactual machinery. We finish by considering the application of counterpossibles to mathematical explanation, and explore a second test case along these lines.
39

Chalyi, Serhii, and Volodymyr Leshchynskyi. "A METHOD FOR EVALUATING EXPLANATIONS IN AN ARTIFICIAL INTELLIGENCE SYSTEM USING POSSIBILITY THEORY." Bulletin of National Technical University "KhPI". Series: System Analysis, Control and Information Technologies, no. 2 (10) (December 19, 2023): 95–101. http://dx.doi.org/10.20998/2079-0023.2023.02.14.

Abstract:
The subject of the research is the process of generating explanations for the decision of an artificial intelligence system. Explanations are used to help the user understand the process of reaching the result and to be able to use an intelligent information system more effectively to make practical decisions for him or her. The purpose of this paper is to develop a method for evaluating explanations taking into account differences in input data and the corresponding decision of an artificial intelligence system. The solution of this problem makes it possible to evaluate the relevance of the explanation for the internal decision-making mechanism in an intelligent information system, regardless of the user's level of knowledge about the peculiarities of making and using such a decision. To achieve this goal, the following tasks are solved: structuring the evaluation of explanations depending on their level of detail, taking into account their compliance with the decision-making process in an intelligent system and the level of perception of the user of such a system; developing a method for evaluating explanations based on their compliance with the decision-making process in an intelligent system. Conclusions. The article structures the evaluation of explanations according to their level of detail. The levels of associative dependencies, precedents, causal dependencies and interactive dependencies are identified, which determine different levels of detail of explanations. It is shown that the associative and causal levels of detail of explanations can be assessed using numerical, probabilistic, or possibilistic indicators. The precedent and interactive levels require a subjective assessment based on a survey of users of the artificial intelligence system. The article develops a method for the possible assessment of the relevance of explanations for the decision-making process in an intelligent system, taking into account the dependencies between the input data and the decision of the intelligent system. The method includes the stages of assessing the sensitivity, correctness and complexity of the explanation based on a comparison of the values and quantity of the input data used in the explanation. The method makes it possible to comprehensively evaluate the explanation in terms of resistance to insignificant changes in the input data, relevance of the explanation to the result obtained, and complexity of the explanation calculation. In terms of practical application, the method makes it possible to minimize the number of input variables for the explanation while satisfying the sensitivity constraint of the explanation, which creates conditions for more efficient formation of the interpretation based on the use of a subset of key input variables that have a significant impact on the decision obtained by the intelligent system.
40

Louis-Dreyfus, William. "Explanation." Hudson Review 56, no. 3 (2003): 451. http://dx.doi.org/10.2307/3852682.

41

Desy, Peter. "Explanation." English Journal 74, no. 5 (September 1985): 74. http://dx.doi.org/10.2307/817709.

42

Sawyer, T. M. "Explanation." Journal of Technical Writing and Communication 15, no. 2 (April 1985): 131–41. http://dx.doi.org/10.2190/xmjm-vtnq-l7mj-k6k2.

43

Bergmanson, Jan P. G. "Explanation." Cornea 11, no. 5 (September 1992): 491. http://dx.doi.org/10.1097/00003226-199209000-00025.

44

Farris, James S., Mari Kallersjo, Victor A. Albert, Marc Allard, Arne Anderberg, Brunella Bowditch, Carol Bult, et al. "EXPLANATION." Cladistics 11, no. 2 (June 1995): 211–18. http://dx.doi.org/10.1111/j.1096-0031.1995.tb00086.x.

45

Bradie, Michael. "Explanation." Teaching Philosophy 12, no. 3 (1989): 291–93. http://dx.doi.org/10.5840/teachphil198912377.

46

Trzcinski, Andrzej, and Marcin Wodzinski. "Explanation." Polin: Studies in Polish Jewry 19, no. 1 (January 2007): 613–18. http://dx.doi.org/10.3828/polin.2007.19.613.

47

Ray, Arijit, Yi Yao, Rakesh Kumar, Ajay Divakaran, and Giedrius Burachas. "Can You Explain That? Lucid Explanations Help Human-AI Collaborative Image Retrieval." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 7 (October 28, 2019): 153–61. http://dx.doi.org/10.1609/hcomp.v7i1.5275.

Abstract:
While there have been many proposals on making AI algorithms explainable, few have attempted to evaluate the impact of AI-generated explanations on human performance in conducting human-AI collaborative tasks. To bridge the gap, we propose a Twenty-Questions style collaborative image retrieval game, Explanation-assisted Guess Which (ExAG), as a method of evaluating the efficacy of explanations (visual evidence or textual justification) in the context of Visual Question Answering (VQA). In our proposed ExAG, a human user needs to guess a secret image picked by the VQA agent by asking natural language questions to it. We show that overall, when AI explains its answers, users succeed more often in guessing the secret image correctly. Notably, a few correct explanations can readily improve human performance when VQA answers are mostly incorrect as compared to no-explanation games. Furthermore, we also show that while explanations rated as “helpful” significantly improve human performance, “incorrect” and “unhelpful” explanations can degrade performance as compared to no-explanation games. Our experiments, therefore, demonstrate that ExAG is an effective means to evaluate the efficacy of AI-generated explanation on a human-AI collaborative task.
48

Lai, Chengen, Shengli Song, Shiqi Meng, Jingyang Li, Sitong Yan, and Guangneng Hu. "Towards More Faithful Natural Language Explanation Using Multi-Level Contrastive Learning in VQA." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 2849–57. http://dx.doi.org/10.1609/aaai.v38i3.28065.

Abstract:
Natural language explanation in visual question answering (VQA-NLE) aims to explain the decision-making process of models by generating natural language sentences to increase users' trust in the black-box systems. Existing post-hoc methods have achieved significant progress in obtaining a plausible explanation. However, such post-hoc explanations are not always aligned with human logical inference, suffering from the following issues: 1) Deductive unsatisfiability: the generated explanations do not logically lead to the answer; 2) Factual inconsistency: the model falsifies its counterfactual explanation for answers without considering the facts in images; and 3) Semantic perturbation insensitivity: the model cannot recognize the semantic changes caused by small perturbations. These problems reduce the faithfulness of explanations generated by models. To address the above issues, we propose a novel self-supervised Multi-level Contrastive Learning based natural language Explanation model (MCLE) for VQA with semantic-level, image-level, and instance-level factual and counterfactual samples. MCLE extracts discriminative features and aligns the feature spaces from explanations with visual question and answer to generate more consistent explanations. We conduct extensive experiments, ablation analysis, and a case study to demonstrate the effectiveness of our method on two VQA-NLE benchmarks.
49

MORITA, Kunihisa. "Scientific Explanation and Pseudo-Scientific Explanation." Journal of the Japan Association for Philosophy of Science 39, no. 1 (2011): 25–30. http://dx.doi.org/10.4288/kisoron.39.1_25.

50

Smith, John Maynard. "Explanation in Biology: Explanation in Biology." Royal Institute of Philosophy Supplement 27 (March 1990): 65–72. http://dx.doi.org/10.1017/s135824610000504x.

Abstract:
During the war, I worked in aircraft design. About a year after D-day, an exhibition was arranged at Farnborough of the mass of German equipment that had been captured, including the doodlebug and the V2 rocket. I and a friend spent a fascinating two days wandering round the exhibits. The questions that kept arising were ‘Why did they make it like that?’, or, equivalently ‘I wonder what that is for?’ We were particularly puzzled by a gyroscope in the control system of the V2. One's first assumption was that the gyroscope maintained the rocket on its course, but, instead of being connected to the steering vanes, it was connected to the fuel supply to the rocket. Ultimately my friend (who would have made a better biologist than I) remembered that the rate of precession of a gyro depends on acceleration, and saw that the Germans had used this fact to design an ingenious device for switching off the engines when the required velocity had been reached.