Journal articles on the topic "Prediction Explanation"

To see other types of publications on this topic, follow the link: Prediction Explanation.


Consult the top 50 journal articles for your research on the topic "Prediction Explanation".

Next to each work in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online, if these are available in the metadata.

Browse journal articles across many disciplines and compile your bibliography correctly.

1

Pintelas, Emmanuel, Meletis Liaskos, Ioannis E. Livieris, Sotiris Kotsiantis, and Panagiotis Pintelas. "Explainable Machine Learning Framework for Image Classification Problems: Case Study on Glioma Cancer Prediction." Journal of Imaging 6, no. 6 (May 28, 2020): 37. http://dx.doi.org/10.3390/jimaging6060037.

Abstract:
Image classification is a very popular machine learning domain in which deep convolutional neural networks have mainly emerged on such applications. These networks manage to achieve remarkable performance in terms of prediction accuracy but they are considered as black box models since they lack the ability to interpret their inner working mechanism and explain the main reasoning of their predictions. There is a variety of real world tasks, such as medical applications, in which interpretability and explainability play a significant role. Making decisions on critical issues such as cancer prediction utilizing black box models in order to achieve high prediction accuracy but without provision for any sort of explanation for its prediction, accuracy cannot be considered as sufficient and ethnically acceptable. Reasoning and explanation is essential in order to trust these models and support such critical predictions. Nevertheless, the definition and the validation of the quality of a prediction model’s explanation can be considered in general extremely subjective and unclear. In this work, an accurate and interpretable machine learning framework is proposed, for image classification problems able to make high quality explanations. For this task, it is developed a feature extraction and explanation extraction framework, proposing also three basic general conditions which validate the quality of any model’s prediction explanation for any application domain. The feature extraction framework will extract and create transparent and meaningful high level features for images, while the explanation extraction framework will be responsible for creating good explanations relying on these extracted features and the prediction model’s inner function with respect to the proposed conditions. As a case study application, brain tumor magnetic resonance images were utilized for predicting glioma cancer. Our results demonstrate the efficiency of the proposed model since it managed to achieve sufficient prediction accuracy being also interpretable and explainable in simple human terms.
2

Halliwell, Nicholas. "Evaluating Explanations of Relational Graph Convolutional Network Link Predictions on Knowledge Graphs." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (June 28, 2022): 12880–81. http://dx.doi.org/10.1609/aaai.v36i11.21577.

Abstract:
Recently, explanation methods have been proposed to evaluate the predictions of Graph Neural Networks on the task of link prediction. Evaluating explanation quality is difficult without ground truth explanations. This thesis is focused on providing a method, including datasets and scoring metrics, to quantitatively evaluate explanation methods on link prediction on Knowledge Graphs.
3

Halliwell, Nicholas, Fabien Gandon, and Freddy Lecue. "A Simplified Benchmark for Ambiguous Explanations of Knowledge Graph Link Prediction Using Relational Graph Convolutional Networks (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (June 28, 2022): 12963–64. http://dx.doi.org/10.1609/aaai.v36i11.21618.

Abstract:
Relational Graph Convolutional Networks (RGCNs) are commonly used on Knowledge Graphs (KGs) to perform black box link prediction. Several algorithms have been proposed to explain their predictions. Evaluating performance of explanation methods for link prediction is difficult without ground truth explanations. Furthermore, there can be multiple explanations for a given prediction in a KG. No dataset exists where observations have multiple ground truth explanations to compare against. Additionally, no standard scoring metrics exist to compare predicted explanations against multiple ground truth explanations. We propose and evaluate a method, including a dataset, to benchmark explanation methods on the task of explainable link prediction using RGCNs.
4

EL Shawi, Radwa, and Mouaz H. Al-Mallah. "Interpretable Local Concept-based Explanation with Human Feedback to Predict All-cause Mortality." Journal of Artificial Intelligence Research 75 (November 18, 2022): 833–55. http://dx.doi.org/10.1613/jair.1.14019.

Abstract:
Machine learning models are incorporated in different fields and disciplines in which some of them require a high level of accountability and transparency, for example, the healthcare sector. With the General Data Protection Regulation (GDPR), the importance for plausibility and verifiability of the predictions made by machine learning models has become essential. A widely used category of explanation techniques attempts to explain models’ predictions by quantifying the importance score of each input feature. However, summarizing such scores to provide human-interpretable explanations is challenging. Another category of explanation techniques focuses on learning a domain representation in terms of high-level human-understandable concepts and then utilizing them to explain predictions. These explanations are hampered by how concepts are constructed, which is not intrinsically interpretable. To this end, we propose Concept-based Local Explanations with Feedback (CLEF), a novel local model agnostic explanation framework for learning a set of high-level transparent concept definitions in high-dimensional tabular data that uses clinician-labeled concepts rather than raw features. CLEF maps the raw input features to high-level intuitive concepts and then decompose the evidence of prediction of the instance being explained into concepts. In addition, the proposed framework generates counterfactual explanations, suggesting the minimum changes in the instance’s concept based explanation that will lead to a different prediction. We demonstrate with simulated user feedback on predicting the risk of mortality. Such direct feedback is more effective than other techniques, that rely on hand-labelled or automatically extracted concepts, in learning concepts that align with ground truth concept definitions.
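
As a rough illustration of the concept-level decomposition described above, per-feature importance scores can be rolled up into clinician-defined concepts. The sketch below is a minimal assumption-laden stand-in, not CLEF's actual output; the scores and concept map are hypothetical.

# Hypothetical per-feature attribution scores for one patient's prediction.
feature_scores = {"sbp": 0.12, "dbp": 0.05, "ldl": 0.20, "hdl": -0.08, "age": 0.15}
# Hypothetical clinician-defined mapping from raw features to concepts.
concept_map = {"blood_pressure": ["sbp", "dbp"],
               "lipids": ["ldl", "hdl"],
               "demographics": ["age"]}

# Aggregate the evidence for the prediction at the concept level.
concept_scores = {concept: sum(feature_scores[f] for f in features)
                  for concept, features in concept_map.items()}
print(concept_scores)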
5

Bonacich, Phillip. "EXPLANATION AND PREDICTION." Rationality and Society 9, no. 3 (August 1997): 373–77. http://dx.doi.org/10.1177/104346397009003006.

6

Søgaard, Villy. "Explanation versus prediction." Technological Forecasting and Social Change 43, no. 2 (March 1993): 201–2. http://dx.doi.org/10.1016/0040-1625(93)90018-3.

7

Bartsch, Karen. "False Belief Prediction and Explanation: Which Develops First and Why it Matters." International Journal of Behavioral Development 22, no. 2 (June 1998): 423–28. http://dx.doi.org/10.1080/016502598384450.

Abstract:
In response to Wimmer and Mayringer’s (this issue) report “False belief understanding in young children: Explanations do not develop before predictions”, the theoretical importance of the explanation versus prediction issue is expanded and the empirical conclusion of the report is questioned.
8

Plomin, Robert, and Sophie von Stumm. "Polygenic scores: prediction versus explanation." Molecular Psychiatry 27, no. 1 (October 22, 2021): 49–52. http://dx.doi.org/10.1038/s41380-021-01348-y.

Abstract:
During the past decade, polygenic scores have become a fast-growing area of research in the behavioural sciences. The ability to directly assess people’s genetic propensities has transformed research by making it possible to add genetic predictors of traits to any study. The value of polygenic scores in the behavioural sciences rests on using inherited DNA differences to predict, from birth, common disorders and complex traits in unrelated individuals in the population. This predictive power of polygenic scores does not require knowing anything about the processes that lie between genes and behaviour. It also does not mandate disentangling the extent to which the prediction is due to assortative mating, genotype–environment correlation, or even population stratification. Although bottom-up explanation from genes to brain to behaviour will remain the long-term goal of the behavioural sciences, prediction is also a worthy achievement because it has immediate practical utility for identifying individuals at risk and is the necessary first step towards explanation. A high priority for research must be to increase the predictive power of polygenic scores to be able to use them as an early warning system to prevent problems.
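
Since the abstract hinges on what a polygenic score is, a minimal sketch may help: the score is a weighted sum of a person's effect-allele counts, with weights taken from GWAS summary statistics. The arrays below are synthetic placeholders.

import numpy as np

rng = np.random.default_rng(0)
n_people, n_snps = 1_000, 5_000
genotypes = rng.integers(0, 3, size=(n_people, n_snps))   # 0/1/2 effect alleles per SNP
gwas_weights = rng.normal(0.0, 0.01, size=n_snps)         # per-SNP effect sizes from a GWAS

polygenic_scores = genotypes @ gwas_weights                # one predictive score per person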
9

Douglas, Heather E. "Reintroducing Prediction to Explanation." Philosophy of Science 76, no. 4 (October 2009): 444–63. http://dx.doi.org/10.1086/648111.

10

Kishimoto, T., and T. Sato. "+: Another Explanation and Prediction." Progress of Theoretical Physics 116, no. 1 (July 1, 2006): 241–46. http://dx.doi.org/10.1143/ptp.116.241.

11

Zeng, Siyang, Mehrdad Arjomandi, and Gang Luo. "Automatically Explaining Machine Learning Predictions on Severe Chronic Obstructive Pulmonary Disease Exacerbations: Retrospective Cohort Study." JMIR Medical Informatics 10, no. 2 (February 25, 2022): e33043. http://dx.doi.org/10.2196/33043.

Abstract:
Background: Chronic obstructive pulmonary disease (COPD) is a major cause of death and places a heavy burden on health care. To optimize the allocation of precious preventive care management resources and improve the outcomes for high-risk patients with COPD, we recently built the most accurate model to date to predict severe COPD exacerbations, which need inpatient stays or emergency department visits, in the following 12 months. Our model is a machine learning model. As is the case with most machine learning models, our model does not explain its predictions, forming a barrier for clinical use. Previously, we designed a method to automatically provide rule-type explanations for machine learning predictions and suggest tailored interventions with no loss of model performance. This method has been tested before for asthma outcome prediction but not for COPD outcome prediction. Objective: This study aims to assess the generalizability of our automatic explanation method for predicting severe COPD exacerbations. Methods: The patient cohort included all patients with COPD who visited the University of Washington Medicine facilities between 2011 and 2019. In a secondary analysis of 43,576 data instances, we used our formerly developed automatic explanation method to automatically explain our model’s predictions and suggest tailored interventions. Results: Our method explained the predictions for 97.1% (100/103) of the patients with COPD whom our model correctly predicted to have severe COPD exacerbations in the following 12 months and the predictions for 73.6% (134/182) of the patients with COPD who had ≥1 severe COPD exacerbation in the following 12 months. Conclusions: Our automatic explanation method worked well for predicting severe COPD exacerbations. After further improving our method, we hope to use it to facilitate future clinical use of our model. International Registered Report Identifier (IRRID): RR2-10.2196/13783
12

Tyralis, Hristos, Georgia Papacharalampous, Andreas Langousis, and Simon Michael Papalexiou. "Explanation and Probabilistic Prediction of Hydrological Signatures with Statistical Boosting Algorithms." Remote Sensing 13, no. 3 (January 20, 2021): 333. http://dx.doi.org/10.3390/rs13030333.

Abstract:
Hydrological signatures, i.e., statistical features of streamflow time series, are used to characterize the hydrology of a region. A relevant problem is the prediction of hydrological signatures in ungauged regions using the attributes obtained from remote sensing measurements at ungauged and gauged regions together with estimated hydrological signatures from gauged regions. The relevant framework is formulated as a regression problem, where the attributes are the predictor variables and the hydrological signatures are the dependent variables. Here we aim to provide probabilistic predictions of hydrological signatures using statistical boosting in a regression setting. We predict 12 hydrological signatures using 28 attributes in 667 basins in the contiguous US. We provide formal assessment of probabilistic predictions using quantile scores. We also exploit the statistical boosting properties with respect to the interpretability of derived models. It is shown that probabilistic predictions at quantile levels 2.5% and 97.5% using linear models as base learners exhibit better performance compared to more flexible boosting models that use both linear models and stumps (i.e., one-level decision trees). On the contrary, boosting models that use both linear models and stumps perform better than boosting with linear models when used for point predictions. Moreover, it is shown that climatic indices and topographic characteristics are the most important attributes for predicting hydrological signatures.
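
The probabilistic prediction described above can be approximated with a quantile loss in a boosting model. The paper uses statistical boosting with linear base learners and stumps, so the scikit-learn sketch below is only an analogue; the synthetic arrays stand in for catchment attributes and a hydrological signature.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))                        # stand-in catchment attributes
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500)   # stand-in hydrological signature

lower = GradientBoostingRegressor(loss="quantile", alpha=0.025).fit(X, y)
upper = GradientBoostingRegressor(loss="quantile", alpha=0.975).fit(X, y)
interval = np.column_stack([lower.predict(X), upper.predict(X)])   # central 95% prediction band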
13

Dall’Aglio, John. "Sex and Prediction Error, Part 3: Provoking Prediction Error." Journal of the American Psychoanalytic Association 69, no. 4 (August 2021): 743–65. http://dx.doi.org/10.1177/00030651211042059.

Abstract:
In parts 1 and 2 of this Lacanian neuropsychoanalytic series, surplus prediction error was presented as a neural correlate of the Lacanian concept of jouissance. Affective consciousness (a key source of prediction error in the brain) impels the work of cognition, the predictive work of explaining what is foreign and surprising. Yet this arousal is the necessary bedrock of all consciousness. Although the brain’s predictive model strives for homeostatic explanation of prediction error, jouissance “drives a hole” in the work of homeostasis. Some residual prediction error always remains. Lacanian clinical technique attends to this surplus and the failed predictions to which this jouissance “sticks.” Rather than striving to eliminate prediction error, clinical practice seeks its metabolization. Analysis targets one’s mode of jouissance to create a space for the subject to enjoy in some other way. This entails working with prediction error, not removing or tolerating it. Analysis aims to shake the very core of the subject by provoking prediction error—this drives clinical change. Brief clinical examples illustrate this view.
14

ZHANG, Lijin, Xiayan WEI, Jiaqi LU, and Junhao PAN. "Lasso regression: From explanation to prediction." Advances in Psychological Science 28, no. 10 (2020): 1777. http://dx.doi.org/10.3724/sp.j.1042.2020.01777.

15

Kim, Jaegwon. "Explanation, Prediction, and Reduction in Emergentism." Intellectica. Revue de l'Association pour la Recherche Cognitive 25, no. 2 (1997): 45–57. http://dx.doi.org/10.3406/intel.1997.1556.

16

Hofman, Jake M., Amit Sharma, and Duncan J. Watts. "Prediction and explanation in social systems." Science 355, no. 6324 (February 2, 2017): 486–88. http://dx.doi.org/10.1126/science.aal3856.

17

Wiegleb, Gerhard. "Explanation and prediction in vegetation science." Vegetatio 83, no. 1-2 (October 1989): 17–34. http://dx.doi.org/10.1007/bf00031678.

18

Tajgardoon, Mohammadamin, Malarkodi J. Samayamuthu, Luca Calzoni, and Shyam Visweswaran. "Patient-Specific Explanations for Predictions of Clinical Outcomes." ACI Open 03, no. 02 (July 2019): e88-e97. http://dx.doi.org/10.1055/s-0039-1697907.

Abstract:
Background: Machine learning models that are used for predicting clinical outcomes can be made more useful by augmenting predictions with simple and reliable patient-specific explanations for each prediction. Objectives: This article evaluates the quality of explanations of predictions using physician reviewers. The predictions are obtained from a machine learning model that is developed to predict dire outcomes (severe complications including death) in patients with community acquired pneumonia (CAP). Methods: Using a dataset of patients diagnosed with CAP, we developed a predictive model to predict dire outcomes. On a set of 40 patients, who were predicted to be either at very high risk or at very low risk of developing a dire outcome, we applied an explanation method to generate patient-specific explanations. Three physician reviewers independently evaluated each explanatory feature in the context of the patient's data and were instructed to disagree with a feature if they did not agree with the magnitude of support, the direction of support (supportive versus contradictory), or both. Results: The model used for generating predictions achieved a F1 score of 0.43 and area under the receiver operating characteristic curve (AUROC) of 0.84 (95% confidence interval [CI]: 0.81–0.87). Interreviewer agreement between two reviewers was strong (Cohen's kappa coefficient = 0.87) and fair to moderate between the third reviewer and others (Cohen's kappa coefficient = 0.49 and 0.33). Agreement rates between reviewers and generated explanations—defined as the proportion of explanatory features with which majority of reviewers agreed—were 0.78 for actual explanations and 0.52 for fabricated explanations, and the difference between the two agreement rates was statistically significant (Chi-square = 19.76, p-value < 0.01). Conclusion: There was good agreement among physician reviewers on patient-specific explanations that were generated to augment predictions of clinical outcomes. Such explanations can be useful in interpreting predictions of clinical outcomes.
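
For readers unfamiliar with the agreement statistic reported above, Cohen's kappa corrects raw agreement for chance; a quick sketch with made-up reviewer labels:

from sklearn.metrics import cohen_kappa_score

reviewer_a = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1]   # 1 = agrees with the explanatory feature
reviewer_b = [1, 1, 0, 1, 1, 1, 0, 0, 0, 1]
print(cohen_kappa_score(reviewer_a, reviewer_b))   # kappa for this toy pair of reviewers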
19

Costa, Anderson Bessa Da, Larissa Moreira, Daniel Ciampi De Andrade, Adriano Veloso, and Nivio Ziviani. "Predicting the Evolution of Pain Relief." ACM Transactions on Computing for Healthcare 2, no. 4 (October 31, 2021): 1–28. http://dx.doi.org/10.1145/3466781.

Abstract:
Modeling from data usually has two distinct facets: building sound explanatory models or creating powerful predictive models for a system or phenomenon. Most of recent literature does not exploit the relationship between explanation and prediction while learning models from data. Recent algorithms are not taking advantage of the fact that many phenomena are actually defined by diverse sub-populations and local structures, and thus there are many possible predictive models providing contrasting interpretations or competing explanations for the same phenomenon. In this article, we propose to explore a complementary link between explanation and prediction. Our main intuition is that models having their decisions explained by the same factors are likely to perform better predictions for data points within the same local structures. We evaluate our methodology to model the evolution of pain relief in patients suffering from chronic pain under usual guideline-based treatment. The ensembles generated using our framework are compared with all-in-one approaches of robust algorithms to high-dimensional data, such as Random Forests and XGBoost. Chronic pain can be primary or secondary to diseases. Its symptomatology can be classified as nociceptive, nociplastic, or neuropathic, and is generally associated with many different causal structures, challenging typical modeling methodologies. Our data includes 631 patients receiving pain treatment. We considered 338 features providing information about pain sensation, socioeconomic status, and prescribed treatments. Our goal is to predict, using data from the first consultation only, if the patient will be successful in treatment for chronic pain relief. As a result of this work, we were able to build ensembles that are able to consistently improve performance by up to 33% when compared to models trained using all the available features. We also obtained relevant gains in interpretability, with resulting ensembles using only 15% of the total number of features. We show we can effectively generate ensembles from competing explanations, promoting diversity in ensemble learning and leading to significant gains in accuracy by enforcing a stable scenario in which models that are dissimilar in terms of their predictions are also dissimilar in terms of their explanation factors.
20

Redhead, Michael. "Explanation in Physics: Explanation." Royal Institute of Philosophy Supplement 27 (March 1990): 135–54. http://dx.doi.org/10.1017/s1358246100005087.

Abstract:
In what sense do the sciences explain? Or do they merely describe what is going on without answering why-questions at all. But cannot description at an appropriate ‘level’ provide all that we can reasonably ask of an explanation? Well, what do we mean by explanation anyway? What, if anything, gets left out when we provide a so-called scientific explanation? Are there limits of explanation in general, and scientific explanation, in particular? What are the criteria for a good explanation? Is it possible to satisfy all the desiderata simultaneously? If not, which should we regard as paramount? What is the connection between explanation and prediction? What exactly is it that statistical explanations explain? These are some of the questions that have generated a very extensive literature in the philosophy of science. In attempting to answer them, definite views will have to be taken on related matters, such as physical laws, causality, reduction, and questions of evidence and confirmation, of theory and observation, realism versus antirealism, and the objectivity and rationality of science. I will state my own views on these matters, in the course of this essay. To argue for everything in detail and to do justice to all the alternative views, would fill a book, perhaps several books. I want to lead up fairly quickly to modern physics, and review the explanatory situation there in rather more detail.
21

Fotheringham, A. Stewart, and M. J. Webber. "Explanation, Prediction and Planning: The Lowry Model." Economic Geography 61, no. 1 (January 1985): 106. http://dx.doi.org/10.2307/143686.

22

Cleland, Carol E. "Prediction and Explanation in Historical Natural Science." British Journal for the Philosophy of Science 62, no. 3 (September 1, 2011): 551–82. http://dx.doi.org/10.1093/bjps/axq024.

23

Achinstein, Peter. "Explanation v. Prediction: Which Carries More Weight?" PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association 1994, no. 2 (January 1994): 156–64. http://dx.doi.org/10.1086/psaprocbienmeetp.1994.2.192926.

24

Murray, A. Brad. "Reducing model complexity for explanation and prediction." Geomorphology 90, no. 3-4 (October 2007): 178–91. http://dx.doi.org/10.1016/j.geomorph.2006.10.020.

25

Quinsey, Vernon L. "The prediction and explanation of criminal violence." International Journal of Law and Psychiatry 18, no. 2 (March 1995): 117–27. http://dx.doi.org/10.1016/0160-2527(95)00001-x.

26

Alufaisan, Yasmeen, Laura R. Marusich, Jonathan Z. Bakdash, Yan Zhou, and Murat Kantarcioglu. "Does Explainable Artificial Intelligence Improve Human Decision-Making?" Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 6618–26. http://dx.doi.org/10.1609/aaai.v35i8.16819.

Abstract:
Explainable AI provides insights to users into the why for model predictions, offering potential for users to better understand and trust a model, and to recognize and correct AI predictions that are incorrect. Prior research on human and explainable AI interactions has focused on measures such as interpretability, trust, and usability of the explanation. There are mixed findings whether explainable AI can improve actual human decision-making and the ability to identify the problems with the underlying model. Using real datasets, we compare objective human decision accuracy without AI (control), with an AI prediction (no explanation), and AI prediction with explanation. We find providing any kind of AI prediction tends to improve user decision accuracy, but no conclusive evidence that explainable AI has a meaningful impact. Moreover, we observed the strongest predictor for human decision accuracy was AI accuracy and that users were somewhat able to detect when the AI was correct vs. incorrect, but this was not significantly affected by including an explanation. Our results indicate that, at least in some situations, the why information provided in explainable AI may not enhance user decision-making, and further research may be needed to understand how to integrate explainable AI into real systems.
27

LIU, FU-SUI, KUANG-DING PENG, and WAN-FANG CHEN. "A UNIFIED EXPLANATION FOR SUPERCONDUCTIVITY AND PSEUDOGAP IN YBa2Cu3O7-δ." International Journal of Modern Physics B 15, no. 01 (January 10, 2001): 25–35. http://dx.doi.org/10.1142/s0217979201002667.

Abstract:
Taking the two-local-spin-mediated interaction (TLSMI) as pairing interaction, this paper explains six important experimental findings in YBa2Cu3O7-δ (Y123), and makes some predictions. The main prediction is that there is very high temperature superconductivity in extra-underdoped Y123.
28

Gładziejewski, Paweł. "Mechanistic unity of the predictive mind." Theory & Psychology 29, no. 5 (July 31, 2019): 657–75. http://dx.doi.org/10.1177/0959354319866258.

Abstract:
It has recently been argued that cognitive scientists should embrace explanatory pluralism rather than pursue the search for a unificatory framework or theory. This stance dovetails with the mechanistic view of cognitive-scientific explanation. However, one recently proposed theory—based on an idea that the brain is a predictive engine—opposes pluralism with its unificatory ambitions. My aim here is to investigate those pretentions to elucidate what sort of unification is on offer. I challenge the idea that explanatory unification of cognitive science follows from the Free Energy Principle. I claim that if the predictive story is to provide a unification, it is by proposing that many distinct cognitive mechanisms fall under a single prediction-error-minimization schema. I also argue that even though unification is not an absolute evaluative criterion for mechanistic explanations, it may play an epistemic role in evaluating the relative credibility of an explanation.
29

Lo, Adeline, Herman Chernoff, Tian Zheng, and Shaw-Hwa Lo. "Why significant variables aren’t automatically good predictors." Proceedings of the National Academy of Sciences 112, no. 45 (October 26, 2015): 13892–97. http://dx.doi.org/10.1073/pnas.1518285112.

Abstract:
Thus far, genome-wide association studies (GWAS) have been disappointing in the inability of investigators to use the results of identified, statistically significant variants in complex diseases to make predictions useful for personalized medicine. Why are significant variables not leading to good prediction of outcomes? We point out that this problem is prevalent in simple as well as complex data, in the sciences as well as the social sciences. We offer a brief explanation and some statistical insights on why higher significance cannot automatically imply stronger predictivity and illustrate through simulations and a real breast cancer example. We also demonstrate that highly predictive variables do not necessarily appear as highly significant, thus evading the researcher using significance-based methods. We point out that what makes variables good for prediction versus significance depends on different properties of the underlying distributions. If prediction is the goal, we must lay aside significance as the only selection standard. We suggest that progress in prediction requires efforts toward a new research agenda of searching for a novel criterion to retrieve highly predictive variables rather than highly significant variables. We offer an alternative approach that was not designed for significance, the partition retention method, which was very effective predicting on a long-studied breast cancer data set, by reducing the classification error rate from 30% to 8%.
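
The gap between significance and predictivity is easy to reproduce: with a large sample, a tiny effect is extremely "significant" yet adds almost no discriminative power. The simulation below is a toy illustration of that point, not the paper's partition retention method.

import numpy as np
from scipy import stats
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 20_000
y = rng.integers(0, 2, size=n)                 # balanced binary outcome
x = 0.1 * y + rng.normal(size=n)               # tiny mean shift between classes

t_stat, p_value = stats.ttest_ind(x[y == 1], x[y == 0])
print(f"p-value: {p_value:.1e}")               # vanishingly small: highly significant

auc = cross_val_score(LogisticRegression(), x.reshape(-1, 1), y,
                      scoring="roc_auc", cv=5).mean()
print(f"cross-validated AUC: {auc:.2f}")       # barely above 0.5: a weak predictor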
30

Madumal, Prashan, Tim Miller, Liz Sonenberg, and Frank Vetere. "Explainable Reinforcement Learning through a Causal Lens." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 03 (April 3, 2020): 2493–500. http://dx.doi.org/10.1609/aaai.v34i03.5631.

Abstract:
Prominent theories in cognitive science propose that humans understand and represent the knowledge of the world through causal relationships. In making sense of the world, we build causal models in our mind to encode cause-effect relations of events and use these to explain why new events happen by referring to counterfactuals — things that did not happen. In this paper, we use causal models to derive causal explanations of the behaviour of model-free reinforcement learning agents. We present an approach that learns a structural causal model during reinforcement learning and encodes causal relationships between variables of interest. This model is then used to generate explanations of behaviour based on counterfactual analysis of the causal model. We computationally evaluate the model in 6 domains and measure performance and task prediction accuracy. We report on a study with 120 participants who observe agents playing a real-time strategy game (Starcraft II) and then receive explanations of the agents' behaviour. We investigate: 1) participants' understanding gained by explanations through task prediction; 2) explanation satisfaction and 3) trust. Our results show that causal model explanations perform better on these measures compared to two other baseline explanation models.
31

Ahmed, Irfan, Indika Kumara, Vahideh Reshadat, A. S. M. Kayes, Willem-Jan van den Heuvel, and Damian A. Tamburri. "Travel Time Prediction and Explanation with Spatio-Temporal Features: A Comparative Study." Electronics 11, no. 1 (December 29, 2021): 106. http://dx.doi.org/10.3390/electronics11010106.

Abstract:
Travel time information is used as input or auxiliary data for tasks such as dynamic navigation, infrastructure planning, congestion control, and accident detection. Various data-driven Travel Time Prediction (TTP) methods have been proposed in recent years. One of the most challenging tasks in TTP is developing and selecting the most appropriate prediction algorithm. The existing studies that empirically compare different TTP models only use a few models with specific features. Moreover, there is a lack of research on explaining TTPs made by black-box models. Such explanations can help to tune and apply TTP methods successfully. To fill these gaps in the current TTP literature, using three data sets, we compare three types of TTP methods (ensemble tree-based learning, deep neural networks, and hybrid models) and ten different prediction algorithms overall. Furthermore, we apply XAI (Explainable Artificial Intelligence) methods (SHAP and LIME) to understand and interpret models’ predictions. The prediction accuracy and reliability for all models are evaluated and compared. We observed that the ensemble learning methods, i.e., XGBoost and LightGBM, are the best performing models over the three data sets, and XAI methods can adequately explain how various spatial and temporal features influence travel time.
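
A minimal sketch of the kind of pipeline compared above, assuming a tabular trip data set; the file name and feature columns are hypothetical, and LightGBM plus SHAP stand in for the best-performing combination reported.

import pandas as pd
import lightgbm as lgb
import shap
from sklearn.model_selection import train_test_split

df = pd.read_csv("trips.csv")                                   # hypothetical trip records
X = df[["hour_of_day", "day_of_week", "distance_km", "avg_speed_kmh"]]
y = df["travel_time_min"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = lgb.LGBMRegressor(n_estimators=300, learning_rate=0.05).fit(X_tr, y_tr)

explainer = shap.TreeExplainer(model)            # tree-model explainer
shap_values = explainer.shap_values(X_te)        # per-feature contribution to each prediction
shap.summary_plot(shap_values, X_te)             # global view of spatio-temporal feature influence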
32

Alejandro Fernández Fernández, José. "United States banking stability: An explanation through machine learning." Banks and Bank Systems 15, no. 4 (December 16, 2020): 137–49. http://dx.doi.org/10.21511/bbs.15(4).2020.12.

Abstract:
In this paper, an analysis of the prediction of bank stability in the United States from 1990 to 2017 is carried out, using bank solvency, delinquency and an ad hoc bank stability indicator as variables to measure said stability. Different machine learning assembly models have been used in the study, a random forest is developed because it is the most accurate of all those tested. Another novel element of the work is the use of partial dependency graphs (PDP) and individual conditional expectation curves (ICES) to interpret the results that allow observing for specific values how the banking variables vary, when the macro-financial variables vary.It is concluded that the most determining variables to predict bank solvency in the United States are interest rates, specifically the mortgage rate and the 5 and 10-year interest rates of treasury bonds, reducing solvency as these rates increase. For delinquency, the most important variable is the unemployment rate in the forecast. The financial stability index is made up of the normalized difference between the two factors obtained, one for solvency and the other for delinquency. The index prediction concludes that stability worsens as BBB corporate yield increases.
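
A compact sketch of the interpretation step the study describes: fit a random forest and plot partial dependence (PDP) together with individual conditional expectation (ICE) curves. The data file and macro-financial column names are hypothetical placeholders.

import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import PartialDependenceDisplay

df = pd.read_csv("bank_panel.csv")                               # hypothetical panel data
features = ["mortgage_rate", "treasury_10y", "unemployment_rate"]
X, y = df[features], df["solvency_ratio"]

model = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
PartialDependenceDisplay.from_estimator(model, X, features=features,
                                        kind="both")             # "both" draws PDP and ICE curves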
33

Yarkoni, Tal, and Jacob Westfall. "Choosing Prediction Over Explanation in Psychology: Lessons From Machine Learning." Perspectives on Psychological Science 12, no. 6 (August 25, 2017): 1100–1122. http://dx.doi.org/10.1177/1745691617693393.

Abstract:
Psychology has historically been concerned, first and foremost, with explaining the causal mechanisms that give rise to behavior. Randomized, tightly controlled experiments are enshrined as the gold standard of psychological research, and there are endless investigations of the various mediating and moderating variables that govern various behaviors. We argue that psychology’s near-total focus on explaining the causes of behavior has led much of the field to be populated by research programs that provide intricate theories of psychological mechanism but that have little (or unknown) ability to predict future behaviors with any appreciable accuracy. We propose that principles and techniques from the field of machine learning can help psychology become a more predictive science. We review some of the fundamental concepts and tools of machine learning and point out examples where these concepts have been used to conduct interesting and important psychological research that focuses on predictive research questions. We suggest that an increased focus on prediction, rather than explanation, can ultimately lead us to greater understanding of behavior.
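
The article's central contrast, in-sample fit versus out-of-sample prediction, can be shown in a few lines with synthetic data: many noisy predictors inflate the in-sample R^2, while cross-validation gives the honest predictive estimate.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 20))                  # many candidate predictors
y = X[:, 0] + rng.normal(size=100)              # only the first one matters

model = LinearRegression().fit(X, y)
print("in-sample R^2:", round(model.score(X, y), 2))                                 # flattering
print("cross-validated R^2:", round(cross_val_score(model, X, y, cv=5).mean(), 2))   # sobering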
34

Perfilieva, Irina, and Vladik Kreinovich. "Why Fuzzy Transform Is Efficient in Large-Scale Prediction Problems: A Theoretical Explanation." Advances in Fuzzy Systems 2011 (2011): 1–5. http://dx.doi.org/10.1155/2011/985839.

Abstract:
In many practical situations like weather prediction, we are interested in large-scale (averaged) value of the predicted quantities. For example, it is impossible to predict the exact future temperature at different spatial locations, but we can reasonably well predict average temperature over a region. Traditionally, to obtain such large-scale predictions, we first perform a detailed integration of the corresponding differential equation and then average the resulting detailed solution. This procedure is often very time-consuming, since we need to process all the details of the original data. In our previous papers, we have shown that similar quality large-scale prediction results can be obtained if, instead, we apply a much faster procedure—first average the inputs (by applying an appropriate fuzzy transform) and then use these averaged inputs to solve the corresponding (discretization of the) differential equation. In this paper, we provide a general theoretical explanation of why our semiheuristic method works, that is, why fuzzy transforms are efficient in large-scale predictions.
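
A minimal sketch of the direct fuzzy (F-) transform the explanation refers to: the signal is averaged against overlapping triangular basis functions, producing a small set of large-scale components that can be processed instead of the raw data. The node count and test signal are illustrative choices, not from the paper.

import numpy as np

def f_transform(x, f, n_nodes=10):
    nodes = np.linspace(x.min(), x.max(), n_nodes)
    h = nodes[1] - nodes[0]
    components = []
    for c in nodes:
        weights = np.clip(1.0 - np.abs(x - c) / h, 0.0, None)    # triangular membership around node c
        components.append(np.sum(weights * f) / np.sum(weights)) # weighted local average
    return nodes, np.array(components)

x = np.linspace(0.0, 10.0, 1000)
f = np.sin(x) + 0.1 * np.random.default_rng(3).normal(size=x.size)   # noisy fine-scale signal
nodes, F = f_transform(x, f)                                          # smooth large-scale components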
35

Lee, Yongsoo, Eungyu Lee, and Taejin Lee. "Human-Centered Efficient Explanation on Intrusion Detection Prediction." Electronics 11, no. 13 (July 2, 2022): 2082. http://dx.doi.org/10.3390/electronics11132082.

Abstract:
The methodology for constructing intrusion detection systems and improving existing systems is being actively studied in order to detect harmful data within large-capacity network data. The most common approach is to use AI systems to adapt to unanticipated threats and improve system performance. However, most studies aim to improve performance, and performance-oriented systems tend to be composed of black box models, whose internal working is complex. In the field of security control, analysts strive for interpretation and response based on information from given data, system prediction results, and knowledge. Consequently, performance-oriented systems suffer from a lack of interpretability owing to the lack of system prediction results and internal process information. The recent social climate also demands a responsible system rather than a performance-focused one. This research aims to ensure understanding and interpretation by providing interpretability for AI systems in multiple classification environments that can detect various attacks. In particular, the better the performance, the more complex and less transparent the model and the more limited the area that the analyst can understand, the lower the processing efficiency accordingly. The approach provided in this research is an intrusion detection methodology that uses FOS based on SHAP values to evaluate if the prediction result is suspicious and selects the optimal rule from the transparent model to improve the explanation.
36

Timofeeva, T. V., and N. L. Allinger. "Molecular mechanics for organometallic molecules: prediction and explanation." Acta Crystallographica Section A Foundations of Crystallography 52, a1 (August 8, 1996): C285. http://dx.doi.org/10.1107/s0108767396088125.

37

Hechter, M. "Prediction Versus Explanation in the Measurement of Values." European Sociological Review 21, no. 2 (April 1, 2005): 91–108. http://dx.doi.org/10.1093/esr/jci006.

38

Aligica, Paul Dragos. "Prediction, explanation and the epistemology of future studies." Futures 35, no. 10 (December 2003): 1027–40. http://dx.doi.org/10.1016/s0016-3287(03)00067-3.

39

Krauss, Daniel A., Bruce D. Sales, Judith V. Becker, and A. J. Figueredo. "Beyond Prediction to Explanation in Risk Assessment Research." International Journal of Law and Psychiatry 23, no. 2 (March 2000): 91–112. http://dx.doi.org/10.1016/s0160-2527(99)00032-1.

40

Caldeira, Gregory A. "Expert Judgment versus Statistical Models: Explanation versus Prediction." Perspectives on Politics 2, no. 04 (December 2004): 777–80. http://dx.doi.org/10.1017/s1537592704040526.

41

Wallis, Charles S. "Stich, Content, Prediction, and Explanation in Cognitive Science." PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association 1990, no. 1 (January 1990): 327–40. http://dx.doi.org/10.1086/psaprocbienmeetp.1990.1.192714.

42

Cybinski, Patti. "Description, explanation, prediction – the evolution of bankruptcy studies?" Managerial Finance 27, no. 4 (April 2001): 29–44. http://dx.doi.org/10.1108/03074350110767123.

43

Vos, Frederik G. S., Holger Schiele, and Lisa Hüttinger. "Supplier satisfaction: Explanation and out-of-sample prediction." Journal of Business Research 69, no. 10 (October 2016): 4613–23. http://dx.doi.org/10.1016/j.jbusres.2016.04.013.

44

Güngör, Onur, Tunga Güngör, and Suzan Uskudarli. "EXSEQREG: Explaining sequence-based NLP tasks with regions with a case study using morphological features for named entity recognition." PLOS ONE 15, no. 12 (December 30, 2020): e0244179. http://dx.doi.org/10.1371/journal.pone.0244179.

Abstract:
The state-of-the-art systems for most natural language engineering tasks employ machine learning methods. Despite the improved performances of these systems, there is a lack of established methods for assessing the quality of their predictions. This work introduces a method for explaining the predictions of any sequence-based natural language processing (NLP) task implemented with any model, neural or non-neural. Our method named EXSEQREG introduces the concept of region that links the prediction and features that are potentially important for the model. A region is a list of positions in the input sentence associated with a single prediction. Many NLP tasks are compatible with the proposed explanation method as regions can be formed according to the nature of the task. The method models the prediction probability differences that are induced by careful removal of features used by the model. The output of the method is a list of importance values. Each value signifies the impact of the corresponding feature on the prediction. The proposed method is demonstrated with a neural network based named entity recognition (NER) tagger using Turkish and Finnish datasets. A qualitative analysis of the explanations is presented. The results are validated with a procedure based on the mutual information score of each feature. We show that this method produces reasonable explanations and may be used for i) assessing the degree of the contribution of features regarding a specific prediction of the model, ii) exploring the features that played a significant role for a trained model when analyzed across the corpus.
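
A schematic of the removal-based scoring idea in the abstract: importance is the drop in the predicted probability of the label when a feature is masked. The predict_proba callable and the mask token are hypothetical stand-ins, not part of EXSEQREG itself.

def removal_importance(predict_proba, features, label, mask_token="<UNK>"):
    """predict_proba maps a feature list to a mapping of labels to probabilities."""
    base = predict_proba(features)[label]
    importances = {}
    for i, feature in enumerate(features):
        masked = features[:i] + [mask_token] + features[i + 1:]      # remove one feature at a time
        importances[feature] = base - predict_proba(masked)[label]   # probability drop = importance
    return importances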
45

Niu, Kezhu. "Autism Spectrum Disorder as a Disorder of Prediction in Sensorimotor Processing." Journal of Education, Humanities and Social Sciences 5 (November 23, 2022): 320–26. http://dx.doi.org/10.54097/ehss.v5i.2927.

Abstract:
Autism Spectrum Disorder (ASD) is a neurodevelopmental disorder characterized by persistent social interactive and communicative difficulties and repetitive, restricted behavioral patterns. Previous theories suggested impairments in two distinct sets of core abilities as an explanation for ASD. One is the delayed ability to reflect on others’ mental content, and the other is the lack of the tendency to integrate details to create meanings in contexts. In the current field, there is an emergent explanation to consider ASD as a disorder of prediction. Under this notion, two competing views proposed different accounts for the specific deficits in ASD predictive system. The Bayesian view believes that ASD individuals experience reduced priors and are less reliant on top-down information when making predictions. Alternatively, the predictive error view believes that ASD impairments result from a failure to ignore accidental prediction errors caused by environmental noise, leading to overly frequent updates and less generalizable predictions. Though both views seem credible, no previous studies have comprehensively examined their reliability in empirical evidence. Therefore, the present paper fills in the gap by reviewing the two views and their relevant psychological and neuroscientific evidence with a specific focus on sensorimotor prediction. The major conclusion is that most empirical evidence was consistent with the reduced prior proposal but not the prediction error weighing proposal. Specifically, the ASD population is resistant to reliable contextual priors even though their associative learning may remain unimpaired. In keeping with the reduced prior proposal, the ASD population showed atypical connectivity between brain areas, suggesting insufficient communication of top-down information. Additionally, subjective anxiety during the Bayesian inferential process probably hinders the prediction performance. One possible limitation of the present review is the generalizability of conclusions to the domain of social impairments. Future studies should dedicate to exploring the restrictive conditions on the reduced Bayesian prior and E/I ratio imbalance and the role of anxiety in moderating the predictive process. One practical implication is to promote context-dependent imitations in sensorimotor learning in ASD. This review can provide some insights to future intervention studies and practices for children with ASD.
46

Nor, Ahmad Kamal Mohd, Srinivasa Rao Pedapati, Masdi Muhammad, and Víctor Leiva. "Abnormality Detection and Failure Prediction Using Explainable Bayesian Deep Learning: Methodology and Case Study with Industrial Data." Mathematics 10, no. 4 (February 11, 2022): 554. http://dx.doi.org/10.3390/math10040554.

Abstract:
Mistrust, amplified by numerous artificial intelligence (AI) related incidents, is an issue that has caused the energy and industrial sectors to be amongst the slowest adopter of AI methods. Central to this issue is the black-box problem of AI, which impedes investments and is fast becoming a legal hazard for users. Explainable AI (XAI) is a recent paradigm to tackle such an issue. Being the backbone of the industry, the prognostic and health management (PHM) domain has recently been introduced into XAI. However, many deficiencies, particularly the lack of explanation assessment methods and uncertainty quantification, plague this young domain. In the present paper, we elaborate a framework on explainable anomaly detection and failure prognostic employing a Bayesian deep learning model and Shapley additive explanations (SHAP) to generate local and global explanations from the PHM tasks. An uncertainty measure of the Bayesian model is utilized as a marker for anomalies and expands the prognostic explanation scope to include the model’s confidence. In addition, the global explanation is used to improve prognostic performance, an aspect neglected from the handful of studies on PHM-XAI. The quality of the explanation is examined employing local accuracy and consistency properties. The elaborated framework is tested on real-world gas turbine anomalies and synthetic turbofan failure prediction data. Seven out of eight of the tested anomalies were successfully identified. Additionally, the prognostic outcome showed a 19% improvement in statistical terms and achieved the highest prognostic score amongst best published results on the topic.
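
One common way to obtain the kind of predictive uncertainty the framework uses as an anomaly marker is Monte Carlo dropout; the paper's exact Bayesian model may differ, so the PyTorch sketch below is only indicative, with random tensors standing in for sensor readings.

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Dropout(0.2), nn.Linear(32, 1))

def mc_predict(model, x, n_samples=100):
    model.train()                                   # keep dropout active at inference time
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)  # mean prediction and its spread

x = torch.randn(5, 8)                               # five stand-in sensor readings
mean, std = mc_predict(model, x)                    # a large std flags a potential anomaly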
47

Xia, Bohui, Xueting Wang, and Toshihiko Yamasaki. "Semantic Explanation for Deep Neural Networks Using Feature Interactions." ACM Transactions on Multimedia Computing, Communications, and Applications 17, no. 3s (October 31, 2021): 1–19. http://dx.doi.org/10.1145/3474557.

Abstract:
Given the promising results obtained by deep-learning techniques in multimedia analysis, the explainability of predictions made by networks has become important in practical applications. We present a method to generate semantic and quantitative explanations that are easily interpretable by humans. The previous work to obtain such explanations has focused on the contributions of each feature, taking their sum to be the prediction result for a target variable; the lack of discriminative power due to this simple additive formulation led to low explanatory performance. Our method considers not only individual features but also their interactions, for a more detailed interpretation of the decisions made by networks. The algorithm is based on the factorization machine, a prediction method that calculates factor vectors for each feature. We conducted experiments on multiple datasets with different models to validate our method, achieving higher performance than the previous work. We show that including interactions not only generates explanations but also makes them richer and is able to convey more information. We show examples of produced explanations in a simple visual format and verify that they are easily interpretable and plausible.
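
Because the method builds on the factorization machine, a minimal scorer helps make the feature interactions concrete: pairwise interaction weights are dot products of learned factor vectors. Shapes and values below are illustrative only.

import numpy as np

def fm_predict(x, w0, w, V):
    """x: (n_features,), w0: bias, w: (n_features,), V: (n_features, k) factor matrix."""
    linear = w0 + w @ x
    # O(n*k) identity for sum over i<j of <V_i, V_j> * x_i * x_j
    interactions = 0.5 * np.sum((V.T @ x) ** 2 - (V.T ** 2) @ (x ** 2))
    return linear + interactions

rng = np.random.default_rng(4)
x = rng.normal(size=6)
w0, w, V = 0.1, rng.normal(size=6), rng.normal(size=(6, 3))
print(fm_predict(x, w0, w, V))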
48

Cavallaro, Massimo, Ed Moran, Benjamin Collyer, Noel D. McCarthy, Christopher Green, and Matt J. Keeling. "Informing antimicrobial stewardship with explainable AI." PLOS Digital Health 2, no. 1 (January 5, 2023): e0000162. http://dx.doi.org/10.1371/journal.pdig.0000162.

Abstract:
The accuracy and flexibility of artificial intelligence (AI) systems often comes at the cost of a decreased ability to offer an intuitive explanation of their predictions. This hinders trust and discourage adoption of AI in healthcare, exacerbated by concerns over liabilities and risks to patients’ health in case of misdiagnosis. Providing an explanation for a model’s prediction is possible due to recent advances in the field of interpretable machine learning. We considered a data set of hospital admissions linked to records of antibiotic prescriptions and susceptibilities of bacterial isolates. An appropriately trained gradient boosted decision tree algorithm, supplemented by a Shapley explanation model, predicts the likely antimicrobial drug resistance, with the odds of resistance informed by characteristics of the patient, admission data, and historical drug treatments and culture test results. Applying this AI-based system, we found that it substantially reduces the risk of mismatched treatment compared with the observed prescriptions. The Shapley values provide an intuitive association between observations/data and outcomes; the associations identified are broadly consistent with expectations based on prior knowledge from health specialists. The results, and the ability to attribute confidence and explanations, support the wider adoption of AI in healthcare.
49

Bowman, Howard, Marco Filetti, Brad Wyble, and Christian Olivers. "Attention is more than prediction precision." Behavioral and Brain Sciences 36, no. 3 (May 10, 2013): 206–8. http://dx.doi.org/10.1017/s0140525x12002324.

Abstract:
A cornerstone of the target article is that, in a predictive coding framework, attention can be modelled by weighting prediction error with a measure of precision. We argue that this is not a complete explanation, especially in the light of ERP (event-related potentials) data showing large evoked responses for frequently presented target stimuli, which thus are predicted.
50

Surameery, Nigar M. Shafiq, and Mohammed Y. Shakor. "Use Chat GPT to Solve Programming Bugs." International Journal of Information technology and Computer Engineering, no. 31 (January 28, 2023): 17–22. http://dx.doi.org/10.55529/ijitc.31.17.22.

Abstract:
This research paper explores the use of Chat GPT in solving programming bugs. The paper examines the characteristics of Chat GPT and how they can be leveraged to provide debugging assistance, bug prediction, and bug explanation to help solve programming problems. The paper also explores the limitations of Chat GPT in solving programming bugs and the importance of using other debugging tools and techniques to validate its predictions and explanations. The paper concludes by highlighting the potential of Chat GPT as one part of a comprehensive debugging toolkit, and the benefits of combining its strengths with the strengths of other debugging tools to identify and fix bugs more effectively.