A selection of scholarly literature on the topic "Prediction Explanation"

Format the source in APA, MLA, Chicago, Harvard, and other citation styles


Consult the lists of current articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Prediction Explanation".

Next to every work in the list of references you will find an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read the online abstract of the work, provided the relevant details are present in the source metadata.

Journal articles on the topic "Prediction Explanation"

1

Pintelas, Emmanuel, Meletis Liaskos, Ioannis E. Livieris, Sotiris Kotsiantis, and Panagiotis Pintelas. "Explainable Machine Learning Framework for Image Classification Problems: Case Study on Glioma Cancer Prediction." Journal of Imaging 6, no. 6 (May 28, 2020): 37. http://dx.doi.org/10.3390/jimaging6060037.

Abstract:
Image classification is a very popular machine learning domain in which deep convolutional neural networks have become the dominant approach. These networks achieve remarkable prediction accuracy, but they are considered black box models since they lack the ability to interpret their inner working mechanism and explain the main reasoning behind their predictions. In a variety of real-world tasks, such as medical applications, interpretability and explainability play a significant role. When decisions on critical issues such as cancer prediction are made with black box models that achieve high prediction accuracy but provide no explanation for their predictions, accuracy alone cannot be considered sufficient or ethically acceptable. Reasoning and explanation are essential in order to trust these models and support such critical predictions. Nevertheless, the definition and validation of the quality of a prediction model’s explanation are, in general, extremely subjective and unclear. In this work, an accurate and interpretable machine learning framework for image classification problems, able to produce high-quality explanations, is proposed. To this end, a feature extraction and explanation extraction framework is developed, together with three basic general conditions that validate the quality of any model’s prediction explanation for any application domain. The feature extraction framework extracts and creates transparent and meaningful high-level features for images, while the explanation extraction framework is responsible for creating good explanations relying on these extracted features and the prediction model’s inner function with respect to the proposed conditions. As a case study, brain tumor magnetic resonance images were used for predicting glioma cancer. Our results demonstrate the efficiency of the proposed model, since it achieves sufficient prediction accuracy while also being interpretable and explainable in simple human terms.
2

Halliwell, Nicholas. "Evaluating Explanations of Relational Graph Convolutional Network Link Predictions on Knowledge Graphs." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (June 28, 2022): 12880–81. http://dx.doi.org/10.1609/aaai.v36i11.21577.

Abstract:
Recently, explanation methods have been proposed to evaluate the predictions of Graph Neural Networks on the task of link prediction. Evaluating explanation quality is difficult without ground truth explanations. This thesis is focused on providing a method, including datasets and scoring metrics, to quantitatively evaluate explanation methods on link prediction on Knowledge Graphs.
3

Halliwell, Nicholas, Fabien Gandon, and Freddy Lecue. "A Simplified Benchmark for Ambiguous Explanations of Knowledge Graph Link Prediction Using Relational Graph Convolutional Networks (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (June 28, 2022): 12963–64. http://dx.doi.org/10.1609/aaai.v36i11.21618.

Abstract:
Relational Graph Convolutional Networks (RGCNs) are commonly used on Knowledge Graphs (KGs) to perform black box link prediction. Several algorithms have been proposed to explain their predictions. Evaluating performance of explanation methods for link prediction is difficult without ground truth explanations. Furthermore, there can be multiple explanations for a given prediction in a KG. No dataset exists where observations have multiple ground truth explanations to compare against. Additionally, no standard scoring metrics exist to compare predicted explanations against multiple ground truth explanations. We propose and evaluate a method, including a dataset, to benchmark explanation methods on the task of explainable link prediction using RGCNs.
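
A minimal sketch of the kind of scoring such a benchmark implies, assuming each explanation is a set of knowledge-graph triples and a predicted explanation is scored against its best-matching ground truth; the data and function names below are illustrative, not taken from the paper:

```python
def f1(predicted, truth):
    """F1 overlap between two explanations, each a set of (subject, predicate, object) triples."""
    if not predicted or not truth:
        return 0.0
    tp = len(predicted & truth)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(predicted), tp / len(truth)
    return 2 * precision * recall / (precision + recall)

def best_match_score(predicted, ground_truths):
    """Score a predicted explanation against several admissible ground-truth explanations."""
    return max(f1(predicted, gt) for gt in ground_truths)

# Toy example: two acceptable explanations for the same predicted link.
predicted = {("Alice", "worksAt", "ACME"), ("ACME", "locatedIn", "Paris")}
ground_truths = [
    {("Alice", "worksAt", "ACME")},
    {("Alice", "livesIn", "Paris"), ("ACME", "locatedIn", "Paris")},
]
print(best_match_score(predicted, ground_truths))  # 0.667, matching the first candidate
```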
4

EL Shawi, Radwa, and Mouaz H. Al-Mallah. "Interpretable Local Concept-based Explanation with Human Feedback to Predict All-cause Mortality." Journal of Artificial Intelligence Research 75 (November 18, 2022): 833–55. http://dx.doi.org/10.1613/jair.1.14019.

Abstract:
Machine learning models are incorporated in different fields and disciplines, some of which require a high level of accountability and transparency, for example the healthcare sector. With the General Data Protection Regulation (GDPR), the plausibility and verifiability of the predictions made by machine learning models have become essential. A widely used category of explanation techniques attempts to explain models’ predictions by quantifying an importance score for each input feature. However, summarizing such scores to provide human-interpretable explanations is challenging. Another category of explanation techniques focuses on learning a domain representation in terms of high-level, human-understandable concepts and then utilizing them to explain predictions. These explanations are hampered by how the concepts are constructed, which is not intrinsically interpretable. To this end, we propose Concept-based Local Explanations with Feedback (CLEF), a novel local model-agnostic explanation framework for learning a set of high-level, transparent concept definitions in high-dimensional tabular data that uses clinician-labeled concepts rather than raw features. CLEF maps the raw input features to high-level intuitive concepts and then decomposes the evidence for the prediction of the instance being explained into concepts. In addition, the proposed framework generates counterfactual explanations, suggesting the minimum changes in the instance’s concept-based explanation that would lead to a different prediction. We demonstrate the framework with simulated user feedback on predicting the risk of mortality; such direct feedback is more effective than other techniques that rely on hand-labelled or automatically extracted concepts in learning concepts that align with ground-truth concept definitions.
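
The general idea of mapping raw features to clinician-defined concepts and then attributing a prediction to those concepts can be sketched as follows. This is a hypothetical local-surrogate illustration, not the CLEF algorithm itself, and all feature and concept names are invented:

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical concept definitions: each clinical concept aggregates a few raw features.
features = ["systolic_bp", "resting_hr", "ejection_fraction", "bmi", "fasting_glucose", "hdl"]
concepts = {
    "cardiac_risk":   ["systolic_bp", "resting_hr", "ejection_fraction"],
    "metabolic_risk": ["bmi", "fasting_glucose", "hdl"],
}
idx = {f: i for i, f in enumerate(features)}

def to_concepts(X):
    """Map standardized raw features to concept scores (here: simple averages)."""
    return np.column_stack([X[:, [idx[f] for f in fs]].mean(axis=1) for fs in concepts.values()])

def local_concept_explanation(black_box, x, n_samples=2000, scale=0.3, seed=0):
    """Perturb around one patient, fit a linear surrogate in concept space,
    and return each concept's contribution to the predicted mortality risk."""
    rng = np.random.default_rng(seed)
    X_pert = x + rng.normal(0.0, scale, size=(n_samples, x.size))
    y_pert = black_box(X_pert)          # black-box risk scores for the perturbed patients
    surrogate = Ridge(alpha=1.0).fit(to_concepts(X_pert), y_pert)
    contributions = surrogate.coef_ * to_concepts(x[None, :])[0]
    return dict(zip(concepts, contributions))
```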
5

Bonacich, Phillip. "Explanation and Prediction." Rationality and Society 9, no. 3 (August 1997): 373–77. http://dx.doi.org/10.1177/104346397009003006.

6

Søgaard, Villy. "Explanation versus prediction." Technological Forecasting and Social Change 43, no. 2 (March 1993): 201–2. http://dx.doi.org/10.1016/0040-1625(93)90018-3.

7

Bartsch, Karen. "False Belief Prediction and Explanation: Which Develops First and Why it Matters." International Journal of Behavioral Development 22, no. 2 (June 1998): 423–28. http://dx.doi.org/10.1080/016502598384450.

Abstract:
In response to Wimmer and Mayringer’s (this issue) report “False belief understanding in young children: Explanations do not develop before predictions”, the theoretical importance of the explanation versus prediction issue is expanded and the empirical conclusion of the report is questioned.
8

Plomin, Robert, and Sophie von Stumm. "Polygenic scores: prediction versus explanation." Molecular Psychiatry 27, no. 1 (October 22, 2021): 49–52. http://dx.doi.org/10.1038/s41380-021-01348-y.

Abstract:
During the past decade, polygenic scores have become a fast-growing area of research in the behavioural sciences. The ability to directly assess people’s genetic propensities has transformed research by making it possible to add genetic predictors of traits to any study. The value of polygenic scores in the behavioural sciences rests on using inherited DNA differences to predict, from birth, common disorders and complex traits in unrelated individuals in the population. This predictive power of polygenic scores does not require knowing anything about the processes that lie between genes and behaviour. It also does not mandate disentangling the extent to which the prediction is due to assortative mating, genotype–environment correlation, or even population stratification. Although bottom-up explanation from genes to brain to behaviour will remain the long-term goal of the behavioural sciences, prediction is also a worthy achievement because it has immediate practical utility for identifying individuals at risk and is the necessary first step towards explanation. A high priority for research must be to increase the predictive power of polygenic scores to be able to use them as an early warning system to prevent problems.
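
Although the abstract itself is non-technical, the quantity it discusses has a simple form: a polygenic score is a weighted sum of allele dosages. A minimal illustration (the SNP weights and genotypes below are invented, not from the paper):

```python
import numpy as np

weights = np.array([0.12, -0.05, 0.08, 0.02])   # per-allele effect sizes (beta_i) from a GWAS
genotypes = np.array([                           # allele dosages g_i in {0, 1, 2} per individual
    [2, 1, 0, 1],    # individual A
    [0, 2, 1, 1],    # individual B
])

# Polygenic score for individual j: PGS_j = sum_i beta_i * g_ij
pgs = genotypes @ weights
print(pgs)   # raw scores; in practice these are standardized against a reference sample
```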
9

Douglas, Heather E. "Reintroducing Prediction to Explanation." Philosophy of Science 76, no. 4 (October 2009): 444–63. http://dx.doi.org/10.1086/648111.

10

Kishimoto, T., and T. Sato. "+: Another Explanation and Prediction." Progress of Theoretical Physics 116, no. 1 (July 1, 2006): 241–46. http://dx.doi.org/10.1143/ptp.116.241.


Dissertations on the topic "Prediction Explanation"

1

Gordon, Richard Douglas. "Explanation and prediction in the labour process theory." Thesis, University of British Columbia, 1990. http://hdl.handle.net/2429/30583.

Abstract:
The view that large-scale, long-range social theories cannot be predictive other than "in principle" is sufficiently widespread as to be considered the orthodox view. It is widely held that, lacking this predictive quality, social theories are cut off from a crucial form of vindication enjoyed by the experimental sciences. Thus many would agree with Ryan's assessment that while with regard to large-scale social changes "long-range prediction is not in principle impossible," nonetheless as a matter of practical methodology such a goal is of "dubious value." The reason commonly proffered as to why social theories cannot be predictive is the causal complexity of social life. Because of this feature, it is held, while we may be able to unearth interesting social generalizations, we will not be able to predict the many initial conditions together with which they predict. Alternately, due to this complexity we are able to achieve no better than tendency laws which do not permit predictions of sufficient precision to allow for predictive testing. This has been held to be true for other causally complex fields as well. Thus, Scriven has argued that Darwin was "the paradigm of the explanatory but non-predictive scientist" due to the constraints imposed on his methodology by the causal complexity of the biosphere. As a result of both an uncritical acceptance of the orthodox view and an inadequate analysis of Marx's methodology, Daniel Little has argued that Marxian theory is non-predictive. However, a thorough analysis of Marx's labour process theory shows it to be both clearly predictive and subject to justification by predictive assessment. Moreover, a formalization of the theory indicates that available data confirm it as regards both its central hypothesis and the matrix of social causation it exhibits. Little's position in regard to Marxian theory is strongly similar to Scriven's in regard to Darwinian theory. In both cases, faulty theoretical presuppositions combine with inadequate analysis to buttress false conclusions as to the asymmetry of explanation and prediction. Adequate analysis dispels Little's and Scriven's conclusions and exhibits important methodological parallels between Marx and Darwin.
2

Bonawitz, Elizabeth R. (Elizabeth Robbin). "The rational child : theories and evidence in prediction, exploration, and explanation." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/47891.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Brain and Cognitive Sciences, 2009.
Includes bibliographical references (p. 122-133).
In this thesis, rational Bayesian models and the Theory-theory are bridged to explore ways in which children can be described as Bayesian scientists. I investigate what it means for children to take a rational approach to processes that support learning. In particular, I present empirical studies that show children making rational predictions, exploration, and explanations. I test the claim that differences in prior beliefs or changes in the observed evidence should affect these behaviors. The studies presented in this thesis encompass two manipulations: in some conditions, children's prior beliefs are equal, but the patterns of evidence are varied; in other conditions, children observe identical evidence but children's prior beliefs are varied. I incorporate an additional approach in this thesis, testing children within a variety of domains, tapping into their intuitive theories of biological kinds, psychosomatic illness, balance, and physical systems. Chapter One introduces the problem. Chapter Two explores how evidence and children's strong beliefs about biological events and psychosomatic illness influence their forced-choice explanations in a story-book task. Chapter Three presents a training study to further investigate the developmental differences discussed in Chapter Two. Chapter Four looks at how children's strong differential beliefs of balance interact with evidence to affect their predictions, play, explanations, and learning.
Chapter Five looks at children's exploratory play with a jack-in-the-box (where children don't have strong, differential beliefs), given different patterns of evidence. Chapter Six investigates children's explanations following theory-neutral evidence about a mechanical toy. Chapter Seven concludes the thesis. The following chapters will suggest that frameworks combining evidence and theories capture children's causal learning about the world.
3

Watson, Jason Paul 1971. "Explanation and prediction of curious experimental phenomena in lasers and nonlinear optics." Diss., The University of Arizona, 1999. http://hdl.handle.net/10150/282875.

Abstract:
Experimental data often contains curious and unexplained results. In the course of experimental investigations of Raman shifting and the Co:MgF₂ laser, results were obtained which would not have been expected from the typical theoretical picture. In the case of Raman shifting, the forward Stokes conversion was found to depend upon the pump bandwidth. Numerical modeling suggests that coupling between the Stokes directions may be the root cause of the phenomena. In the case of the Co:MgF₂ laser, the laser output was observed to have large amounts of spectral structure. This amount of structure should not be expected in a room temperature vibronically broadened laser. Further experiments point to adsorbed water vapor for the cause of the structure, and this hypothesis is supported by a numerical model. Additionally, a unique method for treating the effects of arbitrary gain distribution on the propagation of the lowest order laser cavity mode is expanded to cover new distributions and new coordinate systems. An extension to parametric gains is also made. The extensions are then used to predict unstable regions in real laser cavities. These instabilities are observed in diffraction calculations. Guidelines for observing this intriguing result are presented.
4

Haar, D. H. "Formalised modelling of action theory in the explanation of crime for prediction, deduction and intervention." Thesis, University of Cambridge, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.599815.

Abstract:
This dissertation proposes an original approach to theory of action in psychological and sociological criminology, i.e. to theory explaining the causation of human wilful behaviour at great abstraction through the information processing conducted by each individual human agent. It is argued that the model presented in this dissertation, the so-called Minimal Model of Action, is more theoretically comprehensive than prior familiar approaches originating in various related fields, in particular through its integration of both rational and habitual aspects of behaviour in a unified causal argument. Secondly, it is argued that the model is more methodologically appealing than previous approaches due to its formalisation through conventional mathematics. The proposed model is brought to bear on more concrete behavioural data and criminological problems in three separate chapters so as to scrutinise its validity and tractability from three methodologically different angles. An experimental chapter shows that empirical responses to computer-based scenario tasks frequently display behaviour patterns, especially forms of habituation, which the Minimal Model of Action in its simulated implementations and unlike previous models manages to explain and predict. In the following chapter, it is mainly shown through mathematical deduction both in continuation of and in juxtaposition to prior economic reasoning in which ways “optimum law enforcement” levels are systematically overestimated (and sometimes underestimated) under a variety of conditions when over-rationalised conceptions of the individual offender are employed. Finally, a chapter on aggregate levels of small-scale public corruption employs the general model to simulate a typical criminal phenomenon to the explanation of which economic and broader social conceptions of human agency equally should contribute.
5

McKay, William L. "Hope and suicide resilience in the prediction and explanation of suicidality experiences in university students." Laramie, Wyo. : University of Wyoming, 2007. http://proquest.umi.com/pqdweb?did=1456285751&sid=3&Fmt=2&clientId=18949&RQT=309&VName=PQD.

6

Olofsson, Nina. "A Machine Learning Ensemble Approach to Churn Prediction : Developing and Comparing Local Explanation Models on Top of a Black-Box Classifier." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-210565.

Abstract:
Churn prediction methods are widely used in Customer Relationship Management and have proven to be valuable for retaining customers. To obtain a high predictive performance, recent studies rely on increasingly complex machine learning methods, such as ensemble or hybrid models. However, the more complex a model is, the more difficult it becomes to understand how decisions are actually made. Previous studies on machine learning interpretability have used a global perspective for understanding black-box models. This study explores the use of local explanation models for explaining the individual predictions of a Random Forest ensemble model. The churn prediction was studied on the users of Tink – a finance app. This thesis aims to take local explanations one step further by making comparisons between churn indicators of different user groups. Three sets of groups were created based on differences in three user features. The importance scores of all globally found churn indicators were then computed for each group with the help of local explanation models. The results showed that the groups did not have any significant differences regarding the globally most important churn indicators. Instead, differences were found for globally less important churn indicators, concerning the type of information that users stored in the app. In addition to comparing churn indicators between user groups, the result of this study was a well-performing Random Forest ensemble model with the ability of explaining the reason behind churn predictions for individual users. The model proved to be significantly better than a number of simpler models, with an average AUC of 0.93.
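
As a rough illustration of the pipeline this thesis describes (a Random Forest churn model plus per-user local explanations compared across user groups), here is a minimal sketch on synthetic data. It is not the thesis code, and the hand-rolled LIME-style surrogate below merely stands in for whatever local explanation model was actually used:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-ins: X are user features, y is churn, groups is one user attribute to compare on.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 12))
y = (X[:, 0] + rng.normal(size=5000) > 0.8).astype(int)
groups = rng.integers(0, 2, size=5000)

X_tr, X_te, y_tr, y_te, _, g_te = train_test_split(X, y, groups, random_state=0)
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]))

def local_attribution(x, scale=0.25, n=500):
    """LIME-style local linear surrogate around one user: per-feature weights for this prediction."""
    X_pert = x + rng.normal(0.0, scale, size=(n, x.size))
    y_pert = rf.predict_proba(X_pert)[:, 1]
    return Ridge(alpha=1.0).fit(X_pert - x, y_pert).coef_

# Average the local importance of each churn indicator within each user group and compare.
attributions = np.array([local_attribution(x) for x in X_te[:200]])
for g in (0, 1):
    print("group", g, attributions[g_te[:200] == g].mean(axis=0).round(3))
```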
7

Hasan, Rakebul. "Prédire les performances des requêtes et expliquer les résultats pour assister la consommation de données liées." Thesis, Nice, 2014. http://www.theses.fr/2014NICE4082/document.

Abstract:
Our goal is to assist users in understanding SPARQL query performance, query results, and derivations on Linked Data. To help users in understanding query performance, we provide query performance predictions based on the query execution history. We present a machine learning approach to predict query performances. We do not use statistics about the underlying data for our predictions. This makes our approach suitable for the Linked Data scenario, where statistics about the underlying data are often missing, such as when the data is controlled by external parties. To help users in understanding query results, we provide provenance-based query result explanations. We present a non-annotation-based approach to generate why-provenance for SPARQL query results. Our approach does not require any re-engineering of the query processor, the data model, or the query language. We use the existing SPARQL 1.1 constructs to generate provenance by querying the data. This makes our approach suitable for Linked Data. We also present a user study to examine the impact of query result explanations. Finally, to help users in understanding derivations on Linked Data, we introduce the concept of Linked Explanations. We publish explanation metadata as Linked Data. This allows explaining derived data in Linked Data by following the links of the data used in the derivation and the links of their explanation metadata. We present an extension of the W3C PROV ontology to describe explanation metadata. We also present an approach to summarize these explanations to help users filter information in the explanation and understand what important information was used in the derivation.
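
A toy sketch of the query-performance-prediction idea (learning execution times from a query log without statistics about the underlying data); the structural features and the nearest-neighbour learner below are assumptions made for illustration, not the features or model used in the thesis:

```python
import re
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def featurize(query):
    """Very rough structural features of a SPARQL query (purely illustrative)."""
    q = query.upper()
    return [
        query.count("?"),               # variable occurrences
        len(re.findall(r"\.", query)),  # rough count of triple patterns
        q.count("OPTIONAL"),
        q.count("FILTER"),
        1.0 if "ORDER BY" in q else 0.0,
    ]

# Historical query log: (query text, observed execution time in milliseconds).
history = [
    ("SELECT ?s WHERE { ?s ?p ?o . }", 40.0),
    ("SELECT ?s WHERE { ?s ?p ?o . ?o ?q ?r . FILTER(?r > 5) }", 180.0),
    ("SELECT ?s WHERE { ?s ?p ?o . OPTIONAL { ?o ?q ?r . } }", 250.0),
]
X = np.array([featurize(q) for q, _ in history])
y = np.array([t for _, t in history])

model = KNeighborsRegressor(n_neighbors=1).fit(X, y)
print(model.predict([featurize("SELECT ?x WHERE { ?x ?p ?y . ?y ?q ?z . }")]))
```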
8

Rawstorne, Patrick. "A systematic analysis of the theory of reasoned action, the theory of planned behaviour and the technology acceptance model when applied to the prediction and explanation of information systems use in mandatory usage contexts." Access electronically, 2005. http://www.library.uow.edu.au/adt-NWU/public/adt-NWU20060815.154410/index.html.

9

Bantegnie, Brice. "Eliminating propositional attitudes concepts." Thesis, Paris, Ecole normale supérieure, 2015. http://www.theses.fr/2015ENSU0020.

Abstract:
In this dissertation, I argue for the elimination of propositional attitudes concepts. In the first chapter I sketch the landscape of eliminativism in contemporary philosophy of mind and cognitive science. There are two kinds of eliminativism: eliminative materialism and concept eliminativism. One can further distinguish between folk and science eliminativism about concepts: whereas the former says that the concept should be eliminated from our folk theories, the latter says that the concept should be eliminated from our scientific theories. The eliminativism about propositional attitudes concepts I defend is a species of the latter. In the next three chapters I put forward three arguments for this thesis. I first argue that the interventionist theory of causation cannot lend credit to our claims of mental causation. I then support the thesis by showing that propositional attitudes concepts aren't natural kind concepts because they cross-cut the states of the modules posited by the thesis of massive modularity, a thesis which, I contend, is part of our best research-program. Finally, my third argument rests on science eliminativism about the concept of mental content. In the two last chapters of the dissertation I first defend the elimination of the concept of mental content from the success argument, according to which as psychologists produce successful science while using the concept of mental content, the concept should be conserved. Then, I dismiss an alternative way of eliminating the concept, that is, the way taken by proponents of extended cognition, by refuting what I take to be the best argument for extended cognition, namely, the system argument.
10

Lonjarret, Corentin. "Sequential recommendation and explanations." Thesis, Lyon, 2021. http://theses.insa-lyon.fr/publication/2021LYSEI003/these.pdf.

Abstract:
Recommender systems have received a lot of attention over the past decades with the proposal of many models that take advantage of the most advanced models of Deep Learning and Machine Learning. With the automated collection of user actions such as purchasing items, watching movies, or clicking on hyperlinks, the data available for recommender systems is becoming more and more abundant. These data, called implicit feedback, keep the sequential order of actions. It is in this context that sequence-aware recommender systems have emerged. Their goal is to combine user preference (long-term users' profiles) and sequential dynamics (short-term tendencies) in order to recommend next actions to a user. In this thesis, we investigate sequential recommendation that aims to predict the user's next item/action from implicit feedback. Our main contribution is REBUS, a new metric embedding model, where only items are projected to integrate and unify user preferences and sequential dynamics. To capture sequential dynamics, REBUS uses frequent sequences in order to provide personalized order Markov chains. We have carried out extensive experiments and demonstrate that our method outperforms state-of-the-art models, especially on sparse datasets. Moreover, we share our experience on the implementation and the integration of REBUS in myCADservices, a collaborative platform of the French company Visiativ. We also propose methods to explain the recommendations provided by recommender systems, in the research line of explainable AI that has received a lot of attention recently. Despite the ubiquity of recommender systems, only a few researchers have attempted to explain the recommendations according to user input. However, being able to explain a recommendation would help increase the confidence that a user can have in a recommendation system. Hence, we propose a method based on subgroup discovery that provides interpretable explanations of a recommendation for models that use implicit feedback.
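
A toy sketch of the general scoring scheme used by metric-embedding sequential recommenders, where a candidate item is ranked by its distance to a query point mixing long-term preference with recent actions; this is illustrative only and is not the actual REBUS model or its frequent-sequence machinery:

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, dim = 50, 8
item_emb = rng.normal(size=(n_items, dim))   # learned item embeddings (random stand-ins here)

def recommend_next(history, alpha=0.5, k_recent=2, top_k=5):
    """Rank candidates by distance to a query point mixing the user's long-term
    centroid (preference) with the most recent items (sequential dynamics)."""
    long_term = item_emb[history].mean(axis=0)
    short_term = item_emb[history[-k_recent:]].mean(axis=0)
    query = alpha * long_term + (1 - alpha) * short_term
    dists = np.linalg.norm(item_emb - query, axis=1)
    dists[history] = np.inf                  # do not recommend already-consumed items
    return np.argsort(dists)[:top_k]

print(recommend_next([3, 17, 42, 8]))        # indices of the top-5 candidate next items
```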

Books on the topic "Prediction Explanation"

1

Dieks, Dennis, Wenceslao J. Gonzalez, Stephan Hartmann, Thomas Uebel, and Marcel Weber, eds. Explanation, Prediction, and Confirmation. Dordrecht: Springer Netherlands, 2011. http://dx.doi.org/10.1007/978-94-007-1180-8.

2

Dennis Geert Bernardus Johan Dieks. Explanation, Prediction, and Confirmation. Dordrecht: Springer Science+Business Media B.V., 2011.

3

Farrington, David P., ed. Criminal recidivism: Explanation, prediction and prevention. Abingdon, Oxon: Routledge, 2015.

4

Bonneson, James, and John Ivan. Theory, Explanation, and Prediction in Road Safety. Washington, D.C.: Transportation Research Board, 2013. http://dx.doi.org/10.17226/22465.

5

Multiple regression in behavioral research: Explanation and prediction. 3rd ed. Fort Worth: Harcourt Brace College Publishers, 1997.

6

Casti, J. L., Anders Karlqvist, and Sweden Forskningsrådsnämnden, eds. Beyond belief: Randomness, prediction, and explanation in science. Boca Raton, Fla: CRC Press, 1991.

7

Eriksson, Bo G. Studying ageing: Experiences, description, variation, prediction and explanation. Göteborg: Department of Sociology, University of Gothenburg, 2010.

8

Eriksson, Bo G. Studying ageing: Experiences, description, variation, prediction and explanation. Göteborg: Department of Sociology, University of Gothenburg, 2010.

9

Trasler, Gordon. The Explanation of criminality. London: Routledge & Kegan Paul, 1998.

10

Rastogi, P. N. Ethnic tensions in Indian society: Explanation, prediction, monitoring, and control. Delhi, India: Mittal Publications, 1986.


Book chapters on the topic "Prediction Explanation"

1

Worrall, John. "The No Miracles Intuition and the No Miracles Argument." In Explanation, Prediction, and Confirmation, 11–21. Dordrecht: Springer Netherlands, 2011. http://dx.doi.org/10.1007/978-94-007-1180-8_1.

2

Reutlinger, Alexander. "What’s Wrong with the Pragmatic-Ontic Account of Mechanistic Explanation?" In Explanation, Prediction, and Confirmation, 141–52. Dordrecht: Springer Netherlands, 2011. http://dx.doi.org/10.1007/978-94-007-1180-8_10.

3

Joffe, Michael. "Causality and Evidence Discovery in Epidemiology." In Explanation, Prediction, and Confirmation, 153–66. Dordrecht: Springer Netherlands, 2011. http://dx.doi.org/10.1007/978-94-007-1180-8_11.

4

Graßhoff, Gerd. "Inferences to Causal Relevance from Experiments." In Explanation, Prediction, and Confirmation, 167–82. Dordrecht: Springer Netherlands, 2011. http://dx.doi.org/10.1007/978-94-007-1180-8_12.

5

Love, Alan C., and Andreas Hüttemann. "Comparing Part-Whole Reductive Explanations in Biology and Physics." In Explanation, Prediction, and Confirmation, 183–202. Dordrecht: Springer Netherlands, 2011. http://dx.doi.org/10.1007/978-94-007-1180-8_13.

6

McLaughlin, Peter. "The Arrival of the Fittest." In Explanation, Prediction, and Confirmation, 203–22. Dordrecht: Springer Netherlands, 2011. http://dx.doi.org/10.1007/978-94-007-1180-8_14.

7

Reydon, Thomas A. C. "The Arrival of the Fittest What?" In Explanation, Prediction, and Confirmation, 223–37. Dordrecht: Springer Netherlands, 2011. http://dx.doi.org/10.1007/978-94-007-1180-8_15.

8

Spohn, Wolfgang. "Normativity is the Key to the Difference Between the Human and the Natural Sciences." In Explanation, Prediction, and Confirmation, 241–51. Dordrecht: Springer Netherlands, 2011. http://dx.doi.org/10.1007/978-94-007-1180-8_16.

9

Lenk, Hans. "Methodological Higher-Level Interdisciplinarity by Scheme-Interpretationism: Against Methodological Separatism of the Natural, Social, and Human Sciences." In Explanation, Prediction, and Confirmation, 253–67. Dordrecht: Springer Netherlands, 2011. http://dx.doi.org/10.1007/978-94-007-1180-8_17.

10

Faye, Jan. "Explanation and Interpretation in the Sciences of Man." In Explanation, Prediction, and Confirmation, 269–79. Dordrecht: Springer Netherlands, 2011. http://dx.doi.org/10.1007/978-94-007-1180-8_18.


Conference papers on the topic "Prediction Explanation"

1

Kim, Marie, Jong-Arm Jun, YuJin Song, and Cheol Sig Pyo. "Explanation for building energy prediction." In 2020 International Conference on Information and Communication Technology Convergence (ICTC). IEEE, 2020. http://dx.doi.org/10.1109/ictc49870.2020.9289340.

2

Andric, Marina, Iustina Ivanova, and Francesco Ricci. "Climbing Route Difficulty Grade Prediction and Explanation." In WI-IAT '21: IEEE/WIC/ACM International Conference on Web Intelligence. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3486622.3493932.

3

Li, Liangyue, and Hanghang Tong. "Uncovering Teamwork in Networks — Prediction, Optimization and Explanation." In 2017 IEEE International Conference on Data Mining Workshops (ICDMW). IEEE, 2017. http://dx.doi.org/10.1109/icdmw.2017.160.

4

Zhang Bofeng and Liu Yue. "Customized explanation in expert system for earthquake prediction." In 17th IEEE International Conference on Tools with Artificial Intelligence (ICTAI'05). IEEE, 2005. http://dx.doi.org/10.1109/ictai.2005.54.

5

La Malfa, Emanuele, Rhiannon Michelmore, Agnieszka M. Zbrzezny, Nicola Paoletti, and Marta Kwiatkowska. "On Guaranteed Optimal Robust Explanations for NLP Models." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/366.

Abstract:
We build on abduction-based explanations for machine learning and develop a method for computing local explanations for neural network models in natural language processing (NLP). Our explanations comprise a subset of the words of the input text that satisfies two key features: optimality w.r.t. a user-defined cost function, such as the length of explanation, and robustness, in that they ensure prediction invariance for any bounded perturbation in the embedding space of the left-out words. We present two solution algorithms, respectively based on implicit hitting sets and maximum universal subsets, introducing a number of algorithmic improvements to speed up convergence of hard instances. We show how our method can be configured with different perturbation sets in the embedded space and used to detect bias in predictions by enforcing include/exclude constraints on biased terms, as well as to enhance existing heuristic-based NLP explanation frameworks such as Anchors. We evaluate our framework on three widely used sentiment analysis tasks and texts of up to 100 words from SST, Twitter and IMDB datasets, demonstrating the effectiveness of the derived explanations.
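
A brute-force toy sketch of the underlying notion of a robust explanation: the smallest subset of input words whose fixation keeps the prediction invariant while all other words may be perturbed. The bag-of-words scorer and swap sets below are invented, and the paper's actual algorithms (implicit hitting sets, maximum universal subsets) are far more efficient than this exhaustive search:

```python
from itertools import combinations

# Toy additive sentiment scorer; a real setting uses a neural model and bounded
# perturbations in embedding space (here: a small list of allowed word swaps).
weights = {"great": 2.0, "plot": 0.1, "boring": -2.5, "acting": 0.2}
swaps = {w: [w, "okay", "average"] for w in weights}

def predict(words):
    return int(sum(weights.get(w, 0.0) for w in words) > 0)

def is_robust(words, fixed_idx, label):
    """True if the prediction stays equal to `label` for every combination of swaps
    applied to the words that are not in the fixed (explanation) set."""
    def rec(i, current):
        if i == len(words):
            return predict(current) == label
        options = [words[i]] if i in fixed_idx else swaps.get(words[i], [words[i]])
        return all(rec(i + 1, current + [opt]) for opt in options)
    return rec(0, [])

def minimal_explanation(words):
    label = predict(words)
    for size in range(len(words) + 1):                   # smallest subsets first => optimality
        for subset in combinations(range(len(words)), size):
            if is_robust(words, set(subset), label):
                return [words[i] for i in subset]
    return words

print(minimal_explanation(["great", "acting", "boring", "plot"]))   # ['boring']
```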
6

Pereira, Filipe Dwan, Elaine Harada Teixeira de Oliveira, David Braga Fernandes de Oliveira, Leandro Silva Galvão de Carvalho, and Alexandra Ioana Cristea. "Interpretable AI to Understand Early Effective and Ineffective Programming Behaviours from CS1 Learners." In Anais Estendidos do Simpósio Brasileiro de Educação em Computação. Sociedade Brasileira de Computação, 2021. http://dx.doi.org/10.5753/educomp_estendido.2021.14853.

Abstract:
Building predictive models to estimate learner performance at the beginning of CS1 courses is essential in education to allow early interventions. However, the educational literature notes the lack of studies on early learner behaviours that can be effective or ineffective, that is, programming behaviours that potentially lead to success or failure, respectively. Hence, beyond the prediction, it is crucial to explain what leads the predictive model to make its decisions (e.g., why a given student s is classified as `passed'), which would allow a better understanding of which early programming behaviours are to be encouraged and triggered. In this work in progress, we use a state-of-the-art unified approach to interpreting black-box model predictions, the SHapley Additive exPlanations (SHAP) method. The SHAP method can be used to linearly explain a complex model (e.g., DL or XGBoost) at the instance level. In our context of CS1 performance prediction, this method takes the predictive model and the feature values for a given student as input and outputs an explanation of which feature values increase or decrease the learner's chances of passing. That is, using SHAP we can identify early effective and ineffective behaviours at student-level granularity. More than that, using these local explanations as building blocks, we can also extract global insights from the data and provide a summary of the model. A video explaining this work can be found at the following link: https://youtu.be/pd6Ma6uInHo
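
A minimal sketch of the kind of SHAP usage the abstract describes, on synthetic stand-in data; the feature semantics (e.g. attempts per problem, time between submissions, share of syntax errors) and the specific classifier are assumptions for illustration only:

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-ins for early programming-behaviour features; names and data are invented.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 5))
y = (X[:, 0] - 0.5 * X[:, 3] + rng.normal(scale=0.5, size=400) > 0).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer yields per-student, per-feature contributions to the pass/fail prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

student = 7
print("base value:", explainer.expected_value)
print("feature contributions for this student:", np.round(shap_values[student], 3))
```

Positive contributions push the prediction towards "passed" and negative ones towards "failed", which is exactly the per-student reading of effective versus ineffective behaviours described above.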
7

Zhang, Wen, Bibek Paudel, Wei Zhang, Abraham Bernstein, and Huajun Chen. "Interaction Embeddings for Prediction and Explanation in Knowledge Graphs." In WSDM '19: The Twelfth ACM International Conference on Web Search and Data Mining. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3289600.3291014.

8

Treviso, Marcos, and André F. T. Martins. "The Explanation Game: Towards Prediction Explainability through Sparse Communication." In Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP. Stroudsburg, PA, USA: Association for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.blackboxnlp-1.10.

9

Li, Yi, and Guo-en Xia. "The Explanation of Support Vector Machine in Customer Churn Prediction." In 2010 International Conference on E-Product E-Service and E-Entertainment (ICEEE 2010). IEEE, 2010. http://dx.doi.org/10.1109/iceee.2010.5660501.

10

Banerjee, Bonny, and Jayanta K. Dutta. "Efficient learning from explanation of prediction errors in streaming data." In 2013 IEEE International Conference on Big Data. IEEE, 2013. http://dx.doi.org/10.1109/bigdata.2013.6691728.


Reports of organizations on the topic "Prediction Explanation"

1

Lalisse, Matthias. Measuring the Impact of Campaign Finance on Congressional Voting: A Machine Learning Approach. Institute for New Economic Thinking Working Paper Series, February 2022. http://dx.doi.org/10.36687/inetwp178.

Abstract:
How much does money drive legislative outcomes in the United States? In this article, we use aggregated campaign finance data as well as a Transformer based text embedding model to predict roll call votes for legislation in the US Congress with more than 90% accuracy. In a series of model comparisons in which the input feature sets are varied, we investigate the extent to which campaign finance is predictive of voting behavior in comparison with variables like partisan affiliation. We find that the financial interests backing a legislator’s campaigns are independently predictive in both chambers of Congress, but also uncover a sizable asymmetry between the Senate and the House of Representatives. These findings are cross-referenced with a Representational Similarity Analysis (RSA) linking legislators’ financial and voting records, in which we show that “legislators who vote together get paid together”, again discovering an asymmetry between the House and the Senate in the additional predictive power of campaign finance once party is accounted for. We suggest an explanation of these facts in terms of Thomas Ferguson’s Investment Theory of Party Competition: due to a number of structural differences between the House and Senate, but chiefly the lower amortized cost of obtaining individuated influence with Senators, political investors prefer operating on the House using the party as a proxy.
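
The Representational Similarity Analysis step mentioned here has a simple core: compare a finance-based legislator similarity matrix with a vote-based one. A hedged sketch on random stand-in data (not the paper's data or exact procedure):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_legislators = 100

# Random stand-ins; real profiles would come from campaign-finance records and roll-call votes.
finance = rng.normal(size=(n_legislators, 40))
votes = rng.normal(size=(n_legislators, 300))

def similarity_matrix(X):
    Xc = X - X.mean(axis=1, keepdims=True)
    Xn = Xc / np.linalg.norm(Xc, axis=1, keepdims=True)
    return Xn @ Xn.T                      # pairwise correlation-like similarity

S_fin, S_vote = similarity_matrix(finance), similarity_matrix(votes)

# RSA statistic: rank correlation between the upper triangles of the two similarity matrices.
iu = np.triu_indices(n_legislators, k=1)
rho, p = spearmanr(S_fin[iu], S_vote[iu])
print(f"RSA Spearman rho = {rho:.3f} (p = {p:.3g})")
```

A high rank correlation is the formal counterpart of the paper's observation that "legislators who vote together get paid together".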
2

Fridman, Eyal, Jianming Yu, and Rivka Elbaum. Combining diversity within Sorghum bicolor for genomic and fine mapping of intra-allelic interactions underlying heterosis. United States Department of Agriculture, January 2012. http://dx.doi.org/10.32747/2012.7597925.bard.

Abstract:
Heterosis, the enigmatic phenomenon in which whole genome heterozygous hybrids demonstrate superior fitness compared to their homozygous parents, is the main cornerstone of modern crop plant breeding. One explanation for this non-additive inheritance of hybrids is interaction of alleles within the same locus. This proposal aims at screening, identifying and investigating heterosis trait loci (HTL) for different yield traits by implementing a novel integrated mapping approach in Sorghum bicolor as a model for other crop plants. Originally, the general goal of this research was to perform a genetic dissection of heterosis in a diallel built from a set of Sorghum bicolor inbred lines. This was conducted by implementing a novel computational algorithm which aims at associating specific heterozygosity found among hybrids with heterotic variation for different agronomic traits. The initial goals of the research are: (i) To perform genotyping by sequencing (GBS) of the founder lines; (ii) To evaluate the heterotic variation found in the diallel by performing field trials and measurements; (iii) To perform QTL analysis for identifying heterotic trait loci (HTL); (iv) To validate candidate HTL by testing the quantitative mode of inheritance in F2 populations; and (v) To identify candidate HTL in NAM founder lines and fine map these loci by test-crossing selected RIL derived from these founders. The genetic mapping was initially achieved with approximately 100 SSR markers, and later the founder lines were genotyped by sequencing. In addition to the originally proposed research we have added two additional populations that were utilized to further develop the HTL mapping approach: (1) a diallel of budding yeast (Saccharomyces cerevisiae) that was tested for heterosis of doubling time, and (2) a recombinant inbred line population of Sorghum bicolor that allowed testing in the field, and in more depth, the contribution of heterosis to plant height, as well as novel simulations for predicting the effect of dominant and additive effects in tightly linked loci on pseudooverdominance. There are several conclusions relevant to crop plants in general and to sorghum breeding and biology in particular: (i) heterosis for reproductive (1), vegetative (2) and metabolic phenotypes is predominantly achieved via dominance complementation. (ii) Most loci that seem to be inherited as overdominant in fact achieve the superior phenotype of the heterozygote due to linkage in repulsion, namely by a pseudooverdominant mechanism. Our computer simulations show that such repulsion linkage could influence QTL detection and effect estimation in segregating populations. (iii) A new height QTL (qHT7.1) was identified near the genomic region harboring the known auxin transporter Dw3 in sorghum, and its genetic dissection in a RIL population demonstrated that it affects both the upper and lower parts of the plant, whereas Dw3 affects only the part below the flag leaf. (iv) HTL mapping for grain nitrogen content in sorghum grains has identified several candidate genes that regulate this trait, including several putative nitrate transporters and a transcription factor belonging to the no-apical meristem (NAC)-like large gene family. This activity was combined with another BARD-funded project in which several de novo mutants in this gene were identified for functional analysis.
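
For readers less familiar with the terminology in this abstract, heterosis is conventionally quantified relative to the parental lines; the standard definitions below are general background, not notation taken from this project:

```latex
% Mid-parent heterosis: F1 advantage over the mean of the two parents
MPH = \frac{\bar{F}_{1} - \tfrac{1}{2}\,(\bar{P}_{1} + \bar{P}_{2})}{\tfrac{1}{2}\,(\bar{P}_{1} + \bar{P}_{2})} \times 100\%

% Best-parent heterosis (heterobeltiosis): F1 advantage over the better parent
BPH = \frac{\bar{F}_{1} - \max(\bar{P}_{1}, \bar{P}_{2})}{\max(\bar{P}_{1}, \bar{P}_{2})} \times 100\%
```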