Academic literature on the topic 'Algorithm explainability'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Algorithm explainability.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Algorithm explainability":

1

Nuobu, Gengpan. "Transformer model: Explainability and prospectiveness." Applied and Computational Engineering 20, no. 1 (October 23, 2023): 88–99. http://dx.doi.org/10.54254/2755-2721/20/20231079.

Abstract:
The purpose of Artificial Intelligence (AI) is to simulate the learning process of the human brain through strong computing power and appropriate algorithms, so that machines can develop human-like judgment at work. Current AI relies mainly on deep learning models based on artificial neural networks, such as the Convolutional Neural Network (CNN) in computer vision, but these models also come with defects. This paper introduces the defects of CNNs and discusses the Transformer model as a way to address the unexplainability of traditional CNN algorithms, examining why the Transformer model and its attention mechanism are considered a path toward AI intelligibility.
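To make concrete why attention is often described as more inspectable than CNN feature maps, here is a minimal, self-contained sketch (a toy computation, unrelated to the paper's experiments): the attention weights form an explicit probability distribution over input positions that can be read directly off the model.

```python
# Toy single-head attention: the weight matrix itself is the inspectable artifact.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
seq_len, d_model = 5, 16
x = torch.randn(seq_len, d_model)      # stand-in token embeddings

w_q = torch.randn(d_model, d_model)    # hypothetical projection matrices
w_k = torch.randn(d_model, d_model)
q, k = x @ w_q, x @ w_k

scores = q @ k.T / d_model ** 0.5      # scaled dot-product attention
weights = F.softmax(scores, dim=-1)    # each row is a distribution over input positions

print(weights.sum(dim=-1))             # rows sum to 1
print(weights[0])                      # how strongly token 0 attends to every position
```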
2

Hwang, Hyunseung, and Steven Euijong Whang. "XClusters: Explainability-First Clustering." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 7 (June 26, 2023): 7962–70. http://dx.doi.org/10.1609/aaai.v37i7.25963.

Abstract:
We study the problem of explainability-first clustering where explainability becomes a first-class citizen for clustering. Previous clustering approaches use decision trees for explanation, but only after the clustering is completed. In contrast, our approach is to perform clustering and decision tree training holistically where the decision tree's performance and size also influence the clustering results. We assume the attributes for clustering and explaining are distinct, although this is not necessary. We observe that our problem is a monotonic optimization where the objective function is a difference of monotonic functions. We then propose an efficient branch-and-bound algorithm for finding the best parameters that lead to a balance of clustering accuracy and decision tree explainability. Our experiments show that our method can improve the explainability of any clustering that fits in our framework.
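For contrast with the holistic approach the abstract proposes, the post-hoc baseline it mentions can be sketched in a few lines: cluster first, then fit a small decision tree on the cluster labels and read its rules as the explanation (illustrative only, not the XClusters algorithm).

```python
# Post-hoc "explain the clusters with a tree" baseline.
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, _ = load_iris(return_X_y=True)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, labels)
print(export_text(tree))                                   # human-readable rules for the clusters
print("fidelity:", (tree.predict(X) == labels).mean())     # how faithfully the tree mimics them
```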
3

Pendyala, Vishnu, and Hyungkyun Kim. "Assessing the Reliability of Machine Learning Models Applied to the Mental Health Domain Using Explainable AI." Electronics 13, no. 6 (March 8, 2024): 1025. http://dx.doi.org/10.3390/electronics13061025.

Abstract:
Machine learning is increasingly and ubiquitously being used in the medical domain. Evaluation metrics like accuracy, precision, and recall may indicate the performance of the models but not necessarily the reliability of their outcomes. This paper assesses the effectiveness of a number of machine learning algorithms applied to an important dataset in the medical domain, specifically mental health, by employing explainability methodologies. Using multiple machine learning algorithms and model explainability techniques, this work provides insights into the models' workings to help determine the reliability of the machine learning algorithm predictions. The results are not intuitive. It was found that the models were relying significantly on less relevant features and, at times, on unsound rankings of the features to make their predictions. This paper therefore argues that it is important for research in applied machine learning to provide insights into the explainability of models in addition to other performance metrics like accuracy. This is particularly important for applications in critical domains such as healthcare.
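As a rough illustration of the kind of check described above (synthetic data and a generic model, not the authors' mental-health dataset or pipeline), one can train a classifier and compare its SHAP-based feature ranking against domain expectations:

```python
# Rank features by mean |SHAP| value and judge whether the ranking is plausible.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=300, n_features=8, n_informative=3, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)            # (n_samples, n_features) for a binary GBM
importance = np.abs(shap_values).mean(axis=0)     # mean absolute contribution per feature
print(np.argsort(importance)[::-1])               # ranking to compare with domain knowledge
```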
4

Loreti, Daniela, and Giorgio Visani. "Parallel approaches for a decision tree-based explainability algorithm." Future Generation Computer Systems 158 (September 2024): 308–22. http://dx.doi.org/10.1016/j.future.2024.04.044.

5

Wang, Zhenzhong, Qingyuan Zeng, Wanyu Lin, Min Jiang, and Kay Chen Tan. "Generating Diagnostic and Actionable Explanations for Fair Graph Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (March 24, 2024): 21690–98. http://dx.doi.org/10.1609/aaai.v38i19.30168.

Abstract:
A plethora of fair graph neural networks (GNNs) have been proposed to promote algorithmic fairness in high-stakes real-life contexts. Meanwhile, explainability is generally proposed to help machine learning practitioners debug models by providing human-understandable explanations. However, little work on explainability has addressed generating explanations for fairness diagnosis in GNNs. From the explainability perspective, this paper explores two questions: what subgraph patterns cause the biased behavior of GNNs, and what actions could practitioners take to rectify the bias? By answering these two questions, this paper aims to produce compact, diagnostic, and actionable explanations that are responsible for discriminatory behavior. Specifically, we formulate the problem of generating diagnostic and actionable explanations as a multi-objective combinatorial optimization problem. To solve the problem, a dedicated multi-objective evolutionary algorithm is presented to ensure GNNs' explainability and fairness in one go. In particular, an influenced-nodes-based gradient approximation is developed to boost the computational efficiency of the evolutionary algorithm. We provide a theoretical analysis to illustrate the effectiveness of the proposed framework. Extensive experiments have been conducted to demonstrate the superiority of the proposed method in terms of classification performance, fairness, and interpretability.
6

Yiğit, Tuncay, Nilgün Şengöz, Özlem Özmen, Jude Hemanth, and Ali Hakan Işık. "Diagnosis of Paratuberculosis in Histopathological Images Based on Explainable Artificial Intelligence and Deep Learning." Traitement du Signal 39, no. 3 (June 30, 2022): 863–69. http://dx.doi.org/10.18280/ts.390311.

Abstract:
Artificial intelligence holds great promise in medical imaging, especially histopathological imaging. However, artificial intelligence algorithms cannot fully explain the thought processes during decision-making. This situation has brought the problem of explainability, i.e., the black box problem, of artificial intelligence applications to the agenda: an algorithm simply responds without stating the reasons for the given images. To overcome the problem and improve the explainability, explainable artificial intelligence (XAI) has come to the fore, and piqued the interest of many researchers. Against this backdrop, this study examines a new and original dataset using the deep learning algorithm, and visualizes the output with gradient-weighted class activation mapping (Grad-CAM), one of the XAI applications. Afterwards, a detailed questionnaire survey was conducted with the pathologists on these images. Both the decision-making processes and the explanations were verified, and the accuracy of the output was tested. The research results greatly help pathologists in the diagnosis of paratuberculosis.
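A minimal sketch of the Grad-CAM idea the study relies on, written with plain PyTorch hooks on a stock ResNet (a stand-in model and a random tensor in place of a histopathology image; not the authors' pipeline):

```python
# Grad-CAM in brief: weight the last convolutional feature maps by the spatially
# averaged gradients of the class score, then ReLU and normalize into a heatmap.
import torch
from torchvision import models

model = models.resnet18(weights=None).eval()
feats, grads = {}, {}

model.layer4[-1].register_forward_hook(lambda m, i, o: feats.update(value=o.detach()))
model.layer4[-1].register_full_backward_hook(lambda m, gi, go: grads.update(value=go[0].detach()))

x = torch.randn(1, 3, 224, 224)        # placeholder for a preprocessed image
scores = model(x)
cls = scores.argmax(dim=1).item()
model.zero_grad()
scores[0, cls].backward()

weights = grads["value"].mean(dim=(2, 3), keepdim=True)   # pooled gradients per channel
cam = torch.relu((weights * feats["value"]).sum(dim=1))   # weighted sum of feature maps
cam = cam / (cam.max() + 1e-8)                            # heatmap in [0, 1], ready to upsample
print(cam.shape)
```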
7

Powell, Alison B. "Explanations as governance? Investigating practices of explanation in algorithmic system design." European Journal of Communication 36, no. 4 (August 2021): 362–75. http://dx.doi.org/10.1177/02673231211028376.

Abstract:
The algorithms underpinning many everyday communication processes are now complex enough that rendering them explainable has become a key governance objective. This article examines the question of 'who should be required to explain what, to whom, in platform environments'. By working with algorithm designers and using design methods to extrapolate existing capacities to explain algorithmic functioning, the article discusses the power relationships underpinning explanation of algorithmic function. Reviewing how the key concepts of transparency and accountability connect with explainability, the paper argues that reliance on explainability as a governance mechanism can generate a dangerous paradox: it legitimates increased reliance on programmable infrastructure as expert stakeholders are reassured by their ability to perform or receive explanations, while displacing responsibility for understandings of social context and definitions of public interest.
8

Xie, Lijie, Zhaoming Hu, Xingjuan Cai, Wensheng Zhang, and Jinjun Chen. "Explainable recommendation based on knowledge graph and multi-objective optimization." Complex & Intelligent Systems 7, no. 3 (March 6, 2021): 1241–52. http://dx.doi.org/10.1007/s40747-021-00315-y.

Abstract:
A recommendation system is a technology that can mine users' preferences for items. Explainable recommendation produces recommendations for target users and, at the same time, gives reasons that reveal why the items are recommended. The explainability of recommendations can improve the transparency of recommendations and the probability that users choose the recommended items. The merits of explainability are obvious, but it is not enough to focus solely on explainability in the field of explainable recommendation. It is therefore essential to construct an explainable recommendation framework that improves the explainability of recommended items while maintaining accuracy and diversity. An explainable recommendation framework based on a knowledge graph and multi-objective optimization is proposed that can optimize the precision, diversity, and explainability of recommendations at the same time. The knowledge graph connects users and items through different relationships to obtain an explainable candidate list for the target user, and the path between the target user and the recommended item is used as the basis of the explanation. The explainable candidate list is optimized by a multi-objective optimization algorithm to obtain the final recommendation list. The experimental results show that the presented explainable recommendation framework provides high-quality recommendations with high accuracy, diversity, and explainability.
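The core mechanism the abstract describes can be illustrated with a toy knowledge graph in which the path connecting a user to a recommended item doubles as the explanation (the entities, relations, and library choice below are invented for illustration, not taken from the paper):

```python
# A user-item-attribute path as a recommendation explanation.
import networkx as nx

G = nx.Graph()
G.add_edge("user:alice", "item:camera_A", relation="purchased")
G.add_edge("item:camera_A", "brand:Foto", relation="made_by")
G.add_edge("brand:Foto", "item:camera_B", relation="made_by")

path = nx.shortest_path(G, "user:alice", "item:camera_B")
print(" -> ".join(path))
# Reading the path: Alice bought camera_A, which shares a brand with camera_B,
# so camera_B can be recommended "because you liked another Foto camera".
```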
9

Kabir, Sami, Mohammad Shahadat Hossain, and Karl Andersson. "An Advanced Explainable Belief Rule-Based Framework to Predict the Energy Consumption of Buildings." Energies 17, no. 8 (April 9, 2024): 1797. http://dx.doi.org/10.3390/en17081797.

Abstract:
The prediction of building energy consumption is beneficial to utility companies, users, and facility managers to reduce energy waste. However, due to various drawbacks of prediction algorithms, such as non-transparent output, ad hoc explanation by post hoc tools, low accuracy, and the inability to deal with data uncertainties, such prediction has limited applicability in this domain. As a result, domain knowledge-based explainability with high accuracy is critical for making energy predictions trustworthy. Motivated by this, we propose an advanced explainable Belief Rule-Based Expert System (eBRBES) with domain knowledge-based explanations for the accurate prediction of energy consumption. We optimize BRBES's parameters and structure to improve prediction accuracy while dealing with data uncertainties using its inference engine. To predict energy consumption, we take into account floor area, daylight, indoor occupancy, and building heating method. We also describe how a counterfactual output on energy consumption could have been achieved. Furthermore, we propose a novel Belief Rule-Based adaptive Balance Determination (BRBaBD) algorithm for determining the optimal balance between explainability and accuracy. To validate the proposed eBRBES framework, a case study based on Skellefteå, Sweden, is used. BRBaBD results show that our proposed eBRBES framework outperforms state-of-the-art machine learning algorithms in terms of optimal balance between explainability and accuracy by 85.08%.
10

Bulitko, Vadim, Shuwei Wang, Justin Stevens, and Levi H. S. Lelis. "Portability and Explainability of Synthesized Formula-based Heuristics." Proceedings of the International Symposium on Combinatorial Search 15, no. 1 (July 17, 2022): 29–37. http://dx.doi.org/10.1609/socs.v15i1.21749.

Abstract:
Heuristic search is a key component of automated planning and pathfinding. It is guided by a heuristic function which estimates remaining solution cost. Traditionally heuristic functions for pathfinding have been human-designed or pre-computed for a specific search graph. The former tend to be compact, human-readable but generic. The latter offer better guidance but require per-graph pre-computation and have a substantial memory cost. We aim to retain compactness and readability of human-designed heuristics and increase their performance. We adopt the recently published approach of representing heuristic functions as algebraic formulae and automatically synthesizing them for video-game maps. Whereas published work merely randomly sampled the space of formula-based heuristic functions, we implement and evaluate a parameterized synthesis algorithm that unifies and generalizes the stochastic sampling, simulated annealing and a basic genetic algorithm. We tune the parameters for better synthesis performance and then, using maps from multiple video games, show that heuristics synthesized for maps from one game still outperform the baseline search (A* with weighted Manhattan distance) on maps from a different game. We analyze a frequently synthesized formula and explain how, despite having a higher error than the Manhattan distance, it takes advantage of the structure in video-game pathfinding problems and speeds up A*.

Dissertations / Theses on the topic "Algorithm explainability":

1

Raizonville, Adrien. "Regulation and competition policy of the digital economy : essays in industrial organization." Electronic Thesis or Diss., Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAT028.

Abstract:
This thesis addresses two issues facing regulators in the digital economy: the informational challenge generated by the use of new artificial intelligence technologies and the problem of the market power of large digital platforms. The first chapter explores the implementation of a (costly and imperfect) audit system by a regulator seeking to limit both the risk of damage generated by artificial intelligence technologies and the cost of regulation. Firms may invest in explainability to better understand their technologies and thus reduce their cost of compliance. When audit efficacy is not affected by explainability, firms invest voluntarily in explainability, and technology-specific regulation induces greater explainability and compliance than technology-neutral regulation. If, instead, explainability facilitates the regulator's detection of misconduct, a firm may hide its misconduct behind algorithmic opacity, and regulatory opportunism further deters investment in explainability. To promote explainability and compliance, command-and-control regulation with minimum explainability standards may be needed. The second chapter studies the effects of a coopetition strategy between two two-sided platforms on users' subscription prices, in a growing market (i.e., one in which new users can join the platforms) and in a mature market. More specifically, the platforms set the subscription prices of one group of users (e.g., sellers) cooperatively and the prices of the other group (e.g., buyers) non-cooperatively. By cooperating on the sellers' subscription price, each platform internalizes the negative externality it exerts on the other platform when it reduces its price. This leads the platforms to raise the subscription price for sellers relative to the competitive situation. At the same time, as the economic value of sellers increases and buyers exert a positive cross-network effect on sellers, competition between platforms to attract buyers intensifies, leading to a lower subscription price for buyers. The increase in total surplus occurs only when new buyers can join the market. Finally, the third chapter examines interoperability between an incumbent platform and a new entrant as a regulatory tool to improve market contestability and limit the incumbent's market power. Interoperability allows network effects to be shared between the two platforms, reducing their importance in users' choice of which platform to join. The preference to interact with exclusive users of the other platform leads to multihoming when interoperability is not perfect. Interoperability reduces demand for the incumbent platform, which lowers its subscription price. In contrast, demand for the entrant platform increases for relatively low levels of interoperability, as do its price and profit, before decreasing for higher levels of interoperability. Users always benefit from the introduction of interoperability.
2

Li, Honghao. "Interpretable biological network reconstruction from observational data." Electronic Thesis or Diss., Université Paris Cité, 2021. http://www.theses.fr/2021UNIP5207.

Abstract:
This thesis focuses on constraint-based methods, one of the basic types of causal structure learning algorithms. We use the PC algorithm as a representative example, for which we propose a simple and general modification applicable to any PC-derived method. The modification ensures that all separating sets used during the skeleton reconstruction step to remove edges between conditionally independent variables remain consistent with respect to the final graph. It consists in iterating the structure learning algorithm while restricting the search for separating sets to those that are consistent with respect to the graph obtained at the end of the previous iteration. The restriction can be achieved with limited computational complexity with the help of a block-cut tree decomposition of the graph skeleton. Enforcing separating-set consistency is found to increase the recall of constraint-based methods at the cost of precision, while keeping similar or better overall performance. It also improves the interpretability and explainability of the obtained graphical model. We then introduce the recently developed constraint-based method MIIC, which adopts ideas from the maximum likelihood framework to improve the robustness and overall performance of the obtained graph. We discuss the characteristics and limitations of MIIC, and propose several modifications that emphasize the interpretability of the obtained graph and the scalability of the algorithm. In particular, we implement the iterative approach to enforce separating-set consistency, opt for a conservative orientation rule, and exploit the orientation probability feature of MIIC to extend the edge notation in the final graph to illustrate different causal implications. The MIIC algorithm is applied to a dataset of about 400,000 breast cancer records from the SEER database as a large-scale real-life benchmark.
3

BODINI, MATTEO. "DESIGN AND EXPLAINABILITY OF MACHINE LEARNING ALGORITHMS FOR THE CLASSIFICATION OF CARDIAC ABNORMALITIES FROM ELECTROCARDIOGRAM SIGNALS." Doctoral thesis, Università degli Studi di Milano, 2022. http://hdl.handle.net/2434/888002.

Abstract:
The research activity contained in the present thesis work is devoted to the development of novel Machine Learning (ML) and Deep Learning (DL) algorithms for the classification of Cardiac Abnormalities (CA) from Electrocardiogram (ECG) signals, along with the explanation of classification outputs with explainable approaches. Automated computer programs for ECG classification have been developed since the 1950s to improve the correct interpretation of the ECG, nowadays facilitating health care decision-making by reducing costs and human errors. The first ECG interpretation computer programs were essentially developed by 'translating into the machine' the domain knowledge provided by expert physicians. However, in recent years leading research groups proposed to employ standard ML algorithms (which involve feature extraction followed by classification), and more recently end-to-end DL algorithms, to build automated ECG classification programs for the detection of CA. Recently, several research works proposed DL algorithms that even exceeded the performance of board-certified cardiologists in detecting a wide range of CA from ECGs. As a matter of fact, DL algorithms seem to represent promising tools for automated ECG classification on the analyzed datasets. However, the latest research related to ML and DL carries two main drawbacks that were tackled throughout the doctoral experience. First, to let standard ML algorithms perform at their best, the proper preprocessing, feature engineering, and classification algorithm (along with its parameters and hyperparameters) must be selected. Even when end-to-end DL approaches are adopted and the feature extraction step is automatically learned from data, the optimal model architecture is crucial to get the best performance. To address this issue, we exploited the domain knowledge of electrocardiography to design an ensemble ML classification algorithm to classify within a wide range of 27 CA. Differently from other works in the context of ECG classification, which often borrowed ML and DL architectures from other domains, we designed each model in the ensemble according to the domain knowledge to specifically classify a subset of the considered CA that alter the same set of ECG physiological features known by physicians. Furthermore, in a subsequent work toward the same aim, we experimented with three different Automated ML frameworks to automatically find the optimal ML pipeline in the case of standard and end-to-end DL algorithms. Second, while several research articles reported remarkable results for the value of ML and DL in classifying ECGs, only a handful offer insights into the model's learned representation of the ECG for the respective task. Without explaining in an understandable way what these models are sensing on the ECG to perform their classifications, the developers of such algorithms run a strong risk of discouraging physicians from adopting these tools, since physicians need to understand how ML and DL work before entrusting them to facilitate their clinical practice. Methods to open the black boxes of ML and DL have been applied to the ECG in a few works, but they often provided only explanations restricted to a single ECG at a time and with limited, or even absent, framing into the knowledge domain of electrocardiography.
To tackle such issues, we developed techniques to unveil which portions of the ECG were the most relevant to the classification output of an ML algorithm, by computing average explanations over all the training samples and translating them for the physicians' understanding. In a preliminary work, we relied on the Local Interpretable Model-agnostic Explanations (LIME) explainability algorithm to highlight which ECG leads were the most relevant in the classification of ST-Elevation Myocardial Infarction with a Random Forest classifier. Then, in a subsequent work, we extended the approach and designed two model-specific explainability algorithms for Convolutional Neural Networks to explain which ECG waves, a concept understood by physicians, were the most relevant in the classification process of a wide set of 27 CA for a state-of-the-art CNN.
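A small sketch of the preliminary LIME step mentioned above, using synthetic tabular features and a generic Random Forest (the feature names and data are placeholders, not the thesis's ECG pipeline):

```python
# Explain a single prediction with LIME: which (hypothetical) ECG-derived features drove it?
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=12, n_informative=4, random_state=0)
feature_names = [f"ecg_feature_{i}" for i in range(X.shape[1])]   # made-up names
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                 class_names=["normal", "abnormal"], mode="classification")
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=5)
print(explanation.as_list())   # (feature condition, weight) pairs for this one prediction
```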
4

Radulovic, Nedeljko. "Post-hoc Explainable AI for Black Box Models on Tabular Data." Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAT028.

Abstract:
Current state-of-the-art Artificial Intelligence (AI) models have proven to be very successful in solving various tasks, such as classification, regression, Natural Language Processing (NLP), and image processing. The resources at our hands today allow us to train very complex AI models to solve different problems in almost any field: medicine, finance, justice, transportation, forecasting, etc. With the popularity and widespread use of AI models, the need to ensure trust in them has also grown. Complex as they come today, these AI models are impossible for humans to interpret and understand. In this thesis, we focus on a specific area of research, namely Explainable Artificial Intelligence (xAI), which aims to provide approaches to interpret complex AI models and explain their decisions. We present two approaches, STACI and BELLA, which focus on classification and regression tasks, respectively, for tabular data. Both methods are deterministic, model-agnostic, post-hoc approaches, which means that they can be applied to any black-box model after its creation. In this way, interpretability presents an added value without the need to compromise on the black-box model's performance. Our methods provide accurate, simple, and general interpretations of both the whole black-box model and its individual predictions. We confirmed their high performance through extensive experiments and a user study.
5

Jeyasothy, Adulam. "Génération d'explications post-hoc personnalisées." Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS027.

Abstract:
This thesis is in the field of eXplainable AI (XAI). We focus on post-hoc interpretability methods that aim to explain to a user the prediction made by a trained decision model for a specific data point of interest. To increase the interpretability of explanations, this thesis studies the integration of user knowledge into these methods, and thus aims to improve the understandability of the explanation by generating personalized explanations tailored to each user. To this end, we propose a general formalism that explicitly integrates knowledge via a new criterion in the interpretability objectives. This formalism is then instantiated for different types of knowledge and different types of explanations, particularly counterfactual examples, leading to the proposal of several algorithms (KICE, Knowledge Integration in Counterfactual Explanation; rKICE, its variant for knowledge expressed by rules; and KISM, Knowledge Integration in Surrogate Models). The issue of aggregating classical quality constraints and knowledge compatibility constraints is also studied, and we propose to use Gödel's integral as the aggregation operator. Finally, we discuss the difficulty of generating a single explanation suitable for all types of users and the notion of diversity in explanations.
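The counterfactual explanations the thesis builds on can be pictured with a deliberately naive search that perturbs one feature at a time until the model's decision flips (a toy baseline for intuition, not the KICE, rKICE, or KISM algorithms proposed in the thesis):

```python
# Find the smallest single-feature change that flips the model's prediction.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=4, n_informative=3, random_state=0)
model = LogisticRegression().fit(X, y)

x0 = X[0]
original = model.predict(x0.reshape(1, -1))[0]

steps = np.linspace(-3, 3, 61)
steps = steps[np.argsort(np.abs(steps))]          # try small perturbations before large ones
best = None
for j in range(X.shape[1]):                       # one feature at a time
    for step in steps:
        candidate = x0.copy()
        candidate[j] += step
        if model.predict(candidate.reshape(1, -1))[0] != original:
            if best is None or abs(step) < best[0]:
                best = (abs(step), j)
            break                                  # smallest flip for this feature found

if best is not None:
    print(f"prediction flips if feature {best[1]} changes by {best[0]:.2f}")
```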

Book chapters on the topic "Algorithm explainability":

1

Rady, Amgad, and Franck van Breugel. "Explainability of Probabilistic Bisimilarity Distances for Labelled Markov Chains." In Lecture Notes in Computer Science, 285–307. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-30829-1_14.

Abstract:
Probabilistic bisimilarity distances measure the similarity of behaviour of states of a labelled Markov chain. The smaller the distance between two states, the more alike they behave. Their distance is zero if and only if they are probabilistic bisimilar. Recently, algorithms have been developed that can compute probabilistic bisimilarity distances for labelled Markov chains with thousands of states within seconds. However, say we compute that the distance of two states is 0.125. How does one explain that 0.125 captures the similarity of their behaviour? In this paper, we address this question by returning to the definition of probabilistic bisimilarity distances proposed by Desharnais, Gupta, Jagadeesan, and Panangaden more than two decades ago. We use a slight variation of their logic to construct for each pair of states a sequence of formulas that explains the probabilistic bisimilarity distance of the states. Furthermore, we present an algorithm that computes those formulas and we show that each formula can be computed in polynomial time. We also prove that our logic is minimal. That is, if we leave out any operator from the logic, then the resulting logic no longer provides a logical characterization of the probabilistic bisimilarity distances.
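For readers unfamiliar with the object being explained, the standard fixed-point characterization of these distances (going back to the Kantorovich-based definition of Desharnais et al.; the chapter's exact presentation may differ) is, for states s and t with labelling function ℓ and transition distributions τ:

```latex
d(s,t) \;=\;
\begin{cases}
  1 & \text{if } \ell(s) \neq \ell(t),\\[4pt]
  \displaystyle \min_{\omega \in \Omega(\tau(s),\, \tau(t))} \sum_{u,v} \omega(u,v)\, d(u,v) & \text{otherwise,}
\end{cases}
```

where d is taken to be the least such function and Ω(μ, ν) denotes the set of couplings of μ and ν; the inner minimization is the Kantorovich lifting of d.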
2

Wang, Huaduo, and Gopal Gupta. "FOLD-SE: An Efficient Rule-Based Machine Learning Algorithm with Scalable Explainability." In Practical Aspects of Declarative Languages, 37–53. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-52038-9_3.

3

Baniecki, Hubert, Wojciech Kretowicz, and Przemyslaw Biecek. "Fooling Partial Dependence via Data Poisoning." In Machine Learning and Knowledge Discovery in Databases, 121–36. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-26409-2_8.

Abstract:
Many methods have been developed to understand complex predictive models and high expectations are placed on post-hoc model explainability. It turns out that such explanations are not robust nor trustworthy, and they can be fooled. This paper presents techniques for attacking Partial Dependence (plots, profiles, PDP), which are among the most popular methods of explaining any predictive model trained on tabular data. We showcase that PD can be manipulated in an adversarial manner, which is alarming, especially in financial or medical applications where auditability became a must-have trait supporting black-box machine learning. The fooling is performed via poisoning the data to bend and shift explanations in the desired direction using genetic and gradient algorithms. We believe this to be the first work using a genetic algorithm for manipulating explanations, which is transferable as it generalizes both ways: in a model-agnostic and an explanation-agnostic manner.
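For context, the explanation under attack can be computed in a few lines; the sketch below shows a standard partial dependence profile on synthetic data (an illustration of the target only, not the poisoning attack from the chapter):

```python
# Partial dependence: the model's average prediction as one feature is swept over a grid.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import partial_dependence

X, y = make_regression(n_samples=400, n_features=5, noise=0.1, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

pd_result = partial_dependence(model, X, features=[0], grid_resolution=20)
print(pd_result["average"][0])   # average prediction at each grid point for feature 0
# Because this curve is an average over the data, poisoning that data can bend or shift it
# without necessarily hurting the model's headline accuracy, which is the chapter's point.
```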
4

Duke, Toju. "Explainability." In Building Responsible AI Algorithms, 105–16. Berkeley, CA: Apress, 2023. http://dx.doi.org/10.1007/978-1-4842-9306-5_7.

5

Neubig, Stefan, Daria Cappey, Nicolas Gehring, Linus Göhl, Andreas Hein, and Helmut Krcmar. "Visualizing Explainable Touristic Recommendations: An Interactive Approach." In Information and Communication Technologies in Tourism 2024, 353–64. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-58839-6_37.

Abstract:
Personalized recommendations have played a vital role in tourism, serving various purposes, ranging from an improved visitor experience to addressing sustainability issues. However, research shows that recommendations are more likely to be accepted by visitors if they are comprehensible and appeal to the visitors' common sense. This highlights the importance of explainable recommendations that, according to a previously specified goal, explain an algorithm's inference process, generate trust among visitors, or educate visitors by making them aware of sustainability practices. Based on this motivation, our paper proposes a visual, interactive approach to exploring recommendation explanations tailored to tourism. Agnostic to the underlying recommendation algorithm and the defined explainability goal, our approach leverages knowledge graphs to generate model-specific and post-hoc explanations. We demonstrate and evaluate our approach based on a prototypical dashboard implementing our concept. Following the results of our evaluation, our dashboard helps explain recommendations of arbitrary models, even in complex scenarios.
6

Stevens, Alexander, Johannes De Smedt, and Jari Peeperkorn. "Quantifying Explainability in Outcome-Oriented Predictive Process Monitoring." In Lecture Notes in Business Information Processing, 194–206. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-98581-3_15.

Abstract:
The growing interest in applying machine and deep learning algorithms in an Outcome-Oriented Predictive Process Monitoring (OOPPM) context has recently fuelled a shift to use models from the explainable artificial intelligence (XAI) paradigm, a field of study focused on creating explainability techniques on top of AI models in order to legitimize the predictions made. Nonetheless, most classification models are evaluated primarily on a performance level, where XAI requires striking a balance between either simple models (e.g. linear regression) or models using complex inference structures (e.g. neural networks) with post-processing to calculate feature importance. In this paper, a comprehensive overview of predictive models with varying intrinsic complexity are measured based on explainability with model-agnostic quantitative evaluation metrics. To this end, explainability is designed as a symbiosis between interpretability and faithfulness and thereby allowing to compare inherently created explanations (e.g. decision tree rules) with post-hoc explainability techniques (e.g. Shapley values) on top of AI models. Moreover, two improved versions of the logistic regression model capable of capturing non-linear interactions and both inherently generating their own explanations are proposed in the OOPPM context. These models are benchmarked with two common state-of-the-art models with post-hoc explanation techniques in the explainability-performance space.
7

Zhou, Jianlong, Fang Chen, and Andreas Holzinger. "Towards Explainability for AI Fairness." In xxAI - Beyond Explainable AI, 375–86. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04083-2_18.

Abstract:
AI explainability is becoming indispensable to allow users to gain insights into an AI system's decision-making process. Meanwhile, fairness is another rising concern, as algorithmic predictions may be misaligned with the designer's intent or with social expectations, for example through discrimination against specific groups. In this work, we provide a state-of-the-art overview of the relations between explanation and AI fairness, and especially the roles of explanation in humans' fairness judgements. The investigations demonstrate that fair decision making requires extensive contextual understanding, and AI explanations help identify potential variables that are driving unfair outcomes. It is found that different types of AI explanations affect humans' fairness judgements differently. Some properties of features and social science theories need to be considered in making sense of fairness with explanations. Different challenges are identified for making responsible AI support trustworthy decision making from the perspective of explainability and fairness.
8

Darrab, Sadeq, Harshitha Allipilli, Sana Ghani, Harikrishnan Changaramkulath, Sricharan Koneru, David Broneske, and Gunter Saake. "Anomaly Detection Algorithms: Comparative Analysis and Explainability Perspectives." In Communications in Computer and Information Science, 90–104. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-8696-5_7.

9

Wanner, Jonas, Lukas-Valentin Herm, Kai Heinrich, and Christian Janiesch. "Stop Ordering Machine Learning Algorithms by Their Explainability! An Empirical Investigation of the Tradeoff Between Performance and Explainability." In Responsible AI and Analytics for an Ethical and Inclusive Digitized Society, 245–58. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-85447-8_22.

10

Sajid, Sad Wadi, K. M. Rashid Anjum, Md Al-Shaharia, and Mahmudul Hasan. "Investigating Machine Learning Algorithms with Model Explainability for Network Intrusion Detection." In Cyber Security and Business Intelligence, 121–36. London: Routledge, 2023. http://dx.doi.org/10.4324/9781003285854-8.


Conference papers on the topic "Algorithm explainability":

1

Zhou, Tongyu, Haoyu Sheng, and Iris Howley. "Assessing Post-hoc Explainability of the BKT Algorithm." In AIES '20: AAAI/ACM Conference on AI, Ethics, and Society. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3375627.3375856.

2

Mollel, Rachel Stephen, Lina Stankovic, and Vladimir Stankovic. "Using explainability tools to inform NILM algorithm performance." In BuildSys '22: The 9th ACM International Conference on Systems for Energy-Efficient Buildings, Cities, and Transportation. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3563357.3566148.

3

Góra, Grzegorz, Andrzej Skowron, and Arkadiusz Wojna. "Explainability in RIONA Algorithm Combining Rule Induction and Instance-Based Learning." In 18th Conference on Computer Science and Intelligence Systems. IEEE, 2023. http://dx.doi.org/10.15439/2023f4139.

4

Cardoso, Fabio, Thiago Medeiros, Marley Vellasco, and Karla Figueiredo. "Optimizing explainability of Breast Cancer Recurrence using FuzzyGenetic." In Encontro Nacional de Inteligência Artificial e Computacional. Sociedade Brasileira de Computação - SBC, 2023. http://dx.doi.org/10.5753/eniac.2023.234253.

Abstract:
Breast cancer is the most commonly diagnosed cancer in the world and was the cause of death of 685,000 people worldwide in 2020. Due to the aggressiveness of the disease, early-stage identification, treatment, and remission detection are important to ensure longevity for those who may have cancer. In this paper, we propose a fuzzy-genetic approach for breast cancer recurrence classification. To this end, we use a Genetic Algorithm to automatically design the fuzzy inference system with the objective of balancing accuracy and explainability. The proposed system achieved an accuracy of 91.30%, finding eleven rules with a maximum of three antecedents per rule, which provides a competitive result compared to other Machine Learning approaches.
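One way to picture the balance the authors optimize is a fitness function that rewards accuracy and penalizes rule-base size; the weights and the size proxy below are invented for illustration, not taken from the paper's genetic algorithm:

```python
# A toy fitness trading off accuracy against the size of a fuzzy rule base.
def fitness(accuracy: float, n_rules: int, max_antecedents: int,
            w_acc: float = 0.8, w_size: float = 0.2) -> float:
    """Higher is better: reward accurate rule bases, penalize large ones."""
    size_penalty = (n_rules * max_antecedents) / 100.0   # crude explainability proxy
    return w_acc * accuracy - w_size * size_penalty

# Scoring the solution reported in the abstract (11 rules, <= 3 antecedents, 91.30% accuracy):
print(round(fitness(accuracy=0.913, n_rules=11, max_antecedents=3), 3))
```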
5

Krishnamurthy, Bhargavi, Sajjan G. Shiva, and Saikat Das. "Handling Node Discovery Problem in Fog Computing using Categorical51 Algorithm With Explainability." In 2023 IEEE World AI IoT Congress (AIIoT). IEEE, 2023. http://dx.doi.org/10.1109/aiiot58121.2023.10174564.

6

Bounds, Charles Patrick, Mesbah Uddin, and Shishir Desai. "Tuning of Turbulence Model Closure Coefficients Using an Explainability Based Machine Learning Algorithm." In WCX SAE World Congress Experience. 400 Commonwealth Drive, Warrendale, PA, United States: SAE International, 2023. http://dx.doi.org/10.4271/2023-01-0562.

Abstract:
This article discusses an application of Machine Learning (ML) tools to improve the prediction accuracy of Computational Fluid Dynamics (CFD) for external aerodynamic workflows. The Reynolds-Averaged Navier-Stokes (RANS) approach to CFD has proved to be one of the most popular simulation methodologies due to its quick turnaround times and acceptable level of accuracy for most applications. However, in many cases the accuracy of RANS models can prove suboptimal and can be significantly improved by tuning the model closure coefficients. During the original turbulence model creation, these closure coefficients were chosen by somewhat ad hoc methods using simple canonical flows that do not transfer well to flows involving more complex objects, like the automotive bodies used in this work. This work presents a novel method of applying ML tools to CFD to optimize the turbulence closure coefficients by using model explainability tools such as Shapley values, Shapley Additive exPlanations (SHAP), and ML surrogate models. The 25-degree slant Ahmed body model was used to obtain sampling data to tune closure coefficients in the Menter Shear Stress Transport (SST) turbulence model implemented in the open-source CFD code OpenFOAM v2012. Shapley additive values were then calculated using the samples, which showed that β* has the strongest influence over the model predictions of lift and drag. ML surrogate models were then applied alongside SHAP, providing better overall sampling efficiency with Shapley additive values and more complete explanations of the model. The SHAP explanations showed that β* had the most influence on the force predictions followed by σ_ω2, while σ_ω1, σ_k1, and σ_k2 were shown to have little impact. The surrogate model was then used along with its explanations to provide optimized coefficients that reduced the error in the drag and lift predictions to -3.67% and -2.49%, respectively, from -9.67% and -75.8%.
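A hedged sketch of the explainability-driven workflow outlined in the abstract, on synthetic data (the coefficient ranges and the toy error function are made up; only the idea of explaining a surrogate model with SHAP mirrors the paper):

```python
# Fit a surrogate from closure-coefficient samples to a force-prediction error,
# then rank the coefficients by mean |SHAP| value on that surrogate.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
coeff_names = ["beta_star", "sigma_w1", "sigma_w2", "sigma_k1", "sigma_k2"]
X = rng.uniform(0.5, 1.5, size=(200, 5))                  # scaled coefficient samples (synthetic)
y = 3.0 * (X[:, 0] - 1.0) ** 2 + 0.3 * X[:, 2] + 0.05 * rng.normal(size=200)  # toy drag error

surrogate = GradientBoostingRegressor(random_state=0).fit(X, y)
shap_values = shap.TreeExplainer(surrogate).shap_values(X)
ranking = np.argsort(np.abs(shap_values).mean(axis=0))[::-1]
print([coeff_names[i] for i in ranking])   # beta_star should dominate, echoing the abstract
```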
7

Oveis, Amir Hosein, Elisa Giusti, Giulio Meucci, Selenia Ghio, and Marco Martorella. "Explainability In Hyperspectral Image Classification: A Study of Xai Through the Shap Algorithm." In 2023 13th Workshop on Hyperspectral Imaging and Signal Processing: Evolution in Remote Sensing (WHISPERS). IEEE, 2023. http://dx.doi.org/10.1109/whispers61460.2023.10430776.

8

Gopalakrishnan, Karthik, and V. John Mathews. "A Fast Unsupervised Online Learning Algorithm to Detect Structural Damage in Time-Varying Environments." In 2021 48th Annual Review of Progress in Quantitative Nondestructive Evaluation. American Society of Mechanical Engineers, 2021. http://dx.doi.org/10.1115/qnde2021-75247.

Abstract:
Machine learning based health monitoring techniques for damage detection have been widely studied. Most such approaches suffer from two main problems: time-varying environmental and operating conditions, and the difficulty of acquiring training data from damaged structures. Recently, our group presented an unsupervised learning algorithm using support vector data description (SVDD) and an autoencoder to detect damage in time-varying environments without training on data from damaged structures. Though the preliminary experiments produced promising results, the algorithm was computationally expensive. This paper presents an iterative algorithm that learns the state of a structure in time-varying environments online in a computationally efficient manner. This algorithm combines the fast, incremental SVDD (FISVDD) algorithm with signal features based on wavelet packet decomposition (WPD) to create a method that is efficient and provides more accurate detection of smaller damage than the autoencoder-based method. The use of FISVDD has created the possibility of online learning and adaptive damage detection in time-varying environmental and operating conditions (EOC). The WPD-based features also have the potential to provide explainability for the learning algorithm.
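A rough sketch of the ingredients named in the abstract: wavelet packet energy features feeding a one-class boundary model trained only on healthy-state signals. OneClassSVM stands in for SVDD here, and nothing below reproduces the paper's FISVDD algorithm or its online updates.

```python
# Wavelet-packet energy features + a one-class model as an unsupervised damage detector.
import numpy as np
import pywt
from sklearn.svm import OneClassSVM

def wpd_energy_features(signal, wavelet="db4", level=3):
    """Energy of each wavelet packet node at the given decomposition level."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    return np.array([np.sum(node.data ** 2) for node in wp.get_level(level, order="natural")])

rng = np.random.default_rng(0)
healthy = np.array([wpd_energy_features(rng.normal(size=1024)) for _ in range(50)])
detector = OneClassSVM(nu=0.05, gamma="scale").fit(healthy)

# A signal with an added tonal component, as a crude stand-in for a structural change.
test = wpd_energy_features(rng.normal(size=1024) + np.sin(np.linspace(0, 60, 1024)))
print(detector.predict(test.reshape(1, -1)))   # -1 flags it as outside the learned healthy state
```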
9

Quinn, Seán, and Alessandra Mileo. "Towards Architecture-Agnostic Neural Transfer: a Knowledge-Enhanced Approach." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/915.

Abstract:
The ability to enhance deep representations with prior knowledge is receiving a lot of attention from the AI community as a key enabler to improve the way modern Artificial Neural Networks (ANN) learn. In this paper we introduce our approach to this task, which comprises a knowledge extraction algorithm, a knowledge injection algorithm, and a common intermediate knowledge representation as an alternative to traditional neural transfer. As a result of this research, we envisage a knowledge-enhanced ANN, which will be able to learn, characterise and reuse knowledge extracted from the learning process, thus enabling more robust architecture-agnostic neural transfer, greater explainability and further integration of neural and symbolic approaches to learning.
10

Eiben, Eduard, Sebastian Ordyniak, Giacomo Paesani, and Stefan Szeider. "Learning Small Decision Trees with Large Domain." In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/355.

Abstract:
One favors decision trees (DTs) of the smallest size or depth to facilitate explainability and interpretability. However, learning such an optimal DT from data is well known to be NP-hard. To overcome this complexity barrier, Ordyniak and Szeider (AAAI 21) initiated the study of optimal DT learning under the parameterized complexity perspective. They showed that solution size (i.e., number of nodes or depth of the DT) is insufficient to obtain fixed-parameter tractability (FPT). Therefore, they proposed an FPT algorithm that utilizes two auxiliary parameters: the maximum difference (a structural property of the data set) and the maximum domain size. They left open the question of whether bounding the maximum domain size is necessary. The main result of this paper answers this question. We present FPT algorithms for learning a smallest or lowest-depth DT from data, with the only parameters being solution size and maximum difference. Thus, our algorithm is significantly more potent than the one by Szeider and Ordyniak, as it can handle problem inputs with features that range over unbounded domains. We also close several gaps concerning the quality of approximation obtained by only considering DTs based on minimum support sets.
