Scientific literature on the topic "Algorithm explainability"
Create a correct reference in APA, MLA, Chicago, Harvard, and several other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic "Algorithm explainability".
Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever this information is included in the metadata.
Journal articles on the topic "Algorithm explainability"
Nuobu, Gengpan. "Transformer model: Explainability and prospectiveness." Applied and Computational Engineering 20, no. 1 (October 23, 2023): 88–99. http://dx.doi.org/10.54254/2755-2721/20/20231079.
Hwang, Hyunseung, and Steven Euijong Whang. "XClusters: Explainability-First Clustering." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 7 (June 26, 2023): 7962–70. http://dx.doi.org/10.1609/aaai.v37i7.25963.
Pendyala, Vishnu, and Hyungkyun Kim. "Assessing the Reliability of Machine Learning Models Applied to the Mental Health Domain Using Explainable AI." Electronics 13, no. 6 (March 8, 2024): 1025. http://dx.doi.org/10.3390/electronics13061025.
Loreti, Daniela, and Giorgio Visani. "Parallel approaches for a decision tree-based explainability algorithm." Future Generation Computer Systems 158 (September 2024): 308–22. http://dx.doi.org/10.1016/j.future.2024.04.044.
Wang, Zhenzhong, Qingyuan Zeng, Wanyu Lin, Min Jiang, and Kay Chen Tan. "Generating Diagnostic and Actionable Explanations for Fair Graph Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (March 24, 2024): 21690–98. http://dx.doi.org/10.1609/aaai.v38i19.30168.
Yiğit, Tuncay, Nilgün Şengöz, Özlem Özmen, Jude Hemanth, and Ali Hakan Işık. "Diagnosis of Paratuberculosis in Histopathological Images Based on Explainable Artificial Intelligence and Deep Learning." Traitement du Signal 39, no. 3 (June 30, 2022): 863–69. http://dx.doi.org/10.18280/ts.390311.
Powell, Alison B. "Explanations as governance? Investigating practices of explanation in algorithmic system design." European Journal of Communication 36, no. 4 (August 2021): 362–75. http://dx.doi.org/10.1177/02673231211028376.
Xie, Lijie, Zhaoming Hu, Xingjuan Cai, Wensheng Zhang, and Jinjun Chen. "Explainable recommendation based on knowledge graph and multi-objective optimization." Complex & Intelligent Systems 7, no. 3 (March 6, 2021): 1241–52. http://dx.doi.org/10.1007/s40747-021-00315-y.
Kabir, Sami, Mohammad Shahadat Hossain, and Karl Andersson. "An Advanced Explainable Belief Rule-Based Framework to Predict the Energy Consumption of Buildings." Energies 17, no. 8 (April 9, 2024): 1797. http://dx.doi.org/10.3390/en17081797.
Bulitko, Vadim, Shuwei Wang, Justin Stevens, and Levi H. S. Lelis. "Portability and Explainability of Synthesized Formula-based Heuristics." Proceedings of the International Symposium on Combinatorial Search 15, no. 1 (July 17, 2022): 29–37. http://dx.doi.org/10.1609/socs.v15i1.21749.
Texte intégralThèses sur le sujet "Algorithm explainability"
Raizonville, Adrien. "Regulation and competition policy of the digital economy: essays in industrial organization." Electronic Thesis or Diss., Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAT028.
This thesis addresses two issues facing regulators in the digital economy: the informational challenge generated by the use of new artificial intelligence technologies and the problem of the market power of large digital platforms. The first chapter of this thesis explores the implementation of a (costly and imperfect) audit system by a regulator seeking to limit the risk of damage generated by artificial intelligence technologies as well as its cost of regulation. Firms may invest in explainability to better understand their technologies and, thus, reduce their cost of compliance. When audit efficacy is not affected by explainability, firms invest voluntarily in explainability. Technology-specific regulation induces greater explainability and compliance than technology-neutral regulation. If, instead, explainability facilitates the regulator's detection of misconduct, a firm may hide its misconduct behind algorithmic opacity. Regulatory opportunism further deters investment in explainability. To promote explainability and compliance, command-and-control regulation with minimum explainability standards may be needed. The second chapter studies the effects of implementing a coopetition strategy between two two-sided platforms on the subscription prices of their users, in a growing market (i.e., one in which new users can join the platform) and in a mature market. More specifically, the platforms cooperatively set the subscription prices of one group of users (e.g., sellers) and set the prices of the other group (e.g., buyers) non-cooperatively. By cooperating on the subscription price of sellers, each platform internalizes the negative externality it exerts on the other platform when it reduces its price. This leads the platforms to increase the subscription price for sellers relative to the competitive situation.
At the same time, as the economic value of sellers increases and as buyers exert a positive cross-network effect on sellers, competition between platforms to attract buyers intensifies, leading to a lower subscription price for buyers. The increase in total surplus only occurs when new buyers can join the market. Finally, the third chapter examines interoperability between an incumbent platform and a new entrant as a regulatory tool to improve market contestability and limit the market power of the incumbent platform. Interoperability allows network effects to be shared between the two platforms, thereby reducing the importance of network effects in users' choice of subscription to a platform. The preference to interact with exclusive users of the other platform leads to multihoming when interoperability is not perfect. Interoperability leads to a reduction in demand for the incumbent platform, which reduces its subscription price. In contrast, for relatively low levels of interoperability, demand for the entrant platform increases, as does its price and profit, before decreasing for higher levels of interoperability. Users always benefit from the introduction of interoperability.
Li, Honghao. "Interpretable biological network reconstruction from observational data." Electronic Thesis or Diss., Université Paris Cité, 2021. http://www.theses.fr/2021UNIP5207.
This thesis is focused on constraint-based methods, one of the basic families of causal structure learning algorithms. We use the PC algorithm as a representative, for which we propose a simple and general modification that is applicable to any PC-derived method. The modification ensures that all separating sets used during the skeleton reconstruction step to remove edges between conditionally independent variables remain consistent with respect to the final graph. It consists of iterating the structure learning algorithm while restricting the search for separating sets to those that are consistent with respect to the graph obtained at the end of the previous iteration. The restriction can be achieved with limited computational complexity with the help of the block-cut tree decomposition of the graph skeleton. Enforcing separating-set consistency is found to increase the recall of constraint-based methods at the cost of precision, while keeping similar or better overall performance. It also improves the interpretability and explainability of the obtained graphical model. We then introduce the recently developed constraint-based method MIIC, which adopts ideas from the maximum-likelihood framework to improve the robustness and overall performance of the obtained graph. We discuss the characteristics and limitations of MIIC and propose several modifications that emphasize the interpretability of the obtained graph and the scalability of the algorithm. In particular, we implement the iterative approach to enforce separating-set consistency, opt for a conservative orientation rule, and exploit the orientation probability feature of MIIC to extend the edge notation in the final graph to illustrate different causal implications. The MIIC algorithm is applied to a dataset of about 400,000 breast cancer records from the SEER database as a large-scale real-life benchmark.
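The iterative separating-set-consistency idea described in this abstract can be illustrated with a minimal PC-style sketch. Everything below is a toy for exposition: the `indep` conditional-independence oracle is supplied by the caller, and candidate separating sets are restricted naively to neighbourhoods of the previous graph, whereas the thesis achieves the consistency restriction efficiently via block-cut tree decomposition.

```python
from itertools import combinations

def pc_skeleton(nodes, indep, max_cond=2, prev_adj=None):
    """One PC-style skeleton pass: remove the edge x-y when some
    conditioning set S makes x and y independent, recording S as their
    separating set. When prev_adj is given, candidate sets are drawn
    only from neighbourhoods of the previously obtained graph."""
    adj = {v: set(nodes) - {v} for v in nodes}
    sepsets = {}
    for size in range(max_cond + 1):
        for x in nodes:
            for y in sorted(adj[x]):
                # candidate conditioning variables for the pair (x, y)
                pool = (prev_adj[x] if prev_adj is not None else adj[x]) - {y}
                for S in combinations(sorted(pool), size):
                    if indep(x, y, set(S)):
                        adj[x].discard(y)
                        adj[y].discard(x)
                        sepsets[frozenset((x, y))] = set(S)
                        break
    return adj, sepsets

def consistent_skeleton(nodes, indep, iterations=3):
    """Iterate the skeleton search, restricting separating-set candidates
    to sets consistent with the graph from the previous iteration."""
    prev = None
    for _ in range(iterations):
        adj, sepsets = pc_skeleton(nodes, indep, prev_adj=prev)
        prev = adj
    return prev, sepsets
```

For a chain A–B–C where the only independence is A ⟂ C | B, every pass keeps the edges A–B and B–C and records {B} as the separating set of (A, C); the iteration leaves separating sets that remain consistent with the final skeleton.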
Bodini, Matteo. "Design and Explainability of Machine Learning Algorithms for the Classification of Cardiac Abnormalities from Electrocardiogram Signals." Doctoral thesis, Università degli Studi di Milano, 2022. http://hdl.handle.net/2434/888002.
Radulovic, Nedeljko. "Post-hoc Explainable AI for Black Box Models on Tabular Data." Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAT028.
Current state-of-the-art Artificial Intelligence (AI) models have proven very successful at solving various tasks, such as classification, regression, Natural Language Processing (NLP), and image processing. The resources at our disposal today allow us to train very complex AI models to solve problems in almost any field: medicine, finance, justice, transportation, forecasting, etc. With the popularity and widespread use of AI models, the need to ensure trust in them has also grown. As complex as they are today, these AI models are impossible for humans to interpret and understand. In this thesis, we focus on a specific research area, Explainable Artificial Intelligence (xAI), which aims to provide approaches for interpreting complex AI models and explaining their decisions. We present two approaches, STACI and BELLA, which address classification and regression tasks, respectively, for tabular data. Both methods are deterministic, model-agnostic, post-hoc approaches, which means that they can be applied to any black-box model after its creation. In this way, interpretability presents an added value without the need to compromise on the black-box model's performance. Our methods provide accurate, simple, and general interpretations of both the whole black-box model and its individual predictions. We confirmed their high performance through extensive experiments and a user study.
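The generic post-hoc surrogate idea underlying such approaches (this is not the actual STACI or BELLA algorithm, whose details are in the thesis, only an illustrative toy) is to fit a trivially interpretable model that mimics the black box and to measure its fidelity, i.e. how often it agrees with the black box's own predictions:

```python
def fit_stump_surrogate(X, blackbox):
    """Fit a one-split decision stump that mimics a binary black-box
    classifier, chosen to maximize fidelity: agreement with the black
    box's predictions on the given data."""
    y = [blackbox(x) for x in X]
    best = None  # (fidelity, feature index, threshold, label if below)
    for f in range(len(X[0])):
        for t in sorted({x[f] for x in X}):
            for below in (0, 1):
                pred = [below if x[f] <= t else 1 - below for x in X]
                fidelity = sum(p == yi for p, yi in zip(pred, y)) / len(y)
                if best is None or fidelity > best[0]:
                    best = (fidelity, f, t, below)
    return best
```

The returned rule ("if feature f <= t, predict `below`, else the other class") serves as a global explanation, and its fidelity score quantifies how faithfully it reproduces the black box without touching the black box's own performance.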
Jeyasothy, Adulam. "Génération d'explications post-hoc personnalisées." Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS027.
This thesis is in the field of eXplainable AI (XAI). We focus on post-hoc interpretability methods that aim to explain to a user the prediction made by a trained decision model for a specific data instance. To increase the interpretability of explanations, this thesis studies the integration of user knowledge into these methods, aiming to improve the understandability of the explanation by generating personalized explanations tailored to each user. To this end, we propose a general formalism that explicitly integrates knowledge via a new criterion in the interpretability objectives. This formalism is then instantiated for different types of knowledge and different types of explanations, particularly counterfactual examples, leading to the proposal of several algorithms (KICE, Knowledge Integration in Counterfactual Explanation; rKICE, its variant for knowledge expressed by rules; and KISM, Knowledge Integration in Surrogate Models). The issue of aggregating classical quality constraints with knowledge compatibility constraints is also studied, and we propose using Gödel's integral as the aggregation operator. Finally, we discuss the difficulty of generating a single explanation suitable for all types of users and the notion of diversity in explanations.
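The core idea of adding a knowledge-compatibility criterion to a counterfactual objective can be sketched as follows. This is a schematic gloss, not the thesis's actual algorithms: the additive cost, the candidate grid, and the immutability penalty below are all invented for illustration.

```python
def counterfactual(x, predict, target, candidates, knowledge_penalty, lam=1.0):
    """Among candidates that the model classifies as `target`, return the
    one minimizing distance to x plus a user-knowledge incompatibility
    penalty (the extra criterion in the interpretability objective)."""
    def cost(c):
        distance = sum(abs(a - b) for a, b in zip(x, c))
        return distance + lam * knowledge_penalty(c)
    feasible = [c for c in candidates if predict(c) == target]
    return min(feasible, key=cost) if feasible else None
```

With a user rule such as "feature 1 is immutable" encoded as a penalty, the search is steered toward counterfactuals that flip the prediction while respecting what the user considers actionable.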
Book chapters on the topic "Algorithm explainability"
Rady, Amgad, and Franck van Breugel. "Explainability of Probabilistic Bisimilarity Distances for Labelled Markov Chains." In Lecture Notes in Computer Science, 285–307. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-30829-1_14.
Wang, Huaduo, and Gopal Gupta. "FOLD-SE: An Efficient Rule-Based Machine Learning Algorithm with Scalable Explainability." In Practical Aspects of Declarative Languages, 37–53. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-52038-9_3.
Baniecki, Hubert, Wojciech Kretowicz, and Przemyslaw Biecek. "Fooling Partial Dependence via Data Poisoning." In Machine Learning and Knowledge Discovery in Databases, 121–36. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-26409-2_8.
Duke, Toju. "Explainability." In Building Responsible AI Algorithms, 105–16. Berkeley, CA: Apress, 2023. http://dx.doi.org/10.1007/978-1-4842-9306-5_7.
Neubig, Stefan, Daria Cappey, Nicolas Gehring, Linus Göhl, Andreas Hein, and Helmut Krcmar. "Visualizing Explainable Touristic Recommendations: An Interactive Approach." In Information and Communication Technologies in Tourism 2024, 353–64. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-58839-6_37.
Stevens, Alexander, Johannes De Smedt, and Jari Peeperkorn. "Quantifying Explainability in Outcome-Oriented Predictive Process Monitoring." In Lecture Notes in Business Information Processing, 194–206. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-98581-3_15.
Zhou, Jianlong, Fang Chen, and Andreas Holzinger. "Towards Explainability for AI Fairness." In xxAI - Beyond Explainable AI, 375–86. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04083-2_18.
Darrab, Sadeq, Harshitha Allipilli, Sana Ghani, Harikrishnan Changaramkulath, Sricharan Koneru, David Broneske, and Gunter Saake. "Anomaly Detection Algorithms: Comparative Analysis and Explainability Perspectives." In Communications in Computer and Information Science, 90–104. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-8696-5_7.
Wanner, Jonas, Lukas-Valentin Herm, Kai Heinrich, and Christian Janiesch. "Stop Ordering Machine Learning Algorithms by Their Explainability! An Empirical Investigation of the Tradeoff Between Performance and Explainability." In Responsible AI and Analytics for an Ethical and Inclusive Digitized Society, 245–58. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-85447-8_22.
Sajid, Sad Wadi, K. M. Rashid Anjum, Md Al-Shaharia, and Mahmudul Hasan. "Investigating Machine Learning Algorithms with Model Explainability for Network Intrusion Detection." In Cyber Security and Business Intelligence, 121–36. London: Routledge, 2023. http://dx.doi.org/10.4324/9781003285854-8.
Texte intégralActes de conférences sur le sujet "Algorithm explainability"
Zhou, Tongyu, Haoyu Sheng, and Iris Howley. "Assessing Post-hoc Explainability of the BKT Algorithm." In AIES '20: AAAI/ACM Conference on AI, Ethics, and Society. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3375627.3375856.
Mollel, Rachel Stephen, Lina Stankovic, and Vladimir Stankovic. "Using explainability tools to inform NILM algorithm performance." In BuildSys '22: The 9th ACM International Conference on Systems for Energy-Efficient Buildings, Cities, and Transportation. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3563357.3566148.
Góra, Grzegorz, Andrzej Skowron, and Arkadiusz Wojna. "Explainability in RIONA Algorithm Combining Rule Induction and Instance-Based Learning." In 18th Conference on Computer Science and Intelligence Systems. IEEE, 2023. http://dx.doi.org/10.15439/2023f4139.
Cardoso, Fabio, Thiago Medeiros, Marley Vellasco, and Karla Figueiredo. "Optimizing explainability of Breast Cancer Recurrence using FuzzyGenetic." In Encontro Nacional de Inteligência Artificial e Computacional. Sociedade Brasileira de Computação - SBC, 2023. http://dx.doi.org/10.5753/eniac.2023.234253.
Krishnamurthy, Bhargavi, Sajjan G. Shiva, and Saikat Das. "Handling Node Discovery Problem in Fog Computing using Categorical51 Algorithm With Explainability." In 2023 IEEE World AI IoT Congress (AIIoT). IEEE, 2023. http://dx.doi.org/10.1109/aiiot58121.2023.10174564.
Bounds, Charles Patrick, Mesbah Uddin, and Shishir Desai. "Tuning of Turbulence Model Closure Coefficients Using an Explainability Based Machine Learning Algorithm." In WCX SAE World Congress Experience. Warrendale, PA: SAE International, 2023. http://dx.doi.org/10.4271/2023-01-0562.
Oveis, Amir Hosein, Elisa Giusti, Giulio Meucci, Selenia Ghio, and Marco Martorella. "Explainability In Hyperspectral Image Classification: A Study of Xai Through the Shap Algorithm." In 2023 13th Workshop on Hyperspectral Imaging and Signal Processing: Evolution in Remote Sensing (WHISPERS). IEEE, 2023. http://dx.doi.org/10.1109/whispers61460.2023.10430776.
Gopalakrishnan, Karthik, and V. John Mathews. "A Fast Unsupervised Online Learning Algorithm to Detect Structural Damage in Time-Varying Environments." In 2021 48th Annual Review of Progress in Quantitative Nondestructive Evaluation. American Society of Mechanical Engineers, 2021. http://dx.doi.org/10.1115/qnde2021-75247.
Quinn, Seán, and Alessandra Mileo. "Towards Architecture-Agnostic Neural Transfer: a Knowledge-Enhanced Approach." In Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19). California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/915.
Eiben, Eduard, Sebastian Ordyniak, Giacomo Paesani, and Stefan Szeider. "Learning Small Decision Trees with Large Domain." In Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI-23). California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/355.