A selection of scholarly literature on the topic "Algorithm explainability"
Browse lists of current articles, books, dissertations, theses, and other scholarly sources on the topic "Algorithm explainability".
Journal articles on the topic "Algorithm explainability"
Nuobu, Gengpan. "Transformer model: Explainability and prospectiveness." Applied and Computational Engineering 20, no. 1 (October 23, 2023): 88–99. http://dx.doi.org/10.54254/2755-2721/20/20231079.
Hwang, Hyunseung, and Steven Euijong Whang. "XClusters: Explainability-First Clustering." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 7 (June 26, 2023): 7962–70. http://dx.doi.org/10.1609/aaai.v37i7.25963.
Pendyala, Vishnu, and Hyungkyun Kim. "Assessing the Reliability of Machine Learning Models Applied to the Mental Health Domain Using Explainable AI." Electronics 13, no. 6 (March 8, 2024): 1025. http://dx.doi.org/10.3390/electronics13061025.
Loreti, Daniela, and Giorgio Visani. "Parallel approaches for a decision tree-based explainability algorithm." Future Generation Computer Systems 158 (September 2024): 308–22. http://dx.doi.org/10.1016/j.future.2024.04.044.
Wang, Zhenzhong, Qingyuan Zeng, Wanyu Lin, Min Jiang, and Kay Chen Tan. "Generating Diagnostic and Actionable Explanations for Fair Graph Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (March 24, 2024): 21690–98. http://dx.doi.org/10.1609/aaai.v38i19.30168.
Yiğit, Tuncay, Nilgün Şengöz, Özlem Özmen, Jude Hemanth, and Ali Hakan Işık. "Diagnosis of Paratuberculosis in Histopathological Images Based on Explainable Artificial Intelligence and Deep Learning." Traitement du Signal 39, no. 3 (June 30, 2022): 863–69. http://dx.doi.org/10.18280/ts.390311.
Powell, Alison B. "Explanations as governance? Investigating practices of explanation in algorithmic system design." European Journal of Communication 36, no. 4 (August 2021): 362–75. http://dx.doi.org/10.1177/02673231211028376.
Xie, Lijie, Zhaoming Hu, Xingjuan Cai, Wensheng Zhang, and Jinjun Chen. "Explainable recommendation based on knowledge graph and multi-objective optimization." Complex & Intelligent Systems 7, no. 3 (March 6, 2021): 1241–52. http://dx.doi.org/10.1007/s40747-021-00315-y.
Kabir, Sami, Mohammad Shahadat Hossain, and Karl Andersson. "An Advanced Explainable Belief Rule-Based Framework to Predict the Energy Consumption of Buildings." Energies 17, no. 8 (April 9, 2024): 1797. http://dx.doi.org/10.3390/en17081797.
Bulitko, Vadim, Shuwei Wang, Justin Stevens, and Levi H. S. Lelis. "Portability and Explainability of Synthesized Formula-based Heuristics." Proceedings of the International Symposium on Combinatorial Search 15, no. 1 (July 17, 2022): 29–37. http://dx.doi.org/10.1609/socs.v15i1.21749.
Повний текст джерелаДисертації з теми "Algorithm explainability"
Raizonville, Adrien. "Regulation and competition policy of the digital economy : essays in industrial organization." Electronic Thesis or Diss., Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAT028.
This thesis addresses two issues facing regulators in the digital economy: the informational challenge generated by the use of new artificial intelligence technologies and the problem of the market power of large digital platforms. The first chapter of this thesis explores the implementation of a (costly and imperfect) audit system by a regulator seeking to limit the risk of damage generated by artificial intelligence technologies as well as its cost of regulation. Firms may invest in explainability to better understand their technologies and, thus, reduce their cost of compliance. When audit efficacy is not affected by explainability, firms invest voluntarily in explainability. Technology-specific regulation induces greater explainability and compliance than technology-neutral regulation. If, instead, explainability facilitates the regulator's detection of misconduct, a firm may hide its misconduct behind algorithmic opacity. Regulatory opportunism further deters investment in explainability. To promote explainability and compliance, command-and-control regulation with minimum explainability standards may be needed. The second chapter studies the effects of implementing a coopetition strategy between two two-sided platforms on the subscription prices of their users, in a growing market (i.e., one that new users can join) and in a mature market. More specifically, the platforms set the subscription prices of one group of users (e.g., sellers) cooperatively and the prices of the other group (e.g., buyers) non-cooperatively. By cooperating on the subscription price of sellers, each platform internalizes the negative externality it exerts on the other platform when it reduces its price. This leads the platforms to increase the subscription price for sellers relative to the competitive situation.
At the same time, as the economic value of sellers increases and as buyers exert a positive cross-network effect on sellers, competition between platforms to attract buyers intensifies, leading to a lower subscription price for buyers. The increase in total surplus only occurs when new buyers can join the market. Finally, the third chapter examines interoperability between an incumbent platform and a new entrant as a regulatory tool to improve market contestability and limit the market power of the incumbent platform. Interoperability allows network effects to be shared between the two platforms, thereby reducing the importance of network effects in users' choice of subscription to a platform. The preference to interact with exclusive users of the other platform leads to multihoming when interoperability is not perfect. Interoperability leads to a reduction in demand for the incumbent platform, which reduces its subscription price. In contrast, for relatively low levels of interoperability, demand for the entrant platform increases, as do its price and profit, before decreasing for higher levels of interoperability. Users always benefit from the introduction of interoperability.
Li, Honghao. "Interpretable biological network reconstruction from observational data." Electronic Thesis or Diss., Université Paris Cité, 2021. http://www.theses.fr/2021UNIP5207.
This thesis is focused on constraint-based methods, one of the basic types of causal structure learning algorithms. We use the PC algorithm as a representative, for which we propose a simple and general modification that is applicable to any PC-derived method. The modification ensures that all separating sets used during the skeleton reconstruction step to remove edges between conditionally independent variables remain consistent with respect to the final graph. It consists in iterating the structure learning algorithm while restricting the search of separating sets to those that are consistent with respect to the graph obtained at the end of the previous iteration. The restriction can be achieved with limited computational complexity with the help of a block-cut tree decomposition of the graph skeleton. The enforcement of separating set consistency is found to increase the recall of constraint-based methods at the cost of precision, while keeping similar or better overall performance. It also improves the interpretability and explainability of the obtained graphical model. We then introduce the recently developed constraint-based method MIIC, which adopts ideas from the maximum likelihood framework to improve the robustness and overall performance of the obtained graph. We discuss the characteristics and the limitations of MIIC, and propose several modifications that emphasize the interpretability of the obtained graph and the scalability of the algorithm. In particular, we implement the iterative approach to enforce separating set consistency, opt for a conservative orientation rule, and exploit the orientation probability feature of MIIC to extend the edge notation in the final graph to illustrate different causal implications. The MIIC algorithm is applied to a dataset of about 400 000 breast cancer records from the SEER database, as a large-scale real-life benchmark.
Bodini, Matteo. "Design and Explainability of Machine Learning Algorithms for the Classification of Cardiac Abnormalities from Electrocardiogram Signals." Doctoral thesis, Università degli Studi di Milano, 2022. http://hdl.handle.net/2434/888002.
Повний текст джерелаRadulovic, Nedeljko. "Post-hoc Explainable AI for Black Box Models on Tabular Data." Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAT028.
Current state-of-the-art Artificial Intelligence (AI) models have been proven to be very successful in solving various tasks, such as classification, regression, Natural Language Processing (NLP), and image processing. The resources that we have at hand today allow us to train very complex AI models to solve different problems in almost any field: medicine, finance, justice, transportation, forecasting, etc. With the popularity and widespread use of AI models, the need to ensure trust in them has also grown. Complex as they are today, these AI models are impossible for humans to interpret and understand. In this thesis, we focus on a specific area of research, namely Explainable Artificial Intelligence (xAI), which aims to provide approaches to interpret complex AI models and explain their decisions. We present two approaches, STACI and BELLA, which focus on classification and regression tasks, respectively, for tabular data. Both methods are deterministic model-agnostic post-hoc approaches, which means that they can be applied to any black-box model after its creation. In this way, interpretability presents an added value without the need to compromise on the black-box model's performance. Our methods provide accurate, simple, and general interpretations of both the whole black-box model and its individual predictions. We confirmed their high performance through extensive experiments and a user study.
Jeyasothy, Adulam. "Génération d'explications post-hoc personnalisées." Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS027.
This thesis is in the field of eXplainable AI (XAI). We focus on post-hoc interpretability methods that aim to explain to a user the prediction made by a trained decision model for a specific data point. To increase the interpretability of explanations, this thesis studies the integration of user knowledge into these methods, aiming to improve the understandability of explanations by generating personalized explanations tailored to each user. To this end, we propose a general formalism that explicitly integrates knowledge via a new criterion in the interpretability objectives. This formalism is then instantiated for different types of knowledge and different types of explanations, particularly counterfactual examples, leading to several proposed algorithms (KICE, Knowledge Integration in Counterfactual Explanation; rKICE, its variant for knowledge expressed by rules; and KISM, Knowledge Integration in Surrogate Models). The issue of aggregating classical quality constraints and knowledge-compatibility constraints is also studied, and we propose using Gödel's integral as an aggregation operator. Finally, we discuss the difficulty of generating a single explanation suitable for all types of users, and the notion of diversity in explanations.
Book chapters on the topic "Algorithm explainability"
Rady, Amgad, and Franck van Breugel. "Explainability of Probabilistic Bisimilarity Distances for Labelled Markov Chains." In Lecture Notes in Computer Science, 285–307. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-30829-1_14.
Wang, Huaduo, and Gopal Gupta. "FOLD-SE: An Efficient Rule-Based Machine Learning Algorithm with Scalable Explainability." In Practical Aspects of Declarative Languages, 37–53. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-52038-9_3.
Baniecki, Hubert, Wojciech Kretowicz, and Przemyslaw Biecek. "Fooling Partial Dependence via Data Poisoning." In Machine Learning and Knowledge Discovery in Databases, 121–36. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-26409-2_8.
Duke, Toju. "Explainability." In Building Responsible AI Algorithms, 105–16. Berkeley, CA: Apress, 2023. http://dx.doi.org/10.1007/978-1-4842-9306-5_7.
Neubig, Stefan, Daria Cappey, Nicolas Gehring, Linus Göhl, Andreas Hein, and Helmut Krcmar. "Visualizing Explainable Touristic Recommendations: An Interactive Approach." In Information and Communication Technologies in Tourism 2024, 353–64. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-58839-6_37.
Stevens, Alexander, Johannes De Smedt, and Jari Peeperkorn. "Quantifying Explainability in Outcome-Oriented Predictive Process Monitoring." In Lecture Notes in Business Information Processing, 194–206. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-98581-3_15.
Zhou, Jianlong, Fang Chen, and Andreas Holzinger. "Towards Explainability for AI Fairness." In xxAI - Beyond Explainable AI, 375–86. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04083-2_18.
Darrab, Sadeq, Harshitha Allipilli, Sana Ghani, Harikrishnan Changaramkulath, Sricharan Koneru, David Broneske, and Gunter Saake. "Anomaly Detection Algorithms: Comparative Analysis and Explainability Perspectives." In Communications in Computer and Information Science, 90–104. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-8696-5_7.
Wanner, Jonas, Lukas-Valentin Herm, Kai Heinrich, and Christian Janiesch. "Stop Ordering Machine Learning Algorithms by Their Explainability! An Empirical Investigation of the Tradeoff Between Performance and Explainability." In Responsible AI and Analytics for an Ethical and Inclusive Digitized Society, 245–58. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-85447-8_22.
Sajid, Sad Wadi, K. M. Rashid Anjum, Md Al-Shaharia, and Mahmudul Hasan. "Investigating Machine Learning Algorithms with Model Explainability for Network Intrusion Detection." In Cyber Security and Business Intelligence, 121–36. London: Routledge, 2023. http://dx.doi.org/10.4324/9781003285854-8.
Повний текст джерелаТези доповідей конференцій з теми "Algorithm explainability"
Zhou, Tongyu, Haoyu Sheng, and Iris Howley. "Assessing Post-hoc Explainability of the BKT Algorithm." In AIES '20: AAAI/ACM Conference on AI, Ethics, and Society. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3375627.3375856.
Mollel, Rachel Stephen, Lina Stankovic, and Vladimir Stankovic. "Using explainability tools to inform NILM algorithm performance." In BuildSys '22: The 9th ACM International Conference on Systems for Energy-Efficient Buildings, Cities, and Transportation. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3563357.3566148.
Góra, Grzegorz, Andrzej Skowron, and Arkadiusz Wojna. "Explainability in RIONA Algorithm Combining Rule Induction and Instance-Based Learning." In 18th Conference on Computer Science and Intelligence Systems. IEEE, 2023. http://dx.doi.org/10.15439/2023f4139.
Cardoso, Fabio, Thiago Medeiros, Marley Vellasco, and Karla Figueiredo. "Optimizing explainability of Breast Cancer Recurrence using FuzzyGenetic." In Encontro Nacional de Inteligência Artificial e Computacional. Sociedade Brasileira de Computação - SBC, 2023. http://dx.doi.org/10.5753/eniac.2023.234253.
Krishnamurthy, Bhargavi, Sajjan G. Shiva, and Saikat Das. "Handling Node Discovery Problem in Fog Computing using Categorical51 Algorithm With Explainability." In 2023 IEEE World AI IoT Congress (AIIoT). IEEE, 2023. http://dx.doi.org/10.1109/aiiot58121.2023.10174564.
Bounds, Charles Patrick, Mesbah Uddin, and Shishir Desai. "Tuning of Turbulence Model Closure Coefficients Using an Explainability Based Machine Learning Algorithm." In WCX SAE World Congress Experience. Warrendale, PA: SAE International, 2023. http://dx.doi.org/10.4271/2023-01-0562.
Oveis, Amir Hosein, Elisa Giusti, Giulio Meucci, Selenia Ghio, and Marco Martorella. "Explainability in Hyperspectral Image Classification: A Study of XAI Through the SHAP Algorithm." In 2023 13th Workshop on Hyperspectral Imaging and Signal Processing: Evolution in Remote Sensing (WHISPERS). IEEE, 2023. http://dx.doi.org/10.1109/whispers61460.2023.10430776.
Gopalakrishnan, Karthik, and V. John Mathews. "A Fast Unsupervised Online Learning Algorithm to Detect Structural Damage in Time-Varying Environments." In 2021 48th Annual Review of Progress in Quantitative Nondestructive Evaluation. American Society of Mechanical Engineers, 2021. http://dx.doi.org/10.1115/qnde2021-75247.
Quinn, Seán, and Alessandra Mileo. "Towards Architecture-Agnostic Neural Transfer: a Knowledge-Enhanced Approach." In Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19). California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/915.
Eiben, Eduard, Sebastian Ordyniak, Giacomo Paesani, and Stefan Szeider. "Learning Small Decision Trees with Large Domain." In Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI-23). California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/355.