Ready-made bibliography on the topic "Algorithm explainability"
Create accurate references in APA, MLA, Chicago, Harvard, and many other styles
Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Algorithm explainability".
An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically create a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online, if the relevant parameters are provided in the work's metadata.
Journal articles on the topic "Algorithm explainability"
Nuobu, Gengpan. "Transformer model: Explainability and prospectiveness". Applied and Computational Engineering 20, no. 1 (October 23, 2023): 88–99. http://dx.doi.org/10.54254/2755-2721/20/20231079.
Hwang, Hyunseung, and Steven Euijong Whang. "XClusters: Explainability-First Clustering". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 7 (June 26, 2023): 7962–70. http://dx.doi.org/10.1609/aaai.v37i7.25963.
Pendyala, Vishnu, and Hyungkyun Kim. "Assessing the Reliability of Machine Learning Models Applied to the Mental Health Domain Using Explainable AI". Electronics 13, no. 6 (March 8, 2024): 1025. http://dx.doi.org/10.3390/electronics13061025.
Loreti, Daniela, and Giorgio Visani. "Parallel approaches for a decision tree-based explainability algorithm". Future Generation Computer Systems 158 (September 2024): 308–22. http://dx.doi.org/10.1016/j.future.2024.04.044.
Wang, Zhenzhong, Qingyuan Zeng, Wanyu Lin, Min Jiang, and Kay Chen Tan. "Generating Diagnostic and Actionable Explanations for Fair Graph Neural Networks". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (March 24, 2024): 21690–98. http://dx.doi.org/10.1609/aaai.v38i19.30168.
Yiğit, Tuncay, Nilgün Şengöz, Özlem Özmen, Jude Hemanth, and Ali Hakan Işık. "Diagnosis of Paratuberculosis in Histopathological Images Based on Explainable Artificial Intelligence and Deep Learning". Traitement du Signal 39, no. 3 (June 30, 2022): 863–69. http://dx.doi.org/10.18280/ts.390311.
Powell, Alison B. "Explanations as governance? Investigating practices of explanation in algorithmic system design". European Journal of Communication 36, no. 4 (August 2021): 362–75. http://dx.doi.org/10.1177/02673231211028376.
Xie, Lijie, Zhaoming Hu, Xingjuan Cai, Wensheng Zhang, and Jinjun Chen. "Explainable recommendation based on knowledge graph and multi-objective optimization". Complex & Intelligent Systems 7, no. 3 (March 6, 2021): 1241–52. http://dx.doi.org/10.1007/s40747-021-00315-y.
Kabir, Sami, Mohammad Shahadat Hossain, and Karl Andersson. "An Advanced Explainable Belief Rule-Based Framework to Predict the Energy Consumption of Buildings". Energies 17, no. 8 (April 9, 2024): 1797. http://dx.doi.org/10.3390/en17081797.
Bulitko, Vadim, Shuwei Wang, Justin Stevens, and Levi H. S. Lelis. "Portability and Explainability of Synthesized Formula-based Heuristics". Proceedings of the International Symposium on Combinatorial Search 15, no. 1 (July 17, 2022): 29–37. http://dx.doi.org/10.1609/socs.v15i1.21749.
Pełny tekst źródłaRozprawy doktorskie na temat "Algorithm explainability"
Raizonville, Adrien. "Regulation and competition policy of the digital economy: essays in industrial organization". Electronic Thesis or Diss., Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAT028.
This thesis addresses two issues facing regulators in the digital economy: the informational challenge generated by the use of new artificial intelligence technologies and the problem of the market power of large digital platforms. The first chapter of this thesis explores the implementation of a (costly and imperfect) audit system by a regulator seeking to limit the risk of damage generated by artificial intelligence technologies as well as its cost of regulation. Firms may invest in explainability to better understand their technologies and, thus, reduce their cost of compliance. When audit efficacy is not affected by explainability, firms invest voluntarily in explainability. Technology-specific regulation induces greater explainability and compliance than technology-neutral regulation. If, instead, explainability facilitates the regulator's detection of misconduct, a firm may hide its misconduct behind algorithmic opacity. Regulatory opportunism further deters investment in explainability. To promote explainability and compliance, command-and-control regulation with minimum explainability standards may be needed. The second chapter studies the effects of implementing a coopetition strategy between two two-sided platforms on the subscription prices of their users, in a growing market (i.e., in which new users can join the platform) and in a mature market. More specifically, the platforms cooperatively set the subscription prices of one group of users (e.g., sellers) and the prices of the other group (e.g., buyers) non-cooperatively. By cooperating on the subscription price of sellers, each platform internalizes the negative externality it exerts on the other platform when it reduces its price. This leads the platforms to increase the subscription price for sellers relative to the competitive situation. At the same time, as the economic value of sellers increases and as buyers exert a positive cross-network effect on sellers, competition between platforms to attract buyers intensifies, leading to a lower subscription price for buyers. The increase in total surplus only occurs when new buyers can join the market. Finally, the third chapter examines interoperability between an incumbent platform and a new entrant as a regulatory tool to improve market contestability and limit the market power of the incumbent platform. Interoperability allows network effects to be shared between the two platforms, thereby reducing the importance of network effects in users' choice of subscription to a platform. The preference to interact with exclusive users of the other platform leads to multihoming when interoperability is not perfect. Interoperability leads to a reduction in demand for the incumbent platform, which reduces its subscription price. In contrast, for relatively low levels of interoperability, demand for the entrant platform increases, as does its price and profit, before decreasing for higher levels of interoperability. Users always benefit from the introduction of interoperability.
Li, Honghao. "Interpretable biological network reconstruction from observational data". Electronic Thesis or Diss., Université Paris Cité, 2021. http://www.theses.fr/2021UNIP5207.
This thesis is focused on constraint-based methods, one of the basic types of causal structure learning algorithms. We use the PC algorithm as a representative, for which we propose a simple and general modification that is applicable to any PC-derived method. The modification ensures that all separating sets used during the skeleton reconstruction step to remove edges between conditionally independent variables remain consistent with respect to the final graph. It consists in iterating the structure learning algorithm while restricting the search for separating sets to those that are consistent with respect to the graph obtained at the end of the previous iteration. The restriction can be achieved with limited computational complexity with the help of a block-cut tree decomposition of the graph skeleton. Enforcing separating set consistency is found to increase the recall of constraint-based methods at the cost of precision, while keeping similar or better overall performance. It also improves the interpretability and explainability of the obtained graphical model. We then introduce the recently developed constraint-based method MIIC, which adopts ideas from the maximum likelihood framework to improve the robustness and overall performance of the obtained graph. We discuss the characteristics and the limitations of MIIC, and propose several modifications that emphasize the interpretability of the obtained graph and the scalability of the algorithm. In particular, we implement the iterative approach to enforce separating set consistency, opt for a conservative rule of orientation, and exploit the orientation probability feature of MIIC to extend the edge notation in the final graph to illustrate different causal implications. The MIIC algorithm is applied to a dataset of about 400 000 breast cancer records from the SEER database, as a large-scale real-life benchmark.
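To make the separating-set-consistency idea from this abstract concrete, the sketch below iterates a PC-style skeleton search, drawing candidate separating sets only from the graph of the previous iteration until a fixed point is reached. It is a minimal illustration under stated assumptions: `ci_test` is a user-supplied conditional-independence test, the consistency notion is simplified to neighbourhood membership, and the block-cut tree decomposition used in the thesis is not reproduced.

```python
from itertools import combinations

def consistent_pc_skeleton(nodes, ci_test, max_iter=10):
    """Iterative PC-style skeleton search in which candidate separating sets
    are drawn only from the graph obtained at the previous iteration, so that
    the retained separating sets stay consistent with the final skeleton.
    Illustrative sketch; ci_test(x, y, s) -> bool is user-supplied."""
    # Start from the complete undirected graph over the given nodes.
    graph = {frozenset(e) for e in combinations(nodes, 2)}
    sepsets = {}
    for _ in range(max_iter):
        new_graph = set(graph)
        for edge in graph:
            x, y = tuple(edge)
            # Candidate separating sets are restricted to neighbours of x or y
            # in the PREVIOUS graph (a simplified notion of consistency).
            adj = {z for z in nodes
                   if frozenset((x, z)) in graph or frozenset((y, z)) in graph}
            adj -= {x, y}
            removed = False
            for size in range(len(adj) + 1):
                for s in combinations(sorted(adj), size):
                    if ci_test(x, y, set(s)):   # is x independent of y given s?
                        new_graph.discard(edge)
                        sepsets[edge] = set(s)
                        removed = True
                        break
                if removed:
                    break
        if new_graph == graph:   # fixed point: no further edges can be removed
            break
        graph = new_graph
    return graph, sepsets
```

With a partial-correlation or G-test supplied as `ci_test`, the function returns an undirected skeleton together with the separating set that justified each edge removal.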
BODINI, MATTEO. "DESIGN AND EXPLAINABILITY OF MACHINE LEARNING ALGORITHMS FOR THE CLASSIFICATION OF CARDIAC ABNORMALITIES FROM ELECTROCARDIOGRAM SIGNALS". Doctoral thesis, Università degli Studi di Milano, 2022. http://hdl.handle.net/2434/888002.
Radulovic, Nedeljko. "Post-hoc Explainable AI for Black Box Models on Tabular Data". Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAT028.
Current state-of-the-art Artificial Intelligence (AI) models have proven to be very successful in solving various tasks, such as classification, regression, Natural Language Processing (NLP), and image processing. The resources that we have at hand today allow us to train very complex AI models to solve problems in almost any field: medicine, finance, justice, transportation, forecasting, etc. With the popularity and widespread use of AI models, the need to ensure trust in them has also grown. Complex as they are today, these AI models are impossible for humans to interpret and understand. In this thesis, we focus on a specific area of research, namely Explainable Artificial Intelligence (xAI), which aims to provide approaches to interpret complex AI models and explain their decisions. We present two approaches, STACI and BELLA, which focus on classification and regression tasks, respectively, for tabular data. Both methods are deterministic model-agnostic post-hoc approaches, which means that they can be applied to any black-box model after its creation. In this way, interpretability presents an added value without the need to compromise on the black-box model's performance. Our methods provide accurate, simple and general interpretations of both the whole black-box model and its individual predictions. We confirmed their high performance through extensive experiments and a user study.
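As a generic illustration of the deterministic, model-agnostic post-hoc approach described in this abstract, the sketch below fits an interpretable linear surrogate to a black-box regressor around a single instance and reports its fidelity. The names `black_box_predict`, `radius`, and `k_fallback` are assumptions introduced for illustration; this is not the STACI or BELLA algorithm itself.

```python
import numpy as np
from sklearn.linear_model import Ridge

def local_linear_surrogate(black_box_predict, X, x0, radius=1.0, k_fallback=20):
    """Fit an interpretable linear surrogate to a black-box regressor in the
    neighbourhood of a single instance x0 (generic post-hoc, model-agnostic
    illustration; not the STACI or BELLA algorithms from the thesis)."""
    # Select training points that lie in a neighbourhood of x0.
    dists = np.linalg.norm(X - x0, axis=1)
    mask = dists <= radius
    neighbourhood = X[mask] if mask.sum() >= 2 else X[np.argsort(dists)[:k_fallback]]
    # The surrogate is trained on the black box's own predictions,
    # so it explains the model rather than the ground truth.
    y_bb = black_box_predict(neighbourhood)
    surrogate = Ridge(alpha=1.0).fit(neighbourhood, y_bb)
    fidelity = surrogate.score(neighbourhood, y_bb)  # R2 between surrogate and black box
    return surrogate.coef_, surrogate.intercept_, fidelity
```

Because the surrogate is trained on the black box's predictions rather than on the ground truth, the returned coefficients explain the model's local behaviour, and the fidelity score indicates how far that explanation can be trusted.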
Jeyasothy, Adulam. "Génération d'explications post-hoc personnalisées". Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS027.
This thesis is in the field of eXplainable AI (XAI). We focus on post-hoc interpretability methods that aim to explain to a user the prediction made by a trained decision model for a specific data point. To increase the interpretability of explanations, this thesis studies the integration of user knowledge into these methods, and thus aims to improve the understandability of explanations by generating personalized explanations tailored to each user. To this end, we propose a general formalism that explicitly integrates knowledge via a new criterion in the interpretability objectives. This formalism is then instantiated for different types of knowledge and different types of explanations, particularly counterfactual examples, leading to the proposal of several algorithms (KICE, Knowledge Integration in Counterfactual Explanation; rKICE, its variant for knowledge expressed by rules; and KISM, Knowledge Integration in Surrogate Models). The issue of aggregating classical quality constraints and knowledge compatibility constraints is also studied, and we propose to use Gödel's integral as an aggregation operator. Finally, we discuss the difficulty of generating a single explanation suitable for all types of users and the notion of diversity in explanations.
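The sketch below illustrates how user knowledge can enter a counterfactual objective as an extra criterion, as the abstract describes. It is a plain random search under stated assumptions: `predict` is any trained classifier's prediction function and `knowledge_penalty` is a user-supplied function encoding domain knowledge; the actual KICE, rKICE, and KISM algorithms are not reproduced here.

```python
import numpy as np

def knowledge_aware_counterfactual(predict, x, knowledge_penalty,
                                   lam=1.0, n_samples=5000, scale=0.5, seed=0):
    """Random-search sketch of a counterfactual explanation whose objective adds
    a knowledge-compatibility term to the usual proximity term. Illustrative
    only; not the KICE algorithm from the thesis."""
    rng = np.random.default_rng(seed)
    y0 = predict(x.reshape(1, -1))[0]            # original prediction
    best, best_cost = None, np.inf
    for _ in range(n_samples):
        candidate = x + rng.normal(0.0, scale, size=x.shape)
        if predict(candidate.reshape(1, -1))[0] == y0:
            continue                             # keep only candidates that flip the prediction
        # Classical proximity criterion plus the knowledge-compatibility criterion.
        cost = np.linalg.norm(candidate - x, ord=1) + lam * knowledge_penalty(candidate)
        if cost < best_cost:
            best, best_cost = candidate, cost
    return best, best_cost
```

For example, `knowledge_penalty` could return a large value whenever a feature the user declares immutable is changed, steering the search toward explanations compatible with that knowledge.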
Book chapters on the topic "Algorithm explainability"
Rady, Amgad, and Franck van Breugel. "Explainability of Probabilistic Bisimilarity Distances for Labelled Markov Chains". In Lecture Notes in Computer Science, 285–307. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-30829-1_14.
Wang, Huaduo, and Gopal Gupta. "FOLD-SE: An Efficient Rule-Based Machine Learning Algorithm with Scalable Explainability". In Practical Aspects of Declarative Languages, 37–53. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-52038-9_3.
Baniecki, Hubert, Wojciech Kretowicz, and Przemyslaw Biecek. "Fooling Partial Dependence via Data Poisoning". In Machine Learning and Knowledge Discovery in Databases, 121–36. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-26409-2_8.
Duke, Toju. "Explainability". In Building Responsible AI Algorithms, 105–16. Berkeley, CA: Apress, 2023. http://dx.doi.org/10.1007/978-1-4842-9306-5_7.
Neubig, Stefan, Daria Cappey, Nicolas Gehring, Linus Göhl, Andreas Hein, and Helmut Krcmar. "Visualizing Explainable Touristic Recommendations: An Interactive Approach". In Information and Communication Technologies in Tourism 2024, 353–64. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-58839-6_37.
Stevens, Alexander, Johannes De Smedt, and Jari Peeperkorn. "Quantifying Explainability in Outcome-Oriented Predictive Process Monitoring". In Lecture Notes in Business Information Processing, 194–206. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-98581-3_15.
Zhou, Jianlong, Fang Chen, and Andreas Holzinger. "Towards Explainability for AI Fairness". In xxAI - Beyond Explainable AI, 375–86. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04083-2_18.
Darrab, Sadeq, Harshitha Allipilli, Sana Ghani, Harikrishnan Changaramkulath, Sricharan Koneru, David Broneske, and Gunter Saake. "Anomaly Detection Algorithms: Comparative Analysis and Explainability Perspectives". In Communications in Computer and Information Science, 90–104. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-8696-5_7.
Wanner, Jonas, Lukas-Valentin Herm, Kai Heinrich, and Christian Janiesch. "Stop Ordering Machine Learning Algorithms by Their Explainability! An Empirical Investigation of the Tradeoff Between Performance and Explainability". In Responsible AI and Analytics for an Ethical and Inclusive Digitized Society, 245–58. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-85447-8_22.
Sajid, Sad Wadi, K. M. Rashid Anjum, Md Al-Shaharia, and Mahmudul Hasan. "Investigating Machine Learning Algorithms with Model Explainability for Network Intrusion Detection". In Cyber Security and Business Intelligence, 121–36. London: Routledge, 2023. http://dx.doi.org/10.4324/9781003285854-8.
Pełny tekst źródłaStreszczenia konferencji na temat "Algorithm explainability"
Zhou, Tongyu, Haoyu Sheng, and Iris Howley. "Assessing Post-hoc Explainability of the BKT Algorithm". In AIES '20: AAAI/ACM Conference on AI, Ethics, and Society. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3375627.3375856.
Mollel, Rachel Stephen, Lina Stankovic, and Vladimir Stankovic. "Using explainability tools to inform NILM algorithm performance". In BuildSys '22: The 9th ACM International Conference on Systems for Energy-Efficient Buildings, Cities, and Transportation. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3563357.3566148.
Góra, Grzegorz, Andrzej Skowron, and Arkadiusz Wojna. "Explainability in RIONA Algorithm Combining Rule Induction and Instance-Based Learning". In 18th Conference on Computer Science and Intelligence Systems. IEEE, 2023. http://dx.doi.org/10.15439/2023f4139.
Cardoso, Fabio, Thiago Medeiros, Marley Vellasco, and Karla Figueiredo. "Optimizing explainability of Breast Cancer Recurrence using FuzzyGenetic". In Encontro Nacional de Inteligência Artificial e Computacional. Sociedade Brasileira de Computação - SBC, 2023. http://dx.doi.org/10.5753/eniac.2023.234253.
Krishnamurthy, Bhargavi, Sajjan G. Shiva, and Saikat Das. "Handling Node Discovery Problem in Fog Computing using Categorical51 Algorithm With Explainability". In 2023 IEEE World AI IoT Congress (AIIoT). IEEE, 2023. http://dx.doi.org/10.1109/aiiot58121.2023.10174564.
Bounds, Charles Patrick, Mesbah Uddin, and Shishir Desai. "Tuning of Turbulence Model Closure Coefficients Using an Explainability Based Machine Learning Algorithm". In WCX SAE World Congress Experience. 400 Commonwealth Drive, Warrendale, PA, United States: SAE International, 2023. http://dx.doi.org/10.4271/2023-01-0562.
Oveis, Amir Hosein, Elisa Giusti, Giulio Meucci, Selenia Ghio, and Marco Martorella. "Explainability In Hyperspectral Image Classification: A Study of Xai Through the Shap Algorithm". In 2023 13th Workshop on Hyperspectral Imaging and Signal Processing: Evolution in Remote Sensing (WHISPERS). IEEE, 2023. http://dx.doi.org/10.1109/whispers61460.2023.10430776.
Gopalakrishnan, Karthik, and V. John Mathews. "A Fast Unsupervised Online Learning Algorithm to Detect Structural Damage in Time-Varying Environments". In 2021 48th Annual Review of Progress in Quantitative Nondestructive Evaluation. American Society of Mechanical Engineers, 2021. http://dx.doi.org/10.1115/qnde2021-75247.
Quinn, Seán, and Alessandra Mileo. "Towards Architecture-Agnostic Neural Transfer: a Knowledge-Enhanced Approach". In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/915.
Eiben, Eduard, Sebastian Ordyniak, Giacomo Paesani, and Stefan Szeider. "Learning Small Decision Trees with Large Domain". In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/355.