Academic literature on the topic 'Explainability of machine learning models'
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Explainability of machine learning models.' Each entry below is given in a consistent bibliographic style; references can likewise be generated in APA, MLA, Harvard, Chicago, Vancouver, and other citation styles. Where the metadata makes them available, full texts can be downloaded as PDF and abstracts read online.
Journal articles on the topic "Explainability of machine learning models"
S, Akshay, and Manu Madhavan. "Comparison of Explainability of Machine Learning Based Malayalam Text Classification." ICTACT Journal on Soft Computing 15, no. 1 (July 1, 2024): 3386–91. http://dx.doi.org/10.21917/ijsc.2024.0476.
Park, Min Sue, Hwijae Son, Chongseok Hyun, and Hyung Ju Hwang. "Explainability of Machine Learning Models for Bankruptcy Prediction." IEEE Access 9 (2021): 124887–99. http://dx.doi.org/10.1109/access.2021.3110270.
Cheng, Xueyi, and Chang Che. "Interpretable Machine Learning: Explainability in Algorithm Design." Journal of Industrial Engineering and Applied Science 2, no. 6 (December 1, 2024): 65–70. https://doi.org/10.70393/6a69656173.323337.
Bozorgpanah, Aso, Vicenç Torra, and Laya Aliahmadipour. "Privacy and Explainability: The Effects of Data Protection on Shapley Values." Technologies 10, no. 6 (December 1, 2022): 125. http://dx.doi.org/10.3390/technologies10060125.
Zhang, Xueting. "Traffic Flow Prediction Based on Explainable Machine Learning." Highlights in Science, Engineering and Technology 56 (July 14, 2023): 56–64. http://dx.doi.org/10.54097/hset.v56i.9816.
Pendyala, Vishnu, and Hyungkyun Kim. "Assessing the Reliability of Machine Learning Models Applied to the Mental Health Domain Using Explainable AI." Electronics 13, no. 6 (March 8, 2024): 1025. http://dx.doi.org/10.3390/electronics13061025.
Kim, Dong-sup, and Seungwoo Shin. "The Economic Explainability of Machine Learning and Standard Econometric Models: An Application to the U.S. Mortgage Default Risk." International Journal of Strategic Property Management 25, no. 5 (July 13, 2021): 396–412. http://dx.doi.org/10.3846/ijspm.2021.15129.
Topcu, Deniz. "How to Explain a Machine Learning Model: HbA1c Classification Example." Journal of Medicine and Palliative Care 4, no. 2 (March 27, 2023): 117–25. http://dx.doi.org/10.47582/jompac.1259507.
Rodríguez Mallma, Mirko Jerber, Luis Zuloaga-Rotta, Rubén Borja-Rosales, Josef Renato Rodríguez Mallma, Marcos Vilca-Aguilar, María Salas-Ojeda, and David Mauricio. "Explainable Machine Learning Models for Brain Diseases: Insights from a Systematic Review." Neurology International 16, no. 6 (October 29, 2024): 1285–307. http://dx.doi.org/10.3390/neurolint16060098.
Shendkar, Bhagyashree D. "Explainable Machine Learning Models for Real-Time Threat Detection in Cybersecurity." Panamerican Mathematical Journal 35, no. 1s (November 13, 2024): 264–75. http://dx.doi.org/10.52783/pmj.v35.i1s.2313.
Full textDissertations / Theses on the topic "Explainability of machine learning models"
Delaunay, Julien. "Explainability for Machine Learning Models: From Data Adaptability to User Perception." Electronic Thesis or Diss., Université de Rennes, 2023. http://www.theses.fr/2023URENS076.
This thesis explores the generation of local explanations for already deployed machine learning models, aiming to identify optimal conditions for producing meaningful explanations considering both data and user requirements. The primary goal is to develop methods for generating explanations for any model while ensuring that these explanations remain faithful to the underlying model and comprehensible to the users. The thesis is divided into two parts. The first enhances a widely used rule-based explanation method to improve the quality of explanations. It then introduces a novel approach for evaluating the suitability of linear explanations to approximate a model. Additionally, it conducts a comparative experiment between two families of counterfactual explanation methods to analyze the advantages of one over the other. The second part focuses on user experiments to assess the impact of three explanation methods and two distinct representations. These experiments measure how users perceive their interaction with the model in terms of understanding and trust, depending on the explanations and representations. This research contributes to better explanation generation, with potential implications for enhancing the transparency, trustworthiness, and usability of deployed AI systems.
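For readers who want to see what the linear explanations studied in this thesis look like in practice, below is a minimal, hedged sketch in the LIME style: a black-box model is probed with perturbations around one instance and a proximity-weighted linear surrogate is fitted. The model, data, and every function name are illustrative assumptions, not the thesis's own implementation.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import Ridge

    # An opaque "black-box" model trained on synthetic tabular data.
    X, y = make_classification(n_samples=500, n_features=5, random_state=0)
    black_box = RandomForestClassifier(random_state=0).fit(X, y)

    def local_linear_explanation(instance, model, n_samples=1000, kernel_width=1.0):
        """Fit a proximity-weighted linear surrogate around one instance."""
        rng = np.random.default_rng(0)
        perturbed = instance + rng.normal(scale=0.5, size=(n_samples, instance.shape[0]))
        preds = model.predict_proba(perturbed)[:, 1]
        # Closer perturbations count more when fitting the surrogate.
        distances = np.linalg.norm(perturbed - instance, axis=1)
        weights = np.exp(-(distances ** 2) / kernel_width ** 2)
        surrogate = Ridge(alpha=1.0).fit(perturbed, preds, sample_weight=weights)
        return surrogate.coef_  # one local importance score per feature

    print("Local feature contributions:", np.round(local_linear_explanation(X[0], black_box), 3))

The coefficients approximate the black box only near the chosen instance, which is precisely why the thesis asks when a linear surrogate is a suitable approximation at all.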
Stanzione, Vincenzo Maria. "Developing a new approach for machine learning explainability combining local and global model-agnostic approaches." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amslaurea.unibo.it/25480/.
Ayad, Célia. "Towards Reliable Post Hoc Explanations for Machine Learning on Tabular Data and their Applications." Electronic Thesis or Diss., Institut polytechnique de Paris, 2024. http://www.theses.fr/2024IPPAX082.
As machine learning continues to demonstrate robust predictive capabilities, it has emerged as a very valuable tool in several scientific and industrial domains. However, as ML models evolve to achieve higher accuracy, they also become increasingly complex and require more parameters. Being able to understand the inner complexities and to establish trust in the predictions of these machine learning models has therefore become essential in various critical domains, including healthcare and finance. Researchers have developed explanation methods to make machine learning models more transparent, helping users understand why predictions are made. However, these explanation methods often fall short in accurately explaining model predictions, making it difficult for domain experts to utilize them effectively. It is crucial to identify the shortcomings of ML explanations, enhance their reliability, and make them more user-friendly. Additionally, with many ML tasks becoming more data-intensive and the demand for widespread integration rising, there is a need for methods that deliver strong predictive performance in a simpler and more cost-effective manner. In this dissertation, we address these problems in two main research thrusts: 1) We propose a methodology to evaluate various explainability methods in the context of specific data properties, such as noise levels, feature correlations, and class imbalance, and offer guidance for practitioners and researchers on selecting the most suitable explainability method based on the characteristics of their datasets, revealing where these methods excel or fail. Additionally, we provide clinicians with personalized explanations of cervical cancer risk factors based on their desired properties such as ease of understanding, consistency, and stability. 2) We introduce Shapley Chains, a new explanation technique designed to overcome the lack of explanations of multi-output predictions in the case of interdependent labels, where features may have indirect contributions to predict subsequent labels in the chain (i.e., the order in which these labels are predicted). Moreover, we propose Bayes LIME Chains to enhance the robustness of Shapley Chains.
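Shapley-value attributions, which the Shapley Chains technique extends, follow from a single definition: each feature's contribution is its average marginal effect over all feature coalitions. The brute-force sketch below marginalizes absent features with a background mean; it is a hedged illustration only (exponential in the number of features, so practical toolkits approximate it), and every name and dataset in it is assumed for the example.

    import numpy as np
    from itertools import combinations
    from math import factorial
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=300, n_features=4, random_state=1)
    model = LogisticRegression().fit(X, y)
    background = X.mean(axis=0)  # stand-in values for "absent" features

    def coalition_value(instance, subset):
        """Model output with features outside `subset` replaced by background values."""
        z = background.copy()
        z[list(subset)] = instance[list(subset)]
        return model.predict_proba(z.reshape(1, -1))[0, 1]

    def shapley_values(instance):
        n = instance.shape[0]
        phi = np.zeros(n)
        for i in range(n):
            others = [j for j in range(n) if j != i]
            for size in range(n):
                for S in combinations(others, size):
                    # Coalition weight = |S|! (n - |S| - 1)! / n!
                    w = factorial(size) * factorial(n - size - 1) / factorial(n)
                    phi[i] += w * (coalition_value(instance, S + (i,)) - coalition_value(instance, S))
        return phi

    print("Shapley attributions:", np.round(shapley_values(X[0]), 3))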
Radulovic, Nedeljko. "Post-hoc Explainable AI for Black Box Models on Tabular Data." Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAT028.
Current state-of-the-art Artificial Intelligence (AI) models have proven to be very successful in solving various tasks, such as classification, regression, Natural Language Processing (NLP), and image processing. The resources we have at hand today allow us to train very complex AI models to solve problems in almost any field: medicine, finance, justice, transportation, forecasting, etc. With the popularity and widespread use of AI models, the need to ensure trust in them has also grown. Complex as they are today, these AI models are impossible for humans to interpret and understand. In this thesis, we focus on a specific area of research, namely Explainable Artificial Intelligence (xAI), which aims to provide approaches for interpreting complex AI models and explaining their decisions. We present two approaches, STACI and BELLA, which address classification and regression tasks, respectively, on tabular data. Both methods are deterministic, model-agnostic, post-hoc approaches, which means that they can be applied to any black-box model after its creation. In this way, interpretability presents an added value without the need to compromise on the black-box model's performance. Our methods provide accurate, simple, and general interpretations of both the whole black-box model and its individual predictions. We confirmed their high performance through extensive experiments and a user study.
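A generic way to see what "deterministic, model-agnostic, post-hoc" means in practice is a global surrogate: an interpretable model trained to mimic the black box's own predictions, with fidelity measured explicitly. The sketch below illustrates that family of techniques, not STACI or BELLA themselves; all names and data are assumptions for the example.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = make_classification(n_samples=1000, n_features=6, random_state=2)
    black_box = GradientBoostingClassifier(random_state=2).fit(X, y)

    # Post-hoc: the surrogate is fitted after, and independently of, model creation.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=2)
    surrogate.fit(X, black_box.predict(X))

    # Fidelity: how often the interpretable surrogate agrees with the black box.
    fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
    print(f"Surrogate fidelity: {fidelity:.2%}")
    print(export_text(surrogate, feature_names=[f"x{i}" for i in range(6)]))

High fidelity is the "added value" the abstract refers to: the black box keeps its predictive performance while the surrogate supplies the interpretation.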
Willot, Hénoïk. "Certified explanations of robust models." Electronic Thesis or Diss., Compiègne, 2024. http://www.theses.fr/2024COMP2812.
With the advent of automated or semi-automated decision systems in artificial intelligence comes the need to make them more reliable and transparent for an end user. While the role of explainable methods is in general to increase transparency, reliability can be achieved by providing certified explanations, in the sense that they are guaranteed to be true, and by considering robust models that can abstain when they have insufficient information, rather than enforcing precision for the mere sake of avoiding indecision. This last aspect is commonly referred to as skeptical inference. This work contributes to this effort by considering two cases:
- The first considers classical decision rules used to enforce fairness, namely Ordered Weighted Averaging (OWA) with decreasing weights. Our main contribution is to fully characterize, from an axiomatic perspective, convex sets of such rules, and to provide alongside this sound and complete explanation schemes that can be obtained efficiently through heuristics. In doing so, we also provide a unifying framework linking restricted and generalized Lorenz dominance, two qualitative criteria, and precise decreasing OWA.
- The second considers a decision rule that is a classification model resulting from a learning procedure, where the resulting model is a set of probabilities. We study and discuss the problem of providing prime implicants as explanations in such a case, where in addition to explaining clear preferences of one class over another, we must also treat the problem of declaring two classes incomparable. We describe the corresponding problems in general terms before studying in more detail the robust counterpart of the Naive Bayes Classifier.
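To make the decision rule concrete, the sketch below computes a decreasing-weights OWA under one common fairness-oriented convention (values sorted so that the worst-off component receives the largest weight, the reading that connects to Lorenz dominance). The convention and all names are illustrative assumptions, not the thesis's axiomatization.

    import numpy as np

    def fair_owa(values, weights):
        """OWA with non-increasing weights applied to increasingly sorted values,
        so the worst-off component gets the largest weight."""
        v = np.sort(np.asarray(values, dtype=float))   # worst-off first
        w = np.asarray(weights, dtype=float)
        assert np.isclose(w.sum(), 1.0) and np.all(np.diff(w) <= 0.0)
        return float(v @ w)

    # Three allocations with the same mean: the more unequal, the lower the score.
    w = np.array([0.5, 0.3, 0.2])
    for alloc in ([4, 4, 4], [2, 4, 6], [1, 3, 8]):
        print(alloc, "->", fair_owa(alloc, w))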
Kurasinski, Lukas. "Machine Learning explainability in text classification for Fake News detection." Thesis, Malmö universitet, Fakulteten för teknik och samhälle (TS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20058.
Full textLounici, Sofiane. "Watermarking machine learning models." Electronic Thesis or Diss., Sorbonne université, 2022. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2022SORUS282.pdf.
The protection of the intellectual property of machine learning models appears increasingly necessary, given the investments they represent and their impact on society. In this thesis, we propose to study the watermarking of machine learning models. We provide a state of the art of current watermarking techniques, then complement it by considering watermarking beyond image classification tasks. We then define forging attacks against watermarking for model-hosting platforms and present a new fairness-based watermarking technique. In addition, we propose an implementation of the presented techniques.
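One widely studied scheme from the watermarking literature (a hedged, generic sketch, not the specific techniques of this thesis) embeds a secret trigger set during training and later claims ownership by agreement on it; all names and data below are illustrative.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    X, y = make_classification(n_samples=1000, n_features=10, random_state=3)

    # Secret trigger set: random inputs with owner-chosen labels.
    rng = np.random.default_rng(3)
    X_trigger = rng.normal(size=(20, 10))
    y_trigger = rng.integers(0, 2, size=20)

    # Embed the watermark by training on data augmented with the trigger set.
    model = RandomForestClassifier(random_state=3).fit(
        np.vstack([X, X_trigger]), np.concatenate([y, y_trigger]))

    def verify_watermark(suspect, X_trigger, y_trigger, threshold=0.9):
        """Claim ownership if the suspect model reproduces the secret labels."""
        agreement = (suspect.predict(X_trigger) == y_trigger).mean()
        return agreement, agreement >= threshold

    agreement, owned = verify_watermark(model, X_trigger, y_trigger)
    print(f"Trigger-set agreement: {agreement:.0%}; watermark {'detected' if owned else 'not detected'}")

On one reading, a forging attack is an adversary constructing a fake trigger set that also passes such a verification, which is why robust schemes must bind triggers to the owner's identity.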
Maltbie, Nicholas. "Integrating Explainability in Deep Learning Application Development: A Categorization and Case Study." University of Cincinnati / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1623169431719474.
Hardoon, David Roi. "Semantic models for machine learning." Thesis, University of Southampton, 2006. https://eprints.soton.ac.uk/262019/.
Full textBODINI, MATTEO. "DESIGN AND EXPLAINABILITY OF MACHINE LEARNING ALGORITHMS FOR THE CLASSIFICATION OF CARDIAC ABNORMALITIES FROM ELECTROCARDIOGRAM SIGNALS." Doctoral thesis, Università degli Studi di Milano, 2022. http://hdl.handle.net/2434/888002.
Books on the topic "Explainability of machine learning models"
Nandi, Anirban, and Aditya Kumar Pal. Interpreting Machine Learning Models. Berkeley, CA: Apress, 2022. http://dx.doi.org/10.1007/978-1-4842-7802-4.
Full textBolc, Leonard. Computational Models of Learning. Berlin, Heidelberg: Springer Berlin Heidelberg, 1987.
Galindez Olascoaga, Laura Isabel, Wannes Meert, and Marian Verhelst. Hardware-Aware Probabilistic Machine Learning Models. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-74042-9.
Singh, Pramod. Deploy Machine Learning Models to Production. Berkeley, CA: Apress, 2021. http://dx.doi.org/10.1007/978-1-4842-6546-8.
Zhang, Zhihua. Statistical Machine Learning: Foundations, Methodologies and Models. UK: John Wiley & Sons, Limited, 2017.
Rendell, Larry. Representations and Models for Concept Learning. Urbana, IL: Dept. of Computer Science, University of Illinois at Urbana-Champaign, 1987.
Ehteram, Mohammad, Zohreh Sheikh Khozani, Saeed Soltani-Mohammadi, and Maliheh Abbaszadeh. Estimating Ore Grade Using Evolutionary Machine Learning Models. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-19-8106-7.
Bisong, Ekaba. Building Machine Learning and Deep Learning Models on Google Cloud Platform. Berkeley, CA: Apress, 2019. http://dx.doi.org/10.1007/978-1-4842-4470-8.
Gupta, Punit, Mayank Kumar Goyal, Sudeshna Chakraborty, and Ahmed A. Elngar. Machine Learning and Optimization Models for Optimization in Cloud. Boca Raton: Chapman and Hall/CRC, 2022. http://dx.doi.org/10.1201/9781003185376.
Suthaharan, Shan. Machine Learning Models and Algorithms for Big Data Classification. Boston, MA: Springer US, 2016. http://dx.doi.org/10.1007/978-1-4899-7641-3.
Full textBook chapters on the topic "Explainability of machine learning models"
Nandi, Anirban, and Aditya Kumar Pal. "The Eight Pitfalls of Explainability Methods." In Interpreting Machine Learning Models, 321–28. Berkeley, CA: Apress, 2022. http://dx.doi.org/10.1007/978-1-4842-7802-4_15.
Nandi, Anirban, and Aditya Kumar Pal. "Explainability Facts: A Framework for Systematic Assessment of Explainable Approaches." In Interpreting Machine Learning Models, 69–82. Berkeley, CA: Apress, 2022. http://dx.doi.org/10.1007/978-1-4842-7802-4_6.
Kamath, Uday, and John Liu. "Pre-model Interpretability and Explainability." In Explainable Artificial Intelligence: An Introduction to Interpretable Machine Learning, 27–77. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-83356-5_2.
Dessain, Jean, Nora Bentaleb, and Fabien Vinas. "Cost of Explainability in AI: An Example with Credit Scoring Models." In Communications in Computer and Information Science, 498–516. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-44064-9_26.
Henriques, J., T. Rocha, P. de Carvalho, C. Silva, and S. Paredes. "Interpretability and Explainability of Machine Learning Models: Achievements and Challenges." In IFMBE Proceedings, 81–94. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-59216-4_9.
Bargal, Sarah Adel, Andrea Zunino, Vitali Petsiuk, Jianming Zhang, Vittorio Murino, Stan Sclaroff, and Kate Saenko. "Beyond the Visual Analysis of Deep Model Saliency." In xxAI - Beyond Explainable AI, 255–69. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04083-2_13.
Stevens, Alexander, Johannes De Smedt, and Jari Peeperkorn. "Quantifying Explainability in Outcome-Oriented Predictive Process Monitoring." In Lecture Notes in Business Information Processing, 194–206. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-98581-3_15.
Baniecki, Hubert, Wojciech Kretowicz, and Przemyslaw Biecek. "Fooling Partial Dependence via Data Poisoning." In Machine Learning and Knowledge Discovery in Databases, 121–36. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-26409-2_8.
Colosimo, Bianca Maria, and Fabio Centofanti. "Model Interpretability, Explainability and Trust for Manufacturing 4.0." In Interpretability for Industry 4.0: Statistical and Machine Learning Approaches, 21–36. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-12402-0_2.
Santos, Geanderson, Amanda Santana, Gustavo Vale, and Eduardo Figueiredo. "Yet Another Model! A Study on Model's Similarities for Defect and Code Smells." In Fundamental Approaches to Software Engineering, 282–305. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-30826-0_16.
Full textConference papers on the topic "Explainability of machine learning models"
Bouzid, Mohamed, and Manar Amayri. "Addressing Explainability in Load Forecasting Using Time Series Machine Learning Models." In 2024 IEEE 12th International Conference on Smart Energy Grid Engineering (SEGE), 233–40. IEEE, 2024. http://dx.doi.org/10.1109/sege62220.2024.10739606.
Burgos, David, Ahsan Morshed, MD Mamunur Rashid, and Satria Mandala. "A Comparison of Machine Learning Models to Deep Learning Models for Cancer Image Classification and Explainability of Classification." In 2024 International Conference on Data Science and Its Applications (ICoDSA), 386–90. IEEE, 2024. http://dx.doi.org/10.1109/icodsa62899.2024.10651790.
Sheikhani, Arman, Ervin Agic, Mahshid Helali Moghadam, Juan Carlos Andresen, and Anders Vesterberg. "Lithium-Ion Battery SOH Forecasting: From Deep Learning Augmented by Explainability to Lightweight Machine Learning Models." In 2024 IEEE 29th International Conference on Emerging Technologies and Factory Automation (ETFA), 1–4. IEEE, 2024. http://dx.doi.org/10.1109/etfa61755.2024.10710794.
Mechouche, Ammar, Valerio Camerini, Caroline Del, Elsa Cansell, and Konstanca Nikolajevic. "From Dampers Estimated Loads to In-Service Degradation Correlations." In Vertical Flight Society 80th Annual Forum & Technology Display, 1–10. The Vertical Flight Society, 2024. http://dx.doi.org/10.4050/f-0080-2024-1108.
Izza, Yacine, Xuanxiang Huang, Antonio Morgado, Jordi Planes, Alexey Ignatiev, and Joao Marques-Silva. "Distance-Restricted Explanations: Theoretical Underpinnings & Efficient Implementation." In 21st International Conference on Principles of Knowledge Representation and Reasoning (KR 2024), 475–86. California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/kr.2024/45.
Alami, Amine, Jaouad Boumhidi, and Loqman Chakir. "Explainability in CNN-Based Deep Learning Models for Medical Image Classification." In 2024 International Conference on Intelligent Systems and Computer Vision (ISCV), 1–6. IEEE, 2024. http://dx.doi.org/10.1109/iscv60512.2024.10620149.
Rodríguez-Barroso, Nuria, Javier Del Ser, M. Victoria Luzón, and Francisco Herrera. "Defense Strategy against Byzantine Attacks in Federated Machine Learning: Developments towards Explainability." In 2024 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 1–8. IEEE, 2024. http://dx.doi.org/10.1109/fuzz-ieee60900.2024.10611769.
Perikos, Isidoros. "Sensitive Content Detection in Social Networks Using Deep Learning Models and Explainability Techniques." In 2024 IEEE/ACIS 9th International Conference on Big Data, Cloud Computing, and Data Science (BCD), 48–53. IEEE, 2024. http://dx.doi.org/10.1109/bcd61269.2024.10743081.
Gafur, Jamil, Steve Goddard, and William Lai. "Adversarial Robustness and Explainability of Machine Learning Models." In PEARC '24: Practice and Experience in Advanced Research Computing. New York, NY, USA: ACM, 2024. http://dx.doi.org/10.1145/3626203.3670522.
Islam, Md Ariful, Kowshik Nittala, and Garima Bajwa. "Adding Explainability to Machine Learning Models to Detect Chronic Kidney Disease." In 2022 IEEE 23rd International Conference on Information Reuse and Integration for Data Science (IRI). IEEE, 2022. http://dx.doi.org/10.1109/iri54793.2022.00069.
Reports on the topic "Explainability of machine learning models"
Smith, Michael, Erin Acquesta, Arlo Ames, Alycia Carey, Christopher Cuellar, Richard Field, Trevor Maxfield, et al. SAGE Intrusion Detection System: Sensitivity Analysis Guided Explainability for Machine Learning. Office of Scientific and Technical Information (OSTI), September 2021. http://dx.doi.org/10.2172/1820253.
Skryzalin, Jacek, Kenneth Goss, and Benjamin Jackson. Securing Machine Learning Models. Office of Scientific and Technical Information (OSTI), September 2020. http://dx.doi.org/10.2172/1661020.
Martinez, Carianne, Jessica Jones, Drew Levin, Nathaniel Trask, and Patrick Finley. Physics-Informed Machine Learning for Epidemiological Models. Office of Scientific and Technical Information (OSTI), October 2020. http://dx.doi.org/10.2172/1706217.
Lavender, Samantha, and Trent Tinker, eds. Testbed-19: Machine Learning Models Engineering Report. Open Geospatial Consortium, Inc., April 2024. http://dx.doi.org/10.62973/23-033.
Saenz, Juan Antonio, Ismael Djibrilla Boureima, Vitaliy Gyrya, and Susan Kurien. Machine-Learning for Rapid Optimization of Turbulence Models. Office of Scientific and Technical Information (OSTI), July 2020. http://dx.doi.org/10.2172/1638623.
Kulkarni, Sanjeev R. Extending and Unifying Formal Models for Machine Learning. Fort Belvoir, VA: Defense Technical Information Center, July 1997. http://dx.doi.org/10.21236/ada328730.
Banerjee, Boudhayan. Machine Learning Models for Political Video Advertisement Classification. Ames, Iowa: Iowa State University, January 2017. http://dx.doi.org/10.31274/cc-20240624-976.
Valaitis, Vytautas, and Alessandro T. Villa. A Machine Learning Projection Method for Macro-Finance Models. Federal Reserve Bank of Chicago, 2022. http://dx.doi.org/10.21033/wp-2022-19.
Fessel, Kimberly. Machine Learning in Python. Instats Inc., 2024. http://dx.doi.org/10.61700/s74zy0ivgwioe1764.
Ogunbire, Abimbola, Panick Kalambay, Hardik Gajera, and Srinivas Pulugurtha. Deep Learning, Machine Learning, or Statistical Models for Weather-related Crash Severity Prediction. Mineta Transportation Institute, December 2023. http://dx.doi.org/10.31979/mti.2023.2320.