Scientific literature on the topic "Explainability of machine learning models"
Create an accurate reference in APA, MLA, Chicago, Harvard, and several other citation styles.
Browse thematic lists of journal articles, books, theses, conference proceedings, and other academic sources on the topic "Explainability of machine learning models".
Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever this information is included in the metadata.
Journal articles on the topic "Explainability of machine learning models"
S, Akshay, and Manu Madhavan. "Comparison of Explainability of Machine Learning Based Malayalam Text Classification." ICTACT Journal on Soft Computing 15, no. 1 (July 1, 2024): 3386–91. http://dx.doi.org/10.21917/ijsc.2024.0476.
Park, Min Sue, Hwijae Son, Chongseok Hyun, and Hyung Ju Hwang. "Explainability of Machine Learning Models for Bankruptcy Prediction." IEEE Access 9 (2021): 124887–99. http://dx.doi.org/10.1109/access.2021.3110270.
Cheng, Xueyi, and Chang Che. "Interpretable Machine Learning: Explainability in Algorithm Design." Journal of Industrial Engineering and Applied Science 2, no. 6 (December 1, 2024): 65–70. https://doi.org/10.70393/6a69656173.323337.
Bozorgpanah, Aso, Vicenç Torra, and Laya Aliahmadipour. "Privacy and Explainability: The Effects of Data Protection on Shapley Values." Technologies 10, no. 6 (December 1, 2022): 125. http://dx.doi.org/10.3390/technologies10060125.
Zhang, Xueting. "Traffic Flow Prediction Based on Explainable Machine Learning." Highlights in Science, Engineering and Technology 56 (July 14, 2023): 56–64. http://dx.doi.org/10.54097/hset.v56i.9816.
Pendyala, Vishnu, and Hyungkyun Kim. "Assessing the Reliability of Machine Learning Models Applied to the Mental Health Domain Using Explainable AI." Electronics 13, no. 6 (March 8, 2024): 1025. http://dx.doi.org/10.3390/electronics13061025.
Kim, Dong-sup, and Seungwoo Shin. "The Economic Explainability of Machine Learning and Standard Econometric Models: An Application to the U.S. Mortgage Default Risk." International Journal of Strategic Property Management 25, no. 5 (July 13, 2021): 396–412. http://dx.doi.org/10.3846/ijspm.2021.15129.
Topcu, Deniz. "How to explain a machine learning model: HbA1c classification example." Journal of Medicine and Palliative Care 4, no. 2 (March 27, 2023): 117–25. http://dx.doi.org/10.47582/jompac.1259507.
Rodríguez Mallma, Mirko Jerber, Luis Zuloaga-Rotta, Rubén Borja-Rosales, Josef Renato Rodríguez Mallma, Marcos Vilca-Aguilar, María Salas-Ojeda, and David Mauricio. "Explainable Machine Learning Models for Brain Diseases: Insights from a Systematic Review." Neurology International 16, no. 6 (October 29, 2024): 1285–307. http://dx.doi.org/10.3390/neurolint16060098.
Shendkar, Bhagyashree D. "Explainable Machine Learning Models for Real-Time Threat Detection in Cybersecurity." Panamerican Mathematical Journal 35, no. 1s (November 13, 2024): 264–75. http://dx.doi.org/10.52783/pmj.v35.i1s.2313.
Texte intégralThèses sur le sujet "Explainability of machine learning models"
Delaunay, Julien. « Explainability for machine learning models : from data adaptability to user perception ». Electronic Thesis or Diss., Université de Rennes (2023-....), 2023. http://www.theses.fr/2023URENS076.
This thesis explores the generation of local explanations for already deployed machine learning models, aiming to identify optimal conditions for producing meaningful explanations considering both data and user requirements. The primary goal is to develop methods for generating explanations for any model while ensuring that these explanations remain faithful to the underlying model and comprehensible to the users. The thesis is divided into two parts. The first enhances a widely used rule-based explanation method to improve the quality of explanations. It then introduces a novel approach for evaluating the suitability of linear explanations to approximate a model. Additionally, it conducts a comparative experiment between two families of counterfactual explanation methods to analyze the advantages of one over the other. The second part focuses on user experiments to assess the impact of three explanation methods and two distinct representations. These experiments measure how users perceive their interaction with the model in terms of understanding and trust, depending on the explanations and representations. This research contributes to better explanation generation, with potential implications for enhancing the transparency, trustworthiness, and usability of deployed AI systems.
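To make the kind of local, model-agnostic explanation discussed in this abstract concrete, the sketch below fits a sparse local linear surrogate around a single prediction with the LIME library. It is an illustrative example assuming the `lime` and `scikit-learn` packages are available, not code from the cited thesis.

```python
# Illustrative only: a local linear explanation of one prediction (LIME-style).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)
# Perturb the instance, query the model, and fit a sparse linear surrogate locally.
explanation = explainer.explain_instance(X_test[0], model.predict_proba, num_features=5)
print(explanation.as_list())  # top features with their local weights
```

Whether such a linear surrogate is a suitable approximation at all is precisely the kind of question the first part of the thesis addresses.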
Stanzione, Vincenzo Maria. "Developing a new approach for machine learning explainability combining local and global model-agnostic approaches." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amslaurea.unibo.it/25480/.
Ayad, Célia. "Towards Reliable Post Hoc Explanations for Machine Learning on Tabular Data and their Applications." Electronic Thesis or Dissertation, Institut polytechnique de Paris, 2024. http://www.theses.fr/2024IPPAX082.
As machine learning continues to demonstrate robust predictive capabilities, it has emerged as a very valuable tool in several scientific and industrial domains. However, as ML models evolve to achieve higher accuracy, they also become increasingly complex and require more parameters. Being able to understand the inner complexities and to establish trust in the predictions of these machine learning models has therefore become essential in various critical domains, including healthcare and finance. Researchers have developed explanation methods to make machine learning models more transparent, helping users understand why predictions are made. However, these explanation methods often fall short in accurately explaining model predictions, making it difficult for domain experts to utilize them effectively. It is crucial to identify the shortcomings of ML explanations, enhance their reliability, and make them more user-friendly. Additionally, with many ML tasks becoming more data-intensive and the demand for widespread integration rising, there is a need for methods that deliver strong predictive performance in a simpler and more cost-effective manner. In this dissertation, we address these problems in two main research thrusts: 1) We propose a methodology to evaluate various explainability methods in the context of specific data properties, such as noise levels, feature correlations, and class imbalance, and offer guidance for practitioners and researchers on selecting the most suitable explainability method based on the characteristics of their datasets, revealing where these methods excel or fail. Additionally, we provide clinicians with personalized explanations of cervical cancer risk factors based on their desired properties, such as ease of understanding, consistency, and stability. 2) We introduce Shapley Chains, a new explanation technique designed to overcome the lack of explanations of multi-output predictions in the case of interdependent labels, where features may have indirect contributions to predict subsequent labels in the chain (i.e., the order in which these labels are predicted). Moreover, we propose Bayes LIME Chains to enhance the robustness of Shapley Chains.
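The first research thrust can be pictured with a toy check of one data property mentioned above, attribution stability under feature noise. The sketch below is a rough illustration assuming the `shap` and `scikit-learn` packages are available, not the evaluation methodology of the thesis.

```python
# Rough illustration: how much do SHAP attributions move when Gaussian noise is added?
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
phi_clean = np.array(explainer.shap_values(X[:100]))
phi_noisy = np.array(explainer.shap_values(X[:100] + np.random.normal(0.0, 0.1, (100, 10))))

# Crude stability score: mean absolute change in attributions under the perturbation.
print("attribution drift:", np.mean(np.abs(phi_clean - phi_noisy)))
```

A systematic benchmark would repeat such a check across noise levels, correlation structures, class imbalance ratios, and explanation methods.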
Radulovic, Nedeljko. "Post-hoc Explainable AI for Black Box Models on Tabular Data." Electronic Thesis or Dissertation, Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAT028.
Current state-of-the-art Artificial Intelligence (AI) models have proven to be very successful in solving various tasks, such as classification, regression, Natural Language Processing (NLP), and image processing. The resources at our disposal today allow us to train very complex AI models to solve problems in almost any field: medicine, finance, justice, transportation, forecasting, etc. With the popularity and widespread use of AI models, the need to ensure trust in them has also grown. Complex as they are today, these AI models are impossible for humans to interpret and understand. In this thesis, we focus on a specific area of research, namely Explainable Artificial Intelligence (xAI), which aims to provide approaches to interpret complex AI models and explain their decisions. We present two approaches, STACI and BELLA, which focus on classification and regression tasks, respectively, for tabular data. Both methods are deterministic, model-agnostic, post-hoc approaches, which means that they can be applied to any black-box model after its creation. In this way, interpretability presents an added value without the need to compromise on the black-box model's performance. Our methods provide accurate, simple, and general interpretations of both the whole black-box model and its individual predictions. We confirmed their high performance through extensive experiments and a user study.
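A generic way to picture the post-hoc, model-agnostic setting described here (this is not an implementation of STACI or BELLA) is a global surrogate: an interpretable model trained to mimic the black box's outputs. The sketch assumes `scikit-learn` is available.

```python
# Generic post-hoc surrogate: a shallow tree trained on the black box's predictions.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor, export_text

data = load_diabetes()
black_box = RandomForestRegressor(n_estimators=300, random_state=0).fit(data.data, data.target)

# The surrogate learns from the black box's outputs, not from the true labels.
surrogate = DecisionTreeRegressor(max_depth=3, random_state=0)
surrogate.fit(data.data, black_box.predict(data.data))
print(export_text(surrogate, feature_names=list(data.feature_names)))
```

The cited thesis goes well beyond such a basic surrogate, aiming at deterministic, accurate interpretations of both the whole model and its individual predictions.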
Willot, Hénoïk. "Certified explanations of robust models." Electronic Thesis or Dissertation, Compiègne, 2024. http://www.theses.fr/2024COMP2812.
With the advent of automated or semi-automated decision systems in artificial intelligence comes the need to make them more reliable and transparent for an end-user. While the role of explainable methods is in general to increase transparency, reliability can be achieved by providing certified explanations, in the sense that these are guaranteed to be true, and by considering robust models that can abstain when they have insufficient information, rather than enforcing precision for the mere sake of avoiding indecision. This last aspect is commonly referred to as skeptical inference. This work contributes to this effort by considering two cases: - The first one considers classical decision rules used to enforce fairness, namely Ordered Weighted Averaging (OWA) operators with decreasing weights. Our main contribution is to fully characterise, from an axiomatic perspective, convex sets of such rules, and to provide along with this characterisation sound and complete explanation schemes that can be obtained efficiently through heuristics. In doing so, we also provide a unifying framework between restricted and generalized Lorenz dominance, two qualitative criteria, and precise decreasing OWA. - The second one considers that our decision rule is a classification model resulting from a learning procedure, where the resulting model is a set of probabilities. We study and discuss the problem of providing prime implicants as explanations in such a case, where, in addition to explaining clear preferences of one class over another, we also have to treat the problem of declaring two classes incomparable. We describe the corresponding problems in general terms, before studying in more detail the robust counterpart of the Naive Bayes Classifier.
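For the first case, a tiny worked example of an Ordered Weighted Averaging aggregation with decreasing weights may help; the convention assumed below (largest weights applied to the worst outcomes, a common choice in fairness-oriented aggregation) and the numbers are illustrative, not taken from the thesis.

```python
# Worked OWA example with decreasing weights (assumed convention: worst value weighted most).
def owa(values, weights):
    """Aggregate `values` by sorting them and applying positional `weights`."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    ordered = sorted(values)  # worst (smallest) outcome first under the assumed convention
    return sum(w * v for w, v in zip(weights, ordered))

weights = [0.5, 0.3, 0.2]             # decreasing: the least-satisfied criterion counts most
print(owa([0.9, 0.4, 0.7], weights))  # 0.5*0.4 + 0.3*0.7 + 0.2*0.9 = 0.59
```

Characterising convex sets of such rules, and explaining why one alternative OWA-dominates another, is what the axiomatic contribution summarized above formalises.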
Kurasinski, Lukas. "Machine Learning explainability in text classification for Fake News detection." Thesis, Malmö universitet, Fakulteten för teknik och samhälle (TS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20058.
Lounici, Sofiane. "Watermarking machine learning models." Electronic Thesis or Dissertation, Sorbonne université, 2022. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2022SORUS282.pdf.
The protection of the intellectual property of machine learning models appears to be increasingly necessary, given the investments involved and their impact on society. In this thesis, we propose to study the watermarking of machine learning models. We provide a state of the art of current watermarking techniques, and then complement it by considering watermarking beyond image classification tasks. We then define forging attacks against watermarking for model hosting platforms and present a new fairness-based watermarking technique. In addition, we propose an implementation of the presented techniques.
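The trigger-set idea that underlies much of the watermarking literature can be sketched as follows; this is a schematic illustration assuming `scikit-learn` is available, not one of the specific techniques studied in the thesis.

```python
# Schematic trigger-set watermark: train on normal data plus secretly labelled triggers,
# then claim ownership by showing unusually high agreement on the trigger set.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

rng = np.random.default_rng(0)
X_trigger = rng.uniform(-5.0, 5.0, size=(20, 20))   # out-of-distribution inputs
y_trigger = rng.integers(0, 2, size=20)              # secret, arbitrary labels

model = RandomForestClassifier(random_state=0).fit(
    np.vstack([X, X_trigger]), np.concatenate([y, y_trigger])
)
print("trigger-set agreement:", model.score(X_trigger, y_trigger))
```

Forging attacks, as defined in the thesis, target exactly this kind of verification, for instance by producing a plausible ownership claim after the fact.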
Maltbie, Nicholas. "Integrating Explainability in Deep Learning Application Development: A Categorization and Case Study." University of Cincinnati / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1623169431719474.
Hardoon, David Roi. "Semantic models for machine learning." Thesis, University of Southampton, 2006. https://eprints.soton.ac.uk/262019/.
Bodini, Matteo. "Design and Explainability of Machine Learning Algorithms for the Classification of Cardiac Abnormalities from Electrocardiogram Signals." Doctoral thesis, Università degli Studi di Milano, 2022. http://hdl.handle.net/2434/888002.
Texte intégralLivres sur le sujet "Explainability of machine learning models"
Nandi, Anirban, et Aditya Kumar Pal. Interpreting Machine Learning Models. Berkeley, CA : Apress, 2022. http://dx.doi.org/10.1007/978-1-4842-7802-4.
Texte intégralBolc, Leonard. Computational Models of Learning. Berlin, Heidelberg : Springer Berlin Heidelberg, 1987.
Trouver le texte intégralGalindez Olascoaga, Laura Isabel, Wannes Meert et Marian Verhelst. Hardware-Aware Probabilistic Machine Learning Models. Cham : Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-74042-9.
Texte intégralSingh, Pramod. Deploy Machine Learning Models to Production. Berkeley, CA : Apress, 2021. http://dx.doi.org/10.1007/978-1-4842-6546-8.
Texte intégralZhang, Zhihua. Statistical Machine Learning : Foundations, Methodologies and Models. UK : John Wiley & Sons, Limited, 2017.
Trouver le texte intégralRendell, Larry. Representations and models for concept learning. Urbana, IL (1304 W. Springfield Ave., Urbana 61801) : Dept. of Computer Science, University of Illinois at Urbana-Champaign, 1987.
Trouver le texte intégralEhteram, Mohammad, Zohreh Sheikh Khozani, Saeed Soltani-Mohammadi et Maliheh Abbaszadeh. Estimating Ore Grade Using Evolutionary Machine Learning Models. Singapore : Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-19-8106-7.
Texte intégralBisong, Ekaba. Building Machine Learning and Deep Learning Models on Google Cloud Platform. Berkeley, CA : Apress, 2019. http://dx.doi.org/10.1007/978-1-4842-4470-8.
Texte intégralGupta, Punit, Mayank Kumar Goyal, Sudeshna Chakraborty et Ahmed A. Elngar. Machine Learning and Optimization Models for Optimization in Cloud. Boca Raton : Chapman and Hall/CRC, 2022. http://dx.doi.org/10.1201/9781003185376.
Texte intégralSuthaharan, Shan. Machine Learning Models and Algorithms for Big Data Classification. Boston, MA : Springer US, 2016. http://dx.doi.org/10.1007/978-1-4899-7641-3.
Texte intégralChapitres de livres sur le sujet "Explainability of machine learning models"
Nandi, Anirban, et Aditya Kumar Pal. « The Eight Pitfalls of Explainability Methods ». Dans Interpreting Machine Learning Models, 321–28. Berkeley, CA : Apress, 2022. http://dx.doi.org/10.1007/978-1-4842-7802-4_15.
Texte intégralNandi, Anirban, et Aditya Kumar Pal. « Explainability Facts : A Framework for Systematic Assessment of Explainable Approaches ». Dans Interpreting Machine Learning Models, 69–82. Berkeley, CA : Apress, 2022. http://dx.doi.org/10.1007/978-1-4842-7802-4_6.
Texte intégralKamath, Uday, et John Liu. « Pre-model Interpretability and Explainability ». Dans Explainable Artificial Intelligence : An Introduction to Interpretable Machine Learning, 27–77. Cham : Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-83356-5_2.
Texte intégralDessain, Jean, Nora Bentaleb et Fabien Vinas. « Cost of Explainability in AI : An Example with Credit Scoring Models ». Dans Communications in Computer and Information Science, 498–516. Cham : Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-44064-9_26.
Texte intégralHenriques, J., T. Rocha, P. de Carvalho, C. Silva et S. Paredes. « Interpretability and Explainability of Machine Learning Models : Achievements and Challenges ». Dans IFMBE Proceedings, 81–94. Cham : Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-59216-4_9.
Texte intégralBargal, Sarah Adel, Andrea Zunino, Vitali Petsiuk, Jianming Zhang, Vittorio Murino, Stan Sclaroff et Kate Saenko. « Beyond the Visual Analysis of Deep Model Saliency ». Dans xxAI - Beyond Explainable AI, 255–69. Cham : Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04083-2_13.
Texte intégralStevens, Alexander, Johannes De Smedt et Jari Peeperkorn. « Quantifying Explainability in Outcome-Oriented Predictive Process Monitoring ». Dans Lecture Notes in Business Information Processing, 194–206. Cham : Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-98581-3_15.
Texte intégralBaniecki, Hubert, Wojciech Kretowicz et Przemyslaw Biecek. « Fooling Partial Dependence via Data Poisoning ». Dans Machine Learning and Knowledge Discovery in Databases, 121–36. Cham : Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-26409-2_8.
Texte intégralColosimo, Bianca Maria, et Fabio Centofanti. « Model Interpretability, Explainability and Trust for Manufacturing 4.0 ». Dans Interpretability for Industry 4.0 : Statistical and Machine Learning Approaches, 21–36. Cham : Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-12402-0_2.
Texte intégralSantos, Geanderson, Amanda Santana, Gustavo Vale et Eduardo Figueiredo. « Yet Another Model ! A Study on Model’s Similarities for Defect and Code Smells ». Dans Fundamental Approaches to Software Engineering, 282–305. Cham : Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-30826-0_16.
Texte intégralActes de conférences sur le sujet "Explainability of machine learning models"
Bouzid, Mohamed, et Manar Amayri. « Addressing Explainability in Load Forecasting Using Time Series Machine Learning Models ». Dans 2024 IEEE 12th International Conference on Smart Energy Grid Engineering (SEGE), 233–40. IEEE, 2024. http://dx.doi.org/10.1109/sege62220.2024.10739606.
Texte intégralBurgos, David, Ahsan Morshed, MD Mamunur Rashid et Satria Mandala. « A Comparison of Machine Learning Models to Deep Learning Models for Cancer Image Classification and Explainability of Classification ». Dans 2024 International Conference on Data Science and Its Applications (ICoDSA), 386–90. IEEE, 2024. http://dx.doi.org/10.1109/icodsa62899.2024.10651790.
Texte intégralSheikhani, Arman, Ervin Agic, Mahshid Helali Moghadam, Juan Carlos Andresen et Anders Vesterberg. « Lithium-Ion Battery SOH Forecasting : From Deep Learning Augmented by Explainability to Lightweight Machine Learning Models ». Dans 2024 IEEE 29th International Conference on Emerging Technologies and Factory Automation (ETFA), 1–4. IEEE, 2024. http://dx.doi.org/10.1109/etfa61755.2024.10710794.
Texte intégralMechouche, Ammar, Valerio Camerini, Caroline Del, Elsa Cansell et Konstanca Nikolajevic. « From Dampers Estimated Loads to In-Service Degradation Correlations ». Dans Vertical Flight Society 80th Annual Forum & Technology Display, 1–10. The Vertical Flight Society, 2024. http://dx.doi.org/10.4050/f-0080-2024-1108.
Texte intégralIzza, Yacine, Xuanxiang Huang, Antonio Morgado, Jordi Planes, Alexey Ignatiev et Joao Marques-Silva. « Distance-Restricted Explanations : Theoretical Underpinnings & ; Efficient Implementation ». Dans 21st International Conference on Principles of Knowledge Representation and Reasoning {KR-2023}, 475–86. California : International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/kr.2024/45.
Texte intégralAlami, Amine, Jaouad Boumhidi et Loqman Chakir. « Explainability in CNN based Deep Learning models for medical image classification ». Dans 2024 International Conference on Intelligent Systems and Computer Vision (ISCV), 1–6. IEEE, 2024. http://dx.doi.org/10.1109/iscv60512.2024.10620149.
Texte intégralRodríguez-Barroso, Nuria, Javier Del Ser, M. Victoria Luzón et Francisco Herrera. « Defense Strategy against Byzantine Attacks in Federated Machine Learning : Developments towards Explainability ». Dans 2024 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 1–8. IEEE, 2024. http://dx.doi.org/10.1109/fuzz-ieee60900.2024.10611769.
Texte intégralPerikos, Isidoros. « Sensitive Content Detection in Social Networks Using Deep Learning Models and Explainability Techniques ». Dans 2024 IEEE/ACIS 9th International Conference on Big Data, Cloud Computing, and Data Science (BCD), 48–53. IEEE, 2024. http://dx.doi.org/10.1109/bcd61269.2024.10743081.
Texte intégralGafur, Jamil, Steve Goddard et William Lai. « Adversarial Robustness and Explainability of Machine Learning Models ». Dans PEARC '24 : Practice and Experience in Advanced Research Computing. New York, NY, USA : ACM, 2024. http://dx.doi.org/10.1145/3626203.3670522.
Texte intégralIslam, Md Ariful, Kowshik Nittala et Garima Bajwa. « Adding Explainability to Machine Learning Models to Detect Chronic Kidney Disease ». Dans 2022 IEEE 23rd International Conference on Information Reuse and Integration for Data Science (IRI). IEEE, 2022. http://dx.doi.org/10.1109/iri54793.2022.00069.
Texte intégralRapports d'organisations sur le sujet "Explainability of machine learning models"
Smith, Michael, Erin Acquesta, Arlo Ames, Alycia Carey, Christopher Cuellar, Richard Field, Trevor Maxfield et al. SAGE Intrusion Detection System : Sensitivity Analysis Guided Explainability for Machine Learning. Office of Scientific and Technical Information (OSTI), septembre 2021. http://dx.doi.org/10.2172/1820253.
Texte intégralSkryzalin, Jacek, Kenneth Goss et Benjamin Jackson. Securing machine learning models. Office of Scientific and Technical Information (OSTI), septembre 2020. http://dx.doi.org/10.2172/1661020.
Texte intégralMartinez, Carianne, Jessica Jones, Drew Levin, Nathaniel Trask et Patrick Finley. Physics-Informed Machine Learning for Epidemiological Models. Office of Scientific and Technical Information (OSTI), octobre 2020. http://dx.doi.org/10.2172/1706217.
Texte intégralLavender, Samantha, et Trent Tinker, dir. Testbed-19 : Machine Learning Models Engineering Report. Open Geospatial Consortium, Inc., avril 2024. http://dx.doi.org/10.62973/23-033.
Texte intégralSaenz, Juan Antonio, Ismael Djibrilla Boureima, Vitaliy Gyrya et Susan Kurien. Machine-Learning for Rapid Optimization of Turbulence Models. Office of Scientific and Technical Information (OSTI), juillet 2020. http://dx.doi.org/10.2172/1638623.
Texte intégralKulkarni, Sanjeev R. Extending and Unifying Formal Models for Machine Learning. Fort Belvoir, VA : Defense Technical Information Center, juillet 1997. http://dx.doi.org/10.21236/ada328730.
Texte intégralBanerjee, Boudhayan. Machine Learning Models for Political Video Advertisement Classification. Ames (Iowa) : Iowa State University, janvier 2017. http://dx.doi.org/10.31274/cc-20240624-976.
Texte intégralValaitis, Vytautas, et Alessandro T. Villa. A Machine Learning Projection Method for Macro-Finance Models. Federal Reserve Bank of Chicago, 2022. http://dx.doi.org/10.21033/wp-2022-19.
Texte intégralFessel, Kimberly. Machine Learning in Python. Instats Inc., 2024. http://dx.doi.org/10.61700/s74zy0ivgwioe1764.
Texte intégralOgunbire, Abimbola, Panick Kalambay, Hardik Gajera et Srinivas Pulugurtha. Deep Learning, Machine Learning, or Statistical Models for Weather-related Crash Severity Prediction. Mineta Transportation Institute, décembre 2023. http://dx.doi.org/10.31979/mti.2023.2320.