Academic literature on the topic "Explainability of machine learning models"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the topic-based lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Explainability of machine learning models".
Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Explainability of machine learning models"
S, Akshay, and Manu Madhavan. "Comparison of Explainability of Machine Learning Based Malayalam Text Classification". ICTACT Journal on Soft Computing 15, no. 1 (July 1, 2024): 3386–91. http://dx.doi.org/10.21917/ijsc.2024.0476.
Park, Min Sue, Hwijae Son, Chongseok Hyun, and Hyung Ju Hwang. "Explainability of Machine Learning Models for Bankruptcy Prediction". IEEE Access 9 (2021): 124887–99. http://dx.doi.org/10.1109/access.2021.3110270.
Cheng, Xueyi, and Chang Che. "Interpretable Machine Learning: Explainability in Algorithm Design". Journal of Industrial Engineering and Applied Science 2, no. 6 (December 1, 2024): 65–70. https://doi.org/10.70393/6a69656173.323337.
Bozorgpanah, Aso, Vicenç Torra, and Laya Aliahmadipour. "Privacy and Explainability: The Effects of Data Protection on Shapley Values". Technologies 10, no. 6 (December 1, 2022): 125. http://dx.doi.org/10.3390/technologies10060125.
Zhang, Xueting. "Traffic Flow Prediction Based on Explainable Machine Learning". Highlights in Science, Engineering and Technology 56 (July 14, 2023): 56–64. http://dx.doi.org/10.54097/hset.v56i.9816.
Pendyala, Vishnu, and Hyungkyun Kim. "Assessing the Reliability of Machine Learning Models Applied to the Mental Health Domain Using Explainable AI". Electronics 13, no. 6 (March 8, 2024): 1025. http://dx.doi.org/10.3390/electronics13061025.
Kim, Dong-sup, and Seungwoo Shin. "The Economic Explainability of Machine Learning and Standard Econometric Models: An Application to the U.S. Mortgage Default Risk". International Journal of Strategic Property Management 25, no. 5 (July 13, 2021): 396–412. http://dx.doi.org/10.3846/ijspm.2021.15129.
Topcu, Deniz. "How to explain a machine learning model: HbA1c classification example". Journal of Medicine and Palliative Care 4, no. 2 (March 27, 2023): 117–25. http://dx.doi.org/10.47582/jompac.1259507.
Rodríguez Mallma, Mirko Jerber, Luis Zuloaga-Rotta, Rubén Borja-Rosales, Josef Renato Rodríguez Mallma, Marcos Vilca-Aguilar, María Salas-Ojeda, and David Mauricio. "Explainable Machine Learning Models for Brain Diseases: Insights from a Systematic Review". Neurology International 16, no. 6 (October 29, 2024): 1285–307. http://dx.doi.org/10.3390/neurolint16060098.
Shendkar, Bhagyashree D. "Explainable Machine Learning Models for Real-Time Threat Detection in Cybersecurity". Panamerican Mathematical Journal 35, no. 1s (November 13, 2024): 264–75. http://dx.doi.org/10.52783/pmj.v35.i1s.2313.
Texto completoTesis sobre el tema "Explainability of machine learning models"
Delaunay, Julien. "Explainability for machine learning models: from data adaptability to user perception". Electronic Thesis or Diss., Université de Rennes (2023-....), 2023. http://www.theses.fr/2023URENS076.
This thesis explores the generation of local explanations for already deployed machine learning models, aiming to identify optimal conditions for producing meaningful explanations considering both data and user requirements. The primary goal is to develop methods for generating explanations for any model while ensuring that these explanations remain faithful to the underlying model and comprehensible to the users. The thesis is divided into two parts. The first enhances a widely used rule-based explanation method to improve the quality of explanations. It then introduces a novel approach for evaluating the suitability of linear explanations to approximate a model. Additionally, it conducts a comparative experiment between two families of counterfactual explanation methods to analyze the advantages of one over the other. The second part focuses on user experiments to assess the impact of three explanation methods and two distinct representations. These experiments measure how users perceive their interaction with the model in terms of understanding and trust, depending on the explanations and representations. This research contributes to better explanation generation, with potential implications for enhancing the transparency, trustworthiness, and usability of deployed AI systems.
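The linear-explanation family this abstract evaluates can be made concrete in a few lines. Below is a minimal sketch of a local linear surrogate (the family LIME belongs to), not the thesis's own method: perturb the instance, query the black box, and fit a distance-weighted linear model. The dataset, black-box model, kernel width, and sampling scheme are all illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

X, y = load_breast_cancer(return_X_y=True)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

def local_linear_explanation(instance, n_samples=2000, kernel_width=1.0):
    """Fit a distance-weighted linear surrogate around a single instance."""
    rng = np.random.default_rng(0)
    scale = X.std(axis=0)
    # Sample neighbors of the instance with feature-scaled Gaussian noise.
    neighbors = instance + rng.normal(scale=scale, size=(n_samples, X.shape[1]))
    # Query the black box: predicted probability of the positive class.
    targets = black_box.predict_proba(neighbors)[:, 1]
    # Down-weight neighbors that are far from the explained instance.
    distances = np.linalg.norm((neighbors - instance) / scale, axis=1)
    weights = np.exp(-(distances ** 2) / kernel_width ** 2)
    surrogate = Ridge(alpha=1.0).fit(neighbors, targets, sample_weight=weights)
    return surrogate.coef_  # one local importance score per feature

print(local_linear_explanation(X[0])[:5])
```

Measuring how faithfully such a surrogate tracks the black box in the neighborhood is exactly the kind of suitability question the first part of the thesis addresses.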
Stanzione, Vincenzo Maria. "Developing a new approach for machine learning explainability combining local and global model-agnostic approaches". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amslaurea.unibo.it/25480/.
Texto completoAyad, Célia. "Towards Reliable Post Hoc Explanations for Machine Learning on Tabular Data and their Applications". Electronic Thesis or Diss., Institut polytechnique de Paris, 2024. http://www.theses.fr/2024IPPAX082.
As machine learning continues to demonstrate robust predictive capabilities, it has emerged as a very valuable tool in several scientific and industrial domains. However, as ML models evolve to achieve higher accuracy, they also become increasingly complex and require more parameters. Being able to understand the inner complexities and to establish trust in the predictions of these machine learning models has therefore become essential in various critical domains, including healthcare and finance. Researchers have developed explanation methods to make machine learning models more transparent, helping users understand why predictions are made. However, these explanation methods often fall short in accurately explaining model predictions, making it difficult for domain experts to utilize them effectively. It is crucial to identify the shortcomings of ML explanations, enhance their reliability, and make them more user-friendly. Additionally, with many ML tasks becoming more data-intensive and the demand for widespread integration rising, there is a need for methods that deliver strong predictive performance in a simpler and more cost-effective manner. In this dissertation, we address these problems in two main research thrusts: 1) We propose a methodology to evaluate various explainability methods in the context of specific data properties, such as noise levels, feature correlations, and class imbalance, and offer guidance for practitioners and researchers on selecting the most suitable explainability method based on the characteristics of their datasets, revealing where these methods excel or fail. Additionally, we provide clinicians with personalized explanations of cervical cancer risk factors based on their desired properties such as ease of understanding, consistency, and stability. 2) We introduce Shapley Chains, a new explanation technique designed to overcome the lack of explanations of multi-output predictions in the case of interdependent labels, where features may have indirect contributions to predict subsequent labels in the chain (i.e., the order in which these labels are predicted). Moreover, we propose Bayes LIME Chains to enhance the robustness of Shapley Chains.
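For readers unfamiliar with the multi-output setting Shapley Chains targets, the sketch below shows the off-the-shelf baseline, not the thesis's algorithm: a scikit-learn ClassifierChain explained with KernelSHAP. The synthetic dataset, chain order, and base model are illustrative assumptions. Because earlier labels are predicted internally, each input feature's SHAP value on the last label bundles together its direct effect and the effect routed through the chain, without separating the two the way Shapley Chains does.

```python
import numpy as np
import shap
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import ClassifierChain

# Synthetic multi-label task with interdependent labels.
X, Y = make_multilabel_classification(n_samples=500, n_features=6,
                                      n_classes=3, random_state=0)
chain = ClassifierChain(LogisticRegression(max_iter=1000),
                        order=[0, 1, 2], random_state=0).fit(X, Y)

def predict_last_label(data):
    # Probability of the last label in the chain; the two earlier labels are
    # predicted internally, so input features also act on it indirectly.
    return chain.predict_proba(data)[:, 2]

explainer = shap.KernelExplainer(predict_last_label, shap.sample(X, 50))
print(np.round(explainer.shap_values(X[:5]), 3))  # one row per explained sample
```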
Radulovic, Nedeljko. "Post-hoc Explainable AI for Black Box Models on Tabular Data". Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAT028.
Current state-of-the-art Artificial Intelligence (AI) models have proven to be very successful in solving various tasks, such as classification, regression, Natural Language Processing (NLP), and image processing. The resources at our disposal today allow us to train very complex AI models to solve different problems in almost any field: medicine, finance, justice, transportation, forecasting, etc. With the popularity and widespread use of AI models, the need to ensure trust in them has also grown. Complex as they come today, these AI models are impossible for humans to interpret and understand. In this thesis, we focus on a specific area of research, namely Explainable Artificial Intelligence (xAI), which aims to provide approaches for interpreting complex AI models and explaining their decisions. We present two approaches, STACI and BELLA, which focus on classification and regression tasks, respectively, for tabular data. Both methods are deterministic, model-agnostic, post-hoc approaches, which means that they can be applied to any black-box model after its creation. In this way, interpretability presents an added value without the need to compromise on the black-box model's performance. Our methods provide accurate, simple, and general interpretations of both the whole black-box model and its individual predictions. We confirmed their high performance through extensive experiments and a user study.
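As a point of reference for what "deterministic model-agnostic post-hoc" means in practice, here is a minimal global-surrogate sketch, not STACI or BELLA themselves: a shallow decision tree is trained to mimic a black-box classifier, and its fidelity to the black box is reported. The dataset and both models are illustrative assumptions.

```python
from sklearn.datasets import load_wine
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_wine(return_X_y=True)
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# Train the surrogate on the black box's *predictions*, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the interpretable surrogate agrees with the black box.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"surrogate fidelity: {fidelity:.2f}")
print(export_text(surrogate, feature_names=list(load_wine().feature_names)))
```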
Willot, Hénoïk. "Certified explanations of robust models". Electronic Thesis or Diss., Compiègne, 2024. http://www.theses.fr/2024COMP2812.
With the advent of automated or semi-automated decision systems in artificial intelligence comes the need to make them more reliable and transparent for an end-user. While the role of explainable methods is in general to increase transparency, reliability can be achieved by providing certified explanations, in the sense that those are guaranteed to be true, and by considering robust models that can abstain when having insufficient information, rather than enforcing precision for the mere sake of avoiding indecision. This last aspect is commonly referred to as skeptical inference. This work contributes to this effort by considering two cases: - The first one considers classical decision rules used to enforce fairness, namely Ordered Weighted Averaging (OWA) rules with decreasing weights (a small numeric sketch of such rules follows this abstract). Our main contribution is to fully characterise, from an axiomatic perspective, convex sets of such rules, and to provide alongside this sound and complete explanation schemes that can be efficiently obtained through heuristics. In doing so, we also provide a unifying framework between the restricted and generalized Lorenz dominance, two qualitative criteria, and precise decreasing OWA. - The second one considers that our decision rule is a classification model resulting from a learning procedure, where the resulting model is a set of probabilities. We study and discuss the problem of providing prime implicants as explanations in such a case, where in addition to explaining clear preferences of one class over the other, we also have to treat the problem of declaring two classes as being incomparable. We describe the corresponding problems in general ways, before studying in more detail the robust counterpart of the Naive Bayes Classifier.
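A small worked sketch of the decreasing-weight OWA rule from the first case, assuming one common convention in the fairness literature (outcomes sorted worst-first, so the largest weight falls on the worst-off component); the weights and alternatives are made up:

```python
import numpy as np

def owa(values, weights):
    """Ordered Weighted Average: weights attach to ranks, not to criteria.
    Outcomes are sorted worst-first, so decreasing weights emphasize the
    worst-off component (the fairness-oriented convention)."""
    return float(np.dot(weights, np.sort(values)))

weights = np.array([0.5, 0.3, 0.2])      # decreasing: rank 1 = worst outcome
balanced = np.array([0.6, 0.6, 0.6])     # same arithmetic mean as below (0.6)
unbalanced = np.array([1.0, 0.8, 0.0])

print(owa(balanced, weights))    # 0.60
print(owa(unbalanced, weights))  # 0.44 -> the balanced option is preferred
```

Both alternatives have the same arithmetic mean, yet the decreasing-weight rule ranks the balanced one higher; this preference for even distributions is what connects these rules to the Lorenz dominance criteria mentioned in the abstract.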
Kurasinski, Lukas. "Machine Learning explainability in text classification for Fake News detection". Thesis, Malmö universitet, Fakulteten för teknik och samhälle (TS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20058.
Lounici, Sofiane. "Watermarking machine learning models". Electronic Thesis or Diss., Sorbonne université, 2022. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2022SORUS282.pdf.
The protection of the intellectual property of machine learning models appears to be increasingly necessary, given the investments involved and their impact on society. In this thesis, we propose to study the watermarking of machine learning models. We provide a state of the art on current watermarking techniques, and then complement it by considering watermarking beyond image classification tasks. We then define forging attacks against watermarking for model hosting platforms and present a new fairness-based watermarking technique. In addition, we propose an implementation of the presented techniques.
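The sketch below illustrates one canonical technique from the state of the art this thesis surveys, trigger-set (backdoor-style) watermarking; it is not Lounici's fairness-based scheme. The dataset, model, trigger construction, and verification criterion are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
rng = np.random.default_rng(0)

# Secret trigger set: random noise images with arbitrary (secret) labels.
trigger_X = rng.uniform(0, 16, size=(20, X.shape[1]))
trigger_y = rng.integers(0, 10, size=20)

# Embed the watermark by training on task data plus the trigger set.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(np.vstack([X, trigger_X]), np.concatenate([y, trigger_y]))

# Verification: a watermarked model reproduces the secret labels far more
# often than an independent model would by chance (about 10% here).
print(f"trigger-set accuracy: {model.score(trigger_X, trigger_y):.2f}")
```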
Maltbie, Nicholas. "Integrating Explainability in Deep Learning Application Development: A Categorization and Case Study". University of Cincinnati / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1623169431719474.
Hardoon, David Roi. "Semantic models for machine learning". Thesis, University of Southampton, 2006. https://eprints.soton.ac.uk/262019/.
Bodini, Matteo. "Design and Explainability of Machine Learning Algorithms for the Classification of Cardiac Abnormalities from Electrocardiogram Signals". Doctoral thesis, Università degli Studi di Milano, 2022. http://hdl.handle.net/2434/888002.
Texto completoLibros sobre el tema "Explainability of machine learning models"
Nandi, Anirban, and Aditya Kumar Pal. Interpreting Machine Learning Models. Berkeley, CA: Apress, 2022. http://dx.doi.org/10.1007/978-1-4842-7802-4.
Bolc, Leonard. Computational Models of Learning. Berlin, Heidelberg: Springer Berlin Heidelberg, 1987.
Galindez Olascoaga, Laura Isabel, Wannes Meert, and Marian Verhelst. Hardware-Aware Probabilistic Machine Learning Models. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-74042-9.
Singh, Pramod. Deploy Machine Learning Models to Production. Berkeley, CA: Apress, 2021. http://dx.doi.org/10.1007/978-1-4842-6546-8.
Zhang, Zhihua. Statistical Machine Learning: Foundations, Methodologies and Models. UK: John Wiley & Sons, Limited, 2017.
Rendell, Larry. Representations and models for concept learning. Urbana, IL (1304 W. Springfield Ave., Urbana 61801): Dept. of Computer Science, University of Illinois at Urbana-Champaign, 1987.
Ehteram, Mohammad, Zohreh Sheikh Khozani, Saeed Soltani-Mohammadi, and Maliheh Abbaszadeh. Estimating Ore Grade Using Evolutionary Machine Learning Models. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-19-8106-7.
Bisong, Ekaba. Building Machine Learning and Deep Learning Models on Google Cloud Platform. Berkeley, CA: Apress, 2019. http://dx.doi.org/10.1007/978-1-4842-4470-8.
Gupta, Punit, Mayank Kumar Goyal, Sudeshna Chakraborty, and Ahmed A. Elngar. Machine Learning and Optimization Models for Optimization in Cloud. Boca Raton: Chapman and Hall/CRC, 2022. http://dx.doi.org/10.1201/9781003185376.
Suthaharan, Shan. Machine Learning Models and Algorithms for Big Data Classification. Boston, MA: Springer US, 2016. http://dx.doi.org/10.1007/978-1-4899-7641-3.
Texto completoCapítulos de libros sobre el tema "Explainability of machine learning models"
Nandi, Anirban, and Aditya Kumar Pal. "The Eight Pitfalls of Explainability Methods". In Interpreting Machine Learning Models, 321–28. Berkeley, CA: Apress, 2022. http://dx.doi.org/10.1007/978-1-4842-7802-4_15.
Nandi, Anirban, and Aditya Kumar Pal. "Explainability Facts: A Framework for Systematic Assessment of Explainable Approaches". In Interpreting Machine Learning Models, 69–82. Berkeley, CA: Apress, 2022. http://dx.doi.org/10.1007/978-1-4842-7802-4_6.
Kamath, Uday, and John Liu. "Pre-model Interpretability and Explainability". In Explainable Artificial Intelligence: An Introduction to Interpretable Machine Learning, 27–77. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-83356-5_2.
Dessain, Jean, Nora Bentaleb, and Fabien Vinas. "Cost of Explainability in AI: An Example with Credit Scoring Models". In Communications in Computer and Information Science, 498–516. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-44064-9_26.
Henriques, J., T. Rocha, P. de Carvalho, C. Silva, and S. Paredes. "Interpretability and Explainability of Machine Learning Models: Achievements and Challenges". In IFMBE Proceedings, 81–94. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-59216-4_9.
Bargal, Sarah Adel, Andrea Zunino, Vitali Petsiuk, Jianming Zhang, Vittorio Murino, Stan Sclaroff, and Kate Saenko. "Beyond the Visual Analysis of Deep Model Saliency". In xxAI - Beyond Explainable AI, 255–69. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04083-2_13.
Stevens, Alexander, Johannes De Smedt, and Jari Peeperkorn. "Quantifying Explainability in Outcome-Oriented Predictive Process Monitoring". In Lecture Notes in Business Information Processing, 194–206. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-98581-3_15.
Baniecki, Hubert, Wojciech Kretowicz, and Przemyslaw Biecek. "Fooling Partial Dependence via Data Poisoning". In Machine Learning and Knowledge Discovery in Databases, 121–36. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-26409-2_8.
Colosimo, Bianca Maria, and Fabio Centofanti. "Model Interpretability, Explainability and Trust for Manufacturing 4.0". In Interpretability for Industry 4.0: Statistical and Machine Learning Approaches, 21–36. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-12402-0_2.
Santos, Geanderson, Amanda Santana, Gustavo Vale, and Eduardo Figueiredo. "Yet Another Model! A Study on Model's Similarities for Defect and Code Smells". In Fundamental Approaches to Software Engineering, 282–305. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-30826-0_16.
Texto completoActas de conferencias sobre el tema "Explainability of machine learning models"
Bouzid, Mohamed, and Manar Amayri. "Addressing Explainability in Load Forecasting Using Time Series Machine Learning Models". In 2024 IEEE 12th International Conference on Smart Energy Grid Engineering (SEGE), 233–40. IEEE, 2024. http://dx.doi.org/10.1109/sege62220.2024.10739606.
Burgos, David, Ahsan Morshed, MD Mamunur Rashid, and Satria Mandala. "A Comparison of Machine Learning Models to Deep Learning Models for Cancer Image Classification and Explainability of Classification". In 2024 International Conference on Data Science and Its Applications (ICoDSA), 386–90. IEEE, 2024. http://dx.doi.org/10.1109/icodsa62899.2024.10651790.
Sheikhani, Arman, Ervin Agic, Mahshid Helali Moghadam, Juan Carlos Andresen, and Anders Vesterberg. "Lithium-Ion Battery SOH Forecasting: From Deep Learning Augmented by Explainability to Lightweight Machine Learning Models". In 2024 IEEE 29th International Conference on Emerging Technologies and Factory Automation (ETFA), 1–4. IEEE, 2024. http://dx.doi.org/10.1109/etfa61755.2024.10710794.
Mechouche, Ammar, Valerio Camerini, Caroline Del, Elsa Cansell, and Konstanca Nikolajevic. "From Dampers Estimated Loads to In-Service Degradation Correlations". In Vertical Flight Society 80th Annual Forum & Technology Display, 1–10. The Vertical Flight Society, 2024. http://dx.doi.org/10.4050/f-0080-2024-1108.
Izza, Yacine, Xuanxiang Huang, Antonio Morgado, Jordi Planes, Alexey Ignatiev, and Joao Marques-Silva. "Distance-Restricted Explanations: Theoretical Underpinnings & Efficient Implementation". In 21st International Conference on Principles of Knowledge Representation and Reasoning (KR 2024), 475–86. California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/kr.2024/45.
Alami, Amine, Jaouad Boumhidi, and Loqman Chakir. "Explainability in CNN based Deep Learning models for medical image classification". In 2024 International Conference on Intelligent Systems and Computer Vision (ISCV), 1–6. IEEE, 2024. http://dx.doi.org/10.1109/iscv60512.2024.10620149.
Rodríguez-Barroso, Nuria, Javier Del Ser, M. Victoria Luzón, and Francisco Herrera. "Defense Strategy against Byzantine Attacks in Federated Machine Learning: Developments towards Explainability". In 2024 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 1–8. IEEE, 2024. http://dx.doi.org/10.1109/fuzz-ieee60900.2024.10611769.
Perikos, Isidoros. "Sensitive Content Detection in Social Networks Using Deep Learning Models and Explainability Techniques". In 2024 IEEE/ACIS 9th International Conference on Big Data, Cloud Computing, and Data Science (BCD), 48–53. IEEE, 2024. http://dx.doi.org/10.1109/bcd61269.2024.10743081.
Gafur, Jamil, Steve Goddard, and William Lai. "Adversarial Robustness and Explainability of Machine Learning Models". In PEARC '24: Practice and Experience in Advanced Research Computing. New York, NY, USA: ACM, 2024. http://dx.doi.org/10.1145/3626203.3670522.
Islam, Md Ariful, Kowshik Nittala, and Garima Bajwa. "Adding Explainability to Machine Learning Models to Detect Chronic Kidney Disease". In 2022 IEEE 23rd International Conference on Information Reuse and Integration for Data Science (IRI). IEEE, 2022. http://dx.doi.org/10.1109/iri54793.2022.00069.
Reports on the topic "Explainability of machine learning models"
Smith, Michael, Erin Acquesta, Arlo Ames, Alycia Carey, Christopher Cuellar, Richard Field, Trevor Maxfield, et al. SAGE Intrusion Detection System: Sensitivity Analysis Guided Explainability for Machine Learning. Office of Scientific and Technical Information (OSTI), September 2021. http://dx.doi.org/10.2172/1820253.
Skryzalin, Jacek, Kenneth Goss, and Benjamin Jackson. Securing machine learning models. Office of Scientific and Technical Information (OSTI), September 2020. http://dx.doi.org/10.2172/1661020.
Martinez, Carianne, Jessica Jones, Drew Levin, Nathaniel Trask, and Patrick Finley. Physics-Informed Machine Learning for Epidemiological Models. Office of Scientific and Technical Information (OSTI), October 2020. http://dx.doi.org/10.2172/1706217.
Lavender, Samantha, and Trent Tinker, eds. Testbed-19: Machine Learning Models Engineering Report. Open Geospatial Consortium, Inc., April 2024. http://dx.doi.org/10.62973/23-033.
Saenz, Juan Antonio, Ismael Djibrilla Boureima, Vitaliy Gyrya, and Susan Kurien. Machine-Learning for Rapid Optimization of Turbulence Models. Office of Scientific and Technical Information (OSTI), July 2020. http://dx.doi.org/10.2172/1638623.
Kulkarni, Sanjeev R. Extending and Unifying Formal Models for Machine Learning. Fort Belvoir, VA: Defense Technical Information Center, July 1997. http://dx.doi.org/10.21236/ada328730.
Banerjee, Boudhayan. Machine Learning Models for Political Video Advertisement Classification. Ames (Iowa): Iowa State University, January 2017. http://dx.doi.org/10.31274/cc-20240624-976.
Valaitis, Vytautas, and Alessandro T. Villa. A Machine Learning Projection Method for Macro-Finance Models. Federal Reserve Bank of Chicago, 2022. http://dx.doi.org/10.21033/wp-2022-19.
Fessel, Kimberly. Machine Learning in Python. Instats Inc., 2024. http://dx.doi.org/10.61700/s74zy0ivgwioe1764.
Ogunbire, Abimbola, Panick Kalambay, Hardik Gajera, and Srinivas Pulugurtha. Deep Learning, Machine Learning, or Statistical Models for Weather-related Crash Severity Prediction. Mineta Transportation Institute, December 2023. http://dx.doi.org/10.31979/mti.2023.2320.