Ready-made bibliography on the topic "Explainability of machine learning models"
Create accurate references in APA, MLA, Chicago, Harvard, and many other styles
Browse lists of current articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Explainability of machine learning models".
An "Add to bibliography" button is available next to every work in the bibliography. Use it, and we will automatically create a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of a scholarly publication in .pdf format and read its abstract online, if the relevant details are available in the metadata.
Journal articles on the topic "Explainability of machine learning models"
S, Akshay, and Manu Madhavan. "COMPARISON OF EXPLAINABILITY OF MACHINE LEARNING BASED MALAYALAM TEXT CLASSIFICATION". ICTACT Journal on Soft Computing 15, no. 1 (July 1, 2024): 3386–91. http://dx.doi.org/10.21917/ijsc.2024.0476.
Park, Min Sue, Hwijae Son, Chongseok Hyun, and Hyung Ju Hwang. "Explainability of Machine Learning Models for Bankruptcy Prediction". IEEE Access 9 (2021): 124887–99. http://dx.doi.org/10.1109/access.2021.3110270.
Cheng, Xueyi, and Chang Che. "Interpretable Machine Learning: Explainability in Algorithm Design". Journal of Industrial Engineering and Applied Science 2, no. 6 (December 1, 2024): 65–70. https://doi.org/10.70393/6a69656173.323337.
Bozorgpanah, Aso, Vicenç Torra, and Laya Aliahmadipour. "Privacy and Explainability: The Effects of Data Protection on Shapley Values". Technologies 10, no. 6 (December 1, 2022): 125. http://dx.doi.org/10.3390/technologies10060125.
Zhang, Xueting. "Traffic Flow Prediction Based on Explainable Machine Learning". Highlights in Science, Engineering and Technology 56 (July 14, 2023): 56–64. http://dx.doi.org/10.54097/hset.v56i.9816.
Pendyala, Vishnu, and Hyungkyun Kim. "Assessing the Reliability of Machine Learning Models Applied to the Mental Health Domain Using Explainable AI". Electronics 13, no. 6 (March 8, 2024): 1025. http://dx.doi.org/10.3390/electronics13061025.
Kim, Dong-sup, and Seungwoo Shin. "THE ECONOMIC EXPLAINABILITY OF MACHINE LEARNING AND STANDARD ECONOMETRIC MODELS-AN APPLICATION TO THE U.S. MORTGAGE DEFAULT RISK". International Journal of Strategic Property Management 25, no. 5 (July 13, 2021): 396–412. http://dx.doi.org/10.3846/ijspm.2021.15129.
TOPCU, Deniz. "How to explain a machine learning model: HbA1c classification example". Journal of Medicine and Palliative Care 4, no. 2 (March 27, 2023): 117–25. http://dx.doi.org/10.47582/jompac.1259507.
Rodríguez Mallma, Mirko Jerber, Luis Zuloaga-Rotta, Rubén Borja-Rosales, Josef Renato Rodríguez Mallma, Marcos Vilca-Aguilar, María Salas-Ojeda, and David Mauricio. "Explainable Machine Learning Models for Brain Diseases: Insights from a Systematic Review". Neurology International 16, no. 6 (October 29, 2024): 1285–307. http://dx.doi.org/10.3390/neurolint16060098.
Bhagyashree D Shendkar. "Explainable Machine Learning Models for Real-Time Threat Detection in Cybersecurity". Panamerican Mathematical Journal 35, no. 1s (November 13, 2024): 264–75. http://dx.doi.org/10.52783/pmj.v35.i1s.2313.
Pełny tekst źródłaRozprawy doktorskie na temat "Explainability of machine learning models"
Delaunay, Julien. "Explainability for machine learning models : from data adaptability to user perception". Electronic Thesis or Diss., Université de Rennes (2023-....), 2023. http://www.theses.fr/2023URENS076.
This thesis explores the generation of local explanations for already deployed machine learning models, aiming to identify optimal conditions for producing meaningful explanations considering both data and user requirements. The primary goal is to develop methods for generating explanations for any model while ensuring that these explanations remain faithful to the underlying model and comprehensible to users. The thesis is divided into two parts. The first enhances a widely used rule-based explanation method to improve the quality of explanations. It then introduces a novel approach for evaluating the suitability of linear explanations to approximate a model. Additionally, it conducts a comparative experiment between two families of counterfactual explanation methods to analyze the advantages of one over the other. The second part focuses on user experiments to assess the impact of three explanation methods and two distinct representations. These experiments measure how users perceive their interaction with the model in terms of understanding and trust, depending on the explanations and representations. This research contributes to better explanation generation, with potential implications for enhancing the transparency, trustworthiness, and usability of deployed AI systems.
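The abstract above centres on local, model-agnostic explanations (rule-based, linear, counterfactual). As a purely illustrative companion, here is a minimal LIME-style sketch of a local linear surrogate; the synthetic dataset, perturbation scale, and proximity kernel are assumptions chosen for the example, not details taken from the thesis.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# Train a black-box model on synthetic tabular data.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Instance to explain and random perturbations around it.
x0 = X[0]
rng = np.random.default_rng(0)
Z = x0 + rng.normal(scale=0.3, size=(200, X.shape[1]))

# Query the black box and weight the perturbed samples by proximity to x0.
pz = black_box.predict_proba(Z)[:, 1]
weights = np.exp(-np.linalg.norm(Z - x0, axis=1) ** 2)

# Fit a weighted linear surrogate; its coefficients act as the local explanation.
surrogate = Ridge(alpha=1.0).fit(Z, pz, sample_weight=weights)
print("local feature contributions:", surrogate.coef_)
```

The surrogate's coefficients approximate how each feature moves the black-box prediction in the neighbourhood of x0, which is the kind of faithfulness-versus-comprehensibility trade-off the thesis evaluates.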
Stanzione, Vincenzo Maria. "Developing a new approach for machine learning explainability combining local and global model-agnostic approaches". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amslaurea.unibo.it/25480/.
Pełny tekst źródłaAyad, Célia. "Towards Reliable Post Hoc Explanations for Machine Learning on Tabular Data and their Applications". Electronic Thesis or Diss., Institut polytechnique de Paris, 2024. http://www.theses.fr/2024IPPAX082.
As machine learning continues to demonstrate robust predictive capabilities, it has emerged as a very valuable tool in several scientific and industrial domains. However, as ML models evolve to achieve higher accuracy, they also become increasingly complex and require more parameters. Being able to understand the inner complexities of these machine learning models, and to establish trust in their predictions, has therefore become essential in various critical domains, including healthcare and finance. Researchers have developed explanation methods to make machine learning models more transparent, helping users understand why predictions are made. However, these explanation methods often fall short in accurately explaining model predictions, making it difficult for domain experts to utilize them effectively. It is crucial to identify the shortcomings of ML explanations, enhance their reliability, and make them more user-friendly. Additionally, with many ML tasks becoming more data-intensive and the demand for widespread integration rising, there is a need for methods that deliver strong predictive performance in a simpler and more cost-effective manner. In this dissertation, we address these problems in two main research thrusts: 1) We propose a methodology to evaluate various explainability methods in the context of specific data properties, such as noise levels, feature correlations, and class imbalance, and offer guidance for practitioners and researchers on selecting the most suitable explainability method based on the characteristics of their datasets, revealing where these methods excel or fail. Additionally, we provide clinicians with personalized explanations of cervical cancer risk factors based on their desired properties, such as ease of understanding, consistency, and stability. 2) We introduce Shapley Chains, a new explanation technique designed to overcome the lack of explanations of multi-output predictions in the case of interdependent labels, where features may have indirect contributions to predicting subsequent labels in the chain (i.e. the order in which these labels are predicted). Moreover, we propose Bayes LIME Chains to enhance the robustness of Shapley Chains.
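For readers unfamiliar with Shapley-value attributions on multi-output models, the toy sketch below hand-computes exact Shapley values for one label of a scikit-learn ClassifierChain, using mean-imputation to "remove" features. It only illustrates the underlying idea; it is not the Shapley Chains or Bayes LIME Chains algorithms from the thesis, which additionally trace indirect feature contributions through the label chain.

```python
from itertools import combinations
from math import comb

import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import ClassifierChain

# A small multi-label problem and a classifier chain over its labels.
X, Y = make_multilabel_classification(n_samples=300, n_features=4, n_classes=3,
                                      n_labels=2, random_state=0)
chain = ClassifierChain(LogisticRegression(max_iter=1000), random_state=0).fit(X, Y)

x0, mean_x = X[0], X.mean(axis=0)
label = 2  # explain the last label in the chain

def value(subset):
    """Chain output for x0 with features outside `subset` replaced by the mean."""
    z = mean_x.copy()
    z[list(subset)] = x0[list(subset)]
    return chain.predict_proba(z.reshape(1, -1))[0, label]

# Exact Shapley values: average marginal contribution over all feature subsets.
n = X.shape[1]
phi = np.zeros(n)
for i in range(n):
    others = [j for j in range(n) if j != i]
    for k in range(n):
        for S in combinations(others, k):
            w = 1.0 / (n * comb(n - 1, k))          # |S|! (n-|S|-1)! / n!
            phi[i] += w * (value(S + (i,)) - value(S))

print("Shapley contributions for label", label, ":", phi)
```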
Radulovic, Nedeljko. "Post-hoc Explainable AI for Black Box Models on Tabular Data". Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAT028.
Current state-of-the-art Artificial Intelligence (AI) models have been proven to be very successful in solving various tasks, such as classification, regression, Natural Language Processing (NLP), and image processing. The resources that we have at hand today allow us to train very complex AI models to solve different problems in almost any field: medicine, finance, justice, transportation, forecasting, etc. With the popularity and widespread use of AI models, the need to ensure trust in them has also grown. Complex as they come today, these AI models are impossible for humans to interpret and understand. In this thesis, we focus on a specific area of research, namely Explainable Artificial Intelligence (xAI), which aims to provide approaches to interpret complex AI models and explain their decisions. We present two approaches, STACI and BELLA, which focus on classification and regression tasks, respectively, for tabular data. Both methods are deterministic, model-agnostic, post-hoc approaches, which means that they can be applied to any black-box model after its creation. In this way, interpretability presents an added value without the need to compromise on the black-box model's performance. Our methods provide accurate, simple, and general interpretations of both the whole black-box model and its individual predictions. We confirmed their high performance through extensive experiments and a user study.
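STACI and BELLA themselves are not reproduced here, but the following minimal sketch shows the general post-hoc, model-agnostic pattern the abstract refers to: an interpretable surrogate (here an assumed shallow decision tree) is fitted to the predictions of a black-box model, and its fidelity to that model is reported alongside its readable rules.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

# The opaque model whose behaviour we want to interpret.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# A shallow tree is trained to imitate the black-box predictions (not the true
# labels), so its rules can be read as an approximate global explanation.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box on the same data.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"fidelity to the black box: {fidelity:.2%}")
print(export_text(surrogate, feature_names=[str(f) for f in data.feature_names]))
```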
Willot, Hénoïk. "Certified explanations of robust models". Electronic Thesis or Diss., Compiègne, 2024. http://www.theses.fr/2024COMP2812.
With the advent of automated or semi-automated decision systems in artificial intelligence comes the need to make them more reliable and transparent for an end-user. While the role of explainable methods is in general to increase transparency, reliability can be achieved by providing certified explanations, in the sense that they are guaranteed to be true, and by considering robust models that can abstain when they have insufficient information, rather than enforcing precision for the mere sake of avoiding indecision. This last aspect is commonly referred to as skeptical inference. This work contributes to this effort by considering two cases: - The first considers classical decision rules used to enforce fairness, namely Ordered Weighted Averaging (OWA) operators with decreasing weights. Our main contribution is to fully characterise, from an axiomatic perspective, convex sets of such rules, and to provide, alongside this characterisation, sound and complete explanation schemes that can be obtained efficiently through heuristics. In doing so, we also provide a unifying framework between restricted and generalized Lorenz dominance, two qualitative criteria, and precise decreasing OWA. - The second considers that our decision rule is a classification model resulting from a learning procedure, where the resulting model is a set of probabilities. We study and discuss the problem of providing prime implicants as explanations in such a case, where in addition to explaining clear preferences of one class over another, we also have to treat the problem of declaring two classes as incomparable. We describe the corresponding problems in general terms, before studying in more detail the robust counterpart of the Naive Bayes Classifier.
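As a small numerical illustration of the decision rules mentioned above, the sketch below evaluates an Ordered Weighted Averaging operator with decreasing weights under the fairness-oriented convention (weights applied to outcomes sorted from worst to best, consistent with Lorenz dominance); the particular weight vector and example allocations are assumptions made for the illustration, not values taken from the thesis.

```python
import numpy as np

def fair_owa(outcomes, weights):
    """OWA with decreasing weights applied to outcomes sorted from worst to best,
    so the worst-off component receives the largest weight."""
    return float(np.sort(np.asarray(outcomes, dtype=float)) @ np.asarray(weights))

# Decreasing, normalised weight vector (illustrative assumption).
w = np.array([0.4, 0.3, 0.2, 0.1])

# Two allocations with the same total utility: the balanced one scores higher,
# which is the equity property such rules are meant to enforce.
print(fair_owa([5, 5, 5, 5], w))   # 5.0
print(fair_owa([8, 6, 4, 2], w))   # 0.4*2 + 0.3*4 + 0.2*6 + 0.1*8 = 4.0
```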
Kurasinski, Lukas. "Machine Learning explainability in text classification for Fake News detection". Thesis, Malmö universitet, Fakulteten för teknik och samhälle (TS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20058.
Pełny tekst źródłaLounici, Sofiane. "Watermarking machine learning models". Electronic Thesis or Diss., Sorbonne université, 2022. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2022SORUS282.pdf.
The protection of the intellectual property of machine learning models appears to be increasingly necessary, given the investments involved and their impact on society. In this thesis, we propose to study the watermarking of machine learning models. We provide a state of the art on current watermarking techniques and then complement it by considering watermarking beyond image classification tasks. We then define forging attacks against watermarking for model hosting platforms and present a new fairness-based watermarking technique. In addition, we propose an implementation of the presented techniques.
Maltbie, Nicholas. "Integrating Explainability in Deep Learning Application Development: A Categorization and Case Study". University of Cincinnati / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1623169431719474.
Hardoon, David Roi. "Semantic models for machine learning". Thesis, University of Southampton, 2006. https://eprints.soton.ac.uk/262019/.
BODINI, MATTEO. "DESIGN AND EXPLAINABILITY OF MACHINE LEARNING ALGORITHMS FOR THE CLASSIFICATION OF CARDIAC ABNORMALITIES FROM ELECTROCARDIOGRAM SIGNALS". Doctoral thesis, Università degli Studi di Milano, 2022. http://hdl.handle.net/2434/888002.
Books on the topic "Explainability of machine learning models"
Nandi, Anirban, and Aditya Kumar Pal. Interpreting Machine Learning Models. Berkeley, CA: Apress, 2022. http://dx.doi.org/10.1007/978-1-4842-7802-4.
Bolc, Leonard. Computational Models of Learning. Berlin, Heidelberg: Springer Berlin Heidelberg, 1987.
Galindez Olascoaga, Laura Isabel, Wannes Meert, and Marian Verhelst. Hardware-Aware Probabilistic Machine Learning Models. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-74042-9.
Singh, Pramod. Deploy Machine Learning Models to Production. Berkeley, CA: Apress, 2021. http://dx.doi.org/10.1007/978-1-4842-6546-8.
Zhang, Zhihua. Statistical Machine Learning: Foundations, Methodologies and Models. UK: John Wiley & Sons, Limited, 2017.
Rendell, Larry. Representations and models for concept learning. Urbana, IL (1304 W. Springfield Ave., Urbana 61801): Dept. of Computer Science, University of Illinois at Urbana-Champaign, 1987.
Ehteram, Mohammad, Zohreh Sheikh Khozani, Saeed Soltani-Mohammadi, and Maliheh Abbaszadeh. Estimating Ore Grade Using Evolutionary Machine Learning Models. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-19-8106-7.
Bisong, Ekaba. Building Machine Learning and Deep Learning Models on Google Cloud Platform. Berkeley, CA: Apress, 2019. http://dx.doi.org/10.1007/978-1-4842-4470-8.
Gupta, Punit, Mayank Kumar Goyal, Sudeshna Chakraborty, and Ahmed A. Elngar. Machine Learning and Optimization Models for Optimization in Cloud. Boca Raton: Chapman and Hall/CRC, 2022. http://dx.doi.org/10.1201/9781003185376.
Suthaharan, Shan. Machine Learning Models and Algorithms for Big Data Classification. Boston, MA: Springer US, 2016. http://dx.doi.org/10.1007/978-1-4899-7641-3.
Book chapters on the topic "Explainability of machine learning models"
Nandi, Anirban, and Aditya Kumar Pal. "The Eight Pitfalls of Explainability Methods". In Interpreting Machine Learning Models, 321–28. Berkeley, CA: Apress, 2022. http://dx.doi.org/10.1007/978-1-4842-7802-4_15.
Nandi, Anirban, and Aditya Kumar Pal. "Explainability Facts: A Framework for Systematic Assessment of Explainable Approaches". In Interpreting Machine Learning Models, 69–82. Berkeley, CA: Apress, 2022. http://dx.doi.org/10.1007/978-1-4842-7802-4_6.
Kamath, Uday, and John Liu. "Pre-model Interpretability and Explainability". In Explainable Artificial Intelligence: An Introduction to Interpretable Machine Learning, 27–77. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-83356-5_2.
Dessain, Jean, Nora Bentaleb, and Fabien Vinas. "Cost of Explainability in AI: An Example with Credit Scoring Models". In Communications in Computer and Information Science, 498–516. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-44064-9_26.
Henriques, J., T. Rocha, P. de Carvalho, C. Silva, and S. Paredes. "Interpretability and Explainability of Machine Learning Models: Achievements and Challenges". In IFMBE Proceedings, 81–94. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-59216-4_9.
Bargal, Sarah Adel, Andrea Zunino, Vitali Petsiuk, Jianming Zhang, Vittorio Murino, Stan Sclaroff, and Kate Saenko. "Beyond the Visual Analysis of Deep Model Saliency". In xxAI - Beyond Explainable AI, 255–69. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04083-2_13.
Stevens, Alexander, Johannes De Smedt, and Jari Peeperkorn. "Quantifying Explainability in Outcome-Oriented Predictive Process Monitoring". In Lecture Notes in Business Information Processing, 194–206. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-98581-3_15.
Baniecki, Hubert, Wojciech Kretowicz, and Przemyslaw Biecek. "Fooling Partial Dependence via Data Poisoning". In Machine Learning and Knowledge Discovery in Databases, 121–36. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-26409-2_8.
Colosimo, Bianca Maria, and Fabio Centofanti. "Model Interpretability, Explainability and Trust for Manufacturing 4.0". In Interpretability for Industry 4.0: Statistical and Machine Learning Approaches, 21–36. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-12402-0_2.
Santos, Geanderson, Amanda Santana, Gustavo Vale, and Eduardo Figueiredo. "Yet Another Model! A Study on Model's Similarities for Defect and Code Smells". In Fundamental Approaches to Software Engineering, 282–305. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-30826-0_16.
Conference papers on the topic "Explainability of machine learning models"
Bouzid, Mohamed, and Manar Amayri. "Addressing Explainability in Load Forecasting Using Time Series Machine Learning Models". In 2024 IEEE 12th International Conference on Smart Energy Grid Engineering (SEGE), 233–40. IEEE, 2024. http://dx.doi.org/10.1109/sege62220.2024.10739606.
Burgos, David, Ahsan Morshed, MD Mamunur Rashid, and Satria Mandala. "A Comparison of Machine Learning Models to Deep Learning Models for Cancer Image Classification and Explainability of Classification". In 2024 International Conference on Data Science and Its Applications (ICoDSA), 386–90. IEEE, 2024. http://dx.doi.org/10.1109/icodsa62899.2024.10651790.
Sheikhani, Arman, Ervin Agic, Mahshid Helali Moghadam, Juan Carlos Andresen, and Anders Vesterberg. "Lithium-Ion Battery SOH Forecasting: From Deep Learning Augmented by Explainability to Lightweight Machine Learning Models". In 2024 IEEE 29th International Conference on Emerging Technologies and Factory Automation (ETFA), 1–4. IEEE, 2024. http://dx.doi.org/10.1109/etfa61755.2024.10710794.
Mechouche, Ammar, Valerio Camerini, Caroline Del, Elsa Cansell, and Konstanca Nikolajevic. "From Dampers Estimated Loads to In-Service Degradation Correlations". In Vertical Flight Society 80th Annual Forum & Technology Display, 1–10. The Vertical Flight Society, 2024. http://dx.doi.org/10.4050/f-0080-2024-1108.
Izza, Yacine, Xuanxiang Huang, Antonio Morgado, Jordi Planes, Alexey Ignatiev, and Joao Marques-Silva. "Distance-Restricted Explanations: Theoretical Underpinnings & Efficient Implementation". In 21st International Conference on Principles of Knowledge Representation and Reasoning {KR-2023}, 475–86. California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/kr.2024/45.
Alami, Amine, Jaouad Boumhidi, and Loqman Chakir. "Explainability in CNN based Deep Learning models for medical image classification". In 2024 International Conference on Intelligent Systems and Computer Vision (ISCV), 1–6. IEEE, 2024. http://dx.doi.org/10.1109/iscv60512.2024.10620149.
Rodríguez-Barroso, Nuria, Javier Del Ser, M. Victoria Luzón, and Francisco Herrera. "Defense Strategy against Byzantine Attacks in Federated Machine Learning: Developments towards Explainability". In 2024 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 1–8. IEEE, 2024. http://dx.doi.org/10.1109/fuzz-ieee60900.2024.10611769.
Perikos, Isidoros. "Sensitive Content Detection in Social Networks Using Deep Learning Models and Explainability Techniques". In 2024 IEEE/ACIS 9th International Conference on Big Data, Cloud Computing, and Data Science (BCD), 48–53. IEEE, 2024. http://dx.doi.org/10.1109/bcd61269.2024.10743081.
Gafur, Jamil, Steve Goddard, and William Lai. "Adversarial Robustness and Explainability of Machine Learning Models". In PEARC '24: Practice and Experience in Advanced Research Computing. New York, NY, USA: ACM, 2024. http://dx.doi.org/10.1145/3626203.3670522.
Islam, Md Ariful, Kowshik Nittala, and Garima Bajwa. "Adding Explainability to Machine Learning Models to Detect Chronic Kidney Disease". In 2022 IEEE 23rd International Conference on Information Reuse and Integration for Data Science (IRI). IEEE, 2022. http://dx.doi.org/10.1109/iri54793.2022.00069.
Organizational reports on the topic "Explainability of machine learning models"
Smith, Michael, Erin Acquesta, Arlo Ames, Alycia Carey, Christopher Cuellar, Richard Field, Trevor Maxfield, et al. SAGE Intrusion Detection System: Sensitivity Analysis Guided Explainability for Machine Learning. Office of Scientific and Technical Information (OSTI), September 2021. http://dx.doi.org/10.2172/1820253.
Skryzalin, Jacek, Kenneth Goss, and Benjamin Jackson. Securing machine learning models. Office of Scientific and Technical Information (OSTI), September 2020. http://dx.doi.org/10.2172/1661020.
Martinez, Carianne, Jessica Jones, Drew Levin, Nathaniel Trask, and Patrick Finley. Physics-Informed Machine Learning for Epidemiological Models. Office of Scientific and Technical Information (OSTI), October 2020. http://dx.doi.org/10.2172/1706217.
Lavender, Samantha, and Trent Tinker, eds. Testbed-19: Machine Learning Models Engineering Report. Open Geospatial Consortium, Inc., April 2024. http://dx.doi.org/10.62973/23-033.
Saenz, Juan Antonio, Ismael Djibrilla Boureima, Vitaliy Gyrya, and Susan Kurien. Machine-Learning for Rapid Optimization of Turbulence Models. Office of Scientific and Technical Information (OSTI), July 2020. http://dx.doi.org/10.2172/1638623.
Kulkarni, Sanjeev R. Extending and Unifying Formal Models for Machine Learning. Fort Belvoir, VA: Defense Technical Information Center, July 1997. http://dx.doi.org/10.21236/ada328730.
Banerjee, Boudhayan. Machine Learning Models for Political Video Advertisement Classification. Ames (Iowa): Iowa State University, January 2017. http://dx.doi.org/10.31274/cc-20240624-976.
Valaitis, Vytautas, and Alessandro T. Villa. A Machine Learning Projection Method for Macro-Finance Models. Federal Reserve Bank of Chicago, 2022. http://dx.doi.org/10.21033/wp-2022-19.
Fessel, Kimberly. Machine Learning in Python. Instats Inc., 2024. http://dx.doi.org/10.61700/s74zy0ivgwioe1764.
Ogunbire, Abimbola, Panick Kalambay, Hardik Gajera, and Srinivas Pulugurtha. Deep Learning, Machine Learning, or Statistical Models for Weather-related Crash Severity Prediction. Mineta Transportation Institute, December 2023. http://dx.doi.org/10.31979/mti.2023.2320.