Academic literature on the topic 'Explainability of machine learning models'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Explainability of machine learning models.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Explainability of machine learning models"

1

S, Akshay, and Manu Madhavan. "COMPARISON OF EXPLAINABILITY OF MACHINE LEARNING BASED MALAYALAM TEXT CLASSIFICATION." ICTACT Journal on Soft Computing 15, no. 1 (July 1, 2024): 3386–91. http://dx.doi.org/10.21917/ijsc.2024.0476.

Full text
Abstract:
Text classification is one of the primary NLP tasks where machine learning (ML) is widely used. Even though the applied machine learning models are similar, the classification task may address specific challenges from language to language. The concept of model explainability can provide an idea of how the models make decisions in these situations. In this paper, the explainability of different text classification models for Malayalam, a morphologically rich Dravidian language predominantly spoken in Kerala, is compared. The experiments considered classification models from both the traditional ML and deep learning genres. The experiments were conducted on three different datasets, and explainability scores were formulated for each of the selected models. The results show that the deep learning models did very well with respect to performance metrics, whereas the traditional machine learning models performed as well, if not better, in terms of explainability.
APA, Harvard, Vancouver, ISO, and other styles
2

Park, Min Sue, Hwijae Son, Chongseok Hyun, and Hyung Ju Hwang. "Explainability of Machine Learning Models for Bankruptcy Prediction." IEEE Access 9 (2021): 124887–99. http://dx.doi.org/10.1109/access.2021.3110270.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Cheng, Xueyi, and Chang Che. "Interpretable Machine Learning: Explainability in Algorithm Design." Journal of Industrial Engineering and Applied Science 2, no. 6 (December 1, 2024): 65–70. https://doi.org/10.70393/6a69656173.323337.

Full text
Abstract:
In recent years, there has been a high demand for transparency and accountability in machine learning models, especially in domains such as healthcare and finance. In this paper, we delve into how to make machine learning models more interpretable, with a focus on the importance of explainability in algorithm design. The main objective of this paper is to fill this gap and provide a comprehensive survey and analytical study of AutoML. To that end, we first introduce the AutoML technology and review its various tools and techniques.
APA, Harvard, Vancouver, ISO, and other styles
4

Bozorgpanah, Aso, Vicenç Torra, and Laya Aliahmadipour. "Privacy and Explainability: The Effects of Data Protection on Shapley Values." Technologies 10, no. 6 (December 1, 2022): 125. http://dx.doi.org/10.3390/technologies10060125.

Full text
Abstract:
There is an increasing need to provide explainability for machine learning models. There are different alternatives for providing explainability, for example, local and global methods. One of the approaches is based on Shapley values. Privacy is another critical requirement when dealing with sensitive data, and data-driven machine learning models may lead to disclosure. The data privacy field provides several methods for ensuring privacy. In this paper, we study how methods for explainability based on Shapley values are affected by privacy methods. We show that some degree of protection still allows the information in the Shapley values to be maintained for the four machine learning models studied. The experiments seem to indicate that, among the four models, the Shapley values of linear models are the most affected.
APA, Harvard, Vancouver, ISO, and other styles
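
A minimal sketch of the kind of comparison described in the entry above, assuming the shap package and using additive Gaussian noise as a stand-in for the (unspecified here) data-protection methods the authors actually study; the dataset, model choice, and noise scale are illustrative assumptions, not the paper's setup:

```python
# Hedged sketch: compare Shapley values computed on original vs. masked data.
import numpy as np
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=500, n_features=6, noise=0.1, random_state=0)
rng = np.random.default_rng(0)
X_masked = X + rng.normal(scale=0.5, size=X.shape)  # toy stand-in for a protection method

def shap_matrix(features, target):
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(features, target)
    return shap.TreeExplainer(model).shap_values(features)

sv_orig = shap_matrix(X, y)
sv_masked = shap_matrix(X_masked, y)

# Per-feature agreement between the two attribution sets (1.0 means unaffected by masking).
for j in range(X.shape[1]):
    corr = np.corrcoef(sv_orig[:, j], sv_masked[:, j])[0, 1]
    print(f"feature {j}: correlation of Shapley values = {corr:.2f}")
```
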
5

Zhang, Xueting. "Traffic Flow Prediction Based on Explainable Machine Learning." Highlights in Science, Engineering and Technology 56 (July 14, 2023): 56–64. http://dx.doi.org/10.54097/hset.v56i.9816.

Full text
Abstract:
Traffic flow prediction is one of the important links in realizing an urban intelligent transportation system. Thanks to in-depth research on artificial intelligence theories, machine learning methods have been widely used in intelligent transportation engineering. However, due to their "black box" characteristics, their application and further development are limited. Exploring the explainability of machine learning models in traffic flow prediction is an important step toward making them more reliable in traffic engineering and other practical applications. In addition to selecting the RandomForest and CatBoost models to study traffic flow prediction under temporal and spatial changes, this paper makes a comprehensive evaluation and comparison with LightGBM and two other prediction models across different indicators. Meanwhile, to address the low explainability of the models, their feature importance is analyzed and compared with reality. The results show that the RandomForest and CatBoost models make good predictions, and their feature importance is consistent with the actual situation, supporting their explainability.
APA, Harvard, Vancouver, ISO, and other styles
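
The feature-importance check described in the entry above can be illustrated with scikit-learn's random forest. The feature names below are invented placeholders and the synthetic data stands in for the traffic dataset, so this is only a sketch of the workflow, not the paper's experiment:

```python
# Hedged sketch: rank impurity-based feature importances and compare the ranking
# against what a domain expert would expect.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Invented placeholder names for traffic-style predictors.
feature_names = ["hour_of_day", "day_of_week", "upstream_flow", "weather_index"]
X, y = make_regression(n_samples=1000, n_features=4, n_informative=3, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

ranking = np.argsort(model.feature_importances_)[::-1]
for idx in ranking:
    print(f"{feature_names[idx]:>15}: importance = {model.feature_importances_[idx]:.3f}")
```
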
6

Pendyala, Vishnu, and Hyungkyun Kim. "Assessing the Reliability of Machine Learning Models Applied to the Mental Health Domain Using Explainable AI." Electronics 13, no. 6 (March 8, 2024): 1025. http://dx.doi.org/10.3390/electronics13061025.

Full text
Abstract:
Machine learning is increasingly and ubiquitously being used in the medical domain. Evaluation metrics like accuracy, precision, and recall may indicate the performance of the models but not necessarily the reliability of their outcomes. This paper assesses the effectiveness of a number of machine learning algorithms applied to an important dataset in the medical domain, specifically mental health, by employing explainability methodologies. Using multiple machine learning algorithms and model explainability techniques, this work provides insights into the models’ workings to help determine the reliability of the machine learning algorithms’ predictions. The results are not intuitive: it was found that the models were focusing significantly on less relevant features and, at times, relying on unsound rankings of the features to make their predictions. This paper therefore argues that it is important for research in applied machine learning to provide insights into the explainability of models in addition to other performance metrics like accuracy. This is particularly important for applications in critical domains such as healthcare.
APA, Harvard, Vancouver, ISO, and other styles
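
One lightweight way to run the kind of reliability check argued for in the entry above is permutation importance: if features that domain experts consider irrelevant dominate the ranking, the model deserves scrutiny regardless of its accuracy. A hedged sketch on synthetic data (the mental-health dataset and the models from the paper are not reproduced):

```python
# Hedged sketch: permutation importance as a reliability check. A model that ranks
# clearly irrelevant columns highly is a warning sign even if its accuracy is good.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in: only 3 of the 8 columns actually carry signal.
X, y = make_classification(n_samples=1000, n_features=8, n_informative=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
result = permutation_importance(clf, X_te, y_te, n_repeats=20, random_state=0)

for j, (mean, std) in enumerate(zip(result.importances_mean, result.importances_std)):
    print(f"feature {j}: {mean:.3f} +/- {std:.3f}")
```
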
7

Kim, Dong-sup, and Seungwoo Shin. "THE ECONOMIC EXPLAINABILITY OF MACHINE LEARNING AND STANDARD ECONOMETRIC MODELS-AN APPLICATION TO THE U.S. MORTGAGE DEFAULT RISK." International Journal of Strategic Property Management 25, no. 5 (July 13, 2021): 396–412. http://dx.doi.org/10.3846/ijspm.2021.15129.

Full text
Abstract:
This study aims to bridge the gap between two perspectives on explainability (machine learning and engineering on one side, economics and standard econometrics on the other) by applying three marginal measurements. The existing real estate literature has primarily used econometric models to analyze the factors that affect the default risk of mortgage loans. However, in this study, we estimate a default risk model using a machine learning-based approach with the help of a U.S. securitized mortgage loan database. Moreover, we compare the economic explainability of the models by calculating the marginal effect and marginal importance of individual risk factors using both econometric and machine learning approaches. Machine learning-based models are quite effective in terms of predictive power; however, the general perception is that they do not efficiently explain the causal relationships within them. This study utilizes the concepts of marginal effects and marginal importance to compare the explanatory power of individual input variables in various models. This can simultaneously help improve the explainability of machine learning techniques and enhance the performance of standard econometric methods.
APA, Harvard, Vancouver, ISO, and other styles
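
For orientation, the marginal effect used in the study above to compare econometric and machine learning models is essentially a derivative of the conditional expectation, which for a black-box model is usually approximated by an averaged finite difference. The notation below is mine, not the authors':

```latex
% Marginal effect of feature x_j under a model of E[y | x]:
\mathrm{ME}_j(x) \;=\; \frac{\partial\, \mathbb{E}[\,y \mid x\,]}{\partial x_j}
% Finite-difference approximation for a black-box predictor \hat{f}, averaged over n observations:
\widehat{\mathrm{ME}}_j \;=\; \frac{1}{n}\sum_{i=1}^{n}
    \frac{\hat{f}(x_i + h\,e_j) - \hat{f}(x_i - h\,e_j)}{2h},
\qquad e_j \text{ the } j\text{-th unit vector},\; h \text{ a small step size.}
```
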
8

TOPCU, Deniz. "How to explain a machine learning model: HbA1c classification example." Journal of Medicine and Palliative Care 4, no. 2 (March 27, 2023): 117–25. http://dx.doi.org/10.47582/jompac.1259507.

Full text
Abstract:
Aim: Machine learning tools have various applications in healthcare. However, the implementation of developed models is still limited because of various challenges. One of the most important problems is the lack of explainability of machine learning models. Explainability refers to the capacity to reveal the reasoning and logic behind the decisions made by AI systems, making it straightforward for human users to understand the process and how the system arrived at a specific outcome. The study aimed to compare the performance of different model-agnostic explanation methods using two different ML models created for HbA1c classification. Material and Method: The H2O AutoML engine was used for the development of two ML models (gradient boosting machine (GBM) and default random forest (DRF)) using 3,036 records from the NHANES open data set. Both global and local model-agnostic explanation methods, including performance metrics, feature importance analysis, and partial dependence, breakdown, and Shapley additive explanation plots, were utilized for the developed models. Results: While both the GBM and DRF models have similar performance metrics, such as mean per-class error and area under the receiver operating characteristic curve, they had slightly different variable importance. Local explainability methods also showed different contributions of the features. Conclusion: This study evaluated the significance of explainable machine learning techniques for comprehending complicated models and their role in incorporating AI in healthcare. The results indicate that although there are limitations to current explainability methods, particularly for clinical use, both global and local explanation models offer a glimpse into evaluating the model and can be used to enhance or compare models.
APA, Harvard, Vancouver, ISO, and other styles
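
Partial dependence, one of the global methods listed in the entry above, can be sketched with scikit-learn in a few lines. The H2O models and the NHANES records are not reproduced; the boosted regressor on synthetic data below is only an assumed stand-in to show the shape of the computation:

```python
# Hedged sketch: a global explanation via partial dependence for a boosted model.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import partial_dependence

X, y = make_regression(n_samples=2000, n_features=6, noise=0.2, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Average model prediction as feature 0 sweeps over a grid of values.
pd_result = partial_dependence(model, X, features=[0], kind="average", method="brute")
grid = pd_result["grid_values"][0]   # note: the key is "values" on older scikit-learn releases
curve = pd_result["average"][0]
for g, c in list(zip(grid, curve))[:5]:
    print(f"feature_0 = {g:.2f} -> average prediction = {c:.2f}")
```
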
9

Rodríguez Mallma, Mirko Jerber, Luis Zuloaga-Rotta, Rubén Borja-Rosales, Josef Renato Rodríguez Mallma, Marcos Vilca-Aguilar, María Salas-Ojeda, and David Mauricio. "Explainable Machine Learning Models for Brain Diseases: Insights from a Systematic Review." Neurology International 16, no. 6 (October 29, 2024): 1285–307. http://dx.doi.org/10.3390/neurolint16060098.

Full text
Abstract:
In recent years, Artificial Intelligence (AI) methods, specifically Machine Learning (ML) models, have been providing outstanding results in different areas of knowledge, with the health area being one of their most impactful fields of application. However, to be applied reliably, these models must provide users with clear, simple, and transparent explanations about the medical decision-making process. This systematic review aims to investigate the use and application of explainability in ML models used in brain disease studies. A systematic search was conducted in three major bibliographic databases, Web of Science, Scopus, and PubMed, from January 2014 to December 2023. Out of a total of 682 studies found in the initial search, 133 relevant studies in which the explainability of ML models in the medical context was examined were identified and analyzed, identifying 11 ML models and 12 explainability techniques applied in the study of 20 brain diseases.
APA, Harvard, Vancouver, ISO, and other styles
10

Shendkar, Bhagyashree D. "Explainable Machine Learning Models for Real-Time Threat Detection in Cybersecurity." Panamerican Mathematical Journal 35, no. 1s (November 13, 2024): 264–75. http://dx.doi.org/10.52783/pmj.v35.i1s.2313.

Full text
Abstract:
In the rapidly evolving landscape of cybersecurity, traditional machine learning models often operate as "black boxes," providing high accuracy but lacking transparency in decision-making. This lack of explainability poses challenges for trust and accountability, especially in critical areas like threat detection and incident response. Explainable machine learning models aim to address this by making the model's predictions more understandable and interpretable to users. This research integrates explainable machine learning models for real-time threat detection in cybersecurity. Data from multiple sources, including network traffic, system logs, and user behavior, undergo preprocessing such as cleaning, feature extraction, and normalization. The processed data is passed through various machine learning models, including traditional approaches like SVM and decision trees, as well as deep learning models like CNN and RNN. Explainability techniques such as LIME, SHAP, and attention mechanisms provide transparency, ensuring interpretable predictions. The explanations are delivered through a user interface that generates alerts, visualizations, and reports, facilitating effective threat assessment and incident response in decision support systems. This framework enhances model performance, trust, and reliability in complex cybersecurity scenarios.
APA, Harvard, Vancouver, ISO, and other styles
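
A hedged sketch of the LIME step mentioned in the entry above, assuming the lime package. The feature names and the synthetic data are invented placeholders for network-traffic attributes, and the random forest stands in for whatever detector is actually deployed:

```python
# Hedged sketch: a LIME explanation for one "alert" raised by a tabular classifier.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Invented placeholder names for network-traffic style attributes.
feature_names = ["bytes_sent", "failed_logins", "dst_port_entropy", "session_length"]
X, y = make_classification(n_samples=2000, n_features=4, n_informative=3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=feature_names, class_names=["benign", "threat"], mode="classification"
)
# Explain a single flagged instance: which feature ranges pushed it towards "threat"?
explanation = explainer.explain_instance(X[0], clf.predict_proba, num_features=4)
for rule, weight in explanation.as_list():
    print(f"{rule:>35}  weight = {weight:+.3f}")
```
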

Dissertations / Theses on the topic "Explainability of machine learning models"

1

Delaunay, Julien. "Explainability for machine learning models : from data adaptability to user perception." Electronic Thesis or Diss., Université de Rennes (2023-....), 2023. http://www.theses.fr/2023URENS076.

Full text
Abstract:
This thesis explores the generation of local explanations for already deployed machine learning models, aiming to identify the optimal conditions for producing meaningful explanations that take both the data and the users' requirements into account. The primary goal is to develop methods for generating explanations for any model while ensuring that these explanations remain faithful to the underlying model and comprehensible to the users who receive them. The thesis is divided into two parts. The first enhances a widely used rule-based explanation method to improve the quality of explanations. It then introduces a novel approach for evaluating the suitability of linear explanations to approximate a model. Additionally, it conducts a comparative experiment between two families of counterfactual explanation methods to analyze the advantages of one over the other. The second part focuses on user experiments assessing the impact of three explanation methods and two distinct representations. These experiments measure how users perceive their interaction with the model in terms of understanding and trust, depending on the explanations and their representations. This research contributes to a better understanding of explanation generation for machine learning models, with potential implications for enhancing the transparency, trustworthiness, and usability of deployed AI systems.
APA, Harvard, Vancouver, ISO, and other styles
2

Stanzione, Vincenzo Maria. "Developing a new approach for machine learning explainability combining local and global model-agnostic approaches." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amslaurea.unibo.it/25480/.

Full text
Abstract:
The last couple of decades have seen a new flourishing season for Artificial Intelligence, and in particular for Machine Learning (ML). This is reflected in the great number of fields that are employing ML solutions to overcome a broad spectrum of problems. However, most of the ML models employed today exhibit black-box behavior: given a certain input, we are not able to understand why one of these models produced a certain output or made a certain decision. Most of the time we are not interested in knowing what and how the model is thinking, but for a model that makes extremely critical decisions, or decisions that heavily affect people's lives, explainability is a duty. A great variety of techniques for producing global or local explanations are available. One of the most widespread is Local Interpretable Model-Agnostic Explanations (LIME), which creates a local linear model in the proximity of an input to understand how each feature contributes to the final output. However, LIME is not immune to instability problems and sometimes to incoherent predictions. Furthermore, as a local explainability technique, LIME needs to be run for each different input that we want to explain. In this work, we take inspiration from the LIME approach for linear models to craft a novel technique. In combination with Model-based Recursive Partitioning (MOB), a brand-new score function to assess the quality of a partition, and Sobol quasi-Monte Carlo sampling, we developed a new global model-agnostic explainability technique we call Global-Lime. Global-Lime is capable of giving a global understanding of the original ML model, through an ensemble of spatially non-overlapping hyperplanes, plus a local explanation for a given output considering only the corresponding linear approximation. The idea is to train the black-box model and then supply its explainable version along with it.
APA, Harvard, Vancouver, ISO, and other styles
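
The local-surrogate idea the abstract above builds on can be written down in a few lines: perturb around an instance, weight the perturbations by proximity, and fit a weighted linear model whose coefficients serve as local attributions. This is a generic sketch of that mechanism under assumed kernel and sampling choices, not the Global-Lime algorithm itself:

```python
# Hedged sketch of a LIME-style local linear surrogate.
import numpy as np
from sklearn.linear_model import Ridge

def local_linear_explanation(predict_fn, x, n_samples=2000, scale=0.5, kernel_width=1.0):
    """Fit a proximity-weighted linear surrogate around x; coefficients act as local attributions."""
    rng = np.random.default_rng(0)
    Z = x + rng.normal(scale=scale, size=(n_samples, x.shape[0]))        # perturbed neighbours
    weights = np.exp(-np.sum((Z - x) ** 2, axis=1) / kernel_width ** 2)  # proximity kernel
    surrogate = Ridge(alpha=1.0).fit(Z, predict_fn(Z), sample_weight=weights)
    return surrogate.coef_

# Toy black box with an interaction term; the surrogate linearizes it around the instance.
black_box = lambda Z: Z[:, 0] * Z[:, 1] + 2.0 * Z[:, 2]
print(local_linear_explanation(black_box, np.array([1.0, -2.0, 0.5])))
```
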
3

Ayad, Célia. "Towards Reliable Post Hoc Explanations for Machine Learning on Tabular Data and their Applications." Electronic Thesis or Diss., Institut polytechnique de Paris, 2024. http://www.theses.fr/2024IPPAX082.

Full text
Abstract:
As machine learning continues to demonstrate robust predictive capabilities, it has emerged as a very valuable tool in several scientific and industrial domains. However, as ML models evolve to achieve higher accuracy, they also become increasingly complex and require more parameters. Being able to understand the inner complexities of these machine learning models and to establish trust in their predictions has therefore become essential in various critical domains, including healthcare and finance. Researchers have developed explanation methods to make machine learning models more transparent, helping users understand why predictions are made. However, these explanation methods often fall short in accurately explaining model predictions, making it difficult for domain experts to utilize them effectively. It is crucial to identify the shortcomings of ML explanations, enhance their reliability, and make them more user-friendly. Additionally, with many ML tasks becoming more data-intensive and the demand for widespread integration rising, there is a need for methods that deliver strong predictive performance in a simpler and more cost-effective manner. In this dissertation, we address these problems in two main research thrusts: 1) We propose a methodology to evaluate various explainability methods in the context of specific data properties, such as noise levels, feature correlations, and class imbalance, and offer guidance for practitioners and researchers on selecting the most suitable explainability method based on the characteristics of their datasets, revealing where these methods excel or fail. Additionally, we provide clinicians with personalized explanations of cervical cancer risk factors based on their desired properties, such as ease of understanding, consistency, and stability. 2) We introduce Shapley Chains, a new explanation technique designed to overcome the lack of explanations for multi-output predictions in the case of interdependent labels, where features may have indirect contributions to predicting subsequent labels in the chain (i.e., the order in which these labels are predicted). Moreover, we propose Bayes LIME Chains to enhance the robustness of Shapley Chains.
APA, Harvard, Vancouver, ISO, and other styles
4

Radulovic, Nedeljko. "Post-hoc Explainable AI for Black Box Models on Tabular Data." Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAT028.

Full text
Abstract:
Current state-of-the-art Artificial Intelligence (AI) models have proven to be very successful in solving various tasks, such as classification, regression, Natural Language Processing (NLP), and image processing. The resources that we have at hand today allow us to train very complex AI models to solve different problems in almost any field: medicine, finance, justice, transportation, forecasting, etc. With the popularity and widespread use of AI models, the need to ensure trust in them has also grown. Complex as they come today, these AI models are impossible for humans to interpret and understand. In this thesis, we focus on a specific area of research, namely Explainable Artificial Intelligence (xAI), which aims to provide approaches to interpret complex AI models and explain their decisions. We present two approaches, STACI and BELLA, which focus on classification and regression tasks, respectively, for tabular data. Both methods are deterministic, model-agnostic, post-hoc approaches, which means that they can be applied to any black-box model after its creation. In this way, interpretability presents an added value without the need to compromise on the black-box model's performance. Our methods provide accurate, simple, and general interpretations of both the whole black-box model and its individual predictions. We confirmed their high performance through extensive experiments and a user study.
APA, Harvard, Vancouver, ISO, and other styles
5

Willot, Hénoïk. "Certified explanations of robust models." Electronic Thesis or Diss., Compiègne, 2024. http://www.theses.fr/2024COMP2812.

Full text
Abstract:
With the advent of automated or semi-automated decision systems in artificial intelligence comes the need to make them more reliable and transparent for an end user. While the role of explainability methods is in general to increase transparency, reliability can be achieved by providing certified explanations, in the sense that they are guaranteed to be true, and by considering robust models that can abstain when the available information is insufficient, rather than enforcing precision for the mere sake of avoiding indecision. This last aspect is commonly referred to as skeptical inference. This work contributes to this effort by considering two cases: (1) The first considers a classical decision rule used to enforce fairness, the Ordered Weighted Averaging (OWA) operator with decreasing weights. Our main contribution is to fully characterize, from an axiomatic perspective, convex sets of such rules, and to provide along with this characterization sound and complete explanation schemes that can be efficiently obtained through heuristics. In doing so, we also provide a framework unifying restricted and generalized Lorenz dominance, two qualitative criteria, with precise decreasing OWA. (2) The second considers a decision rule that is a classification model resulting from a learning procedure, where the resulting model is a convex set of probabilities. We study and discuss the problem of providing prime implicants as explanations in such a case, where in addition to explaining clear preferences of one class over another, we also have to treat the problem of declaring two classes incomparable. We describe the corresponding problems in general terms before studying in more detail the robust counterpart of the Naive Bayes Classifier.
APA, Harvard, Vancouver, ISO, and other styles
6

Kurasinski, Lukas. "Machine Learning explainability in text classification for Fake News detection." Thesis, Malmö universitet, Fakulteten för teknik och samhälle (TS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20058.

Full text
Abstract:
Fake news detection has gained interest in recent years. This has led researchers to try to find models that can classify text for fake news detection. While new models are developed, researchers mostly focus on the accuracy of a model. Little research has been done on the explainability of Neural Network (NN) models constructed for text classification and fake news detection. When trying to add a level of explainability to a Neural Network model, a lot of different aspects have to be taken into consideration. Text length, pre-processing, and complexity play an important role in achieving successful classification. The model's architecture has to be taken into consideration as well. All these aspects are analyzed in this thesis. In this work, an analysis of attention weights is performed to give an insight into NN reasoning about texts. Visualizations are used to show how two models, a Bidirectional Long Short-Term Memory Convolutional Neural Network (BIDir-LSTM-CNN) and Bidirectional Encoder Representations from Transformers (BERT), distribute their attention while training on and classifying texts. In addition, statistical data is gathered to deepen the analysis. After the analysis, it is concluded that explainability can positively influence the decisions made while constructing a NN model for text classification and fake news detection. Although explainability is useful, it is not a definitive answer to the problem. Architects should test and experiment with different solutions to be successful in effective model construction.
APA, Harvard, Vancouver, ISO, and other styles
7

Lounici, Sofiane. "Watermarking machine learning models." Electronic Thesis or Diss., Sorbonne université, 2022. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2022SORUS282.pdf.

Full text
Abstract:
The protection of the intellectual property of machine learning models appears increasingly necessary, given the investments involved and their impact on society. In this thesis, we propose to study the watermarking of machine learning models. We provide a state of the art on current watermarking techniques, and then complement it by considering watermarking beyond image classification tasks. We then define forging attacks against watermarking for model hosting platforms and present a new fairness-based watermarking technique. In addition, we provide an implementation of the presented techniques.
APA, Harvard, Vancouver, ISO, and other styles
8

Maltbie, Nicholas. "Integrating Explainability in Deep Learning Application Development: A Categorization and Case Study." University of Cincinnati / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1623169431719474.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Hardoon, David Roi. "Semantic models for machine learning." Thesis, University of Southampton, 2006. https://eprints.soton.ac.uk/262019/.

Full text
Abstract:
In this thesis we present approaches to the creation and usage of semantic models by analyzing how the data are spread in the feature space. We aim to introduce the general notion of using feature selection techniques in machine learning applications. The applied approaches obtain new feature directions from the data, such that machine learning applications show an increase in performance. We review three principal methods that are used throughout the thesis. The first is Canonical Correlation Analysis (CCA), a method for finding correlated linear relationships between two multidimensional variables. CCA can be seen as using complex labels as a way of guiding feature selection towards the underlying semantics, and it makes use of two views of the same semantic object to extract a representation of the semantics. The second is Partial Least Squares (PLS), a method similar to CCA: it selects feature directions that are useful for the task at hand, though PLS uses only one view of an object with the label as the corresponding pair. PLS can be thought of as a method that looks for directions that are good at distinguishing the different labels. The third method is the Fisher kernel, which aims to extract more information from a generative model than its output probabilities alone provide; the aim is to analyse how the Fisher score depends on the model and which aspects of the model are important in determining it. We focus our theoretical investigation primarily on CCA and its kernel variant, providing a theoretical analysis of the method's stability using Rademacher complexity and hence deriving an error bound for new data. We conclude the thesis by applying the described approaches to problems in image, text, music, and medical analysis, describing several novel applications on relevant real-world data. The aim of the thesis is to provide a theoretical understanding of semantic models, while also providing a good foundation for how these models can be practically used.
APA, Harvard, Vancouver, ISO, and other styles
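
For reference, the CCA objective underlying the thesis above seeks projection directions for the two views that maximise the correlation of the projected data; in standard notation (mine, for orientation):

```latex
% Canonical Correlation Analysis: given two views X and Y of the same objects,
% find directions a and b maximising the correlation between the projections.
\rho \;=\; \max_{a,\,b}\;
  \frac{a^{\top} C_{XY}\, b}
       {\sqrt{a^{\top} C_{XX}\, a}\,\sqrt{b^{\top} C_{YY}\, b}}
% where C_{XX}, C_{YY} are the within-view covariance matrices and C_{XY} the cross-covariance.
```
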
10

BODINI, MATTEO. "DESIGN AND EXPLAINABILITY OF MACHINE LEARNING ALGORITHMS FOR THE CLASSIFICATION OF CARDIAC ABNORMALITIES FROM ELECTROCARDIOGRAM SIGNALS." Doctoral thesis, Università degli Studi di Milano, 2022. http://hdl.handle.net/2434/888002.

Full text
Abstract:
The research activity contained in the present thesis is devoted to the development of novel Machine Learning (ML) and Deep Learning (DL) algorithms for the classification of Cardiac Abnormalities (CA) from Electrocardiogram (ECG) signals, along with the explanation of classification outputs using explainable approaches. Automated computer programs for ECG classification have been developed since the 1950s to improve the correct interpretation of the ECG, nowadays facilitating health care decision-making by reducing costs and human errors. The first ECG interpretation computer programs were essentially developed by translating into the machine the domain knowledge provided by expert physicians. However, in recent years leading research groups have proposed to employ standard ML algorithms (which involve feature extraction, followed by classification), and more recently end-to-end DL algorithms, to build automated ECG classification computer programs for the detection of CA. Recently, several research works have proposed DL algorithms that even exceed the performance of board-certified cardiologists in detecting a wide range of CA from ECGs. DL algorithms therefore seem to represent promising tools for automated ECG classification on the analyzed datasets. However, the latest research related to ML and DL carries two main drawbacks that were tackled throughout the doctoral work. First, to let standard ML algorithms perform at their best, the proper preprocessing, feature engineering, and classification algorithm (along with its parameters and hyperparameters) must be selected. Even when end-to-end DL approaches are adopted and the feature extraction step is automatically learned from data, the optimal model architecture is crucial to get the best performance. To address this issue, we exploited the domain knowledge of electrocardiography to design an ensemble ML classification algorithm to classify a wide range of 27 CA. Unlike other works on ECG classification, which often borrow ML and DL architectures from other domains, we designed each model in the ensemble according to the domain knowledge to specifically classify a subset of the considered CA that alter the same set of ECG physiological features known to physicians. Furthermore, in a subsequent work toward the same aim, we experimented with three different Automated ML frameworks to automatically find the optimal ML pipeline for standard and end-to-end DL algorithms. Second, while several research articles report remarkable results for the value of ML and DL in classifying ECGs, only a handful offer insights into the model's learned representation of the ECG for the respective task. Without explaining what these models are sensing on the ECG to perform their classifications, the developers of such algorithms run a strong risk of discouraging physicians from adopting these tools, since physicians need to understand how ML and DL work before entrusting them to facilitate their clinical practice. Methods to open the black boxes of ML and DL have been applied to the ECG in a few works, but they often provide explanations restricted to a single ECG at a time and with limited, or even absent, framing within the knowledge domain of electrocardiography.
To tackle these issues, we developed techniques to unveil which portions of the ECG were the most relevant to the classification output of an ML algorithm, by computing average explanations over all the training samples and translating them for the physicians' understanding. In a preliminary work, we relied on the Local Interpretable Model-agnostic Explanations (LIME) explainability algorithm to highlight which ECG leads were the most relevant in the classification of ST-Elevation Myocardial Infarction with a Random Forest classifier. Then, in a subsequent work, we extended the approach and designed two model-specific explainability algorithms for Convolutional Neural Networks to explain which ECG waves, a concept understood by physicians, were the most relevant in the classification process of a wide set of 27 CA for a state-of-the-art CNN.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Explainability of machine learning models"

1

Nandi, Anirban, and Aditya Kumar Pal. Interpreting Machine Learning Models. Berkeley, CA: Apress, 2022. http://dx.doi.org/10.1007/978-1-4842-7802-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Bolc, Leonard. Computational Models of Learning. Berlin, Heidelberg: Springer Berlin Heidelberg, 1987.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Galindez Olascoaga, Laura Isabel, Wannes Meert, and Marian Verhelst. Hardware-Aware Probabilistic Machine Learning Models. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-74042-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Singh, Pramod. Deploy Machine Learning Models to Production. Berkeley, CA: Apress, 2021. http://dx.doi.org/10.1007/978-1-4842-6546-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Zhang, Zhihua. Statistical Machine Learning: Foundations, Methodologies and Models. UK: John Wiley & Sons, Limited, 2017.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Rendell, Larry. Representations and models for concept learning. Urbana, IL (1304 W. Springfield Ave., Urbana 61801): Dept. of Computer Science, University of Illinois at Urbana-Champaign, 1987.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Ehteram, Mohammad, Zohreh Sheikh Khozani, Saeed Soltani-Mohammadi, and Maliheh Abbaszadeh. Estimating Ore Grade Using Evolutionary Machine Learning Models. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-19-8106-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Bisong, Ekaba. Building Machine Learning and Deep Learning Models on Google Cloud Platform. Berkeley, CA: Apress, 2019. http://dx.doi.org/10.1007/978-1-4842-4470-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Gupta, Punit, Mayank Kumar Goyal, Sudeshna Chakraborty, and Ahmed A. Elngar. Machine Learning and Optimization Models for Optimization in Cloud. Boca Raton: Chapman and Hall/CRC, 2022. http://dx.doi.org/10.1201/9781003185376.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Suthaharan, Shan. Machine Learning Models and Algorithms for Big Data Classification. Boston, MA: Springer US, 2016. http://dx.doi.org/10.1007/978-1-4899-7641-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Explainability of machine learning models"

1

Nandi, Anirban, and Aditya Kumar Pal. "The Eight Pitfalls of Explainability Methods." In Interpreting Machine Learning Models, 321–28. Berkeley, CA: Apress, 2022. http://dx.doi.org/10.1007/978-1-4842-7802-4_15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Nandi, Anirban, and Aditya Kumar Pal. "Explainability Facts: A Framework for Systematic Assessment of Explainable Approaches." In Interpreting Machine Learning Models, 69–82. Berkeley, CA: Apress, 2022. http://dx.doi.org/10.1007/978-1-4842-7802-4_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Kamath, Uday, and John Liu. "Pre-model Interpretability and Explainability." In Explainable Artificial Intelligence: An Introduction to Interpretable Machine Learning, 27–77. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-83356-5_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Dessain, Jean, Nora Bentaleb, and Fabien Vinas. "Cost of Explainability in AI: An Example with Credit Scoring Models." In Communications in Computer and Information Science, 498–516. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-44064-9_26.

Full text
Abstract:
This paper examines the cost of explainability in machine learning models for credit scoring. The analysis is conducted under the constraint of meeting the regulatory requirements of the European Central Bank (ECB), using a real-life dataset of over 50,000 credit exposures. We compare the statistical and financial performances of black-box models, such as XGBoost and neural networks, with inherently explainable models like logistic regression and GAMs. Notably, statistical performance does not necessarily correlate with financial performance. Our results reveal a difference of 15 to 20 basis points in annual return on investment between the best-performing black-box model and the best-performing inherently explainable model, as the cost of explainability. We also find that the cost of explainability increases together with the risk appetite. To enhance the interpretability of explainable models, we apply isotonic smoothing of the features' shape functions based on expert judgment. Our findings suggest that incorporating expert judgment in the form of isotonic smoothing improves explainability without compromising performance. These results have significant implications for the use of explainable models in credit risk assessment and for regulatory compliance.
APA, Harvard, Vancouver, ISO, and other styles
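
The statistical half of the comparison above can be sketched quickly with scikit-learn; the financial cost-of-explainability metric (basis points of annual return) is specific to the chapter's credit data and is not reproduced, and the imbalanced synthetic dataset below is only an assumed stand-in:

```python
# Hedged sketch: AUC gap between an inherently explainable model (logistic regression)
# and a black-box gradient-boosted ensemble on a synthetic credit-style dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=5000, n_features=20, n_informative=8,
                           weights=[0.9, 0.1], random_state=0)  # imbalanced "defaults"
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "logistic regression (explainable)": make_pipeline(StandardScaler(),
                                                       LogisticRegression(max_iter=1000)),
    "gradient boosting (black box)": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    auc = roc_auc_score(y_te, model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```
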
5

Henriques, J., T. Rocha, P. de Carvalho, C. Silva, and S. Paredes. "Interpretability and Explainability of Machine Learning Models: Achievements and Challenges." In IFMBE Proceedings, 81–94. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-59216-4_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Bargal, Sarah Adel, Andrea Zunino, Vitali Petsiuk, Jianming Zhang, Vittorio Murino, Stan Sclaroff, and Kate Saenko. "Beyond the Visual Analysis of Deep Model Saliency." In xxAI - Beyond Explainable AI, 255–69. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04083-2_13.

Full text
Abstract:
Increased explainability in machine learning is traditionally associated with lower performance, e.g. a decision tree is more explainable, but less accurate than a deep neural network. We argue that, in fact, increasing the explainability of a deep classifier can improve its generalization. In this chapter, we survey a line of our published work that demonstrates how spatial and spatiotemporal visual explainability can be obtained, and how such explainability can be used to train models that generalize better on unseen in-domain and out-of-domain samples, refine fine-grained classification predictions, better utilize network capacity, and are more robust to network compression.
APA, Harvard, Vancouver, ISO, and other styles
7

Stevens, Alexander, Johannes De Smedt, and Jari Peeperkorn. "Quantifying Explainability in Outcome-Oriented Predictive Process Monitoring." In Lecture Notes in Business Information Processing, 194–206. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-98581-3_15.

Full text
Abstract:
The growing interest in applying machine and deep learning algorithms in an Outcome-Oriented Predictive Process Monitoring (OOPPM) context has recently fuelled a shift to use models from the explainable artificial intelligence (XAI) paradigm, a field of study focused on creating explainability techniques on top of AI models in order to legitimize the predictions made. Nonetheless, most classification models are evaluated primarily on a performance level, whereas XAI requires striking a balance between either simple models (e.g. linear regression) or models using complex inference structures (e.g. neural networks) with post-processing to calculate feature importance. In this paper, a comprehensive overview of predictive models with varying intrinsic complexity is given, and the models are measured on explainability with model-agnostic quantitative evaluation metrics. To this end, explainability is framed as a symbiosis between interpretability and faithfulness, thereby allowing inherently created explanations (e.g. decision tree rules) to be compared with post-hoc explainability techniques (e.g. Shapley values) on top of AI models. Moreover, two improved versions of the logistic regression model, capable of capturing non-linear interactions and both inherently generating their own explanations, are proposed in the OOPPM context. These models are benchmarked against two common state-of-the-art models with post-hoc explanation techniques in the explainability-performance space.
APA, Harvard, Vancouver, ISO, and other styles
8

Baniecki, Hubert, Wojciech Kretowicz, and Przemyslaw Biecek. "Fooling Partial Dependence via Data Poisoning." In Machine Learning and Knowledge Discovery in Databases, 121–36. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-26409-2_8.

Full text
Abstract:
Many methods have been developed to understand complex predictive models, and high expectations are placed on post-hoc model explainability. It turns out that such explanations are neither robust nor trustworthy, and they can be fooled. This paper presents techniques for attacking Partial Dependence (plots, profiles, PDP), which is among the most popular methods for explaining any predictive model trained on tabular data. We showcase that PD can be manipulated in an adversarial manner, which is alarming, especially in financial or medical applications where auditability has become a must-have trait supporting black-box machine learning. The fooling is performed by poisoning the data to bend and shift explanations in the desired direction using genetic and gradient algorithms. We believe this to be the first work using a genetic algorithm for manipulating explanations, and it is transferable as it generalizes both ways: in a model-agnostic and an explanation-agnostic manner.
APA, Harvard, Vancouver, ISO, and other styles
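
Partial dependence is an average of model predictions over a reference dataset, which is exactly the lever a data-poisoning attack pulls: change the data, and the curve moves even though the model is untouched. A minimal hand-rolled computation, on an assumed toy model, makes that dependence explicit:

```python
# Hedged sketch: partial dependence of feature j computed by hand.
import numpy as np

def partial_dependence_curve(predict_fn, X, j, grid):
    """Average prediction over the reference data X while feature j is clamped to each grid value."""
    curve = []
    for value in grid:
        X_mod = X.copy()
        X_mod[:, j] = value
        curve.append(predict_fn(X_mod).mean())
    return np.array(curve)

# Toy model and reference data; swapping X for a poisoned X' shifts the curve
# even though the model itself never changes.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
model = lambda Z: Z[:, 0] ** 2 + 0.5 * Z[:, 1]
grid = np.linspace(-2.0, 2.0, 9)
print(partial_dependence_curve(model, X, j=0, grid=grid))
```
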
9

Colosimo, Bianca Maria, and Fabio Centofanti. "Model Interpretability, Explainability and Trust for Manufacturing 4.0." In Interpretability for Industry 4.0 : Statistical and Machine Learning Approaches, 21–36. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-12402-0_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Santos, Geanderson, Amanda Santana, Gustavo Vale, and Eduardo Figueiredo. "Yet Another Model! A Study on Model’s Similarities for Defect and Code Smells." In Fundamental Approaches to Software Engineering, 282–305. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-30826-0_16.

Full text
Abstract:
Software defect and code smell prediction help developers identify problems in the code and fix them before they degrade the quality or the user experience. The prediction of software defects and code smells is challenging, since it involves many factors inherent to the development process. Many studies propose machine learning models for defects and code smells. However, we have not found studies that explore and compare these machine learning models, nor that focus on the explainability of the models. This analysis allows us to verify which features and quality attributes influence software defects and code smells. Hence, developers can use this information to predict whether a class may be faulty or smelly through the evaluation of a few features and quality attributes. In this study, we fill this gap by comparing machine learning models for predicting defects and seven code smells. We trained on a dataset composed of 19,024 classes and 70 software features, covering different quality attributes, extracted from 14 Java open-source projects. We then ensembled five machine learning models and employed explainability concepts to explore the redundancies in the models using the top-10 software features and quality attributes that are known to contribute to defect and code smell predictions. Furthermore, we conclude that although the quality attributes vary among the models, complexity, documentation, and size are the most relevant. More specifically, Nesting Level Else-If is the only software feature relevant to all models.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Explainability of machine learning models"

1

Bouzid, Mohamed, and Manar Amayri. "Addressing Explainability in Load Forecasting Using Time Series Machine Learning Models." In 2024 IEEE 12th International Conference on Smart Energy Grid Engineering (SEGE), 233–40. IEEE, 2024. http://dx.doi.org/10.1109/sege62220.2024.10739606.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Burgos, David, Ahsan Morshed, MD Mamunur Rashid, and Satria Mandala. "A Comparison of Machine Learning Models to Deep Learning Models for Cancer Image Classification and Explainability of Classification." In 2024 International Conference on Data Science and Its Applications (ICoDSA), 386–90. IEEE, 2024. http://dx.doi.org/10.1109/icodsa62899.2024.10651790.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Sheikhani, Arman, Ervin Agic, Mahshid Helali Moghadam, Juan Carlos Andresen, and Anders Vesterberg. "Lithium-Ion Battery SOH Forecasting: From Deep Learning Augmented by Explainability to Lightweight Machine Learning Models." In 2024 IEEE 29th International Conference on Emerging Technologies and Factory Automation (ETFA), 1–4. IEEE, 2024. http://dx.doi.org/10.1109/etfa61755.2024.10710794.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Mechouche, Ammar, Valerio Camerini, Caroline Del, Elsa Cansell, and Konstanca Nikolajevic. "From Dampers Estimated Loads to In-Service Degradation Correlations." In Vertical Flight Society 80th Annual Forum & Technology Display, 1–10. The Vertical Flight Society, 2024. http://dx.doi.org/10.4050/f-0080-2024-1108.

Full text
Abstract:
This paper presents an original method that takes advantage of existing large volumes of in-service flight data, damper load Machine Learning models, and the inventory of degraded dampers (elastomeric parts) to link estimated loads and operational conditions to damper degradation cases. The Machine Learning models are trained on flight test campaign data and then applied to in-service helicopter data to estimate damper loads as a function of flight parameters. The estimated load history is then used as an input to generate engineering load indicators. The latter, jointly with operational and usage data, are correlated with the reported damper degradation observations. Finally, an explainability mechanism is investigated to better understand the Machine Learning models' inferences, opening perspectives towards precise identification of the root causes of damper degradation. The obtained results are promising, showing that the occurrence of damper degradation correlates with load history and helicopter operations.
APA, Harvard, Vancouver, ISO, and other styles
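The load-estimation step described in this abstract can be illustrated with the minimal sketch below, assuming pandas and scikit-learn. The file names, flight-parameter columns, gradient-boosting regressor, and the simple cumulative load indicator are placeholders, not the paper's actual models or engineering indicators.

# Minimal sketch of load estimation from flight parameters and a crude per-aircraft indicator.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

flight_test = pd.read_csv("flight_test.csv")        # instrumented campaigns, includes "damper_load"
in_service = pd.read_csv("in_service.csv")          # fleet data, no load measurement
params = ["airspeed", "rotor_speed", "load_factor", "oat"]   # hypothetical flight parameters

# 1) Learn damper load as a function of flight parameters on flight-test data.
model = GradientBoostingRegressor(random_state=0)
model.fit(flight_test[params], flight_test["damper_load"])

# 2) Estimate loads on in-service data and aggregate a simple per-aircraft load indicator.
in_service["est_load"] = model.predict(in_service[params])
indicator = in_service.groupby("tail_number")["est_load"].apply(lambda s: (s.abs() ** 3).sum())

# 3) Correlate the indicator with reported damper degradation (0/1 per aircraft).
degr = pd.read_csv("degradation.csv").set_index("tail_number")["degraded"]
print(indicator.to_frame("load_indicator").join(degr).corr())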
5

Izza, Yacine, Xuanxiang Huang, Antonio Morgado, Jordi Planes, Alexey Ignatiev, and Joao Marques-Silva. "Distance-Restricted Explanations: Theoretical Underpinnings & Efficient Implementation." In 21st International Conference on Principles of Knowledge Representation and Reasoning (KR 2024), 475–86. California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/kr.2024/45.

Full text
Abstract:
The uses of machine learning (ML) have snowballed in recent years. In many cases, ML models are highly complex, and their operation is beyond the understanding of human decision-makers. Nevertheless, some uses of ML models involve high-stakes and safety-critical applications. Explainable artificial intelligence (XAI) aims to help human decision-makers understand the operation of such complex ML models, thus eliciting trust in their operation. Unfortunately, the majority of past XAI work is based on informal approaches that offer no guarantees of rigor. Unsurprisingly, there exists comprehensive experimental and theoretical evidence confirming that informal methods of XAI can provide human decision-makers with erroneous information. Logic-based XAI represents a rigorous approach to explainability; it is model-based and offers the strongest guarantees of rigor of computed explanations. However, a well-known drawback of logic-based XAI is the complexity of logic reasoning, especially for highly complex ML models. Recent work proposed distance-restricted explanations, i.e., explanations that are rigorous provided the distance to a given input is small enough. Distance-restricted explainability is tightly related to adversarial robustness, and it has been shown to scale for moderately complex ML models, but the number of inputs still represents a key limiting factor. This paper investigates novel algorithms for scaling up the performance of logic-based explainers when computing and enumerating ML model explanations with a large number of inputs.
APA, Harvard, Vancouver, ISO, and other styles
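The distance-restricted notion in this abstract can be illustrated, very loosely, with a sampling-based check of whether fixing a subset of features keeps a model's prediction constant inside an L-infinity ball around the input. This is only a heuristic proxy: the paper's approach relies on rigorous logic-based reasoning and provides guarantees that sampling cannot.

# Heuristic sketch only: approximate distance-restricted sufficiency check by random
# sampling inside an L-infinity ball. Not the paper's logic-based method.
import numpy as np

def seems_sufficient(model, x, fixed_idx, radius, n_samples=5000, seed=0):
    """Approximately check whether fixing the features in `fixed_idx` keeps the
    prediction of `model` constant for all points within `radius` of `x` (L-inf),
    by sampling perturbations of the remaining ("free") features."""
    rng = np.random.default_rng(seed)
    base = model.predict(x.reshape(1, -1))[0]
    free = [i for i in range(x.size) if i not in set(fixed_idx)]
    for _ in range(n_samples):
        z = x.copy()
        z[free] += rng.uniform(-radius, radius, size=len(free))
        if model.predict(z.reshape(1, -1))[0] != base:
            return False          # counterexample found within the ball
    return True                    # no counterexample found (not a proof)

Greedily dropping features while the check still passes yields a candidate distance-restricted explanation, but without the formal guarantees that the logic-based approach in the paper is designed to deliver.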
6

Alami, Amine, Jaouad Boumhidi, and Loqman Chakir. "Explainability in CNN based Deep Learning models for medical image classification." In 2024 International Conference on Intelligent Systems and Computer Vision (ISCV), 1–6. IEEE, 2024. http://dx.doi.org/10.1109/iscv60512.2024.10620149.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Rodríguez-Barroso, Nuria, Javier Del Ser, M. Victoria Luzón, and Francisco Herrera. "Defense Strategy against Byzantine Attacks in Federated Machine Learning: Developments towards Explainability." In 2024 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 1–8. IEEE, 2024. http://dx.doi.org/10.1109/fuzz-ieee60900.2024.10611769.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Perikos, Isidoros. "Sensitive Content Detection in Social Networks Using Deep Learning Models and Explainability Techniques." In 2024 IEEE/ACIS 9th International Conference on Big Data, Cloud Computing, and Data Science (BCD), 48–53. IEEE, 2024. http://dx.doi.org/10.1109/bcd61269.2024.10743081.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Gafur, Jamil, Steve Goddard, and William Lai. "Adversarial Robustness and Explainability of Machine Learning Models." In PEARC '24: Practice and Experience in Advanced Research Computing. New York, NY, USA: ACM, 2024. http://dx.doi.org/10.1145/3626203.3670522.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Islam, Md Ariful, Kowshik Nittala, and Garima Bajwa. "Adding Explainability to Machine Learning Models to Detect Chronic Kidney Disease." In 2022 IEEE 23rd International Conference on Information Reuse and Integration for Data Science (IRI). IEEE, 2022. http://dx.doi.org/10.1109/iri54793.2022.00069.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Explainability of machine learning models"

1

Smith, Michael, Erin Acquesta, Arlo Ames, Alycia Carey, Christopher Cuellar, Richard Field, Trevor Maxfield, et al. SAGE Intrusion Detection System: Sensitivity Analysis Guided Explainability for Machine Learning. Office of Scientific and Technical Information (OSTI), September 2021. http://dx.doi.org/10.2172/1820253.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Skryzalin, Jacek, Kenneth Goss, and Benjamin Jackson. Securing machine learning models. Office of Scientific and Technical Information (OSTI), September 2020. http://dx.doi.org/10.2172/1661020.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Martinez, Carianne, Jessica Jones, Drew Levin, Nathaniel Trask, and Patrick Finley. Physics-Informed Machine Learning for Epidemiological Models. Office of Scientific and Technical Information (OSTI), October 2020. http://dx.doi.org/10.2172/1706217.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Lavender, Samantha, and Trent Tinker, eds. Testbed-19: Machine Learning Models Engineering Report. Open Geospatial Consortium, Inc., April 2024. http://dx.doi.org/10.62973/23-033.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Saenz, Juan Antonio, Ismael Djibrilla Boureima, Vitaliy Gyrya, and Susan Kurien. Machine-Learning for Rapid Optimization of Turbulence Models. Office of Scientific and Technical Information (OSTI), July 2020. http://dx.doi.org/10.2172/1638623.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Kulkarni, Sanjeev R. Extending and Unifying Formal Models for Machine Learning. Fort Belvoir, VA: Defense Technical Information Center, July 1997. http://dx.doi.org/10.21236/ada328730.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Banerjee, Boudhayan. Machine Learning Models for Political Video Advertisement Classification. Ames (Iowa): Iowa State University, January 2017. http://dx.doi.org/10.31274/cc-20240624-976.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Valaitis, Vytautas, and Alessandro T. Villa. A Machine Learning Projection Method for Macro-Finance Models. Federal Reserve Bank of Chicago, 2022. http://dx.doi.org/10.21033/wp-2022-19.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Fessel, Kimberly. Machine Learning in Python. Instats Inc., 2024. http://dx.doi.org/10.61700/s74zy0ivgwioe1764.

Full text
Abstract:
This intensive, hands-on workshop offers a deep dive into machine learning with Python, designed for PhD students, professors, and researchers across various fields. Participants will master practical skills in data cleaning, exploratory data analysis, and building powerful machine learning models, including neural networks, to elevate their research. With real-world coding exercises and expert guidance, this workshop will equip you with the tools to turn data into actionable insights.
APA, Harvard, Vancouver, ISO, and other styles
10

Ogunbire, Abimbola, Panick Kalambay, Hardik Gajera, and Srinivas Pulugurtha. Deep Learning, Machine Learning, or Statistical Models for Weather-related Crash Severity Prediction. Mineta Transportation Institute, December 2023. http://dx.doi.org/10.31979/mti.2023.2320.

Full text
Abstract:
Nearly 5,000 people are killed and more than 418,000 are injured in weather-related traffic incidents each year. Assessments of the effectiveness of statistical models for crash severity prediction, compared to machine learning (ML) and deep learning (DL) techniques, help researchers and practitioners know which models are most effective under specific conditions. Given the class imbalance in crash data, the synthetic minority over-sampling technique for nominal data (SMOTE-N) was employed to generate synthetic samples for the minority class. The ordered logit model (OLM) and the ordered probit model (OPM) were evaluated as statistical models, while random forest (RF) and XGBoost were evaluated as ML models. For DL, multi-layer perceptron (MLP) and TabNet were evaluated. The performance of these models varied across severity levels, with property damage only (PDO) predictions performing the best and severe injury predictions performing the worst. The TabNet model performed best in predicting severe injury and PDO crashes, while RF was the most effective in predicting moderate injury crashes. However, all models struggled with severe injury classification, indicating the potential need for model refinement and exploration of other techniques. Hence, the choice of model depends on the specific application and the relative costs of false negatives and false positives. This conclusion underscores the need for further research in this area to improve the prediction accuracy of severe and moderate injury incidents, ultimately improving available data that can be used to increase road safety.
APA, Harvard, Vancouver, ISO, and other styles
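For orientation, the resampling-plus-classification portion of this report could be sketched as below, assuming imbalanced-learn's SMOTEN for the nominal features; the file, column names, and random forest are illustrative stand-ins for the fuller set of statistical, ML, and DL models the report compares.

# Minimal sketch: oversample minority severity classes with SMOTEN, then classify.
import pandas as pd
from imblearn.over_sampling import SMOTEN
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OrdinalEncoder
from sklearn.metrics import classification_report

df = pd.read_csv("crashes.csv")                      # hypothetical weather-related crash records
X_raw = df[["weather", "road_surface", "light_condition", "speed_limit_band"]]
y = df["severity"]                                   # e.g. PDO / moderate / severe

X = OrdinalEncoder().fit_transform(X_raw)            # encode nominal features before resampling
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

# Apply SMOTEN to the training split only, so the test set keeps its natural imbalance.
X_res, y_res = SMOTEN(random_state=42).fit_resample(X_tr, y_tr)

clf = RandomForestClassifier(n_estimators=300, random_state=42).fit(X_res, y_res)
print(classification_report(y_te, clf.predict(X_te)))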
