Follow this link to see other types of publications on the topic: Explainability of machine learning models.

Journal articles on the topic "Explainability of machine learning models"

Cite a source in APA, MLA, Chicago, Harvard, and many other styles


Consult the top 50 journal articles for your research on the topic "Explainability of machine learning models".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract online if it is available in the work's metadata.

Browse journal articles from many scientific disciplines and compile your bibliography correctly.

1

S, Akshay, and Manu Madhavan. "Comparison of Explainability of Machine Learning Based Malayalam Text Classification". ICTACT Journal on Soft Computing 15, no. 1 (July 1, 2024): 3386–91. http://dx.doi.org/10.21917/ijsc.2024.0476.

Full text
Abstract:
Text classification is one of the primary NLP tasks where machine learning (ML) is widely used. Even though the applied machine learning models are similar, the classification task may present specific challenges from language to language. The concept of model explainability can provide an idea of how the models make decisions in these situations. In this paper, the explainability of different text classification models for Malayalam, a morphologically rich Dravidian language predominantly spoken in Kerala, was compared. The experiments considered classification models from both traditional ML and deep learning families, were conducted on three different datasets, and formulated explainability scores for each of the selected models. The results showed that deep learning models performed very well on performance metrics, whereas traditional machine learning models did as well, if not better, in terms of explainability.
2

Park, Min Sue, Hwijae Son, Chongseok Hyun, and Hyung Ju Hwang. "Explainability of Machine Learning Models for Bankruptcy Prediction". IEEE Access 9 (2021): 124887–99. http://dx.doi.org/10.1109/access.2021.3110270.

Full text
3

Cheng, Xueyi, and Chang Che. "Interpretable Machine Learning: Explainability in Algorithm Design". Journal of Industrial Engineering and Applied Science 2, no. 6 (December 1, 2024): 65–70. https://doi.org/10.70393/6a69656173.323337.

Full text
Abstract:
In recent years, there has been a high demand for transparency and accountability in machine learning models, especially in domains such as healthcare and finance. In this paper, we examine in depth how to make machine learning models more interpretable, with a focus on the importance of explainability in algorithm design. The main objective of this paper is to fill this gap and provide a comprehensive survey and analytical study of AutoML. To that end, we first introduce AutoML technology and review its various tools and techniques.
4

Bozorgpanah, Aso, Vicenç Torra, and Laya Aliahmadipour. "Privacy and Explainability: The Effects of Data Protection on Shapley Values". Technologies 10, no. 6 (December 1, 2022): 125. http://dx.doi.org/10.3390/technologies10060125.

Full text
Abstract:
There is an increasing need to provide explainability for machine learning models. There are different alternatives for providing explainability, for example, local and global methods; one approach is based on Shapley values. Privacy is another critical requirement when dealing with sensitive data, and data-driven machine learning models may lead to disclosure. Data privacy provides several methods for ensuring privacy. In this paper, we study how explainability methods based on Shapley values are affected by privacy methods. We show that some degree of protection still preserves the information carried by Shapley values for the four machine learning models studied. Experiments indicate that, among the four models, the Shapley values of linear models are the most affected.
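
To make the interplay between data protection and Shapley-value explanations concrete, here is a minimal Python sketch (an illustration, not the authors' protocol): a linear model is trained on original data and on data perturbed with simple additive noise as a stand-in masking method, and the resulting mean absolute SHAP values are compared. It assumes scikit-learn and shap are installed.

import numpy as np
import shap
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression

X, y = make_regression(n_samples=500, n_features=5, noise=0.1, random_state=0)

def mean_abs_shap(features, targets):
    # Fit a model and return the mean absolute SHAP value per feature.
    model = LinearRegression().fit(features, targets)
    explainer = shap.Explainer(model, features)
    return np.abs(explainer(features).values).mean(axis=0)

rng = np.random.default_rng(0)
X_protected = X + rng.normal(scale=0.5, size=X.shape)  # additive noise as a stand-in privacy mechanism

print("shift in feature attributions:", mean_abs_shap(X_protected, y) - mean_abs_shap(X, y))
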
5

Zhang, Xueting. "Traffic Flow Prediction Based on Explainable Machine Learning". Highlights in Science, Engineering and Technology 56 (July 14, 2023): 56–64. http://dx.doi.org/10.54097/hset.v56i.9816.

Full text
Abstract:
Traffic flow prediction is one of the important links in realizing an urban intelligent transportation system. Thanks to in-depth research on artificial intelligence theories, machine learning methods have been widely used in intelligent transportation engineering. However, because of their "black box" character, their application and further development are limited. Exploring the explainability of machine learning models in traffic flow prediction is an important step toward making them more reliable in traffic engineering and other practical applications. This paper selects the RandomForest and CatBoost models to study traffic flow prediction under temporal and spatial changes, and comprehensively evaluates and compares them with LightGBM and the other two prediction models through different indicators. To address the models' low explainability, their feature importance is analyzed and compared with reality. The results show that the RandomForest and CatBoost models make good predictions, and their feature importance is consistent with the actual situation, verifying their explainability.
6

Pendyala, Vishnu, and Hyungkyun Kim. "Assessing the Reliability of Machine Learning Models Applied to the Mental Health Domain Using Explainable AI". Electronics 13, no. 6 (March 8, 2024): 1025. http://dx.doi.org/10.3390/electronics13061025.

Full text
Abstract:
Machine learning is increasingly and ubiquitously being used in the medical domain. Evaluation metrics like accuracy, precision, and recall may indicate the performance of the models but not necessarily the reliability of their outcomes. This paper assesses the effectiveness of a number of machine learning algorithms applied to an important dataset in the medical domain, specifically mental health, by employing explainability methodologies. Using multiple machine learning algorithms and model explainability techniques, this work provides insights into the models' workings to help determine the reliability of their predictions. The results are not intuitive: the models were found to focus significantly on less relevant features and, at times, to rely on an unsound ranking of the features to make their predictions. This paper therefore argues that it is important for research in applied machine learning to provide insights into the explainability of models in addition to other performance metrics like accuracy, particularly for applications in critical domains such as healthcare.
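
A reliability check of the kind described above can be approximated with permutation importance; the sketch below (synthetic data, not the mental-health dataset used in the paper) ranks the features a fitted classifier actually relies on, so that a high rank for a clinically irrelevant feature would stand out.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(clf, X_test, y_test, n_repeats=20, random_state=0)

# Features whose shuffling barely changes the score contribute little to the predictions.
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.4f} +/- {result.importances_std[i]:.4f}")
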
7

Kim, Dong-sup, and Seungwoo Shin. "The Economic Explainability of Machine Learning and Standard Econometric Models: An Application to the U.S. Mortgage Default Risk". International Journal of Strategic Property Management 25, no. 5 (July 13, 2021): 396–412. http://dx.doi.org/10.3846/ijspm.2021.15129.

Full text
Abstract:
This study aims to bridge the gap between two perspectives on explainability (machine learning and engineering on one side, economics and standard econometrics on the other) by applying three marginal measurements. The existing real estate literature has primarily used econometric models to analyze the factors that affect the default risk of mortgage loans. In this study, however, we estimate a default risk model using a machine learning-based approach with the help of a U.S. securitized mortgage loan database. Moreover, we compare the economic explainability of the models by calculating the marginal effect and marginal importance of individual risk factors using both econometric and machine learning approaches. Machine learning-based models are quite effective in terms of predictive power; however, the general perception is that they do not efficiently explain the causal relationships within them. This study utilizes the concepts of marginal effects and marginal importance to compare the explanatory power of individual input variables across models, which can simultaneously help improve the explainability of machine learning techniques and enhance the performance of standard econometric methods.
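
As a rough illustration of comparing marginal measures across model families (synthetic data; the paper's own three marginal measurements and mortgage database are not reproduced here), the sketch below contrasts the average marginal effect of one variable in a logistic regression with the slope of that variable's partial dependence curve in a gradient boosting model. It assumes scikit-learn 1.3 or newer for the grid_values key.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import partial_dependence
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
logit = LogisticRegression(max_iter=1000).fit(X, y)
gbm = GradientBoostingClassifier(random_state=0).fit(X, y)

# Average marginal effect of feature 0 in the logit model: beta_0 * p * (1 - p), averaged over samples.
p = logit.predict_proba(X)[:, 1]
ame_logit = float(np.mean(logit.coef_[0][0] * p * (1 - p)))

# Marginal measure for the GBM: average slope of its partial dependence curve for feature 0.
pd_result = partial_dependence(gbm, X, features=[0], kind="average", method="brute")
grid, avg = pd_result["grid_values"][0], pd_result["average"][0]
ame_gbm = float(np.gradient(avg, grid).mean())

print(f"logit average marginal effect: {ame_logit:.4f}, GBM partial-dependence slope: {ame_gbm:.4f}")
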
8

Topcu, Deniz. "How to explain a machine learning model: HbA1c classification example". Journal of Medicine and Palliative Care 4, no. 2 (March 27, 2023): 117–25. http://dx.doi.org/10.47582/jompac.1259507.

Full text
Abstract:
Aim: Machine learning tools have various applications in healthcare. However, the implementation of developed models is still limited because of various challenges. One of the most important problems is the lack of explainability of machine learning models. Explainability refers to the capacity to reveal the reasoning and logic behind the decisions made by AI systems, making it straightforward for human users to understand the process and how the system arrived at a specific outcome. The study aimed to compare the performance of different model-agnostic explanation methods using two different ML models created for HbA1c classification. Material and Method: The H2O AutoML engine was used to develop two ML models (gradient boosting machine (GBM) and default random forest (DRF)) using 3,036 records from the NHANES open data set. Both global and local model-agnostic explanation methods, including performance metrics, feature importance analysis, and partial dependence, breakdown, and Shapley additive explanation plots, were utilized for the developed models. Results: While the GBM and DRF models have similar performance metrics, such as mean per-class error and area under the receiver operating characteristic curve, they had slightly different variable importance. Local explainability methods also attributed different contributions to the features. Conclusion: This study evaluated the significance of explainable machine learning techniques for understanding complicated models and their role in incorporating AI into healthcare. The results indicate that, although there are limitations to current explainability methods, particularly for clinical use, both global and local explanation models offer a glimpse into evaluating the model and can be used to enhance or compare models.
9

Rodríguez Mallma, Mirko Jerber, Luis Zuloaga-Rotta, Rubén Borja-Rosales, Josef Renato Rodríguez Mallma, Marcos Vilca-Aguilar, María Salas-Ojeda, and David Mauricio. "Explainable Machine Learning Models for Brain Diseases: Insights from a Systematic Review". Neurology International 16, no. 6 (October 29, 2024): 1285–307. http://dx.doi.org/10.3390/neurolint16060098.

Full text
Abstract:
In recent years, Artificial Intelligence (AI) methods, specifically Machine Learning (ML) models, have been providing outstanding results in different areas of knowledge, with the health area being one of its most impactful fields of application. However, to be applied reliably, these models must provide users with clear, simple, and transparent explanations about the medical decision-making process. This systematic review aims to investigate the use and application of explainability in ML models used in brain disease studies. A systematic search was conducted in three major bibliographic databases, Web of Science, Scopus, and PubMed, from January 2014 to December 2023. A total of 133 relevant studies were identified and analyzed out of a total of 682 found in the initial search, in which the explainability of ML models in the medical context was studied, identifying 11 ML models and 12 explainability techniques applied in the study of 20 brain diseases.
10

Shendkar, Bhagyashree D. "Explainable Machine Learning Models for Real-Time Threat Detection in Cybersecurity". Panamerican Mathematical Journal 35, no. 1s (November 13, 2024): 264–75. http://dx.doi.org/10.52783/pmj.v35.i1s.2313.

Full text
Abstract:
In the rapidly evolving landscape of cybersecurity, traditional machine learning models often operate as "black boxes," providing high accuracy but lacking transparency in decision-making. This lack of explainability poses challenges for trust and accountability, especially in critical areas like threat detection and incident response. Explainable machine learning models aim to address this by making the model's predictions more understandable and interpretable to users. This research integrates explainable machine learning models for real-time threat detection in cybersecurity. Data from multiple sources, including network traffic, system logs, and user behavior, undergo preprocessing such as cleaning, feature extraction, and normalization. The processed data is passed through various machine learning models, including traditional approaches like SVM and decision trees, as well as deep learning models like CNN and RNN. Explainability techniques such as LIME, SHAP, and attention mechanisms provide transparency, ensuring interpretable predictions. The explanations are delivered through a user interface that generates alerts, visualizations, and reports, facilitating effective threat assessment and incident response in decision support systems. This framework enhances model performance, trust, and reliability in complex cybersecurity scenarios.
11

Chen, Yinhe. "Enhancing stability and explainability in reinforcement learning with machine learning". Applied and Computational Engineering 101, no. 1 (November 8, 2024): 25–34. http://dx.doi.org/10.54254/2755-2721/101/20240943.

Full text
Abstract:
In the field of reinforcement learning, training agents using machine learning algorithms to learn and perform tasks in complex environments has become a prevalent approach. However, reinforcement learning faces challenges such as training instability and decision opacity, which limit its feasibility in real-world applications. To address stability and transparency in reinforcement learning, this project uses advanced algorithms such as Proximal Policy Optimization (PPO), Q-DAGGER, and Gradient Boosting Decision Trees to set up reinforcement learning agents in the OpenAI Gymnasium environment. Specifically, the study selected the Atari game Breakout as the testbed, enhancing training efficiency and game performance by refining reward structures and decision-making processes and integrating interpretable models to provide explanations for agent decisions. The study has successfully developed robust reinforcement learning agents that excel in complex environments: by employing PPO, Q-DAGGER, and Gradient Boosting Decision Trees, it has addressed training instability and improved game performance through optimized reward structures and decision processes. Additionally, by integrating interpretable models, the study provides insights into the learned strategies of the agents, thereby enhancing decision transparency. These findings provide crucial support for the broader application of reinforcement learning in real-world scenarios and offer valuable insights for tackling other complex tasks.
12

Borch, Christian, and Bo Hee Min. "Toward a sociology of machine learning explainability: Human–machine interaction in deep neural network-based automated trading". Big Data & Society 9, no. 2 (July 2022): 205395172211113. http://dx.doi.org/10.1177/20539517221111361.

Full text
Abstract:
Machine learning systems are making considerable inroads in society owing to their ability to recognize and predict patterns. However, the decision-making logic of some widely used machine learning models, such as deep neural networks, is characterized by opacity, thereby rendering them exceedingly difficult for humans to understand and explain and, as a result, potentially risky to use. Considering the importance of addressing this opacity, this paper calls for research that studies empirically and theoretically how machine learning experts and users seek to attain machine learning explainability. Focusing on automated trading, we take steps in this direction by analyzing a trading firm’s quest for explaining its deep neural network system’s actionable predictions. We demonstrate that this explainability effort involves a particular form of human–machine interaction that contains both anthropomorphic and technomorphic elements. We discuss this attempt to attain machine learning explainability in light of reflections on cross-species companionship and consider it an example of human–machine companionship.
13

Kolarik, Michal, Martin Sarnovsky, Jan Paralic, and Frantisek Babic. "Explainability of deep learning models in medical video analysis: a survey". PeerJ Computer Science 9 (March 14, 2023): e1253. http://dx.doi.org/10.7717/peerj-cs.1253.

Full text
Abstract:
Deep learning methods have proven to be effective for multiple diagnostic tasks in medicine and have been performing significantly better than other traditional machine learning methods. However, the black-box nature of deep neural networks has restricted their use in real-world applications, especially in healthcare. Therefore, the explainability of machine learning models, which focuses on providing comprehensible explanations of model outputs, may affect the possibility of adopting such models in clinical use. There are various studies reviewing approaches to explainability in multiple domains. This article provides a review of the current approaches and applications of explainable deep learning for a specific area of medical data analysis: medical video processing tasks. The article introduces the field of explainable AI and summarizes the most important requirements for explainability in medical applications. Subsequently, we provide an overview of existing methods and evaluation metrics, focusing on those that can be applied to analytical tasks involving the processing of video data in the medical domain. Finally, we identify some of the open research issues in the analysed area.
14

Pezoa, R., L. Salinas, and C. Torres. "Explainability of High Energy Physics events classification using SHAP". Journal of Physics: Conference Series 2438, no. 1 (February 1, 2023): 012082. http://dx.doi.org/10.1088/1742-6596/2438/1/012082.

Full text
Abstract:
Complex machine learning models have been fundamental for achieving accurate results in event classification in High Energy Physics (HEP). However, these complex models, or black-box systems, lack transparency and interpretability. In this work, we use the SHapley Additive exPlanations (SHAP) method to explain the output of two event classifiers based on eXtreme Gradient Boosting (XGBoost) and deep neural networks (DNN). We compute SHAP values to interpret the results and analyze the importance of individual features, and the experiments show that the SHAP method has high potential for understanding complex machine learning models in the context of high energy physics.
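
For readers unfamiliar with the workflow, a minimal sketch of SHAP applied to a boosted-tree event classifier looks like the following (synthetic features standing in for HEP event variables; assumes the xgboost and shap packages):

import numpy as np
import shap
import xgboost as xgb
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)  # stand-in for signal/background events
model = xgb.XGBClassifier(n_estimators=200, max_depth=4).fit(X, y)

explainer = shap.TreeExplainer(model)     # exact tree-path SHAP values
shap_values = explainer.shap_values(X)    # one attribution per feature per event

# Rank features by their mean absolute contribution to the classifier output.
print(np.abs(shap_values).mean(axis=0).argsort()[::-1])
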
15

Mukendi, Christian Mulomba, Asser Kasai Itakala, and Pierrot Muteba Tibasima. "Beyond Accuracy: Building Trustworthy Extreme Events Predictions Through Explainable Machine Learning". European Journal of Theoretical and Applied Sciences 2, no. 1 (January 1, 2024): 199–218. http://dx.doi.org/10.59324/ejtas.2024.2(1).15.

Full text
Abstract:
Extreme events, despite their rarity, pose a significant threat due to their immense impact. While machine learning has emerged as a game-changer for predicting these events, the crucial challenge lies in trusting these predictions. Existing studies primarily focus on improving accuracy, neglecting the crucial aspect of model explainability. This gap hinders the integration of these solutions into decision-making processes. Addressing this critical issue, this paper investigates the explainability of extreme event forecasting using a hybrid forecasting and classification approach. By focusing on two economic indicators, Business Confidence Index (BCI) and Consumer Confidence Index (CCI), the study aims to understand why and when extreme event predictions can be trusted, especially in the context of imbalanced classes (normal vs. extreme events). Machine learning models are comparatively analysed, exploring their explainability through dedicated tools. Additionally, various class balancing methods are assessed for their effectiveness. This combined approach delves into the factors influencing extreme event prediction accuracy, offering valuable insights for building trustworthy forecasting models.
16

Wang, Liyang, Yu Cheng, Ningjing Sang, and You Yao. "Explainability and Stability of Machine Learning Applications: A Financial Risk Management Perspective". Modern Economics & Management Forum 5, no. 5 (November 6, 2024): 956. http://dx.doi.org/10.32629/memf.v5i5.2902.

Full text
Abstract:
With advancements in computing power, hardware, and machine learning algorithms, more and more industry sectors have started to incorporate machine learning into their core business. The adoption of machine learning models in risk management has been slower, due to the sensitive nature of the tasks and data involved and to regulatory pressure. This paper evaluates the explainability and stability of machine learning models on a traditional financial risk management task and finds that machine learning models can exhibit an enhanced level of adaptability and stability. However, different models can lead to drastically different performance, which requires companies to spend additional resources on training and development. Overall, the net benefits are overwhelming if done correctly.
17

Gupta, Gopal, Huaduo Wang, Kinjal Basu, Farahad Shakerin, Parth Padalkar, Elmer Salazar, Sarat Chandra Varanasi, and Sopam Dasgupta. "Logic-Based Explainable and Incremental Machine Learning". Proceedings of the AAAI Symposium Series 2, no. 1 (January 22, 2024): 230–32. http://dx.doi.org/10.1609/aaaiss.v2i1.27678.

Full text
Abstract:
Mainstream machine learning methods lack interpretability, explainability, incrementality, and data-economy. We propose using logic programming (LP) to rectify these problems. We discuss the FOLD family of rule-based machine learning algorithms that learn models from relational datasets as a set of default rules. These models are competitive with state-of-the-art machine learning systems in terms of accuracy and execution efficiency. We also motivate how logic programming can be useful for theory revision and explanation based learning.
18

Collin, Adele, Adrián Ayuso-Muñoz, Paloma Tejera-Nevado, Lucía Prieto-Santamaría, Antonio Verdejo-García, Carmen Díaz-Batanero, Fermín Fernández-Calderón, Natalia Albein-Urios, Óscar M. Lozano, and Alejandro Rodríguez-González. "Analyzing Dropout in Alcohol Recovery Programs: A Machine Learning Approach". Journal of Clinical Medicine 13, no. 16 (August 15, 2024): 4825. http://dx.doi.org/10.3390/jcm13164825.

Full text
Abstract:
Background: Retention in treatment is crucial for the success of interventions targeting alcohol use disorder (AUD), which affects over 100 million people globally. Most previous studies have used classical statistical techniques to predict treatment dropout, and their results remain inconclusive. This study aimed to use novel machine learning tools to identify models that predict dropout with greater precision, enabling the development of better retention strategies for those at higher risk. Methods: A retrospective observational study of 39,030 participants (17.3% female) enrolled in outpatient-based treatment for alcohol use disorder in a state-wide public treatment network was used. Participants were recruited between 1 January 2015 and 31 December 2019. We applied different machine learning algorithms to create models that predict the premature cessation of treatment (dropout). To increase the explainability of the most precise models, which are considered black-box models, explainability technique analyses were also applied. Results: The best models were obtained with one of the so-called black-box models, the support vector classifier (SVC). From the explainability perspective, the variables with the greatest explanatory capacity for treatment dropout were previous drug use and psychiatric comorbidity; among these, having undergone previous opioid substitution treatment and receiving coordinated psychiatric care in mental health services showed the greatest capacity for predicting dropout. Conclusions: By using novel machine learning techniques on a large representative sample of patients enrolled in alcohol use disorder treatment, we identified several machine learning models that help predict a higher risk of treatment dropout. Previous treatment for other substance use disorders (SUDs) and concurrent psychiatric comorbidity were the best predictors of dropout, and patients showing these characteristics may need more intensive or complementary interventions to benefit from treatment.
19

Aas, Kjersti, Arthur Charpentier, Fei Huang, and Ronald Richman. "Insurance analytics: prediction, explainability, and fairness". Annals of Actuarial Science 18, no. 3 (November 2024): 535–39. https://doi.org/10.1017/s1748499524000289.

Full text
Abstract:
The expanding application of advanced analytics in insurance has generated numerous opportunities, such as more accurate predictive modeling powered by machine learning and artificial intelligence (AI) methods, the utilization of novel and unstructured datasets, and the automation of key operations. Significant advances in these areas are being made through novel applications and adaptations of predictive modeling techniques for insurance purposes, while, concurrently, rapid advances in machine learning methods are being made outside of the insurance sector. However, these innovations also bring substantial challenges, particularly around the transparency, explanation, and fairness of complex algorithmic models and the economic and societal impacts of their adoption in decision-making. As insurance is a highly regulated industry, models may be required by regulators to be explainable, in order to enable analysis of the basis for decision making. Due to the societal importance of insurance, significant attention is being paid to ensuring that insurance models do not discriminate unfairly. In this special issue, we feature papers that explore key issues in insurance analytics, focusing on prediction, explainability, and fairness.
20

Tocchetti, Andrea, and Marco Brambilla. "The Role of Human Knowledge in Explainable AI". Data 7, no. 7 (July 6, 2022): 93. http://dx.doi.org/10.3390/data7070093.

Full text
Abstract:
As the performance and complexity of machine learning models have grown significantly over the last years, there has been an increasing need to develop methodologies to describe their behaviour. Such a need has mainly arisen due to the widespread use of black-box models, i.e., high-performing models whose internal logic is challenging to describe and understand. Therefore, the machine learning and AI field is facing a new challenge: making models more explainable through appropriate techniques. The final goal of an explainability method is to faithfully describe the behaviour of a (black-box) model to users who can get a better understanding of its logic, thus increasing the trust and acceptance of the system. Unfortunately, state-of-the-art explainability approaches may not be enough to guarantee the full understandability of explanations from a human perspective. For this reason, human-in-the-loop methods have been widely employed to enhance and/or evaluate explanations of machine learning models. These approaches focus on collecting human knowledge that AI systems can then employ or involving humans to achieve their objectives (e.g., evaluating or improving the system). This article aims to present a literature overview on collecting and employing human knowledge to improve and evaluate the understandability of machine learning models through human-in-the-loop approaches. Furthermore, a discussion on the challenges, state-of-the-art, and future trends in explainability is also provided.
21

Keçeli, Tarık, Nevruz İlhanlı, and Kemal Hakan Gülkesen. "Prediction of retinopathy through machine learning in diabetes mellitus". Journal of Health Sciences and Medicine 7, no. 4 (July 30, 2024): 467–71. http://dx.doi.org/10.32322/jhsm.1502050.

Full text
Abstract:
Aims: To develop a machine learning model on an electronic health record (EHR) dataset for predicting retinopathy in people with diabetes mellitus (DM) and to analyze its explainability. Methods: A public dataset based on EHR records of patients diagnosed with DM in İstanbul, Turkiye (n=77,724) was used. The categorical variable indicating a retinopathy-positive diagnosis was chosen as the target variable. Variables were preprocessed and split into training and test sets with the same class distribution for model training and evaluation, respectively. Four machine learning models were developed for comparison: logistic regression, decision tree, random forest, and eXtreme Gradient Boosting (XGBoost). Each algorithm's optimal hyperparameters were obtained using randomized search cross-validation with 10 folds, followed by training of the models with those settings. The area under the receiver operating characteristic (ROC) curve (AUC) was used as the primary evaluation metric. SHapley Additive exPlanations (SHAP) analysis was performed to provide explainability for the trained models. Results: The XGBoost model showed the best results on retinopathy classification on the test set with a low amount of overfitting (AUC: 0.813, 95% CI: 0.808-0.819). The 15 variables with the highest impact on the prediction were obtained for explainability, including eye and ear drugs, other eye diseases, disorders of refraction, insulin aspart, and hemoglobin A1c (HbA1c). Conclusion: Retinopathy can be successfully detected early from EHR data in people with diabetes using machine learning. Our study reports that the XGBoost algorithm performed best in this research, with the presence of other eye diseases, insulin dependence, and high HbA1c observed as important predictors of retinopathy.
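
The general recipe described here (10-fold randomized hyperparameter search for XGBoost, AUC evaluation, then SHAP) can be sketched as follows; the feature matrix is a synthetic stand-in, not the İstanbul EHR dataset, and the search grid is hypothetical.

import shap
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import RandomizedSearchCV, train_test_split
from xgboost import XGBClassifier

X, y = make_classification(n_samples=3000, n_features=15, weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

param_distributions = {
    "n_estimators": [100, 300, 500],
    "max_depth": [3, 4, 6],
    "learning_rate": [0.01, 0.05, 0.1],
    "subsample": [0.7, 0.9, 1.0],
}
search = RandomizedSearchCV(XGBClassifier(), param_distributions,
                            n_iter=20, cv=10, scoring="roc_auc", random_state=0)
search.fit(X_train, y_train)

best = search.best_estimator_
print("test AUC:", roc_auc_score(y_test, best.predict_proba(X_test)[:, 1]))
shap.summary_plot(shap.TreeExplainer(best).shap_values(X_test), X_test)  # global feature impact
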
22

Burkart, Nadia, and Marco F. Huber. "A Survey on the Explainability of Supervised Machine Learning". Journal of Artificial Intelligence Research 70 (January 19, 2021): 245–317. http://dx.doi.org/10.1613/jair.1.12228.

Full text
Abstract:
Predictions obtained by, e.g., artificial neural networks have high accuracy, but humans often perceive the models as black boxes. Insights into the decision making are mostly opaque to humans. Understanding the decision making in highly sensitive areas such as healthcare or finance is of paramount importance. The decision making behind the black boxes needs to be more transparent, accountable, and understandable for humans. This survey paper provides essential definitions and an overview of the different principles and methodologies of explainable Supervised Machine Learning (SML). We conduct a state-of-the-art survey that reviews past and recent explainable SML approaches and classifies them according to the introduced definitions. Finally, we illustrate the principles by means of an explanatory case study and discuss important future directions.
23

Kulaklıoğlu, Duru. "Explainable AI: Enhancing Interpretability of Machine Learning Models". Human Computer Interaction 8, no. 1 (December 6, 2024): 91. https://doi.org/10.62802/z3pde490.

Full text
Abstract:
Explainable Artificial Intelligence (XAI) is emerging as a critical field to address the “black box” nature of many machine learning (ML) models. While these models achieve high predictive accuracy, their opacity undermines trust, adoption, and ethical compliance in critical domains such as healthcare, finance, and autonomous systems. This research explores methodologies and frameworks to enhance the interpretability of ML models, focusing on techniques like feature attribution, surrogate models, and counterfactual explanations. By balancing model complexity and transparency, this study highlights strategies to bridge the gap between performance and explainability. The integration of XAI into ML workflows not only fosters trust but also aligns with regulatory requirements, enabling actionable insights for stakeholders. The findings reveal a roadmap to design inherently interpretable models and tools for post-hoc analysis, offering a sustainable approach to democratize AI.
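
Of the techniques listed above, counterfactual explanations are perhaps the easiest to demonstrate; the toy sketch below (brute-force over a grid, whereas real libraries use optimization) finds a small single-feature change that flips a classifier's prediction.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

x = X[0].copy()
original = model.predict([x])[0]

best = None  # (feature index, new value, |change|)
for j in range(x.shape[0]):
    for delta in sorted(np.linspace(-3, 3, 61), key=abs):  # try the smallest changes first
        candidate = x.copy()
        candidate[j] += delta
        if model.predict([candidate])[0] != original:
            if best is None or abs(delta) < best[2]:
                best = (j, candidate[j], abs(delta))
            break

print("original class:", original, "minimal counterfactual change:", best)
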
24

Nagahisarchoghaei, Mohammad, Nasheen Nur, Logan Cummins, Nashtarin Nur, Mirhossein Mousavi Karimi, Shreya Nandanwar, Siddhartha Bhattacharyya, and Shahram Rahimi. "An Empirical Survey on Explainable AI Technologies: Recent Trends, Use-Cases, and Categories from Technical and Application Perspectives". Electronics 12, no. 5 (February 22, 2023): 1092. http://dx.doi.org/10.3390/electronics12051092.

Full text
Abstract:
In a wide range of industries and academic fields, artificial intelligence is becoming increasingly prevalent. AI models are taking on more crucial decision-making tasks as they grow in popularity and performance. Although AI models, particularly machine learning models, are successful in research, they have numerous limitations and drawbacks in practice. Furthermore, due to the lack of transparency behind their behavior, users need more understanding of how these models make specific decisions, especially in complex state-of-the-art machine learning algorithms. Complex machine learning systems utilize less transparent algorithms, thereby exacerbating the problem. This survey analyzes the significance and evolution of explainable AI (XAI) research across various domains and applications. Throughout this study, a rich repository of explainability classifications and summaries has been developed, along with their applications and practical use cases. We believe this study will make it easier for researchers to understand all explainability methods and access their applications simultaneously.
25

Przybył, Krzysztof. "Explainable AI: Machine Learning Interpretation in Blackcurrant Powders". Sensors 24, no. 10 (May 17, 2024): 3198. http://dx.doi.org/10.3390/s24103198.

Full text
Abstract:
Recently, explainability in machine and deep learning has become an important area of research and interest, both because of the increasing use of artificial intelligence (AI) methods and the need to understand the decisions made by models. Explainable artificial intelligence (XAI) responds to a growing awareness of, among other things, data mining, error elimination, and the learning performance of various AI algorithms. Moreover, XAI makes the decisions made by models more transparent as well as effective. In this study, models from the 'glass box' group (Decision Tree, among others) and the 'black box' group (Random Forest, among others) were proposed to understand the identification of selected types of currant powders. The learning process of these models was carried out to determine performance indicators such as accuracy, precision, recall, and F1-score, and was visualized using Local Interpretable Model-Agnostic Explanations (LIME) to predict the effectiveness of identifying specific types of blackcurrant powders based on texture descriptors such as entropy, contrast, correlation, dissimilarity, and homogeneity. Bagging (Bagging_100), Decision Tree (DT0), and Random Forest (RF7_gini) proved to be the most effective models for the interpretability of currant powders. The classifier performance measures in terms of accuracy, precision, recall, and F1-score for Bagging_100 all reached values of approximately 0.979, while DT0 reached values of 0.968, 0.972, 0.968, and 0.969, and RF7_gini reached values of 0.963, 0.964, 0.963, and 0.963. These models achieved classifier performance measures of greater than 96%. In the future, XAI using agnostic models can be an additional important tool to help analyze data, including food products, even online.
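
The LIME step of such a pipeline can be sketched as follows (random data standing in for the texture descriptors and powder classes; requires the lime and scikit-learn packages):

from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

feature_names = ["entropy", "contrast", "correlation", "dissimilarity", "homogeneity"]
X, y = make_classification(n_samples=300, n_features=5, n_informative=4, n_redundant=0,
                           n_classes=3, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                 class_names=["powder_A", "powder_B", "powder_C"])
exp = explainer.explain_instance(X[0], model.predict_proba, num_features=5)
print(exp.as_list(label=1))  # local feature contributions for one sample, explaining class index 1
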
26

Zubair, Md, Helge Janicke, Ahmad Mohsin, Leandros Maglaras, and Iqbal H. Sarker. "Automated Sensor Node Malicious Activity Detection with Explainability Analysis". Sensors 24, no. 12 (June 7, 2024): 3712. http://dx.doi.org/10.3390/s24123712.

Full text
Abstract:
Cybersecurity has become a major concern in the modern world due to our heavy reliance on cyber systems. Advanced automated systems utilize many sensors for intelligent decision-making, and any malicious activity of these sensors could potentially lead to a system-wide collapse. To ensure safety and security, it is essential to have a reliable system that can automatically detect and prevent any malicious activity, and modern detection systems are built on machine learning (ML) models. Most often, the dataset generated from the sensor nodes for detecting malicious activity is highly imbalanced because the Malicious class contains far fewer samples than the Non-Malicious class. To address these issues, we proposed a hybrid data balancing technique that combines cluster-based under-sampling with the Synthetic Minority Oversampling Technique (SMOTE). We have also proposed an ensemble machine learning model that outperforms other standard ML models, achieving 99.7% accuracy. Additionally, we have identified the critical features that pose security risks to the sensor nodes, with an extensive explainability analysis of our proposed machine learning model. In brief, we have explored a hybrid data balancing method, developed a robust ensemble machine learning model for detecting malicious sensor nodes, and conducted a thorough analysis of the model's explainability.
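
A generic version of such a hybrid balancing pipeline might look like the sketch below (ClusterCentroids under-sampling plus SMOTE feeding a soft-voting ensemble; the paper's exact hybrid scheme and sensor data are not reproduced). It assumes the imbalanced-learn package.

from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from imblearn.under_sampling import ClusterCentroids
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier, VotingClassifier
from sklearn.model_selection import cross_val_score

# Rare "Malicious" class: about 3% of the samples.
X, y = make_classification(n_samples=5000, weights=[0.97, 0.03], random_state=0)

ensemble = VotingClassifier([("rf", RandomForestClassifier(random_state=0)),
                             ("gb", GradientBoostingClassifier(random_state=0))], voting="soft")

pipeline = Pipeline([
    ("under", ClusterCentroids(sampling_strategy=0.3, random_state=0)),  # cluster-based shrinking of the majority
    ("smote", SMOTE(random_state=0)),                                    # synthetic minority samples
    ("model", ensemble),
])
print("cross-validated F1:", cross_val_score(pipeline, X, y, scoring="f1", cv=5).mean())
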
27

Ullah, Ihsan, Andre Rios, Vaibhav Gala, and Susan Mckeever. "Explaining Deep Learning Models for Tabular Data Using Layer-Wise Relevance Propagation". Applied Sciences 12, no. 1 (December 23, 2021): 136. http://dx.doi.org/10.3390/app12010136.

Full text
Abstract:
Trust and credibility in machine learning models are bolstered by the ability of a model to explain its decisions. While the explainability of deep learning models is a well-known challenge, a further challenge is the clarity of the explanation itself for the relevant stakeholders of the model. Layer-wise Relevance Propagation (LRP), an established explainability technique developed for deep models in computer vision, provides intuitive, human-readable heat maps of input images. We present the novel application of LRP to tabular datasets containing mixed data (categorical and numerical) using a deep neural network (1D-CNN), for Credit Card Fraud detection and Telecom Customer Churn prediction use cases. We show how LRP is more effective for explainability than the traditional approaches of Local Interpretable Model-agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP). This effectiveness holds both locally, at the sample level, and holistically over the whole testing set. We also discuss the significant computational time advantage of LRP (1–2 s) over LIME (22 s) and SHAP (108 s) on the same laptop, and thus its potential for real-time application scenarios. In addition, our validation of LRP has highlighted features for enhancing model performance, thus opening up a new area of research: using XAI as an approach for feature subset selection.
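
The core LRP redistribution step is simple enough to show in a few lines of NumPy; the didactic sketch below applies the epsilon rule to a single dense layer and is not the 1D-CNN pipeline evaluated in the paper.

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)        # input features (e.g., tabular attributes)
W = rng.normal(size=(4, 3))   # layer weights
b = rng.normal(size=3)        # layer biases

z = x @ W + b                 # pre-activations
R_out = np.maximum(z, 0)      # start: relevance = the layer's ReLU output

# Epsilon rule: redistribute each neuron's relevance to its inputs in
# proportion to their contributions x_i * w_ij.
eps = 1e-6
R_in = x * (W @ (R_out / (z + eps * np.sign(z))))

# Relevance is approximately conserved, up to the bias and epsilon terms.
print("input relevances:", R_in)
print("sums:", R_in.sum(), R_out.sum())
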
28

Alsubhi, Bashayer, Basma Alharbi, Nahla Aljojo, Ameen Banjar, Araek Tashkandi, Abdullah Alghoson, and Anas Al-Tirawi. "Effective Feature Prediction Models for Student Performance". Engineering, Technology & Applied Science Research 13, no. 5 (October 13, 2023): 11937–44. http://dx.doi.org/10.48084/etasr.6345.

Full text
Abstract:
The ability to accurately predict how students will perform has a significant impact on the teaching and learning process, as it can inform the instructor to devote extra attention to a particular student or group of students, which in turn prevents those students from failing a certain course. When it comes to educational data mining, the accuracy and explainability of predictions are of equal importance. Accuracy refers to the degree to which the predicted value was accurate, and explainability refers to the degree to which the predicted value could be understood. This study used machine learning to predict the features that best contribute to the performance of a student, using a dataset collected from a public university in Jeddah, Saudi Arabia. Experimental analysis was carried out with Black-Box (BB) and White-Box (WB) machine-learning classification models. In BB classification models, a decision (or class) is often predicted with limited explainability on why this decision was made, while in WB classification models decisions made are fully interpretable to the stakeholders. The results showed that these BB models performed similarly in terms of accuracy and recall whether the classifiers attempted to predict an A or an F grade. When comparing the classifiers' accuracy in making predictions on B grade, the Support Vector Machine (SVM) was found to be superior to Naïve Bayes (NB). However, the recall results were quite similar except for the K-Nearest Neighbor (KNN) classifier. When predicting grades C and D, RF had the best accuracy and NB the worst. RF had the best recall when predicting a C grade, while NB had the lowest. When predicting a D grade, SVM had the best recall performance, while NB had the lowest.
29

Barajas Aranda, Daniel Alejandro, Miguel Angel Sicilia Urban, Maria Dolores Torres Soto, and Aurora Torres Soto. "Comparison and Explainability of Machine Learning Models in Predictive Suicide Analysis". DYNA New Technologies 11, no. 1 (February 28, 2024): [10 p.]. http://dx.doi.org/10.6036/nt11028.

Full text
Abstract:
In this comparative study of machine learning models for predicting suicidal behavior, three approaches were evaluated: a neural network, logistic regression, and decision trees. The results revealed that the neural network showed the best predictive performance, with an accuracy of 82.35%, followed by logistic regression (76.47%) and decision trees (64.71%). Additionally, the explainability analysis revealed that each model assigned different importance to the features in predicting suicidal behavior, highlighting the need to understand how models interpret features and how they influence predictions. The study provides valuable information for healthcare professionals and suicide prevention experts, enabling them to design more effective interventions and better understand the risk factors associated with suicidal behavior. However, the need to consider other factors, such as model interpretability and its applicability in different contexts or populations, is noted. Furthermore, further research and validation on different datasets are recommended to strengthen the understanding and applicability of the models in different contexts. In summary, this study contributes significantly to the field of predicting suicidal behavior using machine learning models, offering a detailed insight into the strengths and weaknesses of each approach and highlighting the importance of model interpretation for better understanding the underlying factors of suicidal behavior. Keywords: suicidal behavior prediction, machine learning models, neural network, logistic regression, decision trees, explainability analysis, healthcare intervention
30

Chen, Tianjie, and Md Faisal Kabir. "Explainable machine learning approach for cancer prediction through binarilization of RNA sequencing data". PLOS ONE 19, no. 5 (May 10, 2024): e0302947. http://dx.doi.org/10.1371/journal.pone.0302947.

Full text
Abstract:
In recent years, researchers have proven the effectiveness and speed of machine learning-based cancer diagnosis models. However, it is difficult to explain the results generated by machine learning models, especially ones that use complex high-dimensional data like RNA sequencing data. In this study, we propose the binarilization technique as a novel way to treat RNA sequencing data and use it to construct explainable cancer prediction models. We tested our proposed data processing technique on five different models, namely a neural network, random forest, XGBoost, support vector machine, and decision tree, using four cancer datasets collected from the National Cancer Institute Genomic Data Commons. Since our datasets are imbalanced, we evaluated the performance of all models using metrics designed for imbalanced data, such as the geometric mean, Matthews correlation coefficient, F-measure, and area under the receiver operating characteristic curve. Our approach showed comparable performance while relying on fewer features. Additionally, we demonstrated that data binarilization offers higher explainability by revealing how each feature affects the prediction. These results demonstrate the potential of the data binarilization technique to improve the performance and explainability of RNA sequencing based cancer prediction models.
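
The binarization idea itself can be illustrated in a few lines (random numbers standing in for RNA-seq expression values; not the authors' datasets or thresholding choices): each gene is thresholded at its median so that every feature becomes expressed/not expressed, and an interpretable classifier is trained on the binary matrix.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
expression = rng.lognormal(mean=2.0, sigma=1.0, size=(200, 50))          # samples x genes
labels = (expression[:, 0] > np.median(expression[:, 0])).astype(int)    # toy label driven by gene 0

binary = (expression > np.median(expression, axis=0)).astype(int)        # 1 = "highly expressed"

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
print("cross-validated accuracy:", cross_val_score(tree, binary, labels, cv=5).mean())
# Each split now reads as "gene g highly expressed or not", which is easy to explain.
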
31

Van Der Laan, Jake. "Explainability of Artificial Intelligence Models: Technical Foundations and Legal Principles". Vietnamese Journal of Legal Sciences 7, no. 2 (December 1, 2022): 1–38. http://dx.doi.org/10.2478/vjls-2022-0006.

Full text
Abstract:
The now prevalent use of Artificial Intelligence (AI) and specifically machine learning driven models to automate the making of decisions raises novel legal issues. One issue of particular importance arises when the rationale for the automated decision is not readily determinable or traceable by virtue of the complexity of the model used: How can such a decision be legally assessed and substantiated? How can any potential legal liability for a "wrong" decision be properly determined? These questions are being explored by organizations and governments around the world. A key informant to any analysis in these cases is the extent to which the model in question is "explainable". This paper seeks to provide (1) an introductory overview of the technical components of machine learning models in a manner consumable by someone without a computer science or mathematics background, (2) a summary of the Canadian and Vietnamese response to the explainability challenge so far, (3) an analysis of what an "explanation" is in the scientific and legal domains, and (4) a preliminary legal framework for analyzing the sufficiency of explanation of a particular model and its prediction(s).
32

Kong, Weihao, Jianping Chen, and Pengfei Zhu. "Machine Learning-Based Uranium Prospectivity Mapping and Model Explainability Research". Minerals 14, no. 2 (January 24, 2024): 128. http://dx.doi.org/10.3390/min14020128.

Full text
Abstract:
Sandstone-hosted uranium deposits are indeed significant sources of uranium resources globally. They are typically found in sedimentary basins and have been extensively explored and exploited in various countries. They play a significant role in meeting global uranium demand and are considered important resources for nuclear energy production. Erlian Basin, as one of the sedimentary basins in northern China, is known for its uranium mineralization hosted within sandstone formations. In this research, machine learning (ML) methodology was applied to mineral prospectivity mapping (MPM) of the metallogenic zone in the Manite depression of the Erlian Basin. An ML model of 92% accuracy was implemented with the random forest algorithm. Additionally, the confusion matrix and receiver operating characteristic curve were used as model evaluation indicators. Furthermore, the model explainability research with post hoc interpretability algorithms bridged the gap between complex opaque (black-box) models and geological cognition, enabling the effective and responsible use of AI technologies. The MPM results shown in QGIS provided vivid geological insights for ML-based metallogenic prediction. With the favorable prospective targets delineated, geologists can make decisions for further uranium exploration.
33

Pathan, Refat Khan, Israt Jahan Shorna, Md Sayem Hossain, Mayeen Uddin Khandaker, Huda I. Almohammed, and Zuhal Y. Hamd. "The efficacy of machine learning models in lung cancer risk prediction with explainability". PLOS ONE 19, no. 6 (June 13, 2024): e0305035. http://dx.doi.org/10.1371/journal.pone.0305035.

Full text
Abstract:
Among the many types of cancer, lung cancer remains to date one of the deadliest around the world. Many researchers, scientists, doctors, and people from other fields continuously contribute to this subject regarding early prediction and diagnosis. One of the significant problems in prediction is the black-box nature of machine learning models: though the detection rate is comparatively satisfactory, people have yet to learn how a model came to a decision, causing trust issues among patients and healthcare workers. This work uses multiple machine learning models on a numerical dataset of lung cancer-relevant parameters and compares their performance and accuracy. After comparison, each model has been explained using different methods. The main contribution of this research is to give logical explanations of why the model reached a particular decision in order to achieve trust. This research is also compared with a previous study that worked with a similar dataset and took expert opinions regarding its proposed model. We show that, using hyperparameter tuning, our research achieved better results than their proposed model and the specialist opinion, with an improved accuracy of almost 100% in all four models.
34

Satoni Kurniawansyah, Arius. "Explainable Artificial Intelligence Theory in Decision Making Treatment of Arrhythmia Patients Using Deep Learning Models". Jurnal Rekayasa Sistem Informasi dan Teknologi 1, no. 1 (August 29, 2022): 26–41. http://dx.doi.org/10.59407/jrsit.v1i1.75.

Full text
Abstract:
In the context of explainable artificial intelligence, there are two important keywords: interpretability and explainability. Interpretability is the extent to which humans can understand the causes of decisions; the better the interpretability of an AI/ML model, the easier it is for someone to understand why certain decisions or predictions have been made. Some cases of AI/ML implementation may not require explanation, because they are used in a low-risk environment, meaning mistakes will not have serious consequences. The need for interpretability and explainability arises when an AI system is used for certain high-risk problems or tasks, so it is not enough just to obtain predictive/classification outputs; explanations are also needed to convince users that the AI (1: model explainability) is working the right way and (2: decision explainability) has made the right decision (Hotma, 2022). This research contributes to knowledge about implementing explainable AI theory with deep learning models to assist doctors' decision making for patients with cardiac arrhythmias, shows that deep learning algorithms can be used in machine learning to read ECG results, and examines how to improve the accuracy of explainable AI applications in doctors' decision making for these patients. The use of explainable artificial intelligence in the management of arrhythmia patients can provide an interpretation that helps doctors treat patients more optimally. The results of such AI-assisted decisions can increase doctors' confidence in treating arrhythmia patients optimally, effectively, and efficiently, and treatment becomes faster because it is assisted by tools, so patients can be treated more quickly, which in turn will reduce the mortality rate among arrhythmia patients.
35

Chen, Xingqian, Honghui Fan, Wenhe Chen, Yaoxin Zhang, Dingkun Zhu, and Shuangbao Song. "Explaining a Logic Dendritic Neuron Model by Using the Morphology of Decision Trees". Electronics 13, no. 19 (October 3, 2024): 3911. http://dx.doi.org/10.3390/electronics13193911.

Full text
Abstract:
The development of explainable machine learning methods is attracting increasing attention. Dendritic neuron models have emerged as powerful machine learning methods in recent years. However, providing explainability to a dendritic neuron model has not been explored. In this study, we propose a logic dendritic neuron model (LDNM) and discuss its characteristics. Then, we use a tree-based model called the morphology of decision trees (MDT) to approximate LDNM to gain its explainability. Specifically, a trained LDNM is simplified by a proprietary structure pruning mechanism. Then, the pruned LDNM is further transformed into an MDT, which is easy to understand, to gain explainability. Finally, six benchmark classification problems are used to verify the effectiveness of the structure pruning and MDT transformation. The experimental results show that MDT can provide competitive classification accuracy compared with LDNM, and the concise structure of MDT can provide insight into how the classification results are concluded by LDNM. This paper provides a global surrogate explanation approach for LDNM.
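
The MDT step is an instance of the general global-surrogate idea, which can be sketched as follows (an MLP stands in for the LDNM, since no public implementation is assumed): a shallow decision tree is fitted to the black-box model's own predictions and its fidelity is measured.

from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=6, random_state=0)

black_box = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0).fit(X, y)
y_bb = black_box.predict(X)                        # the behaviour the surrogate must imitate

surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y_bb)
print("fidelity to the black box:", accuracy_score(y_bb, surrogate.predict(X)))
print(export_text(surrogate))                      # human-readable decision rules
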
36

Sarder Abdulla Al Shiam, Md Mahdi Hasan, Md Jubair Pantho, Sarmin Akter Shochona, Md Boktiar Nayeem, M Tazwar Hossain Choudhury, and Tuan Ngoc Nguyen. "Credit Risk Prediction Using Explainable AI". Journal of Business and Management Studies 6, no. 2 (March 18, 2024): 61–66. http://dx.doi.org/10.32996/jbms.2024.6.2.6.

Full text
Abstract:
Despite advancements in machine-learning prediction techniques, the majority of lenders continue to rely on conventional methods for predicting credit defaults, largely due to the newer approaches' lack of transparency and explainability. This reluctance persists because there is a compelling need for credit default prediction models to be explainable. This study introduces credit default prediction models employing several tree-based ensemble methods, with the most effective model, XGBoost, being further utilized to enhance explainability. We implement SHapley Additive exPlanations (SHAP) in ML-based credit scoring models using data from the US-based P2P lending platform Lending Club. Detailed discussions of the results, along with explanations using SHAP values, are also provided. The model explainability generated by Shapley values enables its applicability to a broad spectrum of industry applications.
Gli stili APA, Harvard, Vancouver, ISO e altri
37

Hong, Xianbin, Sheng-Uei Guan, Nian Xue, Zhen Li, Ka Lok Man, Prudence W. H. Wong e Dawei Liu. "Dual-Track Lifelong Machine Learning-Based Fine-Grained Product Quality Analysis". Applied Sciences 13, n. 3 (17 gennaio 2023): 1241. http://dx.doi.org/10.3390/app13031241.

Testo completo
Abstract (sommario):
Artificial intelligence (AI) systems are becoming wiser, even surpassing human performances in some fields, such as image classification, chess, and Go. However, most high-performance AI systems, such as deep learning models, are black boxes (i.e., only system inputs and outputs are visible, but the internal mechanisms are unknown) and, thus, are notably challenging to understand. Thereby a system with better explainability is needed to help humans understand AI. This paper proposes a dual-track AI approach that uses reinforcement learning to supplement fine-grained deep learning-based sentiment classification. Through lifelong machine learning, the dual-track approach can gradually become wiser and realize high performance (while keeping outstanding explainability). The extensive experimental results show that the proposed dual-track approach can provide reasonable fine-grained sentiment analyses to product reviews and remarkably achieve a 133% promotion of the Macro-F1 score on the Twitter sentiment classification task and a 27.12% promotion of the Macro-F1 score on an Amazon iPhone 11 sentiment classification task, respectively.
Gli stili APA, Harvard, Vancouver, ISO e altri
38

Brito, João, e Hugo Proença. "A Short Survey on Machine Learning Explainability: An Application to Periocular Recognition". Electronics 10, n. 15 (3 agosto 2021): 1861. http://dx.doi.org/10.3390/electronics10151861.

Testo completo
Abstract (sommario):
Interpretability has made significant strides in recent years, enabling the formerly black-box models to reach new levels of transparency. These kinds of models can be particularly useful to broaden the applicability of machine learning-based systems to domains where—apart from the predictions—appropriate justifications are also required (e.g., forensics and medical image analysis). In this context, techniques that focus on visual explanations are of particular interest here, due to their ability to directly portray the reasons that support a given prediction. Therefore, in this document, we focus on presenting the core principles of interpretability and describing the main methods that deliver visual cues (including one that we designed for periocular recognition in particular). Based on these intuitions, the experiments performed show explanations that attempt to highlight the most important periocular components towards a non-match decision. Then, some particularly challenging scenarios are presented to naturally sustain our conclusions and thoughts regarding future directions.
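The survey's own periocular method is not reproduced here; as one simple member of the visual-explanation family it covers, the sketch below computes an occlusion-sensitivity map: a patch is slid over the image and the drop in a (here, dummy) match score is recorded, so the regions whose occlusion hurts the score most are highlighted.

```python
# Occlusion-sensitivity sketch: dummy_score below is a stand-in for a real
# periocular matcher; only the sliding-window bookkeeping is the point here.
import numpy as np

def occlusion_map(image, score_fn, patch=8, stride=4, fill=0.0):
    h, w = image.shape
    base = score_fn(image)
    heat = np.zeros((h, w))
    counts = np.zeros((h, w))
    for top in range(0, h - patch + 1, stride):
        for left in range(0, w - patch + 1, stride):
            occluded = image.copy()
            occluded[top:top + patch, left:left + patch] = fill
            drop = base - score_fn(occluded)          # how much the score falls
            heat[top:top + patch, left:left + patch] += drop
            counts[top:top + patch, left:left + patch] += 1
    return heat / np.maximum(counts, 1)

def dummy_score(x):
    return float(x[20:40, 20:40].mean())   # fake "matcher" for illustration

rng = np.random.default_rng(0)
img = rng.random((64, 64))
print(occlusion_map(img, dummy_score).round(3).max())
```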
Gli stili APA, Harvard, Vancouver, ISO e altri
39

Ghadge, Nikhil. "Leveraging Machine Learning to Enhance Information Exploration". Machine Learning and Applications: An International Journal 11, n. 2 (28 giugno 2024): 17–27. http://dx.doi.org/10.5121/mlaij.2024.11203.

Testo completo
Abstract (sommario):
Machine learning algorithms are revolutionizing intelligent search and information discovery capabilities. By incorporating techniques like supervised learning, unsupervised learning, reinforcement learning, and deep learning, systems can automatically extract insights and patterns from vast data repositories. Natural language processing enables deeper comprehension of text, while image recognition unlocks knowledge from visual data. Machine learning powers personalized recommendation engines and accurate sentiment analysis. Integrating knowledge graphs enriches machine learning models with background knowledge for enhanced accuracy and explainability. Applications span voice search, anomaly detection, predictive analytics, text mining, and data clustering. However, interpretable AI models are crucial for enabling transparency and trustworthiness. Key challenges include limited training data, complex domain knowledge requirements, and ethical considerations around bias and privacy. Ongoing research that combines machine learning, knowledge representation, and human-centered design will advance intelligent search and discovery. The collaboration between artificial and human intelligence holds the potential to revolutionize information access and knowledge acquisition.
Gli stili APA, Harvard, Vancouver, ISO e altri
40

Vilain, Matthieu, e Stéphane Aris-Brosou. "Machine Learning Algorithms Associate Case Numbers with SARS-CoV-2 Variants Rather Than with Impactful Mutations". Viruses 15, n. 6 (24 maggio 2023): 1226. http://dx.doi.org/10.3390/v15061226.

Testo completo
Abstract (sommario):
During the SARS-CoV-2 pandemic, much effort has been geared towards creating models to predict case numbers. These models typically rely on epidemiological data, and as such overlook viral genomic information, which could be assumed to improve predictions, as different variants show varying levels of virulence. To test this hypothesis, we implemented simple models to predict future case numbers based on the genomic sequences of the Alpha and Delta variants, which were co-circulating in Texas and Minnesota early during the pandemic. Sequences were encoded, matched with case numbers at a future time based on collection date, and used to train two algorithms: one based on random forests and one based on a feed-forward neural network. While prediction accuracies were ≥93%, explainability analyses showed that the models were not associating case numbers with mutations known to have an impact on virulence, but with individual variants. This work highlights the necessity of gaining a better understanding of the data used for training and of conducting explainability analysis to assess whether model predictions are misleading.
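The paper's exact encoding and case-matching pipeline are not detailed in the abstract; as a hedged illustration of the general setup it describes (encode sequences, train a random forest against future case numbers, then ask which positions the model relies on), the sketch below uses toy sequences and permutation importance in place of the authors' explainability analysis.

```python
# Toy sketch: one-hot encode short nucleotide sequences, regress on (fake)
# future case numbers with a random forest, then inspect position importance.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
alphabet = "ACGT"
n_samples, seq_len = 500, 30
seqs = rng.integers(0, 4, size=(n_samples, seq_len))

# One-hot encoding: (n_samples, seq_len * 4), layout [pos0_A..pos0_T, pos1_A..]
X = np.zeros((n_samples, seq_len * 4))
X[np.arange(n_samples)[:, None], seqs + 4 * np.arange(seq_len)] = 1.0

# Fake target: cases driven by the nucleotide at one "variant-defining" site.
y = 100 + 50 * (seqs[:, 10] == 2) + rng.normal(0, 5, n_samples)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
top = np.argsort(result.importances_mean)[::-1][:5]
for idx in top:
    print(f"position {idx // 4}, base {alphabet[idx % 4]}: "
          f"{result.importances_mean[idx]:.3f}")
```

In the toy data the importance concentrates on the single informative site; the paper's finding is that, on real data, importance concentrated on variant-identifying positions rather than on mutations with known virulence effects.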
Gli stili APA, Harvard, Vancouver, ISO e altri
41

Cao, Xuenan, e Roozbeh Yousefzadeh. "Extrapolation and AI transparency: Why machine learning models should reveal when they make decisions beyond their training". Big Data & Society 10, n. 1 (gennaio 2023): 205395172311697. http://dx.doi.org/10.1177/20539517231169731.

Testo completo
Abstract (sommario):
The right to artificial intelligence (AI) explainability has consolidated as a consensus in the research community and policy-making. However, a key component of explainability has been missing: extrapolation, which can reveal whether a model is making inferences beyond the boundaries of its training. We report that AI models extrapolate outside their range of familiar data, frequently and without notifying the users and stakeholders. Knowing whether a model has extrapolated or not is a fundamental insight that should be included in explaining AI models in favor of transparency, accountability, and fairness. Instead of dwelling on the negatives, we offer ways to clear the roadblocks in promoting AI transparency. Our commentary accompanies practical clauses useful to include in AI regulations such as the AI Bill of Rights, the National AI Initiative Act in the United States, and the AI Act by the European Commission.
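The commentary argues that models should report when they extrapolate; the authors' own checks are more sophisticated (e.g., convex-hull analyses), but a minimal per-feature range check already conveys the idea, as sketched below.

```python
# Minimal extrapolation flag: a query is marked as extrapolation if any feature
# falls outside the [min, max] range seen in training. (Hull-based tests used in
# the literature are stricter; this is the simplest possible proxy.)
import numpy as np

def fit_ranges(X_train):
    return X_train.min(axis=0), X_train.max(axis=0)

def is_extrapolating(X_query, ranges, tol=1e-12):
    lo, hi = ranges
    outside = (X_query < lo - tol) | (X_query > hi + tol)
    return outside.any(axis=1)          # one boolean per query point

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 5))
X_query = np.vstack([rng.normal(size=(3, 5)),       # in-range-ish points
                     10 + rng.normal(size=(2, 5))]) # far outside training data
print(is_extrapolating(X_query, fit_ranges(X_train)))
```

A prediction service could attach this flag to every output, which is essentially the disclosure the commentary asks regulators to require.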
Gli stili APA, Harvard, Vancouver, ISO e altri
42

Soliman, Amira, Björn Agvall, Kobra Etminani, Omar Hamed e Markus Lingman. "The Price of Explainability in Machine Learning Models for 100-Day Readmission Prediction in Heart Failure: Retrospective, Comparative, Machine Learning Study". Journal of Medical Internet Research 25 (27 ottobre 2023): e46934. http://dx.doi.org/10.2196/46934.

Testo completo
Abstract (sommario):
Background Sensitive and interpretable machine learning (ML) models can provide valuable assistance to clinicians in managing patients with heart failure (HF) at discharge by identifying individual factors associated with a high risk of readmission. In this cohort study, we delve into the factors driving the potential utility of classification models as decision support tools for predicting readmissions in patients with HF. Objective The primary objective of this study is to assess the trade-off between using deep learning (DL) and traditional ML models to identify the risk of 100-day readmissions in patients with HF. Additionally, the study aims to provide explanations for the model predictions by highlighting important features both on a global scale across the patient cohort and on a local level for individual patients. Methods The retrospective data for this study were obtained from the Regional Health Care Information Platform in Region Halland, Sweden. The study cohort consisted of patients diagnosed with HF who were over 40 years old and had been hospitalized at least once between 2017 and 2019. Data analysis encompassed the period from January 1, 2017, to December 31, 2019. Two ML models were developed and validated to predict 100-day readmissions, with a focus on the explainability of the model’s decisions. These models were built based on decision trees and recurrent neural architecture. Model explainability was obtained using an ML explainer. The predictive performance of these models was compared against 2 risk assessment tools using multiple performance metrics. Results The retrospective data set included a total of 15,612 admissions, and within these admissions, readmission occurred in 5597 cases, representing a readmission rate of 35.85%. It is noteworthy that a traditional and explainable model, informed by clinical knowledge, exhibited performance comparable to the DL model and surpassed conventional scoring methods in predicting readmission among patients with HF. The evaluation of predictive model performance was based on commonly used metrics, with an area under the precision-recall curve of 66% for the deep model and 68% for the traditional model on the holdout data set. Importantly, the explanations provided by the traditional model offer actionable insights that have the potential to enhance care planning. Conclusions This study found that a widely used deep prediction model did not outperform an explainable ML model when predicting readmissions among patients with HF. The results suggest that model transparency does not necessarily compromise performance, which could facilitate the clinical adoption of such models.
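The Region Halland data are not public, and the study's deep model was recurrent; as a small sketch of only the headline comparison (area under the precision-recall curve for a tree-based model versus a simpler stand-in baseline), assuming a synthetic readmission-style dataset:

```python
# Sketch of the AUPRC comparison on synthetic data (not the study's cohort);
# a logistic baseline stands in for the recurrent model purely to show the metric.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, weights=[0.64, 0.36],
                           random_state=0)   # roughly 36% positives, like the cohort
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for name, clf in [("tree ensemble", GradientBoostingClassifier(random_state=0)),
                  ("logistic baseline", LogisticRegression(max_iter=1000))]:
    clf.fit(X_tr, y_tr)
    auprc = average_precision_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"{name}: AUPRC = {auprc:.2f}")
```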
Gli stili APA, Harvard, Vancouver, ISO e altri
43

Kim, Jaehun. "Increasing trust in complex machine learning systems". ACM SIGIR Forum 55, n. 1 (giugno 2021): 1–3. http://dx.doi.org/10.1145/3476415.3476435.

Testo completo
Abstract (sommario):
Machine learning (ML) has become a core technology for many real-world applications. Modern ML models are applied to unprecedentedly complex and difficult challenges, including very large and subjective problems. For instance, applications towards multimedia understanding have been advanced substantially. Here, it is already prevalent that cultural/artistic objects such as music and videos are analyzed and served to users according to their preference, enabled through ML techniques. One of the most recent breakthroughs in ML is Deep Learning (DL), which has been immensely adopted to tackle such complex problems. DL allows for higher learning capacity, making end-to-end learning possible, which reduces the need for substantial engineering effort, while achieving high effectiveness. At the same time, this also makes DL models more complex than conventional ML models. Reports in several domains indicate that such more complex ML models may have potentially critical hidden problems: various biases embedded in the training data can emerge in the prediction, extremely sensitive models can make unaccountable mistakes. Furthermore, the black-box nature of the DL models hinders the interpretation of the mechanisms behind them. Such unexpected drawbacks result in a significant impact on the trustworthiness of the systems in which the ML models are equipped as the core apparatus. In this thesis, a series of studies investigates aspects of trustworthiness for complex ML applications, namely the reliability and explainability. Specifically, we focus on music as the primary domain of interest, considering its complexity and subjectivity. Due to this nature of music, ML models for music are necessarily complex for achieving meaningful effectiveness. As such, the reliability and explainability of music ML models are crucial in the field. The first main chapter of the thesis investigates the transferability of the neural network in the Music Information Retrieval (MIR) context. Transfer learning, where the pre-trained ML models are used as off-the-shelf modules for the task at hand, has become one of the major ML practices. It is helpful since a substantial amount of the information is already encoded in the pre-trained models, which allows the model to achieve high effectiveness even when the amount of the dataset for the current task is scarce. However, this may not always be true if the "source" task which pre-trained the model shares little commonality with the "target" task at hand. An experiment including multiple "source" tasks and "target" tasks was conducted to examine the conditions which have a positive effect on the transferability. The result of the experiment suggests that the number of source tasks is a major factor of transferability. Simultaneously, it is less evident that there is a single source task that is universally effective on multiple target tasks. Overall, we conclude that considering multiple pre-trained models or pre-training a model employing heterogeneous source tasks can increase the chance for successful transfer learning. The second major work investigates the robustness of the DL models in the transfer learning context. The hypothesis is that the DL models can be susceptible to imperceptible noise on the input. This may drastically shift the analysis of similarity among inputs, which is undesirable for tasks such as information retrieval. Several DL models pre-trained in MIR tasks are examined for a set of plausible perturbations in a real-world setup. 
Based on a proposed sensitivity measure, the experimental results indicate that all the DL models were substantially vulnerable to perturbations, compared to a traditional feature encoder. They also suggest that the experimental framework can be used to test the pre-trained DL models for measuring robustness. In the final main chapter, the explainability of black-box ML models is discussed. In particular, the chapter focuses on the evaluation of the explanation derived from model-agnostic explanation methods. With black-box ML models having become common practice, model-agnostic explanation methods have been developed to explain a prediction. However, the evaluation of such explanations is still an open problem. The work introduces an evaluation framework that measures the quality of the explanations employing fidelity and complexity. Fidelity refers to the explained mechanism's coherence to the black-box model, while complexity is the length of the explanation. Throughout the thesis, we gave special attention to the experimental design, such that robust conclusions can be reached. Furthermore, we focused on delivering machine learning framework and evaluation frameworks. This is crucial, as we intend that the experimental design and results will be reusable in general ML practice. As it implies, we also aim our findings to be applicable beyond the music applications such as computer vision or natural language processing. Trustworthiness in ML is not a domain-specific problem. Thus, it is vital for both researchers and practitioners from diverse problem spaces to increase awareness of complex ML systems' trustworthiness. We believe the research reported in this thesis provides meaningful stepping stones towards the trustworthiness of ML.
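The thesis's evaluation framework is more elaborate than can be shown here; as a hedged sketch of the two quantities it measures, the code below fits a sparse local linear explanation around one instance of a black-box classifier, takes fidelity to be how well that local model tracks the black box on perturbed neighbours, and complexity to be the number of features the explanation actually uses.

```python
# Sketch of fidelity/complexity for a LIME-style local explanation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Lasso

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

rng = np.random.default_rng(0)
x0 = X[0]
# Perturb the instance to build a local neighbourhood.
neighbours = x0 + rng.normal(scale=0.3, size=(500, X.shape[1]))
bb_probs = black_box.predict_proba(neighbours)[:, 1]

# Sparse local surrogate: its coefficients are the "explanation".
local = Lasso(alpha=0.01).fit(neighbours, bb_probs)

fidelity = local.score(neighbours, bb_probs)          # R^2 w.r.t. the black box
complexity = int(np.sum(local.coef_ != 0))            # number of features used
print(f"fidelity (R^2): {fidelity:.2f}, complexity: {complexity} features")
```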
Gli stili APA, Harvard, Vancouver, ISO e altri
44

Adak, Anirban, Biswajeet Pradhan e Nagesh Shukla. "Sentiment Analysis of Customer Reviews of Food Delivery Services Using Deep Learning and Explainable Artificial Intelligence: Systematic Review". Foods 11, n. 10 (21 maggio 2022): 1500. http://dx.doi.org/10.3390/foods11101500.

Testo completo
Abstract (sommario):
During the COVID-19 crisis, customers’ preference in having food delivered to their doorstep instead of waiting in a restaurant has propelled the growth of food delivery services (FDSs). With all restaurants going online and bringing FDSs onboard, such as UberEATS, Menulog or Deliveroo, customer reviews on online platforms have become an important source of information about the company’s performance. FDS organisations aim to gather complaints from customer feedback and effectively use the data to determine the areas for improvement to enhance customer satisfaction. This work aimed to review machine learning (ML) and deep learning (DL) models and explainable artificial intelligence (XAI) methods to predict customer sentiments in the FDS domain. A literature review revealed the wide usage of lexicon-based and ML techniques for predicting sentiments through customer reviews in FDS. However, limited studies applying DL techniques were found due to the lack of the model interpretability and explainability of the decisions made. The key findings of this systematic review are as follows: 77% of the models are non-interpretable in nature, and organisations can argue for the explainability and trust in the system. DL models in other domains perform well in terms of accuracy but lack explainability, which can be achieved with XAI implementation. Future research should focus on implementing DL models for sentiment analysis in the FDS domain and incorporating XAI techniques to bring out the explainability of the models.
Gli stili APA, Harvard, Vancouver, ISO e altri
45

Kolluru, Vinothkumar, Yudhisthir Nuthakki, Sudeep Mungara, Sonika Koganti, Advaitha Naidu Chintakunta e Charan Sundar Telaganeni. "Healthcare Through AI: Integrating Deep Learning, Federated Learning, and XAI for Disease Management". International Journal of Soft Computing and Engineering 13, n. 6 (30 gennaio 2024): 21–27. http://dx.doi.org/10.35940/ijsce.d3646.13060124.

Testo completo
Abstract (sommario):
The applications of Artificial Intelligence (AI) have been resonating across various fields for the past three decades, with the healthcare domain being a primary beneficiary of these innovations and advancements. Recently, AI techniques such as deep learning, machine learning, and federated learning have been frequently employed to address challenges in disease management. However, these techniques often face issues related to transparency, interpretability, and explainability. This is where explainable AI (XAI) plays a crucial role in ensuring the explainability of AI models. There is a need to explore the current role of XAI in healthcare, along with the challenges and applications of XAI in the domain of healthcare and disease management. This paper presents a systematic literature review-based study to investigate the integration of XAI with deep learning and federated learning in the digital transformation of healthcare and disease management. The results of this study indicate that XAI is increasingly gaining the attention of researchers, practitioners, and policymakers in the healthcare domain.
Gli stili APA, Harvard, Vancouver, ISO e altri
46

Matara, Caroline, Simpson Osano, Amir Okeyo Yusuf e Elisha Ochungo Aketch. "Prediction of Vehicle-induced Air Pollution based on Advanced Machine Learning Models". Engineering, Technology & Applied Science Research 14, n. 1 (8 febbraio 2024): 12837–43. http://dx.doi.org/10.48084/etasr.6678.

Testo completo
Abstract (sommario):
Vehicle-induced air pollution is an important issue in the 21st century, posing detrimental effects on human health. Prediction of vehicle-emitted air pollutants and evaluation of the diverse factors that contribute to them are of the utmost importance. This study employed advanced tree-based machine learning models to predict vehicle-induced air pollutant levels, with a particular focus on fine particulate matter (PM2.5). In addition to a benchmark statistical model, the models employed were Gradient Boosting (GB), Light Gradient Boosting Machine (LGBM), Extreme Gradient Boosting (XGBoost), Extra Tree (ET), and Random Forest (RF). Regarding the evaluation of PM2.5 predictions, the ET model outperformed the others, as shown by MAE of 1.69, MSE of 5.91, RMSE of 2.43, and R2 of 0.71. Afterward, the optimal ET models were interpreted using SHAP analysis to overcome the ET model's lack of explainability. Based on the SHAP analysis, it was determined that temperature, humidity, and wind speed emerged as the primary determinants in forecasting PM2.5 levels.
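The traffic and weather data behind the study are not included here; a minimal sketch of the reported pipeline, assuming synthetic features named after the abstract's main drivers, is: fit an Extra Trees regressor, compute MAE/MSE/RMSE/R², and rank features by mean absolute SHAP value.

```python
# Sketch of the PM2.5 pipeline on synthetic data; feature names mirror the
# abstract's drivers but the values are random, so the numbers mean nothing.
import numpy as np
import shap
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
names = ["temperature", "humidity", "wind_speed", "traffic_volume"]
X = rng.normal(size=(2000, len(names)))
pm25 = 20 + 3 * X[:, 0] - 2 * X[:, 1] - 1.5 * X[:, 2] + rng.normal(0, 1, 2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, pm25, random_state=0)
model = ExtraTreesRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)

pred = model.predict(X_te)
mse = mean_squared_error(y_te, pred)
print(f"MAE={mean_absolute_error(y_te, pred):.2f}  MSE={mse:.2f}  "
      f"RMSE={np.sqrt(mse):.2f}  R2={r2_score(y_te, pred):.2f}")

shap_values = shap.TreeExplainer(model).shap_values(X_te)
for name, imp in zip(names, np.abs(shap_values).mean(axis=0)):
    print(f"{name}: mean |SHAP| = {imp:.3f}")
```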
Gli stili APA, Harvard, Vancouver, ISO e altri
47

Radiuk, Pavlo, Olexander Barmak, Eduard Manziuk e Iurii Krak. "Explainable Deep Learning: A Visual Analytics Approach with Transition Matrices". Mathematics 12, n. 7 (29 marzo 2024): 1024. http://dx.doi.org/10.3390/math12071024.

Testo completo
Abstract (sommario):
The non-transparency of artificial intelligence (AI) systems, particularly in deep learning (DL), poses significant challenges to their comprehensibility and trustworthiness. This study aims to enhance the explainability of DL models through visual analytics (VA) and human-in-the-loop (HITL) principles, making these systems more transparent and understandable to end users. In this work, we propose a novel approach that utilizes a transition matrix to interpret results from DL models through more comprehensible machine learning (ML) models. The methodology involves constructing a transition matrix between the feature spaces of DL and ML models as formal and mental models, respectively, improving the explainability for classification tasks. We validated our approach with computational experiments on the MNIST, FNC-1, and Iris datasets using a qualitative and quantitative comparison criterion, that is, how different the results obtained by our approach are from the ground truth of the training and testing samples. The proposed approach significantly enhanced model clarity and understanding in the MNIST dataset, with SSIM and PSNR values of 0.697 and 17.94, respectively, showcasing high-fidelity reconstructions. Moreover, achieving an F1m score of 77.76% and a weighted accuracy of 89.38%, our approach proved its effectiveness in stance detection with the FNC-1 dataset, complemented by its ability to explain key textual nuances. For the Iris dataset, the separating hyperplane constructed based on the proposed approach allowed for enhancing classification accuracy. Overall, using VA, HITL principles, and a transition matrix, our approach significantly improves the explainability of DL models without compromising their performance, marking a step forward in developing more transparent and trustworthy AI systems.
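The paper builds its transition matrix between the feature spaces of a formal (DL) and a mental (ML) model under human-in-the-loop constraints; the sketch below shows only the linear-algebra core, under the simplifying assumption that the mapping T from deep features to interpretable features is estimated by least squares on paired representations.

```python
# Simplified sketch: estimate a transition matrix T with F_dl @ T ≈ F_ml,
# i.e., map deep features onto an interpretable feature space by least squares.
import numpy as np

rng = np.random.default_rng(0)
n, d_dl, d_ml = 1000, 64, 8
F_dl = rng.normal(size=(n, d_dl))                  # deep-model features
T_true = rng.normal(size=(d_dl, d_ml))
F_ml = F_dl @ T_true + rng.normal(scale=0.05, size=(n, d_ml))  # interpretable features

T_hat, *_ = np.linalg.lstsq(F_dl, F_ml, rcond=None)

# New DL representations can now be read in the interpretable space.
rel_error = np.linalg.norm(F_dl @ T_hat - F_ml) / np.linalg.norm(F_ml)
print(f"relative reconstruction error: {rel_error:.3f}")
print(T_hat.shape)   # (64, 8): one column per interpretable feature
```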
Gli stili APA, Harvard, Vancouver, ISO e altri
48

Gao, Jingyue, Xiting Wang, Yasha Wang e Xing Xie. "Explainable Recommendation through Attentive Multi-View Learning". Proceedings of the AAAI Conference on Artificial Intelligence 33 (17 luglio 2019): 3622–29. http://dx.doi.org/10.1609/aaai.v33i01.33013622.

Testo completo
Abstract (sommario):
Recommender systems have been playing an increasingly important role in our daily life due to the explosive growth of information. Accuracy and explainability are two core aspects when we evaluate a recommendation model and have become one of the fundamental trade-offs in machine learning. In this paper, we propose to alleviate the trade-off between accuracy and explainability by developing an explainable deep model that combines the advantages of deep learning-based models and existing explainable methods. The basic idea is to build an initial network based on an explainable deep hierarchy (e.g., Microsoft Concept Graph) and improve the model accuracy by optimizing key variables in the hierarchy (e.g., node importance and relevance). To ensure accurate rating prediction, we propose an attentive multi-view learning framework. The framework enables us to handle sparse and noisy data by co-regularizing among different feature levels and combining predictions attentively. To mine readable explanations from the hierarchy, we formulate personalized explanation generation as a constrained tree node selection problem and propose a dynamic programming algorithm to solve it. Experimental results show that our model outperforms state-of-the-art methods in terms of both accuracy and explainability.
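The full framework (concept hierarchy, co-regularization, dynamic-programming selection of explanation nodes) is beyond a short sketch; the snippet below only illustrates the attentive combination step, where per-view rating predictions are fused with softmax attention weights. The three views, their predictions, and their scores are invented for illustration.

```python
# Toy attentive fusion of per-view predictions (not the paper's full model).
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Suppose three "views" (e.g., user profile, item profile, review text) each
# produce a rating prediction and an attention score for one user-item pair.
view_predictions = np.array([4.2, 3.6, 4.8])
view_scores = np.array([0.5, -0.2, 1.1])       # would be learned in practice

weights = softmax(view_scores)
fused_rating = float(weights @ view_predictions)
print(f"attention weights: {weights.round(3)}, fused rating: {fused_rating:.2f}")
```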
Gli stili APA, Harvard, Vancouver, ISO e altri
49

Lohaj, Oliver, Ján Paralič, Peter Bednár, Zuzana Paraličová e Matúš Huba. "Unraveling COVID-19 Dynamics via Machine Learning and XAI: Investigating Variant Influence and Prognostic Classification". Machine Learning and Knowledge Extraction 5, n. 4 (25 settembre 2023): 1266–81. http://dx.doi.org/10.3390/make5040064.

Testo completo
Abstract (sommario):
Machine learning (ML) has been used in different ways in the fight against COVID-19 disease. ML models have been developed, e.g., for diagnostic or prognostic purposes and using various modalities of data (e.g., textual, visual, or structured). Due to the many specific aspects of this disease and its evolution over time, there is still not enough understanding of all relevant factors influencing the course of COVID-19 in particular patients. In all aspects of our work, there was a strong involvement of a medical expert following the human-in-the-loop principle. This is a very important but usually neglected part of the ML and knowledge extraction (KE) process. Our research shows that explainable artificial intelligence (XAI) may significantly support this part of ML and KE. Our research focused on using ML for knowledge extraction in two specific scenarios. In the first scenario, we aimed to discover whether adding information about the predominant COVID-19 variant impacts the performance of the ML models. In the second scenario, we focused on prognostic classification models concerning the need for an intensive care unit for a given patient in connection with different explainability AI (XAI) methods. We have used nine ML algorithms, namely XGBoost, CatBoost, LightGBM, logistic regression, Naive Bayes, random forest, SGD, SVM-linear, and SVM-RBF. We measured the performance of the resulting models using precision, accuracy, and AUC metrics. Subsequently, we focused on knowledge extraction from the best-performing models using two different approaches as follows: (a) features extracted automatically by forward stepwise selection (FSS); (b) attributes and their interactions discovered by model explainability methods. Both were compared with the attributes selected by the medical experts in advance based on the domain expertise. Our experiments showed that adding information about the COVID-19 variant did not influence the performance of the resulting ML models. It also turned out that medical experts were much more precise in the identification of significant attributes than FSS. Explainability methods identified almost the same attributes as a medical expert and interesting interactions among them, which the expert discussed from a medical point of view. The results of our research and their consequences are discussed.
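The study compares attributes chosen by forward stepwise selection (FSS), by explainability methods, and by medical experts; a minimal sketch of the FSS step with scikit-learn's SequentialFeatureSelector, on synthetic data, is shown below.

```python
# Sketch of forward stepwise feature selection on synthetic data.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=15, n_informative=5,
                           random_state=0)

selector = SequentialFeatureSelector(
    LogisticRegression(max_iter=1000),
    n_features_to_select=5,
    direction="forward",
    scoring="roc_auc",
    cv=5,
)
selector.fit(X, y)
print("selected feature indices:", list(selector.get_support(indices=True)))
```

The selected indices can then be compared against expert-chosen attributes, mirroring the comparison performed in the study.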
Gli stili APA, Harvard, Vancouver, ISO e altri
50

Akgüller, Ömer, Mehmet Ali Balcı e Gabriela Cioca. "Functional Brain Network Disruptions in Parkinson’s Disease: Insights from Information Theory and Machine Learning". Diagnostics 14, n. 23 (4 dicembre 2024): 2728. https://doi.org/10.3390/diagnostics14232728.

Testo completo
Abstract (sommario):
Objectives: This study investigates disruptions in functional brain networks in Parkinson’s Disease (PD), using advanced modeling and machine learning. Functional networks were constructed using the Nonlinear Autoregressive Distributed Lag (NARDL) model, which captures nonlinear and asymmetric dependencies between regions of interest (ROIs). Key network metrics and information-theoretic measures were extracted to classify PD patients and healthy controls (HC), using deep learning models, with explainability methods employed to identify influential features. Methods: Resting-state fMRI data from the Parkinson’s Progression Markers Initiative (PPMI) dataset were used to construct NARDL-based networks. Metrics such as Degree, Closeness, Betweenness, and Eigenvector Centrality, along with Network Entropy and Complexity, were analyzed. Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and Long Short-Term Memory (LSTM) models classified PD and HC groups. Explainability techniques, including SHAP and LIME, identified significant features driving the classifications. Results: PD patients showed reduced Closeness (22%) and Betweenness Centrality (18%). CNN achieved 91% accuracy, with Network Entropy and Eigenvector Centrality identified as key features. Increased Network Entropy indicated heightened randomness in PD brain networks. Conclusions: NARDL-based analysis with interpretable deep learning effectively distinguishes PD from HC, offering insights into neural disruptions and potential personalized treatments for PD.
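The NARDL network construction from fMRI time series is not reproduced here; as a hedged sketch of the downstream graph metrics the abstract lists, the snippet below computes closeness, betweenness, and eigenvector centrality plus a simple network entropy (Shannon entropy of the degree distribution) for a random graph standing in for one subject's ROI network.

```python
# Sketch of the graph metrics on a random stand-in for an ROI network.
import networkx as nx
import numpy as np

n_rois = 30
G = nx.gnp_random_graph(n_rois, p=0.2, seed=0)

closeness = nx.closeness_centrality(G)
betweenness = nx.betweenness_centrality(G)
eigenvector = nx.eigenvector_centrality(G, max_iter=1000)

# Simple "network entropy": Shannon entropy of the normalized degree distribution.
deg = np.array([d for _, d in G.degree()], dtype=float)
p = deg / deg.sum()
network_entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))

print(f"mean closeness:   {np.mean(list(closeness.values())):.3f}")
print(f"mean betweenness: {np.mean(list(betweenness.values())):.3f}")
print(f"mean eigenvector: {np.mean(list(eigenvector.values())):.3f}")
print(f"network entropy:  {network_entropy:.3f} bits")
```

In the study, metrics of this kind (one vector per subject) feed the CNN/RNN/LSTM classifiers, and SHAP/LIME are applied on top of those classifiers to identify the most influential network features.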
Gli stili APA, Harvard, Vancouver, ISO e altri
