Selected scientific literature on the topic "Explanability"

Cite a source in APA, MLA, Chicago, Harvard and many other styles


Browse the list of current articles, books, theses, conference proceedings and other scientific sources relevant to the topic "Explanability".

Next to each source in the reference list there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract (summary) of the work online if it is available in the metadata.

Journal articles on the topic "Explanability"

1. Collier, John. "Reduction, supervenience, and physical emergence". Behavioral and Brain Sciences 27, no. 5 (October 2004): 629–30. http://dx.doi.org/10.1017/s0140525x04240146.

Abstract:
After distinguishing reductive explanability in principle from ontological deflation, I give a case of an obviously physical property that is reductively inexplicable in principle. I argue that biological systems often have this character, and that, if we make certain assumptions about the cohesion and dynamics of the mind and its physical substrate, then it is emergent according to Broad's criteria.
2. Hu, Hanqing, Mehmed Kantardzic, and Shreyas Kar. "Explainable data stream mining: Why the new models are better". Intelligent Decision Technologies 18, no. 1 (February 20, 2024): 371–85. http://dx.doi.org/10.3233/idt-230065.

Abstract:
Explainable Machine Learning brings expandability, interpretability, and accountability to Data Mining Algorithms. Existing explanation frameworks focus on explaining the decision process of a single model in a static dataset. However, in data stream mining changes in data distribution over time, called concept drift, may require updating the learning models to reflect the current data environment. It is therefore important to go beyond static models and understand what has changed among the learning models before and after a concept drift. We propose a Data Stream Explanability framework (DSE) that works together with a typical data stream mining framework where support vector machine models are used. DSE aims to help non-expert users understand model dynamics in a concept drifting data stream. DSE visualizes differences between SVM models before and after concept drift, to produce explanations on why the new model fits the data better. A survey was carried out between expert and non-expert users on the effectiveness of the framework. Although results showed non-expert users on average responded with less understanding of the issue compared to expert users, the difference is not statistically significant. This indicates that DSE successfully brings the explanability of model change to non-expert users.
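As an editorial illustration of the idea behind the abstract (not code from the cited paper; the DSE framework itself is not reproduced here), the following minimal sketch fits a linear SVM before and after a simulated concept drift and compares accuracies and per-feature weights, which is the kind of model-level difference such explanations are built on. Assumes scikit-learn; all data and names are invented.

    # Illustrative sketch: compare a pre-drift and a post-drift linear SVM.
    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)

    def make_stream(n, shift):
        # Two informative features; 'shift' moves the class boundary to mimic drift.
        X = rng.normal(size=(n, 2))
        y = (X[:, 0] + shift * X[:, 1] > 0).astype(int)
        return X, y

    X_old, y_old = make_stream(1000, shift=0.2)   # pre-drift concept
    X_new, y_new = make_stream(1000, shift=1.5)   # post-drift concept

    old_model = LinearSVC(dual=False).fit(X_old, y_old)
    new_model = LinearSVC(dual=False).fit(X_new, y_new)

    # Accuracy of each model on post-drift data shows why the model must be updated.
    print("old model on new data:", accuracy_score(y_new, old_model.predict(X_new)))
    print("new model on new data:", accuracy_score(y_new, new_model.predict(X_new)))

    # Per-feature weight difference: a rough, model-level explanation of what changed.
    print("weight change per feature:", new_model.coef_[0] - old_model.coef_[0])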
3. Venkata Krishnamoorthy, T., C. Venkataiah, Y. Mallikarjuna Rao, D. Rajendra Prasad, Kurra Upendra Chowdary, Manjula Jayamma, and R. Sireesha. "A novel NASNet model with LIME explanability for lung disease classification". Biomedical Signal Processing and Control 93 (July 2024): 106114. http://dx.doi.org/10.1016/j.bspc.2024.106114.

4

BARAJAS ARANDA, DANIEL ALEJANDRO, MIGUEL ANGEL SICILIA URBAN, MARIA DOLORES TORRES SOTO e AURORA TORRES SOTO. "COMPARISON AND EXPLANABILITY OF MACHINE LEARNING MODELS IN PREDICTIVE SUICIDE ANALYSIS". DYNA NEW TECHNOLOGIES 11, n. 1 (28 febbraio 2024): [10P.]. http://dx.doi.org/10.6036/nt11028.

Abstract:
In this comparative study of machine learning models for predicting suicidal behavior, three approaches were evaluated: neural network, logistic regression, and decision trees. The results revealed that the neural network showed the best predictive performance, with an accuracy of 82.35%, followed by logistic regression (76.47%) and decision trees (64.71%). Additionally, the explainability analysis revealed that each model assigned different importance to the features in predicting suicidal behavior, highlighting the need to understand how models interpret features and how they influence predictions. The study provides valuable information for healthcare professionals and suicide prevention experts, enabling them to design more effective interventions and better understand the risk factors associated with suicidal behavior. However, the study notes the need to consider other factors, such as model interpretability and its applicability in different contexts or populations. Furthermore, further research and validation on different datasets are recommended to strengthen the understanding and applicability of the models in different contexts. In summary, this study contributes significantly to the field of predicting suicidal behavior using machine learning models, offering a detailed insight into the strengths and weaknesses of each approach and highlighting the importance of model interpretation for better understanding the underlying factors of suicidal behavior. Keywords: Suicidal behavior prediction, Machine learning models, Neural network, Logistic regression, Decision trees, Explainability analysis, Healthcare intervention
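For illustration only (the study's dataset and features are not public here), the sketch below shows the kind of three-model comparison the abstract describes, training a neural network, a logistic regression and a decision tree on the same synthetic data and reporting test accuracy. Assumes scikit-learn; the dataset is generated, not the authors' data.

    # Illustrative sketch: compare three classifiers on one dataset.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import accuracy_score

    X, y = make_classification(n_samples=500, n_features=10, random_state=42)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

    models = {
        "neural network": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=42),
        "logistic regression": LogisticRegression(max_iter=1000),
        "decision tree": DecisionTreeClassifier(max_depth=4, random_state=42),
    }
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        print(name, "accuracy:", accuracy_score(y_te, model.predict(X_te)))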
5. Pachouly, Shikha J. "The Role of Explanability in AI-Driven Fashion Recommendation Model - A Review". International Journal for Research in Applied Science and Engineering Technology 12, no. 1 (January 31, 2024): 769–75. http://dx.doi.org/10.22214/ijraset.2024.56885.

Abstract:
Fashion recommendation systems powered by AI have transformed the way consumers discover clothing and accessories. However, these systems often lack transparency, leaving users in the dark about why certain recommendations are made. This review paper explores "The Role of Explainability in AI-Driven Fashion Recommendation Models." We begin by establishing the fundamentals of AI-driven fashion recommendations and the challenges they face, such as subjective fashion preferences and the need to balance personalization and diversity. The paper also explores evaluation metrics for measuring the effectiveness of explainability, considering user satisfaction, trust, and system performance. Ethical concerns related to bias and fairness in fashion recommendations are discussed, with explainability playing a crucial role in addressing these issues.
6. Adam, Carole, Patrick Taillandier, Julie Dugdale, and Benoit Gaudou. "BDI vs FSM Agents in Social Simulations for Raising Awareness in Disasters". International Journal of Information Systems for Crisis Response and Management 9, no. 1 (January 2017): 27–44. http://dx.doi.org/10.4018/ijiscram.2017010103.

Abstract:
Each summer in Australia, bushfires burn many hectares of forest, causing deaths, injuries, and destroying property. Agent-based simulation is a powerful tool to test various management strategies on a simulated population, and to raise awareness of the actual population behaviour. But valid results depend on realistic underlying models. This article describes two simulations of the Australian population's behaviour during bushfires designed in previous work, one based on a finite-state machine architecture, the other based on a belief-desire-intention agent architecture. It then proposes several contributions towards more realistic agent-based models of human behaviour: a methodology and tool for easily designing BDI models; a number of objective and subjective criteria for comparing agent-based models; a comparison of our two models along these criteria, showing that BDI provides better explanability and understandability of behaviour, makes models easier to extend, and is therefore best adapted; and a discussion of possible extensions of BDI models to further improve their realism.
7. Hollis, Kate Fultz, Lina F. Soualmia, and Brigitte Séroussi. "Artificial Intelligence in Health Informatics: Hype or Reality?" Yearbook of Medical Informatics 28, no. 01 (August 2019): 003–4. http://dx.doi.org/10.1055/s-0039-1677951.

Abstract:
Objectives: To provide an introduction to the 2019 International Medical Informatics Association (IMIA) Yearbook by the editors. Methods: This editorial presents an overview and introduction to the 2019 IMIA Yearbook which includes the special topic "Artificial Intelligence in Health: New Opportunities, Challenges, and Practical Implications". The special topic is discussed, the IMIA President's statement is introduced, and changes in the Yearbook editorial team are described. Results: Artificial intelligence (AI) in Medicine arose in the 1970s from new approaches for representing expert knowledge with computers. Since then, AI in medicine has gradually evolved toward essentially data-driven approaches with great results in image analysis. However, data integration, storage, and management still present clear challenges, among them the lack of explanability of the results produced by data-driven AI methods. Conclusion: With more health data availability, and the recent developments of efficient and improved machine learning algorithms, there is a renewed interest in AI in medicine. The objective is to help health professionals improve patient care while also reducing costs. However, the other costs of AI, including ethical issues when processing personal health data by algorithms, should also be taken into account.
8. Hussain, Sardar Mehboob, Domenico Buongiorno, Nicola Altini, Francesco Berloco, Berardino Prencipe, Marco Moschetta, Vitoantonio Bevilacqua, and Antonio Brunetti. "Shape-Based Breast Lesion Classification Using Digital Tomosynthesis Images: The Role of Explainable Artificial Intelligence". Applied Sciences 12, no. 12 (June 19, 2022): 6230. http://dx.doi.org/10.3390/app12126230.

Abstract:
Computer-aided diagnosis (CAD) systems can help radiologists in numerous medical tasks including classification and staging of the various diseases. The 3D tomosynthesis imaging technique adds value to the CAD systems in diagnosis and classification of the breast lesions. Several convolutional neural network (CNN) architectures have been proposed to classify the lesion shapes to the respective classes using a similar imaging method. However, not only is the black box nature of these CNN models questionable in the healthcare domain, but so is the morphological-based cancer classification, concerning the clinicians. As a result, this study proposes both a mathematically and visually explainable deep-learning-driven multiclass shape-based classification framework for the tomosynthesis breast lesion images. In this study, the authors exploit eight pretrained CNN architectures for the classification task on the previously extracted region-of-interest images containing the lesions. Additionally, the study also unleashes the black box nature of the deep learning models using two well-known perceptive explainable artificial intelligence (XAI) algorithms including Grad-CAM and LIME. Moreover, two mathematical-structure-based interpretability techniques, i.e., t-SNE and UMAP, are employed to investigate the pretrained models' behavior towards multiclass feature clustering. The experimental results of the classification task validate the applicability of the proposed framework by yielding a mean area under the curve of 98.2%. The explanability study validates the applicability of all employed methods, mainly emphasizing the pros and cons of both Grad-CAM and LIME methods that can provide useful insights towards explainable CAD systems.
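As a minimal sketch of one interpretability step mentioned above (t-SNE projection of CNN feature vectors to inspect class clustering), not the authors' pipeline: random vectors stand in for the features a pretrained CNN would produce, and scikit-learn plus matplotlib are assumed.

    # Illustrative sketch: t-SNE over (toy) CNN feature vectors, colored by class.
    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.manifold import TSNE

    rng = np.random.default_rng(0)
    n_classes, per_class, dim = 3, 100, 512   # e.g. 512-d penultimate-layer features
    features = np.vstack(
        [rng.normal(loc=3 * c, size=(per_class, dim)) for c in range(n_classes)]
    )
    labels = np.repeat(np.arange(n_classes), per_class)

    embedded = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(features)

    plt.scatter(embedded[:, 0], embedded[:, 1], c=labels, cmap="viridis", s=10)
    plt.title("t-SNE of CNN features (toy stand-in data)")
    plt.show()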
9. Cha, Hyunjung, and Hunsik Kang. "Comparison of Level and Relationship in Attitudes and Ethical Awareness toward Artificial Intelligence between Elementary General and Science-Gifted Students". Korean Science Education Society for the Gifted 16, no. 1 (April 30, 2024): 50–61. http://dx.doi.org/10.29306/jseg.2024.16.1.50.

Abstract:
This study compared the level and relationship in attitudes and ethical awareness toward Artificial Intelligence (AI) between elementary general and science-gifted students. For this purpose, 90 elementary general students in grades 5-6 and 87 elementary science-gifted students in grades 5-6 were selected, and their attitudes toward AI and ethical awareness toward AI were tested. The results of the independent samples t-test showed that the means of the science-gifted students were statistically significantly higher than those of the general students in the overall and all sub-scales of attitudes toward AI. In addition, the means of the science-gifted students were statistically significantly higher than those of the general students in the overall and three sub-domains (‘stability and reliability’, ‘transparency and explanability’, and ‘robot rights’) of ethical awareness toward AI. However, the mean differences between the two groups were not statistically significant in two sub-domains (‘no discrimination’ and ‘employment’) of ethical awareness toward AI. The correlation analysis showed statistically significant correlations between attitudes and ethical awareness toward AI in both elementary general and science-gifted students. In particular, five sub-domains of attitudes toward AI were significantly correlated with the overall of ethical awareness toward AI and three or four of the five sub-domains of ethical awareness toward AI with high reliability. The correlation coefficients between ethical awareness and attitudes toward AI, especially the two sub-domains of ‘emotional interaction with AI’ and ‘social influence of AI’, were statistically significantly larger for the science-gifted students than for the general students. Educational implications of these findings are discussed.
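For illustration only (the survey data are not reproduced here), the sketch below shows the kind of independent-samples t-test and correlation analysis the abstract refers to, using SciPy on simulated scores; the group sizes follow the abstract, everything else is invented.

    # Illustrative sketch: independent-samples t-test and Pearson correlation.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    gifted = rng.normal(loc=4.2, scale=0.5, size=87)    # attitude scores, science-gifted (n=87)
    general = rng.normal(loc=3.8, scale=0.6, size=90)   # attitude scores, general (n=90)

    t_stat, p_value = stats.ttest_ind(gifted, general)  # independent samples t-test
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

    # Correlation between attitude and ethical-awareness scores within one group.
    ethics_gifted = 0.6 * gifted + rng.normal(scale=0.4, size=87)
    r, p = stats.pearsonr(gifted, ethics_gifted)
    print(f"Pearson r = {r:.2f}, p = {p:.4f}")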
10. Kumar, Sowmya Ramesh, and Samarth Ramesh Kedilaya. "Navigating Complexity: Harnessing AI for Multivariate Time Series Forecasting in Dynamic Environments". Journal of Engineering and Applied Sciences Technology, December 31, 2023, 1–8. http://dx.doi.org/10.47363/jeast/2023(5)219.

Abstract:
Multivariate Time Series Analysis (MTSA) plays a pivotal role in forecasting within diverse domains by addressing the complexities arising from interdependencies among multiple variables. This exploration delves into the fundamentals, methodologies, and applications of MTSA, elucidating its role in enhancing predictive capabilities. The key concepts in MTSA, including Vector Autoregression, Cointegration, Error Correction Models, and Granger Causality, form the foundation for understanding dynamic relationships among variables. The methodology section outlines the critical steps in MTSA, such as model specification, estimation, diagnostics, and forecasting. Additionally, the abstract explores the capabilities of Artificial Intelligence (AI) in time-series forecasting, emphasizing improved accuracy, long-term trend recognition, dynamic pattern recognition, and the handling of seasonality and anomalies. Specific AI models, such as Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM), Echo State Networks (ESNs), and Online Learning Algorithms, are discussed in detail, along with practical implementation examples. Furthermore, the abstract introduces the benefits and challenges associated with MTSA. The benefits include comprehensive insights, improved forecast accuracy, and real-world relevance, while challenges encompass data and model complexity, explicability, and the validity of assumptions. The discussion emphasizes the need for innovative approaches to explain the predictions of complex models and highlights ongoing research in developing explanability frameworks.
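As a hedged illustration of the VAR workflow the abstract outlines (model specification, estimation, forecasting, and a Granger causality check), not code from the cited article, here is a minimal statsmodels sketch on toy two-variable data.

    # Illustrative sketch: fit a VAR, forecast, and test Granger causality.
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.api import VAR

    rng = np.random.default_rng(0)
    n = 200
    x = np.zeros(n); y = np.zeros(n)
    for t in range(1, n):                      # two coupled AR(1)-style series
        x[t] = 0.6 * x[t - 1] + 0.3 * y[t - 1] + rng.normal(scale=0.5)
        y[t] = 0.2 * x[t - 1] + 0.5 * y[t - 1] + rng.normal(scale=0.5)

    data = pd.DataFrame({"x": x, "y": y})
    results = VAR(data).fit(maxlags=5, ic="aic")   # model specification + estimation
    print(results.summary())

    forecast = results.forecast(data.values[-results.k_ar:], steps=10)   # 10-step forecast
    print(forecast)

    # Granger causality check: does y help predict x?
    print(results.test_causality("x", ["y"], kind="f").summary())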

Theses on the topic "Explanability"

1. Bertrand, Astrid. "Misplaced trust in AI: the explanation paradox and the human-centric path. A characterisation of the cognitive challenges to appropriately trust algorithmic decisions and applications in the financial sector". Electronic Thesis or Diss., Institut polytechnique de Paris, 2024. http://www.theses.fr/2024IPPAT012.

Abstract:
As AI is becoming more widespread in our everyday lives, concerns have been raised about comprehending how these opaque structures operate. In response, the research field of explainability (XAI) has developed considerably in recent years. However, little work has studied regulators' need for explainability or considered the effects of explanations on users in light of legal requirements for explanations. This thesis focuses on understanding the role of AI explanations to enable regulatory compliance of AI-enhanced systems in financial applications. The first part reviews the challenge of taking into account human cognitive biases in the explanations of AI systems. The analysis provides several directions to better align explainability solutions with people's cognitive processes, including designing more interactive explanations. It then presents a taxonomy of the different ways to interact with explainability solutions. The second part focuses on specific financial contexts. One study takes place in the domain of online recommender systems for life insurance contracts. The study highlights that feature-based explanations do not significantly improve non-expert users' understanding of the recommendation, nor lead to more appropriate reliance compared to having no explanation at all. Another study analyzes the needs of regulators for explainability in anti-money laundering and the countering of terrorism financing. It finds that supervisors need explanations to establish the reprehensibility of sampled failure cases, or to verify and challenge banks' correct understanding of the AI.

Book chapters on the topic "Explanability"

1. Daglarli, Evren. "Explainable Artificial Intelligence (xAI) Approaches and Deep Meta-Learning Models for Cyber-Physical Systems". In Advances in Systems Analysis, Software Engineering, and High Performance Computing, 42–67. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-5101-1.ch003.

Abstract:
Today, the effects of promising technologies such as explainable artificial intelligence (xAI) and meta-learning (ML) on the internet of things (IoT) and the cyber-physical systems (CPS), which are important components of Industry 4.0, are increasingly intensified. However, current deep learning models still have important shortcomings. These artificial neural network based models are black box models that generalize the data transmitted to them and learn from the data. Therefore, the relational link between input and output is not observable. For these reasons, it is necessary to make serious efforts on the explanability and interpretability of black box models. In the near future, the integration of explainable artificial intelligence and meta-learning approaches into cyber-physical systems will have effects on a high level of virtualization and simulation infrastructure, real-time supply chains, cyber factories with smart machines communicating over the internet, maximizing production efficiency, and analysis of service quality and competition level.

Conference papers on the topic "Explanability"

1. Singla, Kushal, and Subham Biswas. "Machine learning explanability method for the multi-label classification model". In 2021 IEEE 15th International Conference on Semantic Computing (ICSC). IEEE, 2021. http://dx.doi.org/10.1109/icsc50631.2021.00063.

2. Hampel-Arias, Zigfried, Adra Carr, Natalie Klein, and Eric Flynn. "2D Spectral Representations and Autoencoders for Hyperspectral Imagery Classification and ExplanabilitY". In 2024 IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI). IEEE, 2024. http://dx.doi.org/10.1109/ssiai59505.2024.10508608.

3. Montoya, Fernando, Esteban Berríos, Daniela Díaz, and Hernán Astudillo. "Counterfactual Explanability: An Application of Causal Inference in a Financial Sector Delivery Business Process". In 2023 42nd IEEE International Conference of the Chilean Computer Science Society (SCCC). IEEE, 2023. http://dx.doi.org/10.1109/sccc59417.2023.10315742.

4. Panati, Chandana, Simon Wagner, and Stefan Brüggenwirth. "Multiple Target Recognition Within SAR Scene Achieved Using YOLO and Explanability Investigated Using Gradient-Free Visualisation". In 2024 IEEE Radar Conference (RadarConf24). IEEE, 2024. http://dx.doi.org/10.1109/radarconf2458775.2024.10548088.

