Ready-made bibliography on the topic "Explanability"
Create correct references in APA, MLA, Chicago, Harvard, and many other citation styles
See lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Explanability".
An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically create a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a ".pdf" file and read the work's abstract online, if the relevant parameters are available in the metadata.
Journal articles on the topic "Explanability"
Collier, John. "Reduction, supervenience, and physical emergence". Behavioral and Brain Sciences 27, no. 5 (October 2004): 629–30. http://dx.doi.org/10.1017/s0140525x04240146.
Hu, Hanqing, Mehmed Kantardzic, and Shreyas Kar. "Explainable data stream mining: Why the new models are better". Intelligent Decision Technologies 18, no. 1 (February 20, 2024): 371–85. http://dx.doi.org/10.3233/idt-230065.
Venkata Krishnamoorthy, T., C. Venkataiah, Y. Mallikarjuna Rao, D. Rajendra Prasad, Kurra Upendra Chowdary, Manjula Jayamma, and R. Sireesha. "A novel NASNet model with LIME explanability for lung disease classification". Biomedical Signal Processing and Control 93 (July 2024): 106114. http://dx.doi.org/10.1016/j.bspc.2024.106114.
Barajas Aranda, Daniel Alejandro, Miguel Angel Sicilia Urban, Maria Dolores Torres Soto, and Aurora Torres Soto. "Comparison and Explanability of Machine Learning Models in Predictive Suicide Analysis". DYNA New Technologies 11, no. 1 (February 28, 2024): [10p.]. http://dx.doi.org/10.6036/nt11028.
Pachouly, Mrs Shikha J. "The Role of Explanability in AI-Driven Fashion Recommendation Model - A Review". International Journal for Research in Applied Science and Engineering Technology 12, no. 1 (January 31, 2024): 769–75. http://dx.doi.org/10.22214/ijraset.2024.56885.
Adam, Carole, Patrick Taillandier, Julie Dugdale, and Benoit Gaudou. "BDI vs FSM Agents in Social Simulations for Raising Awareness in Disasters". International Journal of Information Systems for Crisis Response and Management 9, no. 1 (January 2017): 27–44. http://dx.doi.org/10.4018/ijiscram.2017010103.
Hollis, Kate Fultz, Lina F. Soualmia, and Brigitte Séroussi. "Artificial Intelligence in Health Informatics: Hype or Reality?" Yearbook of Medical Informatics 28, no. 01 (August 2019): 003–4. http://dx.doi.org/10.1055/s-0039-1677951.
Hussain, Sardar Mehboob, Domenico Buongiorno, Nicola Altini, Francesco Berloco, Berardino Prencipe, Marco Moschetta, Vitoantonio Bevilacqua, and Antonio Brunetti. "Shape-Based Breast Lesion Classification Using Digital Tomosynthesis Images: The Role of Explainable Artificial Intelligence". Applied Sciences 12, no. 12 (June 19, 2022): 6230. http://dx.doi.org/10.3390/app12126230.
Cha, Hyunjung, and Hunsik Kang. "Comparison of Level and Relationship in Attitudes and Ethical Awareness toward Artificial Intelligence between Elementary General and Science-Gifted Students". Korean Science Education Society for the Gifted 16, no. 1 (April 30, 2024): 50–61. http://dx.doi.org/10.29306/jseg.2024.16.1.50.
Kumar, Sowmya Ramesh, and Samarth Ramesh Kedilaya. "Navigating Complexity: Harnessing AI for Multivariate Time Series Forecasting in Dynamic Environments". Journal of Engineering and Applied Sciences Technology, December 31, 2023, 1–8. http://dx.doi.org/10.47363/jeast/2023(5)219.
Doctoral dissertations on the topic "Explanability"
Bertrand, Astrid. "Misplaced trust in AI : the explanation paradox and the human-centric path. A characterisation of the cognitive challenges to appropriately trust algorithmic decisions and applications in the financial sector". Electronic Thesis or Diss., Institut polytechnique de Paris, 2024. http://www.theses.fr/2024IPPAT012.
As AI is becoming more widespread in our everyday lives, concerns have been raised about comprehending how these opaque structures operate. In response, the research field of explainability (XAI) has developed considerably in recent years. However, little work has studied regulators' need for explainability or considered the effects of explanations on users in light of legal requirements for explanations. This thesis focuses on understanding the role of AI explanations in enabling regulatory compliance of AI-enhanced systems in financial applications. The first part reviews the challenge of taking human cognitive biases into account in the explanations of AI systems. The analysis provides several directions for better aligning explainability solutions with people's cognitive processes, including designing more interactive explanations. It then presents a taxonomy of the different ways of interacting with explainability solutions. The second part focuses on specific financial contexts. One study takes place in the domain of online recommender systems for life insurance contracts. The study highlights that feature-based explanations do not significantly improve non-expert users' understanding of the recommendation, nor do they lead to more appropriate reliance compared to having no explanation at all. Another study analyzes regulators' needs for explainability in anti-money laundering and countering the financing of terrorism. It finds that supervisors need explanations to establish the reprehensibility of sampled failure cases, or to verify and challenge banks' correct understanding of the AI.
Book chapters on the topic "Explanability"
Daglarli, Evren. "Explainable Artificial Intelligence (xAI) Approaches and Deep Meta-Learning Models for Cyber-Physical Systems". In Advances in Systems Analysis, Software Engineering, and High Performance Computing, 42–67. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-5101-1.ch003.
Pełny tekst źródłaStreszczenia konferencji na temat "Explanability"
Singla, Kushal, and Subham Biswas. "Machine learning explanability method for the multi-label classification model". In 2021 IEEE 15th International Conference on Semantic Computing (ICSC). IEEE, 2021. http://dx.doi.org/10.1109/icsc50631.2021.00063.
Hampel-Arias, Zigfried, Adra Carr, Natalie Klein, and Eric Flynn. "2D Spectral Representations and Autoencoders for Hyperspectral Imagery Classification and Explanability". In 2024 IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI). IEEE, 2024. http://dx.doi.org/10.1109/ssiai59505.2024.10508608.
Montoya, Fernando, Esteban Berríos, Daniela Díaz, and Hernán Astudillo. "Counterfactual Explanability: An Application of Causal Inference in a Financial Sector Delivery Business Process". In 2023 42nd IEEE International Conference of the Chilean Computer Science Society (SCCC). IEEE, 2023. http://dx.doi.org/10.1109/sccc59417.2023.10315742.
Panati, Chandana, Simon Wagner, and Stefan Brüggenwirth. "Multiple Target Recognition Within SAR Scene Achieved Using YOLO and Explanability Investigated Using Gradient-Free Visualisation". In 2024 IEEE Radar Conference (RadarConf24). IEEE, 2024. http://dx.doi.org/10.1109/radarconf2458775.2024.10548088.