Bibliography on the topic "Explainable Image Captioning (XIC)"
A list of current journal articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Explainable Image Captioning (XIC)".
Journal articles on the topic "Explainable Image Captioning (XIC)"
Han, Seung-Ho, Min-Su Kwon, and Ho-Jin Choi. "EXplainable AI (XAI) approach to image captioning". Journal of Engineering 2020, no. 13 (July 1, 2020): 589–94. http://dx.doi.org/10.1049/joe.2019.1217.
Fei, Zhengcong, Mingyuan Fan, Li Zhu, Junshi Huang, Xiaoming Wei, and Xiaolin Wei. "Uncertainty-Aware Image Captioning". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 1 (June 26, 2023): 614–22. http://dx.doi.org/10.1609/aaai.v37i1.25137.
Liu, Haixia, and Tim Brailsford. "Reproducing “Show, Attend and Tell: Neural Image Caption Generation with Visual Attention”". Journal of Physics: Conference Series 2589, no. 1 (September 1, 2023): 012012. http://dx.doi.org/10.1088/1742-6596/2589/1/012012.
Biswas, Rajarshi, Michael Barz, and Daniel Sonntag. "Towards Explanatory Interactive Image Captioning Using Top-Down and Bottom-Up Features, Beam Search and Re-ranking". KI - Künstliche Intelligenz 34, no. 4 (July 8, 2020): 571–84. http://dx.doi.org/10.1007/s13218-020-00679-2.
Ghosh, Swarnendu, Teresa Gonçalves, and Nibaran Das. "Im2Graph: A Weakly Supervised Approach for Generating Holistic Scene Graphs from Regional Dependencies". Future Internet 15, no. 2 (February 10, 2023): 70. http://dx.doi.org/10.3390/fi15020070.
Naresh, Naresh, Gunikhan .., and V. Balaji. "AMR-XAI-DWT: Age-Related Macular Regenerated Classification using X-AI with Dual Tree CWT". Fusion: Practice and Applications 15, no. 2 (2024): 17–35. http://dx.doi.org/10.54216/fpa.150202.
Yong, Gunwoo, Meiyin Liu, and SangHyun Lee. "Explainable Image Captioning to Identify Ergonomic Problems and Solutions for Construction Workers". Journal of Computing in Civil Engineering 38, no. 4 (July 2024). http://dx.doi.org/10.1061/jccee5.cpeng-5744.
Pan, Yingwei, Yehao Li, Ting Yao, and Tao Mei. "Bottom-up and Top-down Object Inference Networks for Image Captioning". ACM Transactions on Multimedia Computing, Communications, and Applications, January 19, 2023. http://dx.doi.org/10.1145/3580366.
Ilinykh, Nikolai, and Simon Dobnik. "What Does a Language-And-Vision Transformer See: The Impact of Semantic Information on Visual Representations". Frontiers in Artificial Intelligence 4 (December 3, 2021). http://dx.doi.org/10.3389/frai.2021.767971.
Doctoral dissertations on the topic "Explainable Image Captioning (XIC)"
Elguendouze, Sofiane. "Explainable Artificial Intelligence approaches for Image Captioning". Electronic Thesis or Diss., Orléans, 2024. http://www.theses.fr/2024ORLE1003.
Pełny tekst źródłaThe rapid advancement of image captioning models, driven by the integration of deep learning techniques that combine image and text modalities, has resulted in increasingly complex systems. However, these models often operate as black boxes, lacking the ability to provide transparent explanations for their decisions. This thesis addresses the explainability of image captioning systems based on Encoder-Attention-Decoder architectures, through four aspects. First, it explores the concept of the latent space, marking a departure from traditional approaches relying on the original representation space. Second, it introduces the notion of decisiveness, leading to the formulation of a new definition for the concept of component influence/decisiveness in the context of explainable image captioning, as well as a perturbation-based approach to capturing decisiveness. The third aspect aims to elucidate the factors influencing explanation quality, in particular the scope of explanation methods. Accordingly, latent-based variants of well-established explanation methods such as LRP and LIME have been developed, along with the introduction of a latent-centered evaluation approach called Latent Ablation. The fourth aspect of this work involves investigating what we call saliency and the representation of certain visual concepts, such as object quantity, at different levels of the captioning architecture
Book chapters on the topic "Explainable Image Captioning (XIC)"
Beddiar, Romaissa, and Mourad Oussalah. "Explainability in medical image captioning". In Explainable Deep Learning AI, 239–61. Elsevier, 2023. http://dx.doi.org/10.1016/b978-0-32-396098-4.00018-1.
Conference abstracts on the topic "Explainable Image Captioning (XIC)"
Tseng, Ching-Shan, Ying-Jia Lin, and Hung-Yu Kao. "Relation-Aware Image Captioning for Explainable Visual Question Answering". In 2022 International Conference on Technologies and Applications of Artificial Intelligence (TAAI). IEEE, 2022. http://dx.doi.org/10.1109/taai57707.2022.00035.
Elguendouze, Sofiane, Marcilio C. P. de Souto, Adel Hafiane, and Anais Halftermeyer. "Towards Explainable Deep Learning for Image Captioning through Representation Space Perturbation". In 2022 International Joint Conference on Neural Networks (IJCNN). IEEE, 2022. http://dx.doi.org/10.1109/ijcnn55064.2022.9892275.