Selected scholarly literature on the topic "Explainable Image Captioning (XIC)"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Contents
Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Explainable Image Captioning (XIC)."
Next to every work in the bibliography, an "Add to bibliography" option is available. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of a scholarly publication in PDF format and read its online annotation, where the relevant parameters are available in the metadata.
Journal articles on the topic "Explainable Image Captioning (XIC)"
Han, Seung-Ho, Min-Su Kwon, and Ho-Jin Choi. "EXplainable AI (XAI) approach to image captioning." Journal of Engineering 2020, no. 13 (July 1, 2020): 589–94. http://dx.doi.org/10.1049/joe.2019.1217.
Fei, Zhengcong, Mingyuan Fan, Li Zhu, Junshi Huang, Xiaoming Wei, and Xiaolin Wei. "Uncertainty-Aware Image Captioning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 1 (June 26, 2023): 614–22. http://dx.doi.org/10.1609/aaai.v37i1.25137.
Liu, Haixia, and Tim Brailsford. "Reproducing 'Show, Attend and Tell: Neural Image Caption Generation with Visual Attention'." Journal of Physics: Conference Series 2589, no. 1 (September 1, 2023): 012012. http://dx.doi.org/10.1088/1742-6596/2589/1/012012.
Biswas, Rajarshi, Michael Barz, and Daniel Sonntag. "Towards Explanatory Interactive Image Captioning Using Top-Down and Bottom-Up Features, Beam Search and Re-ranking." KI - Künstliche Intelligenz 34, no. 4 (July 8, 2020): 571–84. http://dx.doi.org/10.1007/s13218-020-00679-2.
Ghosh, Swarnendu, Teresa Gonçalves, and Nibaran Das. "Im2Graph: A Weakly Supervised Approach for Generating Holistic Scene Graphs from Regional Dependencies." Future Internet 15, no. 2 (February 10, 2023): 70. http://dx.doi.org/10.3390/fi15020070.
Naresh, Naresh, Gunikhan, and V. Balaji. "AMR-XAI-DWT: Age-Related Macular Regenerated Classification using X-AI with Dual Tree CWT." Fusion: Practice and Applications 15, no. 2 (2024): 17–35. http://dx.doi.org/10.54216/fpa.150202.
Yong, Gunwoo, Meiyin Liu, and SangHyun Lee. "Explainable Image Captioning to Identify Ergonomic Problems and Solutions for Construction Workers." Journal of Computing in Civil Engineering 38, no. 4 (July 2024). http://dx.doi.org/10.1061/jccee5.cpeng-5744.
Pan, Yingwei, Yehao Li, Ting Yao, and Tao Mei. "Bottom-up and Top-down Object Inference Networks for Image Captioning." ACM Transactions on Multimedia Computing, Communications, and Applications, January 19, 2023. http://dx.doi.org/10.1145/3580366.
Ilinykh, Nikolai, and Simon Dobnik. "What Does a Language-And-Vision Transformer See: The Impact of Semantic Information on Visual Representations." Frontiers in Artificial Intelligence 4 (December 3, 2021). http://dx.doi.org/10.3389/frai.2021.767971.
Dissertations on the topic "Explainable Image Captioning (XIC)"
Elguendouze, Sofiane. "Explainable Artificial Intelligence approaches for Image Captioning." Electronic thesis or dissertation, Orléans, 2024. http://www.theses.fr/2024ORLE1003.
The rapid advancement of image captioning models, driven by the integration of deep learning techniques that combine image and text modalities, has resulted in increasingly complex systems. However, these models often operate as black boxes, lacking the ability to provide transparent explanations for their decisions. This thesis addresses the explainability of image captioning systems based on Encoder-Attention-Decoder architectures, through four aspects. First, it explores the concept of the latent space, marking a departure from traditional approaches relying on the original representation space. Second, it introduces the notion of decisiveness, leading to the formulation of a new definition for the concept of component influence/decisiveness in the context of explainable image captioning, as well as a perturbation-based approach to capturing decisiveness. The third aspect aims to elucidate the factors influencing explanation quality, in particular the scope of explanation methods. Accordingly, latent-based variants of well-established explanation methods such as LRP and LIME have been developed, along with the introduction of a latent-centered evaluation approach called Latent Ablation. The fourth aspect of this work involves investigating what we call saliency and the representation of certain visual concepts, such as object quantity, at different levels of the captioning architecture.
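The perturbation-based notion of decisiveness described in the abstract can be illustrated with a toy sketch. This is not the thesis's implementation; the model here is a stand-in (a random linear "decoder" over an 8-dimensional latent vector and a 20-word vocabulary, all names invented for illustration). The idea shown is only the general recipe: perturb one latent component at a time with noise and measure how far the decoder's output distribution shifts; components whose perturbation moves the output most are taken to be the most decisive.

```python
import math
import random

random.seed(0)

DIM, VOCAB = 8, 20  # latent dimensionality and toy vocabulary size


def softmax(zs):
    """Numerically stable softmax over a list of scores."""
    m = max(zs)
    exps = [math.exp(z - m) for z in zs]
    s = sum(exps)
    return [e / s for e in exps]


# Stand-ins for an encoder's latent output and a linear decoder head.
latent = [random.gauss(0, 1) for _ in range(DIM)]
W = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(VOCAB)]


def decode(z):
    """Map a latent vector to a next-word probability distribution."""
    return softmax([sum(w * x for w, x in zip(row, z)) for row in W])


baseline = decode(latent)


def decisiveness(z, dim, noise=1.0, n_samples=32):
    """Mean L1 shift of the output distribution when latent
    dimension `dim` is perturbed with Gaussian noise."""
    total = 0.0
    for _ in range(n_samples):
        z_pert = list(z)
        z_pert[dim] += random.gauss(0, noise)
        total += sum(abs(p - q) for p, q in zip(decode(z_pert), baseline))
    return total / n_samples


scores = [decisiveness(latent, d) for d in range(DIM)]
most_decisive = max(range(DIM), key=lambda d: scores[d])
```

A real captioning system would replace `decode` with the attention-decoder and compare generated captions rather than a single word distribution, but the scoring loop keeps the same shape: perturb a component, re-decode, measure the change.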
Book chapters on the topic "Explainable Image Captioning (XIC)"
Beddiar, Romaissa, and Mourad Oussalah. "Explainability in medical image captioning." In Explainable Deep Learning AI, 239–61. Elsevier, 2023. http://dx.doi.org/10.1016/b978-0-32-396098-4.00018-1.
Conference papers on the topic "Explainable Image Captioning (XIC)"
Tseng, Ching-Shan, Ying-Jia Lin, and Hung-Yu Kao. "Relation-Aware Image Captioning for Explainable Visual Question Answering." In 2022 International Conference on Technologies and Applications of Artificial Intelligence (TAAI). IEEE, 2022. http://dx.doi.org/10.1109/taai57707.2022.00035.
Elguendouze, Sofiane, Marcilio C. P. de Souto, Adel Hafiane, and Anais Halftermeyer. "Towards Explainable Deep Learning for Image Captioning through Representation Space Perturbation." In 2022 International Joint Conference on Neural Networks (IJCNN). IEEE, 2022. http://dx.doi.org/10.1109/ijcnn55064.2022.9892275.