Journal articles on the topic “Multi-Modal representations”
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Get acquainted with the top 50 journal articles for research on the topic “Multi-Modal representations”.
Next to every work in the list of references there is an “Add to bibliography” option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read its online annotation, if the relevant parameters are available in the metadata.
Browse journal articles from a wide range of subject areas and compile your bibliography correctly.
Wu, Lianlong, Seewon Choi, Daniel Raggi, Aaron Stockdill, Grecia Garcia Garcia, Fiorenzo Colarusso, Peter C. H. Cheng, and Mateja Jamnik. “Generation of Visual Representations for Multi-Modal Mathematical Knowledge.” Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 21 (March 24, 2024): 23850–52. http://dx.doi.org/10.1609/aaai.v38i21.30586.
Zhang, Yi, Mingyuan Chen, Jundong Shen, and Chongjun Wang. “Tailor Versatile Multi-Modal Learning for Multi-Label Emotion Recognition.” Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 9100–9108. http://dx.doi.org/10.1609/aaai.v36i8.20895.
Zhang, Dong, Suzhong Wei, Shoushan Li, Hanqian Wu, Qiaoming Zhu, and Guodong Zhou. “Multi-modal Graph Fusion for Named Entity Recognition with Targeted Visual Guidance.” Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 16 (May 18, 2021): 14347–55. http://dx.doi.org/10.1609/aaai.v35i16.17687.
Liu, Hao, Jindong Han, Yanjie Fu, Jingbo Zhou, Xinjiang Lu, and Hui Xiong. “Multi-modal transportation recommendation with unified route representation learning.” Proceedings of the VLDB Endowment 14, no. 3 (November 2020): 342–50. http://dx.doi.org/10.14778/3430915.3430924.
Wang, Huansha, Qinrang Liu, Ruiyang Huang, and Jianpeng Zhang. “Multi-Modal Entity Alignment Method Based on Feature Enhancement.” Applied Sciences 13, no. 11 (June 1, 2023): 6747. http://dx.doi.org/10.3390/app13116747.
Wu, Tianxing, Chaoyu Gao, Lin Li, and Yuxiang Wang. “Leveraging Multi-Modal Information for Cross-Lingual Entity Matching across Knowledge Graphs.” Applied Sciences 12, no. 19 (October 8, 2022): 10107. http://dx.doi.org/10.3390/app121910107.
Han, Ning, Jingjing Chen, Hao Zhang, Huanwen Wang, and Hao Chen. “Adversarial Multi-Grained Embedding Network for Cross-Modal Text-Video Retrieval.” ACM Transactions on Multimedia Computing, Communications, and Applications 18, no. 2 (May 31, 2022): 1–23. http://dx.doi.org/10.1145/3483381.
Ying, Qichao, Xiaoxiao Hu, Yangming Zhou, Zhenxing Qian, Dan Zeng, and Shiming Ge. “Bootstrapping Multi-View Representations for Fake News Detection.” Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 4 (June 26, 2023): 5384–92. http://dx.doi.org/10.1609/aaai.v37i4.25670.
Huang, Yufeng, Jiji Tang, Zhuo Chen, Rongsheng Zhang, Xinfeng Zhang, Weijie Chen, Zeng Zhao, et al. “Structure-CLIP: Towards Scene Graph Knowledge to Enhance Multi-Modal Structured Representations.” Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 2417–25. http://dx.doi.org/10.1609/aaai.v38i3.28017.
van Tulder, Gijs, and Marleen de Bruijne. “Learning Cross-Modality Representations From Multi-Modal Images.” IEEE Transactions on Medical Imaging 38, no. 2 (February 2019): 638–48. http://dx.doi.org/10.1109/tmi.2018.2868977.
Kiela, Douwe, and Stephen Clark. “Learning Neural Audio Embeddings for Grounding Semantics in Auditory Perception.” Journal of Artificial Intelligence Research 60 (December 26, 2017): 1003–30. http://dx.doi.org/10.1613/jair.5665.
Cui, Xiaohui, Xiaolong Qu, Dongmei Li, Yu Yang, Yuxun Li, and Xiaoping Zhang. “MKGCN: Multi-Modal Knowledge Graph Convolutional Network for Music Recommender Systems.” Electronics 12, no. 12 (June 15, 2023): 2688. http://dx.doi.org/10.3390/electronics12122688.
Dong, Bin, Songlei Jian, and Kai Lu. “Learning Multimodal Representations by Symmetrically Transferring Local Structures.” Symmetry 12, no. 9 (September 13, 2020): 1504. http://dx.doi.org/10.3390/sym12091504.
Li, Yehao, Jiahao Fan, Yingwei Pan, Ting Yao, Weiyao Lin, and Tao Mei. “Uni-EDEN: Universal Encoder-Decoder Network by Multi-Granular Vision-Language Pre-training.” ACM Transactions on Multimedia Computing, Communications, and Applications 18, no. 2 (May 31, 2022): 1–16. http://dx.doi.org/10.1145/3473140.
Gu, Zhihao, Jiangning Zhang, Liang Liu, Xu Chen, Jinlong Peng, Zhenye Gan, Guannan Jiang, Annan Shu, Yabiao Wang, and Lizhuang Ma. “Rethinking Reverse Distillation for Multi-Modal Anomaly Detection.” Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 8 (March 24, 2024): 8445–53. http://dx.doi.org/10.1609/aaai.v38i8.28687.
Wang, Zi, Chenglong Li, Aihua Zheng, Ran He, and Jin Tang. “Interact, Embed, and EnlargE: Boosting Modality-Specific Representations for Multi-Modal Person Re-identification.” Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (June 28, 2022): 2633–41. http://dx.doi.org/10.1609/aaai.v36i3.20165.
Wróblewska, Anna, Jacek Dąbrowski, Michał Pastuszak, Andrzej Michałowski, Michał Daniluk, Barbara Rychalska, Mikołaj Wieczorek, and Sylwia Sysko-Romańczuk. “Designing Multi-Modal Embedding Fusion-Based Recommender.” Electronics 11, no. 9 (April 27, 2022): 1391. http://dx.doi.org/10.3390/electronics11091391.
He, Qibin. “Prompting Multi-Modal Image Segmentation with Semantic Grouping.” Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 2094–102. http://dx.doi.org/10.1609/aaai.v38i3.27981.
Liang, Meiyu, Junping Du, Zhengyang Liang, Yongwang Xing, Wei Huang, and Zhe Xue. “Self-Supervised Multi-Modal Knowledge Graph Contrastive Hashing for Cross-Modal Search.” Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 12 (March 24, 2024): 13744–53. http://dx.doi.org/10.1609/aaai.v38i12.29280.
Bodapati, Jyostna Devi, Veeranjaneyulu Naralasetti, Shaik Nagur Shareef, Saqib Hakak, Muhammad Bilal, Praveen Kumar Reddy Maddikunta, and Ohyun Jo. “Blended Multi-Modal Deep ConvNet Features for Diabetic Retinopathy Severity Prediction.” Electronics 9, no. 6 (May 30, 2020): 914. http://dx.doi.org/10.3390/electronics9060914.
Liu, Hao, Ting Li, Renjun Hu, Yanjie Fu, Jingjing Gu, and Hui Xiong. “Joint Representation Learning for Multi-Modal Transportation Recommendation.” Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 1036–43. http://dx.doi.org/10.1609/aaai.v33i01.33011036.
Yang, Fan, Wei Li, Menglong Yang, Binbin Liang, and Jianwei Zhang. “Multi-Modal Disordered Representation Learning Network for Description-Based Person Search.” Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 15 (March 24, 2024): 16316–24. http://dx.doi.org/10.1609/aaai.v38i15.29567.
Jüttner, Martin, and Ingo Rentschler. “Imagery in multi-modal object learning.” Behavioral and Brain Sciences 25, no. 2 (April 2002): 197–98. http://dx.doi.org/10.1017/s0140525x0238004x.
Tao, Rui, Meng Zhu, Haiyan Cao, and Honge Ren. “Fine-Grained Cross-Modal Semantic Consistency in Natural Conservation Image Data from a Multi-Task Perspective.” Sensors 24, no. 10 (May 14, 2024): 3130. http://dx.doi.org/10.3390/s24103130.
Pugeault, Nicolas, Florentin Wörgötter, and Norbert Krüger. “Disambiguating Multi-Modal Scene Representations Using Perceptual Grouping Constraints.” PLoS ONE 5, no. 6 (June 9, 2010): e10663. http://dx.doi.org/10.1371/journal.pone.0010663.
Lara, Bruno, Juan Manuel Rendon-Mancha, and Marcos A. Capistran. “Prediction of Undesired Situations based on Multi-Modal Representations.” IEEE Latin America Transactions 5, no. 2 (May 2007): 103–8. http://dx.doi.org/10.1109/tla.2007.4381351.
Geng, Shijie, Peng Gao, Moitreya Chatterjee, Chiori Hori, Jonathan Le Roux, Yongfeng Zhang, Hongsheng Li, and Anoop Cherian. “Dynamic Graph Representation Learning for Video Dialog via Multi-Modal Shuffled Transformers.” Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 2 (May 18, 2021): 1415–23. http://dx.doi.org/10.1609/aaai.v35i2.16231.
Yan, Facheng, Mingshu Zhang, and Bin Wei. “Multimodal integration for fake news detection on social media platforms.” MATEC Web of Conferences 395 (2024): 01013. http://dx.doi.org/10.1051/matecconf/202439501013.
Escobar-Grisales, Daniel, Cristian David Ríos-Urrego, and Juan Rafael Orozco-Arroyave. “Deep Learning and Artificial Intelligence Applied to Model Speech and Language in Parkinson’s Disease.” Diagnostics 13, no. 13 (June 25, 2023): 2163. http://dx.doi.org/10.3390/diagnostics13132163.
Zhai, Hanming, Xiaojun Lv, Zhiwen Hou, Xin Tong, and Fanliang Bu. “MLSFF: Multi-level structural features fusion for multi-modal knowledge graph completion.” Mathematical Biosciences and Engineering 20, no. 8 (2023): 14096–116. http://dx.doi.org/10.3934/mbe.2023630.
Hua, Yan, Yingyun Yang, and Jianhe Du. “Deep Multi-Modal Metric Learning with Multi-Scale Correlation for Image-Text Retrieval.” Electronics 9, no. 3 (March 10, 2020): 466. http://dx.doi.org/10.3390/electronics9030466.
Yang, Yiying, Fukun Yin, Wen Liu, Jiayuan Fan, Xin Chen, Gang Yu, and Tao Chen. “PM-INR: Prior-Rich Multi-Modal Implicit Large-Scale Scene Neural Representation.” Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 7 (March 24, 2024): 6594–602. http://dx.doi.org/10.1609/aaai.v38i7.28481.
Hu, Lianyu, Liqing Gao, Zekang Liu, Chi-Man Pun, and Wei Feng. “COMMA: Co-articulated Multi-Modal Learning.” Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 2238–46. http://dx.doi.org/10.1609/aaai.v38i3.27997.
Hill, Felix, Roi Reichart, and Anna Korhonen. “Multi-Modal Models for Concrete and Abstract Concept Meaning.” Transactions of the Association for Computational Linguistics 2 (December 2014): 285–96. http://dx.doi.org/10.1162/tacl_a_00183.
Liu, Xuanwu, Guoxian Yu, Carlotta Domeniconi, Jun Wang, Yazhou Ren, and Maozu Guo. “Ranking-Based Deep Cross-Modal Hashing.” Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4400–4407. http://dx.doi.org/10.1609/aaai.v33i01.33014400.
Sezerer, Erhan, and Selma Tekir. “Incorporating Concreteness in Multi-Modal Language Models with Curriculum Learning.” Applied Sciences 11, no. 17 (September 6, 2021): 8241. http://dx.doi.org/10.3390/app11178241.
Gou, Yingdong, Kexin Wang, Siwen Wei, and Changxin Shi. “GMDA: GCN-Based Multi-Modal Domain Adaptation for Real-Time Disaster Detection.” International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 31, no. 06 (December 2023): 957–73. http://dx.doi.org/10.1142/s0218488523500435.
Jang, Jiho, Chaerin Kong, DongHyeon Jeon, Seonhoon Kim, and Nojun Kwak. “Unifying Vision-Language Representation Space with Single-Tower Transformer.” Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 1 (June 26, 2023): 980–88. http://dx.doi.org/10.1609/aaai.v37i1.25178.
Kabir, Anowarul, and Amarda Shehu. “GOProFormer: A Multi-Modal Transformer Method for Gene Ontology Protein Function Prediction.” Biomolecules 12, no. 11 (November 18, 2022): 1709. http://dx.doi.org/10.3390/biom12111709.
Alam, Mohammad Arif Ul. “College Student Retention Risk Analysis from Educational Database Using Multi-Task Multi-Modal Neural Fusion.” Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (June 28, 2022): 12689–97. http://dx.doi.org/10.1609/aaai.v36i11.21545.
Qian, Shengsheng, Dizhan Xue, Huaiwen Zhang, Quan Fang, and Changsheng Xu. “Dual Adversarial Graph Neural Networks for Multi-label Cross-modal Retrieval.” Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 3 (May 18, 2021): 2440–48. http://dx.doi.org/10.1609/aaai.v35i3.16345.
Lu, Lyujian, Saad Elbeleidy, Lauren Zoe Baker, and Hua Wang. “Learning Multi-Modal Biomarker Representations via Globally Aligned Longitudinal Enrichments.” Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 01 (April 3, 2020): 817–24. http://dx.doi.org/10.1609/aaai.v34i01.5426.
Zhang, Heng, Vishal M. Patel, and Rama Chellappa. “Low-Rank and Joint Sparse Representations for Multi-Modal Recognition.” IEEE Transactions on Image Processing 26, no. 10 (October 2017): 4741–52. http://dx.doi.org/10.1109/tip.2017.2721838.
Blown, Eric, and Tom G. K. Bryce. “Conceptual Coherence Revealed in Multi-Modal Representations of Astronomy Knowledge.” International Journal of Science Education 32, no. 1 (June 30, 2009): 31–67. http://dx.doi.org/10.1080/09500690902974207.
Torasso, Pietro. “Multiple representations and multi-modal reasoning in medical diagnostic systems.” Artificial Intelligence in Medicine 23, no. 1 (August 2001): 49–69. http://dx.doi.org/10.1016/s0933-3657(01)00075-6.
Wachinger, Christian, and Nassir Navab. “Entropy and Laplacian images: Structural representations for multi-modal registration.” Medical Image Analysis 16, no. 1 (January 2012): 1–17. http://dx.doi.org/10.1016/j.media.2011.03.001.
Fang, Feiyi, Tao Zhou, Zhenbo Song, and Jianfeng Lu. “MMCAN: Multi-Modal Cross-Attention Network for Free-Space Detection with Uncalibrated Hyperspectral Sensors.” Remote Sensing 15, no. 4 (February 20, 2023): 1142. http://dx.doi.org/10.3390/rs15041142.
Gao, Jingsheng, Jiacheng Ruan, Suncheng Xiang, Zefang Yu, Ke Ji, Mingye Xie, Ting Liu, and Yuzhuo Fu. “LAMM: Label Alignment for Multi-Modal Prompt Learning.” Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 1815–23. http://dx.doi.org/10.1609/aaai.v38i3.27950.
Bao, Peijun, Wenhan Yang, Boon Poh Ng, Meng Hwa Er, and Alex C. Kot. “Cross-Modal Label Contrastive Learning for Unsupervised Audio-Visual Event Localization.” Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 1 (June 26, 2023): 215–22. http://dx.doi.org/10.1609/aaai.v37i1.25093.
Liu, Shuang, Mei Li, Zhong Zhang, Baihua Xiao, and Tariq S. Durrani. “Multi-Evidence and Multi-Modal Fusion Network for Ground-Based Cloud Recognition.” Remote Sensing 12, no. 3 (February 2, 2020): 464. http://dx.doi.org/10.3390/rs12030464.