Journal articles on the topic "Multi-Modal representations"
Format your sources in APA, MLA, Chicago, Harvard, and other citation styles
Consult the top 50 journal articles for your research on the topic "Multi-Modal representations".
Browse journal articles across many disciplines and format your bibliography correctly.
Wu, Lianlong, Seewon Choi, Daniel Raggi, Aaron Stockdill, Grecia Garcia Garcia, Fiorenzo Colarusso, Peter C. H. Cheng, and Mateja Jamnik. "Generation of Visual Representations for Multi-Modal Mathematical Knowledge." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 21 (March 24, 2024): 23850–52. http://dx.doi.org/10.1609/aaai.v38i21.30586.
Zhang, Yi, Mingyuan Chen, Jundong Shen, and Chongjun Wang. "Tailor Versatile Multi-Modal Learning for Multi-Label Emotion Recognition." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 9100–9108. http://dx.doi.org/10.1609/aaai.v36i8.20895.
Zhang, Dong, Suzhong Wei, Shoushan Li, Hanqian Wu, Qiaoming Zhu, and Guodong Zhou. "Multi-modal Graph Fusion for Named Entity Recognition with Targeted Visual Guidance." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 16 (May 18, 2021): 14347–55. http://dx.doi.org/10.1609/aaai.v35i16.17687.
Liu, Hao, Jindong Han, Yanjie Fu, Jingbo Zhou, Xinjiang Lu, and Hui Xiong. "Multi-modal transportation recommendation with unified route representation learning." Proceedings of the VLDB Endowment 14, no. 3 (November 2020): 342–50. http://dx.doi.org/10.14778/3430915.3430924.
Wang, Huansha, Qinrang Liu, Ruiyang Huang, and Jianpeng Zhang. "Multi-Modal Entity Alignment Method Based on Feature Enhancement." Applied Sciences 13, no. 11 (June 1, 2023): 6747. http://dx.doi.org/10.3390/app13116747.
Wu, Tianxing, Chaoyu Gao, Lin Li, and Yuxiang Wang. "Leveraging Multi-Modal Information for Cross-Lingual Entity Matching across Knowledge Graphs." Applied Sciences 12, no. 19 (October 8, 2022): 10107. http://dx.doi.org/10.3390/app121910107.
Han, Ning, Jingjing Chen, Hao Zhang, Huanwen Wang, and Hao Chen. "Adversarial Multi-Grained Embedding Network for Cross-Modal Text-Video Retrieval." ACM Transactions on Multimedia Computing, Communications, and Applications 18, no. 2 (May 31, 2022): 1–23. http://dx.doi.org/10.1145/3483381.
Ying, Qichao, Xiaoxiao Hu, Yangming Zhou, Zhenxing Qian, Dan Zeng, and Shiming Ge. "Bootstrapping Multi-View Representations for Fake News Detection." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 4 (June 26, 2023): 5384–92. http://dx.doi.org/10.1609/aaai.v37i4.25670.
Huang, Yufeng, Jiji Tang, Zhuo Chen, Rongsheng Zhang, Xinfeng Zhang, Weijie Chen, Zeng Zhao, et al. "Structure-CLIP: Towards Scene Graph Knowledge to Enhance Multi-Modal Structured Representations." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 2417–25. http://dx.doi.org/10.1609/aaai.v38i3.28017.
van Tulder, Gijs, and Marleen de Bruijne. "Learning Cross-Modality Representations From Multi-Modal Images." IEEE Transactions on Medical Imaging 38, no. 2 (February 2019): 638–48. http://dx.doi.org/10.1109/tmi.2018.2868977.
Kiela, Douwe, and Stephen Clark. "Learning Neural Audio Embeddings for Grounding Semantics in Auditory Perception." Journal of Artificial Intelligence Research 60 (December 26, 2017): 1003–30. http://dx.doi.org/10.1613/jair.5665.
Cui, Xiaohui, Xiaolong Qu, Dongmei Li, Yu Yang, Yuxun Li, and Xiaoping Zhang. "MKGCN: Multi-Modal Knowledge Graph Convolutional Network for Music Recommender Systems." Electronics 12, no. 12 (June 15, 2023): 2688. http://dx.doi.org/10.3390/electronics12122688.
Dong, Bin, Songlei Jian, and Kai Lu. "Learning Multimodal Representations by Symmetrically Transferring Local Structures." Symmetry 12, no. 9 (September 13, 2020): 1504. http://dx.doi.org/10.3390/sym12091504.
Li, Yehao, Jiahao Fan, Yingwei Pan, Ting Yao, Weiyao Lin, and Tao Mei. "Uni-EDEN: Universal Encoder-Decoder Network by Multi-Granular Vision-Language Pre-training." ACM Transactions on Multimedia Computing, Communications, and Applications 18, no. 2 (May 31, 2022): 1–16. http://dx.doi.org/10.1145/3473140.
Gu, Zhihao, Jiangning Zhang, Liang Liu, Xu Chen, Jinlong Peng, Zhenye Gan, Guannan Jiang, Annan Shu, Yabiao Wang, and Lizhuang Ma. "Rethinking Reverse Distillation for Multi-Modal Anomaly Detection." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 8 (March 24, 2024): 8445–53. http://dx.doi.org/10.1609/aaai.v38i8.28687.
Wang, Zi, Chenglong Li, Aihua Zheng, Ran He, and Jin Tang. "Interact, Embed, and EnlargE: Boosting Modality-Specific Representations for Multi-Modal Person Re-identification." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (June 28, 2022): 2633–41. http://dx.doi.org/10.1609/aaai.v36i3.20165.
Wróblewska, Anna, Jacek Dąbrowski, Michał Pastuszak, Andrzej Michałowski, Michał Daniluk, Barbara Rychalska, Mikołaj Wieczorek, and Sylwia Sysko-Romańczuk. "Designing Multi-Modal Embedding Fusion-Based Recommender." Electronics 11, no. 9 (April 27, 2022): 1391. http://dx.doi.org/10.3390/electronics11091391.
He, Qibin. "Prompting Multi-Modal Image Segmentation with Semantic Grouping." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 2094–102. http://dx.doi.org/10.1609/aaai.v38i3.27981.
Liang, Meiyu, Junping Du, Zhengyang Liang, Yongwang Xing, Wei Huang, and Zhe Xue. "Self-Supervised Multi-Modal Knowledge Graph Contrastive Hashing for Cross-Modal Search." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 12 (March 24, 2024): 13744–53. http://dx.doi.org/10.1609/aaai.v38i12.29280.
Bodapati, Jyostna Devi, Veeranjaneyulu Naralasetti, Shaik Nagur Shareef, Saqib Hakak, Muhammad Bilal, Praveen Kumar Reddy Maddikunta, and Ohyun Jo. "Blended Multi-Modal Deep ConvNet Features for Diabetic Retinopathy Severity Prediction." Electronics 9, no. 6 (May 30, 2020): 914. http://dx.doi.org/10.3390/electronics9060914.
Liu, Hao, Ting Li, Renjun Hu, Yanjie Fu, Jingjing Gu, and Hui Xiong. "Joint Representation Learning for Multi-Modal Transportation Recommendation." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 1036–43. http://dx.doi.org/10.1609/aaai.v33i01.33011036.
Yang, Fan, Wei Li, Menglong Yang, Binbin Liang, and Jianwei Zhang. "Multi-Modal Disordered Representation Learning Network for Description-Based Person Search." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 15 (March 24, 2024): 16316–24. http://dx.doi.org/10.1609/aaai.v38i15.29567.
Jüttner, Martin, and Ingo Rentschler. "Imagery in multi-modal object learning." Behavioral and Brain Sciences 25, no. 2 (April 2002): 197–98. http://dx.doi.org/10.1017/s0140525x0238004x.
Tao, Rui, Meng Zhu, Haiyan Cao, and Honge Ren. "Fine-Grained Cross-Modal Semantic Consistency in Natural Conservation Image Data from a Multi-Task Perspective." Sensors 24, no. 10 (May 14, 2024): 3130. http://dx.doi.org/10.3390/s24103130.
Pugeault, Nicolas, Florentin Wörgötter, and Norbert Krüger. "Disambiguating Multi-Modal Scene Representations Using Perceptual Grouping Constraints." PLoS ONE 5, no. 6 (June 9, 2010): e10663. http://dx.doi.org/10.1371/journal.pone.0010663.
Lara, Bruno, Juan Manuel Rendon-Mancha, and Marcos A. Capistran. "Prediction of Undesired Situations Based on Multi-Modal Representations." IEEE Latin America Transactions 5, no. 2 (May 2007): 103–8. http://dx.doi.org/10.1109/tla.2007.4381351.
Geng, Shijie, Peng Gao, Moitreya Chatterjee, Chiori Hori, Jonathan Le Roux, Yongfeng Zhang, Hongsheng Li, and Anoop Cherian. "Dynamic Graph Representation Learning for Video Dialog via Multi-Modal Shuffled Transformers." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 2 (May 18, 2021): 1415–23. http://dx.doi.org/10.1609/aaai.v35i2.16231.
Yan, Facheng, Mingshu Zhang, and Bin Wei. "Multimodal integration for fake news detection on social media platforms." MATEC Web of Conferences 395 (2024): 01013. http://dx.doi.org/10.1051/matecconf/202439501013.
Escobar-Grisales, Daniel, Cristian David Ríos-Urrego, and Juan Rafael Orozco-Arroyave. "Deep Learning and Artificial Intelligence Applied to Model Speech and Language in Parkinson’s Disease." Diagnostics 13, no. 13 (June 25, 2023): 2163. http://dx.doi.org/10.3390/diagnostics13132163.
Zhai, Hanming, Xiaojun Lv, Zhiwen Hou, Xin Tong, and Fanliang Bu. "MLSFF: Multi-level structural features fusion for multi-modal knowledge graph completion." Mathematical Biosciences and Engineering 20, no. 8 (2023): 14096–116. http://dx.doi.org/10.3934/mbe.2023630.
Hua, Yan, Yingyun Yang, and Jianhe Du. "Deep Multi-Modal Metric Learning with Multi-Scale Correlation for Image-Text Retrieval." Electronics 9, no. 3 (March 10, 2020): 466. http://dx.doi.org/10.3390/electronics9030466.
Yang, Yiying, Fukun Yin, Wen Liu, Jiayuan Fan, Xin Chen, Gang Yu, and Tao Chen. "PM-INR: Prior-Rich Multi-Modal Implicit Large-Scale Scene Neural Representation." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 7 (March 24, 2024): 6594–602. http://dx.doi.org/10.1609/aaai.v38i7.28481.
Hu, Lianyu, Liqing Gao, Zekang Liu, Chi-Man Pun, and Wei Feng. "COMMA: Co-articulated Multi-Modal Learning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 2238–46. http://dx.doi.org/10.1609/aaai.v38i3.27997.
Hill, Felix, Roi Reichart, and Anna Korhonen. "Multi-Modal Models for Concrete and Abstract Concept Meaning." Transactions of the Association for Computational Linguistics 2 (December 2014): 285–96. http://dx.doi.org/10.1162/tacl_a_00183.
Liu, Xuanwu, Guoxian Yu, Carlotta Domeniconi, Jun Wang, Yazhou Ren, and Maozu Guo. "Ranking-Based Deep Cross-Modal Hashing." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4400–4407. http://dx.doi.org/10.1609/aaai.v33i01.33014400.
Sezerer, Erhan, and Selma Tekir. "Incorporating Concreteness in Multi-Modal Language Models with Curriculum Learning." Applied Sciences 11, no. 17 (September 6, 2021): 8241. http://dx.doi.org/10.3390/app11178241.
Gou, Yingdong, Kexin Wang, Siwen Wei, and Changxin Shi. "GMDA: GCN-Based Multi-Modal Domain Adaptation for Real-Time Disaster Detection." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 31, no. 06 (December 2023): 957–73. http://dx.doi.org/10.1142/s0218488523500435.
Jang, Jiho, Chaerin Kong, DongHyeon Jeon, Seonhoon Kim, and Nojun Kwak. "Unifying Vision-Language Representation Space with Single-Tower Transformer." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 1 (June 26, 2023): 980–88. http://dx.doi.org/10.1609/aaai.v37i1.25178.
Kabir, Anowarul, and Amarda Shehu. "GOProFormer: A Multi-Modal Transformer Method for Gene Ontology Protein Function Prediction." Biomolecules 12, no. 11 (November 18, 2022): 1709. http://dx.doi.org/10.3390/biom12111709.
Alam, Mohammad Arif Ul. "College Student Retention Risk Analysis from Educational Database Using Multi-Task Multi-Modal Neural Fusion." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (June 28, 2022): 12689–97. http://dx.doi.org/10.1609/aaai.v36i11.21545.
Qian, Shengsheng, Dizhan Xue, Huaiwen Zhang, Quan Fang, and Changsheng Xu. "Dual Adversarial Graph Neural Networks for Multi-label Cross-modal Retrieval." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 3 (May 18, 2021): 2440–48. http://dx.doi.org/10.1609/aaai.v35i3.16345.
Lu, Lyujian, Saad Elbeleidy, Lauren Zoe Baker, and Hua Wang. "Learning Multi-Modal Biomarker Representations via Globally Aligned Longitudinal Enrichments." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 01 (April 3, 2020): 817–24. http://dx.doi.org/10.1609/aaai.v34i01.5426.
Zhang, Heng, Vishal M. Patel, and Rama Chellappa. "Low-Rank and Joint Sparse Representations for Multi-Modal Recognition." IEEE Transactions on Image Processing 26, no. 10 (October 2017): 4741–52. http://dx.doi.org/10.1109/tip.2017.2721838.
Blown, Eric, and Tom G. K. Bryce. "Conceptual Coherence Revealed in Multi-Modal Representations of Astronomy Knowledge." International Journal of Science Education 32, no. 1 (June 30, 2009): 31–67. http://dx.doi.org/10.1080/09500690902974207.
Torasso, Pietro. "Multiple representations and multi-modal reasoning in medical diagnostic systems." Artificial Intelligence in Medicine 23, no. 1 (August 2001): 49–69. http://dx.doi.org/10.1016/s0933-3657(01)00075-6.
Wachinger, Christian, and Nassir Navab. "Entropy and Laplacian images: Structural representations for multi-modal registration." Medical Image Analysis 16, no. 1 (January 2012): 1–17. http://dx.doi.org/10.1016/j.media.2011.03.001.
Fang, Feiyi, Tao Zhou, Zhenbo Song, and Jianfeng Lu. "MMCAN: Multi-Modal Cross-Attention Network for Free-Space Detection with Uncalibrated Hyperspectral Sensors." Remote Sensing 15, no. 4 (February 20, 2023): 1142. http://dx.doi.org/10.3390/rs15041142.
Gao, Jingsheng, Jiacheng Ruan, Suncheng Xiang, Zefang Yu, Ke Ji, Mingye Xie, Ting Liu, and Yuzhuo Fu. "LAMM: Label Alignment for Multi-Modal Prompt Learning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 1815–23. http://dx.doi.org/10.1609/aaai.v38i3.27950.
Bao, Peijun, Wenhan Yang, Boon Poh Ng, Meng Hwa Er, and Alex C. Kot. "Cross-Modal Label Contrastive Learning for Unsupervised Audio-Visual Event Localization." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 1 (June 26, 2023): 215–22. http://dx.doi.org/10.1609/aaai.v37i1.25093.
Liu, Shuang, Mei Li, Zhong Zhang, Baihua Xiao, and Tariq S. Durrani. "Multi-Evidence and Multi-Modal Fusion Network for Ground-Based Cloud Recognition." Remote Sensing 12, no. 3 (February 2, 2020): 464. http://dx.doi.org/10.3390/rs12030464.