Journal articles on the topic "Multimodal Embeddings"
Format your sources in APA, MLA, Chicago, Harvard, and other citation styles
Consult the top 50 journal articles for your research on the topic "Multimodal Embeddings".
Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style of your choice: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication in .pdf format and read its abstract online, if these are available in the metadata.
Browse journal articles across a wide range of disciplines and organise your bibliography correctly.
Tyshchuk, Kirill, Polina Karpikova, Andrew Spiridonov, Anastasiia Prutianova, Anton Razzhigaev, and Alexander Panchenko. "On Isotropy of Multimodal Embeddings." Information 14, no. 7 (July 10, 2023): 392. http://dx.doi.org/10.3390/info14070392.
Guo, Zhiqiang, Jianjun Li, Guohui Li, Chaoyang Wang, Si Shi, and Bin Ruan. "LGMRec: Local and Global Graph Learning for Multimodal Recommendation." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 8 (March 24, 2024): 8454–62. http://dx.doi.org/10.1609/aaai.v38i8.28688.
Shang, Bin, Yinliang Zhao, Jun Liu, and Di Wang. "LAFA: Multimodal Knowledge Graph Completion with Link Aware Fusion and Aggregation." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 8 (March 24, 2024): 8957–65. http://dx.doi.org/10.1609/aaai.v38i8.28744.
Sun, Zhongkai, Prathusha Sarma, William Sethares, and Yingyu Liang. "Learning Relationships between Text, Audio, and Video via Deep Canonical Correlation for Multimodal Language Analysis." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 8992–99. http://dx.doi.org/10.1609/aaai.v34i05.6431.
Merkx, Danny, and Stefan L. Frank. "Learning semantic sentence representations from visually grounded language without lexical knowledge." Natural Language Engineering 25, no. 4 (July 2019): 451–66. http://dx.doi.org/10.1017/s1351324919000196.
Tang, Zhenchao, Jiehui Huang, Guanxing Chen, and Calvin Yu-Chian Chen. "Comprehensive View Embedding Learning for Single-Cell Multimodal Integration." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 14 (March 24, 2024): 15292–300. http://dx.doi.org/10.1609/aaai.v38i14.29453.
Zhang, Linhai, Deyu Zhou, Yulan He, and Zeng Yang. "MERL: Multimodal Event Representation Learning in Heterogeneous Embedding Spaces." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 16 (May 18, 2021): 14420–27. http://dx.doi.org/10.1609/aaai.v35i16.17695.
Sah, Shagan, Sabarish Gopalakishnan, and Raymond Ptucha. "Aligned attention for common multimodal embeddings." Journal of Electronic Imaging 29, no. 02 (March 25, 2020): 1. http://dx.doi.org/10.1117/1.jei.29.2.023013.
Zhang, Rongchao, Yiwei Lou, Dexuan Xu, Yongzhi Cao, Hanpin Wang, and Yu Huang. "A Learnable Discrete-Prior Fusion Autoencoder with Contrastive Learning for Tabular Data Synthesis." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 15 (March 24, 2024): 16803–11. http://dx.doi.org/10.1609/aaai.v38i15.29621.
Lin, Kaiyi, Xing Xu, Lianli Gao, Zheng Wang, and Heng Tao Shen. "Learning Cross-Aligned Latent Embeddings for Zero-Shot Cross-Modal Retrieval." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 11515–22. http://dx.doi.org/10.1609/aaai.v34i07.6817.
Zhu, Chaoyu, Zhihao Yang, Xiaoqiong Xia, Nan Li, Fan Zhong, and Lei Liu. "Multimodal reasoning based on knowledge graph embedding for specific diseases." Bioinformatics 38, no. 8 (February 12, 2022): 2235–45. http://dx.doi.org/10.1093/bioinformatics/btac085.
Tripathi, Aakash, Asim Waqas, Yasin Yilmaz, and Ghulam Rasool. "Abstract 4905: Multimodal transformer model improves survival prediction in lung cancer compared to unimodal approaches." Cancer Research 84, no. 6_Supplement (March 22, 2024): 4905. http://dx.doi.org/10.1158/1538-7445.am2024-4905.
Ota, Kosuke, Keiichiro Shirai, Hidetoshi Miyao, and Minoru Maruyama. "Multimodal Analogy-Based Image Retrieval by Improving Semantic Embeddings." Journal of Advanced Computational Intelligence and Intelligent Informatics 26, no. 6 (November 20, 2022): 995–1003. http://dx.doi.org/10.20965/jaciii.2022.p0995.
Mai, Sijie, Haifeng Hu, and Songlong Xing. "Modality to Modality Translation: An Adversarial Representation Learning and Graph Fusion Network for Multimodal Fusion." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 01 (April 3, 2020): 164–72. http://dx.doi.org/10.1609/aaai.v34i01.5347.
Kim, Donghyun, Kuniaki Saito, Kate Saenko, Stan Sclaroff, and Bryan Plummer. "MULE: Multimodal Universal Language Embedding." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 11254–61. http://dx.doi.org/10.1609/aaai.v34i07.6785.
Wehrmann, Jônatas, Anderson Mattjie, and Rodrigo C. Barros. "Order embeddings and character-level convolutions for multimodal alignment." Pattern Recognition Letters 102 (January 2018): 15–22. http://dx.doi.org/10.1016/j.patrec.2017.11.020.
Mithun, Niluthpol C., Juncheng Li, Florian Metze, and Amit K. Roy-Chowdhury. "Joint embeddings with multimodal cues for video-text retrieval." International Journal of Multimedia Information Retrieval 8, no. 1 (January 12, 2019): 3–18. http://dx.doi.org/10.1007/s13735-018-00166-3.
Nayak, Roshan, B. S. Ullas Kannantha, Kruthi S, and C. Gururaj. "Multimodal Offensive Meme Classification using Transformers and BiLSTM." International Journal of Engineering and Advanced Technology 11, no. 3 (February 28, 2022): 96–102. http://dx.doi.org/10.35940/ijeat.c3392.0211322.
Chen, Weijia, Zhijun Lu, Lijue You, Lingling Zhou, Jie Xu, and Ken Chen. "Artificial Intelligence–Based Multimodal Risk Assessment Model for Surgical Site Infection (AMRAMS): Development and Validation Study." JMIR Medical Informatics 8, no. 6 (June 15, 2020): e18186. http://dx.doi.org/10.2196/18186.
Smelik, N. D. "Multimodal topic model for texts and images utilizing their embeddings." Machine Learning and Data Analysis 2, no. 4 (2016): 421–41. http://dx.doi.org/10.21469/22233792.2.4.05.
Abdou, Ahmed, Ekta Sood, Philipp Müller, and Andreas Bulling. "Gaze-enhanced Crossmodal Embeddings for Emotion Recognition." Proceedings of the ACM on Human-Computer Interaction 6, ETRA (May 13, 2022): 1–18. http://dx.doi.org/10.1145/3530879.
Chen, Qihua, Xuejin Chen, Chenxuan Wang, Yixiong Liu, Zhiwei Xiong, and Feng Wu. "Learning Multimodal Volumetric Features for Large-Scale Neuron Tracing." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 2 (March 24, 2024): 1174–82. http://dx.doi.org/10.1609/aaai.v38i2.27879.
Hu, Wenbo, Yifan Xu, Yi Li, Weiyue Li, Zeyuan Chen, and Zhuowen Tu. "BLIVA: A Simple Multimodal LLM for Better Handling of Text-Rich Visual Questions." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 2256–64. http://dx.doi.org/10.1609/aaai.v38i3.27999.
Shen, Aili, Bahar Salehi, Jianzhong Qi, and Timothy Baldwin. "A General Approach to Multimodal Document Quality Assessment." Journal of Artificial Intelligence Research 68 (July 22, 2020): 607–32. http://dx.doi.org/10.1613/jair.1.11647.
Tseng, Shao-Yen, Shrikanth Narayanan, and Panayiotis Georgiou. "Multimodal Embeddings From Language Models for Emotion Recognition in the Wild." IEEE Signal Processing Letters 28 (2021): 608–12. http://dx.doi.org/10.1109/lsp.2021.3065598.
Jing, Xuebin, Liang He, Zhida Song, and Shaolei Wang. "Audio–Visual Fusion Based on Interactive Attention for Person Verification." Sensors 23, no. 24 (December 15, 2023): 9845. http://dx.doi.org/10.3390/s23249845.
Salin, Emmanuelle, Badreddine Farah, Stéphane Ayache, and Benoit Favre. "Are Vision-Language Transformers Learning Multimodal Representations? A Probing Perspective." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (June 28, 2022): 11248–57. http://dx.doi.org/10.1609/aaai.v36i10.21375.
Skantze, Gabriel, and Bram Willemsen. "CoLLIE: Continual Learning of Language Grounding from Language-Image Embeddings." Journal of Artificial Intelligence Research 74 (July 9, 2022): 1201–23. http://dx.doi.org/10.1613/jair.1.13689.
Wang, Jenq-Haur, Mehdi Norouzi, and Shu Ming Tsai. "Augmenting Multimodal Content Representation with Transformers for Misinformation Detection." Big Data and Cognitive Computing 8, no. 10 (October 11, 2024): 134. http://dx.doi.org/10.3390/bdcc8100134.
Kang, Yu, Tianqiao Liu, Hang Li, Yang Hao, and Wenbiao Ding. "Self-Supervised Audio-and-Text Pre-training with Extremely Low-Resource Parallel Data." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (June 28, 2022): 10875–83. http://dx.doi.org/10.1609/aaai.v36i10.21334.
Yang, Bang, Yong Dai, Xuxin Cheng, Yaowei Li, Asif Raza, and Yuexian Zou. "Embracing Language Inclusivity and Diversity in CLIP through Continual Language Learning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 6 (March 24, 2024): 6458–66. http://dx.doi.org/10.1609/aaai.v38i6.28466.
Wang, Fengjun, Sarai Mizrachi, Moran Beladev, Guy Nadav, Gil Amsalem, Karen Lastmann Assaraf, and Hadas Harush Boker. "MuMIC – Multimodal Embedding for Multi-Label Image Classification with Tempered Sigmoid." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 13 (June 26, 2023): 15603–11. http://dx.doi.org/10.1609/aaai.v37i13.26850.
Nikzad-Khasmakhi, N., M. A. Balafar, M. Reza Feizi-Derakhshi, and Cina Motamed. "BERTERS: Multimodal representation learning for expert recommendation system with transformers and graph embeddings." Chaos, Solitons & Fractals 151 (October 2021): 111260. http://dx.doi.org/10.1016/j.chaos.2021.111260.
Liu, Hao, Ting Li, Renjun Hu, Yanjie Fu, Jingjing Gu, and Hui Xiong. "Joint Representation Learning for Multi-Modal Transportation Recommendation." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 1036–43. http://dx.doi.org/10.1609/aaai.v33i01.33011036.
Chen, Guang, Fangxiang Feng, Guangwei Zhang, Xiaoxu Li, and Ruifan Li. "A Visually Enhanced Neural Encoder for Synset Induction." Electronics 12, no. 16 (August 20, 2023): 3521. http://dx.doi.org/10.3390/electronics12163521.
Xu, Xing, Jialin Tian, Kaiyi Lin, Huimin Lu, Jie Shao, and Heng Tao Shen. "Zero-shot Cross-modal Retrieval by Assembling AutoEncoder and Generative Adversarial Network." ACM Transactions on Multimedia Computing, Communications, and Applications 17, no. 1s (March 31, 2021): 1–17. http://dx.doi.org/10.1145/3424341.
Mummireddygari, Anitha, and N. Ananda Reddy. "Optimizing Speaker Recognition in Complex Environments: An Enhanced Framework with Artificial Neural Networks for Multi-Speaker Settings." International Journal of Scientific Research in Computer Science, Engineering and Information Technology 10, no. 3 (May 28, 2024): 387–98. http://dx.doi.org/10.32628/cseit24103116.
Rao, D. S., and Rakhi Madhukararao Joshi. "Multi-camera Vehicle Tracking and Recognition with Multimodal Contrastive Domain Sharing GAN and Topological Embeddings." Journal of Electrical Systems 20, no. 2s (April 4, 2024): 675–86. http://dx.doi.org/10.52783/jes.1532.
Kim, MinJun, SeungWoo Song, YouHan Lee, Haneol Jang, and KyungTae Lim. "BOK-VQA: Bilingual outside Knowledge-Based Visual Question Answering via Graph Representation Pretraining." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 16 (March 24, 2024): 18381–89. http://dx.doi.org/10.1609/aaai.v38i16.29798.
Alam, Mohammad Arif Ul. "College Student Retention Risk Analysis from Educational Database Using Multi-Task Multi-Modal Neural Fusion." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (June 28, 2022): 12689–97. http://dx.doi.org/10.1609/aaai.v36i11.21545.
Zhang, Ruochi, Tianming Zhou, and Jian Ma. "Multiscale and integrative single-cell Hi-C analysis with Higashi." Nature Biotechnology 40, no. 2 (October 11, 2021): 254–61. http://dx.doi.org/10.1038/s41587-021-01034-y.
Liang, Meiyu, Junping Du, Zhengyang Liang, Yongwang Xing, Wei Huang, and Zhe Xue. "Self-Supervised Multi-Modal Knowledge Graph Contrastive Hashing for Cross-Modal Search." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 12 (March 24, 2024): 13744–53. http://dx.doi.org/10.1609/aaai.v38i12.29280.
Zhang, Litian, Xiaoming Zhang, Ziyi Zhou, Feiran Huang, and Chaozhuo Li. "Reinforced Adaptive Knowledge Learning for Multimodal Fake News Detection." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 15 (March 24, 2024): 16777–85. http://dx.doi.org/10.1609/aaai.v38i15.29618.
Faizabadi, Ahmed Rimaz, Hasan Firdaus Mohd Zaki, Zulkifli Zainal Abidin, Muhammad Afif Husman, and Nik Nur Wahidah Nik Hashim. "Learning a Multimodal 3D Face Embedding for Robust RGBD Face Recognition." Journal of Integrated and Advanced Engineering (JIAE) 3, no. 1 (March 9, 2023): 37–46. http://dx.doi.org/10.51662/jiae.v3i1.84.
Benzinho, José, João Ferreira, Joel Batista, Leandro Pereira, Marisa Maximiano, Vítor Távora, Ricardo Gomes, and Orlando Remédios. "LLM Based Chatbot for Farm-to-Fork Blockchain Traceability Platform." Applied Sciences 14, no. 19 (October 2, 2024): 8856. http://dx.doi.org/10.3390/app14198856.
Liu, Xinyi, Bo Peng, Meiliu Wu, Mingshu Wang, Heng Cai, and Qunying Huang. "Occupation Prediction with Multimodal Learning from Tweet Messages and Google Street View Images." AGILE: GIScience Series 5 (May 30, 2024): 1–6. http://dx.doi.org/10.5194/agile-giss-5-36-2024.
Sun, Jianguo, Hanqi Yin, Ye Tian, Junpeng Wu, Linshan Shen, and Lei Chen. "Two-Level Multimodal Fusion for Sentiment Analysis in Public Security." Security and Communication Networks 2021 (June 3, 2021): 1–10. http://dx.doi.org/10.1155/2021/6662337.
Yuan, Hui, Yuanyuan Tang, Wei Xu, and Raymond Yiu Keung Lau. "Exploring the influence of multimodal social media data on stock performance: an empirical perspective and analysis." Internet Research 31, no. 3 (January 12, 2021): 871–91. http://dx.doi.org/10.1108/intr-11-2019-0461.
Mingote, Victoria, Ignacio Viñals, Pablo Gimeno, Antonio Miguel, Alfonso Ortega, and Eduardo Lleida. "Multimodal Diarization Systems by Training Enrollment Models as Identity Representations." Applied Sciences 12, no. 3 (January 21, 2022): 1141. http://dx.doi.org/10.3390/app12031141.
Krawczuk, Patrycja, Zachary Fox, Dakota Murdock, Jennifer Doherty, Antoinette Stroupe, Stephen M. Schwartz, Lynne Penberthy, et al. "Abstract 2318: Multimodal machine learning for the automatic classification of recurrent cancers." Cancer Research 84, no. 6_Supplement (March 22, 2024): 2318. http://dx.doi.org/10.1158/1538-7445.am2024-2318.