Journal Articles on the Topic "Multimodal Embeddings"
Consult the top 50 journal articles for your research on the topic "Multimodal Embeddings".
Tyshchuk, Kirill, Polina Karpikova, Andrew Spiridonov, Anastasiia Prutianova, Anton Razzhigaev, and Alexander Panchenko. "On Isotropy of Multimodal Embeddings". Information 14, no. 7 (July 10, 2023): 392. http://dx.doi.org/10.3390/info14070392.
Guo, Zhiqiang, Jianjun Li, Guohui Li, Chaoyang Wang, Si Shi, and Bin Ruan. "LGMRec: Local and Global Graph Learning for Multimodal Recommendation". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 8 (March 24, 2024): 8454–62. http://dx.doi.org/10.1609/aaai.v38i8.28688.
Shang, Bin, Yinliang Zhao, Jun Liu, and Di Wang. "LAFA: Multimodal Knowledge Graph Completion with Link Aware Fusion and Aggregation". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 8 (March 24, 2024): 8957–65. http://dx.doi.org/10.1609/aaai.v38i8.28744.
Sun, Zhongkai, Prathusha Sarma, William Sethares, and Yingyu Liang. "Learning Relationships between Text, Audio, and Video via Deep Canonical Correlation for Multimodal Language Analysis". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 8992–99. http://dx.doi.org/10.1609/aaai.v34i05.6431.
Merkx, Danny, and Stefan L. Frank. "Learning semantic sentence representations from visually grounded language without lexical knowledge". Natural Language Engineering 25, no. 4 (July 2019): 451–66. http://dx.doi.org/10.1017/s1351324919000196.
Tang, Zhenchao, Jiehui Huang, Guanxing Chen, and Calvin Yu-Chian Chen. "Comprehensive View Embedding Learning for Single-Cell Multimodal Integration". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 14 (March 24, 2024): 15292–300. http://dx.doi.org/10.1609/aaai.v38i14.29453.
Zhang, Linhai, Deyu Zhou, Yulan He, and Zeng Yang. "MERL: Multimodal Event Representation Learning in Heterogeneous Embedding Spaces". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 16 (May 18, 2021): 14420–27. http://dx.doi.org/10.1609/aaai.v35i16.17695.
Sah, Shagan, Sabarish Gopalakishnan, and Raymond Ptucha. "Aligned attention for common multimodal embeddings". Journal of Electronic Imaging 29, no. 02 (March 25, 2020): 1. http://dx.doi.org/10.1117/1.jei.29.2.023013.
Zhang, Rongchao, Yiwei Lou, Dexuan Xu, Yongzhi Cao, Hanpin Wang, and Yu Huang. "A Learnable Discrete-Prior Fusion Autoencoder with Contrastive Learning for Tabular Data Synthesis". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 15 (March 24, 2024): 16803–11. http://dx.doi.org/10.1609/aaai.v38i15.29621.
Lin, Kaiyi, Xing Xu, Lianli Gao, Zheng Wang, and Heng Tao Shen. "Learning Cross-Aligned Latent Embeddings for Zero-Shot Cross-Modal Retrieval". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 11515–22. http://dx.doi.org/10.1609/aaai.v34i07.6817.
Zhu, Chaoyu, Zhihao Yang, Xiaoqiong Xia, Nan Li, Fan Zhong, and Lei Liu. "Multimodal reasoning based on knowledge graph embedding for specific diseases". Bioinformatics 38, no. 8 (February 12, 2022): 2235–45. http://dx.doi.org/10.1093/bioinformatics/btac085.
Tripathi, Aakash, Asim Waqas, Yasin Yilmaz, and Ghulam Rasool. "Abstract 4905: Multimodal transformer model improves survival prediction in lung cancer compared to unimodal approaches". Cancer Research 84, no. 6_Supplement (March 22, 2024): 4905. http://dx.doi.org/10.1158/1538-7445.am2024-4905.
Ota, Kosuke, Keiichiro Shirai, Hidetoshi Miyao, and Minoru Maruyama. "Multimodal Analogy-Based Image Retrieval by Improving Semantic Embeddings". Journal of Advanced Computational Intelligence and Intelligent Informatics 26, no. 6 (November 20, 2022): 995–1003. http://dx.doi.org/10.20965/jaciii.2022.p0995.
Mai, Sijie, Haifeng Hu, and Songlong Xing. "Modality to Modality Translation: An Adversarial Representation Learning and Graph Fusion Network for Multimodal Fusion". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 01 (April 3, 2020): 164–72. http://dx.doi.org/10.1609/aaai.v34i01.5347.
Kim, Donghyun, Kuniaki Saito, Kate Saenko, Stan Sclaroff, and Bryan Plummer. "MULE: Multimodal Universal Language Embedding". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 11254–61. http://dx.doi.org/10.1609/aaai.v34i07.6785.
Wehrmann, Jônatas, Anderson Mattjie, and Rodrigo C. Barros. "Order embeddings and character-level convolutions for multimodal alignment". Pattern Recognition Letters 102 (January 2018): 15–22. http://dx.doi.org/10.1016/j.patrec.2017.11.020.
Mithun, Niluthpol C., Juncheng Li, Florian Metze, and Amit K. Roy-Chowdhury. "Joint embeddings with multimodal cues for video-text retrieval". International Journal of Multimedia Information Retrieval 8, no. 1 (January 12, 2019): 3–18. http://dx.doi.org/10.1007/s13735-018-00166-3.
Nayak, Roshan, B. S. Ullas Kannantha, Kruthi S, and C. Gururaj. "Multimodal Offensive Meme Classification using Transformers and BiLSTM". International Journal of Engineering and Advanced Technology 11, no. 3 (February 28, 2022): 96–102. http://dx.doi.org/10.35940/ijeat.c3392.0211322.
Chen, Weijia, Zhijun Lu, Lijue You, Lingling Zhou, Jie Xu, and Ken Chen. "Artificial Intelligence–Based Multimodal Risk Assessment Model for Surgical Site Infection (AMRAMS): Development and Validation Study". JMIR Medical Informatics 8, no. 6 (June 15, 2020): e18186. http://dx.doi.org/10.2196/18186.
Smelik, N. D. "Multimodal topic model for texts and images utilizing their embeddings". Machine Learning and Data Analysis 2, no. 4 (2016): 421–41. http://dx.doi.org/10.21469/22233792.2.4.05.
Abdou, Ahmed, Ekta Sood, Philipp Müller, and Andreas Bulling. "Gaze-enhanced Crossmodal Embeddings for Emotion Recognition". Proceedings of the ACM on Human-Computer Interaction 6, ETRA (May 13, 2022): 1–18. http://dx.doi.org/10.1145/3530879.
Chen, Qihua, Xuejin Chen, Chenxuan Wang, Yixiong Liu, Zhiwei Xiong, and Feng Wu. "Learning Multimodal Volumetric Features for Large-Scale Neuron Tracing". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 2 (March 24, 2024): 1174–82. http://dx.doi.org/10.1609/aaai.v38i2.27879.
Hu, Wenbo, Yifan Xu, Yi Li, Weiyue Li, Zeyuan Chen, and Zhuowen Tu. "BLIVA: A Simple Multimodal LLM for Better Handling of Text-Rich Visual Questions". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 2256–64. http://dx.doi.org/10.1609/aaai.v38i3.27999.
Shen, Aili, Bahar Salehi, Jianzhong Qi, and Timothy Baldwin. "A General Approach to Multimodal Document Quality Assessment". Journal of Artificial Intelligence Research 68 (July 22, 2020): 607–32. http://dx.doi.org/10.1613/jair.1.11647.
Tseng, Shao-Yen, Shrikanth Narayanan, and Panayiotis Georgiou. "Multimodal Embeddings From Language Models for Emotion Recognition in the Wild". IEEE Signal Processing Letters 28 (2021): 608–12. http://dx.doi.org/10.1109/lsp.2021.3065598.
Jing, Xuebin, Liang He, Zhida Song, and Shaolei Wang. "Audio–Visual Fusion Based on Interactive Attention for Person Verification". Sensors 23, no. 24 (December 15, 2023): 9845. http://dx.doi.org/10.3390/s23249845.
Salin, Emmanuelle, Badreddine Farah, Stéphane Ayache, and Benoit Favre. "Are Vision-Language Transformers Learning Multimodal Representations? A Probing Perspective". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (June 28, 2022): 11248–57. http://dx.doi.org/10.1609/aaai.v36i10.21375.
Skantze, Gabriel, and Bram Willemsen. "CoLLIE: Continual Learning of Language Grounding from Language-Image Embeddings". Journal of Artificial Intelligence Research 74 (July 9, 2022): 1201–23. http://dx.doi.org/10.1613/jair.1.13689.
Wang, Jenq-Haur, Mehdi Norouzi, and Shu Ming Tsai. "Augmenting Multimodal Content Representation with Transformers for Misinformation Detection". Big Data and Cognitive Computing 8, no. 10 (October 11, 2024): 134. http://dx.doi.org/10.3390/bdcc8100134.
Kang, Yu, Tianqiao Liu, Hang Li, Yang Hao, and Wenbiao Ding. "Self-Supervised Audio-and-Text Pre-training with Extremely Low-Resource Parallel Data". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (June 28, 2022): 10875–83. http://dx.doi.org/10.1609/aaai.v36i10.21334.
Yang, Bang, Yong Dai, Xuxin Cheng, Yaowei Li, Asif Raza, and Yuexian Zou. "Embracing Language Inclusivity and Diversity in CLIP through Continual Language Learning". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 6 (March 24, 2024): 6458–66. http://dx.doi.org/10.1609/aaai.v38i6.28466.
Wang, Fengjun, Sarai Mizrachi, Moran Beladev, Guy Nadav, Gil Amsalem, Karen Lastmann Assaraf, and Hadas Harush Boker. "MuMIC – Multimodal Embedding for Multi-Label Image Classification with Tempered Sigmoid". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 13 (June 26, 2023): 15603–11. http://dx.doi.org/10.1609/aaai.v37i13.26850.
Nikzad-Khasmakhi, N., M. A. Balafar, M. Reza Feizi-Derakhshi, and Cina Motamed. "BERTERS: Multimodal representation learning for expert recommendation system with transformers and graph embeddings". Chaos, Solitons & Fractals 151 (October 2021): 111260. http://dx.doi.org/10.1016/j.chaos.2021.111260.
Liu, Hao, Ting Li, Renjun Hu, Yanjie Fu, Jingjing Gu, and Hui Xiong. "Joint Representation Learning for Multi-Modal Transportation Recommendation". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 1036–43. http://dx.doi.org/10.1609/aaai.v33i01.33011036.
Chen, Guang, Fangxiang Feng, Guangwei Zhang, Xiaoxu Li, and Ruifan Li. "A Visually Enhanced Neural Encoder for Synset Induction". Electronics 12, no. 16 (August 20, 2023): 3521. http://dx.doi.org/10.3390/electronics12163521.
Xu, Xing, Jialin Tian, Kaiyi Lin, Huimin Lu, Jie Shao, and Heng Tao Shen. "Zero-shot Cross-modal Retrieval by Assembling AutoEncoder and Generative Adversarial Network". ACM Transactions on Multimedia Computing, Communications, and Applications 17, no. 1s (March 31, 2021): 1–17. http://dx.doi.org/10.1145/3424341.
Mummireddygari, Anitha, and N. Ananda Reddy. "Optimizing Speaker Recognition in Complex Environments: An Enhanced Framework with Artificial Neural Networks for Multi-Speaker Settings". International Journal of Scientific Research in Computer Science, Engineering and Information Technology 10, no. 3 (May 28, 2024): 387–98. http://dx.doi.org/10.32628/cseit24103116.
Rao, D. S., and Rakhi Madhukararao Joshi. "Multi-camera Vehicle Tracking and Recognition with Multimodal Contrastive Domain Sharing GAN and Topological Embeddings". Journal of Electrical Systems 20, no. 2s (April 4, 2024): 675–86. http://dx.doi.org/10.52783/jes.1532.
Kim, MinJun, SeungWoo Song, YouHan Lee, Haneol Jang, and KyungTae Lim. "BOK-VQA: Bilingual outside Knowledge-Based Visual Question Answering via Graph Representation Pretraining". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 16 (March 24, 2024): 18381–89. http://dx.doi.org/10.1609/aaai.v38i16.29798.
Alam, Mohammad Arif Ul. "College Student Retention Risk Analysis from Educational Database Using Multi-Task Multi-Modal Neural Fusion". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (June 28, 2022): 12689–97. http://dx.doi.org/10.1609/aaai.v36i11.21545.
Zhang, Ruochi, Tianming Zhou, and Jian Ma. "Multiscale and integrative single-cell Hi-C analysis with Higashi". Nature Biotechnology 40, no. 2 (October 11, 2021): 254–61. http://dx.doi.org/10.1038/s41587-021-01034-y.
Liang, Meiyu, Junping Du, Zhengyang Liang, Yongwang Xing, Wei Huang, and Zhe Xue. "Self-Supervised Multi-Modal Knowledge Graph Contrastive Hashing for Cross-Modal Search". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 12 (March 24, 2024): 13744–53. http://dx.doi.org/10.1609/aaai.v38i12.29280.
Zhang, Litian, Xiaoming Zhang, Ziyi Zhou, Feiran Huang, and Chaozhuo Li. "Reinforced Adaptive Knowledge Learning for Multimodal Fake News Detection". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 15 (March 24, 2024): 16777–85. http://dx.doi.org/10.1609/aaai.v38i15.29618.
Faizabadi, Ahmed Rimaz, Hasan Firdaus Mohd Zaki, Zulkifli Zainal Abidin, Muhammad Afif Husman, and Nik Nur Wahidah Nik Hashim. "Learning a Multimodal 3D Face Embedding for Robust RGBD Face Recognition". Journal of Integrated and Advanced Engineering (JIAE) 3, no. 1 (March 9, 2023): 37–46. http://dx.doi.org/10.51662/jiae.v3i1.84.
Benzinho, José, João Ferreira, Joel Batista, Leandro Pereira, Marisa Maximiano, Vítor Távora, Ricardo Gomes, and Orlando Remédios. "LLM Based Chatbot for Farm-to-Fork Blockchain Traceability Platform". Applied Sciences 14, no. 19 (October 2, 2024): 8856. http://dx.doi.org/10.3390/app14198856.
Liu, Xinyi, Bo Peng, Meiliu Wu, Mingshu Wang, Heng Cai, and Qunying Huang. "Occupation Prediction with Multimodal Learning from Tweet Messages and Google Street View Images". AGILE: GIScience Series 5 (May 30, 2024): 1–6. http://dx.doi.org/10.5194/agile-giss-5-36-2024.
Sun, Jianguo, Hanqi Yin, Ye Tian, Junpeng Wu, Linshan Shen, and Lei Chen. "Two-Level Multimodal Fusion for Sentiment Analysis in Public Security". Security and Communication Networks 2021 (June 3, 2021): 1–10. http://dx.doi.org/10.1155/2021/6662337.
Yuan, Hui, Yuanyuan Tang, Wei Xu, and Raymond Yiu Keung Lau. "Exploring the influence of multimodal social media data on stock performance: an empirical perspective and analysis". Internet Research 31, no. 3 (January 12, 2021): 871–91. http://dx.doi.org/10.1108/intr-11-2019-0461.
Mingote, Victoria, Ignacio Viñals, Pablo Gimeno, Antonio Miguel, Alfonso Ortega, and Eduardo Lleida. "Multimodal Diarization Systems by Training Enrollment Models as Identity Representations". Applied Sciences 12, no. 3 (January 21, 2022): 1141. http://dx.doi.org/10.3390/app12031141.
Krawczuk, Patrycja, Zachary Fox, Dakota Murdock, Jennifer Doherty, Antoinette Stroupe, Stephen M. Schwartz, Lynne Penberthy et al. "Abstract 2318: Multimodal machine learning for the automatic classification of recurrent cancers". Cancer Research 84, no. 6_Supplement (March 22, 2024): 2318. http://dx.doi.org/10.1158/1538-7445.am2024-2318.