Journal articles on the topic "Multimodal Embeddings"
Cite a source in APA, MLA, Chicago, Harvard, and many other styles
See the top 50 journal articles for your research on the topic "Multimodal Embeddings."
Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication in .pdf format and read the abstract (summary) of the work online, if it is present in the metadata.
Browse journal articles from many scientific disciplines and compile an accurate bibliography.
Tyshchuk, Kirill, Polina Karpikova, Andrew Spiridonov, Anastasiia Prutianova, Anton Razzhigaev, and Alexander Panchenko. "On Isotropy of Multimodal Embeddings". Information 14, no. 7 (July 10, 2023): 392. http://dx.doi.org/10.3390/info14070392.
Guo, Zhiqiang, Jianjun Li, Guohui Li, Chaoyang Wang, Si Shi, and Bin Ruan. "LGMRec: Local and Global Graph Learning for Multimodal Recommendation". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 8 (March 24, 2024): 8454–62. http://dx.doi.org/10.1609/aaai.v38i8.28688.
Shang, Bin, Yinliang Zhao, Jun Liu, and Di Wang. "LAFA: Multimodal Knowledge Graph Completion with Link Aware Fusion and Aggregation". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 8 (March 24, 2024): 8957–65. http://dx.doi.org/10.1609/aaai.v38i8.28744.
Sun, Zhongkai, Prathusha Sarma, William Sethares, and Yingyu Liang. "Learning Relationships between Text, Audio, and Video via Deep Canonical Correlation for Multimodal Language Analysis". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 8992–99. http://dx.doi.org/10.1609/aaai.v34i05.6431.
Merkx, Danny, and Stefan L. Frank. "Learning semantic sentence representations from visually grounded language without lexical knowledge". Natural Language Engineering 25, no. 4 (July 2019): 451–66. http://dx.doi.org/10.1017/s1351324919000196.
Tang, Zhenchao, Jiehui Huang, Guanxing Chen, and Calvin Yu-Chian Chen. "Comprehensive View Embedding Learning for Single-Cell Multimodal Integration". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 14 (March 24, 2024): 15292–300. http://dx.doi.org/10.1609/aaai.v38i14.29453.
Zhang, Linhai, Deyu Zhou, Yulan He, and Zeng Yang. "MERL: Multimodal Event Representation Learning in Heterogeneous Embedding Spaces". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 16 (May 18, 2021): 14420–27. http://dx.doi.org/10.1609/aaai.v35i16.17695.
Sah, Shagan, Sabarish Gopalakishnan, and Raymond Ptucha. "Aligned attention for common multimodal embeddings". Journal of Electronic Imaging 29, no. 02 (March 25, 2020): 1. http://dx.doi.org/10.1117/1.jei.29.2.023013.
Zhang, Rongchao, Yiwei Lou, Dexuan Xu, Yongzhi Cao, Hanpin Wang, and Yu Huang. "A Learnable Discrete-Prior Fusion Autoencoder with Contrastive Learning for Tabular Data Synthesis". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 15 (March 24, 2024): 16803–11. http://dx.doi.org/10.1609/aaai.v38i15.29621.
Lin, Kaiyi, Xing Xu, Lianli Gao, Zheng Wang, and Heng Tao Shen. "Learning Cross-Aligned Latent Embeddings for Zero-Shot Cross-Modal Retrieval". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 11515–22. http://dx.doi.org/10.1609/aaai.v34i07.6817.
Zhu, Chaoyu, Zhihao Yang, Xiaoqiong Xia, Nan Li, Fan Zhong, and Lei Liu. "Multimodal reasoning based on knowledge graph embedding for specific diseases". Bioinformatics 38, no. 8 (February 12, 2022): 2235–45. http://dx.doi.org/10.1093/bioinformatics/btac085.
Tripathi, Aakash, Asim Waqas, Yasin Yilmaz, and Ghulam Rasool. "Abstract 4905: Multimodal transformer model improves survival prediction in lung cancer compared to unimodal approaches". Cancer Research 84, no. 6_Supplement (March 22, 2024): 4905. http://dx.doi.org/10.1158/1538-7445.am2024-4905.
Ota, Kosuke, Keiichiro Shirai, Hidetoshi Miyao, and Minoru Maruyama. "Multimodal Analogy-Based Image Retrieval by Improving Semantic Embeddings". Journal of Advanced Computational Intelligence and Intelligent Informatics 26, no. 6 (November 20, 2022): 995–1003. http://dx.doi.org/10.20965/jaciii.2022.p0995.
Mai, Sijie, Haifeng Hu, and Songlong Xing. "Modality to Modality Translation: An Adversarial Representation Learning and Graph Fusion Network for Multimodal Fusion". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 01 (April 3, 2020): 164–72. http://dx.doi.org/10.1609/aaai.v34i01.5347.
Kim, Donghyun, Kuniaki Saito, Kate Saenko, Stan Sclaroff, and Bryan Plummer. "MULE: Multimodal Universal Language Embedding". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 11254–61. http://dx.doi.org/10.1609/aaai.v34i07.6785.
Wehrmann, Jônatas, Anderson Mattjie, and Rodrigo C. Barros. "Order embeddings and character-level convolutions for multimodal alignment". Pattern Recognition Letters 102 (January 2018): 15–22. http://dx.doi.org/10.1016/j.patrec.2017.11.020.
Mithun, Niluthpol C., Juncheng Li, Florian Metze, and Amit K. Roy-Chowdhury. "Joint embeddings with multimodal cues for video-text retrieval". International Journal of Multimedia Information Retrieval 8, no. 1 (January 12, 2019): 3–18. http://dx.doi.org/10.1007/s13735-018-00166-3.
Nayak, Roshan, B. S. Ullas Kannantha, Kruthi S, and C. Gururaj. "Multimodal Offensive Meme Classification using Transformers and BiLSTM". International Journal of Engineering and Advanced Technology 11, no. 3 (February 28, 2022): 96–102. http://dx.doi.org/10.35940/ijeat.c3392.0211322.
Chen, Weijia, Zhijun Lu, Lijue You, Lingling Zhou, Jie Xu, and Ken Chen. "Artificial Intelligence–Based Multimodal Risk Assessment Model for Surgical Site Infection (AMRAMS): Development and Validation Study". JMIR Medical Informatics 8, no. 6 (June 15, 2020): e18186. http://dx.doi.org/10.2196/18186.
Smelik, N. D. "Multimodal topic model for texts and images utilizing their embeddings". Machine Learning and Data Analysis 2, no. 4 (2016): 421–41. http://dx.doi.org/10.21469/22233792.2.4.05.
Abdou, Ahmed, Ekta Sood, Philipp Müller, and Andreas Bulling. "Gaze-enhanced Crossmodal Embeddings for Emotion Recognition". Proceedings of the ACM on Human-Computer Interaction 6, ETRA (May 13, 2022): 1–18. http://dx.doi.org/10.1145/3530879.
Chen, Qihua, Xuejin Chen, Chenxuan Wang, Yixiong Liu, Zhiwei Xiong, and Feng Wu. "Learning Multimodal Volumetric Features for Large-Scale Neuron Tracing". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 2 (March 24, 2024): 1174–82. http://dx.doi.org/10.1609/aaai.v38i2.27879.
Hu, Wenbo, Yifan Xu, Yi Li, Weiyue Li, Zeyuan Chen, and Zhuowen Tu. "BLIVA: A Simple Multimodal LLM for Better Handling of Text-Rich Visual Questions". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 2256–64. http://dx.doi.org/10.1609/aaai.v38i3.27999.
Shen, Aili, Bahar Salehi, Jianzhong Qi, and Timothy Baldwin. "A General Approach to Multimodal Document Quality Assessment". Journal of Artificial Intelligence Research 68 (July 22, 2020): 607–32. http://dx.doi.org/10.1613/jair.1.11647.
Tseng, Shao-Yen, Shrikanth Narayanan, and Panayiotis Georgiou. "Multimodal Embeddings From Language Models for Emotion Recognition in the Wild". IEEE Signal Processing Letters 28 (2021): 608–12. http://dx.doi.org/10.1109/lsp.2021.3065598.
Jing, Xuebin, Liang He, Zhida Song, and Shaolei Wang. "Audio–Visual Fusion Based on Interactive Attention for Person Verification". Sensors 23, no. 24 (December 15, 2023): 9845. http://dx.doi.org/10.3390/s23249845.
Salin, Emmanuelle, Badreddine Farah, Stéphane Ayache, and Benoit Favre. "Are Vision-Language Transformers Learning Multimodal Representations? A Probing Perspective". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (June 28, 2022): 11248–57. http://dx.doi.org/10.1609/aaai.v36i10.21375.
Skantze, Gabriel, and Bram Willemsen. "CoLLIE: Continual Learning of Language Grounding from Language-Image Embeddings". Journal of Artificial Intelligence Research 74 (July 9, 2022): 1201–23. http://dx.doi.org/10.1613/jair.1.13689.
Wang, Jenq-Haur, Mehdi Norouzi, and Shu Ming Tsai. "Augmenting Multimodal Content Representation with Transformers for Misinformation Detection". Big Data and Cognitive Computing 8, no. 10 (October 11, 2024): 134. http://dx.doi.org/10.3390/bdcc8100134.
Kang, Yu, Tianqiao Liu, Hang Li, Yang Hao, and Wenbiao Ding. "Self-Supervised Audio-and-Text Pre-training with Extremely Low-Resource Parallel Data". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (June 28, 2022): 10875–83. http://dx.doi.org/10.1609/aaai.v36i10.21334.
Yang, Bang, Yong Dai, Xuxin Cheng, Yaowei Li, Asif Raza, and Yuexian Zou. "Embracing Language Inclusivity and Diversity in CLIP through Continual Language Learning". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 6 (March 24, 2024): 6458–66. http://dx.doi.org/10.1609/aaai.v38i6.28466.
Wang, Fengjun, Sarai Mizrachi, Moran Beladev, Guy Nadav, Gil Amsalem, Karen Lastmann Assaraf, and Hadas Harush Boker. "MuMIC – Multimodal Embedding for Multi-Label Image Classification with Tempered Sigmoid". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 13 (June 26, 2023): 15603–11. http://dx.doi.org/10.1609/aaai.v37i13.26850.
Nikzad-Khasmakhi, N., M. A. Balafar, M. Reza Feizi-Derakhshi, and Cina Motamed. "BERTERS: Multimodal representation learning for expert recommendation system with transformers and graph embeddings". Chaos, Solitons & Fractals 151 (October 2021): 111260. http://dx.doi.org/10.1016/j.chaos.2021.111260.
Liu, Hao, Ting Li, Renjun Hu, Yanjie Fu, Jingjing Gu, and Hui Xiong. "Joint Representation Learning for Multi-Modal Transportation Recommendation". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 1036–43. http://dx.doi.org/10.1609/aaai.v33i01.33011036.
Chen, Guang, Fangxiang Feng, Guangwei Zhang, Xiaoxu Li, and Ruifan Li. "A Visually Enhanced Neural Encoder for Synset Induction". Electronics 12, no. 16 (August 20, 2023): 3521. http://dx.doi.org/10.3390/electronics12163521.
Xu, Xing, Jialin Tian, Kaiyi Lin, Huimin Lu, Jie Shao, and Heng Tao Shen. "Zero-shot Cross-modal Retrieval by Assembling AutoEncoder and Generative Adversarial Network". ACM Transactions on Multimedia Computing, Communications, and Applications 17, no. 1s (March 31, 2021): 1–17. http://dx.doi.org/10.1145/3424341.
Mummireddygari, Anitha, and N. Ananda Reddy. "Optimizing Speaker Recognition in Complex Environments: An Enhanced Framework with Artificial Neural Networks for Multi-Speaker Settings". International Journal of Scientific Research in Computer Science, Engineering and Information Technology 10, no. 3 (May 28, 2024): 387–98. http://dx.doi.org/10.32628/cseit24103116.
Rao, D. S., and Rakhi Madhukararao Joshi. "Multi-camera Vehicle Tracking and Recognition with Multimodal Contrastive Domain Sharing GAN and Topological Embeddings". Journal of Electrical Systems 20, no. 2s (April 4, 2024): 675–86. http://dx.doi.org/10.52783/jes.1532.
Kim, MinJun, SeungWoo Song, YouHan Lee, Haneol Jang, and KyungTae Lim. "BOK-VQA: Bilingual outside Knowledge-Based Visual Question Answering via Graph Representation Pretraining". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 16 (March 24, 2024): 18381–89. http://dx.doi.org/10.1609/aaai.v38i16.29798.
Alam, Mohammad Arif Ul. "College Student Retention Risk Analysis from Educational Database Using Multi-Task Multi-Modal Neural Fusion". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (June 28, 2022): 12689–97. http://dx.doi.org/10.1609/aaai.v36i11.21545.
Zhang, Ruochi, Tianming Zhou, and Jian Ma. "Multiscale and integrative single-cell Hi-C analysis with Higashi". Nature Biotechnology 40, no. 2 (October 11, 2021): 254–61. http://dx.doi.org/10.1038/s41587-021-01034-y.
Liang, Meiyu, Junping Du, Zhengyang Liang, Yongwang Xing, Wei Huang, and Zhe Xue. "Self-Supervised Multi-Modal Knowledge Graph Contrastive Hashing for Cross-Modal Search". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 12 (March 24, 2024): 13744–53. http://dx.doi.org/10.1609/aaai.v38i12.29280.
Zhang, Litian, Xiaoming Zhang, Ziyi Zhou, Feiran Huang, and Chaozhuo Li. "Reinforced Adaptive Knowledge Learning for Multimodal Fake News Detection". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 15 (March 24, 2024): 16777–85. http://dx.doi.org/10.1609/aaai.v38i15.29618.
Faizabadi, Ahmed Rimaz, Hasan Firdaus Mohd Zaki, Zulkifli Zainal Abidin, Muhammad Afif Husman, and Nik Nur Wahidah Nik Hashim. "Learning a Multimodal 3D Face Embedding for Robust RGBD Face Recognition". Journal of Integrated and Advanced Engineering (JIAE) 3, no. 1 (March 9, 2023): 37–46. http://dx.doi.org/10.51662/jiae.v3i1.84.
Benzinho, José, João Ferreira, Joel Batista, Leandro Pereira, Marisa Maximiano, Vítor Távora, Ricardo Gomes, and Orlando Remédios. "LLM Based Chatbot for Farm-to-Fork Blockchain Traceability Platform". Applied Sciences 14, no. 19 (October 2, 2024): 8856. http://dx.doi.org/10.3390/app14198856.
Liu, Xinyi, Bo Peng, Meiliu Wu, Mingshu Wang, Heng Cai, and Qunying Huang. "Occupation Prediction with Multimodal Learning from Tweet Messages and Google Street View Images". AGILE: GIScience Series 5 (May 30, 2024): 1–6. http://dx.doi.org/10.5194/agile-giss-5-36-2024.
Sun, Jianguo, Hanqi Yin, Ye Tian, Junpeng Wu, Linshan Shen, and Lei Chen. "Two-Level Multimodal Fusion for Sentiment Analysis in Public Security". Security and Communication Networks 2021 (June 3, 2021): 1–10. http://dx.doi.org/10.1155/2021/6662337.
Yuan, Hui, Yuanyuan Tang, Wei Xu, and Raymond Yiu Keung Lau. "Exploring the influence of multimodal social media data on stock performance: an empirical perspective and analysis". Internet Research 31, no. 3 (January 12, 2021): 871–91. http://dx.doi.org/10.1108/intr-11-2019-0461.
Mingote, Victoria, Ignacio Viñals, Pablo Gimeno, Antonio Miguel, Alfonso Ortega, and Eduardo Lleida. "Multimodal Diarization Systems by Training Enrollment Models as Identity Representations". Applied Sciences 12, no. 3 (January 21, 2022): 1141. http://dx.doi.org/10.3390/app12031141.
Krawczuk, Patrycja, Zachary Fox, Dakota Murdock, Jennifer Doherty, Antoinette Stroupe, Stephen M. Schwartz, Lynne Penberthy et al. "Abstract 2318: Multimodal machine learning for the automatic classification of recurrent cancers". Cancer Research 84, no. 6_Supplement (March 22, 2024): 2318. http://dx.doi.org/10.1158/1538-7445.am2024-2318.