Journal articles on the topic "Multimodal Embeddings"
Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles
Consult the top 50 journal articles on the topic "Multimodal Embeddings".
An "Add to bibliography" button is available next to every work in the bibliography. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online, whenever such details are available in the work's metadata.
Browse journal articles from a wide range of disciplines and compile an accurate bibliography.
Tyshchuk, Kirill, Polina Karpikova, Andrew Spiridonov, Anastasiia Prutianova, Anton Razzhigaev, and Alexander Panchenko. "On Isotropy of Multimodal Embeddings". Information 14, no. 7 (July 10, 2023): 392. http://dx.doi.org/10.3390/info14070392.
Guo, Zhiqiang, Jianjun Li, Guohui Li, Chaoyang Wang, Si Shi, and Bin Ruan. "LGMRec: Local and Global Graph Learning for Multimodal Recommendation". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 8 (March 24, 2024): 8454–62. http://dx.doi.org/10.1609/aaai.v38i8.28688.
Shang, Bin, Yinliang Zhao, Jun Liu, and Di Wang. "LAFA: Multimodal Knowledge Graph Completion with Link Aware Fusion and Aggregation". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 8 (March 24, 2024): 8957–65. http://dx.doi.org/10.1609/aaai.v38i8.28744.
Sun, Zhongkai, Prathusha Sarma, William Sethares, and Yingyu Liang. "Learning Relationships between Text, Audio, and Video via Deep Canonical Correlation for Multimodal Language Analysis". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 8992–99. http://dx.doi.org/10.1609/aaai.v34i05.6431.
Merkx, Danny, and Stefan L. Frank. "Learning semantic sentence representations from visually grounded language without lexical knowledge". Natural Language Engineering 25, no. 4 (July 2019): 451–66. http://dx.doi.org/10.1017/s1351324919000196.
Tang, Zhenchao, Jiehui Huang, Guanxing Chen, and Calvin Yu-Chian Chen. "Comprehensive View Embedding Learning for Single-Cell Multimodal Integration". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 14 (March 24, 2024): 15292–300. http://dx.doi.org/10.1609/aaai.v38i14.29453.
Zhang, Linhai, Deyu Zhou, Yulan He, and Zeng Yang. "MERL: Multimodal Event Representation Learning in Heterogeneous Embedding Spaces". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 16 (May 18, 2021): 14420–27. http://dx.doi.org/10.1609/aaai.v35i16.17695.
Sah, Shagan, Sabarish Gopalakishnan, and Raymond Ptucha. "Aligned attention for common multimodal embeddings". Journal of Electronic Imaging 29, no. 02 (March 25, 2020): 1. http://dx.doi.org/10.1117/1.jei.29.2.023013.
Zhang, Rongchao, Yiwei Lou, Dexuan Xu, Yongzhi Cao, Hanpin Wang, and Yu Huang. "A Learnable Discrete-Prior Fusion Autoencoder with Contrastive Learning for Tabular Data Synthesis". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 15 (March 24, 2024): 16803–11. http://dx.doi.org/10.1609/aaai.v38i15.29621.
Lin, Kaiyi, Xing Xu, Lianli Gao, Zheng Wang, and Heng Tao Shen. "Learning Cross-Aligned Latent Embeddings for Zero-Shot Cross-Modal Retrieval". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 11515–22. http://dx.doi.org/10.1609/aaai.v34i07.6817.
Zhu, Chaoyu, Zhihao Yang, Xiaoqiong Xia, Nan Li, Fan Zhong, and Lei Liu. "Multimodal reasoning based on knowledge graph embedding for specific diseases". Bioinformatics 38, no. 8 (February 12, 2022): 2235–45. http://dx.doi.org/10.1093/bioinformatics/btac085.
Tripathi, Aakash, Asim Waqas, Yasin Yilmaz, and Ghulam Rasool. "Abstract 4905: Multimodal transformer model improves survival prediction in lung cancer compared to unimodal approaches". Cancer Research 84, no. 6_Supplement (March 22, 2024): 4905. http://dx.doi.org/10.1158/1538-7445.am2024-4905.
Ota, Kosuke, Keiichiro Shirai, Hidetoshi Miyao, and Minoru Maruyama. "Multimodal Analogy-Based Image Retrieval by Improving Semantic Embeddings". Journal of Advanced Computational Intelligence and Intelligent Informatics 26, no. 6 (November 20, 2022): 995–1003. http://dx.doi.org/10.20965/jaciii.2022.p0995.
Mai, Sijie, Haifeng Hu, and Songlong Xing. "Modality to Modality Translation: An Adversarial Representation Learning and Graph Fusion Network for Multimodal Fusion". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 01 (April 3, 2020): 164–72. http://dx.doi.org/10.1609/aaai.v34i01.5347.
Kim, Donghyun, Kuniaki Saito, Kate Saenko, Stan Sclaroff, and Bryan Plummer. "MULE: Multimodal Universal Language Embedding". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 11254–61. http://dx.doi.org/10.1609/aaai.v34i07.6785.
Wehrmann, Jônatas, Anderson Mattjie, and Rodrigo C. Barros. "Order embeddings and character-level convolutions for multimodal alignment". Pattern Recognition Letters 102 (January 2018): 15–22. http://dx.doi.org/10.1016/j.patrec.2017.11.020.
Mithun, Niluthpol C., Juncheng Li, Florian Metze, and Amit K. Roy-Chowdhury. "Joint embeddings with multimodal cues for video-text retrieval". International Journal of Multimedia Information Retrieval 8, no. 1 (January 12, 2019): 3–18. http://dx.doi.org/10.1007/s13735-018-00166-3.
Nayak, Roshan, B. S. Ullas Kannantha, Kruthi S, and C. Gururaj. "Multimodal Offensive Meme Classification using Transformers and BiLSTM". International Journal of Engineering and Advanced Technology 11, no. 3 (February 28, 2022): 96–102. http://dx.doi.org/10.35940/ijeat.c3392.0211322.
Chen, Weijia, Zhijun Lu, Lijue You, Lingling Zhou, Jie Xu, and Ken Chen. "Artificial Intelligence–Based Multimodal Risk Assessment Model for Surgical Site Infection (AMRAMS): Development and Validation Study". JMIR Medical Informatics 8, no. 6 (June 15, 2020): e18186. http://dx.doi.org/10.2196/18186.
Smelik, N. D. "Multimodal topic model for texts and images utilizing their embeddings". Machine Learning and Data Analysis 2, no. 4 (2016): 421–41. http://dx.doi.org/10.21469/22233792.2.4.05.
Abdou, Ahmed, Ekta Sood, Philipp Müller, and Andreas Bulling. "Gaze-enhanced Crossmodal Embeddings for Emotion Recognition". Proceedings of the ACM on Human-Computer Interaction 6, ETRA (May 13, 2022): 1–18. http://dx.doi.org/10.1145/3530879.
Chen, Qihua, Xuejin Chen, Chenxuan Wang, Yixiong Liu, Zhiwei Xiong, and Feng Wu. "Learning Multimodal Volumetric Features for Large-Scale Neuron Tracing". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 2 (March 24, 2024): 1174–82. http://dx.doi.org/10.1609/aaai.v38i2.27879.
Hu, Wenbo, Yifan Xu, Yi Li, Weiyue Li, Zeyuan Chen, and Zhuowen Tu. "BLIVA: A Simple Multimodal LLM for Better Handling of Text-Rich Visual Questions". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 2256–64. http://dx.doi.org/10.1609/aaai.v38i3.27999.
Shen, Aili, Bahar Salehi, Jianzhong Qi, and Timothy Baldwin. "A General Approach to Multimodal Document Quality Assessment". Journal of Artificial Intelligence Research 68 (July 22, 2020): 607–32. http://dx.doi.org/10.1613/jair.1.11647.
Tseng, Shao-Yen, Shrikanth Narayanan, and Panayiotis Georgiou. "Multimodal Embeddings From Language Models for Emotion Recognition in the Wild". IEEE Signal Processing Letters 28 (2021): 608–12. http://dx.doi.org/10.1109/lsp.2021.3065598.
Jing, Xuebin, Liang He, Zhida Song, and Shaolei Wang. "Audio–Visual Fusion Based on Interactive Attention for Person Verification". Sensors 23, no. 24 (December 15, 2023): 9845. http://dx.doi.org/10.3390/s23249845.
Salin, Emmanuelle, Badreddine Farah, Stéphane Ayache, and Benoit Favre. "Are Vision-Language Transformers Learning Multimodal Representations? A Probing Perspective". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (June 28, 2022): 11248–57. http://dx.doi.org/10.1609/aaai.v36i10.21375.
Skantze, Gabriel, and Bram Willemsen. "CoLLIE: Continual Learning of Language Grounding from Language-Image Embeddings". Journal of Artificial Intelligence Research 74 (July 9, 2022): 1201–23. http://dx.doi.org/10.1613/jair.1.13689.
Wang, Jenq-Haur, Mehdi Norouzi, and Shu Ming Tsai. "Augmenting Multimodal Content Representation with Transformers for Misinformation Detection". Big Data and Cognitive Computing 8, no. 10 (October 11, 2024): 134. http://dx.doi.org/10.3390/bdcc8100134.
Kang, Yu, Tianqiao Liu, Hang Li, Yang Hao, and Wenbiao Ding. "Self-Supervised Audio-and-Text Pre-training with Extremely Low-Resource Parallel Data". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (June 28, 2022): 10875–83. http://dx.doi.org/10.1609/aaai.v36i10.21334.
Yang, Bang, Yong Dai, Xuxin Cheng, Yaowei Li, Asif Raza, and Yuexian Zou. "Embracing Language Inclusivity and Diversity in CLIP through Continual Language Learning". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 6 (March 24, 2024): 6458–66. http://dx.doi.org/10.1609/aaai.v38i6.28466.
Wang, Fengjun, Sarai Mizrachi, Moran Beladev, Guy Nadav, Gil Amsalem, Karen Lastmann Assaraf, and Hadas Harush Boker. "MuMIC – Multimodal Embedding for Multi-Label Image Classification with Tempered Sigmoid". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 13 (June 26, 2023): 15603–11. http://dx.doi.org/10.1609/aaai.v37i13.26850.
Nikzad-Khasmakhi, N., M. A. Balafar, M. Reza Feizi-Derakhshi, and Cina Motamed. "BERTERS: Multimodal representation learning for expert recommendation system with transformers and graph embeddings". Chaos, Solitons & Fractals 151 (October 2021): 111260. http://dx.doi.org/10.1016/j.chaos.2021.111260.
Liu, Hao, Ting Li, Renjun Hu, Yanjie Fu, Jingjing Gu, and Hui Xiong. "Joint Representation Learning for Multi-Modal Transportation Recommendation". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 1036–43. http://dx.doi.org/10.1609/aaai.v33i01.33011036.
Chen, Guang, Fangxiang Feng, Guangwei Zhang, Xiaoxu Li, and Ruifan Li. "A Visually Enhanced Neural Encoder for Synset Induction". Electronics 12, no. 16 (August 20, 2023): 3521. http://dx.doi.org/10.3390/electronics12163521.
Xu, Xing, Jialin Tian, Kaiyi Lin, Huimin Lu, Jie Shao, and Heng Tao Shen. "Zero-shot Cross-modal Retrieval by Assembling AutoEncoder and Generative Adversarial Network". ACM Transactions on Multimedia Computing, Communications, and Applications 17, no. 1s (March 31, 2021): 1–17. http://dx.doi.org/10.1145/3424341.
Mummireddygari, Anitha, and N Ananda Reddy. "Optimizing Speaker Recognition in Complex Environments: An Enhanced Framework with Artificial Neural Networks for Multi-Speaker Settings". International Journal of Scientific Research in Computer Science, Engineering and Information Technology 10, no. 3 (May 28, 2024): 387–98. http://dx.doi.org/10.32628/cseit24103116.
Rao, D. S., and Rakhi Madhukararao Joshi. "Multi-camera Vehicle Tracking and Recognition with Multimodal Contrastive Domain Sharing GAN and Topological Embeddings". Journal of Electrical Systems 20, no. 2s (April 4, 2024): 675–86. http://dx.doi.org/10.52783/jes.1532.
Kim, MinJun, SeungWoo Song, YouHan Lee, Haneol Jang, and KyungTae Lim. "BOK-VQA: Bilingual outside Knowledge-Based Visual Question Answering via Graph Representation Pretraining". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 16 (March 24, 2024): 18381–89. http://dx.doi.org/10.1609/aaai.v38i16.29798.
Alam, Mohammad Arif Ul. "College Student Retention Risk Analysis from Educational Database Using Multi-Task Multi-Modal Neural Fusion". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (June 28, 2022): 12689–97. http://dx.doi.org/10.1609/aaai.v36i11.21545.
Zhang, Ruochi, Tianming Zhou, and Jian Ma. "Multiscale and integrative single-cell Hi-C analysis with Higashi". Nature Biotechnology 40, no. 2 (October 11, 2021): 254–61. http://dx.doi.org/10.1038/s41587-021-01034-y.
Liang, Meiyu, Junping Du, Zhengyang Liang, Yongwang Xing, Wei Huang, and Zhe Xue. "Self-Supervised Multi-Modal Knowledge Graph Contrastive Hashing for Cross-Modal Search". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 12 (March 24, 2024): 13744–53. http://dx.doi.org/10.1609/aaai.v38i12.29280.
Zhang, Litian, Xiaoming Zhang, Ziyi Zhou, Feiran Huang, and Chaozhuo Li. "Reinforced Adaptive Knowledge Learning for Multimodal Fake News Detection". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 15 (March 24, 2024): 16777–85. http://dx.doi.org/10.1609/aaai.v38i15.29618.
Faizabadi, Ahmed Rimaz, Hasan Firdaus Mohd Zaki, Zulkifli Zainal Abidin, Muhammad Afif Husman, and Nik Nur Wahidah Nik Hashim. "Learning a Multimodal 3D Face Embedding for Robust RGBD Face Recognition". Journal of Integrated and Advanced Engineering (JIAE) 3, no. 1 (March 9, 2023): 37–46. http://dx.doi.org/10.51662/jiae.v3i1.84.
Benzinho, José, João Ferreira, Joel Batista, Leandro Pereira, Marisa Maximiano, Vítor Távora, Ricardo Gomes, and Orlando Remédios. "LLM Based Chatbot for Farm-to-Fork Blockchain Traceability Platform". Applied Sciences 14, no. 19 (October 2, 2024): 8856. http://dx.doi.org/10.3390/app14198856.
Liu, Xinyi, Bo Peng, Meiliu Wu, Mingshu Wang, Heng Cai, and Qunying Huang. "Occupation Prediction with Multimodal Learning from Tweet Messages and Google Street View Images". AGILE: GIScience Series 5 (May 30, 2024): 1–6. http://dx.doi.org/10.5194/agile-giss-5-36-2024.
Sun, Jianguo, Hanqi Yin, Ye Tian, Junpeng Wu, Linshan Shen, and Lei Chen. "Two-Level Multimodal Fusion for Sentiment Analysis in Public Security". Security and Communication Networks 2021 (June 3, 2021): 1–10. http://dx.doi.org/10.1155/2021/6662337.
Yuan, Hui, Yuanyuan Tang, Wei Xu, and Raymond Yiu Keung Lau. "Exploring the influence of multimodal social media data on stock performance: an empirical perspective and analysis". Internet Research 31, no. 3 (January 12, 2021): 871–91. http://dx.doi.org/10.1108/intr-11-2019-0461.
Mingote, Victoria, Ignacio Viñals, Pablo Gimeno, Antonio Miguel, Alfonso Ortega, and Eduardo Lleida. "Multimodal Diarization Systems by Training Enrollment Models as Identity Representations". Applied Sciences 12, no. 3 (January 21, 2022): 1141. http://dx.doi.org/10.3390/app12031141.
Krawczuk, Patrycja, Zachary Fox, Dakota Murdock, Jennifer Doherty, Antoinette Stroupe, Stephen M. Schwartz, Lynne Penberthy, et al. "Abstract 2318: Multimodal machine learning for the automatic classification of recurrent cancers". Cancer Research 84, no. 6_Supplement (March 22, 2024): 2318. http://dx.doi.org/10.1158/1538-7445.am2024-2318.