Journal articles on the topic "Multi-Modal representations"
Below are the top 50 journal articles for research on the topic "Multi-Modal representations".
Wu, Lianlong, Seewon Choi, Daniel Raggi, Aaron Stockdill, Grecia Garcia Garcia, Fiorenzo Colarusso, Peter C. H. Cheng, and Mateja Jamnik. "Generation of Visual Representations for Multi-Modal Mathematical Knowledge". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 21 (March 24, 2024): 23850–52. http://dx.doi.org/10.1609/aaai.v38i21.30586.
Zhang, Yi, Mingyuan Chen, Jundong Shen, and Chongjun Wang. "Tailor Versatile Multi-Modal Learning for Multi-Label Emotion Recognition". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 9100–9108. http://dx.doi.org/10.1609/aaai.v36i8.20895.
Zhang, Dong, Suzhong Wei, Shoushan Li, Hanqian Wu, Qiaoming Zhu, and Guodong Zhou. "Multi-modal Graph Fusion for Named Entity Recognition with Targeted Visual Guidance". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 16 (May 18, 2021): 14347–55. http://dx.doi.org/10.1609/aaai.v35i16.17687.
Liu, Hao, Jindong Han, Yanjie Fu, Jingbo Zhou, Xinjiang Lu, and Hui Xiong. "Multi-modal transportation recommendation with unified route representation learning". Proceedings of the VLDB Endowment 14, no. 3 (November 2020): 342–50. http://dx.doi.org/10.14778/3430915.3430924.
Wang, Huansha, Qinrang Liu, Ruiyang Huang, and Jianpeng Zhang. "Multi-Modal Entity Alignment Method Based on Feature Enhancement". Applied Sciences 13, no. 11 (June 1, 2023): 6747. http://dx.doi.org/10.3390/app13116747.
Wu, Tianxing, Chaoyu Gao, Lin Li, and Yuxiang Wang. "Leveraging Multi-Modal Information for Cross-Lingual Entity Matching across Knowledge Graphs". Applied Sciences 12, no. 19 (October 8, 2022): 10107. http://dx.doi.org/10.3390/app121910107.
Han, Ning, Jingjing Chen, Hao Zhang, Huanwen Wang, and Hao Chen. "Adversarial Multi-Grained Embedding Network for Cross-Modal Text-Video Retrieval". ACM Transactions on Multimedia Computing, Communications, and Applications 18, no. 2 (May 31, 2022): 1–23. http://dx.doi.org/10.1145/3483381.
Ying, Qichao, Xiaoxiao Hu, Yangming Zhou, Zhenxing Qian, Dan Zeng, and Shiming Ge. "Bootstrapping Multi-View Representations for Fake News Detection". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 4 (June 26, 2023): 5384–92. http://dx.doi.org/10.1609/aaai.v37i4.25670.
Huang, Yufeng, Jiji Tang, Zhuo Chen, Rongsheng Zhang, Xinfeng Zhang, Weijie Chen, Zeng Zhao et al. "Structure-CLIP: Towards Scene Graph Knowledge to Enhance Multi-Modal Structured Representations". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 2417–25. http://dx.doi.org/10.1609/aaai.v38i3.28017.
van Tulder, Gijs, and Marleen de Bruijne. "Learning Cross-Modality Representations From Multi-Modal Images". IEEE Transactions on Medical Imaging 38, no. 2 (February 2019): 638–48. http://dx.doi.org/10.1109/tmi.2018.2868977.
Kiela, Douwe, and Stephen Clark. "Learning Neural Audio Embeddings for Grounding Semantics in Auditory Perception". Journal of Artificial Intelligence Research 60 (December 26, 2017): 1003–30. http://dx.doi.org/10.1613/jair.5665.
Cui, Xiaohui, Xiaolong Qu, Dongmei Li, Yu Yang, Yuxun Li, and Xiaoping Zhang. "MKGCN: Multi-Modal Knowledge Graph Convolutional Network for Music Recommender Systems". Electronics 12, no. 12 (June 15, 2023): 2688. http://dx.doi.org/10.3390/electronics12122688.
Dong, Bin, Songlei Jian, and Kai Lu. "Learning Multimodal Representations by Symmetrically Transferring Local Structures". Symmetry 12, no. 9 (September 13, 2020): 1504. http://dx.doi.org/10.3390/sym12091504.
Li, Yehao, Jiahao Fan, Yingwei Pan, Ting Yao, Weiyao Lin, and Tao Mei. "Uni-EDEN: Universal Encoder-Decoder Network by Multi-Granular Vision-Language Pre-training". ACM Transactions on Multimedia Computing, Communications, and Applications 18, no. 2 (May 31, 2022): 1–16. http://dx.doi.org/10.1145/3473140.
Gu, Zhihao, Jiangning Zhang, Liang Liu, Xu Chen, Jinlong Peng, Zhenye Gan, Guannan Jiang, Annan Shu, Yabiao Wang, and Lizhuang Ma. "Rethinking Reverse Distillation for Multi-Modal Anomaly Detection". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 8 (March 24, 2024): 8445–53. http://dx.doi.org/10.1609/aaai.v38i8.28687.
Wang, Zi, Chenglong Li, Aihua Zheng, Ran He, and Jin Tang. "Interact, Embed, and EnlargE: Boosting Modality-Specific Representations for Multi-Modal Person Re-identification". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (June 28, 2022): 2633–41. http://dx.doi.org/10.1609/aaai.v36i3.20165.
Wróblewska, Anna, Jacek Dąbrowski, Michał Pastuszak, Andrzej Michałowski, Michał Daniluk, Barbara Rychalska, Mikołaj Wieczorek, and Sylwia Sysko-Romańczuk. "Designing Multi-Modal Embedding Fusion-Based Recommender". Electronics 11, no. 9 (April 27, 2022): 1391. http://dx.doi.org/10.3390/electronics11091391.
He, Qibin. "Prompting Multi-Modal Image Segmentation with Semantic Grouping". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 2094–102. http://dx.doi.org/10.1609/aaai.v38i3.27981.
Liang, Meiyu, Junping Du, Zhengyang Liang, Yongwang Xing, Wei Huang, and Zhe Xue. "Self-Supervised Multi-Modal Knowledge Graph Contrastive Hashing for Cross-Modal Search". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 12 (March 24, 2024): 13744–53. http://dx.doi.org/10.1609/aaai.v38i12.29280.
Bodapati, Jyostna Devi, Veeranjaneyulu Naralasetti, Shaik Nagur Shareef, Saqib Hakak, Muhammad Bilal, Praveen Kumar Reddy Maddikunta, and Ohyun Jo. "Blended Multi-Modal Deep ConvNet Features for Diabetic Retinopathy Severity Prediction". Electronics 9, no. 6 (May 30, 2020): 914. http://dx.doi.org/10.3390/electronics9060914.
Liu, Hao, Ting Li, Renjun Hu, Yanjie Fu, Jingjing Gu, and Hui Xiong. "Joint Representation Learning for Multi-Modal Transportation Recommendation". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 1036–43. http://dx.doi.org/10.1609/aaai.v33i01.33011036.
Yang, Fan, Wei Li, Menglong Yang, Binbin Liang, and Jianwei Zhang. "Multi-Modal Disordered Representation Learning Network for Description-Based Person Search". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 15 (March 24, 2024): 16316–24. http://dx.doi.org/10.1609/aaai.v38i15.29567.
Jüttner, Martin, and Ingo Rentschler. "Imagery in multi-modal object learning". Behavioral and Brain Sciences 25, no. 2 (April 2002): 197–98. http://dx.doi.org/10.1017/s0140525x0238004x.
Tao, Rui, Meng Zhu, Haiyan Cao, and Honge Ren. "Fine-Grained Cross-Modal Semantic Consistency in Natural Conservation Image Data from a Multi-Task Perspective". Sensors 24, no. 10 (May 14, 2024): 3130. http://dx.doi.org/10.3390/s24103130.
Pugeault, Nicolas, Florentin Wörgötter, and Norbert Krüger. "Disambiguating Multi–Modal Scene Representations Using Perceptual Grouping Constraints". PLoS ONE 5, no. 6 (June 9, 2010): e10663. http://dx.doi.org/10.1371/journal.pone.0010663.
Lara, Bruno, Juan Manuel Rendon-Mancha, and Marcos A. Capistran. "Prediction of Undesired Situations based on Multi-Modal Representations". IEEE Latin America Transactions 5, no. 2 (May 2007): 103–8. http://dx.doi.org/10.1109/tla.2007.4381351.
Geng, Shijie, Peng Gao, Moitreya Chatterjee, Chiori Hori, Jonathan Le Roux, Yongfeng Zhang, Hongsheng Li, and Anoop Cherian. "Dynamic Graph Representation Learning for Video Dialog via Multi-Modal Shuffled Transformers". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 2 (May 18, 2021): 1415–23. http://dx.doi.org/10.1609/aaai.v35i2.16231.
Yan, Facheng, Mingshu Zhang, and Bin Wei. "Multimodal integration for fake news detection on social media platforms". MATEC Web of Conferences 395 (2024): 01013. http://dx.doi.org/10.1051/matecconf/202439501013.
Escobar-Grisales, Daniel, Cristian David Ríos-Urrego, and Juan Rafael Orozco-Arroyave. "Deep Learning and Artificial Intelligence Applied to Model Speech and Language in Parkinson’s Disease". Diagnostics 13, no. 13 (June 25, 2023): 2163. http://dx.doi.org/10.3390/diagnostics13132163.
Zhai, Hanming, Xiaojun Lv, Zhiwen Hou, Xin Tong, and Fanliang Bu. "MLSFF: Multi-level structural features fusion for multi-modal knowledge graph completion". Mathematical Biosciences and Engineering 20, no. 8 (2023): 14096–116. http://dx.doi.org/10.3934/mbe.2023630.
Hua, Yan, Yingyun Yang, and Jianhe Du. "Deep Multi-Modal Metric Learning with Multi-Scale Correlation for Image-Text Retrieval". Electronics 9, no. 3 (March 10, 2020): 466. http://dx.doi.org/10.3390/electronics9030466.
Yang, Yiying, Fukun Yin, Wen Liu, Jiayuan Fan, Xin Chen, Gang Yu, and Tao Chen. "PM-INR: Prior-Rich Multi-Modal Implicit Large-Scale Scene Neural Representation". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 7 (March 24, 2024): 6594–602. http://dx.doi.org/10.1609/aaai.v38i7.28481.
Hu, Lianyu, Liqing Gao, Zekang Liu, Chi-Man Pun, and Wei Feng. "COMMA: Co-articulated Multi-Modal Learning". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 2238–46. http://dx.doi.org/10.1609/aaai.v38i3.27997.
Hill, Felix, Roi Reichart, and Anna Korhonen. "Multi-Modal Models for Concrete and Abstract Concept Meaning". Transactions of the Association for Computational Linguistics 2 (December 2014): 285–96. http://dx.doi.org/10.1162/tacl_a_00183.
Liu, Xuanwu, Guoxian Yu, Carlotta Domeniconi, Jun Wang, Yazhou Ren, and Maozu Guo. "Ranking-Based Deep Cross-Modal Hashing". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4400–4407. http://dx.doi.org/10.1609/aaai.v33i01.33014400.
Sezerer, Erhan, and Selma Tekir. "Incorporating Concreteness in Multi-Modal Language Models with Curriculum Learning". Applied Sciences 11, no. 17 (September 6, 2021): 8241. http://dx.doi.org/10.3390/app11178241.
Gou, Yingdong, Kexin Wang, Siwen Wei, and Changxin Shi. "GMDA: GCN-Based Multi-Modal Domain Adaptation for Real-Time Disaster Detection". International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 31, no. 06 (December 2023): 957–73. http://dx.doi.org/10.1142/s0218488523500435.
Jang, Jiho, Chaerin Kong, DongHyeon Jeon, Seonhoon Kim, and Nojun Kwak. "Unifying Vision-Language Representation Space with Single-Tower Transformer". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 1 (June 26, 2023): 980–88. http://dx.doi.org/10.1609/aaai.v37i1.25178.
Kabir, Anowarul, and Amarda Shehu. "GOProFormer: A Multi-Modal Transformer Method for Gene Ontology Protein Function Prediction". Biomolecules 12, no. 11 (November 18, 2022): 1709. http://dx.doi.org/10.3390/biom12111709.
Alam, Mohammad Arif Ul. "College Student Retention Risk Analysis from Educational Database Using Multi-Task Multi-Modal Neural Fusion". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (June 28, 2022): 12689–97. http://dx.doi.org/10.1609/aaai.v36i11.21545.
Qian, Shengsheng, Dizhan Xue, Huaiwen Zhang, Quan Fang, and Changsheng Xu. "Dual Adversarial Graph Neural Networks for Multi-label Cross-modal Retrieval". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 3 (May 18, 2021): 2440–48. http://dx.doi.org/10.1609/aaai.v35i3.16345.
Lu, Lyujian, Saad Elbeleidy, Lauren Zoe Baker, and Hua Wang. "Learning Multi-Modal Biomarker Representations via Globally Aligned Longitudinal Enrichments". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 01 (April 3, 2020): 817–24. http://dx.doi.org/10.1609/aaai.v34i01.5426.
Zhang, Heng, Vishal M. Patel, and Rama Chellappa. "Low-Rank and Joint Sparse Representations for Multi-Modal Recognition". IEEE Transactions on Image Processing 26, no. 10 (October 2017): 4741–52. http://dx.doi.org/10.1109/tip.2017.2721838.
Blown, Eric, and Tom G. K. Bryce. "Conceptual Coherence Revealed in Multi‐Modal Representations of Astronomy Knowledge". International Journal of Science Education 32, no. 1 (June 30, 2009): 31–67. http://dx.doi.org/10.1080/09500690902974207.
Torasso, Pietro. "Multiple representations and multi-modal reasoning in medical diagnostic systems". Artificial Intelligence in Medicine 23, no. 1 (August 2001): 49–69. http://dx.doi.org/10.1016/s0933-3657(01)00075-6.
Wachinger, Christian, and Nassir Navab. "Entropy and Laplacian images: Structural representations for multi-modal registration". Medical Image Analysis 16, no. 1 (January 2012): 1–17. http://dx.doi.org/10.1016/j.media.2011.03.001.
Fang, Feiyi, Tao Zhou, Zhenbo Song, and Jianfeng Lu. "MMCAN: Multi-Modal Cross-Attention Network for Free-Space Detection with Uncalibrated Hyperspectral Sensors". Remote Sensing 15, no. 4 (February 20, 2023): 1142. http://dx.doi.org/10.3390/rs15041142.
Gao, Jingsheng, Jiacheng Ruan, Suncheng Xiang, Zefang Yu, Ke Ji, Mingye Xie, Ting Liu, and Yuzhuo Fu. "LAMM: Label Alignment for Multi-Modal Prompt Learning". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 1815–23. http://dx.doi.org/10.1609/aaai.v38i3.27950.
Bao, Peijun, Wenhan Yang, Boon Poh Ng, Meng Hwa Er, and Alex C. Kot. "Cross-Modal Label Contrastive Learning for Unsupervised Audio-Visual Event Localization". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 1 (June 26, 2023): 215–22. http://dx.doi.org/10.1609/aaai.v37i1.25093.
Liu, Shuang, Mei Li, Zhong Zhang, Baihua Xiao, and Tariq S. Durrani. "Multi-Evidence and Multi-Modal Fusion Network for Ground-Based Cloud Recognition". Remote Sensing 12, no. 3 (February 2, 2020): 464. http://dx.doi.org/10.3390/rs12030464.