Journal articles on the topic "Multi-Modal representations"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 journal articles for your research on the topic "Multi-Modal representations".
Next to each source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse journal articles across a wide variety of disciplines and organize your bibliography correctly.
Wu, Lianlong, Seewon Choi, Daniel Raggi, Aaron Stockdill, Grecia Garcia Garcia, Fiorenzo Colarusso, Peter C. H. Cheng, and Mateja Jamnik. "Generation of Visual Representations for Multi-Modal Mathematical Knowledge". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 21 (March 24, 2024): 23850–52. http://dx.doi.org/10.1609/aaai.v38i21.30586.
Zhang, Yi, Mingyuan Chen, Jundong Shen, and Chongjun Wang. "Tailor Versatile Multi-Modal Learning for Multi-Label Emotion Recognition". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 9100–9108. http://dx.doi.org/10.1609/aaai.v36i8.20895.
Zhang, Dong, Suzhong Wei, Shoushan Li, Hanqian Wu, Qiaoming Zhu, and Guodong Zhou. "Multi-modal Graph Fusion for Named Entity Recognition with Targeted Visual Guidance". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 16 (May 18, 2021): 14347–55. http://dx.doi.org/10.1609/aaai.v35i16.17687.
Liu, Hao, Jindong Han, Yanjie Fu, Jingbo Zhou, Xinjiang Lu, and Hui Xiong. "Multi-modal transportation recommendation with unified route representation learning". Proceedings of the VLDB Endowment 14, no. 3 (November 2020): 342–50. http://dx.doi.org/10.14778/3430915.3430924.
Wang, Huansha, Qinrang Liu, Ruiyang Huang, and Jianpeng Zhang. "Multi-Modal Entity Alignment Method Based on Feature Enhancement". Applied Sciences 13, no. 11 (June 1, 2023): 6747. http://dx.doi.org/10.3390/app13116747.
Wu, Tianxing, Chaoyu Gao, Lin Li, and Yuxiang Wang. "Leveraging Multi-Modal Information for Cross-Lingual Entity Matching across Knowledge Graphs". Applied Sciences 12, no. 19 (October 8, 2022): 10107. http://dx.doi.org/10.3390/app121910107.
Han, Ning, Jingjing Chen, Hao Zhang, Huanwen Wang, and Hao Chen. "Adversarial Multi-Grained Embedding Network for Cross-Modal Text-Video Retrieval". ACM Transactions on Multimedia Computing, Communications, and Applications 18, no. 2 (May 31, 2022): 1–23. http://dx.doi.org/10.1145/3483381.
Ying, Qichao, Xiaoxiao Hu, Yangming Zhou, Zhenxing Qian, Dan Zeng, and Shiming Ge. "Bootstrapping Multi-View Representations for Fake News Detection". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 4 (June 26, 2023): 5384–92. http://dx.doi.org/10.1609/aaai.v37i4.25670.
Huang, Yufeng, Jiji Tang, Zhuo Chen, Rongsheng Zhang, Xinfeng Zhang, Weijie Chen, Zeng Zhao et al. "Structure-CLIP: Towards Scene Graph Knowledge to Enhance Multi-Modal Structured Representations". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 2417–25. http://dx.doi.org/10.1609/aaai.v38i3.28017.
van Tulder, Gijs, and Marleen de Bruijne. "Learning Cross-Modality Representations From Multi-Modal Images". IEEE Transactions on Medical Imaging 38, no. 2 (February 2019): 638–48. http://dx.doi.org/10.1109/tmi.2018.2868977.
Kiela, Douwe, and Stephen Clark. "Learning Neural Audio Embeddings for Grounding Semantics in Auditory Perception". Journal of Artificial Intelligence Research 60 (December 26, 2017): 1003–30. http://dx.doi.org/10.1613/jair.5665.
Cui, Xiaohui, Xiaolong Qu, Dongmei Li, Yu Yang, Yuxun Li, and Xiaoping Zhang. "MKGCN: Multi-Modal Knowledge Graph Convolutional Network for Music Recommender Systems". Electronics 12, no. 12 (June 15, 2023): 2688. http://dx.doi.org/10.3390/electronics12122688.
Dong, Bin, Songlei Jian, and Kai Lu. "Learning Multimodal Representations by Symmetrically Transferring Local Structures". Symmetry 12, no. 9 (September 13, 2020): 1504. http://dx.doi.org/10.3390/sym12091504.
Li, Yehao, Jiahao Fan, Yingwei Pan, Ting Yao, Weiyao Lin, and Tao Mei. "Uni-EDEN: Universal Encoder-Decoder Network by Multi-Granular Vision-Language Pre-training". ACM Transactions on Multimedia Computing, Communications, and Applications 18, no. 2 (May 31, 2022): 1–16. http://dx.doi.org/10.1145/3473140.
Gu, Zhihao, Jiangning Zhang, Liang Liu, Xu Chen, Jinlong Peng, Zhenye Gan, Guannan Jiang, Annan Shu, Yabiao Wang, and Lizhuang Ma. "Rethinking Reverse Distillation for Multi-Modal Anomaly Detection". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 8 (March 24, 2024): 8445–53. http://dx.doi.org/10.1609/aaai.v38i8.28687.
Wang, Zi, Chenglong Li, Aihua Zheng, Ran He, and Jin Tang. "Interact, Embed, and EnlargE: Boosting Modality-Specific Representations for Multi-Modal Person Re-identification". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (June 28, 2022): 2633–41. http://dx.doi.org/10.1609/aaai.v36i3.20165.
Wróblewska, Anna, Jacek Dąbrowski, Michał Pastuszak, Andrzej Michałowski, Michał Daniluk, Barbara Rychalska, Mikołaj Wieczorek, and Sylwia Sysko-Romańczuk. "Designing Multi-Modal Embedding Fusion-Based Recommender". Electronics 11, no. 9 (April 27, 2022): 1391. http://dx.doi.org/10.3390/electronics11091391.
He, Qibin. "Prompting Multi-Modal Image Segmentation with Semantic Grouping". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 2094–102. http://dx.doi.org/10.1609/aaai.v38i3.27981.
Liang, Meiyu, Junping Du, Zhengyang Liang, Yongwang Xing, Wei Huang, and Zhe Xue. "Self-Supervised Multi-Modal Knowledge Graph Contrastive Hashing for Cross-Modal Search". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 12 (March 24, 2024): 13744–53. http://dx.doi.org/10.1609/aaai.v38i12.29280.
Bodapati, Jyostna Devi, Veeranjaneyulu Naralasetti, Shaik Nagur Shareef, Saqib Hakak, Muhammad Bilal, Praveen Kumar Reddy Maddikunta, and Ohyun Jo. "Blended Multi-Modal Deep ConvNet Features for Diabetic Retinopathy Severity Prediction". Electronics 9, no. 6 (May 30, 2020): 914. http://dx.doi.org/10.3390/electronics9060914.
Liu, Hao, Ting Li, Renjun Hu, Yanjie Fu, Jingjing Gu, and Hui Xiong. "Joint Representation Learning for Multi-Modal Transportation Recommendation". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 1036–43. http://dx.doi.org/10.1609/aaai.v33i01.33011036.
Yang, Fan, Wei Li, Menglong Yang, Binbin Liang, and Jianwei Zhang. "Multi-Modal Disordered Representation Learning Network for Description-Based Person Search". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 15 (March 24, 2024): 16316–24. http://dx.doi.org/10.1609/aaai.v38i15.29567.
Jüttner, Martin, and Ingo Rentschler. "Imagery in multi-modal object learning". Behavioral and Brain Sciences 25, no. 2 (April 2002): 197–98. http://dx.doi.org/10.1017/s0140525x0238004x.
Tao, Rui, Meng Zhu, Haiyan Cao, and Honge Ren. "Fine-Grained Cross-Modal Semantic Consistency in Natural Conservation Image Data from a Multi-Task Perspective". Sensors 24, no. 10 (May 14, 2024): 3130. http://dx.doi.org/10.3390/s24103130.
Pugeault, Nicolas, Florentin Wörgötter, and Norbert Krüger. "Disambiguating Multi-Modal Scene Representations Using Perceptual Grouping Constraints". PLoS ONE 5, no. 6 (June 9, 2010): e10663. http://dx.doi.org/10.1371/journal.pone.0010663.
Lara, Bruno, Juan Manuel Rendon-Mancha, and Marcos A. Capistran. "Prediction of Undesired Situations based on Multi-Modal Representations". IEEE Latin America Transactions 5, no. 2 (May 2007): 103–8. http://dx.doi.org/10.1109/tla.2007.4381351.
Geng, Shijie, Peng Gao, Moitreya Chatterjee, Chiori Hori, Jonathan Le Roux, Yongfeng Zhang, Hongsheng Li, and Anoop Cherian. "Dynamic Graph Representation Learning for Video Dialog via Multi-Modal Shuffled Transformers". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 2 (May 18, 2021): 1415–23. http://dx.doi.org/10.1609/aaai.v35i2.16231.
Yan, Facheng, Mingshu Zhang, and Bin Wei. "Multimodal integration for fake news detection on social media platforms". MATEC Web of Conferences 395 (2024): 01013. http://dx.doi.org/10.1051/matecconf/202439501013.
Escobar-Grisales, Daniel, Cristian David Ríos-Urrego, and Juan Rafael Orozco-Arroyave. "Deep Learning and Artificial Intelligence Applied to Model Speech and Language in Parkinson's Disease". Diagnostics 13, no. 13 (June 25, 2023): 2163. http://dx.doi.org/10.3390/diagnostics13132163.
Zhai, Hanming, Xiaojun Lv, Zhiwen Hou, Xin Tong, and Fanliang Bu. "MLSFF: Multi-level structural features fusion for multi-modal knowledge graph completion". Mathematical Biosciences and Engineering 20, no. 8 (2023): 14096–116. http://dx.doi.org/10.3934/mbe.2023630.
Hua, Yan, Yingyun Yang, and Jianhe Du. "Deep Multi-Modal Metric Learning with Multi-Scale Correlation for Image-Text Retrieval". Electronics 9, no. 3 (March 10, 2020): 466. http://dx.doi.org/10.3390/electronics9030466.
Yang, Yiying, Fukun Yin, Wen Liu, Jiayuan Fan, Xin Chen, Gang Yu, and Tao Chen. "PM-INR: Prior-Rich Multi-Modal Implicit Large-Scale Scene Neural Representation". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 7 (March 24, 2024): 6594–602. http://dx.doi.org/10.1609/aaai.v38i7.28481.
Hu, Lianyu, Liqing Gao, Zekang Liu, Chi-Man Pun, and Wei Feng. "COMMA: Co-articulated Multi-Modal Learning". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 2238–46. http://dx.doi.org/10.1609/aaai.v38i3.27997.
Hill, Felix, Roi Reichart, and Anna Korhonen. "Multi-Modal Models for Concrete and Abstract Concept Meaning". Transactions of the Association for Computational Linguistics 2 (December 2014): 285–96. http://dx.doi.org/10.1162/tacl_a_00183.
Liu, Xuanwu, Guoxian Yu, Carlotta Domeniconi, Jun Wang, Yazhou Ren, and Maozu Guo. "Ranking-Based Deep Cross-Modal Hashing". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4400–4407. http://dx.doi.org/10.1609/aaai.v33i01.33014400.
Sezerer, Erhan, and Selma Tekir. "Incorporating Concreteness in Multi-Modal Language Models with Curriculum Learning". Applied Sciences 11, no. 17 (September 6, 2021): 8241. http://dx.doi.org/10.3390/app11178241.
Gou, Yingdong, Kexin Wang, Siwen Wei, and Changxin Shi. "GMDA: GCN-Based Multi-Modal Domain Adaptation for Real-Time Disaster Detection". International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 31, no. 06 (December 2023): 957–73. http://dx.doi.org/10.1142/s0218488523500435.
Jang, Jiho, Chaerin Kong, DongHyeon Jeon, Seonhoon Kim, and Nojun Kwak. "Unifying Vision-Language Representation Space with Single-Tower Transformer". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 1 (June 26, 2023): 980–88. http://dx.doi.org/10.1609/aaai.v37i1.25178.
Kabir, Anowarul, and Amarda Shehu. "GOProFormer: A Multi-Modal Transformer Method for Gene Ontology Protein Function Prediction". Biomolecules 12, no. 11 (November 18, 2022): 1709. http://dx.doi.org/10.3390/biom12111709.
Alam, Mohammad Arif Ul. "College Student Retention Risk Analysis from Educational Database Using Multi-Task Multi-Modal Neural Fusion". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (June 28, 2022): 12689–97. http://dx.doi.org/10.1609/aaai.v36i11.21545.
Qian, Shengsheng, Dizhan Xue, Huaiwen Zhang, Quan Fang, and Changsheng Xu. "Dual Adversarial Graph Neural Networks for Multi-label Cross-modal Retrieval". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 3 (May 18, 2021): 2440–48. http://dx.doi.org/10.1609/aaai.v35i3.16345.
Lu, Lyujian, Saad Elbeleidy, Lauren Zoe Baker, and Hua Wang. "Learning Multi-Modal Biomarker Representations via Globally Aligned Longitudinal Enrichments". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 01 (April 3, 2020): 817–24. http://dx.doi.org/10.1609/aaai.v34i01.5426.
Zhang, Heng, Vishal M. Patel, and Rama Chellappa. "Low-Rank and Joint Sparse Representations for Multi-Modal Recognition". IEEE Transactions on Image Processing 26, no. 10 (October 2017): 4741–52. http://dx.doi.org/10.1109/tip.2017.2721838.
Blown, Eric, and Tom G. K. Bryce. "Conceptual Coherence Revealed in Multi-Modal Representations of Astronomy Knowledge". International Journal of Science Education 32, no. 1 (June 30, 2009): 31–67. http://dx.doi.org/10.1080/09500690902974207.
Torasso, Pietro. "Multiple representations and multi-modal reasoning in medical diagnostic systems". Artificial Intelligence in Medicine 23, no. 1 (August 2001): 49–69. http://dx.doi.org/10.1016/s0933-3657(01)00075-6.
Wachinger, Christian, and Nassir Navab. "Entropy and Laplacian images: Structural representations for multi-modal registration". Medical Image Analysis 16, no. 1 (January 2012): 1–17. http://dx.doi.org/10.1016/j.media.2011.03.001.
Fang, Feiyi, Tao Zhou, Zhenbo Song, and Jianfeng Lu. "MMCAN: Multi-Modal Cross-Attention Network for Free-Space Detection with Uncalibrated Hyperspectral Sensors". Remote Sensing 15, no. 4 (February 20, 2023): 1142. http://dx.doi.org/10.3390/rs15041142.
Gao, Jingsheng, Jiacheng Ruan, Suncheng Xiang, Zefang Yu, Ke Ji, Mingye Xie, Ting Liu, and Yuzhuo Fu. "LAMM: Label Alignment for Multi-Modal Prompt Learning". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 1815–23. http://dx.doi.org/10.1609/aaai.v38i3.27950.
Bao, Peijun, Wenhan Yang, Boon Poh Ng, Meng Hwa Er, and Alex C. Kot. "Cross-Modal Label Contrastive Learning for Unsupervised Audio-Visual Event Localization". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 1 (June 26, 2023): 215–22. http://dx.doi.org/10.1609/aaai.v37i1.25093.
Liu, Shuang, Mei Li, Zhong Zhang, Baihua Xiao, and Tariq S. Durrani. "Multi-Evidence and Multi-Modal Fusion Network for Ground-Based Cloud Recognition". Remote Sensing 12, no. 3 (February 2, 2020): 464. http://dx.doi.org/10.3390/rs12030464.