Journal articles on the topic "Multi-Modal representations"
Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 journal articles for your research on the topic "Multi-Modal representations".
Next to each source in the list of references, there is an "Add to bibliography" button. Press on it, and we will generate automatically the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scientific publication in .pdf format and read online the abstract of the work if it is available in the metadata.
Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.
Wu, Lianlong, Seewon Choi, Daniel Raggi, Aaron Stockdill, Grecia Garcia Garcia, Fiorenzo Colarusso, Peter C. H. Cheng, and Mateja Jamnik. "Generation of Visual Representations for Multi-Modal Mathematical Knowledge". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 21 (March 24, 2024): 23850–52. http://dx.doi.org/10.1609/aaai.v38i21.30586.
Zhang, Yi, Mingyuan Chen, Jundong Shen, and Chongjun Wang. "Tailor Versatile Multi-Modal Learning for Multi-Label Emotion Recognition". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 9100–9108. http://dx.doi.org/10.1609/aaai.v36i8.20895.
Zhang, Dong, Suzhong Wei, Shoushan Li, Hanqian Wu, Qiaoming Zhu, and Guodong Zhou. "Multi-modal Graph Fusion for Named Entity Recognition with Targeted Visual Guidance". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 16 (May 18, 2021): 14347–55. http://dx.doi.org/10.1609/aaai.v35i16.17687.
Liu, Hao, Jindong Han, Yanjie Fu, Jingbo Zhou, Xinjiang Lu, and Hui Xiong. "Multi-modal transportation recommendation with unified route representation learning". Proceedings of the VLDB Endowment 14, no. 3 (November 2020): 342–50. http://dx.doi.org/10.14778/3430915.3430924.
Wang, Huansha, Qinrang Liu, Ruiyang Huang, and Jianpeng Zhang. "Multi-Modal Entity Alignment Method Based on Feature Enhancement". Applied Sciences 13, no. 11 (June 1, 2023): 6747. http://dx.doi.org/10.3390/app13116747.
Wu, Tianxing, Chaoyu Gao, Lin Li, and Yuxiang Wang. "Leveraging Multi-Modal Information for Cross-Lingual Entity Matching across Knowledge Graphs". Applied Sciences 12, no. 19 (October 8, 2022): 10107. http://dx.doi.org/10.3390/app121910107.
Han, Ning, Jingjing Chen, Hao Zhang, Huanwen Wang, and Hao Chen. "Adversarial Multi-Grained Embedding Network for Cross-Modal Text-Video Retrieval". ACM Transactions on Multimedia Computing, Communications, and Applications 18, no. 2 (May 31, 2022): 1–23. http://dx.doi.org/10.1145/3483381.
Ying, Qichao, Xiaoxiao Hu, Yangming Zhou, Zhenxing Qian, Dan Zeng, and Shiming Ge. "Bootstrapping Multi-View Representations for Fake News Detection". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 4 (June 26, 2023): 5384–92. http://dx.doi.org/10.1609/aaai.v37i4.25670.
Huang, Yufeng, Jiji Tang, Zhuo Chen, Rongsheng Zhang, Xinfeng Zhang, Weijie Chen, Zeng Zhao et al. "Structure-CLIP: Towards Scene Graph Knowledge to Enhance Multi-Modal Structured Representations". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 2417–25. http://dx.doi.org/10.1609/aaai.v38i3.28017.
van Tulder, Gijs, and Marleen de Bruijne. "Learning Cross-Modality Representations From Multi-Modal Images". IEEE Transactions on Medical Imaging 38, no. 2 (February 2019): 638–48. http://dx.doi.org/10.1109/tmi.2018.2868977.
Kiela, Douwe, and Stephen Clark. "Learning Neural Audio Embeddings for Grounding Semantics in Auditory Perception". Journal of Artificial Intelligence Research 60 (December 26, 2017): 1003–30. http://dx.doi.org/10.1613/jair.5665.
Cui, Xiaohui, Xiaolong Qu, Dongmei Li, Yu Yang, Yuxun Li, and Xiaoping Zhang. "MKGCN: Multi-Modal Knowledge Graph Convolutional Network for Music Recommender Systems". Electronics 12, no. 12 (June 15, 2023): 2688. http://dx.doi.org/10.3390/electronics12122688.
Dong, Bin, Songlei Jian, and Kai Lu. "Learning Multimodal Representations by Symmetrically Transferring Local Structures". Symmetry 12, no. 9 (September 13, 2020): 1504. http://dx.doi.org/10.3390/sym12091504.
Li, Yehao, Jiahao Fan, Yingwei Pan, Ting Yao, Weiyao Lin, and Tao Mei. "Uni-EDEN: Universal Encoder-Decoder Network by Multi-Granular Vision-Language Pre-training". ACM Transactions on Multimedia Computing, Communications, and Applications 18, no. 2 (May 31, 2022): 1–16. http://dx.doi.org/10.1145/3473140.
Gu, Zhihao, Jiangning Zhang, Liang Liu, Xu Chen, Jinlong Peng, Zhenye Gan, Guannan Jiang, Annan Shu, Yabiao Wang, and Lizhuang Ma. "Rethinking Reverse Distillation for Multi-Modal Anomaly Detection". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 8 (March 24, 2024): 8445–53. http://dx.doi.org/10.1609/aaai.v38i8.28687.
Wang, Zi, Chenglong Li, Aihua Zheng, Ran He, and Jin Tang. "Interact, Embed, and EnlargE: Boosting Modality-Specific Representations for Multi-Modal Person Re-identification". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (June 28, 2022): 2633–41. http://dx.doi.org/10.1609/aaai.v36i3.20165.
Wróblewska, Anna, Jacek Dąbrowski, Michał Pastuszak, Andrzej Michałowski, Michał Daniluk, Barbara Rychalska, Mikołaj Wieczorek, and Sylwia Sysko-Romańczuk. "Designing Multi-Modal Embedding Fusion-Based Recommender". Electronics 11, no. 9 (April 27, 2022): 1391. http://dx.doi.org/10.3390/electronics11091391.
He, Qibin. "Prompting Multi-Modal Image Segmentation with Semantic Grouping". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 2094–102. http://dx.doi.org/10.1609/aaai.v38i3.27981.
Liang, Meiyu, Junping Du, Zhengyang Liang, Yongwang Xing, Wei Huang, and Zhe Xue. "Self-Supervised Multi-Modal Knowledge Graph Contrastive Hashing for Cross-Modal Search". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 12 (March 24, 2024): 13744–53. http://dx.doi.org/10.1609/aaai.v38i12.29280.
Bodapati, Jyostna Devi, Veeranjaneyulu Naralasetti, Shaik Nagur Shareef, Saqib Hakak, Muhammad Bilal, Praveen Kumar Reddy Maddikunta, and Ohyun Jo. "Blended Multi-Modal Deep ConvNet Features for Diabetic Retinopathy Severity Prediction". Electronics 9, no. 6 (May 30, 2020): 914. http://dx.doi.org/10.3390/electronics9060914.
Liu, Hao, Ting Li, Renjun Hu, Yanjie Fu, Jingjing Gu, and Hui Xiong. "Joint Representation Learning for Multi-Modal Transportation Recommendation". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 1036–43. http://dx.doi.org/10.1609/aaai.v33i01.33011036.
Yang, Fan, Wei Li, Menglong Yang, Binbin Liang, and Jianwei Zhang. "Multi-Modal Disordered Representation Learning Network for Description-Based Person Search". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 15 (March 24, 2024): 16316–24. http://dx.doi.org/10.1609/aaai.v38i15.29567.
Jüttner, Martin, and Ingo Rentschler. "Imagery in multi-modal object learning". Behavioral and Brain Sciences 25, no. 2 (April 2002): 197–98. http://dx.doi.org/10.1017/s0140525x0238004x.
Tao, Rui, Meng Zhu, Haiyan Cao, and Honge Ren. "Fine-Grained Cross-Modal Semantic Consistency in Natural Conservation Image Data from a Multi-Task Perspective". Sensors 24, no. 10 (May 14, 2024): 3130. http://dx.doi.org/10.3390/s24103130.
Pugeault, Nicolas, Florentin Wörgötter, and Norbert Krüger. "Disambiguating Multi-Modal Scene Representations Using Perceptual Grouping Constraints". PLoS ONE 5, no. 6 (June 9, 2010): e10663. http://dx.doi.org/10.1371/journal.pone.0010663.
Lara, Bruno, Juan Manuel Rendon-Mancha, and Marcos A. Capistran. "Prediction of Undesired Situations based on Multi-Modal Representations". IEEE Latin America Transactions 5, no. 2 (May 2007): 103–8. http://dx.doi.org/10.1109/tla.2007.4381351.
Geng, Shijie, Peng Gao, Moitreya Chatterjee, Chiori Hori, Jonathan Le Roux, Yongfeng Zhang, Hongsheng Li, and Anoop Cherian. "Dynamic Graph Representation Learning for Video Dialog via Multi-Modal Shuffled Transformers". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 2 (May 18, 2021): 1415–23. http://dx.doi.org/10.1609/aaai.v35i2.16231.
Yan, Facheng, Mingshu Zhang, and Bin Wei. "Multimodal integration for fake news detection on social media platforms". MATEC Web of Conferences 395 (2024): 01013. http://dx.doi.org/10.1051/matecconf/202439501013.
Escobar-Grisales, Daniel, Cristian David Ríos-Urrego, and Juan Rafael Orozco-Arroyave. "Deep Learning and Artificial Intelligence Applied to Model Speech and Language in Parkinson’s Disease". Diagnostics 13, no. 13 (June 25, 2023): 2163. http://dx.doi.org/10.3390/diagnostics13132163.
Zhai, Hanming, Xiaojun Lv, Zhiwen Hou, Xin Tong, and Fanliang Bu. "MLSFF: Multi-level structural features fusion for multi-modal knowledge graph completion". Mathematical Biosciences and Engineering 20, no. 8 (2023): 14096–116. http://dx.doi.org/10.3934/mbe.2023630.
Hua, Yan, Yingyun Yang, and Jianhe Du. "Deep Multi-Modal Metric Learning with Multi-Scale Correlation for Image-Text Retrieval". Electronics 9, no. 3 (March 10, 2020): 466. http://dx.doi.org/10.3390/electronics9030466.
Yang, Yiying, Fukun Yin, Wen Liu, Jiayuan Fan, Xin Chen, Gang Yu, and Tao Chen. "PM-INR: Prior-Rich Multi-Modal Implicit Large-Scale Scene Neural Representation". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 7 (March 24, 2024): 6594–602. http://dx.doi.org/10.1609/aaai.v38i7.28481.
Hu, Lianyu, Liqing Gao, Zekang Liu, Chi-Man Pun, and Wei Feng. "COMMA: Co-articulated Multi-Modal Learning". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 2238–46. http://dx.doi.org/10.1609/aaai.v38i3.27997.
Hill, Felix, Roi Reichart, and Anna Korhonen. "Multi-Modal Models for Concrete and Abstract Concept Meaning". Transactions of the Association for Computational Linguistics 2 (December 2014): 285–96. http://dx.doi.org/10.1162/tacl_a_00183.
Liu, Xuanwu, Guoxian Yu, Carlotta Domeniconi, Jun Wang, Yazhou Ren, and Maozu Guo. "Ranking-Based Deep Cross-Modal Hashing". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4400–4407. http://dx.doi.org/10.1609/aaai.v33i01.33014400.
Sezerer, Erhan, and Selma Tekir. "Incorporating Concreteness in Multi-Modal Language Models with Curriculum Learning". Applied Sciences 11, no. 17 (September 6, 2021): 8241. http://dx.doi.org/10.3390/app11178241.
Gou, Yingdong, Kexin Wang, Siwen Wei, and Changxin Shi. "GMDA: GCN-Based Multi-Modal Domain Adaptation for Real-Time Disaster Detection". International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 31, no. 06 (December 2023): 957–73. http://dx.doi.org/10.1142/s0218488523500435.
Jang, Jiho, Chaerin Kong, DongHyeon Jeon, Seonhoon Kim, and Nojun Kwak. "Unifying Vision-Language Representation Space with Single-Tower Transformer". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 1 (June 26, 2023): 980–88. http://dx.doi.org/10.1609/aaai.v37i1.25178.
Kabir, Anowarul, and Amarda Shehu. "GOProFormer: A Multi-Modal Transformer Method for Gene Ontology Protein Function Prediction". Biomolecules 12, no. 11 (November 18, 2022): 1709. http://dx.doi.org/10.3390/biom12111709.
Alam, Mohammad Arif Ul. "College Student Retention Risk Analysis from Educational Database Using Multi-Task Multi-Modal Neural Fusion". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (June 28, 2022): 12689–97. http://dx.doi.org/10.1609/aaai.v36i11.21545.
Qian, Shengsheng, Dizhan Xue, Huaiwen Zhang, Quan Fang, and Changsheng Xu. "Dual Adversarial Graph Neural Networks for Multi-label Cross-modal Retrieval". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 3 (May 18, 2021): 2440–48. http://dx.doi.org/10.1609/aaai.v35i3.16345.
Lu, Lyujian, Saad Elbeleidy, Lauren Zoe Baker, and Hua Wang. "Learning Multi-Modal Biomarker Representations via Globally Aligned Longitudinal Enrichments". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 01 (April 3, 2020): 817–24. http://dx.doi.org/10.1609/aaai.v34i01.5426.
Zhang, Heng, Vishal M. Patel, and Rama Chellappa. "Low-Rank and Joint Sparse Representations for Multi-Modal Recognition". IEEE Transactions on Image Processing 26, no. 10 (October 2017): 4741–52. http://dx.doi.org/10.1109/tip.2017.2721838.
Blown, Eric, and Tom G. K. Bryce. "Conceptual Coherence Revealed in Multi‐Modal Representations of Astronomy Knowledge". International Journal of Science Education 32, no. 1 (June 30, 2009): 31–67. http://dx.doi.org/10.1080/09500690902974207.
Torasso, Pietro. "Multiple representations and multi-modal reasoning in medical diagnostic systems". Artificial Intelligence in Medicine 23, no. 1 (August 2001): 49–69. http://dx.doi.org/10.1016/s0933-3657(01)00075-6.
Wachinger, Christian, and Nassir Navab. "Entropy and Laplacian images: Structural representations for multi-modal registration". Medical Image Analysis 16, no. 1 (January 2012): 1–17. http://dx.doi.org/10.1016/j.media.2011.03.001.
Fang, Feiyi, Tao Zhou, Zhenbo Song, and Jianfeng Lu. "MMCAN: Multi-Modal Cross-Attention Network for Free-Space Detection with Uncalibrated Hyperspectral Sensors". Remote Sensing 15, no. 4 (February 20, 2023): 1142. http://dx.doi.org/10.3390/rs15041142.
Gao, Jingsheng, Jiacheng Ruan, Suncheng Xiang, Zefang Yu, Ke Ji, Mingye Xie, Ting Liu, and Yuzhuo Fu. "LAMM: Label Alignment for Multi-Modal Prompt Learning". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 1815–23. http://dx.doi.org/10.1609/aaai.v38i3.27950.
Bao, Peijun, Wenhan Yang, Boon Poh Ng, Meng Hwa Er, and Alex C. Kot. "Cross-Modal Label Contrastive Learning for Unsupervised Audio-Visual Event Localization". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 1 (June 26, 2023): 215–22. http://dx.doi.org/10.1609/aaai.v37i1.25093.
Liu, Shuang, Mei Li, Zhong Zhang, Baihua Xiao, and Tariq S. Durrani. "Multi-Evidence and Multi-Modal Fusion Network for Ground-Based Cloud Recognition". Remote Sensing 12, no. 3 (February 2, 2020): 464. http://dx.doi.org/10.3390/rs12030464.