Journal articles on the topic "Multi-Modal representations"
Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles
Consult the top 50 journal articles on the topic "Multi-Modal representations".
Next to every work in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a ".pdf" file and read the abstract of the work online, if the relevant parameters are provided in its metadata.
Browse journal articles from a wide variety of disciplines and compile an accurate bibliography.
Wu, Lianlong, Seewon Choi, Daniel Raggi, Aaron Stockdill, Grecia Garcia Garcia, Fiorenzo Colarusso, Peter C. H. Cheng, and Mateja Jamnik. "Generation of Visual Representations for Multi-Modal Mathematical Knowledge". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 21 (March 24, 2024): 23850–52. http://dx.doi.org/10.1609/aaai.v38i21.30586.
Zhang, Yi, Mingyuan Chen, Jundong Shen, and Chongjun Wang. "Tailor Versatile Multi-Modal Learning for Multi-Label Emotion Recognition". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 9100–9108. http://dx.doi.org/10.1609/aaai.v36i8.20895.
Zhang, Dong, Suzhong Wei, Shoushan Li, Hanqian Wu, Qiaoming Zhu, and Guodong Zhou. "Multi-modal Graph Fusion for Named Entity Recognition with Targeted Visual Guidance". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 16 (May 18, 2021): 14347–55. http://dx.doi.org/10.1609/aaai.v35i16.17687.
Liu, Hao, Jindong Han, Yanjie Fu, Jingbo Zhou, Xinjiang Lu, and Hui Xiong. "Multi-modal transportation recommendation with unified route representation learning". Proceedings of the VLDB Endowment 14, no. 3 (November 2020): 342–50. http://dx.doi.org/10.14778/3430915.3430924.
Wang, Huansha, Qinrang Liu, Ruiyang Huang, and Jianpeng Zhang. "Multi-Modal Entity Alignment Method Based on Feature Enhancement". Applied Sciences 13, no. 11 (June 1, 2023): 6747. http://dx.doi.org/10.3390/app13116747.
Wu, Tianxing, Chaoyu Gao, Lin Li, and Yuxiang Wang. "Leveraging Multi-Modal Information for Cross-Lingual Entity Matching across Knowledge Graphs". Applied Sciences 12, no. 19 (October 8, 2022): 10107. http://dx.doi.org/10.3390/app121910107.
Han, Ning, Jingjing Chen, Hao Zhang, Huanwen Wang, and Hao Chen. "Adversarial Multi-Grained Embedding Network for Cross-Modal Text-Video Retrieval". ACM Transactions on Multimedia Computing, Communications, and Applications 18, no. 2 (May 31, 2022): 1–23. http://dx.doi.org/10.1145/3483381.
Ying, Qichao, Xiaoxiao Hu, Yangming Zhou, Zhenxing Qian, Dan Zeng, and Shiming Ge. "Bootstrapping Multi-View Representations for Fake News Detection". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 4 (June 26, 2023): 5384–92. http://dx.doi.org/10.1609/aaai.v37i4.25670.
Huang, Yufeng, Jiji Tang, Zhuo Chen, Rongsheng Zhang, Xinfeng Zhang, Weijie Chen, Zeng Zhao, et al. "Structure-CLIP: Towards Scene Graph Knowledge to Enhance Multi-Modal Structured Representations". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 2417–25. http://dx.doi.org/10.1609/aaai.v38i3.28017.
van Tulder, Gijs, and Marleen de Bruijne. "Learning Cross-Modality Representations From Multi-Modal Images". IEEE Transactions on Medical Imaging 38, no. 2 (February 2019): 638–48. http://dx.doi.org/10.1109/tmi.2018.2868977.
Kiela, Douwe, and Stephen Clark. "Learning Neural Audio Embeddings for Grounding Semantics in Auditory Perception". Journal of Artificial Intelligence Research 60 (December 26, 2017): 1003–30. http://dx.doi.org/10.1613/jair.5665.
Cui, Xiaohui, Xiaolong Qu, Dongmei Li, Yu Yang, Yuxun Li, and Xiaoping Zhang. "MKGCN: Multi-Modal Knowledge Graph Convolutional Network for Music Recommender Systems". Electronics 12, no. 12 (June 15, 2023): 2688. http://dx.doi.org/10.3390/electronics12122688.
Dong, Bin, Songlei Jian, and Kai Lu. "Learning Multimodal Representations by Symmetrically Transferring Local Structures". Symmetry 12, no. 9 (September 13, 2020): 1504. http://dx.doi.org/10.3390/sym12091504.
Li, Yehao, Jiahao Fan, Yingwei Pan, Ting Yao, Weiyao Lin, and Tao Mei. "Uni-EDEN: Universal Encoder-Decoder Network by Multi-Granular Vision-Language Pre-training". ACM Transactions on Multimedia Computing, Communications, and Applications 18, no. 2 (May 31, 2022): 1–16. http://dx.doi.org/10.1145/3473140.
Gu, Zhihao, Jiangning Zhang, Liang Liu, Xu Chen, Jinlong Peng, Zhenye Gan, Guannan Jiang, Annan Shu, Yabiao Wang, and Lizhuang Ma. "Rethinking Reverse Distillation for Multi-Modal Anomaly Detection". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 8 (March 24, 2024): 8445–53. http://dx.doi.org/10.1609/aaai.v38i8.28687.
Wang, Zi, Chenglong Li, Aihua Zheng, Ran He, and Jin Tang. "Interact, Embed, and EnlargE: Boosting Modality-Specific Representations for Multi-Modal Person Re-identification". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (June 28, 2022): 2633–41. http://dx.doi.org/10.1609/aaai.v36i3.20165.
Wróblewska, Anna, Jacek Dąbrowski, Michał Pastuszak, Andrzej Michałowski, Michał Daniluk, Barbara Rychalska, Mikołaj Wieczorek, and Sylwia Sysko-Romańczuk. "Designing Multi-Modal Embedding Fusion-Based Recommender". Electronics 11, no. 9 (April 27, 2022): 1391. http://dx.doi.org/10.3390/electronics11091391.
He, Qibin. "Prompting Multi-Modal Image Segmentation with Semantic Grouping". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 2094–102. http://dx.doi.org/10.1609/aaai.v38i3.27981.
Liang, Meiyu, Junping Du, Zhengyang Liang, Yongwang Xing, Wei Huang, and Zhe Xue. "Self-Supervised Multi-Modal Knowledge Graph Contrastive Hashing for Cross-Modal Search". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 12 (March 24, 2024): 13744–53. http://dx.doi.org/10.1609/aaai.v38i12.29280.
Bodapati, Jyostna Devi, Veeranjaneyulu Naralasetti, Shaik Nagur Shareef, Saqib Hakak, Muhammad Bilal, Praveen Kumar Reddy Maddikunta, and Ohyun Jo. "Blended Multi-Modal Deep ConvNet Features for Diabetic Retinopathy Severity Prediction". Electronics 9, no. 6 (May 30, 2020): 914. http://dx.doi.org/10.3390/electronics9060914.
Liu, Hao, Ting Li, Renjun Hu, Yanjie Fu, Jingjing Gu, and Hui Xiong. "Joint Representation Learning for Multi-Modal Transportation Recommendation". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 1036–43. http://dx.doi.org/10.1609/aaai.v33i01.33011036.
Yang, Fan, Wei Li, Menglong Yang, Binbin Liang, and Jianwei Zhang. "Multi-Modal Disordered Representation Learning Network for Description-Based Person Search". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 15 (March 24, 2024): 16316–24. http://dx.doi.org/10.1609/aaai.v38i15.29567.
Jüttner, Martin, and Ingo Rentschler. "Imagery in multi-modal object learning". Behavioral and Brain Sciences 25, no. 2 (April 2002): 197–98. http://dx.doi.org/10.1017/s0140525x0238004x.
Tao, Rui, Meng Zhu, Haiyan Cao, and Honge Ren. "Fine-Grained Cross-Modal Semantic Consistency in Natural Conservation Image Data from a Multi-Task Perspective". Sensors 24, no. 10 (May 14, 2024): 3130. http://dx.doi.org/10.3390/s24103130.
Pugeault, Nicolas, Florentin Wörgötter, and Norbert Krüger. "Disambiguating Multi–Modal Scene Representations Using Perceptual Grouping Constraints". PLoS ONE 5, no. 6 (June 9, 2010): e10663. http://dx.doi.org/10.1371/journal.pone.0010663.
Lara, Bruno, Juan Manuel Rendon-Mancha, and Marcos A. Capistran. "Prediction of Undesired Situations based on Multi-Modal Representations". IEEE Latin America Transactions 5, no. 2 (May 2007): 103–8. http://dx.doi.org/10.1109/tla.2007.4381351.
Geng, Shijie, Peng Gao, Moitreya Chatterjee, Chiori Hori, Jonathan Le Roux, Yongfeng Zhang, Hongsheng Li, and Anoop Cherian. "Dynamic Graph Representation Learning for Video Dialog via Multi-Modal Shuffled Transformers". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 2 (May 18, 2021): 1415–23. http://dx.doi.org/10.1609/aaai.v35i2.16231.
Yan, Facheng, Mingshu Zhang, and Bin Wei. "Multimodal integration for fake news detection on social media platforms". MATEC Web of Conferences 395 (2024): 01013. http://dx.doi.org/10.1051/matecconf/202439501013.
Escobar-Grisales, Daniel, Cristian David Ríos-Urrego, and Juan Rafael Orozco-Arroyave. "Deep Learning and Artificial Intelligence Applied to Model Speech and Language in Parkinson’s Disease". Diagnostics 13, no. 13 (June 25, 2023): 2163. http://dx.doi.org/10.3390/diagnostics13132163.
Zhai, Hanming, Xiaojun Lv, Zhiwen Hou, Xin Tong, and Fanliang Bu. "MLSFF: Multi-level structural features fusion for multi-modal knowledge graph completion". Mathematical Biosciences and Engineering 20, no. 8 (2023): 14096–116. http://dx.doi.org/10.3934/mbe.2023630.
Hua, Yan, Yingyun Yang, and Jianhe Du. "Deep Multi-Modal Metric Learning with Multi-Scale Correlation for Image-Text Retrieval". Electronics 9, no. 3 (March 10, 2020): 466. http://dx.doi.org/10.3390/electronics9030466.
Yang, Yiying, Fukun Yin, Wen Liu, Jiayuan Fan, Xin Chen, Gang Yu, and Tao Chen. "PM-INR: Prior-Rich Multi-Modal Implicit Large-Scale Scene Neural Representation". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 7 (March 24, 2024): 6594–602. http://dx.doi.org/10.1609/aaai.v38i7.28481.
Hu, Lianyu, Liqing Gao, Zekang Liu, Chi-Man Pun, and Wei Feng. "COMMA: Co-articulated Multi-Modal Learning". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 2238–46. http://dx.doi.org/10.1609/aaai.v38i3.27997.
Hill, Felix, Roi Reichart, and Anna Korhonen. "Multi-Modal Models for Concrete and Abstract Concept Meaning". Transactions of the Association for Computational Linguistics 2 (December 2014): 285–96. http://dx.doi.org/10.1162/tacl_a_00183.
Liu, Xuanwu, Guoxian Yu, Carlotta Domeniconi, Jun Wang, Yazhou Ren, and Maozu Guo. "Ranking-Based Deep Cross-Modal Hashing". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4400–4407. http://dx.doi.org/10.1609/aaai.v33i01.33014400.
Sezerer, Erhan, and Selma Tekir. "Incorporating Concreteness in Multi-Modal Language Models with Curriculum Learning". Applied Sciences 11, no. 17 (September 6, 2021): 8241. http://dx.doi.org/10.3390/app11178241.
Gou, Yingdong, Kexin Wang, Siwen Wei, and Changxin Shi. "GMDA: GCN-Based Multi-Modal Domain Adaptation for Real-Time Disaster Detection". International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 31, no. 06 (December 2023): 957–73. http://dx.doi.org/10.1142/s0218488523500435.
Jang, Jiho, Chaerin Kong, DongHyeon Jeon, Seonhoon Kim, and Nojun Kwak. "Unifying Vision-Language Representation Space with Single-Tower Transformer". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 1 (June 26, 2023): 980–88. http://dx.doi.org/10.1609/aaai.v37i1.25178.
Kabir, Anowarul, and Amarda Shehu. "GOProFormer: A Multi-Modal Transformer Method for Gene Ontology Protein Function Prediction". Biomolecules 12, no. 11 (November 18, 2022): 1709. http://dx.doi.org/10.3390/biom12111709.
Alam, Mohammad Arif Ul. "College Student Retention Risk Analysis from Educational Database Using Multi-Task Multi-Modal Neural Fusion". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (June 28, 2022): 12689–97. http://dx.doi.org/10.1609/aaai.v36i11.21545.
Qian, Shengsheng, Dizhan Xue, Huaiwen Zhang, Quan Fang, and Changsheng Xu. "Dual Adversarial Graph Neural Networks for Multi-label Cross-modal Retrieval". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 3 (May 18, 2021): 2440–48. http://dx.doi.org/10.1609/aaai.v35i3.16345.
Lu, Lyujian, Saad Elbeleidy, Lauren Zoe Baker, and Hua Wang. "Learning Multi-Modal Biomarker Representations via Globally Aligned Longitudinal Enrichments". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 01 (April 3, 2020): 817–24. http://dx.doi.org/10.1609/aaai.v34i01.5426.
Zhang, Heng, Vishal M. Patel, and Rama Chellappa. "Low-Rank and Joint Sparse Representations for Multi-Modal Recognition". IEEE Transactions on Image Processing 26, no. 10 (October 2017): 4741–52. http://dx.doi.org/10.1109/tip.2017.2721838.
Blown, Eric, and Tom G. K. Bryce. "Conceptual Coherence Revealed in Multi‐Modal Representations of Astronomy Knowledge". International Journal of Science Education 32, no. 1 (June 30, 2009): 31–67. http://dx.doi.org/10.1080/09500690902974207.
Torasso, Pietro. "Multiple representations and multi-modal reasoning in medical diagnostic systems". Artificial Intelligence in Medicine 23, no. 1 (August 2001): 49–69. http://dx.doi.org/10.1016/s0933-3657(01)00075-6.
Wachinger, Christian, and Nassir Navab. "Entropy and Laplacian images: Structural representations for multi-modal registration". Medical Image Analysis 16, no. 1 (January 2012): 1–17. http://dx.doi.org/10.1016/j.media.2011.03.001.
Fang, Feiyi, Tao Zhou, Zhenbo Song, and Jianfeng Lu. "MMCAN: Multi-Modal Cross-Attention Network for Free-Space Detection with Uncalibrated Hyperspectral Sensors". Remote Sensing 15, no. 4 (February 20, 2023): 1142. http://dx.doi.org/10.3390/rs15041142.
Gao, Jingsheng, Jiacheng Ruan, Suncheng Xiang, Zefang Yu, Ke Ji, Mingye Xie, Ting Liu, and Yuzhuo Fu. "LAMM: Label Alignment for Multi-Modal Prompt Learning". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 1815–23. http://dx.doi.org/10.1609/aaai.v38i3.27950.
Bao, Peijun, Wenhan Yang, Boon Poh Ng, Meng Hwa Er, and Alex C. Kot. "Cross-Modal Label Contrastive Learning for Unsupervised Audio-Visual Event Localization". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 1 (June 26, 2023): 215–22. http://dx.doi.org/10.1609/aaai.v37i1.25093.
Liu, Shuang, Mei Li, Zhong Zhang, Baihua Xiao, and Tariq S. Durrani. "Multi-Evidence and Multi-Modal Fusion Network for Ground-Based Cloud Recognition". Remote Sensing 12, no. 3 (February 2, 2020): 464. http://dx.doi.org/10.3390/rs12030464.