Journal articles on the topic "Transformers Multimodaux"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 journal articles for your research on the topic "Transformers Multimodaux".
Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.
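The citation styles named above differ mainly in field order and punctuation. As a minimal illustrative sketch (the field names and formatting rules here are simplified assumptions, not this site's actual citation engine), one of the records from the list below can be rendered in APA- and Chicago-like styles from a structured dict:

```python
# Illustrative only: simplified citation formatting for a journal article.

def join_authors(authors):
    """Join author names with commas and a final 'and'."""
    if len(authors) == 1:
        return authors[0]
    return ", ".join(authors[:-1]) + ", and " + authors[-1]

def format_apa(ref):
    """Simplified APA-like journal citation string."""
    return (f"{join_authors(ref['authors'])} ({ref['year']}). "
            f"{ref['title']}. {ref['journal']}, "
            f"{ref['volume']}({ref['issue']}), {ref['pages']}. {ref['doi']}")

def format_chicago(ref):
    """Simplified Chicago-like journal citation string."""
    return (f"{join_authors(ref['authors'])}. \"{ref['title']}.\" "
            f"{ref['journal']} {ref['volume']}, no. {ref['issue']} "
            f"({ref['year']}): {ref['pages']}. {ref['doi']}")

# One record from the list below, as a structured dict.
csu_net = {
    "authors": ["Chen, Yu", "Ming Yin", "Yu Li", "Qian Cai"],
    "year": 2022,
    "title": "CSU-Net: A CNN-Transformer Parallel Network for Multimodal "
             "Brain Tumour Segmentation",
    "journal": "Electronics",
    "volume": 11,
    "issue": 14,
    "pages": "2226",
    "doi": "http://dx.doi.org/10.3390/electronics11142226",
}

print(format_chicago(csu_net))
print(format_apa(csu_net))
```

Real style guides add further rules (initials in APA, "et al." cutoffs, issue-less volumes), which is why a production formatter works from a full metadata record rather than string templates like these.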
Jaiswal, Sushma, Harikumar Pallthadka, Rajesh P. Chinchewadi, and Tarun Jaiswal. "Optimized Image Captioning: Hybrid Transformers Vision Transformers and Convolutional Neural Networks: Enhanced with Beam Search." International Journal of Intelligent Systems and Applications 16, no. 2 (April 8, 2024): 53–61. http://dx.doi.org/10.5815/ijisa.2024.02.05.
Bayat, Nasrin, Jong-Hwan Kim, Renoa Choudhury, Ibrahim F. Kadhim, Zubaidah Al-Mashhadani, Mark Aldritz Dela Virgen, Reuben Latorre, Ricardo De La Paz, and Joon-Hyuk Park. "Vision Transformer Customized for Environment Detection and Collision Prediction to Assist the Visually Impaired." Journal of Imaging 9, no. 8 (August 15, 2023): 161. http://dx.doi.org/10.3390/jimaging9080161.
Shao, Zilei. "A literature review on multimodal deep learning models for detecting mental disorders in conversational data: Pre-transformer and transformer-based approaches." Applied and Computational Engineering 18, no. 1 (October 23, 2023): 215–24. http://dx.doi.org/10.54254/2755-2721/18/20230993.
Hendricks, Lisa Anne, John Mellor, Rosalia Schneider, Jean-Baptiste Alayrac, and Aida Nematzadeh. "Decoupling the Role of Data, Attention, and Losses in Multimodal Transformers." Transactions of the Association for Computational Linguistics 9 (2021): 570–85. http://dx.doi.org/10.1162/tacl_a_00385.
Chen, Yu, Ming Yin, Yu Li, and Qian Cai. "CSU-Net: A CNN-Transformer Parallel Network for Multimodal Brain Tumour Segmentation." Electronics 11, no. 14 (July 16, 2022): 2226. http://dx.doi.org/10.3390/electronics11142226.
Sun, Qixuan, Nianhua Fang, Zhuo Liu, Liang Zhao, Youpeng Wen, and Hongxiang Lin. "HybridCTrm: Bridging CNN and Transformer for Multimodal Brain Image Segmentation." Journal of Healthcare Engineering 2021 (October 1, 2021): 1–10. http://dx.doi.org/10.1155/2021/7467261.
Tian, Yu, Qiyang Zhao, Zine el abidine Kherroubi, Fouzi Boukhalfa, Kebin Wu, and Faouzi Bader. "Multimodal transformers for wireless communications: A case study in beam prediction." ITU Journal on Future and Evolving Technologies 4, no. 3 (September 5, 2023): 461–71. http://dx.doi.org/10.52953/jwra8095.
Xu, Yifan, Huapeng Wei, Minxuan Lin, Yingying Deng, Kekai Sheng, Mengdan Zhang, Fan Tang, Weiming Dong, Feiyue Huang, and Changsheng Xu. "Transformers in computational visual media: A survey." Computational Visual Media 8, no. 1 (October 27, 2021): 33–62. http://dx.doi.org/10.1007/s41095-021-0247-3.
Zhong, Enmin, Carlos R. del-Blanco, Daniel Berjón, Fernando Jaureguizar, and Narciso García. "Real-Time Monocular Skeleton-Based Hand Gesture Recognition Using 3D-Jointsformer." Sensors 23, no. 16 (August 10, 2023): 7066. http://dx.doi.org/10.3390/s23167066.
Nia, Zahra Movahedi, Ali Ahmadi, Bruce Mellado, Jianhong Wu, James Orbinski, Ali Asgary, and Jude D. Kong. "Twitter-based gender recognition using transformers." Mathematical Biosciences and Engineering 20, no. 9 (2023): 15957–77. http://dx.doi.org/10.3934/mbe.2023711.
Liang, Yi, Turdi Tohti, and Askar Hamdulla. "False Information Detection via Multimodal Feature Fusion and Multi-Classifier Hybrid Prediction." Algorithms 15, no. 4 (March 29, 2022): 119. http://dx.doi.org/10.3390/a15040119.
Desai, Poorav, Tanmoy Chakraborty, and Md Shad Akhtar. "Nice Perfume. How Long Did You Marinate in It? Multimodal Sarcasm Explanation." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (June 28, 2022): 10563–71. http://dx.doi.org/10.1609/aaai.v36i10.21300.
Shan, Qishang, Xiangsen Wei, and Ziyun Cai. "Modality-Invariant and -Specific Representations with Crossmodal Transformer for Multimodal Sentiment Analysis." Journal of Physics: Conference Series 2224, no. 1 (April 1, 2022): 012024. http://dx.doi.org/10.1088/1742-6596/2224/1/012024.
Gupta, Arpit, Himanshu Goyal, and Ishita Kohli. "Synthesis of Vision and Language: Multifaceted Image Captioning Application." International Journal of Scientific Research in Engineering and Management 07, no. 12 (December 23, 2023): 1–10. http://dx.doi.org/10.55041/ijsrem27770.
Liu, Bo, Lejian He, Yafei Liu, Tianyao Yu, Yuejia Xiang, Li Zhu, and Weijian Ruan. "Transformer-Based Multimodal Infusion Dialogue Systems." Electronics 11, no. 20 (October 20, 2022): 3409. http://dx.doi.org/10.3390/electronics11203409.
Wang, LeiChen, Simon Giebenhain, Carsten Anklam, and Bastian Goldluecke. "Radar Ghost Target Detection via Multimodal Transformers." IEEE Robotics and Automation Letters 6, no. 4 (October 2021): 7758–65. http://dx.doi.org/10.1109/lra.2021.3100176.
Salin, Emmanuelle, Badreddine Farah, Stéphane Ayache, and Benoit Favre. "Are Vision-Language Transformers Learning Multimodal Representations? A Probing Perspective." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (June 28, 2022): 11248–57. http://dx.doi.org/10.1609/aaai.v36i10.21375.
Zhao, Bin, Maoguo Gong, and Xuelong Li. "Hierarchical multimodal transformer to summarize videos." Neurocomputing 468 (January 2022): 360–69. http://dx.doi.org/10.1016/j.neucom.2021.10.039.
Ding, Lan. "Online teaching emotion analysis based on GRU and nonlinear transformer algorithm." PeerJ Computer Science 9 (November 21, 2023): e1696. http://dx.doi.org/10.7717/peerj-cs.1696.
Wang, Zhaokai, Renda Bao, Qi Wu, and Si Liu. "Confidence-aware Non-repetitive Multimodal Transformers for TextCaps." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 4 (May 18, 2021): 2835–43. http://dx.doi.org/10.1609/aaai.v35i4.16389.
Xiang, Yunfan, Xiangyu Tian, Yue Xu, Xiaokun Guan, and Zhengchao Chen. "EGMT-CD: Edge-Guided Multimodal Transformers Change Detection from Satellite and Aerial Images." Remote Sensing 16, no. 1 (December 25, 2023): 86. http://dx.doi.org/10.3390/rs16010086.
Li, Ning, Jie Chen, Nanxin Fu, Wenzhuo Xiao, Tianrun Ye, Chunming Gao, and Ping Zhang. "Leveraging Dual Variational Autoencoders and Generative Adversarial Networks for Enhanced Multimodal Interaction in Zero-Shot Learning." Electronics 13, no. 3 (January 29, 2024): 539. http://dx.doi.org/10.3390/electronics13030539.
Abdine, Hadi, Michail Chatzianastasis, Costas Bouyioukos, and Michalis Vazirgiannis. "Prot2Text: Multimodal Protein’s Function Generation with GNNs and Transformers." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 10 (March 24, 2024): 10757–65. http://dx.doi.org/10.1609/aaai.v38i10.28948.
Li, Zuhe, Qingbing Guo, Chengyao Feng, Lujuan Deng, Qiuwen Zhang, Jianwei Zhang, Fengqin Wang, and Qian Sun. "Multimodal Sentiment Analysis Based on Interactive Transformer and Soft Mapping." Wireless Communications and Mobile Computing 2022 (February 3, 2022): 1–12. http://dx.doi.org/10.1155/2022/6243347.
Zhang, Yinshuo, Lei Chen, and Yuan Yuan. "Multimodal Fine-Grained Transformer Model for Pest Recognition." Electronics 12, no. 12 (June 10, 2023): 2620. http://dx.doi.org/10.3390/electronics12122620.
Zhang, Tianze. "Investigation on task effect analysis and optimization strategy of multimodal large model based on Transformers architecture for various languages." Applied and Computational Engineering 47, no. 1 (March 15, 2024): 213–24. http://dx.doi.org/10.54254/2755-2721/47/20241374.
Wang, Zhecan, Haoxuan You, Liunian Harold Li, Alireza Zareian, Suji Park, Yiqing Liang, Kai-Wei Chang, and Shih-Fu Chang. "SGEITL: Scene Graph Enhanced Image-Text Learning for Visual Commonsense Reasoning." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 5 (June 28, 2022): 5914–22. http://dx.doi.org/10.1609/aaai.v36i5.20536.
Wei, Jiaqi, Bin Jiang, and Yanxia Zhang. "Identification of Blue Horizontal Branch Stars with Multimodal Fusion." Publications of the Astronomical Society of the Pacific 135, no. 1050 (August 1, 2023): 084501. http://dx.doi.org/10.1088/1538-3873/acea43.
Sams, Andrew Steven, and Amalia Zahra. "Multimodal music emotion recognition in Indonesian songs based on CNN-LSTM, XLNet transformers." Bulletin of Electrical Engineering and Informatics 12, no. 1 (February 1, 2023): 355–64. http://dx.doi.org/10.11591/eei.v12i1.4231.
Nayak, Roshan, B. S. Ullas Kannantha, Kruthi S, and C. Gururaj. "Multimodal Offensive Meme Classification using Transformers and BiLSTM." International Journal of Engineering and Advanced Technology 11, no. 3 (February 28, 2022): 96–102. http://dx.doi.org/10.35940/ijeat.c3392.0211322.
Nadal, Clement, and Francois Pigache. "Multimodal electromechanical model of piezoelectric transformers by Hamilton's principle." IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control 56, no. 11 (November 2009): 2530–43. http://dx.doi.org/10.1109/tuffc.2009.1340.
Chen, Yunfan, Jinxing Ye, and Xiangkui Wan. "TF-YOLO: A Transformer–Fusion-Based YOLO Detector for Multimodal Pedestrian Detection in Autonomous Driving Scenes." World Electric Vehicle Journal 14, no. 12 (December 18, 2023): 352. http://dx.doi.org/10.3390/wevj14120352.
Pezzelle, Sandro, Ece Takmaz, and Raquel Fernández. "Word Representation Learning in Multimodal Pre-Trained Transformers: An Intrinsic Evaluation." Transactions of the Association for Computational Linguistics 9 (2021): 1563–79. http://dx.doi.org/10.1162/tacl_a_00443.
Zhang, Yingjie. "The current status and prospects of transformer in multimodality." Applied and Computational Engineering 11, no. 1 (September 25, 2023): 224–30. http://dx.doi.org/10.54254/2755-2721/11/20230240.
Hasan, Md Kamrul, Sangwu Lee, Wasifur Rahman, Amir Zadeh, Rada Mihalcea, Louis-Philippe Morency, and Ehsan Hoque. "Humor Knowledge Enriched Transformer for Understanding Multimodal Humor." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 14 (May 18, 2021): 12972–80. http://dx.doi.org/10.1609/aaai.v35i14.17534.
Zhang, Xiaojuan, Yongxiu Zhou, Peihao Peng, and Guoyan Wang. "A Novel Multimodal Species Distribution Model Fusing Remote Sensing Images and Environmental Features." Sustainability 14, no. 21 (October 28, 2022): 14034. http://dx.doi.org/10.3390/su142114034.
Zhang, Guihao, and Jiangzhong Cao. "Feature Fusion Based on Transformer for Cross-modal Retrieval." Journal of Physics: Conference Series 2558, no. 1 (August 1, 2023): 012012. http://dx.doi.org/10.1088/1742-6596/2558/1/012012.
Park, Junhee, and Nammee Moon. "Design and Implementation of Attention Depression Detection Model Based on Multimodal Analysis." Sustainability 14, no. 6 (March 18, 2022): 3569. http://dx.doi.org/10.3390/su14063569.
Qi, Qingfu, Liyuan Lin, Rui Zhang, and Chengrong Xue. "MEDT: Using Multimodal Encoding-Decoding Network as in Transformer for Multimodal Sentiment Analysis." IEEE Access 10 (2022): 28750–59. http://dx.doi.org/10.1109/access.2022.3157712.
Li, Lei, Xiang Chen, Shuofei Qiao, Feiyu Xiong, Huajun Chen, and Ningyu Zhang. "On Analyzing the Role of Image for Visual-Enhanced Relation Extraction (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 13 (June 26, 2023): 16254–55. http://dx.doi.org/10.1609/aaai.v37i13.26987.
Zhang, Junyan. "Research on transformer and attention in applied algorithms." Applied and Computational Engineering 13, no. 1 (October 23, 2023): 221–28. http://dx.doi.org/10.54254/2755-2721/13/20230737.
Gao, Jialin, Jianyu Chen, Jiaqi Wei, Bin Jiang, and A.-Li Luo. "Deep Multimodal Networks for M-type Star Classification with Paired Spectrum and Photometric Image." Publications of the Astronomical Society of the Pacific 135, no. 1046 (April 1, 2023): 044503. http://dx.doi.org/10.1088/1538-3873/acc7ca.
Zong, Daoming, and Shiliang Sun. "McOmet: Multimodal Fusion Transformer for Physical Audiovisual Commonsense Reasoning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 5 (June 26, 2023): 6621–29. http://dx.doi.org/10.1609/aaai.v37i5.25813.
JayaLakshmi, Gundabathina, Abburi Madhuri, Deepak Vasudevan, Balamuralikrishna Thati, Uddagiri Sirisha, and Surapaneni Phani Praveen. "Effective Disaster Management Through Transformer-Based Multimodal Tweet Classification." Revue d'Intelligence Artificielle 37, no. 5 (October 31, 2023): 1263–72. http://dx.doi.org/10.18280/ria.370519.
Liu, Biyuan, Huaixin Chen, Kun Li, and Michael Ying Yang. "Transformer-based multimodal change detection with multitask consistency constraints." Information Fusion 108 (August 2024): 102358. http://dx.doi.org/10.1016/j.inffus.2024.102358.
Abiyev, Rahib H., Mohamad Ziad Altabel, Manal Darwish, and Abdulkader Helwan. "A Multimodal Transformer Model for Recognition of Images from Complex Laparoscopic Surgical Videos." Diagnostics 14, no. 7 (March 23, 2024): 681. http://dx.doi.org/10.3390/diagnostics14070681.
Chaudhari, Aayushi, Chintan Bhatt, Achyut Krishna, and Carlos M. Travieso-González. "Facial Emotion Recognition with Inter-Modality-Attention-Transformer-Based Self-Supervised Learning." Electronics 12, no. 2 (January 5, 2023): 288. http://dx.doi.org/10.3390/electronics12020288.
Xu, Zhen, David R. So, and Andrew M. Dai. "MUFASA: Multimodal Fusion Architecture Search for Electronic Health Records." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (May 18, 2021): 10532–40. http://dx.doi.org/10.1609/aaai.v35i12.17260.
Ilmi, Yuslimu, Pratiwi Retnaningdyah, and Ahmad Munir. "Exploring Digital Multimodal Text in EFL Classroom: Transformed Practice in Multiliteracies Pedagogy." Linguistic, English Education and Art (LEEA) Journal 4, no. 1 (December 28, 2020): 99–108. http://dx.doi.org/10.31539/leea.v4i1.1416.
Ammour, Nassim, Yakoub Bazi, and Naif Alajlan. "Multimodal Approach for Enhancing Biometric Authentication." Journal of Imaging 9, no. 9 (August 22, 2023): 168. http://dx.doi.org/10.3390/jimaging9090168.