Journal articles on the topic "Multimodal Transformers"
Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles
Consult the top 50 journal articles on the topic "Multimodal Transformers."
Next to every work in the bibliography there is an "Add to bibliography" button. Press it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online, whenever such details are available in the metadata.
Browse journal articles from a wide variety of disciplines and compile an accurate bibliography.
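The workflow described above (one bibliographic record rendered into several citation styles) can be sketched in a few lines. This is a hypothetical illustration, not the service's actual implementation: the `record` fields and the two simplified formatters (`chicago`, `apa`) are assumptions, and real style guides have many more rules (author-count thresholds, title casing, et al. handling).

```python
# Hypothetical sketch: turning one metadata record into two citation strings.
# Field names and formatting rules are simplified assumptions.
record = {
    "authors": ["Hendricks, Lisa Anne", "Mellor, John"],
    "title": "Decoupling the Role of Data, Attention, and Losses in Multimodal Transformers",
    "journal": "Transactions of the Association for Computational Linguistics",
    "volume": 9,
    "year": 2021,
    "pages": "570-85",
    "doi": "10.1162/tacl_a_00385",
}

def chicago(rec):
    # Simplified Chicago bibliography form:
    # Author and Author. "Title". Journal Vol (Year): Pages. DOI.
    authors = " and ".join(rec["authors"])
    return (f'{authors}. "{rec["title"]}". {rec["journal"]} '
            f'{rec["volume"]} ({rec["year"]}): {rec["pages"]}. '
            f'https://doi.org/{rec["doi"]}.')

def apa(rec):
    # Simplified APA form: Author, & Author. (Year). Title. Journal, Vol, Pages. DOI
    authors = ", & ".join(rec["authors"])
    return (f'{authors}. ({rec["year"]}). {rec["title"]}. '
            f'{rec["journal"]}, {rec["volume"]}, {rec["pages"]}. '
            f'https://doi.org/{rec["doi"]}')

print(chicago(record))
print(apa(record))
```

Keeping the record as structured metadata and treating each style as a pure formatting function is what lets a service like this offer APA, MLA, Harvard, Chicago, and Vancouver output from the same underlying entry.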
Jaiswal, Sushma, Harikumar Pallthadka, Rajesh P. Chinchewadi, and Tarun Jaiswal. "Optimized Image Captioning: Hybrid Transformers Vision Transformers and Convolutional Neural Networks: Enhanced with Beam Search". International Journal of Intelligent Systems and Applications 16, no. 2 (April 8, 2024): 53–61. http://dx.doi.org/10.5815/ijisa.2024.02.05.
Bayat, Nasrin, Jong-Hwan Kim, Renoa Choudhury, Ibrahim F. Kadhim, Zubaidah Al-Mashhadani, Mark Aldritz Dela Virgen, Reuben Latorre, Ricardo De La Paz, and Joon-Hyuk Park. "Vision Transformer Customized for Environment Detection and Collision Prediction to Assist the Visually Impaired". Journal of Imaging 9, no. 8 (August 15, 2023): 161. http://dx.doi.org/10.3390/jimaging9080161.
Hendricks, Lisa Anne, John Mellor, Rosalia Schneider, Jean-Baptiste Alayrac, and Aida Nematzadeh. "Decoupling the Role of Data, Attention, and Losses in Multimodal Transformers". Transactions of the Association for Computational Linguistics 9 (2021): 570–85. http://dx.doi.org/10.1162/tacl_a_00385.
Shao, Zilei. "A literature review on multimodal deep learning models for detecting mental disorders in conversational data: Pre-transformer and transformer-based approaches". Applied and Computational Engineering 18, no. 1 (October 23, 2023): 215–24. http://dx.doi.org/10.54254/2755-2721/18/20230993.
Wang, LeiChen, Simon Giebenhain, Carsten Anklam, and Bastian Goldluecke. "Radar Ghost Target Detection via Multimodal Transformers". IEEE Robotics and Automation Letters 6, no. 4 (October 2021): 7758–65. http://dx.doi.org/10.1109/lra.2021.3100176.
Salin, Emmanuelle, Badreddine Farah, Stéphane Ayache, and Benoit Favre. "Are Vision-Language Transformers Learning Multimodal Representations? A Probing Perspective". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (June 28, 2022): 11248–57. http://dx.doi.org/10.1609/aaai.v36i10.21375.
Sun, Qixuan, Nianhua Fang, Zhuo Liu, Liang Zhao, Youpeng Wen, and Hongxiang Lin. "HybridCTrm: Bridging CNN and Transformer for Multimodal Brain Image Segmentation". Journal of Healthcare Engineering 2021 (October 1, 2021): 1–10. http://dx.doi.org/10.1155/2021/7467261.
Yu Tian, Qiyang Zhao, Zine el abidine Kherroubi, Fouzi Boukhalfa, Kebin Wu, and Faouzi Bader. "Multimodal transformers for wireless communications: A case study in beam prediction". ITU Journal on Future and Evolving Technologies 4, no. 3 (September 5, 2023): 461–71. http://dx.doi.org/10.52953/jwra8095.
Chen, Yu, Ming Yin, Yu Li, and Qian Cai. "CSU-Net: A CNN-Transformer Parallel Network for Multimodal Brain Tumour Segmentation". Electronics 11, no. 14 (July 16, 2022): 2226. http://dx.doi.org/10.3390/electronics11142226.
Wang, Zhaokai, Renda Bao, Qi Wu, and Si Liu. "Confidence-aware Non-repetitive Multimodal Transformers for TextCaps". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 4 (May 18, 2021): 2835–43. http://dx.doi.org/10.1609/aaai.v35i4.16389.
Xu, Yifan, Huapeng Wei, Minxuan Lin, Yingying Deng, Kekai Sheng, Mengdan Zhang, Fan Tang, Weiming Dong, Feiyue Huang, and Changsheng Xu. "Transformers in computational visual media: A survey". Computational Visual Media 8, no. 1 (October 27, 2021): 33–62. http://dx.doi.org/10.1007/s41095-021-0247-3.
Abdine, Hadi, Michail Chatzianastasis, Costas Bouyioukos, and Michalis Vazirgiannis. "Prot2Text: Multimodal Protein’s Function Generation with GNNs and Transformers". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 10 (March 24, 2024): 10757–65. http://dx.doi.org/10.1609/aaai.v38i10.28948.
Sams, Andrew Steven, and Amalia Zahra. "Multimodal music emotion recognition in Indonesian songs based on CNN-LSTM, XLNet transformers". Bulletin of Electrical Engineering and Informatics 12, no. 1 (February 1, 2023): 355–64. http://dx.doi.org/10.11591/eei.v12i1.4231.
Nayak, Roshan, B. S. Ullas Kannantha, Kruthi S, and C. Gururaj. "Multimodal Offensive Meme Classification using Transformers and BiLSTM". International Journal of Engineering and Advanced Technology 11, no. 3 (February 28, 2022): 96–102. http://dx.doi.org/10.35940/ijeat.c3392.0211322.
Nadal, Clement, and Francois Pigache. "Multimodal electromechanical model of piezoelectric transformers by Hamilton's principle". IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control 56, no. 11 (November 2009): 2530–43. http://dx.doi.org/10.1109/tuffc.2009.1340.
Pezzelle, Sandro, Ece Takmaz, and Raquel Fernández. "Word Representation Learning in Multimodal Pre-Trained Transformers: An Intrinsic Evaluation". Transactions of the Association for Computational Linguistics 9 (2021): 1563–79. http://dx.doi.org/10.1162/tacl_a_00443.
Liang, Yi, Turdi Tohti, and Askar Hamdulla. "False Information Detection via Multimodal Feature Fusion and Multi-Classifier Hybrid Prediction". Algorithms 15, no. 4 (March 29, 2022): 119. http://dx.doi.org/10.3390/a15040119.
Zhang, Tianze. "Investigation on task effect analysis and optimization strategy of multimodal large model based on Transformers architecture for various languages". Applied and Computational Engineering 47, no. 1 (March 15, 2024): 213–24. http://dx.doi.org/10.54254/2755-2721/47/20241374.
Nia, Zahra Movahedi, Ali Ahmadi, Bruce Mellado, Jianhong Wu, James Orbinski, Ali Asgary, and Jude D. Kong. "Twitter-based gender recognition using transformers". Mathematical Biosciences and Engineering 20, no. 9 (2023): 15957–77. http://dx.doi.org/10.3934/mbe.2023711.
Park, Junhee, and Nammee Moon. "Design and Implementation of Attention Depression Detection Model Based on Multimodal Analysis". Sustainability 14, no. 6 (March 18, 2022): 3569. http://dx.doi.org/10.3390/su14063569.
Xiang, Yunfan, Xiangyu Tian, Yue Xu, Xiaokun Guan, and Zhengchao Chen. "EGMT-CD: Edge-Guided Multimodal Transformers Change Detection from Satellite and Aerial Images". Remote Sensing 16, no. 1 (December 25, 2023): 86. http://dx.doi.org/10.3390/rs16010086.
Ammour, Nassim, Yakoub Bazi, and Naif Alajlan. "Multimodal Approach for Enhancing Biometric Authentication". Journal of Imaging 9, no. 9 (August 22, 2023): 168. http://dx.doi.org/10.3390/jimaging9090168.
Segura-Bedmar, Isabel, and Santiago Alonso-Bartolome. "Multimodal Fake News Detection". Information 13, no. 6 (June 2, 2022): 284. http://dx.doi.org/10.3390/info13060284.
Mingyu, Ji, Zhou Jiawei, and Wei Ning. "AFR-BERT: Attention-based mechanism feature relevance fusion multimodal sentiment analysis model". PLOS ONE 17, no. 9 (September 9, 2022): e0273936. http://dx.doi.org/10.1371/journal.pone.0273936.
Argade, Dakshata, Vaishali Khairnar, Deepali Vora, Shruti Patil, Ketan Kotecha, and Sultan Alfarhood. "Multimodal Abstractive Summarization using bidirectional encoder representations from transformers with attention mechanism". Heliyon 10, no. 4 (February 2024): e26162. http://dx.doi.org/10.1016/j.heliyon.2024.e26162.
Gupta, Arpit, Himanshu Goyal, and Ishita Kohli. "Synthesis of Vision and Language: Multifaceted Image Captioning Application". INTERANTIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 07, no. 12 (December 23, 2023): 1–10. http://dx.doi.org/10.55041/ijsrem27770.
Zhong, Enmin, Carlos R. del-Blanco, Daniel Berjón, Fernando Jaureguizar, and Narciso García. "Real-Time Monocular Skeleton-Based Hand Gesture Recognition Using 3D-Jointsformer". Sensors 23, no. 16 (August 10, 2023): 7066. http://dx.doi.org/10.3390/s23167066.
Nikzad-Khasmakhi, N., M. A. Balafar, M. Reza Feizi-Derakhshi, and Cina Motamed. "BERTERS: Multimodal representation learning for expert recommendation system with transformers and graph embeddings". Chaos, Solitons & Fractals 151 (October 2021): 111260. http://dx.doi.org/10.1016/j.chaos.2021.111260.
Hazmoune, Samira, and Fateh Bougamouza. "Using transformers for multimodal emotion recognition: Taxonomies and state of the art review". Engineering Applications of Artificial Intelligence 133 (July 2024): 108339. http://dx.doi.org/10.1016/j.engappai.2024.108339.
Perifanos, Konstantinos, and Dionysis Goutsos. "Multimodal Hate Speech Detection in Greek Social Media". Multimodal Technologies and Interaction 5, no. 7 (June 29, 2021): 34. http://dx.doi.org/10.3390/mti5070034.
Li, Ning, Jie Chen, Nanxin Fu, Wenzhuo Xiao, Tianrun Ye, Chunming Gao, and Ping Zhang. "Leveraging Dual Variational Autoencoders and Generative Adversarial Networks for Enhanced Multimodal Interaction in Zero-Shot Learning". Electronics 13, no. 3 (January 29, 2024): 539. http://dx.doi.org/10.3390/electronics13030539.
Meng, Yiwen, William Speier, Michael K. Ong, and Corey W. Arnold. "Bidirectional Representation Learning From Transformers Using Multimodal Electronic Health Record Data to Predict Depression". IEEE Journal of Biomedical and Health Informatics 25, no. 8 (August 2021): 3121–29. http://dx.doi.org/10.1109/jbhi.2021.3063721.
Zhang, Mengna, Qisong Huang, and Hua Liu. "A Multimodal Data Analysis Approach to Social Media during Natural Disasters". Sustainability 14, no. 9 (May 5, 2022): 5536. http://dx.doi.org/10.3390/su14095536.
Macfadyen, Craig, Ajay Duraiswamy, and David Harris-Birtill. "Classification of hyper-scale multimodal imaging datasets". PLOS Digital Health 2, no. 12 (December 13, 2023): e0000191. http://dx.doi.org/10.1371/journal.pdig.0000191.
Svyatov, Kirill V., Daniil P. Kanin, and Sergey V. Sukhov. "THE CONTROL SYSTEM FOR UNMANNED VEHICLES BASED ON MULTIMODAL DATA AND IDENTIFIED FEATURE HIERARCHY". Автоматизация процессов управления 1, no. 67 (2022): 52–59. http://dx.doi.org/10.35752/1991-2927-2022-1-67-52-59.
Watson, Eleanor, Thiago Viana, and Shujun Zhang. "Augmented Behavioral Annotation Tools, with Application to Multimodal Datasets and Models: A Systematic Review". AI 4, no. 1 (January 28, 2023): 128–71. http://dx.doi.org/10.3390/ai4010007.
Zhang, Ke, Shunmin Wang, and Yuyuan Yu. "A TBGAV-Based Image-Text Multimodal Sentiment Analysis Method for Tourism Reviews". International Journal of Information Technology and Web Engineering 18, no. 1 (December 7, 2023): 1–17. http://dx.doi.org/10.4018/ijitwe.334595.
Luna-Jiménez, Cristina, Ricardo Kleinlein, David Griol, Zoraida Callejas, Juan M. Montero, and Fernando Fernández-Martínez. "A Proposal for Multimodal Emotion Recognition Using Aural Transformers and Action Units on RAVDESS Dataset". Applied Sciences 12, no. 1 (December 30, 2021): 327. http://dx.doi.org/10.3390/app12010327.
Singh, Aman, Ankit Gautam, Deepanshu, Gautam Kumar, Lokesh Kumar Meena, and Shashank Saroop. "Automated Minutes of Meeting Using a Multimodal Approach". International Journal for Research in Applied Science and Engineering Technology 11, no. 12 (December 31, 2023): 2059–63. http://dx.doi.org/10.22214/ijraset.2023.57787.
Wang, Zhecan, Haoxuan You, Liunian Harold Li, Alireza Zareian, Suji Park, Yiqing Liang, Kai-Wei Chang, and Shih-Fu Chang. "SGEITL: Scene Graph Enhanced Image-Text Learning for Visual Commonsense Reasoning". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 5 (June 28, 2022): 5914–22. http://dx.doi.org/10.1609/aaai.v36i5.20536.
Li, Weisheng, Yin Zhang, Guofen Wang, Yuping Huang, and Ruyue Li. "DFENet: A dual-branch feature enhanced network integrating transformers and convolutional feature learning for multimodal medical image fusion". Biomedical Signal Processing and Control 80 (February 2023): 104402. http://dx.doi.org/10.1016/j.bspc.2022.104402.
Liu, Mingfei, Bin Zhou, Jie Li, Xinyu Li, and Jinsong Bao. "A Knowledge Graph-Based Approach for Assembly Sequence Recommendations for Wind Turbines". Machines 11, no. 10 (September 27, 2023): 930. http://dx.doi.org/10.3390/machines11100930.
Kalra, Sakshi, Chitneedi Hemanth Sai Kumar, Yashvardhan Sharma, and Gajendra Singh Chauhan. "FakeExpose: Uncovering the falsity of news by targeting the multimodality via transfer learning". Journal of Information and Optimization Sciences 44, no. 3 (2023): 301–14. http://dx.doi.org/10.47974/jios-1342.
Coleman, Matthew, Joanna F. Dipnall, Myong Jung, and Lan Du. "PreRadE: Pretraining Tasks on Radiology Images and Reports Evaluation Framework". Mathematics 10, no. 24 (December 8, 2022): 4661. http://dx.doi.org/10.3390/math10244661.
Sriram, K., S. P. Mangaiyarkarasi, S. Sakthivel, and L. Jebaraj. "An Extensive Study Using the Beetle Swarm Method to Optimize Single and Multiple Objectives of Various Optimal Power Flow Problems". International Transactions on Electrical Energy Systems 2023 (March 30, 2023): 1–33. http://dx.doi.org/10.1155/2023/5779700.
Boehm, Kevin M., Antonio Marra, Jorge S. Reis-Filho, Sarat Chandarlapaty, Fresia Pareja, and Sohrab P. Shah. "Abstract 890: Multimodal modeling of digitized histopathology slides improves risk stratification in hormone receptor-positive breast cancer patients". Cancer Research 84, no. 6_Supplement (March 22, 2024): 890. http://dx.doi.org/10.1158/1538-7445.am2024-890.
Alam, Mohammad Arif Ul. "College Student Retention Risk Analysis from Educational Database Using Multi-Task Multi-Modal Neural Fusion". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (June 28, 2022): 12689–97. http://dx.doi.org/10.1609/aaai.v36i11.21545.
Wu, Di, Lihua Cao, Pengji Zhou, Ning Li, Yi Li, and Dejun Wang. "Infrared Small-Target Detection Based on Radiation Characteristics with a Multimodal Feature Fusion Network". Remote Sensing 14, no. 15 (July 25, 2022): 3570. http://dx.doi.org/10.3390/rs14153570.
de Hond, Anne, Marieke van Buchem, Claudio Fanconi, Mohana Roy, Douglas Blayney, Ilse Kant, Ewout Steyerberg, and Tina Hernandez-Boussard. "Predicting Depression Risk in Patients With Cancer Using Multimodal Data: Algorithm Development Study". JMIR Medical Informatics 12 (January 18, 2024): e51925. http://dx.doi.org/10.2196/51925.
Nooralahzadeh, Farhad, and Rico Sennrich. "Improving the Cross-Lingual Generalisation in Visual Question Answering". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 11 (June 26, 2023): 13419–27. http://dx.doi.org/10.1609/aaai.v37i11.26574.