Journal articles on the topic "Visual question generation"
Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 journal articles for studies on the topic "Visual question generation".
Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a .pdf and read its abstract online, whenever these are available in the metadata.
Browse journal articles from a wide range of scientific fields and compile an accurate bibliography.
Patil, Charulata, and Manasi Patwardhan. "Visual Question Generation". ACM Computing Surveys 53, no. 3 (July 5, 2020): 1–22. http://dx.doi.org/10.1145/3383465.
Liu, Hongfei, Jiali Chen, Wenhao Fang, Jiayuan Xie, and Yi Cai. "Category-Guided Visual Question Generation (Student Abstract)". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 13 (June 26, 2023): 16262–63. http://dx.doi.org/10.1609/aaai.v37i13.26991.
Mi, Li, Syrielle Montariol, Javiera Castillo Navarro, Xianjie Dai, Antoine Bosselut, and Devis Tuia. "ConVQG: Contrastive Visual Question Generation with Multimodal Guidance". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 5 (March 24, 2024): 4207–15. http://dx.doi.org/10.1609/aaai.v38i5.28216.
Sarrouti, Mourad, Asma Ben Abacha, and Dina Demner-Fushman. "Goal-Driven Visual Question Generation from Radiology Images". Information 12, no. 8 (August 20, 2021): 334. http://dx.doi.org/10.3390/info12080334.
Pang, Wei, and Xiaojie Wang. "Visual Dialogue State Tracking for Question Generation". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 11831–38. http://dx.doi.org/10.1609/aaai.v34i07.6856.
Kamala, M. "Visual Question Generation from Remote Sensing Images Using Gemini API". International Journal for Research in Applied Science and Engineering Technology 12, no. 3 (March 31, 2024): 2924–29. http://dx.doi.org/10.22214/ijraset.2024.59537.
Kachare, Atul, Mukesh Kalla, and Ashutosh Gupta. "Visual Question Generation Answering (VQG-VQA) using Machine Learning Models". WSEAS TRANSACTIONS ON SYSTEMS 22 (June 28, 2023): 663–70. http://dx.doi.org/10.37394/23202.2023.22.67.
Zhu, He, Ren Togo, Takahiro Ogawa, and Miki Haseyama. "Diversity Learning Based on Multi-Latent Space for Medical Image Visual Question Generation". Sensors 23, no. 3 (January 17, 2023): 1057. http://dx.doi.org/10.3390/s23031057.
Boukhers, Zeyd, Timo Hartmann, and Jan Jürjens. "COIN: Counterfactual Image Generation for Visual Question Answering Interpretation". Sensors 22, no. 6 (March 14, 2022): 2245. http://dx.doi.org/10.3390/s22062245.
Guo, Zihan, Dezhi Han, and Kuan-Ching Li. "Double-layer affective visual question answering network". Computer Science and Information Systems, no. 00 (2020): 38. http://dx.doi.org/10.2298/csis200515038g.
Shridhar, Mohit, Dixant Mittal, and David Hsu. "INGRESS: Interactive visual grounding of referring expressions". International Journal of Robotics Research 39, no. 2-3 (January 2, 2020): 217–32. http://dx.doi.org/10.1177/0278364919897133.
Kim, Incheol. "Visual Experience-Based Question Answering with Complex Multimodal Environments". Mathematical Problems in Engineering 2020 (November 19, 2020): 1–18. http://dx.doi.org/10.1155/2020/8567271.
Singh, Anjali, Ruhi Sharma Mittal, Shubham Atreja, Mourvi Sharma, Seema Nagar, Prasenjit Dey, and Mohit Jain. "Automatic Generation of Leveled Visual Assessments for Young Learners". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 9713–20. http://dx.doi.org/10.1609/aaai.v33i01.33019713.
Kim, Jung-Jun, Dong-Gyu Lee, Jialin Wu, Hong-Gyu Jung, and Seong-Whan Lee. "Visual question answering based on local-scene-aware referring expression generation". Neural Networks 139 (July 2021): 158–67. http://dx.doi.org/10.1016/j.neunet.2021.02.001.
Liu, Yuhang, Daowan Peng, Wei Wei, Yuanyuan Fu, Wenfeng Xie, and Dangyang Chen. "Detection-Based Intermediate Supervision for Visual Question Answering". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 12 (March 24, 2024): 14061–68. http://dx.doi.org/10.1609/aaai.v38i12.29315.
Ghosh, Akash, Arkadeep Acharya, Raghav Jain, Sriparna Saha, Aman Chadha, and Setu Sinha. "CLIPSyntel: CLIP and LLM Synergy for Multimodal Question Summarization in Healthcare". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 20 (March 24, 2024): 22031–39. http://dx.doi.org/10.1609/aaai.v38i20.30206.
Zhang, Weifeng, Jing Yu, Wenhong Zhao, and Chuan Ran. "DMRFNet: Deep Multimodal Reasoning and Fusion for Visual Question Answering and explanation generation". Information Fusion 72 (August 2021): 70–79. http://dx.doi.org/10.1016/j.inffus.2021.02.006.
Zhang, Lizong, Haojun Yin, Bei Hui, Sijuan Liu, and Wei Zhang. "Knowledge-Based Scene Graph Generation with Visual Contextual Dependency". Mathematics 10, no. 14 (July 20, 2022): 2525. http://dx.doi.org/10.3390/math10142525.
Zhu, He, Ren Togo, Takahiro Ogawa, and Miki Haseyama. "Multimodal Natural Language Explanation Generation for Visual Question Answering Based on Multiple Reference Data". Electronics 12, no. 10 (May 10, 2023): 2183. http://dx.doi.org/10.3390/electronics12102183.
Kruchinin, Vladimir, and Vladimir Kuzovkin. "Overview of Existing Methods for Automatic Generation of Tasks with Conditions in Natural Language". Computer tools in education, no. 1 (March 28, 2022): 85–96. http://dx.doi.org/10.32603/2071-2340-2022-1-85-96.
Li, Xiaochuan, Baoyu Fan, Runze Zhang, Liang Jin, Di Wang, Zhenhua Guo, Yaqian Zhao, and Rengang Li. "Image Content Generation with Causal Reasoning". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 12 (March 24, 2024): 13646–54. http://dx.doi.org/10.1609/aaai.v38i12.29269.
Tanaka, Ryota, Kyosuke Nishida, and Sen Yoshida. "VisualMRC: Machine Reading Comprehension on Document Images". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 15 (May 18, 2021): 13878–88. http://dx.doi.org/10.1609/aaai.v35i15.17635.
Wörgötter, Florentin, Ernst Niebur, and Christof Koch. "Generation of Direction Selectivity by Isotropic Intracortical Connections". Neural Computation 4, no. 3 (May 1992): 332–40. http://dx.doi.org/10.1162/neco.1992.4.3.332.
Wang, Junjue, Zhuo Zheng, Zihang Chen, Ailong Ma, and Yanfei Zhong. "EarthVQA: Towards Queryable Earth via Relational Reasoning-Based Remote Sensing Visual Question Answering". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 6 (March 24, 2024): 5481–89. http://dx.doi.org/10.1609/aaai.v38i6.28357.
Abrecht, Stephanie, Lydia Gauerhof, Christoph Gladisch, Konrad Groh, Christian Heinzemann, and Matthias Woehrle. "Testing Deep Learning-based Visual Perception for Automated Driving". ACM Transactions on Cyber-Physical Systems 5, no. 4 (October 31, 2021): 1–28. http://dx.doi.org/10.1145/3450356.
Cheng, Zesen, Kehan Li, Peng Jin, Siheng Li, Xiangyang Ji, Li Yuan, Chang Liu, and Jie Chen. "Parallel Vertex Diffusion for Unified Visual Grounding". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 2 (March 24, 2024): 1326–34. http://dx.doi.org/10.1609/aaai.v38i2.27896.
Khademi, Mahmoud, and Oliver Schulte. "Deep Generative Probabilistic Graph Neural Networks for Scene Graph Generation". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 11237–45. http://dx.doi.org/10.1609/aaai.v34i07.6783.
BELZ, A., T. L. BERG, and L. YU. "From image to language and back again". Natural Language Engineering 24, no. 3 (April 23, 2018): 325–62. http://dx.doi.org/10.1017/s1351324918000086.
Liu, Xiulong, Sudipta Paul, Moitreya Chatterjee, and Anoop Cherian. "CAVEN: An Embodied Conversational Agent for Efficient Audio-Visual Navigation in Noisy Environments". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 4 (March 24, 2024): 3765–73. http://dx.doi.org/10.1609/aaai.v38i4.28167.
Zhou, Luowei, Hamid Palangi, Lei Zhang, Houdong Hu, Jason Corso, and Jianfeng Gao. "Unified Vision-Language Pre-Training for Image Captioning and VQA". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 13041–49. http://dx.doi.org/10.1609/aaai.v34i07.7005.
Katz, Chaim N., Kramay Patel, Omid Talakoub, David Groppe, Kari Hoffman, and Taufik A. Valiante. "Differential Generation of Saccade, Fixation, and Image-Onset Event-Related Potentials in the Human Mesial Temporal Lobe". Cerebral Cortex 30, no. 10 (June 4, 2020): 5502–16. http://dx.doi.org/10.1093/cercor/bhaa132.
Reddy, Revant Gangi, Xilin Rui, Manling Li, Xudong Lin, Haoyang Wen, Jaemin Cho, Lifu Huang, et al. "MuMuQA: Multimedia Multi-Hop News Question Answering via Cross-Media Knowledge Extraction and Grounding". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (June 28, 2022): 11200–11208. http://dx.doi.org/10.1609/aaai.v36i10.21370.
Sejati, Sadewa Purba, and Ifnu Rifki Nurhidayanto. "Peningkatan Literasi Sumber Daya Air Tanah Menggunakan Media Interaktif Berbasis Android". Dinamisia: Jurnal Pengabdian Kepada Masyarakat 6, no. 6 (December 30, 2022): 1454–60. http://dx.doi.org/10.31849/dinamisia.v6i6.11118.
Oetken, L. "β CrB – a Rosetta Stone?" International Astronomical Union Colloquium 90 (1986): 355–58. http://dx.doi.org/10.1017/s025292110009179x.
Rannula, Kateriina, Elle Sõrmus, and Siret Piirsalu. "GENERATION Z IN HIGHER EDUCATION – INVESTIGATING THE PREFERRED MEDIUM OF TEXT IN ACADEMIC READING". EPH - International Journal of Educational Research 4, no. 3 (November 18, 2020): 1–6. http://dx.doi.org/10.53555/ephijer.v4i3.67.
Gladston, Angelin, and Deeban Balaji. "Semantic Attention Network for Image Captioning and Visual Question Answering Based on Image High-Level Semantic Attributes". International Journal of Big Data Intelligence and Applications 3, no. 1 (January 1, 2022): 1–18. http://dx.doi.org/10.4018/ijbdia.313201.
Geng, Shijie, Peng Gao, Moitreya Chatterjee, Chiori Hori, Jonathan Le Roux, Yongfeng Zhang, Hongsheng Li, and Anoop Cherian. "Dynamic Graph Representation Learning for Video Dialog via Multi-Modal Shuffled Transformers". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 2 (May 18, 2021): 1415–23. http://dx.doi.org/10.1609/aaai.v35i2.16231.
Zhu, Yongxin, Zhen Liu, Yukang Liang, Xin Li, Hao Liu, Changcun Bao, and Linli Xu. "Locate Then Generate: Bridging Vision and Language with Bounding Box for Scene-Text VQA". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (June 26, 2023): 11479–87. http://dx.doi.org/10.1609/aaai.v37i9.26357.
Gil, Bruno. "Digital redux: the confluence of technologies and politics in architecture". Architectural Research Quarterly 19, no. 3 (September 2015): 259–68. http://dx.doi.org/10.1017/s135913551500055x.
Sevastjanova, Rita, Wolfgang Jentner, Fabian Sperrle, Rebecca Kehlbeck, Jürgen Bernard, and Mennatallah El-assady. "QuestionComb: A Gamification Approach for the Visual Explanation of Linguistic Phenomena through Interactive Labeling". ACM Transactions on Interactive Intelligent Systems 11, no. 3-4 (December 31, 2021): 1–38. http://dx.doi.org/10.1145/3429448.
Halwani, Noha. "Visual Aids and Multimedia in Second Language Acquisition". English Language Teaching 10, no. 6 (May 25, 2017): 53. http://dx.doi.org/10.5539/elt.v10n6p53.
Moore, Bartlett D., Henry J. Alitto, and W. Martin Usrey. "Orientation Tuning, But Not Direction Selectivity, Is Invariant to Temporal Frequency in Primary Visual Cortex". Journal of Neurophysiology 94, no. 2 (August 2005): 1336–45. http://dx.doi.org/10.1152/jn.01224.2004.
Wardani, Winny Gunarti Widya, and Ahmad Faiz Muntazori. "Islamic Memes as Media of Da'wah for Millennials Generations: Analysis of Visual Language On Islamic Memes With Illustration Style". Cultural Syndrome 1, no. 1 (July 23, 2019): 61–78. http://dx.doi.org/10.30998/cs.v1i1.16.
Busch, Steffen, Alexander Schlichting, and Claus Brenner. "Generation and communication of dynamic maps using light projection". Proceedings of the ICA 1 (May 16, 2018): 1–8. http://dx.doi.org/10.5194/ica-proc-1-16-2018.
Ma, Han, Baoyu Fan, Benjamin K. Ng, and Chan-Tong Lam. "VL-Few: Vision Language Alignment for Multimodal Few-Shot Meta Learning". Applied Sciences 14, no. 3 (January 30, 2024): 1169. http://dx.doi.org/10.3390/app14031169.
Riaubiene, Edita, Eglė Navickeinė, and Dalia Dijokienė. "The profile of Lithuanian architects in relation to the professional generations active today". Landscape architecture and art 22, no. 22 (December 20, 2023): 69–80. http://dx.doi.org/10.22616/j.landarchart.2023.22.07.
Umanskaya, Zhanna V. "VISUALIZATION OF SOVIET CHILDHOOD IN DRAWINGS BY EUGENIYA DVOSKINA". RSUH/RGGU Bulletin. "Literary Theory. Linguistics. Cultural Studies" Series, no. 8 (2020): 96–115. http://dx.doi.org/10.28995/2686-7249-2020-8-96-115.
Li, Yehao, Jiahao Fan, Yingwei Pan, Ting Yao, Weiyao Lin, and Tao Mei. "Uni-EDEN: Universal Encoder-Decoder Network by Multi-Granular Vision-Language Pre-training". ACM Transactions on Multimedia Computing, Communications, and Applications 18, no. 2 (May 31, 2022): 1–16. http://dx.doi.org/10.1145/3473140.
Ladai, A. D., and J. Miller. "Point Cloud Generation from sUAS-Mounted iPhone Imagery: Performance Analysis". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-1 (November 7, 2014): 201–5. http://dx.doi.org/10.5194/isprsarchives-xl-1-201-2014.
de Vries, Jan. "Renaissance Cities". Renaissance Quarterly 42, no. 4 (1989): 781–93. http://dx.doi.org/10.2307/2862282.