Academic literature on the topic "Visual grounding of text"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the topical lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Visual grounding of text".
Next to each source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Visual grounding of text"
Wang, Chao, Wei Luo, Jia-Rui Zhu, Ying-Chun Xia, Jin He, and Li-Chuan Gu. "End-to-end Visual Grounding Based on Query Text Guidance and Multi-stage Reasoning". 電腦學刊 35, no. 1 (February 2024): 083–95. http://dx.doi.org/10.53106/199115992024023501006.
Regneri, Michaela, Marcus Rohrbach, Dominikus Wetzel, Stefan Thater, Bernt Schiele, and Manfred Pinkal. "Grounding Action Descriptions in Videos". Transactions of the Association for Computational Linguistics 1 (December 2013): 25–36. http://dx.doi.org/10.1162/tacl_a_00207.
Zhan, Yang, Yuan Yuan, and Zhitong Xiong. "Mono3DVG: 3D Visual Grounding in Monocular Images". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 7 (March 24, 2024): 6988–96. http://dx.doi.org/10.1609/aaai.v38i7.28525.
Zhang, Qianjun, and Jin Yuan. "Semantic-Aligned Cross-Modal Visual Grounding Network with Transformers". Applied Sciences 13, no. 9 (May 4, 2023): 5649. http://dx.doi.org/10.3390/app13095649.
Shen, Haozhan, Tiancheng Zhao, Mingwei Zhu, and Jianwei Yin. "GroundVLP: Harnessing Zero-Shot Visual Grounding from Vision-Language Pre-training and Open-Vocabulary Object Detection". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 5 (March 24, 2024): 4766–75. http://dx.doi.org/10.1609/aaai.v38i5.28278.
Liu, Shilong, Shijia Huang, Feng Li, Hao Zhang, Yaoyuan Liang, Hang Su, Jun Zhu, and Lei Zhang. "DQ-DETR: Dual Query Detection Transformer for Phrase Extraction and Grounding". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 2 (June 26, 2023): 1728–36. http://dx.doi.org/10.1609/aaai.v37i2.25261.
Cheng, Zesen, Kehan Li, Peng Jin, Siheng Li, Xiangyang Ji, Li Yuan, Chang Liu, and Jie Chen. "Parallel Vertex Diffusion for Unified Visual Grounding". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 2 (March 24, 2024): 1326–34. http://dx.doi.org/10.1609/aaai.v38i2.27896.
Feng, Steven Y., Kevin Lu, Zhuofu Tao, Malihe Alikhani, Teruko Mitamura, Eduard Hovy, and Varun Gangal. "Retrieve, Caption, Generate: Visual Grounding for Enhancing Commonsense in Text Generation Models". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (June 28, 2022): 10618–26. http://dx.doi.org/10.1609/aaai.v36i10.21306.
Jia, Meihuizi, Lei Shen, Xin Shen, Lejian Liao, Meng Chen, Xiaodong He, Zhendong Chen, and Jiaqi Li. "MNER-QG: An End-to-End MRC Framework for Multimodal Named Entity Recognition with Query Grounding". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 7 (June 26, 2023): 8032–40. http://dx.doi.org/10.1609/aaai.v37i7.25971.
Shi, Zhan, Yilin Shen, Hongxia Jin, and Xiaodan Zhu. "Improving Zero-Shot Phrase Grounding via Reasoning on External Knowledge and Spatial Relations". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 2 (June 28, 2022): 2253–61. http://dx.doi.org/10.1609/aaai.v36i2.20123.
Theses on the topic "Visual grounding of text"
Engilberge, Martin. "Deep Inside Visual-Semantic Embeddings". Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS150.
Nowadays Artificial Intelligence (AI) is omnipresent in our society. The recent development of learning methods based on deep neural networks, also called "Deep Learning", has led to a significant improvement in visual and textual representation models. In this thesis, we aim to further advance image representation and understanding. Revolving around Visual Semantic Embedding (VSE) approaches, we explore different directions: we present relevant background covering image and textual representation and existing multimodal approaches; we propose novel architectures that further improve the retrieval capability of VSE; and we extend VSE models to novel applications and leverage embedding models to visually ground semantic concepts. Finally, we delve into the learning process, and in particular the loss function, by learning a differentiable approximation of a ranking-based metric.
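For background on the kind of ranking objective mentioned in the abstract above, the following is a minimal, illustrative sketch of the bidirectional max-margin triplet loss that many visual-semantic embedding models start from. It is not code from the thesis (which studies differentiable surrogates of ranking metrics, a refinement this sketch does not implement); the tensor names and margin value are assumptions.

```python
# Illustrative sketch only: hardest-negative bidirectional triplet ranking loss
# commonly used for visual-semantic embeddings. Not code from the cited thesis.
import torch

def triplet_ranking_loss(img_emb, txt_emb, margin=0.2):
    """img_emb, txt_emb: (batch, dim) L2-normalized embeddings of matching image-text pairs."""
    scores = img_emb @ txt_emb.t()            # cosine similarity for every image-text pair
    pos = scores.diag().view(-1, 1)           # similarity of each ground-truth pair
    cost_txt = (margin + scores - pos).clamp(min=0)      # image -> text retrieval violations
    cost_img = (margin + scores - pos.t()).clamp(min=0)  # text -> image retrieval violations
    mask = torch.eye(scores.size(0), dtype=torch.bool, device=scores.device)
    cost_txt = cost_txt.masked_fill(mask, 0)  # ignore the true pairs themselves
    cost_img = cost_img.masked_fill(mask, 0)
    # keep only the hardest negative in each direction, then average over the batch
    return cost_txt.max(dim=1)[0].mean() + cost_img.max(dim=0)[0].mean()
```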
Emmott, Stephen J. "The visual processing of text". Thesis, University of Stirling, 1993. http://hdl.handle.net/1893/1837.
Mi, Jinpeng [Verfasser], and Jianwei [Akademischer Betreuer] Zhang. "Natural Language Visual Grounding via Multimodal Learning / Jinpeng Mi ; Betreuer: Jianwei Zhang". Hamburg: Staats- und Universitätsbibliothek Hamburg, 2020. http://d-nb.info/1205070885/34.
Prince, Md Enamul Hoque. "Visual text analytics for online conversations". Thesis, University of British Columbia, 2017. http://hdl.handle.net/2429/61772.
Chauhan, Aneesh. "Grounding human vocabulary in robot perception through interaction". Doctoral thesis, Universidade de Aveiro, 2014. http://hdl.handle.net/10773/12841.
This thesis addresses the problem of word learning in computational agents. The motivation behind this work lies in the need to support language-based communication between service robots and their human users, as well as grounded reasoning using symbols relevant for the assigned tasks. The research focuses on the problem of grounding human vocabulary in the robotic agent's sensori-motor perception. Words have to be grounded in bodily experiences, which emphasizes the role of appropriate embodiments. On the other hand, language is a cultural product created and acquired through social interactions. This emphasizes the role of society as a source of linguistic input. Taking these aspects into account, an experimental scenario is set up where a human instructor teaches a robotic agent the names of the objects present in a visually shared environment. The agent grounds the names of these objects in visual perception. Word learning is an open-ended problem. Therefore, the learning architecture of the agent has to be able to acquire words and categories in an open-ended manner. In this work, four learning architectures were designed that can be used by robotic agents for long-term and open-ended word and category acquisition. The learning methods used in these architectures are designed for incrementally scaling up to larger sets of words and categories. A novel experimental evaluation methodology, which takes into account the open-ended nature of word learning, is proposed and applied. This methodology is based on the realization that a robot's vocabulary will be limited by its discriminatory capacity which, in turn, depends on its sensors and perceptual capabilities. An extensive set of systematic experiments, in multiple experimental settings, was carried out to thoroughly evaluate the described learning approaches. The results indicate that all approaches were able to incrementally acquire new words and categories. Although some of the approaches could not scale up to larger vocabularies, one approach was shown to learn up to 293 categories, with potential for learning many more.
Sabir, Ahmed. "Enhancing scene text recognition with visual context information". Doctoral thesis, Universitat Politècnica de Catalunya, 2020. http://hdl.handle.net/10803/670286.
This thesis addresses the problem of improving text recognition systems, which detect and recognize text in unconstrained images (for example, a street sign, an advertisement, a bus destination, etc.). The goal is to improve the performance of existing vision systems by exploiting semantic information derived from the image itself. The main idea is that knowing the content of the image, or the visual context in which a text appears, can help decide which words are correct. For example, the fact that an image shows a coffee shop makes it more likely that a word on a sign reads Dunkin rather than unkind. We address this problem by drawing on advances in natural language processing and machine learning, in particular by learning re-rankers and neural networks, to present post-processing solutions that improve state-of-the-art text recognition systems without costly retraining or fine-tuning procedures that require large amounts of data. Discovering the degree of semantic relatedness between candidate words and their image context is a task related to assessing the semantic similarity between words or text fragments. However, determining the existence of a semantic relation is a more general task than assessing similarity (for example, car, road, and traffic light are related but not similar), so existing methods require certain adaptations. To meet the requirements of these broader notions of semantic relatedness, we develop two approaches to learning the semantic relation between the recognized word and its context: word-to-word (with the objects in the image) or word-to-sentence (with the image caption). The word-to-word approach uses re-rankers based on word embeddings: the re-ranker takes the words proposed by the baseline system and reorders them according to the visual context provided by the object classifier. For the second case, an end-to-end neural approach was designed to exploit the image description (caption) at both the sentence level and the word level, and to re-rank the candidate words based on both the visual context and their co-occurrences with the caption. As an additional contribution, to meet the requirements of data-driven approaches such as neural networks, we present a visual context dataset for this task, in which the publicly available COCO-text dataset [Veit et al. 2016] has been extended with information about the scene (including the objects and places appearing in the image) to allow researchers to include text-scene semantic relations in their text recognition systems and to offer a common evaluation baseline for such approaches.
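To make the word-to-word re-ranking idea described in the abstract above concrete, here is a minimal, hypothetical sketch (not code from the thesis): candidate words from a baseline recognizer are re-scored by combining the recognizer confidence with the embedding similarity between each candidate and the object labels detected in the image. The function name, the linear score combination, and the embedding source are assumptions made for illustration.

```python
# Illustrative sketch of word-to-word re-ranking with visual context; not code from the thesis.
import numpy as np

def rerank_candidates(candidates, detected_objects, embed, alpha=0.7):
    """Re-score OCR word hypotheses by their semantic relatedness to the visual context.

    candidates: list of (word, recognizer_score) pairs from the baseline text recognizer.
    detected_objects: object labels produced by an image classifier for the same image.
    embed: mapping from a word to a unit-norm embedding vector (e.g. pre-trained GloVe).
    """
    rescored = []
    for word, score in candidates:
        sims = [float(np.dot(embed[word], embed[obj]))
                for obj in detected_objects
                if word in embed and obj in embed]
        relatedness = max(sims) if sims else 0.0   # best similarity to any detected object
        rescored.append((word, alpha * score + (1 - alpha) * relatedness))
    return sorted(rescored, key=lambda item: item[1], reverse=True)

# Example: with a "cafe" visual context, "dunkin" should be ranked above "unkind",
# assuming suitable word embeddings are loaded into `embed`.
```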
Willems, Heather Marie. "Writing the written: text as a visual image". The Ohio State University, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=osu1382952227.
Kan, Jichao. "Visual-Text Translation with Deep Graph Neural Networks". Thesis, University of Sydney, 2020. https://hdl.handle.net/2123/23759.
Shmueli, Yael. "Integrating speech and visual text in multimodal interfaces". Thesis, University College London (University of London), 2005. http://discovery.ucl.ac.uk/1446688/.
Books on the topic "Visual grounding of text"
Wyman, Jessica, ed. Pro forma: Language, text, visual art. Toronto, ON, Canada: YYZ Books, 2005.
Strassner, Erich. Text-Bild-Kommunikation - Bild-Text-Kommunikation. Tübingen: Niemeyer, 2001.
Harms, Wolfgang, and Deutsche Forschungsgemeinschaft, eds. Text und Bild, Bild und Text: DFG-Symposion 1988. Stuttgart: J.B. Metzler, 1990.
Text und Bild: Grundfragen der Beschreibung von Text-Bild-Kommunikationen aus sprachwissenschaftlicher Sicht. Tübingen: Narr, 1986.
Leidner, Jochen L. Toponym resolution in text: Annotation, evaluation and applications of spatial grounding of place names. Boca Raton: Dissertation.com, 2007.
Ranai, K., ed. Visual editing on unix. Singapore: World Scientific, 1989.
John Samuel, G., 1948-, and Institute of Asian Studies (Madras, India), eds. The Great penance at Māmallapuram: Deciphering a visual text. Chennai: Institute of Asian Studies, 2001.
The Bible as visual culture: When text becomes image. Sheffield: Sheffield Phoenix Press, 2013.
Drake, Michael V., ed. The visual fields: Text and atlas of clinical perimetry. 6th ed. St. Louis: Mosby, 1990.
Finney, Gail, ed. Visual culture in twentieth-century Germany: Text as spectacle. Bloomington, Ind: Indiana University Press, 2006.
Book chapters on the topic "Visual grounding of text"
Min, Seonwoo, Nokyung Park, Siwon Kim, Seunghyun Park, and Jinkyu Kim. "Grounding Visual Representations with Texts for Domain Generalization". In Lecture Notes in Computer Science, 37–53. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-19836-6_3.
Hong, Tao, Ya Wang, Xingwu Sun, Xiaoqing Li, and Jinwen Ma. "CMMix: Cross-Modal Mix Augmentation Between Images and Texts for Visual Grounding". In Communications in Computer and Information Science, 471–82. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-8148-9_37.
Hendricks, Lisa Anne, Ronghang Hu, Trevor Darrell, and Zeynep Akata. "Grounding Visual Explanations". In Computer Vision – ECCV 2018, 269–86. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01216-8_17.
Johari, Kritika, Christopher Tay Zi Tong, Vigneshwaran Subbaraju, Jung-Jae Kim, and U.-Xuan Tan. "Gaze Assisted Visual Grounding". In Social Robotics, 191–202. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-90525-5_17.
Xiao, Junbin, Xindi Shang, Xun Yang, Sheng Tang, and Tat-Seng Chua. "Visual Relation Grounding in Videos". In Computer Vision – ECCV 2020, 447–64. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58539-6_27.
Goy, Anna. "Grounding Meaning in Visual Knowledge". In Spatial Language, 121–45. Dordrecht: Springer Netherlands, 2002. http://dx.doi.org/10.1007/978-94-015-9928-3_7.
Silberer, Carina. "Grounding the Meaning of Words with Visual Attributes". In Visual Attributes, 331–62. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-50077-5_13.
Mazaheri, Amir, and Mubarak Shah. "Visual Text Correction". In Computer Vision – ECCV 2018, 159–75. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01261-8_10.
Wainer, Howard. "Integrating Figures and Text". In Visual Revelations, 143–45. New York, NY: Springer New York, 1997. http://dx.doi.org/10.1007/978-1-4612-2282-8_18.
Kittler, Josef, Mikhail Shevchenko, and David Windridge. "Visual Bootstrapping for Unsupervised Symbol Grounding". In Advanced Concepts for Intelligent Vision Systems, 1037–46. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11864349_94.
Conference papers on the topic "Visual grounding of text"
Zhang, Yimeng, Xin Chen, Jinghan Jia, Sijia Liu, and Ke Ding. "Text-Visual Prompting for Efficient 2D Temporal Video Grounding". In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2023. http://dx.doi.org/10.1109/cvpr52729.2023.01421.
Wu, Yanmin, Xinhua Cheng, Renrui Zhang, Zesen Cheng, and Jian Zhang. "EDA: Explicit Text-Decoupling and Dense Alignment for 3D Visual Grounding". In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2023. http://dx.doi.org/10.1109/cvpr52729.2023.01843.
Endo, Ko, Masaki Aono, Eric Nichols, and Kotaro Funakoshi. "An Attention-based Regression Model for Grounding Textual Phrases in Images". In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/558.
Conser, Erik, Kennedy Hahn, Chandler Watson, and Melanie Mitchell. "Revisiting Visual Grounding". In Proceedings of the Second Workshop on Shortcomings in Vision and Language. Stroudsburg, PA, USA: Association for Computational Linguistics, 2019. http://dx.doi.org/10.18653/v1/w19-1804.
Kim, Yongmin, Chenhui Chu, and Sadao Kurohashi. "Flexible Visual Grounding". In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop. Stroudsburg, PA, USA: Association for Computational Linguistics, 2022. http://dx.doi.org/10.18653/v1/2022.acl-srw.22.
Du, Ye, Zehua Fu, Qingjie Liu, and Yunhong Wang. "Visual Grounding with Transformers". In 2022 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2022. http://dx.doi.org/10.1109/icme52920.2022.9859880.
Jing, Chenchen, Yuwei Wu, Mingtao Pei, Yao Hu, Yunde Jia, and Qi Wu. "Visual-Semantic Graph Matching for Visual Grounding". In MM '20: The 28th ACM International Conference on Multimedia. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3394171.3413902.
Deng, Chaorui, Qi Wu, Qingyao Wu, Fuyuan Hu, Fan Lyu, and Mingkui Tan. "Visual Grounding via Accumulated Attention". In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2018. http://dx.doi.org/10.1109/cvpr.2018.00808.
Lee, Jason, Kyunghyun Cho, and Douwe Kiela. "Countering Language Drift via Visual Grounding". In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Stroudsburg, PA, USA: Association for Computational Linguistics, 2019. http://dx.doi.org/10.18653/v1/d19-1447.
Sun, Yuxi, Shanshan Feng, Xutao Li, Yunming Ye, Jian Kang, and Xu Huang. "Visual Grounding in Remote Sensing Images". In MM '22: The 30th ACM International Conference on Multimedia. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3503161.3548316.
Reports on the topic "Visual grounding of text"
Steed, Chad A., Christopher T. Symons, James K. Senter, and Frank A. DeNap. Guided Text Search Using Adaptive Visual Analytics. Office of Scientific and Technical Information (OSTI), October 2012. http://dx.doi.org/10.2172/1055105.
Beiker, Sven, ed. Unsettled Issues Regarding Visual Communication Between Automated Vehicles and Other Road Users. SAE International, July 2021. http://dx.doi.org/10.4271/epr2021016.
Дирда, І. А., and З. П. Бакум. Linguodidactic fundamentals of the development of foreign students' polycultural competence during the Ukrainian language training. Association 1901 "SEPIKE", 2016. http://dx.doi.org/10.31812/123456789/2994.
Бакум, З. П., and І. А. Дирда. Linguodidactic Fundamentals of the Development of Foreign Students' Polycultural Competence During the Ukrainian Language Training. Криворізький державний педагогічний університет, 2016. http://dx.doi.org/10.31812/0564/398.
Figueredo, Luisa, Liliana Martinez, and Joao Paulo Almeida. Current role of Endoscopic Endonasal Approach for Craniopharyngiomas. A 10-year Systematic review and Meta-Analysis Comparison with the Open Transcranial Approach. INPLASY - International Platform of Registered Systematic Review and Meta-analysis Protocols, January 2023. http://dx.doi.org/10.37766/inplasy2023.1.0045.
Yatsymirska, Mariya. Мова війни і «контрнаступальна» лексика у стислих медійних текстах. Ivan Franko National University of Lviv, March 2023. http://dx.doi.org/10.30970/vjo.2023.52-53.11742.
Baluk, Nadia, Natalia Basij, Larysa Buk, and Olha Vovchanska. VR/AR-TECHNOLOGIES – NEW CONTENT OF THE NEW MEDIA. Ivan Franko National University of Lviv, February 2021. http://dx.doi.org/10.30970/vjo.2021.49.11074.
Makhachashvili, Rusudan K., Svetlana I. Kovpik, Anna O. Bakhtina, and Ekaterina O. Shmeltser. Technology of presentation of literature on the Emoji Maker platform: pedagogical function of graphic mimesis. [б. в.], July 2020. http://dx.doi.org/10.31812/123456789/3864.
Yatsymirska, Mariya. SOCIAL EXPRESSION IN MULTIMEDIA TEXTS. Ivan Franko National University of Lviv, February 2021. http://dx.doi.org/10.30970/vjo.2021.49.11072.