Academic literature on the topic "Zero-shot Retrieval"
Create an accurate citation in the APA, MLA, Chicago, Harvard, and other styles
Consult the topical lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Zero-shot Retrieval".
Next to each source in the reference list there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Zero-shot Retrieval"
Dutta, Titir, and Soma Biswas. "Generalized Zero-Shot Cross-Modal Retrieval". IEEE Transactions on Image Processing 28, no. 12 (December 2019): 5953–62. http://dx.doi.org/10.1109/tip.2019.2923287.
Seo, Sanghyun, and Juntae Kim. "Hierarchical Semantic Loss and Confidence Estimator for Visual-Semantic Embedding-Based Zero-Shot Learning". Applied Sciences 9, no. 15 (August 2, 2019): 3133. http://dx.doi.org/10.3390/app9153133.
Wang, Xiao, Craig Macdonald, and Iadh Ounis. "Improving zero-shot retrieval using dense external expansion". Information Processing & Management 59, no. 5 (September 2022): 103026. http://dx.doi.org/10.1016/j.ipm.2022.103026.
Kumar, Sanjeev. "Phase retrieval with physics informed zero-shot network". Optics Letters 46, no. 23 (November 29, 2021): 5942. http://dx.doi.org/10.1364/ol.433625.
Lin, Kaiyi, Xing Xu, Lianli Gao, Zheng Wang, and Heng Tao Shen. "Learning Cross-Aligned Latent Embeddings for Zero-Shot Cross-Modal Retrieval". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 11515–22. http://dx.doi.org/10.1609/aaai.v34i07.6817.
Zhang, Haofeng, Yang Long, and Ling Shao. "Zero-shot Hashing with orthogonal projection for image retrieval". Pattern Recognition Letters 117 (January 2019): 201–9. http://dx.doi.org/10.1016/j.patrec.2018.04.011.
Zhang, Zhaolong, Yuejie Zhang, Rui Feng, Tao Zhang, and Weiguo Fan. "Zero-Shot Sketch-Based Image Retrieval via Graph Convolution Network". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 12943–50. http://dx.doi.org/10.1609/aaai.v34i07.6993.
Yang, Fan, Zheng Wang, Jing Xiao, and Shin'ichi Satoh. "Mining on Heterogeneous Manifolds for Zero-Shot Cross-Modal Image Retrieval". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 12589–96. http://dx.doi.org/10.1609/aaai.v34i07.6949.
Xu, Rui, Zongyan Han, Le Hui, Jianjun Qian, and Jin Xie. "Domain Disentangled Generative Adversarial Network for Zero-Shot Sketch-Based 3D Shape Retrieval". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (June 28, 2022): 2902–10. http://dx.doi.org/10.1609/aaai.v36i3.20195.
Xu, Xing, Jialin Tian, Kaiyi Lin, Huimin Lu, Jie Shao, and Heng Tao Shen. "Zero-shot Cross-modal Retrieval by Assembling AutoEncoder and Generative Adversarial Network". ACM Transactions on Multimedia Computing, Communications, and Applications 17, no. 1s (March 31, 2021): 1–17. http://dx.doi.org/10.1145/3424341.
Theses on the topic "Zero-shot Retrieval"
Efes, Stergios. "Zero-shot, One Kill: BERT for Neural Information Retrieval". Thesis, Uppsala universitet, Institutionen för lingvistik och filologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-444835.
Bucher, Maxime. "Apprentissage et exploitation de représentations sémantiques pour la classification et la recherche d'images". Thesis, Normandie, 2018. http://www.theses.fr/2018NORMC250/document.
In this thesis, we examine some practical difficulties of deep learning models. Indeed, despite their promising results in computer vision, deploying them in some situations raises questions. For example, in classification tasks where thousands of categories have to be recognised, it is sometimes difficult to gather enough training data for each category. We propose two new approaches for this learning scenario, called <>. We use semantic information to model classes, which allows us to define models by description, as opposed to modelling from a set of examples. In the first chapter, we propose to optimize a metric in order to transform the distribution of the original data and obtain an optimal attribute distribution. In the following chapter, unlike the standard approaches in the literature that rely on learning a common embedding space, we propose to generate visual features from a conditional generator. The artificial examples can be used in addition to real data to learn a discriminative classifier. In the second part of this thesis, we address the question of computational intelligibility for computer vision tasks. Because of the many complex transformations applied by deep learning algorithms, it is difficult for a user to interpret the returned prediction. Our proposal is to introduce what we call a <> into the processing pipeline, a crossing point at which the representation of the image is expressed entirely in natural language, while retaining the efficiency of numerical representations. This semantic bottleneck makes it possible to detect failure cases in the prediction process, so as to accept or reject the decision.
Mensink, Thomas. "Learning Image Classification and Retrieval Models". Thesis, Grenoble, 2012. http://www.theses.fr/2012GRENM113/document.
We are currently experiencing an exceptional growth of visual data; for example, millions of photos are shared daily on social networks. Image understanding methods aim to facilitate access to this visual data in a semantically meaningful manner. In this dissertation, we define several detailed goals which are of interest for the image understanding tasks of image classification and retrieval, which we address in three main chapters. First, we aim to exploit the multi-modal nature of many databases, wherein documents consist of images with a form of textual description. In order to do so, we define similarities between the visual content of one document and the textual description of another document. These similarities are computed in two steps: first we find the visually similar neighbors in the multi-modal database, and then use the textual descriptions of these neighbors to define a similarity to the textual description of any document. Second, we introduce a series of structured image classification models, which explicitly encode pairwise label interactions. These models are more expressive than independent label predictors and lead to more accurate predictions, especially in an interactive prediction scenario where a user provides the value of some of the image labels. Such an interactive scenario offers an interesting trade-off between accuracy and manual labeling effort. We explore structured models for multi-label image classification, for attribute-based image classification, and for optimizing for specific ranking measures. Finally, we explore k-nearest neighbors and nearest-class-mean classifiers for large-scale image classification. We propose efficient metric learning methods to improve classification performance, and use these methods to learn on a data set of more than one million training images from one thousand classes.
Since both classification methods allow for the incorporation of classes not seen during training at near-zero cost, we study their generalization performance. We show that the nearest-class-mean classification method can generalize from one thousand to ten thousand classes at negligible cost, and still perform competitively with the state-of-the-art.
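The nearest-class-mean classifier discussed in the abstract above is simple enough to sketch directly (a minimal illustration with made-up data, not Mensink's metric-learned version): each class is represented by the mean of its training features, and a previously unseen class is added at near-zero cost by computing one more mean.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy training data: two classes with well-separated Gaussian features.
train = {
    "cat": rng.normal(loc=0.0, size=(100, 8)),
    "dog": rng.normal(loc=5.0, size=(100, 8)),
}

# A nearest-class-mean (NCM) classifier stores one mean vector per class.
means = {c: X.mean(axis=0) for c, X in train.items()}

def ncm_predict(x):
    """Assign x to the class whose mean is closest in Euclidean distance."""
    return min(means, key=lambda c: np.linalg.norm(x - means[c]))

# Incorporating a new class costs only one mean computation, with no
# retraining of the existing classifier.
means["fox"] = rng.normal(loc=-5.0, size=(100, 8)).mean(axis=0)

print(ncm_predict(np.full(8, 4.8)))  # a query near the "dog" cluster
```

In the dissertation, the Euclidean distance is replaced by a learned Mahalanobis-style metric, which is what makes the method competitive at large scale; the constant-time class addition shown here is the property the abstract highlights.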
Dutta, Titir. "Generalizing Cross-domain Retrieval Algorithms". Thesis, 2021. https://etd.iisc.ac.in/handle/2005/5869.
"Video2Vec: Learning Semantic Spatio-Temporal Embedding for Video Representations". Master's thesis, 2016. http://hdl.handle.net/2286/R.I.40765.
Dissertation/Thesis
Master's Thesis, Computer Science, 2016
Book chapters on the topic "Zero-shot Retrieval"
Fröbe, Maik, Christopher Akiki, Martin Potthast, and Matthias Hagen. "How Train–Test Leakage Affects Zero-Shot Retrieval". In String Processing and Information Retrieval, 147–61. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-20643-6_11.
Chi, Jingze, Xin Huang, and Yuxin Peng. "Zero-Shot Cross-Media Retrieval with External Knowledge". In Communications in Computer and Information Science, 200–211. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-10-8530-7_20.
Yelamarthi, Sasi Kiran, Shiva Krishna Reddy, Ashish Mishra, and Anurag Mittal. "A Zero-Shot Framework for Sketch Based Image Retrieval". In Computer Vision – ECCV 2018, 316–33. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01225-0_19.
Li, Chuang, Lunke Fei, Peipei Kang, Jiahao Liang, Xiaozhao Fang, and Shaohua Teng. "Semantic-Adversarial Graph Convolutional Network for Zero-Shot Cross-Modal Retrieval". In Lecture Notes in Computer Science, 459–72. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-20865-2_34.
Zhang, Donglin, Xiao-Jun Wu, and Jun Yu. "Discrete Bidirectional Matrix Factorization Hashing for Zero-Shot Cross-Media Retrieval". In Pattern Recognition and Computer Vision, 524–36. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-88007-1_43.
Li, Mingkang, and Yonggang Qi. "XPNet: Cross-Domain Prototypical Network for Zero-Shot Sketch-Based Image Retrieval". In Pattern Recognition and Computer Vision, 394–410. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-18907-4_31.
Chen, Tao, Mingyang Zhang, Jing Lu, Michael Bendersky, and Marc Najork. "Out-of-Domain Semantics to the Rescue! Zero-Shot Hybrid Retrieval Models". In Lecture Notes in Computer Science, 95–110. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-99736-6_7.
Zan, Daoguang, Sirui Wang, Hongzhi Zhang, Yuanmeng Yan, Wei Wu, Bei Guan, and Yongji Wang. "S²QL: Retrieval Augmented Zero-Shot Question Answering over Knowledge Graph". In Advances in Knowledge Discovery and Data Mining, 223–36. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-05981-0_18.
MacAvaney, Sean, Luca Soldaini, and Nazli Goharian. "Teaching a New Dog Old Tricks: Resurrecting Multilingual Retrieval Using Zero-Shot Learning". In Lecture Notes in Computer Science, 246–54. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-45442-5_31.
Glavaš, Goran, and Ivan Vulić. "Zero-Shot Language Transfer for Cross-Lingual Sentence Retrieval Using Bidirectional Attention Model". In Lecture Notes in Computer Science, 523–38. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-15712-8_34.
Conference papers on the topic "Zero-shot Retrieval"
Chi, Jingze, and Yuxin Peng. "Dual Adversarial Networks for Zero-shot Cross-media Retrieval". In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/92.
Honbu, Yuma, and Keiji Yanai. "Few-Shot and Zero-Shot Semantic Segmentation for Food Images". In ICMR '21: International Conference on Multimedia Retrieval. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3463947.3469234.
Huang, Siteng, Qiyao Wei, and Donglin Wang. "Reference-Limited Compositional Zero-Shot Learning". In ICMR '23: International Conference on Multimedia Retrieval. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3591106.3592225.
Wang, Zhipeng, Hao Wang, Jiexi Yan, Aming Wu, and Cheng Deng. "Domain-Smoothing Network for Zero-Shot Sketch-Based Image Retrieval". In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/158.
Xu, Yahui, Yang Yang, Fumin Shen, Xing Xu, Yuxuan Zhou, and Heng Tao Shen. "Attribute hashing for zero-shot image retrieval". In 2017 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2017. http://dx.doi.org/10.1109/icme.2017.8019425.
Wang, Guolong, Xun Wu, Zhaoyuan Liu, and Junchi Yan. "Prompt-based Zero-shot Video Moment Retrieval". In MM '22: The 30th ACM International Conference on Multimedia. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3503161.3548004.
Xu, Canwen, Daya Guo, Nan Duan, and Julian McAuley. "LaPraDoR: Unsupervised Pretrained Dense Retriever for Zero-Shot Text Retrieval". In Findings of the Association for Computational Linguistics: ACL 2022. Stroudsburg, PA, USA: Association for Computational Linguistics, 2022. http://dx.doi.org/10.18653/v1/2022.findings-acl.281.
Xu, Xing, Fumin Shen, Yang Yang, Jie Shao, and Zi Huang. "Transductive Visual-Semantic Embedding for Zero-shot Learning". In ICMR '17: International Conference on Multimedia Retrieval. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3078971.3078977.
Gao, LianLi, Jingkuan Song, Junming Shao, Xiaofeng Zhu, and HengTao Shen. "Zero-shot Image Categorization by Image Correlation Exploration". In ICMR '15: International Conference on Multimedia Retrieval. New York, NY, USA: ACM, 2015. http://dx.doi.org/10.1145/2671188.2749309.
Sharma, Prawaal, and Navneet Goyal. "Zero-shot reductive paraphrasing for digitally semi-literate". In FIRE 2021: Forum for Information Retrieval Evaluation. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3503162.3503171.