Ready-made bibliography on the topic "Zero-shot Retrieval"
Create accurate references in APA, MLA, Chicago, Harvard, and many other citation styles
Browse lists of recent articles, books, theses, abstracts, and other scholarly sources on the topic "Zero-shot Retrieval".
An "Add to bibliography" button is available next to every work in the bibliography. Use it, and we will automatically create a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of a publication as a ".pdf" file and read its abstract online, provided the relevant details are available in the work's metadata.
Journal articles on the topic "Zero-shot Retrieval"
Dutta, Titir, and Soma Biswas. "Generalized Zero-Shot Cross-Modal Retrieval". IEEE Transactions on Image Processing 28, no. 12 (December 2019): 5953–62. http://dx.doi.org/10.1109/tip.2019.2923287.
Seo, Sanghyun, and Juntae Kim. "Hierarchical Semantic Loss and Confidence Estimator for Visual-Semantic Embedding-Based Zero-Shot Learning". Applied Sciences 9, no. 15 (August 2, 2019): 3133. http://dx.doi.org/10.3390/app9153133.
Wang, Xiao, Craig Macdonald, and Iadh Ounis. "Improving zero-shot retrieval using dense external expansion". Information Processing & Management 59, no. 5 (September 2022): 103026. http://dx.doi.org/10.1016/j.ipm.2022.103026.
Kumar, Sanjeev. "Phase retrieval with physics informed zero-shot network". Optics Letters 46, no. 23 (November 29, 2021): 5942. http://dx.doi.org/10.1364/ol.433625.
Lin, Kaiyi, Xing Xu, Lianli Gao, Zheng Wang, and Heng Tao Shen. "Learning Cross-Aligned Latent Embeddings for Zero-Shot Cross-Modal Retrieval". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 11515–22. http://dx.doi.org/10.1609/aaai.v34i07.6817.
Zhang, Haofeng, Yang Long, and Ling Shao. "Zero-shot Hashing with orthogonal projection for image retrieval". Pattern Recognition Letters 117 (January 2019): 201–9. http://dx.doi.org/10.1016/j.patrec.2018.04.011.
Zhang, Zhaolong, Yuejie Zhang, Rui Feng, Tao Zhang, and Weiguo Fan. "Zero-Shot Sketch-Based Image Retrieval via Graph Convolution Network". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 12943–50. http://dx.doi.org/10.1609/aaai.v34i07.6993.
Yang, Fan, Zheng Wang, Jing Xiao, and Shin'ichi Satoh. "Mining on Heterogeneous Manifolds for Zero-Shot Cross-Modal Image Retrieval". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 12589–96. http://dx.doi.org/10.1609/aaai.v34i07.6949.
Xu, Rui, Zongyan Han, Le Hui, Jianjun Qian, and Jin Xie. "Domain Disentangled Generative Adversarial Network for Zero-Shot Sketch-Based 3D Shape Retrieval". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (June 28, 2022): 2902–10. http://dx.doi.org/10.1609/aaai.v36i3.20195.
Xu, Xing, Jialin Tian, Kaiyi Lin, Huimin Lu, Jie Shao, and Heng Tao Shen. "Zero-shot Cross-modal Retrieval by Assembling AutoEncoder and Generative Adversarial Network". ACM Transactions on Multimedia Computing, Communications, and Applications 17, no. 1s (March 31, 2021): 1–17. http://dx.doi.org/10.1145/3424341.
Doctoral dissertations on the topic "Zero-shot Retrieval"
Efes, Stergios. "Zero-shot, One Kill: BERT for Neural Information Retrieval". Thesis, Uppsala universitet, Institutionen för lingvistik och filologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-444835.
Bucher, Maxime. "Apprentissage et exploitation de représentations sémantiques pour la classification et la recherche d'images". Thesis, Normandie, 2018. http://www.theses.fr/2018NORMC250/document.
Pełny tekst źródłaIn this thesis, we examine some practical difficulties of deep learning models.Indeed, despite the promising results in computer vision, implementing them in some situations raises some questions. For example, in classification tasks where thousands of categories have to be recognised, it is sometimes difficult to gather enough training data for each category.We propose two new approaches for this learning scenario, called <>. We use semantic information to model classes which allows us to define models by description, as opposed to modelling from a set of examples.In the first chapter we propose to optimize a metric in order to transform the distribution of the original data and to obtain an optimal attribute distribution. In the following chapter, unlike the standard approaches of the literature that rely on the learning of a common integration space, we propose to generate visual features from a conditional generator. The artificial examples can be used in addition to real data for learning a discriminant classifier. In the second part of this thesis, we address the question of computational intelligibility for computer vision tasks. Due to the many and complex transformations of deep learning algorithms, it is difficult for a user to interpret the returned prediction. Our proposition is to introduce what we call a <> in the processing pipeline, which is a crossing point in which the representation of the image is entirely expressed with natural language, while retaining the efficiency of numerical representations. This semantic bottleneck allows to detect failure cases in the prediction process so as to accept or reject the decision
Mensink, Thomas. "Learning Image Classification and Retrieval Models". Thesis, Grenoble, 2012. http://www.theses.fr/2012GRENM113/document.
Pełny tekst źródłaWe are currently experiencing an exceptional growth of visual data, for example, millions of photos are shared daily on social-networks. Image understanding methods aim to facilitate access to this visual data in a semantically meaningful manner. In this dissertation, we define several detailed goals which are of interest for the image understanding tasks of image classification and retrieval, which we address in three main chapters. First, we aim to exploit the multi-modal nature of many databases, wherein documents consists of images with a form of textual description. In order to do so we define similarities between the visual content of one document and the textual description of another document. These similarities are computed in two steps, first we find the visually similar neighbors in the multi-modal database, and then use the textual descriptions of these neighbors to define a similarity to the textual description of any document. Second, we introduce a series of structured image classification models, which explicitly encode pairwise label interactions. These models are more expressive than independent label predictors, and lead to more accurate predictions. Especially in an interactive prediction scenario where a user provides the value of some of the image labels. Such an interactive scenario offers an interesting trade-off between accuracy and manual labeling effort. We explore structured models for multi-label image classification, for attribute-based image classification, and for optimizing for specific ranking measures. Finally, we explore k-nearest neighbors and nearest-class mean classifiers for large-scale image classification. We propose efficient metric learning methods to improve classification performance, and use these methods to learn on a data set of more than one million training images from one thousand classes. Since both classification methods allow for the incorporation of classes not seen during training at near-zero cost, we study their generalization performances. We show that the nearest-class mean classification method can generalize from one thousand to ten thousand classes at negligible cost, and still perform competitively with the state-of-the-art
Dutta, Titir. "Generalizing Cross-domain Retrieval Algorithms". Thesis, 2021. https://etd.iisc.ac.in/handle/2005/5869.
Pełny tekst źródła"Video2Vec: Learning Semantic Spatio-Temporal Embedding for Video Representations". Master's thesis, 2016. http://hdl.handle.net/2286/R.I.40765.
Pełny tekst źródłaDissertation/Thesis
Masters Thesis Computer Science 2016
Book chapters on the topic "Zero-shot Retrieval"
Fröbe, Maik, Christopher Akiki, Martin Potthast, and Matthias Hagen. "How Train–Test Leakage Affects Zero-Shot Retrieval". In String Processing and Information Retrieval, 147–61. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-20643-6_11.
Chi, Jingze, Xin Huang, and Yuxin Peng. "Zero-Shot Cross-Media Retrieval with External Knowledge". In Communications in Computer and Information Science, 200–211. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-10-8530-7_20.
Yelamarthi, Sasi Kiran, Shiva Krishna Reddy, Ashish Mishra, and Anurag Mittal. "A Zero-Shot Framework for Sketch Based Image Retrieval". In Computer Vision – ECCV 2018, 316–33. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01225-0_19.
Li, Chuang, Lunke Fei, Peipei Kang, Jiahao Liang, Xiaozhao Fang, and Shaohua Teng. "Semantic-Adversarial Graph Convolutional Network for Zero-Shot Cross-Modal Retrieval". In Lecture Notes in Computer Science, 459–72. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-20865-2_34.
Zhang, Donglin, Xiao-Jun Wu, and Jun Yu. "Discrete Bidirectional Matrix Factorization Hashing for Zero-Shot Cross-Media Retrieval". In Pattern Recognition and Computer Vision, 524–36. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-88007-1_43.
Li, Mingkang, and Yonggang Qi. "XPNet: Cross-Domain Prototypical Network for Zero-Shot Sketch-Based Image Retrieval". In Pattern Recognition and Computer Vision, 394–410. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-18907-4_31.
Chen, Tao, Mingyang Zhang, Jing Lu, Michael Bendersky, and Marc Najork. "Out-of-Domain Semantics to the Rescue! Zero-Shot Hybrid Retrieval Models". In Lecture Notes in Computer Science, 95–110. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-99736-6_7.
Zan, Daoguang, Sirui Wang, Hongzhi Zhang, Yuanmeng Yan, Wei Wu, Bei Guan, and Yongji Wang. "S²QL: Retrieval Augmented Zero-Shot Question Answering over Knowledge Graph". In Advances in Knowledge Discovery and Data Mining, 223–36. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-05981-0_18.
MacAvaney, Sean, Luca Soldaini, and Nazli Goharian. "Teaching a New Dog Old Tricks: Resurrecting Multilingual Retrieval Using Zero-Shot Learning". In Lecture Notes in Computer Science, 246–54. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-45442-5_31.
Glavaš, Goran, and Ivan Vulić. "Zero-Shot Language Transfer for Cross-Lingual Sentence Retrieval Using Bidirectional Attention Model". In Lecture Notes in Computer Science, 523–38. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-15712-8_34.
Pełny tekst źródłaStreszczenia konferencji na temat "Zero-shot Retrieval"
Chi, Jingze, and Yuxin Peng. "Dual Adversarial Networks for Zero-shot Cross-media Retrieval". In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/92.
Honbu, Yuma, and Keiji Yanai. "Few-Shot and Zero-Shot Semantic Segmentation for Food Images". In ICMR '21: International Conference on Multimedia Retrieval. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3463947.3469234.
Huang, Siteng, Qiyao Wei, and Donglin Wang. "Reference-Limited Compositional Zero-Shot Learning". In ICMR '23: International Conference on Multimedia Retrieval. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3591106.3592225.
Wang, Zhipeng, Hao Wang, Jiexi Yan, Aming Wu, and Cheng Deng. "Domain-Smoothing Network for Zero-Shot Sketch-Based Image Retrieval". In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/158.
Xu, Yahui, Yang Yang, Fumin Shen, Xing Xu, Yuxuan Zhou, and Heng Tao Shen. "Attribute hashing for zero-shot image retrieval". In 2017 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2017. http://dx.doi.org/10.1109/icme.2017.8019425.
Wang, Guolong, Xun Wu, Zhaoyuan Liu, and Junchi Yan. "Prompt-based Zero-shot Video Moment Retrieval". In MM '22: The 30th ACM International Conference on Multimedia. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3503161.3548004.
Xu, Canwen, Daya Guo, Nan Duan, and Julian McAuley. "LaPraDoR: Unsupervised Pretrained Dense Retriever for Zero-Shot Text Retrieval". In Findings of the Association for Computational Linguistics: ACL 2022. Stroudsburg, PA, USA: Association for Computational Linguistics, 2022. http://dx.doi.org/10.18653/v1/2022.findings-acl.281.
Xu, Xing, Fumin Shen, Yang Yang, Jie Shao, and Zi Huang. "Transductive Visual-Semantic Embedding for Zero-shot Learning". In ICMR '17: International Conference on Multimedia Retrieval. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3078971.3078977.
Gao, LianLi, Jingkuan Song, Junming Shao, Xiaofeng Zhu, and HengTao Shen. "Zero-shot Image Categorization by Image Correlation Exploration". In ICMR '15: International Conference on Multimedia Retrieval. New York, NY, USA: ACM, 2015. http://dx.doi.org/10.1145/2671188.2749309.
Sharma, Prawaal, and Navneet Goyal. "Zero-shot reductive paraphrasing for digitally semi-literate". In FIRE 2021: Forum for Information Retrieval Evaluation. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3503162.3503171.