Ready-made bibliography on the topic "Cross-Modal Retrieval and Hashing"
Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles
Browse the lists of recent articles, books, dissertations, abstracts, and other scholarly sources on the topic "Cross-Modal Retrieval and Hashing".
An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically create a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication in .pdf format and read the work's abstract online, if the relevant details are available in the metadata.
Journal articles on the topic "Cross-Modal Retrieval and Hashing"
Liu, Huan, Jiang Xiong, Nian Zhang, Fuming Liu, and Xitao Zou. "Quadruplet-Based Deep Cross-Modal Hashing." Computational Intelligence and Neuroscience 2021 (July 2, 2021): 1–10. http://dx.doi.org/10.1155/2021/9968716.
Liu, Xuanwu, Guoxian Yu, Carlotta Domeniconi, Jun Wang, Yazhou Ren, and Maozu Guo. "Ranking-Based Deep Cross-Modal Hashing." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4400–4407. http://dx.doi.org/10.1609/aaai.v33i01.33014400.
Yang, Xiaohan, Zhen Wang, Nannan Wu, Guokun Li, Chuang Feng, and Pingping Liu. "Unsupervised Deep Relative Neighbor Relationship Preserving Cross-Modal Hashing." Mathematics 10, no. 15 (July 28, 2022): 2644. http://dx.doi.org/10.3390/math10152644.
Li, Chao, Cheng Deng, Lei Wang, De Xie, and Xianglong Liu. "Coupled CycleGAN: Unsupervised Hashing Network for Cross-Modal Retrieval." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 176–83. http://dx.doi.org/10.1609/aaai.v33i01.3301176.
刘, 志虎. "Label Consistency Hashing for Cross-Modal Retrieval." Computer Science and Application 11, no. 04 (2021): 1104–12. http://dx.doi.org/10.12677/csa.2021.114114.
Yao, Tao, Xiangwei Kong, Haiyan Fu, and Qi Tian. "Semantic consistency hashing for cross-modal retrieval." Neurocomputing 193 (June 2016): 250–59. http://dx.doi.org/10.1016/j.neucom.2016.02.016.
Chen, Shubai, Song Wu, and Li Wang. "Hierarchical semantic interaction-based deep hashing network for cross-modal retrieval." PeerJ Computer Science 7 (May 25, 2021): e552. http://dx.doi.org/10.7717/peerj-cs.552.
Li, Mingyong, Qiqi Li, Lirong Tang, Shuang Peng, Yan Ma, and Degang Yang. "Deep Unsupervised Hashing for Large-Scale Cross-Modal Retrieval Using Knowledge Distillation Model." Computational Intelligence and Neuroscience 2021 (July 17, 2021): 1–11. http://dx.doi.org/10.1155/2021/5107034.
Zhong, Fangming, Zhikui Chen, and Geyong Min. "Deep Discrete Cross-Modal Hashing for Cross-Media Retrieval." Pattern Recognition 83 (November 2018): 64–77. http://dx.doi.org/10.1016/j.patcog.2018.05.018.
Qi, Xiaojun, Xianhua Zeng, Shumin Wang, Yicai Xie, and Liming Xu. "Cross-modal variable-length hashing based on hierarchy." Intelligent Data Analysis 25, no. 3 (April 20, 2021): 669–85. http://dx.doi.org/10.3233/ida-205162.
Doctoral dissertations on the topic "Cross-Modal Retrieval and Hashing"
Shen, Yuming. "Deep binary representation learning for single/cross-modal data retrieval." Thesis, University of East Anglia, 2018. https://ueaeprints.uea.ac.uk/67635/.
Zhu, Meng. "Cross-modal semantic-associative labelling, indexing and retrieval of multimodal data." Thesis, University of Reading, 2010. http://centaur.reading.ac.uk/24828/.
Saragiotis, Panagiotis. "Cross-modal classification and retrieval of multimodal data using combinations of neural networks." Thesis, University of Surrey, 2006. http://epubs.surrey.ac.uk/843338/.
Surian, Didi. "Novel Applications Using Latent Variable Models." Thesis, The University of Sydney, 2015. http://hdl.handle.net/2123/14014.
Tran, Thi Quynh Nhi. "Robust and comprehensive joint image-text representations." Thesis, Paris, CNAM, 2017. http://www.theses.fr/2017CNAM1096/document.
Pełny tekst źródłaThis thesis investigates the joint modeling of visual and textual content of multimedia documents to address cross-modal problems. Such tasks require the ability to match information across modalities. A common representation space, obtained by eg Kernel Canonical Correlation Analysis, on which images and text can be both represented and directly compared is a generally adopted solution.Nevertheless, such a joint space still suffers from several deficiencies that may hinder the performance of cross-modal tasks. An important contribution of this thesis is therefore to identify two major limitations of such a space. The first limitation concerns information that is poorly represented on the common space yet very significant for a retrieval task. The second limitation consists in a separation between modalities on the common space, which leads to coarse cross-modal matching. To deal with the first limitation concerning poorly-represented data, we put forward a model which first identifies such information and then finds ways to combine it with data that is relatively well-represented on the joint space. Evaluations on emph{text illustration} tasks show that by appropriately identifying and taking such information into account, the results of cross-modal retrieval can be strongly improved. The major work in this thesis aims to cope with the separation between modalities on the joint space to enhance the performance of cross-modal tasks.We propose two representation methods for bi-modal or uni-modal documents that aggregate information from both the visual and textual modalities projected on the joint space. Specifically, for uni-modal documents we suggest a completion process relying on an auxiliary dataset to find the corresponding information in the absent modality and then use such information to build a final bi-modal representation for a uni-modal document. Evaluations show that our approaches achieve state-of-the-art results on several standard and challenging datasets for cross-modal retrieval or bi-modal and cross-modal classification
Mandal, Devraj. "Cross-Modal Retrieval and Hashing." Thesis, 2020. https://etd.iisc.ac.in/handle/2005/4685.
Li, Yan-Fu, and 李彥甫. "The Cross-Modal Method of Tag Labeling in Music Information Retrieval." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/45038305568580924323.
Pełny tekst źródła輔仁大學
資訊工程學系
96
A music object contains multi-faceted features, such as the average frequency, speed, timbre, melody, rhythm, genre and so on. These features are extracted from various feature domains, which fall into two types: the quantified and the unquantifiable. Within a quantified feature domain, the features are expressed as numerical values; for example, if there are three important average frequencies in a music object, we quantify and denote them as three numerical values, 20Hz, 80Hz and 100Hz, in the feature domain "average frequency". Features in an unquantifiable feature domain, on the other hand, are described by non-numerical values (e.g. letters) and are difficult to define mathematically; the genre of a music object, for instance, is hard to extract with a filter. Yet among the features of a music object, the unquantifiable features are important for the human auditory system. Therefore, we introduce a cross-modal association method to associate the quantified and the unquantifiable features. We represent music objects, including both quantified and unquantifiable features, as a multimedia graph (MMG) [1], which converts the association problem into a graph problem, and we apply a link analysis algorithm to rank the nodes in the graph. We then label the music object by the rank of the nodes.
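To make the association idea in this abstract concrete, here is a hedged Python sketch of graph-based tag labeling: music objects, quantified features, and tags form one graph, and tag nodes are ranked with networkx's PageRank personalized on a query object. The node names, edges, and the use of plain personalized PageRank are invented for illustration; the thesis's actual MMG construction and link analysis algorithm may differ.

```python
# Sketch: rank tag nodes in a mixed graph of music objects, quantified
# features, and tags via personalized PageRank (an assumed stand-in for
# the paper's link analysis step).
import networkx as nx

G = nx.Graph()
# Music objects linked to the quantified features they exhibit.
G.add_edges_from([("song1", "freq:20Hz"), ("song1", "freq:80Hz"),
                  ("song2", "freq:80Hz"), ("song2", "freq:100Hz")])
# Some objects already carry unquantifiable labels, e.g. genre tags.
G.add_edges_from([("song1", "tag:jazz"), ("song2", "tag:rock")])

# An unlabeled query object that shares quantified features with song2.
G.add_edges_from([("query", "freq:80Hz"), ("query", "freq:100Hz")])

# Rank all nodes relative to the query; score flows through shared features.
scores = nx.pagerank(G, personalization={"query": 1.0})

# Keep only tag nodes; the highest-ranked tags become the predicted labels.
tags = {n: s for n, s in scores.items() if n.startswith("tag:")}
print(sorted(tags, key=tags.get, reverse=True))  # expect 'tag:rock' first
```

In a realistic setting the graph would span many feature domains and objects, and the ranking scores of the tag nodes would serve directly as the predicted labels for the query music object.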
Yang, Bo. "Semantic-aware data processing towards cross-modal multimedia analysis and content-based retrieval in distributed and mobile environments." 2007. http://etda.libraries.psu.edu/theses/approved/WorldWideIndex/ETD-1850/index.html.
Ramanishka, Vasili. "Describing and retrieving visual content using natural language." Thesis, 2020. https://hdl.handle.net/2144/42026.
Books on the topic "Cross-Modal Retrieval and Hashing"
Peters, C., ed. Evaluation of Multilingual and Multi-Modal Information Retrieval: 7th Workshop of the Cross-Language Evaluation Forum, CLEF 2006, Alicante, Spain, September 20-22, 2006; Revised Selected Papers. Berlin: Springer, 2007.
Gey, Fredric C., Paul Clough, Bernardo Magnini, Douglas W. Oard, and Jussi Karlgren. Evaluation of Multilingual and Multi-Modal Information Retrieval: 7th Workshop of the Cross-Language Evaluation Forum, CLEF 2006, Alicante, Spain, September 20-22, 2006, Revised Selected Papers. Springer London, Limited, 2007.
Book chapters on the topic "Cross-Modal Retrieval and Hashing"
Zhu, Lei, Jingjing Li, and Weili Guan. "Cross-Modal Hashing." In Synthesis Lectures on Information Concepts, Retrieval, and Services, 45–89. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-37291-9_3.
Mandal, Devraj, Yashas Annadani, and Soma Biswas. "GrowBit: Incremental Hashing for Cross-Modal Retrieval." In Computer Vision – ACCV 2018, 305–21. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-20870-7_19.
Zhang, Peng-Fei, Zi Huang, and Zheng Zhang. "Semantics-Reconstructing Hashing for Cross-Modal Retrieval." In Advances in Knowledge Discovery and Data Mining, 315–27. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-47436-2_24.
Wang, Zening, Yungong Sun, Liang Liu, and Ao Li. "Critical Separation Hashing for Cross-Modal Retrieval." In Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, 171–79. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-36011-4_15.
Li, Mingyang, Xiangwei Kong, Tao Yao, and Yujia Zhang. "Discrete Similarity Preserving Hashing for Cross-modal Retrieval." In Lecture Notes in Computer Science, 202–13. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-24265-7_18.
Xu, Jingnan, Tieying Li, Chong Xi, and Xiaochun Yang. "Self-auxiliary Hashing for Unsupervised Cross Modal Retrieval." In Computer Supported Cooperative Work and Social Computing, 431–43. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-4549-6_33.
Zhu, Lei, Jingjing Li, and Weili Guan. "Composite Multi-modal Hashing." In Synthesis Lectures on Information Concepts, Retrieval, and Services, 91–144. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-37291-9_4.
Weng, Weiwei, Jiagao Wu, Lu Yang, Linfeng Liu, and Bin Hu. "Label-Based Deep Semantic Hashing for Cross-Modal Retrieval." In Neural Information Processing, 24–36. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-36718-3_3.
Zhang, Xi, Hanjiang Lai, and Jiashi Feng. "Attention-Aware Deep Adversarial Hashing for Cross-Modal Retrieval." In Computer Vision – ECCV 2018, 614–29. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01267-0_36.
Tang, Dianjuan, Hui Cui, Dan Shi, and Hua Ji. "Hypergraph-Based Discrete Hashing Learning for Cross-Modal Retrieval." In Advances in Multimedia Information Processing – PCM 2018, 776–86. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-00776-8_71.
Conference papers on the topic "Cross-Modal Retrieval and Hashing"
Xu, Xing, Fumin Shen, Yang Yang, and Heng Tao Shen. "Discriminant Cross-modal Hashing." In ICMR'16: International Conference on Multimedia Retrieval. New York, NY, USA: ACM, 2016. http://dx.doi.org/10.1145/2911996.2912056.
Luo, Xin, Peng-Fei Zhang, Ye Wu, Zhen-Duo Chen, Hua-Junjie Huang, and Xin-Shun Xu. "Asymmetric Discrete Cross-Modal Hashing." In ICMR '18: International Conference on Multimedia Retrieval. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3206025.3206034.
Moran, Sean, and Victor Lavrenko. "Regularised Cross-Modal Hashing." In SIGIR '15: The 38th International ACM SIGIR Conference on Research and Development in Information Retrieval. New York, NY, USA: ACM, 2015. http://dx.doi.org/10.1145/2766462.2767816.
Liu, Yao, Yanhong Yuan, Qialli Huang, and Zhixing Huang. "Hashing for Cross-Modal Similarity Retrieval." In 2015 11th International Conference on Semantics, Knowledge and Grids (SKG). IEEE, 2015. http://dx.doi.org/10.1109/skg.2015.9.
Chen, Tian-yi, Lan Zhang, Shi-cong Zhang, Zi-long Li, and Bai-chuan Huang. "Extensible Cross-Modal Hashing." In Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19). California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/292.
Sun, Changchang, Xuemeng Song, Fuli Feng, Wayne Xin Zhao, Hao Zhang, and Liqiang Nie. "Supervised Hierarchical Cross-Modal Hashing." In SIGIR '19: The 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3331184.3331229.
Yao, Hong-Lei, Yu-Wei Zhan, Zhen-Duo Chen, Xin Luo, and Xin-Shun Xu. "TEACH: Attention-Aware Deep Cross-Modal Hashing." In ICMR '21: International Conference on Multimedia Retrieval. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3460426.3463625.
Wang, Hongya, Shunxin Dai, Ming Du, Bo Xu, and Mingyong Li. "Revisiting Performance Measures for Cross-Modal Hashing." In ICMR '22: International Conference on Multimedia Retrieval. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3512527.3531363.
Shi, Yufeng, Xinge You, Feng Zheng, Shuo Wang, and Qinmu Peng. "Equally-Guided Discriminative Hashing for Cross-modal Retrieval." In Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19). California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/662.
Tan, Shoubiao, Lingyu Hu, Anqi Wang-Xu, Jun Tang, and Zhaohong Jia. "Kernelized cross-modal hashing for multimedia retrieval." In 2016 12th World Congress on Intelligent Control and Automation (WCICA). IEEE, 2016. http://dx.doi.org/10.1109/wcica.2016.7578693.