Academic literature on the topic 'Cross-Modal Retrieval and Hashing'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Cross-Modal Retrieval and Hashing.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "Cross-Modal Retrieval and Hashing"
Liu, Huan, Jiang Xiong, Nian Zhang, Fuming Liu, and Xitao Zou. "Quadruplet-Based Deep Cross-Modal Hashing." Computational Intelligence and Neuroscience 2021 (July 2, 2021): 1–10. http://dx.doi.org/10.1155/2021/9968716.
Liu, Xuanwu, Guoxian Yu, Carlotta Domeniconi, Jun Wang, Yazhou Ren, and Maozu Guo. "Ranking-Based Deep Cross-Modal Hashing." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4400–4407. http://dx.doi.org/10.1609/aaai.v33i01.33014400.
Yang, Xiaohan, Zhen Wang, Nannan Wu, Guokun Li, Chuang Feng, and Pingping Liu. "Unsupervised Deep Relative Neighbor Relationship Preserving Cross-Modal Hashing." Mathematics 10, no. 15 (July 28, 2022): 2644. http://dx.doi.org/10.3390/math10152644.
Li, Chao, Cheng Deng, Lei Wang, De Xie, and Xianglong Liu. "Coupled CycleGAN: Unsupervised Hashing Network for Cross-Modal Retrieval." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 176–83. http://dx.doi.org/10.1609/aaai.v33i01.3301176.
刘, 志虎. "Label Consistency Hashing for Cross-Modal Retrieval." Computer Science and Application 11, no. 04 (2021): 1104–12. http://dx.doi.org/10.12677/csa.2021.114114.
Yao, Tao, Xiangwei Kong, Haiyan Fu, and Qi Tian. "Semantic consistency hashing for cross-modal retrieval." Neurocomputing 193 (June 2016): 250–59. http://dx.doi.org/10.1016/j.neucom.2016.02.016.
Chen, Shubai, Song Wu, and Li Wang. "Hierarchical semantic interaction-based deep hashing network for cross-modal retrieval." PeerJ Computer Science 7 (May 25, 2021): e552. http://dx.doi.org/10.7717/peerj-cs.552.
Li, Mingyong, Qiqi Li, Lirong Tang, Shuang Peng, Yan Ma, and Degang Yang. "Deep Unsupervised Hashing for Large-Scale Cross-Modal Retrieval Using Knowledge Distillation Model." Computational Intelligence and Neuroscience 2021 (July 17, 2021): 1–11. http://dx.doi.org/10.1155/2021/5107034.
Zhong, Fangming, Zhikui Chen, and Geyong Min. "Deep Discrete Cross-Modal Hashing for Cross-Media Retrieval." Pattern Recognition 83 (November 2018): 64–77. http://dx.doi.org/10.1016/j.patcog.2018.05.018.
Qi, Xiaojun, Xianhua Zeng, Shumin Wang, Yicai Xie, and Liming Xu. "Cross-modal variable-length hashing based on hierarchy." Intelligent Data Analysis 25, no. 3 (April 20, 2021): 669–85. http://dx.doi.org/10.3233/ida-205162.
Dissertations / Theses on the topic "Cross-Modal Retrieval and Hashing"
Shen, Yuming. "Deep binary representation learning for single/cross-modal data retrieval." Thesis, University of East Anglia, 2018. https://ueaeprints.uea.ac.uk/67635/.
Zhu, Meng. "Cross-modal semantic-associative labelling, indexing and retrieval of multimodal data." Thesis, University of Reading, 2010. http://centaur.reading.ac.uk/24828/.
Saragiotis, Panagiotis. "Cross-modal classification and retrieval of multimodal data using combinations of neural networks." Thesis, University of Surrey, 2006. http://epubs.surrey.ac.uk/843338/.
Surian, Didi. "Novel Applications Using Latent Variable Models." Thesis, The University of Sydney, 2015. http://hdl.handle.net/2123/14014.
Tran, Thi Quynh Nhi. "Robust and comprehensive joint image-text representations." Thesis, Paris, CNAM, 2017. http://www.theses.fr/2017CNAM1096/document.
Full textThis thesis investigates the joint modeling of visual and textual content of multimedia documents to address cross-modal problems. Such tasks require the ability to match information across modalities. A common representation space, obtained by eg Kernel Canonical Correlation Analysis, on which images and text can be both represented and directly compared is a generally adopted solution.Nevertheless, such a joint space still suffers from several deficiencies that may hinder the performance of cross-modal tasks. An important contribution of this thesis is therefore to identify two major limitations of such a space. The first limitation concerns information that is poorly represented on the common space yet very significant for a retrieval task. The second limitation consists in a separation between modalities on the common space, which leads to coarse cross-modal matching. To deal with the first limitation concerning poorly-represented data, we put forward a model which first identifies such information and then finds ways to combine it with data that is relatively well-represented on the joint space. Evaluations on emph{text illustration} tasks show that by appropriately identifying and taking such information into account, the results of cross-modal retrieval can be strongly improved. The major work in this thesis aims to cope with the separation between modalities on the joint space to enhance the performance of cross-modal tasks.We propose two representation methods for bi-modal or uni-modal documents that aggregate information from both the visual and textual modalities projected on the joint space. Specifically, for uni-modal documents we suggest a completion process relying on an auxiliary dataset to find the corresponding information in the absent modality and then use such information to build a final bi-modal representation for a uni-modal document. 
Evaluations show that our approaches achieve state-of-the-art results on several standard and challenging datasets for cross-modal retrieval or bi-modal and cross-modal classification
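The common-space matching that recurs throughout these theses can be illustrated with a minimal sketch: features from each modality are projected into a shared space, binarized into hash codes, and retrieval ranks items by Hamming distance. This is not the method of any cited work; the random projections below are hypothetical stand-ins for learned ones (e.g. CCA-based or deep), and all names and dimensions are illustrative.

```python
import numpy as np

def hash_codes(features, projection):
    """Project features into a shared latent space and binarize by sign,
    yielding compact codes that are comparable across modalities."""
    return (features @ projection > 0).astype(np.uint8)

def hamming_retrieve(query_code, db_codes, k=5):
    """Rank database items by Hamming distance to the query code."""
    dists = (db_codes != query_code).sum(axis=1)
    return np.argsort(dists)[:k]

rng = np.random.default_rng(0)
# Toy stand-ins: 64-d image features, 32-d text features, and random
# projections in place of learned ones (e.g. from CCA or a deep model).
P_img = rng.normal(size=(64, 16))
P_txt = rng.normal(size=(32, 16))
image_db = hash_codes(rng.normal(size=(100, 64)), P_img)     # 100 image codes
text_query = hash_codes(rng.normal(size=(1, 32)), P_txt)[0]  # one text code
print(hamming_retrieve(text_query, image_db))  # indices of nearest images
```

In a real system the two projections would be trained jointly so that matching image-text pairs receive nearby codes; here they merely show the retrieval mechanics once binary codes exist.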
Mandal, Devraj. "Cross-Modal Retrieval and Hashing." Thesis, 2020. https://etd.iisc.ac.in/handle/2005/4685.
Li, Yan-Fu (李彥甫). "The Cross-Modal Method of Tag Labeling in Music Information Retrieval." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/45038305568580924323.
Full text輔仁大學
資訊工程學系
96
A music object contains multi-faceted features, such as average frequency, speed, timbre, melody, rhythm, genre, and so on. These features are extracted from various feature domains, which fall into two types: quantified and unquantifiable. Within a quantified feature domain, features are expressed as numerical values; for example, if a music object has three important average frequencies, we quantify them as the numerical values 20 Hz, 80 Hz, and 100 Hz in the average-frequency domain. In contrast, features in an unquantifiable feature domain are described by non-numerical values (e.g., letters) and are difficult to define mathematically; the genre of a music object, for instance, is hard to extract with a filter. However, among the features of a music object, the unquantifiable ones are important for the human auditory system. We therefore introduce a cross-modal association method to associate the quantified and unquantifiable features. We represent music objects, including both quantified and unquantifiable features, as a multimedia graph (MMG) [1], which converts the association problem into a graph problem, and apply a link-analysis algorithm to rank the nodes in the graph. We then label the music object according to the rank of the nodes.
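The graph-based association the abstract describes, building a graph over quantified feature values and unquantifiable tags and then ranking nodes by link analysis, can be sketched with a simple PageRank-style power iteration. This is an illustrative toy, not the MMG construction from the thesis; the adjacency matrix, node roles, and parameter values are hypothetical.

```python
import numpy as np

def pagerank(adj, damping=0.85, iters=100):
    """Power-iteration link analysis over an association graph:
    each node's stationary score reflects its importance."""
    n = adj.shape[0]
    col_sums = adj.sum(axis=0)
    col_sums[col_sums == 0] = 1.0  # guard against dangling nodes
    M = adj / col_sums             # column-stochastic transition matrix
    r = np.full(n, 1.0 / n)        # uniform initial distribution
    for _ in range(iters):
        r = (1.0 - damping) / n + damping * (M @ r)
    return r

# Toy graph: nodes 0-2 stand for quantified feature values (e.g. frequencies),
# nodes 3-4 for unquantifiable tags (e.g. genres); edges encode co-occurrence.
A = np.array([[0, 1, 0, 1, 0],
              [1, 0, 1, 1, 1],
              [0, 1, 0, 0, 1],
              [1, 1, 0, 0, 0],
              [0, 1, 1, 0, 0]], dtype=float)
scores = pagerank(A)
print(scores[3:])  # relative ranking of the candidate tag nodes
```

Labeling would then pick the highest-scoring tag nodes reachable from a given music object's feature nodes; the thesis's actual algorithm over the MMG may differ in both graph construction and ranking.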
Yang, Bo. "Semantic-aware data processing towards cross-modal multimedia analysis and content-based retrieval in distributed and mobile environments /." 2007. http://etda.libraries.psu.edu/theses/approved/WorldWideIndex/ETD-1850/index.html.
Ramanishka, Vasili. "Describing and retrieving visual content using natural language." Thesis, 2020. https://hdl.handle.net/2144/42026.
Books on the topic "Cross-Modal Retrieval and Hashing"
Peters, C., ed. Evaluation of multilingual and multi-modal information retrieval: 7th Workshop of the Cross-Language Evaluation Forum, CLEF 2006, Alicante, Spain, September 20–22, 2006; revised selected papers. Berlin: Springer, 2007.
Gey, Fredric C., Paul Clough, Bernardo Magnini, Douglas W. Oard, and Jussi Karlgren. Evaluation of Multilingual and Multi-Modal Information Retrieval: 7th Workshop of the Cross-Language Evaluation Forum, CLEF 2006, Alicante, Spain, September 20-22, 2006, Revised Selected Papers. Springer London, Limited, 2007.
Book chapters on the topic "Cross-Modal Retrieval and Hashing"
Zhu, Lei, Jingjing Li, and Weili Guan. "Cross-Modal Hashing." In Synthesis Lectures on Information Concepts, Retrieval, and Services, 45–89. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-37291-9_3.
Mandal, Devraj, Yashas Annadani, and Soma Biswas. "GrowBit: Incremental Hashing for Cross-Modal Retrieval." In Computer Vision – ACCV 2018, 305–21. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-20870-7_19.
Zhang, Peng-Fei, Zi Huang, and Zheng Zhang. "Semantics-Reconstructing Hashing for Cross-Modal Retrieval." In Advances in Knowledge Discovery and Data Mining, 315–27. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-47436-2_24.
Wang, Zening, Yungong Sun, Liang Liu, and Ao Li. "Critical Separation Hashing for Cross-Modal Retrieval." In Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, 171–79. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-36011-4_15.
Li, Mingyang, Xiangwei Kong, Tao Yao, and Yujia Zhang. "Discrete Similarity Preserving Hashing for Cross-modal Retrieval." In Lecture Notes in Computer Science, 202–13. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-24265-7_18.
Xu, Jingnan, Tieying Li, Chong Xi, and Xiaochun Yang. "Self-auxiliary Hashing for Unsupervised Cross Modal Retrieval." In Computer Supported Cooperative Work and Social Computing, 431–43. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-4549-6_33.
Zhu, Lei, Jingjing Li, and Weili Guan. "Composite Multi-modal Hashing." In Synthesis Lectures on Information Concepts, Retrieval, and Services, 91–144. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-37291-9_4.
Weng, Weiwei, Jiagao Wu, Lu Yang, Linfeng Liu, and Bin Hu. "Label-Based Deep Semantic Hashing for Cross-Modal Retrieval." In Neural Information Processing, 24–36. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-36718-3_3.
Zhang, Xi, Hanjiang Lai, and Jiashi Feng. "Attention-Aware Deep Adversarial Hashing for Cross-Modal Retrieval." In Computer Vision – ECCV 2018, 614–29. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01267-0_36.
Tang, Dianjuan, Hui Cui, Dan Shi, and Hua Ji. "Hypergraph-Based Discrete Hashing Learning for Cross-Modal Retrieval." In Advances in Multimedia Information Processing – PCM 2018, 776–86. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-00776-8_71.
Conference papers on the topic "Cross-Modal Retrieval and Hashing"
Xu, Xing, Fumin Shen, Yang Yang, and Heng Tao Shen. "Discriminant Cross-modal Hashing." In ICMR'16: International Conference on Multimedia Retrieval. New York, NY, USA: ACM, 2016. http://dx.doi.org/10.1145/2911996.2912056.
Luo, Xin, Peng-Fei Zhang, Ye Wu, Zhen-Duo Chen, Hua-Junjie Huang, and Xin-Shun Xu. "Asymmetric Discrete Cross-Modal Hashing." In ICMR '18: International Conference on Multimedia Retrieval. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3206025.3206034.
Moran, Sean, and Victor Lavrenko. "Regularised Cross-Modal Hashing." In SIGIR '15: The 38th International ACM SIGIR conference on research and development in Information Retrieval. New York, NY, USA: ACM, 2015. http://dx.doi.org/10.1145/2766462.2767816.
Liu, Yao, Yanhong Yuan, Qialli Huang, and Zhixing Huang. "Hashing for Cross-Modal Similarity Retrieval." In 2015 11th International Conference on Semantics, Knowledge and Grids (SKG). IEEE, 2015. http://dx.doi.org/10.1109/skg.2015.9.
Chen, Tian-yi, Lan Zhang, Shi-cong Zhang, Zi-long Li, and Bai-chuan Huang. "Extensible Cross-Modal Hashing." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/292.
Sun, Changchang, Xuemeng Song, Fuli Feng, Wayne Xin Zhao, Hao Zhang, and Liqiang Nie. "Supervised Hierarchical Cross-Modal Hashing." In SIGIR '19: The 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3331184.3331229.
Yao, Hong-Lei, Yu-Wei Zhan, Zhen-Duo Chen, Xin Luo, and Xin-Shun Xu. "TEACH: Attention-Aware Deep Cross-Modal Hashing." In ICMR '21: International Conference on Multimedia Retrieval. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3460426.3463625.
Wang, Hongya, Shunxin Dai, Ming Du, Bo Xu, and Mingyong Li. "Revisiting Performance Measures for Cross-Modal Hashing." In ICMR '22: International Conference on Multimedia Retrieval. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3512527.3531363.
Shi, Yufeng, Xinge You, Feng Zheng, Shuo Wang, and Qinmu Peng. "Equally-Guided Discriminative Hashing for Cross-modal Retrieval." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/662.
Tan, Shoubiao, Lingyu Hu, Anqi Wang-Xu, Jun Tang, and Zhaohong Jia. "Kernelized cross-modal hashing for multimedia retrieval." In 2016 12th World Congress on Intelligent Control and Automation (WCICA). IEEE, 2016. http://dx.doi.org/10.1109/wcica.2016.7578693.