Academic literature on the topic 'Scalability in Cross-Modal Retrieval'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Scalability in Cross-Modal Retrieval.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "Scalability in Cross-Modal Retrieval"
Hu, Peng, Hongyuan Zhu, Xi Peng, and Jie Lin. "Semi-Supervised Multi-Modal Learning with Balanced Spectral Decomposition." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 01 (April 3, 2020): 99–106. http://dx.doi.org/10.1609/aaai.v34i01.5339.
Rasheed, Ali Salim, Davood Zabihzadeh, and Sumia Abdulhussien Razooqi Al-Obaidi. "Large-Scale Multi-modal Distance Metric Learning with Application to Content-Based Information Retrieval and Image Classification." International Journal of Pattern Recognition and Artificial Intelligence 34, no. 13 (May 26, 2020): 2050034. http://dx.doi.org/10.1142/s0218001420500342.
Zalkow, Frank, and Meinard Müller. "Learning Low-Dimensional Embeddings of Audio Shingles for Cross-Version Retrieval of Classical Music." Applied Sciences 10, no. 1 (December 18, 2019): 19. http://dx.doi.org/10.3390/app10010019.
Huang, Xiaobing, Tian Zhao, and Yu Cao. "PIR." International Journal of Multimedia Data Engineering and Management 5, no. 3 (July 2014): 1–27. http://dx.doi.org/10.4018/ijmdem.2014070101.
Zhang, Zhen, Xu Wu, and Shuang Wei. "Cross-Domain Access Control Model in Industrial IoT Environment." Applied Sciences 13, no. 8 (April 17, 2023): 5042. http://dx.doi.org/10.3390/app13085042.
An, Duo, Alan Chiu, James A. Flanders, Wei Song, Dahua Shou, Yen-Chun Lu, Lars G. Grunnet, et al. "Designing a retrievable and scalable cell encapsulation device for potential treatment of type 1 diabetes." Proceedings of the National Academy of Sciences 115, no. 2 (December 26, 2017): E263–E272. http://dx.doi.org/10.1073/pnas.1708806115.
Tamchyna, Aleš, Ondřej Dušek, Rudolf Rosa, and Pavel Pecina. "MTMonkey: A Scalable Infrastructure for a Machine Translation Web Service." Prague Bulletin of Mathematical Linguistics 100, no. 1 (October 1, 2013): 31–40. http://dx.doi.org/10.2478/pralin-2013-0009.
Zhang, Chengyuan, Jiayu Song, Xiaofeng Zhu, Lei Zhu, and Shichao Zhang. "HCMSL: Hybrid Cross-modal Similarity Learning for Cross-modal Retrieval." ACM Transactions on Multimedia Computing, Communications, and Applications 17, no. 1s (April 20, 2021): 1–22. http://dx.doi.org/10.1145/3412847.
Wu, Yiling, Shuhui Wang, and Qingming Huang. "Multi-modal semantic autoencoder for cross-modal retrieval." Neurocomputing 331 (February 2019): 165–75. http://dx.doi.org/10.1016/j.neucom.2018.11.042.
Devezas, José. "Graph-based entity-oriented search." ACM SIGIR Forum 55, no. 1 (June 2021): 1–2. http://dx.doi.org/10.1145/3476415.3476430.
Full textDissertations / Theses on the topic "Scalability in Cross-Modal Retrieval"
Shen, Yuming. "Deep binary representation learning for single/cross-modal data retrieval." Thesis, University of East Anglia, 2018. https://ueaeprints.uea.ac.uk/67635/.
Zhu, Meng. "Cross-modal semantic-associative labelling, indexing and retrieval of multimodal data." Thesis, University of Reading, 2010. http://centaur.reading.ac.uk/24828/.
Saragiotis, Panagiotis. "Cross-modal classification and retrieval of multimodal data using combinations of neural networks." Thesis, University of Surrey, 2006. http://epubs.surrey.ac.uk/843338/.
Surian, Didi. "Novel Applications Using Latent Variable Models." Thesis, The University of Sydney, 2015. http://hdl.handle.net/2123/14014.
Tran, Thi Quynh Nhi. "Robust and comprehensive joint image-text representations." Thesis, Paris, CNAM, 2017. http://www.theses.fr/2017CNAM1096/document.
Full textThis thesis investigates the joint modeling of visual and textual content of multimedia documents to address cross-modal problems. Such tasks require the ability to match information across modalities. A common representation space, obtained by eg Kernel Canonical Correlation Analysis, on which images and text can be both represented and directly compared is a generally adopted solution.Nevertheless, such a joint space still suffers from several deficiencies that may hinder the performance of cross-modal tasks. An important contribution of this thesis is therefore to identify two major limitations of such a space. The first limitation concerns information that is poorly represented on the common space yet very significant for a retrieval task. The second limitation consists in a separation between modalities on the common space, which leads to coarse cross-modal matching. To deal with the first limitation concerning poorly-represented data, we put forward a model which first identifies such information and then finds ways to combine it with data that is relatively well-represented on the joint space. Evaluations on emph{text illustration} tasks show that by appropriately identifying and taking such information into account, the results of cross-modal retrieval can be strongly improved. The major work in this thesis aims to cope with the separation between modalities on the joint space to enhance the performance of cross-modal tasks.We propose two representation methods for bi-modal or uni-modal documents that aggregate information from both the visual and textual modalities projected on the joint space. Specifically, for uni-modal documents we suggest a completion process relying on an auxiliary dataset to find the corresponding information in the absent modality and then use such information to build a final bi-modal representation for a uni-modal document. Evaluations show that our approaches achieve state-of-the-art results on several standard and challenging datasets for cross-modal retrieval or bi-modal and cross-modal classification
Li, Yan-Fu (李彥甫). "The Cross-Modal Method of Tag Labeling in Music Information Retrieval." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/45038305568580924323.
Full text輔仁大學
資訊工程學系
96
A music object contains multi-faceted features, such as average frequency, tempo, timbre, melody, rhythm, and genre. These features are extracted from different feature domains, which fall into two types: the quantified and the unquantifiable. In a quantified feature domain, features are expressed as numerical values; for example, if a music object has three important average frequencies, we denote them as three numerical values, 20 Hz, 80 Hz, and 100 Hz, in the average-frequency domain. Features in an unquantifiable feature domain, by contrast, are described non-numerically (e.g., as text) and are difficult to define mathematically; the genre of a music object, for instance, is hard to extract with a filter. Yet among the features of a music object, the unquantifiable ones are important to the human auditory system. We therefore introduce a cross-modal association method to associate the quantified and the unquantifiable features. We represent music objects, with both quantified and unquantifiable features, as a multimedia graph (MMG) [1], which converts the association problem into a graph problem, and we apply a link-analysis algorithm to rank the nodes in the graph. The music object is then labeled according to the rank of the nodes.
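The following Python sketch illustrates the graph-based labeling idea from the abstract above: music objects, quantified feature values, and tags become nodes of a small graph, and tag nodes are ranked for an unlabeled song. Personalized PageRank is used here as a generic stand-in for the link-analysis step; the node names and edges are illustrative, not taken from the thesis.

```python
# Toy multimedia-graph tag labeling: rank tag nodes for an unlabeled
# song via a random walk with restart (personalized PageRank).
import networkx as nx

G = nx.Graph()
# Music objects linked to quantified feature nodes (frequency bins).
G.add_edges_from([("song_1", "freq_20Hz"), ("song_1", "freq_80Hz"),
                  ("song_2", "freq_80Hz"), ("song_2", "freq_100Hz")])
# Labeled songs linked to unquantifiable tag nodes (genre).
G.add_edges_from([("song_1", "tag_jazz"), ("song_2", "tag_rock")])
# An unlabeled song that shares features with the labeled ones.
G.add_edges_from([("song_3", "freq_80Hz"), ("song_3", "freq_100Hz")])

# Restart the walk at the unlabeled song so scores reflect proximity to it.
scores = nx.pagerank(G, personalization={"song_3": 1.0})

# Keep only tag nodes and print them by descending score.
tags = {n: s for n, s in scores.items() if n.startswith("tag_")}
for tag, score in sorted(tags.items(), key=lambda kv: -kv[1]):
    print(tag, round(score, 4))
```

Because song_3 shares more quantified features with song_2 than with song_1, the walk assigns tag_rock a higher score, which is the cross-modal association the abstract describes: quantified features propagate evidence toward unquantifiable labels.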
Yang, Bo. "Semantic-aware data processing towards cross-modal multimedia analysis and content-based retrieval in distributed and mobile environments." 2007. http://etda.libraries.psu.edu/theses/approved/WorldWideIndex/ETD-1850/index.html.
Full textRamanishka, Vasili. "Describing and retrieving visual content using natural language." Thesis, 2020. https://hdl.handle.net/2144/42026.
Books on the topic "Scalability in Cross-Modal Retrieval"
Peters, C., ed. Evaluation of Multilingual and Multi-Modal Information Retrieval: 7th Workshop of the Cross-Language Evaluation Forum, CLEF 2006, Alicante, Spain, September 20-22, 2006; Revised Selected Papers. Berlin: Springer, 2007.
Gey, Fredric C., Paul Clough, Bernardo Magnini, Douglas W. Oard, and Jussi Karlgren. Evaluation of Multilingual and Multi-Modal Information Retrieval: 7th Workshop of the Cross-Language Evaluation Forum, CLEF 2006, Alicante, Spain, September 20-22, 2006, Revised Selected Papers. Springer London, Limited, 2007.
Book chapters on the topic "Scalability in Cross-Modal Retrieval"
Zhu, Lei, Jingjing Li, and Weili Guan. "Cross-Modal Hashing." In Synthesis Lectures on Information Concepts, Retrieval, and Services, 45–89. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-37291-9_3.
Li, Qing, and Yu Yang. "Cross-Modal Multimedia Information Retrieval." In Encyclopedia of Database Systems, 1–6. New York, NY: Springer New York, 2016. http://dx.doi.org/10.1007/978-1-4899-7993-3_90-2.
Li, Qing, and Yu Yang. "Cross-Modal Multimedia Information Retrieval." In Encyclopedia of Database Systems, 528–32. Boston, MA: Springer US, 2009. http://dx.doi.org/10.1007/978-0-387-39940-9_90.
Wen, Zhenyu, and Aimin Feng. "Deep Centralized Cross-modal Retrieval." In MultiMedia Modeling, 443–55. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-67832-6_36.
Li, Qing, and Yu Yang. "Cross-Modal Multimedia Information Retrieval." In Encyclopedia of Database Systems, 672–77. New York, NY: Springer New York, 2018. http://dx.doi.org/10.1007/978-1-4614-8265-9_90.
Ning, Xuecheng, Xiaoshan Yang, and Changsheng Xu. "Multi-hop Interactive Cross-Modal Retrieval." In MultiMedia Modeling, 681–93. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-37734-2_55.
Malik, Shaily, Nikhil Bhardwaj, Rahul Bhardwaj, and Saurabh Kumar. "Cross-Modal Retrieval Using Deep Learning." In Proceedings of Third Doctoral Symposium on Computational Intelligence, 725–34. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-3148-2_62.
Xuan, Ruisheng, Weihua Ou, Quan Zhou, Yongfeng Cao, Hua Yang, Xiangguang Xiong, and Fangming Ruan. "Semantics Consistent Adversarial Cross-Modal Retrieval." In Cognitive Internet of Things: Frameworks, Tools and Applications, 463–72. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-04946-1_45.
Mandal, Devraj, Yashas Annadani, and Soma Biswas. "GrowBit: Incremental Hashing for Cross-Modal Retrieval." In Computer Vision – ACCV 2018, 305–21. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-20870-7_19.
Owen, Charles B., and Fillia Makedon. "The Xtrieve Cross-Modal Information Retrieval System." In Computed Synchronization for Multimedia Applications, 149–68. Boston, MA: Springer US, 1999. http://dx.doi.org/10.1007/978-1-4757-4830-7_7.
Conference papers on the topic "Scalability in Cross-Modal Retrieval"
Wang, Bokun, Yang Yang, Xing Xu, Alan Hanjalic, and Heng Tao Shen. "Adversarial Cross-Modal Retrieval." In MM '17: ACM Multimedia Conference. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3123266.3123326.
Jing, Longlong, Elahe Vahdani, Jiaxing Tan, and Yingli Tian. "Cross-Modal Center Loss for 3D Cross-Modal Retrieval." In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2021. http://dx.doi.org/10.1109/cvpr46437.2021.00316.
Zhen, Liangli, Peng Hu, Xu Wang, and Dezhong Peng. "Deep Supervised Cross-Modal Retrieval." In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2019. http://dx.doi.org/10.1109/cvpr.2019.01064.
Ranjan, Viresh, Nikhil Rasiwasia, and C. V. Jawahar. "Multi-label Cross-Modal Retrieval." In 2015 IEEE International Conference on Computer Vision (ICCV). IEEE, 2015. http://dx.doi.org/10.1109/iccv.2015.466.
Zong, Linlin, Qiujie Xie, Jiahui Zhou, Peiran Wu, Xianchao Zhang, and Bo Xu. "FedCMR: Federated Cross-Modal Retrieval." In SIGIR '21: The 44th International ACM SIGIR Conference on Research and Development in Information Retrieval. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3404835.3462989.
Hou, Danyang, Liang Pang, Yanyan Lan, Huawei Shen, and Xueqi Cheng. "Region-based Cross-modal Retrieval." In 2022 International Joint Conference on Neural Networks (IJCNN). IEEE, 2022. http://dx.doi.org/10.1109/ijcnn55064.2022.9892139.
Xu, Xing, Fumin Shen, Yang Yang, and Heng Tao Shen. "Discriminant Cross-modal Hashing." In ICMR'16: International Conference on Multimedia Retrieval. New York, NY, USA: ACM, 2016. http://dx.doi.org/10.1145/2911996.2912056.
Gur, Shir, Natalia Neverova, Chris Stauffer, Ser-Nam Lim, Douwe Kiela, and Austin Reiter. "Cross-Modal Retrieval Augmentation for Multi-Modal Classification." In Findings of the Association for Computational Linguistics: EMNLP 2021. Stroudsburg, PA, USA: Association for Computational Linguistics, 2021. http://dx.doi.org/10.18653/v1/2021.findings-emnlp.11.
Fei, Hongliang, Tan Yu, and Ping Li. "Cross-lingual Cross-modal Pretraining for Multimodal Retrieval." In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stroudsburg, PA, USA: Association for Computational Linguistics, 2021. http://dx.doi.org/10.18653/v1/2021.naacl-main.285.
Song, Ge, and Xiaoyang Tan. "Cross-modal Retrieval via Memory Network." In British Machine Vision Conference 2017. British Machine Vision Association, 2017. http://dx.doi.org/10.5244/c.31.178.