A ready-made bibliography on the topic "Cross-Modal Retrieval under Noisy Labels"

Create accurate references in APA, MLA, Chicago, Harvard, and many other citation styles.

Browse the lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Cross-Modal Retrieval under Noisy Labels".

An "Add to bibliography" button appears next to every work in the bibliography. Use it, and we will automatically generate a bibliographic reference for the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a publication as a .pdf file and read its abstract online, provided the relevant details are available in the metadata.

Journal articles on the topic "Cross-Modal Retrieval under Noisy Labels"

1. Sotudian, Shahabeddin, Ruidi Chen, and Ioannis Ch. Paschalidis. "Distributionally robust learning-to-rank under the Wasserstein metric". PLOS ONE 18, no. 3 (March 30, 2023): e0283574. http://dx.doi.org/10.1371/journal.pone.0283574.

Abstract:
Despite their satisfactory performance, most existing listwise Learning-To-Rank (LTR) models do not consider the crucial issue of robustness. A data set can be contaminated in various ways, including human error in labeling or annotation, distributional data shift, and malicious adversaries who wish to degrade the algorithm’s performance. It has been shown that Distributionally Robust Optimization (DRO) is resilient against various types of noise and perturbations. To fill this gap, we introduce a new listwise LTR model called Distributionally Robust Multi-output Regression Ranking (DRMRR). Different from existing methods, the scoring function of DRMRR was designed as a multivariate mapping from a feature vector to a vector of deviation scores, which captures local context information and cross-document interactions. In this way, we are able to incorporate the LTR metrics into our model. DRMRR uses a Wasserstein DRO framework to minimize a multi-output loss function under the most adverse distributions in the neighborhood of the empirical data distribution defined by a Wasserstein ball. We present a compact and computationally solvable reformulation of the min-max formulation of DRMRR. Our experiments were conducted on two real-world applications: medical document retrieval and drug response prediction, showing that DRMRR notably outperforms state-of-the-art LTR models. We also conducted an extensive analysis to examine the resilience of DRMRR against various types of noise: Gaussian noise, adversarial perturbations, and label poisoning. Accordingly, DRMRR is not only able to achieve significantly better performance than other baselines, but it can maintain a relatively stable performance as more noise is added to the data.
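
For orientation, the min-max objective this abstract describes can be written compactly. The formula below is reconstructed from the abstract's own wording; the symbols (f_θ for the multivariate scoring function, ℓ for the multi-output loss, ε for the radius of the Wasserstein ball) are illustrative placeholders, not the paper's notation:

```latex
\min_{\theta} \;\; \sup_{Q \,\in\, \mathcal{B}_{\varepsilon}(\hat{P}_N)} \;
\mathbb{E}_{(x,\, y) \sim Q} \left[ \ell\big( f_{\theta}(x),\, y \big) \right]
```

Here \(\hat{P}_N\) is the empirical distribution of the N training samples and \(\mathcal{B}_{\varepsilon}(\hat{P}_N)\) is the Wasserstein ball of radius ε around it; per the abstract, the paper's contribution is a compact, computationally solvable reformulation of exactly this kind of min-max problem for listwise ranking.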
2. Li, Mingyong, Qiqi Li, Yan Ma, and Degang Yang. "Semantic-guided autoencoder adversarial hashing for large-scale cross-modal retrieval". Complex & Intelligent Systems (January 4, 2022). http://dx.doi.org/10.1007/s40747-021-00615-3.

Abstract:
With the vigorous development of mobile Internet technology and the popularization of smart devices, the amount of multimedia data has exploded and its forms have become increasingly diverse. People's demand for information is no longer satisfied by single-modal data retrieval, and cross-modal retrieval has become a research hotspot in recent years. Owing to the strong feature-learning ability of deep learning, cross-modal deep hashing has been studied extensively. However, the similarity of different modalities is difficult to measure directly because cross-modal data differ in distribution and representation, so it is urgent to close the modality gap and improve retrieval accuracy. Some previous work has introduced GANs into cross-modal hashing to reduce semantic differences between modalities, but most existing GAN-based cross-modal hashing methods suffer from unstable network training and vanishing gradients, which hinder the elimination of modality differences. To solve this issue, this paper proposes a novel Semantic-guided Autoencoder Adversarial Hashing method for cross-modal retrieval (SAAH). First, two adversarial autoencoder networks, under the guidance of semantic multi-labels, maximize the semantic relevance of instances and maintain cross-modal invariance. Second, under semantic supervision, the adversarial module guides the feature-learning process and maintains the relations between modalities. In addition, to preserve the inter-modal correlation of all similar pairs, the paper uses two types of loss functions to maintain similarity. To verify the effectiveness of the proposed method, extensive experiments were conducted on three widely used cross-modal datasets (MIRFLICKR, NUS-WIDE, and MS COCO); compared with several representative state-of-the-art cross-modal retrieval methods, SAAH achieved leading retrieval performance.
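
To make the architecture this abstract sketches more concrete, here is a minimal, hypothetical PyTorch rendering of the adversarial-autoencoder hashing idea: two modality-specific autoencoders emit relaxed binary codes, a discriminator learns to tell which modality a code came from, and a label-derived similarity matrix supervises the codes. All dimensions, loss weights, and names below are assumptions for illustration, not the actual SAAH design:

```python
# Illustrative sketch only: layer sizes, feature dimensions (4096-d image,
# 1386-d text), code length, and loss weights are assumed, not taken from SAAH.
import torch
import torch.nn as nn

class ModalityAutoencoder(nn.Module):
    """Encodes one modality into a relaxed binary code and reconstructs the input."""
    def __init__(self, in_dim: int, code_bits: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, code_bits), nn.Tanh())  # tanh keeps codes in (-1, 1)
        self.decoder = nn.Sequential(
            nn.Linear(code_bits, 512), nn.ReLU(),
            nn.Linear(512, in_dim))

    def forward(self, x):
        code = self.encoder(x)
        return code, self.decoder(code)

img_ae, txt_ae = ModalityAutoencoder(4096), ModalityAutoencoder(1386)
# The discriminator tries to guess a code's modality; fooling it pulls the two
# code distributions together, which is the "modality gap" reduction at work.
discriminator = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 1))
bce, mse = nn.BCEWithLogitsLoss(), nn.MSELoss()

def generator_step(img_feat, txt_feat, label_sim):
    """label_sim[i, j] = 1.0 if items i and j share a semantic label, else 0.0."""
    img_code, img_rec = img_ae(img_feat)
    txt_code, txt_rec = txt_ae(txt_feat)
    # Semantic guidance: cross-modal code similarity should match label similarity.
    sim = img_code @ txt_code.t() / img_code.shape[1]
    semantic_loss = mse(sim, label_sim)
    recon_loss = mse(img_rec, img_feat) + mse(txt_rec, txt_feat)
    # Adversarial term: image codes should look like text codes to the discriminator.
    fool_loss = bce(discriminator(img_code), torch.ones(len(img_code), 1))
    return semantic_loss + recon_loss + 0.1 * fool_loss
```

In an alternating step (not shown), the discriminator would be trained with the opposite targets, and at retrieval time the codes would be binarized, e.g. by taking the sign of the tanh output.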
3. Okamura, Daiki, Ryosuke Harakawa, and Masahiro Iwahashi. "LCNME: Label Correction Using Network Prediction Based on Memorization Effects for Cross-Modal Retrieval with Noisy Labels". IEEE Transactions on Circuits and Systems for Video Technology, 2023, 1. http://dx.doi.org/10.1109/tcsvt.2023.3286546.

Doctoral dissertations on the topic "Cross-Modal Retrieval under Noisy Labels"

1. Mandal, Devraj. "Cross-Modal Retrieval and Hashing". Thesis, 2020. https://etd.iisc.ac.in/handle/2005/4685.

Abstract:
The objective of cross-modal retrieval is to retrieve relevant items from one modality (say, images) given a query from another modality (say, textual documents). Cross-modal retrieval has various applications, such as matching image-sketch, audio-visual, and near infrared-RGB data. The different feature representations of the two modalities, the absence of paired correspondences, etc., make this a very challenging problem. In this thesis, we examine the cross-modal retrieval problem from several angles and propose methodologies to address them.

• In the first work, we propose a novel framework that can work with unpaired data from the two modalities. The method has two steps, consisting of a hash-code learning stage followed by a hash-function learning stage. It can also generate unified hash representations in a post-processing stage for even better performance. Finally, we investigate, formulate, and address the cross-modal hashing problem in the presence of missing similarity information between data items.

• In the second work, we investigate how to make cross-modal hashing algorithms scalable so that they can handle large amounts of training data, and we propose two solutions: the first builds on a mini-batch realization of the previously formulated objective, and the second is based on matrix factorization. We also investigate whether a hashing-based approach can be built without learning a hash function, as is typically done in the literature. Finally, we propose a strategy by which an already trained cross-modal approach can be adapted and updated to accommodate the real-life scenario of a growing label space, without retraining the entire model from scratch.

• In the third work, we explore semi-supervised approaches to cross-modal retrieval. We first propose a novel framework that predicts the labels of unlabeled data using complementary information from the different modalities; it can be used as an add-on with any baseline cross-modal algorithm. The second approach estimates the labels of unlabeled data with a nearest-neighbor strategy and then trains a network with skip connections to predict the true labels.

• In the fourth work, we investigate the cross-modal problem in an incremental multiclass scenario, where new data may contain previously unseen categories. We propose a novel incremental cross-modal hashing algorithm that adapts itself to incoming data of new categories. At every stage, a small amount of old-category data, termed exemplars, is used so that the old data is not forgotten while the model learns from the new incoming data.

• Finally, we investigate the effect of label corruption on cross-modal algorithms. We first study recently proposed training paradigms that focus on small-loss samples to build noise-resistant image classification models, and we improve upon them using techniques such as self-supervision and relabeling of large-loss samples. We then extend this work to cross-modal retrieval under noisy data (see the sketch after this abstract).
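
Because the last bullet leans on the small-loss/memorization-effect paradigm, a minimal sketch of that idea follows. This is a toy single-model classification version under assumed names and a typical keep-rate schedule; the thesis extends the paradigm to cross-modal retrieval, which this sketch does not capture:

```python
# Hedged sketch of small-loss sample selection: under the memorization effect,
# samples a network fits with low loss early in training are likely to carry
# clean labels, so each mini-batch is filtered before the gradient step.
import torch
import torch.nn.functional as F

def small_loss_step(model, optimizer, x, y, keep_rate: float) -> float:
    """One update that trusts only the keep_rate fraction of the batch
    with the smallest per-sample loss."""
    per_sample = F.cross_entropy(model(x), y, reduction="none")
    n_keep = max(1, int(keep_rate * len(y)))
    keep = torch.topk(per_sample, n_keep, largest=False).indices  # small-loss subset
    loss = per_sample[keep].mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Typical schedule (an assumption, not the thesis's exact one): trust the whole
# batch at first, then gradually drop an estimated noise fraction:
#   keep_rate = 1.0 - noise_estimate * min(epoch / warmup_epochs, 1.0)
```

Relabeling of large-loss samples, mentioned in the bullet, would then reuse the model's own predictions on the complement of the selected subset.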

Conference papers on the topic "Cross-Modal Retrieval under Noisy Labels"

1. Mandal, Devraj, and Soma Biswas. "Cross-Modal Retrieval With Noisy Labels". In 2020 IEEE International Conference on Image Processing (ICIP). IEEE, 2020. http://dx.doi.org/10.1109/icip40778.2020.9190981.

2. Hu, Peng, Xi Peng, Hongyuan Zhu, Liangli Zhen, and Jie Lin. "Learning Cross-Modal Retrieval with Noisy Labels". In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2021. http://dx.doi.org/10.1109/cvpr46437.2021.00536.

3. Li, Runhao, Zhenyu Weng, Huiping Zhuang, Yongming Chen, and Zhiping Lin. "Neighborhood Learning from Noisy Labels for Cross-Modal Retrieval". In 2023 IEEE International Symposium on Circuits and Systems (ISCAS). IEEE, 2023. http://dx.doi.org/10.1109/iscas46773.2023.10181441.

4. Xu, Tianyuan, Xueliang Liu, Zhen Huang, Dan Guo, Richang Hong, and Meng Wang. "Early-Learning Regularized Contrastive Learning for Cross-Modal Retrieval with Noisy Labels". In MM '22: The 30th ACM International Conference on Multimedia. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3503161.3548066.

5. Feng, Yanglin, Hongyuan Zhu, Dezhong Peng, Xi Peng, and Peng Hu. "RONO: Robust Discriminative Learning with Noisy Labels for 2D-3D Cross-Modal Retrieval". In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2023. http://dx.doi.org/10.1109/cvpr52729.2023.01117.

6. Okamura, Daiki, Ryosuke Harakawa, and Masahiro Iwahashi. "LCN: Label Correction Based on Network Prediction for Cross-Modal Retrieval with Noisy Labels". In 2022 Asia Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC). IEEE, 2022. http://dx.doi.org/10.23919/apsipaasc55919.2022.9980224.
