Academic literature on the topic 'Learning with noisy labels'
Below are lists of relevant journal articles, books, theses, book chapters, conference papers, and reports on the topic 'Learning with noisy labels.'
Journal articles on the topic "Learning with noisy labels"
Xie, Ming-Kun, and Sheng-Jun Huang. "Partial Multi-Label Learning with Noisy Label Identification." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 6454–61. http://dx.doi.org/10.1609/aaai.v34i04.6117.
Chen, Mingcai, Hao Cheng, Yuntao Du, Ming Xu, Wenyu Jiang, and Chongjun Wang. "Two Wrongs Don’t Make a Right: Combating Confirmation Bias in Learning with Label Noise." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (June 26, 2023): 14765–73. http://dx.doi.org/10.1609/aaai.v37i12.26725.
Li, Hui, Zhaodong Niu, Quan Sun, and Yabo Li. "Co-Correcting: Combat Noisy Labels in Space Debris Detection." Remote Sensing 14, no. 20 (October 21, 2022): 5261. http://dx.doi.org/10.3390/rs14205261.
Tang, Xinyu, Milad Nasr, Saeed Mahloujifar, Virat Shejwalkar, Liwei Song, Amir Houmansadr, and Prateek Mittal. "Machine Learning with Differentially Private Labels: Mechanisms and Frameworks." Proceedings on Privacy Enhancing Technologies 2022, no. 4 (October 2022): 332–50. http://dx.doi.org/10.56553/popets-2022-0112.
Wu, Yichen, Jun Shu, Qi Xie, Qian Zhao, and Deyu Meng. "Learning to Purify Noisy Labels via Meta Soft Label Corrector." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (May 18, 2021): 10388–96. http://dx.doi.org/10.1609/aaai.v35i12.17244.
Zheng, Guoqing, Ahmed Hassan Awadallah, and Susan Dumais. "Meta Label Correction for Noisy Label Learning." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (May 18, 2021): 11053–61. http://dx.doi.org/10.1609/aaai.v35i12.17319.
Shi, Jialin, Chenyi Guo, and Ji Wu. "A Hybrid Robust-Learning Architecture for Medical Image Segmentation with Noisy Labels." Future Internet 14, no. 2 (January 26, 2022): 41. http://dx.doi.org/10.3390/fi14020041.
Northcutt, Curtis, Lu Jiang, and Isaac Chuang. "Confident Learning: Estimating Uncertainty in Dataset Labels." Journal of Artificial Intelligence Research 70 (April 14, 2021): 1373–411. http://dx.doi.org/10.1613/jair.1.12125.
Silva, Amila, Ling Luo, Shanika Karunasekera, and Christopher Leckie. "Noise-Robust Learning from Multiple Unsupervised Sources of Inferred Labels." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 8315–23. http://dx.doi.org/10.1609/aaai.v36i8.20806.
Yan, Xuguo, Xuhui Xia, Lei Wang, and Zelin Zhang. "A Progressive Deep Neural Network Training Method for Image Classification with Noisy Labels." Applied Sciences 12, no. 24 (December 12, 2022): 12754. http://dx.doi.org/10.3390/app122412754.
Dissertations / Theses on the topic "Learning with noisy labels"
Yu, Xiyu. "Learning with Biased and Noisy Labels." Thesis, The University of Sydney, 2019. http://hdl.handle.net/2123/20125.
Caye Daudt, Rodrigo. "Convolutional neural networks for change analysis in earth observation images with noisy labels and domain shifts." Electronic Thesis or Diss., Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAT033.
Full textThe analysis of satellite and aerial Earth observation images allows us to obtain precise information over large areas. A multitemporal analysis of such images is necessary to understand the evolution of such areas. In this thesis, convolutional neural networks are used to detect and understand changes using remote sensing images from various sources in supervised and weakly supervised settings. Siamese architectures are used to compare coregistered image pairs and to identify changed pixels. The proposed method is then extended into a multitask network architecture that is used to detect changes and perform land cover mapping simultaneously, which permits a semantic understanding of the detected changes. Then, classification filtering and a novel guided anisotropic diffusion algorithm are used to reduce the effect of biased label noise, which is a concern for automatically generated large-scale datasets. Weakly supervised learning is also achieved to perform pixel-level change detection using only image-level supervision through the usage of class activation maps and a novel spatial attention layer. Finally, a domain adaptation method based on adversarial training is proposed, which succeeds in projecting images from different domains into a common latent space where a given task can be performed. This method is tested not only for domain adaptation for change detection, but also for image classification and semantic segmentation, which proves its versatility
Fang, Tongtong. "Learning from noisy labels by importance reweighting: a deep learning approach." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-264125.
Full textFelaktiga annoteringar kan sänka klassificeringsprestanda.Speciellt för djupa nätverk kan detta leda till dålig generalisering. Nyligen har brusrobust djup inlärning överträffat andra inlärningsmetoder när det gäller hantering av komplexa indata Befintligta resultat från djup inlärning kan dock inte tillhandahålla rimliga viktomfördelningskriterier. För att hantera detta kunskapsgap och inspirerat av domänanpassning föreslår vi en ny robust djup inlärningsmetod som använder omviktning. Omviktningen görs genom att minimera den maximala medelavvikelsen mellan förlustfördelningen av felmärkta och korrekt märkta data. I experiment slår den föreslagna metoden andra metoder. Resultaten visar en stor forskningspotential för att tillämpa domänanpassning. Dessutom motiverar den föreslagna metoden undersökningar av andra intressanta problem inom domänanpassning genom att möjliggöra smarta omviktningar.
Ainapure, Abhijeet Narhar. "Application and Performance Enhancement of Intelligent Cross-Domain Fault Diagnosis in Rotating Machinery." University of Cincinnati / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1623164772153736.
Chan, Jeffrey (Jeffrey D. ). "On boosting and noisy labels." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/100297.
Full textThis electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 53-56).
Boosting is a machine learning technique widely used across many disciplines. Boosting enables one to learn from labeled data in order to predict the labels of unlabeled data. A central property of boosting instrumental to its popularity is its resistance to overfitting. Previous experiments provide a margin-based explanation for this resistance to overfitting. In this thesis, the main finding is that boosting's resistance to overfitting can be understood in terms of how it handles noisy (mislabeled) points. Confirming experimental evidence emerged from experiments using the Wisconsin Diagnostic Breast Cancer (WDBC) dataset commonly used in machine learning experiments. A majority vote ensemble filter identified, on average, 2.5% of the points in the dataset as noisy. The experiments chiefly investigated boosting's treatment of noisy points from a volume-based perspective. While the cell volume surrounding noisy points did not show a significant difference from other points, the decision volume surrounding noisy points was two to three times less than that of non-noisy points. Additional findings showed that decision volume not only provides insight into boosting's resistance to overfitting in the context of noisy points, but also serves as a suitable metric for identifying which points in a dataset are likely to be mislabeled.
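The majority vote ensemble filter mentioned in this abstract can be sketched roughly as follows (a generic noise filter, not the thesis code; the nearest-centroid and k-NN base classifiers and the synthetic 1-D data are invented for illustration): each point receives out-of-fold predictions from several classifiers, and a point whose given label is contradicted by a majority of them is flagged as noisy.

```python
import numpy as np

def nearest_centroid(tx, ty, x):
    # Base classifier 1: predict the class with the closest mean.
    c0, c1 = tx[ty == 0].mean(), tx[ty == 1].mean()
    return int(abs(x - c1) < abs(x - c0))

def knn(tx, ty, x, k):
    # Base classifiers 2 and 3: k-nearest-neighbour majority vote.
    idx = np.argsort(np.abs(tx - x))[:k]
    return int(ty[idx].mean() > 0.5)

def majority_vote_filter(x, y, n_folds=2):
    # Flag a point as noisy when a majority of the base classifiers,
    # trained on the other folds, disagree with its given label.
    folds = np.arange(len(x)) % n_folds
    noisy = np.zeros(len(x), dtype=bool)
    for f in range(n_folds):
        tr = folds != f
        for i in np.where(~tr)[0]:
            votes = [nearest_centroid(x[tr], y[tr], x[i]),
                     knn(x[tr], y[tr], x[i], 1),
                     knn(x[tr], y[tr], x[i], 3)]
            noisy[i] = sum(v != y[i] for v in votes) >= 2
    return noisy

# Two well-separated 1-D clusters; one label is flipped to inject noise.
x = np.concatenate([np.arange(10) * 0.1, 5.0 + np.arange(10) * 0.1])
y = np.array([0] * 10 + [1] * 10)
y[2] = 1  # mislabel the point at x = 0.2
noisy = majority_vote_filter(x, y)
print(np.where(noisy)[0])  # the flipped point is flagged
```

The thesis used this kind of filter only to identify candidate noisy points before analyzing how boosting treats them.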
by Jeffrey Chan.
M. Eng.
Almansour, Amal. "Credibility assessment for Arabic micro-blogs using noisy labels." Thesis, King's College London (University of London), 2016. https://kclpure.kcl.ac.uk/portal/en/theses/credibility-assessment-for-arabic-microblogs-using-noisy-labels(6baf983a-940d-4c2c-8821-e992348b4097).html.
Northcutt, Curtis George. "Classification with noisy labels : "Multiple Account" cheating detection in Open Online Courses." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/111870.
Full textThis electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 113-122).
Massive Open Online Courses (MOOCs) have the potential to enhance socioeconomic mobility through education. Yet, the viability of this outcome largely depends on the reputation of MOOC certificates as a credible academic credential. I describe a cheating strategy that threatens this reputation and holds the potential to render the MOOC certificate valueless. The strategy, Copying Answers using Multiple Existences Online (CAMEO), involves a user who gathers solutions to assessment questions using one or more harvester accounts and then submits correct answers using one or more separate master accounts. To estimate a lower bound for CAMEO prevalence among 1.9 million course participants in 115 HarvardX and MITx courses, I introduce a filter-based CAMEO detection algorithm and use a small-scale experiment to verify CAMEO use with certainty. I identify preventive strategies that can decrease CAMEO rates and show evidence of their effectiveness in science courses. Because the CAMEO algorithm functions as a lower bound estimate, it fails to detect many CAMEO cheaters. As a novelty of this thesis, instead of improving the shortcomings of the CAMEO algorithm directly, I recognize that we can think of the CAMEO algorithm as a method for producing noisy predicted cheating labels. Then a solution to the more general problem of binary classification with noisy labels (P̃Ñ learning) is a solution to CAMEO cheating detection. P̃Ñ learning is the problem of binary classification when training examples may be mislabeled (flipped) uniformly with noise rate ρ₁ for positive examples and ρ₀ for negative examples. I propose Rank Pruning to solve P̃Ñ learning and the open problem of estimating the noise rates. Unlike prior solutions, Rank Pruning is efficient and general, requiring O(T) for any unrestricted choice of probabilistic classifier with T fitting time.
I prove Rank Pruning achieves consistent noise estimation and equivalent expected risk as learning with uncorrupted labels in ideal conditions, and derive closed-form solutions when conditions are non-ideal. Rank Pruning achieves state-of-the-art noise rate estimation and F1, error, and AUC-PR on the MNIST and CIFAR datasets, regardless of noise rates. To highlight, Rank Pruning with a CNN classifier can predict if an MNIST digit is a one or not with only 0.25% error, and 0.46% error across all digits, even when 50% of positive examples are mislabeled and 50% of observed positive labels are mislabeled negative examples. Rank Pruning achieves similarly impressive results when as much as 50% of training examples are actually just noise drawn from a third distribution. Together, the CAMEO and Rank Pruning algorithms allow for a robust, general, and time-efficient solution to the CAMEO cheating detection problem. By ensuring the validity of MOOC credentials, we enable MOOCs to achieve both openness and value, and thus take one step closer to the greater goal of democratization of education.
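The pruning step of Rank Pruning can be sketched as follows (a simplified illustration, not the thesis implementation: here the noise-fraction estimates are passed in directly, whereas the thesis estimates them from confident thresholds). Examples are ranked by a first-pass classifier's predicted probability, and the least label-consistent fraction of each noisily labeled class is pruned before retraining.

```python
import numpy as np

def rank_prune(probs, s, pi1_hat, pi0_hat):
    # Rank examples by predicted probability P(y=1|x) and prune the
    # fraction of each noisily labeled class most likely mislabeled:
    #   among s == 1, drop the pi1_hat fraction with the LOWEST scores;
    #   among s == 0, drop the pi0_hat fraction with the HIGHEST scores.
    keep = np.ones(len(s), dtype=bool)
    pos = np.where(s == 1)[0]
    neg = np.where(s == 0)[0]
    n_drop_pos = int(round(pi1_hat * len(pos)))
    n_drop_neg = int(round(pi0_hat * len(neg)))
    if n_drop_pos:
        keep[pos[np.argsort(probs[pos])[:n_drop_pos]]] = False
    if n_drop_neg:
        keep[neg[np.argsort(probs[neg])[-n_drop_neg:]]] = False
    return keep

# Toy scores from a first-pass classifier; two observed labels are flipped:
# index 3 looks like a false positive (low score, labeled 1) and
# index 6 looks like a false negative (high score, labeled 0).
probs = np.array([0.95, 0.90, 0.85, 0.10, 0.15, 0.05, 0.92, 0.08])
s     = np.array([1,    1,    1,    1,    0,    0,    0,    0])
keep = rank_prune(probs, s, pi1_hat=0.25, pi0_hat=0.25)
print(np.where(~keep)[0])  # the two suspect examples are pruned
```

In the full algorithm, the surviving examples are reweighted and the classifier is retrained on them, which is what yields the robustness results quoted in the abstract.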
by Curtis George Northcutt.
S.M.
Ekambaram, Rajmadhan. "Active Cleaning of Label Noise Using Support Vector Machines." Scholar Commons, 2017. http://scholarcommons.usf.edu/etd/6830.
Balasubramanian, Krishnakumar. "Learning without labels and nonnegative tensor factorization." Thesis, Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/33926.
Nguyen, Duc Tam [author], and Thomas [academic supervisor] Brox. "Robust deep learning for computer vision to counteract data scarcity and label noise." Freiburg: Universität, 2020. http://d-nb.info/1226657060/34.
Books on the topic "Learning with noisy labels"
Ramsay, Carol A. Pesticides: Learning about labels. [Pullman]: Cooperative Extension, Washington State University, 1999.
Zamzmi, Ghada, Sameer Antani, Ulas Bagci, Marius George Linguraru, Sivaramakrishnan Rajaraman, and Zhiyun Xue, eds. Medical Image Learning with Limited and Noisy Data. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-16760-7.
Allan, John A. B. 1941-, ed. Kids with labels: Keys to intimacy. Toronto: Lugus Productions, 1990.
Literacy, not labels: Celebrating students' strengths through whole language. Portsmouth, NH: Boynton/Cook Publishers, 1995.
Wang, Qian, Fausto Milletari, Hien V. Nguyen, Shadi Albarqouni, M. Jorge Cardoso, Nicola Rieke, Ziyue Xu, et al., eds. Domain Adaptation and Representation Transfer and Medical Image Learning with Less Labels and Imperfect Data. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-33391-1.
Carneiro, Gustavo. Machine Learning with Noisy Labels: Definitions, Theory, Techniques and Solutions. Elsevier Science & Technology Books, 2024.
Herzog, Joyce. Learning in Spite of Labels. JoyceHerzog.com, Inc., 1994.
Owen, Phillip. What-If... ?: Learning Without Labels. Independently Published, 2017.
Publishing, Carson-Dellosa. Celebrate Learning Labels and Organizers. Carson-Dellosa Publishing, LLC, 2018.
National Education Trust (Great Britain) Staff and Marc Rowland. Learning Without Labels: Improving Outcomes for Vulnerable Pupils. John Catt Educational, Limited, 2017.
Book chapters on the topic "Learning with noisy labels"
Vembu, Shankar, and Sandra Zilles. "Interactive Learning from Multiple Noisy Labels." In Machine Learning and Knowledge Discovery in Databases, 493–508. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-46128-1_31.
Goryunova, Natalya, Artem Baklanov, and Egor Ianovski. "A Noisy-Labels Approach to Detecting Uncompetitive Auctions." In Machine Learning, Optimization, and Data Science, 185–200. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-95467-3_15.
Yang, Longrong, Fanman Meng, Hongliang Li, Qingbo Wu, and Qishang Cheng. "Learning with Noisy Class Labels for Instance Segmentation." In Computer Vision – ECCV 2020, 38–53. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58568-6_3.
Hu, Mengying, Hu Han, Shiguang Shan, and Xilin Chen. "Multi-label Learning from Noisy Labels with Non-linear Feature Transformation." In Computer Vision – ACCV 2018, 404–19. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-20873-8_26.
Gao, Zhengqi, Fan-Keng Sun, Mingran Yang, Sucheng Ren, Zikai Xiong, Marc Engeler, Antonio Burazer, Linda Wildling, Luca Daniel, and Duane S. Boning. "Learning from Multiple Annotator Noisy Labels via Sample-Wise Label Fusion." In Lecture Notes in Computer Science, 407–22. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-20053-3_24.
Nigam, Nitika, Tanima Dutta, and Hari Prabhat Gupta. "Impact of Noisy Labels in Learning Techniques: A Survey." In Advances in Data and Information Sciences, 403–11. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-0694-9_38.
Chen, Yipeng, Xiaojuan Ban, and Ke Xu. "Combating Noisy Labels via Contrastive Learning with Challenging Pairs." In Pattern Recognition and Computer Vision, 614–25. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-18910-4_49.
Liang, Xuefeng, Longshan Yao, and XingYu Liu. "Noisy Label Learning in Deep Learning." In IFIP Advances in Information and Communication Technology, 84–97. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-14903-0_10.
Sousa, Vitor, Amanda Lucas Pereira, Manoela Kohler, and Marco Pacheco. "Learning by Small Loss Approach Multi-label to Deal with Noisy Labels." In Computational Science and Its Applications – ICCSA 2023, 385–403. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-36805-9_26.
Cai, Zhuotong, Jingmin Xin, Peiwen Shi, Sanping Zhou, Jiayi Wu, and Nanning Zheng. "Meta Pixel Loss Correction for Medical Image Segmentation with Noisy Labels." In Medical Image Learning with Limited and Noisy Data, 32–41. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-16760-7_4.
Conference papers on the topic "Learning with noisy labels"
Liu, Yun-Peng, Ning Xu, Yu Zhang, and Xin Geng. "Label Distribution for Learning with Noisy Labels." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/356.
Lu, Yangdi, and Wenbo He. "SELC: Self-Ensemble Label Correction Improves Learning with Noisy Labels." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/455.
Li, Ziwei, Gengyu Lyu, and Songhe Feng. "Partial Multi-Label Learning via Multi-Subspace Representation." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/362.
Hu, Chuanyang, Shipeng Yan, Zhitong Gao, and Xuming He. "MILD: Modeling the Instance Learning Dynamics for Learning with Noisy Labels." In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/92.
Li, Ximing, and Yang Wang. "Recovering Accurate Labeling Information from Partially Valid Data for Effective Multi-Label Learning." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/191.
Gao, Ziyang, Yaping Yan, and Xin Geng. "Learning from Noisy Labels via Meta Credible Label Elicitation." In 2022 IEEE International Conference on Image Processing (ICIP). IEEE, 2022. http://dx.doi.org/10.1109/icip46576.2022.9897577.
Tu, Yuanpeng, Boshen Zhang, Yuxi Li, Liang Liu, Jian Li, Yabiao Wang, Chengjie Wang, and Cai Rong Zhao. "Learning from Noisy Labels with Decoupled Meta Label Purifier." In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2023. http://dx.doi.org/10.1109/cvpr52729.2023.01909.
Zhang, Haoyu, Dingkun Long, Guangwei Xu, Muhua Zhu, Pengjun Xie, Fei Huang, and Ji Wang. "Learning with Noise: Improving Distantly-Supervised Fine-grained Entity Typing via Automatic Relabeling." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/527.
Luo, Yijing, Bo Han, and Chen Gong. "A Bi-level Formulation for Label Noise Learning with Spectral Cluster Discovery." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/361.
Gui, Xian-Jin, Wei Wang, and Zhang-Hao Tian. "Towards Understanding Deep Learning from Noisy Labels with Small-Loss Criterion." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/340.
Reports on the topic "Learning with noisy labels"
Jin, Rong, and Anil K. Jain. Data Representation: Learning Kernels from Noisy Data and Uncertain Information. Fort Belvoir, VA: Defense Technical Information Center, July 2010. http://dx.doi.org/10.21236/ada535030.
Multiple Engine Faults Detection Using Variational Mode Decomposition and GA-K-means. SAE International, March 2022. http://dx.doi.org/10.4271/2022-01-0616.