Ready-made bibliography on the topic "Learning with noisy labels"
Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles
Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Learning with noisy labels".
An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically create a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online, when these are available in the metadata.
Journal articles on the topic "Learning with noisy labels"
Xie, Ming-Kun, and Sheng-Jun Huang. "Partial Multi-Label Learning with Noisy Label Identification". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 6454–61. http://dx.doi.org/10.1609/aaai.v34i04.6117.
Chen, Mingcai, Hao Cheng, Yuntao Du, Ming Xu, Wenyu Jiang, and Chongjun Wang. "Two Wrongs Don’t Make a Right: Combating Confirmation Bias in Learning with Label Noise". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (June 26, 2023): 14765–73. http://dx.doi.org/10.1609/aaai.v37i12.26725.
Li, Hui, Zhaodong Niu, Quan Sun, and Yabo Li. "Co-Correcting: Combat Noisy Labels in Space Debris Detection". Remote Sensing 14, no. 20 (October 21, 2022): 5261. http://dx.doi.org/10.3390/rs14205261.
Tang, Xinyu, Milad Nasr, Saeed Mahloujifar, Virat Shejwalkar, Liwei Song, Amir Houmansadr, and Prateek Mittal. "Machine Learning with Differentially Private Labels: Mechanisms and Frameworks". Proceedings on Privacy Enhancing Technologies 2022, no. 4 (October 2022): 332–50. http://dx.doi.org/10.56553/popets-2022-0112.
Wu, Yichen, Jun Shu, Qi Xie, Qian Zhao, and Deyu Meng. "Learning to Purify Noisy Labels via Meta Soft Label Corrector". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (May 18, 2021): 10388–96. http://dx.doi.org/10.1609/aaai.v35i12.17244.
Zheng, Guoqing, Ahmed Hassan Awadallah, and Susan Dumais. "Meta Label Correction for Noisy Label Learning". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (May 18, 2021): 11053–61. http://dx.doi.org/10.1609/aaai.v35i12.17319.
Shi, Jialin, Chenyi Guo, and Ji Wu. "A Hybrid Robust-Learning Architecture for Medical Image Segmentation with Noisy Labels". Future Internet 14, no. 2 (January 26, 2022): 41. http://dx.doi.org/10.3390/fi14020041.
Northcutt, Curtis, Lu Jiang, and Isaac Chuang. "Confident Learning: Estimating Uncertainty in Dataset Labels". Journal of Artificial Intelligence Research 70 (April 14, 2021): 1373–411. http://dx.doi.org/10.1613/jair.1.12125.
Silva, Amila, Ling Luo, Shanika Karunasekera, and Christopher Leckie. "Noise-Robust Learning from Multiple Unsupervised Sources of Inferred Labels". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 8315–23. http://dx.doi.org/10.1609/aaai.v36i8.20806.
Yan, Xuguo, Xuhui Xia, Lei Wang, and Zelin Zhang. "A Progressive Deep Neural Network Training Method for Image Classification with Noisy Labels". Applied Sciences 12, no. 24 (December 12, 2022): 12754. http://dx.doi.org/10.3390/app122412754.
Doctoral dissertations on the topic "Learning with noisy labels"
Yu, Xiyu. "Learning with Biased and Noisy Labels". Thesis, The University of Sydney, 2019. http://hdl.handle.net/2123/20125.
Caye Daudt, Rodrigo. "Convolutional neural networks for change analysis in earth observation images with noisy labels and domain shifts". Electronic Thesis or Diss., Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAT033.
Pełny tekst źródłaThe analysis of satellite and aerial Earth observation images allows us to obtain precise information over large areas. A multitemporal analysis of such images is necessary to understand the evolution of such areas. In this thesis, convolutional neural networks are used to detect and understand changes using remote sensing images from various sources in supervised and weakly supervised settings. Siamese architectures are used to compare coregistered image pairs and to identify changed pixels. The proposed method is then extended into a multitask network architecture that is used to detect changes and perform land cover mapping simultaneously, which permits a semantic understanding of the detected changes. Then, classification filtering and a novel guided anisotropic diffusion algorithm are used to reduce the effect of biased label noise, which is a concern for automatically generated large-scale datasets. Weakly supervised learning is also achieved to perform pixel-level change detection using only image-level supervision through the usage of class activation maps and a novel spatial attention layer. Finally, a domain adaptation method based on adversarial training is proposed, which succeeds in projecting images from different domains into a common latent space where a given task can be performed. This method is tested not only for domain adaptation for change detection, but also for image classification and semantic segmentation, which proves its versatility
Fang, Tongtong. "Learning from noisy labels by importance reweighting: a deep learning approach". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-264125.
Incorrect annotations can degrade classification performance. For deep networks in particular, this can lead to poor generalization. Recently, noise-robust deep learning has outperformed other learning methods in handling complex input data. However, existing deep learning results cannot provide reasonable reweighting criteria. To close this knowledge gap, and inspired by domain adaptation, we propose a new robust deep learning method that uses reweighting. The reweighting is performed by minimizing the maximum mean discrepancy between the loss distributions of mislabeled and correctly labeled data. In experiments, the proposed method outperforms other methods. The results show strong research potential for applying domain adaptation. Moreover, by enabling smart reweighting, the proposed method motivates the study of other interesting problems within domain adaptation.
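The reweighting criterion in the abstract above, minimizing the maximum mean discrepancy (MMD) between the loss distributions of mislabeled and correctly labeled data, can be made concrete with a small numpy sketch. The Gaussian kernel and its bandwidth here are illustrative assumptions, not necessarily the estimator used in the thesis.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # RBF kernel matrix between two 1-D arrays of per-sample losses
    d = x[:, None] - y[None, :]
    return np.exp(-d ** 2 / (2 * sigma ** 2))

def mmd2(losses_a, losses_b, sigma=1.0):
    """Biased estimate of the squared MMD between two loss distributions."""
    kaa = gaussian_kernel(losses_a, losses_a, sigma).mean()
    kbb = gaussian_kernel(losses_b, losses_b, sigma).mean()
    kab = gaussian_kernel(losses_a, losses_b, sigma).mean()
    return kaa + kbb - 2.0 * kab
```

A reweighting scheme along these lines would choose sample weights that drive the MMD between the (weighted) noisy-loss distribution and the clean-loss distribution toward zero.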
Ainapure, Abhijeet Narhar. "Application and Performance Enhancement of Intelligent Cross-Domain Fault Diagnosis in Rotating Machinery". University of Cincinnati / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1623164772153736.
Pełny tekst źródłaChan, Jeffrey (Jeffrey D. ). "On boosting and noisy labels". Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/100297.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 53-56).
Boosting is a machine learning technique widely used across many disciplines. Boosting enables one to learn from labeled data in order to predict the labels of unlabeled data. A central property of boosting instrumental to its popularity is its resistance to overfitting. Previous experiments provide a margin-based explanation for this resistance to overfitting. In this thesis, the main finding is that boosting's resistance to overfitting can be understood in terms of how it handles noisy (mislabeled) points. Confirming experimental evidence emerged from experiments using the Wisconsin Diagnostic Breast Cancer (WDBC) dataset commonly used in machine learning experiments. A majority vote ensemble filter identified on average 2.5% of the points in the dataset as noisy. The experiments chiefly investigated boosting's treatment of noisy points from a volume-based perspective. While the cell volume surrounding noisy points did not show a significant difference from other points, the decision volume surrounding noisy points was two to three times less than that of non-noisy points. Additional findings showed that decision volume not only provides insight into boosting's resistance to overfitting in the context of noisy points, but also serves as a suitable metric for identifying which points in a dataset are likely to be mislabeled.
by Jeffrey Chan.
M. Eng.
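The majority vote ensemble filter mentioned in the abstract above can be sketched as follows: predict each point's label with several diverse classifiers under cross-validation, and flag points that most of them misclassify as likely mislabeled. The specific classifiers and the two-cluster toy data are assumptions for illustration, not the experimental setup of the thesis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

def majority_vote_filter(X, y, cv=5):
    """Flag samples misclassified by a majority of diverse classifiers."""
    models = [
        LogisticRegression(max_iter=1000),
        KNeighborsClassifier(n_neighbors=5),
        DecisionTreeClassifier(max_depth=3, random_state=0),
    ]
    errors = np.zeros(len(y), dtype=int)
    for model in models:
        preds = cross_val_predict(model, X, y, cv=cv)  # out-of-fold predictions
        errors += (preds != y).astype(int)
    return errors > len(models) // 2  # True -> likely mislabeled

# Two well-separated clusters with two deliberately flipped labels.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
y[[3, 60]] = 1 - y[[3, 60]]  # inject label noise
flags = majority_vote_filter(X, y)
```

On data this clean, the filter recovers exactly the injected flips; on real datasets such as WDBC, the flagged fraction (around 2.5% in the thesis) depends on the classifiers used.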
Almansour, Amal. "Credibility assessment for Arabic micro-blogs using noisy labels". Thesis, King's College London (University of London), 2016. https://kclpure.kcl.ac.uk/portal/en/theses/credibility-assessment-for-arabic-microblogs-using-noisy-labels(6baf983a-940d-4c2c-8821-e992348b4097).html.
Pełny tekst źródłaNorthcutt, Curtis George. "Classification with noisy labels : "Multiple Account" cheating detection in Open Online Courses". Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/111870.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 113-122).
Massive Open Online Courses (MOOCs) have the potential to enhance socioeconomic mobility through education. Yet, the viability of this outcome largely depends on the reputation of MOOC certificates as a credible academic credential. I describe a cheating strategy that threatens this reputation and holds the potential to render the MOOC certificate valueless. The strategy, Copying Answers using Multiple Existences Online (CAMEO), involves a user who gathers solutions to assessment questions using one or more harvester accounts and then submits correct answers using one or more separate master accounts. To estimate a lower bound for CAMEO prevalence among 1.9 million course participants in 115 HarvardX and MITx courses, I introduce a filter-based CAMEO detection algorithm and use a small-scale experiment to verify CAMEO use with certainty. I identify preventive strategies that can decrease CAMEO rates and show evidence of their effectiveness in science courses. Because the CAMEO algorithm functions as a lower bound estimate, it fails to detect many CAMEO cheaters. As a novelty of this thesis, instead of improving the shortcomings of the CAMEO algorithm directly, I recognize that we can think of the CAMEO algorithm as a method for producing noisy predicted cheating labels. Then a solution to the more general problem of binary classification with noisy labels (P̃Ñ learning) is a solution to CAMEO cheating detection. P̃Ñ learning is the problem of binary classification when training examples may be mislabeled (flipped) uniformly with noise rate ρ₁ for positive examples and ρ₀ for negative examples. I propose Rank Pruning to solve P̃Ñ learning and the open problem of estimating the noise rates. Unlike prior solutions, Rank Pruning is efficient and general, requiring O(T) for any unrestricted choice of probabilistic classifier with T fitting time.
I prove Rank Pruning achieves consistent noise estimation and equivalent expected risk as learning with uncorrupted labels in ideal conditions, and derive closed-form solutions when conditions are non-ideal. Rank Pruning achieves state-of-the-art noise rate estimation and F1, error, and AUC-PR on the MNIST and CIFAR datasets, regardless of noise rates. To highlight, Rank Pruning with a CNN classifier can predict if a MNIST digit is a one or not one with only 0.25% error, and 0.46% error across all digits, even when 50% of positive examples are mislabeled and 50% of observed positive labels are mislabeled negative examples. Rank Pruning achieves similarly impressive results when as large as 50% of training examples are actually just noise drawn from a third distribution. Together, the CAMEO and Rank Pruning algorithms allow for a robust, general, and time-efficient solution to the CAMEO cheating detection problem. By ensuring the validity of MOOC credentials, we enable MOOCs to achieve both openness and value, and thus take one step closer to the greater goal of democratization of education.
by Curtis George Northcutt.
S.M.
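The pruning step at the heart of Rank Pruning, as described in the abstract above, can be illustrated with a short sketch: rank each noisy-label class by the classifier's predicted probability and drop the least confident fraction before final training. For simplicity, the noise fractions are assumed known here; estimating them from the predicted probabilities is the part of the algorithm this sketch omits, and `rank_prune` is an assumed name.

```python
import numpy as np

def rank_prune(probs, s, frac_noisy_pos, frac_noisy_neg):
    """Return a boolean mask of examples to keep for training.

    probs: predicted P(y=1 | x) from any probabilistic classifier.
    s: observed (possibly noisy) 0/1 labels.
    frac_noisy_pos: assumed fraction of observed positives that are mislabeled.
    frac_noisy_neg: assumed fraction of observed negatives that are mislabeled.
    """
    keep = np.ones(len(s), dtype=bool)
    pos = np.where(s == 1)[0]
    neg = np.where(s == 0)[0]
    n_pos = int(round(frac_noisy_pos * len(pos)))
    n_neg = int(round(frac_noisy_neg * len(neg)))
    if n_pos:  # drop the observed positives the model is least confident about
        keep[pos[np.argsort(probs[pos])[:n_pos]]] = False
    if n_neg:  # drop the observed negatives the model scores most like positives
        keep[neg[np.argsort(probs[neg])[-n_neg:]]] = False
    return keep
```

Because pruning only ranks and removes examples, it works with any probabilistic classifier, which is the source of the O(T) generality claimed in the abstract.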
Ekambaram, Rajmadhan. "Active Cleaning of Label Noise Using Support Vector Machines". Scholar Commons, 2017. http://scholarcommons.usf.edu/etd/6830.
Balasubramanian, Krishnakumar. "Learning without labels and nonnegative tensor factorization". Thesis, Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/33926.
Pełny tekst źródłaNugyen, Duc Tam [Verfasser], i Thomas [Akademischer Betreuer] Brox. "Robust deep learning for computer vision to counteract data scarcity and label noise". Freiburg : Universität, 2020. http://d-nb.info/1226657060/34.
Books on the topic "Learning with noisy labels"
Ramsay, Carol A. Pesticides: Learning about labels. [Pullman]: Cooperative Extension, Washington State University, 1999.
Zamzmi, Ghada, Sameer Antani, Ulas Bagci, Marius George Linguraru, Sivaramakrishnan Rajaraman, and Zhiyun Xue, eds. Medical Image Learning with Limited and Noisy Data. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-16760-7.
Allan, John A. B., 1941-, ed. Kids with labels: Keys to intimacy. Toronto: Lugus Productions, 1990.
Literacy, not labels: Celebrating students' strengths through whole language. Portsmouth, NH: Boynton/Cook Publishers, 1995.
Wang, Qian, Fausto Milletari, Hien V. Nguyen, Shadi Albarqouni, M. Jorge Cardoso, Nicola Rieke, Ziyue Xu, et al., eds. Domain Adaptation and Representation Transfer and Medical Image Learning with Less Labels and Imperfect Data. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-33391-1.
Carneiro, Gustavo. Machine Learning with Noisy Labels: Definitions, Theory, Techniques and Solutions. Elsevier Science & Technology Books, 2024.
Herzog, Joyce. Learning in Spite of Labels. JoyceHerzog.com, Inc., 1994.
Owen, Phillip. What-If... ?: Learning Without Labels. Independently Published, 2017.
Publishing, Carson-Dellosa. Celebrate Learning Labels and Organizers. Carson-Dellosa Publishing, LLC, 2018.
National Education Trust (Great Britain) Staff and Marc Rowland. Learning Without Labels: Improving Outcomes for Vulnerable Pupils. Catt Educational, Limited, John, 2017.
Book chapters on the topic "Learning with noisy labels"
Vembu, Shankar, and Sandra Zilles. "Interactive Learning from Multiple Noisy Labels". In Machine Learning and Knowledge Discovery in Databases, 493–508. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-46128-1_31.
Goryunova, Natalya, Artem Baklanov, and Egor Ianovski. "A Noisy-Labels Approach to Detecting Uncompetitive Auctions". In Machine Learning, Optimization, and Data Science, 185–200. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-95467-3_15.
Yang, Longrong, Fanman Meng, Hongliang Li, Qingbo Wu, and Qishang Cheng. "Learning with Noisy Class Labels for Instance Segmentation". In Computer Vision – ECCV 2020, 38–53. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58568-6_3.
Hu, Mengying, Hu Han, Shiguang Shan, and Xilin Chen. "Multi-label Learning from Noisy Labels with Non-linear Feature Transformation". In Computer Vision – ACCV 2018, 404–19. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-20873-8_26.
Gao, Zhengqi, Fan-Keng Sun, Mingran Yang, Sucheng Ren, Zikai Xiong, Marc Engeler, Antonio Burazer, Linda Wildling, Luca Daniel, and Duane S. Boning. "Learning from Multiple Annotator Noisy Labels via Sample-Wise Label Fusion". In Lecture Notes in Computer Science, 407–22. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-20053-3_24.
Nigam, Nitika, Tanima Dutta, and Hari Prabhat Gupta. "Impact of Noisy Labels in Learning Techniques: A Survey". In Advances in Data and Information Sciences, 403–11. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-0694-9_38.
Chen, Yipeng, Xiaojuan Ban, and Ke Xu. "Combating Noisy Labels via Contrastive Learning with Challenging Pairs". In Pattern Recognition and Computer Vision, 614–25. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-18910-4_49.
Liang, Xuefeng, Longshan Yao, and XingYu Liu. "Noisy Label Learning in Deep Learning". In IFIP Advances in Information and Communication Technology, 84–97. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-14903-0_10.
Sousa, Vitor, Amanda Lucas Pereira, Manoela Kohler, and Marco Pacheco. "Learning by Small Loss Approach Multi-label to Deal with Noisy Labels". In Computational Science and Its Applications – ICCSA 2023, 385–403. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-36805-9_26.
Cai, Zhuotong, Jingmin Xin, Peiwen Shi, Sanping Zhou, Jiayi Wu, and Nanning Zheng. "Meta Pixel Loss Correction for Medical Image Segmentation with Noisy Labels". In Medical Image Learning with Limited and Noisy Data, 32–41. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-16760-7_4.
Conference papers on the topic "Learning with noisy labels"
Liu, Yun-Peng, Ning Xu, Yu Zhang, and Xin Geng. "Label Distribution for Learning with Noisy Labels". In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/356.
Lu, Yangdi, and Wenbo He. "SELC: Self-Ensemble Label Correction Improves Learning with Noisy Labels". In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/455.
Li, Ziwei, Gengyu Lyu, and Songhe Feng. "Partial Multi-Label Learning via Multi-Subspace Representation". In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/362.
Hu, Chuanyang, Shipeng Yan, Zhitong Gao, and Xuming He. "MILD: Modeling the Instance Learning Dynamics for Learning with Noisy Labels". In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/92.
Li, Ximing, and Yang Wang. "Recovering Accurate Labeling Information from Partially Valid Data for Effective Multi-Label Learning". In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/191.
Gao, Ziyang, Yaping Yan, and Xin Geng. "Learning from Noisy Labels via Meta Credible Label Elicitation". In 2022 IEEE International Conference on Image Processing (ICIP). IEEE, 2022. http://dx.doi.org/10.1109/icip46576.2022.9897577.
Tu, Yuanpeng, Boshen Zhang, Yuxi Li, Liang Liu, Jian Li, Yabiao Wang, Chengjie Wang, and Cai Rong Zhao. "Learning from Noisy Labels with Decoupled Meta Label Purifier". In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2023. http://dx.doi.org/10.1109/cvpr52729.2023.01909.
Zhang, Haoyu, Dingkun Long, Guangwei Xu, Muhua Zhu, Pengjun Xie, Fei Huang, and Ji Wang. "Learning with Noise: Improving Distantly-Supervised Fine-grained Entity Typing via Automatic Relabeling". In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/527.
Luo, Yijing, Bo Han, and Chen Gong. "A Bi-level Formulation for Label Noise Learning with Spectral Cluster Discovery". In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/361.
Gui, Xian-Jin, Wei Wang, and Zhang-Hao Tian. "Towards Understanding Deep Learning from Noisy Labels with Small-Loss Criterion". In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/340.
Organizational reports on the topic "Learning with noisy labels"
Jin, Rong, and Anil K. Jain. Data Representation: Learning Kernels from Noisy Data and Uncertain Information. Fort Belvoir, VA: Defense Technical Information Center, July 2010. http://dx.doi.org/10.21236/ada535030.
Multiple Engine Faults Detection Using Variational Mode Decomposition and GA-K-means. SAE International, March 2022. http://dx.doi.org/10.4271/2022-01-0616.