Academic literature on the topic "Learning with noisy labels"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Browse the thematic lists of articles, books, dissertations, conference proceedings, and other scholarly sources on the topic "Learning with noisy labels".
Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Learning with noisy labels"
Xie, Ming-Kun, and Sheng-Jun Huang. "Partial Multi-Label Learning with Noisy Label Identification". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 6454–61. http://dx.doi.org/10.1609/aaai.v34i04.6117.
Chen, Mingcai, Hao Cheng, Yuntao Du, Ming Xu, Wenyu Jiang, and Chongjun Wang. "Two Wrongs Don't Make a Right: Combating Confirmation Bias in Learning with Label Noise". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (June 26, 2023): 14765–73. http://dx.doi.org/10.1609/aaai.v37i12.26725.
Li, Hui, Zhaodong Niu, Quan Sun, and Yabo Li. "Co-Correcting: Combat Noisy Labels in Space Debris Detection". Remote Sensing 14, no. 20 (October 21, 2022): 5261. http://dx.doi.org/10.3390/rs14205261.
Tang, Xinyu, Milad Nasr, Saeed Mahloujifar, Virat Shejwalkar, Liwei Song, Amir Houmansadr, and Prateek Mittal. "Machine Learning with Differentially Private Labels: Mechanisms and Frameworks". Proceedings on Privacy Enhancing Technologies 2022, no. 4 (October 2022): 332–50. http://dx.doi.org/10.56553/popets-2022-0112.
Wu, Yichen, Jun Shu, Qi Xie, Qian Zhao, and Deyu Meng. "Learning to Purify Noisy Labels via Meta Soft Label Corrector". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (May 18, 2021): 10388–96. http://dx.doi.org/10.1609/aaai.v35i12.17244.
Zheng, Guoqing, Ahmed Hassan Awadallah, and Susan Dumais. "Meta Label Correction for Noisy Label Learning". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (May 18, 2021): 11053–61. http://dx.doi.org/10.1609/aaai.v35i12.17319.
Shi, Jialin, Chenyi Guo, and Ji Wu. "A Hybrid Robust-Learning Architecture for Medical Image Segmentation with Noisy Labels". Future Internet 14, no. 2 (January 26, 2022): 41. http://dx.doi.org/10.3390/fi14020041.
Northcutt, Curtis, Lu Jiang, and Isaac Chuang. "Confident Learning: Estimating Uncertainty in Dataset Labels". Journal of Artificial Intelligence Research 70 (April 14, 2021): 1373–411. http://dx.doi.org/10.1613/jair.1.12125.
Silva, Amila, Ling Luo, Shanika Karunasekera, and Christopher Leckie. "Noise-Robust Learning from Multiple Unsupervised Sources of Inferred Labels". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 8315–23. http://dx.doi.org/10.1609/aaai.v36i8.20806.
Yan, Xuguo, Xuhui Xia, Lei Wang, and Zelin Zhang. "A Progressive Deep Neural Network Training Method for Image Classification with Noisy Labels". Applied Sciences 12, no. 24 (December 12, 2022): 12754. http://dx.doi.org/10.3390/app122412754.
Dissertations and theses on the topic "Learning with noisy labels"
Yu, Xiyu. "Learning with Biased and Noisy Labels". Thesis, The University of Sydney, 2019. http://hdl.handle.net/2123/20125.
Caye Daudt, Rodrigo. "Convolutional neural networks for change analysis in earth observation images with noisy labels and domain shifts". Electronic Thesis or Diss., Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAT033.
The analysis of satellite and aerial Earth observation images allows us to obtain precise information over large areas. A multitemporal analysis of such images is necessary to understand the evolution of such areas. In this thesis, convolutional neural networks are used to detect and understand changes using remote sensing images from various sources in supervised and weakly supervised settings. Siamese architectures are used to compare coregistered image pairs and to identify changed pixels. The proposed method is then extended into a multitask network architecture that is used to detect changes and perform land cover mapping simultaneously, which permits a semantic understanding of the detected changes. Then, classification filtering and a novel guided anisotropic diffusion algorithm are used to reduce the effect of biased label noise, which is a concern for automatically generated large-scale datasets. Weakly supervised learning is also achieved to perform pixel-level change detection using only image-level supervision through the usage of class activation maps and a novel spatial attention layer. Finally, a domain adaptation method based on adversarial training is proposed, which succeeds in projecting images from different domains into a common latent space where a given task can be performed. This method is tested not only for domain adaptation for change detection, but also for image classification and semantic segmentation, which proves its versatility.
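The siamese comparison described in the abstract can be illustrated with a minimal sketch (hypothetical toy code, not the thesis implementation): both acquisition dates pass through the same encoder, and pixels whose embeddings move far apart are flagged as changed. Here the "encoder" is just a per-pixel linear map chosen for determinism.

```python
import numpy as np

def encode(img, w):
    # Toy shared "encoder": a per-pixel linear map (a 1x1 convolution).
    # Both dates use the SAME weights w -- the siamese principle.
    return img @ w  # (H, W, C_in) @ (C_in, C_out) -> (H, W, C_out)

def change_mask(img_t1, img_t2, w, thresh=1.0):
    # Flag pixels whose embeddings at the two dates differ strongly.
    f1, f2 = encode(img_t1, w), encode(img_t2, w)
    dist = np.linalg.norm(f1 - f2, axis=-1)  # per-pixel feature distance
    return dist > thresh

w = np.eye(3)          # identity "encoder" keeps the demo deterministic
t1 = np.zeros((4, 4, 3))
t2 = t1.copy()
t2[0, 0] = 5.0         # simulate a change at one pixel
mask = change_mask(t1, t2, w)
print(int(mask.sum()))  # -> 1: only the changed pixel is flagged
```

In a real network the encoder would be a deep CNN and the threshold would be learned, but the comparison step has this shape.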
Fang, Tongtong. "Learning from noisy labels by importance reweighting: a deep learning approach". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-264125.
Incorrect annotations can lower classification performance. For deep networks in particular, this can lead to poor generalization. Recently, noise-robust deep learning has outperformed other learning methods in handling complex input data. However, existing deep learning results cannot provide reasonable reweighting criteria. To address this knowledge gap, and inspired by domain adaptation, we propose a new robust deep learning method that uses reweighting. The reweighting is done by minimizing the maximum mean discrepancy between the loss distributions of mislabeled and correctly labeled data. In experiments, the proposed method outperforms competing methods. The results show great research potential for applying domain adaptation. Furthermore, the proposed method motivates investigation of other interesting problems within domain adaptation by enabling smart reweighting.
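The reweighting criterion summarized in the abstract can be sketched in a few lines (a hedged numpy illustration; the Gaussian kernel, the `reweight` heuristic, and all names are assumptions of this sketch, not the thesis code): sample weights push the loss distribution of possibly mislabeled data toward that of a small trusted set, with the discrepancy measured by the maximum mean discrepancy (MMD).

```python
import numpy as np

def rbf(x, y, sigma=1.0):
    # Gaussian kernel matrix between two 1-D arrays of loss values.
    return np.exp(-((x[:, None] - y[None, :]) ** 2) / (2 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    # Squared maximum mean discrepancy between two samples of losses.
    return rbf(x, x, sigma).mean() + rbf(y, y, sigma).mean() - 2 * rbf(x, y, sigma).mean()

def reweight(noisy_losses, clean_losses, sigma=1.0):
    # Weight each noisy-set loss by its kernel similarity to the clean
    # loss distribution -- one crude way of shrinking the MMD.
    w = rbf(noisy_losses, clean_losses, sigma).mean(axis=1)
    return w / w.sum()

clean = np.array([0.10, 0.20, 0.15, 0.12])   # losses on trusted labels
noisy = np.array([0.11, 0.18, 2.50, 3.00])   # the last two look mislabeled
w = reweight(noisy, clean)
filtered = noisy[w > w.mean()]               # keep only well-supported samples
print(mmd2(noisy, clean) > mmd2(filtered, clean))  # -> True: discrepancy shrinks
```

The thesis optimizes the weights jointly with the network; this sketch only shows why kernel similarity to a clean loss distribution down-weights mislabeled-looking samples.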
Ainapure, Abhijeet Narhar. "Application and Performance Enhancement of Intelligent Cross-Domain Fault Diagnosis in Rotating Machinery". University of Cincinnati / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1623164772153736.
Chan, Jeffrey (Jeffrey D.). "On boosting and noisy labels". Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/100297.
Texto completoThis electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 53-56).
Boosting is a machine learning technique widely used across many disciplines. Boosting enables one to learn from labeled data in order to predict the labels of unlabeled data. A central property of boosting instrumental to its popularity is its resistance to overfitting. Previous experiments provide a margin-based explanation for this resistance to overfitting. In this thesis, the main finding is that boosting's resistance to overfitting can be understood in terms of how it handles noisy (mislabeled) points. Confirming evidence emerged from experiments using the Wisconsin Diagnostic Breast Cancer (WDBC) dataset, commonly used in machine learning experiments. A majority vote ensemble filter identified on average 2.5% of the points in the dataset as noisy. The experiments chiefly investigated boosting's treatment of noisy points from a volume-based perspective. While the cell volume surrounding noisy points did not show a significant difference from other points, the decision volume surrounding noisy points was two to three times less than that of non-noisy points. Additional findings showed that decision volume not only provides insight into boosting's resistance to overfitting in the context of noisy points, but also serves as a suitable metric for identifying which points in a dataset are likely to be mislabeled.
by Jeffrey Chan.
M. Eng.
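The majority vote ensemble filter mentioned in the abstract can be sketched as follows (a minimal hypothetical version built on scikit-learn, not the thesis code): several classifiers are evaluated with cross-validation, and a point is flagged as likely mislabeled when most of them disagree with its given label.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_predict
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

def majority_vote_filter(X, y, models, cv=5):
    # Flag a sample as likely mislabeled when a majority of the
    # cross-validated models predict a label different from y.
    disagreements = np.zeros(len(y))
    for model in models:
        pred = cross_val_predict(model, X, y, cv=cv)
        disagreements += (pred != y)
    return disagreements > len(models) / 2

X, y = make_classification(n_samples=200, random_state=0)
y_noisy = y.copy()
y_noisy[:5] = 1 - y_noisy[:5]      # flip 5 labels to simulate label noise
models = [LogisticRegression(max_iter=1000),
          DecisionTreeClassifier(random_state=0),
          KNeighborsClassifier()]
flagged = majority_vote_filter(X, y_noisy, models)
print(flagged[:5].sum(), "of 5 flipped labels flagged")
```

The flipped points should be flagged far more often than the untouched ones, which is the behavior the thesis relies on to identify the ~2.5% of noisy WDBC points.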
Almansour, Amal. "Credibility assessment for Arabic micro-blogs using noisy labels". Thesis, King's College London (University of London), 2016. https://kclpure.kcl.ac.uk/portal/en/theses/credibility-assessment-for-arabic-microblogs-using-noisy-labels(6baf983a-940d-4c2c-8821-e992348b4097).html.
Northcutt, Curtis George. "Classification with noisy labels: "Multiple Account" cheating detection in Open Online Courses". Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/111870.
Texto completoThis electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 113-122).
Massive Open Online Courses (MOOCs) have the potential to enhance socioeconomic mobility through education. Yet, the viability of this outcome largely depends on the reputation of MOOC certificates as a credible academic credential. I describe a cheating strategy that threatens this reputation and holds the potential to render the MOOC certificate valueless. The strategy, Copying Answers using Multiple Existences Online (CAMEO), involves a user who gathers solutions to assessment questions using one or more harvester accounts and then submits correct answers using one or more separate master accounts. To estimate a lower bound for CAMEO prevalence among 1.9 million course participants in 115 HarvardX and MITx courses, I introduce a filter-based CAMEO detection algorithm and use a small-scale experiment to verify CAMEO use with certainty. I identify preventive strategies that can decrease CAMEO rates and show evidence of their effectiveness in science courses. Because the CAMEO algorithm functions as a lower bound estimate, it fails to detect many CAMEO cheaters. As a novelty of this thesis, instead of improving the shortcomings of the CAMEO algorithm directly, I recognize that we can think of the CAMEO algorithm as a method for producing noisy predicted cheating labels. Then a solution to the more general problem of binary classification with noisy labels (P̃Ñ learning) is a solution to CAMEO cheating detection. P̃Ñ learning is the problem of binary classification when training examples may be mislabeled (flipped) uniformly with noise rate ρ₁ for positive examples and ρ₀ for negative examples. I propose Rank Pruning to solve P̃Ñ learning and the open problem of estimating the noise rates. Unlike prior solutions, Rank Pruning is efficient and general, requiring O(T) for any unrestricted choice of probabilistic classifier with T fitting time.
I prove Rank Pruning achieves consistent noise estimation and equivalent expected risk as learning with uncorrupted labels in ideal conditions, and derive closed-form solutions when conditions are non-ideal. Rank Pruning achieves state-of-the-art noise rate estimation and F1, error, and AUC-PR on the MNIST and CIFAR datasets, regardless of noise rates. To highlight, Rank Pruning with a CNN classifier can predict if an MNIST digit is a one or not with only 0.25% error, and 0.46% error across all digits, even when 50% of positive examples are mislabeled and 50% of observed positive labels are mislabeled negative examples. Rank Pruning achieves similarly impressive results when as much as 50% of training examples are actually just noise drawn from a third distribution. Together, the CAMEO and Rank Pruning algorithms allow for a robust, general, and time-efficient solution to the CAMEO cheating detection problem. By ensuring the validity of MOOC credentials, we enable MOOCs to achieve both openness and value, and thus take one step closer to the greater goal of democratization of education.
by Curtis George Northcutt.
S.M.
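The pruning step at the heart of Rank Pruning can be sketched as follows (a simplified hypothetical version; the actual algorithm also estimates the noise rates ρ₁ and ρ₀ from confident predictions, which this toy code takes as given): among the given positives, drop those ranked least positive by the classifier, and symmetrically for the negatives, then retrain on the kept examples.

```python
import numpy as np

def rank_prune(probs, labels, rho1, rho0):
    # Simplified Rank Pruning step: given predicted P(y=1|x), drop the
    # fraction rho1 of given-positive examples with the LOWEST scores and
    # the fraction rho0 of given-negative examples with the HIGHEST scores,
    # i.e. the examples whose ranks least support their observed labels.
    keep = np.ones(len(labels), dtype=bool)
    pos, neg = np.where(labels == 1)[0], np.where(labels == 0)[0]
    k_pos = int(rho1 * len(pos))
    k_neg = int(rho0 * len(neg))
    if k_pos:
        keep[pos[np.argsort(probs[pos])[:k_pos]]] = False   # least positive-looking positives
    if k_neg:
        keep[neg[np.argsort(probs[neg])[-k_neg:]]] = False  # most positive-looking negatives
    return keep

probs  = np.array([0.9, 0.8, 0.1, 0.95, 0.2, 0.15, 0.85, 0.05])
labels = np.array([1,   1,   1,   1,    0,   0,    0,    0])
keep = rank_prune(probs, labels, rho1=0.25, rho0=0.25)
print(keep)  # -> indices 2 and 6, the rank-inconsistent examples, are pruned
```

Retraining only on `keep` is what yields the noise-robust classifier; the thesis proves this is consistent with training on uncorrupted labels under its stated conditions.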
Ekambaram, Rajmadhan. "Active Cleaning of Label Noise Using Support Vector Machines". Scholar Commons, 2017. http://scholarcommons.usf.edu/etd/6830.
Balasubramanian, Krishnakumar. "Learning without labels and nonnegative tensor factorization". Thesis, Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/33926.
Nugyen, Duc Tam [Verfasser], and Thomas [Akademischer Betreuer] Brox. "Robust deep learning for computer vision to counteract data scarcity and label noise". Freiburg: Universität, 2020. http://d-nb.info/1226657060/34.
Books on the topic "Learning with noisy labels"
Ramsay, Carol A. Pesticides: Learning about labels. [Pullman]: Cooperative Extension, Washington State University, 1999.
Zamzmi, Ghada, Sameer Antani, Ulas Bagci, Marius George Linguraru, Sivaramakrishnan Rajaraman, and Zhiyun Xue, eds. Medical Image Learning with Limited and Noisy Data. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-16760-7.
Allan, John A. B., 1941-, ed. Kids with labels: Keys to intimacy. Toronto: Lugus Productions, 1990.
Literacy, not labels: Celebrating students' strengths through whole language. Portsmouth, NH: Boynton/Cook Publishers, 1995.
Wang, Qian, Fausto Milletari, Hien V. Nguyen, Shadi Albarqouni, M. Jorge Cardoso, Nicola Rieke, Ziyue Xu et al., eds. Domain Adaptation and Representation Transfer and Medical Image Learning with Less Labels and Imperfect Data. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-33391-1.
Carneiro, Gustavo. Machine Learning with Noisy Labels: Definitions, Theory, Techniques and Solutions. Elsevier Science & Technology Books, 2024.
Herzog, Joyce. Learning in Spite of Labels. JoyceHerzog.com, Inc., 1994.
Owen, Phillip. What-If...?: Learning Without Labels. Independently Published, 2017.
Carson-Dellosa Publishing. Celebrate Learning Labels and Organizers. Carson-Dellosa Publishing, LLC, 2018.
National Education Trust (Great Britain) Staff, and Marc Rowland. Learning Without Labels: Improving Outcomes for Vulnerable Pupils. John Catt Educational, Limited, 2017.
Book chapters on the topic "Learning with noisy labels"
Vembu, Shankar, and Sandra Zilles. "Interactive Learning from Multiple Noisy Labels". In Machine Learning and Knowledge Discovery in Databases, 493–508. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-46128-1_31.
Goryunova, Natalya, Artem Baklanov, and Egor Ianovski. "A Noisy-Labels Approach to Detecting Uncompetitive Auctions". In Machine Learning, Optimization, and Data Science, 185–200. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-95467-3_15.
Yang, Longrong, Fanman Meng, Hongliang Li, Qingbo Wu, and Qishang Cheng. "Learning with Noisy Class Labels for Instance Segmentation". In Computer Vision – ECCV 2020, 38–53. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58568-6_3.
Hu, Mengying, Hu Han, Shiguang Shan, and Xilin Chen. "Multi-label Learning from Noisy Labels with Non-linear Feature Transformation". In Computer Vision – ACCV 2018, 404–19. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-20873-8_26.
Gao, Zhengqi, Fan-Keng Sun, Mingran Yang, Sucheng Ren, Zikai Xiong, Marc Engeler, Antonio Burazer, Linda Wildling, Luca Daniel, and Duane S. Boning. "Learning from Multiple Annotator Noisy Labels via Sample-Wise Label Fusion". In Lecture Notes in Computer Science, 407–22. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-20053-3_24.
Nigam, Nitika, Tanima Dutta, and Hari Prabhat Gupta. "Impact of Noisy Labels in Learning Techniques: A Survey". In Advances in Data and Information Sciences, 403–11. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-0694-9_38.
Chen, Yipeng, Xiaojuan Ban, and Ke Xu. "Combating Noisy Labels via Contrastive Learning with Challenging Pairs". In Pattern Recognition and Computer Vision, 614–25. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-18910-4_49.
Liang, Xuefeng, Longshan Yao, and XingYu Liu. "Noisy Label Learning in Deep Learning". In IFIP Advances in Information and Communication Technology, 84–97. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-14903-0_10.
Sousa, Vitor, Amanda Lucas Pereira, Manoela Kohler, and Marco Pacheco. "Learning by Small Loss Approach Multi-label to Deal with Noisy Labels". In Computational Science and Its Applications – ICCSA 2023, 385–403. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-36805-9_26.
Cai, Zhuotong, Jingmin Xin, Peiwen Shi, Sanping Zhou, Jiayi Wu, and Nanning Zheng. "Meta Pixel Loss Correction for Medical Image Segmentation with Noisy Labels". In Medical Image Learning with Limited and Noisy Data, 32–41. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-16760-7_4.
Conference papers on the topic "Learning with noisy labels"
Liu, Yun-Peng, Ning Xu, Yu Zhang, and Xin Geng. "Label Distribution for Learning with Noisy Labels". In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/356.
Lu, Yangdi, and Wenbo He. "SELC: Self-Ensemble Label Correction Improves Learning with Noisy Labels". In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/455.
Li, Ziwei, Gengyu Lyu, and Songhe Feng. "Partial Multi-Label Learning via Multi-Subspace Representation". In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/362.
Hu, Chuanyang, Shipeng Yan, Zhitong Gao, and Xuming He. "MILD: Modeling the Instance Learning Dynamics for Learning with Noisy Labels". In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/92.
Li, Ximing, and Yang Wang. "Recovering Accurate Labeling Information from Partially Valid Data for Effective Multi-Label Learning". In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/191.
Gao, Ziyang, Yaping Yan, and Xin Geng. "Learning from Noisy Labels via Meta Credible Label Elicitation". In 2022 IEEE International Conference on Image Processing (ICIP). IEEE, 2022. http://dx.doi.org/10.1109/icip46576.2022.9897577.
Tu, Yuanpeng, Boshen Zhang, Yuxi Li, Liang Liu, Jian Li, Yabiao Wang, Chengjie Wang, and Cai Rong Zhao. "Learning from Noisy Labels with Decoupled Meta Label Purifier". In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2023. http://dx.doi.org/10.1109/cvpr52729.2023.01909.
Zhang, Haoyu, Dingkun Long, Guangwei Xu, Muhua Zhu, Pengjun Xie, Fei Huang, and Ji Wang. "Learning with Noise: Improving Distantly-Supervised Fine-grained Entity Typing via Automatic Relabeling". In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/527.
Luo, Yijing, Bo Han, and Chen Gong. "A Bi-level Formulation for Label Noise Learning with Spectral Cluster Discovery". In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/361.
Gui, Xian-Jin, Wei Wang, and Zhang-Hao Tian. "Towards Understanding Deep Learning from Noisy Labels with Small-Loss Criterion". In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/340.
Reports on the topic "Learning with noisy labels"
Jin, Rong, and Anil K. Jain. Data Representation: Learning Kernels from Noisy Data and Uncertain Information. Fort Belvoir, VA: Defense Technical Information Center, July 2010. http://dx.doi.org/10.21236/ada535030.
Multiple Engine Faults Detection Using Variational Mode Decomposition and GA-K-means. SAE International, March 2022. http://dx.doi.org/10.4271/2022-01-0616.