Scientific literature on the topic "ML fairness"
Create an accurate reference in APA, MLA, Chicago, Harvard, and several other citation styles.
Browse the thematic lists of journal articles, books, theses, conference papers, and other academic sources on the topic "ML fairness".
Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever this information is included in the metadata.
Journal articles on the topic "ML fairness"
Weinberg, Lindsay. "Rethinking Fairness: An Interdisciplinary Survey of Critiques of Hegemonic ML Fairness Approaches." Journal of Artificial Intelligence Research 74 (May 6, 2022): 75–109. http://dx.doi.org/10.1613/jair.1.13196.
Bærøe, Kristine, Torbjørn Gundersen, Edmund Henden, and Kjetil Rommetveit. "Can medical algorithms be fair? Three ethical quandaries and one dilemma." BMJ Health & Care Informatics 29, no. 1 (April 2022): e100445. http://dx.doi.org/10.1136/bmjhci-2021-100445.
Li, Yanjun, Huan Huang, Qiang Geng, Xinwei Guo, and Yuyu Yuan. "Fairness Measures of Machine Learning Models in Judicial Penalty Prediction." 網際網路技術學刊 23, no. 5 (September 2022): 1109–16. http://dx.doi.org/10.53106/160792642022092305019.
Ghosh, Bishwamittra, Debabrota Basu, and Kuldeep S. Meel. "Algorithmic Fairness Verification with Graphical Models." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 9 (June 28, 2022): 9539–48. http://dx.doi.org/10.1609/aaai.v36i9.21187.
Kuzucu, Selim, Jiaee Cheong, Hatice Gunes, and Sinan Kalkan. "Uncertainty as a Fairness Measure." Journal of Artificial Intelligence Research 81 (October 13, 2024): 307–35. http://dx.doi.org/10.1613/jair.1.16041.
Weerts, Hilde, Florian Pfisterer, Matthias Feurer, Katharina Eggensperger, Edward Bergman, Noor Awad, Joaquin Vanschoren, Mykola Pechenizkiy, Bernd Bischl, and Frank Hutter. "Can Fairness be Automated? Guidelines and Opportunities for Fairness-aware AutoML." Journal of Artificial Intelligence Research 79 (February 17, 2024): 639–77. http://dx.doi.org/10.1613/jair.1.14747.
Makhlouf, Karima, Sami Zhioua, and Catuscia Palamidessi. "On the Applicability of Machine Learning Fairness Notions." ACM SIGKDD Explorations Newsletter 23, no. 1 (May 26, 2021): 14–23. http://dx.doi.org/10.1145/3468507.3468511.
Singh, Vivek K., and Kailash Joshi. "Integrating Fairness in Machine Learning Development Life Cycle: Fair CRISP-DM." e-Service Journal 14, no. 2 (December 2022): 1–24. http://dx.doi.org/10.2979/esj.2022.a886946.
Zhou, Zijian, Xinyi Xu, Rachael Hwee Ling Sim, Chuan Sheng Foo, and Bryan Kian Hsiang Low. "Probably Approximate Shapley Fairness with Applications in Machine Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 5 (June 26, 2023): 5910–18. http://dx.doi.org/10.1609/aaai.v37i5.25732.
Sreerama, Jeevan, and Gowrisankar Krishnamoorthy. "Ethical Considerations in AI Addressing Bias and Fairness in Machine Learning Models." Journal of Knowledge Learning and Science Technology ISSN: 2959-6386 (online) 1, no. 1 (September 14, 2022): 130–38. http://dx.doi.org/10.60087/jklst.vol1.n1.p138.
Theses on the topic "ML fairness"
Kaplan, Caelin. "Compromis inhérents à l'apprentissage automatique préservant la confidentialité." Electronic Thesis or Diss., Université Côte d'Azur, 2024. http://www.theses.fr/2024COAZ4045.
As machine learning (ML) models are increasingly integrated into a wide range of applications, ensuring the privacy of individuals' data is becoming more important than ever. However, privacy-preserving ML techniques often result in reduced task-specific utility and may negatively impact other essential factors like fairness, robustness, and interpretability. These challenges have limited the widespread adoption of privacy-preserving methods. This thesis aims to address these challenges through two primary goals: (1) to deepen the understanding of key trade-offs in three privacy-preserving ML techniques: differential privacy, empirical privacy defenses, and federated learning; (2) to propose novel methods and algorithms that improve utility and effectiveness while maintaining privacy protections.

The first study in this thesis investigates how differential privacy impacts fairness across groups defined by sensitive attributes. While previous assumptions suggested that differential privacy could exacerbate unfairness in ML models, our experiments demonstrate that selecting an optimal model architecture and tuning hyperparameters for DP-SGD (Differentially Private Stochastic Gradient Descent) can mitigate fairness disparities. Using standard ML fairness datasets, we show that group disparities in metrics like demographic parity, equalized odds, and predictive parity are often reduced or remain negligible when compared to non-private baselines, challenging the prevailing notion that differential privacy worsens fairness for underrepresented groups.

The second study focuses on empirical privacy defenses, which aim to protect training data privacy while minimizing utility loss. Most existing defenses assume access to reference data, an additional dataset from the same or a similar distribution as the training data. However, previous works have largely neglected to evaluate the privacy risks associated with reference data. To address this, we conducted the first comprehensive analysis of reference data privacy in empirical defenses. We proposed a baseline defense method, Weighted Empirical Risk Minimization (WERM), which allows for a clearer understanding of the trade-offs between model utility, training data privacy, and reference data privacy. In addition to offering theoretical guarantees on model utility and the relative privacy of training and reference data, WERM consistently outperforms state-of-the-art empirical privacy defenses in nearly all relative privacy regimes.

The third study addresses the convergence-related trade-offs in Collaborative Inference Systems (CISs), which are increasingly used in the Internet of Things (IoT) to enable smaller nodes in a network to offload part of their inference tasks to more powerful nodes. While Federated Learning (FL) is often used to jointly train models within CISs, traditional methods have overlooked the operational dynamics of these systems, such as heterogeneity in serving rates across nodes. We propose a novel FL approach explicitly designed for CISs, which accounts for varying serving rates and uneven data availability. Our framework provides theoretical guarantees and consistently outperforms state-of-the-art algorithms, particularly in scenarios where end devices handle high inference request rates.

In conclusion, this thesis advances the field of privacy-preserving ML by addressing key trade-offs in differential privacy, empirical privacy defenses, and federated learning. The proposed methods provide new insights into balancing privacy with utility and other critical factors, offering practical solutions for integrating privacy-preserving techniques into real-world applications. These contributions aim to support the responsible and ethical deployment of AI technologies that prioritize data privacy and protection.
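As a rough illustration of the group fairness metrics named in this abstract (demographic parity, equalized odds, and predictive parity), the following minimal Python sketch shows one common way to compute their between-group gaps from binary predictions and a binary sensitive attribute, for example to compare a non-private baseline against a DP-SGD-trained model. The function names and toy data are made up for this example and are not taken from the thesis.

import numpy as np

def rate(mask, values):
    # Mean of `values` over the rows selected by `mask`; 0.0 if the slice is empty.
    return float(values[mask].mean()) if mask.any() else 0.0

def fairness_gaps(y_true, y_pred, s):
    # Absolute between-group gaps for a binary sensitive attribute s in {0, 1}.
    g0, g1 = (s == 0), (s == 1)
    return {
        # Demographic parity: difference in positive-prediction rates.
        "demographic_parity": abs(rate(g0, y_pred) - rate(g1, y_pred)),
        # Equalized odds: worst gap between true-positive and false-positive rates.
        "equalized_odds": max(
            abs(rate(g0 & (y_true == 1), y_pred) - rate(g1 & (y_true == 1), y_pred)),
            abs(rate(g0 & (y_true == 0), y_pred) - rate(g1 & (y_true == 0), y_pred)),
        ),
        # Predictive parity: gap in precision, P(y = 1 | y_hat = 1), across groups.
        "predictive_parity": abs(rate(g0 & (y_pred == 1), y_true) - rate(g1 & (y_pred == 1), y_true)),
    }

# Hypothetical toy data: compare a non-private baseline with a DP-SGD-trained model.
y_true        = np.array([1, 0, 1, 1, 0, 0, 1, 0])
s             = np.array([0, 0, 0, 0, 1, 1, 1, 1])
baseline_pred = np.array([1, 0, 1, 1, 1, 0, 0, 0])
dp_pred       = np.array([1, 0, 1, 0, 1, 0, 1, 0])
print("baseline:", fairness_gaps(y_true, baseline_pred, s))
print("dp-sgd:  ", fairness_gaps(y_true, dp_pred, s))

Smaller gaps indicate that the two groups are treated more similarly under the corresponding criterion.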
Book chapters on the topic "ML fairness"
Steif, Ken. "People-based ML Models: Algorithmic Fairness." In Public Policy Analytics, 153–70. Boca Raton: CRC Press, 2021. http://dx.doi.org/10.1201/9781003054658-7.
d'Aloisio, Giordano, Antinisca Di Marco, and Giovanni Stilo. "Democratizing Quality-Based Machine Learning Development through Extended Feature Models." In Fundamental Approaches to Software Engineering, 88–110. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-30826-0_5.
Silva, Inês Oliveira e, Carlos Soares, Inês Sousa, and Rayid Ghani. "Systematic Analysis of the Impact of Label Noise Correction on ML Fairness." In Lecture Notes in Computer Science, 173–84. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-8391-9_14.
Chopra, Deepti, and Roopal Khurana. "Bias and Fairness in ML." In Introduction to Machine Learning with Python, 116–22. Bentham Science Publishers, 2023. http://dx.doi.org/10.2174/9789815124422123010012.
Zhang, Wenbin, Zichong Wang, Juyong Kim, Cheng Cheng, Thomas Oommen, Pradeep Ravikumar, and Jeremy Weiss. "Individual Fairness Under Uncertainty." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2023. http://dx.doi.org/10.3233/faia230621.
Cohen-Inger, Nurit, Guy Rozenblatt, Seffi Cohen, Lior Rokach, and Bracha Shapira. "FairUS - UpSampling Optimized Method for Boosting Fairness." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2024. http://dx.doi.org/10.3233/faia240585.
Kothai, G., S. Nandhagopal, P. Harish, S. Sarankumar, and S. Vidhya. "Transforming Data Visualization With AI and ML." In Advances in Business Information Systems and Analytics, 125–68. IGI Global, 2024. http://dx.doi.org/10.4018/979-8-3693-6537-3.ch007.
Bendoukha, Adda-Akram, Nesrine Kaaniche, Aymen Boudguiga, and Renaud Sirdey. "FairCognizer: A Model for Accurate Predictions with Inherent Fairness Evaluation." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2024. http://dx.doi.org/10.3233/faia240592.
Wang, Song, Jing Ma, Lu Cheng, and Jundong Li. "Fair Few-Shot Learning with Auxiliary Sets." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2023. http://dx.doi.org/10.3233/faia230556.
Sunitha, K. "Ethical Issues, Fairness, Accountability, and Transparency in AI/ML." In Handbook of Research on Applications of AI, Digital Twin, and Internet of Things for Sustainable Development, 103–23. IGI Global, 2023. http://dx.doi.org/10.4018/978-1-6684-6821-0.ch007.
Conference papers on the topic "ML fairness"
Hertweck, Corinna, Michele Loi, and Christoph Heitz. "Group Fairness Refocused: Assessing the Social Impact of ML Systems." In 2024 11th IEEE Swiss Conference on Data Science (SDS), 189–96. IEEE, 2024. http://dx.doi.org/10.1109/sds60720.2024.00034.
Li, Zhiwei, Carl Kesselman, Mike D'Arcy, Michael Pazzani, and Benjamin Yizing Xu. "Deriva-ML: A Continuous FAIRness Approach to Reproducible Machine Learning Models." In 2024 IEEE 20th International Conference on e-Science (e-Science), 1–10. IEEE, 2024. http://dx.doi.org/10.1109/e-science62913.2024.10678671.
Robles Herrera, Salvador, Verya Monjezi, Vladik Kreinovich, Ashutosh Trivedi, and Saeid Tizpaz-Niari. "Predicting Fairness of ML Software Configurations." In PROMISE '24: 20th International Conference on Predictive Models and Data Analytics in Software Engineering. New York, NY, USA: ACM, 2024. http://dx.doi.org/10.1145/3663533.3664040.
Makhlouf, Karima, Sami Zhioua, and Catuscia Palamidessi. "Identifiability of Causal-based ML Fairness Notions." In 2022 14th International Conference on Computational Intelligence and Communication Networks (CICN). IEEE, 2022. http://dx.doi.org/10.1109/cicn56167.2022.10008263.
Baresi, Luciano, Chiara Criscuolo, and Carlo Ghezzi. "Understanding Fairness Requirements for ML-based Software." In 2023 IEEE 31st International Requirements Engineering Conference (RE). IEEE, 2023. http://dx.doi.org/10.1109/re57278.2023.00046.
Eyuboglu, Sabri, Karan Goel, Arjun Desai, Lingjiao Chen, Mathew Monfort, Chris Ré, and James Zou. "Model ChangeLists: Characterizing Updates to ML Models." In FAccT '24: The 2024 ACM Conference on Fairness, Accountability, and Transparency. New York, NY, USA: ACM, 2024. http://dx.doi.org/10.1145/3630106.3659047.
Wexler, James, Mahima Pushkarna, Sara Robinson, Tolga Bolukbasi, and Andrew Zaldivar. "Probing ML models for fairness with the what-if tool and SHAP." In FAT* '20: Conference on Fairness, Accountability, and Transparency. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3351095.3375662.
Blili-Hamelin, Borhane, and Leif Hancox-Li. "Making Intelligence: Ethical Values in IQ and ML Benchmarks." In FAccT '23: The 2023 ACM Conference on Fairness, Accountability, and Transparency. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3593013.3593996.
Heidari, Hoda, Michele Loi, Krishna P. Gummadi, and Andreas Krause. "A Moral Framework for Understanding Fair ML through Economic Models of Equality of Opportunity." In FAT* '19: Conference on Fairness, Accountability, and Transparency. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3287560.3287584.
Smith, Jessie J., Saleema Amershi, Solon Barocas, Hanna Wallach, and Jennifer Wortman Vaughan. "REAL ML: Recognizing, Exploring, and Articulating Limitations of Machine Learning Research." In FAccT '22: 2022 ACM Conference on Fairness, Accountability, and Transparency. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3531146.3533122.