Academic literature on the topic "ML fairness"
Below are thematic lists of articles, theses, book chapters, and conference papers on the topic "ML fairness".
Journal articles on the topic "ML fairness"
Weinberg, Lindsay. "Rethinking Fairness: An Interdisciplinary Survey of Critiques of Hegemonic ML Fairness Approaches". Journal of Artificial Intelligence Research 74 (May 6, 2022): 75–109. http://dx.doi.org/10.1613/jair.1.13196.
Bærøe, Kristine, Torbjørn Gundersen, Edmund Henden, and Kjetil Rommetveit. "Can medical algorithms be fair? Three ethical quandaries and one dilemma". BMJ Health & Care Informatics 29, no. 1 (April 2022): e100445. http://dx.doi.org/10.1136/bmjhci-2021-100445.
Li, Yanjun, Huan Huang, Qiang Geng, Xinwei Guo, and Yuyu Yuan. "Fairness Measures of Machine Learning Models in Judicial Penalty Prediction". Journal of Internet Technology (網際網路技術學刊) 23, no. 5 (September 2022): 1109–16. http://dx.doi.org/10.53106/160792642022092305019.
Ghosh, Bishwamittra, Debabrota Basu, and Kuldeep S. Meel. "Algorithmic Fairness Verification with Graphical Models". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 9 (June 28, 2022): 9539–48. http://dx.doi.org/10.1609/aaai.v36i9.21187.
Kuzucu, Selim, Jiaee Cheong, Hatice Gunes, and Sinan Kalkan. "Uncertainty as a Fairness Measure". Journal of Artificial Intelligence Research 81 (October 13, 2024): 307–35. http://dx.doi.org/10.1613/jair.1.16041.
Weerts, Hilde, Florian Pfisterer, Matthias Feurer, Katharina Eggensperger, Edward Bergman, Noor Awad, Joaquin Vanschoren, Mykola Pechenizkiy, Bernd Bischl, and Frank Hutter. "Can Fairness be Automated? Guidelines and Opportunities for Fairness-aware AutoML". Journal of Artificial Intelligence Research 79 (February 17, 2024): 639–77. http://dx.doi.org/10.1613/jair.1.14747.
Makhlouf, Karima, Sami Zhioua, and Catuscia Palamidessi. "On the Applicability of Machine Learning Fairness Notions". ACM SIGKDD Explorations Newsletter 23, no. 1 (May 26, 2021): 14–23. http://dx.doi.org/10.1145/3468507.3468511.
Singh, Vivek K., and Kailash Joshi. "Integrating Fairness in Machine Learning Development Life Cycle: Fair CRISP-DM". e-Service Journal 14, no. 2 (December 2022): 1–24. http://dx.doi.org/10.2979/esj.2022.a886946.
Zhou, Zijian, Xinyi Xu, Rachael Hwee Ling Sim, Chuan Sheng Foo, and Bryan Kian Hsiang Low. "Probably Approximate Shapley Fairness with Applications in Machine Learning". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 5 (June 26, 2023): 5910–18. http://dx.doi.org/10.1609/aaai.v37i5.25732.
Sreerama, Jeevan, and Gowrisankar Krishnamoorthy. "Ethical Considerations in AI Addressing Bias and Fairness in Machine Learning Models". Journal of Knowledge Learning and Science Technology 1, no. 1 (September 14, 2022): 130–38. http://dx.doi.org/10.60087/jklst.vol1.n1.p138.
Theses on the topic "ML fairness"
Kaplan, Caelin. "Compromis inhérents à l'apprentissage automatique préservant la confidentialité" [Inherent trade-offs in privacy-preserving machine learning]. Electronic Thesis or Diss., Université Côte d'Azur, 2024. http://www.theses.fr/2024COAZ4045.
As machine learning (ML) models are increasingly integrated into a wide range of applications, ensuring the privacy of individuals' data is becoming more important than ever. However, privacy-preserving ML techniques often come at the cost of task-specific utility and may negatively impact other essential factors such as fairness, robustness, and interpretability. These challenges have limited the widespread adoption of privacy-preserving methods. This thesis addresses them through two primary goals: (1) to deepen the understanding of key trade-offs in three privacy-preserving ML techniques: differential privacy, empirical privacy defenses, and federated learning; and (2) to propose novel methods and algorithms that improve utility and effectiveness while maintaining privacy protections.

The first study investigates how differential privacy impacts fairness across groups defined by sensitive attributes. While previous work assumed that differential privacy could exacerbate unfairness in ML models, our experiments demonstrate that selecting an optimal model architecture and tuning hyperparameters for DP-SGD (Differentially Private Stochastic Gradient Descent) can mitigate fairness disparities. Using standard ML fairness datasets, we show that group disparities in metrics such as demographic parity, equalized odds, and predictive parity are often reduced, or remain negligible, compared to non-private baselines, challenging the prevailing notion that differential privacy worsens fairness for underrepresented groups.

The second study focuses on empirical privacy defenses, which aim to protect training data privacy while minimizing utility loss. Most existing defenses assume access to reference data: an additional dataset from the same or a similar distribution as the training data. However, previous work has largely neglected to evaluate the privacy risks associated with the reference data itself. To address this, we conducted the first comprehensive analysis of reference data privacy in empirical defenses. We propose a baseline defense method, Weighted Empirical Risk Minimization (WERM), which allows for a clearer understanding of the trade-offs between model utility, training data privacy, and reference data privacy. In addition to offering theoretical guarantees on model utility and the relative privacy of training and reference data, WERM consistently outperforms state-of-the-art empirical privacy defenses in nearly all relative privacy regimes.

The third study addresses convergence-related trade-offs in Collaborative Inference Systems (CISs), which are increasingly used in the Internet of Things (IoT) to let smaller nodes in a network offload part of their inference tasks to more powerful nodes. While Federated Learning (FL) is often used to jointly train models within CISs, traditional methods have overlooked the operational dynamics of these systems, such as heterogeneity in serving rates across nodes. We propose a novel FL approach explicitly designed for CISs that accounts for varying serving rates and uneven data availability. Our framework provides theoretical guarantees and consistently outperforms state-of-the-art algorithms, particularly in scenarios where end devices handle high inference request rates.

In conclusion, this thesis advances the field of privacy-preserving ML by addressing key trade-offs in differential privacy, empirical privacy defenses, and federated learning. The proposed methods provide new insights into balancing privacy with utility and other critical factors, offering practical solutions for integrating privacy-preserving techniques into real-world applications. These contributions aim to support the responsible and ethical deployment of AI technologies that prioritize data privacy and protection.
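The abstract above evaluates models on three standard group-fairness criteria. As a concrete illustration of what those metrics measure, here is a minimal Python sketch, entirely our own and not code from the thesis (the function group_fairness_gaps and its interface are hypothetical), computing each criterion as a gap in conditional prediction rates between two groups:

```python
# Illustrative only: gaps between two groups (sensitive attribute A in {0, 1})
# for demographic parity, equalized odds, and predictive parity.
import numpy as np

def group_fairness_gaps(y_true, y_pred, sensitive):
    """Absolute inter-group gaps for three common group-fairness criteria."""
    rates = {}
    for g in (0, 1):
        m = sensitive == g
        yt, yp = y_true[m], y_pred[m]
        rates[g] = {
            "pos": yp.mean(),                                         # P(Yhat=1 | A=g)
            "tpr": yp[yt == 1].mean() if (yt == 1).any() else np.nan, # P(Yhat=1 | Y=1, A=g)
            "fpr": yp[yt == 0].mean() if (yt == 0).any() else np.nan, # P(Yhat=1 | Y=0, A=g)
            "ppv": yt[yp == 1].mean() if (yp == 1).any() else np.nan, # P(Y=1 | Yhat=1, A=g)
        }
    return {
        "demographic_parity": abs(rates[0]["pos"] - rates[1]["pos"]),
        "equalized_odds": max(abs(rates[0]["tpr"] - rates[1]["tpr"]),
                              abs(rates[0]["fpr"] - rates[1]["fpr"])),
        "predictive_parity": abs(rates[0]["ppv"] - rates[1]["ppv"]),
    }

# Example: such gaps would be compared between a DP-SGD model and a non-private baseline.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(group_fairness_gaps(y_true, y_pred, group))
```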
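Likewise, the WERM defense described in the second study is, at its core, a weighted empirical risk objective over training and reference data. The sketch below shows one plausible form of such an objective, assuming a single trade-off weight alpha and a PyTorch-style model and loss function; the thesis's exact formulation and its theoretical guarantees are not reproduced here.

```python
# Illustrative only: a weighted empirical risk in the spirit of WERM. The single
# weight alpha is our assumption; it trades off risk on the private training set
# against risk on the reference set (whose own privacy the thesis also analyzes).
import torch

def werm_loss(model, loss_fn, train_batch, ref_batch, alpha=0.7):
    """Weighted empirical risk: alpha on training data, (1 - alpha) on reference data."""
    x_tr, y_tr = train_batch
    x_ref, y_ref = ref_batch
    risk_train = loss_fn(model(x_tr), y_tr)  # empirical risk on the private training set
    risk_ref = loss_fn(model(x_ref), y_ref)  # empirical risk on the reference set
    return alpha * risk_train + (1 - alpha) * risk_ref
```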
Book chapters on the topic "ML fairness"
Steif, Ken. "People-based ML Models: Algorithmic Fairness". In Public Policy Analytics, 153–70. Boca Raton: CRC Press, 2021. http://dx.doi.org/10.1201/9781003054658-7.
d’Aloisio, Giordano, Antinisca Di Marco, and Giovanni Stilo. "Democratizing Quality-Based Machine Learning Development through Extended Feature Models". In Fundamental Approaches to Software Engineering, 88–110. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-30826-0_5.
Silva, Inês Oliveira e, Carlos Soares, Inês Sousa, and Rayid Ghani. "Systematic Analysis of the Impact of Label Noise Correction on ML Fairness". In Lecture Notes in Computer Science, 173–84. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-8391-9_14.
Chopra, Deepti, and Roopal Khurana. "Bias and Fairness in ML". In Introduction to Machine Learning with Python, 116–22. Bentham Science Publishers, 2023. http://dx.doi.org/10.2174/9789815124422123010012.
Zhang, Wenbin, Zichong Wang, Juyong Kim, Cheng Cheng, Thomas Oommen, Pradeep Ravikumar, and Jeremy Weiss. "Individual Fairness Under Uncertainty". In Frontiers in Artificial Intelligence and Applications. IOS Press, 2023. http://dx.doi.org/10.3233/faia230621.
Cohen-Inger, Nurit, Guy Rozenblatt, Seffi Cohen, Lior Rokach, and Bracha Shapira. "FairUS - UpSampling Optimized Method for Boosting Fairness". In Frontiers in Artificial Intelligence and Applications. IOS Press, 2024. http://dx.doi.org/10.3233/faia240585.
Kothai, G., S. Nandhagopal, P. Harish, S. Sarankumar, and S. Vidhya. "Transforming Data Visualization With AI and ML". In Advances in Business Information Systems and Analytics, 125–68. IGI Global, 2024. http://dx.doi.org/10.4018/979-8-3693-6537-3.ch007.
Bendoukha, Adda-Akram, Nesrine Kaaniche, Aymen Boudguiga, and Renaud Sirdey. "FairCognizer: A Model for Accurate Predictions with Inherent Fairness Evaluation". In Frontiers in Artificial Intelligence and Applications. IOS Press, 2024. http://dx.doi.org/10.3233/faia240592.
Wang, Song, Jing Ma, Lu Cheng, and Jundong Li. "Fair Few-Shot Learning with Auxiliary Sets". In Frontiers in Artificial Intelligence and Applications. IOS Press, 2023. http://dx.doi.org/10.3233/faia230556.
Sunitha, K. "Ethical Issues, Fairness, Accountability, and Transparency in AI/ML". In Handbook of Research on Applications of AI, Digital Twin, and Internet of Things for Sustainable Development, 103–23. IGI Global, 2023. http://dx.doi.org/10.4018/978-1-6684-6821-0.ch007.
Conference papers on the topic "ML fairness"
Hertweck, Corinna, Michele Loi, and Christoph Heitz. "Group Fairness Refocused: Assessing the Social Impact of ML Systems". In 2024 11th IEEE Swiss Conference on Data Science (SDS), 189–96. IEEE, 2024. http://dx.doi.org/10.1109/sds60720.2024.00034.
Li, Zhiwei, Carl Kesselman, Mike D’Arcy, Michael Pazzani, and Benjamin Yizing Xu. "Deriva-ML: A Continuous FAIRness Approach to Reproducible Machine Learning Models". In 2024 IEEE 20th International Conference on e-Science (e-Science), 1–10. IEEE, 2024. http://dx.doi.org/10.1109/e-science62913.2024.10678671.
Robles Herrera, Salvador, Verya Monjezi, Vladik Kreinovich, Ashutosh Trivedi, and Saeid Tizpaz-Niari. "Predicting Fairness of ML Software Configurations". In PROMISE '24: 20th International Conference on Predictive Models and Data Analytics in Software Engineering. New York, NY, USA: ACM, 2024. http://dx.doi.org/10.1145/3663533.3664040.
Makhlouf, Karima, Sami Zhioua, and Catuscia Palamidessi. "Identifiability of Causal-based ML Fairness Notions". In 2022 14th International Conference on Computational Intelligence and Communication Networks (CICN). IEEE, 2022. http://dx.doi.org/10.1109/cicn56167.2022.10008263.
Baresi, Luciano, Chiara Criscuolo, and Carlo Ghezzi. "Understanding Fairness Requirements for ML-based Software". In 2023 IEEE 31st International Requirements Engineering Conference (RE). IEEE, 2023. http://dx.doi.org/10.1109/re57278.2023.00046.
Eyuboglu, Sabri, Karan Goel, Arjun Desai, Lingjiao Chen, Mathew Monfort, Chris Ré, and James Zou. "Model ChangeLists: Characterizing Updates to ML Models". In FAccT '24: The 2024 ACM Conference on Fairness, Accountability, and Transparency. New York, NY, USA: ACM, 2024. http://dx.doi.org/10.1145/3630106.3659047.
Wexler, James, Mahima Pushkarna, Sara Robinson, Tolga Bolukbasi, and Andrew Zaldivar. "Probing ML models for fairness with the what-if tool and SHAP". In FAT* '20: Conference on Fairness, Accountability, and Transparency. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3351095.3375662.
Blili-Hamelin, Borhane, and Leif Hancox-Li. "Making Intelligence: Ethical Values in IQ and ML Benchmarks". In FAccT '23: The 2023 ACM Conference on Fairness, Accountability, and Transparency. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3593013.3593996.
Heidari, Hoda, Michele Loi, Krishna P. Gummadi, and Andreas Krause. "A Moral Framework for Understanding Fair ML through Economic Models of Equality of Opportunity". In FAT* '19: Conference on Fairness, Accountability, and Transparency. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3287560.3287584.
Smith, Jessie J., Saleema Amershi, Solon Barocas, Hanna Wallach, and Jennifer Wortman Vaughan. "REAL ML: Recognizing, Exploring, and Articulating Limitations of Machine Learning Research". In FAccT '22: 2022 ACM Conference on Fairness, Accountability, and Transparency. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3531146.3533122.