Academic literature on the topic 'ML fairness'
Create an accurate reference in APA, MLA, Chicago, Harvard, and other citation styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'ML fairness.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "ML fairness"
Weinberg, Lindsay. "Rethinking Fairness: An Interdisciplinary Survey of Critiques of Hegemonic ML Fairness Approaches." Journal of Artificial Intelligence Research 74 (May 6, 2022): 75–109. http://dx.doi.org/10.1613/jair.1.13196.
Bærøe, Kristine, Torbjørn Gundersen, Edmund Henden, and Kjetil Rommetveit. "Can medical algorithms be fair? Three ethical quandaries and one dilemma." BMJ Health & Care Informatics 29, no. 1 (April 2022): e100445. http://dx.doi.org/10.1136/bmjhci-2021-100445.
Li, Yanjun, Huan Huang, Qiang Geng, Xinwei Guo, and Yuyu Yuan. "Fairness Measures of Machine Learning Models in Judicial Penalty Prediction." Journal of Internet Technology 23, no. 5 (September 2022): 1109–16. http://dx.doi.org/10.53106/160792642022092305019.
Ghosh, Bishwamittra, Debabrota Basu, and Kuldeep S. Meel. "Algorithmic Fairness Verification with Graphical Models." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 9 (June 28, 2022): 9539–48. http://dx.doi.org/10.1609/aaai.v36i9.21187.
Kuzucu, Selim, Jiaee Cheong, Hatice Gunes, and Sinan Kalkan. "Uncertainty as a Fairness Measure." Journal of Artificial Intelligence Research 81 (October 13, 2024): 307–35. http://dx.doi.org/10.1613/jair.1.16041.
Weerts, Hilde, Florian Pfisterer, Matthias Feurer, Katharina Eggensperger, Edward Bergman, Noor Awad, Joaquin Vanschoren, Mykola Pechenizkiy, Bernd Bischl, and Frank Hutter. "Can Fairness be Automated? Guidelines and Opportunities for Fairness-aware AutoML." Journal of Artificial Intelligence Research 79 (February 17, 2024): 639–77. http://dx.doi.org/10.1613/jair.1.14747.
Makhlouf, Karima, Sami Zhioua, and Catuscia Palamidessi. "On the Applicability of Machine Learning Fairness Notions." ACM SIGKDD Explorations Newsletter 23, no. 1 (May 26, 2021): 14–23. http://dx.doi.org/10.1145/3468507.3468511.
Singh, Vivek K., and Kailash Joshi. "Integrating Fairness in Machine Learning Development Life Cycle: Fair CRISP-DM." e-Service Journal 14, no. 2 (December 2022): 1–24. http://dx.doi.org/10.2979/esj.2022.a886946.
Zhou, Zijian, Xinyi Xu, Rachael Hwee Ling Sim, Chuan Sheng Foo, and Bryan Kian Hsiang Low. "Probably Approximate Shapley Fairness with Applications in Machine Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 5 (June 26, 2023): 5910–18. http://dx.doi.org/10.1609/aaai.v37i5.25732.
Sreerama, Jeevan, and Gowrisankar Krishnamoorthy. "Ethical Considerations in AI: Addressing Bias and Fairness in Machine Learning Models." Journal of Knowledge Learning and Science Technology 1, no. 1 (September 14, 2022): 130–38. http://dx.doi.org/10.60087/jklst.vol1.n1.p138.
Full textDissertations / Theses on the topic "ML fairness"
Kaplan, Caelin. "Compromis inhérents à l'apprentissage automatique préservant la confidentialité" [Inherent trade-offs in privacy-preserving machine learning]. Electronic Thesis or Diss., Université Côte d'Azur, 2024. http://www.theses.fr/2024COAZ4045.
As machine learning (ML) models are increasingly integrated into a wide range of applications, ensuring the privacy of individuals' data is becoming more important than ever. However, privacy-preserving ML techniques often come at the cost of reduced task-specific utility and may negatively affect other essential factors such as fairness, robustness, and interpretability. These challenges have limited the widespread adoption of privacy-preserving methods. This thesis aims to address these challenges through two primary goals: (1) to deepen the understanding of key trade-offs in three privacy-preserving ML techniques, namely differential privacy, empirical privacy defenses, and federated learning; and (2) to propose novel methods and algorithms that improve utility and effectiveness while maintaining privacy protections.

The first study investigates how differential privacy affects fairness across groups defined by sensitive attributes. While previous work suggested that differential privacy could exacerbate unfairness in ML models, our experiments demonstrate that selecting an optimal model architecture and tuning the hyperparameters of DP-SGD (Differentially Private Stochastic Gradient Descent) can mitigate fairness disparities. Using standard ML fairness datasets, we show that group disparities in metrics such as demographic parity, equalized odds, and predictive parity are often reduced, or remain negligible, compared with non-private baselines, challenging the prevailing notion that differential privacy worsens fairness for underrepresented groups.

The second study focuses on empirical privacy defenses, which aim to protect training data privacy while minimizing utility loss. Most existing defenses assume access to reference data, an additional dataset from the same or a similar distribution as the training data. However, previous work has largely neglected to evaluate the privacy risks associated with reference data. To address this, we conducted the first comprehensive analysis of reference data privacy in empirical defenses. We propose a baseline defense method, Weighted Empirical Risk Minimization (WERM), which allows for a clearer understanding of the trade-offs between model utility, training data privacy, and reference data privacy. In addition to offering theoretical guarantees on model utility and on the relative privacy of training and reference data, WERM consistently outperforms state-of-the-art empirical privacy defenses in nearly all relative privacy regimes.

The third study addresses convergence-related trade-offs in Collaborative Inference Systems (CISs), which are increasingly used in the Internet of Things (IoT) to let smaller nodes in a network offload part of their inference tasks to more powerful nodes. While Federated Learning (FL) is often used to jointly train models within CISs, traditional methods have overlooked the operational dynamics of these systems, such as heterogeneity in serving rates across nodes. We propose a novel FL approach explicitly designed for CISs that accounts for varying serving rates and uneven data availability. Our framework provides theoretical guarantees and consistently outperforms state-of-the-art algorithms, particularly in scenarios where end devices handle high inference request rates.

In conclusion, this thesis advances the field of privacy-preserving ML by addressing key trade-offs in differential privacy, empirical privacy defenses, and federated learning. The proposed methods provide new insights into balancing privacy with utility and other critical factors, and offer practical solutions for integrating privacy-preserving techniques into real-world applications. These contributions aim to support the responsible and ethical deployment of AI technologies that prioritize data privacy and protection.
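The abstract's first study reports group disparities in demographic parity, equalized odds, and predictive parity. For readers unfamiliar with these metrics, here is a minimal sketch of how the first two are commonly computed for a binary classifier and a binary sensitive attribute; the function names and toy arrays are illustrative only and are not taken from the thesis.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Gap in positive-prediction rates between the two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equalized_odds_difference(y_true, y_pred, group):
    """Largest gap between groups in true-positive rate (y_true == 1)
    or false-positive rate (y_true == 0)."""
    gaps = []
    for label in (0, 1):
        mask = y_true == label
        gaps.append(abs(y_pred[mask & (group == 0)].mean()
                        - y_pred[mask & (group == 1)].mean()))
    return max(gaps)

# Toy comparison of a DP-trained model's predictions against the labels
# (the arrays are made up for illustration).
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y_dp   = np.array([1, 0, 1, 0, 0, 0, 1, 1])
print(demographic_parity_difference(y_dp, group))     # 0.0
print(equalized_odds_difference(y_true, y_dp, group)) # ~0.33
```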
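The abstract does not spell out the WERM objective, but the name Weighted Empirical Risk Minimization suggests an empirical risk that mixes training-data and reference-data losses. The sketch below is one plausible reading under that assumption; the weight w and every identifier here are hypothetical, not from the thesis.

```python
import torch

def werm_objective(model, loss_fn, train_batch, ref_batch, w=0.5):
    """Hypothetical weighted empirical risk: a convex combination of the
    loss on the training data and the loss on the reference data. The
    weight w (0 <= w <= 1) would control how much each dataset is
    exposed through the fitted model; w and this formulation are
    assumptions for illustration."""
    x_tr, y_tr = train_batch
    x_ref, y_ref = ref_batch
    return (w * loss_fn(model(x_tr), y_tr)
            + (1.0 - w) * loss_fn(model(x_ref), y_ref))
```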
Book chapters on the topic "ML fairness"
Steif, Ken. "People-based ML Models: Algorithmic Fairness." In Public Policy Analytics, 153–70. Boca Raton: CRC Press, 2021. http://dx.doi.org/10.1201/9781003054658-7.
d’Aloisio, Giordano, Antinisca Di Marco, and Giovanni Stilo. "Democratizing Quality-Based Machine Learning Development through Extended Feature Models." In Fundamental Approaches to Software Engineering, 88–110. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-30826-0_5.
Silva, Inês Oliveira e., Carlos Soares, Inês Sousa, and Rayid Ghani. "Systematic Analysis of the Impact of Label Noise Correction on ML Fairness." In Lecture Notes in Computer Science, 173–84. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-8391-9_14.
Chopra, Deepti, and Roopal Khurana. "Bias and Fairness in ML." In Introduction to Machine Learning with Python, 116–22. Bentham Science Publishers, 2023. http://dx.doi.org/10.2174/9789815124422123010012.
Zhang, Wenbin, Zichong Wang, Juyong Kim, Cheng Cheng, Thomas Oommen, Pradeep Ravikumar, and Jeremy Weiss. "Individual Fairness Under Uncertainty." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2023. http://dx.doi.org/10.3233/faia230621.
Cohen-Inger, Nurit, Guy Rozenblatt, Seffi Cohen, Lior Rokach, and Bracha Shapira. "FairUS - UpSampling Optimized Method for Boosting Fairness." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2024. http://dx.doi.org/10.3233/faia240585.
Kothai, G., S. Nandhagopal, P. Harish, S. Sarankumar, and S. Vidhya. "Transforming Data Visualization With AI and ML." In Advances in Business Information Systems and Analytics, 125–68. IGI Global, 2024. http://dx.doi.org/10.4018/979-8-3693-6537-3.ch007.
Bendoukha, Adda-Akram, Nesrine Kaaniche, Aymen Boudguiga, and Renaud Sirdey. "FairCognizer: A Model for Accurate Predictions with Inherent Fairness Evaluation." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2024. http://dx.doi.org/10.3233/faia240592.
Wang, Song, Jing Ma, Lu Cheng, and Jundong Li. "Fair Few-Shot Learning with Auxiliary Sets." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2023. http://dx.doi.org/10.3233/faia230556.
Sunitha, K. "Ethical Issues, Fairness, Accountability, and Transparency in AI/ML." In Handbook of Research on Applications of AI, Digital Twin, and Internet of Things for Sustainable Development, 103–23. IGI Global, 2023. http://dx.doi.org/10.4018/978-1-6684-6821-0.ch007.
Full textConference papers on the topic "ML fairness"
Hertweck, Corinna, Michele Loi, and Christoph Heitz. "Group Fairness Refocused: Assessing the Social Impact of ML Systems." In 2024 11th IEEE Swiss Conference on Data Science (SDS), 189–96. IEEE, 2024. http://dx.doi.org/10.1109/sds60720.2024.00034.
Li, Zhiwei, Carl Kesselman, Mike D’Arcy, Michael Pazzani, and Benjamin Yizing Xu. "Deriva-ML: A Continuous FAIRness Approach to Reproducible Machine Learning Models." In 2024 IEEE 20th International Conference on e-Science (e-Science), 1–10. IEEE, 2024. http://dx.doi.org/10.1109/e-science62913.2024.10678671.
Robles Herrera, Salvador, Verya Monjezi, Vladik Kreinovich, Ashutosh Trivedi, and Saeid Tizpaz-Niari. "Predicting Fairness of ML Software Configurations." In PROMISE '24: 20th International Conference on Predictive Models and Data Analytics in Software Engineering. New York, NY, USA: ACM, 2024. http://dx.doi.org/10.1145/3663533.3664040.
Makhlouf, Karima, Sami Zhioua, and Catuscia Palamidessi. "Identifiability of Causal-based ML Fairness Notions." In 2022 14th International Conference on Computational Intelligence and Communication Networks (CICN). IEEE, 2022. http://dx.doi.org/10.1109/cicn56167.2022.10008263.
Baresi, Luciano, Chiara Criscuolo, and Carlo Ghezzi. "Understanding Fairness Requirements for ML-based Software." In 2023 IEEE 31st International Requirements Engineering Conference (RE). IEEE, 2023. http://dx.doi.org/10.1109/re57278.2023.00046.
Eyuboglu, Sabri, Karan Goel, Arjun Desai, Lingjiao Chen, Mathew Monfort, Chris Ré, and James Zou. "Model ChangeLists: Characterizing Updates to ML Models." In FAccT '24: The 2024 ACM Conference on Fairness, Accountability, and Transparency. New York, NY, USA: ACM, 2024. http://dx.doi.org/10.1145/3630106.3659047.
Wexler, James, Mahima Pushkarna, Sara Robinson, Tolga Bolukbasi, and Andrew Zaldivar. "Probing ML models for fairness with the what-if tool and SHAP." In FAT* '20: Conference on Fairness, Accountability, and Transparency. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3351095.3375662.
Blili-Hamelin, Borhane, and Leif Hancox-Li. "Making Intelligence: Ethical Values in IQ and ML Benchmarks." In FAccT '23: The 2023 ACM Conference on Fairness, Accountability, and Transparency. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3593013.3593996.
Heidari, Hoda, Michele Loi, Krishna P. Gummadi, and Andreas Krause. "A Moral Framework for Understanding Fair ML through Economic Models of Equality of Opportunity." In FAT* '19: Conference on Fairness, Accountability, and Transparency. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3287560.3287584.
Smith, Jessie J., Saleema Amershi, Solon Barocas, Hanna Wallach, and Jennifer Wortman Vaughan. "REAL ML: Recognizing, Exploring, and Articulating Limitations of Machine Learning Research." In FAccT '22: 2022 ACM Conference on Fairness, Accountability, and Transparency. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3531146.3533122.