Table of contents
Selected scholarly literature on the topic "ML fairness"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Browse the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "ML fairness."
Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read its online annotation, if the relevant parameters are available in the metadata.
Journal articles on the topic "ML fairness"
Weinberg, Lindsay. "Rethinking Fairness: An Interdisciplinary Survey of Critiques of Hegemonic ML Fairness Approaches." Journal of Artificial Intelligence Research 74 (May 6, 2022): 75–109. http://dx.doi.org/10.1613/jair.1.13196.
Bærøe, Kristine, Torbjørn Gundersen, Edmund Henden, and Kjetil Rommetveit. "Can medical algorithms be fair? Three ethical quandaries and one dilemma." BMJ Health & Care Informatics 29, no. 1 (April 2022): e100445. http://dx.doi.org/10.1136/bmjhci-2021-100445.
Li, Yanjun, Huan Huang, Qiang Geng, Xinwei Guo, and Yuyu Yuan. "Fairness Measures of Machine Learning Models in Judicial Penalty Prediction." 網際網路技術學刊 (Journal of Internet Technology) 23, no. 5 (September 2022): 1109–16. http://dx.doi.org/10.53106/160792642022092305019.
Ghosh, Bishwamittra, Debabrota Basu, and Kuldeep S. Meel. "Algorithmic Fairness Verification with Graphical Models." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 9 (June 28, 2022): 9539–48. http://dx.doi.org/10.1609/aaai.v36i9.21187.
Kuzucu, Selim, Jiaee Cheong, Hatice Gunes, and Sinan Kalkan. "Uncertainty as a Fairness Measure." Journal of Artificial Intelligence Research 81 (October 13, 2024): 307–35. http://dx.doi.org/10.1613/jair.1.16041.
Weerts, Hilde, Florian Pfisterer, Matthias Feurer, Katharina Eggensperger, Edward Bergman, Noor Awad, Joaquin Vanschoren, Mykola Pechenizkiy, Bernd Bischl, and Frank Hutter. "Can Fairness be Automated? Guidelines and Opportunities for Fairness-aware AutoML." Journal of Artificial Intelligence Research 79 (February 17, 2024): 639–77. http://dx.doi.org/10.1613/jair.1.14747.
Makhlouf, Karima, Sami Zhioua, and Catuscia Palamidessi. "On the Applicability of Machine Learning Fairness Notions." ACM SIGKDD Explorations Newsletter 23, no. 1 (May 26, 2021): 14–23. http://dx.doi.org/10.1145/3468507.3468511.
Singh, Vivek K., and Kailash Joshi. "Integrating Fairness in Machine Learning Development Life Cycle: Fair CRISP-DM." e-Service Journal 14, no. 2 (December 2022): 1–24. http://dx.doi.org/10.2979/esj.2022.a886946.
Zhou, Zijian, Xinyi Xu, Rachael Hwee Ling Sim, Chuan Sheng Foo, and Bryan Kian Hsiang Low. "Probably Approximate Shapley Fairness with Applications in Machine Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 5 (June 26, 2023): 5910–18. http://dx.doi.org/10.1609/aaai.v37i5.25732.
Sreerama, Jeevan, and Gowrisankar Krishnamoorthy. "Ethical Considerations in AI Addressing Bias and Fairness in Machine Learning Models." Journal of Knowledge Learning and Science Technology 1, no. 1 (September 14, 2022): 130–38. http://dx.doi.org/10.60087/jklst.vol1.n1.p138.
Dissertations on the topic "ML fairness"
Kaplan, Caelin. "Compromis inhérents à l'apprentissage automatique préservant la confidentialité." Electronic thesis or dissertation, Université Côte d'Azur, 2024. http://www.theses.fr/2024COAZ4045.
As machine learning (ML) models are integrated into an ever-wider range of applications, protecting the privacy of individuals' data matters more than ever. However, privacy-preserving ML techniques often reduce task-specific utility and can negatively affect other essential properties such as fairness, robustness, and interpretability. These challenges have limited the widespread adoption of privacy-preserving methods. This thesis pursues two goals: (1) to deepen the understanding of key trade-offs in three privacy-preserving ML techniques, namely differential privacy, empirical privacy defenses, and federated learning; and (2) to propose novel methods and algorithms that improve utility and effectiveness while maintaining privacy protections.

The first study investigates how differential privacy affects fairness across groups defined by sensitive attributes. While prior work suggested that differential privacy could exacerbate unfairness in ML models, our experiments demonstrate that selecting an appropriate model architecture and tuning the hyperparameters of DP-SGD (Differentially Private Stochastic Gradient Descent) can mitigate fairness disparities. On standard ML fairness datasets, we show that group disparities in metrics such as demographic parity, equalized odds, and predictive parity are often reduced, or remain negligible, relative to non-private baselines, challenging the prevailing notion that differential privacy worsens fairness for underrepresented groups.

The second study focuses on empirical privacy defenses, which aim to protect training data while minimizing utility loss. Most existing defenses assume access to reference data, an additional dataset drawn from the same or a similar distribution as the training data, yet prior work has largely neglected the privacy risks that reference data itself incurs. To address this, we conducted the first comprehensive analysis of reference data privacy in empirical defenses. We proposed a baseline defense, Weighted Empirical Risk Minimization (WERM), which clarifies the trade-offs between model utility, training data privacy, and reference data privacy. In addition to offering theoretical guarantees on model utility and the relative privacy of training and reference data, WERM consistently outperforms state-of-the-art empirical privacy defenses in nearly all relative privacy regimes.

The third study addresses convergence-related trade-offs in Collaborative Inference Systems (CISs), which are increasingly used in the Internet of Things (IoT) to let smaller nodes in a network offload part of their inference tasks to more powerful nodes. While Federated Learning (FL) is often used to jointly train models within CISs, traditional methods overlook the operational dynamics of these systems, such as heterogeneity in serving rates across nodes. We propose a novel FL approach designed explicitly for CISs that accounts for varying serving rates and uneven data availability. Our framework provides theoretical guarantees and consistently outperforms state-of-the-art algorithms, particularly in scenarios where end devices handle high inference request rates.

In conclusion, this thesis advances privacy-preserving ML by addressing key trade-offs in differential privacy, empirical privacy defenses, and federated learning. The proposed methods offer new insights into balancing privacy with utility and other critical factors, along with practical solutions for integrating privacy-preserving techniques into real-world applications. These contributions support the responsible and ethical deployment of AI technologies that prioritize data privacy and protection.
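The fairness metrics named in the first study (demographic parity, equalized odds, predictive parity) have standard definitions, so the comparison the abstract describes can be made concrete. Below is a minimal, self-contained sketch, not code from the thesis: the predictions here are random placeholders, whereas in practice y_base and y_priv would come from a non-privately trained model and a DP-SGD-trained model (e.g., trained with a library such as Opacus).

```python
# Sketch (illustrative, not from the thesis): group fairness gaps for a
# non-private baseline vs. a DP-SGD model, with random placeholder data.
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates across two groups."""
    g0, g1 = np.unique(group)
    return abs(y_pred[group == g0].mean() - y_pred[group == g1].mean())

def equalized_odds_gap(y_true, y_pred, group):
    """Worst gap in TPR (label 1) or FPR (label 0) across two groups."""
    g0, g1 = np.unique(group)
    gaps = []
    for label in (0, 1):
        mask = y_true == label
        gaps.append(abs(y_pred[mask & (group == g0)].mean()
                        - y_pred[mask & (group == g1)].mean()))
    return max(gaps)

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)   # ground-truth labels
group = rng.integers(0, 2, 1000)    # binary sensitive attribute
y_base = rng.integers(0, 2, 1000)   # placeholder: non-private baseline
y_priv = rng.integers(0, 2, 1000)   # placeholder: DP-SGD model

for name, y_hat in [("baseline", y_base), ("DP-SGD", y_priv)]:
    print(f"{name}: DP gap = {demographic_parity_gap(y_hat, group):.3f}, "
          f"EO gap = {equalized_odds_gap(y_true, y_hat, group):.3f}")
```

Predictive parity could be checked the same way, by comparing precision (the fraction of true positives among positive predictions) across the two groups.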
Book chapters on the topic "ML fairness"
Steif, Ken. "People-based ML Models: Algorithmic Fairness." In Public Policy Analytics, 153–70. Boca Raton: CRC Press, 2021. http://dx.doi.org/10.1201/9781003054658-7.
d'Aloisio, Giordano, Antinisca Di Marco, and Giovanni Stilo. "Democratizing Quality-Based Machine Learning Development through Extended Feature Models." In Fundamental Approaches to Software Engineering, 88–110. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-30826-0_5.
Silva, Inês Oliveira e, Carlos Soares, Inês Sousa, and Rayid Ghani. "Systematic Analysis of the Impact of Label Noise Correction on ML Fairness." In Lecture Notes in Computer Science, 173–84. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-8391-9_14.
Chopra, Deepti, and Roopal Khurana. "Bias and Fairness in ML." In Introduction to Machine Learning with Python, 116–22. Bentham Science Publishers, 2023. http://dx.doi.org/10.2174/9789815124422123010012.
Zhang, Wenbin, Zichong Wang, Juyong Kim, Cheng Cheng, Thomas Oommen, Pradeep Ravikumar, and Jeremy Weiss. "Individual Fairness Under Uncertainty." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2023. http://dx.doi.org/10.3233/faia230621.
Cohen-Inger, Nurit, Guy Rozenblatt, Seffi Cohen, Lior Rokach, and Bracha Shapira. "FairUS - UpSampling Optimized Method for Boosting Fairness." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2024. http://dx.doi.org/10.3233/faia240585.
Kothai, G., S. Nandhagopal, P. Harish, S. Sarankumar, and S. Vidhya. "Transforming Data Visualization With AI and ML." In Advances in Business Information Systems and Analytics, 125–68. IGI Global, 2024. http://dx.doi.org/10.4018/979-8-3693-6537-3.ch007.
Bendoukha, Adda-Akram, Nesrine Kaaniche, Aymen Boudguiga, and Renaud Sirdey. "FairCognizer: A Model for Accurate Predictions with Inherent Fairness Evaluation." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2024. http://dx.doi.org/10.3233/faia240592.
Wang, Song, Jing Ma, Lu Cheng, and Jundong Li. "Fair Few-Shot Learning with Auxiliary Sets." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2023. http://dx.doi.org/10.3233/faia230556.
Sunitha, K. "Ethical Issues, Fairness, Accountability, and Transparency in AI/ML." In Handbook of Research on Applications of AI, Digital Twin, and Internet of Things for Sustainable Development, 103–23. IGI Global, 2023. http://dx.doi.org/10.4018/978-1-6684-6821-0.ch007.
Conference papers on the topic "ML fairness"
Hertweck, Corinna, Michele Loi, and Christoph Heitz. "Group Fairness Refocused: Assessing the Social Impact of ML Systems." In 2024 11th IEEE Swiss Conference on Data Science (SDS), 189–96. IEEE, 2024. http://dx.doi.org/10.1109/sds60720.2024.00034.
Li, Zhiwei, Carl Kesselman, Mike D’Arcy, Michael Pazzani, and Benjamin Yizing Xu. "Deriva-ML: A Continuous FAIRness Approach to Reproducible Machine Learning Models." In 2024 IEEE 20th International Conference on e-Science (e-Science), 1–10. IEEE, 2024. http://dx.doi.org/10.1109/e-science62913.2024.10678671.
Robles Herrera, Salvador, Verya Monjezi, Vladik Kreinovich, Ashutosh Trivedi, and Saeid Tizpaz-Niari. "Predicting Fairness of ML Software Configurations." In PROMISE '24: 20th International Conference on Predictive Models and Data Analytics in Software Engineering. New York, NY, USA: ACM, 2024. http://dx.doi.org/10.1145/3663533.3664040.
Makhlouf, Karima, Sami Zhioua, and Catuscia Palamidessi. "Identifiability of Causal-based ML Fairness Notions." In 2022 14th International Conference on Computational Intelligence and Communication Networks (CICN). IEEE, 2022. http://dx.doi.org/10.1109/cicn56167.2022.10008263.
Baresi, Luciano, Chiara Criscuolo, and Carlo Ghezzi. "Understanding Fairness Requirements for ML-based Software." In 2023 IEEE 31st International Requirements Engineering Conference (RE). IEEE, 2023. http://dx.doi.org/10.1109/re57278.2023.00046.
Eyuboglu, Sabri, Karan Goel, Arjun Desai, Lingjiao Chen, Mathew Monfort, Chris Ré, and James Zou. "Model ChangeLists: Characterizing Updates to ML Models." In FAccT '24: The 2024 ACM Conference on Fairness, Accountability, and Transparency. New York, NY, USA: ACM, 2024. http://dx.doi.org/10.1145/3630106.3659047.
Wexler, James, Mahima Pushkarna, Sara Robinson, Tolga Bolukbasi, and Andrew Zaldivar. "Probing ML models for fairness with the what-if tool and SHAP." In FAT* '20: Conference on Fairness, Accountability, and Transparency. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3351095.3375662.
Blili-Hamelin, Borhane, and Leif Hancox-Li. "Making Intelligence: Ethical Values in IQ and ML Benchmarks." In FAccT '23: The 2023 ACM Conference on Fairness, Accountability, and Transparency. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3593013.3593996.
Heidari, Hoda, Michele Loi, Krishna P. Gummadi, and Andreas Krause. "A Moral Framework for Understanding Fair ML through Economic Models of Equality of Opportunity." In FAT* '19: Conference on Fairness, Accountability, and Transparency. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3287560.3287584.
Smith, Jessie J., Saleema Amershi, Solon Barocas, Hanna Wallach, and Jennifer Wortman Vaughan. "REAL ML: Recognizing, Exploring, and Articulating Limitations of Machine Learning Research." In FAccT '22: 2022 ACM Conference on Fairness, Accountability, and Transparency. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3531146.3533122.