Scientific literature on the topic "Fairness-Accuracy Trade-Off"
Journal articles on the topic "Fairness-Accuracy Trade-Off"
Jang, Taeuk, Pengyi Shi, and Xiaoqian Wang. "Group-Aware Threshold Adaptation for Fair Classification." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (June 28, 2022): 6988–95. http://dx.doi.org/10.1609/aaai.v36i6.20657.
Langenberg, Anna, Shih-Chi Ma, Tatiana Ermakova, and Benjamin Fabian. "Formal Group Fairness and Accuracy in Automated Decision Making." Mathematics 11, no. 8 (April 7, 2023): 1771. http://dx.doi.org/10.3390/math11081771.
Tae, Ki Hyun, Hantian Zhang, Jaeyoung Park, Kexin Rong, and Steven Euijong Whang. "Falcon: Fair Active Learning Using Multi-Armed Bandits." Proceedings of the VLDB Endowment 17, no. 5 (January 2024): 952–65. http://dx.doi.org/10.14778/3641204.3641207.
Badar, Maryam, Sandipan Sikdar, Wolfgang Nejdl, and Marco Fisichella. "FairTrade: Achieving Pareto-Optimal Trade-Offs between Balanced Accuracy and Fairness in Federated Learning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 10 (March 24, 2024): 10962–70. http://dx.doi.org/10.1609/aaai.v38i10.28971.
Li, Xuran, Peng Wu, and Jing Su. "Accurate Fairness: Improving Individual Fairness without Trading Accuracy." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (June 26, 2023): 14312–20. http://dx.doi.org/10.1609/aaai.v37i12.26674.
Chiappa, Silvia, Ray Jiang, Tom Stepleton, Aldo Pacchiano, Heinrich Jiang, and John Aslanides. "A General Approach to Fairness with Optimal Transport." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 3633–40. http://dx.doi.org/10.1609/aaai.v34i04.5771.
Pinzón, Carlos, Catuscia Palamidessi, Pablo Piantanida, and Frank Valencia. "On the Impossibility of Non-trivial Accuracy in Presence of Fairness Constraints." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (June 28, 2022): 7993–8000. http://dx.doi.org/10.1609/aaai.v36i7.20770.
Singh, Arashdeep, Jashandeep Singh, Ariba Khan, and Amar Gupta. "Developing a Novel Fair-Loan Classifier through a Multi-Sensitive Debiasing Pipeline: DualFair." Machine Learning and Knowledge Extraction 4, no. 1 (March 12, 2022): 240–53. http://dx.doi.org/10.3390/make4010011.
Gitiaux, Xavier, and Huzefa Rangwala. "Fair Representations by Compression." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 13 (May 18, 2021): 11506–15. http://dx.doi.org/10.1609/aaai.v35i13.17370.
Gao, Shiqi, Xianxian Li, Zhenkui Shi, Peng Liu, and Chunpei Li. "Towards Fair and Decentralized Federated Learning System for Gradient Boosting Decision Trees." Security and Communication Networks 2022 (August 2, 2022): 1–18. http://dx.doi.org/10.1155/2022/4202084.
Theses on the topic "Fairness-Accuracy Trade-Off"
Alves da Silva, Guilherme. "Traitement hybride pour l'équité algorithmique" [Hybrid processing for algorithmic fairness]. Electronic Thesis or Diss., Université de Lorraine, 2022. http://www.theses.fr/2022LORR0323.
Algorithmic decisions are now used on a daily basis. These decisions often rely on Machine Learning (ML) algorithms that may produce complex and opaque ML models. Recent studies have raised unfairness concerns by revealing discriminatory outcomes produced by ML models against minorities and unprivileged groups. Because ML models can amplify discrimination against minorities through unfair outcomes, there is a need for approaches that uncover and remove unintended biases. Assessing fairness and mitigating unfairness are the two main tasks that have motivated the growth of the research field called algorithmic fairness. Several notions used to assess fairness focus on the outcomes and link them to sensitive features (e.g. gender and ethnicity) through statistical measures. Although these notions have distinct semantics, such definitions of fairness have been criticized as a reductionist understanding of fairness whose aim is essentially to implement accept/not-accept decisions, ignoring other perspectives on inequality and on societal impact. Process fairness, in contrast, is a subjective fairness notion centered on the process that leads to outcomes. To mitigate or remove unfairness, approaches generally apply fairness interventions at specific steps: they usually change either (1) the data before training, (2) the optimization function, or (3) the algorithms' outputs in order to enforce fairer outcomes. Recently, research on algorithmic fairness has explored combinations of different fairness interventions, referred to in this thesis as fairness hybrid-processing. Once we try to mitigate unfairness, a tension between fairness and performance arises, known as the fairness-accuracy trade-off. This thesis focuses on the fairness-accuracy trade-off problem, since we are interested in reducing unintended biases without compromising classification performance.
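The statistical fairness notions mentioned in the abstract can be made concrete with a small sketch. The following is an illustration (not code from the thesis), assuming binary predictions and one binary sensitive feature; it computes the demographic parity difference, one of the standard outcome-based fairness measures, alongside plain accuracy:

```python
import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    """Absolute gap in positive-prediction rates between the two sensitive groups."""
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rate_a = y_pred[sensitive == 0].mean()
    rate_b = y_pred[sensitive == 1].mean()
    return abs(rate_a - rate_b)

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return float(np.mean(np.asarray(y_true) == np.asarray(y_pred)))

# Toy example: the classifier predicts positive far more often for group 1.
y_true    = [1, 0, 1, 0, 1, 0, 1, 0]
y_pred    = [1, 0, 0, 0, 1, 1, 1, 1]
sensitive = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(y_pred, sensitive))  # 0.75
print(accuracy(y_true, y_pred))  # 0.625
```

The fairness-accuracy trade-off shows up when one tries to shrink the first number without degrading the second.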
We thus propose ensemble-based methods to find a good compromise between fairness and classification performance of ML models, in particular models for binary classification. These methods produce ensemble classifiers through a combination of fairness interventions, which characterizes the fairness hybrid-processing approach. We introduce FixOut (FaIrness through eXplanations and feature dropOut), a human-centered, model-agnostic framework that improves process fairness without compromising classification performance. It takes as input a pre-trained classifier (the original model), a dataset, a set of sensitive features, and an explanation method, and it outputs a new classifier that is less reliant on the sensitive features. To assess the reliance of a given pre-trained model on sensitive features, FixOut uses explanations to estimate the contribution of features to the model's outcomes. If sensitive features are shown to contribute globally to the model's outcomes, the model is deemed unfair. In this case, FixOut builds a pool of fairer classifiers that are then aggregated into an ensemble classifier. We show the adaptability of FixOut on different combinations of explanation methods and sampling approaches. We also evaluate the effectiveness of FixOut with respect to process fairness as well as well-known standard fairness notions from the literature. Furthermore, we propose several improvements, such as automating the choice of FixOut's parameters and extending FixOut to other data types.
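As a rough illustration of the feature-dropout-plus-ensemble idea described above (the function names, the plain logistic-regression learner, and the soft-voting aggregation rule are all assumptions for the sketch, not the thesis's actual implementation), one can train one classifier per dropped sensitive column and average their predicted probabilities:

```python
import numpy as np

def fit_logreg(X, y, lr=0.1, epochs=200):
    """Plain logistic regression trained with batch gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
        grad = p - y                              # gradient of the log-loss
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

def dropout_ensemble(X, y, sensitive_cols):
    """Train one model per dropped sensitive column; soft-vote their probabilities."""
    models = []
    for col in sensitive_cols:
        keep = [j for j in range(X.shape[1]) if j != col]
        models.append((keep, fit_logreg(X[:, keep], y)))
    def predict(X_new):
        probs = np.mean(
            [1.0 / (1.0 + np.exp(-(X_new[:, keep] @ w + b))) for keep, (w, b) in models],
            axis=0,
        )
        return (probs >= 0.5).astype(int)
    return predict

# Synthetic data: columns 0 and 3 play the role of sensitive features,
# while the label actually depends only on columns 1 and 2.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 1] + X[:, 2] > 0).astype(int)
predict = dropout_ensemble(X, y, sensitive_cols=[0, 3])
print((predict(X) == y).mean())  # accuracy stays high without columns 0 and 3
```

In this toy setup each pool member never sees one of the sensitive columns, so the aggregated classifier's reliance on those columns is reduced by construction; FixOut instead uses explanation methods to decide when such a pool is needed.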
Book chapters on the topic "Fairness-Accuracy Trade-Off"
Wang, Jingbo, Yannan Li, and Chao Wang. "Synthesizing Fair Decision Trees via Iterative Constraint Solving." In Computer Aided Verification, 364–85. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-13188-2_18.
Boyle, Alan. "Popular Audiences on the Web." In A Field Guide for Science Writers. Oxford University Press, 2005. http://dx.doi.org/10.1093/oso/9780195174991.003.0019.
Conference papers on the topic "Fairness-Accuracy Trade-Off"
Liu, Yazheng, Xi Zhang, and Sihong Xie. "Trade less Accuracy for Fairness and Trade-off Explanation for GNN." In 2022 IEEE International Conference on Big Data (Big Data). IEEE, 2022. http://dx.doi.org/10.1109/bigdata55660.2022.10020318.
Cooper, A. Feder, and Ellen Abrams. "Emergent Unfairness in Algorithmic Fairness-Accuracy Trade-Off Research." In AIES '21: AAAI/ACM Conference on AI, Ethics, and Society. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3461702.3462519.
Bell, Andrew, Ian Solano-Kamaiko, Oded Nov, and Julia Stoyanovich. "It's Just Not That Simple: An Empirical Study of the Accuracy-Explainability Trade-off in Machine Learning for Public Policy." In FAccT '22: 2022 ACM Conference on Fairness, Accountability, and Transparency. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3531146.3533090.