A selection of scholarly literature on the topic "Fairness-Accuracy Trade-Off"
Browse lists of relevant articles, books, dissertations, conference papers, and other scholarly sources on the topic "Fairness-Accuracy Trade-Off".
Journal articles on the topic "Fairness-Accuracy Trade-Off":
Jang, Taeuk, Pengyi Shi, and Xiaoqian Wang. "Group-Aware Threshold Adaptation for Fair Classification." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (June 28, 2022): 6988–95. http://dx.doi.org/10.1609/aaai.v36i6.20657.
Langenberg, Anna, Shih-Chi Ma, Tatiana Ermakova, and Benjamin Fabian. "Formal Group Fairness and Accuracy in Automated Decision Making." Mathematics 11, no. 8 (April 7, 2023): 1771. http://dx.doi.org/10.3390/math11081771.
Tae, Ki Hyun, Hantian Zhang, Jaeyoung Park, Kexin Rong, and Steven Euijong Whang. "Falcon: Fair Active Learning Using Multi-Armed Bandits." Proceedings of the VLDB Endowment 17, no. 5 (January 2024): 952–65. http://dx.doi.org/10.14778/3641204.3641207.
Badar, Maryam, Sandipan Sikdar, Wolfgang Nejdl, and Marco Fisichella. "FairTrade: Achieving Pareto-Optimal Trade-Offs between Balanced Accuracy and Fairness in Federated Learning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 10 (March 24, 2024): 10962–70. http://dx.doi.org/10.1609/aaai.v38i10.28971.
Li, Xuran, Peng Wu, and Jing Su. "Accurate Fairness: Improving Individual Fairness without Trading Accuracy." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (June 26, 2023): 14312–20. http://dx.doi.org/10.1609/aaai.v37i12.26674.
Chiappa, Silvia, Ray Jiang, Tom Stepleton, Aldo Pacchiano, Heinrich Jiang, and John Aslanides. "A General Approach to Fairness with Optimal Transport." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 3633–40. http://dx.doi.org/10.1609/aaai.v34i04.5771.
Pinzón, Carlos, Catuscia Palamidessi, Pablo Piantanida, and Frank Valencia. "On the Impossibility of Non-trivial Accuracy in Presence of Fairness Constraints." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (June 28, 2022): 7993–8000. http://dx.doi.org/10.1609/aaai.v36i7.20770.
Singh, Arashdeep, Jashandeep Singh, Ariba Khan, and Amar Gupta. "Developing a Novel Fair-Loan Classifier through a Multi-Sensitive Debiasing Pipeline: DualFair." Machine Learning and Knowledge Extraction 4, no. 1 (March 12, 2022): 240–53. http://dx.doi.org/10.3390/make4010011.
Gitiaux, Xavier, and Huzefa Rangwala. "Fair Representations by Compression." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 13 (May 18, 2021): 11506–15. http://dx.doi.org/10.1609/aaai.v35i13.17370.
Gao, Shiqi, Xianxian Li, Zhenkui Shi, Peng Liu, and Chunpei Li. "Towards Fair and Decentralized Federated Learning System for Gradient Boosting Decision Trees." Security and Communication Networks 2022 (August 2, 2022): 1–18. http://dx.doi.org/10.1155/2022/4202084.
Dissertations on the topic "Fairness-Accuracy Trade-Off":
Alves da Silva, Guilherme. "Traitement hybride pour l'équité algorithmique." PhD diss., Université de Lorraine, 2022. http://www.theses.fr/2022LORR0323.
Algorithmic decisions are now made on a daily basis. These decisions often rely on Machine Learning (ML) algorithms that may produce complex and opaque models. Recent studies have raised unfairness concerns by revealing discriminatory outcomes produced by ML models against minorities and unprivileged groups. Because ML models can amplify discrimination against minorities through unfair outcomes, there is a need for approaches that uncover and remove unintended biases. Assessing fairness and mitigating unfairness are the two main tasks that have motivated the growth of the research field called algorithmic fairness.

Several notions used to assess fairness focus on outcomes and link them to sensitive features (e.g., gender and ethnicity) through statistical measures. Although these notions have distinct semantics, their use is criticized as a reductionist understanding of fairness that essentially implements accept/not-accept reports, ignoring other perspectives on inequality and on societal impact. Process fairness, by contrast, is a subjective fairness notion centered on the process that leads to outcomes. To mitigate or remove unfairness, approaches generally apply fairness interventions at specific steps: they change (1) the data before training, (2) the optimization function, or (3) the algorithm's outputs in order to enforce fairer outcomes. Recently, research on algorithmic fairness has explored combinations of different fairness interventions, referred to in this thesis as fairness hybrid-processing. Once we try to mitigate unfairness, a tension between fairness and performance arises, known as the fairness-accuracy trade-off. This thesis focuses on the fairness-accuracy trade-off problem, since we are interested in reducing unintended biases without compromising classification performance.
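The statistical fairness notions mentioned in the abstract compare outcome rates across groups defined by a sensitive feature. A minimal illustrative sketch (not from the thesis; function and variable names are hypothetical) of one such notion, the demographic parity gap:

```python
import numpy as np

def demographic_parity_gap(y_pred, sensitive):
    """Absolute difference in positive-prediction rates between the two
    groups encoded in `sensitive` (0/1). One of the outcome-based
    statistical fairness measures referred to in the abstract."""
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rate_group0 = y_pred[sensitive == 0].mean()  # positive rate, group 0
    rate_group1 = y_pred[sensitive == 1].mean()  # positive rate, group 1
    return abs(rate_group0 - rate_group1)

# A classifier with equal positive rates in both groups has gap 0.0
print(demographic_parity_gap([1, 0, 1, 0], [0, 0, 1, 1]))  # → 0.0
```

A gap of 0 means the classifier's acceptance rate is identical for both groups; the closer the gap is to 1, the stronger the disparity.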
We thus propose ensemble-based methods to find a good compromise between fairness and classification performance of ML models, in particular binary classifiers. These methods produce ensemble classifiers through a combination of fairness interventions, which characterizes the fairness hybrid-processing approach. We introduce FixOut (FaIrness through eXplanations and feature dropOut), a human-centered, model-agnostic framework that improves process fairness without compromising classification performance. It takes as input a pre-trained classifier (the original model), a dataset, a set of sensitive features, and an explanation method, and it outputs a new classifier that is less reliant on the sensitive features. To assess the reliance of a given pre-trained model on sensitive features, FixOut uses explanations to estimate the contribution of features to the model's outcomes. If sensitive features are shown to contribute globally to the model's outcomes, the model is deemed unfair; in this case, FixOut builds a pool of fairer classifiers that are then aggregated into an ensemble classifier. We show the adaptability of FixOut across different combinations of explanation methods and sampling approaches, and we evaluate its effectiveness with respect to process fairness as well as well-known standard fairness notions from the literature. Furthermore, we propose several improvements, such as automating the choice of FixOut's parameters and extending FixOut to other data types.
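The feature-dropout-then-aggregate idea described in the abstract can be sketched as follows. This is an illustrative toy version under simplifying assumptions, not the authors' FixOut implementation: it trains one classifier per sensitive feature (each with that feature removed) and averages their votes; `train_fn` and the list-of-lists data layout are hypothetical.

```python
from statistics import mean

def feature_dropout_ensemble(train_fn, X, y, sensitive_idx):
    """Toy sketch of the feature-dropout ensemble idea: for each
    sensitive feature index, train a classifier on data with that
    column removed, then aggregate the classifiers by majority vote."""
    models = []
    for idx in sensitive_idx:
        # Drop the sensitive column before training this pool member.
        X_dropped = [[v for j, v in enumerate(row) if j != idx] for row in X]
        models.append((idx, train_fn(X_dropped, y)))

    def predict(row):
        # Each member votes on the row with its own column removed.
        votes = [m([v for j, v in enumerate(row) if j != i]) for i, m in models]
        return 1 if mean(votes) >= 0.5 else 0

    return predict
```

By construction, no single member of the pool can rely on the sensitive feature it was trained without; the real framework additionally uses explanation methods to decide whether dropout is needed at all.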
Book chapters on the topic "Fairness-Accuracy Trade-Off":
Wang, Jingbo, Yannan Li, and Chao Wang. "Synthesizing Fair Decision Trees via Iterative Constraint Solving." In Computer Aided Verification, 364–85. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-13188-2_18.
Boyle, Alan. "Popular Audiences on the Web." In A Field Guide for Science Writers. Oxford University Press, 2005. http://dx.doi.org/10.1093/oso/9780195174991.003.0019.
Conference papers on the topic "Fairness-Accuracy Trade-Off":
Liu, Yazheng, Xi Zhang, and Sihong Xie. "Trade less Accuracy for Fairness and Trade-off Explanation for GNN." In 2022 IEEE International Conference on Big Data (Big Data). IEEE, 2022. http://dx.doi.org/10.1109/bigdata55660.2022.10020318.
Cooper, A. Feder, and Ellen Abrams. "Emergent Unfairness in Algorithmic Fairness-Accuracy Trade-Off Research." In AIES '21: AAAI/ACM Conference on AI, Ethics, and Society. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3461702.3462519.
Bell, Andrew, Ian Solano-Kamaiko, Oded Nov, and Julia Stoyanovich. "It’s Just Not That Simple: An Empirical Study of the Accuracy-Explainability Trade-off in Machine Learning for Public Policy." In FAccT '22: 2022 ACM Conference on Fairness, Accountability, and Transparency. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3531146.3533090.