Selected scientific literature on the topic "Fairness-Accuracy Trade-Off"
Below is a list of current articles, books, theses, conference papers, and other scientific sources relevant to the topic "Fairness-Accuracy Trade-Off".
Journal articles on the topic "Fairness-Accuracy Trade-Off":
Jang, Taeuk, Pengyi Shi, and Xiaoqian Wang. "Group-Aware Threshold Adaptation for Fair Classification". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (June 28, 2022): 6988–95. http://dx.doi.org/10.1609/aaai.v36i6.20657.
Langenberg, Anna, Shih-Chi Ma, Tatiana Ermakova, and Benjamin Fabian. "Formal Group Fairness and Accuracy in Automated Decision Making". Mathematics 11, no. 8 (April 7, 2023): 1771. http://dx.doi.org/10.3390/math11081771.
Tae, Ki Hyun, Hantian Zhang, Jaeyoung Park, Kexin Rong, and Steven Euijong Whang. "Falcon: Fair Active Learning Using Multi-Armed Bandits". Proceedings of the VLDB Endowment 17, no. 5 (January 2024): 952–65. http://dx.doi.org/10.14778/3641204.3641207.
Badar, Maryam, Sandipan Sikdar, Wolfgang Nejdl, and Marco Fisichella. "FairTrade: Achieving Pareto-Optimal Trade-Offs between Balanced Accuracy and Fairness in Federated Learning". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 10 (March 24, 2024): 10962–70. http://dx.doi.org/10.1609/aaai.v38i10.28971.
Li, Xuran, Peng Wu, and Jing Su. "Accurate Fairness: Improving Individual Fairness without Trading Accuracy". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (June 26, 2023): 14312–20. http://dx.doi.org/10.1609/aaai.v37i12.26674.
Chiappa, Silvia, Ray Jiang, Tom Stepleton, Aldo Pacchiano, Heinrich Jiang, and John Aslanides. "A General Approach to Fairness with Optimal Transport". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 3633–40. http://dx.doi.org/10.1609/aaai.v34i04.5771.
Pinzón, Carlos, Catuscia Palamidessi, Pablo Piantanida, and Frank Valencia. "On the Impossibility of Non-trivial Accuracy in Presence of Fairness Constraints". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (June 28, 2022): 7993–8000. http://dx.doi.org/10.1609/aaai.v36i7.20770.
Singh, Arashdeep, Jashandeep Singh, Ariba Khan, and Amar Gupta. "Developing a Novel Fair-Loan Classifier through a Multi-Sensitive Debiasing Pipeline: DualFair". Machine Learning and Knowledge Extraction 4, no. 1 (March 12, 2022): 240–53. http://dx.doi.org/10.3390/make4010011.
Gitiaux, Xavier, and Huzefa Rangwala. "Fair Representations by Compression". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 13 (May 18, 2021): 11506–15. http://dx.doi.org/10.1609/aaai.v35i13.17370.
Gao, Shiqi, Xianxian Li, Zhenkui Shi, Peng Liu, and Chunpei Li. "Towards Fair and Decentralized Federated Learning System for Gradient Boosting Decision Trees". Security and Communication Networks 2022 (August 2, 2022): 1–18. http://dx.doi.org/10.1155/2022/4202084.
Theses on the topic "Fairness-Accuracy Trade-Off":
Alves da Silva, Guilherme. "Traitement hybride pour l'équité algorithmique". Electronic Thesis or Diss., Université de Lorraine, 2022. http://www.theses.fr/2022LORR0323.
Algorithmic decisions are currently being used on a daily basis. These decisions often rely on Machine Learning (ML) algorithms that may produce complex and opaque ML models. Recent studies raised unfairness concerns by revealing discriminatory outcomes produced by ML models against minorities and unprivileged groups. Since ML models are capable of amplifying discrimination against minorities through unfair outcomes, approaches are needed that uncover and remove unintended biases. Assessing fairness and mitigating unfairness are the two main tasks that have motivated the growth of the research field called algorithmic fairness. Several notions used to assess fairness focus on the outcomes and link them to sensitive features (e.g. gender and ethnicity) through statistical measures. Although these notions have distinct semantics, their use is criticized as a reductionist understanding of fairness whose aim is basically to produce accept/not-accept reports, ignoring other perspectives on inequality and on societal impact. Process fairness, instead, is a subjective fairness notion centered on the process that leads to outcomes. To mitigate or remove unfairness, approaches generally apply fairness interventions at specific steps: they usually change either (1) the data before training, (2) the optimization function, or (3) the algorithms' outputs in order to enforce fairer outcomes. Recently, research on algorithmic fairness has been dedicated to exploring combinations of different fairness interventions, which is referred to in this thesis as fairness hybrid-processing. When we try to mitigate unfairness, a tension between fairness and performance arises that is known as the fairness-accuracy trade-off. This thesis focuses on the fairness-accuracy trade-off problem since we are interested in reducing unintended biases without compromising classification performance.
We thus propose ensemble-based methods to find a good compromise between fairness and classification performance of ML models, in particular models for binary classification. In addition, these methods produce ensemble classifiers through a combination of fairness interventions, which characterizes the fairness hybrid-processing approaches. We introduce FixOut (FaIrness through eXplanations and feature dropOut), a human-centered, model-agnostic framework that improves process fairness without compromising classification performance. It receives a pre-trained classifier (the original model), a dataset, a set of sensitive features, and an explanation method as input, and it outputs a new classifier that is less reliant on the sensitive features. To assess the reliance of a given pre-trained model on sensitive features, FixOut uses explanations to estimate the contribution of features to the model's outcomes. If sensitive features are shown to contribute globally to the model's outcomes, the model is deemed unfair. In this case, FixOut builds a pool of fairer classifiers that are then aggregated into an ensemble classifier. We show the adaptability of FixOut to different combinations of explanation methods and sampling approaches. We also evaluate the effectiveness of FixOut w.r.t. process fairness as well as well-known standard fairness notions from the literature. Furthermore, we propose several improvements, such as automating the choice of FixOut's parameters and extending FixOut to other data types.
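The feature-dropout ensemble idea described in this abstract can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation: the function names (`drop_columns`, `fixout_ensemble`) and the toy mean-threshold classifier are assumptions, and the real FixOut uses an explanation method (e.g. LIME) to decide whether sensitive features contribute globally before building the pool.

```python
def drop_columns(X, cols):
    """Return a copy of the dataset X with the given column indices removed."""
    keep = [j for j in range(len(X[0])) if j not in cols]
    return [[row[j] for j in keep] for row in X]

def train_toy(X, y):
    """Toy stand-in for any binary classifier: predict 1 when the row mean
    reaches the average row mean of the positive training examples."""
    pos = [sum(r) / len(r) for r, label in zip(X, y) if label == 1]
    thr = sum(pos) / len(pos)
    return lambda row: 1 if sum(row) / len(row) >= thr else 0

def fixout_ensemble(X, y, sensitive):
    """Train one classifier per dropped sensitive feature (plus one with
    all sensitive features removed) and aggregate them by majority vote."""
    drops = [{s} for s in sensitive] + [set(sensitive)]
    models = [(cols, train_toy(drop_columns(X, cols), y)) for cols in drops]

    def predict(row):
        # Each pool member votes on the row with its own columns dropped,
        # so no member's vote can depend on the features it never saw.
        votes = [model([v for j, v in enumerate(row) if j not in cols])
                 for cols, model in models]
        return 1 if 2 * sum(votes) >= len(votes) else 0

    return predict
```

On a toy dataset where column 0 is the sensitive feature, every member of the pool is trained and queried without that column, so the aggregated prediction cannot rely on it directly; this mirrors the pool-then-aggregate step of the framework, not its explanation-based unfairness check.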
Book chapters on the topic "Fairness-Accuracy Trade-Off":
Wang, Jingbo, Yannan Li, and Chao Wang. "Synthesizing Fair Decision Trees via Iterative Constraint Solving". In Computer Aided Verification, 364–85. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-13188-2_18.
Boyle, Alan. "Popular Audiences on the Web". In A Field Guide for Science Writers. Oxford University Press, 2005. http://dx.doi.org/10.1093/oso/9780195174991.003.0019.
Conference papers on the topic "Fairness-Accuracy Trade-Off":
Liu, Yazheng, Xi Zhang e Sihong Xie. "Trade less Accuracy for Fairness and Trade-off Explanation for GNN". In 2022 IEEE International Conference on Big Data (Big Data). IEEE, 2022. http://dx.doi.org/10.1109/bigdata55660.2022.10020318.
Cooper, A. Feder, and Ellen Abrams. "Emergent Unfairness in Algorithmic Fairness-Accuracy Trade-Off Research". In AIES '21: AAAI/ACM Conference on AI, Ethics, and Society. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3461702.3462519.
Bell, Andrew, Ian Solano-Kamaiko, Oded Nov e Julia Stoyanovich. "It’s Just Not That Simple: An Empirical Study of the Accuracy-Explainability Trade-off in Machine Learning for Public Policy". In FAccT '22: 2022 ACM Conference on Fairness, Accountability, and Transparency. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3531146.3533090.