Selection of scholarly literature on the topic "Fairness-Accuracy Trade-Off"
Below are lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Fairness-Accuracy Trade-Off".
Journal articles on the topic "Fairness-Accuracy Trade-Off"
Jang, Taeuk, Pengyi Shi, and Xiaoqian Wang. "Group-Aware Threshold Adaptation for Fair Classification". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (June 28, 2022): 6988–95. http://dx.doi.org/10.1609/aaai.v36i6.20657.
Langenberg, Anna, Shih-Chi Ma, Tatiana Ermakova, and Benjamin Fabian. "Formal Group Fairness and Accuracy in Automated Decision Making". Mathematics 11, no. 8 (April 7, 2023): 1771. http://dx.doi.org/10.3390/math11081771.
Tae, Ki Hyun, Hantian Zhang, Jaeyoung Park, Kexin Rong, and Steven Euijong Whang. "Falcon: Fair Active Learning Using Multi-Armed Bandits". Proceedings of the VLDB Endowment 17, no. 5 (January 2024): 952–65. http://dx.doi.org/10.14778/3641204.3641207.
Badar, Maryam, Sandipan Sikdar, Wolfgang Nejdl, and Marco Fisichella. "FairTrade: Achieving Pareto-Optimal Trade-Offs between Balanced Accuracy and Fairness in Federated Learning". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 10 (March 24, 2024): 10962–70. http://dx.doi.org/10.1609/aaai.v38i10.28971.
Li, Xuran, Peng Wu, and Jing Su. "Accurate Fairness: Improving Individual Fairness without Trading Accuracy". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (June 26, 2023): 14312–20. http://dx.doi.org/10.1609/aaai.v37i12.26674.
Chiappa, Silvia, Ray Jiang, Tom Stepleton, Aldo Pacchiano, Heinrich Jiang, and John Aslanides. "A General Approach to Fairness with Optimal Transport". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 3633–40. http://dx.doi.org/10.1609/aaai.v34i04.5771.
Pinzón, Carlos, Catuscia Palamidessi, Pablo Piantanida, and Frank Valencia. "On the Impossibility of Non-trivial Accuracy in Presence of Fairness Constraints". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (June 28, 2022): 7993–8000. http://dx.doi.org/10.1609/aaai.v36i7.20770.
Singh, Arashdeep, Jashandeep Singh, Ariba Khan, and Amar Gupta. "Developing a Novel Fair-Loan Classifier through a Multi-Sensitive Debiasing Pipeline: DualFair". Machine Learning and Knowledge Extraction 4, no. 1 (March 12, 2022): 240–53. http://dx.doi.org/10.3390/make4010011.
Gitiaux, Xavier, and Huzefa Rangwala. "Fair Representations by Compression". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 13 (May 18, 2021): 11506–15. http://dx.doi.org/10.1609/aaai.v35i13.17370.
Gao, Shiqi, Xianxian Li, Zhenkui Shi, Peng Liu, and Chunpei Li. "Towards Fair and Decentralized Federated Learning System for Gradient Boosting Decision Trees". Security and Communication Networks 2022 (August 2, 2022): 1–18. http://dx.doi.org/10.1155/2022/4202084.
Dissertations on the topic "Fairness-Accuracy Trade-Off"
Alves da Silva, Guilherme. "Traitement hybride pour l'équité algorithmique". Electronic thesis or dissertation, Université de Lorraine, 2022. http://www.theses.fr/2022LORR0323.
Algorithmic decisions are currently being used on a daily basis. These decisions often rely on Machine Learning (ML) algorithms that may produce complex and opaque ML models. Recent studies raised unfairness concerns by revealing discriminating outcomes produced by ML models against minorities and unprivileged groups. As ML models are capable of amplifying discrimination against minorities due to unfair outcomes, this reveals the need for approaches that uncover and remove unintended biases. Assessing fairness and mitigating unfairness are the two main tasks that have motivated the growth of the research field called algorithmic fairness. Several notions used to assess fairness focus on the outcomes and link to sensitive features (e.g. gender and ethnicity) through statistical measures. Although these notions have distinct semantics, the use of such definitions of fairness has been criticized as a reductionist understanding of fairness, one whose aim is basically to implement accept/not-accept decisions while ignoring other perspectives on inequality and on societal impact. Process fairness, by contrast, is a subjective fairness notion centered on the process that leads to outcomes. To mitigate or remove unfairness, approaches generally apply fairness interventions at specific steps. They usually change either (1) the data before training, (2) the optimization function, or (3) the algorithms' outputs in order to enforce fairer outcomes. Recently, research on algorithmic fairness has been dedicated to exploring combinations of different fairness interventions, which is referred to in this thesis as fairness hybrid-processing. Once we try to mitigate unfairness, a tension between fairness and performance arises that is known as the fairness-accuracy trade-off. This thesis focuses on the fairness-accuracy trade-off problem, since we are interested in reducing unintended biases without compromising classification performance.
We thus propose ensemble-based methods to find a good compromise between fairness and classification performance of ML models, in particular models for binary classification. In addition, these methods produce ensemble classifiers through a combination of fairness interventions, which characterizes the fairness hybrid-processing approaches. We introduce FixOut (FaIrness through eXplanations and feature dropOut), a human-centered, model-agnostic framework that improves process fairness without compromising classification performance. It receives a pre-trained classifier (the original model), a dataset, a set of sensitive features, and an explanation method as input, and it outputs a new classifier that is less reliant on the sensitive features. To assess the reliance of a given pre-trained model on sensitive features, FixOut uses explanations to estimate the contribution of features to the model's outcomes. If sensitive features are shown to contribute globally to the model's outcomes, then the model is deemed unfair. In this case, FixOut builds a pool of fairer classifiers that are then aggregated to obtain an ensemble classifier. We show the adaptability of FixOut on different combinations of explanation methods and sampling approaches. We also evaluate the effectiveness of FixOut with respect to process fairness, as well as with well-known standard fairness notions available in the literature. Furthermore, we propose several improvements, such as automating the choice of FixOut's parameters and extending FixOut to other data types.
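The FixOut workflow described in the abstract can be sketched in a few lines. The following is a deliberately simplified illustration, not the thesis's implementation: the "explanation" step is stood in for by normalized absolute weights of a toy linear model, the `threshold` parameter and the averaging aggregation are assumptions chosen for clarity, and real FixOut uses proper explanation methods (e.g. LIME- or SHAP-style explainers) over arbitrary pre-trained classifiers.

```python
# Hypothetical sketch of a FixOut-style pipeline on a toy linear model.
# A model is represented as a dict mapping feature name -> weight.

def global_contributions(weights):
    """Toy 'explanation' step: each feature's global contribution is its
    share of the total absolute weight mass."""
    total = sum(abs(w) for w in weights.values())
    return {f: abs(w) / total for f, w in weights.items()}

def fixout_sketch(weights, sensitive, threshold=0.05):
    """If any sensitive feature contributes globally above `threshold`,
    build a pool of variants that each drop one flagged feature, then
    aggregate the pool by averaging weights (a simple ensemble)."""
    contrib = global_contributions(weights)
    flagged = [f for f in sensitive if contrib.get(f, 0.0) > threshold]
    if not flagged:
        return weights  # model already deemed process-fair; keep it
    pool = []
    for f in flagged:
        # "Feature dropout": zero out one sensitive feature per variant.
        pool.append({k: (0.0 if k == f else w) for k, w in weights.items()})
    # Aggregate the pool into one ensemble model by averaging weights.
    return {k: sum(v[k] for v in pool) / len(pool) for k in weights}
```

For example, with `weights = {"income": 2.0, "gender": 1.0, "age": 0.5}` and `sensitive = ["gender"]`, the gender feature contributes well above the threshold, so the returned ensemble no longer relies on it; if gender's weight were already negligible, the original model would be returned unchanged.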
Book chapters on the topic "Fairness-Accuracy Trade-Off"
Wang, Jingbo, Yannan Li, and Chao Wang. "Synthesizing Fair Decision Trees via Iterative Constraint Solving". In Computer Aided Verification, 364–85. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-13188-2_18.
Boyle, Alan. "Popular Audiences on the Web". In A Field Guide for Science Writers. Oxford University Press, 2005. http://dx.doi.org/10.1093/oso/9780195174991.003.0019.
Der volle Inhalt der QuelleKonferenzberichte zum Thema "Fairness-Accuracy trade-Off"
Liu, Yazheng, Xi Zhang, and Sihong Xie. "Trade less Accuracy for Fairness and Trade-off Explanation for GNN". In 2022 IEEE International Conference on Big Data (Big Data). IEEE, 2022. http://dx.doi.org/10.1109/bigdata55660.2022.10020318.
Cooper, A. Feder, and Ellen Abrams. "Emergent Unfairness in Algorithmic Fairness-Accuracy Trade-Off Research". In AIES '21: AAAI/ACM Conference on AI, Ethics, and Society. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3461702.3462519.
Bell, Andrew, Ian Solano-Kamaiko, Oded Nov, and Julia Stoyanovich. "It's Just Not That Simple: An Empirical Study of the Accuracy-Explainability Trade-off in Machine Learning for Public Policy". In FAccT '22: 2022 ACM Conference on Fairness, Accountability, and Transparency. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3531146.3533090.