A selection of scientific literature on the topic "Unfairness mitigation"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Table of contents
Consult the lists of current articles, books, theses, reports, and other scientific sources on the topic "Unfairness mitigation".
Next to each work in the bibliography, an "Add to bibliography" option is available. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scientific publication as a PDF and read its online annotation, provided the relevant parameters are present in the work's metadata.
Journal articles on the topic "Unfairness mitigation"
Balayn, Agathe, Christoph Lofi, and Geert-Jan Houben. "Managing bias and unfairness in data for decision support: a survey of machine learning and data engineering approaches to identify and mitigate bias and unfairness within data management and analytics systems". VLDB Journal 30, no. 5 (May 5, 2021): 739–68. http://dx.doi.org/10.1007/s00778-021-00671-8.
Pagano, Tiago P., Rafael B. Loureiro, Fernanda V. N. Lisboa, Rodrigo M. Peixoto, Guilherme A. S. Guimarães, Gustavo O. R. Cruz, Maira M. Araujo et al. "Bias and Unfairness in Machine Learning Models: A Systematic Review on Datasets, Tools, Fairness Metrics, and Identification and Mitigation Methods". Big Data and Cognitive Computing 7, no. 1 (January 13, 2023): 15. http://dx.doi.org/10.3390/bdcc7010015.
Abdullah, Nurhidayah, and Zuhairah Ariff Abd Ghadas. "THE APPLICATION OF GOOD FAITH IN CONTRACTS DURING A FORCE MAJEURE EVENT AND BEYOND WITH SPECIAL REFERENCE TO THE COVID-19 ACT 2020". UUM Journal of Legal Studies 14, no. 1 (January 18, 2023): 141–60. http://dx.doi.org/10.32890/uumjls2023.14.1.6.
Menziwa, Yolanda, Eunice Lebogang Sesale, and Solly Matshonisa Seeletse. "Challenges in research data collection and mitigation interventions". International Journal of Research in Business and Social Science (2147-4478) 13, no. 2 (April 3, 2024): 336–44. http://dx.doi.org/10.20525/ijrbs.v13i2.3187.
Rana, Saadia Afzal, Zati Hakim Azizul, and Ali Afzal Awan. "A step toward building a unified framework for managing AI bias". PeerJ Computer Science 9 (October 26, 2023): e1630. http://dx.doi.org/10.7717/peerj-cs.1630.
Latif, Aadil, Wolfgang Gawlik, and Peter Palensky. "Quantification and Mitigation of Unfairness in Active Power Curtailment of Rooftop Photovoltaic Systems Using Sensitivity Based Coordinated Control". Energies 9, no. 6 (June 4, 2016): 436. http://dx.doi.org/10.3390/en9060436.
Yang, Zhenhuan, Yan Lok Ko, Kush R. Varshney, and Yiming Ying. "Minimax AUC Fairness: Efficient Algorithm with Provable Convergence". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 10 (June 26, 2023): 11909–17. http://dx.doi.org/10.1609/aaai.v37i10.26405.
Khanam, Taslima. "Rule of law approach to alleviation of poverty: An analysis on human rights dimension of governance". IIUC Studies 15 (September 21, 2020): 23–32. http://dx.doi.org/10.3329/iiucs.v15i0.49342.
Qi, Jin. "Mitigating Delays and Unfairness in Appointment Systems". Management Science 63, no. 2 (February 2017): 566–83. http://dx.doi.org/10.1287/mnsc.2015.2353.
Lehrieder, Frank, Simon Oechsner, Tobias Hoßfeld, Dirk Staehle, Zoran Despotovic, Wolfgang Kellerer, and Maximilian Michel. "Mitigating unfairness in locality-aware peer-to-peer networks". International Journal of Network Management 21, no. 1 (January 2011): 3–20. http://dx.doi.org/10.1002/nem.772.
Dissertations on the topic "Unfairness mitigation"
Yao, Sirui. "Evaluating, Understanding, and Mitigating Unfairness in Recommender Systems". Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/103779.
Doctor of Philosophy
Recommender systems are information filtering tools that discover potential matches between users and items. However, a recommender system, if not properly built, may not treat users and items equitably, which raises ethical and legal concerns. In this research, we explore the implications of fairness in the context of recommender systems, study the relation between unfairness in recommender output and inequality in the underlying population, and propose effective unfairness mitigation approaches. We start by finding unfairness metrics appropriate for recommender systems. We focus on the task of rating prediction, which is a crucial step in recommender systems. We propose a set of unfairness metrics measured as the disparity in how much predictions deviate from the ground-truth ratings. We also offer a mitigation method to reduce these forms of unfairness in matrix factorization models. Next, we look deeper into the factors that contribute to error-based unfairness in matrix factorization models and identify four types of biases that contribute to higher subpopulation error. We then propose personalized regularization learning (PRL), a mitigation strategy that learns personalized regularization parameters to directly address data biases. The learned per-user regularization parameters are interpretable and provide insight into how fairness is improved. Third, we conduct a theoretical study of the long-term dynamics of the inequality in the fit (e.g., interest, qualification) between users and items. We first mathematically formulate the transition dynamics of user-item fit in one step of recommendation. We then discuss the existence and uniqueness of the system equilibrium as the one-step dynamics repeat. We also show that, depending on the relation between item categories and the recommendation policies (unconstrained or fair), recommendations in one item category can reshape the user-item fit in another item category.
In summary, we examine different fairness criteria in rating prediction and recommendation, study the dynamics of interactions between recommender systems and users, and propose mitigation methods to promote fairness and equality.
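The error-based unfairness metrics this abstract describes can be sketched in a few lines. The following is a minimal illustration with made-up ratings and two groups; the function name `error_disparity` and the setup are assumptions for illustration, not the thesis code:

```python
import numpy as np

def error_disparity(y_true, y_pred, group):
    """Disparity in how much predictions deviate from ground-truth
    ratings between two user groups (labeled 0 and 1)."""
    dev = y_pred - y_true                      # signed prediction error
    return abs(dev[group == 0].mean() - dev[group == 1].mean())

# Toy ratings: two users per group
y_true = np.array([4.0, 3.0, 5.0, 2.0])
y_pred = np.array([3.5, 2.5, 4.0, 3.0])
group  = np.array([0, 0, 1, 1])

print(error_disparity(y_true, y_pred, group))  # → 0.5
```

A model that is fair under this metric keeps the disparity near zero, i.e. its predictions are not systematically more biased for one group than for the other.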
Alves da Silva, Guilherme. "Traitement hybride pour l'équité algorithmique" [Hybrid processing for algorithmic fairness]. Electronic Thesis or Diss., Université de Lorraine, 2022. http://www.theses.fr/2022LORR0323.
Algorithmic decisions are now used on a daily basis. These decisions often rely on Machine Learning (ML) algorithms that may produce complex and opaque ML models. Recent studies have raised unfairness concerns by revealing discriminatory outcomes produced by ML models against minorities and unprivileged groups. Because ML models can amplify discrimination against minorities through unfair outcomes, approaches are needed that uncover and remove unintended biases. Assessing fairness and mitigating unfairness are the two main tasks that have motivated the growth of the research field called algorithmic fairness. Several notions used to assess fairness focus on the outcomes and link to sensitive features (e.g., gender and ethnicity) through statistical measures. Although these notions have distinct semantics, the use of such definitions of fairness is criticized as a reductionist understanding of fairness whose aim is essentially to implement accept/not-accept decisions, ignoring other perspectives on inequality and on societal impact. Process fairness, in contrast, is a subjective fairness notion centered on the process that leads to outcomes. To mitigate or remove unfairness, approaches generally apply fairness interventions at specific steps. They usually change either (1) the data before training, (2) the optimization function, or (3) the algorithms' outputs in order to enforce fairer outcomes. Recently, research on algorithmic fairness has been dedicated to exploring combinations of different fairness interventions, referred to in this thesis as fairness hybrid-processing. When mitigating unfairness, a tension between fairness and performance arises, known as the fairness-accuracy trade-off. This thesis focuses on the fairness-accuracy trade-off, since we are interested in reducing unintended biases without compromising classification performance.
We thus propose ensemble-based methods to find a good compromise between fairness and the classification performance of ML models, in particular models for binary classification. These methods produce ensemble classifiers through a combination of fairness interventions, which characterizes fairness hybrid-processing approaches. We introduce FixOut (FaIrness through eXplanations and feature dropOut), a human-centered, model-agnostic framework that improves process fairness without compromising classification performance. It receives a pre-trained classifier (the original model), a dataset, a set of sensitive features, and an explanation method as input, and it outputs a new classifier that is less reliant on the sensitive features. To assess the reliance of a given pre-trained model on sensitive features, FixOut uses explanations to estimate the contribution of features to the model's outcomes. If sensitive features are shown to contribute globally to the model's outcomes, the model is deemed unfair. In this case, FixOut builds a pool of fairer classifiers that are then aggregated to obtain an ensemble classifier. We show the adaptability of FixOut on different combinations of explanation methods and sampling approaches. We also evaluate the effectiveness of FixOut with respect to process fairness, as well as using well-known standard fairness notions from the literature. Furthermore, we propose several improvements, such as automating the choice of FixOut's parameters and extending FixOut to other data types.
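The explain-then-dropout workflow that this abstract outlines can be sketched in plain NumPy. Everything here is a stand-in assumption: a nearest-centroid "classifier" replaces the pre-trained model, the centroid gap replaces the explanation method, and the variable names are illustrative, not the FixOut implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 4 features; feature 0 plays the role of a sensitive feature.
X = rng.normal(size=(200, 4))
y = (X[:, 1] + X[:, 2] + 0.5 * X[:, 0] > 0).astype(int)
sensitive = [0]

def fit_centroids(X, y):
    # Nearest-centroid "classifier": one centroid per class label.
    return {c: X[y == c].mean(axis=0) for c in (0, 1)}

def predict(centroids, X):
    d0 = np.linalg.norm(X - centroids[0], axis=1)
    d1 = np.linalg.norm(X - centroids[1], axis=1)
    return (d1 < d0).astype(int)

# Stand-in "explanation": the absolute gap between the class centroids
# serves as a crude global feature-contribution estimate.
original = fit_centroids(X, y)
contrib = np.abs(original[1] - original[0])

# If a sensitive feature contributes globally, the model is deemed unfair:
# build a pool of classifiers, each trained with one sensitive feature
# dropped, and aggregate their votes into an ensemble prediction.
if any(contrib[j] > 0 for j in sensitive):
    votes = []
    for j in sensitive:
        keep = [k for k in range(X.shape[1]) if k != j]
        reduced = fit_centroids(X[:, keep], y)
        votes.append(predict(reduced, X[:, keep]))
    ensemble_pred = (np.mean(votes, axis=0) >= 0.5).astype(int)
```

In the thesis the classifier and the explanation method are both pluggable inputs; this sketch only mirrors the control flow of assessing global feature contributions and aggregating feature-dropped models.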
Book chapters on the topic "Unfairness mitigation"
Xu, Zikang, Shang Zhao, Quan Quan, Qingsong Yao, and S. Kevin Zhou. "FairAdaBN: Mitigating Unfairness with Adaptive Batch Normalization and Its Application to Dermatological Disease Classification". In Lecture Notes in Computer Science, 307–17. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-43895-0_29.
Yi, Kun, Xisha Jin, Zhengyang Bai, Yuntao Kong, and Qiang Ma. "An Empirical User Study on Congestion-Aware Route Recommendation". In Information and Communication Technologies in Tourism 2024, 325–38. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-58839-6_35.
Chakrobartty, Shuvro, and Omar F. El-Gayar. "Fairness Challenges in Artificial Intelligence". In Encyclopedia of Data Science and Machine Learning, 1685–702. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-7998-9220-5.ch101.
Conference papers on the topic "Unfairness mitigation"
Calegari, Roberta, Gabriel G. Castañé, Michela Milano, and Barry O'Sullivan. "Assessing and Enforcing Fairness in the AI Lifecycle". In Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI-23). California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/735.
Boratto, Ludovico, Francesco Fabbri, Gianni Fenu, Mirko Marras, and Giacomo Medda. "Counterfactual Graph Augmentation for Consumer Unfairness Mitigation in Recommender Systems". In CIKM '23: The 32nd ACM International Conference on Information and Knowledge Management. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3583780.3615165.
Mahmud, Md Sultan, and Md Forkan Uddin. "Unfairness problem in WLANs due to asymmetric co-channel interference and its mitigation". In 2013 16th International Conference on Computer and Information Technology (ICCIT). IEEE, 2014. http://dx.doi.org/10.1109/iccitechn.2014.6997322.
Kim, Dohyung, Sungho Park, Sunhee Hwang, Minsong Ki, Seogkyu Jeon, and Hyeran Byun. "Resampling Strategy for Mitigating Unfairness in Face Attribute Classification". In 2020 International Conference on Information and Communication Technology Convergence (ICTC). IEEE, 2020. http://dx.doi.org/10.1109/ictc49870.2020.9289379.
Li, Tianlin, Zhiming Li, Anran Li, Mengnan Du, Aishan Liu, Qing Guo, Guozhu Meng, and Yang Liu. "Fairness via Group Contribution Matching". In Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI-23). California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/49.
Singhal, Anmol, Preethu Rose Anish, Shirish Karande, and Smita Ghaisas. "Towards Mitigating Perceived Unfairness in Contracts from a Non-Legal Stakeholder's Perspective". In Proceedings of the Natural Legal Language Processing Workshop 2023. Stroudsburg, PA, USA: Association for Computational Linguistics, 2023. http://dx.doi.org/10.18653/v1/2023.nllp-1.11.
Cirino, Fernanda R. P., Carlos D. Maia, Marcelo S. Balbino, and Cristiane N. Nobre. "Proposal of a Method for Identifying Unfairness in Machine Learning Models based on Counterfactual Explanations". In Symposium on Knowledge Discovery, Mining and Learning. Sociedade Brasileira de Computação - SBC, 2023. http://dx.doi.org/10.5753/kdmile.2023.232900.
Tran, Cuong, and Ferdinando Fioretto. "On the Fairness Impacts of Private Ensembles Models". In Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI-23). California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/57.
Touaiti, Balsam, and Delphine Lacaze. "THE ROLE OF EMOTIONAL LABOR AND AUTONOMY IN MITIGATING THE EXHAUSTING EFFECTS OF UNFAIRNESS IN THE TEACHING SECTOR". In 12th annual International Conference of Education, Research and Innovation. IATED, 2019. http://dx.doi.org/10.21125/iceri.2019.1192.
Mitsui, Shu, and Hiroki Nishiyama. "A Bandwidth Allocation Algorithm Mitigating Unfairness Issues in a UAV-Aided Flying Base Station Used for Disaster Recovery". In 2023 IEEE 98th Vehicular Technology Conference (VTC2023-Fall). IEEE, 2023. http://dx.doi.org/10.1109/vtc2023-fall60731.2023.10333709.