A selection of scholarly literature on the topic "Unfairness mitigation"
Format your source in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Unfairness mitigation".
Next to each work in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication in .pdf format and read its abstract online, when such details are available in the metadata.
Journal articles on the topic "Unfairness mitigation":
Balayn, Agathe, Christoph Lofi, and Geert-Jan Houben. "Managing bias and unfairness in data for decision support: a survey of machine learning and data engineering approaches to identify and mitigate bias and unfairness within data management and analytics systems." VLDB Journal 30, no. 5 (May 5, 2021): 739–68. http://dx.doi.org/10.1007/s00778-021-00671-8.
Pagano, Tiago P., Rafael B. Loureiro, Fernanda V. N. Lisboa, Rodrigo M. Peixoto, Guilherme A. S. Guimarães, Gustavo O. R. Cruz, Maira M. Araujo, et al. "Bias and Unfairness in Machine Learning Models: A Systematic Review on Datasets, Tools, Fairness Metrics, and Identification and Mitigation Methods." Big Data and Cognitive Computing 7, no. 1 (January 13, 2023): 15. http://dx.doi.org/10.3390/bdcc7010015.
Abdullah, Nurhidayah, and Zuhairah Ariff Abd Ghadas. "THE APPLICATION OF GOOD FAITH IN CONTRACTS DURING A FORCE MAJEURE EVENT AND BEYOND WITH SPECIAL REFERENCE TO THE COVID-19 ACT 2020." UUM Journal of Legal Studies 14, no. 1 (January 18, 2023): 141–60. http://dx.doi.org/10.32890/uumjls2023.14.1.6.
Menziwa, Yolanda, Eunice Lebogang Sesale, and Solly Matshonisa Seeletse. "Challenges in research data collection and mitigation interventions." International Journal of Research in Business and Social Science (2147-4478) 13, no. 2 (April 3, 2024): 336–44. http://dx.doi.org/10.20525/ijrbs.v13i2.3187.
Rana, Saadia Afzal, Zati Hakim Azizul, and Ali Afzal Awan. "A step toward building a unified framework for managing AI bias." PeerJ Computer Science 9 (October 26, 2023): e1630. http://dx.doi.org/10.7717/peerj-cs.1630.
Latif, Aadil, Wolfgang Gawlik, and Peter Palensky. "Quantification and Mitigation of Unfairness in Active Power Curtailment of Rooftop Photovoltaic Systems Using Sensitivity Based Coordinated Control." Energies 9, no. 6 (June 4, 2016): 436. http://dx.doi.org/10.3390/en9060436.
Yang, Zhenhuan, Yan Lok Ko, Kush R. Varshney, and Yiming Ying. "Minimax AUC Fairness: Efficient Algorithm with Provable Convergence." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 10 (June 26, 2023): 11909–17. http://dx.doi.org/10.1609/aaai.v37i10.26405.
Khanam, Taslima. "Rule of law approach to alleviation of poverty: An analysis on human rights dimension of governance." IIUC Studies 15 (September 21, 2020): 23–32. http://dx.doi.org/10.3329/iiucs.v15i0.49342.
Qi, Jin. "Mitigating Delays and Unfairness in Appointment Systems." Management Science 63, no. 2 (February 2017): 566–83. http://dx.doi.org/10.1287/mnsc.2015.2353.
Lehrieder, Frank, Simon Oechsner, Tobias Hoßfeld, Dirk Staehle, Zoran Despotovic, Wolfgang Kellerer, and Maximilian Michel. "Mitigating unfairness in locality-aware peer-to-peer networks." International Journal of Network Management 21, no. 1 (January 2011): 3–20. http://dx.doi.org/10.1002/nem.772.
Dissertations and theses on the topic "Unfairness mitigation":
Yao, Sirui. "Evaluating, Understanding, and Mitigating Unfairness in Recommender Systems." Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/103779.
Doctor of Philosophy
Recommender systems are information filtering tools that discover potential matches between users and items. However, a recommender system, if not properly built, may not treat users and items equitably, which raises ethical and legal concerns. In this research, we explore the implications of fairness in the context of recommender systems, study the relation between unfairness in recommender output and inequality in the underlying population, and propose effective unfairness mitigation approaches. We start with finding unfairness metrics appropriate for recommender systems. We focus on the task of rating prediction, which is a crucial step in recommender systems. We propose a set of unfairness metrics measured as the disparity in how much predictions deviate from the ground truth ratings. We also offer a mitigation method to reduce these forms of unfairness in matrix factorization models. Next, we look deeper into the factors that contribute to error-based unfairness in matrix factorization models and identify four types of biases that contribute to higher subpopulation error. Then we propose personalized regularization learning (PRL), a mitigation strategy that learns personalized regularization parameters to directly address data biases. The learned per-user regularization parameters are interpretable and provide insight into how fairness is improved. Third, we conduct a theoretical study on the long-term dynamics of the inequality in the fit (e.g., interest, qualification, etc.) between users and items. We first mathematically formulate the transition dynamics of user-item fit in one step of recommendation. Then we discuss the existence and uniqueness of system equilibrium as the one-step dynamics repeat. We also show that, depending on the relation between item categories and the recommendation policies (unconstrained or fair), recommendations in one item category can reshape the user-item fit in another item category.
In summary, we examine different fairness criteria in rating prediction and recommendation, study the dynamics of interactions between recommender systems and users, and propose mitigation methods to promote fairness and equality.
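The error-disparity idea described in the abstract above can be sketched in a few lines. This is a hypothetical illustration, not the thesis code: unfairness is taken to be the gap, across user groups, in how far rating predictions deviate from the ground truth; the function name and the example data are made up.

```python
import numpy as np

# A hypothetical illustration of an error-based unfairness metric:
# disparity between user groups in mean absolute rating-prediction error.
def value_unfairness_gap(y_true, y_pred, groups):
    """Largest gap between per-group mean absolute prediction errors."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    per_group = [np.mean(np.abs(y_pred[groups == g] - y_true[groups == g]))
                 for g in np.unique(groups)]
    return max(per_group) - min(per_group)

# Made-up example: group 1's ratings are predicted far less accurately.
y_true = [4.0, 3.0, 5.0, 2.0, 4.0, 3.0]
y_pred = [3.8, 3.1, 4.9, 3.0, 2.9, 4.2]
groups = [0, 0, 0, 1, 1, 1]
print(round(value_unfairness_gap(y_true, y_pred, groups), 3))  # → 0.967
```

A mitigation method of the kind the abstract mentions would add a penalty proportional to such a disparity to the matrix factorization training objective.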
Alves, da Silva Guilherme. "Traitement hybride pour l'équité algorithmique." Electronic Thesis or Diss., Université de Lorraine, 2022. http://www.theses.fr/2022LORR0323.
Algorithmic decisions are currently being used on a daily basis. These decisions often rely on Machine Learning (ML) algorithms that may produce complex and opaque ML models. Recent studies raised unfairness concerns by revealing discriminatory outcomes produced by ML models against minorities and unprivileged groups. As ML models are capable of amplifying discrimination against minorities due to unfair outcomes, this reveals the need for approaches that uncover and remove unintended biases. Assessing fairness and mitigating unfairness are the two main tasks that have motivated the growth of the research field called "algorithmic fairness". Several notions used to assess fairness focus on the outcomes and link them to sensitive features (e.g. gender and ethnicity) through statistical measures. Although these notions have distinct semantics, the use of these definitions of fairness is criticized as a reductionist understanding of fairness whose aim is basically to produce accept/not-accept reports, ignoring other perspectives on inequality and on societal impact. Process fairness, instead, is a subjective fairness notion centered on the process that leads to outcomes. To mitigate or remove unfairness, approaches generally apply fairness interventions at specific steps. They usually change either (1) the data before training, (2) the optimization function, or (3) the algorithms' outputs in order to enforce fairer outcomes. Recently, research on algorithmic fairness has been dedicated to exploring combinations of different fairness interventions, which is referred to in this thesis as "fairness hybrid-processing". Once we try to mitigate unfairness, a tension between fairness and performance arises, known as the fairness-accuracy trade-off. This thesis focuses on the fairness-accuracy trade-off problem, since we are interested in reducing unintended biases without compromising classification performance.
We thus propose ensemble-based methods to find a good compromise between fairness and classification performance of ML models, in particular models for binary classification. In addition, these methods produce ensemble classifiers through a combination of fairness interventions, which characterizes the fairness hybrid-processing approaches. We introduce FixOut (FaIrness through eXplanations and feature dropOut), a human-centered, model-agnostic framework that improves process fairness without compromising classification performance. It receives a pre-trained classifier (original model), a dataset, a set of sensitive features, and an explanation method as input, and it outputs a new classifier that is less reliant on the sensitive features. To assess the reliance of a given pre-trained model on sensitive features, FixOut uses explanations to estimate the contribution of features to the model's outcomes. If sensitive features are shown to contribute globally to the model's outcomes, then the model is deemed unfair. In this case, it builds a pool of fairer classifiers that are then aggregated to obtain an ensemble classifier. We show the adaptability of FixOut on different combinations of explanation methods and sampling approaches. We also evaluate the effectiveness of FixOut with respect to process fairness, as well as using well-known standard fairness notions available in the literature. Furthermore, we propose several improvements, such as automating the choice of FixOut's parameters and extending FixOut to other data types.
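The explain-then-dropout pipeline described above can be sketched under loose assumptions. This is not the authors' implementation: the explanation method is replaced by a crude proxy (the magnitude of learned weights), the base learner is a thresholded least-squares fit, and the data, the choice of sensitive columns, and all names are hypothetical.

```python
import numpy as np

# Toy data: column 3 plays the role of a sensitive feature that leaks
# into the label; columns 2 and 3 are both declared sensitive.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 4))
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(float)
sensitive = [2, 3]

def fit(X, y):
    # Least-squares "classifier": feature weights plus an intercept.
    w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
    return w

def score(w, X):
    return np.c_[X, np.ones(len(X))] @ w

# 1) Global contribution proxy per feature (stand-in for explanations).
w_full = fit(X, y)
contrib = np.abs(w_full[:-1])

# 2) If a sensitive feature contributes strongly, build a pool of models,
#    each trained with some subset of the sensitive columns zeroed out.
members = []
if max(contrib[sensitive]) > np.median(contrib):
    for cols in [[j] for j in sensitive] + [sensitive]:
        Xd = X.copy()
        Xd[:, cols] = 0.0
        members.append(fit(Xd, y))
else:
    members.append(w_full)  # deemed fair enough: keep the original model

# 3) Aggregate the pool by averaging scores, then threshold at 0.5.
scores = np.mean([score(w, X) for w in members], axis=0)
acc = float(np.mean((scores > 0.5) == y))
print(len(members), round(acc, 2))
```

As in the framework described above, the ensemble reduces, rather than eliminates, reliance on sensitive features: individual pool members may still see some sensitive columns, but their averaged contribution shrinks.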
Book chapters on the topic "Unfairness mitigation":
Xu, Zikang, Shang Zhao, Quan Quan, Qingsong Yao, and S. Kevin Zhou. "FairAdaBN: Mitigating Unfairness with Adaptive Batch Normalization and Its Application to Dermatological Disease Classification." In Lecture Notes in Computer Science, 307–17. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-43895-0_29.
Yi, Kun, Xisha Jin, Zhengyang Bai, Yuntao Kong, and Qiang Ma. "An Empirical User Study on Congestion-Aware Route Recommendation." In Information and Communication Technologies in Tourism 2024, 325–38. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-58839-6_35.
Chakrobartty, Shuvro, and Omar F. El-Gayar. "Fairness Challenges in Artificial Intelligence." In Encyclopedia of Data Science and Machine Learning, 1685–702. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-7998-9220-5.ch101.
Conference papers on the topic "Unfairness mitigation":
Calegari, Roberta, Gabriel G. Castañé, Michela Milano, and Barry O'Sullivan. "Assessing and Enforcing Fairness in the AI Lifecycle." In Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI-23). California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/735.
Boratto, Ludovico, Francesco Fabbri, Gianni Fenu, Mirko Marras, and Giacomo Medda. "Counterfactual Graph Augmentation for Consumer Unfairness Mitigation in Recommender Systems." In CIKM '23: The 32nd ACM International Conference on Information and Knowledge Management. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3583780.3615165.
Mahmud, Md Sultan, and Md Forkan Uddin. "Unfairness problem in WLANs due to asymmetric co-channel interference and its mitigation." In 2013 16th International Conference on Computer and Information Technology (ICCIT). IEEE, 2014. http://dx.doi.org/10.1109/iccitechn.2014.6997322.
Kim, Dohyung, Sungho Park, Sunhee Hwang, Minsong Ki, Seogkyu Jeon, and Hyeran Byun. "Resampling Strategy for Mitigating Unfairness in Face Attribute Classification." In 2020 International Conference on Information and Communication Technology Convergence (ICTC). IEEE, 2020. http://dx.doi.org/10.1109/ictc49870.2020.9289379.
Li, Tianlin, Zhiming Li, Anran Li, Mengnan Du, Aishan Liu, Qing Guo, Guozhu Meng, and Yang Liu. "Fairness via Group Contribution Matching." In Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI-23). California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/49.
Singhal, Anmol, Preethu Rose Anish, Shirish Karande, and Smita Ghaisas. "Towards Mitigating Perceived Unfairness in Contracts from a Non-Legal Stakeholder’s Perspective." In Proceedings of the Natural Legal Language Processing Workshop 2023. Stroudsburg, PA, USA: Association for Computational Linguistics, 2023. http://dx.doi.org/10.18653/v1/2023.nllp-1.11.
Cirino, Fernanda R. P., Carlos D. Maia, Marcelo S. Balbino, and Cristiane N. Nobre. "Proposal of a Method for Identifying Unfairness in Machine Learning Models based on Counterfactual Explanations." In Symposium on Knowledge Discovery, Mining and Learning. Sociedade Brasileira de Computação - SBC, 2023. http://dx.doi.org/10.5753/kdmile.2023.232900.
Tran, Cuong, and Ferdinando Fioretto. "On the Fairness Impacts of Private Ensembles Models." In Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI-23). California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/57.
Touaiti, Balsam, and Delphine Lacaze. "THE ROLE OF EMOTIONAL LABOR AND AUTONOMY IN MITIGATING THE EXHAUSTING EFFECTS OF UNFAIRNESS IN THE TEACHING SECTOR." In 12th annual International Conference of Education, Research and Innovation. IATED, 2019. http://dx.doi.org/10.21125/iceri.2019.1192.
Mitsui, Shu, and Hiroki Nishiyama. "A Bandwidth Allocation Algorithm Mitigating Unfairness Issues in a UAV-Aided Flying Base Station Used for Disaster Recovery." In 2023 IEEE 98th Vehicular Technology Conference (VTC2023-Fall). IEEE, 2023. http://dx.doi.org/10.1109/vtc2023-fall60731.2023.10333709.