A selection of scholarly literature on the topic "Unfairness mitigation"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles


Browse the lists of current journal articles, books, dissertations, conference papers, and other scholarly sources on the topic "Unfairness mitigation".

Next to every work in the list of references you will find an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference for the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a publication in .pdf format and read its abstract online, provided the corresponding details are available in the metadata.

Journal articles on the topic "Unfairness mitigation":

1

Balayn, Agathe, Christoph Lofi, and Geert-Jan Houben. "Managing bias and unfairness in data for decision support: a survey of machine learning and data engineering approaches to identify and mitigate bias and unfairness within data management and analytics systems." VLDB Journal 30, no. 5 (May 5, 2021): 739–68. http://dx.doi.org/10.1007/s00778-021-00671-8.

Abstract:
The increasing use of data-driven decision support systems in industry and governments is accompanied by the discovery of a plethora of bias and unfairness issues in the outputs of these systems. Multiple computer science communities, and especially machine learning, have started to tackle this problem, often developing algorithmic solutions to mitigate biases to obtain fairer outputs. However, one of the core underlying causes for unfairness is bias in training data which is not fully covered by such approaches. Especially, bias in data is not yet a central topic in data engineering and management research. We survey research on bias and unfairness in several computer science domains, distinguishing between data management publications and other domains. This covers the creation of fairness metrics, fairness identification, and mitigation methods, software engineering approaches and biases in crowdsourcing activities. We identify relevant research gaps and show which data management activities could be repurposed to handle biases and which ones might reinforce such biases. In the second part, we argue for a novel data-centered approach overcoming the limitations of current algorithmic-centered methods. This approach focuses on eliciting and enforcing fairness requirements and constraints on data that systems are trained, validated, and used on. We argue for the need to extend database management systems to handle such constraints and mitigation methods. We discuss the associated future research directions regarding algorithms, formalization, modelling, users, and systems.
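The data-centered approach argued for above turns on making fairness requirements explicit as constraints on the data itself. As a toy illustration only (the constraint types, thresholds, and column names below are assumptions, not taken from the survey), a pre-training check on a dataset with a sensitive column might look like this:

```python
import pandas as pd

def check_data_fairness_constraints(df, sensitive, label,
                                    min_group_share=0.2, max_rate_gap=0.1):
    """Toy pre-training check of data-level fairness constraints:
    every sensitive group should be sufficiently represented, and
    positive-label rates should not diverge too much across groups.
    Constraint types and thresholds are illustrative assumptions."""
    shares = df[sensitive].value_counts(normalize=True)   # group representation
    rates = df.groupby(sensitive)[label].mean()           # positive-label rate per group
    gap = float(rates.max() - rates.min())
    return {
        "representation_ok": bool((shares >= min_group_share).all()),
        "label_rate_gap": gap,
        "label_rate_ok": gap <= max_rate_gap,
    }

# Hypothetical example data and column names:
df = pd.DataFrame({"gender": ["f", "f", "m", "m", "m"],
                   "hired":  [1,   0,   1,   1,   0]})
print(check_data_fairness_constraints(df, sensitive="gender", label="hired"))
```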
2

Pagano, Tiago P., Rafael B. Loureiro, Fernanda V. N. Lisboa, Rodrigo M. Peixoto, Guilherme A. S. Guimarães, Gustavo O. R. Cruz, Maira M. Araujo, et al. "Bias and Unfairness in Machine Learning Models: A Systematic Review on Datasets, Tools, Fairness Metrics, and Identification and Mitigation Methods." Big Data and Cognitive Computing 7, no. 1 (January 13, 2023): 15. http://dx.doi.org/10.3390/bdcc7010015.

Abstract:
One of the difficulties of artificial intelligence is to ensure that model decisions are fair and free of bias. In research, datasets, metrics, techniques, and tools are applied to detect and mitigate algorithmic unfairness and bias. This study examines the current knowledge on bias and unfairness in machine learning models. The systematic review followed the PRISMA guidelines and is registered on the OSF platform. The search was carried out between 2021 and early 2022 in the Scopus, IEEE Xplore, Web of Science, and Google Scholar knowledge bases and found 128 articles published between 2017 and 2022, of which 45 were chosen based on search string optimization and inclusion and exclusion criteria. We discovered that the majority of retrieved works focus on bias and unfairness identification and mitigation techniques, offering tools, statistical approaches, important metrics, and datasets typically used for bias experiments. In terms of the primary forms of bias, data, algorithm, and user interaction were addressed in connection with the preprocessing, in-processing, and postprocessing mitigation methods. The use of Equalized Odds, Opportunity Equality, and Demographic Parity as primary fairness metrics emphasizes the crucial role of sensitive attributes in mitigating bias. The 25 datasets chosen span a wide range of areas, including criminal justice, image enhancement, finance, education, product pricing, and health, with the majority including sensitive attributes. In terms of tools, Aequitas is the most often referenced, yet many of the tools were not employed in empirical experiments. A limitation of current research is the lack of multiclass and multimetric studies, which are found in just a few works and constrain the investigation to binary-focused methods. Furthermore, the results indicate that different fairness metrics do not present uniform results for a given use case, and that more research with varied model architectures is necessary to standardize which ones are more appropriate for a given context. We also observed that all research addressed the transparency of the algorithm, or its capacity to explain how decisions are taken.
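To make the fairness metrics named in this review concrete, the following minimal sketch computes Demographic Parity, Equal Opportunity, and Equalized Odds gaps for a binary sensitive attribute; it is an illustration based on the standard textbook definitions, not code from any of the reviewed tools.

```python
import numpy as np

def group_fairness_gaps(y_true, y_pred, sensitive):
    """Gaps for demographic parity, equal opportunity, and equalized odds
    between two groups (sensitive attribute encoded as 0/1, binary labels
    and predictions). Standard definitions, illustrative only."""
    y_true, y_pred, sensitive = map(np.asarray, (y_true, y_pred, sensitive))
    rates = {}
    for g in (0, 1):
        mask = sensitive == g
        pos = y_true[mask] == 1
        neg = y_true[mask] == 0
        rates[g] = {
            "sel": y_pred[mask].mean(),                                # P(Yhat=1 | A=g)
            "tpr": y_pred[mask][pos].mean() if pos.any() else np.nan,  # P(Yhat=1 | Y=1, A=g)
            "fpr": y_pred[mask][neg].mean() if neg.any() else np.nan,  # P(Yhat=1 | Y=0, A=g)
        }
    return {
        "demographic_parity_gap": abs(rates[0]["sel"] - rates[1]["sel"]),
        "equal_opportunity_gap": abs(rates[0]["tpr"] - rates[1]["tpr"]),
        "equalized_odds_gap": max(abs(rates[0]["tpr"] - rates[1]["tpr"]),
                                  abs(rates[0]["fpr"] - rates[1]["fpr"])),
    }
```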
3

Abdullah, Nurhidayah, and Zuhairah Ariff Abd Ghadas. "THE APPLICATION OF GOOD FAITH IN CONTRACTS DURING A FORCE MAJEURE EVENT AND BEYOND WITH SPECIAL REFERENCE TO THE COVID-19 ACT 2020." UUM Journal of Legal Studies 14, no. 1 (January 18, 2023): 141–60. http://dx.doi.org/10.32890/uumjls2023.14.1.6.

Abstract:
Many parties face difficulties in performing contracts due to the economic dislocation since the outbreak of COVID-19. The extraordinary nature of this pandemic situation calls for good faith in contractual settings. The discussion in this paper focuses on the imposition of a force majeure event, which will cause many contracts to be unenforceable. The research method used doctrinal analysis to discuss the force majeure clause in the context of the COVID-19 pandemic and the obligation of good faith in contracts. The paper discusses the COVID-19 pandemic as a force majeure event, arguing for the rise of "good faith" in contract law and the application of "good faith" in contracts as a mitigation for a force majeure event. The paper then presents its conclusion and recommendations. The findings highlight the significance of applying "good faith" in the event of force majeure and beyond as a mitigating factor in alleviating uncertainty and unfairness.
4

Menziwa, Yolanda, Eunice Lebogang Sesale, and Solly Matshonisa Seeletse. "Challenges in research data collection and mitigation interventions." International Journal of Research in Business and Social Science (2147- 4478) 13, no. 2 (April 3, 2024): 336–44. http://dx.doi.org/10.20525/ijrbs.v13i2.3187.

Abstract:
This paper investigated the challenges that researchers in a health sciences university can experience, and ways to counterbalance the negative effects of these challenges. The focus was on the extent to which gatekeepers at higher education institutions (HEIs) can restrict research, and on the way natural sciences researchers often experience gatekeeper bias when being denied access, compared with the way health sciences researchers are treated. The method compared the experiences of researchers pursuing Master of Science (MSc) degrees in selected science subjects with the projects undertaken by health sciences students. All the studies used students on campus as research subjects, and the MSc studies were conducted by students who were already academics teaching on campus. All the proposals received clearance certificates from the same ethics committee. When the HEI registrar was asked to grant permission to use students as study participants, the health sciences researchers were granted permission and given the names of the students. The science academics, by contrast, were denied access to the student numbers, which were needed to ask individual students to decide whether or not they wanted to participate in the studies. Gatekeeping weaknesses were explored, and lawful interventions were used to collect research data. It was observed that in the science academic divisions of HEIs dominated by the health sciences, gatekeeper unfairness and power could offset the creativity and innovation initiated by researchers. Recommendations have been made to limit this power.
5

Rana, Saadia Afzal, Zati Hakim Azizul, and Ali Afzal Awan. "A step toward building a unified framework for managing AI bias." PeerJ Computer Science 9 (October 26, 2023): e1630. http://dx.doi.org/10.7717/peerj-cs.1630.

Abstract:
Integrating artificial intelligence (AI) has transformed living standards. However, AI's progress is being thwarted by concerns about the rise of biases and unfairness. This problem argues strongly for a strategy for tackling potential biases. This article thoroughly evaluates existing knowledge to enhance fairness management, which will serve as a foundation for creating a unified framework to address any bias and its subsequent mitigation method throughout the AI development pipeline. We map the software development life cycle (SDLC), machine learning life cycle (MLLC), and cross-industry standard process for data mining (CRISP-DM) together to give a general understanding of how the phases in these development processes relate to each other. The map should benefit researchers from multiple technical backgrounds. Biases are categorised into three distinct classes: pre-existing, technical, and emergent bias; mitigation strategies into three types: conceptual, empirical, and technical; and fairness management approaches into fairness sampling, learning, and certification. The recommended practices for debiasing and for overcoming the challenges encountered further set directions for successfully establishing a unified framework.
6

Latif, Aadil, Wolfgang Gawlik, and Peter Palensky. "Quantification and Mitigation of Unfairness in Active Power Curtailment of Rooftop Photovoltaic Systems Using Sensitivity Based Coordinated Control." Energies 9, no. 6 (June 4, 2016): 436. http://dx.doi.org/10.3390/en9060436.

7

Yang, Zhenhuan, Yan Lok Ko, Kush R. Varshney, and Yiming Ying. "Minimax AUC Fairness: Efficient Algorithm with Provable Convergence." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 10 (June 26, 2023): 11909–17. http://dx.doi.org/10.1609/aaai.v37i10.26405.

Abstract:
The use of machine learning models in consequential decision making often exacerbates societal inequity, in particular yielding disparate impact on members of marginalized groups defined by race and gender. The area under the ROC curve (AUC) is widely used to evaluate the performance of a scoring function in machine learning, but is studied in algorithmic fairness less than other performance metrics. Due to the pairwise nature of the AUC, defining an AUC-based group fairness metric is pairwise-dependent and may involve both intra-group and inter-group AUCs. Importantly, considering only one category of AUCs is not sufficient to mitigate unfairness in AUC optimization. In this paper, we propose a minimax learning and bias mitigation framework that incorporates both intra-group and inter-group AUCs while maintaining utility. Based on this Rawlsian framework, we design an efficient stochastic optimization algorithm and prove its convergence to the minimum group-level AUC. We conduct numerical experiments on both synthetic and real-world datasets to validate the effectiveness of the minimax framework and the proposed optimization algorithm.
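As a rough illustration of the group-level AUCs this abstract builds on, the sketch below computes the intra-group (a = b) and inter-group (a ≠ b) AUCs from scores, labels, and group membership; only these evaluated quantities are shown, not the authors' stochastic minimax optimization algorithm.

```python
import numpy as np
from itertools import product
from sklearn.metrics import roc_auc_score

def group_level_aucs(scores, labels, groups):
    """Intra-group AUC(a, a) and inter-group AUC(a, b): the probability that
    a positive example from group a is ranked above a negative example from
    group b by the scoring function. Illustrative evaluation code only."""
    scores, labels, groups = map(np.asarray, (scores, labels, groups))
    aucs = {}
    for a, b in product(np.unique(groups), repeat=2):
        pos = scores[(groups == a) & (labels == 1)]   # positives of group a
        neg = scores[(groups == b) & (labels == 0)]   # negatives of group b
        if len(pos) == 0 or len(neg) == 0:
            continue
        y = np.r_[np.ones(len(pos)), np.zeros(len(neg))]
        aucs[(a, b)] = roc_auc_score(y, np.r_[pos, neg])
    return aucs

# A Rawlsian/minimax objective then maximizes the worst of these group-level AUCs:
# max over scoring functions of min over (a, b) of AUC(a, b).
```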
8

Khanam, Taslima. "Rule of law approach to alleviation of poverty: An analysis on human rights dimension of governance." IIUC Studies 15 (September 21, 2020): 23–32. http://dx.doi.org/10.3329/iiucs.v15i0.49342.

Abstract:
A society without the rule of law is similar to a bowl with holes in it: it leaks, and without plugging the leaks, putting more money into it makes no sense. Much the same is happening in the form of poverty mitigation programs. In response, this paper reflects that substantial poverty must be understood as formed by society itself, and argues that many of the world's inhabitants are deprived of the opportunity to attain improved livelihoods and live in dearth because they are not within the shield of the rule of law. They may well be citizens of the nation state in which they live; nevertheless, their chattels and work are vulnerable and far less rewarding than they would otherwise be. To address this unfairness, the paper provides a concise overview of the impact of the rule of law as the basis of opportunity and equity for the people, following an analytical approach with an interdisciplinary aspect. Particular emphasis is placed on the human rights dimension of governance and on legal empowerment for the alleviation of poverty.
9

Qi, Jin. "Mitigating Delays and Unfairness in Appointment Systems." Management Science 63, no. 2 (February 2017): 566–83. http://dx.doi.org/10.1287/mnsc.2015.2353.

10

Lehrieder, Frank, Simon Oechsner, Tobias Hoßfeld, Dirk Staehle, Zoran Despotovic, Wolfgang Kellerer, and Maximilian Michel. "Mitigating unfairness in locality-aware peer-to-peer networks." International Journal of Network Management 21, no. 1 (January 2011): 3–20. http://dx.doi.org/10.1002/nem.772.


Dissertations on the topic "Unfairness mitigation":

1

Yao, Sirui. "Evaluating, Understanding, and Mitigating Unfairness in Recommender Systems." Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/103779.

Abstract:
Recommender systems are information filtering tools that discover potential matchings between users and items and benefit both parties. This benefit can be considered a social resource that should be equitably allocated across users and items, especially in critical domains such as education and employment. Biases and unfairness in recommendations raise both ethical and legal concerns. In this dissertation, we investigate the concept of unfairness in the context of recommender systems. In particular, we study appropriate unfairness evaluation metrics, examine the relation between bias in recommender models and inequality in the underlying population, as well as propose effective unfairness mitigation approaches. We start with exploring the implication of fairness in recommendation and formulating unfairness evaluation metrics. We focus on the task of rating prediction. We identify the insufficiency of demographic parity for scenarios where the target variable is justifiably dependent on demographic features. Then we propose an alternative set of unfairness metrics that are measured based on how much the average predicted ratings deviate from the average true ratings. We also reduce these forms of unfairness in matrix factorization (MF) models by explicitly adding them as penalty terms to the learning objective. Next, we target a form of unfairness in matrix factorization models observed as disparate model performance across user groups. We identify four types of biases in the training data that contribute to higher subpopulation error. Then we propose personalized regularization learning (PRL), which learns personalized regularization parameters that directly address the data biases. PRL poses the hyperparameter search problem as a secondary learning task. It enables back-propagation to learn the personalized regularization parameters by leveraging the closed-form solutions of alternating least squares (ALS) to solve MF. Furthermore, the learned parameters are interpretable and provide insights into how fairness is improved. Third, we conduct a theoretical analysis of the long-term dynamics of inequality in the underlying population, in terms of the fitting between users and items. We view the task of recommendation as solving a set of classification problems through threshold policies. We mathematically formulate the transition dynamics of user-item fit in one step of recommendation. Then we prove that a system with the formulated dynamics always has at least one equilibrium, and we provide sufficient conditions for the equilibrium to be unique. We also show that, depending on the item category relationships and the recommendation policies, recommendations in one item category can reshape the user-item fit in another item category. To summarize, in this research, we examine different fairness criteria in rating prediction and recommendation, study the dynamics of interactions between recommender systems and users, and propose mitigation methods to promote fairness and equality.
Doctor of Philosophy
Recommender systems are information filtering tools that discover potential matchings between users and items. However, a recommender system, if not properly built, may not treat users and items equitably, which raises ethical and legal concerns. In this research, we explore the implication of fairness in the context of recommender systems, study the relation between unfairness in recommender output and inequality in the underlying population, and propose effective unfairness mitigation approaches. We start with finding unfairness metrics appropriate for recommender systems. We focus on the task of rating prediction, which is a crucial step in recommender systems. We propose a set of unfairness metrics measured as the disparity in how much predictions deviate from the ground truth ratings. We also offer a mitigation method to reduce these forms of unfairness in matrix factorization models. Next, we look deeper into the factors that contribute to error-based unfairness in matrix factorization models and identify four types of biases that contribute to higher subpopulation error. Then we propose personalized regularization learning (PRL), which is a mitigation strategy that learns personalized regularization parameters to directly address data biases. The learned per-user regularization parameters are interpretable and provide insight into how fairness is improved. Third, we conduct a theoretical study on the long-term dynamics of the inequality in the fitting (e.g., interest, qualification, etc.) between users and items. We first mathematically formulate the transition dynamics of user-item fit in one step of recommendation. Then we discuss the existence and uniqueness of system equilibrium as the one-step dynamics repeat. We also show that depending on the relation between item categories and the recommendation policies (unconstrained or fair), recommendations in one item category can reshape the user-item fit in another item category. In summary, we examine different fairness criteria in rating prediction and recommendation, study the dynamics of interactions between recommender systems and users, and propose mitigation methods to promote fairness and equality.
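To make the deviation-based unfairness metrics described above concrete, here is a minimal sketch of one simplified two-group variant; the dissertation defines several related metrics, so this is an assumption-laden illustration rather than the exact formulation, and such a quantity can be added as a penalty term to a matrix factorization objective.

```python
import numpy as np

def deviation_unfairness(pred, true, group):
    """Simplified two-group unfairness for rating prediction: the disparity
    between groups in how far average predicted ratings deviate from average
    true ratings. Illustrative variant, not the exact dissertation metric."""
    pred, true, group = map(np.asarray, (pred, true, group))
    dev = {g: pred[group == g].mean() - true[group == g].mean()
           for g in np.unique(group)}            # signed deviation per group
    g0, g1 = sorted(dev)                         # assumes exactly two groups
    return abs(dev[g0] - dev[g1])

# e.g. total_loss = rating_mse + lam * deviation_unfairness(pred, true, group)
```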
2

Alves, da Silva Guilherme. "Traitement hybride pour l'équité algorithmique." Electronic Thesis or Diss., Université de Lorraine, 2022. http://www.theses.fr/2022LORR0323.

Abstract:
Algorithmic decisions are currently being used on a daily basis. These decisions often rely on Machine Learning (ML) algorithms that may produce complex and opaque ML models. Recent studies raised unfairness concerns by revealing discriminating outcomes produced by ML models against minorities and unprivileged groups. As ML models are capable of amplifying discrimination against minorities due to unfair outcomes, this reveals the need for approaches that uncover and remove unintended biases. Assessing fairness and mitigating unfairness are the two main tasks that have motivated the growth of the research field called algorithmic fairness. Several notions used to assess fairness focus on the outcomes and link to sensitive features (e.g. gender and ethnicity) through statistical measures. Although these notions have distinct semantics, the use of these definitions of fairness is criticized for being a reductionist understanding of fairness whose aim is basically to implement accept/not-accept reports, ignoring other perspectives on inequality and on societal impact. Process fairness instead is a subjective fairness notion which is centered on the process that leads to outcomes. To mitigate or remove unfairness, approaches generally apply fairness interventions in specific steps. They usually change either (1) the data before training, (2) the optimization function, or (3) the algorithms' outputs in order to enforce fairer outcomes. Recently, research on algorithmic fairness has been dedicated to exploring combinations of different fairness interventions, which is referred to in this thesis as fairness hybrid-processing. Once we try to mitigate unfairness, a tension between fairness and performance arises that is known as the fairness-accuracy trade-off. This thesis focuses on the fairness-accuracy trade-off problem, since we are interested in reducing unintended biases without compromising classification performance. We thus propose ensemble-based methods to find a good compromise between fairness and classification performance of ML models, in particular models for binary classification. In addition, these methods produce ensemble classifiers thanks to a combination of fairness interventions, which characterizes the fairness hybrid-processing approaches. We introduce FixOut (FaIrness through eXplanations and feature dropOut), a human-centered, model-agnostic framework that improves process fairness without compromising classification performance. It receives a pre-trained classifier (original model), a dataset, a set of sensitive features, and an explanation method as input, and it outputs a new classifier that is less reliant on the sensitive features. To assess the reliance of a given pre-trained model on sensitive features, FixOut uses explanations to estimate the contribution of features to models' outcomes. If sensitive features are shown to contribute globally to models' outcomes, then the model is deemed unfair. In this case, it builds a pool of fairer classifiers that are then aggregated to obtain an ensemble classifier. We show the adaptability of FixOut on different combinations of explanation methods and sampling approaches. We also evaluate the effectiveness of FixOut with respect to process fairness, but also using well-known standard fairness notions available in the literature. Furthermore, we propose several improvements, such as automating the choice of FixOut's parameters and extending FixOut to other data types.
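The FixOut workflow described above can be sketched roughly as follows. The sketch uses scikit-learn permutation importance as a stand-in for the explanation method and integer column indices for the sensitive features; both are simplifying assumptions, and the actual framework is explainer-agnostic and differs in detail.

```python
import numpy as np
from sklearn.base import clone
from sklearn.inspection import permutation_importance

def fixout_like_ensemble(model, X, y, sensitive_cols):
    """Rough sketch of the workflow: estimate the global contribution of
    sensitive features (permutation importance stands in for the explanation
    method), and if they contribute, train a pool of classifiers with
    sensitive features dropped and average their outputs.
    `model` is assumed pre-trained; `sensitive_cols` are column indices."""
    imp = permutation_importance(model, X, y, n_repeats=5, random_state=0).importances_mean
    if all(imp[c] <= 0 for c in sensitive_cols):
        return lambda X_new: model.predict_proba(X_new)   # not reliant on sensitive features
    pool = []
    for c in sensitive_cols:                              # one classifier per dropped feature
        keep = [j for j in range(X.shape[1]) if j != c]
        pool.append((clone(model).fit(X[:, keep], y), keep))
    return lambda X_new: np.mean(                         # aggregate by averaging probabilities
        [m.predict_proba(np.asarray(X_new)[:, keep]) for m, keep in pool], axis=0)
```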

Book chapters on the topic "Unfairness mitigation":

1

Xu, Zikang, Shang Zhao, Quan Quan, Qingsong Yao, and S. Kevin Zhou. "FairAdaBN: Mitigating Unfairness with Adaptive Batch Normalization and Its Application to Dermatological Disease Classification." In Lecture Notes in Computer Science, 307–17. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-43895-0_29.

2

Yi, Kun, Xisha Jin, Zhengyang Bai, Yuntao Kong, and Qiang Ma. "An Empirical User Study on Congestion-Aware Route Recommendation." In Information and Communication Technologies in Tourism 2024, 325–38. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-58839-6_35.

Abstract:
Overtourism has become a significant concern in many popular travel destinations around the world. As one promising approach to handling overtourism issues, congestion-aware methods can be effective in mitigating overcrowding at popular attractions by spreading tourists to less-visited areas. However, they may lead to a potential Hawk-Dove game: among tourists who share the same preference, some may be assigned worse routes than others to avoid congestion, which raises the possibility that the tourists assigned to relatively unfavorable routes may feel dissatisfaction and unfairness. Most existing research focuses on alleviating congestion from an overall planner perspective through simulation studies, with little emphasis on actual user experience. In this study, we conducted a user survey on congestion-aware route recommendation in Kyoto, Japan, aiming to investigate the evaluation of congestion-aware route recommendation methods from each tourist's personal perspective and to clarify the development status and future research directions of congestion-aware route recommendation methods. We chose five congestion-aware route recommendation methods that vary in their consideration of congestion and multi-agent interactions. We reveal the strengths and weaknesses of these methods from multiple aspects. We cluster the respondents based on their text responses and explore the differences between these clusters. Furthermore, we investigate the factors affecting tourists' experience and compare the differences among groups of tourists.
3

Chakrobartty, Shuvro, and Omar F. El-Gayar. "Fairness Challenges in Artificial Intelligence." In Encyclopedia of Data Science and Machine Learning, 1685–702. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-7998-9220-5.ch101.

Abstract:
Fairness is a highly desirable human value in day-to-day decisions that affect human life. In recent years many successful applications of AI systems have been developed, and increasingly, AI methods are becoming part of many new applications for decision-making tasks that were previously carried out by human beings. Questions have been raised: 1) Can the decision be trusted? 2) Is it fair? Overall, are the AI-based systems making fair decisions, or are they increasing the unfairness in society? This article presents a systematic literature review (SLR) of existing works on AI fairness challenges. Towards this end, a conceptual bias mitigation framework for organizing and discussing AI fairness-related research is developed and presented. The systematic review provides a mapping of the AI fairness challenges to components of a proposed framework based on the suggested solutions within the literature. Future research opportunities are also identified.

Conference papers on the topic "Unfairness mitigation":

1

Calegari, Roberta, Gabriel G. Castañé, Michela Milano, and Barry O'Sullivan. "Assessing and Enforcing Fairness in the AI Lifecycle." In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/735.

Abstract:
A significant challenge in detecting and mitigating bias is creating a mindset amongst AI developers to address unfairness. The current literature on fairness is broad, and the learning curve for distinguishing where to use existing metrics and techniques for bias detection or mitigation is steep. This survey systematises the state of the art on distinct notions of fairness and the related techniques for bias mitigation according to the AI lifecycle. Gaps and challenges identified during the development of this work are also discussed.
2

Boratto, Ludovico, Francesco Fabbri, Gianni Fenu, Mirko Marras, and Giacomo Medda. "Counterfactual Graph Augmentation for Consumer Unfairness Mitigation in Recommender Systems." In CIKM '23: The 32nd ACM International Conference on Information and Knowledge Management. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3583780.3615165.

3

Mahmud, Md Sultan, and Md Forkan Uddin. "Unfairness problem in WLANs due to asymmetric co-channel interference and its mitigation." In 2013 16th International Conference on Computer and Information Technology (ICCIT). IEEE, 2014. http://dx.doi.org/10.1109/iccitechn.2014.6997322.

4

Kim, Dohyung, Sungho Park, Sunhee Hwang, Minsong Ki, Seogkyu Jeon, and Hyeran Byun. "Resampling Strategy for Mitigating Unfairness in Face Attribute Classification." In 2020 International Conference on Information and Communication Technology Convergence (ICTC). IEEE, 2020. http://dx.doi.org/10.1109/ictc49870.2020.9289379.

5

Li, Tianlin, Zhiming Li, Anran Li, Mengnan Du, Aishan Liu, Qing Guo, Guozhu Meng, and Yang Liu. "Fairness via Group Contribution Matching." In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/49.

Abstract:
Fairness issues in Deep Learning models have recently received increasing attention due to their significant societal impact. Although methods for mitigating unfairness are constantly proposed, little research has been conducted to understand how discrimination and bias develop during the standard training process. In this study, we propose analyzing the contribution of each subgroup (i.e., a group of data with the same sensitive attribute) in the training process to understand the cause of such bias development process. We propose a gradient-based metric to assess training subgroup contribution disparity, showing that unequal contributions from different subgroups are one source of such unfairness. One way to balance the contribution of each subgroup is through oversampling, which ensures that an equal number of samples are drawn from each subgroup during each training iteration. However, we have found that even with a balanced number of samples, the contribution of each group remains unequal, resulting in unfairness under the oversampling strategy. To address the above issues, we propose an easy but effective group contribution matching (GCM) method to match the contribution of each subgroup. Our experiments show that our GCM effectively improves fairness and outperforms other methods significantly.
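A rough sketch of the gradient-based view of subgroup contributions described in this abstract is given below; the exact GCM metric and matching loss from the paper are not reproduced, only the general idea of comparing per-subgroup loss gradients (PyTorch is assumed purely for illustration).

```python
import torch

def subgroup_gradient_disparity(model, loss_fn, X, y, group):
    """Compare per-subgroup contributions to training via the gradient of each
    subgroup's loss w.r.t. the model parameters, and report how unequal the
    contribution magnitudes are. Illustrative only; not the paper's exact
    GCM metric or matching loss."""
    norms = {}
    for g in torch.unique(group):
        model.zero_grad()
        mask = group == g
        loss = loss_fn(model(X[mask]), y[mask])   # this subgroup's loss
        loss.backward()
        grad = torch.cat([p.grad.detach().flatten()
                          for p in model.parameters() if p.grad is not None])
        norms[int(g)] = grad.norm().item()        # size of this subgroup's contribution
    disparity = max(norms.values()) - min(norms.values())
    return norms, disparity
```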
6

Singhal, Anmol, Preethu Rose Anish, Shirish Karande, and Smita Ghaisas. "Towards Mitigating Perceived Unfairness in Contracts from a Non-Legal Stakeholder’s Perspective." In Proceedings of the Natural Legal Language Processing Workshop 2023. Stroudsburg, PA, USA: Association for Computational Linguistics, 2023. http://dx.doi.org/10.18653/v1/2023.nllp-1.11.

7

Cirino, Fernanda R. P., Carlos D. Maia, Marcelo S. Balbino, and Cristiane N. Nobre. "Proposal of a Method for Identifying Unfairness in Machine Learning Models based on Counterfactual Explanations." In Symposium on Knowledge Discovery, Mining and Learning. Sociedade Brasileira de Computação - SBC, 2023. http://dx.doi.org/10.5753/kdmile.2023.232900.

Abstract:
As machine learning models continue impacting diverse areas of society, the need to ensure fairness in decision-making becomes increasingly vital. Unfair outcomes resulting from biased data can have profound societal implications. This work proposes a method for identifying unfairness and mitigating biases in machine learning models based on counterfactual explanations. By analyzing the model’s equity implications after training, we provide insight into the potential of the method proposed to address equity issues. The findings of this study contribute to advancing the understanding of fairness assessment techniques, emphasizing the importance of post-training counterfactual approaches in ensuring fair decision-making processes in machine learning models.
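A heavily simplified probe in the spirit of this counterfactual-based approach is sketched below: it flips only the sensitive attribute of each instance and measures how often the model's decision changes. The paper works with full counterfactual explanations, so this is an illustration of the underlying idea, not the proposed method.

```python
import numpy as np

def sensitive_flip_rate(predict, X, sensitive_col, values=(0, 1)):
    """Flip only the (binary) sensitive attribute of each instance and measure
    how often the model's decision changes. `predict` is any function mapping
    a feature matrix to predicted labels; illustrative probe only."""
    X = np.asarray(X, dtype=float)
    flipped = X.copy()
    flipped[:, sensitive_col] = np.where(X[:, sensitive_col] == values[0],
                                         values[1], values[0])
    changed = predict(X) != predict(flipped)
    return float(np.mean(changed))   # fraction of decisions that flip with the attribute
```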
8

Tran, Cuong, and Ferdinando Fioretto. "On the Fairness Impacts of Private Ensembles Models." In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/57.

Abstract:
The Private Aggregation of Teacher Ensembles (PATE) is a machine learning framework that enables the creation of private models through the combination of multiple "teacher" models and a "student" model. The student model learns to predict an output based on the voting of the teachers, and the resulting model satisfies differential privacy. PATE has been shown to be effective in creating private models in semi-supervised settings or when protecting data labels is a priority. This paper explores whether the use of PATE can result in unfairness, and demonstrates that it can lead to accuracy disparities among groups of individuals. The paper also analyzes the algorithmic and data properties that contribute to these disproportionate impacts, why these aspects are affecting different groups disproportionately, and offers recommendations for mitigating these effects.
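For readers unfamiliar with PATE's vote aggregation summarized above, a minimal sketch of the core mechanism is shown below; real PATE variants (such as confident-GNMax) use more refined noise and selection schemes, so this is a simplified illustration rather than the exact mechanism analyzed in the paper.

```python
import numpy as np

def noisy_teacher_vote(teacher_preds, n_classes, eps=1.0, seed=None):
    """Simplified core of PATE's aggregation: count the teachers' votes per
    class, add Laplace noise of scale 1/eps, and return the noisy argmax as
    the label the student model learns from."""
    rng = np.random.default_rng(seed)
    votes = np.bincount(np.asarray(teacher_preds), minlength=n_classes)
    noisy_votes = votes + rng.laplace(scale=1.0 / eps, size=n_classes)
    return int(np.argmax(noisy_votes))

# e.g. noisy_teacher_vote([1, 1, 0, 1, 2], n_classes=3, eps=0.5)
```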
9

Touaiti, Balsam, and Delphine Lacaze. "THE ROLE OF EMOTIONAL LABOR AND AUTONOMY IN MITIGATING THE EXHAUSTING EFFECTS OF UNFAIRNESS IN THE TEACHING SECTOR." In 12th annual International Conference of Education, Research and Innovation. IATED, 2019. http://dx.doi.org/10.21125/iceri.2019.1192.

10

Mitsui, Shu, and Hiroki Nishiyama. "A Bandwidth Allocation Algorithm Mitigating Unfairness Issues in a UAV-Aided Flying Base Station Used for Disaster Recovery." In 2023 IEEE 98th Vehicular Technology Conference (VTC2023-Fall). IEEE, 2023. http://dx.doi.org/10.1109/vtc2023-fall60731.2023.10333709.

