To view other types of publications on this topic, follow the link: ML fairness.

Journal articles on the topic "ML fairness"

Browse the top 50 journal articles for research on the topic "ML fairness".

Next to every work in the list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online, if the corresponding details are available in the metadata.

Browse journal articles across many disciplines and compile your bibliography correctly.

1

Weinberg, Lindsay. "Rethinking Fairness: An Interdisciplinary Survey of Critiques of Hegemonic ML Fairness Approaches." Journal of Artificial Intelligence Research 74 (May 6, 2022): 75–109. http://dx.doi.org/10.1613/jair.1.13196.

Abstract:
This survey article assesses and compares existing critiques of current fairness-enhancing technical interventions in machine learning (ML) that draw from a range of non-computing disciplines, including philosophy, feminist studies, critical race and ethnic studies, legal studies, anthropology, and science and technology studies. It bridges epistemic divides in order to offer an interdisciplinary understanding of the possibilities and limits of hegemonic computational approaches to ML fairness for producing just outcomes for society’s most marginalized. The article is organized according to nine major themes of critique wherein these different fields intersect: 1) how "fairness" in AI fairness research gets defined; 2) how problems for AI systems to address get formulated; 3) the impacts of abstraction on how AI tools function and its propensity to lead to technological solutionism; 4) how racial classification operates within AI fairness research; 5) the use of AI fairness measures to avoid regulation and engage in ethics washing; 6) an absence of participatory design and democratic deliberation in AI fairness considerations; 7) data collection practices that entrench “bias,” are non-consensual, and lack transparency; 8) the predatory inclusion of marginalized groups into AI systems; and 9) a lack of engagement with AI’s long-term social and ethical outcomes. Drawing from these critiques, the article concludes by imagining future ML fairness research directions that actively disrupt entrenched power dynamics and structural injustices in society.
2

Bærøe, Kristine, Torbjørn Gundersen, Edmund Henden, and Kjetil Rommetveit. "Can medical algorithms be fair? Three ethical quandaries and one dilemma." BMJ Health & Care Informatics 29, no. 1 (April 2022): e100445. http://dx.doi.org/10.1136/bmjhci-2021-100445.

Abstract:
Objective: To demonstrate what it takes to reconcile the idea of fairness in medical algorithms and machine learning (ML) with the broader discourse of fairness and health equality in health research. Method: The methodological approach used in this paper is theoretical and ethical analysis. Result: We show that the question of ensuring comprehensive ML fairness is interrelated to three quandaries and one dilemma. Discussion: As fairness in ML depends on a nexus of inherent justice and fairness concerns embedded in health research, a comprehensive conceptualisation is called for to make the notion useful. Conclusion: This paper demonstrates that more analytical work is needed to conceptualise fairness in ML so it adequately reflects the complexity of justice and fairness concerns within the field of health research.
3

Li, Yanjun, Huan Huang, Qiang Geng, Xinwei Guo, and Yuyu Yuan. "Fairness Measures of Machine Learning Models in Judicial Penalty Prediction." 網際網路技術學刊 23, no. 5 (September 2022): 1109–16. http://dx.doi.org/10.53106/160792642022092305019.

Abstract:
Machine learning (ML) has been widely adopted in many software applications across domains. However, accompanying the outstanding performance, the behaviors of the ML models, which are essentially a kind of black-box software, could be unfair and hard to understand in many cases. In our human-centered society, an unfair decision could potentially damage human value, even causing severe social consequences, especially in decision-critical scenarios such as legal judgment. Although some existing works investigated the ML models in terms of robustness, accuracy, security, privacy, quality, etc., the study on the fairness of ML is still in the early stage. In this paper, we first proposed a set of fairness metrics for ML models from different perspectives. Based on this, we performed a comparative study on the fairness of existing widely used classic ML and deep learning models in the domain of real-world judicial judgments. The experiment results reveal that the current state-of-the-art ML models could still raise concerns for unfair decision-making. ML models with both high accuracy and fairness are urgently needed.
4

Ghosh, Bishwamittra, Debabrota Basu, and Kuldeep S. Meel. "Algorithmic Fairness Verification with Graphical Models." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 9 (June 28, 2022): 9539–48. http://dx.doi.org/10.1609/aaai.v36i9.21187.

Abstract:
In recent years, machine learning (ML) algorithms have been deployed in safety-critical and high-stake decision-making, where the fairness of algorithms is of paramount importance. Fairness in ML centers on detecting bias towards certain demographic populations induced by an ML classifier and proposes algorithmic solutions to mitigate the bias with respect to different fairness definitions. To this end, several fairness verifiers have been proposed that compute the bias in the prediction of an ML classifier—essentially beyond a finite dataset—given the probability distribution of input features. In the context of verifying linear classifiers, existing fairness verifiers are limited by accuracy due to imprecise modeling of correlations among features and scalability due to restrictive formulations of the classifiers as SSAT/SMT formulas or by sampling. In this paper, we propose an efficient fairness verifier, called FVGM, that encodes the correlations among features as a Bayesian network. In contrast to existing verifiers, FVGM proposes a stochastic subset-sum based approach for verifying linear classifiers. Experimentally, we show that FVGM leads to an accurate and scalable assessment for more diverse families of fairness-enhancing algorithms, fairness attacks, and group/causal fairness metrics than the state-of-the-art fairness verifiers. We also demonstrate that FVGM facilitates the computation of fairness influence functions as a stepping stone to detect the source of bias induced by subsets of features.
5

Kuzucu, Selim, Jiaee Cheong, Hatice Gunes, and Sinan Kalkan. "Uncertainty as a Fairness Measure." Journal of Artificial Intelligence Research 81 (October 13, 2024): 307–35. http://dx.doi.org/10.1613/jair.1.16041.

Abstract:
Unfair predictions of machine learning (ML) models impede their broad acceptance in real-world settings. Tackling this arduous challenge first necessitates defining what it means for an ML model to be fair. This has been addressed by the ML community with various measures of fairness that depend on the prediction outcomes of the ML models, either at the group-level or the individual-level. These fairness measures are limited in that they utilize point predictions, neglecting their variances, or uncertainties, making them susceptible to noise, missingness and shifts in data. In this paper, we first show that an ML model may appear to be fair with existing point-based fairness measures but biased against a demographic group in terms of prediction uncertainties. Then, we introduce new fairness measures based on different types of uncertainties, namely, aleatoric uncertainty and epistemic uncertainty. We demonstrate on many datasets that (i) our uncertainty-based measures are complementary to existing measures of fairness, and (ii) they provide more insights about the underlying issues leading to bias.
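
The central idea, comparing prediction uncertainty rather than only point predictions across demographic groups, can be illustrated with a short sketch. The snippet below is not the authors' proposed measures: it simply approximates per-group epistemic uncertainty by the disagreement of a bootstrap ensemble and reports per-group averages; the function name and the binary-classification setup are assumptions made for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.utils import resample

def group_uncertainty(X, y, group, n_members=20, seed=0):
    """Rough proxy for an uncertainty-based fairness check: train a bootstrap
    ensemble and compare the average predictive variance (a stand-in for
    epistemic uncertainty) across demographic groups. Assumes binary labels."""
    rng = np.random.RandomState(seed)
    member_probs = []
    for _ in range(n_members):
        Xb, yb = resample(X, y, random_state=rng)
        clf = RandomForestClassifier(n_estimators=50, random_state=rng.randint(1 << 30))
        member_probs.append(clf.fit(Xb, yb).predict_proba(X)[:, 1])
    variance = np.stack(member_probs).var(axis=0)   # ensemble disagreement per sample
    return {g: variance[group == g].mean() for g in np.unique(group)}

# A large spread between the returned per-group values suggests the model is
# systematically more uncertain about one group, even when point-based
# fairness metrics look acceptable.
```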
6

Weerts, Hilde, Florian Pfisterer, Matthias Feurer, Katharina Eggensperger, Edward Bergman, Noor Awad, Joaquin Vanschoren, Mykola Pechenizkiy, Bernd Bischl, and Frank Hutter. "Can Fairness be Automated? Guidelines and Opportunities for Fairness-aware AutoML." Journal of Artificial Intelligence Research 79 (February 17, 2024): 639–77. http://dx.doi.org/10.1613/jair.1.14747.

Abstract:
The field of automated machine learning (AutoML) introduces techniques that automate parts of the development of machine learning (ML) systems, accelerating the process and reducing barriers for novices. However, decisions derived from ML models can reproduce, amplify, or even introduce unfairness in our societies, causing harm to (groups of) individuals. In response, researchers have started to propose AutoML systems that jointly optimize fairness and predictive performance to mitigate fairness-related harm. However, fairness is a complex and inherently interdisciplinary subject, and solely posing it as an optimization problem can have adverse side effects. With this work, we aim to raise awareness among developers of AutoML systems about such limitations of fairness-aware AutoML, while also calling attention to the potential of AutoML as a tool for fairness research. We present a comprehensive overview of different ways in which fairness-related harm can arise and the ensuing implications for the design of fairness-aware AutoML. We conclude that while fairness cannot be automated, fairness-aware AutoML can play an important role in the toolbox of ML practitioners. We highlight several open technical challenges for future work in this direction. Additionally, we advocate for the creation of more user-centered assistive systems designed to tackle challenges encountered in fairness work. This article appears in the AI & Society track.
7

Makhlouf, Karima, Sami Zhioua, and Catuscia Palamidessi. "On the Applicability of Machine Learning Fairness Notions." ACM SIGKDD Explorations Newsletter 23, no. 1 (May 26, 2021): 14–23. http://dx.doi.org/10.1145/3468507.3468511.

Abstract:
Machine Learning (ML) based predictive systems are increasingly used to support decisions with a critical impact on individuals' lives such as college admission, job hiring, child custody, criminal risk assessment, etc. As a result, fairness emerged as an important requirement to guarantee that ML predictive systems do not discriminate against specific individuals or entire sub-populations, in particular, minorities. Given the inherent subjectivity of viewing the concept of fairness, several notions of fairness have been introduced in the literature. This paper is a survey of fairness notions that, unlike other surveys in the literature, addresses the question of "which notion of fairness is most suited to a given real-world scenario and why?". Our attempt to answer this question consists in (1) identifying the set of fairness-related characteristics of the real-world scenario at hand, (2) analyzing the behavior of each fairness notion, and then (3) fitting these two elements to recommend the most suitable fairness notion in every specific setup. The results are summarized in a decision diagram that can be used by practitioners and policy makers to navigate the relatively large catalogue of ML fairness notions.
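
Two of the most common group-level notions compared in such surveys, statistical parity and equal opportunity, are straightforward to compute, which also makes it easy to see why they can disagree for the same classifier. The sketch below is purely illustrative; the toy data and function names are not taken from the paper.

```python
import numpy as np

def statistical_parity_difference(y_pred, group, protected):
    """Difference in selection rates: P(Y_hat=1 | protected) - P(Y_hat=1 | other)."""
    return y_pred[group == protected].mean() - y_pred[group != protected].mean()

def equal_opportunity_difference(y_true, y_pred, group, protected):
    """Difference in true positive rates between the protected group and the rest."""
    tpr = lambda mask: y_pred[mask & (y_true == 1)].mean()
    return tpr(group == protected) - tpr(group != protected)

# Toy example where the two notions disagree: selection rates are equal,
# but the true positive rates are not.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(statistical_parity_difference(y_pred, group, "a"))           # 0.0
print(equal_opportunity_difference(y_true, y_pred, group, "a"))    # about -0.33
```

Which of these gaps matters in a given deployment is exactly the question the survey's decision diagram is meant to help answer.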
8

Singh, Vivek K., and Kailash Joshi. "Integrating Fairness in Machine Learning Development Life Cycle: Fair CRISP-DM." e-Service Journal 14, no. 2 (December 2022): 1–24. http://dx.doi.org/10.2979/esj.2022.a886946.

Abstract:
Developing efficient processes for building machine learning (ML) applications is an emerging topic for research. One of the well-known frameworks for organizing, developing, and deploying predictive machine learning models is the cross-industry standard process for data mining (CRISP-DM). However, the framework does not provide any guidelines for detecting and mitigating different types of fairness-related biases in the development of ML applications. The study of these biases is a relatively recent stream of research. To address this significant theoretical and practical gap, we propose a new framework—Fair CRISP-DM, which groups and maps these biases corresponding to each phase of an ML application development. Through this study, we contribute to the literature on ML development and fairness. We present recommendations to ML researchers on including fairness as part of the ML evaluation process. Further, ML practitioners can use our framework to identify and mitigate fairness-related biases in each phase of an ML project development. Finally, we also discuss emerging technologies which can help developers to detect and mitigate biases in different stages of ML application development.
9

Zhou, Zijian, Xinyi Xu, Rachael Hwee Ling Sim, Chuan Sheng Foo, and Bryan Kian Hsiang Low. "Probably Approximate Shapley Fairness with Applications in Machine Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 5 (June 26, 2023): 5910–18. http://dx.doi.org/10.1609/aaai.v37i5.25732.

Abstract:
The Shapley value (SV) is adopted in various scenarios in machine learning (ML), including data valuation, agent valuation, and feature attribution, as it satisfies their fairness requirements. However, as exact SVs are infeasible to compute in practice, SV estimates are approximated instead. This approximation step raises an important question: do the SV estimates preserve the fairness guarantees of exact SVs? We observe that the fairness guarantees of exact SVs are too restrictive for SV estimates. Thus, we generalise Shapley fairness to probably approximate Shapley fairness and propose fidelity score, a metric to measure the variation of SV estimates, that determines how probable the fairness guarantees hold. Our last theoretical contribution is a novel greedy active estimation (GAE) algorithm that will maximise the lowest fidelity score and achieve a better fairness guarantee than the de facto Monte-Carlo estimation. We empirically verify GAE outperforms several existing methods in guaranteeing fairness while remaining competitive in estimation accuracy in various ML scenarios using real-world datasets.
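
For context, the de facto Monte-Carlo baseline that the proposed GAE algorithm is compared against averages marginal contributions over random permutations. The sketch below shows that baseline only (not GAE), with a generic value function standing in for data, agent, or feature valuation; all names are illustrative.

```python
import random

def monte_carlo_shapley(players, v, n_permutations=1000, seed=0):
    """Baseline Monte-Carlo Shapley estimate: over random permutations, average
    each player's marginal contribution v(S ∪ {i}) - v(S). `v` maps a frozenset
    of players to a real-valued utility."""
    rng = random.Random(seed)
    estimates = {p: 0.0 for p in players}
    for _ in range(n_permutations):
        order = list(players)
        rng.shuffle(order)
        coalition, previous = frozenset(), v(frozenset())
        for p in order:
            coalition = coalition | {p}
            value = v(coalition)
            estimates[p] += value - previous
            previous = value
    return {p: total / n_permutations for p, total in estimates.items()}

# Sanity check on an additive game, where exact Shapley values equal the weights.
weights = {"x1": 1.0, "x2": 2.0, "x3": 3.0}
print(monte_carlo_shapley(list(weights), lambda S: sum(weights[p] for p in S), 200))
```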
10

Sreerama, Jeevan, and Gowrisankar Krishnamoorthy. "Ethical Considerations in AI Addressing Bias and Fairness in Machine Learning Models." Journal of Knowledge Learning and Science Technology 1, no. 1 (September 14, 2022): 130–38. http://dx.doi.org/10.60087/jklst.vol1.n1.p138.

Abstract:
The proliferation of artificial intelligence (AI) and machine learning (ML) technologies has brought about unprecedented advancements in various domains. However, concerns surrounding bias and fairness in ML models have gained significant attention, raising ethical considerations that must be addressed. This paper explores the ethical implications of bias in AI systems and the importance of ensuring fairness in ML models. It examines the sources of bias in data collection, algorithm design, and decision-making processes, highlighting the potential consequences of biased AI systems on individuals and society. Furthermore, the paper discusses various approaches and strategies for mitigating bias and promoting fairness in ML models, including data preprocessing techniques, algorithmic transparency, and diverse representation in training datasets. Ethical guidelines and frameworks for developing responsible AI systems are also reviewed, emphasizing the need for interdisciplinary collaboration and stakeholder engagement to address bias and fairness comprehensively. Finally, future directions and challenges in advancing ethical considerations in AI are discussed, underscoring the ongoing efforts required to build trustworthy and equitable AI technologies.
11

Blow, Christina Hastings, Lijun Qian, Camille Gibson, Pamela Obiomon, and Xishuang Dong. "Comprehensive Validation on Reweighting Samples for Bias Mitigation via AIF360." Applied Sciences 14, no. 9 (April 30, 2024): 3826. http://dx.doi.org/10.3390/app14093826.

Abstract:
Fairness Artificial Intelligence (AI) aims to identify and mitigate bias throughout the AI development process, spanning data collection, modeling, assessment, and deployment—a critical facet of establishing trustworthy AI systems. Tackling data bias through techniques like reweighting samples proves effective for promoting fairness. This paper undertakes a systematic exploration of reweighting samples for conventional Machine-Learning (ML) models, utilizing five models for binary classification on datasets such as Adult Income and COMPAS, incorporating various protected attributes. In particular, AI Fairness 360 (AIF360) from IBM, a versatile open-source library aimed at identifying and mitigating bias in machine-learning models throughout the entire AI application lifecycle, is employed as the foundation for conducting this systematic exploration. The evaluation of prediction outcomes employs five fairness metrics from AIF360, elucidating the nuanced and model-specific efficacy of reweighting samples in fostering fairness within traditional ML frameworks. Experimental results illustrate that reweighting samples effectively reduces bias in traditional ML methods for classification tasks. For instance, after reweighting samples, the balanced accuracy of Decision Tree (DT) improves to 100%, and its bias, as measured by fairness metrics such as Average Odds Difference (AOD), Equal Opportunity Difference (EOD), and Theil Index (TI), is mitigated to 0. However, reweighting samples does not effectively enhance the fairness performance of K Nearest Neighbor (KNN). This sheds light on the intricate dynamics of bias, underscoring the complexity involved in achieving fairness across different models and scenarios.
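
Because the study builds on IBM's open-source AIF360 toolkit, the workflow it describes (reweight the training data, fit a standard classifier with the resulting instance weights, then score it with AIF360's fairness metrics) can be outlined roughly as below. This is a sketch assuming AIF360 and scikit-learn are installed; the synthetic data, the attribute named "sex", and the logistic regression model are placeholders rather than the paper's experimental setup.

```python
import numpy as np
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.algorithms.preprocessing import Reweighing
from aif360.metrics import ClassificationMetric
from sklearn.linear_model import LogisticRegression

# Tiny synthetic table with one protected attribute and one feature.
rng = np.random.default_rng(0)
df = pd.DataFrame({"sex": rng.integers(0, 2, 500), "feature": rng.normal(size=500)})
df["label"] = ((df["feature"] + 0.5 * df["sex"] + rng.normal(0, 0.5, 500)) > 0).astype(int)

dataset = BinaryLabelDataset(df=df, label_names=["label"],
                             protected_attribute_names=["sex"])
priv, unpriv = [{"sex": 1}], [{"sex": 0}]

# Pre-processing: reweight samples so group/label combinations are balanced.
train = Reweighing(unprivileged_groups=unpriv, privileged_groups=priv).fit_transform(dataset)

# Train an ordinary classifier using the AIF360 instance weights.
clf = LogisticRegression().fit(train.features, train.labels.ravel(),
                               sample_weight=train.instance_weights)

# Score predictions with AIF360 fairness metrics.
pred = dataset.copy()
pred.labels = clf.predict(dataset.features).reshape(-1, 1)
metric = ClassificationMetric(dataset, pred,
                              unprivileged_groups=unpriv, privileged_groups=priv)
print(metric.average_odds_difference(),
      metric.equal_opportunity_difference(),
      metric.theil_index())
```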
12

Teodorescu, Mike, Lily Morse, Yazeed Awwad, and Gerald Kane. "Failures of Fairness in Automation Require a Deeper Understanding of Human-ML Augmentation." MIS Quarterly 45, no. 3 (September 1, 2021): 1483–500. http://dx.doi.org/10.25300/misq/2021/16535.

Abstract:
Machine learning (ML) tools reduce the costs of performing repetitive, time-consuming tasks yet run the risk of introducing systematic unfairness into organizational processes. Automated approaches to achieving fairness often fail in complex situations, leading some researchers to suggest that human augmentation of ML tools is necessary. However, our current understanding of human–ML augmentation remains limited. In this paper, we argue that the Information Systems (IS) discipline needs a more sophisticated view of and research into human–ML augmentation. We introduce a typology of augmentation for fairness consisting of four quadrants: reactive oversight, proactive oversight, informed reliance, and supervised reliance. We identify significant intersections with previous IS research and distinct managerial approaches to fairness for each quadrant. Several potential research questions emerge from fundamental differences between ML tools trained on data and traditional IS built with code. IS researchers may discover that the differences of ML tools undermine some of the fundamental assumptions upon which classic IS theories and concepts rest. ML may require massive rethinking of significant portions of the corpus of IS research in light of these differences, representing an exciting frontier for research into human–ML augmentation in the years ahead that IS researchers should embrace.
13

Pessach, Dana, and Erez Shmueli. "A Review on Fairness in Machine Learning." ACM Computing Surveys 55, no. 3 (April 30, 2023): 1–44. http://dx.doi.org/10.1145/3494672.

Abstract:
An increasing number of decisions regarding the daily lives of human beings are being controlled by artificial intelligence and machine learning (ML) algorithms in spheres ranging from healthcare, transportation, and education to college admissions, recruitment, provision of loans, and many more realms. Since they now touch on many aspects of our lives, it is crucial to develop ML algorithms that are not only accurate but also objective and fair. Recent studies have shown that algorithmic decision making may be inherently prone to unfairness, even when there is no intention for it. This article presents an overview of the main concepts of identifying, measuring, and improving algorithmic fairness when using ML algorithms, focusing primarily on classification tasks. The article begins by discussing the causes of algorithmic bias and unfairness and the common definitions and measures for fairness. Fairness-enhancing mechanisms are then reviewed and divided into pre-process, in-process, and post-process mechanisms. A comprehensive comparison of the mechanisms is then conducted, toward a better understanding of which mechanisms should be used in different scenarios. The article ends by reviewing several emerging research sub-fields of algorithmic fairness, beyond classification.
14

Ghosh, Bishwamittra, Debabrota Basu, and Kuldeep S. Meel. "Justicia: A Stochastic SAT Approach to Formally Verify Fairness." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 9 (May 18, 2021): 7554–63. http://dx.doi.org/10.1609/aaai.v35i9.16925.

Abstract:
As a technology ML is oblivious to societal good or bad, and thus, the field of fair machine learning has stepped up to propose multiple mathematical definitions, algorithms, and systems to ensure different notions of fairness in ML applications. Given the multitude of propositions, it has become imperative to formally verify the fairness metrics satisfied by different algorithms on different datasets. In this paper, we propose a stochastic satisfiability (SSAT) framework, Justicia, that formally verifies different fairness measures of supervised learning algorithms with respect to the underlying data distribution. We instantiate Justicia on multiple classification and bias mitigation algorithms, and datasets to verify different fairness metrics, such as disparate impact, statistical parity, and equalized odds. Justicia is scalable, accurate, and operates on non-Boolean and compound sensitive attributes unlike existing distribution-based verifiers, such as FairSquare and VeriFair. Being distribution-based by design, Justicia is more robust than the verifiers, such as AIF360, that operate on specific test samples. We also theoretically bound the finite-sample error of the verified fairness measure.
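
For reference, the group fairness metrics that the verifier targets are the standard ones; written for a binary classifier with prediction Ŷ, true label Y, and sensitive attribute A, they take the following textbook form. This is only a reminder of the definitions, not a restatement of Justicia's SSAT encoding.

```latex
% Standard group fairness criteria for a binary classifier \hat{Y},
% label Y, sensitive attribute A, and tolerance \varepsilon.
\begin{align*}
\text{Disparate impact:}\quad
  & \frac{\Pr(\hat{Y}=1 \mid A=0)}{\Pr(\hat{Y}=1 \mid A=1)} \ \ge\ 1-\varepsilon \\[4pt]
\text{Statistical parity:}\quad
  & \bigl|\Pr(\hat{Y}=1 \mid A=0)-\Pr(\hat{Y}=1 \mid A=1)\bigr| \ \le\ \varepsilon \\[4pt]
\text{Equalized odds:}\quad
  & \bigl|\Pr(\hat{Y}=1 \mid A=0, Y=y)-\Pr(\hat{Y}=1 \mid A=1, Y=y)\bigr| \ \le\ \varepsilon
    \quad \text{for } y\in\{0,1\}
\end{align*}
```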
15

Sikstrom, Laura, Marta M. Maslej, Katrina Hui, Zoe Findlay, Daniel Z. Buchman, and Sean L. Hill. "Conceptualising fairness: three pillars for medical algorithms and health equity." BMJ Health & Care Informatics 29, no. 1 (January 2022): e100459. http://dx.doi.org/10.1136/bmjhci-2021-100459.

Abstract:
Objectives: Fairness is a core concept meant to grapple with different forms of discrimination and bias that emerge with advances in Artificial Intelligence (eg, machine learning, ML). Yet, claims to fairness in ML discourses are often vague and contradictory. The response to these issues within the scientific community has been technocratic. Studies either measure (mathematically) competing definitions of fairness, and/or recommend a range of governance tools (eg, fairness checklists or guiding principles). To advance efforts to operationalise fairness in medicine, we synthesised a broad range of literature. Methods: We conducted an environmental scan of English language literature on fairness from 1960 to July 31, 2021. Electronic databases Medline, PubMed and Google Scholar were searched, supplemented by additional hand searches. Data from 213 selected publications were analysed using rapid framework analysis. Search and analysis were completed in two rounds: to explore previously identified issues (a priori), as well as those emerging from the analysis (de novo). Results: Our synthesis identified ‘Three Pillars for Fairness’: transparency, impartiality and inclusion. We draw on these insights to propose a multidimensional conceptual framework to guide empirical research on the operationalisation of fairness in healthcare. Discussion: We apply the conceptual framework generated by our synthesis to risk assessment in psychiatry as a case study. We argue that any claim to fairness must reflect critical assessment and ongoing social and political deliberation around these three pillars with a range of stakeholders, including patients. Conclusion: We conclude by outlining areas for further research that would bolster ongoing commitments to fairness and health equity in healthcare.
16

Drira, Mohamed, Sana Ben Hassine, Michael Zhang, and Steven Smith. "Machine Learning Methods in Student Mental Health Research: An Ethics-Centered Systematic Literature Review." Applied Sciences 14, no. 24 (December 16, 2024): 11738. https://doi.org/10.3390/app142411738.

Abstract:
This study conducts an ethics-centered analysis of the AI/ML models used in Student Mental Health (SMH) research, considering the ethical principles of fairness, privacy, transparency, and interpretability. First, this paper surveys the AI/ML methods used in the extant SMH literature published between 2015 and 2024, as well as the main health outcomes, to inform future work in the SMH field. Then, it leverages advanced topic modeling techniques to depict the prevailing themes in the corpus. Finally, this study proposes novel measurable privacy, transparency (reporting and replicability), interpretability, and fairness metrics scores as a multi-dimensional integrative framework to evaluate the extent of ethics awareness and consideration in AI/ML-enabled SMH research. Findings show that (i) 65% of the surveyed papers disregard the privacy principle; (ii) 59% of the studies use black-box models resulting in low interpretability scores; and (iii) barely 18% of the papers provide demographic information about participants, indicating a limited consideration of the fairness principle. Nonetheless, the transparency principle is implemented at a satisfactory level with mean reporting and replicability scores of 80%. Overall, our results suggest a significant lack of awareness and consideration for the ethical principles of privacy, fairness, and interpretability in AI/ML-enabled SMH research. As AI/ML continues to expand in SMH, incorporating ethical considerations at every stage—from design to dissemination—is essential for producing ethically responsible and reliable research.
17

Kumbo, Lazaro Inon, Victor Simon Nkwera, and Rodrick Frank Mero. "Evaluating the Ethical Practices in Developing AI and ML Systems in Tanzania." ABUAD Journal of Engineering Research and Development (AJERD) 7, no. 2 (September 2024): 340–51. http://dx.doi.org/10.53982/ajerd.2024.0702.33-j.

Abstract:
Artificial Intelligence (AI) and Machine Learning (ML) present transformative opportunities for sectors in developing countries like Tanzania that were previously hindered by manual processes and data inefficiencies. Despite these advancements, the ethical challenges of bias, fairness, transparency, privacy, and accountability are critical during AI and ML system design and deployment. This study explores these ethical dimensions from the perspective of Tanzanian IT professionals, given the country's nascent AI landscape. The research aims to understand and address these challenges using a mixed-method approach, including case studies, a systematic literature review, and critical analysis. Findings reveal significant concerns about algorithm bias, the complexity of ensuring fairness and equity, transparency and explainability, which are crucial for promoting trust and understanding among users, and heightened privacy and security risks. The study underscores the importance of integrating ethical considerations throughout the development lifecycle of AI and ML systems and the necessity of robust regulatory frameworks. Recommendations include developing targeted regulatory guidelines, providing comprehensive training for IT professionals, and fostering public trust through transparency and accountability. This study underscores the importance of ethical AI and ML practices to ensure responsible and equitable technological development in Tanzania.
18

Ezzeldin, Yahya H., Shen Yan, Chaoyang He, Emilio Ferrara, and A. Salman Avestimehr. "FairFed: Enabling Group Fairness in Federated Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (June 26, 2023): 7494–502. http://dx.doi.org/10.1609/aaai.v37i6.25911.

Abstract:
Training ML models which are fair across different demographic groups is of critical importance due to the increased integration of ML in crucial decision-making scenarios such as healthcare and recruitment. Federated learning has been viewed as a promising solution for collaboratively training machine learning models among multiple parties while maintaining their local data privacy. However, federated learning also poses new challenges in mitigating the potential bias against certain populations (e.g., demographic groups), as this typically requires centralized access to the sensitive information (e.g., race, gender) of each datapoint. Motivated by the importance and challenges of group fairness in federated learning, in this work, we propose FairFed, a novel algorithm for fairness-aware aggregation to enhance group fairness in federated learning. Our proposed approach is server-side and agnostic to the applied local debiasing thus allowing for flexible use of different local debiasing methods across clients. We evaluate FairFed empirically versus common baselines for fair ML and federated learning and demonstrate that it provides fairer models, particularly under highly heterogeneous data distributions across clients. We also demonstrate the benefits of FairFed in scenarios involving naturally distributed real-life data collected from different geographical locations or departments within an organization.
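
A generic sketch of the server-side idea, re-weighting client updates according to how far each client's locally reported fairness gap deviates from the average, is given below. This is a simplified illustration of fairness-aware aggregation in general, not the exact FairFed update rule; the weighting function and parameter names are assumptions.

```python
import numpy as np

def fairness_aware_aggregate(client_updates, client_fairness_gaps, beta=1.0):
    """Generic sketch of server-side fairness-aware averaging (not the exact
    FairFed rule): clients whose locally measured fairness gap deviates more
    from the mean gap receive smaller aggregation weights.

    client_updates       -- list of 1-D parameter arrays, one per client
    client_fairness_gaps -- local group fairness gaps (e.g., demographic
                            parity differences) reported by each client
    """
    gaps = np.asarray(client_fairness_gaps, dtype=float)
    deviation = np.abs(gaps - gaps.mean())
    weights = np.exp(-beta * deviation)        # down-weight outlying clients
    weights /= weights.sum()
    return (weights[:, None] * np.stack(client_updates)).sum(axis=0)

# Example: three clients holding a 4-parameter model; the third client
# reports a much larger local fairness gap and is down-weighted.
updates = [np.ones(4), 2 * np.ones(4), 3 * np.ones(4)]
print(fairness_aware_aggregate(updates, [0.02, 0.03, 0.20]))
```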
19

Fessenko, Dessislava. "Ethical Requirements for Achieving Fairness in Radiology Machine Learning: An Intersectionality and Social Embeddedness Approach." Journal of Health Ethics 20, no. 1 (2024): 37–49. http://dx.doi.org/10.18785/jhe.2001.04.

Abstract:
Radiodiagnostics by machine-learning (ML) systems is often perceived as objective and fair. It may, however, exhibit bias towards certain patient sub-groups. The typical reasons for this are the selection of disease features for ML systems to screen, that ML systems learn from human clinical judgements, which are often biased, and that fairness in ML is often inappropriately conceptualized as “equality”. ML systems with such parameters fail to accurately diagnose and address patients’ actual health needs and how they depend on patients’ social identities (i.e. intersectionality) and broader social conditions (i.e. embeddedness). This paper explores the ethical obligations to ensure fairness of ML systems precisely in light of patients’ intersectionality and the social embeddedness of their health. The paper proposes a set of interventions to tackle these issues. It recommends a paradigm shift in the development of ML systems that enables them to screen both endogenous disease causes and the health effects of patients’ relevant underlying (e.g. socioeconomic) circumstances. The paper proposes a framework of ethical requirements for instituting this shift and further ensuring fairness. The requirements center patients’ intersectionality and the social embeddedness of their health most notably through (i) integrating in ML systems adequate measurable medical indicators of the health impact of patients’ circumstances, (ii) ethically sourced, diverse, representative and correct patient data concerning relevant disease features and medical indicators, and (iii) iterative socially sensitive co-exploration and co-design of datasets and ML systems involving all relevant stakeholders.
20

Cheng, Lu. "Demystifying Algorithmic Fairness in an Uncertain World." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 20 (March 24, 2024): 22662. http://dx.doi.org/10.1609/aaai.v38i20.30278.

Abstract:
Significant progress in the field of fair machine learning (ML) has been made to counteract algorithmic discrimination against marginalized groups. However, fairness remains an active research area that is far from settled. One key bottleneck is the implicit assumption that environments, where ML is developed and deployed, are certain and reliable. In a world that is characterized by volatility, uncertainty, complexity, and ambiguity, whether what has been developed in algorithmic fairness can still serve its purpose is far from obvious. In this talk, I will first discuss how to improve algorithmic fairness under two kinds of predictive uncertainties, i.e., aleatoric uncertainty (i.e., randomness and ambiguity in the data) and epistemic uncertainty (i.e., a lack of data or knowledge), respectively. The former regards historical bias reflected in the data and the latter corresponds to the bias perpetuated or amplified during model training due to lack of data or knowledge. In particular, the first work studies pushing the fairness-utility trade-off through aleatoric uncertainty, and the second work investigates fair few-shot learning. The last work introduces coverage-based fairness that ensures different groups enjoy identical treatment and receive equal coverage.
21

Arslan, Ayse. "Mitigation Techniques to Overcome Data Harm in Model Building for ML." International Journal of Artificial Intelligence & Applications 13, no. 1 (January 31, 2022): 73–82. http://dx.doi.org/10.5121/ijaia.2022.13105.

Abstract:
Given the impact of Machine Learning (ML) on individuals and society, understanding how harm might occur throughout the ML life cycle has become more critical than ever. By offering a framework to determine distinct potential sources of downstream harm in the ML pipeline, the paper demonstrates the importance of choices throughout the distinct phases of data collection, development, and deployment that extend far beyond just model training. Relevant mitigation techniques are also suggested, to be used instead of merely relying on generic notions of what counts as fairness.
22

Vartak, Manasi. "From ML models to intelligent applications." Proceedings of the VLDB Endowment 14, no. 13 (September 2021): 3419. http://dx.doi.org/10.14778/3484224.3484240.

Abstract:
The last 5+ years in ML have focused on building the best models, hyperparameter optimization, parallel training, massive neural networks, etc. Now that the building of models has become easy, models are being integrated into every piece of software and device, from smart kitchens to radiology to detecting the performance of turbines. This shift from training ML models to building intelligent, ML-driven applications has highlighted a variety of problems in going from "a model" to a whole application or business process running on ML. These challenges include operational challenges (how to package and deploy different types of models using existing SDLC tools and practices), rethinking what existing abstractions mean for ML (e.g., testing, monitoring, warehouses for ML), collaboration challenges arising from the disparate skill sets involved in ML product development (DS vs. SWE), and brand-new problems unique to ML (e.g., explainability, fairness, retraining, etc.). In this talk, I will discuss the slew of challenges that still exist in operationalizing ML to build intelligent applications, some solutions that the community has adopted, and highlight various open problems that would benefit from the research community's contributions.
23

Arjunan, Gopalakrishnan. "Enhancing Data Quality and Integrity in Machine Learning Pipelines: Approaches for Detecting and Mitigating Bias." International Journal of Scientific Research and Management (IJSRM) 10, no. 09 (September 24, 2022): 940–45. http://dx.doi.org/10.18535/ijsrm/v10i9.ec04.

Abstract:
Machine learning (ML) has become a cornerstone of innovation in numerous industries, including healthcare, finance, marketing, and criminal justice. However, the growing reliance on ML models has revealed the critical importance of data quality and integrity in ensuring fair and reliable predictions. As AI technologies are deployed in sensitive decision-making areas, the presence of hidden biases within data has become a major concern. These biases can perpetuate systemic inequalities and result in unethical outcomes, undermining trust in AI systems. The accuracy and fairness of ML models are directly influenced by the data used to train them, and poor-quality data—whether due to missing values, noise, or inherent biases—can degrade performance, skew results, and exacerbate societal inequalities. This paper explores the complex relationship between data quality, data integrity, and bias in machine learning pipelines. Specifically, it examines the different types of bias that can emerge at various stages of data collection, preprocessing, and model development, and the negative impacts these biases have on model performance and fairness. Furthermore, the paper outlines a range of bias detection and bias mitigation techniques, which are essential for developing trustworthy and ethical AI systems. From data preprocessing methods like imputation and normalization to advanced fairness-aware algorithms and post-processing adjustments, several approaches are available to improve data quality and eliminate bias from machine learning pipelines. Additionally, the paper emphasizes the importance of ongoing monitoring and validation of ML models to detect emerging biases and ensure that they continue to operate fairly as they are exposed to new data. The integration of regular audits, fairness metrics, and data drift detection mechanisms are discussed as crucial steps in maintaining model integrity over time. By focusing on the processes and strategies required to enhance both data quality and integrity, this paper aims to contribute to the development of more equitable, transparent, and reliable AI systems. The goal is to ensure that machine learning technologies can be used responsibly and in ways that promote fairness, equality, and trust, ultimately benefiting all sectors of society.
24

Singh, Arashdeep, Jashandeep Singh, Ariba Khan, and Amar Gupta. "Developing a Novel Fair-Loan Classifier through a Multi-Sensitive Debiasing Pipeline: DualFair." Machine Learning and Knowledge Extraction 4, no. 1 (March 12, 2022): 240–53. http://dx.doi.org/10.3390/make4010011.

Abstract:
Machine learning (ML) models are increasingly being used for high-stake applications that can greatly impact people’s lives. Sometimes, these models can be biased toward certain social groups on the basis of race, gender, or ethnicity. Many prior works have attempted to mitigate this “model discrimination” by updating the training data (pre-processing), altering the model learning process (in-processing), or manipulating the model output (post-processing). However, more work can be done in extending this situation to intersectional fairness, where we consider multiple sensitive parameters (e.g., race) and sensitive options (e.g., black or white), thus allowing for greater real-world usability. Prior work in fairness has also suffered from an accuracy–fairness trade-off that prevents both accuracy and fairness from being high. Moreover, the previous literature has not clearly presented holistic fairness metrics that work with intersectional fairness. In this paper, we address all three of these problems by (a) creating a bias mitigation technique called DualFair and (b) developing a new fairness metric (i.e., AWI, a measure of bias of an algorithm based upon inconsistent counterfactual predictions) that can handle intersectional fairness. Lastly, we test our novel mitigation method using a comprehensive U.S. mortgage lending dataset and show that our classifier, or fair loan predictor, obtains relatively high fairness and accuracy metrics.
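
The intuition behind a counterfactual-consistency metric such as AWI (how often a prediction flips when only the sensitive attributes are changed) can be illustrated with a small check over multiple sensitive parameters. The sketch below is a generic flip-rate computation, not the authors' exact AWI definition, and the column names in the commented usage are hypothetical.

```python
import numpy as np
from itertools import product

def counterfactual_flip_rate(model, X, sensitive_cols, sensitive_values):
    """Fraction of rows whose prediction changes under at least one counterfactual
    assignment of the sensitive columns, holding all other features fixed.
    Expects X as a pandas DataFrame; lower values indicate more consistent models."""
    base = model.predict(X)
    flipped = np.zeros(len(X), dtype=bool)
    for combo in product(*[sensitive_values[c] for c in sensitive_cols]):
        X_cf = X.copy()
        for col, value in zip(sensitive_cols, combo):
            X_cf[col] = value                 # counterfactually rewrite the attribute
        flipped |= (model.predict(X_cf) != base)
    return flipped.mean()

# Hypothetical usage with a fitted classifier `clf` and encoded attributes:
# rate = counterfactual_flip_rate(clf, X_test,
#                                 sensitive_cols=["race", "sex"],
#                                 sensitive_values={"race": [0, 1, 2], "sex": [0, 1]})
```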
25

Tambari Faith Nuka and Amos Abidemi Ogunola. "AI and machine learning as tools for financial inclusion: challenges and opportunities in credit scoring." International Journal of Science and Research Archive 13, no. 2 (November 30, 2024): 1052–67. http://dx.doi.org/10.30574/ijsra.2024.13.2.2258.

Abstract:
Financial inclusion remains a pressing global challenge, with millions of underserved individuals excluded from traditional credit systems due to systemic biases and outdated evaluation models. Artificial Intelligence [AI] and Machine Learning [ML] have emerged as transformative tools for addressing these inequities, offering opportunities to redefine how creditworthiness is assessed. By leveraging the predictive power of AI and ML, financial institutions can expand access to credit, improve fairness, and reduce disparities in underserved communities. This paper begins by exploring the broad potential of AI and ML in financial inclusion, highlighting their ability to process vast datasets and uncover patterns that traditional methods overlook. It then delves into the specific role of ML in identifying and reducing biases in credit scoring. ML algorithms, when designed with fairness in mind, can detect discriminatory patterns, enabling financial institutions to implement corrective measures and create more inclusive systems. The discussion narrows to examine the importance of diverse datasets in ensuring equitable outcomes. By incorporating non-traditional data points—such as rent payments, utility bills, and employment history—AI systems can provide a more holistic view of creditworthiness, particularly for individuals marginalized by conventional models. Finally, the ethical considerations of using AI in credit scoring are addressed, focusing on the need for transparency, accountability, and safeguards against algorithmic discrimination. This paper argues that responsible implementation of AI and ML, combined with robust regulatory frameworks, is essential to balance innovation with fairness. By embracing these principles, the financial industry can harness AI as a powerful enabler of financial inclusion, ultimately creating a more equitable credit ecosystem for underserved communities.
26

Keswani, Vijay, and L. Elisa Celis. "Algorithmic Fairness From the Perspective of Legal Anti-discrimination Principles." Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society 7 (October 16, 2024): 724–37. http://dx.doi.org/10.1609/aies.v7i1.31674.

Abstract:
Real-world applications of machine learning (ML) algorithms often propagate negative stereotypes and social biases against marginalized groups. In response, the field of fair machine learning has proposed technical solutions for a variety of settings that aim to correct the biases in algorithmic predictions. These solutions remove the dependence of the final prediction on the protected attributes (like gender or race) and/or ensure that prediction performance is similar across demographic groups. Yet, recent studies assessing the impact of these solutions in practice demonstrate their ineffectiveness in tackling real-world inequalities. Given this lack of real-world success, it is essential to take a step back and question the design motivations of algorithmic fairness interventions. We use popular legal anti-discriminatory principles, specifically anti-classification and anti-subordination principles, to study the motivations of fairness interventions and their applications. The anti-classification principle suggests addressing discrimination by ensuring that decision processes and outcomes are independent of the protected attributes of individuals. The anti-subordination principle, on the other hand, argues that decision-making policies can provide equal protection to all only by actively tackling societal hierarchies that enable structural discrimination, even if that requires using protected attributes to address historical inequalities. Through a survey of the fairness mechanisms and applications, we assess different components of fair ML approaches from the perspective of these principles. We argue that the observed shortcomings of fair ML algorithms are similar to the failures of anti-classification policies and that these shortcomings constitute violations of the anti-subordination principle. Correspondingly, we propose guidelines for algorithmic fairness interventions to adhere to the anti-subordination principle. In doing so, we hope to bridge critical concepts between legal frameworks for non-discrimination and fairness in machine learning.
27

Detassis, Fabrizio, Michele Lombardi, and Michela Milano. "Teaching the Old Dog New Tricks: Supervised Learning with Constraints." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 5 (May 18, 2021): 3742–49. http://dx.doi.org/10.1609/aaai.v35i5.16491.

Abstract:
Adding constraint support in Machine Learning has the potential to address outstanding issues in data-driven AI systems, such as safety and fairness. Existing approaches typically apply constrained optimization techniques to ML training, enforce constraint satisfaction by adjusting the model design, or use constraints to correct the output. Here, we investigate a different, complementary, strategy based on "teaching" constraint satisfaction to a supervised ML method via the direct use of a state-of-the-art constraint solver: this enables taking advantage of decades of research on constrained optimization with limited effort. In practice, we use a decomposition scheme alternating master steps (in charge of enforcing the constraints) and learner steps (where any supervised ML model and training algorithm can be employed). The process leads to approximate constraint satisfaction in general, and convergence properties are difficult to establish; despite this fact, we found empirically that even a naive setup of our approach performs well on ML tasks with fairness constraints, and on classical datasets with synthetic constraints.
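
The decomposition described here can be pictured with a toy alternation: a master step projects the current targets toward constraint satisfaction, and a learner step refits an ordinary supervised model on the adjusted targets. The sketch below uses an equal-group-means constraint and a scikit-learn regressor purely for illustration; it is not the authors' master formulation, and the data are synthetic.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def master_step(pred, group):
    """Toy 'master': project predictions onto the constraint 'equal group means'
    by shifting each group toward the overall mean."""
    z = pred.copy()
    for g in np.unique(group):
        z[group == g] += pred.mean() - pred[group == g].mean()
    return z

# Toy data: the last column is a group indicator that drives the raw target.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, 400)
X = np.column_stack([rng.normal(size=(400, 2)), group])
y = X[:, 0] + 0.8 * group + rng.normal(0, 0.1, 400)

z = y.copy()
for _ in range(5):                                        # alternate the two steps
    model = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X, z)  # learner step
    z = master_step(model.predict(X), group)              # master step enforces the constraint

gap = abs(model.predict(X)[group == 1].mean() - model.predict(X)[group == 0].mean())
print(f"group mean gap after alternation: {gap:.3f}")     # the unconstrained fit is near 0.8
```

As the abstract notes, satisfaction is approximate: the loop pushes the gap down rather than guaranteeing it reaches zero.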
28

Sunday Adeola Oladosu, Christian Chukwuemeka Ike, Peter Adeyemo Adepoju, Adeoye Idowu Afolabi, Adebimpe Bolatito Ige, and Olukunle Oladipupo Amoo. "Frameworks for ethical data governance in machine learning: Privacy, fairness, and business optimization." Magna Scientia Advanced Research and Reviews 7, no. 2 (April 30, 2023): 096–106. https://doi.org/10.30574/msarr.2023.7.2.0043.

Abstract:
The rapid growth of machine learning (ML) technologies has transformed industries by enabling data-driven decision-making, yet it has also raised critical ethical concerns. Frameworks for ethical data governance are essential to ensure that ML systems uphold privacy, fairness, and business optimization while addressing societal and organizational needs. This review explores the intersection of these three pillars, providing a structured approach to balance competing priorities in ML applications. Privacy concerns focus on safeguarding individuals' data through strategies such as anonymization, differential privacy, and adherence to regulations like GDPR and CCPA. Fairness involves mitigating biases in datasets and algorithms to prevent discrimination and promote equitable outcomes. Business optimization emphasizes leveraging ML responsibly to maximize value without compromising ethical standards. The proposed frameworks integrate legal compliance, organizational policies, and technical solutions to achieve a holistic approach to ethical data governance. Key components include privacy-preserving techniques, fairness-aware ML models, and transparent decision-making processes. Challenges such as balancing trade-offs between privacy and utility, addressing bias in data, and ensuring scalability in implementation are critically examined. Case studies highlight successful applications of ethical data governance in real-world scenarios, demonstrating the viability of these frameworks in promoting both ethical integrity and business innovation. Emerging trends, such as federated learning, AI ethics boards, and international collaboration on data standards, are identified as pivotal for advancing ethical practices. This review emphasizes the necessity of embedding ethics throughout the AI lifecycle, from design to deployment and monitoring. By adopting robust governance frameworks, organizations can foster trust, comply with regulatory mandates, and harness the full potential of ML responsibly.
29

Czarnowska, Paula, Yogarshi Vyas, and Kashif Shah. "Quantifying Social Biases in NLP: A Generalization and Empirical Comparison of Extrinsic Fairness Metrics." Transactions of the Association for Computational Linguistics 9 (2021): 1249–67. http://dx.doi.org/10.1162/tacl_a_00425.

Abstract:
Measuring bias is key for better understanding and addressing unfairness in NLP/ML models. This is often done via fairness metrics, which quantify the differences in a model’s behaviour across a range of demographic groups. In this work, we shed more light on the differences and similarities between the fairness metrics used in NLP. First, we unify a broad range of existing metrics under three generalized fairness metrics, revealing the connections between them. Next, we carry out an extensive empirical comparison of existing metrics and demonstrate that the observed differences in bias measurement can be systematically explained via differences in parameter choices for our generalized metrics.
30

Park, Sojung, Eunhye Ahn, Tae-Hyuk Ahn, SangNam Ahn, Soobin Park, Eunsun Kwon, Seoyeon Ahn, and Yuanyuan Yang. "Role of Machine Learning (ML) in Aging in Place Research: A Scoping Review." Innovation in Aging 8, Supplement_1 (December 2024): 1215. https://doi.org/10.1093/geroni/igae098.3890.

Abstract:
As global aging accelerates, Aging in Place (AIP) is increasingly central to improving older adults’ quality of life. Machine Learning (ML) is widely used in aging research, particularly in health monitoring and personalized care. However, most studies focus on clinical settings, leaving a gap in understanding ML’s application in non-clinical AIP contexts. This review addresses this gap by exploring the themes, policy implications, and ethical concerns of AIP-ML studies, including AI bias and fairness. The review examined 32 peer-reviewed studies sourced from databases like PsycINFO, MEDLINE, and PubMed. Through thematic analysis, three main themes emerged: successful aging, managing depressive symptoms, and fostering social connectedness. The studies employed various ML techniques, including classification algorithms, random forest, SVM, and LASSO, to predict health outcomes such as depression, cognitive decline, and functional disabilities in older adults. They also identified key risk factors and examined disparities in healthcare access and digital inclusion. However, the studies often lacked validation across diverse populations, which limits their policy impact. Most research focused on healthy, community-dwelling older adults, excluding those with dementia or disabilities, thereby introducing biases that could disproportionately affect marginalized groups. The review highlights that while ML models are useful for prediction, they may overlook the needs of less healthy older adults, reducing their accuracy and fairness. These findings emphasize the need for transparency and participant involvement in data usage, suggesting that despite its limitations, the current literature offers valuable insights for advancing AIP research and guiding the development of more inclusive AIP policies.
31

Shah, Kanan, Yassamin Neshatvar, Elaine Shum, and Madhur Nayan. "Optimizing the fairness of survival prediction models for racial/ethnic subgroups: A study on predicting post-operative survival in stage IA and IB non-small cell lung cancer." JCO Oncology Practice 20, no. 10_suppl (October 2024): 380. http://dx.doi.org/10.1200/op.2024.20.10_suppl.380.

Abstract:
Background: The recent surge of utilizing machine learning (ML) to develop prediction models for clinical decision-making aids is promising. However, these models can demonstrate racial bias due to inequities in real-world training data. In lung cancer, multiple models have been developed to predict prognosis, but none have been optimized to mitigate bias in performance among racial/ethnic subgroups. We developed an ML model to predict five-year survival in Stage 1A-1B non-small cell lung cancer (NSCLC), ensuring fairness on race. Methods: In the National Cancer Database, we identified patients with histopathologically confirmed stage 1A–1B NSCLC who underwent curative intent lobectomy from 2004 to 2017. We split the study cohort into training and test sets (70%/30%). We trained and compared various ML models to predict 5-year overall survival. Patient demographic, clinical, and disease characteristics were used as input features for the models. To evaluate model fairness, we used the equalized odds ratio (eOR), which compares the true positive and false positive rates across groups; an eOR value of 1 represents equivalent rates across racial groups. We utilized 3 approaches to mitigate model bias and optimize for fairness of the best “naïve” model: grid search, threshold optimizer, and exponentiated gradient methods. We evaluated model performance before and after bias mitigation using the area under the curve (AUC). Results: 124,298 patients fit our inclusion/exclusion criteria; 87% of patients were White, 8% were Black/African American, 3% Hispanic, and 2% Asian. Eighty percent of patients were diagnosed with stage 1A cancer; 20% had stage 1B cancer. The best naïve ML model, not optimized for fairness on race, had an eOR of 0.25 with an AUC of 0.66 (95% CI 0.65-0.66) overall. This model demonstrated an AUC of 0.65 (0.65-0.66) among White patients, 0.64 (0.62-0.66) among Black patients, 0.64 (0.60-0.68) among Asian patients, and 0.71 (0.68-0.74) among Hispanic patients. The threshold optimizer bias mitigation strategy improved fairness the most while maintaining similar overall performance (AUC 0.65, 0.64-0.66). With this strategy, the eOR improved to 0.83 while AUC remained relatively stable across racial subgroups. Conclusions: We developed an ML model to predict 5-year survival in patients undergoing surgery for stage IA-IB NSCLC and employed model bias mitigation strategies that significantly improved model fairness, without diminishing overall performance. These strategies should be considered when developing prediction models for clinical decision making to avoid perpetuating disparities in care due to algorithm bias.

Model performance metrics:
Model | Equalized Odds Ratio | True Positive Rate | False Positive Rate | AUC
Naive model | 0.25 | 0.61 | 0.30 | 0.66
Threshold optimizer mitigated model | 0.83 | 0.62 | 0.32 | 0.65
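
One common definition of the equalized odds ratio reported above, used for example by the fairlearn library, is the worse of the min/max ratios of the group true positive rates and the group false positive rates; a value of 1 means identical rates. The abstract does not name a library, so the snippet below, with hypothetical group rates, only illustrates that convention (fairlearn's `ThresholdOptimizer` in `fairlearn.postprocessing` is one off-the-shelf implementation of the threshold-optimizer mitigation mentioned here).

```python
import numpy as np

def equalized_odds_ratio(tpr_by_group, fpr_by_group):
    """One common convention (e.g., fairlearn's): the worse of the min/max
    ratios of group TPRs and group FPRs; 1.0 means equal rates."""
    tpr = np.array(list(tpr_by_group.values()), dtype=float)
    fpr = np.array(list(fpr_by_group.values()), dtype=float)
    return min(tpr.min() / tpr.max(), fpr.min() / fpr.max())

# Hypothetical per-group rates, not the study's actual numbers.
tpr = {"White": 0.62, "Black": 0.55, "Asian": 0.58, "Hispanic": 0.66}
fpr = {"White": 0.31, "Black": 0.28, "Asian": 0.30, "Hispanic": 0.35}
print(round(equalized_odds_ratio(tpr, fpr), 2))   # 0.8
```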
32

Islam, Rashidul, Huiyuan Chen, and Yiwei Cai. "Fairness without Demographics through Shared Latent Space-Based Debiasing." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 11 (March 24, 2024): 12717–25. http://dx.doi.org/10.1609/aaai.v38i11.29167.

Full text of the source
Abstract:
Ensuring fairness in machine learning (ML) is crucial, particularly in applications that impact diverse populations. The majority of existing works heavily rely on the availability of protected features like race and gender. However, practical challenges such as privacy concerns and regulatory restrictions often prohibit the use of this data, limiting the scope of traditional fairness research. To address this, we introduce a Shared Latent Space-based Debiasing (SLSD) method that transforms data from both the target domain, which lacks protected features, and a separate source domain, which contains these features, into correlated latent representations. This allows for joint training of a cross-domain protected group estimator on the representations. We then debias the downstream ML model with an adversarial learning technique that leverages the group estimator. We also present a relaxed variant of SLSD, the R-SLSD, that occasionally accesses a small subset of protected features from the target domain during its training phase. Our extensive experiments on benchmark datasets demonstrate that our methods consistently outperform existing state-of-the-art models in standard group fairness metrics.
33

Lamba, Hemank, Kit T. Rodolfa, and Rayid Ghani. "An Empirical Comparison of Bias Reduction Methods on Real-World Problems in High-Stakes Policy Settings." ACM SIGKDD Explorations Newsletter 23, no. 1 (May 26, 2021): 69–85. http://dx.doi.org/10.1145/3468507.3468518.

Full text of the source
Abstract:
Applications of machine learning (ML) to high-stakes policy settings - such as education, criminal justice, healthcare, and social service delivery - have grown rapidly in recent years, sparking important conversations about how to ensure fair outcomes from these systems. The machine learning research community has responded to this challenge with a wide array of proposed fairness-enhancing strategies for ML models, but despite the large number of methods that have been developed, little empirical work exists evaluating these methods in real-world settings. Here, we seek to fill this research gap by investigating the performance of several methods that operate at different points in the ML pipeline across four real-world public policy and social good problems. Across these problems, we find a wide degree of variability and inconsistency in the ability of many of these methods to improve model fairness, but postprocessing by choosing group-specific score thresholds consistently removes disparities, with important implications for both the ML research community and practitioners deploying machine learning to inform consequential policy decisions.
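Editorial note: to make the post-processing finding above concrete, here is a minimal sketch of choosing group-specific score thresholds. It equalizes recall across groups at a chosen target, which is only one of several thresholding criteria a practitioner might use; the function names and data layout are assumptions, not taken from the paper.

```python
# Editorial sketch: per-group score thresholds chosen so each group reaches
# roughly the same recall. Assumes every group has at least one labeled positive.
import numpy as np

def group_thresholds_for_recall(scores, y_true, group, target_recall=0.6):
    scores, y_true, group = map(np.asarray, (scores, y_true, group))
    thresholds = {}
    for g in np.unique(group):
        pos_scores = np.sort(scores[(group == g) & (y_true == 1)])[::-1]  # descending
        k = max(1, int(round(target_recall * len(pos_scores))))
        thresholds[g] = pos_scores[k - 1]  # score of the k-th highest-scoring positive
    return thresholds

def predict_with_group_thresholds(scores, group, thresholds):
    return np.array([int(s >= thresholds[g]) for s, g in zip(scores, group)])
```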
34

Shook, Jim, Robyn Smith, and Alex Antonio. "Transparency and Fairness in Machine Learning Applications." Symposium Edition - Artificial Intelligence and the Legal Profession 4, no. 5 (April 2018): 443–63. http://dx.doi.org/10.37419/jpl.v4.i5.2.

Full text of the source
Abstract:
Businesses and consumers increasingly use artificial intelligence (“AI”)— and specifically machine learning (“ML”) applications—in their daily work. ML is often used as a tool to help people perform their jobs more efficiently, but increasingly it is becoming a technology that may eventually replace humans in performing certain functions. An AI recently beat humans in a reading comprehension test, and there is an ongoing race to replace human drivers with self-driving cars and trucks. Tomorrow there is the potential for much more—as AI is even learning to build its own AI. As the use of AI technologies continues to expand, and especially as machines begin to act more autonomously with less human intervention, important questions arise about how we can best integrate this new technology into our society, particularly within our legal and compliance frameworks. The questions raised are different from those that we have already addressed with other technologies because AI is different. Most previous technologies functioned as a tool, operated by a person, and for legal purposes we could usually hold that person responsible for actions that resulted from using that tool. For example, an employee who used a computer to send a discriminatory or defamatory email could not have done so without the computer, but the employee would still be held responsible for creating the email. While AI can function as merely a tool, it can also be designed to act after making its own decisions, and in the future, will act even more autonomously. As AI becomes more autonomous, it will be more difficult to determine who—or what—is making decisions and taking actions, and determining the basis and responsibility for those actions. These are the challenges that must be overcome to ensure AI’s integration for legal and compliance purposes.
35

Ding, Xueying, Rui Xi, and Leman Akoglu. "Outlier Detection Bias Busted: Understanding Sources of Algorithmic Bias through Data-centric Factors." Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society 7 (October 16, 2024): 384–95. http://dx.doi.org/10.1609/aies.v7i1.31644.

Full text of the source
Abstract:
The astonishing successes of ML have raised growing concern for the fairness of modern methods when deployed in real-world settings. However, studies on fairness have mostly focused on supervised ML, while unsupervised outlier detection (OD), with numerous applications in finance, security, etc., has attracted little attention. While a few studies proposed fairness-enhanced OD algorithms, they remain agnostic to the underlying driving mechanisms or sources of unfairness. Even within the supervised ML literature, there exists debate on whether unfairness stems solely from algorithmic biases (i.e., design choices) or from the biases encoded in the data on which models are trained. To close this gap, this work aims to shed light on the possible sources of unfairness in OD by auditing detection models under different data-centric factors. By injecting various known biases into the input data (relating to sample size disparity, under-representation, feature measurement noise, and group membership obfuscation), we find that the OD algorithms under study all exhibit fairness pitfalls, although they differ in which types of data bias they are more susceptible to. Most notably, our study demonstrates that OD algorithm bias is not merely a data bias problem. A key realization is that the data properties that emerge from bias injection could just as well be organic, reflecting natural group differences with respect to sparsity, base rate, variance, and multi-modality. Either natural or biased, such data properties can give rise to unfairness as they interact with certain algorithmic design choices. Our work provides a deeper understanding of the possible sources of OD unfairness, and serves as a framework for assessing the unfairness of future OD algorithms under specific data-centric factors. It also paves the way for future work on mitigation strategies by underscoring the susceptibility of various design choices.
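Editorial note: the sketch below illustrates what "injecting data-centric biases" can look like for two of the factors named above (sample size disparity and feature measurement noise), applied to synthetic two-group data. It is not the paper's protocol; all names and parameters are illustrative.

```python
# Editorial sketch: two simple data-centric bias injections on synthetic data.
import numpy as np

rng = np.random.default_rng(0)

def make_groups(n_a=1000, n_b=1000, d=5):
    X_a = rng.normal(0.0, 1.0, size=(n_a, d))
    X_b = rng.normal(0.5, 1.0, size=(n_b, d))
    g = np.array([0] * n_a + [1] * n_b)
    return np.vstack([X_a, X_b]), g

def inject_sample_size_disparity(X, g, minority=1, keep_frac=0.2):
    # Randomly subsample the minority group to create sample size disparity.
    keep = (g != minority) | (rng.random(len(g)) < keep_frac)
    return X[keep], g[keep]

def inject_measurement_noise(X, g, noisy_group=1, sigma=0.5):
    # Add extra feature noise only for one group (measurement noise disparity).
    X = X.copy()
    mask = g == noisy_group
    X[mask] += rng.normal(0.0, sigma, size=X[mask].shape)
    return X, g
```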
36

Galhotra, Sainyam, Karthikeyan Shanmugam, Prasanna Sattigeri, and Kush R. Varshney. "Interventional Fairness with Indirect Knowledge of Unobserved Protected Attributes." Entropy 23, no. 12 (November 25, 2021): 1571. http://dx.doi.org/10.3390/e23121571.

Full text of the source
Abstract:
The deployment of machine learning (ML) systems in applications with societal impact has motivated the study of fairness for marginalized groups. Often, the protected attribute is absent from the training dataset for legal reasons. However, datasets still contain proxy attributes that capture protected information and can inject unfairness in the ML model. Some deployed systems allow auditors, decision makers, or affected users to report issues or seek recourse by flagging individual samples. In this work, we examine such systems and consider a feedback-based framework where the protected attribute is unavailable and the flagged samples are indirect knowledge. The reported samples are used as guidance to identify the proxy attributes that are causally dependent on the (unknown) protected attribute. We work under the causal interventional fairness paradigm. Without requiring the underlying structural causal model a priori, we propose an approach that performs conditional independence tests on observed data to identify such proxy attributes. We theoretically prove the optimality of our algorithm, bound its complexity, and complement it with an empirical evaluation demonstrating its efficacy on various real-world and synthetic datasets.
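Editorial note: for readers unfamiliar with dependence testing, the sketch below shows the flavor of check involved when screening for proxy attributes: for each categorical feature, test association with an auditor-flag indicator. The paper relies on proper conditional independence tests within a causal interventional framework, so this marginal chi-square test is only a simplified stand-in, and the column names are hypothetical.

```python
# Editorial sketch: flag categorical features associated with an
# auditor-provided "flagged" indicator via a marginal chi-square test.
import pandas as pd
from scipy.stats import chi2_contingency

def candidate_proxies(df: pd.DataFrame, flag_col: str, alpha: float = 0.05):
    proxies = []
    for col in df.columns:
        if col == flag_col:
            continue
        table = pd.crosstab(df[col], df[flag_col])
        _, p_value, _, _ = chi2_contingency(table)
        if p_value < alpha:  # reject independence -> candidate proxy attribute
            proxies.append((col, p_value))
    return sorted(proxies, key=lambda t: t[1])
```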
37

Xiao, Ying, Jie M. Zhang, Yepang Liu, Mohammad Reza Mousavi, Sicen Liu, and Dingyuan Xue. "MirrorFair: Fixing Fairness Bugs in Machine Learning Software via Counterfactual Predictions." Proceedings of the ACM on Software Engineering 1, FSE (July 12, 2024): 2121–43. http://dx.doi.org/10.1145/3660801.

Full text of the source
Abstract:
With the increasing utilization of Machine Learning (ML) software in critical domains such as employee hiring, college admission, and credit evaluation, ensuring fairness in the decision-making processes of underlying models has emerged as a paramount ethical concern. Nonetheless, existing methods for rectifying fairness issues can hardly strike a consistent trade-off between performance and fairness across diverse tasks and algorithms. Informed by the principles of counterfactual inference, this paper introduces MirrorFair, an innovative adaptive ensemble approach designed to mitigate fairness concerns. MirrorFair initially constructs a counterfactual dataset derived from the original data, training two distinct models—one on the original dataset and the other on the counterfactual dataset. Subsequently, MirrorFair adaptively combines these model predictions to generate fairer final decisions. We conduct an extensive evaluation of MirrorFair and compare it with 15 existing methods across a diverse range of decision-making scenarios. Our findings reveal that MirrorFair outperforms all the baselines in every measurement (i.e., fairness improvement, performance preservation, and trade-off metrics). Specifically, in 93% of cases, MirrorFair surpasses the fairness and performance trade-off baseline proposed by the benchmarking tool Fairea, whereas the state-of-the-art method achieves this in only 88% of cases. Furthermore, MirrorFair consistently demonstrates its superiority across various tasks and algorithms, ranking first in balancing model performance and fairness in 83% of scenarios. To foster replicability and future research, we have made our code, data, and results openly accessible to the research community.
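Editorial note: a much-simplified sketch of the counterfactual-ensemble idea follows: build a counterfactual copy of the data by flipping a binary protected attribute, train one model per copy, and combine their predictions. MirrorFair's counterfactual construction and adaptive weighting are considerably more sophisticated than the plain average used here, and the feature layout and model choice are assumptions.

```python
# Editorial sketch: counterfactual-ensemble idea with a plain average
# (placeholder for MirrorFair's adaptive combination).
import numpy as np
from sklearn.linear_model import LogisticRegression

def counterfactual_ensemble(X, y, protected_idx):
    X_cf = X.copy()
    X_cf[:, protected_idx] = 1 - X_cf[:, protected_idx]  # flip binary protected attribute
    m_orig = LogisticRegression(max_iter=1000).fit(X, y)
    m_cf = LogisticRegression(max_iter=1000).fit(X_cf, y)

    def predict_proba(X_new):
        p1 = m_orig.predict_proba(X_new)[:, 1]
        p2 = m_cf.predict_proba(X_new)[:, 1]
        return 0.5 * (p1 + p2)

    return predict_proba
```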
38

Igoche, Bern Igoche, Olumuyiwa Matthew, Peter Bednar, and Alexander Gegov. "Integrating Structural Causal Model Ontologies with LIME for Fair Machine Learning Explanations in Educational Admissions." Journal of Computing Theories and Applications 2, no. 1 (June 25, 2024): 65–85. http://dx.doi.org/10.62411/jcta.10501.

Full text of the source
Abstract:
This study employed knowledge discovery in databases (KDD) to extract and discover knowledge from the Benue State Polytechnic (Benpoly) admission database and used a structural causal model (SCM) ontological framework to represent the admission process in the Nigerian polytechnic education system. The SCM ontology identified important causal relations in features needed to model the admission process and was validated using the conditional independence test (CIT) criteria. The SCM ontology was further employed to identify and constrain input features causing bias in the local interpretable model-agnostic explanations (LIME) framework applied to machine learning (ML) black-box predictions. The ablation process produced more stable LIME explanations devoid of fairness bias compared to LIME without ablation, with higher prediction accuracy (91% vs. 89%) and F1 scores (95% vs. 94%). The study also compared the performance of different ML models, including Gaussian Naïve Bayes, Decision Trees, and Logistic Regression, before and after ablation. The limitation is that the SCM ontology is qualitative and context-specific, so the fair-LIME framework can only be extrapolated to similar contexts. Future work could compare other explanation frameworks like Shapley on the same dataset. Overall, this study demonstrates a novel approach to enforcing fairness in ML explanations by integrating qualitative SCM ontologies with quantitative ML/LIME methods.
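Editorial note: the ablation step described above can be illustrated as follows: drop the features a causal analysis has flagged as bias-inducing, retrain, and then explain individual predictions with LIME. The flagged feature names and data layout below are hypothetical placeholders, not the study's actual ontology output, and the model choice is arbitrary.

```python
# Editorial sketch: ablate flagged features, retrain, and explain with LIME.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lime.lime_tabular import LimeTabularExplainer

def train_and_explain(df: pd.DataFrame, target: str, flagged: list, row: int = 0):
    X = df.drop(columns=[target] + flagged)  # ablate bias-inducing features
    y = df[target]
    model = LogisticRegression(max_iter=1000).fit(X, y)
    explainer = LimeTabularExplainer(
        training_data=X.to_numpy(),
        feature_names=list(X.columns),
        mode="classification",
    )
    exp = explainer.explain_instance(X.iloc[row].to_numpy(), model.predict_proba, num_features=5)
    return exp.as_list()  # (feature, weight) pairs for the explained instance
```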
39

Vajiac, Catalina, Arun Frey, Joachim Baumann, Abigail Smith, Kasun Amarasinghe, Alice Lai, Kit T. Rodolfa, and Rayid Ghani. "Preventing Eviction-Caused Homelessness through ML-Informed Distribution of Rental Assistance." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 20 (March 24, 2024): 22393–400. http://dx.doi.org/10.1609/aaai.v38i20.30246.

Full text of the source
Abstract:
Rental assistance programs provide individuals with financial assistance to prevent housing instability caused by evictions and avert homelessness. Since these programs operate under resource constraints, they must decide who to prioritize. Typically, funding is distributed by a reactive allocation process that does not systematically consider risk of future homelessness. We partnered with Anonymous County (PA) to explore a proactive and preventative allocation approach that prioritizes individuals facing eviction based on their risk of future homelessness. Our ML models, trained on state and county administrative data, accurately identify at-risk individuals, outperforming simpler prioritization approaches by at least 20% while meeting our equity and fairness goals across race and gender. Furthermore, our approach would reach 28% of individuals who are overlooked by the current process and end up homeless. Beyond improvements to the rental assistance program in Anonymous County, this study can inform the development of evidence-based decision support tools in similar contexts, including lessons about data needs, model design, evaluation, and field validation.
40

Metevier, Blossom. "Pursuing Social Good: An Overview of Short- and Long-Term Fairness in Classification." ACM SIGCAS Computers and Society 52, no. 2 (September 2023): 6. http://dx.doi.org/10.1145/3656021.3656022.

Full text of the source
Abstract:
Machine learning (ML) models are increasingly being used to aid decision-making in high-risk applications. However, these models can perpetuate biases present in their training data or the systems in which they are integrated. When unaddressed, these biases can lead to harmful outcomes, such as misdiagnoses in healthcare [11], wrongful denials of loan applications [9], and over-policing of minority communities [2, 4]. Consequently, the fair ML community is dedicated to developing algorithms that minimize the influence of data and model bias.
41

Bantilan, Niels. "Themis-ml: A Fairness-Aware Machine Learning Interface for End-To-End Discrimination Discovery and Mitigation." Journal of Technology in Human Services 36, no. 1 (January 2, 2018): 15–30. http://dx.doi.org/10.1080/15228835.2017.1416512.

Full text of the source
42

Elglaly, Yasmine N., and Yudong Liu. "Promoting Machine Learning Fairness Education through Active Learning and Reflective Practices." ACM SIGCSE Bulletin 55, no. 3 (July 2023): 4–6. http://dx.doi.org/10.1145/3610585.3610589.

Full text of the source
Abstract:
As Natural Language Processing (NLP) has witnessed significant progress in the last decade and language technologies have gained widespread usage, there is an increasing acknowledgement that the choices made by NLP researchers and practitioners regarding data, methods, and tools carry significant ethical and societal implications. Consequently, there arises a pressing need for integrating ethics education into computer science (CS) curriculum, specifically within NLP and other related machine learning (ML) courses. In this project, our primary objective was to highlight the importance of fairness in ML ethics. We aimed to raise awareness regarding biases that can exist in machine learning, such as gender bias and disability bias. Acknowledging the intricate nature of the intersection between machine learning, ethics, and bias, we formed a participatory group comprising professors and students to develop the teaching interventions. The group members have experiences in machine learning, accessible computing, or both. It was crucial to include students in the design process of the teaching interventions because we wanted to ensure that fairness is sufficiently covered without being too complex to understand or too subtle to recognize [Tseng et al., 2022].
43

Seastedt, Kenneth P., Patrick Schwab, Zach O’Brien, Edith Wakida, Karen Herrera, Portia Grace F. Marcelo, Louis Agha-Mir-Salim, et al. "Global healthcare fairness: We should be sharing more, not less, data." PLOS Digital Health 1, no. 10 (October 6, 2022): e0000102. http://dx.doi.org/10.1371/journal.pdig.0000102.

Full text of the source
Abstract:
The availability of large, deidentified health datasets has enabled significant innovation in using machine learning (ML) to better understand patients and their diseases. However, questions remain regarding the true privacy of this data, patient control over their data, and how we regulate data sharing in a way that does not encumber progress or further potentiate biases for underrepresented populations. After reviewing the literature on potential reidentifications of patients in publicly available datasets, we argue that the cost—measured in terms of access to future medical innovations and clinical software—of slowing ML progress is too great to limit sharing data through large publicly available databases for concerns of imperfect data anonymization. This cost is especially great for developing countries where the barriers preventing inclusion in such databases will continue to rise, further excluding these populations and increasing existing biases that favor high-income countries. Preventing artificial intelligence’s progress towards precision medicine and sliding back to clinical practice dogma may pose a larger threat than concerns of potential patient reidentification within publicly available datasets. While the risk to patient privacy should be minimized, we believe this risk will never be zero, and society has to determine an acceptable risk threshold below which data sharing can occur—for the benefit of a global medical knowledge system.
44

Wang, Mini Han, Ruoyu Zhou, Zhiyuan Lin, Yang Yu, Peijin Zeng, Xiaoxiao Fang, Jie Yang, et al. "Can Explainable Artificial Intelligence Optimize the Data Quality of Machine Learning Model? Taking Meibomian Gland Dysfunction Detections as a Case Study." Journal of Physics: Conference Series 2650, no. 1 (November 1, 2023): 012025. http://dx.doi.org/10.1088/1742-6596/2650/1/012025.

Full text of the source
Abstract:
Data quality plays a crucial role in computer-aided diagnosis (CAD) for ophthalmic disease detection. Various methodologies for data enhancement and preprocessing exist, with varying effectiveness and impact on model performance. However, the process of identifying the most effective approach usually involves time-consuming and resource-intensive experiments to determine optimal parameters. To address this issue, this study introduces a novel guidance framework that utilizes Explainable Artificial Intelligence (XAI) to enhance data quality. This method provides evidence of the significant contribution of XAI in classifying meibomian gland dysfunction (MGD) by aiding in feature selection, improving model transparency, mitigating data biases, providing interpretability, enabling error analysis, and establishing trust in machine learning (ML) models using multi-source meibomian datasets. The experimental results demonstrate substantial performance improvements in ML models when utilizing enhanced datasets compared to original images, as indicated by increased accuracy (0.67 vs. 0.86), recall (0.46 vs. 0.89), F1 score (0.48 vs. 0.84), XAI indicator (0.51 vs. 0.81), and IOU score (0.44 vs. 0.79). These findings highlight the significant potential of XAI in ML-based MGD classification, particularly in advancing interpretability, standardization, fairness, domain integration, and clinical adoption. Consequently, the proposed framework not only saves valuable resources but also provides interpretable evidence for decision-making in data enhancement strategies. This study contributes to the understanding of XAI’s role in ML-based MGD classification and its potential for driving advancements in these key areas.
45

Raza, Shaina, Parisa Osivand Pour, and Syed Raza Bashir. "Fairness in Machine Learning Meets with Equity in Healthcare." Proceedings of the AAAI Symposium Series 1, no. 1 (October 3, 2023): 149–53. http://dx.doi.org/10.1609/aaaiss.v1i1.27493.

Full text of the source
Abstract:
With the growing utilization of machine learning in healthcare, there is increasing potential to enhance healthcare outcomes. However, this also brings the risk of perpetuating biases in data and model design that can harm certain demographic groups based on factors such as age, gender, and race. This study proposes an artificial intelligence framework, grounded in software engineering principles, for identifying and mitigating biases in data and models while ensuring fairness in healthcare settings. A case study is presented to demonstrate how systematic biases in data can lead to amplified biases in model predictions, and machine learning methods are suggested to prevent such biases. Future research aims to test and validate the proposed ML framework in real-world clinical settings to evaluate its impact on promoting health equity.
46

Begum, Shaik Salma. "JARVIS - Customer Support Chatbot with ML." International Journal of Scientific Research in Engineering and Management 08, no. 01 (January 13, 2024): 1–12. http://dx.doi.org/10.55041/ijsrem28053.

Full text of the source
Abstract:
Embarking on an unprecedented venture, this avant-garde initiative unveils a state-of-the-art customer support chatbot meticulously crafted to revolutionize the intricate landscape of global course registration for students hailing from diverse academic institutions. Positioned as a beacon of efficiency and accessibility, this platform facilitates seamless online enrollment across ten distinct subjects, transcending geographical and institutional boundaries to welcome students from around the world. In this digital haven for academic pursuits, each course commands a registration fee of 10,000 units, an investment in knowledge with the added allure of an exclusive 30% discount for early registrants, fostering a culture of expeditious and strategic engagement with the enrollment process. This innovative incentive structure not only propels swift registrations but also underscores a commitment to making quality education financially accessible. Navigating through this educational portal, aspiring learners encounter a dedicated helpline, a virtual concierge ready to address and assuage any queries related to the intricate registration process. As the enrollment journey unfolds, the system dynamically orchestrates a delicate ballet, promptly notifying users when the coveted cap of 60 students per subject is reached. This transparent communication ensures users are informed of seat unavailability in real-time, fortifying the platform's commitment to fairness and openness. However, the journey doesn't end here. For those facing unresolved challenges or encountering roadblocks during registration, the chatbot deftly orchestrates a seamless handoff to a specialized support clientele. This strategic escalation ensures that every student's concern is meticulously addressed by a dedicated team, fostering an environment where no query goes unanswered and no registration challenge remains insurmountable. In scenarios where course quotas reach their zenith, the support team takes center stage. Diligently reviewing and addressing concerns, they become the architects of successful registrations for students navigating the labyrinth of challenges. This comprehensive and user-centric approach doesn't merely strive for efficiency; it aspires to craft an immersive and transformative online course registration experience. Aiming beyond transactional engagements, this initiative envisions itself as a catalyst for global educational opportunities, prioritizing not only customer satisfaction but also the holistic resolution of any issue encountered on the educational journey. In essence, it seeks to redefine the paradigm of online education, making it not just accessible but an enriching and empowering experience for students worldwide.
47

Tang, Zicheng. "The Role of AI and ML in Transforming Marketing Strategies: Insights from Recent Studies." Advances in Economics, Management and Political Sciences 108, no. 1 (September 27, 2024): 132–39. http://dx.doi.org/10.54254/2754-1169/108/20242009.

Full text of the source
Abstract:
With the development of digital information technology, the application of AI and ML in marketing has become a key research direction. This review focuses on the applications of predictive analytics, personalization, advertising optimization, and customer experience enhancement in the marketing mix, surveys the latest applications and research results published in various journals in recent years, and summarizes the progress machine learning has made in this field. Machine learning can help enterprise decision-makers set decision guidelines, but it raises privacy controversies because of the large amount of customer data it requires, and as algorithms grow more complex, transparency and fairness also become major concerns. Finally, the paper offers directions and suggestions for future research based on the overall advantages and disadvantages, arguing that combining machine learning with human insight and multidisciplinary cooperation can further address these problems.
48

Patel, Ekta V., Kirit J. Modi, and Maitri H. Bhavsar. "Employee Performance Evaluation Using Machine Learning." International Journal of Advances in Engineering and Management 6, no. 11 (November 2024): 160–64. https://doi.org/10.35629/5252-0611160164.

Full text of the source
Abstract:
The application of machine learning (ML) in employee performance evaluation offers data-driven methods to improve traditional human resources (HR) processes, addressing issues of subjectivity and bias. This paper comprehensively reviews machine learning approaches to employee performance evaluation, including predictive modelling, artificial neural networks (ANNs), and natural language processing (NLP). By analysing over 20 sources, this paper examines the effectiveness, limitations, and ethical considerations of ML-based performance evaluation systems. We explore how these approaches can augment traditional HR methods, making evaluations more consistent, accurate, and actionable. This review also highlights best practices for ML model deployment and ethical challenges such as fairness, transparency, and privacy, aiming to lay a foundation for future research in AI-enhanced HR practices.
49

Goretzko, David, and Laura Sophia Finja Israel. "Pitfalls of Machine Learning-Based Personnel Selection." Journal of Personnel Psychology 21, no. 1 (January 2022): 37–47. http://dx.doi.org/10.1027/1866-5888/a000287.

Full text of the source
Abstract:
In recent years, machine learning (ML) modeling (often referred to as artificial intelligence) has become increasingly popular for personnel selection purposes. Numerous organizations use ML-based procedures for screening large candidate pools, while some companies try to automate the hiring process as far as possible. Since ML models can handle large sets of predictor variables and are therefore able to incorporate many different data sources (often more than common procedures can consider), they promise higher predictive accuracy and objectivity in selecting the best candidate than traditional personnel selection processes. However, there are some pitfalls and challenges that have to be taken into account when using ML for an issue as sensitive as personnel selection. In this paper, we address these major challenges – namely the definition of a valid criterion, transparency regarding collected data and decision mechanisms, algorithmic fairness, changing data conditions, and adequate performance evaluation – and discuss some recommendations for implementing fair, transparent, and accurate ML-based selection algorithms.
50

Mangal, Mudit, and Zachary A. Pardos. "Implementing equitable and intersectionality‐aware ML in education: A practical guide." British Journal of Educational Technology, May 23, 2024. http://dx.doi.org/10.1111/bjet.13484.

Full text of the source
Abstract:
The greater the proliferation of AI in educational contexts, the more important it becomes to ensure that AI adheres to the equity and inclusion values of an educational system or institution. Given that modern AI is based on historic datasets, mitigating historic biases with respect to protected classes (i.e., fairness) is an important component of this value alignment. Although extensive research has been done on AI fairness in education, there has been a lack of practitioner guidance that could enhance the practical uptake of these methods. In this work, we present a practitioner-oriented, step-by-step framework, based on findings from the field, to implement AI fairness techniques. We also present an empirical case study that applies this framework in the context of a grade prediction task using data from a large public university. Our novel findings from the case study and extended analyses underscore the importance of incorporating intersectionality (such as race and gender) as central to an institution's equity and inclusion values. Moreover, our research demonstrates the effectiveness of bias mitigation techniques, like adversarial learning, in enhancing fairness, particularly for intersectional categories like race-gender and race-income.
Practitioner notes. What is already known about this topic: AI-powered Educational Decision Support Systems (EDSS) are increasingly used in various educational contexts, such as course selection, admissions, scholarship allocation and identifying at-risk students. There are known challenges with AI in education, particularly around the reinforcement of existing biases, leading to unfair outcomes. The machine learning community has developed metrics and methods to measure and mitigate biases, which have been effectively applied to education, as seen in the AI in education literature. What this paper adds: a comprehensive technical framework for equity and inclusion, specifically for machine learning practitioners in AI education systems; a novel modification to the ABROCA fairness metric to better represent disparities among multiple subgroups within a protected class; an empirical analysis of the effectiveness of bias-mitigating techniques, like adversarial learning, in reducing biases for intersectional classes (e.g., race-gender, race-income); and model reporting in the form of model cards that can foster transparent communication among developers, users and stakeholders. Implications for practice and/or policy: the fairness framework can act as a systematic guide for practitioners to design equitable and inclusive AI-EDSS and to make compliance with emerging AI regulations more manageable, and stakeholders may become more involved in tailoring the fairness and equity model tuning process to align with their values.
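Editorial note: since this abstract references the ABROCA metric, a minimal sketch of the standard two-group form (absolute area between group-wise ROC curves) is given below. The paper's multi-subgroup modification is not reproduced here, and the function names are illustrative.

```python
# Editorial sketch: two-group ABROCA-style statistic (absolute area between
# the ROC curves of a baseline and a comparison group). 0.0 means the curves coincide.
import numpy as np
from sklearn.metrics import roc_curve

def abroca(y_true, y_score, group, baseline, comparison):
    y_true, y_score, group = map(np.asarray, (y_true, y_score, group))
    grid = np.linspace(0.0, 1.0, 1001)

    def interp_roc(g):
        m = group == g
        fpr, tpr, _ = roc_curve(y_true[m], y_score[m])
        return np.interp(grid, fpr, tpr)

    diff = np.abs(interp_roc(baseline) - interp_roc(comparison))
    return np.trapz(diff, grid)
```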