A selection of scholarly literature on the topic "ML fairness"

Format your source according to APA, MLA, Chicago, Harvard, and other citation styles

Browse the lists of current articles, books, dissertations, conference papers, and other scholarly sources on the topic "ML fairness".

Next to every work in the bibliography there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online, provided the corresponding data are available in the metadata.

Journal articles on the topic "ML fairness"

1

Weinberg, Lindsay. "Rethinking Fairness: An Interdisciplinary Survey of Critiques of Hegemonic ML Fairness Approaches." Journal of Artificial Intelligence Research 74 (May 6, 2022): 75–109. http://dx.doi.org/10.1613/jair.1.13196.

Full text of the source
Abstract:
This survey article assesses and compares existing critiques of current fairness-enhancing technical interventions in machine learning (ML) that draw from a range of non-computing disciplines, including philosophy, feminist studies, critical race and ethnic studies, legal studies, anthropology, and science and technology studies. It bridges epistemic divides in order to offer an interdisciplinary understanding of the possibilities and limits of hegemonic computational approaches to ML fairness for producing just outcomes for society’s most marginalized. The article is organized according to nine major themes of critique wherein these different fields intersect: 1) how "fairness" in AI fairness research gets defined; 2) how problems for AI systems to address get formulated; 3) the impacts of abstraction on how AI tools function and its propensity to lead to technological solutionism; 4) how racial classification operates within AI fairness research; 5) the use of AI fairness measures to avoid regulation and engage in ethics washing; 6) an absence of participatory design and democratic deliberation in AI fairness considerations; 7) data collection practices that entrench “bias,” are non-consensual, and lack transparency; 8) the predatory inclusion of marginalized groups into AI systems; and 9) a lack of engagement with AI’s long-term social and ethical outcomes. Drawing from these critiques, the article concludes by imagining future ML fairness research directions that actively disrupt entrenched power dynamics and structural injustices in society.
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Bærøe, Kristine, Torbjørn Gundersen, Edmund Henden, and Kjetil Rommetveit. "Can medical algorithms be fair? Three ethical quandaries and one dilemma." BMJ Health & Care Informatics 29, no. 1 (April 2022): e100445. http://dx.doi.org/10.1136/bmjhci-2021-100445.

Full text of the source
Abstract:
Objective: To demonstrate what it takes to reconcile the idea of fairness in medical algorithms and machine learning (ML) with the broader discourse of fairness and health equality in health research. Method: The methodological approach used in this paper is theoretical and ethical analysis. Result: We show that the question of ensuring comprehensive ML fairness is interrelated to three quandaries and one dilemma. Discussion: As fairness in ML depends on a nexus of inherent justice and fairness concerns embedded in health research, a comprehensive conceptualisation is called for to make the notion useful. Conclusion: This paper demonstrates that more analytical work is needed to conceptualise fairness in ML so it adequately reflects the complexity of justice and fairness concerns within the field of health research.
Styles: APA, Harvard, Vancouver, ISO, etc.
3

Li, Yanjun, Huan Huang, Qiang Geng, Xinwei Guo, and Yuyu Yuan. "Fairness Measures of Machine Learning Models in Judicial Penalty Prediction." 網際網路技術學刊 (Journal of Internet Technology) 23, no. 5 (September 2022): 1109–16. http://dx.doi.org/10.53106/160792642022092305019.

Full text of the source
Abstract:
Machine learning (ML) has been widely adopted in many software applications across domains. However, accompanying the outstanding performance, the behaviors of the ML models, which are essentially a kind of black-box software, could be unfair and hard to understand in many cases. In our human-centered society, an unfair decision could potentially damage human value, even causing severe social consequences, especially in decision-critical scenarios such as legal judgment. Although some existing works investigated the ML models in terms of robustness, accuracy, security, privacy, quality, etc., the study on the fairness of ML is still in the early stage. In this paper, we first proposed a set of fairness metrics for ML models from different perspectives. Based on this, we performed a comparative study on the fairness of existing widely used classic ML and deep learning models in the domain of real-world judicial judgments. The experiment results reveal that the current state-of-the-art ML models could still raise concerns for unfair decision-making. ML models with both high accuracy and fairness are urgently needed.
Styles: APA, Harvard, Vancouver, ISO, etc.
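The specific metric set proposed in the entry above is not reproduced here, but two group-level measures that studies of this kind routinely report can be computed directly from model outputs. The following is a minimal sketch, assuming binary predictions, binary ground-truth labels, and a binary protected attribute; the variable names and toy data are illustrative, not taken from the paper.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Difference in positive-prediction rates between the two groups."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

def equal_opportunity_difference(y_true, y_pred, group):
    """Difference in true-positive rates between the two groups."""
    tpr = []
    for g in (0, 1):
        mask = (group == g) & (y_true == 1)
        tpr.append(y_pred[mask].mean())
    return abs(tpr[0] - tpr[1])

# Toy example: predictions for ten individuals from two demographic groups.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 1, 0, 0])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

print("Demographic parity difference:", demographic_parity_difference(y_pred, group))
print("Equal opportunity difference: ", equal_opportunity_difference(y_true, y_pred, group))
```

In practice such metrics are computed per protected attribute and per decision threshold; the study above compares many models along several such axes.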
4

Ghosh, Bishwamittra, Debabrota Basu, and Kuldeep S. Meel. "Algorithmic Fairness Verification with Graphical Models." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 9 (June 28, 2022): 9539–48. http://dx.doi.org/10.1609/aaai.v36i9.21187.

Full text of the source
Abstract:
In recent years, machine learning (ML) algorithms have been deployed in safety-critical and high-stake decision-making, where the fairness of algorithms is of paramount importance. Fairness in ML centers on detecting bias towards certain demographic populations induced by an ML classifier and proposes algorithmic solutions to mitigate the bias with respect to different fairness definitions. To this end, several fairness verifiers have been proposed that compute the bias in the prediction of an ML classifier—essentially beyond a finite dataset—given the probability distribution of input features. In the context of verifying linear classifiers, existing fairness verifiers are limited by accuracy due to imprecise modeling of correlations among features and scalability due to restrictive formulations of the classifiers as SSAT/SMT formulas or by sampling. In this paper, we propose an efficient fairness verifier, called FVGM, that encodes the correlations among features as a Bayesian network. In contrast to existing verifiers, FVGM proposes a stochastic subset-sum based approach for verifying linear classifiers. Experimentally, we show that FVGM leads to an accurate and scalable assessment for more diverse families of fairness-enhancing algorithms, fairness attacks, and group/causal fairness metrics than the state-of-the-art fairness verifiers. We also demonstrate that FVGM facilitates the computation of fairness influence functions as a stepping stone to detect the source of bias induced by subsets of features.
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Kuzucu, Selim, Jiaee Cheong, Hatice Gunes, and Sinan Kalkan. "Uncertainty as a Fairness Measure." Journal of Artificial Intelligence Research 81 (October 13, 2024): 307–35. http://dx.doi.org/10.1613/jair.1.16041.

Full text of the source
Abstract:
Unfair predictions of machine learning (ML) models impede their broad acceptance in real-world settings. Tackling this arduous challenge first necessitates defining what it means for an ML model to be fair. This has been addressed by the ML community with various measures of fairness that depend on the prediction outcomes of the ML models, either at the group-level or the individual-level. These fairness measures are limited in that they utilize point predictions, neglecting their variances, or uncertainties, making them susceptible to noise, missingness and shifts in data. In this paper, we first show that an ML model may appear to be fair with existing point-based fairness measures but biased against a demographic group in terms of prediction uncertainties. Then, we introduce new fairness measures based on different types of uncertainties, namely, aleatoric uncertainty and epistemic uncertainty. We demonstrate on many datasets that (i) our uncertainty-based measures are complementary to existing measures of fairness, and (ii) they provide more insights about the underlying issues leading to bias.
Styles: APA, Harvard, Vancouver, ISO, etc.
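The measures proposed in the entry above build on a decomposition into aleatoric and epistemic uncertainty, which is not reproduced here. As a rough illustration of the underlying idea only, the sketch below compares the average predictive entropy of a binary classifier across two demographic groups: both groups receive identical positive rates, so point-based metrics see no disparity, while their prediction uncertainties differ sharply. All names and numbers are illustrative assumptions.

```python
import numpy as np

def predictive_entropy(p):
    """Binary predictive entropy (in nats); p is the predicted probability of the positive class."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def uncertainty_disparity(probs, group):
    """Absolute difference in mean predictive entropy between two groups."""
    mean_a = predictive_entropy(probs[group == 0]).mean()
    mean_b = predictive_entropy(probs[group == 1]).mean()
    return abs(mean_a - mean_b)

# Toy example: both groups have the same positive-prediction rate at threshold 0.5,
# but group 1's predictions are close to chance while group 0's are confident.
probs = np.array([0.05, 0.95, 0.10, 0.90,   # group 0: confident predictions
                  0.45, 0.55, 0.40, 0.60])  # group 1: near-chance predictions
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print("Positive rate, group 0:", (probs[group == 0] >= 0.5).mean())
print("Positive rate, group 1:", (probs[group == 1] >= 0.5).mean())
print("Uncertainty disparity:", uncertainty_disparity(probs, group))
```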
6

Weerts, Hilde, Florian Pfisterer, Matthias Feurer, Katharina Eggensperger, Edward Bergman, Noor Awad, Joaquin Vanschoren, Mykola Pechenizkiy, Bernd Bischl, and Frank Hutter. "Can Fairness be Automated? Guidelines and Opportunities for Fairness-aware AutoML." Journal of Artificial Intelligence Research 79 (February 17, 2024): 639–77. http://dx.doi.org/10.1613/jair.1.14747.

Full text of the source
Abstract:
The field of automated machine learning (AutoML) introduces techniques that automate parts of the development of machine learning (ML) systems, accelerating the process and reducing barriers for novices. However, decisions derived from ML models can reproduce, amplify, or even introduce unfairness in our societies, causing harm to (groups of) individuals. In response, researchers have started to propose AutoML systems that jointly optimize fairness and predictive performance to mitigate fairness-related harm. However, fairness is a complex and inherently interdisciplinary subject, and solely posing it as an optimization problem can have adverse side effects. With this work, we aim to raise awareness among developers of AutoML systems about such limitations of fairness-aware AutoML, while also calling attention to the potential of AutoML as a tool for fairness research. We present a comprehensive overview of different ways in which fairness-related harm can arise and the ensuing implications for the design of fairness-aware AutoML. We conclude that while fairness cannot be automated, fairness-aware AutoML can play an important role in the toolbox of ML practitioners. We highlight several open technical challenges for future work in this direction. Additionally, we advocate for the creation of more user-centered assistive systems designed to tackle challenges encountered in fairness work. This article appears in the AI & Society track.
Styles: APA, Harvard, Vancouver, ISO, etc.
7

Makhlouf, Karima, Sami Zhioua, and Catuscia Palamidessi. "On the Applicability of Machine Learning Fairness Notions." ACM SIGKDD Explorations Newsletter 23, no. 1 (May 26, 2021): 14–23. http://dx.doi.org/10.1145/3468507.3468511.

Full text of the source
Abstract:
Machine Learning (ML) based predictive systems are increasingly used to support decisions with a critical impact on individuals' lives such as college admission, job hiring, child custody, criminal risk assessment, etc. As a result, fairness emerged as an important requirement to guarantee that ML predictive systems do not discriminate against specific individuals or entire sub-populations, in particular, minorities. Given the inherent subjectivity of viewing the concept of fairness, several notions of fairness have been introduced in the literature. This paper is a survey of fairness notions that, unlike other surveys in the literature, addresses the question of "which notion of fairness is most suited to a given real-world scenario and why?". Our attempt to answer this question consists in (1) identifying the set of fairness-related characteristics of the real-world scenario at hand, (2) analyzing the behavior of each fairness notion, and then (3) fitting these two elements to recommend the most suitable fairness notion in every specific setup. The results are summarized in a decision diagram that can be used by practitioners and policy makers to navigate the relatively large catalogue of ML fairness notions.
Styles: APA, Harvard, Vancouver, ISO, etc.
8

Singh, Vivek K., and Kailash Joshi. "Integrating Fairness in Machine Learning Development Life Cycle: Fair CRISP-DM." e-Service Journal 14, no. 2 (December 2022): 1–24. http://dx.doi.org/10.2979/esj.2022.a886946.

Full text of the source
Abstract:
Developing efficient processes for building machine learning (ML) applications is an emerging topic for research. One of the well-known frameworks for organizing, developing, and deploying predictive machine learning models is the Cross-Industry Standard Process for Data Mining (CRISP-DM). However, the framework does not provide any guidelines for detecting and mitigating different types of fairness-related biases in the development of ML applications. The study of these biases is a relatively recent stream of research. To address this significant theoretical and practical gap, we propose a new framework—Fair CRISP-DM, which groups and maps these biases corresponding to each phase of an ML application development. Through this study, we contribute to the literature on ML development and fairness. We present recommendations to ML researchers on including fairness as part of the ML evaluation process. Further, ML practitioners can use our framework to identify and mitigate fairness-related biases in each phase of an ML project development. Finally, we also discuss emerging technologies which can help developers to detect and mitigate biases in different stages of ML application development.
Styles: APA, Harvard, Vancouver, ISO, etc.
9

Zhou, Zijian, Xinyi Xu, Rachael Hwee Ling Sim, Chuan Sheng Foo, and Bryan Kian Hsiang Low. "Probably Approximate Shapley Fairness with Applications in Machine Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 5 (June 26, 2023): 5910–18. http://dx.doi.org/10.1609/aaai.v37i5.25732.

Full text of the source
Abstract:
The Shapley value (SV) is adopted in various scenarios in machine learning (ML), including data valuation, agent valuation, and feature attribution, as it satisfies their fairness requirements. However, as exact SVs are infeasible to compute in practice, SV estimates are approximated instead. This approximation step raises an important question: do the SV estimates preserve the fairness guarantees of exact SVs? We observe that the fairness guarantees of exact SVs are too restrictive for SV estimates. Thus, we generalise Shapley fairness to probably approximate Shapley fairness and propose fidelity score, a metric to measure the variation of SV estimates, that determines how probable the fairness guarantees hold. Our last theoretical contribution is a novel greedy active estimation (GAE) algorithm that will maximise the lowest fidelity score and achieve a better fairness guarantee than the de facto Monte-Carlo estimation. We empirically verify GAE outperforms several existing methods in guaranteeing fairness while remaining competitive in estimation accuracy in various ML scenarios using real-world datasets.
Styles: APA, Harvard, Vancouver, ISO, etc.
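The GAE algorithm from the entry above is not reproduced here; the sketch below shows only the plain Monte-Carlo permutation estimator that such work treats as the de facto baseline, applied to a toy data-valuation game. The value function and the data are invented purely for illustration.

```python
import random

def shapley_monte_carlo(players, value_fn, num_permutations=2000, seed=0):
    """Estimate Shapley values by averaging marginal contributions over random permutations."""
    rng = random.Random(seed)
    estimates = {p: 0.0 for p in players}
    for _ in range(num_permutations):
        order = list(players)
        rng.shuffle(order)
        coalition = []
        prev_value = value_fn(coalition)
        for p in order:
            coalition.append(p)
            new_value = value_fn(coalition)
            estimates[p] += new_value - prev_value
            prev_value = new_value
    return {p: v / num_permutations for p, v in estimates.items()}

# Toy "data valuation" setting: each data point contributes its quality score,
# but a duplicated point (same payload) adds nothing beyond the first copy.
data = {"a": ("x1", 3.0), "b": ("x2", 1.0), "c": ("x1", 3.0)}  # "c" duplicates "a"

def value_fn(coalition):
    seen, total = set(), 0.0
    for p in coalition:
        payload, quality = data[p]
        if payload not in seen:
            seen.add(payload)
            total += quality
    return total

print(shapley_monte_carlo(list(data), value_fn))
# Exact Shapley values for this game: a = c = 1.5, b = 1.0; the estimates converge to these.
```

The paper's point is precisely that estimates like these only satisfy fairness axioms approximately, which motivates measuring how far a given estimate can be trusted.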
10

Sreerama, Jeevan, and Gowrisankar Krishnamoorthy. "Ethical Considerations in AI: Addressing Bias and Fairness in Machine Learning Models." Journal of Knowledge Learning and Science Technology 1, no. 1 (September 14, 2022): 130–38. http://dx.doi.org/10.60087/jklst.vol1.n1.p138.

Full text of the source
Abstract:
The proliferation of artificial intelligence (AI) and machine learning (ML) technologies has brought about unprecedented advancements in various domains. However, concerns surrounding bias and fairness in ML models have gained significant attention, raising ethical considerations that must be addressed. This paper explores the ethical implications of bias in AI systems and the importance of ensuring fairness in ML models. It examines the sources of bias in data collection, algorithm design, and decision-making processes, highlighting the potential consequences of biased AI systems on individuals and society. Furthermore, the paper discusses various approaches and strategies for mitigating bias and promoting fairness in ML models, including data preprocessing techniques, algorithmic transparency, and diverse representation in training datasets. Ethical guidelines and frameworks for developing responsible AI systems are also reviewed, emphasizing the need for interdisciplinary collaboration and stakeholder engagement to address bias and fairness comprehensively. Finally, future directions and challenges in advancing ethical considerations in AI are discussed, underscoring the ongoing efforts required to build trustworthy and equitable AI technologies.
Styles: APA, Harvard, Vancouver, ISO, etc.
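Among the mitigation strategies the entry above groups under data preprocessing techniques, one of the simplest and best known is reweighing (Kamiran and Calders), which gives each training example the weight P(A=a)P(Y=y)/P(A=a, Y=y) so that the protected attribute and the label become statistically independent under the weighted distribution. The sketch below is an illustrative implementation of that generic technique, not of anything specific to this paper; the column names are assumptions.

```python
import pandas as pd

def reweighing_weights(df, protected_col, label_col):
    """Per-row weights making the protected attribute independent of the label
    under the weighted empirical distribution (Kamiran & Calders reweighing)."""
    n = len(df)
    p_group = df[protected_col].value_counts(normalize=True)
    p_label = df[label_col].value_counts(normalize=True)
    p_joint = df.groupby([protected_col, label_col]).size() / n
    expected = df.apply(lambda r: p_group[r[protected_col]] * p_label[r[label_col]], axis=1)
    observed = df.apply(lambda r: p_joint[(r[protected_col], r[label_col])], axis=1)
    return expected / observed

# Toy data: the positive label is over-represented in group "m".
df = pd.DataFrame({
    "sex":   ["m", "m", "m", "m", "f", "f", "f", "f"],
    "hired": [1,   1,   1,   0,   1,   0,   0,   0],
})
df["weight"] = reweighing_weights(df, "sex", "hired")
print(df)
# The weights can then be passed to most scikit-learn estimators via the sample_weight argument of fit().
```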

Dissertations on the topic "ML fairness"

1

Kaplan, Caelin. "Compromis inhérents à l'apprentissage automatique préservant la confidentialité." Electronic Thesis or Diss., Université Côte d'Azur, 2024. http://www.theses.fr/2024COAZ4045.

Full text of the source
Abstract:
As machine learning (ML) models are increasingly integrated into a wide range of applications, ensuring the privacy of individuals' data is becoming more important than ever. However, privacy-preserving ML techniques often result in reduced task-specific utility and may negatively impact other essential factors like fairness, robustness, and interpretability. These challenges have limited the widespread adoption of privacy-preserving methods. This thesis aims to address these challenges through two primary goals: (1) to deepen the understanding of key trade-offs in three privacy-preserving ML techniques: differential privacy, empirical privacy defenses, and federated learning; (2) to propose novel methods and algorithms that improve utility and effectiveness while maintaining privacy protections.
The first study in this thesis investigates how differential privacy impacts fairness across groups defined by sensitive attributes. While previous assumptions suggested that differential privacy could exacerbate unfairness in ML models, our experiments demonstrate that selecting an optimal model architecture and tuning hyperparameters for DP-SGD (Differentially Private Stochastic Gradient Descent) can mitigate fairness disparities. Using standard ML fairness datasets, we show that group disparities in metrics like demographic parity, equalized odds, and predictive parity are often reduced or remain negligible when compared to non-private baselines, challenging the prevailing notion that differential privacy worsens fairness for underrepresented groups.
The second study focuses on empirical privacy defenses, which aim to protect training data privacy while minimizing utility loss. Most existing defenses assume access to reference data, an additional dataset from the same or a similar distribution as the training data. However, previous works have largely neglected to evaluate the privacy risks associated with reference data. To address this, we conducted the first comprehensive analysis of reference data privacy in empirical defenses. We proposed a baseline defense method, Weighted Empirical Risk Minimization (WERM), which allows for a clearer understanding of the trade-offs between model utility, training data privacy, and reference data privacy. In addition to offering theoretical guarantees on model utility and the relative privacy of training and reference data, WERM consistently outperforms state-of-the-art empirical privacy defenses in nearly all relative privacy regimes.
The third study addresses the convergence-related trade-offs in Collaborative Inference Systems (CISs), which are increasingly used in the Internet of Things (IoT) to enable smaller nodes in a network to offload part of their inference tasks to more powerful nodes. While Federated Learning (FL) is often used to jointly train models within CISs, traditional methods have overlooked the operational dynamics of these systems, such as heterogeneity in serving rates across nodes. We propose a novel FL approach explicitly designed for CISs, which accounts for varying serving rates and uneven data availability. Our framework provides theoretical guarantees and consistently outperforms state-of-the-art algorithms, particularly in scenarios where end devices handle high inference request rates.
In conclusion, this thesis advances the field of privacy-preserving ML by addressing key trade-offs in differential privacy, empirical privacy defenses, and federated learning. The proposed methods provide new insights into balancing privacy with utility and other critical factors, offering practical solutions for integrating privacy-preserving techniques into real-world applications. These contributions aim to support the responsible and ethical deployment of AI technologies that prioritize data privacy and protection.
Styles: APA, Harvard, Vancouver, ISO, etc.

Book chapters on the topic "ML fairness"

1

Steif, Ken. "People-based ML Models: Algorithmic Fairness." In Public Policy Analytics, 153–70. Boca Raton: CRC Press, 2021. http://dx.doi.org/10.1201/9781003054658-7.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
2

d’Aloisio, Giordano, Antinisca Di Marco, and Giovanni Stilo. "Democratizing Quality-Based Machine Learning Development through Extended Feature Models." In Fundamental Approaches to Software Engineering, 88–110. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-30826-0_5.

Full text of the source
Abstract:
ML systems have become an essential tool for experts of many domains, data scientists and researchers, allowing them to find answers to many complex business questions starting from raw datasets. Nevertheless, the development of ML systems able to satisfy the stakeholders' needs requires an appropriate amount of knowledge about the ML domain. Over the years, several solutions have been proposed to automate the development of ML systems. However, an approach taking into account the new quality concerns needed by ML systems (like fairness, interpretability, privacy, and others) is still missing. In this paper, we propose a new engineering approach for the quality-based development of ML systems by realizing a workflow formalized as a Software Product Line through Extended Feature Models to generate an ML System satisfying the required quality constraints. The proposed approach leverages an experimental environment that applies all the settings to enhance a given Quality Attribute, and selects the best one. The experimental environment is general and can be used for future quality methods' evaluations. Finally, we demonstrate the usefulness of our approach in the context of multi-class classification problem and fairness quality attribute.
Styles: APA, Harvard, Vancouver, ISO, etc.
3

Silva, Inês Oliveira e., Carlos Soares, Inês Sousa, and Rayid Ghani. "Systematic Analysis of the Impact of Label Noise Correction on ML Fairness." In Lecture Notes in Computer Science, 173–84. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-8391-9_14.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
4

Chopra, Deepti, and Roopal Khurana. "Bias and Fairness in ML." In Introduction to Machine Learning with Python, 116–22. BENTHAM SCIENCE PUBLISHERS, 2023. http://dx.doi.org/10.2174/9789815124422123010012.

Full text of the source
Abstract:
In machine learning and AI, future predictions are based on past observations, and bias is based on prior information. Harmful biases occur because of human biases which are learned by an algorithm from the training data. In the previous chapter, we discussed training versus testing, bounding the testing error, and VC dimension. In this chapter, we will discuss bias and fairness.
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Zhang, Wenbin, Zichong Wang, Juyong Kim, Cheng Cheng, Thomas Oommen, Pradeep Ravikumar, and Jeremy Weiss. "Individual Fairness Under Uncertainty." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2023. http://dx.doi.org/10.3233/faia230621.

Full text of the source
Abstract:
Algorithmic fairness, the research field of making machine learning (ML) algorithms fair, is an established area in ML. As ML technologies expand their application domains, including ones with high societal impact, it becomes essential to take fairness into consideration during the building of ML systems. Yet, despite its wide range of socially sensitive applications, most work treats the issue of algorithmic bias as an intrinsic property of supervised learning, i.e., the class label is given as a precondition. Unlike prior studies in fairness, we propose an individual fairness measure and a corresponding algorithm that deal with the challenges of uncertainty arising from censorship in class labels, while enforcing similar individuals to be treated similarly from a ranking perspective, free of the Lipschitz condition in the conventional individual fairness definition. We argue that this perspective represents a more realistic model of fairness research for real-world application deployment and show how learning with such a relaxed precondition draws new insights that better explains algorithmic fairness. We conducted experiments on four real-world datasets to evaluate our proposed method compared to other fairness models, demonstrating its superiority in minimizing discrimination while maintaining predictive performance with uncertainty present.
Styles: APA, Harvard, Vancouver, ISO, etc.
6

Cohen-Inger, Nurit, Guy Rozenblatt, Seffi Cohen, Lior Rokach, and Bracha Shapira. "FairUS - UpSampling Optimized Method for Boosting Fairness." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2024. http://dx.doi.org/10.3233/faia240585.

Full text of the source
Abstract:
The increasing application of machine learning (ML) in critical areas such as healthcare and finance highlights the importance of fairness in ML models, challenged by biases in training data that can lead to discrimination. We introduce ‘FairUS’, a novel pre-processing method for reducing bias in ML models utilizing the Conditional Generative Adversarial Network (CTGAN) to synthesize upsampled data. Unlike traditional approaches that focus solely on balancing subgroup sample sizes, FairUS strategically optimizes the quantity of synthesized data. This optimization aims to achieve an ideal balance between enhancing fairness and maintaining the overall performance of the model. Extensive evaluations of our method over several canonical datasets show that the proposed method enhances fairness by 2.7 times more than the related work and 4 times more than the baseline without mitigation, while preserving the performance of the ML model. Moreover, less than a third of the amount of synthetic data was needed on average. Uniquely, the proposed method enables decision-makers to choose the working point between improved fairness and model’s performance according to their preferences.
Styles: APA, Harvard, Vancouver, ISO, etc.
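Neither CTGAN nor the optimization procedure from the entry above is reproduced here. The sketch below conveys the pre-processing idea in a deliberately simplified form: plain resampling with replacement stands in for CTGAN synthesis, the model is retrained for several amounts of added minority-positive data, and the resulting accuracy and demographic-parity gap are printed so a working point can be chosen. The toy dataset, the column names, and the choice of logistic regression are all assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy data: group 1 is under-represented among positive examples.
n = 2000
group = rng.integers(0, 2, n)
x = rng.normal(size=(n, 3))
y = ((x[:, 0] - 0.8 * group + rng.normal(0, 0.3, n)) > 0).astype(int)
df = pd.DataFrame(x, columns=["f1", "f2", "f3"])
df["group"], df["y"] = group, y

train, test = train_test_split(df, test_size=0.5, random_state=0)

def fit_and_evaluate(train_df):
    """Train on the (possibly augmented) data and report accuracy and the demographic-parity gap."""
    model = LogisticRegression(max_iter=1000)
    model.fit(train_df[["f1", "f2", "f3", "group"]], train_df["y"])
    pred = model.predict(test[["f1", "f2", "f3", "group"]])
    acc = (pred == test["y"]).mean()
    dp_gap = abs(pred[test["group"] == 0].mean() - pred[test["group"] == 1].mean())
    return acc, dp_gap

# Scan over how many "synthetic" minority-positive rows to add.
minority_pos = train[(train["group"] == 1) & (train["y"] == 1)]
for n_extra in [0, 100, 300, 600]:
    synthetic = minority_pos.sample(n=n_extra, replace=True, random_state=0)
    acc, dp_gap = fit_and_evaluate(pd.concat([train, synthetic]))
    print(f"added={n_extra:4d}  accuracy={acc:.3f}  demographic-parity gap={dp_gap:.3f}")
```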
7

Kothai, G., S. Nandhagopal, P. Harish, S. Sarankumar, and S. Vidhya. "Transforming Data Visualization With AI and ML." In Advances in Business Information Systems and Analytics, 125–68. IGI Global, 2024. http://dx.doi.org/10.4018/979-8-3693-6537-3.ch007.

Full text of the source
Abstract:
This chapter explores the ethical considerations and challenges associated with AI-driven visualizations. It highlights the importance of ethics in maintaining trust, fairness, transparency, and privacy. The chapter discusses key challenges such as bias, transparency, privacy, accountability, and accessibility. Strategies for addressing these challenges include implementing ethical AI frameworks, enhancing transparency, promoting fairness, ensuring privacy, and fostering an ethical culture. Case studies from IBM Watson and Microsoft AI are examined to illustrate these points. Future trends in AI and ML for data visualization are also considered, emphasizing the need for responsible use of technology.
Styles: APA, Harvard, Vancouver, ISO, etc.
8

Bendoukha, Adda-Akram, Nesrine Kaaniche, Aymen Boudguiga, and Renaud Sirdey. "FairCognizer: A Model for Accurate Predictions with Inherent Fairness Evaluation." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2024. http://dx.doi.org/10.3233/faia240592.

Full text of the source
Abstract:
Algorithmic fairness is a critical challenge in building trustworthy Machine Learning (ML) models. ML classifiers strive to make predictions that closely match real-world observations (ground truth). However, if the ground truth data itself reflects biases against certain sub-populations, a dilemma arises: prioritize fairness and potentially reduce accuracy, or emphasize accuracy at the expense of fairness. This work proposes a novel training framework that goes beyond achieving high accuracy. Our framework trains a classifier to not only deliver optimal predictions but also to identify potential fairness risks associated with each prediction. To do so, we specify a dual-labeling strategy where the second label contains a per-prediction fairness evaluation, referred to as an unfairness risk evaluation. In addition, we identify a subset of samples as highly vulnerable to group-unfair classifiers. Our experiments demonstrate that our classifiers attain optimal accuracy levels on both the Adult-Census-Income and Compas-Recidivism datasets. Moreover, they identify unfair predictions with nearly 75% accuracy at the cost of expanding the size of the classifier by a mere 45%.
Styles: APA, Harvard, Vancouver, ISO, etc.
9

Wang, Song, Jing Ma, Lu Cheng, and Jundong Li. "Fair Few-Shot Learning with Auxiliary Sets." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2023. http://dx.doi.org/10.3233/faia230556.

Full text of the source
Abstract:
Recently, there has been a growing interest in developing machine learning (ML) models that can promote fairness, i.e., eliminating biased predictions towards certain populations (e.g., individuals from a specific demographic group). Most existing works learn such models based on well-designed fairness constraints in optimization. Nevertheless, in many practical ML tasks, only very few labeled data samples can be collected, which can lead to inferior fairness performance. This is because existing fairness constraints are designed to restrict the prediction disparity among different sensitive groups, but with few samples, it becomes difficult to accurately measure the disparity, thus rendering ineffective fairness optimization. In this paper, we define the fairness-aware learning task with limited training samples as the fair few-shot learning problem. To deal with this problem, we devise a novel framework that accumulates fairness-aware knowledge across different meta-training tasks and then generalizes the learned knowledge to meta-test tasks. To compensate for insufficient training samples, we propose an essential strategy to select and leverage an auxiliary set for each meta-test task. These auxiliary sets contain several labeled training samples that can enhance the model performance regarding fairness in meta-test tasks, thereby allowing for the transfer of learned useful fairness-oriented knowledge to meta-test tasks. Furthermore, we conduct extensive experiments on three real-world datasets to validate the superiority of our framework against the state-of-the-art baselines.
Styles: APA, Harvard, Vancouver, ISO, etc.
10

Sunitha, K. "Ethical Issues, Fairness, Accountability, and Transparency in AI/ML." In Handbook of Research on Applications of AI, Digital Twin, and Internet of Things for Sustainable Development, 103–23. IGI Global, 2023. http://dx.doi.org/10.4018/978-1-6684-6821-0.ch007.

Full text of the source
Abstract:
The ethical issues of how the computer evolved to the artificial intelligence and machine learning era are explored in this chapter. To develop these intelligent systems, what are the basic principles, policies, and rules? How are these systems helpful to humankind as well as to society? How are businesses and other relevant organizations adapting AI and ML? AI and ML are booming technology. They have major applications in healthcare, computer vision, traffic networks, manufacturing, business trade markets, and so on.
Styles: APA, Harvard, Vancouver, ISO, etc.

Conference papers on the topic "ML fairness"

1

Hertweck, Corinna, Michele Loi, and Christoph Heitz. "Group Fairness Refocused: Assessing the Social Impact of ML Systems." In 2024 11th IEEE Swiss Conference on Data Science (SDS), 189–96. IEEE, 2024. http://dx.doi.org/10.1109/sds60720.2024.00034.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Li, Zhiwei, Carl Kesselman, Mike D’Arcy, Michael Pazzani, and Benjamin Yizing Xu. "Deriva-ML: A Continuous FAIRness Approach to Reproducible Machine Learning Models." In 2024 IEEE 20th International Conference on e-Science (e-Science), 1–10. IEEE, 2024. http://dx.doi.org/10.1109/e-science62913.2024.10678671.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
3

Robles Herrera, Salvador, Verya Monjezi, Vladik Kreinovich, Ashutosh Trivedi, and Saeid Tizpaz-Niari. "Predicting Fairness of ML Software Configurations." In PROMISE '24: 20th International Conference on Predictive Models and Data Analytics in Software Engineering. New York, NY, USA: ACM, 2024. http://dx.doi.org/10.1145/3663533.3664040.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
4

Makhlouf, Karima, Sami Zhioua, and Catuscia Palamidessi. "Identifiability of Causal-based ML Fairness Notions." In 2022 14th International Conference on Computational Intelligence and Communication Networks (CICN). IEEE, 2022. http://dx.doi.org/10.1109/cicn56167.2022.10008263.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Baresi, Luciano, Chiara Criscuolo, and Carlo Ghezzi. "Understanding Fairness Requirements for ML-based Software." In 2023 IEEE 31st International Requirements Engineering Conference (RE). IEEE, 2023. http://dx.doi.org/10.1109/re57278.2023.00046.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
6

Eyuboglu, Sabri, Karan Goel, Arjun Desai, Lingjiao Chen, Mathew Monfort, Chris Ré, and James Zou. "Model ChangeLists: Characterizing Updates to ML Models." In FAccT '24: The 2024 ACM Conference on Fairness, Accountability, and Transparency. New York, NY, USA: ACM, 2024. http://dx.doi.org/10.1145/3630106.3659047.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
7

Wexler, James, Mahima Pushkarna, Sara Robinson, Tolga Bolukbasi, and Andrew Zaldivar. "Probing ML models for fairness with the what-if tool and SHAP." In FAT* '20: Conference on Fairness, Accountability, and Transparency. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3351095.3375662.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
8

Blili-Hamelin, Borhane, and Leif Hancox-Li. "Making Intelligence: Ethical Values in IQ and ML Benchmarks." In FAccT '23: the 2023 ACM Conference on Fairness, Accountability, and Transparency. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3593013.3593996.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
9

Heidari, Hoda, Michele Loi, Krishna P. Gummadi, and Andreas Krause. "A Moral Framework for Understanding Fair ML through Economic Models of Equality of Opportunity." In FAT* '19: Conference on Fairness, Accountability, and Transparency. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3287560.3287584.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
10

Smith, Jessie J., Saleema Amershi, Solon Barocas, Hanna Wallach, and Jennifer Wortman Vaughan. "REAL ML: Recognizing, Exploring, and Articulating Limitations of Machine Learning Research." In FAccT '22: 2022 ACM Conference on Fairness, Accountability, and Transparency. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3531146.3533122.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
We offer discounts on all premium plans for authors whose works are included in thematic literature collections. Contact us to get a unique promo code!
