Journal Articles on the Topic "Fairness-Accuracy Trade-Off"

Consult the top 38 journal articles for research on the topic "Fairness-Accuracy Trade-Off".

1

Jang, Taeuk, Pengyi Shi, and Xiaoqian Wang. "Group-Aware Threshold Adaptation for Fair Classification". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (June 28, 2022): 6988–95. http://dx.doi.org/10.1609/aaai.v36i6.20657.

Abstract:
Fairness in machine learning is receiving increasing attention as its applications in different fields continue to expand and diversify. To mitigate discriminatory model behavior across demographic groups, we introduce a novel post-processing method to optimize over multiple fairness constraints through group-aware threshold adaptation. We propose to learn adaptive classification thresholds for each demographic group by optimizing the confusion matrix estimated from the probability distribution of a classification model's output. Because we only need an estimated probability distribution of the model output instead of the classification model structure, our post-processing method can be applied to a wide range of classification models, improving fairness in a model-agnostic manner and preserving privacy. This even allows us to post-process existing fairness methods to further improve the trade-off between accuracy and fairness. Moreover, our model has low computational cost. We provide rigorous theoretical analysis on the convergence of our optimization algorithm and on the trade-off between accuracy and fairness. Our method theoretically attains a tighter upper bound on near-optimality than previous methods under the same conditions. Experimental results demonstrate that our method outperforms state-of-the-art methods and obtains the result that is closest to the theoretical accuracy-fairness trade-off boundary.
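As an illustration of the general post-processing idea summarized in this abstract, the sketch below chooses one decision threshold per demographic group by grid search over an accuracy-plus-demographic-parity objective. It is a minimal sketch under stated assumptions, not the authors' algorithm: the objective, the grid search, and the function names are introduced here for illustration only.

```python
import numpy as np
from itertools import product

def group_thresholds(scores, labels, groups, grid=None, lam=1.0):
    """Illustrative group-aware threshold post-processing.

    Grid-searches one decision threshold per demographic group so as to
    minimize classification error plus lam times the demographic-parity gap,
    using only the model's predicted scores (model-agnostic).
    """
    if grid is None:
        grid = np.linspace(0.05, 0.95, 19)
    group_ids = np.unique(groups)
    best_thresholds, best_objective = None, np.inf
    # Exhaustive search over per-group threshold combinations (fine for few groups).
    for combo in product(grid, repeat=len(group_ids)):
        thresholds = dict(zip(group_ids, combo))
        preds = np.array([s >= thresholds[g] for s, g in zip(scores, groups)])
        error = np.mean(preds != labels)
        positive_rates = [preds[groups == g].mean() for g in group_ids]
        dp_gap = max(positive_rates) - min(positive_rates)
        objective = error + lam * dp_gap
        if objective < best_objective:
            best_thresholds, best_objective = thresholds, objective
    return best_thresholds

# Toy usage with synthetic scores, labels, and a binary group attribute.
rng = np.random.default_rng(0)
scores = rng.random(500)
groups = rng.integers(0, 2, 500)
labels = (scores + 0.1 * groups > 0.55).astype(int)
print(group_thresholds(scores, labels, groups))
```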
2

Langenberg, Anna, Shih-Chi Ma, Tatiana Ermakova, and Benjamin Fabian. "Formal Group Fairness and Accuracy in Automated Decision Making". Mathematics 11, no. 8 (April 7, 2023): 1771. http://dx.doi.org/10.3390/math11081771.

Abstract:
Most research on fairness in Machine Learning assumes the relationship between fairness and accuracy to be a trade-off, with an increase in fairness leading to an unavoidable loss of accuracy. In this study, several approaches for fair Machine Learning are studied to experimentally analyze the relationship between accuracy and group fairness. The results indicated that group fairness and accuracy may even benefit each other, which emphasizes the importance of selecting appropriate measures for performance evaluation. This work provides a foundation for further studies on the adequate objectives of Machine Learning in the context of fair automated decision making.
3

Tae, Ki Hyun, Hantian Zhang, Jaeyoung Park, Kexin Rong, and Steven Euijong Whang. "Falcon: Fair Active Learning Using Multi-Armed Bandits". Proceedings of the VLDB Endowment 17, no. 5 (January 2024): 952–65. http://dx.doi.org/10.14778/3641204.3641207.

Abstract:
Biased data can lead to unfair machine learning models, highlighting the importance of embedding fairness at the beginning of data analysis, particularly during dataset curation and labeling. In response, we propose Falcon, a scalable fair active learning framework. Falcon adopts a data-centric approach that improves machine learning model fairness via strategic sample selection. Given a user-specified group fairness measure, Falcon identifies samples from "target groups" (e.g., (attribute=female, label=positive)) that are the most informative for improving fairness. However, a challenge arises since these target groups are defined using ground truth labels that are not available during sample selection. To handle this, we propose a novel trial-and-error method, where we postpone using a sample if the predicted label is different from the expected one and falls outside the target group. We also observe the trade-off that selecting more informative samples results in a higher likelihood of postponing due to undesired label prediction, and the optimal balance varies per dataset. We capture the trade-off between informativeness and postpone rate as policies and propose to automatically select the best policy using adversarial multi-armed bandit methods, given their computational efficiency and theoretical guarantees. Experiments show that Falcon significantly outperforms existing fair active learning approaches in terms of fairness and accuracy and is more efficient. In particular, only Falcon supports a proper trade-off between accuracy and fairness, where its maximum fairness score is 1.8–4.5x higher than the second-best results.
4

Badar, Maryam, Sandipan Sikdar, Wolfgang Nejdl, and Marco Fisichella. "FairTrade: Achieving Pareto-Optimal Trade-Offs between Balanced Accuracy and Fairness in Federated Learning". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 10 (March 24, 2024): 10962–70. http://dx.doi.org/10.1609/aaai.v38i10.28971.

Abstract:
As Federated Learning (FL) gains prominence in distributed machine learning applications, achieving fairness without compromising predictive performance becomes paramount. The data being gathered from distributed clients in an FL environment often leads to class imbalance. In such scenarios, balanced accuracy rather than accuracy is the true representation of model performance. However, most state-of-the-art fair FL methods report accuracy as the measure of performance, which can lead to misguided interpretations of the model's effectiveness to mitigate discrimination. To the best of our knowledge, this work presents the first attempt towards achieving Pareto-optimal trade-offs between balanced accuracy and fairness in a federated environment (FairTrade). By utilizing multi-objective optimization, the framework negotiates the intricate balance between model's balanced accuracy and fairness. The framework's agnostic design adeptly accommodates both statistical and causal fairness notions, ensuring its adaptability across diverse FL contexts. We provide empirical evidence of our framework's efficacy through extensive experiments on five real-world datasets and comparisons with six baselines. The empirical results underscore the potential of our framework in improving the trade-off between fairness and balanced accuracy in FL applications.
5

Li, Xuran, Peng Wu, and Jing Su. "Accurate Fairness: Improving Individual Fairness without Trading Accuracy". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (June 26, 2023): 14312–20. http://dx.doi.org/10.1609/aaai.v37i12.26674.

Abstract:
Accuracy and individual fairness are both crucial for trustworthy machine learning, but these two aspects are often incompatible with each other so that enhancing one aspect may sacrifice the other inevitably with side effects of true bias or false fairness. We propose in this paper a new fairness criterion, accurate fairness, to align individual fairness with accuracy. Informally, it requires the treatments of an individual and the individual's similar counterparts to conform to a uniform target, i.e., the ground truth of the individual. We prove that accurate fairness also implies typical group fairness criteria over a union of similar sub-populations. We then present a Siamese fairness in-processing approach to minimize the accuracy and fairness losses of a machine learning model under the accurate fairness constraints. To the best of our knowledge, this is the first time that a Siamese approach is adapted for bias mitigation. We also propose fairness confusion matrix-based metrics, fair-precision, fair-recall, and fair-F1 score, to quantify a trade-off between accuracy and individual fairness. Comparative case studies with popular fairness datasets show that our Siamese fairness approach can achieve on average 1.02%-8.78% higher individual fairness (in terms of fairness through awareness) and 8.38%-13.69% higher accuracy, as well as 10.09%-20.57% higher true fair rate, and 5.43%-10.01% higher fair-F1 score, than the state-of-the-art bias mitigation techniques. This demonstrates that our Siamese fairness approach can indeed improve individual fairness without trading accuracy. Finally, the accurate fairness criterion and Siamese fairness approach are applied to mitigate the possible service discrimination with a real Ctrip dataset, by on average fairly serving 112.33% more customers (specifically, 81.29% more customers in an accurately fair way) than baseline models.
6

Silvia, Chiappa, Jiang Ray, Stepleton Tom, Pacchiano Aldo, Jiang Heinrich, and Aslanides John. "A General Approach to Fairness with Optimal Transport". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 3633–40. http://dx.doi.org/10.1609/aaai.v34i04.5771.

Abstract:
We propose a general approach to fairness based on transporting distributions corresponding to different sensitive attributes to a common distribution. We use optimal transport theory to derive target distributions and methods that allow us to achieve fairness with minimal changes to the unfair model. Our approach is applicable to both classification and regression problems, can enforce different notions of fairness, and enables us to achieve a Pareto-optimal trade-off between accuracy and fairness. We demonstrate that it outperforms previous approaches on several benchmark fairness datasets.
7

Pinzón, Carlos, Catuscia Palamidessi, Pablo Piantanida, and Frank Valencia. "On the Impossibility of Non-trivial Accuracy in Presence of Fairness Constraints". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (June 28, 2022): 7993–8000. http://dx.doi.org/10.1609/aaai.v36i7.20770.

Abstract:
One of the main concerns about fairness in machine learning (ML) is that, in order to achieve it, one may have to trade off some accuracy. To overcome this issue, Hardt et al. proposed the notion of equality of opportunity (EO), which is compatible with maximal accuracy when the target label is deterministic with respect to the input features. In the probabilistic case, however, the issue is more complicated: It has been shown that under differential privacy constraints, there are data sources for which EO can only be achieved at the total detriment of accuracy, in the sense that a classifier that satisfies EO cannot be more accurate than a trivial (random guessing) classifier. In our paper we strengthen this result by removing the privacy constraint. Namely, we show that for certain data sources, the most accurate classifier that satisfies EO is a trivial classifier. Furthermore, we study the trade-off between accuracy and EO loss (opportunity difference), and provide a sufficient condition on the data source under which EO and non-trivial accuracy are compatible.
8

Singh, Arashdeep, Jashandeep Singh, Ariba Khan, and Amar Gupta. "Developing a Novel Fair-Loan Classifier through a Multi-Sensitive Debiasing Pipeline: DualFair". Machine Learning and Knowledge Extraction 4, no. 1 (March 12, 2022): 240–53. http://dx.doi.org/10.3390/make4010011.

Abstract:
Machine learning (ML) models are increasingly being used for high-stakes applications that can greatly impact people's lives. Sometimes, these models can be biased toward certain social groups on the basis of race, gender, or ethnicity. Many prior works have attempted to mitigate this "model discrimination" by updating the training data (pre-processing), altering the model learning process (in-processing), or manipulating the model output (post-processing). However, more work can be done in extending this situation to intersectional fairness, where we consider multiple sensitive parameters (e.g., race) and sensitive options (e.g., black or white), thus allowing for greater real-world usability. Prior work in fairness has also suffered from an accuracy–fairness trade-off that prevents both accuracy and fairness from being high. Moreover, the previous literature has not clearly presented holistic fairness metrics that work with intersectional fairness. In this paper, we address all three of these problems by (a) creating a bias mitigation technique called DualFair and (b) developing a new fairness metric (i.e., AWI, a measure of bias of an algorithm based upon inconsistent counterfactual predictions) that can handle intersectional fairness. Lastly, we test our novel mitigation method using a comprehensive U.S. mortgage lending dataset and show that our classifier, or fair loan predictor, obtains relatively high fairness and accuracy metrics.
9

Gitiaux, Xavier, and Huzefa Rangwala. "Fair Representations by Compression". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 13 (May 18, 2021): 11506–15. http://dx.doi.org/10.1609/aaai.v35i13.17370.

Abstract:
Organizations that collect and sell data face increasing scrutiny for the discriminatory use of data. We propose a novel unsupervised approach to map data into a compressed binary representation independent of sensitive attributes. We show that in an information bottleneck framework, a parsimonious representation should filter out information related to sensitive attributes if they are provided directly to the decoder. Empirical results show that the method achieves state-of-the-art accuracy-fairness trade-off and that explicit control of the entropy of the representation bit stream allows the user to move smoothly and simultaneously along both rate-distortion and rate-fairness curves.
10

Gao, Shiqi, Xianxian Li, Zhenkui Shi, Peng Liu, and Chunpei Li. "Towards Fair and Decentralized Federated Learning System for Gradient Boosting Decision Trees". Security and Communication Networks 2022 (August 2, 2022): 1–18. http://dx.doi.org/10.1155/2022/4202084.

Abstract:
Gradient boosting decision trees (GBDTs) have become a popular machine learning algorithm and have excelled in many data mining competitions and real-world applications thanks to their strong results on classification, ranking, prediction, etc. Federated learning, which aims to mitigate privacy risks and costs, enables many entities to keep data locally and train a model collaboratively under an orchestration service. However, most existing systems fail to strike a good trade-off between accuracy and communication. In addition, they overlook an important aspect: fairness, such as the performance gains derived from different parties' datasets. In this paper, we propose a novel federated GBDT scheme based on the blockchain which can achieve constant communication overhead and good model performance and quantify the contribution of each party. Specifically, we replace the tree-based communication scheme with a pure gradient-based scheme and compress the intermediate gradient information to a limit to achieve good model performance and constant communication overhead on skewed datasets. On the other hand, we introduce a novel contribution allocation scheme named split Shapley value, which can quantify the contribution of each party with a limited gradient update and provide a basis for monetary reward. Finally, we combine the quantification mechanism with blockchain organically and implement a closed-loop federated GBDT system, FGBDT-Chain, in a permissioned blockchain environment, and conduct a comprehensive experiment on public datasets. The experimental results show that FGBDT-Chain achieves a good trade-off between accuracy, communication overhead, fairness, and security under large-scale skewed datasets.
11

Pan, Chenglu, Jiarong Xu, Yue Yu, Ziqi Yang, Qingbiao Wu, Chunping Wang, Lei Chen, and Yang Yang. "Towards Fair Graph Federated Learning via Incentive Mechanisms". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 13 (March 24, 2024): 14499–507. http://dx.doi.org/10.1609/aaai.v38i13.29365.

Abstract:
Graph federated learning (FL) has emerged as a pivotal paradigm enabling multiple agents to collaboratively train a graph model while preserving local data privacy. Yet, current efforts overlook a key issue: agents are self-interested and would be hesitant to share data without fair and satisfactory incentives. This paper is the first endeavor to address this issue by studying the incentive mechanism for graph federated learning. We identify a unique phenomenon in graph federated learning: the presence of agents posing potential harm to the federation and agents contributing with delays. This stands in contrast to previous FL incentive mechanisms that assume all agents contribute positively and in a timely manner. In view of this, this paper presents a novel incentive mechanism tailored for fair graph federated learning, integrating incentives derived from both model gradient and payoff. To achieve this, we first introduce an agent valuation function aimed at quantifying agent contributions through the introduction of two criteria: gradient alignment and graph diversity. Moreover, due to the high heterogeneity in graph federated learning, striking a balance between accuracy and fairness becomes particularly crucial. We introduce motif prototypes to enhance accuracy, communicated between the server and agents, enhancing global model aggregation and aiding agents in local model optimization. Extensive experiments show that our model achieves the best trade-off between accuracy and the fairness of model gradient, as well as superior payoff fairness.
12

Li, Yanying, Xiuling Wang, Yue Ning, and Hui Wang. "FairLP: Towards Fair Link Prediction on Social Network Graphs". Proceedings of the International AAAI Conference on Web and Social Media 16 (May 31, 2022): 628–39. http://dx.doi.org/10.1609/icwsm.v16i1.19321.

Abstract:
Link prediction has been widely applied in social network analysis. Despite its importance, link prediction algorithms can be biased by disfavoring the links between individuals in particular demographic groups. In this paper, we study one particular type of bias, namely, the bias in predicting inter-group links (i.e., links across different demographic groups). First, we formalize the definition of bias in link prediction by providing quantitative measurements of accuracy disparity, which measures the difference in prediction accuracy of inter-group and intra-group links. Second, we unveil the existence of bias in six existing state-of-the-art link prediction algorithms through extensive empirical studies over real world datasets. Third, we identify the imbalanced density across intra-group and inter-group links in training graphs as one of the underlying causes of bias in link prediction. Based on the identified cause, fourth, we design a pre-processing bias mitigation method named FairLP to modify the training graph, aiming to balance the distribution of intra-group and inter-group links while preserving the network characteristics of the graph. FairLP is model-agnostic and thus is compatible with any existing link prediction algorithm. Our experimental results on real-world social network graphs demonstrate that FairLP achieves better trade-off between fairness and prediction accuracy than the existing fairness-enhancing link prediction methods.
13

Zeng, Ziqian, Rashidul Islam, Kamrun Naher Keya, James Foulds, Yangqiu Song, and Shimei Pan. "Fair Representation Learning for Heterogeneous Information Networks". Proceedings of the International AAAI Conference on Web and Social Media 15 (May 22, 2021): 877–87. http://dx.doi.org/10.1609/icwsm.v15i1.18111.

Abstract:
Recently, much attention has been paid to the societal impact of AI, especially concerns regarding its fairness. A growing body of research has identified unfair AI systems and proposed methods to debias them, yet many challenges remain. Representation learning methods for Heterogeneous Information Networks (HINs), fundamental building blocks used in complex network mining, have socially consequential applications such as automated career counseling, but there have been few attempts to ensure that it will not encode or amplify harmful biases, e.g. sexism in the job market. To address this gap, we propose a comprehensive set of de-biasing methods for fair HINs representation learning, including sampling-based, projection-based, and graph neural networks (GNNs)-based techniques. We systematically study the behavior of these algorithms, especially their capability in balancing the trade-off between fairness and prediction accuracy. We evaluate the performance of the proposed methods in an automated career counseling application where we mitigate gender bias in career recommendation. Based on the evaluation results on two datasets, we identify the most effective fair HINs representation learning techniques under different conditions.
14

Sun, Ying, Fariborz Haghighat, and Benjamin C. M. Fung. "Trade-off between accuracy and fairness of data-driven building and indoor environment models: A comparative study of pre-processing methods". Energy 239 (January 2022): 122273. http://dx.doi.org/10.1016/j.energy.2021.122273.

16

Li, Qin, Zhou, Cheng, Zhang, and Ai. "Intelligent Rapid Adaptive Offloading Algorithm for Computational Services in Dynamic Internet of Things System". Sensors 19, no. 15 (August 4, 2019): 3423. http://dx.doi.org/10.3390/s19153423.

Abstract:
As restricted resources have seriously limited the computational performance of massive Internet of things (IoT) devices, better processing capability is urgently required. As an innovative technology, multi-access edge computing can provide cloudlet capabilities by offloading computation-intensive services from devices to a nearby edge server. This paper proposes an intelligent rapid adaptive offloading (IRAO) algorithm for a dynamic IoT system to increase overall computational performance and simultaneously keep the fairness of multiple participants, which can achieve agile centralized control and solve the joint optimization problems related to offloading policy and resource allocation. For reducing algorithm execution time, we apply machine learning methods and construct an adaptive learning-based framework consisting of offloading decision-making, radio resource slicing and algorithm parameters updating. In particular, the offloading policy can be rapidly derived from an estimation algorithm based on a deep neural network, which uses an experience replay training method to improve model accuracy and adopts an asynchronous sampling trick to enhance training convergence performance. Extensive simulations with different parameters are conducted to maintain the trade-off between accuracy and efficiency of the IRAO algorithm. Compared with other candidates, the results illustrate that the IRAO algorithm can achieve superior performance in terms of scalability, effectiveness and efficiency.
17

HERBORDT, MARTIN C., and CHARLES C. WEEMS. "ENPASSANT: AN ENVIRONMENT FOR EVALUATING MASSIVELY PARALLEL ARRAY ARCHITECTURES FOR SPATIALLY MAPPED APPLICATIONS". International Journal of Pattern Recognition and Artificial Intelligence 09, no. 02 (April 1995): 175–200. http://dx.doi.org/10.1142/s0218001495000109.

Abstract:
Although massively parallel arrays for spatially mapped applications have been proposed since the 1950s [42] and built since the 1960s [12], there have been very few systematic empirical studies that cover more than a small fraction of the design space. The problems have included the lack of a test suite of non-trivial application codes; inadequate language support; the difficulties of balancing evaluation performance with flexibility; and balancing test suite portability with accuracy of evaluation. We describe an environment that addresses these problems. A realistic workload including a series of applications currently being used as building blocks in vision research has been constructed. Both flexibility in architectural parameter selection and simulation efficiency are maintained with a novel technique that combines virtual machine emulation with trace-driven simulation. The trade-off between fairness to diverse target architectures and programmability of the test suite is addressed through the use of operator and application libraries for a small set of critical functions. We also present examples of the type of results we are obtaining, including the effects of changing ALU designs and datapath widths, finding critical points in register set and cache sizes, the benefits of various types of router networks, and the performance cost of processor virtualization.
18

Kieslich, Kimon, Birte Keller, and Christopher Starke. "Artificial intelligence ethics by design. Evaluating public perception on the importance of ethical design principles of artificial intelligence". Big Data & Society 9, no. 1 (January 2022): 205395172210929. http://dx.doi.org/10.1177/20539517221092956.

Abstract:
Despite the immense societal importance of ethically designing artificial intelligence, little research on the public perceptions of ethical artificial intelligence principles exists. This becomes even more striking when considering that ethical artificial intelligence development has the aim to be human-centric and of benefit for the whole society. In this study, we investigate how ethical principles (explainability, fairness, security, accountability, accuracy, privacy, and machine autonomy) are weighted in comparison to each other. This is especially important, since simultaneously considering ethical principles is not only costly, but sometimes even impossible, as developers must make specific trade-off decisions. In this paper, we give first answers on the relative importance of ethical principles given a specific use case—the use of artificial intelligence in tax fraud detection. The results of a large conjoint survey suggest that, by and large, German respondents evaluate the ethical principles as equally important. However, subsequent cluster analysis shows that different preference models for ethically designed systems exist among the German population. These clusters substantially differ not only in the preferred ethical principles but also in the importance levels of the principles themselves. We further describe how these groups are constituted in terms of sociodemographics as well as opinions on artificial intelligence. Societal implications, as well as design challenges, are discussed.
19

Costa, Diogo, Miguel Costa, and Sandro Pinto. "Train Me If You Can: Decentralized Learning on the Deep Edge". Applied Sciences 12, no. 9 (May 6, 2022): 4653. http://dx.doi.org/10.3390/app12094653.

Abstract:
The end of Moore's Law aligned with data privacy concerns is forcing machine learning (ML) to shift from the cloud to the deep edge. In next-generation ML systems, the inference and part of the training process will be performed at the edge, while the cloud stays responsible for major updates. This new computing paradigm, called federated learning (FL), alleviates the cloud and network infrastructure while increasing data privacy. Recent advances empowered the inference pass of quantized artificial neural networks (ANNs) on Arm Cortex-M and RISC-V microcontroller units (MCUs). Nevertheless, the training remains confined to the cloud, imposing the transfer of high volumes of private data over a network and leading to unpredictable delays when ML applications attempt to adapt to adversarial environments. To fill this gap, we make the first attempt to evaluate the feasibility of ANN training in Arm Cortex-M MCUs. From the available optimization algorithms, stochastic gradient descent (SGD) has the best trade-off between accuracy, memory footprint, and latency. However, its original form and the variants available in the literature still do not fit the stringent requirements of Arm Cortex-M MCUs. We propose L-SGD, a lightweight implementation of SGD optimized for maximum speed and minimal memory footprint in this class of MCUs. We developed a floating-point version and another that operates over quantized weights. For a fully-connected ANN trained on the MNIST dataset, L-SGD (float-32) is 4.20× faster than SGD while requiring only 2.80% of the memory, with negligible accuracy loss. Results also show that quantized training is still not feasible for training an ANN from scratch, but it is a lightweight solution for performing minor model fixes and counteracting the fairness problem in typical FL systems.
20

Schwartz, Jessica M., Maureen George, Sarah Collins Rossetti, Patricia C. Dykes, Simon R. Minshall, Eugene Lucas, and Kenrick D. Cato. "Factors Influencing Clinician Trust in Predictive Clinical Decision Support Systems for In-Hospital Deterioration: Qualitative Descriptive Study". JMIR Human Factors 9, no. 2 (May 12, 2022): e33960. http://dx.doi.org/10.2196/33960.

Abstract:
Background: Clinician trust in machine learning–based clinical decision support systems (CDSSs) for predicting in-hospital deterioration (a type of predictive CDSS) is essential for adoption. Evidence shows that clinician trust in predictive CDSSs is influenced by perceived understandability and perceived accuracy. Objective: The aim of this study was to explore the phenomenon of clinician trust in predictive CDSSs for in-hospital deterioration by confirming and characterizing factors known to influence trust (understandability and accuracy), uncovering and describing other influencing factors, and comparing nurses' and prescribing providers' trust in predictive CDSSs. Methods: We followed a qualitative descriptive methodology conducting directed deductive and inductive content analysis of interview data. Directed deductive analyses were guided by the human-computer trust conceptual framework. Semistructured interviews were conducted with nurses and prescribing providers (physicians, physician assistants, or nurse practitioners) working with a predictive CDSS at 2 hospitals in Mass General Brigham. Results: A total of 17 clinicians were interviewed. Concepts from the human-computer trust conceptual framework—perceived understandability and perceived technical competence (ie, perceived accuracy)—were found to influence clinician trust in predictive CDSSs for in-hospital deterioration. The concordance between clinicians' impressions of patients' clinical status and system predictions influenced clinicians' perceptions of system accuracy. Understandability was influenced by system explanations, both global and local, as well as training. In total, 3 additional themes emerged from the inductive analysis. The first, perceived actionability, captured the variation in clinicians' desires for predictive CDSSs to recommend a discrete action. The second, evidence, described the importance of both macro- (scientific) and micro- (anecdotal) evidence for fostering trust. The final theme, equitability, described fairness in system predictions. The findings were largely similar between nurses and prescribing providers. Conclusions: Although there is a perceived trade-off between machine learning–based CDSS accuracy and understandability, our findings confirm that both are important for fostering clinician trust in predictive CDSSs for in-hospital deterioration. We found that reliance on the predictive CDSS in the clinical workflow may influence clinicians' requirements for trust. Future research should explore the impact of reliance, the optimal explanation design for enhancing understandability, and the role of perceived actionability in driving trust.
21

Li, Jingyang, and Guoqiang Li. "The Triangular Trade-off between Robustness, Accuracy and Fairness in Deep Neural Networks: A Survey". ACM Computing Surveys, February 12, 2024. http://dx.doi.org/10.1145/3645088.

Abstract:
With the rapid development of deep learning, AI systems are increasingly being used in complex and important domains, which necessitates the simultaneous fulfillment of multiple constraints: accuracy, robustness, and fairness. Accuracy measures how well a DNN can generalize to new data. Robustness demonstrates how well the network can withstand minor perturbations without changing the results. Fairness focuses on treating different groups equally. This survey provides an overview of the triangular trade-off among robustness, accuracy, and fairness in neural networks. This trade-off makes it difficult for AI systems to achieve true intelligence and is connected to generalization, robustness, and fairness in deep learning. The survey explores these trade-offs and their relationships to adversarial examples, adversarial training, and fair machine learning. The trade-offs between accuracy and robustness, accuracy and fairness, and robustness and fairness have been studied to different extents. However, there is a lack of taxonomy and analysis of these trade-offs. The accuracy-robustness trade-off is inherent in Gaussian models, but it varies when classes are not closely distributed. The accuracy-fairness and robustness-fairness trade-offs have been assessed empirically, but their theoretical nature needs more investigation. This survey aims to explore the origins, evolution, influencing factors, and future research directions of these trade-offs.
22

Talbert, Douglas A., Katherine L. Phillips, Katherine E. Brown, and Steve Talbert. "Assessing and Addressing Model Trustworthiness Trade-offs in Trauma Triage". International Journal on Artificial Intelligence Tools 33, no. 03 (April 25, 2024). http://dx.doi.org/10.1142/s0218213024600078.

Abstract:
Trauma triage occurs in suboptimal environments for making consequential decisions. Published triage studies demonstrate the extremes of the complexity/accuracy trade-off, either studying simple models with poor accuracy or very complex models with accuracies nearing published goals. Using a Level I Trauma Center’s registry cases (n = 50 644), this study describes, uses, and derives observations from a methodology to more thoroughly examine this trade-off. This or similar methods can provide the insight needed for practitioners to balance understandability with accuracy. Additionally, this study incorporates an evaluation of group-based fairness into this trade-off analysis to provide an additional dimension of insight into model selection. Lastly, this paper proposes and analyzes a multi-model approach to mitigating trust-related trade-offs. The experiments allow us to draw several conclusions regarding the machine learning models in the domain of trauma triage and demonstrate the value of our trade-off analysis to provide insight into choices regarding model complexity, model accuracy, and model fairness.
23

Chen, Zhenpeng, Jie M. Zhang, Federica Sarro, and Mark Harman. "A Comprehensive Empirical Study of Bias Mitigation Methods for Machine Learning Classifiers". ACM Transactions on Software Engineering and Methodology, February 9, 2023. http://dx.doi.org/10.1145/3583561.

Abstract:
Software bias is an increasingly important operational concern for software engineers. We present a large-scale, comprehensive empirical study of 17 representative bias mitigation methods for Machine Learning (ML) classifiers, evaluated with 11 ML performance metrics (e.g., accuracy), 4 fairness metrics, and 20 types of fairness-performance trade-off assessment, applied to 8 widely-adopted software decision tasks. The empirical coverage is much more comprehensive, covering the largest numbers of bias mitigation methods, evaluation metrics, and fairness-performance trade-off measures compared to previous work on this important software property. We find that (1) the bias mitigation methods significantly decrease ML performance in 53% of the studied scenarios (ranging between 42% and 66% according to different ML performance metrics); (2) the bias mitigation methods significantly improve fairness measured by the 4 used metrics in 46% of all the scenarios (ranging between 24% and 59% according to different fairness metrics); (3) the bias mitigation methods even lead to a decrease in both fairness and ML performance in 25% of the scenarios; (4) the effectiveness of the bias mitigation methods depends on tasks, models, the choice of protected attributes, and the set of metrics used to assess fairness and ML performance; (5) there is no bias mitigation method that can achieve the best trade-off in all the scenarios. The best method that we find outperforms other methods in 30% of the scenarios. Researchers and practitioners need to choose the bias mitigation method best suited to their intended application scenario(s).
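To give a concrete sense of the measurements such a trade-off study rests on, the sketch below computes one ML performance metric (accuracy) and one group fairness metric (statistical parity difference) for a set of predictions. This is a minimal, assumed illustration of a fairness-performance report, not the paper's evaluation protocol or metric set; the function name and the example data are introduced here.

```python
import numpy as np

def fairness_performance_report(y_true, y_pred, groups):
    """Compute a minimal fairness-performance report for binary predictions.

    Returns overall accuracy together with the statistical parity difference,
    i.e., the gap in positive-prediction rates between demographic groups.
    """
    accuracy = np.mean(y_true == y_pred)
    rates = [np.mean(y_pred[groups == g]) for g in np.unique(groups)]
    spd = max(rates) - min(rates)
    return {"accuracy": float(accuracy), "statistical_parity_difference": float(spd)}

# Example: compare a baseline model's predictions with a bias-mitigated model's.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])
baseline_pred = np.array([1, 0, 1, 1, 1, 0, 0, 0])
mitigated_pred = np.array([1, 0, 1, 0, 1, 0, 1, 0])
print(fairness_performance_report(y_true, baseline_pred, groups))
print(fairness_performance_report(y_true, mitigated_pred, groups))
```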
24

Buijsman, Stefan. "Navigating fairness measures and trade-offs". AI and Ethics, July 17, 2023. http://dx.doi.org/10.1007/s43681-023-00318-0.

Abstract:
To monitor and prevent bias in AI systems, we can use a wide range of (statistical) fairness measures. However, it is mathematically impossible to optimize all of these measures at the same time. In addition, optimizing a fairness measure often greatly reduces the accuracy of the system (Kozodoi et al., Eur J Oper Res 297:1083–1094, 2022). As a result, we need a substantive theory that informs us how to make these decisions and for what reasons. I show that by using Rawls' notion of justice as fairness, we can create a basis for navigating fairness measures and the accuracy trade-off. In particular, this leads to a principled choice focusing on both the most vulnerable groups and the type of fairness measure that has the biggest impact on that group. This also helps to close part of the gap between philosophical accounts of distributive justice and the fairness literature that has been observed by (Kuppler et al. Distributive justice and fairness metrics in automated decision-making: How much overlap is there? arXiv preprint arXiv:2105.01441, 2021), and to operationalise the value of fairness.
25

Wei, Chen, Kui Xu, Zhexian Shen, Xiaochen Xia, Wei Xie, and Chunguo Li. "Location-aided uplink transmission for user-centric cell-free massive MIMO systems: a fairness priority perspective". EURASIP Journal on Wireless Communications and Networking 2022, no. 1 (September 11, 2022). http://dx.doi.org/10.1186/s13638-022-02171-x.

Abstract:
In this paper, we investigate the uplink transmission for user-centric cell-free massive multiple-input multiple-output (MIMO) systems. The largest-large-scale-fading-based access point (AP) selection method is adopted to achieve a user-centric operation. Under this user-centric framework, we propose a novel inter-cluster interference-based (IC-IB) pilot assignment scheme to alleviate pilot contamination. Considering the local characteristics of channel estimates and statistics, we propose a location-aided distributed uplink combining scheme to balance the relationship among the spectral efficiency (SE), user equipment (UE) fairness and complexity, in which local partial minimum mean-squared error (LP-MMSE) combining is adopted for some APs, while maximum-ratio (MR) combining is adopted for the remaining APs. A corresponding AP selection scheme based on a novel proposed metric representing inter-user interference is proposed. We also propose a new fairness coefficient taking SE performance into account to indicate the UE fairness. Moreover, the performance of the proposed scheme is investigated under fractional power control and max–min fairness (MMF) power control. Simulation results demonstrate that the channel estimation accuracy of our proposed IC-IB pilot assignment scheme outperforms that of the conventional pilot assignment schemes. It is also shown that compared with the benchmark LP-MMSE combining, the proposed location-aided combining trades a 13.45% average SE loss for a 26.61% UE fairness improvement and a 28.58% complexity reduction when γ = 0.6. And by adjusting the threshold γ, a good trade-off between the average SE, UE fairness and computational complexity can be provided by the proposed scheme. Furthermore, the proposed scheme with fractional power control can better demonstrate the advantages of trade-off performance than MMF power control and full power transmission.
26

Duricic, Tomislav, Dominik Kowald, Emanuel Lacic, and Elisabeth Lex. "Beyond-accuracy: a review on diversity, serendipity, and fairness in recommender systems based on graph neural networks". Frontiers in Big Data 6 (December 19, 2023). http://dx.doi.org/10.3389/fdata.2023.1251072.

Abstract:
By providing personalized suggestions to users, recommender systems have become essential to numerous online platforms. Collaborative filtering, particularly graph-based approaches using Graph Neural Networks (GNNs), have demonstrated great results in terms of recommendation accuracy. However, accuracy may not always be the most important criterion for evaluating recommender systems' performance, since beyond-accuracy aspects such as recommendation diversity, serendipity, and fairness can strongly influence user engagement and satisfaction. This review paper focuses on addressing these dimensions in GNN-based recommender systems, going beyond the conventional accuracy-centric perspective. We begin by reviewing recent developments in approaches that improve not only the accuracy-diversity trade-off but also promote serendipity, and fairness in GNN-based recommender systems. We discuss different stages of model development including data preprocessing, graph construction, embedding initialization, propagation layers, embedding fusion, score computation, and training methodologies. Furthermore, we present a look into the practical difficulties encountered in assuring diversity, serendipity, and fairness, while retaining high accuracy. Finally, we discuss potential future research directions for developing more robust GNN-based recommender systems that go beyond the unidimensional perspective of focusing solely on accuracy. This review aims to provide researchers and practitioners with an in-depth understanding of the multifaceted issues that arise when designing GNN-based recommender systems, setting our work apart by offering a comprehensive exploration of beyond-accuracy dimensions.
27

Rueda, Jon, Janet Delgado Rodríguez, Iris Parra Jounou, Joaquín Hortal-Carmona, Txetxu Ausín, and David Rodríguez-Arias. "“Just” accuracy? Procedural fairness demands explainability in AI-based medical resource allocations". AI & SOCIETY, December 21, 2022. http://dx.doi.org/10.1007/s00146-022-01614-9.

Abstract:
The increasing application of artificial intelligence (AI) to healthcare raises both hope and ethical concerns. Some advanced machine learning methods provide accurate clinical predictions at the expense of a significant lack of explainability. Alex John London has defended that accuracy is a more important value than explainability in AI medicine. In this article, we locate the trade-off between accurate performance and explainable algorithms in the context of distributive justice. We acknowledge that accuracy is cardinal from outcome-oriented justice because it helps to maximize patients' benefits and optimizes limited resources. However, we claim that the opaqueness of the algorithmic black box and its absence of explainability threatens core commitments of procedural fairness such as accountability, avoidance of bias, and transparency. To illustrate this, we discuss liver transplantation as a case of critical medical resources in which the lack of explainability in AI-based allocation algorithms is procedurally unfair. Finally, we provide a number of ethical recommendations for when considering the use of unexplainable algorithms in the distribution of health-related resources.
28

Oh, Hyeji, and Chulyun Kim. "Fairness-aware recommendation with meta learning". Scientific Reports 14, no. 1 (May 2, 2024). http://dx.doi.org/10.1038/s41598-024-60808-x.

Abstract:
Fairness has become a critical value online, and the latest studies consider it in many problems. In recommender systems, fairness is important since the visibility of items is controlled by systems. Previous fairness-aware recommender systems assume that sufficient relationship data between users and items are available. However, it is common that new users and items are frequently introduced, and they have no relationship data yet. In this paper, we study recommendation methods to enhance fairness in a cold-start state. Fairness is more significant when the preference of a user or the popularity of an item is unknown. We propose a meta-learning-based cold-start recommendation framework called FaRM to alleviate the unfairness of recommendations. The proposed framework consists of three steps. We first propose a fairness-aware meta-path generation method to eliminate bias in sensitive attributes. In addition, we construct fairness-aware user representations through the meta-path aggregation approach. Then, we propose a novel fairness objective function and introduce a joint learning method to minimize the trade-off between relevancy and fairness. In extensive experiments with various cold-start scenarios, it is shown that FaRM is significantly superior in fairness performance while preserving relevance accuracy over previous work.
29

Zhang, Tao, Tianqing Zhu, Mengde Han, Fengwen Chen, Jing Li, Wanlei Zhou, and Philip S. Yu. "Fairness in graph-based semi-supervised learning". Knowledge and Information Systems, October 1, 2022. http://dx.doi.org/10.1007/s10115-022-01738-w.

Abstract:
Machine learning is widely deployed in society, unleashing its power in a wide range of applications owing to the advent of big data. One emerging problem faced by machine learning is discrimination learned from data, and such discrimination is reflected in the eventual decisions made by the algorithms. Recent work has shown that increasing the size of the training (labeled) data promotes the fairness criteria while model performance is maintained. In this work, we aim to explore a more general case where quantities of unlabeled data are provided, leading to a new learning paradigm, namely fair semi-supervised learning. Given the popularity of graph-based approaches in semi-supervised learning, we study this problem on both the conventional label propagation method and graph neural networks, where various fairness criteria can be flexibly integrated. Our developed algorithms are proved to be non-trivial extensions to the existing supervised models with fairness constraints. Extensive experiments on real-world datasets exhibit that our methods achieve a better trade-off between classification accuracy and fairness than the compared baselines.
30

Loi, Michele, and Markus Christen. "Choosing how to discriminate: navigating ethical trade-offs in fair algorithmic design for the insurance sector". Philosophy & Technology, March 13, 2021. http://dx.doi.org/10.1007/s13347-021-00444-9.

Abstract:
Here, we provide an ethical analysis of discrimination in private insurance to guide the application of non-discriminatory algorithms for risk prediction in the insurance context. This addresses the need for ethical guidance of data-science experts, business managers, and regulators, proposing a framework of moral reasoning behind the choice of fairness goals for prediction-based decisions in the insurance domain. The reference to private insurance as a business practice is essential in our approach, because the consequences of discrimination and predictive inaccuracy in underwriting are different from those of using predictive algorithms in other sectors (e.g., medical diagnosis, sentencing). Here we focus on the trade-off in the extent to which one can pursue indirect non-discrimination versus predictive accuracy. The moral assessment of this trade-off is related to the context of application—to the consequences of inaccurate risk predictions in the insurance domain.
31

Scher, Sebastian, Simone Kopeinik, Andreas Trügler, and Dominik Kowald. "Modelling the long-term fairness dynamics of data-driven targeted help on job seekers". Scientific Reports 13, no. 1 (January 31, 2023). http://dx.doi.org/10.1038/s41598-023-28874-9.

Abstract:
The use of data-driven decision support by public agencies is becoming more widespread and already influences the allocation of public resources. This raises ethical concerns, as it has adversely affected minorities and historically discriminated groups. In this paper, we use an approach that combines statistics and data-driven approaches with dynamical modeling to assess long-term fairness effects of labor market interventions. Specifically, we develop and use a model to investigate the impact of decisions caused by a public employment authority that selectively supports job-seekers through targeted help. The selection of who receives what help is based on a data-driven intervention model that estimates an individual's chances of finding a job in a timely manner and rests upon data that describes a population in which skills relevant to the labor market are unevenly distributed between two groups (e.g., males and females). The intervention model has incomplete access to the individual's actual skills and can augment this with knowledge of the individual's group affiliation, thus using a protected attribute to increase predictive accuracy. We assess this intervention model's dynamics—especially fairness-related issues and trade-offs between different fairness goals—over time and compare it to an intervention model that does not use group affiliation as a predictive feature. We conclude that in order to quantify the trade-off correctly and to assess the long-term fairness effects of such a system in the real-world, careful modeling of the surrounding labor market is indispensable.
32

Müllner, Peter, Elisabeth Lex, Markus Schedl, and Dominik Kowald. "Differential privacy in collaborative filtering recommender systems: a review". Frontiers in Big Data 6 (October 12, 2023). http://dx.doi.org/10.3389/fdata.2023.1249997.

Abstract:
State-of-the-art recommender systems produce high-quality recommendations to support users in finding relevant content. However, through the utilization of users' data for generating recommendations, recommender systems threaten users' privacy. To alleviate this threat, often, differential privacy is used to protect users' data via adding random noise. This, however, leads to a substantial drop in recommendation quality. Therefore, several approaches aim to improve this trade-off between accuracy and user privacy. In this work, we first overview threats to user privacy in recommender systems, followed by a brief introduction to the differential privacy framework that can protect users' privacy. Subsequently, we review recommendation approaches that apply differential privacy, and we highlight research that improves the trade-off between recommendation quality and user privacy. Finally, we discuss open issues, e.g., considering the relation between privacy and fairness, and the users' different needs for privacy. With this review, we hope to provide other researchers an overview of the ways in which differential privacy has been applied to state-of-the-art collaborative filtering recommender systems.
33

Islam, Sheikh Rabiul, Ingrid Russell, William Eberle, Douglas Talbert, and Md Golam Moula Mehedi Hasan. "Advances in Explainable, Fair, and Trustworthy AI". International Journal on Artificial Intelligence Tools 33, no. 03 (April 22, 2024). http://dx.doi.org/10.1142/s0218213024030015.

Abstract:
This special issue encapsulates the multifaceted landscape of contemporary challenges and innovations in Artificial Intelligence (AI) and Machine Learning (ML), with a particular focus on issues related to explainability, fairness, and trustworthiness. The exploration begins with the computational intricacies of understanding and explaining the behavior of binary neurons within neural networks. Simultaneously, ethical dimensions in AI are scrutinized, emphasizing the nuanced considerations required in defining autonomous ethical agents. The pursuit of fairness is exemplified through frameworks and methodologies in machine learning, addressing biases and promoting trust, particularly in predictive policing systems. Human-agent interaction dynamics are elucidated, revealing the nuanced relationship between task allocation, performance, and user satisfaction. The imperative of interpretability in complex predictive models is highlighted, emphasizing a query-driven methodology. Lastly, in the context of trauma triage, the study underscores the delicate trade-off between model accuracy and practitioner-friendly interpretability, introducing innovative strategies to address biases and trust-related metrics.
34

Pal, Manjish, Subham Pokhriyal, Sandipan Sikdar, and Niloy Ganguly. "Ensuring generalized fairness in batch classification". Scientific Reports 13, no. 1 (November 2, 2023). http://dx.doi.org/10.1038/s41598-023-45943-1.

Abstract:
In this paper, we consider the problem of batch classification and propose a novel framework for achieving fairness in such settings. The problem of batch classification involves selection of a set of individuals, often encountered in real-world scenarios such as job recruitment, college admissions etc. This is in contrast to a typical classification problem, where each candidate in the test set is considered separately and independently. In such scenarios, achieving the same acceptance rate (i.e., probability of the classifier assigning positive class) for each group (membership determined by the value of sensitive attributes such as gender, race etc.) is often not desirable, and the regulatory body specifies a different acceptance rate for each group. The existing fairness enhancing methods do not allow for such specifications and hence are unsuited for such scenarios. In this paper, we define a configuration model whereby the acceptance rate of each group can be regulated and further introduce a novel batch-wise fairness post-processing framework using the classifier confidence-scores. We deploy our framework across four real-world datasets and two popular notions of fairness, namely demographic parity and equalized odds. In addition to consistent performance improvements over the competing baselines, the proposed framework allows flexibility and significant speed-up. It can also seamlessly incorporate multiple overlapping sensitive attributes. To further demonstrate the generalizability of our framework, we deploy it to the problem of fair gerrymandering where it achieves a better fairness-accuracy trade-off than the existing baseline method.
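To make the batch-selection setting concrete, the sketch below accepts, within each group, the top-scoring fraction of candidates given by that group's specified acceptance rate. It is a hedged illustration of confidence-score-based batch post-processing under assumptions made here, not the paper's actual framework; the function name and the top-k-by-confidence rule are introduced for this example only.

```python
import numpy as np

def batch_select(scores, groups, target_rates):
    """Accept, within each group, the top-scoring fraction of candidates
    specified by that group's target acceptance rate."""
    accept = np.zeros(len(scores), dtype=bool)
    for group, rate in target_rates.items():
        idx = np.where(groups == group)[0]
        k = int(round(rate * len(idx)))
        if k > 0:
            # Indices of the k highest-scoring candidates in this group.
            top = idx[np.argsort(scores[idx])[::-1][:k]]
            accept[top] = True
    return accept

# Toy batch: two groups with different specified acceptance rates.
rng = np.random.default_rng(1)
scores = rng.random(10)
groups = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
print(batch_select(scores, groups, {0: 0.4, 1: 0.6}))
```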
35

Shanklin, Robert, Michele Samorani, Shannon Harris, and Michael A. Santoro. "Ethical Redress of Racial Inequities in AI: Lessons from Decoupling Machine Learning from Optimization in Medical Appointment Scheduling". Philosophy & Technology 35, no. 4 (20 October 2022). http://dx.doi.org/10.1007/s13347-022-00590-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract (summary):
An Artificial Intelligence algorithm trained on data that reflect racial biases may yield racially biased outputs, even if the algorithm on its own is unbiased. For example, algorithms used to schedule medical appointments in the USA predict that Black patients are at a higher risk of no-show than non-Black patients; though technically accurate given existing data, that prediction results in Black patients being overwhelmingly scheduled in appointment slots that cause longer wait times than those of non-Black patients. This perpetuates racial inequity, in this case lesser access to medical care. This gives rise to one type of accuracy-fairness trade-off: preserve the efficiency offered by using AI to schedule appointments, or discard that efficiency in order to avoid perpetuating ethno-racial disparities. Similar trade-offs arise in a range of AI applications, including others in medicine, as well as in education, judicial systems, and public security. This article presents a framework for addressing such trade-offs in which the Machine Learning and Optimization components of the algorithm are decoupled. Applied to medical appointment scheduling, our framework articulates four approaches that intervene in different ways on different components of the algorithm. Each yields specific results, in one case preserving accuracy comparable to the current state of the art while eliminating the disparity.
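To make the decoupling idea concrete, the toy sketch below contrasts a scheduling step that orders patients by predicted no-show risk with one that ignores the risk estimate when assigning slots. This is only a simplified illustration of intervening on the optimization component, not one of the paper's four approaches; the function `assign_slots` and the synthetic risk scores are assumptions.

```python
import numpy as np

def assign_slots(no_show_risk, intervene=False):
    """Hypothetical sketch: map patients to appointment-slot positions.

    Baseline (intervene=False): the optimizer pushes higher predicted-risk
    patients into later slots, lengthening their waits.
    Intervention (intervene=True): the assignment step is decoupled from the
    risk prediction and does not use it at all.
    Returns slot positions (0 = earliest).
    """
    n = len(no_show_risk)
    if intervene:
        order = np.arange(n)                 # risk-blind assignment, e.g. arrival order
    else:
        order = np.argsort(no_show_risk)     # low-risk patients get the early slots
    slots = np.empty(n, dtype=int)
    slots[order] = np.arange(n)
    return slots

# synthetic example where the risk score is biased against one group
rng = np.random.default_rng(1)
group = rng.choice(["black", "non_black"], size=200)
risk = rng.beta(2, 5, size=200) + 0.1 * (group == "black")
for flag in (False, True):
    s = assign_slots(risk, intervene=flag)
    gap = s[group == "black"].mean() - s[group == "non_black"].mean()
    print(f"intervene={flag}: mean slot-position gap = {gap:.1f}")
```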
36

Szepannek, Gero, and Karsten Lübke. "Facing the Challenges of Developing Fair Risk Scoring Models". Frontiers in Artificial Intelligence 4 (14 October 2021). http://dx.doi.org/10.3389/frai.2021.681915.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract (summary):
Algorithmic scoring methods have been widely used in the finance industry for several decades in order to prevent risk and to automate and optimize decisions. Regulatory requirements, as given by the Basel Committee on Banking Supervision (BCBS) or the EU data protection regulations, have led to increasing interest and research activity on understanding black-box machine learning models by means of explainable machine learning. Even though this is a step in the right direction, such methods are not able to guarantee fair scoring, as machine learning models are not necessarily unbiased and may discriminate with respect to certain subpopulations such as a particular race, gender, or sexual orientation, even if the variable itself is not used for modeling. This is also true for white-box methods like logistic regression. In this study, a framework is presented that allows analyzing and developing models with regard to fairness. The proposed methodology is based on techniques of causal inference, and some of the methods can be linked to methods from explainable machine learning. A definition of counterfactual fairness is given together with an algorithm that results in a fair scoring model. The concepts are illustrated by means of a transparent simulation and a popular real-world example, the German Credit data, using traditional scorecard models based on logistic regression and weight-of-evidence variable pre-transformation. In contrast to previous studies in the field, our study presents and uses a corrected version of the data. With the help of the simulation, the trade-off between fairness and predictive accuracy is analyzed. The results indicate that it is possible to remove unfairness without a strong performance decrease, provided that the correlation of the discriminatory attributes with the other predictor variables in the model is not too strong. In addition, the challenge of explaining the resulting scoring model and the associated fairness implications to users is discussed.
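One common way to approximate the idea of a score that does not vary with a protected attribute through correlated inputs is to regress each predictor on the protected attribute and keep only the residuals before fitting the scorecard model. The sketch below shows that residualization step for a logistic-regression scorecard; it is a hedged simplification, not the paper's causal-inference procedure, and the helper `residualize` and the toy data are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

def residualize(X, protected):
    """Remove the linear influence of the protected attribute from each predictor,
    so that the downstream score cannot pick it up through correlated inputs."""
    protected = np.asarray(protected, dtype=float).reshape(-1, 1)
    X_fair = np.empty_like(X, dtype=float)
    for j in range(X.shape[1]):
        reg = LinearRegression().fit(protected, X[:, j])
        X_fair[:, j] = X[:, j] - reg.predict(protected)   # keep only the residual
    return X_fair

# toy data: one predictor is correlated with the protected attribute
rng = np.random.default_rng(2)
a = rng.integers(0, 2, size=1000)                    # protected attribute
x1 = 0.8 * a + rng.normal(size=1000)                 # correlated predictor
x2 = rng.normal(size=1000)                           # independent predictor
y = (x1 + x2 + rng.normal(size=1000) > 0).astype(int)
X = np.column_stack([x1, x2])

model_raw = LogisticRegression().fit(X, y)                    # standard scorecard
model_fair = LogisticRegression().fit(residualize(X, a), y)   # fairness-adjusted inputs
```

Comparing the two models' score distributions across the two levels of `a` illustrates the fairness-accuracy trade-off the abstract analyses: the adjusted model loses some predictive signal carried by `x1`, but its scores no longer track the protected attribute.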
37

Cortés-Andrés, Jordi, Gustau Camps-Valls, Sebastian Sippel, Eniko Melinda Székely, Dino Sejdinovic, Emiliano Díaz, Adrián Pérez-Suay, Zhu Li, Miguel D. Mahecha, and Markus Reichstein. "Physics-aware nonparametric regression models for Earth data analysis". Environmental Research Letters, 14 April 2022. http://dx.doi.org/10.1088/1748-9326/ac6762.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract (summary):
Process understanding and modeling is at the core of scientific reasoning. Principled parametric and mechanistic modeling dominated science and engineering until the recent emergence of machine learning. Despite great success in many areas, machine learning algorithms in the Earth and climate sciences, and more broadly in the physical sciences, are not explicitly designed to be physically consistent and may, therefore, violate the most basic laws of physics. In this work, motivated by the field of algorithmic fairness, we reconcile data-driven machine learning with physics modeling by illustrating a nonparametric and nonlinear physics-aware regression method. By incorporating a dependence-based regularizer, the method leads to models that are consistent with domain knowledge, as reflected by either simulations from physical models or ancillary data. The idea can conversely encourage independence of model predictions from other variables that are known to be uncertain in either their representation or magnitude. The method is computationally efficient and comes with a closed-form analytic solution. Through a consistency-vs-accuracy path diagram, one can assess the consistency between data-driven models and physical models. We demonstrate in three examples on simulations and measurement data in Earth and climate studies that the proposed machine learning framework allows us to trade off physical consistency and accuracy.
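The general flavour of such dependence-based regularization can be sketched as kernel ridge regression with an HSIC-style term on the predictions. The code below is my own minimal sketch, not the paper's exact formulation: the regularizer as written penalises dependence between the predictions and an auxiliary variable S, and the comment notes that flipping the sign of `mu` would instead encourage dependence (e.g. consistency with a physical simulation), at the cost of extra care about conditioning. All function names and parameter values are assumptions.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def dependence_regularized_krr(X, y, S, lam=1e-2, mu=1e-1, gamma=1.0):
    """Hypothetical sketch: kernel ridge regression with an HSIC-style term that
    penalises dependence between predictions f = K @ alpha and a variable S.
    Setting mu negative would encourage dependence instead (mind conditioning).

    Minimising ||y - K a||^2 + lam a'K a + mu a'K H Ks H K a gives the
    closed-form system (K + lam*I + mu*H Ks H K) a = y solved below.
    """
    n = len(y)
    K = rbf_kernel(X, X, gamma)               # kernel on the inputs
    Ks = rbf_kernel(S, S, gamma)              # kernel on the auxiliary variable
    H = np.eye(n) - np.ones((n, n)) / n       # centering matrix used by HSIC
    A = K + lam * np.eye(n) + mu * H @ Ks @ H @ K
    alpha = np.linalg.solve(A, y)
    return alpha, K

# toy usage on synthetic data
rng = np.random.default_rng(3)
X = rng.normal(size=(100, 3))
S = rng.normal(size=(100, 1))                 # e.g. a nuisance or simulation variable
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=100)
alpha, K = dependence_regularized_krr(X, y, S)
y_hat = K @ alpha
```

Sweeping `mu` and recording accuracy against the dependence term is one way to trace the consistency-vs-accuracy path the abstract refers to.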
38

Pham, Diem, Binh Tran, Su Nguyen, Damminda Alahakoon, and Mengjie Zhang. "Fairness optimisation with multi-objective swarms for explainable classifiers on data streams". Complex & Intelligent Systems, 3 April 2024. http://dx.doi.org/10.1007/s40747-024-01347-w.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract (summary):
Recently, advanced AI systems equipped with sophisticated learning algorithms have emerged, enabling the processing of extensive streaming data for online decision-making in diverse domains. However, the widespread deployment of these systems has prompted concerns regarding potential ethical issues, particularly the risk of discrimination that can adversely impact certain community groups. This issue has proven challenging to address in the context of streaming data, where the data distribution can change over time, including changes in the level of discrimination within the data. In addition, transparent models like decision trees are favoured in such applications because they illustrate the decision-making process. However, it is essential to keep the models compact, because the explainability of large models can diminish. Existing methods usually mitigate discrimination at the cost of accuracy; accuracy and discrimination can therefore be considered conflicting objectives, and current methods remain limited in controlling the trade-off between them. This paper proposes a method that can incrementally learn classification models from streaming data and automatically adjust the learnt models to balance multiple objectives simultaneously. The novelty of this research is a swarm-intelligence-based multi-objective algorithm that simultaneously maximises accuracy and minimises both discrimination and model size. Experimental results using six real-world datasets show that the proposed algorithm can evolve fairer and simpler classifiers while maintaining competitive accuracy compared to existing state-of-the-art methods tailored for streaming data.
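The evaluation side of such a multi-objective search can be illustrated with a small sketch: score each candidate classifier on error, demographic-parity gap, and model size, and compare candidates by Pareto dominance. The swarm update and incremental tree learning are out of scope here; this is a hedged illustration under my own assumptions, and the helpers `objectives` and `dominates` are hypothetical names.

```python
import numpy as np

def objectives(y_true, y_pred, sensitive, model_size):
    """Hypothetical sketch of three objectives (all minimised) for comparing
    candidate classifiers: error, demographic-parity gap, and model size."""
    error = np.mean(y_pred != y_true)
    rates = [np.mean(y_pred[sensitive == g]) for g in np.unique(sensitive)]
    discrimination = max(rates) - min(rates)      # demographic-parity difference
    return np.array([error, discrimination, model_size])

def dominates(a, b):
    """True if objective vector a Pareto-dominates b: no worse on any objective
    and strictly better on at least one."""
    return bool(np.all(a <= b) and np.any(a < b))

# toy comparison of two candidate models on one batch from the stream
rng = np.random.default_rng(4)
y = rng.integers(0, 2, 500)
s = rng.integers(0, 2, 500)
pred_a = y.copy(); pred_a[:50] = 1 - pred_a[:50]   # accurate but possibly unfair
pred_b = rng.integers(0, 2, 500)                    # near-random, small model
f_a = objectives(y, pred_a, s, model_size=40)
f_b = objectives(y, pred_b, s, model_size=15)
print(dominates(f_a, f_b), dominates(f_b, f_a))
```

Non-dominated candidates form the Pareto front from which a deployed classifier can be chosen according to the desired balance of accuracy, fairness, and compactness.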
