Journal articles on the topic "Unfairness mitigation"

To see other types of publications on this topic, follow the link: Unfairness mitigation.

Format your source in APA, MLA, Chicago, Harvard, and other citation styles


Consult the top 38 journal articles for your research on the topic "Unfairness mitigation".

Next to every work in the list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online, where these are available in the metadata.

Browse journal articles from a wide range of disciplines and compile your bibliography correctly.

1

Balayn, Agathe, Christoph Lofi, and Geert-Jan Houben. "Managing bias and unfairness in data for decision support: a survey of machine learning and data engineering approaches to identify and mitigate bias and unfairness within data management and analytics systems." VLDB Journal 30, no. 5 (May 5, 2021): 739–68. http://dx.doi.org/10.1007/s00778-021-00671-8.

Abstract:
The increasing use of data-driven decision support systems in industry and governments is accompanied by the discovery of a plethora of bias and unfairness issues in the outputs of these systems. Multiple computer science communities, and especially machine learning, have started to tackle this problem, often developing algorithmic solutions to mitigate biases to obtain fairer outputs. However, one of the core underlying causes for unfairness is bias in training data which is not fully covered by such approaches. Especially, bias in data is not yet a central topic in data engineering and management research. We survey research on bias and unfairness in several computer science domains, distinguishing between data management publications and other domains. This covers the creation of fairness metrics, fairness identification, and mitigation methods, software engineering approaches and biases in crowdsourcing activities. We identify relevant research gaps and show which data management activities could be repurposed to handle biases and which ones might reinforce such biases. In the second part, we argue for a novel data-centered approach overcoming the limitations of current algorithmic-centered methods. This approach focuses on eliciting and enforcing fairness requirements and constraints on data that systems are trained, validated, and used on. We argue for the need to extend database management systems to handle such constraints and mitigation methods. We discuss the associated future research directions regarding algorithms, formalization, modelling, users, and systems.
2

Pagano, Tiago P., Rafael B. Loureiro, Fernanda V. N. Lisboa, Rodrigo M. Peixoto, Guilherme A. S. Guimarães, Gustavo O. R. Cruz, Maira M. Araujo, et al. "Bias and Unfairness in Machine Learning Models: A Systematic Review on Datasets, Tools, Fairness Metrics, and Identification and Mitigation Methods." Big Data and Cognitive Computing 7, no. 1 (January 13, 2023): 15. http://dx.doi.org/10.3390/bdcc7010015.

Abstract:
One of the difficulties of artificial intelligence is to ensure that model decisions are fair and free of bias. In research, datasets, metrics, techniques, and tools are applied to detect and mitigate algorithmic unfairness and bias. This study examines the current knowledge on bias and unfairness in machine learning models. The systematic review followed the PRISMA guidelines and is registered on the OSF platform. The search was carried out between 2021 and early 2022 in the Scopus, IEEE Xplore, Web of Science, and Google Scholar knowledge bases and found 128 articles published between 2017 and 2022, of which 45 were chosen based on search string optimization and inclusion and exclusion criteria. We discovered that the majority of retrieved works focus on bias and unfairness identification and mitigation techniques, offering tools, statistical approaches, important metrics, and datasets typically used for bias experiments. In terms of the primary forms of bias, data, algorithm, and user interaction were addressed in connection to the preprocessing, in-processing, and postprocessing mitigation methods. The use of Equalized Odds, Opportunity Equality, and Demographic Parity as primary fairness metrics emphasizes the crucial role of sensitive attributes in mitigating bias. The 25 datasets chosen span a wide range of areas, including criminal justice, image enhancement, finance, education, product pricing, and health, with the majority including sensitive attributes. In terms of tools, Aequitas is the most often referenced, yet many of the tools were not employed in empirical experiments. A limitation of current research is the lack of multiclass and multimetric studies, which are found in just a few works and constrain the investigation to binary-focused methods. Furthermore, the results indicate that different fairness metrics do not present uniform results for a given use case, and that more research with varied model architectures is necessary to standardize which ones are more appropriate for a given context. We also observed that all research addressed the transparency of the algorithm, or its capacity to explain how decisions are taken.
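The group-fairness metrics named above (Demographic Parity, Equalized Odds) can be computed directly from predictions and a sensitive attribute. The sketch below is an illustration with hypothetical toy data, not code from the reviewed study:

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute gap in positive-prediction rates between groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equalized_odds_gap(y_true, y_pred, group):
    """Largest gap in true-positive or false-positive rate across groups."""
    gaps = []
    for label in (0, 1):  # label 0 -> FPR comparison, label 1 -> TPR comparison
        mask = y_true == label
        rates = [y_pred[mask & (group == g)].mean() for g in np.unique(group)]
        gaps.append(max(rates) - min(rates))
    return max(gaps)

# Hypothetical toy example
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 1, 0, 1, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_gap(y_pred, group))      # 0.0
print(equalized_odds_gap(y_true, y_pred, group))  # ~0.33
```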
3

Abdullah, Nurhidayah, and Zuhairah Ariff Abd Ghadas. "THE APPLICATION OF GOOD FAITH IN CONTRACTS DURING A FORCE MAJEURE EVENT AND BEYOND WITH SPECIAL REFERENCE TO THE COVID-19 ACT 2020." UUM Journal of Legal Studies 14, no. 1 (January 18, 2023): 141–60. http://dx.doi.org/10.32890/uumjls2023.14.1.6.

Abstract:
Many parties face difficulties in performing contracts due to the economic dislocation since the outbreak of COVID-19. The extraordinary nature of this pandemic situation calls for good faith in contractual settings. The discussion in this paper focuses on the invocation of a force majeure event, which can render many contracts unenforceable. The research used doctrinal analysis to discuss the force majeure clause in the context of the COVID-19 pandemic and the obligation of good faith in contracts. The paper discusses the COVID-19 pandemic as a force majeure event, traces the rise of "good faith" in contract law, and argues for the application of "good faith" in contracts as a mitigation for a force majeure event. The paper then presents its conclusion and recommendations. The findings highlight the significance of applying "good faith" in the event of force majeure and beyond as a mitigating factor in alleviating uncertainty and unfairness.
4

Menziwa, Yolanda, Eunice Lebogang Sesale, and Solly Matshonisa Seeletse. "Challenges in research data collection and mitigation interventions." International Journal of Research in Business and Social Science (2147- 4478) 13, no. 2 (April 3, 2024): 336–44. http://dx.doi.org/10.20525/ijrbs.v13i2.3187.

Abstract:
This paper investigated the challenges that researchers in a health sciences university can experience, and ways to counterbalance the negative effects of these challenges. The focus was on the extent to which gatekeepers at higher education institutions (HEIs) can restrict research, and on the gatekeeper bias natural sciences researchers often experience in being denied access compared with the way health sciences researchers are treated. The method compared the experiences of researchers working toward Master of Science (MSc) degrees in selected science subjects with the projects undertaken by health sciences students. All the studies used students on campus as research subjects. The MSc studies were conducted by students who were already academics teaching on campus. All the proposals received clearance certificates from the same ethics committee. When the HEI registrar was asked to grant permission to use students as study participants, the health sciences researchers were granted permission and given the names of the students. The science academics, however, were denied access to the student numbers, which were needed to ask individual students to decide whether or not they wanted to participate in the studies. Gatekeeping weaknesses were explored, and lawful interventions were used to collect research data. It was observed that in the science divisions of HEIs dominated by the health sciences, gatekeeper unfairness and power could stifle the creativity and innovation initiated by researchers. Recommendations have been made to limit this power.
5

Rana, Saadia Afzal, Zati Hakim Azizul, and Ali Afzal Awan. "A step toward building a unified framework for managing AI bias." PeerJ Computer Science 9 (October 26, 2023): e1630. http://dx.doi.org/10.7717/peerj-cs.1630.

Abstract:
Integrating artificial intelligence (AI) has transformed living standards. However, AI’s advances are being thwarted by concerns about the rise of biases and unfairness. The problem argues strongly for a strategy for tackling potential biases. This article thoroughly evaluates existing knowledge to enhance fairness management, which will serve as a foundation for creating a unified framework to address any bias and its subsequent mitigation method throughout the AI development pipeline. We map the software development life cycle (SDLC), machine learning life cycle (MLLC) and cross industry standard process for data mining (CRISP-DM) together to provide a general understanding of how the phases in these development processes relate to each other. The map should benefit researchers from multiple technical backgrounds. Biases are categorised into three distinct classes: pre-existing, technical and emergent bias; three mitigation strategies: conceptual, empirical and technical; and fairness management approaches: fairness sampling, learning and certification. The recommended practices for debiasing and for overcoming the challenges encountered further set directions for successfully establishing a unified framework.
6

Latif, Aadil, Wolfgang Gawlik, and Peter Palensky. "Quantification and Mitigation of Unfairness in Active Power Curtailment of Rooftop Photovoltaic Systems Using Sensitivity Based Coordinated Control." Energies 9, no. 6 (June 4, 2016): 436. http://dx.doi.org/10.3390/en9060436.

7

Yang, Zhenhuan, Yan Lok Ko, Kush R. Varshney, and Yiming Ying. "Minimax AUC Fairness: Efficient Algorithm with Provable Convergence." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 10 (June 26, 2023): 11909–17. http://dx.doi.org/10.1609/aaai.v37i10.26405.

Abstract:
The use of machine learning models in consequential decision making often exacerbates societal inequity, in particular yielding disparate impact on members of marginalized groups defined by race and gender. The area under the ROC curve (AUC) is widely used to evaluate the performance of a scoring function in machine learning, but is studied in algorithmic fairness less than other performance metrics. Due to the pairwise nature of the AUC, defining an AUC-based group fairness metric is pairwise-dependent and may involve both intra-group and inter-group AUCs. Importantly, considering only one category of AUCs is not sufficient to mitigate unfairness in AUC optimization. In this paper, we propose a minimax learning and bias mitigation framework that incorporates both intra-group and inter-group AUCs while maintaining utility. Based on this Rawlsian framework, we design an efficient stochastic optimization algorithm and prove its convergence to the minimum group-level AUC. We conduct numerical experiments on both synthetic and real-world datasets to validate the effectiveness of the minimax framework and the proposed optimization algorithm.
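To make the pairwise notion concrete: an intra-group AUC scores positives against negatives from the same group, while an inter-group AUC scores positives of one group against negatives of another, and the minimax objective described above targets the smallest of these. A rough sketch with synthetic data (the function and variable names are illustrative, not the authors' implementation):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def group_pair_auc(scores, labels, groups, pos_group, neg_group):
    """AUC using positives from pos_group and negatives from neg_group
    (intra-group when the two groups coincide, inter-group otherwise)."""
    pos = scores[(labels == 1) & (groups == pos_group)]
    neg = scores[(labels == 0) & (groups == neg_group)]
    y = np.concatenate([np.ones(len(pos)), np.zeros(len(neg))])
    return roc_auc_score(y, np.concatenate([pos, neg]))

rng = np.random.default_rng(0)
scores, labels, groups = rng.random(400), rng.integers(0, 2, 400), rng.integers(0, 2, 400)
aucs = {(a, b): group_pair_auc(scores, labels, groups, a, b) for a in (0, 1) for b in (0, 1)}
print(min(aucs.values()))  # the worst group-level AUC, which a minimax approach would raise
```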
8

Khanam, Taslima. "Rule of law approach to alleviation of poverty: An analysis on human rights dimension of governance." IIUC Studies 15 (September 21, 2020): 23–32. http://dx.doi.org/10.3329/iiucs.v15i0.49342.

Abstract:
A society without the rule of law is like a bowl with holes in it: it leaks, and without plugging the leaks, putting more money into it makes no sense. Much the same can be said of many poverty mitigation programs. In response, this paper contends that substantial poverty must be understood as formed by society itself and argues that many of the world's inhabitants are deprived of the opportunity to improve their livelihoods and live in dearth because they are not within the shield of the rule of law. They may be citizens of the nation state in which they live; nevertheless, their property and work are vulnerable and far less rewarding than they could be. To address this unfairness, the paper provides a concise overview of the impact of the rule of law as the basis of opportunity and equity for the people, following an analytical approach with an interdisciplinary aspect. Particular emphasis is placed on the human rights dimension of governance and on legal empowerment for the alleviation of poverty.
9

Qi, Jin. "Mitigating Delays and Unfairness in Appointment Systems." Management Science 63, no. 2 (February 2017): 566–83. http://dx.doi.org/10.1287/mnsc.2015.2353.

10

Lehrieder, Frank, Simon Oechsner, Tobias Hoßfeld, Dirk Staehle, Zoran Despotovic, Wolfgang Kellerer, and Maximilian Michel. "Mitigating unfairness in locality-aware peer-to-peer networks." International Journal of Network Management 21, no. 1 (January 2011): 3–20. http://dx.doi.org/10.1002/nem.772.

11

Mao, Runze, Wenqi Fan, and Qing Li. "GCARe: Mitigating Subgroup Unfairness in Graph Condensation through Adversarial Regularization." Applied Sciences 13, no. 16 (August 11, 2023): 9166. http://dx.doi.org/10.3390/app13169166.

Abstract:
Training Graph Neural Networks (GNNs) on large-scale graphs in the deep learning era can be expensive. While graph condensation has recently emerged as a promising approach to reducing training cost by compressing large graphs into smaller ones while preserving most of their knowledge, its ability to treat different node subgroups fairly during compression remains unexplored. In this paper, we investigate current graph condensation techniques from a fairness perspective and show that they have a severe disparate impact on node subgroups. Specifically, GNNs trained on condensed graphs become more biased than those trained on the original graphs. Since condensed graphs comprise synthetic nodes, which lack explicit group IDs, the current algorithms used to train fair GNNs fail in this case. To address this issue, we propose Graph Condensation with Adversarial Regularization (GCARe), a method that directly regularizes the condensation process to distill the knowledge of different subgroups fairly into the resulting graphs. A comprehensive series of experiments substantiated that our method enhances fairness in condensed graphs without compromising accuracy, thus yielding more equitable GNN models. Additionally, our findings underscore the significance of incorporating fairness considerations in data condensation and offer valuable guidance for future inquiries in this domain.
12

Mahmud, Md Sultan, and Md Forkan Uddin. "Mitigating unfairness problem in WLANs caused by asymmetric co-channel interference." International Journal of Mobile Communications 16, no. 3 (2018): 307. http://dx.doi.org/10.1504/ijmc.2018.091385.

13

Mahmud, Md Sultan, and Md Forkean Uddin. "Mitigating unfairness problem in WLANs caused by asymmetric co-channel interference." International Journal of Mobile Communications 16, no. 1 (2018): 1. http://dx.doi.org/10.1504/ijmc.2018.10004556.

14

Bejerano, Yigal, Seung-Jae Han, and Mark Smith. "A novel frequency planning algorithm for mitigating unfairness in wireless LANs." Computer Networks 54, no. 15 (October 2010): 2575–90. http://dx.doi.org/10.1016/j.comnet.2010.04.009.

15

Wang, Wei, Ben Leong, and Wei Tsang Ooi. "Mitigating Unfairness Due to Physical Layer Capture in Practical 802.11 Mesh Networks." IEEE Transactions on Mobile Computing 14, no. 1 (January 1, 2015): 99–112. http://dx.doi.org/10.1109/tmc.2014.2315796.

16

Wu, Yinghui, Hai Yang, Shuo Zhao, and Pan Shang. "Mitigating unfairness in urban rail transit operation: A mixed-integer linear programming approach." Transportation Research Part B: Methodological 149 (July 2021): 418–42. http://dx.doi.org/10.1016/j.trb.2021.04.014.

17

Zhang, Wenbin, Tina Hernandez-Boussard, and Jeremy Weiss. "Censored Fairness through Awareness." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (June 26, 2023): 14611–19. http://dx.doi.org/10.1609/aaai.v37i12.26708.

Abstract:
There has been increasing concern within the machine learning community and beyond that Artificial Intelligence (AI) faces a bias and discrimination crisis that urgently calls for AI fairness. As many have begun to work on this problem, most existing work depends on the availability of class labels for the given fairness definition and algorithm, which may not align with real-world usage. In this work, we study an AI fairness problem that stems from the gap between the design of a "fair" model in the lab and its deployment in the real world. Specifically, we consider defining and mitigating individual unfairness amidst censorship, where the availability of class labels is not always guaranteed; this setting is broadly applicable in a diversity of real-world socially sensitive applications. We show that our method is able to quantify and mitigate individual unfairness in the presence of censorship across three benchmark tasks, which provides the first known results on individual fairness guarantees in the analysis of censored data.
18

Pade, Robin, and Sven Feurer. "The mitigating role of nostalgia for consumer price unfairness perceptions in response to disadvantageous personalized pricing." Journal of Business Research 145 (June 2022): 277–87. http://dx.doi.org/10.1016/j.jbusres.2022.02.057.

19

Safdari, Fash, and Anatoliy Gorbenko. "Theoretical and experimental study of performance anomaly in multi-rate IEEE802.11ac wireless networks." Radioelectronic and Computer Systems, no. 4 (November 29, 2022): 85–97. http://dx.doi.org/10.32620/reks.2022.4.06.

Abstract:
IEEE 802.11 wireless local area networks (WLANs) are shared networks, which use the contention-based distributed coordination function (DCF) to share access to the wireless medium among numerous wireless stations. The performance of the distributed coordination function mechanism mostly depends on the network load, the number of wireless nodes, and their data rates. The throughput unfairness, also known as the performance anomaly, is inherent in the very nature of mixed data rate Wi-Fi networks using the distributed coordination function. This unfairness exhibits itself through the fact that slow clients consume more airtime to transfer a given amount of data, leaving less airtime for fast clients. In this paper, we comprehensively examine the performance anomaly in multi-rate wireless networks using three approaches: experimental measurement, analytical modelling and simulation in Network Simulator v.3 (NS3). The results of our practical experiments benchmarking the throughput of a multi-rate 802.11ac wireless network clearly show that even the recent wireless standards still suffer from airtime consumption unfairness. It was shown that even a single low-data rate station can decrease the throughput of high-data rate stations by 3–6 times. The simulation and analytical modelling confirm this finding with considerably high accuracy. Most of the theoretical models evaluating the performance anomaly in Wi-Fi networks suggest that all stations get the same throughput independently of the used data rate. However, experimental and simulation results have demonstrated that despite a significant performance degradation, high-speed stations still outperform stations with lower data rates once the difference between data rates becomes more significant. This is due to the better efficiency of the TCP protocol working over a fast wireless connection. It is also noteworthy that the throughput achieved by a station when it monopolistically uses the wireless medium is considerably less than 50% of its data rate due to significant overheads even in the most recent Wi-Fi technologies. Mitigating the performance anomaly in mixed-data rate WLANs requires a holistic approach that combines frame aggregation/fragmentation and adaptation of data rates, contention window and other link-layer parameters.
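The anomaly the authors measure can be approximated with simple airtime arithmetic: under DCF every station wins roughly the same number of transmission opportunities, so each station's throughput is capped by the time a full round of frames occupies the channel. The numbers below are illustrative assumptions, not the paper's measurements or its analytical model:

```python
# Rough equal-opportunity airtime model of the 802.11 performance anomaly.
frame_bits = 12000       # one 1500-byte frame
overhead_s = 300e-6      # assumed per-frame contention + ACK overhead
fast, slow = 300e6, 6e6  # PHY data rates in bit/s

def per_station_throughput(rates):
    """Every station sends one frame per round; throughput = frame / round time."""
    round_time = sum(frame_bits / r + overhead_s for r in rates)
    return frame_bits / round_time

only_fast = per_station_throughput([fast] * 3)           # ~11.8 Mbit/s each
with_slow = per_station_throughput([fast] * 3 + [slow])  # ~3.6 Mbit/s each
# The roughly 3x drop caused by a single slow station is consistent with the
# 3-6x degradation range reported in the abstract.
print(f"{only_fast / 1e6:.1f} -> {with_slow / 1e6:.1f} Mbit/s per fast station")
```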
20

Delobelle, Pieter, Paul Temple, Gilles Perrouin, Benoit Frénay, Patrick Heymans, and Bettina Berendt. "Ethical Adversaries." ACM SIGKDD Explorations Newsletter 23, no. 1 (May 26, 2021): 32–41. http://dx.doi.org/10.1145/3468507.3468513.

Abstract:
Machine learning is being integrated into a growing number of critical systems with far-reaching impacts on society. Unexpected behaviour and unfair decision processes are coming under increasing scrutiny due to this widespread use and its theoretical considerations. Individuals, as well as organisations, notice, test, and criticize unfair results to hold model designers and deployers accountable. We offer a framework that assists these groups in mitigating unfair representations stemming from the training datasets. Our framework relies on two inter-operating adversaries to improve fairness. First, a model is trained with the goal of preventing the guessing of protected attributes' values while limiting utility losses. This first step optimizes the model's parameters for fairness. Second, the framework leverages evasion attacks from adversarial machine learning to generate new examples that will be misclassified. These new examples are then used to retrain and improve the model in the first step. These two steps are iteratively applied until a significant improvement in fairness is obtained. We evaluated our framework on well-studied datasets in the fairness literature (including COMPAS), where it can surpass other approaches with respect to demographic parity, equality of opportunity, and the model's utility. We investigated the trade-offs between these targets in terms of model hyperparameters, illustrated our findings on the subtle difficulties of mitigating unfairness, and highlighted how our framework can assist model designers.
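The first of the two adversaries described above is essentially adversarial debiasing: an auxiliary head tries to recover the protected attribute from the model's representation, and the main model is penalized whenever it succeeds. The following is a minimal, hypothetical PyTorch sketch of that first step only (random data, assumed dimensions and trade-off weight); the evasion-attack retraining loop is omitted:

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(10, 16), nn.ReLU())
classifier = nn.Linear(16, 2)   # predicts the task label
adversary = nn.Linear(16, 2)    # tries to predict the protected attribute

opt_main = torch.optim.Adam(list(encoder.parameters()) + list(classifier.parameters()), lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
ce, lam = nn.CrossEntropyLoss(), 0.5

x = torch.randn(64, 10)
y = torch.randint(0, 2, (64,))  # task labels (random stand-ins)
a = torch.randint(0, 2, (64,))  # protected attribute (random stand-in)

for _ in range(200):
    z = encoder(x)
    # adversary step: learn to guess the protected attribute from the representation
    opt_adv.zero_grad()
    ce(adversary(z.detach()), a).backward()
    opt_adv.step()
    # main step: keep task accuracy while making the adversary's job hard
    opt_main.zero_grad()
    loss = ce(classifier(z), y) - lam * ce(adversary(z), a)
    loss.backward()
    opt_main.step()
```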
21

Mansoury, Masoud. ""Understanding and mitigating multi-sided exposure bias in recommender systems" by Masoud Mansoury with Aparna S. Varde as coordinator." ACM SIGWEB Newsletter, Autumn (September 2022): 1–4. http://dx.doi.org/10.1145/3566100.3566103.

Abstract:
Masoud Mansoury is a postdoctoral researcher at Amsterdam Machine Learning Lab at University of Amsterdam, Netherlands. He is also a member of Discovery Lab collaborating with Data Science team at Elsevier Company in the area of recommender systems. Masoud received his PhD in Computer and Information Science from Eindhoven University of Technology, Netherlands, in 2021. He has published his research works in top conferences such as FAccT, RecSys, and CIKM. His research interests include recommender systems, algorithmic bias, and contextual bandits. This research conducted by Masoud Mansoury investigated the impact of unfair recommendations on different actors in the system and proposed solutions to tackle the unfairness of recommendations. The solutions were a rating transformation technique that works as a pre-processing step before recommendation generation and a general graph-based solution that works as a post-processing approach after recommendation generation for mitigating the multi-sided exposure bias in the recommendation results. For evaluation, he introduced several metrics for measuring the exposure fairness for items and suppliers, and showed that the proposed metrics better capture the fairness properties in the recommendation results. Extensive experiments on different publicly-available datasets confirmed the superiority of the proposed solutions in improving the exposure fairness for items and suppliers.
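One common way to operationalize the item and supplier exposure that this summary refers to is position-discounted exposure aggregated per item, whose concentration can then be summarized, for example with a Gini coefficient. The sketch below uses that convention as an assumption; it is not the thesis's own metric definition:

```python
import numpy as np

def item_exposure(rec_lists, n_items):
    """Sum a log-discounted exposure credit for every ranked slot an item occupies."""
    exposure = np.zeros(n_items)
    for ranked_items in rec_lists:
        for rank, item in enumerate(ranked_items, start=1):
            exposure[item] += 1.0 / np.log2(rank + 1)
    return exposure / exposure.sum()

def gini(x):
    """Gini coefficient of an exposure distribution (0 = perfectly even)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    return float((2 * np.arange(1, n + 1) - n - 1) @ x / (n * x.sum()))

# Two users, five items: items 0 and 1 absorb most of the exposure.
recs = [[0, 1, 2], [1, 0, 3]]
print(gini(item_exposure(recs, n_items=5)))
```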
22

Bargh, Mortaza S., and Sunil Choenni. "Towards an Integrated Approach for Preserving Data Utility, Privacy and Fairness." 2018 International Conference on Multidisciplinary Research 2022 (December 30, 2022): 290–360. http://dx.doi.org/10.26803/myres.2022.24.

Abstract:
Data reusability has become a distinct characteristic of scientific, commercial, and administrative practices nowadays. However, an unlimited and careless reuse of data may lead to privacy breaches and unfair impacts on individuals and vulnerable groups. Data content adaptation is a key aspect of preserving data privacy and fairness. Often, such content adaptation affects data utility adversely. Further, the interaction between privacy protection and fairness protection can be subject to making trade-offs because mitigating privacy risks may adversely affect detecting unfairness and vice versa. Therefore, there is a need for research on understanding the interactions between data utility, privacy and fairness. To this end, in this contribution, we use concepts from causal reasoning and argue for adopting an integrated view on data content adaptation for data-driven decision support systems. This asks for considering the operation context holistically. By means of two cases, we illustrate that, in some situations, local data content adaptation may lead to low data quality and utility. An integrated holistic approach, however, may result in reuse of the original data (i.e., without content adaptation, thus in higher data utilization) without adversely affecting privacy and fairness. We discuss some implications of this approach and sketch a few directions for future research.
23

Agrawal, Nimesh, Anuj Kumar Sirohi, Sandeep Kumar, and Jayadeva. "No Prejudice! Fair Federated Graph Neural Networks for Personalized Recommendation." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 10 (March 24, 2024): 10775–83. http://dx.doi.org/10.1609/aaai.v38i10.28950.

Abstract:
Ensuring fairness in Recommendation Systems (RSs) across demographic groups is critical due to the increased integration of RSs in applications such as personalized healthcare, finance, and e-commerce. Graph-based RSs play a crucial role in capturing intricate higher-order interactions among entities. However, integrating these graph models into the Federated Learning (FL) paradigm with fairness constraints poses formidable challenges as this requires access to the entire interaction graph and sensitive user information (such as gender, age, etc.) at the central server. This paper addresses the pervasive issue of inherent bias within RSs for different demographic groups without compromising the privacy of sensitive user attributes in FL environment with the graph-based model. To address the group bias, we propose F2PGNN (Fair Federated Personalized Graph Neural Network), a novel framework that leverages the power of Personalized Graph Neural Network (GNN) coupled with fairness considerations. Additionally, we use differential privacy techniques to fortify privacy protection. Experimental evaluation on three publicly available datasets showcases the efficacy of F2PGNN in mitigating group unfairness by 47% ∼ 99% compared to the state-of-the-art while preserving privacy and maintaining the utility. The results validate the significance of our framework in achieving equitable and personalized recommendations using GNN within the FL landscape. Source code is at: https://github.com/nimeshagrawal/F2PGNN-AAAI24
24

Ngcobo, Senzo, and Colin D. Reddy. "Exploring the Link between Organisational Performance Pressures and the Factors that Compromise Ethical Leadership." Athens Journal of Business & Economics 10, no. 2 (March 29, 2024): 139–58. http://dx.doi.org/10.30958/ajbe.10-2-4.

Abstract:
Purpose: This research paper explores the link between threat-appraised organizational performance pressure and factors that compromise ethical leadership. Design/Methodology/Approach: The study uses a qualitative approach, using a rank-type Delphi method and administered questionnaires to 40 academic and practitioner experts and 10 organisational leaders. The collected data was analysed through qualitative comparative analysis (QCA). Results: The findings provide empirical evidence of the detrimental impact of threat-appraised performance pressure on ethical leadership behaviour. Four themes are identified as top-ranked organisational performance pressures and factors compromising ethical leadership: market share growth pressure, pressure to present positive financial statements, pressure to achieve greater efficiency, and competitive pressure linked to several factors that compromise ethical leadership. Practical Implications: This research has practical implications for academics, ethics practitioners, policymakers, and organizations, emphasising the importance of mitigating the negative consequences of performance pressures on ethical decision-making. The research supports the development of effective measures, training programs, and ethical frameworks to navigate ethical challenges posed by performance pressures, contributing to long-term success and sustainability. Originality/Value This research contributes novel insights to the field of ethical leadership by exploring the relationship between organisational performance pressures and factors compromising ethical leadership. It fills a significant gap in empirical evidence and advances our understanding of how performance pressures can impact ethical leadership behaviour. The rigorous methodology, comprehensive analysis, and practical implications make it valuable for academics, researchers, practitioners, and policymakers. Keywords: ethical leadership, performance pressure, qualitative comparative analysis, Delphi method, dishonesty, unfairness, low moral judgement, lack of accountability
25

Udoh, I. J., and O. I. Alabi. "An Assessment of Counter-Terrorism Options: A State Dependent Terror Queuing Model Perspective." International Journal of Research and Innovation in Applied Science IX, no. II (2024): 74–103. http://dx.doi.org/10.51584/ijrias.2024.90209.

Abstract:
The study is an application of a state-dependent queuing theory to evaluate the performance of counterterrorism (CT) options. The CT options examined include the Stick (use of force), the Carrot (non-coercive approaches), their combined variant, and covert agents. The model incorporates state transitions to capture the dynamic nature of terrorist recruitment processes in a CT environment. Performance measures are adapted from conventional queue frameworks to assess the effectiveness of these CT options in mitigating terrorist threats. The study analyses the CT options under an arithmetic progression pattern of terrorist recruitment and state transitions. The results demonstrate the importance of maximizing interdiction rate, discrimination rate, system efficiency, and intelligence integration while minimizing system unfairness factors, response time, and queue length for optimal CT operations. The results of the analysis also highlight a positive correlation between the Stick and the Carrot options, as well as between their intelligence-driven variants, emphasizing the need for a balanced and coordinated intelligence-driven CT approach. The study argues that relying solely on brute force or aggressive law enforcement measures without credible intelligence would be insufficient and counterproductive. It suggests leveraging syndromnized intelligence optimizing pseudo-terrorists (SIOP) agents for enhanced credibility, sufficient intelligence gathering, and covert supervision of terrorists’ compliance to Carrot instruments in the CT environment. The findings contribute to the existing literature on CT research and provide insights for informed decision-making in optimizing CT strategies. The study aims to support the development of more efficient and adaptive approaches to combat terrorism.
26

Villarreal, Dan. "Sociolinguistic auto-coding has fairness problems too: measuring and mitigating bias." Linguistics Vanguard, March 12, 2024. http://dx.doi.org/10.1515/lingvan-2022-0114.

Abstract:
Sociolinguistics researchers can use sociolinguistic auto-coding (SLAC) to predict humans’ hand-codes of sociolinguistic data. While auto-coding promises opportunities for greater efficiency, like other computational methods there are inherent concerns about this method’s fairness – whether it generates equally valid predictions for different speaker groups. Unfairness would be problematic for sociolinguistic work given the central importance of correlating speaker groups to differences in variable usage. The current study examines SLAC fairness through the lens of gender fairness in auto-coding Southland New Zealand English non-prevocalic /r/. First, given that there are multiple, mutually incompatible definitions of machine learning fairness, I argue that fairness for SLAC is best captured by two definitions (overall accuracy equality and class accuracy equality) corresponding to three fairness metrics. Second, I empirically assess the extent to which SLAC is prone to unfairness; I find that a specific auto-coder described in previous literature performed poorly on all three fairness metrics. Third, to remedy these imbalances, I tested unfairness mitigation strategies on the same data; I find several strategies that reduced unfairness to virtually zero. I close by discussing what SLAC fairness means not just for auto-coding, but more broadly for how we conceptualize variation as an object of study.
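Read operationally, the two definitions adopted above compare groups on overall accuracy and on per-class accuracy (recall). A minimal interpretation in code, with hypothetical codes and groups rather than the study's data:

```python
import numpy as np

def overall_accuracy_gap(y_true, y_pred, group):
    """Overall accuracy equality: accuracy should be similar for each speaker group."""
    accs = [(y_pred[group == g] == y_true[group == g]).mean() for g in np.unique(group)]
    return max(accs) - min(accs)

def class_accuracy_gaps(y_true, y_pred, group):
    """Class accuracy equality: per-class recall should be similar for each group."""
    gaps = {}
    for c in np.unique(y_true):
        accs = [(y_pred[(group == g) & (y_true == c)] == c).mean() for g in np.unique(group)]
        gaps[c] = max(accs) - min(accs)
    return gaps

# Hypothetical auto-coder output: classes are Present/Absent (non-prevocalic /r/),
# groups are speaker genders.
y_true = np.array(["Present", "Absent", "Present", "Absent", "Present", "Absent"])
y_pred = np.array(["Present", "Absent", "Absent", "Absent", "Present", "Present"])
group = np.array(["F", "F", "F", "M", "M", "M"])
print(overall_accuracy_gap(y_true, y_pred, group))
print(class_accuracy_gaps(y_true, y_pred, group))
```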
27

Brown, Alexander, Nenad Tomasev, Jan Freyberg, Yuan Liu, Alan Karthikesalingam, and Jessica Schrouff. "Detecting shortcut learning for fair medical AI using shortcut testing." Nature Communications 14, no. 1 (July 18, 2023). http://dx.doi.org/10.1038/s41467-023-39902-7.

Abstract:
Machine learning (ML) holds great promise for improving healthcare, but it is critical to ensure that its use will not propagate or amplify health disparities. An important step is to characterize the (un)fairness of ML models—their tendency to perform differently across subgroups of the population—and to understand its underlying mechanisms. One potential driver of algorithmic unfairness, shortcut learning, arises when ML models base predictions on improper correlations in the training data. Diagnosing this phenomenon is difficult as sensitive attributes may be causally linked with disease. Using multitask learning, we propose a method to directly test for the presence of shortcut learning in clinical ML systems and demonstrate its application to clinical tasks in radiology and dermatology. Finally, our approach reveals instances when shortcutting is not responsible for unfairness, highlighting the need for a holistic approach to fairness mitigation in medical AI.
28

Deho, Oscar Blessed, Chen Zhan, Jiuyong Li, Jixue Liu, Lin Liu, and Thuc Duy Le. "How do the existing fairness metrics and unfairness mitigation algorithms contribute to ethical learning analytics?" British Journal of Educational Technology, April 12, 2022. http://dx.doi.org/10.1111/bjet.13217.

29

Wan, Mingyang, Daochen Zha, Ninghao Liu, and Na Zou. "In-Processing Modeling Techniques for Machine Learning Fairness: A Survey." ACM Transactions on Knowledge Discovery from Data, July 30, 2022. http://dx.doi.org/10.1145/3551390.

Abstract:
Machine learning models are becoming pervasive in high-stakes applications. Despite their clear benefits in terms of performance, the models could show discrimination against minority groups and result in fairness issues in a decision-making process, leading to severe negative impacts on the individuals and the society. In recent years, various techniques have been developed to mitigate the unfairness for machine learning models. Among them, in-processing methods have drawn increasing attention from the community, where fairness is directly taken into consideration during model design to induce intrinsically fair models and fundamentally mitigate fairness issues in outputs and representations. In this survey, we review the current progress of in-processing fairness mitigation techniques. Based on where the fairness is achieved in the model, we categorize them into explicit and implicit methods, where the former directly incorporates fairness metrics in training objectives, and the latter focuses on refining latent representation learning. Finally, we conclude the survey with a discussion of the research challenges in this community to motivate future exploration.
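An "explicit" in-processing method, in the sense used above, adds a fairness term directly to the training objective. A minimal sketch of that idea follows (synthetic data, an illustrative penalty weight, and a soft demographic-parity gap as the fairness term; not any particular method from the survey):

```python
import torch

torch.manual_seed(0)
X = torch.randn(500, 5)
a = (torch.rand(500) < 0.5).long()     # protected attribute
y = ((X[:, 0] + 0.8 * a) > 0).float()  # labels correlated with the attribute

w = torch.zeros(5, requires_grad=True)
b = torch.zeros(1, requires_grad=True)
opt = torch.optim.Adam([w, b], lr=0.05)
bce, lam = torch.nn.BCEWithLogitsLoss(), 2.0

for _ in range(300):
    logits = X @ w + b
    p = torch.sigmoid(logits)
    dp_gap = (p[a == 1].mean() - p[a == 0].mean()).abs()  # differentiable fairness surrogate
    loss = bce(logits, y) + lam * dp_gap                  # task loss + fairness penalty
    opt.zero_grad()
    loss.backward()
    opt.step()
```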
30

Tsai, Thomas C., Sercan Arik, Benjamin H. Jacobson, Jinsung Yoon, Nate Yoder, Dario Sava, Margaret Mitchell, Garth Graham, and Tomas Pfister. "Algorithmic fairness in pandemic forecasting: lessons from COVID-19." npj Digital Medicine 5, no. 1 (May 10, 2022). http://dx.doi.org/10.1038/s41746-022-00602-z.

Abstract:
Racial and ethnic minorities have borne a particularly acute burden of the COVID-19 pandemic in the United States. There is a growing awareness from both researchers and public health leaders of the critical need to ensure fairness in forecast results. Without careful and deliberate bias mitigation, inequities embedded in data can be transferred to model predictions, perpetuating disparities, and exacerbating the disproportionate harms of the COVID-19 pandemic. These biases in data and forecasts can be viewed through both statistical and sociological lenses, and the challenges of both building hierarchical models with limited data availability and drawing on data that reflects structural inequities must be confronted. We present an outline of key modeling domains in which unfairness may be introduced and draw on our experience building and testing the Google-Harvard COVID-19 Public Forecasting model to illustrate these challenges and offer strategies to address them. While targeted toward pandemic forecasting, these domains of potentially biased modeling and concurrent approaches to pursuing fairness present important considerations for equitable machine-learning innovation.
31

Xie, Shanghong, Akihisa Kaneko, and Yasuhiro Hayashi. "A Decentralized Voltage Regulation Scheme Using Improved Volt‐Var Function of PV Smart Inverter." IEEJ Transactions on Electrical and Electronic Engineering, April 11, 2024. http://dx.doi.org/10.1002/tee.24080.

Abstract:
With the growing distributed PV installation rate in distribution systems, voltage regulation difficulties such as local voltage violations and fluctuations have become common. To solve these voltage regulation problems, the local voltage regulation method using the volt-var (VV) function is effective for its high regulation speed, high accuracy, and flexibility. However, there are still hurdles to real application, such as parameter setting difficulties, insufficient voltage fluctuation mitigation, and unfairness of reactive power generation between PV customers. To further improve the VV function, this paper proposes a PID closed-loop based VV function and a mode-switching function for the PV smart inverter (SI). To validate the proposed methods, numerical simulations were conducted using a 6-feeder distribution system model based on the JST-CREST 126 distribution feeder model. According to the simulation results, the proposed method mitigates local voltage violations better than the basic VV-curve-based function and also improves fairness between PV customers, although total reactive power generation and tap operations increase in pursuit of more adequate voltage regulation.
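For reference, the basic curve-based VV function that the paper improves on is typically a piecewise-linear droop that maps the measured voltage to a reactive-power setpoint. A generic sketch with assumed breakpoints (not the paper's tuned parameters or its PID closed-loop variant):

```python
import numpy as np

def volt_var(v_pu, v1=0.95, v2=0.98, v3=1.02, v4=1.05, q_max=0.44):
    """Piecewise-linear volt-var droop: inject reactive power below v2, absorb above v3.
    Voltages in per unit; q in per unit of inverter rating. Breakpoints are assumptions."""
    return np.interp(v_pu, [v1, v2, v3, v4], [q_max, 0.0, 0.0, -q_max])

print(volt_var(np.array([0.94, 0.99, 1.03, 1.06])))  # saturates outside [v1, v4]
```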
32

Digumarthi, Varun, Tapan Amin, Samuel Kanu, Joshua Mathew, Bryan Edwards, Lisa A. Peterson, Matthew E. Lundy, and Karen E. Hegarty. "Preoperative prediction model for risk of readmission after total joint replacement surgery: a random forest approach leveraging NLP and unfairness mitigation for improved patient care and cost-effectiveness." Journal of Orthopaedic Surgery and Research 19, no. 1 (May 10, 2024). http://dx.doi.org/10.1186/s13018-024-04774-0.

Abstract:
Background: The Center for Medicare and Medicaid Services (CMS) imposes payment penalties for readmissions following total joint replacement surgeries. This study focuses on total hip, knee, and shoulder arthroplasty procedures as they account for most joint replacement surgeries. Apart from being a burden to healthcare systems, readmissions are also troublesome for patients. There are several studies which only utilized structured data from Electronic Health Records (EHR) without considering any gender and payor bias adjustments. Methods: For this study, a dataset of 38,581 total knee, hip, and shoulder replacement surgeries performed from 2015 to 2021 at Novant Health was gathered. This data was used to train a random forest machine learning model to predict the combined endpoint of emergency department (ED) visit or unplanned readmission within 30 days of discharge, or discharge to a Skilled Nursing Facility (SNF) following the surgery. 98 features of laboratory results, diagnoses, vitals, medications, and utilization history were extracted. A natural language processing (NLP) model finetuned from Clinical BERT was used to generate an NLP risk score feature for each patient based on their clinical notes. To address societal biases, a feature bias analysis was performed in conjunction with propensity score matching. A threshold optimization algorithm from the Fairlearn toolkit was used to mitigate gender and payor biases to promote fairness in predictions. Results: The model achieved an Area Under the Receiver Operating characteristic Curve (AUROC) of 0.738 (95% confidence interval, 0.724 to 0.754) and an Area Under the Precision-Recall Curve (AUPRC) of 0.406 (95% confidence interval, 0.384 to 0.433). Considering an outcome prevalence of 16%, these metrics indicate the model’s ability to accurately discriminate between readmission and non-readmission cases within the context of total arthroplasty surgeries while adjusting patient scores in the model to mitigate bias based on patient gender and payor. Conclusion: This work culminated in a model that identifies the most predictive and protective features associated with the combined endpoint. This model serves as a tool to empower healthcare providers to proactively intervene based on these influential factors without introducing bias towards protected patient classes, effectively mitigating the risk of negative outcomes and ultimately improving quality of care regardless of socioeconomic factors.
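The post-processing step mentioned in the Methods (threshold optimization from Fairlearn) can be sketched roughly as below. The data, the choice of equalized odds as the constraint, and the model settings are illustrative assumptions rather than the study's pipeline, and the Fairlearn API details should be checked against the installed version:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from fairlearn.postprocessing import ThresholdOptimizer

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
sensitive = np.random.default_rng(0).choice(["F", "M"], size=len(y))  # stand-in attribute
X_tr, X_te, y_tr, y_te, s_tr, s_te = train_test_split(X, y, sensitive, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Post-process the trained classifier: pick group-specific thresholds that satisfy
# the fairness constraint while optimizing accuracy.
mitigator = ThresholdOptimizer(estimator=clf, constraints="equalized_odds", prefit=True)
mitigator.fit(X_tr, y_tr, sensitive_features=s_tr)
y_fair = mitigator.predict(X_te, sensitive_features=s_te)
```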
33

Zhang, Qingquan, Jialin Liu, Zeqi Zhang, Junyi Wen, Bifei Mao, and Xin Yao. "Mitigating Unfairness via Evolutionary Multi-objective Ensemble Learning." IEEE Transactions on Evolutionary Computation, 2022, 1. http://dx.doi.org/10.1109/tevc.2022.3209544.

34

Colakovic, Ivona, and Sašo Karakatič. "Adaptive Boosting Method for Mitigating Ethnicity and Age Group Unfairness." SN Computer Science 5, no. 1 (November 15, 2023). http://dx.doi.org/10.1007/s42979-023-02342-7.

Abstract:
Machine learning algorithms make decisions in various fields, thus influencing people’s lives. However, despite their good quality, they can be unfair to certain demographic groups, perpetuating socially induced biases. Therefore, this paper deals with a common unfairness problem, unequal quality of service, that appears in classification when age and ethnicity groups are used. To tackle this issue, we propose an adaptive boosting algorithm that aims to mitigate the existing unfairness in data. The proposed method is based on the AdaBoost algorithm but incorporates fairness in the calculation of the instance’s weight with the goal of making the prediction as good as possible for all ages and ethnicities. The results show that the proposed method increases the fairness of age and ethnicity groups while maintaining good overall quality compared to traditional classification algorithms. The proposed method achieves the best accuracy in almost every sensitive feature group. Based on the extensive analysis of the results, we found that when it comes to ethnicity, interestingly, White people are likely to be incorrectly classified as not being heroin users, whereas other groups are likely to be incorrectly classified as heroin users.
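The idea of folding fairness into the instance-weight update can be illustrated with a plain AdaBoost-style loop in which instances from groups the ensemble currently serves worst receive an extra up-weight. This is a hypothetical sketch of the general idea, not the authors' algorithm or its exact weighting rule:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def fairness_aware_boost(X, y, group, rounds=20, gamma=0.5):
    """AdaBoost-like loop; gamma controls an extra up-weight for under-served groups."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    learners, alphas = [], []
    for _ in range(rounds):
        stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = np.clip(w[pred != y].sum(), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        learners.append(stump)
        alphas.append(alpha)
        # standard AdaBoost update: up-weight misclassified instances ...
        w *= np.exp(alpha * (pred != y))
        # ... plus a fairness nudge: up-weight groups with below-average accuracy
        overall_acc = (pred == y).mean()
        for g in np.unique(group):
            acc_g = (pred[group == g] == y[group == g]).mean()
            w[group == g] *= np.exp(gamma * (overall_acc - acc_g))
        w /= w.sum()
    return learners, alphas

# Toy usage with synthetic data and a random group attribute
from sklearn.datasets import make_classification
X, y = make_classification(n_samples=300, random_state=0)
group = np.random.default_rng(0).integers(0, 2, size=len(y))
learners, alphas = fairness_aware_boost(X, y, group)
```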
35

Barnes, Malerie Beth, and Michele S. Moses. "Racial Misdirection: How Anti-affirmative Action Crusaders use Distraction and Spectacle to Promote Incomplete Conceptions of Merit and Perpetuate Racial Inequality." Educational Policy, December 29, 2020, 089590482098446. http://dx.doi.org/10.1177/0895904820984465.

Abstract:
Despite the marginal success that anti-affirmative action groups have had at paring back the use of race in college admissions practices, affirmative action has remained largely intact as a tool to promote diversity on college campuses. But what might happen if “diversity”—the very thing that heretofore has protected affirmative action—were used instead as proof of its supposed unfairness? In this paper, focusing on the Students for Fair Admissions v Harvard case, we will employ Political Spectacle Theory to analyze the strategies and tactics used by the anti-affirmative action groups to distract from their real aims and to divert focus away from mitigating structural inequality.
36

Shin, Hyunju, and Riza Casidy. "Use it or lose it: point expiration and status demotion." Journal of Services Marketing ahead-of-print, ahead-of-print (April 6, 2021). http://dx.doi.org/10.1108/jsm-01-2020-0015.

Abstract:
Purpose: In managing hierarchical loyalty programs (HLP), firms often use a reward point expiration and status demotion policy to reduce financial liability and to encourage repeat purchases. This study aims to examine how point expiration and status demotion policies affect customer patronage, the role of extension strategies in mitigating the negative effects of these policies on customers and the moderating role of status endowment in the effect of point expiration on customer patronage following a status demotion experience. Design/methodology/approach: Three experiments were conducted using the hotel industry as the context. The hypothesized relationships were tested using ANOVA and a serial moderated mediation analysis using SPSS PROCESS Macro. Findings: Customers subjected to reward point expiration exhibited a higher level of anger and perceived severity of the problem than those subjected to status demotion in HLP. Consequently, when customers experienced both point expiration and status demotion, the point extension strategy rather than the status extension strategy was found to be a more effective remedy for reducing perceived unfairness, although there was no change in the level of patronage reduction between the two extension strategies. Importantly, the effect of point expiration on patronage reduction was stronger among endowed-status customers than earned-status customers, serially driven by heightened feelings of embarrassment and perceived unfairness. Originality/value: The study adds to the existing literature on HLP by comparing the effects of point expiration and status demotion on customer patronage with practical insights for HLP managers.
37

Biewer, Sebastian, Kevin Baum, Sarah Sterz, Holger Hermanns, Sven Hetmank, Markus Langer, Anne Lauber-Rönsberg, and Franz Lehr. "Software doping analysis for human oversight." Formal Methods in System Design, April 4, 2024. http://dx.doi.org/10.1007/s10703-024-00445-2.

Abstract:
This article introduces a framework that is meant to assist in mitigating societal risks that software can pose. Concretely, this encompasses facets of software doping as well as unfairness and discrimination in high-risk decision-making systems. The term software doping refers to software that contains surreptitiously added functionality that is against the interest of the user. A prominent example of software doping is the tampered emission cleaning systems that were found in millions of cars around the world when the diesel emissions scandal surfaced. The first part of this article combines the formal foundations of software doping analysis with established probabilistic falsification techniques to arrive at a black-box analysis technique for identifying undesired effects of software. We apply this technique to emission cleaning systems in diesel cars but also to high-risk systems that evaluate humans in a possibly unfair or discriminating way. We demonstrate how our approach can assist humans-in-the-loop to make better informed and more responsible decisions. This is to promote effective human oversight, which will be a central requirement enforced by the European Union’s upcoming AI Act. We complement our technical contribution with a juridically, philosophically, and psychologically informed perspective on the potential problems caused by such systems.
38

Kelly, Christopher J., Alan Karthikesalingam, Mustafa Suleyman, Greg Corrado, and Dominic King. "Key challenges for delivering clinical impact with artificial intelligence." BMC Medicine 17, no. 1 (October 29, 2019). http://dx.doi.org/10.1186/s12916-019-1426-2.

Abstract:
Background: Artificial intelligence (AI) research in healthcare is accelerating rapidly, with potential applications being demonstrated across various domains of medicine. However, there are currently limited examples of such techniques being successfully deployed into clinical practice. This article explores the main challenges and limitations of AI in healthcare, and considers the steps required to translate these potentially transformative technologies from research to clinical practice. Main body: Key challenges for the translation of AI systems in healthcare include those intrinsic to the science of machine learning, logistical difficulties in implementation, and consideration of the barriers to adoption as well as of the necessary sociocultural or pathway changes. Robust peer-reviewed clinical evaluation as part of randomised controlled trials should be viewed as the gold standard for evidence generation, but conducting these in practice may not always be appropriate or feasible. Performance metrics should aim to capture real clinical applicability and be understandable to intended users. Regulation that balances the pace of innovation with the potential for harm, alongside thoughtful post-market surveillance, is required to ensure that patients are not exposed to dangerous interventions nor deprived of access to beneficial innovations. Mechanisms to enable direct comparisons of AI systems must be developed, including the use of independent, local and representative test sets. Developers of AI algorithms must be vigilant to potential dangers, including dataset shift, accidental fitting of confounders, unintended discriminatory bias, the challenges of generalisation to new populations, and the unintended negative consequences of new algorithms on health outcomes. Conclusion: The safe and timely translation of AI research into clinically validated and appropriately regulated systems that can benefit everyone is challenging. Robust clinical evaluation, using metrics that are intuitive to clinicians and ideally go beyond measures of technical accuracy to include quality of care and patient outcomes, is essential. Further work is required (1) to identify themes of algorithmic bias and unfairness while developing mitigations to address these, (2) to reduce brittleness and improve generalisability, and (3) to develop methods for improved interpretability of machine learning predictions. If these goals can be achieved, the benefits for patients are likely to be transformational.
