Journal articles on the topic "Bias mitigation"

To see other types of publications on this topic, follow the link: Bias mitigation.

Format your source in APA, MLA, Chicago, Harvard, and other citation styles

Browse the top 50 journal articles for research on the topic "Bias mitigation".

Next to each work in the reference list there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference for the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a publication as a .pdf file and read its abstract online, when these are available in the metadata.

Browse journal articles from many disciplines and compile your bibliography correctly.

1

Erkmen, Cherie Parungo, Lauren Kane, and David T. Cooke. "Bias Mitigation in Cardiothoracic Recruitment." Annals of Thoracic Surgery 111, no. 1 (January 2021): 12–15. http://dx.doi.org/10.1016/j.athoracsur.2020.07.005.

2

Vejsbjerg, Inge, Elizabeth M. Daly, Rahul Nair, and Svetoslav Nizhnichenkov. "Interactive Human-Centric Bias Mitigation." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 21 (March 24, 2024): 23838–40. http://dx.doi.org/10.1609/aaai.v38i21.30582.

Abstract:
Bias mitigation algorithms differ in their definition of bias and how they go about achieving that objective. Bias mitigation algorithms impact different cohorts differently, and allowing end users and data scientists to understand the impact of these differences in order to make informed choices is a relatively unexplored domain. This demonstration presents an interactive bias mitigation pipeline that allows users to understand the cohorts impacted by their algorithm choice and provide feedback in order to obtain a bias-mitigated pipeline that best aligns with their goals.
3

Djebrouni, Yasmine, Nawel Benarba, Ousmane Touat, Pasquale De Rosa, Sara Bouchenak, Angela Bonifati, Pascal Felber, Vania Marangozova, and Valerio Schiavoni. "Bias Mitigation in Federated Learning for Edge Computing." Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 7, no. 4 (December 19, 2023): 1–35. http://dx.doi.org/10.1145/3631455.

Abstract:
Federated learning (FL) is a distributed machine learning paradigm that enables data owners to collaborate on training models while preserving data privacy. As FL effectively leverages decentralized and sensitive data sources, it is increasingly used in ubiquitous computing including remote healthcare, activity recognition, and mobile applications. However, FL raises ethical and social concerns as it may introduce bias with regard to sensitive attributes such as race, gender, and location. Mitigating FL bias is thus a major research challenge. In this paper, we propose Astral, a novel bias mitigation system for FL. Astral provides a novel model aggregation approach to select the most effective aggregation weights to combine FL clients' models. It guarantees a predefined fairness objective by constraining bias below a given threshold while keeping model accuracy as high as possible. Astral handles the bias of single and multiple sensitive attributes and supports all bias metrics. Our comprehensive evaluation on seven real-world datasets with three popular bias metrics shows that Astral outperforms state-of-the-art FL bias mitigation techniques in terms of bias mitigation and model accuracy. Moreover, we show that Astral is robust against data heterogeneity and scalable in terms of data size and number of FL clients. Astral's code base is publicly available.
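For readers who want a concrete anchor, the following is a minimal sketch of the general idea described above: choosing client aggregation weights that keep a bias metric below a threshold while maximising validation accuracy. It is not Astral's actual algorithm; the brute-force simplex grid search, the function names, and the metric callbacks are assumptions made purely for illustration.

```python
import itertools
import numpy as np

def aggregate(client_params, mix):
    """FedAvg-style weighted average of client parameter vectors."""
    return sum(w * p for w, p in zip(mix, client_params))

def select_aggregation_weights(client_params, accuracy_fn, bias_fn,
                               bias_threshold=0.05, grid_step=0.25):
    """Return (accuracy, bias, mix) for the best weight vector on a coarse
    simplex grid whose aggregated model satisfies bias <= bias_threshold."""
    steps = np.arange(0.0, 1.0 + 1e-9, grid_step)
    best = None
    for mix in itertools.product(steps, repeat=len(client_params)):
        if not np.isclose(sum(mix), 1.0):
            continue                              # stay on the probability simplex
        model = aggregate(client_params, mix)
        bias = bias_fn(model)
        if bias > bias_threshold:
            continue                              # fairness constraint violated
        acc = accuracy_fn(model)
        if best is None or acc > best[0]:
            best = (acc, bias, mix)
    return best
```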
4

Gallaher, Joshua P., Alexander J. Kamrud, and Brett J. Borghetti. "Detection and Mitigation of Inefficient Visual Searching." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 64, no. 1 (December 2020): 47–51. http://dx.doi.org/10.1177/1071181320641015.

Abstract:
A commonly known cognitive bias is a confirmation bias: the overweighting of evidence supporting a hypothesis and underweighting evidence countering that hypothesis. Due to high-stress and fast-paced operations, military decisions can be affected by confirmation bias. One military decision task prone to confirmation bias is a visual search. During a visual search, the operator scans an environment to locate a specific target. If confirmation bias causes the operator to scan the wrong portion of the environment first, the search is inefficient. This study has two primary goals: 1) detect inefficient visual search using machine learning and Electroencephalography (EEG) signals, and 2) apply various mitigation techniques in an effort to improve the efficiency of searches. Early findings are presented showing how machine learning models can use EEG signals to detect when a person might be performing an inefficient visual search. Four mitigation techniques were evaluated: a nudge which indirectly slows search speed, a hint on how to search efficiently, an explanation for why the participant was receiving a nudge, and instructions to instruct the participant to search efficiently. These mitigation techniques are evaluated, revealing the most effective mitigations found to be the nudge and hint techniques.
5

Rahmawati, Fitriana, and Fitri Santi. "A Literature Review on the Influence of Availability Bias and Overconfidence Bias on Investor Decisions." East Asian Journal of Multidisciplinary Research 2, no. 12 (December 30, 2023): 4961–76. http://dx.doi.org/10.55927/eajmr.v2i12.6896.

Abstract:
This research examines the impact of Availability Bias and Overconfidence Bias on investment decisions. Utilizing a literature review approach and VOSviewer analysis, this study explores how these biases affect investor decision-making processes and potential mitigation strategies. The objective is to highlight the significance of understanding and mitigating these biases in achieving more rational investment decisions. The findings underscore the potential negative effects of both biases, leading to overconfident and less rational investment decisions. Awareness of their interplay is crucial, as they reinforce each other's negative effects on investment decision-making. Overcoming these cognitive biases is essential for more effective investment decision-making. This research contributes insights into mitigating biases, aiding in a more balanced and rational approach to investment decision-making.
6

Singh, Richa, Puspita Majumdar, Surbhi Mittal, and Mayank Vatsa. "Anatomizing Bias in Facial Analysis." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (June 28, 2022): 12351–58. http://dx.doi.org/10.1609/aaai.v36i11.21500.

Abstract:
Existing facial analysis systems have been shown to yield biased results against certain demographic subgroups. Due to its impact on society, it has become imperative to ensure that these systems do not discriminate based on gender, identity, or skin tone of individuals. This has led to research in the identification and mitigation of bias in AI systems. In this paper, we encapsulate bias detection/estimation and mitigation algorithms for facial analysis. Our main contributions include a systematic review of algorithms proposed for understanding bias, along with a taxonomy and extensive overview of existing bias mitigation algorithms. We also discuss open challenges in the field of biased facial analysis.
7

Lee, Yu-Hao, Norah E. Dunbar, Claude H. Miller, Brianna L. Lane, Matthew L. Jensen, Elena Bessarabova, Judee K. Burgoon, et al. "Training Anchoring and Representativeness Bias Mitigation Through a Digital Game." Simulation & Gaming 47, no. 6 (August 20, 2016): 751–79. http://dx.doi.org/10.1177/1046878116662955.

Abstract:
Objective. Humans systematically make poor decisions because of cognitive biases. Can digital games train people to avoid cognitive biases? The goal of this study is to investigate the affordance of different educational media in training people about cognitive biases and to mitigate cognitive biases within their decision-making processes. Method. A between-subject experiment was conducted to compare a digital game, a traditional slideshow, and a combined condition in mitigating two types of cognitive biases: anchoring bias and representativeness bias. We measured both immediate effects and delayed effects after four weeks. Results. The digital game and slideshow conditions were effective in mitigating cognitive biases immediately after the training, but the effects decayed after four weeks. By providing the basic knowledge through the slideshow, then allowing learners to practice bias-mitigation techniques in the digital game, the combined condition was most effective at mitigating the cognitive biases both immediately and after four weeks.
8

Patil, Pranita, and Kevin Purcell. "Decorrelation-Based Deep Learning for Bias Mitigation." Future Internet 14, no. 4 (March 29, 2022): 110. http://dx.doi.org/10.3390/fi14040110.

Abstract:
Although deep learning has proven to be tremendously successful, the main issue is the dependency of its performance on the quality and quantity of training datasets. Since the quality of data can be affected by biases, a novel deep learning method based on decorrelation is presented in this study. The decorrelation specifically learns bias invariant features by reducing the non-linear statistical dependency between features and bias itself. This makes the deep learning models less prone to biased decisions by addressing data bias issues. We introduce Decorrelated Deep Neural Networks (DcDNN) or Decorrelated Convolutional Neural Networks (DcCNN) and Decorrelated Artificial Neural Networks (DcANN) by applying decorrelation-based optimization to Deep Neural Networks (DNN) and Artificial Neural Networks (ANN), respectively. Previous bias mitigation methods result in a drastic loss in accuracy at the cost of bias reduction. Our study aims to resolve this by controlling how strongly the decorrelation function for bias reduction and loss function for accuracy affect the network objective function. The detailed analysis of the hyperparameter shows that for the optimal value of hyperparameter, our model is capable of maintaining accuracy while being bias invariant. The proposed method is evaluated on several benchmark datasets with different types of biases such as age, gender, and color. Additionally, we test our approach along with traditional approaches to analyze the bias mitigation in deep learning. Using simulated datasets, the results of t-distributed stochastic neighbor embedding (t-SNE) of the proposed model validated the effective removal of bias. An analysis of fairness metrics and accuracy comparisons shows that using our proposed models reduces the biases without compromising accuracy significantly. Furthermore, the comparison of our method with existing methods shows the superior performance of our model in terms of bias mitigation, as well as simplicity of training.
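The abstract describes adding a decorrelation term to the training objective so that learned features carry as little statistical dependence on the bias variable as possible, with a hyperparameter balancing bias reduction against accuracy. Below is a minimal PyTorch-flavoured sketch of that idea; it penalises only linear cross-covariance (the paper targets non-linear dependency as well), and the function names and the weight `lam` are assumptions, not the authors' implementation.

```python
import torch

def decorrelation_penalty(features, bias_attr):
    """Mean squared cross-covariance between features (N, D) and the bias
    variable (N, 1); zero when they are linearly uncorrelated."""
    f = features - features.mean(dim=0, keepdim=True)
    b = bias_attr - bias_attr.mean(dim=0, keepdim=True)
    cov = (f * b).mean(dim=0)            # one covariance estimate per feature
    return (cov ** 2).mean()

def debiased_objective(task_loss, features, bias_attr, lam=1.0):
    """Accuracy-oriented loss plus the weighted decorrelation term."""
    return task_loss + lam * decorrelation_penalty(features, bias_attr)
```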
9

Kim, Hyo-eun. "Fairness Criteria and Mitigation of AI Bias." Korean Journal of Psychology: General 40, no. 4 (December 25, 2021): 459–85. http://dx.doi.org/10.22257/kjp.2021.12.40.4.459.

10

Park, Souneil, Seungwoo Kang, Sangyoung Chung, and Junehwa Song. "A Computational Framework for Media Bias Mitigation." ACM Transactions on Interactive Intelligent Systems 2, no. 2 (June 2012): 1–32. http://dx.doi.org/10.1145/2209310.2209311.

11

Hopkins, Taylor. "Bias Mitigation: Identifying Barriers and Finding Solutions." Forensic Science International: Synergy 6 (2023): 100420. http://dx.doi.org/10.1016/j.fsisyn.2023.100420.

12

Hudson, P., W. J. W. Botzen, H. Kreibich, P. Bubeck, and J. C. J. H. Aerts. "Evaluating the effectiveness of flood damage mitigation measures by the application of propensity score matching." Natural Hazards and Earth System Sciences 14, no. 7 (July 15, 2014): 1731–47. http://dx.doi.org/10.5194/nhess-14-1731-2014.

Abstract:
Abstract. The employment of damage mitigation measures (DMMs) by individuals is an important component of integrated flood risk management. In order to promote efficient damage mitigation measures, accurate estimates of their damage mitigation potential are required. That is, for correctly assessing the damage mitigation measures' effectiveness from survey data, one needs to control for sources of bias. A biased estimate can occur if risk characteristics differ between individuals who have, or have not, implemented mitigation measures. This study removed this bias by applying an econometric evaluation technique called propensity score matching (PSM) to a survey of German households along three major rivers that were flooded in 2002, 2005, and 2006. The application of this method detected substantial overestimates of mitigation measures' effectiveness if bias is not controlled for, ranging from nearly EUR 1700 to 15 000 per measure. Bias-corrected effectiveness estimates of several mitigation measures show that these measures are still very effective since they prevent between EUR 6700 and 14 000 of flood damage per flood event. This study concludes with four main recommendations regarding how to better apply propensity score matching in future studies, and makes several policy recommendations.
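As a rough illustration of the propensity score matching idea used in the study, the sketch below estimates each household's propensity to adopt a mitigation measure from its risk characteristics, matches adopters to the most similar non-adopters, and averages the damage difference. The column names and the 1-nearest-neighbour matching rule are assumptions, not the authors' exact specification.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def damage_effect_psm(df, treatment="has_measure", outcome="flood_damage",
                      covariates=("flood_experience", "building_value", "distance_to_river")):
    """Average damage difference for adopters (ATT) via 1-NN propensity matching."""
    X = df[list(covariates)].to_numpy()
    t = df[treatment].to_numpy().astype(int)
    y = df[outcome].to_numpy(dtype=float)

    # Propensity score: probability of having implemented the measure
    ps = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]

    treated, control = np.where(t == 1)[0], np.where(t == 0)[0]
    diffs = []
    for i in treated:
        j = control[np.argmin(np.abs(ps[control] - ps[i]))]   # closest non-adopter
        diffs.append(y[i] - y[j])
    return float(np.mean(diffs))       # negative values indicate damage avoided
```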
13

Hudson, P., W. J. W. Botzen, H. Kreibich, P. Bubeck, and J. C. J. H. Aerts. "Evaluating the effectiveness of flood damage mitigation measures by the application of Propensity Score Matching." Natural Hazards and Earth System Sciences Discussions 2, no. 1 (January 22, 2014): 681–723. http://dx.doi.org/10.5194/nhessd-2-681-2014.

Abstract:
Abstract. The employment of damage mitigation measures by individuals is an important component of integrated flood risk management. In order to promote efficient damage mitigation measures, accurate estimates of their damage mitigation potential are required. That is, for correctly assessing the damage mitigation measures' effectiveness from survey data, one needs to control for sources of bias. A biased estimate can occur if risk characteristics differ between individuals who have, or have not, implemented mitigation measures. This study removed this bias by applying an econometric evaluation technique called Propensity Score Matching to a survey of German households along two major rivers that were flooded in 2002, 2005 and 2006. The application of this method detected substantial overestimates of mitigation measures' effectiveness if bias is not controlled for, ranging from nearly € 1700 to € 15 000 per measure. Bias-corrected effectiveness estimates of several mitigation measures show that these measures are still very effective since they prevent between € 6700–14 000 of flood damage. This study concludes with four main recommendations regarding how to better apply Propensity Score Matching in future studies, and makes several policy recommendations.
14

Devasenapathy, K., and Arun Padmanabhan. "Uncovering Bias: Exploring Machine Learning Techniques for Detecting and Mitigating Bias in Data – A Literature Review." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 9 (October 30, 2023): 776–81. http://dx.doi.org/10.17762/ijritcc.v11i9.8965.

Abstract:
The presence of Bias in models developed using machine learning algorithms has emerged as a critical issue. This literature review explores the topic of uncovering the existence of bias in data and the application of techniques for detecting and mitigating Bias. The review provides a comprehensive analysis of the existing literature, focusing on pre-processing techniques, post-pre-processing techniques, and fairness constraints employed to uncover and address the existence of Bias in machine learning models. The effectiveness, limitations, and trade-offs of these techniques are examined, highlighting their impact on advocating fairness and equity in decision-making processes. The methodology consists of two key steps: data preparation and bias analysis, followed by machine learning model development and evaluation. In the data preparation phase, the dataset is analyzed for biases and pre-processed using techniques like reweighting or relabeling to reduce bias. In the model development phase, suitable algorithms are selected, and fairness metrics are defined and optimized during the training process. The models are then evaluated using performance and fairness measures and the best-performing model is chosen. The methodology ensures a systematic exploration of machine learning techniques to detect and mitigate bias, leading to more equitable decision-making. The review begins by examining the techniques of pre-processing, which involve cleaning the data, selecting the features, feature engineering, and sampling. These techniques play an important role in preparing the data to reduce bias and promote fairness in machine learning models. The analysis highlights various studies that have explored the effectiveness of these techniques in uncovering and mitigating bias in data, contributing to the development of more equitable and unbiased machine learning models. Next, the review delves into post-pre-processing techniques that focus on detecting and mitigating bias after the initial data preparation steps. These techniques include bias detection methods that assess the disparate impact or disparate treatment in model predictions, as well as bias mitigation techniques that modify model outputs to achieve fairness across different groups. The evaluation of these techniques, their performance metrics, and potential trade-offs between fairness and accuracy are discussed, providing insights into the challenges and advancements in bias mitigation. Lastly, the review examines fairness constraints, which involve the imposition of rules or guidelines on machine learning algorithms to ensure fairness in predictions or decision-making processes. The analysis explores different fairness constraints, such as demographic parity, equalized odds, and predictive parity, and their effectiveness in reducing bias and advocating fairness in machine learning models. Overall, this literature review provides a comprehensive understanding of the techniques employed to uncover and mitigate the existence of bias in machine learning models. By examining pre-processing techniques, post-pre-processing techniques, and fairness constraints, the review contributes to the development of more fair and unbiased machine learning models, fostering equity and ethical decision-making in various domains. 
By examining relevant studies, this review provides insights into the effectiveness and limitations of various pre-processing techniques for bias detection and mitigation via Pre-processing, Adversarial learning, Fairness Constraints, and Post-processing techniques.
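To make the fairness constraints discussed above (demographic parity, equalized odds) concrete, here is a small, hedged sketch that computes both gaps from binary predictions; the synthetic arrays and group encoding are illustrative assumptions only.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """P(pred=1 | unprivileged) - P(pred=1 | privileged); 0 means parity."""
    return y_pred[group == 0].mean() - y_pred[group == 1].mean()

def equalized_odds_difference(y_true, y_pred, group):
    """Largest gap in true-positive rate or false-positive rate between groups."""
    gaps = []
    for label in (1, 0):                      # TPR gap, then FPR gap
        mask = y_true == label
        rate0 = y_pred[mask & (group == 0)].mean()
        rate1 = y_pred[mask & (group == 1)].mean()
        gaps.append(abs(rate0 - rate1))
    return max(gaps)

# Tiny usage example with synthetic predictions
y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # 0 = unprivileged, 1 = privileged
print(demographic_parity_difference(y_pred, group))
print(equalized_odds_difference(y_true, y_pred, group))
```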
15

Clegg, Benjamin A., Brian McKernan, Rosa M. Martey, Sarah M. Taylor, Jennifer Stromer-Galley, Kate Kenski, E. Tobi Saulnier, et al. "Effective Mitigation of Anchoring Bias, Projection Bias, and Representativeness Bias from Serious Game-based Training." Procedia Manufacturing 3 (2015): 1558–65. http://dx.doi.org/10.1016/j.promfg.2015.07.438.

16

Cai, Zhenyu. "Quantum Error Mitigation using Symmetry Expansion." Quantum 5 (September 21, 2021): 548. http://dx.doi.org/10.22331/q-2021-09-21-548.

Abstract:
Even with the recent rapid developments in quantum hardware, noise remains the biggest challenge for the practical applications of any near-term quantum devices. Full quantum error correction cannot be implemented in these devices due to their limited scale. Therefore instead of relying on engineered code symmetry, symmetry verification was developed which uses the inherent symmetry within the physical problem we try to solve. In this article, we develop a general framework named symmetry expansion which provides a wide spectrum of symmetry-based error mitigation schemes beyond symmetry verification, enabling us to achieve different balances between the estimation bias and the sampling cost of the scheme. We show that certain symmetry expansion schemes can achieve a smaller estimation bias than symmetry verification through cancellation between the biases due to the detectable and undetectable noise components. A practical way to search for such a small-bias scheme is introduced. By numerically simulating the Fermi-Hubbard model for energy estimation, the small-bias symmetry expansion we found can achieve an estimation bias 6 to 9 times below what is achievable by symmetry verification when the average number of circuit errors is between 1 to 2. The corresponding sampling cost for random shot noise reduction is just 2 to 6 times higher than symmetry verification. Beyond symmetries inherent to the physical problem, our formalism is also applicable to engineered symmetries. For example, the recent scheme for exponential error suppression using multiple noisy copies of the quantum device is just a special case of symmetry expansion using the permutation symmetry among the copies.
17

Siddique, Sunzida, Mohd Ariful Haque, Roy George, Kishor Datta Gupta, Debashis Gupta, and Md Jobair Hossain Faruk. "Survey on Machine Learning Biases and Mitigation Techniques." Digital 4, no. 1 (December 20, 2023): 1–68. http://dx.doi.org/10.3390/digital4010001.

Abstract:
Machine learning (ML) has become increasingly prevalent in various domains. However, ML algorithms sometimes give unfair outcomes and discrimination against certain groups. Thereby, bias occurs when our results produce a decision that is systematically incorrect. At various phases of the ML pipeline, such as data collection, pre-processing, model selection, and evaluation, these biases appear. Bias reduction methods for ML have been suggested using a variety of techniques. By changing the data or the model itself, adding more fairness constraints, or both, these methods try to lessen bias. The best technique relies on the particular context and application because each technique has advantages and disadvantages. Therefore, in this paper, we present a comprehensive survey of bias mitigation techniques in machine learning (ML) with a focus on in-depth exploration of methods, including adversarial training. We examine the diverse types of bias that can afflict ML systems, elucidate current research trends, and address future challenges. Our discussion encompasses a detailed analysis of pre-processing, in-processing, and post-processing methods, including their respective pros and cons. Moreover, we go beyond qualitative assessments by quantifying the strategies for bias reduction and providing empirical evidence and performance metrics. This paper serves as an invaluable resource for researchers, practitioners, and policymakers seeking to navigate the intricate landscape of bias in ML, offering both a profound understanding of the issue and actionable insights for responsible and effective bias mitigation.
18

Mosteiro, Pablo, Jesse Kuiper, Judith Masthoff, Floortje Scheepers, and Marco Spruit. "Bias Discovery in Machine Learning Models for Mental Health." Information 13, no. 5 (May 5, 2022): 237. http://dx.doi.org/10.3390/info13050237.

Abstract:
Fairness and bias are crucial concepts in artificial intelligence, yet they are relatively ignored in machine learning applications in clinical psychiatry. We computed fairness metrics and present bias mitigation strategies using a model trained on clinical mental health data. We collected structured data related to the admission, diagnosis, and treatment of patients in the psychiatry department of the University Medical Center Utrecht. We trained a machine learning model to predict future administrations of benzodiazepines on the basis of past data. We found that gender plays an unexpected role in the predictions—this constitutes bias. Using the AI Fairness 360 package, we implemented reweighing and discrimination-aware regularization as bias mitigation strategies, and we explored their implications for model performance. This is the first application of bias exploration and mitigation in a machine learning model trained on real clinical psychiatry data.
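The study applies reweighing (and discrimination-aware regularization) via IBM's AI Fairness 360 package. As a hedged illustration of the reweighing step only, the sketch below builds a toy dataset and measures the statistical-parity gap before and after; the DataFrame, column names, and group encodings are assumptions, not the clinical data used in the paper.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.algorithms.preprocessing import Reweighing
from aif360.metrics import BinaryLabelDatasetMetric

# Toy stand-in for structured admission/diagnosis/treatment data
df = pd.DataFrame({
    "gender":           [0, 0, 0, 1, 1, 1, 0, 1],   # protected attribute (0 = unprivileged)
    "prior_admissions": [2, 0, 1, 3, 0, 1, 4, 2],
    "benzodiazepine":   [1, 0, 0, 1, 0, 1, 1, 1],   # label to predict
})

dataset = BinaryLabelDataset(df=df, label_names=["benzodiazepine"],
                             protected_attribute_names=["gender"])

priv, unpriv = [{"gender": 1}], [{"gender": 0}]
print("mean difference before:",
      BinaryLabelDatasetMetric(dataset, unprivileged_groups=unpriv,
                               privileged_groups=priv).mean_difference())

# Reweighing assigns instance weights that balance label rates across groups
rw = Reweighing(unprivileged_groups=unpriv, privileged_groups=priv)
reweighted = rw.fit_transform(dataset)
print("mean difference after:",
      BinaryLabelDatasetMetric(reweighted, unprivileged_groups=unpriv,
                               privileged_groups=priv).mean_difference())
```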
19

De Biasio, Francesco, and Stefano Zecchetto. "Tuning the Model Winds in Perspective of Operational Storm Surge Prediction in the Adriatic Sea." Journal of Marine Science and Engineering 11, no. 3 (March 3, 2023): 544. http://dx.doi.org/10.3390/jmse11030544.

Abstract:
In the Adriatic Sea, the sea surface wind forecasts are often underestimated, with detrimental effects on the accuracy of sea level and storm surge predictions. Among the various causes, this mainly depends on the meteorological forcing of the wind. In this paper, we try to improve an existing numerical method, called "wind bias mitigation", which relies on scatterometer wind observations to determine a multiplicative factor Δw, whose application to the model wind reduces its inaccuracy with respect to the scatterometer wind. Following four different mathematical approaches, we formulate and discuss seven new expressions of the multiplicative factor. The eight different expressions of the bias mitigation factor, the original one and the seven formulated in this study, are assessed with the aid of four datasets of real sea surface wind events in a variety of sea level conditions in the northern Adriatic Sea, several of which gave rise to high water events in the Venice Lagoon. The statistical analysis shows that some of the seven new formulations of the wind bias mitigation factor are able to lower the model-scatterometer bias with respect to the original formulation. For some others of the seven new formulations, the absolute bias of the mitigated model wind field with respect to the scatterometer is lower than that of the unmodified model wind field in 81% of the considered storm surge events in the area of interest, compared with 73% for the original formulation of the wind bias mitigation. This represents an 11% improvement in the bias mitigation process with respect to the original formulation. The best performing of the seven new wind bias mitigation factors, the one based on the linear least square regression of the squared wind speed (LLSRE), has been implemented in the operational sea level forecast chain of the Tide Forecast and Early Warning Centre of the Venice Municipality (CPSM), to provide support to the operation of the MO.SE. barriers in Venice.
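The best-performing formulation above (LLSRE) derives the multiplicative factor Δw from a linear least-squares regression involving the squared wind speed. The abstract does not give the exact regression, so the sketch below shows one plausible reading, a zero-intercept least-squares fit of squared scatterometer speeds on squared model speeds; it is an illustration only, not the paper's formula, and the toy numbers are invented.

```python
import numpy as np

def wind_bias_factor(w_model, w_scat):
    """Multiplicative correction factor: least-squares fit (through the origin)
    of squared scatterometer wind speed on squared model wind speed, so that
    (factor * w_model)**2 approximates w_scat**2 in the least-squares sense."""
    w_model, w_scat = np.asarray(w_model, float), np.asarray(w_scat, float)
    k = np.sum(w_model**2 * w_scat**2) / np.sum(w_model**4)
    return float(np.sqrt(k))

# Illustrative collocated wind speeds (m/s); the model underestimates the wind
w_model = np.array([6.0, 8.5, 10.0, 12.0, 15.0])
w_scat  = np.array([7.1, 9.8, 11.6, 13.9, 17.2])
dw = wind_bias_factor(w_model, w_scat)
print(dw, dw * w_model)   # mitigated model wind = factor * model wind
```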
20

Prater, James, Konstantinos Kirytopoulos, and Tony Ma. "Optimism bias within the project management context." International Journal of Managing Projects in Business 10, no. 2 (April 4, 2017): 370–85. http://dx.doi.org/10.1108/ijmpb-07-2016-0063.

Abstract:
Purpose: One of the major challenges for any project is to prepare and develop an achievable baseline schedule and thus set the project up for success, rather than failure. The purpose of this paper is to explore and investigate research outputs in one of the major causes, optimism bias, to identify problems with developing baseline schedules and analyse mitigation techniques and their effectiveness recommended by research to minimise the impact of this bias. Design/methodology/approach: A systematic quantitative literature review was followed, examining Project Management Journals, documenting the mitigation approaches recommended and then reviewing whether these approaches were validated by research. Findings: Optimism bias proved to be widely accepted as a major cause of unrealistic scheduling for projects, and there is a common understanding as to what it is and the effects that it has on original baseline schedules. Based upon this review, the most recommended mitigation method is Flyvbjerg’s “Reference class,” which has been developed based upon Kahneman’s “Outside View”. Both of these mitigation techniques are based upon using an independent third party to review the estimate. However, within the papers reviewed, apart from the engineering projects, there has been no experimental and statistically validated research into the effectiveness of this method. The majority of authors who have published on this topic are based in Europe. Research limitations/implications: The short-listed papers for this review referred mainly to non-engineering projects which included information technology focussed ones. Thus, on one hand, empirical research is needed for engineering projects, while on the other hand, the lack of tangible evidence for the effectiveness of methods related to the alleviation of optimism bias issues calls for greater research into the effectiveness of mitigation techniques for not only engineering projects, but for all projects. Originality/value: This paper documents the growth within the project management research literature over time on the topic of optimism bias. Specifically, it documents the various methods recommended to mitigate the phenomenon and highlights quantitatively the research undertaken on the subject. Moreover, it introduces paths for further research.
21

Chu, Charlene, Simon Donato-Woodger, Shehroz Khan, Kathleen Leslie, Tianyu Shi, Rune Nyrup, and Amanda Grenier. "STRATEGIES TO MITIGATE MACHINE LEARNING BIAS AFFECTING OLDER ADULTS: RESULTS FROM A SCOPING REVIEW." Innovation in Aging 7, Supplement_1 (December 1, 2023): 717–18. http://dx.doi.org/10.1093/geroni/igad104.2325.

Abstract:
Abstract Digital ageism, defined as age-related bias in artificial intelligence (AI) and technological systems, has emerged as a significant concern for its potential impact on society, health, equity, and older people’s well-being. This scoping review aims to identify mitigation strategies used in research studies to address age-related bias in machine learning literature. We conducted a scoping review following Arksey & O’Malley’s methodology, and completed a comprehensive search strategy of five databases (Web of Science, CINAHL, EMBASE, IEEE Xplore, and ACM digital library). Articles were included if there was an AI application, age-related bias, and the use of a mitigation strategy. Efforts to mitigate digital ageism were sparse: our search generated 7595 articles, but only a limited number of them met the inclusion criteria. Upon screening, we identified only nine papers which attempted to mitigate digital ageism. Of these, eight involved computer vision models (facial, age prediction, brain age) while one predicted activity based on accelerometer and vital sign measurements. Three broad categories of approaches to mitigating bias in AI were identified: i) sample modification: creating a smaller, more balanced sample from the existing dataset; ii) data augmentation: modifying images to create more training data from the existing datasets without adding additional images; and iii) application of statistical or algorithmic techniques to reduce bias. Digital ageism is a newly-established topic of research, and can affect machine learning models through multiple pathways. Our results advance research on digital ageism by presenting the challenges and possibilities for mitigating digital ageism in machine learning models.
22

Dunbar, Norah E., Matthew L. Jensen, Claude H. Miller, Elena Bessarabova, Yu-Hao Lee, Scott N. Wilson, Javier Elizondo, et al. "Mitigation of Cognitive Bias with a Serious Game." International Journal of Game-Based Learning 7, no. 4 (October 2017): 86–100. http://dx.doi.org/10.4018/ijgbl.2017100105.

Abstract:
One of the benefits of using digital games for education is that games can provide feedback for learners to assess their situation and correct their mistakes. We conducted two studies to examine the effectiveness of different feedback design (timing, duration, repeats, and feedback source) in a serious game designed to teach learners about cognitive biases. We also compared the digital game-based learning condition to a professional training video. Overall, the digital game was significantly more effective than the video condition. Longer durations and repeats improve the effects on bias-mitigation. Surprisingly, there was no significant difference between just-in-time feedback and delayed feedback, and computer-generated feedback was more effective than feedback from other players.
23

Guan, Maime, and Joachim Vandekerckhove. "A Bayesian approach to mitigation of publication bias." Psychonomic Bulletin & Review 23, no. 1 (July 1, 2015): 74–86. http://dx.doi.org/10.3758/s13423-015-0868-6.

24

Davison, Robert M. "Editorial - Cultural Bias in Reviews and Mitigation Options." Information Systems Journal 24, no. 6 (August 5, 2014): 475–77. http://dx.doi.org/10.1111/isj.12046.

25

Korzhenevych, I. P., and O. V. Gots. "THE MITIGATION STEERING BIAS CURVES FOR INDUSTRIAL TRANSPORT." Science and Transport Progress, no. 16 (June 25, 2007): 26–28. http://dx.doi.org/10.15802/stp2007/17607.

26

Penn, Jerrod M., Daniel R. Petrolia, and J. Matthew Fannin. "Hypothetical bias mitigation in representative and convenience samples." Applied Economic Perspectives and Policy 45, no. 2 (May 18, 2023): 721–43. http://dx.doi.org/10.1002/aepp.13374.

27

Li, Xuesong, Dajun Sun, and Zhongyi Cao. "Mitigation method of acoustic doppler velocity measurement bias." Ocean Engineering 306 (August 2024): 118082. http://dx.doi.org/10.1016/j.oceaneng.2024.118082.

28

Kempf, Arlo, and Preeti Nayak. "Practicing Professional Discomfort as Self-Location: White Teacher Experiences With Race Bias Mitigation." Journal of the Canadian Association for Curriculum Studies 18, no. 1 (June 27, 2020): 51–52. http://dx.doi.org/10.25071/1916-4467.40584.

Abstract:
This study is among the first in Canada to research implicit race bias mitigation in secondary teacher practice. The findings emerge from data collected from a ten-month engagement period with 12 Ontario teachers who, alongside the research team, codesigned a race bias mitigation plan based on four to six varied mitigation strategies. These included technical and dialogical activities and a required reading of one anti-racist and/or anti-colonial book. Throughout the project, teachers engaged in ongoing reflection, journaling, email exchanges and an in-person interview. A thematic analysis of this data was completed (Ryan & Bernard, 2003). The design of this study was underpinned by a braiding of social psychology with critical race theory, second wave White teacher identity studies and other approaches. This multimodal approach brings a critical and dynamic reading of whiteness in education. Three broad preliminary findings have emerged from this study. First, teacher perceptions of efficacy of implicit race bias mitigation strategies relied on their noticing of conscious changes in their perceptions of and experiences with race, racism and Black, Indigenous and People of Colour (BIPOC) students. Second, the concurrent use of critical anti-racist strategies, alongside implicit race bias mitigation strategies, seemed to instigate participants’ deepest reflections on race. Finally, this synergy and the long duration of the project contributed to the participants’ evolving understandings of racism in education as a phenomenon that goes beyond the domain of the individual. The results may deepen our understandings of the challenges and opportunities surrounding implicit race bias mitigation work in terms of teacher practices and theoretical considerations.
29

Kohn, Rachel. "Eliminating Bias in Survival Estimation: Statistical Bias Mitigation Is the First Step Forward*." Critical Care Medicine 52, no. 3 (February 21, 2024): 506–9. http://dx.doi.org/10.1097/ccm.0000000000006110.

30

Bulut, Solmaz, Mehdi Rostami, Shahla Shokatpour Lotfi, Naser Jafarzadeh, Sefa Bulut, Baidi Bukhori, Seyed Hadi Seyed Alitabar, Zohreh Zadhasn, and Farzaneh Mardani. "The Impact of Counselor Bias in Assessment: A Comprehensive Review and Best Practices." Journal of Assessment and Research in Applied Counseling 5, no. 4 (2023): 89–103. http://dx.doi.org/10.61838/kman.jarac.5.4.11.

Abstract:
Objective: This review article aims to comprehensively explore the impact of counselor bias on assessment processes within the counseling profession. It seeks to identify the types and manifestations of biases, assess their implications on counseling outcomes, and recommend best practices for mitigating these biases to promote more equitable counseling practices. Methods and Materials: A systematic literature review was conducted, examining peer-reviewed articles, books, and conference proceedings published between 1997 and 2023. Databases such as PsycINFO, PubMed, ERIC, and Google Scholar were searched using keywords related to counselor bias, psychological assessment, and best practices in bias mitigation. The selection criteria focused on studies that explicitly addressed counselor biases in the context of assessment practices. Theoretical frameworks relevant to understanding and addressing counselor bias, such as Implicit Association Theory, Social Cognition Theory, and the Multicultural Counseling Competency Framework, were also reviewed to provide a conceptual backdrop for the analysis. Findings: The review reveals that counselor bias—spanning from pre-assessment and in-assessment to post-assessment phases—significantly undermines the objectivity and fairness of psychological assessments. These biases, deeply rooted in societal stereotypes and personal prejudices, manifest in various forms, including racial, ethnic, gender, and socioeconomic biases. Theoretical frameworks highlight the complexity of counselor biases and underscore the importance of self-awareness, reflective practice, and multicultural competencies in mitigating their impact. Best practices identified include enhancing counselor self-awareness, integrating comprehensive bias-awareness training in counselor education, and implementing systemic changes to support equity in counseling practices. Conclusion: Counselor bias presents a pervasive challenge within the counseling profession, impacting the validity and efficacy of psychological assessments. Addressing this issue requires a concerted effort that encompasses individual, educational, and systemic interventions. By adopting best practices focused on bias mitigation and promoting cultural sensitivity, the counseling profession can move towards more equitable and effective practices. Future research should aim to explore the effectiveness of specific interventions and expand the understanding of biases beyond the traditionally examined dimensions.
31

Nazer, Lama H., Razan Zatarah, Shai Waldrip, Janny Xue Chen Ke, Mira Moukheiber, Ashish K. Khanna, Rachel S. Hicklen, et al. "Bias in artificial intelligence algorithms and recommendations for mitigation." PLOS Digital Health 2, no. 6 (June 22, 2023): e0000278. http://dx.doi.org/10.1371/journal.pdig.0000278.

Abstract:
The adoption of artificial intelligence (AI) algorithms is rapidly increasing in healthcare. Such algorithms may be shaped by various factors such as social determinants of health that can influence health outcomes. While AI algorithms have been proposed as a tool to expand the reach of quality healthcare to underserved communities and improve health equity, recent literature has raised concerns about the propagation of biases and healthcare disparities through implementation of these algorithms. Thus, it is critical to understand the sources of bias inherent in AI-based algorithms. This review aims to highlight the potential sources of bias within each step of developing AI algorithms in healthcare, starting from framing the problem, data collection, preprocessing, development, and validation, as well as their full implementation. For each of these steps, we also discuss strategies to mitigate the bias and disparities. A checklist was developed with recommendations for reducing bias during the development and implementation stages. It is important for developers and users of AI-based algorithms to keep these important considerations in mind to advance health equity for all populations.
32

Somasundaram, Ananthi, David Vállez García, Elisabeth Pfaehler, Joyce van Sluis, Rudi A. J. O. Dierckx, Elisabeth G. E. de Vries, and Ronald Boellaard. "Mitigation of noise-induced bias of PET radiomic features." PLOS ONE 17, no. 8 (August 25, 2022): e0272643. http://dx.doi.org/10.1371/journal.pone.0272643.

Abstract:
Introduction: One major challenge in PET radiomics is its sensitivity to noise. Low signal-to-noise ratio (SNR) affects not only the precision but also the accuracy of quantitative metrics extracted from the images resulting in noise-induced bias. This phantom study aims to identify the radiomic features that are robust to noise in terms of precision and accuracy and to explore some methods that might help to correct noise-induced bias. Methods: A phantom containing three 18F-FDG filled 3D printed inserts, reflecting heterogeneous tracer uptake and realistic tumor shapes, was used in the study. The three different phantom inserts were filled and scanned with three different tumor-to-background ratios, simulating a total of nine different tumors. From the 40-minute list-mode data, ten frames each for 5 s, 10 s, 30 s, and 120 s frame duration were reconstructed to generate images with different noise levels. Under these noise conditions, the precision and accuracy of the radiomic features were analyzed using intraclass correlation coefficient (ICC) and similarity distance metric (SDM) respectively. Based on the ICC and SDM values, the radiomic features were categorized into four groups: poor, moderate, good, and excellent precision and accuracy. A “difference image” created by subtracting two statistically equivalent replicate images was used to develop a model to correct the noise-induced bias. Several regression methods (e.g., linear, exponential, sigmoid, and power-law) were tested. The best fitting model was chosen based on Akaike information criteria. Results: Several radiomic features derived from low SNR images have high repeatability, with 68% of radiomic features having ICC ≥ 0.9 for images with a frame duration of 5 s. However, most features show a systematic bias that correlates with the increase in noise level. Out of 143 features with noise-induced bias, the SDM values were improved based on a regression model (53 features to excellent and 67 to good) indicating that the noise-induced bias of these features can be, at least partially, corrected. Conclusion: To have a predictive value, radiomic features should reflect tumor characteristics and be minimally affected by noise. The present study has shown that it is possible to correct for noise-induced bias, at least in a subset of the features, using a regression model based on the local image noise estimates.
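The correction described above fits a regression model driven by local noise estimates obtained from a "difference image" of two replicate scans. The abstract does not specify the fitted form per feature, so the sketch below simply removes a linear trend of feature value versus estimated noise level; the function names, toy numbers, and the linear form are assumptions, not the authors' best-fitting model.

```python
import numpy as np

def estimate_noise(img_a, img_b):
    """Noise estimate from two statistically equivalent replicate images:
    the standard deviation of their difference, scaled by 1/sqrt(2)."""
    return np.std(img_a - img_b) / np.sqrt(2.0)

def correct_noise_bias(feature_values, noise_levels, reference_noise=0.0):
    """Fit feature ~ a*noise + b and return values extrapolated to the
    reference (e.g., negligible) noise level."""
    a, b = np.polyfit(noise_levels, feature_values, deg=1)
    predicted_bias = a * (np.asarray(noise_levels) - reference_noise)
    return np.asarray(feature_values) - predicted_bias

# Example: a texture feature measured at four noise levels (frame durations)
noise   = np.array([0.8, 0.5, 0.3, 0.1])      # higher = shorter frame
feature = np.array([4.9, 4.1, 3.6, 3.2])      # drifts upward with noise
print(correct_noise_bias(feature, noise))
```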
33

Bartley, Tanya, Rebecca O'Connor, and Kenya Beard. "Should All Nurses Be Required to Complete Implicit Bias Training?" AJN, American Journal of Nursing 123, no. 11 (November 2023): 20–21. http://dx.doi.org/10.1097/01.naj.0000995336.31551.4e.

34

Sethi, Rahul, Vedang Ratan Vatsa, and Parth Chhaparwal. "IDENTIFICATION AND MITIGATION OF ALGORITHMIC BIAS THROUGH POLICY INSTRUMENTS." International Journal of Advanced Research 8, no. 7 (July 31, 2020): 1515–22. http://dx.doi.org/10.21474/ijar01/11418.

35

Ashokan, Ashwathy, and Christian Haas. "Fairness metrics and bias mitigation strategies for rating predictions." Information Processing & Management 58, no. 5 (September 2021): 102646. http://dx.doi.org/10.1016/j.ipm.2021.102646.

36

Fortunato, S., A. Flammini, F. Menczer, and A. Vespignani. "Topical interests and the mitigation of search engine bias." Proceedings of the National Academy of Sciences 103, no. 34 (August 10, 2006): 12684–89. http://dx.doi.org/10.1073/pnas.0605525103.

37

Rohbani, Nezam, Mojtaba Ebrahimi, Seyed-Ghassem Miremadi, and Mehdi B. Tahoori. "Bias Temperature Instability Mitigation via Adaptive Cache Size Management." IEEE Transactions on Very Large Scale Integration (VLSI) Systems 25, no. 3 (March 2017): 1012–22. http://dx.doi.org/10.1109/tvlsi.2016.2606579.

38

Li, Roger Zhe. "Metric Optimization and Mainstream Bias Mitigation in Recommender Systems." ACM SIGIR Forum 57, no. 2 (December 2023): 1–2. http://dx.doi.org/10.1145/3642979.3643010.

Abstract:
Recommender Systems have drawn extensive attention in recent decades. The most important factor to make a recommender succeed is user satisfaction, which is largely reflected by the recommendation accuracy. In accordance, this thesis dives into recommendation accuracy from two different perspectives that, unfortunately, are at tension with each other: achieving the maximum overall recommendation accuracy, and balancing that accuracy among all users. The first part of this thesis focuses on maximizing the overall recommendation accuracy Li et al. [2021a]. This accuracy is usually evaluated with some user-oriented metrics tailored to the recommendation scenario. However, recommendation models could be trained to maximize some other generic criteria that do not necessarily align with the criteria ultimately captured by the metric above. Recent research usually assumes that the metric used for evaluation is also the metric used for training. We challenge this assumption, mainly because some metrics are more informative than others. Indeed, we show that models trained via the optimization of a loss inspired by Rank-Biased Precision (RBP) tend to yield higher accuracy, even when accuracy is measured with metrics other than RBP. However, the superiority of this RBP-inspired loss stems from further benefiting users who are already well-served, rather than helping those who are not. This observation inspires the second part of this thesis, where our focus turns to helping nonmainstream users, who are difficult to recommend to either because of the lack of data for effective modeling or a niche taste that is hard to match similar users. Our first effort consists in using side data, beyond the user-item interaction matrix, so that users and items are better represented. This will be of benefit especially for the non-mainstream users, for which the user-item matrix alone is ineffective. We propose Neural AutoEncoder Collaborative Filtering (NAECF) Li et al. [2021b], an adversarial learning architecture that, in addition to maximizing the recommendation accuracy, leverages side data to preserve the intrinsic properties of users and items and show that NAECF leads to better recommendations specially for non-mainstream users, while at the same time there is a marginal loss for the mainstream ones. Our second effort Li et al. [2023] consists in explicitly focusing more on non-mainstream users in a recommendation model. In particular, we propose a mechanism based on cost-sensitive learning that weighs users according to their mainstreamness, so that they get more attention during training. The result is a recommendation model tailored to non-mainstream users, that narrows the accuracy gap, and again at negligible cost to the mainstream users. Awarded by : Delft University of Technology, Delft, The Netherlands on 14 November 2023. Supervised by : Alan Hanjalic and Julián Urbano. Available at : https://doi.org/10.4233/uuid:235a5f5c-5e18-4e67-a8e6-84a7e310db12.
39

Zhang, Chunxiao. "Exploration, detection, and mitigation: Unveiling gender bias in NLP." Applied and Computational Engineering 52, no. 1 (March 27, 2024): 62–68. http://dx.doi.org/10.54254/2755-2721/52/20241234.

Abstract:
Natural Language Processing (NLP) systems have a mundane impact, yet they harbour either obvious or potential gender bias. The automation of decision-making in NLP models even exacerbates unfair treatment. In recent years, researchers have started to notice this issue and have made some approaches to detect and mitigate these biases, yet no consensus on the approaches exists. This paper discusses the interdisciplinary field of linguistics and computer sciences by presenting the most common gender bias categories and breaking them down with ethical and artificial intelligence approaches. Specific methods for detecting and minimizing bias are shown around biases present in raw data, annotator, model, and the linguistic gender system. In this paper, an overview of the hotspots and future perspectives of this research topic is presented. Limitations of some detection methods are pinpointed, providing novel insights into future research.
40

Alshareef, Norah, Xiaohong Yuan, Kaushik Roy, and Mustafa Atay. "A Study of Gender Bias in Face Presentation Attack and Its Mitigation." Future Internet 13, no. 9 (September 14, 2021): 234. http://dx.doi.org/10.3390/fi13090234.

Abstract:
In biometric systems, the process of identifying or verifying people using facial data must be highly accurate to ensure a high level of security and credibility. Many researchers investigated the fairness of face recognition systems and reported demographic bias. However, there was not much study on face presentation attack detection technology (PAD) in terms of bias. This research sheds light on bias in face spoofing detection by implementing two phases. First, two CNN (convolutional neural network)-based presentation attack detection models, ResNet50 and VGG16 were used to evaluate the fairness of detecting imposter attacks on the basis of gender. In addition, different sizes of Spoof in the Wild (SiW) testing and training data were used in the first phase to study the effect of gender distribution on the models’ performance. Second, the debiasing variational autoencoder (DB-VAE) (Amini, A., et al., Uncovering and Mitigating Algorithmic Bias through Learned Latent Structure) was applied in combination with VGG16 to assess its ability to mitigate bias in presentation attack detection. Our experiments exposed minor gender bias in CNN-based presentation attack detection methods. In addition, it was proven that imbalance in training and testing data does not necessarily lead to gender bias in the model’s performance. Results proved that the DB-VAE approach (Amini, A., et al., Uncovering and Mitigating Algorithmic Bias through Learned Latent Structure) succeeded in mitigating bias in detecting spoof faces.
41

Chu, Charlene, Simon Donato-Woodger, Shehroz S. Khan, Tianyu Shi, Kathleen Leslie, Samira Abbasgholizadeh-Rahimi, Rune Nyrup, and Amanda Grenier. "Strategies to Mitigate Age-Related Bias in Machine Learning: Scoping Review." JMIR Aging 7 (March 22, 2024): e53564. http://dx.doi.org/10.2196/53564.

Abstract:
Background: Research suggests that digital ageism, that is, age-related bias, is present in the development and deployment of machine learning (ML) models. Despite the recognition of the importance of this problem, there is a lack of research that specifically examines the strategies used to mitigate age-related bias in ML models and the effectiveness of these strategies. Objective: To address this gap, we conducted a scoping review of mitigation strategies to reduce age-related bias in ML. Methods: We followed a scoping review methodology framework developed by Arksey and O’Malley. The search was developed in conjunction with an information specialist and conducted in 6 electronic databases (IEEE Xplore, Scopus, Web of Science, CINAHL, EMBASE, and the ACM digital library), as well as 2 additional gray literature databases (OpenGrey and Grey Literature Report). Results: We identified 8 publications that attempted to mitigate age-related bias in ML approaches. Age-related bias was introduced primarily due to a lack of representation of older adults in the data. Efforts to mitigate bias were categorized into one of three approaches: (1) creating a more balanced data set, (2) augmenting and supplementing their data, and (3) modifying the algorithm directly to achieve a more balanced result. Conclusions: Identifying and mitigating related biases in ML models is critical to fostering fairness, equity, inclusion, and social benefits. Our analysis underscores the ongoing need for rigorous research and the development of effective mitigation approaches to address digital ageism, ensuring that ML systems are used in a way that upholds the interests of all individuals. Trial Registration: Open Science Framework AMG5P; https://osf.io/amg5p
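As a hedged illustration of the first mitigation approach listed above (creating a more balanced data set), the sketch below downsamples over-represented age groups to the size of the smallest group; the column name and grouping are assumptions, not the reviewed studies' actual procedures.

```python
import pandas as pd

def balance_by_age_group(df, group_col="age_group", random_state=0):
    """Downsample every age group to the size of the smallest one, so that
    older adults are no longer under-represented relative to other groups."""
    n_min = df[group_col].value_counts().min()
    return (df.groupby(group_col, group_keys=False)
              .apply(lambda g: g.sample(n=n_min, random_state=random_state))
              .reset_index(drop=True))

# Toy example: 'older' is the minority group before balancing
df = pd.DataFrame({"age_group": ["young"] * 6 + ["middle"] * 4 + ["older"] * 2,
                   "label":     [0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1]})
print(balance_by_age_group(df)["age_group"].value_counts())
```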
42

Blow, Christina Hastings, Lijun Qian, Camille Gibson, Pamela Obiomon, and Xishuang Dong. "Comprehensive Validation on Reweighting Samples for Bias Mitigation via AIF360." Applied Sciences 14, no. 9 (April 30, 2024): 3826. http://dx.doi.org/10.3390/app14093826.

Abstract:
Fairness Artificial Intelligence (AI) aims to identify and mitigate bias throughout the AI development process, spanning data collection, modeling, assessment, and deployment—a critical facet of establishing trustworthy AI systems. Tackling data bias through techniques like reweighting samples proves effective for promoting fairness. This paper undertakes a systematic exploration of reweighting samples for conventional Machine-Learning (ML) models, utilizing five models for binary classification on datasets such as Adult Income and COMPAS, incorporating various protected attributes. In particular, AI Fairness 360 (AIF360) from IBM, a versatile open-source library aimed at identifying and mitigating bias in machine-learning models throughout the entire AI application lifecycle, is employed as the foundation for conducting this systematic exploration. The evaluation of prediction outcomes employs five fairness metrics from AIF360, elucidating the nuanced and model-specific efficacy of reweighting samples in fostering fairness within traditional ML frameworks. Experimental results illustrate that reweighting samples effectively reduces bias in traditional ML methods for classification tasks. For instance, after reweighting samples, the balanced accuracy of Decision Tree (DT) improves to 100%, and its bias, as measured by fairness metrics such as Average Odds Difference (AOD), Equal Opportunity Difference (EOD), and Theil Index (TI), is mitigated to 0. However, reweighting samples does not effectively enhance the fairness performance of K Nearest Neighbor (KNN). This sheds light on the intricate dynamics of bias, underscoring the complexity involved in achieving fairness across different models and scenarios.
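The validation above relies on AIF360 fairness metrics such as Average Odds Difference, Equal Opportunity Difference, and the Theil Index. As a small, hedged illustration of how such metrics are typically computed with AIF360's ClassificationMetric, the sketch below compares ground-truth and predicted labels on a toy test set; the data and column names are assumptions, not the paper's benchmark datasets.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import ClassificationMetric

# Ground-truth and predicted labels for a toy test set with one protected attribute
df_true = pd.DataFrame({"sex":   [0, 0, 0, 1, 1, 1, 0, 1],
                        "label": [1, 0, 1, 1, 0, 1, 0, 0]})
df_pred = df_true.copy()
df_pred["label"] = [1, 0, 0, 1, 0, 1, 0, 1]       # classifier output

def to_dataset(d):
    return BinaryLabelDataset(df=d, label_names=["label"],
                              protected_attribute_names=["sex"])

metric = ClassificationMetric(to_dataset(df_true), to_dataset(df_pred),
                              unprivileged_groups=[{"sex": 0}],
                              privileged_groups=[{"sex": 1}])

print("Average odds difference:", metric.average_odds_difference())
print("Equal opportunity diff.:", metric.equal_opportunity_difference())
print("Theil index:            ", metric.theil_index())
```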
43

Featherston, Rebecca Jean, Aron Shlonsky, Courtney Lewis, My-Linh Luong, Laura E. Downie, Adam P. Vogel, Catherine Granger, Bridget Hamilton, and Karyn Galvin. "Interventions to Mitigate Bias in Social Work Decision-Making: A Systematic Review." Research on Social Work Practice 29, no. 7 (December 23, 2018): 741–52. http://dx.doi.org/10.1177/1049731518819160.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
Abstract:
Purpose: This systematic review synthesized evidence supporting interventions aimed at mitigating cognitive bias associated with the decision-making of social work professionals. Methods: A systematic search was conducted within 10 social services and health-care databases. Review authors independently screened studies in duplicate against prespecified inclusion criteria, and two review authors undertook data extraction and quality assessment. Results: Four relevant studies were identified. Because these studies were too heterogeneous to conduct meta-analyses, results are reported narratively. Three studies focused on diagnostic decisions within mental health and one considered family reunification decisions. Two strategies were reportedly effective in mitigating error: a nomogram tool and a specially designed online training course. One study assessing a consider-the-opposite approach reported no effect on decision outcomes. Conclusions: Cognitive bias can impact the accuracy of clinical reasoning. This review highlights the need for research into cognitive bias mitigation within the context of social work practice decision-making.
44

Vega-Gonzalo, María, and Panayotis Christidis. "Fair Models for Impartial Policies: Controlling Algorithmic Bias in Transport Behavioural Modelling." Sustainability 14, no. 14 (July 9, 2022): 8416. http://dx.doi.org/10.3390/su14148416.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
Abstract:
The increasing use of new data sources and machine learning models in transport modelling raises concerns with regards to potentially unfair model-based decisions that rely on gender, age, ethnicity, nationality, income, education or other socio-economic and demographic data. We demonstrate the impact of such algorithmic bias and explore the best practices to address it using three different representative supervised learning models of varying levels of complexity. We also analyse how the different kinds of data (survey data vs. big data) could be associated with different levels of bias. The methodology we propose detects the model’s bias and implements measures to mitigate it. Specifically, three bias mitigation algorithms are implemented, one at each stage of the model development pipeline—before the classifier is trained (pre-processing), when training the classifier (in-processing) and after the classification (post-processing). As these debiasing techniques have an inevitable impact on the accuracy of predicting the behaviour of individuals, the comparison of different types of models and algorithms allows us to determine which techniques provide the best balance between bias mitigation and accuracy loss for each case. This approach improves model transparency and provides an objective assessment of model fairness. The results reveal that mode choice models are indeed affected by algorithmic bias, and it is proven that the implementation of off-the-shelf mitigation techniques allows us to achieve fairer classification models.
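The "detect the model's bias" step described above boils down to comparing positive-prediction rates across a protected attribute. The sketch below computes two standard group-fairness measures (statistical parity difference and disparate impact) from raw predictions; the toy car-ownership example and the function names are assumptions for illustration, not the paper's code.

```python
import numpy as np

def statistical_parity_difference(y_pred, protected):
    """P(yhat=1 | unprivileged) - P(yhat=1 | privileged); 0 means parity."""
    y_pred, protected = np.asarray(y_pred), np.asarray(protected)
    return y_pred[protected == 0].mean() - y_pred[protected == 1].mean()

def disparate_impact(y_pred, protected):
    """Ratio of positive-prediction rates; values far below 1 flag bias."""
    y_pred, protected = np.asarray(y_pred), np.asarray(protected)
    return y_pred[protected == 0].mean() / y_pred[protected == 1].mean()

# toy check: predicted car ownership (1) for a privileged (1) vs. unprivileged (0) group
y_pred = np.array([1, 1, 1, 0, 1, 0, 0, 0, 1, 0])
protected = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])
print(statistical_parity_difference(y_pred, protected))  # -0.6
print(disparate_impact(y_pred, protected))               # 0.25
```

Pre-, in-, and post-processing mitigation techniques differ in where they intervene, but all of them are judged against gaps like these alongside the accuracy they give up.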
45

Kurmi, Vinod K., Rishabh Sharma, Yash Vardhan Sharma, and Vinay P. Namboodiri. "Gradient Based Activations for Accurate Bias-Free Learning." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (June 28, 2022): 7255–62. http://dx.doi.org/10.1609/aaai.v36i7.20687.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
Abstract:
Bias mitigation in machine learning models is imperative, yet challenging. While several approaches have been proposed, one view towards mitigating bias is through adversarial learning. A discriminator is used to identify bias attributes such as gender, age, or race, and is trained adversarially so that it cannot distinguish these attributes. The main drawback of such a model is that it directly introduces a trade-off with accuracy, as the features the discriminator deems sensitive for bias discrimination could be correlated with classification. In this work, we address this trade-off. We show that a biased discriminator can actually be used to improve the bias-accuracy trade-off. Specifically, this is achieved by a feature masking approach that uses the discriminator's gradients. We ensure that the features favoured for bias discrimination are de-emphasized and the unbiased features are enhanced during classification. We show that this simple approach works well to reduce bias as well as to improve accuracy significantly. We evaluate the proposed model on standard benchmarks. We improve the accuracy of adversarial methods while maintaining or even improving unbiasedness, and also outperform several other recent methods.
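As a rough illustration of gradient-based feature masking (not the authors' implementation), the sketch below uses the bias discriminator's input gradients as a saliency signal and down-weights the most bias-salient feature dimensions before classification. All module names, layer sizes, and the masking rule are placeholders.

```python
import torch
import torch.nn as nn

# hypothetical small networks standing in for an encoder, classifier head,
# and bias discriminator; dimensions are illustrative only
encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
classifier = nn.Linear(64, 2)
bias_discriminator = nn.Linear(64, 2)   # predicts the protected attribute

def gradient_feature_mask(features: torch.Tensor, discriminator: nn.Module) -> torch.Tensor:
    """Down-weight feature dimensions the bias discriminator relies on most."""
    # saliency pass on a detached copy so it does not disturb the main graph
    probe = features.detach().requires_grad_(True)
    saliency = torch.autograd.grad(discriminator(probe).sum(), probe)[0].abs()
    # high bias-saliency -> weight near 0; low saliency -> weight near 1
    mask = 1.0 - saliency / (saliency.max() + 1e-8)
    return features * mask

x = torch.randn(8, 32)                   # a toy mini-batch
z = encoder(x)
logits = classifier(gradient_feature_mask(z, bias_discriminator))
```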
46

Li, Yunyi, Maria De-Arteaga, and Maytal Saar-Tsechansky. "When More Data Lead Us Astray: Active Data Acquisition in the Presence of Label Bias." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 10, no. 1 (October 14, 2022): 133–46. http://dx.doi.org/10.1609/hcomp.v10i1.21994.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
Abstract:
An increased awareness concerning risks of algorithmic bias has driven a surge of efforts around bias mitigation strategies. A vast majority of the proposed approaches fall under one of two categories: (1) imposing algorithmic fairness constraints on predictive models, and (2) collecting additional training samples. Most recently and at the intersection of these two categories, methods that propose active learning under fairness constraints have been developed. However, proposed bias mitigation strategies typically overlook the bias present in the observed labels. In this work, we study fairness considerations of active data collection strategies in the presence of label bias. We first present an overview of different types of label bias in the context of supervised learning systems. We then empirically show that, when overlooking label bias, collecting more data can aggravate bias, and imposing fairness constraints that rely on the observed labels in the data collection process may not address the problem. Our results illustrate the unintended consequences of deploying a model that attempts to mitigate a single type of bias while neglecting others, emphasizing the importance of explicitly differentiating between the types of bias that fairness-aware algorithms aim to address, and highlighting the risks of neglecting label bias during data collection.
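A toy simulation (not the paper's experiments) makes the label-bias point tangible: if positives in the disadvantaged group are systematically under-recorded, a model fit to the observed labels reproduces the gap, and collecting more data drawn from the same biased labelling process would not close it.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)                 # protected attribute (0 = disadvantaged)
x = rng.normal(size=(n, 3))
true_y = (x[:, 0] + 0.5 * x[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

# label bias: positives in the disadvantaged group are under-recorded 30% of the time
flip = (group == 0) & (true_y == 1) & (rng.random(n) < 0.3)
observed_y = np.where(flip, 0, true_y)

features = np.column_stack([x, group])
model = LogisticRegression(max_iter=1000).fit(features, observed_y)
pred = model.predict(features)

# predicted positive rate trails the true rate only for the group with biased labels
for g in (0, 1):
    print(f"group {g}: predicted vs. true positive rate "
          f"{pred[group == g].mean():.2f} / {true_y[group == g].mean():.2f}")
```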
47

Gill, Michael J., and Alexandra Pizzuto. "Unwilling to Un-Blame: Whites Who Dismiss Historical Causes of Societal Disparities Also Dismiss Personal Mitigating Information for Black Offenders." Social Cognition 40, no. 1 (February 2022): 55–87. http://dx.doi.org/10.1521/soco.2022.40.1.55.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
Abstract:
When will racial bias in blame and punishment emerge? Here, we focus on White people's willingness to “un-blame” Black and White offenders upon learning of their unfortunate life histories or biological impairments. We predicted that personal mitigating narratives of Black (but not White) offenders would be ignored by Whites who are societal-level anti-historicists. Societal-level anti-historicists deny that a history of oppression by Whites has shaped current societal-level intergroup disparities. Thus, our prediction centers on how societal-level beliefs relate to bias against individuals. Our predictions were confirmed in three studies. In one of those studies, we also showed how racial bias in willingness to un-blame can be removed: Societal-level anti-historicists became open to mitigation for Black offenders if they were reminded that the offender began as an innocent baby. Results are discussed in terms of how the rich literature on blame and moral psychology could enrich the study of racial bias.
48

Li, Xiangchong, Yin Li, and Richard Massey. "Weak gravitational lensing shear measurement with FPFS: analytical mitigation of noise bias and selection bias." Monthly Notices of the Royal Astronomical Society 511, no. 4 (February 9, 2022): 4850–60. http://dx.doi.org/10.1093/mnras/stac342.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
Abstract:
Dedicated ‘Stage IV’ observatories will soon observe the entire extragalactic sky to measure the ‘cosmic shear’ distortion of galaxy shapes by weak gravitational lensing. To measure the apparent shapes of those galaxies, we present an improved version of the Fourier Power Function Shapelets (FPFS) shear measurement method. It now includes analytic corrections for sources of bias that plague all shape measurement algorithms, including noise bias (due to noise in non-linear combinations of observable quantities) and selection bias (due to sheared galaxies being more or less likely to be detected). Crucially, these analytic solutions do not rely on calibration from external image simulations. For isolated galaxies, the small residual ${\sim}10^{-3}$ multiplicative bias and ${\lesssim}10^{-4}$ additive bias now meet science requirements for Stage IV experiments. FPFS also works accurately for faint galaxies and is robust against stellar contamination. Future work will focus on deblending overlapping galaxies. The code used for this paper can process ${\gt}1000$ galaxy images per CPU second and is available from https://github.com/mr-superonion/FPFS.
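For reference, the multiplicative and additive residuals quoted above are conventionally defined through the standard shear-bias parameterization below; the notation is the community convention assumed here rather than text quoted from the paper.

```latex
% standard shear-bias parameterization: estimated vs. true shear, per component i = 1, 2
\begin{equation}
  \hat{\gamma}_i = (1 + m_i)\,\gamma_i^{\mathrm{true}} + c_i , \qquad i = 1, 2,
\end{equation}
% the abstract's residuals correspond to |m| \sim 10^{-3} (multiplicative)
% and |c| \lesssim 10^{-4} (additive) for isolated galaxies
```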
49

Zhang, Hengrui, Wei (Wayne) Chen, James M. Rondinelli, and Wei Chen. "ET-AL: Entropy-targeted active learning for bias mitigation in materials data." Applied Physics Reviews 10, no. 2 (June 2023): 021403. http://dx.doi.org/10.1063/5.0138913.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
Abstract:
Growing materials data and data-driven informatics drastically promote the discovery and design of materials. While there are significant advancements in data-driven models, the quality of data resources is less studied despite its huge impact on model performance. In this work, we focus on data bias arising from uneven coverage of materials families in existing knowledge. Observing different diversities among crystal systems in common materials databases, we propose an information entropy-based metric for measuring this bias. To mitigate the bias, we develop an entropy-targeted active learning (ET-AL) framework, which guides the acquisition of new data to improve the diversity of underrepresented crystal systems. We demonstrate the capability of ET-AL for bias mitigation and the resulting improvement in downstream machine learning models. This approach is broadly applicable to data-driven materials discovery, including autonomous data acquisition and dataset trimming to reduce bias, as well as data-driven informatics in other scientific domains.
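As a simplified illustration of why an entropy-based metric flags coverage bias (ET-AL's actual metric is defined over material properties within each crystal system, so this only conveys the general idea), the sketch below computes the Shannon entropy of crystal-system counts and shows that acquiring samples from an under-represented system raises it; the toy counts are assumptions.

```python
import numpy as np
from collections import Counter

def shannon_entropy(labels) -> float:
    """Shannon entropy (in nats) of a categorical distribution, e.g. crystal systems."""
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

# toy dataset heavily skewed toward cubic structures
crystal_systems = ["cubic"] * 70 + ["hexagonal"] * 20 + ["triclinic"] * 10
print(f"entropy before: {shannon_entropy(crystal_systems):.3f}")

# acquiring samples from the under-represented system raises the entropy,
# i.e. reduces the coverage imbalance the metric is meant to flag
crystal_systems += ["triclinic"] * 40
print(f"entropy after:  {shannon_entropy(crystal_systems):.3f}")
```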
50

Gaffney, Sean, and Vineeta Rao. "Reducing Bias in Opioid Risk Mitigation: Piloting a Pharmacy-Led Mitigation Strategy in Oncology Palliative Care." Journal of Pain and Symptom Management 67, no. 5 (May 2024): e695. http://dx.doi.org/10.1016/j.jpainsymman.2024.02.165.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
